Commit 3811431

Merge pull request #183 from NREL/update-dependencies

Update dependencies

daniel-thom committed Apr 7, 2024

Showing 95 changed files with 14,778 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 53d416767d6454373a24714add246362
tags: 645f666f9bcd5a90fca523b33c5a78b7
Empty file added .nojekyll
Binary file added _images/DISCO-Workflow.png
Binary file added _images/SMART-DS-flowchart.png
Binary file added _images/db-tables.png
Binary file added _images/feeder.png
Binary file added _images/hca__pf1.png
Binary file added _images/max_voltage_pri_sec.png
Binary file added _images/thermal_upgrades.png
Binary file added _images/thermal_workflow.png
Binary file added _images/thermalafter_thermalupgrades.png
Binary file added _images/thermalbefore_thermalupgrades.png
Binary file added _images/upgrades.png
Binary file added _images/voltage_upgrades.png
Binary file added _images/voltage_workflow.png
Binary file added _images/voltageafter_thermalupgrades.png
Binary file added _images/voltageafter_voltageupgrades.png
Binary file added _images/voltagebefore_thermalupgrades.png
Binary file added _images/voltagebefore_voltageupgrades.png
27 changes: 27 additions & 0 deletions _sources/advanced-guide.rst.txt
@@ -0,0 +1,27 @@
.. _advanced_guide:

Advanced Guide
##############

.. Content here is commented out because it doesn't currently fit. Might need it again.
.. This page describes how to use the DISCO package to create, modify, and run
.. simulations locally or on an HPC.
.. DISCO creates JADE extensions and calls high-level interfaces of PyDSS
.. to run simulations on top of OpenDSS. The supported simulations in DISCO currently
.. include:
.. * DISCO PV Deployment Simulation via the ``pydss_simulation`` extension.
.. Please refer to the following links for details on each simulation type.
.. toctree::
   :maxdepth: 1

   advanced-guide/upgrade-cost-analysis-generic-models.rst

.. If you need to create your own extension, the
.. `JADE documentation <https://nrel.github.io/jade/advanced_usage.html>`_
.. provides step-by-step instructions.
@@ -0,0 +1,20 @@
.. _upgrade_cost_analysis_schemas:

**********************************
Upgrade Cost Analysis JSON Schemas
**********************************

UpgradeCostAnalysisSimulationModel
==================================
.. literalinclude:: ../../build/json_schemas/UpgradeCostAnalysisSimulationModel.json
   :language: json

UpgradesCostResultSummaryModel
==============================
.. literalinclude:: ../../build/json_schemas/UpgradesCostResultSummaryModel.json
   :language: json

JobUpgradeSummaryOutputModel
============================
.. literalinclude:: ../../build/json_schemas/JobUpgradeSummaryOutputModel.json
   :language: json
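
If you want to validate a configuration file against one of these schemas before
running a simulation, any generic JSON Schema validator works. A minimal sketch
using the third-party ``check-jsonschema`` CLI (not part of DISCO), assuming the
schemas have been exported to ``build/json_schemas`` and ``my-config.json`` is a
hypothetical input file:

.. code-block:: bash

    $ pip install check-jsonschema
    $ check-jsonschema \
        --schemafile build/json_schemas/UpgradeCostAnalysisSimulationModel.json \
        my-config.json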
24 changes: 24 additions & 0 deletions _sources/analysis-workflows.rst.txt
@@ -0,0 +1,24 @@
.. _analysis_workflows:

******************
Analysis Workflows
******************

DISCO implements analysis workflows that allow post-processing of individual jobs,
batches of jobs, or a pipeline of batches.

The supported analyses include:

* Static Hosting Capacity Analysis
* Dynamic Hosting Capacity Analysis
* Upgrade Cost Analysis
* Snapshot/Time Series Impact Analysis

The following sections show the analysis workflows in detail.

.. toctree::
   :maxdepth: 2

   analysis-workflows/hosting-capacity-analysis
   analysis-workflows/upgrade-cost-analysis
   analysis-workflows/impact-analysis
220 changes: 220 additions & 0 deletions _sources/analysis-workflows/hosting-capacity-analysis.rst.txt
@@ -0,0 +1,220 @@
Hosting Capacity Analysis
=========================

This section shows how to conduct *hosting capacity analysis* using the DISCO pipeline with *snapshot*
and *time-series* models as inputs. This tutorial assumes there is an existing ``snapshot-feeder-models``
directory generated by the ``transform-model`` command. The workflow below can also be
applied to ``time-series-feeder-models``.

**1. Configure Pipeline**

Check the ``--help`` option for creating a pipeline template.

.. code-block:: bash
$ disco create-pipeline template --help
Usage: disco create-pipeline template [OPTIONS] INPUTS
Create pipeline template file
Options:
-T, --task-name TEXT The task name of the simulation/analysis
[required]
-P, --preconfigured Whether inputs models are preconfigured
[default: False]
-s, --simulation-type [snapshot|time-series|upgrade]
Choose a DISCO simulation type [default:
snapshot]
--with-loadshape / --no-with-loadshape
Indicate if loadshape file used for Snapshot
simulation.
--auto-select-time-points / --no-auto-select-time-points
Automatically select the time point based on
max PV-load ratio for snapshot simulations.
Only applicable if --with-loadshape.
[default: auto-select-time-points]
-d, --auto-select-time-points-search-duration-days INTEGER
Search duration in days. Only applicable
with --auto-select-time-points. [default:
365]
-i, --impact-analysis Enable impact analysis computations
[default: False]
-h, --hosting-capacity Enable hosting capacity computations
[default: False]
-u, --upgrade-analysis Enable upgrade cost computations [default:
False]
-c, --cost-benefit Enable cost benefit computations [default:
False]
-p, --prescreen Enable PV penetration level prescreening
[default: False]
-t, --template-file TEXT Output pipeline template file [default:
pipeline-template.toml]
-r, --reports-filename TEXT PyDSS report options. If None, use the
default for the simulation type.
-S, --enable-singularity Add Singularity parameters and set the
config to run in a container. [default:
False]
-C, --container PATH Path to container
-D, --database PATH The path of new or existing SQLite database
[default: results.sqlite]
-l, --local Run in local mode (non-HPC). [default:
False]
--help Show this message and exit.
Given an output directory from ``transform-model``, we use this command with the ``--preconfigured``
option to create the template.
.. code-block:: bash
$ disco create-pipeline template -T SnapshotTask -s snapshot -h -P snapshot-feeder-models --with-loadshape
.. note:: For configuring a dynamic hosting capacity pipeline, use ``-s time-series``.
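A sketch of the equivalent dynamic hosting capacity command, assuming a preconfigured
``time-series-feeder-models`` directory and a hypothetical task name:

.. code-block:: bash

    $ disco create-pipeline template -T TimeSeriesTask -s time-series -h -P time-series-feeder-models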
The template command creates ``pipeline-template.toml`` with configurable parameters organized
in sections. Update parameter values as needed, then run:
.. code-block:: bash
$ disco create-pipeline config pipeline-template.toml
This command creates a ``pipeline.json`` file containing two stages:
* stage 1 - simulation
* stage 2 - post-process
Accordingly, there will be an output directory for each stage,
* output-stage1
* output-stage2
**2. Submit Pipeline**
With a configured DISCO pipeline in ``pipeline.json``, the next step is to submit the pipeline
with JADE:
.. code-block:: bash
$ jade pipeline submit pipeline.json -o output
What does each stage do?
* In the simulation stage DISCO runs a power flow simulation for each job through PyDSS and stores
per-job metrics.
* In the post-process stage DISCO aggregates the metrics from each simulation job, calculates
the hosting capacity, and then ingests results into a SQLite database.
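While the pipeline runs, you can check on its progress with JADE's status command. A minimal
sketch, assuming a standard JADE installation and the ``output`` directory used above:

.. code-block:: bash

    $ jade show-status -o output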
**3. Check Results**
The post-process stage aggregates metrics in the following tables in ``output/output-stage1``:
* ``feeder_head_table.csv``
* ``feeder_losses_table.csv``
* ``metadata_table.csv``
* ``thermal_metrics_table.csv``
* ``voltage_metrics_table.csv``
Each table contains metrics related to the *snapshot* or *time-series* simulation. DISCO
computes hosting capacity results from these metrics and then writes them to the following files,
also in ``output/output-stage1``:
* ``hosting_capacity_summary__<scenario_name>.json``
* ``hosting_capacity_overall__<scenario_name>.json``
The scenario name will be ``scenario``, ``pf1``, and/or ``control_mode``, depending on your
simulation type and the ``--with-loadshape`` option.
Note that DISCO also produces prototypical visualizations for hosting capacity automatically after each run:
* ``hca__{scenario_name}.png``
.. image:: ../images/hca__pf1.png
   :scale: 60
Voltage plot examples for the first feeder compare the ``pf1`` vs. ``voltvar`` scenarios and
primary vs. secondary voltages:
* ``max_voltage_pf1_voltvar.png``
* ``max_voltage_pri_sec.png``
.. image:: ../images/max_voltage_pri_sec.png
   :scale: 60
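
As a quick sanity check, you can pretty-print one of the hosting capacity files with
Python's built-in JSON tool. A minimal sketch, assuming the ``pf1`` scenario shown above:

.. code-block:: bash

    $ python -m json.tool output/output-stage1/hosting_capacity_overall__pf1.json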
**4. Results Database**
DISCO ingests the hosting capacity results and report metrics into a SQLite database named
``output/output-stage1/results.sqlite``. You can use standard SQL to query the data and
perform further analysis.
If you want to ingest the results into an existing database, please specify the absolute path
of the database in ``pipeline.toml``.
For SQLite query examples, please refer to the Jupyter notebook ``notebooks/db-query.ipynb`` in
the source code repo.
If you would like to use the CLI tool ``sqlite3`` directly, here are some examples. Note that in
this case the database contains the results from a single task, so the queries do not need to
pre-filter the tables by task.
If you don't already have ``sqlite3`` installed, please refer to the
`SQLite download page <https://www.sqlite.org/download.html>`_.
Run this command to start the CLI utility:
.. code-block:: bash
$ sqlite3 -table <path-to-db.sqlite>
.. note:: If your version of sqlite3 doesn't support ``-table``, use ``-header -column`` instead.
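To see which tables the database contains before writing queries, you can run a
non-interactive query directly from the shell:

.. code-block:: bash

    $ sqlite3 -table output/output-stage1/results.sqlite \
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;"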
1. View DISCO's hosting capacity results for all feeders.
.. code-block:: bash
sqlite> SELECT * from hosting_capacity WHERE hc_type = 'overall';
2. View voltage violations for one feeder and scenario.
.. code-block:: bash
sqlite> SELECT feeder, scenario, sample, penetration_level, node_type, min_voltage, max_voltage
FROM voltage_metrics
WHERE (max_voltage > 1.05 or min_voltage < 0.95)
AND scenario = 'pf1'
AND feeder = 'p19udt14287';
3. View the min and max voltages for each penetration_level (across samples) for one feeder.
.. code-block:: bash
sqlite> SELECT feeder, sample, penetration_level
,MIN(min_voltage) as min_voltage_overall
,MAX(max_voltage) as max_voltage_overall
,MAX(num_nodes_any_outside_ansi_b) as num_nodes_any_outside_ansi_b_overall
,MAX(num_time_points_with_ansi_b_violations) as num_time_points_with_ansi_b_violations_overall
FROM voltage_metrics
WHERE scenario = 'pf1'
AND feeder = 'p19udt14287'
GROUP BY feeder, penetration_level;
4. View the max thermal loadings for each penetration_level (across samples) for one feeder.
.. code-block:: bash
sqlite> SELECT feeder, sample, penetration_level
,MAX(line_max_instantaneous_loading_pct) as line_max_inst
,MAX(line_max_moving_average_loading_pct) as line_max_mavg
,MAX(line_num_time_points_with_instantaneous_violations) as line_num_inst
,MAX(line_num_time_points_with_moving_average_violations) as line_num_mavg
,MAX(transformer_max_instantaneous_loading_pct) as xfmr_max_inst
,MAX(transformer_max_moving_average_loading_pct) as xfmr_max_mavg
,MAX(transformer_num_time_points_with_instantaneous_violations) as xfmr_num_inst
,MAX(transformer_num_time_points_with_moving_average_violations) as xfmr_num_mavg
FROM thermal_metrics
WHERE scenario = 'pf1'
AND feeder = 'p19udt14287'
GROUP BY feeder, penetration_level;
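To export a query result for use in other tools, sqlite3's dot commands can write CSV.
A minimal sketch, assuming the same database path as above:

.. code-block:: bash

    $ sqlite3 output/output-stage1/results.sqlite <<'EOF'
    .mode csv
    .headers on
    .output overall_hosting_capacity.csv
    SELECT * FROM hosting_capacity WHERE hc_type = 'overall';
    EOF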