cepro/fyp-optimiser
LP battery multiobjective optimiser targeting the Owen Square microgrid
README - updated Summer 2024 by A. Deaney
What is this thing doing?
This tool optimises microgrid energy flows to minimise a defined cost function
Inputs Required:
Microgrid
- Name
- PCC Import Capacity (kW)
- PCC Export Capacity (kW)
Generation (optional)
- Name
- Generation profile (kWh)
Demand (optional)
- Name
- Demand profile (kWh)
Storage (optional)
- Name
- Maximum SOE (kWh)
- Minimum SOE (kWh)
- Max Charge Power (kW)
- Max Discharge Power (kW)
- Storage Efficiency (%) - input as the decimal equivalent, e.g. 0.92 for 92%
MicrogridImpTariff - Applies to flows entering central microgrid from components
- Name
- Rate (£/kWh)
MicrogridExpTariff - Applies to flows leaving central microgrid to components
- Name
- Rate (£/kWh)
ThroughputTariff - Applies to ALL energy flows
- Name
- Rate (£/kWh)
Outputs Returned:
Microgrid
- df_non_disp = Timeseries of aggregated non-dispatchable energy flows
- df_disp_cost = Timeseries of costs associated with component energy flows
- df_opt_profile = Timeseries of optimised energy flows
- other useful functions and metrics...
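For orientation, the required inputs map onto a structure roughly like the sketch below. All field names here are illustrative only - the real component classes, constructors, and the output dataframes listed above (df_non_disp, df_disp_cost, df_opt_profile) are defined in the optimiser module and exercised end-to-end by adhoc_test_optimiser.py.

```python
# Illustrative input shapes only - not the actual class or constructor names used by the optimiser.
microgrid = {
    "name": "owen_square",
    "pcc_import_capacity_kw": 100.0,
    "pcc_export_capacity_kw": 70.0,
}
generation = {"name": "pv_array", "profile_kwh": [0.0] * 48}   # one value per interval
demand = {"name": "site_load", "profile_kwh": [1.2] * 48}
storage = {
    "name": "battery_1",
    "max_soe_kwh": 90.0,
    "min_soe_kwh": 9.0,
    "max_charge_kw": 50.0,
    "max_discharge_kw": 50.0,
    "efficiency": 0.92,                                         # decimal equivalent of 92%
}
tariffs = {
    "microgrid_import": {"name": "import", "rate_gbp_per_kwh": 0.28},  # flows into the microgrid
    "microgrid_export": {"name": "export", "rate_gbp_per_kwh": 0.05},  # flows out of the microgrid
    "throughput": {"name": "throughput", "rate_gbp_per_kwh": 0.01},    # applies to ALL flows
}
```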
How to get this running? - Follow the user activity diagram included within Appendix A of A. Deaney's report
To get started, use the adhoc_test_optimiser.py script as an example. The other more complicated (and yes, messy) analysis scripts within the Analysis directory follow the same concept and use the analysis objects defined in the Analysis.py module. Be warned - some of the scripts, if run correctly, will produce a lot of output charts. They are easily understood if viewed in the context of my report, but could be unclear otherwise. A summary of the relevant analysis scripts is included below:
Analysis.AlgorithmComparison.py
- Evaluates the performance differential between CEPRO's, unoptimised, and optimised microgrid schedules.
- Creates a (hopefully) insightful visualisation of the varying costs - AlgorithmRelativeSavings.pdf.
- Also creates a (much worse) visualisation showing the difference the scheduling makes to customer bills. The analysis for this is very poor and was dropped from the report following discussions with Damon.
- Corresponds to analysis in report Section #4
Analysis.EvChargeCost.py
- Calculates difference in unit costs for an optimised microgrid with and without EV charging demand
- Corresponds to analysis in report Section #4
Analysis.evCharging.py
- An Electric Mountain-sized volume of analysis on EV charging behaviour.
- Some parts of this script have been commented out and will need to be uncommented before running.
- This script is fairly well commented and it "should" be relatively understandable.
- Again, this script draws upon the analysis objects defined in Analysis.py
- Corresponds to analysis in report Section #3
Analysis.GrandpaExampleAnalysis.py
- Some pretty cool analysis on the extent of additional savings possible for my Grandpa's new battery and solar system.
- Demonstrates how important grid export and arbitrage is for domestic BESS viability.
- Corresponds to analysis in report Section #2
Analysis.hazelmead_comparitive_analysis.py
- This script also compares the relative performance of optimised/unoptimised/CEPRO-optimised battery schedules.
- PLEASE NOTE - As this script is for battery schedules, it does NOT consider demand.
- As above, this script also has some analysis that is commented out and will need to be uncommented to run correctly.
- Corresponds to analysis in report Section #2
If I was reviewing/building upon this work, I would recommend the following:
1) Have a stiff drink.
2) Review the high level C4 documentation and activity diagram for the microgrid optimiser. Test out adhoc_optimiser_test.py
3) Review the component class diagram for the optimiser class and follow through the code's "setup" and "run" methods. Ideally this will run correctly...
4) Have a break. If you've got here, you've done well.
5) Follow through Analysis.hazelmead_comparitive_analysis.py, understanding how ingest, staging, and analysis calls work and how data flows through them.
Looking back at this work, why did I make it so complicated? The famous words "Don't go down a rabbit hole Deaney!" from Will M. come to mind...
Please Note - Legacy code has been deprecated, but is included in the Legacy Code directory for reference. Many thanks to all previous contributors: for without them, this work would have been far more challenging. A special thanks to Tonda for his lighthearted remarks and comments, which were entertaining during the long days and late nights of this project!
############################################################################################################################################################################
README - updated autumn 2022
What is this thing doing?
This tool optimises the battery's imports and exports to minimise carbon emissions and the cost of electricity use
Inputs are half-hourly data for:
i. Original (no battery) exports & imports to the facility
ii. Variable electricity prices - spot price + charges
iii. Average carbon intensity of the grid
Outputs are half-hourly data for:
i. Battery import & export to and from the facility (and the grid)
ii. Associated data such as total cost, total carbon emitted, etc.
The typical use case is optimising over long periods of time (by default the full year 2021) using the file longRunner-battonly-72hrs.py, but a single-day optimisation can also be run using the file runOptimiserBattonly72h.py.
In short, the optimiser puts a price on carbon and then tries to run the battery in such a way that the total cost (inc. carbon cost) is the lowest possible given the constraints.
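As a rough worked example of how that carbon price folds into the per-kWh cost being minimised (the numbers are made up; the real conversion happens inside the optimiser code):

```python
carbon_price_gbp_per_tonne = 100.0    # GBP/tCO2e, set in systemconfig
carbon_intensity_g_per_kwh = 200.0    # gCO2e/kWh for one half-hour, from the carbon intensity API
spot_price_gbp_per_kwh = 0.15         # variable electricity price for the same period

# 200 g/kWh at GBP 100/tCO2e adds 0.02 GBP to every imported kWh in that period
carbon_cost_gbp_per_kwh = carbon_intensity_g_per_kwh / 1_000_000 * carbon_price_gbp_per_tonne
effective_import_cost = spot_price_gbp_per_kwh + carbon_cost_gbp_per_kwh   # 0.17 GBP/kWh
```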
How to get this running?
So, if I remember correctly, here is how I would do it:
1. Install Anaconda Python and PyCharm as the code editor (not compulsory).
2. Open the project and open the few files that need editing when changing inputs and running it multiple times:
a. longRunner-battonly-72hrs.py
b. systemconfig-v5.json
3. In the longRunner file:
a. Check that the "stock_config" loads the same systemconfig file you opened (no need to change)
b. elecgen_date = the first day for optimisation (no need to change, but you can)
c. elecgen_dates = the last number should be set to 365 as a full year of daily optimisations. You can change this
d. duration = I advise to not change this, it was kept as 24 hours for the carbon&cost optimisation
e. name the output CSV file - currently on line 45 - if you do not rename the file, it will overwrite the existing file with the same name
f. run this file and see if it works - the output CSV can then be analysed. This can take 0.5-3 hours.
4. In the systemconfig file:
a. change the settings for the run, such as the specifications of the battery used
b. the input dataset files can also be selected here; check the input data folder to confirm these files are actually there. There should be some real and some dummy datasets in the folder to be loaded by the optimiser. Eventually, you can use your own data.
c. specifications with units include:
- optimisation duration in hours (may not work with values other than 24)
- supply period duration in minutes (only tested with 30)
- carbon price in GBP/tCO2e (only an input to the model, may differ from your internal carbon price)
- start State of Charge in %
- end (target) SOC in %
- roundtrip efficiency of the battery in %
- capacity of the battery in kWh
- limits to dis/charging of the battery in %
- max power of the battery in kW
- cost of energy throughput through the battery in pence/kWh
5. The length of the carbon-only pre-optimisation can be changed in the file "runOptimiserBattonly72h.py"; it can be, for example, 48 h. (Changing this may lead to errors around Christmas 2021 due to a carbonintensity API bug.)
Random notes:
If you want to get a better understanding of the tool, read this whole document.
Some files are rather obsolete and no longer do anything, for example those mentioning Mixergy hot water tanks (passwords have been deleted and the newest versions no longer optimise the hot water tanks; the files optimizerV12 and runOptimizerV12 can still be used for this, however).
Running 365 days can take a while, about 30 mins on a standard laptop and multiple times longer with the "72h pre-optimisation". Not sure how much it would speed up on a faster device; some of the time is spent loading from the Carbon Intensity API, so that could be improved by downloading the data once and loading it from a CSV file.
The LongRunner code was used mostly on data for the whole of 2021, so there can be issues with running it for dates before/after that and also across year boundaries. For example, on some days the carbonintensity API does not have 48 data points.
The tool was built to run on 30-minute intervals, so you either need to keep that or rework some parts of it.
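On the caching point above, a minimal sketch of downloading a day of carbon intensity data once and keeping it as CSV might look like the following. It uses the national endpoint of api.carbonintensity.org.uk for simplicity; the repo itself fetches regional (postcode-based) data via fetchCI/CI_rest.py, so the URL and fields would need adapting.

```python
import csv
import requests

date = "2021-01-21"
resp = requests.get(f"https://api.carbonintensity.org.uk/intensity/date/{date}", timeout=30)
resp.raise_for_status()
periods = resp.json()["data"]           # usually 48 half-hour periods for the day

with open(f"ci_{date}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["from", "to", "forecast_gco2_per_kwh", "actual_gco2_per_kwh"])
    for p in periods:
        writer.writerow([p["from"], p["to"], p["intensity"]["forecast"], p["intensity"]["actual"]])
```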
Input datasets:
The tool as it stands uses an API to load the carbon intensity of the regional UK grid, and CSV files for the other inputs (import, export, electricity price)
The CSV files need dates in yyyy-mm-dd hh:mm:ss format (opening them in Excel may change this automatically)
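A quick way to catch an Excel-mangled timestamp column before it reaches the optimiser (the column selection below is a placeholder - point it at whichever column actually holds the dates in your file):

```python
import pandas as pd

df = pd.read_csv("./input_data/damons_HH_combined.csv")
date_col = df.columns[0]   # placeholder: use the actual timestamp column name
# A mismatch against the expected yyyy-mm-dd hh:mm:ss format raises here, loudly,
# instead of producing NaT rows deep inside the optimiser.
df[date_col] = pd.to_datetime(df[date_col], format="%Y-%m-%d %H:%M:%S")
```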
Latest updates:
The latest feature is a sort-of pre-optimisation. The optimiser runs 365 times, but each time it first optimises 72 hrs ahead for carbon only (as that forecast is available), then sets a target State of Charge for the end of the first optimised day and runs the 24 hr cost & carbon optimisation with this target. It should improve efficiency.
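In pseudocode (the real loop lives in longRunner-battonly-72hrs.py and runOptimiserBattonly72h.py; the function names below are stand-ins, not the actual API):

```python
from datetime import date, timedelta

def optimise_carbon_only(day, horizon_hours=72):
    """Stand-in for the 72 h carbon-only pre-optimisation; returns the SOC (%) the
    battery should hold at the end of the first day."""
    return 50.0

def optimise_cost_and_carbon(day, end_soc_target, horizon_hours=24):
    """Stand-in for the 24 h cost & carbon optimisation run with that SOC target."""
    return {"day": day.isoformat(), "end_soc_target": end_soc_target}

start = date(2021, 1, 1)                      # elecgen_date in the longRunner script
summaries = []
for offset in range(365):                     # one optimisation per day of 2021
    day = start + timedelta(days=offset)
    target_soc = optimise_carbon_only(day)    # step 1: carbon only, 72 h ahead
    summaries.append(optimise_cost_and_carbon(day, end_soc_target=target_soc))  # step 2
```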
README - summer 2022
-----------------NEW VERSIONS 2, 3, 4 README BY ANTONIN tonda.samal@gmail.com ------------
Works on Anaconda Python 3.9 on Windows 10
The newly developed code is much messier than the original but bear with me, it kinda works (unless it doesn't, of course)
When something was changed from the original V1 code, it is typically described in a comment starting with "T:"
This documentation should get better with time
Please note: the FYP report references 4 versions of the optimiser, however, there are 2 main variants:
Versions 1 & 2 use an old OF (Objective Function ofc ;) ), only optimising the Mixergy hot water tanks
V2 only adds carbon intensity and price. These two run on the same files (which have to be kept separate from the other versions' files), typically denoted V12
See the V1 readme file for the Version 2 documentation; the only differences are:
* naming of key files ends with V12
* Runnable Code: longRunnerV1_2.py = runs runOptimiserV12 for multiple consecutive days and outputs its daily summary to CSV
* fetchCI/CI_rest.py = Carbon Intensity data loader from API, data processed via SystemAssembler as in V1
Versions 3 & 4 use a new OF, minimising total electricity costs & carbon by optimising battery & hot water tanks
V4 only adds a Cost of Throughput (CoT) - thus it runs on the same files; set CoT to 0 to see the V3 results
V4 (and V3) is divided into full-scope files and battery-only (battonly) files - to skip the lower-quality data from the Mixergy API
The following documentation covers the V4 battery + HWT variant, where it differs significantly from V1
V4 Updated Contents:
Runnable Code:
longRunner-batteryHWT.py
longRunner-batteryHWT-mixdate.py
runOptimizer.py
Configuration Files:
systemconfig-v5.json
Supporting Libraries:
optimizer.py
optimiserbattonly.py
fetchCI/CI_rest.py
fetchSM/SMfromCSV.py
Other Files:
damons_HH_combined_20-22_DUoS_21up_Battery.csv
owensquare-2020-2022-intervals-30min.csv
Description of the main edited features since V1
----- Runnable Code -----
**longRunner-batteryHWT.py**
From the provided start date and duration in days, it loops through all the consecutive days and
calls runOptimizer on each by modifying a template systemconfig.json file with the
new dates. It calculates/collects the values to export to the CSV and writes them out (to the folder "longRun")
It's definitely very beautiful.
All changes to values need to be made in the code. By default, uses systemconfig-v5.json
as the template system configuration
Optimises battery + HW tanks
"battonly" version does not include HWT
**longRunner-batteryHWT-mixdate.py**
Same as longRunner-batteryHWT.py but enables using a different day's data for Mixergy tanks
- useful when data is missing from the API
**runOptimizer.py**
As in V1, loads system config (by default systemconfig-v5.json), assembles the system with
SystemAssembler.py, passes the system dict to the optimizer which solves. Extracts
the optimised power value(s), writes the timeseries data to csv (in folder "output")
Assumes use of exactly 2 HWTs, "ECC" and "Nursery"
"battonly" version does not include HWT
----- Libraries -----
**optimizer.py**
Creates the linear programming problem, solves it, and outputs the data to runOptimizer
Creates decision variables:
Total simulated OSCE Import & Export
Battery simulated Import & Export
Mixergy simulated Power use (for each tank included)
Set Battery input parameters:
It is not nice, but all battery parameters are set in this code
All can be reasonably changed and their units are given in comments
New Objective Function (OF):
Minimisation of (the total cost (inc. carbon cost) of Import - total cost of Export + cost of battery use (throughput))
Constraints:
Mixergy has the same as in V1
Battery uses similar constraints to the Mixergy tanks, with its midnight SOC set to the value given by the input parameter
A power balance equation ensures that the simulated grid export & import balance against the other loads
"battonly" version does not include HWT
----- Configuration Files -----
**systemconfig-v5.json**
JSON object, loaded in to describe a system.
optimization_duration: Length of optimisation in hours
supply_period_duration: Length of supply period in minutes
carbon_price: input price of carbon in [GBP/tCO2eq]
- not necessarily equal to how much the cost will increase / how much carbon will be saved
E: Emissions from the grid = carbon intensity
loaded from the API
by default uses regional average HH data - for that, it needs the postcode of the building
------------------ORIGINAL VERSION 1 README BY SEBASTIAN ---------------
README
Developed for Anaconda Python 3.8.3 on Ubuntu on WSL
Contents:
Runnable Code:
bulkRunner.py
runOptimizer.py
Configuration Files:
systemconfig-v3.json
tank_parameters.json
Supporting Libraries:
mixergyModel.py
optimizer.py
SystemAssembler.py
utilities.py
fetchLJ/lj_rest.py
fetchLJ/LJfromCSV.py
fetchPV/PVfromCSV.py
mixergyio_main/*
Other Files:
damons_ECC-gen-meter-intervals.csv
damons_HH_combined.csv
----- Runnable Code -----
**bulkRunner.py**
From the provided starting dates, it calculates every combination of tank-day and energy-day,
then calls runOptimizer on each by modifying a template systemconfig.json file with the
new dates. It calculates/collects the values to export to the CSV and writes them out
It's pretty shoddy because this isn't designed for production, just for my benefit.
All changes to values need to be made in the code. By default, uses systemconfig-v3.json
as the template system configuration
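The combination loop is essentially the following (dates are examples; in the real script everything is edited directly in the code):

```python
from itertools import product

tank_days = ["2021-03-31T00:00:00", "2021-04-01T00:00:00"]
energy_days = ["2021-01-21T00:00:00", "2021-01-22T00:00:00"]

for tank_day, energy_day in product(tank_days, energy_days):
    # Each pair becomes one runOptimizer call: I/X/G get energy_day as their date_from,
    # each tank gets tank_day, and the resulting summary values become one CSV row.
    print(tank_day, energy_day)
```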
**runOptimizer.py**
Loads system config (by default systemconfig-v3.json), assembles the system with
SystemAssembler.py, passes the system dict to the optimizer which solves. Extracts
the optimised power value(s), writes the timeseries data to csv
----- Configuration Files -----
**systemconfig-v3.json**
JSON object, loaded in to describe a system.
optimization_duration: Length of optimisation in hours
supply_period_duration: Length of supply period in minutes
_I, X, G, tanks_
Each of these can be retrieved from any valid source. Obviously the tanks have to return
an object with tank descriptions in it, so the entire set of valid sources doesn't apply
_Valid Sources for I, X, G_
Case sensitive
As defined in SystemAssembler.py:
"included": The object whose source is "included" includes a "data" field containing its value
"JSON" : The object should be loaded from the JSON file path specified in the "filename" field
"LJ_REST" : ALPHA. Using the "date_from" field for the source and the optimization_duration, access the LimeJump REST API and retrieve data for every supply period
Note LJ supply periods are implicitly 30min long, so fix supply_period_duration. Update the API key in lj_rest.py for access
"LJ_CSV" : Using the "date_from" field for the source, retrieve LJ supply period data (again, fixed to 30min with provided data) from the file at "filename"
"PV_CSV" : Fetch PV generation data for the PV4 array from "filename" for optimization_duration from "date_from". implicitly 30minutely with provided data
_Valid sources for tanks_
"included" or "JSON", defined as above
Within tanks, each tank should be defined
Valid tank sources are "included" or "MixergyModel"
"included" : Expects a corresponding "data" key, with "H" key as list of length (supply-periods in optimization_duration)
"MixergyModel": Access the tank_parameters file at "filename", retrieve the parameters for tank "tankname" and pull API values
from "date_from" for optimization_duration. Not supply_period_duration dependent
**tank_parameters.json**
A dictionary, with keys which define the tank name. Each key contains:
"user_pass": list, [str(username), str(password)] for the Mixergy account connected with this tank
"tank_id" : The mixergy tank ID, e.g. MX001224
"params" :
"TANK_LOSSES" : List of length (length(unusual_periods) + 1) containing Excel-calculated %/s loss rate for default case, unusual_period1, unusual_period2, etc
"HEATING_TANK_GAIN" : As above, but for %/kWh as found in Excel
"heater_power" : Heater power in kW
"supply_period_duration": Should be set to match systemconfig. Shoddy, I know, but I was short on time!
"unusual_periods" : Optional. Dictionary containing numbered keys, beginning from 1. Each key defined as:
"time_from" : time (in 24hr clock) when the special period begins
"time_to" : time (in 24hr clock) when the special period ends
Note lower numbered periods take priority.
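As a back-of-envelope illustration of what those units imply over a single 30-minute supply period (this is an assumption about how the parameters combine, not a description of mixergyModel.py itself; the parameter values come from the ECC example below):

```python
tank_losses_pct_per_s = -0.0003726711    # default-case loss rate from the ECC entry below
heating_gain_pct_per_kwh = 6.182967037   # default-case gain from the ECC entry below
heater_power_kw = 3.0

period_s = 30 * 60
heater_on_fraction = 0.5                                      # say the heater runs half the period
energy_in_kwh = heater_power_kw * heater_on_fraction * 0.5    # 3 kW for 0.25 h = 0.75 kWh

delta_soc_pct = tank_losses_pct_per_s * period_s + heating_gain_pct_per_kwh * energy_in_kwh
print(round(delta_soc_pct, 1))   # ~ +4.0 percentage points of charge for this example
```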
----- Example Configuration -----
**systemconfig-v3.json** - updated to v5
Provides for the dual-tank configuration as discussed in report
{
"optimization_duration": 24,
"supply_period_duration": 30,
"I": {
"source": "LJ_CSV",
"filename": "./input_data/damons_HH_combined.csv",
"date_from": "2021-01-21T00:00:00"
},
"X": {
"source": "LJ_CSV",
"filename": "./input_data/damons_HH_combined.csv",
"date_from": "2021-01-21T00:00:00"
},
"G": {
"source": "PV_CSV",
"filename": "./input_data/damons_ECC-gen-meter-intervals.csv",
"date_from": "2021-01-21T00:00:00"
},
"tanks": {
"source": "included",
"data": {
"ECC": {
"source": "MixergyModel",
"filename": "./input_data/tank_parameters.json",
"tankname": "ECC",
"date_from": "2021-03-31T00:00:00"
},
"Damon": {
"source": "MixergyModel",
"filename": "./input_data/tank_parameters.json",
"tankname": "Damon",
"date_from": "2021-03-31T00:00:00"
}
}
}
}
**tank_parameters.json**
Passwords have been removed, contact Damon or me if you're trying to run the code
{
"ECC": {
"user_pass": ["damon@somewhere.coop", "psw"],
"tank_id": "MX001224",
"params": {
"TANK_LOSSES": [-0.0003726711, -0.0016186246],
"HEATING_TANK_GAIN" : [6.182967037, 8.091063334],
"heater_power": 3,
"supply_period_duration": 30
},
"unusual_periods": {
"1": {
"time_from": "05:00",
"time_to": "18:00"
}
}
},
"Damon": {
"user_pass": ["damon@somewhere.net", "psw"],
"tank_id": "MX001896",
"params": {
"TANK_LOSSES": [-0.000735962892],
"HEATING_TANK_GAIN" : [12.26539269],
"heater_power": 3,
"supply_period_duration": 30
}
}
}