Updates for FTOT 2024.2 release (#59)
* Add files at base level directory

* Add files to program folder

* Add files to lib folder

* Add files to tools folder

* Add XLSX templates
kzhang81 authored Jul 8, 2024
1 parent de8fec0 commit 64b74cb
Showing 12 changed files with 3,702 additions and 2,400 deletions.
13 changes: 13 additions & 0 deletions changelog.md
@@ -1,5 +1,18 @@
# FTOT Change Log

## v2024_2

The FTOT 2024.2 public release includes updates related to cost reporting outputs and visualizations, modeling of intermodal movement costs, scenario input validation, and back-end improvements to how the transportation network is processed and translated into NetworkX. The following changes have been made:
* Developed additional FTOT outputs to explain and visualize costs associated with the optimal routing solution of a scenario. A new CSV report summarizes both scaled and unscaled costs by commodity and by mode. The scaled costs account for user-specified transport and CO2 cost scalars and are used in the optimization. Costs are categorized as movement costs (broken out into costs from transport, transloading, first mile/last mile, mode short haul penalties, and impedances), emissions costs (broken out into CO2 and CO2 first mile/last mile), processor build costs, and unmet demand penalties. A new Cost Breakdown dashboard in the Tableau workbook visualizes the scaled and unscaled cost components. An illustrative sketch of the scaled versus unscaled calculation appears after this list.
* Updated the transport cost and routing cost methodology for intermodal movements. In addition to the transloading cost applied per unit moved between modes, transportation costs along the edges connecting the transloading facility to the rest of the FTOT network are now applied using the default per ton-mile (or thousand gallon-mile) costs of the mode that the transloading facility is connected to. The routing cost component from transport for intermodal movements is equivalent to the (unimpeded) transport cost.
* Added new input validation checks to confirm alignment between the facility geodatabase and the facility-commodity input files. The user now receives log messages when all facilities in a facility-commodity input file fail to match a facility location in the corresponding feature class of the geodatabase.
* Updated method for hooking user-specified facilities into the network to ensure that all network segment attributes are passed down to split links (previously, only attributes that were part of the FTOT network specification were retained).
* Other updates:
* Generalized the FTOT code used to translate the network geodatabase into a NetworkX object so that network segment attributes are passed through the G step, supporting future extensions that vary link cost based on other network attributes (see the NetworkX sketch after this list).
* Fixed a logging bug that was consistently printing out a warning to users that certain log files were not successfully added to the FTOT text report.
See documentation files for additional details.
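
The sketch below illustrates the scaled versus unscaled cost distinction referenced above. It is a minimal, hypothetical example rather than FTOT's internal cost code: the component values, the scalar names, and the assumption that the transport scalar applies only to the transport component (and the CO2 scalar only to the CO2 component) are all illustrative.

```python
# Illustrative sketch only -- not FTOT's internal cost equations.
# Hypothetical unscaled cost components (in scenario currency units) for one
# commodity on one mode, including an intermodal transloading transfer.
unscaled = {
    "transport": 1200.0,       # per ton-mile (or thousand gallon-mile) cost on used links
    "transloading": 150.0,     # per-unit cost of transferring between modes
    "first_last_mile": 80.0,   # artificial first mile / last mile link cost
    "impedance": 60.0,         # impedance adders on routed links
    "co2": 45.0,               # monetized CO2 emissions
}

# Hypothetical user-specified scalars (placeholder names, not FTOT's XML elements)
transport_cost_scalar = 0.5
co2_cost_scalar = 2.0

# Apply the transport scalar to the transport component and the CO2 scalar to
# the CO2 component; the other components are left unscaled in this sketch.
scaled = dict(unscaled)
scaled["transport"] *= transport_cost_scalar
scaled["co2"] *= co2_cost_scalar

print("Unscaled total:", sum(unscaled.values()))  # reported for transparency
print("Scaled total:  ", sum(scaled.values()))    # the version used in the optimization
```

In the new CSV report, both versions are broken out by commodity and by mode; the sketch collapses those dimensions for brevity.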


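The G-step generalization noted under "Other updates" can be illustrated with a small NetworkX sketch. This is not FTOT code: the segment records, attribute names, and cost rates are hypothetical; the point is only the pattern of passing every segment attribute onto the graph edge so a later step can vary link cost by attribute.

```python
import networkx as nx

# Hypothetical network segment records as they might be read from the network
# geodatabase; the field names are placeholders, not FTOT's actual schema.
segments = [
    {"from_node": 1, "to_node": 2, "mode": "road", "miles": 12.3, "urban_rural": "urban"},
    {"from_node": 2, "to_node": 3, "mode": "rail", "miles": 48.0, "urban_rural": "rural"},
]

# Hypothetical base rates per mile by mode, for illustration only
BASE_RATE_PER_MILE = {"road": 0.18, "rail": 0.04}

G = nx.DiGraph()
for seg in segments:
    # Pass every segment attribute through to the edge rather than a fixed
    # whitelist, so downstream steps can reference attributes that are not
    # part of the core network specification.
    attrs = {k: v for k, v in seg.items() if k not in ("from_node", "to_node")}
    G.add_edge(seg["from_node"], seg["to_node"], **attrs)

def link_cost(data):
    # Vary cost using a pass-through attribute (hypothetical urban surcharge)
    rate = BASE_RATE_PER_MILE[data["mode"]]
    if data.get("urban_rural") == "urban":
        rate *= 1.1
    return rate * data["miles"]

for u, v, data in G.edges(data=True):
    print(u, v, round(link_cost(data), 2))
```
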
## v2024_1

The FTOT 2024.1 public release includes updates related to the pipeline network, waterway background volume and capacity handling, FTOT Tools and the Scenario Setup Template, the Tableau routes dashboard, and network resilience analysis. The following changes have been made:
6 changes: 3 additions & 3 deletions program/ftot.py
@@ -27,9 +27,9 @@
ureg.define('us_ton = US_ton')


FTOT_VERSION = "2024.1"
SCHEMA_VERSION = "7.0.4"
VERSION_DATE = "4/3/2024"
FTOT_VERSION = "2024.2"
SCHEMA_VERSION = "7.0.5"
VERSION_DATE = "7/8/2024"

# ===================================================================================================

100 changes: 72 additions & 28 deletions program/ftot_facilities.py
@@ -151,7 +151,6 @@ def db_populate_tables(the_scenario, logger):

# populate schedules table
populate_schedules_table(the_scenario, logger)


# populate locations table
populate_locations_table(the_scenario, logger)
@@ -1282,7 +1281,8 @@ def gis_ultimate_destinations_setup_fc(the_scenario, logger):
# copy the destination from the baseline layer to the scenario gdb
# --------------------------------------------------------------
if not arcpy.Exists(the_scenario.base_destination_layer):
error = "can't find baseline data destinations layer {}".format(the_scenario.base_destination_layer)
error = "Can't find baseline data destinations layer {}".format(the_scenario.base_destination_layer)
logger.error(error)
raise IOError(error)

destinations_fc = the_scenario.destinations_fc
Expand All @@ -1302,6 +1302,8 @@ def gis_ultimate_destinations_setup_fc(the_scenario, logger):
temp_facility_commodities_dict = {}
counter = 0

# the check that the destinations CSV exists happens in the S step

# read through facility_commodities input CSV
with open(the_scenario.destinations_commodity_data, 'rt') as f:
reader = csv.DictReader(f)
@@ -1345,6 +1347,12 @@ def gis_ultimate_destinations_setup_fc(the_scenario, logger):
if facility not in list(temp_gis_facilities_dict.keys()):
logger.warning("Could not match facility {} in input CSV file to data in Base_Destination_Layer".format(facility))

# if zero destinations from CSV matched to FC, error out
if result == 0:
error = "Destinations feature class contains zero facilities in CSV file {}".format(the_scenario.destinations_commodity_data)
logger.error(error)
raise IOError(error)

logger.info("finished: gis_ultimate_destinations_setup_fc: Runtime (HMS): \t{}".format(ftot_supporting.get_total_runtime_string(start_time)))


@@ -1359,7 +1367,8 @@ def gis_rmp_setup_fc(the_scenario, logger):
# copy the rmp from the baseline data to the working gdb
# ----------------------------------------------------------------
if not arcpy.Exists(the_scenario.base_rmp_layer):
error = "can't find baseline data rmp layer {}".format(the_scenario.base_rmp_layer)
error = "Can't find baseline data rmp layer {}".format(the_scenario.base_rmp_layer)
logger.error(error)
raise IOError(error)

rmp_fc = the_scenario.rmp_fc
@@ -1379,10 +1388,12 @@ def gis_rmp_setup_fc(the_scenario, logger):
temp_facility_commodities_dict = {}
counter = 0

# the check that the RMP CSV exists happens in the S step

# read through facility_commodities input CSV
with open(the_scenario.rmp_commodity_data, 'rt') as f:

reader = csv.DictReader(f)

# check required fieldnames in facility_commodities input CSV
for field in ["facility_name", "value"]:
if field not in reader.fieldnames:
@@ -1422,6 +1433,12 @@ def gis_rmp_setup_fc(the_scenario, logger):
if facility not in list(temp_gis_facilities_dict.keys()):
logger.warning("Could not match facility {} in input CSV file to data in Base_RMP_Layer".format(facility))

# if zero RMPs from CSV matched to FC, error out
if result == 0:
error = "Raw material producer feature class contains zero facilities in CSV file {}".format(the_scenario.rmp_commodity_data)
logger.error(error)
raise IOError(error)

logger.info("finished: gis_rmp_setup_fc: Runtime (HMS): \t{}".format(ftot_supporting.get_total_runtime_string(start_time)))


@@ -1435,6 +1452,14 @@ def gis_processors_setup_fc(the_scenario, logger):

scenario_proj = ftot_supporting_gis.get_coordinate_system(the_scenario)

if str(the_scenario.processors_commodity_data).lower() != "null" and \
str(the_scenario.processors_commodity_data).lower() != "none":
# the check that the processors CSV exists happens in the S step
# read through facility_commodities input CSV
with open(the_scenario.processors_commodity_data, 'rt') as f:
reader = csv.DictReader(f)
row_count = sum(1 for row in reader)

if str(the_scenario.base_processors_layer).lower() == "null" or \
str(the_scenario.base_processors_layer).lower() == "none":
# create an empty processors layer
@@ -1452,11 +1477,20 @@ def gis_processors_setup_fc(the_scenario, logger):
"NON_REQUIRED", "#")
arcpy.AddField_management(processors_fc, "Candidate", "SHORT")


# check if there is a discrepancy with proc CSV
if os.path.exists(the_scenario.processors_commodity_data):
if row_count > 0:
error = "Facility data are provided in input CSV file but Base_Processors_Layer is not specified; set CSV file path to None or provide GIS layer"
logger.error(error)
raise IOError(error)

else:
# copy the processors from the baseline data to the working gdb
# ----------------------------------------------------------------
if not arcpy.Exists(the_scenario.base_processors_layer):
error = "can't find baseline data processors layer {}".format(the_scenario.base_processors_layer)
error = "Can't find baseline data processors layer {}".format(the_scenario.base_processors_layer)
logger.error(error)
raise IOError(error)

processors_fc = the_scenario.processors_fc
@@ -1467,7 +1501,7 @@ def gis_processors_setup_fc(the_scenario, logger):
# Check for required field 'Facility_Name' in FC
check_fields = [field.name for field in arcpy.ListFields(processors_fc)]
if 'Facility_Name' not in check_fields:
error = "The destinations feature class {} must have the field 'Facility_Name'.".format(processors_fc)
error = "The processors feature class {} must have the field 'Facility_Name'.".format(processors_fc)
logger.error(error)
raise Exception(error)

@@ -1477,28 +1511,30 @@ def gis_processors_setup_fc(the_scenario, logger):
temp_facility_commodities_dict = {}
counter = 0

# read through facility_commodities input CSV
with open(the_scenario.processors_commodity_data, 'rt') as f:

reader = csv.DictReader(f)
# check required fieldnames in facility_commodities input CSV
for field in ["facility_name", "value"]:
if field not in reader.fieldnames:
error = "The processors commodity data CSV {} must have field {}.".format(the_scenario.processors_commodity_data, field)
logger.error(error)
raise Exception(error)

for row in reader:
facility_name = str(row["facility_name"])
# This check for blank values is necessary to handle "total" processor rows which specify only capacity
if row["value"]:
commodity_quantity = float(row["value"])
else:
commodity_quantity = float(0)
if str(the_scenario.processors_commodity_data).lower() != "null" and \
str(the_scenario.processors_commodity_data).lower() != "none":
# read through facility_commodities input CSV
with open(the_scenario.processors_commodity_data, 'rt') as f:
reader = csv.DictReader(f)

# check required fieldnames in facility_commodities input CSV
for field in ["facility_name", "value"]:
if field not in reader.fieldnames:
error = "The processors commodity data CSV {} must have field {}.".format(the_scenario.processors_commodity_data, field)
logger.error(error)
raise Exception(error)

for row in reader:
facility_name = str(row["facility_name"])
# This check for blank values is necessary to handle "total" processor rows which specify only capacity
if row["value"]:
commodity_quantity = float(row["value"])
else:
commodity_quantity = float(0)

if facility_name not in list(temp_facility_commodities_dict.keys()):
if commodity_quantity > 0:
temp_facility_commodities_dict[facility_name] = True
if facility_name not in list(temp_facility_commodities_dict.keys()):
if commodity_quantity > 0:
temp_facility_commodities_dict[facility_name] = True

# create a temp dict to store values from FC
temp_gis_facilities_dict = {}
Expand All @@ -1525,7 +1561,7 @@ def gis_processors_setup_fc(the_scenario, logger):
# check for candidates or other processors specified in XML
layers_to_merge = []

# add the candidates_for_merging if they exists.
# add the candidates_for_merging if they exist
if arcpy.Exists(the_scenario.processor_candidates_fc):
logger.info("adding {} candidate processors to the processors fc".format(
gis_get_feature_count(the_scenario.processor_candidates_fc)))
@@ -1535,6 +1571,14 @@ def gis_processors_setup_fc(the_scenario, logger):
result = gis_get_feature_count(processors_fc)
logger.info("Number of Processors: \t{}".format(result))

# if processors in FC are zero and proc CSV file exists with data, then error out
if result == 0:
if os.path.exists(the_scenario.processors_commodity_data):
if row_count > 0:
error = "Processor feature class contains zero facilities from processor CSV file {}".format(the_scenario.processors_commodity_data)
logger.error(error)
raise IOError(error)

logger.info("finished: gis_processors_setup_fc: Runtime (HMS): \t{}".format(ftot_supporting.get_total_runtime_string(start_time)))

