639 allow the user to define input and output file names #734

Open
wants to merge 11 commits into
base: develop
2 changes: 2 additions & 0 deletions Project.toml
Original file line number Diff line number Diff line change
Expand Up @@ -11,7 +11,9 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
DataStructures = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
Distances = "b4f34e82-e78d-54a5-968a-f98e89d6e8f7"
DuckDB = "d2f5444f-75bc-4fdf-ac35-56f514c445e1"
HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
15 changes: 15 additions & 0 deletions docs/src/User_Guide/model_configuration.md
Expand Up @@ -120,6 +120,21 @@ The following tables summarize the model settings parameters and their default/p
|OverwriteResults | Flag for overwriting the output results from the previous run.|
||1 = overwrite the results.|
||0 = do not overwrite the results.|
|ResultsFileType | File type to save the results files.|
||Default `auto_detect` = Detect the extension from the name of the results file. In the absence of an extension, `.csv` will be used.|
||`.csv` = Save as uncompressed CSV.|
||`.csv.gz` = Save as gzip-compressed CSV.|
||`.json` = Save as uncompressed JSON.|
||`.json.gz` = Save as gzip-compressed JSON.|
||`.parquet` = Save as uncompressed Parquet.|
||`-snappy.parquet` = Save as snappy-compressed Parquet.|
||`-zstd.parquet` = Save as zstd-compressed Parquet.|
|ResultsCompressionType | Compression type to save the results files.|
||Default `auto_detect` = Detect the compression from the name of the results file. In the absence of a compression type, none will be used.|
||`gzip` = gzip compression for CSV or JSON files.|
||`snappy` = snappy compression for Parquet files.|
||`zstd` = zstd compression for Parquet files.|
||`none` = no compression.|
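
For instance, to save every results file as zstd-compressed Parquet, the two settings above could be combined in `genx_settings.yml` (a minimal fragment for illustration; other settings omitted):

```yaml
# genx_settings.yml (fragment)
ResultsFileType: ".parquet"      # write all results files as Parquet
ResultsCompressionType: "zstd"   # compress all Parquet results with zstd
```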

## 6. Solver related

258 changes: 158 additions & 100 deletions docs/src/User_Guide/model_input.md

Large diffs are not rendered by default.

78 changes: 78 additions & 0 deletions docs/src/User_Guide/model_output.md
Expand Up @@ -153,3 +153,81 @@ This file includes the renewable/clean credit revenue earned by each generator l
### 2.8 SubsidyRevenue.csv

This file includes the subsidy revenue earned if a generator-specific Min\_Cap is provided in the input file. GenX will print this file only if the shadow price can be obtained from the solver. Do not confuse this with the Minimum Capacity Carveout constraint, which applies to a subset of generators and whose separate revenue term is calculated in other files. The unit is $.

## 3 Output file names and type

As of GenX v0.4.2, the names of all results files can be changed by including the file `results_settings.yml`. This file is optional; the default names described above are used if it is not present.

Files are saved as `.csv` by default in GenX. To change this, you can either 1) change the extension of a file name in `results_settings.yml` (e.g., set `demand: "Demand.json"`), or 2) set `ResultsFileType` in `genx_settings.yml` to change the type of all files. File names in `results_settings.yml` with explicit extensions override `ResultsFileType`. For example, if `ResultsFileType = .csv` but `results_settings.yml` contains `demand: "Demand.json"`, the demand file will be saved as JSON and all others as CSV. No error will be thrown.
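
The precedence between `ResultsFileType` and explicit extensions can be illustrated with the following fragments (file names are illustrative):

```yaml
# genx_settings.yml
ResultsFileType: ".csv"       # default type for all results files

# results_settings.yml
demand: "Demand.json"         # explicit extension overrides ResultsFileType: saved as JSON
power: "power"                # no extension: saved as power.csv
```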

Files can also be saved with gzip, snappy, or zstd compression. To choose which files are compressed, add the compression suffix to the extension in `results_settings.yml` (e.g., set `demand: "Demand-snappy.parquet"` or `fuels: "Fuels.csv.gz"`). To compress all files, set `ResultsCompressionType` in `genx_settings.yml`; if the compression type is specified there, it does not need to appear in the file names, and the correct extension will be appended automatically. If the file type and compression type conflict (e.g., CSV with snappy compression), no compression will be used.
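
Putting the two rules together, the resolution logic can be sketched roughly as follows (a hypothetical helper for illustration, not the actual GenX implementation):

```julia
# Hypothetical sketch of how a results file name, ResultsFileType, and
# ResultsCompressionType could be resolved. Not the actual GenX code.
function resolve_output_name(name::String, filetype::String, compression::String)
    # An explicit extension in the file name always wins.
    # Longer suffixes are checked first so ".csv.gz" is not matched as ".csv".
    known = [".csv.gz", "-snappy.parquet", "-zstd.parquet",
             ".json.gz", ".csv", ".json", ".parquet"]
    for ext in known
        endswith(name, ext) && return name
    end
    # Otherwise fall back to the global file type (default: .csv).
    ext = filetype == "auto_detect" ? ".csv" : filetype
    # Apply the global compression only when it matches the file type;
    # conflicting combinations (e.g. CSV + snappy) get no compression.
    if compression == "gzip" && ext in (".csv", ".json")
        ext *= ".gz"
    elseif compression in ("snappy", "zstd") && ext == ".parquet"
        ext = "-" * compression * ".parquet"
    end
    return name * ext
end
```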

For an example, see `example_systems/1_three_zones/settings`.

Both single-stage and multi-stage results file names are set in the same file. The file `results_settings.yml` has the following structure:

|**Key** | **Default Value**|
|:----------------------|:---------------|
|angles | angles|
|capacity_name | capacity|
|capacity_factor | capacityfactor|
|capacity_value | CapacityValue|
|capacities_charge_multi_stage | capacities_charge_multi_stage|
|capacities_multi_stage | capacities_multi_stage|
|capacities_energy_multi_stage | capacities_energy_multi_stage|
|captured_emissions_plant | captured_emissions_plant|
|charge | charge.csv|
|charging_cost | ChargingCost|
|co2_prices | CO2_prices_and_penalties|
|commit | commit|
|costs | costs|
|costs_multi_stage | costs_multi_stage |
|curtail | curtail|
|dStorage | dStorage|
|emissions_plant | emissions_plant|
|emissions | emissions|
|energy_revenue | EnergyRevenue|
|esr_prices_and_penalties | ESR_prices_and_penalties|
|esr_revenue | ESR_Revenue|
|flow | flow|
|fuel_cost_plant | Fuel_cost_plant|
|fuel_consumption_plant | FuelConsumption_plant_MMBTU|
|fuel_consumption_total | FuelConsumption_total_MMBTU|
|hourly_matching_prices | hourly_matching_prices|
|hydrogen_prices | hydrogen_prices|
|mincap | MinCapReq_prices_and_penalties|
|maxcap | MaxCapReq_prices_and_penalties|
|maint_down | maint_down|
|revenue | NetRevenue|
|network_expansion | network_expansion|
|network_expansion_multi_stage | network_expansion_multi_stage|
|nse | nse|
|power_balance | power_balance|
|power | power|
|prices | prices|
|reg_subsidy_revenue | RegSubsidyRevenue|
|reserve_margin | ReserveMargin|
|reserve_margin_revenue | ReserveMarginRevenue|
|reserve_margin_prices_and_penalties | ReserveMargin_prices_and_penalties|
|reserve_margin_w | ReserveMargin_w.csv|
|reg | reg|
|reg_dn | reg_dn|
|reliability | reliability|
|shutdown | shutdown|
|start | start|
|status | status|
|storage | storage|
|storagebal_duals | storagebal_duals|
|storage_init | StorageInit|
|subsidy_revenue | SubsidyRevenue|
|time_weights | time_weights|
|tlosses | tlosses|
|virtual_discharge | virtual_discharge|
|vre_stor_dc_charge | vre_stor_dc_charge|
|vre_stor_ac_charge | vre_stor_ac_charge|
|vre_stor_dc_discharge | vre_stor_dc_discharge|
|vre_stor_ac_discharge | vre_stor_ac_discharge|
|vre_stor_elec_power_consumption | vre_stor_elec_power_consumption|
|vre_stor_wind_power | vre_stor_wind_power|
|vre_stor_solar_power | vre_stor_solar_power|
|vre_stor_capacity | vre_stor_capacity|
4 changes: 3 additions & 1 deletion example_systems/1_three_zones/settings/genx_settings.yml
Expand Up @@ -10,4 +10,6 @@ ParameterScale: 1 # Turn on parameter scaling wherein demand, capacity and power
WriteShadowPrices: 1 # Write shadow prices of LP or relaxed MILP; 0 = not active; 1 = active
UCommit: 2 # Unit commitment of thermal power plants; 0 = not active; 1 = active using integer clustering; 2 = active using linearized clustering
TimeDomainReduction: 1 # Time domain reduce (i.e. cluster) inputs based on Demand_data.csv, Generators_variability.csv, and Fuels_data.csv; 0 = not active (use input data as provided); 1 = active (cluster input data, or use data that has already been clustered)
OutputFullTimeSeries: 1
OutputFullTimeSeries: 1 # Reconstruct all hours of the year and output in a folder called Full_TimeSeries
ResultsFileType: "auto_detect" # Automatically detect the type of the results files from the extension name. If no extension is present, files are saved as .csv.
ResultsCompressionType: "auto_detect" # Automatically detect the type of compression for the results files from the extension name. If no compression is present, saved as uncompressed.
15 changes: 15 additions & 0 deletions example_systems/1_three_zones/settings/input_settings.yml
@@ -0,0 +1,15 @@
#default_location: default/path/to/data
# system
#system_location: path/to/data
demand: "Demand_data.csv"
fuel: "Fuels_data.csv"
generators: "Generators_variability.csv"

# policies
#policies_location: /path/to/data
#co2_cap: file_or_table_name
#minimum_capacity: file_or_table_name

# resources

# policy assignments
14 changes: 14 additions & 0 deletions example_systems/1_three_zones/settings/results_settings.yml
@@ -0,0 +1,14 @@
capacity: "Capacity_test.csv"
capacity_factor: "capacityfactor.csv"
charge: "charge.csv"
charging_cost: "ChargingCost.csv"
co2_prices: "CO2_prices_and_penalties.csv"
commit: "commit.parquet"
costs: "costs.parquet"
curtail: "curtail.csv.gz"
emissions_plant: "emissions_plant"
nse: "nse.csv"
power_balance: "power_balance.csv"



@@ -1,7 +1,7 @@
# HiGHS Solver Parameters
# Common solver settings
Feasib_Tol: 1.0e-05 # Primal feasibility tolerance # [type: double, advanced: false, range: [1e-10, inf], default: 1e-07]
Optimal_Tol: 1.0e-05 # Dual feasibility tolerance # [type: double, advanced: false, range: [1e-10, inf], default: 1e-07]
Feasib_Tol: 1.0e-05 # Primal feasibility tolerance # [type: double, advanced: false, range: [1e-10, inf], default: 1e-07]
Optimal_Tol: 1.0e-05 # Dual feasibility tolerance # [type: double, advanced: false, range: [1e-10, inf], default: 1e-07]
TimeLimit: 1.0e23 # Time limit # [type: double, advanced: false, range: [0, inf], default: inf]
Pre_Solve: choose # Presolve option: "off", "choose" or "on" # [type: string, advanced: false, default: "choose"]
Method: ipm #HiGHS-specific solver settings # Solver option: "simplex", "choose" or "ipm" # [type: string, advanced: false, default: "choose"]
@@ -0,0 +1,41 @@
inputs_p1:
resources_location: "inputs/inputs_p1/resources"
resources: "Resource_multistage_data.csv"
storage: "Storage.csv"
thermal: "Thermal.csv"
vre: "Vre.csv"
policies_location: "inputs/inputs_p1/policies"
co2_cap: "CO2_cap.csv"
system_location: "inputs/inputs_p1/system"
demand: "Demand_data.csv"
fuel: "Fuels_data.csv"
generators: "Generators_variability1.csv"
network: "Network1.csv"

inputs_p2:
resources_location: "inputs/inputs_p1/resources"
resources: "Resource_multistage_data.csv"
storage: "Storage.csv"
thermal: "Thermal.csv"
vre: "Vre.csv"
policies_location: "inputs/inputs_p2/policies"
co2_cap: "CO2_cap.csv"
system_location: "inputs/inputs_p2/system"
demand: "Demand_data.csv"
fuel: "Fuels_data.csv"
generators: "Generators_variability.csv"
network: "Network.csv"

inputs_p3:
resources_location: "inputs/inputs_p1/resources"
resources: "Resource_multistage_data.csv"
storage: "Storage.csv"
thermal: "Thermal.csv"
vre: "Vre.csv"
policies_location: "inputs/inputs_p3/policies"
co2_cap: "CO2_cap.csv"
system_location: "inputs/inputs_p3/system"
demand: "Demand_data.csv"
fuel: "Fuels_data.csv"
generators: "Generators_variability.csv"
network: "Network.csv"
16 changes: 16 additions & 0 deletions precompile/case/settings/input_settings.yml
@@ -0,0 +1,16 @@
#default_location: default/path/to/data
# system
system_location: path/to/data
demand: "Demand_data.csv"
fuel: "Fuels_data.csv"
generators: "Generators_variability.csv"

# policies
#policies_location: /path/to/data
#co2_name: file_or_table_name
#minimum_capacity_name: file_or_table_name

# resources


# policy assignments
14 changes: 14 additions & 0 deletions precompile/case/settings/results_settings.yml
@@ -0,0 +1,14 @@
capacity: "Capacity_test.csv"
capacity_factor: "capacityfactor.csv"
charge: "charge.csv"
charging_cost: "ChargingCost.csv"
co2_prices: "CO2_prices_and_penalties.csv"
commit: "commit.parquet"
costs: "costs.parquet"
curtail: "curtail.csv.gz"
emissions_plant: "emissions_plant"
nse: "nse.csv"
power_balance: "power_balance.csv"



3 changes: 2 additions & 1 deletion src/GenX.jl
Expand Up @@ -25,6 +25,7 @@ export run_timedomainreduction!
using JuMP # used for mathematical programming
using DataFrames #This package allows put together data into a matrix
using CSV
using JSON
using StatsBase
using LinearAlgebra
using YAML
Expand All @@ -37,8 +38,8 @@ using RecursiveArrayTools
using Statistics
using HiGHS
using Logging

using PrecompileTools: @compile_workload
using DuckDB

# Global scaling factor used when ParameterScale is on to shift values from MW to GW
# DO NOT CHANGE THIS (Unless you do so very carefully)
5 changes: 4 additions & 1 deletion src/additional_tools/method_of_morris.jl
Expand Up @@ -262,6 +262,9 @@ function morris(EP::Model,
#save the variance of effect of each uncertain variable on the objective function
Morris_range[!, :variance] = DataFrame(m.variances', :auto)[!, :x1]

CSV.write(joinpath(outpath, "morris.csv"), Morris_range)
write_output_file(joinpath(outpath,
setup["WriteResultsNamesDict"]["morris"]),
Morris_range,filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
Comment on lines +267 to +268 — [JuliaFormatter] reported by reviewdog 🐶, suggested change:
-Morris_range,filetype = setup["ResultsFileType"],
-compression = setup["ResultsCompressionType"])
+Morris_range, filetype = setup["ResultsFileType"],
+compression = setup["ResultsCompressionType"])

return Morris_range
end
28 changes: 14 additions & 14 deletions src/case_runners/case_runner.jl
Expand Up @@ -31,19 +31,18 @@ run_genx_case!("path/to/case", Gurobi.Optimizer)
function run_genx_case!(case::AbstractString, optimizer::Any = HiGHS.Optimizer)
genx_settings = get_settings_path(case, "genx_settings.yml") # Settings YAML file path
writeoutput_settings = get_settings_path(case, "output_settings.yml") # Write-output settings YAML file path
mysetup = configure_settings(genx_settings, writeoutput_settings) # mysetup dictionary stores settings and GenX-specific parameters

mysetup = configure_settings(genx_settings, writeoutput_settings, case) # mysetup dictionary stores settings and GenX-specific parameters
if mysetup["MultiStage"] == 0
run_genx_case_simple!(case, mysetup, optimizer)
else
run_genx_case_multistage!(case, mysetup, optimizer)
end
end

function time_domain_reduced_files_exist(tdrpath)
tdr_demand = file_exists(tdrpath, ["Demand_data.csv", "Load_data.csv"])
tdr_genvar = isfile(joinpath(tdrpath, "Generators_variability.csv"))
tdr_fuels = isfile(joinpath(tdrpath, "Fuels_data.csv"))
function time_domain_reduced_files_exist(tdrpath, setup::Dict)
tdr_demand = isfile(joinpath(tdrpath, setup["demand"]))
tdr_genvar = isfile(joinpath(tdrpath, setup["generators"]))
tdr_fuels = isfile(joinpath(tdrpath, setup["fuel"]))
return (tdr_demand && tdr_genvar && tdr_fuels)
end

Expand All @@ -54,8 +53,8 @@ function run_genx_case_simple!(case::AbstractString, mysetup::Dict, optimizer::A
if mysetup["TimeDomainReduction"] == 1
TDRpath = joinpath(case, mysetup["TimeDomainReductionFolder"])
system_path = joinpath(case, mysetup["SystemFolder"])
prevent_doubled_timedomainreduction(system_path)
if !time_domain_reduced_files_exist(TDRpath)
prevent_doubled_timedomainreduction(system_path, mysetup["WriteInputNamesDict"])
if !time_domain_reduced_files_exist(TDRpath, mysetup["WriteInputNamesDict"])
println("Clustering Time Series Data (Grouped)...")
cluster_inputs(case, settings_path, mysetup)
else
Expand Down Expand Up @@ -108,7 +107,7 @@ function run_genx_case_multistage!(case::AbstractString, mysetup::Dict, optimize
settings_path = get_settings_path(case)
multistage_settings = get_settings_path(case, "multi_stage_settings.yml") # Multi stage settings YAML file path
# merge default settings with those specified in the YAML file
mysetup["MultiStageSettingsDict"] = configure_settings_multistage(multistage_settings)
mysetup["MultiStageSettingsDict"] = configure_settings_multistage(case,multistage_settings)
[JuliaFormatter] reported by reviewdog 🐶, suggested change:
-mysetup["MultiStageSettingsDict"] = configure_settings_multistage(case,multistage_settings)
+mysetup["MultiStageSettingsDict"] = configure_settings_multistage(
+    case, multistage_settings)


### Cluster time series inputs if necessary and if specified by the user
if mysetup["TimeDomainReduction"] == 1
Expand All @@ -118,8 +117,11 @@ function run_genx_case_multistage!(case::AbstractString, mysetup::Dict, optimize
first_stage_path = joinpath(case, "inputs", "inputs_p1")
TDRpath = joinpath(first_stage_path, mysetup["TimeDomainReductionFolder"])
system_path = joinpath(first_stage_path, mysetup["SystemFolder"])
prevent_doubled_timedomainreduction(system_path)
if !time_domain_reduced_files_exist(TDRpath)

[JuliaFormatter] reported by reviewdog 🐶, suggested change: remove the extra blank line.
mysetup["MultiStageSettingsDict"]["CurStage"] = 1 # Define current stage for cluster_inputs to access input_names dictionary at stage 1

prevent_doubled_timedomainreduction(system_path, mysetup["WriteInputNamesDict"]["inputs_p1"])
if !time_domain_reduced_files_exist(TDRpath, mysetup["WriteInputNamesDict"]["inputs_p1"])
Comment on lines +123 to +124 — [JuliaFormatter] reported by reviewdog 🐶, suggested change:
-prevent_doubled_timedomainreduction(system_path, mysetup["WriteInputNamesDict"]["inputs_p1"])
-if !time_domain_reduced_files_exist(TDRpath, mysetup["WriteInputNamesDict"]["inputs_p1"])
+prevent_doubled_timedomainreduction(
+    system_path, mysetup["WriteInputNamesDict"]["inputs_p1"])
+if !time_domain_reduced_files_exist(
+    TDRpath, mysetup["WriteInputNamesDict"]["inputs_p1"])

if (mysetup["MultiStage"] == 1) &&
(TDRSettingsDict["MultiStageConcatenate"] == 0)
println("Clustering Time Series Data (Individually)...")
Expand Down Expand Up @@ -148,9 +150,7 @@ function run_genx_case_multistage!(case::AbstractString, mysetup::Dict, optimize
mysetup["MultiStageSettingsDict"]["CurStage"] = t

# Step 1) Load Inputs
inpath_sub = joinpath(case, "inputs", string("inputs_p", t))

inputs_dict[t] = load_inputs(mysetup, inpath_sub)
inputs_dict[t] = load_inputs(mysetup, case)
inputs_dict[t] = configure_multi_stage_inputs(inputs_dict[t],
mysetup["MultiStageSettingsDict"],
mysetup["NetworkExpansion"])