639 allow the user to define input and output file names #734

Open

wants to merge 11 commits into base: develop from the 639-allow-the-user-to-define-input-and-output-file-names branch

Conversation

@mmutic (Collaborator) commented Aug 5, 2024

Description

This feature allows users to supply two YAML files: input_settings.yml and results_settings.yml. The file input_settings.yml lets users specify the names of their input files as well as the paths and names of the input folders. A new function, configure_input_settings, merges these names with the default names and adds the resulting dictionary to the setup dictionary.
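
For context, here is a minimal sketch of that merge step (not the PR's exact code; default_input_names and the settings path are hypothetical stand-ins for illustration):

    using YAML  # GenX already reads its settings from YAML files

    # Hypothetical sketch mirroring configure_input_settings: names supplied in
    # input_settings.yml override the package defaults, and the merged Dict is
    # what ends up in the setup dictionary.
    function configure_input_settings_sketch(case::AbstractString)
        defaults = default_input_names(case)  # hypothetical defaults helper
        user_path = joinpath(case, "settings", "input_settings.yml")  # assumed location
        user = isfile(user_path) ? YAML.load_file(user_path) : Dict{Any, Any}()
        return merge(defaults, user)  # entries from the YAML file take precedence
    end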

The function load_dataframe(), which is called to open all files in GenX, has been altered to use DuckDB (per Greg's suggestion). DuckDB can open CSV, Parquet, and JSON files, all of which can also be compressed (e.g. .gz), so users can now provide input files in any of those formats.
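
To illustrate the DuckDB-based reading (a sketch only, not the PR's exact load_dataframe implementation; it relies on DuckDB inferring the format from the file name):

    using DataFrames, DuckDB
    import DBInterface

    # Read any DuckDB-supported file (.csv, .csv.gz, .parquet, .json, ...) into a DataFrame.
    function read_with_duckdb(path::AbstractString)::DataFrame
        con = DBInterface.connect(DuckDB.DB, ":memory:")
        df = DataFrame(DBInterface.execute(con, "SELECT * FROM '$path'"))
        DBInterface.close!(con)
        return df
    end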

The file results_settings.yml can contain the desired names of the results files. Names in the YAML file can be entered with or without a file extension. Two new keys can be added to genx_settings.yml, ResultsFileType and ResultsCompressionType, whose defaults are both "auto_detect". Both keys are used by the function write_output_file().

The function write_output_file() has been added to write_outputs.jl. It uses DuckDB to save files according to a specified file type, which can be .csv, .csv.gz, .parquet, .json, or .json.gz. If filetype is set to "auto_detect", the function checks whether the file name contains an extension (if none is present, .csv is used). If a filetype is set explicitly (e.g. .parquet) but that extension is not present in the file name, the extension is added. A compression type can also be specified: .gz for CSV and JSON files, and -snappy and -zstd for Parquet files. The compression type can also be auto-detected.
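
To make the auto-detect rules above concrete, here is a hedged sketch of the kind of path resolution described (a hypothetical helper, not the function as implemented in the PR):

    # Sketch of the extension handling described above.
    function resolve_output_path(path::AbstractString; filetype::String = "auto_detect",
            compression::String = "auto_detect")
        ext = splitext(path)[2]
        if isempty(ext)
            # No extension in the configured name: default to .csv, or use the requested type.
            base = filetype == "auto_detect" ? ".csv" : filetype
            gz = compression in ("gz", ".gz", "gzip") && base in (".csv", ".json") ? ".gz" : ""
            return path * base * gz
        elseif filetype != "auto_detect" && ext != filetype
            @warn "Filetype '$filetype' conflicts with extension '$ext'; keeping '$ext'."
        end
        return path
    end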

The goal is for write_output_file() to replace all instances in which CSV.write is currently used. This is a work in progress and has only been applied in some places in GenX so far.
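
For reviewers, a typical call site then changes roughly like this (keyword names taken from the diffs further down):

    # Before
    CSV.write(joinpath(path, "capacity.csv"), dfCap)

    # After: file name, format, and compression now come from the settings dictionaries
    write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["capacity"]),
        dfCap,
        filetype = setup["ResultsFileType"],
        compression = setup["ResultsCompressionType"])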

An example case, 10_three_zones_define_input, contains the aforementioned YAML files.

Edit 9/11/24: Multistage inputs can now also be defined using input_settings.yml. The structure is slightly different (indentation creates a separate subdictionary for each input stage); see 6_three_zones_w_multistage for an example YAML file. Multistage results file names can also be changed using results_settings.yml, and the file structure is the same as in single stage. I deleted 10_three_zones_define_input, since it was identical to 1_three_zones, and instead added input_settings.yml and results_settings.yml to 1_three_zones. The function write_output_file() now replaces CSV.write() in almost all instances. Documentation has also been updated to reflect the new capabilities.

Notes from GenX Meeting 9/12

  1. The function write_output_file() could potentially be split into smaller functions, as it is very long.
  2. The headers in NSE and PowerBalance are currently incompatible with DuckDB because they contain repeated column names (a possible workaround is sketched after this list).
  3. The ability to read and write databases using DuckDB could be added (I think this can be done relatively easily).
  4. The structure of input_settings.yml for multistage could be altered to allow global files. The last two commits on this PR (made on 9/11 and 9/12) are the ones involving multistage, so this PR could be split into single-stage and multistage PRs.
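
On item 2, one possible workaround (a sketch only; whether renamed columns are acceptable for these outputs is an open question) is to de-duplicate the header names before handing the table to DuckDB:

    # Append _1, _2, ... to repeated header names so DuckDB accepts the table.
    function make_headers_unique(headers::Vector{String})
        seen = Dict{String, Int}()
        out = similar(headers)
        for (i, h) in enumerate(headers)
            n = get(seen, h, 0)
            out[i] = n == 0 ? h : string(h, "_", n)
            seen[h] = n + 1
        end
        return out
    end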

Side note, not brought up in the meeting: the results files specific to multistage (capacities_multi_stage, etc.) have not been tested with write_output_file(), but the code is present and commented out.

What type of PR is this? (check all applicable)

  • Feature

Related Tickets & Documents

Issue #639

Checklist

  • Code changes are sufficiently documented; i.e. new functions contain docstrings and .md files under /docs/src have been updated if necessary.
  • The latest changes on the target branch have been incorporated, so that any conflicts are taken care of before merging. This can be accomplished either by merging in the target branch (e.g. 'git merge develop') or by rebasing on top of the target branch (e.g. 'git rebase develop'). Please do not hesitate to reach out to the GenX development team if you need help with this.
  • Code has been tested to ensure all functionality works as intended.
  • CHANGELOG.md has been updated (if this is a 'notable' change).
  • I consent to the release of this PR's code under the GNU General Public license.

How this can be tested

Test functions are still being written. For now, testing can be done by altering the input and results YAML files in example 10 (now 1_three_zones, per the edit above) and verifying that the expected results follow.
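
A possible shape for such a test, once the API settles (a sketch only; the assertion follows the extension-handling behavior described above):

    using Test, DataFrames

    @testset "write_output_file appends the requested extension" begin
        df = DataFrame(Resource = ["solar", "wind"], Capacity = [1.0, 2.0])
        path = joinpath(mktempdir(), "capacity")
        write_output_file(path, df; filetype = ".parquet", compression = "auto_detect")
        @test isfile(path * ".parquet")
    end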

Post-approval checklist for GenX core developers

After the PR is approved

  • Check that the latest changes on the target branch are incorporated, either via merge or rebase
  • Remember to squash and merge if incorporating into develop

Comment on lines 214 to 242
"demand_name" => "Demand_data.csv",
"fuel_name" => "Fuels_data.csv",
"generators_name" => "Generators_variability.csv",
"network_name" => "Network.csv",
"resources_location" => joinpath(case, "resources"),
"storage_name" => "Storage.csv",
"thermal_name" => "Thermal.csv",
"vre_name" => "Vre.csv",
"vre_stor_name" => "Vre_stor.csv",
"vre_stor_solar_name" => "Vre_and_stor_solar_variability.csv",
"vre_stor_wind_name" => "Vre_and_stor_wind_variability.csv",
"hydro_name" => "Hydro.csv",
"flex_demand_name" => "Flex_demand.csv",
"must_run_name" => "Must_run.csv",
"electrolyzer_name" => "Electrolyzer.csv",
"resource_cap_name" => "Resource_capacity_reserve_margin.csv",
"resource_energy_share_requirement" => "Resource_energy_share_requirement.csv",
"resource_min_name" => "Resource_minimum_capacity_requirement.csv",
"resource_max_name" => "Resource_maximum_capacity_requirement.csv",
"policies_location" => joinpath(case, "policies"),
"capacity_name" => "Capacity_reserve_margin.csv",
"CRM_slack_name" => "Capacity_reserve_margin_slack.csv",
"co2_cap_name" => "CO2_cap.csv",
"co2_cap_slack_name" => "CO2_cap_slack.csv",
"esr_name" => "Energy_share_requirement.csv",
"esr_slack_name" => "Energy_share_requirement_slack.csv",
"min_cap_name" => "Minimum_capacity_requirement.csv",
"max_cap_name" => "Maximum_capacity_requirement.csv",
"operational_reserves_name" => "Operational_reserves.csv")
Collaborator

All of these keys end in _name. Since we're working with an input_names Dict here, that should be clear enough from context. Consider just calling them "demand", "fuel", etc.

Comment on lines 270 to 345
function default_results_names()
Dict{Any, Any}("capacity_name" => "capacity",
"capacity_factor_name" => "capacityfactor",
"captured_emissions_plant_name" => "captured_emissions_plant",
"charge_name" => "charge.csv",
"charging_cost_name" => "ChargingCost",
"co2_prices_name" => "CO2_prices_and_penalties",
"commit_name" => "commit",
"costs_name" => "costs",
"curtail_name" => "curtail",
"emissions_plant_name" => "emissions_plant",
"emissions_name" => "emissions",
"energy_revenue_name" => "EnergyRevenue",
"flow_name" => "flow",
"fuel_cost_plant_name" => "Fuel_cost_plant",
"fuel_consumption_plant_name" => "FuelConsumption_plant_MMBTU",
"fuel_consumption_total_name" => "FuelConsumtion_total_MMBTU",
"mincap_name" => "MinCapReq_prices_and_penalties",
"revenue_name" => "NetRevenue",
"network_expansion_name" => "network_expansion",
"nse_name" => "nse",
"power_balance_name" => "power_balance",
"power_name" => "power",
"prices_name" => "prices",
"reg_subsidy_revenue_name" => "RegSubsidyRevenue",
"reg_name" => "reg",
"reg_dn_name" => "reg_dn",
"reliability_name" => "reliability",
"shutdown_name" => "shutdown",
"start_name" => "start",
"status_name" => "status",
"storage_name" => "storage",
"subsidy_revenue_name" => "SubsidyRevenue",
"time_weights_name" => "time_weights",
"tlosses_name" => "tlosses",
"virtual_discharge_name" => "virtual_discharge",
"vre_stor_dc_charge_name" => "vre_stor_dc_charge",
"vre_stor_ac_charge_name" => "vre_stor_ac_charge",
"vre_stor_dc_discharge_name" => "vre_stor_dc_discharge",
"vre_stor_ac_discharge_name" => "vre_stor_ac_discharge",
"vre_stor_wind_power_name" => "vre_stor_wind_power",
"vre_stor_solar_power_name" => "vre_stor_solar_power")
end
Collaborator

All of these keys end in _name. Since we're working with a results_names Dict here, that should be clear enough from context. Consider just calling them "capacity_factor" and so on. There's nothing wrong with having the same entry as key and value, like "storage" => "storage".

As an alternative suggestion, you might want to make this a Dict of Symbol => AbstractString. Strings are preferred when they need to be manipulated in some way, but, especially in the keys, these are just symbolic signifiers.
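
For example, under that alternative the defaults would look something like this (shortened):

    Dict{Symbol, String}(:capacity => "capacity",
        :capacity_factor => "capacityfactor",
        :storage => "storage")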

@@ -61,6 +64,12 @@ function configure_settings(settings_path::String, output_settings_path::String)
output_settings = configure_writeoutput(output_settings_path, settings)
settings["WriteOutputsSettingsDict"] = output_settings

input_settings = configure_input_names(case)
settings["WriteInputNamesDict"] = input_settings
Collaborator

Shouldn't this be "ReadInputNamesDict" rather than "Write..."?

Comment on lines 33 to 37
#esr_filenames = ["Resource_energy_share_requirement.csv"]
#cap_res_filenames = ["Resource_capacity_reserve_margin.csv"]
#min_cap_filenames = ["Resource_minimum_capacity_requirement.csv"]
#max_cap_filenames = ["Resource_maximum_capacity_requirement.csv"]

Collaborator

Please delete old code rather than commenting it out.

path = path * ".parquet"
elseif compression == "snappy" || compression == "-snappy"
path = path * "-snappy.parqet"
elseif compression == "zstd" || compressoin == "-zstd"
Collaborator

typo! compressoin

Comment on lines 651 to 698
path = path * ".csv"
elseif filetype == ".csv" # If no extension is present, but filetype is set to .csv, .csv will be appended to the path name.
if compression == "none"
path = path * ".csv"
elseif compression == "gzip" || compression == ".gz" || compression == "gz"# If no extension is present, and compression is set to gzip, add .gz to the end of the file name.
path = path * ".csv.gz"
elseif compression == "auto_detect" # If no extension is present, but compression is set to auto_detect, no compression is added
path = path * ".csv"
else
@warn("Compression type '$compression' not supported with .csv. Saving as uncompressed csv.")
path = path * ".csv"
end
Collaborator

All the path = path * "..." can be expressed as path *= "...".

@warn("Filetype '$filetype' is incompatible with extension specified in results_settings.yml. Saving as '$newfiletype' instead.")
filetype = splitext(path)[2]
end
if splitext(path)[2] == ".csv" && (compression == ".gz" || compression == "gz" || compression == "gzip")
Collaborator

This comparison
(compression == ".gz" || compression == "gz" || compression == "gzip")
gets made a bunch of times. Consider either:

a) normalizing compression internally, so that whether the function is called with ".gz", "gz", or "gzip" it is always "gzip", or something along those lines; or
b) saving the result to a boolean is_gzip or something.
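
For instance, option (a) could look something like this (a sketch; later diffs in this PR already reference an isgzip helper along these lines):

    # Option (a): normalize the compression string once, up front.
    function normalize_compression(compression::AbstractString)
        c = lowercase(lstrip(compression, ['.', '-']))
        return c in ("gz", "gzip") ? "gzip" : c
    end

    # Option (b): compute the check once and reuse the boolean.
    isgzip(compression::AbstractString) = normalize_compression(compression) == "gzip"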

@@ -127,9 +127,13 @@ function check_for_duplicate_keys(path::AbstractString)
end
end

function load_dataframe_from_file(path)::DataFrame
function load_dataframe_from_file(path::AbstractString)
check_for_duplicate_keys(path)
Contributor

@mmutic and @cfe316: the check_for_duplicate_keys function assumes a CSV file. Maybe use DuckDB's DESCRIBE to get the column names.

Here's a potential usage. The con object could also be passed in from load_dataframe_from_file -- not sure if it makes much difference.

function check_for_duplicate_keys(path::AbstractString)
    con = DBInterface.connect(DuckDB.DB, ":memory:")
    tbl_schema = DBInterface.execute(con, "DESCRIBE TABLE '$path'") |> DataFrames.DataFrame
    keys = tbl_schema[:, :column_name]
    uniques = unique(keys)
    if length(keys) > length(uniques)
        dupes = keep_duplicated_entries!(keys, uniques)
        @error """Some duplicate column names detected in the header of $path: $dupes.
        Duplicate column names may cause errors, as only the first is used.
        """
    end
end

@mmutic force-pushed the 639-allow-the-user-to-define-input-and-output-file-names branch from d489262 to 2c1a803 on August 31, 2024 13:46
@github-actions bot left a comment

Remaining comments which cannot be posted as a review comment to avoid GitHub Rate Limit

JuliaFormatter

[JuliaFormatter] reported by reviewdog 🐶

if isfile(joinpath(system_path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["network"]))


[JuliaFormatter] reported by reviewdog 🐶

# Read temporal-resolved load data, and clustering information if relevant
load_demand_data!(setup, system_path, inputs)
# Read fuel cost data, including time-varying fuel costs
load_fuels_data!(setup, system_path, inputs)
# Read in generator/resource related inputs
load_resources_data!(inputs, setup, path, resources_path)
# Read in generator/resource availability profiles
load_generators_variability!(setup,system_path, inputs)


[JuliaFormatter] reported by reviewdog 🐶

if setup["TimeDomainReduction"] == 1 && time_domain_reduced_files_exist(TDR_directory, setup["WriteInputNamesDict"][string("inputs_p",stage)])


[JuliaFormatter] reported by reviewdog 🐶

if setup["TimeDomainReduction"] == 1 && time_domain_reduced_files_exist(TDR_directory, setup)


[JuliaFormatter] reported by reviewdog 🐶

filename = setup["WriteInputNamesDict"][string("inputs_p",stage)]["network"]


[JuliaFormatter] reported by reviewdog 🐶

input_names = setup["WriteInputNamesDict"][string("inputs_p",stage)]


[JuliaFormatter] reported by reviewdog 🐶

input_names = setup["WriteInputNamesDict"][string("inputs_p",stage)]


[JuliaFormatter] reported by reviewdog 🐶

resource_policy_path::AbstractString,input_names::Dict)


[JuliaFormatter] reported by reviewdog 🐶

filename = joinpath(resources_path, setup["WriteInputNamesDict"][string("inputs_p",stage)]["resource_multistage_data"])


[JuliaFormatter] reported by reviewdog 🐶

add_policies_to_resources!(resources, resource_policies_path, setup["WriteInputNamesDict"][string("inputs_p",stage)])


[JuliaFormatter] reported by reviewdog 🐶

add_policies_to_resources!(resources, resource_policies_path, setup["WriteInputNamesDict"])


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(outpath, setup["WriteResultsNamesDict"]["capacities_charge_multi_stage"]), df_cap)


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(outpath, setup["WriteResultsNamesDict"]["capacities_multi_stage"]), df_cap)


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(outpath, setup["WriteResultsNamesDict"]["capacities_energy_multi_stage"]), df_cap)


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(outpath, setup["WriteResultsNamesDict"]["network_expansion_multi_stage"]), df_trans_cap)


[JuliaFormatter] reported by reviewdog 🐶

Period_map = CSV.read(joinpath(TDRpath, setup["WriteResultsNamesDict"]["period_map"]), DataFrame)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["MultiStageSettingsDict"] = configure_settings_multistage(case,multistage_settings)


[JuliaFormatter] reported by reviewdog 🐶

Demand_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"][string("inputs_p",stage)]["demand"])
GVar_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"][string("inputs_p",stage)]["generators"])
Fuel_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"][string("inputs_p",stage)]["fuel"])
PMap_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"][string("inputs_p",stage)]["period_map"])
YAML_Outfile = joinpath(TimeDomainReductionFolder, "time_domain_reduction_settings.yml")


[JuliaFormatter] reported by reviewdog 🐶

Demand_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"]["demand"])
GVar_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"]["generators"])
Fuel_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"]["fuel"])
PMap_Outfile = joinpath(TimeDomainReductionFolder, mysetup["WriteInputNamesDict"]["period_map"])
YAML_Outfile = joinpath(TimeDomainReductionFolder, "time_domain_reduction_settings.yml")


[JuliaFormatter] reported by reviewdog 🐶

prevent_doubled_timedomainreduction(joinpath(inpath,mysetup["WriteInputNamesDict"][string("inputs_p",t)]["system_location"]),mysetup["WriteInputNamesDict"][string("inputs_p",t)])


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_solar_variability"])


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_wind_variability"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["Demand"]), demand_in)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GVar"]),GVOutputData)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GSolar"]), solar_var)
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GWind"]), wind_var)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["Fuel"]), NewFuelOutput)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["PMap"]),Stage_PeriodMaps[per])


[JuliaFormatter] reported by reviewdog 🐶

demand_in = get_demand_dataframe(joinpath(inpath,
"inputs",
input_stage_directory,
mysetup["SystemFolder"]),
mysetup["WriteInputNamesDict"][string("inputs_p",stage_id)]
)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, Demand_Outfile),demand_in)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, GVar_Outfile),GVOutputData)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_solar_variability"])


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_wind_variability"])
write_output_file(joinpath(inpath, "inputs", input_stage_directory, SolarVar_Outfile),solar_var)
write_output_file(joinpath(inpath, "inputs", input_stage_directory, WindVar_Outfile),wind_var)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"][string("inputs_p",stage_id)]["fuel"]))


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, Fuel_Outfile),NewFuelOutput)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, PMap_Outfile),PeriodMap)


[JuliaFormatter] reported by reviewdog 🐶

demand_in = get_demand_dataframe(system_path,mysetup)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, Demand_Outfile),demand_in)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_solar_variability"])


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_wind_variability"])


[JuliaFormatter] reported by reviewdog 🐶

fuel_in = load_dataframe(joinpath(system_path, mysetup["WriteInputNamesDict"]["fuel"]))


[JuliaFormatter] reported by reviewdog 🐶

dfCapValue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfResMar,false),
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reserve_margin_revenue"]),
dfResRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reserve_margin_prices_and_penalties"]),


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reserve_margin_prices_and_penalties"]),


[JuliaFormatter] reported by reviewdog 🐶

dfResMar_w,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfCO2Price,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

df_new = df[:,2:end]


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["esr_prices_and_penalties"]),
dfESR,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfESRRev,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["hourly_matching_prices"]),


[JuliaFormatter] reported by reviewdog 🐶

dfOpRsvRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["op_regulation_revenue"]),
dfOpRegRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

filepath = joinpath(path,setup["WriteResultsNamesDict"]["reg"])


[JuliaFormatter] reported by reviewdog 🐶

dfTransCap,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfFlow, true), setup["WriteResultsNamesDict"]["flow"])


[JuliaFormatter] reported by reviewdog 🐶

dfTLosses,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfTLosses, true), setup["WriteResultsNamesDict"]["tlosses"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

dfCommit = dftranspose(dfCommit,true)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path,
setup["WriteResultsNamesDict"]["commit"]),
dfCommit,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dfCommit, setup["WriteResultsNamesDict"]["commit"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

filepath = joinpath(path,setup["WriteResultsNamesDict"]["shutdown"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, df_Shutdown, setup["WriteResultsNamesDict"]["shutdown"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, df_Start, setup["WriteResultsNamesDict"]["start"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["angles"]),
dftranspose(dfAngles, false),
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["capacity_name"]), dfCap, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["charging_cost"]),
dfChargingcost,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_temporal_data(df, emissions_plant, path, setup, setup["WriteResultsNamesDict"]["emissions"])


[JuliaFormatter] reported by reviewdog 🐶

df, emissions_captured_plant, path, setup, setup["WriteResultsNamesDict"]["captured_emissions_plant"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["costs"]), dfCost, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_temporal_data(df, curtailment, path, setup, setup["WriteResultsNamesDict"]["curtail"])


[JuliaFormatter] reported by reviewdog 🐶

dfEmissions[!,1] = convert.(Float64,dfEmissions[!,1])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path,
setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions, filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions,
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions,
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dfEmissions, setup["WriteResultsNamesDict"]["emissions"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["energy_revenue"]),
dfEnergyRevenue,
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["fuel_cost_plant"]), dfPlantFuel, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path,


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfPlantFuel_TS, true), filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

path, setup, dftranspose(dfPlantFuel_TS, true),setup["WriteResultsNamesDict"]["fuel_consumption_plant"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["fuel_consumption_total"]),dfFuel, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(filename,df,filetype = setup["ResultsFileType"],compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_timeseries_variables(EP, downvars, joinpath(path, setup["WriteResultsNamesDict"]["maint_down"]))


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["revenue"]),dfNetRevenue, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfNse,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(path, setup["WriteResultsNamesDict"]["nse"]), dftranspose(dfNse, false), writeheader = false)


[JuliaFormatter] reported by reviewdog 🐶

#= if setup["OutputFullTimeSeries"] == 1 && setup["TimeDomainReduction"] == 1
write_full_time_series_reconstruction(path, setup, dfNse, "nse")
@info("Writing Full Time Series for NSE")
end=#


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(fullpath, dfOut, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(fullpath, dfOut,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(output_path, "$name"),
dfOut_full,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

function write_output_file(path::AbstractString, file::DataFrame; filetype::String = "auto_detect", compression::String = "auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

if compression == "none"


[JuliaFormatter] reported by reviewdog 🐶

elseif isgzip(compression) # If no extension is present, and compression is set to gzip, add .gz to the end of the file name.


[JuliaFormatter] reported by reviewdog 🐶

elseif compression == "auto_detect" # If no extension is present, but compression is set to auto_detect, no compression is added


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","none") # DuckDB will automatically detect if the file should be compressed or not


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","snappy")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","zstd")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed") # If no "-" is present, file is saved uncompressed.


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","gzip")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","gzip")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","snappy")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","zstd")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","zstd")


[JuliaFormatter] reported by reviewdog 🐶

if filetype == ".csv"
save_with_duckdb(file,path,"csv","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","gzip")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","gzip")


[JuliaFormatter] reported by reviewdog 🐶

function save_with_duckdb(file::DataFrame,path::AbstractString,filetype::String,compression::String)


[JuliaFormatter] reported by reviewdog 🐶

DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT 'parquet', CODEC '$compression');")


[JuliaFormatter] reported by reviewdog 🐶

DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT JSON, AUTO_DETECT true);")


[JuliaFormatter] reported by reviewdog 🐶

DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT JSON, COMPRESSION '$compression');")


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(path, setup["WriteResultsNamesDict"]["power_balance"]), dfPowerBalance)


[JuliaFormatter] reported by reviewdog 🐶

dfPowerBalance[!,:Zone] = convert.(Float64,dfPowerBalance[!,:Zone])


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfPrice, true),
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfPrice, true), setup["WriteResultsNamesDict"]["prices"])


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfReliability, true),
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfReliability, true), setup["WriteResultsNamesDict"]["reliability"])


[JuliaFormatter] reported by reviewdog 🐶

dfStatus,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfStorageDual, true),
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

path, setup, dftranspose(dfStorageDual, true), setup["WriteResultsNamesDict"]["storagebal_duals"])


[JuliaFormatter] reported by reviewdog 🐶

dfSubRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reg_subsidy_revenue"]),
dfRegSubRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfTimeWeights,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfCap,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

filepath = joinpath(path, setup["WriteResultsNamesDict"]["vre_stor_elec_power_consumption"])


[JuliaFormatter] reported by reviewdog 🐶

write_annual(filepath, dfVP_VRE_STOR,setup)

Comment on lines +267 to +268
Morris_range,filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
Morris_range,filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
Morris_range, filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])

@@ -108,7 +107,7 @@ function run_genx_case_multistage!(case::AbstractString, mysetup::Dict, optimize
settings_path = get_settings_path(case)
multistage_settings = get_settings_path(case, "multi_stage_settings.yml") # Multi stage settings YAML file path
# merge default settings with those specified in the YAML file
mysetup["MultiStageSettingsDict"] = configure_settings_multistage(multistage_settings)
mysetup["MultiStageSettingsDict"] = configure_settings_multistage(case,multistage_settings)
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
mysetup["MultiStageSettingsDict"] = configure_settings_multistage(case,multistage_settings)
mysetup["MultiStageSettingsDict"] = configure_settings_multistage(
case, multistage_settings)

@@ -118,8 +117,11 @@
first_stage_path = joinpath(case, "inputs", "inputs_p1")
TDRpath = joinpath(first_stage_path, mysetup["TimeDomainReductionFolder"])
system_path = joinpath(first_stage_path, mysetup["SystemFolder"])
prevent_doubled_timedomainreduction(system_path)
if !time_domain_reduced_files_exist(TDRpath)

Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change

Comment on lines +123 to +124
prevent_doubled_timedomainreduction(system_path, mysetup["WriteInputNamesDict"]["inputs_p1"])
if !time_domain_reduced_files_exist(TDRpath, mysetup["WriteInputNamesDict"]["inputs_p1"])
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
prevent_doubled_timedomainreduction(system_path, mysetup["WriteInputNamesDict"]["inputs_p1"])
if !time_domain_reduced_files_exist(TDRpath, mysetup["WriteInputNamesDict"]["inputs_p1"])
prevent_doubled_timedomainreduction(
system_path, mysetup["WriteInputNamesDict"]["inputs_p1"])
if !time_domain_reduced_files_exist(
TDRpath, mysetup["WriteInputNamesDict"]["inputs_p1"])


# Returns
- `settings::Dict`: The settings dictionary.
"""
function configure_settings(settings_path::String, output_settings_path::String)
function configure_settings(settings_path::String, output_settings_path::String, case::AbstractString)
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
function configure_settings(settings_path::String, output_settings_path::String, case::AbstractString)
function configure_settings(
settings_path::String, output_settings_path::String, case::AbstractString)

my_dir = get_systemfiles_path(setup, TDR_directory, path)
filename = setup["WriteInputNamesDict"]["fuel"]
end

Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change

my_dir = get_systemfiles_path(setup, TDR_directory, path)
if setup["MultiStage"] == 1
stage = setup["MultiStageSettingsDict"]["CurStage"]
filename = setup["WriteInputNamesDict"][string("inputs_p",stage)]["generators"]
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
filename = setup["WriteInputNamesDict"][string("inputs_p",stage)]["generators"]
filename = setup["WriteInputNamesDict"][string("inputs_p", stage)]["generators"]

stage = setup["MultiStageSettingsDict"]["CurStage"]
filename = setup["WriteInputNamesDict"][string("inputs_p",stage)]["generators"]
else

Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change

policies_path = setup["WriteInputNamesDict"]["policies_location"]

# Read input data about power network topology, operating and expansion attributes
if isfile(joinpath(system_path,setup["WriteInputNamesDict"]["network"]))
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
if isfile(joinpath(system_path,setup["WriteInputNamesDict"]["network"]))
if isfile(joinpath(system_path, setup["WriteInputNamesDict"]["network"]))

Comment on lines +34 to +36
system_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["system_location"])
resources_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["resources_location"])
policies_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["policies_location"])
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
system_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["system_location"])
resources_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["resources_location"])
policies_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["policies_location"])
system_path = joinpath(path,
setup["WriteInputNamesDict"][string("inputs_p", stage)]["system_location"])
resources_path = joinpath(path,
setup["WriteInputNamesDict"][string("inputs_p", stage)]["resources_location"])
policies_path = joinpath(path,
setup["WriteInputNamesDict"][string("inputs_p", stage)]["policies_location"])

@mmutic requested a review from lbonaldo on September 11, 2024 19:12
@github-actions bot left a comment

Remaining comments which cannot be posted as a review comment to avoid GitHub Rate Limit

JuliaFormatter

[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["Fuel"]), NewFuelOutput)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["PMap"]),Stage_PeriodMaps[per])


[JuliaFormatter] reported by reviewdog 🐶

demand_in = get_demand_dataframe(joinpath(inpath,
"inputs",
input_stage_directory,
mysetup["SystemFolder"]),
mysetup["WriteInputNamesDict"][string("inputs_p",stage_id)]
)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, Demand_Outfile),demand_in)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, GVar_Outfile),GVOutputData)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_solar_variability"])


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_wind_variability"])
write_output_file(joinpath(inpath, "inputs", input_stage_directory, SolarVar_Outfile),solar_var)
write_output_file(joinpath(inpath, "inputs", input_stage_directory, WindVar_Outfile),wind_var)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"][string("inputs_p",stage_id)]["fuel"]))


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, Fuel_Outfile),NewFuelOutput)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, "inputs", input_stage_directory, PMap_Outfile),PeriodMap)


[JuliaFormatter] reported by reviewdog 🐶

demand_in = get_demand_dataframe(system_path,mysetup)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(inpath, Demand_Outfile),demand_in)


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_solar_variability"])


[JuliaFormatter] reported by reviewdog 🐶

mysetup["WriteInputNamesDict"]["vre_stor_wind_variability"])


[JuliaFormatter] reported by reviewdog 🐶

fuel_in = load_dataframe(joinpath(system_path, mysetup["WriteInputNamesDict"]["fuel"]))


[JuliaFormatter] reported by reviewdog 🐶

dfCapValue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfResMar,false),
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reserve_margin_revenue"]),
dfResRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reserve_margin_prices_and_penalties"]),


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reserve_margin_prices_and_penalties"]),


[JuliaFormatter] reported by reviewdog 🐶

dfResMar_w,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfCO2Price,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

df_new = df[:,2:end]


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["esr_prices_and_penalties"]),
dfESR,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfESRRev,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["hourly_matching_prices"]),


[JuliaFormatter] reported by reviewdog 🐶

dfOpRsvRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["op_regulation_revenue"]),
dfOpRegRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

filepath = joinpath(path,setup["WriteResultsNamesDict"]["reg"])


[JuliaFormatter] reported by reviewdog 🐶

dfTransCap,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfFlow, true), setup["WriteResultsNamesDict"]["flow"])


[JuliaFormatter] reported by reviewdog 🐶

dfTLosses,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfTLosses, true), setup["WriteResultsNamesDict"]["tlosses"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

dfCommit = dftranspose(dfCommit,true)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path,
setup["WriteResultsNamesDict"]["commit"]),
dfCommit,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dfCommit, setup["WriteResultsNamesDict"]["commit"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

filepath = joinpath(path,setup["WriteResultsNamesDict"]["shutdown"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, df_Shutdown, setup["WriteResultsNamesDict"]["shutdown"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, df_Start, setup["WriteResultsNamesDict"]["start"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["angles"]),
dftranspose(dfAngles, false),
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["capacity"]), dfCap, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["charging_cost"]),
dfChargingcost,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_temporal_data(df, emissions_plant, path, setup, setup["WriteResultsNamesDict"]["emissions"])


[JuliaFormatter] reported by reviewdog 🐶

df, emissions_captured_plant, path, setup, setup["WriteResultsNamesDict"]["captured_emissions_plant"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["costs"]), dfCost, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

write_temporal_data(df, curtailment, path, setup, setup["WriteResultsNamesDict"]["curtail"])


[JuliaFormatter] reported by reviewdog 🐶

dfEmissions[!,1] = convert.(Float64,dfEmissions[!,1])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path,
setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions, filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions,
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["emissions"]),
dfEmissions,
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dfEmissions, setup["WriteResultsNamesDict"]["emissions"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["energy_revenue"]),
dfEnergyRevenue,
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["fuel_cost_plant"]), dfPlantFuel, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path,


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfPlantFuel_TS, true), filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

path, setup, dftranspose(dfPlantFuel_TS, true),setup["WriteResultsNamesDict"]["fuel_consumption_plant"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["fuel_consumption_total"]),dfFuel, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(filename,df,filetype = setup["ResultsFileType"],compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_timeseries_variables(EP, downvars, joinpath(path, setup["WriteResultsNamesDict"]["maint_down"]))


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["revenue"]),dfNetRevenue, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfNse,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(path, setup["WriteResultsNamesDict"]["nse"]), dftranspose(dfNse, false), writeheader = false)


[JuliaFormatter] reported by reviewdog 🐶

#= if setup["OutputFullTimeSeries"] == 1 && setup["TimeDomainReduction"] == 1
write_full_time_series_reconstruction(path, setup, dfNse, "nse")
@info("Writing Full Time Series for NSE")
end=#


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(fullpath, dfOut, filetype = setup["ResultsFileType"], compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(fullpath, dfOut,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_output_file(joinpath(output_path, "$name"),
dfOut_full,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

function write_output_file(path::AbstractString, file::DataFrame; filetype::String = "auto_detect", compression::String = "auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

if compression == "none"


[JuliaFormatter] reported by reviewdog 🐶

elseif isgzip(compression) # If no extension is present, and compression is set to gzip, add .gz to the end of the file name.


[JuliaFormatter] reported by reviewdog 🐶

elseif compression == "auto_detect" # If no extension is present, but compression is set to auto_detect, no compression is added


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","none") # DuckDB will automatically detect if the file should be compressed or not


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","snappy")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","zstd")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed") # If no "-" is present, file is saved uncompressed.


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","gzip")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","gzip")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","snappy")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","auto_detect")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","zstd")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","zstd")


[JuliaFormatter] reported by reviewdog 🐶

if filetype == ".csv"
save_with_duckdb(file,path,"csv","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"csv","gzip")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"parquet","uncompressed")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","none")


[JuliaFormatter] reported by reviewdog 🐶

save_with_duckdb(file,path,"json","gzip")


[JuliaFormatter] reported by reviewdog 🐶

if compression == "gzip" || compression == ".gz" || compression == "gz" || compression == ".gzip"


[JuliaFormatter] reported by reviewdog 🐶

function save_with_duckdb(file::DataFrame,path::AbstractString,filetype::String,compression::String)


[JuliaFormatter] reported by reviewdog 🐶

DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT 'parquet', CODEC '$compression');")


[JuliaFormatter] reported by reviewdog 🐶

DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT JSON, AUTO_DETECT true);")


[JuliaFormatter] reported by reviewdog 🐶

DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT JSON, COMPRESSION '$compression');")
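For readers unfamiliar with DuckDB.jl, here is a self-contained sketch of the register-then-COPY pattern these `DBInterface.execute` calls rely on. The table name `temp_df` and the COPY options mirror the fragments above; the function name and the surrounding branching are assumptions, not the PR's exact implementation.

```julia
using DataFrames, DuckDB, DBInterface

# Sketch: expose a DataFrame to DuckDB as a view, then COPY it to disk.
function save_with_duckdb_sketch(df::DataFrame, path::AbstractString,
        filetype::String, compression::String)
    con = DBInterface.connect(DuckDB.DB, ":memory:")
    DuckDB.register_data_frame(con, df, "temp_df")    # makes temp_df queryable via SQL
    if filetype == "parquet"
        DBInterface.execute(con,
            "COPY temp_df TO '$path' (FORMAT 'parquet', CODEC '$compression');")
    elseif filetype == "json"
        DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT JSON);")
    else  # CSV; DuckDB compresses automatically when the path ends in .gz
        DBInterface.execute(con, "COPY temp_df TO '$path' (FORMAT CSV, HEADER);")
    end
    DBInterface.close!(con)
end

save_with_duckdb_sketch(DataFrame(x = 1:3), "example.parquet", "parquet", "zstd")
```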


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

CSV.write(joinpath(path, setup["WriteResultsNamesDict"]["power_balance"]), dfPowerBalance)


[JuliaFormatter] reported by reviewdog 🐶

dfPowerBalance[!,:Zone] = convert.(Float64,dfPowerBalance[!,:Zone])


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfPrice, true),
filetype = setup["ResultsFileType"],


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfPrice, true), setup["WriteResultsNamesDict"]["prices"])


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfReliability, true),
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

write_full_time_series_reconstruction(path, setup, dftranspose(dfReliability, true), setup["WriteResultsNamesDict"]["reliability"])


[JuliaFormatter] reported by reviewdog 🐶

dfStatus,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

zones = convert.(Float64,zones)


[JuliaFormatter] reported by reviewdog 🐶

dftranspose(dfStorageDual, true),
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

path, setup, dftranspose(dfStorageDual, true), setup["WriteResultsNamesDict"]["storagebal_duals"])


[JuliaFormatter] reported by reviewdog 🐶

dfSubRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])
write_output_file(joinpath(path, setup["WriteResultsNamesDict"]["reg_subsidy_revenue"]),
dfRegSubRevenue,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfTimeWeights,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

dfCap,
filetype = setup["ResultsFileType"],
compression = setup["ResultsCompressionType"])


[JuliaFormatter] reported by reviewdog 🐶

filepath = joinpath(path, setup["WriteResultsNamesDict"]["vre_stor_elec_power_consumption"])


[JuliaFormatter] reported by reviewdog 🐶

write_annual(filepath, dfVP_VRE_STOR,setup)

@@ -36,32 +36,55 @@ function default_settings()
"ResourcePoliciesFolder" => "policy_assignments",
"SystemFolder" => "system",
"PoliciesFolder" => "policies",
"ObjScale" => 1)
"ObjScale" => 1,
"ResultsFileType" => "auto_detect",

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
"ResultsFileType" => "auto_detect",
"ResultsFileType" => "auto_detect",

Comment on lines +309 to +372
Dict{Any, Any}("angles" => "angles",
"capacity" => "capacity",
"capacity_factor" => "capacityfactor",
"capacity_vaue" => "CapacityValue",
"capacities_charge_multi_stage" => "capacities_charge_multi_stage",
"capacities_multi_stage" => "capacities_multi_stage",
"capacities_energy_multi_stage" => "capacities_energy_multi_stage",
"captured_emissions_plant" => "captured_emissions_plant",
"charge" => "charge.csv",
"charging_cost" => "ChargingCost",
"co2_prices" => "CO2_prices_and_penalties",
"commit" => "commit",
"costs" => "costs",
"costs_multi_stage" => "costs_multi_stage",
"curtail" => "curtail",
"dStorage" => "dStorage",
"emissions_plant" => "emissions_plant",
"emissions" => "emissions",
"energy_revenue" => "EnergyRevenue",
"esr_prices_and_penalties" => "ESR_prices_and_penalties",
"esr_revenue" => "ESR_Revenue",
"flow" => "flow",
"fuel_cost_plant" => "Fuel_cost_plant",
"fuel_consumption_plant" => "FuelConsumption_plant_MMBTU",
"fuel_consumption_total" => "FuelConsumtion_total_MMBTU",
"hourly_matching_prices" => "hourly_matching_prices",
"hydrogen_prices" => "hydrogen_prices",
"mincap" => "MinCapReq_prices_and_penalties",
"maxcap" => "MaxCapReq_prices_and_penalties",
"maint_down" => "maint_down",
"morris" => "morris",
"revenue" => "NetRevenue",
"network_expansion" => "network_expansion",
"network_expansion_multi_stage" => "network_expansion_multi_stage",
"nse" => "nse",
"power_balance" => "power_balance",
"power" => "power",
"prices" => "prices",
"reg_subsidy_revenue" => "RegSubsidyRevenue",
"reserve_margin" => "ReserveMargin",
"reserve_margin_revenue" => "ReserveMarginRevenue",
"reserve_margin_prices_and_penalties" => "ReserveMargin_prices_and_penalties",
"reserve_margin_w" => "ReserveMargin_w.csv",
"reg" => "reg",
"reg_dn" => "reg_dn",
"reliability" => "reliability",
"shutdown" => "shutdown",
"start" => "start",
"status" => "status",
"storage" => "storage",
"storagebal_duals" => "storagebal_duals",
"storage_init" => "StorageInit",
"subsidy_revenue" => "SubsidyRevenue",
"time_weights" => "time_weights",
"tlosses" => "tlosses",
"virtual_discharge" => "virtual_discharge",
"vre_stor_dc_charge" => "vre_stor_dc_charge",
"vre_stor_ac_charge" => "vre_stor_ac_charge",
"vre_stor_dc_discharge" => "vre_stor_dc_discharge",
"vre_stor_ac_discharge" => "vre_stor_ac_discharge",
"vre_stor_elec_power_consumption" => "vre_stor_elec_power_consumption",
"vre_stor_wind_power" => "vre_stor_wind_power",
"vre_stor_solar_power" => "vre_stor_solar_power",
"vre_stor_capacity" => "vre_stor_capacity")

resources_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["resources_location"])
policies_path = joinpath(path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["policies_location"])
# Read input data about power network topology, operating and expansion attributes
if isfile(joinpath(system_path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["network"]))

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
if isfile(joinpath(system_path,setup["WriteInputNamesDict"][string("inputs_p", stage)]["network"]))
if isfile(joinpath(system_path,
setup["WriteInputNamesDict"][string("inputs_p", stage)]["network"]))

Comment on lines +46 to +54
# Read temporal-resolved load data, and clustering information if relevant
load_demand_data!(setup, system_path, inputs)
# Read fuel cost data, including time-varying fuel costs
load_fuels_data!(setup, system_path, inputs)
# Read in generator/resource related inputs
load_resources_data!(inputs, setup, path, resources_path)
# Read in generator/resource availability profiles
load_generators_variability!(setup,system_path, inputs)


[JuliaFormatter] reported by reviewdog 🐶

Suggested change
# Read temporal-resolved load data, and clustering information if relevant
load_demand_data!(setup, system_path, inputs)
# Read fuel cost data, including time-varying fuel costs
load_fuels_data!(setup, system_path, inputs)
# Read in generator/resource related inputs
load_resources_data!(inputs, setup, path, resources_path)
# Read in generator/resource availability profiles
load_generators_variability!(setup, system_path, inputs)

return TDR_directory
if setup["MultiStage"] == 1
stage = setup["MultiStageSettingsDict"]["CurStage"]
if setup["TimeDomainReduction"] == 1 && time_domain_reduced_files_exist(TDR_directory, setup["WriteInputNamesDict"][string("inputs_p",stage)])

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
if setup["TimeDomainReduction"] == 1 && time_domain_reduced_files_exist(TDR_directory, setup["WriteInputNamesDict"][string("inputs_p",stage)])
if setup["TimeDomainReduction"] == 1 && time_domain_reduced_files_exist(
TDR_directory, setup["WriteInputNamesDict"][string("inputs_p", stage)])

@@ -1253,7 +1257,8 @@
### TDR_Results/Demand_data_clustered.csv
demand_in = get_demand_dataframe(
joinpath(inpath, "inputs", "inputs_p$per"),
mysetup["SystemFolder"])
mysetup["SystemFolder"]
)

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
)
)

Comment on lines +1285 to +1286
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["Demand"]), demand_in)


[JuliaFormatter] reported by reviewdog 🐶

Suggested change
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["Demand"]), demand_in)
write_output_file(
joinpath(inpath, "inputs", Stage_Outfiles[per]["Demand"]), demand_in)

Comment on lines +1299 to +1300
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GVar"]),GVOutputData)


[JuliaFormatter] reported by reviewdog 🐶

Suggested change
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GVar"]),GVOutputData)
write_output_file(
joinpath(inpath, "inputs", Stage_Outfiles[per]["GVar"]), GVOutputData)

Comment on lines +1334 to +1337

write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GSolar"]), solar_var)
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GWind"]), wind_var)


[JuliaFormatter] reported by reviewdog 🐶

Suggested change
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GSolar"]), solar_var)
write_output_file(joinpath(inpath, "inputs", Stage_Outfiles[per]["GWind"]), wind_var)

CSV.write(joinpath(inpath, "inputs", Stage_Outfiles[per]["GSolar"]),
solar_var)
CSV.write(joinpath(inpath, "inputs", Stage_Outfiles[per]["GWind"]),
wind_var)
end

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
end
write_output_file(
joinpath(inpath, "inputs", Stage_Outfiles[per]["GSolar"]),
solar_var)
write_output_file(
joinpath(inpath, "inputs", Stage_Outfiles[per]["GWind"]), wind_var)
end

@mmutic mmutic marked this pull request as ready for review September 12, 2024 15:38
Comment on lines +241 to +275
function default_input_names(case::AbstractString)
Dict{Any, Any}("system_location" => joinpath(case, "system"),
"demand" => "Demand_data.csv",
"fuel" => "Fuels_data.csv",
"generators" => "Generators_variability.csv",
"network" => "Network.csv",
"resources_location" => joinpath(case, "resources"),
"storage" => "Storage.csv",
"thermal" => "Thermal.csv",
"vre" => "Vre.csv",
"vre_stor" => "Vre_stor.csv",
"vre_stor_solar_variability" => "Vre_and_stor_solar_variability.csv",
"vre_stor_wind_variability" => "Vre_and_stor_wind_variability.csv",
"hydro" => "Hydro.csv",
"flex_demand" => "Flex_demand.csv",
"must_run" => "Must_run.csv",
"electrolyzer" => "Electrolyzer.csv",
"resource_cap" => "Resource_capacity_reserve_margin.csv",
"resource_energy_share_requirement" => "Resource_energy_share_requirement.csv",
"resource_min" => "Resource_minimum_capacity_requirement.csv",
"resource_max" => "Resource_maximum_capacity_requirement.csv",
"resource_hydrogen_demand" => "Resource_hydrogen_demand.csv",
"resource_hourly_matching" => "Resource_hourly_matching.csv",
"resource_multistage_data" => "Resource_multistage_data.csv",
"policies_location" => joinpath(case, "policies"),
"period_map" => "Period_map.csv",
"capacity" => "Capacity_reserve_margin.csv",
"CRM_slack" => "Capacity_reserve_margin_slack.csv",
"co2_cap" => "CO2_cap.csv",
"co2_cap_slack" => "CO2_cap_slack.csv",
"esr" => "Energy_share_requirement.csv",
"esr_slack" => "Energy_share_requirement_slack.csv",
"min_cap" => "Minimum_capacity_requirement.csv",
"max_cap" => "Maximum_capacity_requirement.csv",
"operational_reserves" => "Operational_reserves.csv")

It might make sense to make this a Dict{Symbol, String} or Dict{Symbol, Any}, i.e. the keys of the dict should be Symbols, not Strings. In general, Symbols are nicer as identifiers when you don't need to decompose the strings into characters or concatenate them into longer strings.
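For illustration, a Symbol-keyed version of a few of the entries above might look like the following (hypothetical sketch; the PR itself uses String keys, and the case path is a placeholder):

```julia
# Hypothetical Symbol-keyed variant of a few entries from default_input_names.
case = "example_systems/1_three_zones"   # placeholder case path
input_names = Dict{Symbol, Any}(
    :system_location => joinpath(case, "system"),
    :demand => "Demand_data.csv",
    :fuel => "Fuels_data.csv")

input_names[:demand]   # lookup by Symbol rather than by String
```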

@lbonaldo lbonaldo added the enhancement New feature or request label Dec 3, 2024