diff --git a/docs/source/_static/dag_sector.jpg b/docs/source/_static/dag_sector.jpg new file mode 100644 index 00000000..07231df4 Binary files /dev/null and b/docs/source/_static/dag_sector.jpg differ diff --git a/docs/source/config-configuration.md b/docs/source/config-configuration.md index 336e38c4..2a722641 100644 --- a/docs/source/config-configuration.md +++ b/docs/source/config-configuration.md @@ -28,7 +28,7 @@ The `run` section is used for running and storing scenarios with different confi The `scenario` section is used for setting the wildcards and defining planning horizon settings. All configurations within this section are described in [wildcards](#wildcards) with the exception of planning_horizons and foresight. -Planning horizons determines which year(s) of future demand forecast to use for your planning model. To build a multi-investment period model set multiple `planning_horizons:` years. The `foresight:` option specifies whether perfect foresight or myopoic foresight optimization model is developed. In perfect foresight, a monolithic model is developed where all `planning_horizons` specified are optimized at once, e.g. future horizon values of costs and demand are incorporated into decisions made in earlier planning horizons. Myopic optimization solves each planning horizon sequentially, and passes the results forward. Currently only `perfect` foresight is implemented. +Planning horizons determine which year(s) of future demand forecast to use for your planning model. To build a multi-investment-period model, set multiple `planning_horizons:` years. The `foresight:` option specifies whether a perfect foresight or a myopic foresight optimization model is developed. In perfect foresight, a monolithic model is developed where all `planning_horizons` specified are optimized at once, i.e. future horizon values of costs and demand are incorporated into decisions made in earlier planning horizons. 
Myopic optimization solves each planning horizon sequentially, and passes the results forward. ```{eval-rst} .. literalinclude:: ../../workflow/repo_data/config/config.default.yaml diff --git a/docs/source/configtables/scenario.csv b/docs/source/configtables/scenario.csv index 80933a9f..66124fed 100644 --- a/docs/source/configtables/scenario.csv +++ b/docs/source/configtables/scenario.csv @@ -1,3 +1,3 @@ ,Unit,Values,Description planning_horizons,int,"(2018-2023, 2030, 2040, 2050)","Specifies the year of demand data to use. Historical values will use EIA930 data, Future years will use NREL EFS data. Specify multiple planning horizons to build a multi-horizon model." -foresight,str,perfect,Specifies foresight option for multi-horizon optimization. Currently only perfect foresight is supported. Myopic foresight will be added in the future. +foresight,str,"One of {'perfect','myopic'}",Specifies foresight option for multi-horizon optimization. diff --git a/docs/source/data-sectors.md b/docs/source/data-sectors.md index a731c38d..49d6e07f 100644 --- a/docs/source/data-sectors.md +++ b/docs/source/data-sectors.md @@ -5,7 +5,7 @@ Full energy system models can be run through PyPSA-USA. This page gives an overv ## Overview -Sector coupled models are built ontop of the power sector representation. This means all assumptions[^1], including spatial and temporal options, implemented to bild the electricity sector are shared in sector coupled studies. +Sector coupled models are built on top of the power sector representation. This means all assumptions used to build the electricity sector, including spatial and temporal options, are shared in sector coupled studies[^1]. When running sector coupled studies, four end use sectors are added to the system. These include residential, commercial, industrial, and transportation. Each of these sectors will always have electrical loads. 
Then, based on user configuration options, heating, cooling, or fossil fuel loads may be added to the system. Moreover, the natural gas network is added to track imports/exports, storage levels, methane leaks, and endogenously solve for natural gas cost. @@ -68,7 +68,9 @@ Capacity constraints limit how much energy can be delievered through demand resp &\ \hspace{1cm} d_{n,t} \times s_{n,t} \geq dr_{n,t} \hspace{0.5cm} \forall_{\text{n,t}}\\ \end{align*} -This is not applied directly to the `Link` object via the `p_nom` paprameter to be consistent with the transport sector. Within the transport sector, demand response is applied to the aggregation bus due to endogenous investment options. Therefore, knowing how much electrical load to be shifted at each timestep is not possible before solving. For the transport sector, the following constraint is added. See the [transportation section](./data-transportation.md) schematics for details on where demand reponse is applied. +#### Transport Demand Response Capacity Constraint + +Within the transport sector, demand response is applied to the aggregation bus due to endogenous investment options. This is because it is not possible to know how much electrical load can be shifted before solving. For the transport sector, the following constraint is added. See the [transportation section](./data-transportation.md) schematics for details on where demand response is applied. \begin{align*} &\ \text{let:} \\ @@ -82,6 +84,17 @@ This is not applied directly to the `Link` object via the `p_nom` paprameter to &\ \hspace{1cm} \sum_{v}(p_{n,t,v}) \times s_{n,t} - dr_{n,t} \geq 0 \hspace{0.5cm} \forall_{\text{n,t}}\\ \end{align*} +(workflow-sector)= +## Workflow + +The diagram below illustrates the workflow of PyPSA-USA Sector. Many rules overlap with the electricity sector workflow; however, several additional rules are also present. 
+ +:::{figure-md} workflow + +![Snakemake DAG](./_static/dag_sector.jpg) + +Snakemake DAG for sector coupled studies +::: + ## Further Information Each sector has a dedicated page giving information on data sources, implementation details, and assumptions. See the corresponding page linked below. diff --git a/workflow/repo_data/config/config.common.yaml b/workflow/repo_data/config/config.common.yaml index c7fad783..0bcf3b7d 100644 --- a/workflow/repo_data/config/config.common.yaml +++ b/workflow/repo_data/config/config.common.yaml @@ -1,6 +1,5 @@ pudl_path: s3://pudl.catalyst.coop/v2025.2.0 -foresight: 'perfect' # docs : renewable: EGS: diff --git a/workflow/repo_data/config/config.default.yaml b/workflow/repo_data/config/config.default.yaml index 3eacb4b0..4919068f 100644 --- a/workflow/repo_data/config/config.default.yaml +++ b/workflow/repo_data/config/config.default.yaml @@ -15,6 +15,7 @@ scenario: ll: [v1.0] sector: "" # G planning_horizons: [2030, 2040, 2050] #(2018-2023, 2030, 2040, 2050) +foresight: 'perfect' # myopic, perfect model_topology: transmission_network: 'reeds' # [reeds, tamu] diff --git a/workflow/repo_data/config/config.tutorial.yaml b/workflow/repo_data/config/config.tutorial.yaml index 5f3b7727..2200434f 100644 --- a/workflow/repo_data/config/config.tutorial.yaml +++ b/workflow/repo_data/config/config.tutorial.yaml @@ -16,6 +16,7 @@ scenario: scope: "total" # "urban", "rural", or "total" sector: "" # G planning_horizons: [2050] #(2018-2023, 2030, 2040, 2050) +foresight: 'perfect' # myopic, perfect model_topology: transmission_network: 'reeds' # [reeds, tamu] diff --git a/workflow/repo_data/dag.jpg b/workflow/repo_data/dag.jpg index af23fa8a..82aaadcf 100644 Binary files a/workflow/repo_data/dag.jpg and b/workflow/repo_data/dag.jpg differ diff --git a/workflow/scripts/solve_network.py b/workflow/scripts/solve_network.py index c1d3f924..588db710 100644 --- a/workflow/scripts/solve_network.py +++ b/workflow/scripts/solve_network.py @@ -23,6 +23,7 @@ based on the rule :mod:`solve_network`. 
""" +import copy import logging import re from typing import Any @@ -182,7 +183,7 @@ def prepare_network( p_nom=1e9, # kW ) - if solve_opts.get("noisy_costs"): + if solve_opts.get("noisy_costs"): # add random noise to generator costs for t in n.iterate_components(): if "marginal_cost" in t.df: t.df["marginal_cost"] += 1e-2 + 2e-3 * (np.random.random(len(t.df)) - 0.5) @@ -366,7 +367,7 @@ def add_technology_capacity_target_constraints(n, config): ) -def add_RPS_constraints(n, config, sector): +def add_RPS_constraints(n, sns, config, sector): """ Add Renewable Portfolio Standards (RPS) constraints to the network. @@ -389,7 +390,7 @@ def add_RPS_constraints(n, config, sector): Returns ------- - None + """ def process_reeds_data(filepath, carriers, value_col): @@ -465,11 +466,7 @@ def process_reeds_data(filepath, carriers, value_col): portfolio_standards = pd.concat([portfolio_standards, rps_reeds, ces_reeds]) portfolio_standards = portfolio_standards[ (portfolio_standards.pct > 0.0) - & ( - portfolio_standards.planning_horizon.isin( - snakemake.params.planning_horizons, - ) - ) + & (portfolio_standards.planning_horizon.isin(sns.get_level_values(0))) & (portfolio_standards.region.isin(n.buses.reeds_state.unique())) ] @@ -705,8 +702,11 @@ def add_regional_co2limit(n, sns, config): config["electricity"]["regional_Co2_limits"], index_col=[0], ) + logger.info("Adding regional CO2 limits.") - regional_co2_lims = regional_co2_lims[regional_co2_lims.planning_horizon.isin(snakemake.params.planning_horizons)] + + # Filter the regional_co2_lims DataFrame based on the planning horizons present in the snapshots + regional_co2_lims = regional_co2_lims[regional_co2_lims.planning_horizon.isin(sns.get_level_values(0))] weightings = n.snapshot_weightings.loc[n.snapshots] for idx, emmission_lim in regional_co2_lims.iterrows(): @@ -1675,7 +1675,7 @@ def extra_functionality(n, snapshots): config = n.config if "RPS" in opts and n.generators.p_nom_extendable.any(): sector_rps = True if 
"sector" in opts else False - add_RPS_constraints(n, config, sector_rps) + add_RPS_constraints(n, snapshots, config, sector_rps) if "REM" in opts and n.generators.p_nom_extendable.any(): add_regional_co2limit(n, snapshots, config) if "BAU" in opts and n.generators.p_nom_extendable.any(): @@ -1716,10 +1716,37 @@ def extra_functionality(n, snapshots): add_land_use_constraints(n) +def run_optimize(n, rolling_horizon, skip_iterations, cf_solving, **kwargs): + """Initiate the correct type of pypsa.optimize function.""" + if rolling_horizon: + kwargs["horizon"] = cf_solving.get("horizon", 365) + kwargs["overlap"] = cf_solving.get("overlap", 0) + n.optimize.optimize_with_rolling_horizon(**kwargs) + status, condition = "", "" + elif skip_iterations: + status, condition = n.optimize(**kwargs) + else: + kwargs["track_iterations"] = cf_solving.get("track_iterations", False) + kwargs["min_iterations"] = cf_solving.get("min_iterations", 4) + kwargs["max_iterations"] = cf_solving.get("max_iterations", 6) + status, condition = n.optimize.optimize_transmission_expansion_iteratively( + **kwargs, + ) + + if status != "ok" and not rolling_horizon: + logger.warning( + f"Solving status '{status}' with termination condition '{condition}'", + ) + if "infeasible" in condition: + # n.model.print_infeasibilities() + raise RuntimeError("Solving status 'infeasible'") + + def solve_network(n, config, solving, opts="", **kwargs): set_of_options = solving["solver"]["options"] cf_solving = solving["options"] + foresight = snakemake.params.foresight if "sector" not in opts: kwargs["multi_investment_periods"] = config["foresight"] == "perfect" @@ -1743,28 +1770,91 @@ def solve_network(n, config, solving, opts="", **kwargs): n.config = config n.opts = opts - if rolling_horizon: - kwargs["horizon"] = cf_solving.get("horizon", 365) - kwargs["overlap"] = cf_solving.get("overlap", 0) - n.optimize.optimize_with_rolling_horizon(**kwargs) - status, condition = "", "" - elif skip_iterations: - 
status, condition = n.optimize(**kwargs) - else: - kwargs["track_iterations"] = (cf_solving.get("track_iterations", False),) - kwargs["min_iterations"] = (cf_solving.get("min_iterations", 4),) - kwargs["max_iterations"] = (cf_solving.get("max_iterations", 6),) - status, condition = n.optimize.optimize_transmission_expansion_iteratively( - **kwargs, - ) + match foresight: + case "perfect": + run_optimize(n, rolling_horizon, skip_iterations, cf_solving, **kwargs) + case "myopic": + for i, planning_horizon in enumerate(n.investment_periods): + sns_horizon = n.snapshots[n.snapshots.get_level_values(0) == planning_horizon] - if status != "ok" and not rolling_horizon: - logger.warning( - f"Solving status '{status}' with termination condition '{condition}'", - ) - if "infeasible" in condition: - # n.model.print_infeasibilities() - raise RuntimeError("Solving status 'infeasible'") + # restrict the optimization to this horizon's snapshots + kwargs["snapshots"] = sns_horizon + + run_optimize(n, rolling_horizon, skip_iterations, cf_solving, **kwargs) + + if i == len(n.investment_periods) - 1: + logger.info(f"Final time horizon {planning_horizon}") + continue + logger.info(f"Preparing brownfield from {planning_horizon}") + + # electric transmission grid: set optimised capacities from the previous horizon as minimum + n.lines.s_nom_min = n.lines.s_nom_opt # for lines + dc_i = n.links[n.links.carrier == "DC"].index + n.links.loc[dc_i, "p_nom_min"] = n.links.loc[dc_i, "p_nom_opt"] # for links + + for c in n.iterate_components(["Generator", "Link", "StorageUnit"]): + nm = c.name + # limit our components that we remove/modify to those prior to this time horizon + c_lim = c.df.loc[n.get_active_assets(nm, planning_horizon)] + + logger.info(f"Preparing brownfield for the component {nm}") + # attribute selection for naming convention + attr = "p" + # copy over asset sizing from previous period + c_lim[f"{attr}_nom"] = c_lim[f"{attr}_nom_opt"] + 
c_lim[f"{attr}_nom_extendable"] = False + df = copy.deepcopy(c_lim) + time_df = copy.deepcopy(c.pnl) + + for c_idx in c_lim.index: + n.remove(nm, c_idx) + + for df_idx in df.index: + if nm == "Generator": + n.madd( + nm, + [df_idx], + carrier=df.loc[df_idx].carrier, + bus=df.loc[df_idx].bus, + p_nom_min=df.loc[df_idx].p_nom_min, + p_nom=df.loc[df_idx].p_nom, + p_nom_max=df.loc[df_idx].p_nom_max, + p_nom_extendable=df.loc[df_idx].p_nom_extendable, + ramp_limit_up=df.loc[df_idx].ramp_limit_up, + ramp_limit_down=df.loc[df_idx].ramp_limit_down, + efficiency=df.loc[df_idx].efficiency, + marginal_cost=df.loc[df_idx].marginal_cost, + capital_cost=df.loc[df_idx].capital_cost, + build_year=df.loc[df_idx].build_year, + lifetime=df.loc[df_idx].lifetime, + heat_rate=df.loc[df_idx].heat_rate, + fuel_cost=df.loc[df_idx].fuel_cost, + vom_cost=df.loc[df_idx].vom_cost, + carrier_base=df.loc[df_idx].carrier_base, + p_min_pu=df.loc[df_idx].p_min_pu, + p_max_pu=df.loc[df_idx].p_max_pu, + land_region=df.loc[df_idx].land_region, + ) + else: + n.add(nm, df_idx, **df.loc[df_idx]) + n.consistency_check() + + # copy time-dependent (series) attributes back onto the rebuilt components + selection = n.component_attrs[nm].type.str.contains( + "series", + ) + + for tattr in n.component_attrs[nm].index[selection]: + n.import_series_from_dataframe(time_df[tattr], nm, tattr) + + # roll over the last snapshot of time varying storage state of charge to be the state_of_charge_initial for the next time period + n.storage_units.loc[:, "state_of_charge_initial"] = n.storage_units_t.state_of_charge.loc[ planning_horizon ].iloc[-1] + + case _: + raise ValueError(f"Invalid foresight option: '{foresight}'. Must be 'perfect' or 'myopic'.") return n @@ -1821,7 +1911,6 @@ def solve_network(n, config, solving, opts="", **kwargs): ) n.meta = dict(snakemake.config, **dict(wildcards=dict(snakemake.wildcards))) n.export_to_netcdf(snakemake.output[0]) - with open(snakemake.output.config, "w") as file: yaml.dump( n.meta,
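Reviewer note: the myopic branch added to `solve_network.py` solves each investment period on its own snapshots, then freezes the optimised capacities as the floor for the next period (the brownfield handoff). The standalone sketch below illustrates that control flow only; `Asset`, `solve_horizon`, and `myopic` are hypothetical stand-ins, not PyPSA API — the real code calls `run_optimize` per horizon and copies `p_nom_opt` into `p_nom`/`p_nom_min`.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    p_nom: float            # capacity entering the horizon
    p_nom_opt: float = 0.0  # capacity chosen by the "solver"
    extendable: bool = True

def solve_horizon(assets, demand):
    """Toy stand-in for n.optimize(): expand extendable assets to cover demand."""
    shortfall = max(demand - sum(a.p_nom for a in assets), 0.0)
    for a in assets:
        add = shortfall if a.extendable else 0.0
        a.p_nom_opt = a.p_nom + add
        shortfall -= add

def myopic(horizons, demands, assets):
    for year, demand in zip(horizons, demands):
        solve_horizon(assets, demand)  # optimize this horizon only
        for a in assets:
            a.p_nom = a.p_nom_opt      # brownfield handoff: freeze result as next floor
    return assets

# demand grows 10 -> 15 -> 25 across three horizons
assets = myopic([2030, 2040, 2050], [10.0, 15.0, 25.0], [Asset(p_nom=10.0)])
# capacity rolls forward: assets[0].p_nom == 25.0
```

Unlike perfect foresight, each period here sees only its own demand, so early builds are never sized in anticipation of later growth — which is exactly the behavioral difference the `foresight` option toggles.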