
feat: create full-dimension input profiles from Switch inputs and resulting Grid #107

Merged · 3 commits into output_processing · Jun 15, 2021

Conversation

danielolsen (Contributor)

Pull Request doc

Purpose

To have a MockScenario (#51) that can be used to calculate curtailment (e.g. #41), we need input profiles. This PR uses the inputs to SwitchWrapper, the inputs to Switch, and the Grid that comes from the expansion results to construct a set of profiles that are consistent with the output Grid.

What the code is doing

For loads:

  • We sum the bus-level demand to zone-level demand,
  • reshape the data frame, and
  • use the timestamp-to-timepoint mapping to go from timepoint resolution back to full timestamp resolution (see the sketch after this list).
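
As a rough illustration, the loads step could look like the sketch below. The column names (LOAD_ZONE, TIMEPOINT, zone_demand_mw) follow the standard Switch loads.csv format, and the bus-to-zone mapping via grid.bus["zone_id"] uses PowerSimData's bus table; these are assumptions for illustration, not necessarily the wrapper's exact code.

import pandas as pd

def rebuild_demand(loads, grid, timestamps_to_timepoints):
    # Map each bus-level Switch load zone to its PowerSimData zone
    # (treating LOAD_ZONE as a bus id is an assumption)
    loads = loads.assign(zone_id=loads["LOAD_ZONE"].map(grid.bus["zone_id"]))
    # Sum bus-level demand up to zone level, then reshape to timepoints x zones
    zone_demand = loads.groupby(["TIMEPOINT", "zone_id"])["zone_demand_mw"].sum()
    reshaped = zone_demand.unstack()
    # Use the timestamp-to-timepoint mapping to expand back to full resolution
    demand = reshaped.loc[timestamps_to_timepoints.tolist()]
    demand.index = timestamps_to_timepoints.index
    return demand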

For hydro/solar/wind plants:

  • We translate from the Switch plant IDs to our PowerSimData plant IDs,
  • reshape the data frame,
  • use the timestamp-to-timepoint mapping to go from timepoint resolution back to full timestamp resolution,
  • use the resulting plant capacities to translate from normalized power output to absolute power output, and
  • break up the profiles into individual ones for hydro, solar, and {wind + wind_offshore} (a sketch follows this list).
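
A similar sketch for the variable-plant steps, with the same caveats: the columns (GENERATION_PROJECT, timepoint, gen_max_capacity_factor) follow the Switch variable_capacity_factors.csv format, switch_to_plant_id is a hypothetical Switch-to-PowerSimData ID mapping, and Pmax/type are the standard PowerSimData plant columns.

def rebuild_variable_profiles(factors, grid, timestamps_to_timepoints, switch_to_plant_id):
    # Translate Switch plant IDs to PowerSimData plant IDs (hypothetical mapping)
    factors = factors.assign(
        plant_id=factors["GENERATION_PROJECT"].map(switch_to_plant_id)
    )
    # Reshape to timepoints x plants
    reshaped = factors.pivot(
        index="timepoint", columns="plant_id", values="gen_max_capacity_factor"
    )
    # Expand from timepoint resolution back to full timestamp resolution
    full = reshaped.loc[timestamps_to_timepoints.tolist()]
    full.index = timestamps_to_timepoints.index
    # Un-normalize: scale capacity factors by each plant's capacity (Pmax)
    full = full * grid.plant.loc[full.columns, "Pmax"]
    # Split into individual profiles; wind and wind_offshore share one frame
    types = grid.plant.loc[full.columns, "type"]
    groups = {"hydro": ["hydro"], "solar": ["solar"], "wind": ["wind", "wind_offshore"]}
    return {name: full.loc[:, types.isin(members)] for name, members in groups.items()}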

Usage Example/Visuals

import pickle
import pandas as pd
from powersimdata import Scenario
from switchwrapper.switch_to_grid import construct_grids_from_switch_results
from switchwrapper.switch_to_profiles import reconstruct_input_profiles

# Inputs
scenario = Scenario(599)
grid = scenario.get_grid()
loads = pd.read_csv("path_to_prepared_switch_inputs/loads.csv")
variable_capacity_factors = pd.read_csv("path_to_prepared_switch_inputs/variable_capacity_factors.csv")
timestamps_to_timepoints = pd.read_csv("slicing_recovery.csv", index_col=0).squeeze()

# Outputs
results_filepath = "path_to_results/results.pickle"
with open(results_filepath, "rb") as f:
    results = pickle.load(f)

output_grid = construct_grids_from_switch_results(grid, results)[2030]

profiles = reconstruct_input_profiles(output_grid, loads, variable_capacity_factors, timestamps_to_timepoints)

Validation

Subtracting the original scenario profiles from the reconstructed profiles shows that the indices and columns are identical but the values differ, as expected: the temporal reduction averaged the values of many timepoints together. Comparing the sums shows that the overall differences are only rounding errors; in this particular example, Switch chooses to build only ten natural gas plants and no new hydro/wind/solar plants, so the variable-plant capacities used for un-normalization are unchanged and the totals are recovered to within floating-point error.

>>> profiles["demand"] - scenario.get_demand()
--> Loading demand
zone_id                      201          202  ...         215         216
UTC Time                                       ...
2016-01-01 00:00:00 -1788.143265  -719.574247  ... -168.027009   61.753279
2016-01-01 01:00:00 -2943.743265 -1107.544247  ... -295.227009   -9.726721
2016-01-01 02:00:00 -3962.543265 -1673.884247  ... -302.257009  -38.319721
2016-01-01 03:00:00   -77.530952    -8.801190  ...  120.021429   33.715286
2016-01-01 04:00:00  -215.238095   -53.079524  ...   57.822143   39.688738
...                          ...          ...  ...         ...         ...
2016-12-31 19:00:00 -2796.599886  -859.791929  ... -157.350126   60.139312
2016-12-31 20:00:00 -2625.899886  -775.531929  ... -116.490126   58.709312
2016-12-31 21:00:00 -2280.999886  -602.061929  ...  -79.230126   69.431312
2016-12-31 22:00:00 -2345.943265  -557.914247  ...  -64.977009  103.927279
2016-12-31 23:00:00 -2253.643265  -485.284247  ...  -71.487009  112.505279

[8784 rows x 16 columns]
>>> profiles["demand"].sum().sum() - scenario.get_demand().sum().sum()
--> Loading demand
9.059906005859375e-06
>>> profiles["hydro"] - scenario.get_hydro()
--> Loading hydro
plant_id                 10390      10391  ...     12862     12863
UTC Time                                   ...
2016-01-01 00:00:00  15.031691  15.031401  ...  5.093939  5.093939
2016-01-01 01:00:00   7.358058   7.357912  ...  4.078544  4.078544
2016-01-01 02:00:00  -0.748815  -0.748808  ...  3.005805  3.005805
2016-01-01 03:00:00   7.465305   7.465145  ...  7.174879  7.174879
2016-01-01 04:00:00   5.035807   5.035700  ...  5.201087  5.201087
...                        ...        ...  ...       ...       ...
2016-12-31 19:00:00  -4.619034  -4.618954  ... -4.656815 -4.656815
2016-12-31 20:00:00  -2.963497  -2.963449  ... -4.296705 -4.296705
2016-12-31 21:00:00   1.635038   1.634999  ... -3.296381 -3.296381
2016-12-31 22:00:00   4.262475   4.262387  ... -2.664632 -2.664632
2016-12-31 23:00:00   1.983208   1.983164  ... -3.160438 -3.160438

[8784 rows x 715 columns]
>>> profiles["hydro"].sum().sum() - scenario.get_hydro().sum().sum()
--> Loading hydro
1.7881393432617188e-07
>>> profiles["solar"] - scenario.get_solar()
--> Loading solar
plant_id                10441     10447     10448  ...  13990  13991  13992
UTC Time                                           ...
2016-01-01 00:00:00  0.013670  0.006448  0.004606  ...    0.0    0.0    0.0
2016-01-01 01:00:00  0.013670  0.006448  0.004606  ...    0.0    0.0    0.0
2016-01-01 02:00:00  0.013670  0.006448  0.004606  ...    0.0    0.0    0.0
2016-01-01 03:00:00  0.000000  0.000000  0.000000  ...    0.0    0.0    0.0
2016-01-01 04:00:00  0.000000  0.000000  0.000000  ...    0.0    0.0    0.0
...                       ...       ...       ...  ...    ...    ...    ...
2016-12-31 19:00:00 -0.038768 -0.010236 -0.007312  ...    0.0    0.0    0.0
2016-12-31 20:00:00 -0.042496 -0.012061 -0.008615  ...    0.0    0.0    0.0
2016-12-31 21:00:00 -0.037504 -0.011005 -0.007860  ...    0.0    0.0    0.0
2016-12-31 22:00:00 -0.049545 -0.019262 -0.013759  ...    0.0    0.0    0.0
2016-12-31 23:00:00 -0.025864 -0.008274 -0.005910  ...    0.0    0.0    0.0

[8784 rows x 433 columns]
>>> profiles["solar"].sum().sum() - scenario.get_solar().sum().sum()
--> Loading solar
-2.2351741790771484e-07
>>> profiles["wind"] - scenario.get_wind()
--> Loading wind
plant_id                 10397      10400      10401  ...  14017  14018  14019
UTC Time                                              ...
2016-01-01 00:00:00 -12.415869 -20.896780 -23.230129  ...    0.0    0.0    0.0
2016-01-01 01:00:00 -19.205668 -13.286196 -26.999297  ...    0.0    0.0    0.0
2016-01-01 02:00:00 -16.740515   1.119795 -23.612031  ...    0.0    0.0    0.0
2016-01-01 03:00:00 -33.903199 -12.028569 -47.582920  ...    0.0    0.0    0.0
2016-01-01 04:00:00 -37.561448  -1.811623 -50.983635  ...    0.0    0.0    0.0
...                        ...        ...        ...  ...    ...    ...    ...
2016-12-31 19:00:00 -20.778923  -7.999457  17.624864  ...    0.0    0.0    0.0
2016-12-31 20:00:00 -27.095077 -30.998020  16.972132  ...    0.0    0.0    0.0
2016-12-31 21:00:00 -30.992759 -43.380799  15.597011  ...    0.0    0.0    0.0
2016-12-31 22:00:00 -21.660810 -39.005182  17.325017  ...    0.0    0.0    0.0
2016-12-31 23:00:00 -36.307123 -42.573883 -39.555224  ...    0.0    0.0    0.0

[8784 rows x 280 columns]
>>> profiles["wind"].sum().sum() - scenario.get_wind().sum().sum()
--> Loading wind
1.1920928955078125e-07

Time estimate

15-30 minutes.

@danielolsen force-pushed the daniel/reconstruct_profiles branch from 3ff6f81 to 0a879fb on June 14, 2021 16:17
@danielolsen mentioned this pull request on Jun 14, 2021
A Collaborator commented on this snippet:

full_time_profiles = reshaped_values.loc[timestamps_to_timepoints.tolist()]
full_time_profiles.index = timestamps_to_timepoints.index
# Un-normalize
built_variable_plants = grid.plant.query("type in @const.variable_types").index

More query call, more #noqa!

@danielolsen (Contributor, Author)

Per @BainanXia's suggestion, we now return a dict of dicts. The new demo code is:

import pandas as pd
import pickle
from powersimdata import Scenario
from switchwrapper.switch_to_grid import construct_grids_from_switch_results
from switchwrapper.switch_to_profiles import reconstruct_input_profiles

# Get dict of Grids for each investment year
scenario = Scenario(599)
grid = scenario.get_grid()
filename = "path_to_results_file/results.pickle"
with open(filename, "rb") as f:
    results = pickle.load(f)

all_grids = construct_grids_from_switch_results(grid, results)

# Get inputs required to un-map
loads = pd.read_csv("path_to_prepared_inputs/loads.csv")
variable_capacity_factors = pd.read_csv("path_to_prepared_inputs/variable_capacity_factors.csv")
timestamps_to_timepoints = pd.read_csv("path_to_timestamp_mapping/slicing_recovery.csv", index_col=0).squeeze()

# New functionality from this feature
profiles = reconstruct_input_profiles(all_grids, loads, variable_capacity_factors, timestamps_to_timepoints)
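
The individual profiles can then be pulled out per investment year. The key order shown here (year first, then profile name, mirroring the Grid dict returned by construct_grids_from_switch_results) is an assumption:

# Key order (year, then profile name) is assumed, not confirmed by the PR
demand_2030 = profiles[2030]["demand"]
hydro_2030 = profiles[2030]["hydro"]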

@BainanXia (Collaborator) left a comment:

Looks good. I will refactor #108 to include this in the extraction class after this is merged.

@danielolsen merged commit c90bf9f into output_processing on Jun 15, 2021
@danielolsen deleted the daniel/reconstruct_profiles branch on June 15, 2021 00:58