diff --git a/doc/tutorial.rst b/doc/tutorial.rst index 0e47f372..5c3b8d70 100644 --- a/doc/tutorial.rst +++ b/doc/tutorial.rst @@ -3,29 +3,27 @@ .. _tutorial: Tutorial ======== The README file contains `installation notes`__. This tutorial expands on the steps that follow this installation. .. __: https://github.com/tum-ens/urbs/blob/master/README.md#installation -This tutorial is a commented walk-through through the script ``runme.py``, -which is a demonstration user script that can serve as a good basis for one's -own script. +This tutorial is a commented walk-through of the scripts ``runme.py`` and +``scenarios.py``, which are demonstration user scripts that can serve as a good +basis for one's own scripts. ``scenarios.py`` is contained in the subfolder +``urbs``. Imports ------- :: - import os - import pandas as pd - import pyomo.environ + import os import shutil import urbs - from datetime import datetime - from pyomo.opt.base import SolverFactory + from urbs.data import timeseries_number Several packages are included. @@ -46,14 +44,7 @@ Several packages are included. :func:`create_model`, :func:`report`, :func:`result_figures` and all scenario functions. More functions can be found in the document :ref:`API`. -* `pyomo.opt.base`_ is a utility package by `pyomo`_ and provides the function - ``SolverFactory`` that allows creating a ``solver`` object. This objects - hides the differences in input/output formats among solvers from the user. - More on that in section `Solving`_ below. - -* `datetime` is used to append the current date and time to the result - directory name (used in :func:`prepare_result_directory`) - + In the following sections the user-defined input, output and scenario settings are described. @@ -69,7 +60,9 @@ located in the same folder as script ``runme.py``:: # copy input file to result directory shutil.copyfile(input_file, os.path.join(result_dir, input_file)) # copy runme.py to result directory - shutil.copy(__file__, result_dir + shutil.copy(__file__, result_dir) + # copy current version of scenario functions + shutil.copy('urbs/scenarios.py', result_dir) Variable ``input_file`` defines the input spreadsheet, from which the optimization problem will draw all its set/parameter data. The input file and @@ -103,7 +96,7 @@ must be a subset of the labels used in ``input_file``'s sheets "Demand" and accessible directly, so that one can quickly reduce the problem size by reducing the simulation ``length``, i.e. the number of timesteps to be optimised. Variable ``dt`` is the duration of each timestep in the list in -hours, where any positiv real value is allowed. +hours, where any positive real value is allowed. :func:`range` is used to create a list of consecutive integers. The argument ``+1`` is needed, because ``range(a,b)`` only includes integers from ``a`` to @@ -117,7 +110,7 @@ Output Settings The desired output is also specified by the user in script ``runme.py``. It is split into two parts: reporting and plotting. The former is used to generate an excel output file and the latter for standard graphs. - + Reporting ^^^^^^^^^ @@ -145,7 +138,7 @@ sums") and as individual timeseries (in sheet "... timeseries"). # optional: define names for sites in report_tuples report_sites_name = {['North', 'Mid', 'South']: 'Greenland'} -Optional it is possible to define ``report_tuples`` to control what shall be +Optionally, it is possible to define ``report_tuples`` to control what should be reported.
With ``report_sites_name`` it is possible to define whether the sites inside the report tuples should be named differently. If they are empty, the default value will be taken. See also :ref:`report-function` for a detailed explanation of this function's implementation. @@ -219,7 +212,7 @@ manually to modify some aspects of a plot without having to recreate the plotting function from scratch. For more ideas for adaptations, look into :func:`plot`'s code or the `matplotlib documentation`_. -The last paragraph uses the :meth:`~matplotlib.figure.Figure.savefig` method +The second paragraph uses the :meth:`~matplotlib.figure.Figure.savefig` method to save the figure as a pixel ``png`` (raster) and ``pdf`` (vector) image. The ``bbox_inches='tight'`` argument eliminates whitespace around the plot. @@ -237,8 +230,10 @@ the same base scenarios, defined by the data in ``input_file``, they serve as a short way of defining the difference in input data. If needed, completely separate input data files could be loaded as well. -The ``scenarios`` list in the end of the input file allows then to select the -scenarios to be actually run. :: +The ``scenarios`` list at the end of the script ``runme.py`` then allows +selecting the scenarios to be actually run. +In Python, functions are objects, so they can be put into data structures just +like any variable could be. :: scenarios = [ urbs.scenario_base, @@ -254,69 +249,120 @@ script ``scenarios.py``. Scenario functions ^^^^^^^^^^^^^^^^^^ -A scenario is simply a function that takes the input ``data`` and modifies it -in a certain way. with the required argument ``data``, the input -data :class:`dict`.:: - + +A scenario is a function that takes an existing pyomo concrete model ``prob``, +modifies its data and updates the corresponding constraints. :: + # SCENARIOS - def scenario_base(data): + def scenario_base(prob, reverse, not_used): # do nothing - return data + return prob -The simplest scenario does not change anything in the original input file. It +The simplest scenario does not change anything in the original model. It usually is called "base" scenario for that reason. All other scenarios are -defined by 1 or 2 distinct changes in parameter values, relative to this common -foundation.:: +defined by one or two distinct changes in parameter values. To actually have an +effect on the model to be solved, the corresponding constraints must be updated +after the data change. Note how the changes are undone when the function is +called with ``reverse=True``; this allows the model to be reused in the +following scenarios. Cloning the model would be very expensive and is therefore +not recommended. +``not_used`` is only needed for :func:`scenario_new_timeseries`, where it +contains the file extension used to locate the spreadsheet with the new +timeseries. For all other scenarios it is unused. :: - def scenario_stock_prices(data): + def scenario_stock_prices(prob, reverse, not_used): # change stock commodity prices - co = data['commodity'] - stock_commodities_only = (co.index.get_level_values('Type') == 'Stock') - co.loc[stock_commodities_only, 'price'] *= 1.5 - return data + if not reverse: + for x in tuple(prob.commodity_dict["price"].keys()): + if x[2] == "Stock": + prob.commodity_dict["price"][x] *= 1.5 + update_cost(prob) + return prob + if reverse: + for x in tuple(prob.commodity_dict["price"].keys()): + if x[2] == "Stock": + prob.commodity_dict["price"][x] *= 1/1.5 + update_cost(prob) + return prob For example, :func:`scenario_stock_prices` selects all stock commodities from -the :class:`DataFrame` ``commodity``, and increases their *price* value by 50%. -See also pandas documentation :ref:`Selection by label ` -for more information about the ``.loc`` function to access fields. Also note -the use of `Augmented assignment statements`_ (``*=``) to modify data -in-place.:: +the :class:`dict` ``commodity_dict``, and increases their *price* value +by 50%. Also note the use of `Augmented assignment statements`_ (``*=``) to +modify data in-place.:: - def scenario_co2_limit(data): + def scenario_co2_limit(prob, reverse, not_used): # change global CO2 limit - hacks = data['hacks'] - hacks.loc['Global CO2 limit', 'Value'] *= 0.05 - return data + if not reverse: + prob.global_prop_dict["value"]["CO2 limit"] *= 0.05 + update_co2_limit(prob) + return prob + if reverse: + prob.global_prop_dict["value"]["CO2 limit"] *= 1/0.05 + update_co2_limit(prob) + return prob Scenario :func:`scenario_co2_limit` shows the simple case of changing a single input data value. In this case, a 95% CO2 reduction compared to the base scenario must be accomplished. This drastically limits the amount of coal and gas that may be used by all three sites.:: - def scenario_north_process_caps(data): + def scenario_north_process_caps(prob, reverse, not_used): # change maximum installable capacity - pro = data['process'] - pro.loc[('North', 'Hydro plant'), 'cap-up'] *= 0.5 - pro.loc[('North', 'Biomass plant'), 'cap-up'] *= 0.25 - return data + if not reverse: + prob.process_dict["cap-up"][('North', 'Hydro plant')] *= 0.5 + prob.process_dict["cap-up"][('North', 'Biomass plant')] *= 0.25 + update_process_capacity(prob) + return prob + if reverse: + prob.process_dict["cap-up"][('North', 'Hydro plant')] *= 2 + prob.process_dict["cap-up"][('North', 'Biomass plant')] *= 4 + update_process_capacity(prob) + return prob Scenario :func:`scenario_north_process_caps` demonstrates accessing single values in the ``process_dict`` dictionary. By reducing the amount of renewable energy conversion processes (hydropower and biomass), this scenario explores the "second best" option for this region to supply its demand.:: - def scenario_all_together(data): + def scenario_all_together(prob, reverse, not_used): # combine all other scenarios - data = scenario_stock_prices(data) - data = scenario_co2_limit(data) - data = scenario_north_process_caps(data) - return data + if not reverse: + prob = scenario_stock_prices(prob, 0, not_used) + prob = scenario_co2_limit(prob, 0, not_used) + prob = scenario_north_process_caps(prob, 0, not_used) + return prob + if reverse: + prob = scenario_stock_prices(prob, 1, not_used) + prob = scenario_co2_limit(prob, 1, not_used) + prob = scenario_north_process_caps(prob, 1, not_used) + return prob Scenario :func:`scenario_all_together` finally shows that scenarios can also be combined by chaining other scenario functions, making them dependent. This way, complex scenario trees can be written with any single input change coded at a single place and then building complex composite scenarios from those.
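+For illustration only, the following sketch shows how an additional,
+self-written scenario function could look. It is not part of ``scenarios.py``;
+the function name is made up, and it assumes that ``demand_dict`` is keyed by
+``(site, commodity)`` tuples holding one value per timestep and that
+:func:`update_res_vertex` rebuilds every constraint in which demand appears. ::
+
+    def scenario_less_demand(prob, reverse, not_used):
+        # hypothetical example: scale every demand timeseries by 90 %
+        factor = 0.9 if not reverse else 1 / 0.9
+        for col in prob.demand_dict:          # col is a (site, commodity) tuple
+            for t in prob.demand_dict[col]:
+                prob.demand_dict[col][t] *= factor
+        # demand only enters the vertex rule, so only that constraint
+        # block has to be rebuilt
+        update_res_vertex(prob)
+        return prob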
+Reading input & model creation +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +:: + +    # Read data from Excel Sheet and create model for use in scenarios +    data = urbs.read_excel(input_file) +    prob = urbs.create_model(data, dt, timesteps) + +Function :func:`read_excel` returns a dict ``data`` of up to 12 pandas +DataFrames with hard-coded column names that correspond to the parameters of +the optimization problem (like ``eff`` for efficiency or ``inv-cost-c`` for +capacity investment costs). The row labels on the other hand may be freely +chosen (like site names, process identifiers or commodity names). By +convention, it must contain the six keys ``commodity``, ``process``, +``storage``, ``transmission``, ``demand``, and ``supim``. Each value must be a +:class:`pandas.DataFrame`, whose index (row labels) and columns (column labels) +conform to the specification given by the example dataset in the spreadsheet +:file:`mimo-example.xlsx`. +``prob`` is then modified by applying the :func:`scenario` function to it.
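+To see how the spreadsheet data and the model dictionaries relate, it can help
+to look at one and the same parameter through both access paths. This is just
+an illustrative check, assuming the example dataset ``mimo-example.xlsx``::
+
+    # the parameter as read from the spreadsheet (dict of DataFrames) ...
+    print(data['process'].loc[('North', 'Hydro plant'), 'cap-up'])
+    # ... and as stored in the dictionaries attached to the model instance,
+    # which is what the scenario functions manipulate
+    print(prob.process_dict['cap-up'][('North', 'Hydro plant')])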
Run scenarios ------------- @@ -335,46 +381,28 @@ Having prepared settings, input data and scenarios, the actual computations happen in the function :func:`run_scenario` of the script ``runfunctions.py`` in subfolder ``urbs``. It is executed for each of the scenarios included in the scenario list. The following sections describe the content of function -:func:`run_scenario`. In a nutshell, it reads the input data from its argument -``input_file``, modifies it with the supplied ``scenario``, runs the -optimisation for the given ``timesteps`` and writes report and plots to -``result_dir``. +:func:`run_scenario`. In a nutshell, it modifies the input +model instance ``prob`` by applying the :func:`scenario` function to it. It +then runs the optimization for the given ``timesteps`` and writes report and +plots to ``result_dir``. -Reading input +Scenario creation -^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^ :: - # scenario name, read and modify data for scenario + # scenario name; modify model instance for scenario sce = scenario.__name__ - data = read_excel(input_file) - data = scenario(data) - validate_input(data) + prob = scenario(prob, 0) -Function :func:`read_excel` returns a dict ``data`` of up to 12 pandas -DataFrames with hard-coded column names that correspond to the parameters of -the optimization problem (like ``eff`` for efficiency or ``inv-cost-c`` for -capacity investment costs). The row labels on the other hand may be freely -chosen (like site names, process identifiers or commodity names). By -convention, it must contain the six keys ``commodity``, ``process``, -``storage``, ``transmission``, ``demand``, and ``supim``. Each value must be a -:class:`pandas.DataFrame`, whose index (row labels) and columns (column labels) -conforms to the specification given by the example dataset in the spreadsheet -:file:`mimo-example.xlsx`. -``data`` is then modified by applying the :func:`scenario` function to it. To -then rule out a list of known errors, that accumulate through growing user -experience, a variety of validation functions specified in script -``validate.py`` in subfolder ``urbs`` is run on the dict ``data``. +The pyomo model instance ``prob`` now contains the scenario to be solved. Solving ^^^^^^^ :: - # create model - prob = urbs.create_model(data, dt, timesteps) - -    # refresh time stamp string and create filename for logfile -    now = prob.created +    # create filename for logfile log_filename = os.path.join(result_dir, '{}.log').format(sce) @@ -385,18 +413,27 @@ Solving result = optim.solve(prob, tee=True) This section is the "work horse", where most computation and time is spent. The -optimization problem is first defined (:func:`create_model`), then filled with -values (``create``). The ``SolverFactory`` object is an abstract representation -of the solver used. The returned object ``optim`` has a method -:meth:`set_options` to set solver options (not used in this tutorial). +optimization problem is already well defined. The ``SolverFactory`` object is +an abstract representation of the solver used. The returned object ``optim`` +has a method :meth:`set_options` to set solver options (not used in this +tutorial). The remaining line calls the solver and reads the ``result`` object back into the ``prob`` object, which is queried for variable values in the remaining script file. Argument ``tee=True`` enables the realtime console output for the solver. If you want less verbose output, simply set it to ``False`` or remove it.
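+If additional solver options are needed, :func:`setup_solver` in
+``runfunctions.py`` is the natural place for them. As a sketch, the
+commented-out options already present there for the ``glpk`` case could be
+enabled like this (the values are placeholders, not recommendations)::
+
+    optim = SolverFactory('glpk')
+    optim = setup_solver(optim, logfile=log_filename)
+    optim.set_options("tmlim=7200")    # time limit in seconds
+    optim.set_options("mipgap=.0005")  # acceptable relative MIP gap
+    result = optim.solve(prob, tee=False)  # tee=False for a quiet console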
+Rebuilding of base scenario +^^^^^^^^^^^^^^^^^^^^^^^^^^^ +:: + +    prob = scenario(prob, 1, filename) + +This line calls the :func:`scenario` function with the argument ``reverse=True``. +The function is built such that it undoes all changes done to ``prob`` when +called with this argument. Afterwards, ``prob`` again contains the base scenario +and can be used correctly in the next scenario.
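+Putting the pieces together, the life cycle of one scenario inside
+:func:`run_scenario` can be condensed to the following sketch (reporting,
+plotting and error handling omitted)::
+
+    prob = scenario(prob, False, filename)   # apply changes, update constraints
+    optim = SolverFactory(solver)
+    optim = setup_solver(optim, logfile=log_filename)
+    result = optim.solve(prob, tee=True)     # solve the modified model
+    # report() and result_figures() are called on the solved instance here
+    prob = scenario(prob, True, filename)    # undo the changes for the next run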
For now, you don't +need the other scenarios, so you exclude them from the +``scenarios`` :class:`list`: +# Finally, you head down to the ``if __name__ == '__main__'`` section that is +# executed when the script is called. There, you change the ``input_file`` to +# your spreadsheet ``newsealand.xlsx`` and increase the optimisation duration to +# 14 days (``14*24`` time steps). For now, you don't need the other scenarios, +# so you exclude them from the ``scenarios`` :class:`list`: .. code-block:: python - :emphasize-lines: 2,6,10,11,12 - - if __name__ == '__main__': - input_file = 'newsealand.xlsx' - result_name = os.path.splitext(input_file)[0] # cut away file extension - result_dir = prepare_result_directory(result_name) # name + time stamp - - (offset, length) = (3500, 14*24) # time step selection - timesteps = range(offset, offset+length+1) - dt = 1 - - # select scenarios to be run - scenarios = [ - scenario_base, - scenario_co2_limit] + :emphasize-lines: 1,5,10,11,12 + + input_file = 'newsealand.xlsx' + result_name = os.path.splitext(input_file)[0] # cut away file extension + result_dir = prepare_result_directory(result_name) # name + time stamp + + (offset, length) = (3500, 14*24) # time step selection + timesteps = range(offset, offset+length+1) + dt = 1 + + # select scenarios to be run + scenarios = [ + urbs.scenario_base, + urbs.scenario_co2_limit] # define the commodities/sites to be plotted plot_tuples = [ @@ -515,17 +529,19 @@ so you exclude them from the ``scenarios`` :class:`list`: report_sites_name = {'Vled Haven': 'Another name for Vled Haven'} # define the time periods to be plotted - plot_periods = { - 'all': timesteps[1:] - } - - for scenario in scenarios: - prob = run_scenario(input_file, timesteps, scenario, result_dir, dt, - plot_tuples=plot_tuples, - plot_sites_name=plot_sites_name, - plot_periods=plot_periods, - report_tuples=report_tuples, - report_sites_name=report_sites_name) + plot_periods = {'all': timesteps[1:]} + # Read data from Excel Sheet and create model for use in scenarios + data = urbs.read_excel(input_file) + prob = urbs.create_model(data, dt, timesteps) + + for scenario in scenarios: + prob = run_scenario(prob, timesteps, scenario, result_dir, dt, + objective, + plot_tuples=plot_tuples, + plot_sites_name=plot_sites_name, + plot_periods=plot_periods, + report_tuples=report_tuples, + report_sites_name=report_sites_name) .. 
note:: diff --git a/input/alternative_scenario_new_timeseries_.xlsx b/input/alternative_scenario_new_timeseries_.xlsx new file mode 100644 index 00000000..da050874 Binary files /dev/null and b/input/alternative_scenario_new_timeseries_.xlsx differ diff --git a/input/alternative_scenario_new_timeseries_example_file_extension.xlsx b/input/alternative_scenario_new_timeseries_example_file_extension.xlsx new file mode 100644 index 00000000..e755e99d Binary files /dev/null and b/input/alternative_scenario_new_timeseries_example_file_extension.xlsx differ diff --git a/runme.py b/runme.py index 12926e80..2290040d 100644 --- a/runme.py +++ b/runme.py @@ -1,15 +1,12 @@ import os -import pandas as pd -import pyomo.environ import shutil import urbs -from datetime import datetime -from pyomo.opt.base import SolverFactory +from urbs.data import timeseries_number input_file = 'mimo-example.xlsx' result_name = os.path.splitext(input_file)[0] # cut away file extension -result_dir = urbs.prepare_result_directory(result_name) # name+time stamp +result_dir = urbs.prepare_result_directory(result_name) # name + time stamp # copy input file to result directory shutil.copyfile(input_file, os.path.join(result_dir, input_file)) @@ -49,9 +46,7 @@ report_sites_name = {('North', 'Mid', 'South'): 'Greenland'} # plotting timesteps -plot_periods = { - 'all': timesteps[1:] -} +plot_periods = {'all': timesteps[1:]} # add or change plot colors my_colors = { @@ -69,13 +64,24 @@ urbs.scenario_co2_tax_mid, urbs.scenario_no_dsm, urbs.scenario_north_process_caps, - urbs.scenario_all_together] + urbs.scenario_all_together, + urbs.alternative_scenario_base, + urbs.alternative_scenario_new_timeseries(timeseries_number, + "example_file_extension") + ] +prob = None for scenario in scenarios: - prob = urbs.run_scenario(input_file, solver, timesteps, scenario, - result_dir, dt, objective, - plot_tuples=plot_tuples, - plot_sites_name=plot_sites_name, - plot_periods=plot_periods, - report_tuples=report_tuples, - report_sites_name=report_sites_name) + if str(scenario.__name__).find("alternative") >= 0: + # if alternative scenario: check if model instance already exists + try: + prob.data + except AttributeError: + data = urbs.read_excel(input_file) + prob = urbs.create_model(data, dt, timesteps) + prob = urbs.run_scenario(input_file, prob, solver, timesteps, scenario, + result_dir, dt, objective, plot_tuples=plot_tuples, + plot_sites_name=plot_sites_name, + plot_periods=plot_periods, + report_tuples=report_tuples, + report_sites_name=report_sites_name) diff --git a/urbs/__init__.py b/urbs/__init__.py index 93ac0dab..e3d2c6d7 100644 --- a/urbs/__init__.py +++ b/urbs/__init__.py @@ -11,13 +11,7 @@ """ from .data import COLORS -from .model import create_model -from .input import read_excel, get_input +from .input import read_excel from .validation import validate_input -from .output import get_constants, get_timeseries -from .plot import plot, result_figures, to_color -from .pyomoio import get_entity, get_entities, list_entities -from .report import report from .runfunctions import * -from .saveload import load, save from .scenarios import * diff --git a/urbs/data.py b/urbs/data.py index 0b873f98..9996b2cd 100644 --- a/urbs/data.py +++ b/urbs/data.py @@ -28,3 +28,8 @@ 'Purchase': (0, 51, 89), 'Startup': (105, 8, 90), 'Variable': (128, 153, 172)} + + +# Helper list used for global declaration of current timeseries data sheet. 
+# Automatically filled by scenario_new_timeseries function +timeseries_number = [] diff --git a/urbs/plot.py b/urbs/plot.py index cc1ee78b..eca5c142 100644 --- a/urbs/plot.py +++ b/urbs/plot.py @@ -124,9 +124,12 @@ def plot(prob, com, sit, dt, timesteps, timesteps_plot, try: # detect whether DSM could be used in this plot # if so, show DSM subplot (even if delta == 0 for the whole time) - df_dsm = get_input(prob, 'dsm') - plot_dsm = df_dsm.loc[(sit, com), - ['cap-max-do', 'cap-max-up']].sum().sum() > 0 + df_dsm = get_input(prob, 'dsm_dict') + plot_dsm=0 + for s in sit: + plot_dsm += (df_dsm["cap-max-do"][s,com] + df_dsm["cap-max-up"] + [s,com]) + plot_dsm=plot_dsm>0 except (KeyError, TypeError): plot_dsm = False diff --git a/urbs/pyomoio.py b/urbs/pyomoio.py index bf91f750..2872382e 100644 --- a/urbs/pyomoio.py +++ b/urbs/pyomoio.py @@ -2,7 +2,7 @@ import pyomo.core as pyomo -def get_entity(instance, name): +def get_entity(instance, name, skip_result_cache=False): """ Retrieve values (or duals) for an entity in a model instance. Args: @@ -13,8 +13,10 @@ def get_entity(instance, name): a Pandas Series with domain as index and values (or 1's, for sets) of entity name. For constraints, it retrieves the dual values """ + # magic: short-circuit if problem contains a result cache - if hasattr(instance, '_result') and name in instance._result: + if (not skip_result_cache and hasattr(instance, '_result') and name in + instance._result): return instance._result[name].copy(deep=True) # retrieve entity, its type and its onset names @@ -74,8 +76,7 @@ def get_entity(instance, name): [(v[0], v[1].value) for v in entity.iteritems()]) else: # assert(entity.dim() == 0) - results = pd.DataFrame( - [(v[0], v[1].value) for v in entity.iteritems()]) + results = pd.DataFrame() labels = ['None'] # check for duplicate onset names and append one to several "_" to make diff --git a/urbs/runfunctions.py b/urbs/runfunctions.py index 4b3ecda7..da2e0947 100644 --- a/urbs/runfunctions.py +++ b/urbs/runfunctions.py @@ -8,6 +8,7 @@ from .input import * from .validation import * from .saveload import * +from .data import timeseries_number def prepare_result_directory(result_name): @@ -37,13 +38,15 @@ def setup_solver(optim, logfile='solver.log'): optim.set_options("log={}".format(logfile)) # optim.set_options("tmlim=7200") # seconds # optim.set_options("mipgap=.0005") + elif optim.name == 'cplex': + optim.set_options("log={}".format(logfile)) else: print("Warning from setup_solver: no options set for solver " "'{}'!".format(optim.name)) return optim -def run_scenario(input_file, solver, timesteps, scenario, result_dir, dt, +def run_scenario(input_file, prob, solver, timesteps, scenario, result_dir, dt, objective, plot_tuples=None, plot_sites_name=None, plot_periods=None, report_tuples=None, report_sites_name=None): @@ -51,6 +54,9 @@ def run_scenario(input_file, solver, timesteps, scenario, result_dir, dt, Args: input_file: filename to an Excel spreadsheet for urbs.read_excel + prob: urbs model instance initialized with base scenario if alternative + scenario + solver: name of the solver to be used timesteps: a list of timesteps, e.g. 
range(0,8761) scenario: a scenario function that modifies the input data dict result_dir: directory name for result spreadsheet and plots @@ -67,35 +73,48 @@ def run_scenario(input_file, solver, timesteps, scenario, result_dir, dt, # scenario name, read and modify data for scenario sce = scenario.__name__ - data = read_excel(input_file) - data = scenario(data) - validate_input(data) - # create model - prob = create_model(data, dt, timesteps, objective) + # if alternative scenario: special scenario function treatment is necessary + if str(sce).find("alternative") >= 0: + # Only needed for scenario_new_timeseries, but handed to all functions: + filename = "" + # scenario_new_timeseries needs special treatment: + # Add file extension to scenario name and create path to excel sheet + if str(sce).find("scenario_new_timeseries") >= 0: + sce = sce+str(timeseries_number.pop()) + filename = os.path.join("input", "{}.xlsx").format(sce) + # model instance, undo scenario changes?, path to excel sheet + prob = scenario(prob, False, filename) + instance = prob + else: + # it is a normal scenario: load data and build new model instance + data = read_excel(input_file) + data = scenario(data) + validate_input(data) + instance = create_model(data, dt, timesteps) - # refresh time stamp string and create filename for logfile - now = prob.created + # create filename for logfile log_filename = os.path.join(result_dir, '{}.log').format(sce) # solve model and read results optim = SolverFactory(solver) # cplex, glpk, gurobi, ... optim = setup_solver(optim, logfile=log_filename) - result = optim.solve(prob, tee=True) + result = optim.solve(instance, tee=True) + assert str(result.solver.termination_condition) == 'optimal' # save problem solution (and input data) to HDF5 file - save(prob, os.path.join(result_dir, '{}.h5'.format(sce))) + save(instance, os.path.join(result_dir, '{}.h5'.format(sce))) # write report to spreadsheet report( - prob, + instance, os.path.join(result_dir, '{}.xlsx').format(sce), report_tuples=report_tuples, report_sites_name=report_sites_name) # result plots result_figures( - prob, + instance, os.path.join(result_dir, '{}'.format(sce)), timesteps, plot_title_prefix=sce.replace('_', ' '), @@ -103,4 +122,11 @@ def run_scenario(input_file, solver, timesteps, scenario, result_dir, dt, plot_sites_name=plot_sites_name, periods=plot_periods, figure_size=(24, 9)) - return prob + + if str(sce).find("alternative") >= 0: + # Undo all changes to model instance to retrieve base scenario model + prob = scenario(prob, True, filename) + if str(sce).find("scenario_base") >= 0: + # use base scenario model instance for future alternative scenarios + return instance + return prob \ No newline at end of file diff --git a/urbs/saveload.py b/urbs/saveload.py index 9cebf738..e528e515 100644 --- a/urbs/saveload.py +++ b/urbs/saveload.py @@ -13,7 +13,7 @@ def create_result_cache(prob): result_cache = {} for entity in entities: - result_cache[entity] = get_entity(prob, entity) + result_cache[entity] = get_entity(prob, entity, skip_result_cache=True) return result_cache @@ -31,12 +31,20 @@ def save(prob, filename): warnings.filterwarnings('ignore', category=pd.io.pytables.PerformanceWarning) - if not hasattr(prob, '_result'): - prob._result = create_result_cache(prob) + prob._result = create_result_cache(prob) with pd.HDFStore(filename, mode='w') as store: - for name in prob._data.keys(): - store['data/'+name] = prob._data[name] + # Search all attributes of prob for those containing the input data + for name in 
prob.__dict__: + if str(name).find("_dict") > 0: + name_no_dict = name.split("_dict")[0] # remove _dict + try: + store['data/'+name_no_dict] = (pd.DataFrame(getattr(prob, + name))) + # 1D dictionaries need an index: + except ValueError: + store['data/'+name_no_dict] = (pd.DataFrame(getattr(prob, + name), index=[0])) for name in prob._result.keys(): store['result/'+name] = prob._result[name] diff --git a/urbs/scenarios.py b/urbs/scenarios.py index f0b50e8e..8ddcf25a 100644 --- a/urbs/scenarios.py +++ b/urbs/scenarios.py @@ -1,10 +1,9 @@ -import pandas as pd - -# SCENARIO GENERATORS -# In this script a variety of scenario generator functions are defined to -# facilitate scenario definitions. +from .model import * +from .input import split_columns +from .data import timeseries_number +# SCENARIOS def scenario_base(data): # do nothing return data @@ -52,3 +51,408 @@ def scenario_all_together(data): data = scenario_co2_limit(data) data = scenario_north_process_caps(data) return data + + + +''' +These Scenarios don´t built up a whole new model. Instead they save time by +using an existing base model and by manipulating the dictionaries containing +the data and updating the affected constraints afterwards. +It is important to rebuilt the base scenario once the problem is solved. +This is done by handing reverse=True to the scenario function. + +If you want to create your own scenarios either manipulate the functions below +or do the following steps: +1) Build an Excel file where the data is similar to the one the scenario + funtion should produce +2) Create a model instance of this scenario using prob = urbs.create_model() +3) Write the model to an .lp file using prob.write(filename, + io_options={"symbolic_solver_labels":True}) +4) Write a function with similar structure as the scenario functions below that + changes the data in the dictionaries as you wish +5) Do step 2 (using the standard Excel input file, not the modified version), + call your scenario function to manipulate the model, then do step 3. + This creates an .lp file of your current scenario model. +6) Compare the .lp file of your model to the .lp file you created before + (Notepad++ Plugin 'Compare' is very useful) +7) Spot the constraints that need to be updated, delete these constraints and + recreate them in your scenario function. Sometimes indices have to be + deleted, too. Do this both for the reverse=False part and the reverse=True + part +8) Repeat starting from step 5 until there are no more differences between the + two .lp files +9) Do step 2 and 3 using the standard input file, +10) Do step 2 (using the standard Excel input file), call your scenario + function to manipulate the model twice: One time with reverse=False and + afterwards with reverse=True, then do step 3. This .lp file should match + the .lp file of the base model. Compare them! +11) Congratulations! You created your own scenario function! 
+''' + + +def alternative_scenario_base(prob, reverse, not_used): + # do nothing + return prob + + +def alternative_scenario_stock_prices(prob, reverse, not_used): + # change stock commodity prices + if not reverse: + for x in tuple(prob.commodity_dict["price"].keys()): + if x[2] == "Stock": + prob.commodity_dict["price"][x] *= 1.5 + update_cost(prob) + return prob + if reverse: + for x in tuple(prob.commodity_dict["price"].keys()): + if x[2] == "Stock": + prob.commodity_dict["price"][x] *= 1/1.5 + update_cost(prob) + return prob + + +def alternative_scenario_co2_limit(prob, reverse, not_used): + # change global CO2 limit + if not reverse: + prob.global_prop_dict["value"]["CO2 limit"] *= 0.05 + update_co2_limit(prob) + return prob + if reverse: + prob.global_prop_dict["value"]["CO2 limit"] *= 1/0.05 + update_co2_limit(prob) + return prob + + +def alternative_scenario_co2_tax_mid(prob, reverse, not_used): + # change CO2 price in Mid + if not reverse: + prob.commodity_dict["price"][('Mid', 'CO2', 'Env')] = 50 + update_cost(prob) + return prob + if reverse: + prob.commodity_dict["price"][('Mid', 'CO2', 'Env')] = ( + prob._data["commodity"]["price"][('Mid', 'CO2', 'Env')]) + update_cost(prob) + return prob + + +def alternative_scenario_north_process_caps(prob, reverse, not_used): + # change maximum installable capacity + if not reverse: + prob.process_dict["cap-up"][('North', 'Hydro plant')] *= 0.5 + prob.process_dict["cap-up"][('North', 'Biomass plant')] *= 0.25 + update_process_capacity(prob) + return prob + if reverse: + prob.process_dict["cap-up"][('North', 'Hydro plant')] *= 2 + prob.process_dict["cap-up"][('North', 'Biomass plant')] *= 4 + update_process_capacity(prob) + return prob + + +def alternative_scenario_no_dsm(prob, reverse, not_used): + if not reverse: + del_dsm(prob) + return prob + if reverse: + recreate_dsm(prob) + return prob + + +def alternative_scenario_all_together(prob, reverse, not_used): + # combine all other alternative_scenarios + if not reverse: + prob = alternative_scenario_stock_prices(prob, 0, not_used) + prob = alternative_scenario_co2_limit(prob, 0, not_used) + prob = alternative_scenario_north_process_caps(prob, 0, not_used) + return prob + if reverse: + prob = alternative_scenario_stock_prices(prob, 1, not_used) + prob = alternative_scenario_co2_limit(prob, 1, not_used) + prob = alternative_scenario_north_process_caps(prob, 1, not_used) + return prob + + +# Usage: In main the scenarios are function handles. In order to hand further +# information to the scenario function it is necessary to define an external +# data structure to store the additional information in. This function should +# be called like this: +# alternative_scenario_new_timeseries(timeseries_number, ) +# value specifies the path/file extension in order to locate the excel file. 
+# value will be appended to input\\alternative_scenario_new_timeseries_ +def alternative_scenario_new_timeseries(timeseries_number, number): + timeseries_number.insert(0, number) + return alternative_scenario_new_timeseries_ + + +def alternative_scenario_new_timeseries_(prob, reverse, + filename="input\\scenario_new_timeseries.xlsx"): + if not reverse: + sheetnames = load_timeseries(prob, reverse, filename) + if "Demand" in sheetnames: + update_res_vertex(prob) + if "SupIm" in sheetnames: + update_supim(prob) + if "Buy-Sell-Price" in sheetnames: + update_cost(prob) + if 'TimeVarEff' in sheetnames: + update_TimeVarEff(prob) + if reverse: + sheetnames = load_timeseries(prob, reverse, filename) + if "Demand" in sheetnames: + update_res_vertex(prob) + if "SupIm" in sheetnames: + update_supim(prob) + if "Buy-Sell-Price" in sheetnames: + update_cost(prob) + if 'TimeVarEff' in sheetnames: + update_TimeVarEff(prob) + return prob + + +# Constraint updating funtions: +def del_dsm(prob): + # empty the DSM dataframe completely + prob.dsm_dict = pd.DataFrame().to_dict() + + prob.del_component(prob.dsm_site_tuples) + prob.del_component(prob.dsm_down_tuples) + prob.dsm_site_tuples = pyomo.Set() + prob.dsm_down_tuples = pyomo.Set() + + prob.del_component(prob.dsm_down) + prob.del_component(prob.dsm_up) + prob.dsm_down = pyomo.Var() + prob.dsm_up = pyomo.Var() + + prob.del_component(prob.def_dsm_variables) + prob.del_component(prob.res_dsm_upward) + prob.del_component(prob.res_dsm_downward) + prob.del_component(prob.res_dsm_maximum) + prob.del_component(prob.res_dsm_recovery) + + prob.def_dsm_variables = pyomo.Constraint.Skip + prob.res_dsm_upward = pyomo.Constraint.Skip + prob.res_dsm_downward = pyomo.Constraint.Skip + prob.res_dsm_maximum = pyomo.Constraint.Skip + prob.res_dsm_recovery = pyomo.Constraint.Skip + + # The following lines cause 99% of work for the formation of this scenario + update_res_vertex(prob) + + +def recreate_dsm(prob): + # dsm_variables & vertex rule + prob.dsm_dict = prob._data["dsm"].to_dict() + try: + myset = tuple(prob.dsm_dict["delay"].keys()) + except KeyError: + raise NotImplementedError("Could not rebuild base modell!") + + prob.del_component(prob.dsm_site_tuples_domain) + prob.del_component(prob.dsm_site_tuples) + prob.dsm_site_tuples = pyomo.Set( + within=prob.sit*prob.com, + initialize=myset, + doc='Combinations of possible dsm by site, e.g.(Mid, Elec)') + + prob.del_component(prob.dsm_down_tuples_domain) + prob.del_component(prob.dsm_down_tuples_domain_index_0) + prob.del_component(prob.dsm_down_tuples_domain_index_0_index_0) + prob.del_component(prob.dsm_down_tuples) + prob.dsm_down_tuples = pyomo.Set( + within=prob.tm*prob.tm*prob.sit*prob.com, + initialize=[(t, tt, site, commodity) + for (t, tt, site, commodity) + in dsm_down_time_tuples(prob.timesteps[1:], + prob.dsm_site_tuples, + prob)], + doc='Combinations of possible dsm_down combinations, e.g. 
' + '(5001,5003,Mid,Elec)') + + prob.del_component(prob.dsm_up_index) + prob.del_component(prob.dsm_up) + prob.dsm_up = pyomo.Var( + prob.tm, prob.dsm_site_tuples, + within=pyomo.NonNegativeReals, + doc='DSM upshift') + + prob.del_component(prob.dsm_down) + prob.dsm_down = pyomo.Var( + prob.dsm_down_tuples, + within=pyomo.NonNegativeReals, + doc='DSM downshift') + + prob.del_component(prob.def_dsm_variables_index) + prob.del_component(prob.def_dsm_variables) + del prob.def_dsm_variables + prob.def_dsm_variables = pyomo.Constraint( + prob.tm, prob.dsm_site_tuples, + rule=def_dsm_variables_rule, + doc='DSMup * efficiency factor n == DSMdo (summed)') + + prob.del_component(prob.res_dsm_upward_index) + prob.del_component(prob.res_dsm_upward) + del prob.res_dsm_upward + prob.res_dsm_upward = pyomo.Constraint( + prob.tm, prob.dsm_site_tuples, + rule=res_dsm_upward_rule, + doc='DSMup <= Cup (threshold capacity of DSMup)') + + prob.del_component(prob.res_dsm_downward_index) + prob.del_component(prob.res_dsm_downward) + del prob.res_dsm_downward + prob.res_dsm_downward = pyomo.Constraint( + prob.tm, prob.dsm_site_tuples, + rule=res_dsm_downward_rule, + doc='DSMdo (summed) <= Cdo (threshold capacity of DSMdo)') + + prob.del_component(prob.res_dsm_maximum) + prob.del_component(prob.res_dsm_maximum_index) + del prob.res_dsm_maximum + prob.res_dsm_maximum = pyomo.Constraint( + prob.tm, prob.dsm_site_tuples, + rule=res_dsm_maximum_rule, + doc='DSMup + DSMdo (summed) <= max(Cup,Cdo)') + + prob.del_component(prob.res_dsm_recovery) + prob.del_component(prob.res_dsm_recovery_index) + del prob.res_dsm_recovery + prob.res_dsm_recovery = pyomo.Constraint( + prob.tm, prob.dsm_site_tuples, + rule=res_dsm_recovery_rule, + doc='DSMup(t, t + recovery time R) <= Cup * delay time L') + + # The following lines cause 50% of rebuilding work + update_res_vertex(prob) + + +def change_dsm(prob): + # not implemented yet + return prob + + +def upd_dsm_constraints(prob): + # not implemented yet + return prob + + +def load_timeseries(prob, reverse, filename): + with pd.ExcelFile(filename) as xls: + try: + sheetnames = xls.sheet_names + except KeyError: + print("Could not find file for new timeseries scenario") + if not reverse: + for temp in sheetnames: + temp2 = xls.parse(temp).set_index(["t"]) + temp2.columns = split_columns(temp2.columns, '.') + if str(temp) == "Demand": + prob.demand_dict = temp2.to_dict() + if str(temp) == "SupIm": + prob.supim_dict = temp2.to_dict() + if str(temp) == "Buy-Sell-Price": + prob.buy_sell_price_dict = temp2.to_dict() + if str(temp) == "TimeVarEff": + prob.eff_factor_dict = temp2.to_dict() + if reverse: + for temp in sheetnames: + if str(temp) == "Demand": + prob.demand_dict = prob._data["demand"].to_dict() + if str(temp) == "SupIm": + prob.supim_dict = prob._data["supim"].to_dict() + if str(temp) == "Buy-Sell-Price": + prob.buy_sell_price_dict = (prob._data["buy_sell_price"] + .to_dict()) + if str(temp) == "TimeVarEff": + prob.eff_factor_dict = prob._data["eff_factor"].to_dict() + return sheetnames + + +def update_TimeVarEff(prob): + prob.del_component(prob.pro_timevar_output_tuples) + prob.del_component(prob.pro_timevar_output_tuples_domain) + prob.del_component(prob.pro_timevar_output_tuples_domain_index_0) + prob.pro_timevar_output_tuples = pyomo.Set( + within=prob.sit*prob.pro*prob.com, + initialize=[(site, process, commodity) + for (site, process) in tuple(prob.eff_factor_dict.keys()) + for (pro, commodity) in tuple(prob.r_out_dict.keys()) + if process == pro], + doc='Outputs of 
processes with time dependent efficiency') + + prob.del_component(prob.def_process_output) + prob.del_component(prob.def_process_output_index) + prob.del_component(prob.def_process_output_index_1) + prob.del_component(prob.def_process_output_index_1_index_0) + prob.def_process_output = pyomo.Constraint( + prob.tm, (prob.pro_output_tuples - prob.pro_partial_output_tuples - + prob.pro_timevar_output_tuples), + rule=def_process_output_rule, + doc='process output = process throughput * output ratio') + + prob.del_component(prob.def_process_timevar_output) + prob.del_component(prob.def_process_timevar_output_index) + prob.del_component(prob.def_process_timevar_output_index_1) + prob.del_component(prob.def_process_timevar_output_index_1_index_1) + prob.def_process_timevar_output = ( + pyomo.Constraint(prob.tm, (prob.pro_timevar_output_tuples - + (prob.pro_partial_output_tuples & + prob.pro_timevar_output_tuples)), + rule=def_pro_timevar_output_rule, + doc='e_pro_out = tau_pro * r_out * eff_factor')) + + prob.del_component(prob.def_process_partial_timevar_output) + prob.del_component(prob.def_process_partial_timevar_output_index) + prob.del_component(prob.def_process_partial_timevar_output_index_1) + prob.def_process_partial_timevar_output = pyomo.Constraint( + prob.tm, + prob.pro_partial_output_tuples & prob.pro_timevar_output_tuples, + rule=def_pro_partial_timevar_output_rule, + doc='e_pro_out = tau_pro * r_out * eff_factor') + + +def update_cost(prob): + prob.del_component(prob.def_costs) + prob.def_costs = pyomo.Constraint( + prob.cost_type, + rule=def_costs_rule, + doc='main cost function by cost type') + + +def update_supim(prob): + prob.del_component(prob.def_intermittent_supply) + prob.del_component(prob.def_intermittent_supply_index) + prob.def_intermittent_supply = pyomo.Constraint( + prob.tm, prob.pro_input_tuples, + rule=def_intermittent_supply_rule, + doc='process output = process capacity * supim timeseries') + + +def update_res_vertex(prob): + prob.del_component(prob.res_vertex) + prob.del_component(prob.res_vertex_index) + prob.res_vertex = pyomo.Constraint( + prob.tm, prob.com_tuples, + rule=res_vertex_rule, + doc='storage + transmission + process + source + buy - sell == demand') + + +def update_process_capacity(prob): + prob.del_component(prob.def_process_capacity) + prob.def_process_capacity = pyomo.Constraint( + prob.pro_tuples, + rule=def_process_capacity_rule, + doc='total process capacity = inst-cap + new capacity') + prob.del_component(prob.res_process_capacity) + prob.res_process_capacity = pyomo.Constraint( + prob.pro_tuples, + rule=res_process_capacity_rule, + doc='process.cap-lo <= total process capacity <= process.cap-up') + + +def update_co2_limit(prob): + prob.del_component(prob.res_global_co2_limit) + prob.res_global_co2_limit = pyomo.Constraint( + rule=res_global_co2_limit_rule, + doc='total co2 commodity output <= Global CO2 limit')