diff --git a/redoc/analysis_settings/description.md b/redoc/analysis_settings/description.md index daf8d63..7af7700 100644 --- a/redoc/analysis_settings/description.md +++ b/redoc/analysis_settings/description.md @@ -2,6 +2,6 @@ The run time settings for the analysis are controlled by the `analysis_settings.json` file which is a user supplied file detailing all of the options requested for the run (model to run, exposure set to use, number of samples, occurrence options, outputs required, etc.). In the MDK, the analysis settings file must be specified as part of the command line arguments (or in the oasislmf.json configuration file) and in the platform, it needs to be posted to the endpoint. A full json schema for the available options in the analysis settings file can be found here: -https://github.com/OasisLMF/ODS_Tools/blob/develop/ods_tools/data/analysis_settings_schema.json +https://github.com/OasisLMF/ODS_Tools/blob/main/ods_tools/data/analysis_settings_schema.json This is useful for more technical users who are looking to create their own UI or integrate Oasis with an existing system. The `analysis_settings` schema hierarchy is shown in `json` format in right column of the page. An interactive version of the schema, with descriptions and examples, can be found below: \ No newline at end of file diff --git a/src/home/git-repo.rst b/src/home/git-repo.rst index ff41913..c728ba9 100644 --- a/src/home/git-repo.rst +++ b/src/home/git-repo.rst @@ -91,10 +91,9 @@ OasisAzureDeployment can be used to manage, deploy, run, monitor, and configure * `OasisEvaluation `_ -The Oasis Evalutaion repository can be use to spin up an Oasis enviroment to quickly and efficiently run and test models. -The Oasis Platform release now includes a full API for operating catastrophe models and a general consolidation of the -platform architecture. Windows SQL server is no longer a strict requirement. 
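The schema linked above defines every supported run-time option. As a quick illustration, a minimal `analysis_settings.json` might look like the sketch below. This is an assumption-laden example: the field names (`model_supplier_id`, `model_name_id`, `number_of_samples`, `gul_output`, `gul_summaries`) follow common Oasis examples and should be checked against the published schema rather than treated as exhaustive.

```python
import json

# Minimal sketch of an analysis_settings.json file -- illustrative only;
# validate real files against the ODS_Tools analysis_settings schema.
settings = {
    "model_supplier_id": "OasisLMF",   # which model to run
    "model_name_id": "PiWind",
    "number_of_samples": 10,           # Monte Carlo samples per event
    "gul_threshold": 0,
    "gul_output": True,                # request ground-up loss outputs
    "gul_summaries": [{"id": 1, "eltcalc": True}],
}

with open("analysis_settings.json", "w") as f:
    json.dump(settings, f, indent=2)
```

In the MDK this file is then referenced from the command line (or from `oasislmf.json`); on the platform it is posted to the analysis-settings endpoint.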
The platform can be run via docker containers -on a single machine or, if required, scaled up to run on a cluster. +The OasisEvaluation repository provides a streamlined way to run the Oasis stack in a multi-container environment using +docker-compose. This is intended for locally testing the OasisPlatform 1 with a toy model example OasisPiWind, via the Web UI +OasisUI. ---- diff --git a/src/home/introduction.rst b/src/home/introduction.rst index 5962856..7005055 100644 --- a/src/home/introduction.rst +++ b/src/home/introduction.rst @@ -19,7 +19,7 @@ Overview ---- -.. figure:: ../images/oasis_ecosystem.jpg +.. figure:: ../images/oasis_ecosystem_new.png :alt: Oasis Ecosystem Oasis Ecosystem @@ -59,44 +59,3 @@ It is designed with a model developer or academic user in mind, who are likely t **Oasis Model Library** is a hosted catalogue for Oasis models, hosted in AWS. It allows regression of the models after updates to the Oasis Platform code, and validation of model operation and scalability within a hosted Oasis Platform. - - - -.. - This doesn't really work - gets messy having an index inside of the same index -.. - .. toctree:: - :titlesonly: - :caption: Home: - - introduction.rst - git-repo.rst - FAQs.rst - - .. toctree:: - :titlesonly: - :caption: Use Cases: - - ../use_cases/model-developer - ../use_cases/model-users - ../use_cases/installing-deploying-Oasis - - ..
toctree:: - :titlesonly: - :caption: Sections: - - ../sections/API.rst - ../sections/deployment.rst - ../sections/errors.rst - ../sections/financial-module.rst - ../sections/keys-service.rst - ../sections/ktools-pytools.rst - ../sections/Oasis-evaluation.rst - ../sections/Oasis-model-data-formats.rst - ../sections/Oasis-models.rst - ../sections/Oasis-platform.rst - ../sections/Oasis-UI.rst - ../sections/Oasis-workflow.rst - ../sections/OasisLMF-package.rst - ../sections/OED.rst - ../sections/options.rst - ../ sections/results.rst diff --git a/src/images/correlation1.png b/src/images/correlation1.png index 11ec9ab..9a33ffd 100644 Binary files a/src/images/correlation1.png and b/src/images/correlation1.png differ diff --git a/src/images/correlation2.png b/src/images/correlation2.png index 8659779..f0f949c 100644 Binary files a/src/images/correlation2.png and b/src/images/correlation2.png differ diff --git a/src/images/correlation3.png b/src/images/correlation3.png index fb9676b..85bad81 100644 Binary files a/src/images/correlation3.png and b/src/images/correlation3.png differ diff --git a/src/images/correlation4.png b/src/images/correlation4.png new file mode 100644 index 0000000..726773f Binary files /dev/null and b/src/images/correlation4.png differ diff --git a/src/images/oasis_ecosystem_new.png b/src/images/oasis_ecosystem_new.png new file mode 100644 index 0000000..dcb54f8 Binary files /dev/null and b/src/images/oasis_ecosystem_new.png differ diff --git a/src/images/plat2_arch.png b/src/images/plat2_arch.png new file mode 100644 index 0000000..34229c9 Binary files /dev/null and b/src/images/plat2_arch.png differ diff --git a/src/images/sampling1.png b/src/images/sampling1.png new file mode 100644 index 0000000..6701dec Binary files /dev/null and b/src/images/sampling1.png differ diff --git a/src/images/sampling2.png b/src/images/sampling2.png new file mode 100644 index 0000000..a6d1148 Binary files /dev/null and b/src/images/sampling2.png differ diff --git 
a/src/index.rst b/src/index.rst index 38cd64f..b63725c 100644 --- a/src/index.rst +++ b/src/index.rst @@ -19,7 +19,7 @@ Overview ---- -.. figure:: images/oasis_ecosystem.jpg +.. figure:: images/oasis_ecosystem_new.png :alt: Oasis Ecosystem Oasis Ecosystem @@ -82,6 +82,7 @@ It allows regression of the models after updates to the Oasis Platform code, and :titlesonly: :caption: Sections: + sections/absolute-damage.rst sections/analysis_settings sections/API.rst sections/camel.rst @@ -89,6 +90,7 @@ It allows regression of the models after updates to the Oasis Platform code, and sections/deployment.rst sections/disaggregation.rst sections/financial-module.rst + sections/geocoding.rst sections/keys-service.rst sections/ktools.rst sections/model-data-library.rst @@ -111,12 +113,14 @@ It allows regression of the models after updates to the Oasis Platform code, and sections/platform_1 sections/platform_2 sections/post-loss-amplification.rst + sections/pre-analysis-adjustments.rst sections/pytools.rst sections/releases.rst sections/results.rst sections/SaaS-providers.rst + sections/sampling-methodology.rst sections/versioning.rst .. - sections to be populated: sections/pre-analysis-adjustments.rst, sections/sampling-methodology.rst, sections/errors.rst + sections to be populated: sections/pre-analysis-adjustments.rst, sections/errors.rst, sections/complex-model.rst diff --git a/src/sections/API.rst b/src/sections/API.rst index 0b78ddc..5e5a79d 100644 --- a/src/sections/API.rst +++ b/src/sections/API.rst @@ -17,10 +17,9 @@ Introduction: ---- -Oasis has a full REST API for managing exposure data and operating modelling workflows. API Swagger documentation can be -found `here `_. An evaluation version of the Oasis platform and using can be deployed -using the `Oasis evaluation repository `_. This includes a Jupyter notebook -that illustrates the basic operation of the API, using the Python API client. 
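The API workflow described here (upload exposure, create an analysis, run it) is easiest through the Python API client from the Jupyter notebook, but it is ultimately plain REST calls. The sketch below only builds the URLs involved; the base address and the `portfolios`/`analyses` resource names are assumptions based on the Swagger documentation, not a definitive client.

```python
# Hypothetical sketch of Platform API URL construction; the supported
# Python API client wraps these details for you.
BASE = "http://localhost:8000/v1"

def endpoint(resource, pk=None, action=None):
    """Build a Platform API URL, e.g. /v1/analyses/1/run/."""
    parts = [BASE, resource]
    if pk is not None:
        parts.append(str(pk))
    if action:
        parts.append(action)
    return "/".join(parts) + "/"

portfolio_url = endpoint("portfolios")      # POST exposure here
run_url = endpoint("analyses", 1, "run")    # POST to start the run
```

The Jupyter notebook in the evaluation repository shows the same workflow end to end with authentication handled for you.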
+An evaluation version of the Oasis platform can be deployed using the `Oasis evaluation repository +`_. This includes a Jupyter notebook that illustrates the basic +operation of the API, using the Python API client. The API schemas can be found here: diff --git a/src/sections/ODS-tools.rst b/src/sections/ODS-tools.rst index e4204e4..8dec87f 100644 --- a/src/sections/ODS-tools.rst +++ b/src/sections/ODS-tools.rst @@ -23,8 +23,8 @@ ODS Tools is a Python package designed to manage :doc:`../../sections/ODS` data, :doc:`ODS <../../sections/ODS>` schema. It includes a range of tools for working with Oasis data files, including loading, conversion, and validation. This package is in accordance with :doc:`ODS <../../sections/ODS>`. -As a separate service, the package include functionality to manage :doc:`../../sections/model-settings` and -:doc:`../../sections/analysis-settings` that are used to perform an analysis. +As a separate service, the package includes functionality to manage :doc:`../../sections/model_settings` and +:doc:`../../sections/analysis_settings` that are used to perform an analysis. ODS tools comprises primarily of two parts: @@ -103,7 +103,7 @@ ODS Tools can be installed via pip by running the following command: pip install ods-tools | -Once installed, ODS Tools can be used utilised via the command line interface to quickly convert oed files. +Once installed, ODS Tools can be used via the command line interface to quickly convert OED files. Example : diff --git a/src/sections/Oasis-evaluation.rst b/src/sections/Oasis-evaluation.rst index 321a04e..05388e4 100644 --- a/src/sections/Oasis-evaluation.rst +++ b/src/sections/Oasis-evaluation.rst @@ -1,18 +1,63 @@ Oasis Evaluation ================ -The Oasis Evalutaion repository can be use to spin up an Oasis enviroment to quickly and efficiently run and test models.
-The Oasis Platform release now includes a full API for operating catastrophe models and a general consolidation of the -platform architecture. Windows SQL server is no longer a strict requirement. The platform can be run via docker containers -on a single machine or, if required, scaled up to run on a cluster. +The OasisEvaluation repository provides a streamlined way to run the Oasis stack in a multi-container environment using docker-compose. +This is intended for locally testing the `OasisPlatform 1 `_ with a toy model example `OasisPiWind `_, via the Web UI `OasisUI `_. + + + +.. _installing_oasis: + +Installing Oasis +**************** + +1. Install the prerequisites: ``docker``, ``docker-compose``, and ``git`` +2. (optional) Edit the software versions at the top of the ``install.sh`` installation script + +| +.. code-block:: bash + + export VERS_API=1.28.0 + export VERS_WORKER=1.28.0 + export VERS_UI=1.11.6 + export VERS_PIWIND='stable/1.28.x' +| + +These control the Oasis versions installed: + - ``VERS_API``, OasisPlatform server version + - ``VERS_WORKER``, OasisPlatform worker version + - ``VERS_UI``, OasisUI container version + - ``VERS_PIWIND``, the PiWind branch to run. + +3. Run the installation script + +| +.. code-block:: bash + + ./install.sh +| + + + + +---- + +Oasis Installation Guide: Windows 10 OS +####################################### + +.. youtube:: SxRt5E-Y5Sw + +| +Oasis Installation Guide: Linux based OS +######################################## + +.. youtube:: OFLTpGGEM10 + -Docker support is the main requirement for running the platform. A Linux based installation is the main focus of this -example deployment. Running the install script from this repository automates install process of the OasisPlatform API v1, -User Interface and example PiWind model. GitHub repository: ------------------ ---- -`Oasis Platform Evaluation `_. \ No newline at end of file +`Oasis Platform Evaluation `_.
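Before running `install.sh`, it can be useful to confirm which versions are pinned at the top of the script. The sketch below parses those `export VERS_*` lines with a regular expression; the variable names match the snippet above, but the parsing helper itself is illustrative, not part of the repository.

```python
import re

# Sketch: read the version pins from the top of install.sh so they can be
# checked before running the installer (VERS_* names as in the repository).
def parse_versions(script_text):
    pattern = re.compile(r"^export\s+(VERS_\w+)=['\"]?([^'\"\n]+)['\"]?", re.M)
    return dict(pattern.findall(script_text))

sample = """export VERS_API=1.28.0
export VERS_WORKER=1.28.0
export VERS_UI=1.11.6
export VERS_PIWIND='stable/1.28.x'
"""

print(parse_versions(sample)["VERS_PIWIND"])  # stable/1.28.x
```

Keeping `VERS_API` and `VERS_WORKER` on the same release line is generally advisable, since the server and worker images are released together.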
diff --git a/src/sections/Oasis-models.rst b/src/sections/Oasis-models.rst index 3e08e5c..6715d7f 100644 --- a/src/sections/Oasis-models.rst +++ b/src/sections/Oasis-models.rst @@ -28,6 +28,8 @@ This is a single event model which allows users to apply deterministic losses to in the OED location file. It is similar to the ``exposure`` feature in the oasislmf package, but can be deployed as a model in it's own right to model deterministic losses which can then be passed through the Oasis financial module. +This model is available to use `here `_. + ---- Paris Windstorm @@ -35,6 +37,8 @@ Paris Windstorm This is very small, single peril model used for demonstration of how to build a simple model in Oasis. +This model is available to use `here `_. + ---- PiWind @@ -43,16 +47,33 @@ PiWind This is the original test model in Oasis and is an example of a multi-peril model implementation representing ficticious events with wind and flood affecting the Town of Melton Mowbray in England. +This model is available to use `here `_. + More information on this model can be found here: :ref:`piwind_models` ---- +PiWind Absolute Damage +********************** + +This model expands upon the PiWind model with the absolute damage option. This option allows model providers to include +absolute damage amounts rather than damage factors in the damage bin dictionary. If the damage factors are less than or +equal to 1 in the damage bin dictionary, the factor will be applied as normal during the loss calculation, by applying the +sampled damage factor to the TIV to give a simulated loss; but with absolute damage factors, where the factor is greater +than 1, the TIV is not used in the calculation at all, but rather the absolute damage is applied as the loss. + +This model is available to use from `here `_.
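The absolute-damage rule described in the PiWind Absolute Damage section can be sketched as a single branch per sampled factor (function name and shape are illustrative, not the ktools implementation):

```python
# Sketch of the absolute-damage rule: factors <= 1 are conventional damage
# ratios applied to the TIV; factors > 1 are absolute loss amounts.
def sampled_loss(damage_factor, tiv):
    if damage_factor <= 1:
        return damage_factor * tiv   # relative damage ratio
    return damage_factor             # absolute damage: TIV is ignored

print(sampled_loss(0.25, 100_000))     # 25000.0
print(sampled_loss(150_000, 100_000))  # 150000
```

Note that the two branches are distinguished purely by the value in the damage bin dictionary, so relative and absolute bins can coexist in one model.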
+ +---- + PiWind Complex Model ******************** This is a version of the PiWind model which uses the complex model integreation approach to generate ground up losses in a custoim module, which then sits in the workflow and replaces the standard ground up loss calculation from Oasis. +This model is available to use from `here `_. + ---- PiWind Postcode @@ -61,6 +82,32 @@ PiWind Postcode This is a variant of the original PiWind model designed for running exposures whose locations are known at postcode level rather than by latitude and longitude. This model demonstrates the disaggregation features of Oasis. +This model is available to use `here `_. + +---- + +PiWind Post Loss Amplification +****************************** + +This is a version of the PiWind model with post loss amplification factors applied. Major catastrophic events can +give rise to inflated and/or deflated costs depending on that specific situation. To account for this, the ground up +losses produced by the GUL calculation component are multiplied by post loss amplification factors, by the component +plapy. + +This model is available to use `here `_. + +---- + +PiWind Pre Analysis +******************* + +This model builds upon the original PiWind model with a pre-analysis adjustment hook. This step allows the user to modify input +files before they are processed in the analysis. This functionality is utilised by this model by implementing an external geocoder: +this checks the location data before it is analysed for any addresses that are missing OED location data. If an address is found to +be incomplete, it is geocoded to fill these gaps. + +This model is available to use `here `_. + ---- PiWind Single Peril @@ -69,14 +116,12 @@ PiWind Single Peril This is a simplified variant of the original PiWind model which has single peril (wind only) and would be a good basis for a single peril model in Oasis. -| - -.. note:: - More information about these models can be found `here `_.
- +This model is available to use `here `_. ---- +.. note:: + More information about these models can be found `here `_. | .. _piwind_models: PiWind - toy model ---- Oasis has developed a toy model, PiWind, available `here `_. PiWind is a wind storm -model for a small area of the UK.The data is mocked up to illustrate the Oasis data formats and functionality, and is not -meant to be a usable risk model. +model for a small area of the UK. The data is mocked up to illustrate the Oasis data formats and functionality, and is not +meant to be a usable risk model. The PiWind toy model is available to use from `here `_. There are three main components to a catastrophe risk model deployed in Oasis. A fuller discussion of the components of a hazard model can be found in :doc:`modelling-methodology`. diff --git a/src/sections/Oasis-platform.rst b/src/sections/Oasis-platform.rst index 2720be4..109bc20 100644 --- a/src/sections/Oasis-platform.rst +++ b/src/sections/Oasis-platform.rst @@ -7,8 +7,8 @@ On this page: * :ref:`introduction_platform` * :ref:`installing_oasis` * :ref:`platform_architecture` -* :ref:`hard_scaling` -* :ref:`weak_scaling` +* :ref:`single_server` +* :ref:`horizontal_scaling` * :ref:`development_approach` * :ref:`technology_stack` @@ -38,28 +38,6 @@ platform provides: * Toolkit for developing, testing and deploying catastrophe models (Oasis Model Development Toolkit) - -| -.. _installing_oasis: -Installing Oasis -**************** ----- -Oasis Installation Guide: Windows 10 OS -####################################### -.. youtube:: SxRt5E-Y5Sw -| -Oasis Installation Guide: Linux based OS -######################################## -.. youtube:: OFLTpGGEM10 - - | .. _platform_architecture: @@ -91,10 +69,10 @@ A schematic of the Oasis Platform architecture is shown in the diagram below, an | -..
_single_server: -Hard Scaling -************ +Single server deployment (Platform 1) +************************************* ---- @@ -126,27 +104,31 @@ To overcome these limitations we are putting in place new approach. - gul-fm load balancer (next release) that will split events out of the gul further and increase fmcalc parallelization. -- Oasis at scale (in test) will provide to the Oasis platform a way to split events - on a cluster using celery with the ability to auto-scale depending on the workload size. - (see detail at: https://github.com/OasisLMF/OasisAtScaleEvaluation) +| +.. _horizontal_scaling: +Kubernetes deployment (Platform 2) +********************************** +---- +The second iteration of the OasisPlatform provides helm charts to deploy oasis to a Kubernetes cluster. +For details on deploying to an Azure environment see: https://github.com/OasisLMF/OasisAzureDeployment +This allows for horizontal scaling across multiple nodes in a cluster by breaking a catastrophe analysis into several sub-tasks, where each ``chunk`` is a batch of +events split by ``eve``. These all run in parallel across all running nodes in a worker pool, and are combined in a final step to collate output results. -| -.. _weak_scaling: +.. figure:: /images/plat2_arch.png + :alt: Platform 2 architecture -Weak Scaling -************ - ----- +The Kubernetes installation adds three new oasis components. -All of the components are packaged as Docker images. -Docker-compose can be used to deploy the system on one or more physical servers. -You can therefore increase the throughput of analysis by -provisioning more calculation servers and deploying more Analysis Worker images. +..
csv-table:: + :header: "Component", "Description", "Technology" + "oasis-task-controller", "handles analysis chunking based on chunking options set for a model.", "Custom Python code" + "oasis-websocket", "publishes messages to an auto-scaling component that controls the size of worker pool nodes", "Django Channels, Redis" + "oasis-worker-controller", "scales up and down the number of running VMs in a worker pool", "Custom Python code" | @@ -193,14 +175,13 @@ Technology stack **Using** ======================== =============================================================================== -Python 3.6 General system programming and tools. +Python 3 General system programming and tools. C++ 11 Simulation and analytics kernel. Docker Deployment of Oasis Platform and UI. -Ubuntu 18.04 LTS Development servers and base Docker image. +Ubuntu LTS Base Docker images. AWS Cloud infrastructure for Oasis Model Library and Oasis Platform deployment. -Jenkins 2 & BlueOcean Continuous integration. +Github Actions Continuous integration. Django Web service framework. -Apache Web server. Terraform Infrastructure automation. Sphinx Code documentation generation. RShiny Application framework build on R. diff --git a/src/sections/OasisLMF-package.rst b/src/sections/OasisLMF-package.rst index b9b6a8d..6c90cf0 100644 --- a/src/sections/OasisLMF-package.rst +++ b/src/sections/OasisLMF-package.rst @@ -5,11 +5,16 @@ On this page: ------------- * :ref:`intro_package` -* :ref:`getting_started` -* :ref:`MDK` - +* :ref:`features_package` +* :ref:`requirements_package` +* :ref:`installation_package` +* :ref:`bash_enable_package` +* :ref:`dependencies_package` +* :ref:`testing_package` +* :ref:`links_package` | + .. _intro_package: Introduction @@ -22,414 +27,375 @@ toolkit for developing, testing and running Oasis models end-to-end locally, or ground-up losses (GUL), direct/insured losses (IL) and reinsurance losses (RIL). It can also generate deterministic losses at all these levels. - - | -.. 
_getting_started: -Getting started: -**************** +.. _features_package: + +Features +******** ---- -This documentation on the `OasisLMF `_ package go through setting up the environment to run a -basic pipeline using this package. This will be achieved with the following steps: +For running models locally the CLI provides a ``model`` subcommand with the following options: -* Install `OasisLMF `_ -* Generate fake test data -* Read events and stream them -* Construct a model -* Construct a python model -* Carrying out these steps will enable you to understand the basics of how the model pipeline works. It will also enable you - to test our installation and contribute to the project. +* ``model generate-exposure-pre-analysis``: generate new Exposure input using user custom code (ex: geo-coding, exposure + enhancement, or disaggregation...). +* ``model generate-keys``: generates Oasis keys files from model lookups; these are essentially line items of (location ID, + peril ID, coverage type ID, area peril ID, vulnerability ID) where peril ID and coverage type ID span the full set of + perils and coverage types that the model supports; if the lookup is for a complex/custom model the keys file will have + the same format except that area peril ID and vulnerability ID are replaced by a model data JSON string. +* ``model generate-oasis-files``: generates the Oasis input CSV files for losses (GUL, GUL + IL, or GUL + IL + RIL); it + requires the provision of source exposure and optionally source accounts and reinsurance info and scope files (in OED + format), as well as assets for instantiating model lookups and generating keys files. +* ``model generate-losses``: generates losses (GUL, or GUL + IL, or GUL + IL + RIL) from a set of pre-existing Oasis files. +* ``model run``: runs the model from start to finish by generating losses (GUL, or GUL + IL, or GUL + IL + RIL) from the + source exposure, and optionally source accounts and reinsurance info. 
and scope files (in OED or RMS format), as well as + assets related to lookup instantiation and keys file generation. | -Installing OasisLMF -################### +The optional ``--summarise-exposure`` flag can be issued with ``model generate-oasis-files`` and ``model run`` to generate +a summary of Total Insured Values (TIVs) grouped by coverage type and peril. This produces the +``exposure_summary_report.json`` file. ----- +For remote model execution the ``api`` subcommand provides the following main subcommand: -Installing OasisLMF is supported via `pip `_. This can be done by carrying out the -command below: +* ``api run``: runs the model remotely (same as ``model run``) but via the Oasis API -.. code-block:: python +For generating deterministic losses an ``exposure run`` subcommand is available: - pip install oasislmf +* ``exposure run``: generates deterministic losses (GUL, or GUL + IL, or GUL + IL + RIL) | -To install the `OasisLMF `_ package in relation to a specific branch, carry out the command -below: +The reusable libraries are organised into several sub-packages, the most relevant of which from a model developer or user's +perspective are: -.. code-block:: python - - pip install git+https://github.com/OasisLMF/OasisLMF@some-branch +* ``api_client`` +* ``model_preparation`` +* ``model_execution`` +* ``utils`` | -With the command above, the branch ``some-branch`` can be substituted with whatever branch you want to install using pip. -Once the package is installed, you can move onto the next section: generating fake test data. +.. _requirements_package: -| +Minimum Python Requirements +*************************** -Generate fake test data -####################### +----- ----- +Starting from 1st January 2019, Pandas will no longer be supporting Python 2. As Pandas is a key dependency of the MDK we +are **dropping Python 2 (2.7) support** as of this release (1.3.4). The last version which still supports Python 2.7 is +version ``1.3.3`` (published 12/03/2019). 
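The interpreter-version floor described here can be enforced with a small guard before the package does any work. This is an illustrative sketch, not code from the oasislmf package; the `(3, 8)` floor reflects the minimum stated in this section.

```python
import sys

# Sketch: refuse to run on an unsupported interpreter (minimum Python 3.8,
# as noted in the requirements above).
MIN_PYTHON = (3, 8)

def check_python(version_info=sys.version_info):
    if tuple(version_info[:2]) < MIN_PYTHON:
        raise RuntimeError(
            f"oasislmf requires Python >= {MIN_PYTHON[0]}.{MIN_PYTHON[1]}"
        )
    return True
```

A guard like this fails fast with a clear message, rather than letting an old interpreter die later on unsupported syntax in a dependency.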
-Generating fake test data is necessary in order for the model to take in a range of event IDs, and pass this through to a model -that is constructed also using the data generated by the fake test data that has been generated. Right now, the aim is to -generate data that will not break the pipeline. This can be done by creating a -`JSON `_ configuration file with the content below: +Also for this release (and all future releases) a **minimum of Python 3.8 is required**. -.. code-block:: JSON +| - { - "num_vulnerabilities": 50, - "num_intensity_bins": 50, - "num_damage_bins": 50, - "vulnerability_sparseness": 0.5, - "num_events": 10000, - "num_areaperils": 100, - "areaperils_per_event": 100, - "intensity_sparseness": 0.5, - "num_periods": 1000, - "num_locations": 1000, - "coverages_per_location": 3, - "num_layers": 1 - } +.. _installation_package: -| +Installation +************ -This will create a range of binary files that we can ingest for our model. Once this -`JSON `_ file is saved, and you have access to this file, data can be generated with -the command below: +---- + +The latest released version of the package, or a specific package version, can be installed using ``pip``: -.. code-block:: python +.. code-block:: - oasislmf test model generate-oasis-files -C oasislmf_dummyModel.json + pip install oasislmf[==] | -The ``-C`` argument points to the `JSON `_ configuration file. 
Once this runs, -there will be the following file: +Alternatively you can install the latest development version using: -* **events.bin:** contains the event IDs that the model is going to compute -* **footprint.bin:** contains data about the probability of disasters occurring within an intensity bin in a geographical - location -* **footprint.idx:** contains the offset and location in the ``footprint.bin`` file for the model -* **vulnerability.bin:** contains the data about the probability of the disasters causing damage within a damage bin in a - geographical location -* **occurrence.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **damage_bin_dict.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **coverages.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **fm_policytc.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **fm_programme.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **fm_xref.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **fm_profile.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **fmsummaryxref.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **gulsummaryxref.bin:** [PLEASE ADD AN DESCRIPTION HERE] -* **items.bin:** [PLEASE ADD AN DESCRIPTION HERE] +.. code-block:: -Now have all the data that is needed to run the model. Now this data can be examined for reading events and streaming them. + pip install git+{https,ssh}://git@github.com/OasisLMF/OasisLMF | -Read events and stream them -########################### +You can also install from a specific branch ```` using: ----- +.. code-block:: -Before reading and streaming the event IDs, an input directory has to be created and the events need ot be copied into this with -the command below: + pip install [-v] git+{https,ssh}://git@github.com/OasisLMF/OasisLMF.git@#egg=oasislmf -.. code-block:: python +| - mkdir input && cp events.bin ./input/events.bin +.. _bash_enable_package: -| +Enable Bash completion +********************** -This gives the event IDs in our input directory. These can be read and streamed with the command below: +---- -.. 
code-block:: python +Bash completion is a feature whereby bash helps users type their commands by presenting possible options when users +press the tab key while typing a command. - eve 1 1 +Once oasislmf is installed you'll need to activate the feature by sourcing a bash file (only needs to be run once). | -[ENTER DESCRIPTION ABOUT THE 1 1] - -Running this gives a byte stream that cannot be read by the human eyes as it looks like the printout snippet below: +Local +##### -.. code-block:: python .. code-block:: - �!�"�#�$�%�&�'�(�)�*�+�,�-�.�/�0�1�2 oasislmf admin enable-bash-complete | | -The ``getmodel`` that is next in the pipeline will process this stream. However, if you want to process this yourself in -Python, this can be done using the `struct `_ module with the code below: +Global +###### -.. code-block:: python .. code-block:: - import sys - import struct echo 'complete -C completer_oasislmf oasislmf' | sudo tee /usr/share/bash-completion/completions/oasislmf - data = sys.stdin.buffer.read() - eve_raw_data = [data[i:i + 4] for i in range(0, len(data), 4)] - eve_buffer = [struct.unpack("i", i)[0] for i in eve_raw_data] | | +.. _dependencies_package: -The event IDs are integers. Because integers take up 4 bytes each, the data needs to be looped through, breaking it into chunks -or 4 bytes and using the `struct `_ module to unpack this giving us a list of -integers that are event IDs. This is used to construct a model. +Dependencies +************ -| ---- -Construct a model -################# +System +###### ---- -Before using a model, it has to be ensured that the correct data is in the ``static`` and ``input`` directories with the -command below: +The package provides a built-in lookup framework (``oasislmf.model_preparation.lookup.OasisLookup``) which uses the Rtree +Python package, which in turn requires the ``libspatialindex`` spatial indexing C library.
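The spatial indexing mentioned here supports the keys lookup: mapping each location's coordinates to an area peril ID. The real built-in framework uses Rtree/libspatialindex for arbitrary geometries; the toy sketch below shows only the idea on a uniform lat/lon grid, with all parameters (origin, cell size, grid width) invented for illustration.

```python
# Illustrative only: a toy area-peril lookup on a uniform lat/lon grid.
# The built-in OasisLookup uses Rtree/libspatialindex for real geometries.
def grid_area_peril(lat, lon, lat0=52.5, lon0=-1.0, cell=0.1, ncols=10):
    """Map a point to a 1-based area peril ID on a fixed grid."""
    row = int((lat - lat0) / cell)
    col = int((lon - lon0) / cell)
    if row < 0 or col < 0 or col >= ncols:
        return None  # outside the model domain
    return row * ncols + col + 1

print(grid_area_peril(52.71, -0.85))  # row 2, col 1 -> ID 22
```

A keys file row then pairs this area peril ID with a vulnerability ID per peril and coverage type, which is exactly what `model generate-keys` emits.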
- mkdir static && cp footprint.bin ./static/footprint.bin && cp items.bin ./input/items.bin && cp vulnerability.bin - ./static/vulnerability.bin && cp damage_bin_dict.bin ./static/damage_bin_dict.bin && cp footprint.idx - ./static/footprint.idx +https://libspatialindex.github.io/index.html | -Now that the data is in the correct directories, the ``getmodel`` command can be ran and the output is dumped into a ``csv`` file -with the command below: +Linux users can install the development version of ``libspatialindex`` from the command line using ``apt``. -.. code-block:: python +.. code-block:: - eve 1 1 | getmodel | cdftocsv > dump.csv + [sudo] apt install -y libspatialindex-dev | -This streams the event IDs into the ``getmodel``, the model is then passed into the ``cdftocsv`` and the output of this is -dumped into a ``csv`` file called ``dump.csv``. The outcome in the ``dump.csv`` will look similar to the outcome below: +and OS X users can do the same via ``brew``. -.. csv-table:: - :header: "event_id", "areaperil_id", "vulnerability_id", "bin_index", "prob_to", "bin_mean" +.. code-block:: - "1", "7", "3", "1", "0.104854", "0.00000" - "1", "7", "3", "2", "0.288763", "0.0625 " - "1", "7", "3", "3", "0.480476", "0.187500" - "1", "7", "3", "4", "0.505688", "0.312500" - "..", "..", "..", "..", "..", ".." - "1", "7", "3", "10", "1", "1" - "1", "7", "9", "1", "0.194455", "0.00000" -| + brew install spatialindex +| -Here the ``prob_to`` is the probability of an event happening multiplied by the probability of damage happening. The -probability of ``prob_to`` for all ``bin_indexs`` for a specific ``vulnerability_id``, ``areaperil_id``, and ``event_id``. +The PiWind demonstration model uses the built-in lookup framework, therefore running PiWind or any model which uses the +built-in lookup, requires that you install ``libspatialindex``. 
| -Construct a Python model -######################## +**GNU/Linux** ----- +For GNU/Linux the following is a specific list of required system libraries + +* **Debian**: g++ compiler build-essential, libtool, zlib1g-dev autoconf on debian distros -When running a Python model, the type of file that being ingested has to be defined. This is because there are only binary -files present and the Python model ingests ``csv`` files as default. The Python model can be ran with the command below: +.. code-block:: -.. code-block:: python + sudo apt install g++ build-essential libtool zlib1g-dev autoconf - eve 1 1 | getpymodel -f bin | cdftocsv > dump_two.csv +* **Red Hat**: 'Development Tools' and zlib-devel | -This achieves the same as the previous section. However, it runs in the Python model so at this stage it will be slower. The data -also has to be dumped in the file ``dump_two.csv``. +Python +###### -You have now ran a basic model with fake data. With this knowledge you can now move onto a toy example where the model is run end -to end. This has not covered everything that goes on in the end to end model however. The toy model goes into more detail. +Package Python dependencies are controlled by ``pip-tools``. To install the development dependencies first, install +``pip-tools`` using: -| +.. code-block:: -Running an end to end toy model -############################### + pip install pip-tools ----- +and run: -The toy model is the `Paris windstorm model `_. First, this repo -needs to be cloned; check you have `OasisLMF `_ pip package installed to run it. -Once this is done, the model can be ran with the command below: +.. code-block:: -.. code-block:: python + pip-sync - oasislmf model run --config oasislmf_mdk.json +| + +To add new dependencies to the development requirements add the package name to ``requirements.in`` or to add a new +dependency to the installed package add the package name to ``requirements-package.in``. 
Version specifiers can be supplied +to the packages but these should be kept as loose as possible so that all packages can be easily updated and there will be +fewer conflicts when installing. | -Here, the model is running using the config file that is already defined in the repo. This will result in a lot of -printout where the model is being created and then ran. The result can be found in the ``runs`` directory. Here there is a -losses directory with a random number which denotes the model run. If you run multiple models you will see multiple losses -directories with multiple unique IDs. The bash script can be inpsected with the command below: +After adding packages to either ``*.in`` file: + +.. code-block:: -.. code-block:: python + pip-compile && pip-sync - ParisWindstormModel/runs/losses-XXXXXXXXXXXXXX/run_ktools.sh +This should be run to ensure the development dependencies are kept up to date. | -This bash script is essentially the entire process of constructing the model and running it. There is a lot of moving parts -here that have not been covered yet, however, if you scroll down you will find this seen below: +ods_tools +######### -.. code-block:: python +OasisLMF uses the ods_tools package to read exposure files and settings files. The ods_tools version compatible with each OasisLMF version +is managed in the requirements files. Below is a summary: - ( eve 1 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 2 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 3 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 4 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 5 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 6 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 7 8 | getmodel | gulcalc -S10 -L0 -a0 -i ... - ( eve 8 8 | getmodel | gulcalc -S10 -L0 -a0 -i ...
+* OasisLMF 1.23.x or before => no ods_tools +* OasisLMF 1.26.x => use ods_tools 2.3.2 +* OasisLMF 1.27.0 => use ods_tools 3.0.0 or later +* OasisLMF 1.27.1 => use ods_tools 3.0.0 or later +* OasisLMF 1.27.2 => use ods_tools 3.0.4 or later | -This shows how the events have been split into eight different streams and been fed them into our getmodel and then fed the -results of the getmodel to the rest of the process. - - +pandas +###### +Pandas has released major version 2, which breaks some compatibility with version 1. Therefore, for all +versions of OasisLMF ``<= 1.27.2``, the latest supported version of pandas is ``1.5.3``. Support for pandas 2 starts from +version ``1.27.3``. | -.. _MDK: -Model Development Kit (MDK) -*************************** +.. _testing_package: + +Testing +******* ---- -The oasislmf Python package comes with a command line interface for creating, testing and managing models. -The tool is split into several namespaces that group similar commands. -For a full list of namespaces use ``oasislmf --help``, and ``oasislmf --help`` for a full list of commands -available in each namespace. +To test the code style run: -| +.. code-block:: -config -###### + flake8 -.. autocli:: oasislmf.cli.config.ConfigCmd - :noindex: | -model -##### +To test against all supported python versions run: +.. code-block:: -``oasislmf model generate-exposure-pre-analysis`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + tox -.. autocli:: oasislmf.cli.model.GenerateExposurePreAnalysisCmd - :noindex: | +To test against your currently installed version of python run: -``oasislmf model generate-keys`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. autocli:: oasislmf.cli.model.GenerateKeysCmd - :noindex: -| +.. code-block:: -``oasislmf model generate-losses`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + py.test -..
autocli:: oasislmf.cli.model.GenerateLossesCmd - :noindex: | -``oasislmf model generate-oasis-files`` -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +To run the full test suite run: -.. autocli:: oasislmf.cli.model.GenerateOasisFilesCmd - :noindex: -| +.. code-block:: -``oasislmf model run`` -^^^^^^^^^^^^^^^^^^^^^^ + ./runtests.sh -.. autocli:: oasislmf.cli.model.RunCmd - :noindex: | -exposure -######## +Publishing +********** + +---- -``oasislmf exposure run`` -^^^^^^^^^^^^^^^^^^^^^^^^^ +Before publishing the latest version of the package make sure you increment the ``__version__`` value in +``oasislmf/__init__.py``, and commit the change. You'll also need to install the ``twine`` Python package which +``setuptools`` uses for publishing packages on PyPI. If publishing wheels then you'll also need to install the ``wheel`` +Python package. -.. autocli:: oasislmf.cli.model.RunCmd - :noindex: | -API client -########## +Using the ``publish`` subcommand in ``setup.py`` +################################################ -``oasislmf api run`` -^^^^^^^^^^^^^^^^^^^^^^^^^ +The distribution format can be either a source distribution or a platform-specific wheel. To publish the source +distribution package run: -.. autocli:: oasislmf.cli.api.RunApiCmd - :noindex: -| +.. code-block:: + python setup.py publish --sdist -version -####### +Or to publish the platform specific wheel run: + +.. code-block:: + + python setup.py publish --wheel -.. autocli:: oasislmf.cli.version.VersionCmd - :noindex: | +Creating a bdist for another platform +##################################### +To create a distribution for a non-host platform use the ``--plat-name`` flag: -Run a model using the Oasis MDK -############################### +.. code-block:: ----- + python setup.py bdist_wheel --plat-name Linux_x86_64 + +or -The Model Development Kit (MDK) is the best way to get started using the Oasis platform. -The MDK is a command line tookit providing command line access to Oasis' modelling functionality.
-It is installed as a Python package, and available from PYPI: `OasisLMF PYPI module `_. +.. code-block:: -The OasisLMF package has the following dependencies: + python setup.py bdist_wheel --plat-name Darwin_x86_64 | -* Debian +Manually publishing, with a GPG signature +######################################### -.. code-block:: Debian +The first step is to create the distribution package with the desired format: - g++, build-essential, libtool, zlib1g-dev, autoconf, unixobdbc-dev +For the source distribution run: -* RHEL +.. code-block:: -.. code-block:: RHEL + python setup.py sdist - Development Tools, zlib-devel | -To install the OasisLMF package run: +Which will create a ``.tar.gz`` file in the ``dist`` subfolder, or for the platform specific wheel run: -.. code-block:: python +.. code-block:: + + python setup.py bdist_wheel - pip install oasislmf | -.. warning:: Windows is not directly supported for running the MDK. - You can run the Oasis MDK on Linux or MacOS. - You can only run on Windows using a docker container or Linux Subsystem (WSL). +Which will create a ``.whl`` file in the ``dist`` subfolder. To attach a GPG signature using your default private key you can +then run: + +.. code-block:: + + gpg --detach-sign -a dist/.{tar.gz,whl} + | +This will create an ``.asc`` signature file named ``.{tar.gz,whl}.asc`` in ``dist``. You can then publish +the package with the signature using: + +.. code-block:: + + twine upload dist/.{tar.gz,whl} dist/.{tar.gz,whl}.asc + +| + +..
_links_package: + +Links for further information +***************************** + +* :doc:`../../sections/releases` +* :doc:`../../sections/model-development-kit` +* `OasisLMF Github repository `_ \ No newline at end of file diff --git a/src/sections/absolute-damage.rst b/src/sections/absolute-damage.rst new file mode 100644 index 0000000..d40d600 --- /dev/null +++ b/src/sections/absolute-damage.rst @@ -0,0 +1,24 @@ +Absolute Damage +=============== + +Introduction +------------ + +The absolute damage option allows model providers to include absolute damage amounts rather than damage factors in the +damage bin dictionary. If the damage factors are less than or equal to 1 in the damage bin dictionary, the factor will +be applied as normal during the loss calculation, by applying the sampled damage factor to the TIV to give a simulated +loss; but with absolute damage factors, where the factor is greater than 1, the TIV is not used in the calculation at +all, but rather the absolute damage is applied as the loss. + +| + +**Example** + + **Example 1:** if the sampled damage factor is 0.6 and the TIV is 100,000, the sampled loss will be 60,000 + + **Example 2:** if the sampled damage factor is 500 and the TIV is 100,000, the sampled loss will be 500 + +| + +An example toy model with the absolute damage factor option is available to use from `here `_. \ No newline at end of file diff --git a/src/sections/complex-model.rst b/src/sections/complex-model.rst new file mode 100644 index 0000000..fe071c8 --- /dev/null +++ b/src/sections/complex-model.rst @@ -0,0 +1,7 @@ +Complex Model +============= + +Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec +lorem neque, interdum in ipsum nec, finibus dictum velit. Ut eu +efficitur arcu, id aliquam erat. In sit amet diam gravida, imperdiet +tellus eu, gravida nisl.
\ No newline at end of file diff --git a/src/sections/correlation.rst b/src/sections/correlation.rst index f91e778..4076b25 100644 --- a/src/sections/correlation.rst +++ b/src/sections/correlation.rst @@ -7,10 +7,8 @@ On this page * :ref:`intro_correlation` * :ref:`sources_of_correlation` * :ref:`features_by_version` -* :ref:`features_1.15.x` -* :ref:`features_1.15.5` -* :ref:`features_1.27.x` -* :ref:`features_1.27.2` +* :ref:`available_1.15` +* :ref:`available_1.27` | @@ -24,7 +22,7 @@ Introduction This section covers the options in Oasis for modelling correlation in secondary uncertainty, or correlation in the modelled severity of loss given an event. -Correlation is modelled at the most detailed level in Oasis for all models. The correlated ground up losses are aggregated as they are passed through the financial terms so that all of the downstream financial perspectives capture this correlation. +Correlation can be modelled at a very detailed level in Oasis. A rule may be specified for correlating loss between the individual coverages of each site for each hazard that impacts them within the context of a single event. The correlated ground up losses are aggregated as they are passed through the financial terms so that all of the downstream financial perspectives capture this correlation. The methods of correlating losses can vary by model depending on which of the features are used. Users can also control correlation settings for their portfolio. @@ -37,11 +35,15 @@ Sources of correlation ---- -There is correlation in the hazard intensity that multiple exposures will experience. The closer they are to each other, the more likely it is that they will experience similar hazard intensities. The relationship between the distance between exposures and the level of hazard correlation will depend on the peril being modelled. 
Catastrophe modellers define the geographical resolution of area in their footprint carefully in order to represent the spatial variability of hazard intensity for the peril. +In large catastrophes, there is a tendency for losses across multiple locations to be correlated, meaning relatively high losses across locations or low losses across locations tend to occur together. The correlation is stronger the closer together the exposures are located. -A second source of correlation is in the level of damage given the force of hazard intensity. This arises because buildings that are close to each other of similar construction can have the same vulnerabilities to damage. +Two main reasons why this would be the case for buildings situated close together are; + +* They experience a similar hazard intensity in an event; flood depth, windspeed, etc. +* They have similar vulnerability characteristics (such as being built by the same developer) and similar modes of failure/types of damage given the hazard intensity. + +Correlation increases the range of potential claims at a portfolio level, and particularly for large, rare events, a model can significantly underestimate uncertainty and extreme losses if this correlation is not captured. It is therefore desirable to allow modellers and users the ability to express views on the degree of spatial correlation in Oasis so that the effect on portfolio risk can be explored. -The combination of hazard intensity and damage correlation leads to more extreme losses across a portfolio which is of primary concern to a risk carrier. It does not change the mean ground up loss, but leads to more extreme losses at higher return periods.
| @@ -56,25 +58,24 @@ There are several options in Oasis to represent correlation, and more features h These can be summarized as follows; -* 1.15.x and later +* 1.15 and later * Group correlation for damage * Model specification of correlation groups * User override using CorrelationGroup field in OED -* 1.15.5 and later - * User override using an OED field list -* 1.27.0 and later + * User override using an OED field list parameter +* 1.27 and later + * Peril correlation groups + * Partial correlation for damage + * Separate groupings for hazard correlation * Separate hazard and damage sampling (full monte carlo sampling). - * Partial correlation for damage. - * Separate groupings for hazard correlation. -* 1.27.2 and later * Partial correlation for hazard | -.. _features_1.15.x: +.. _available_1.15: -Features in OasisLMF 1.15.x -########################### +Available in OasisLMF 1.15 +########################## ---- @@ -85,7 +86,7 @@ In Oasis, each exposure at risk is assigned a ‘group_id’ which is its correl • When exposures have the same group_id, damage will be sampled with full correlation. • When exposures have different group_ids, damage will be sampled independently. -To find out how the correlated and independent sampling works, please see the ‘calculation’ section. +To find out how the correlated and independent sampling works, please see the Sampling Methodology section. The three illustrated exposures have different group_ids assigned and would all be sampled independently. @@ -96,6 +97,10 @@ The three illustrated exposures have different group_ids assigned and would all | +Note that the locations illustrated may be impacted by the same or similar hazard intensity values per event, depending on the model's footprint. Where there is a single intensity value per model cell in the footprint (this is generally the case), it is only the correlation in damage given the hazard intensity that is being specified using the group_id. 
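The group_id mechanism described above can be illustrated with a toy sketch (hypothetical code, not the Oasis kernel): locations sharing a group_id reuse a single uniform draw per event, giving fully correlated damage sampling, while locations with different group_ids draw independently.

```python
import random


def sample_damage_draws(locations, event_seed):
    """Toy sketch of correlated vs independent sampling: one uniform
    draw is made per (event, group_id) pair, so locations that share a
    group_id receive the identical draw (full correlation), while
    locations in different groups are sampled independently."""
    draws = {}
    for loc in locations:
        key = (event_seed, loc["group_id"])
        if key not in draws:
            # Seed from the (event, group) pair so results are repeatable.
            draws[key] = random.Random(hash(key)).random()
        loc["draw"] = draws[key]
    return locations


locs = [
    {"loc": "Loc1", "group_id": 1},
    {"loc": "Loc2", "group_id": 1},  # same group -> identical draw
    {"loc": "Loc3", "group_id": 2},  # different group -> independent draw
]
sampled = sample_damage_draws(locs, event_seed=42)
print(sampled[0]["draw"] == sampled[1]["draw"])  # True
```

In the real kernel the draw would then be mapped through the effective damage CDF to a damage factor, but the grouping logic is the same idea.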
+ +**Default settings** + Each location in the OED location file is assigned a unique group_id. This is the system default behaviour for all models. The group_id is generated automatically based on unique values of the input OED location fields that uniquely define a location, as illustrated in the table. @@ -107,7 +112,7 @@ The group_id is generated automatically based on unique values of the input OED "Port1", "Acc1", "Loc2" "Port1", "Acc1", "Loc3" -Under this setting, multiple coverages at each location will be damaged with full correlation, because the group_id is defined at the location level. +Multiple coverages at each location will be damaged with full correlation, because the group_id is defined at the location level and is the same number for all coverages. **Model specification of correlation groups** @@ -140,7 +145,7 @@ A modeller can use other OED fields to define the groups, and/or internal Oasis This data setting would result in all locations with the same areaperil_id (located in the same hazard model grid cell) being assigned the same group_id. -The two locations in the cell on the left would be assigned the same group_id and damaged with full correlation, but the location in the cell on the right would be sampled independently from every other model cell. +The two locations in the cell on the left would be assigned the same group_id and damaged with full correlation, but the location in the cell on the right would be sampled independently from locations in every other model cell. **Correlation groups assigned by model cell** @@ -182,27 +187,167 @@ This will override the system default behaviour for generating the group_id, and | +**User override using OED field list parameter** + +Rather than specifying each correlation group_id location by location, the user can instead specify a field list to generate the correlation groups. This can be any combination of OED location fields. 
Each unique set of values for the specified fields will be assigned a unique group_id. + +For instance, if "PostalCode" was chosen as the grouping field, then the group_ids might be assigned as follows. Locations 3 and 4 are located in the same postcode, and they would be assigned the same group_id. + +.. csv-table:: + :header: "PortNumber", "AccNumber", "LocNumber", "PostalCode", "group_id" + + "Port1", "Acc1", "Loc1", "SR3 5LX", "1" + "Port1", "Acc1", "Loc2", "SR3 5LY", "2" + "Port1", "Acc1", "Loc3", "SR3 5LZ", "3" + "Port1", "Acc1", "Loc4", "SR3 5LZ", "3" + +The OED field list can be specified in the oasislmf settings using the **group_id_cols** parameter, as follows; + +``oasislmf.json`` + +.. code-block:: JSON + + { + "group_id_cols": ["PostalCode"] + } | -.. _features_1.15.5: +.. _available_1.27: -Features in OasisLMF 1.15.5 -########################### +Available in OasisLMF 1.27 +########################## ----- +New correlation features were introduced in release 1.27 in 2022. This introduced breaking changes in the format of the model settings file, and an alternative runtime calculation option, 'gulmc', which is required for some of the features explained below. +The correlation functionality described here is available to use for any standard Oasis model. Complex models that use bespoke correlation methodologies can continue to be used as before, or the new functionality could be incorporated within the complex model wrapper by the model provider. -.. _features_1.27.x: +| -Features in OasisLMF 1.27.x -########################### +**Peril correlation groups** ----- +There can be multiple hazards in an event which can give rise to loss. They may be the same peril type, for example flooding from different sources such as river flood / heavy rainfall, or there may be completely different perils and types of damage (e.g. high wind speeds causing roof damage, and flooding causing ground floor damage).
+ +In previous versions of Oasis, all peril damage at a location has been treated as fully correlated. + +There are now two options; model developers can group the same peril types together to fully correlate them at a location, or treat damage from different peril types (e.g. wind and flood) as independent. + +A peril correlation group number (an integer) can be specified in the lookup settings of the model settings file. This is done for each single peril code used by the model. If peril codes are assigned the same peril correlation group, it means that damage will be fully correlated for those peril codes at each location. + +Here is an example of independent peril damage for a model using two single peril codes; + +| + +``Model_settings.json`` + +.. code-block:: JSON + + "lookup_settings":{ + "supported_perils":[ + {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1}, + {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 2}, + {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"}, + {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"} + ] + }, + +| + +The second example groups two single peril codes together in one peril correlation group, meaning that damage will be fully correlated at a location. + +``Model_settings.json`` + +.. code-block:: JSON + + "lookup_settings":{ + "supported_perils":[ + {"id": "ORF", "desc": "Single Peril: Fluvial Flood", "peril_correlation_group": 1}, + {"id": "OSF", "desc": "Single Peril: Pluvial Flood", "peril_correlation_group": 1}, + {"id": "OO1", "desc": "Group Peril: All Flood perils"} + ] + }, + +| + +This feature only defines whether peril damage is correlated or independent at a location, and the behaviour is the same for all locations. + +Correlation in damage between locations is still governed by the group correlation feature of 1.15. 
If locations share the same group_id across locations, then the damage will be 100% correlated, for each peril correlation group. + +The partial damage correlation feature described below has been introduced to enable a finer degree of control of damage correlation across locations. + +| -.. _features_1.27.2: +**Partial damage correlation** -Features in OasisLMF 1.27.2 -########################### +A global damage correlation factor can be specified by the model provider to define how damage should be correlated across locations for each event. One factor may be specified for each peril correlation group. This enables correlation in damage for perils that occur in the same event but have different spatial variability in hazard intensity to be specified separately. + +The global correlation factor is a number between 0 and 1, where 0 means no correlation and 1 means 100% correlation. The higher the correlation factor, the greater the tendency that damage will be consistently low or high across the portfolio with each sample. When losses are summed to the portfolio level, this leads to a wider range of loss outcomes for the portfolio, per event, and greater extreme losses. + +The correlation factor works together with the group correlation functionality. Locations with the same group_id will still have 100% damage correlation, but locations with different group_ids will have partially correlated damage rather than fully independent damage. + +This means that the decision of how group_ids are assigned and the global correlation factor must be made together by the model provider. + +| + +**Partial damage correlation of 40% between all locations** + +.. image:: ../images/correlation4.png + :width: 600 + +| + +The correlation factor is specified in a new 'correlation_settings' section of the model settings file.
+ +The example illustrated above would be specified using: + +* data settings to specify how locations should be grouped +* lookup settings to specify the peril correlation group (single peril in this case), and +* correlation settings to specify the global damage correlation factor + +| + +``Model_settings.json`` + +.. code-block:: JSON + + + "data_settings": { + "damage_group_fields": ["PortNumber", "AccNumber", "LocNumber"] + }, + + "lookup_settings":{ + "supported_perils":[ + {"id": "OSF", "desc": "Single Peril: Pluvial Flood", "peril_correlation_group": 1} + ] + }, + + "correlation_settings": [ + {"peril_correlation_group": 1, "damage_correlation_value": "0.4"} + ] + +| + +Note there is a breaking change in the data_settings parameter **group_fields** which has been changed to **damage_group_fields** in 1.27. + +| + +**Separate hazard and damage sampling** + +(TO DO) + +| + +**Separate groupings for hazard correlation** + +(TO DO) + +| + +**Partial hazard correlation** + +(TO DO) + +| + +---- ----- \ No newline at end of file diff --git a/src/sections/disaggregation.rst b/src/sections/disaggregation.rst index 8049cc3..1e9a102 100644 --- a/src/sections/disaggregation.rst +++ b/src/sections/disaggregation.rst @@ -30,6 +30,12 @@ together determine the effective damage distribution per event in the analysis. vulnerability_id might be valid and produce a different distribution and therefore different losses. The question is what general features can Oasis provide to capture this uncertainty in the modelling process? +Our toy model `PiWind Postcode `_ demonstrates the +disaggregation features of Oasis. This model is available to use from `here +`_. + + + | + + ..
_how_it_works_disaggregation: diff --git a/src/sections/geocoding.rst b/src/sections/geocoding.rst new file mode 100644 index 0000000..e563600 --- /dev/null +++ b/src/sections/geocoding.rst @@ -0,0 +1,148 @@ +Geocoding +========= + +On this page +------------ + +* :ref:`introduction_geocoding` +* :ref:`how_it_works_geocoding` +* :ref:`example_geocoding` +* :ref:`links_geocoding` + +| + +.. _introduction_geocoding: + +Introduction +************ + +---- + +The Oasis modelling platform is designed to model individual buildings with known locations and vulnerability attributes. However, +this exposure data can sometimes be missing vital location data – a situation which is particularly true in the developing world. + +Incomplete exposure data can negatively affect performance when a model is run, and the consequent uncertainty from this is not +always captured in loss output. To overcome this issue, models can be integrated with geocoding in the pre-analysis step. This +feature fills in incomplete OED fields for addresses in the exposure location data, based on the available information about an +address provided. + +An example of the geocoding step can be seen in our toy model `PiWind Pre Analysis +`_, which is available for use from `here +`_. + +.. note:: + Oasis does not do any of the geocoding for this model. The geocoding aspect is performed by `Precisely's Geocode API + `_ and is integrated + into the model using the pre-analysis adjustment functionality. + +| + +.. _how_it_works_geocoding: + +How it works +************ + +---- + +Currently in the Oasis platform, as of August 2022, exposure must be converted into detailed data before being imported into the +platform for analysis. The format for this data is one building per row in the location file.
This can be done outside of the +system, or alternatively the model developer, as part of the Oasis model assets, can provide a pre-analysis routine to generate a +modified OED location file from an input OED location file. + +Pre-analysis routines provide flexibility to manipulate the OED input files before the model is run, for +augmentation as required by the model. They are completely customisable for changing input files in whatever way a user requires. +An example pre-analysis ‘hook’ for the PiWind model can be found `here +`_. + +The Oasis toy model `PiWind Pre Analysis `_ uses +the pre-analysis feature to integrate an external geocoder. The purpose of the geocoding is to ‘complete’ the location data in +the OED input by calculating the values for any empty fields for addresses in the location file that would hinder the performance +of the model if left incomplete. + +| + +Geocoding for latitude and longitude +#################################### + +| + +In the OED input, the typical location fields that define an address are: CountryCode, PostalCode, City, StreetAddress, Latitude, +and Longitude. The fields are not limited to this, but those listed describe the physical location of an address. If one or more +of these fields are missing, the ability of the model to correctly assign these addresses can be affected; there could be multiple +streets with the same name in a country, or multiple addresses that are the same in different countries, etc. This is typically +not an issue when only one field is empty, as latitude and longitude, or another field, will dispel any ambiguity in the accuracy of +the address. However, if more are missing, especially latitude and longitude, issues can arise. + +The geocoding pre-analysis step overcomes this by calculating any incomplete OED fields in preparation for the model run. It uses +Precisely’s Geocode API to achieve this, which is built into the script for the pre-analysis step.
More information on this +service offered by Precisely can be found `here +`_. + +This script takes the OED location file and runs through it line-by-line, checking the fields in each address. If an address has +empty values for its latitude and longitude fields, the remaining location data (what is available from CountryCode, PostalCode, +City, StreetAddress) is sent off for geocoding. + +This geocoding step takes in the incomplete address data, checks it against its extensive database of locations, and returns a +detailed response of information about that address – this includes its latitude and longitude. These two values are then inserted +into their corresponding empty fields to make that address complete. In addition, two new OED fields are added that indicate the +presence of geocoding: Geocoder and GeocodeQuality. Geocoder is set to ‘Precisely’ by default, as this is what the pre-analysis +step uses. GeocodeQuality is a value between 0 and 1 that indicates the precision of the geocoded values (e.g. 80% is entered as +0.8). More information on how quality is quantified can be found `here +`_. + +Once this has run through the entire location file, all addresses should be complete with every field accounted for with +corresponding values. This exposure data is then written over the old, incomplete file and is then ready for the model run. + +| + +.. _example_geocoding: + +Example of geocoding +******************** + +---- + +Below is an example of the geocode pre-analysis step that demonstrates latitude and longitude fields being completed when they have +not been provided in the original location file. The table below shows a location file with empty entries for latitude and +longitude. + +..
csv-table:: + :header: PortNumber,AccNumber,LocNumber,IsTenant,BuildingID,CountryCode,Latitude,Longitude,StreetAddress,PostalCode,OccupancyCode,ConstructionCode,LocPerilsCovered,BuildingTIV,OtherTIV,ContentsTIV,BITIV,LocCurrency,OEDVersion + + 1,A11111,100030535219,1,1,GB,,,1 BENTLEY STREET,LE13 1LY,1120,5204,WSS,150000,0,37500,15000,GBP,2.0.0 + 1,A11111,100030535220,1,1,GB,,,2 BENTLEY STREET,LE13 1LY,1120,5204,WW1,150000,0,37500,15000,GBP,2.0.0 + 1,A11111,100030535221,1,1,GB,52.7658503,-0.8832562,3 BENTLEY STREET,LE13 1LY,1120,5204,WW1,150000,0,37500,15000,GBP,2.0.0 + 1,A11111,100030535222,1,1,GB,52.7659084,-0.882736,4 BENTLEY STREET,LE13 1LY,1120,5204,WW1,150000,0,37500,15000,GBP,2.0.0 + +| + +The geocode pre-analysis step identifies that the addresses in these rows are incomplete and sends them for geocoding. The geocoder +returns the values for the latitude and longitude, and these are inserted into each row to complete the address data, along with the +geocode fields (the addresses that aren't geocoded are blank for these two fields). + +.. csv-table:: + :header: PortNumber,AccNumber,LocNumber,IsTenant,BuildingID,CountryCode,Latitude,Longitude,StreetAddress,PostalCode,OccupancyCode,ConstructionCode,LocPerilsCovered,BuildingTIV,OtherTIV,ContentsTIV,BITIV,LocCurrency,OEDVersion,Geocoder,GeocodeQuality + + 1,A11111,100030535219,1,1,GB,52.7657126,-0.8831089,1 BENTLEY STREET,LE13 1LY,1120,5204,WSS,150000.0,0.0,37500.0,15000.0,GBP,2.0.0,Precisely,0.05 + 1,A11111,100030535220,1,1,GB,52.7657510,-0.8829107,2 BENTLEY STREET,LE13 1LY,1120,5204,WW1,150000.0,0.0,37500.0,15000.0,GBP,2.0.0,Precisely,0.05 + 1,A11111,100030535221,1,1,GB,52.7658503,-0.8832562,3 BENTLEY STREET,LE13 1LY,1120,5204,WW1,150000.0,0.0,37500.0,15000.0,GBP,2.0.0,, + 1,A11111,100030535222,1,1,GB,52.7659084,-0.882736,4 BENTLEY STREET,LE13 1LY,1120,5204,WW1,150000.0,0.0,37500.0,15000.0,GBP,2.0.0,, + +| + +This data is then written over the old location file to be processed by the model. + +| + +..
_links_geocoding: + +Links for further information +***************************** + +---- + +* The example model PiWind Pre Analysis, with geocoding, can be found `here + `_. + +* More information on Precisely’s geocoding API can be found `here + `_. diff --git a/src/sections/modelling-methodology.rst b/src/sections/modelling-methodology.rst index 9b55a4b..f4bf3b4 100644 --- a/src/sections/modelling-methodology.rst +++ b/src/sections/modelling-methodology.rst @@ -88,12 +88,14 @@ Simulation methodology ---- -The Oasis kernel provides a robust loss simulation engine for catastrophe modelling.Insurance practitioners are used to -dealing with losses arising from events. These losses are numbers, not distributions. Policy terms are applied to the -losses individually and then aggregated and further conditions or reinsurances applied. Oasis takes the same perspective, -which is to generate individual losses from the probability distributions. The way to achieve this is random sampling called -“Monte-Carlo” sampling from the use of random numbers, as if from a roulette wheel, to solve equations that are otherwise -intractable. +The Oasis kernel provides a robust loss simulation engine for catastrophe modelling. + +Insurance practitioners are used to dealing with losses arising from events. Policy terms are applied to the losses individually +and then aggregated and further conditions or reinsurances applied. + +Oasis takes the same approach in the modelling of losses, which is to generate individual losses from the damage probability +distributions. The way to achieve this is random sampling called “Monte-Carlo” sampling from the use of random numbers, as if +from a roulette wheel, to solve equations that are otherwise intractable. Modelled and empirical intensities and damage responses can show significant uncertainty. Sometimes this uncertainty is multi-modal, meaning that there can be different peaks of behaviour rather than just a single central behaviour. 
Moreover, @@ -109,7 +111,7 @@ the damage by “convolving” the binned intensity distribution with the vulner 'law of total probability' to evaluate the overall probability of each damage outcome, by summing the probability of all levels of intensity multiplied by the conditional probability of the damage outcome in each case. -Uniform sampling of the cumulative distribution function is then performed. Random numbers between 0 and 1 are drawn, and +Sampling of the cumulative distribution function is then performed. Random numbers between 0 and 1 are drawn, and used to sample a relative damage ratio from the effective damage CDF. Linear interpolation of the cumulative probability thresholds of the bin in which the random number falls is used to calculate the damage ratio for each sample. @@ -118,4 +120,5 @@ Finally, a ground up loss sample is calculated by multiplying the damage ratio w .. figure:: /images/simulation_approach.png :alt: Oasis simulation approach -| \ No newline at end of file +| + diff --git a/src/sections/post-loss-amplification.rst b/src/sections/post-loss-amplification.rst index b800b26..3cc4459 100644 --- a/src/sections/post-loss-amplification.rst +++ b/src/sections/post-loss-amplification.rst @@ -54,6 +54,9 @@ The uplift factor is applied after the GUL calculation and is its own module. T gulpy or gulmc...or complex model wrapper implementations. However, if elements from the logic can be inherited by gulpy/gulmc to improve performance, this is also an option. +An example toy model with post loss amplification is available to use from `here `_. + | .. _file_format_pla: diff --git a/src/sections/pre-analysis-adjustments.rst b/src/sections/pre-analysis-adjustments.rst index 16aec1c..5030152 100644 --- a/src/sections/pre-analysis-adjustments.rst +++ b/src/sections/pre-analysis-adjustments.rst @@ -1,7 +1,64 @@ Pre Analysis Adjustments ======================== -Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Donec -lorem neque, interdum in ipsum nec, finibus dictum velit. Ut eu -efficitur arcu, id aliquam erat. In sit amet diam gravida, imperdiet -tellus eu, gravida nisl. \ No newline at end of file +On this page +------------ + +* :ref:`introduction_paa` +* :ref:`how_it_works_paa` +* :ref:`example_models_paa` + +| + +.. _introduction_paa: + +Introduction +************ + +---- + +The Oasis modelling platform is designed to model individual buildings with known locations and vulnerability attributes. However, +this exposure data can sometimes be aggregated, low-resolution, or missing key attributes, such as location data – a situation +which is particularly true in the developing world. A pre-analysis adjustment step allows the user to overcome the issues that could +arise from this by performing data cleansing of any errors or inconsistencies in their OED exposure data, before it is used in a +model run. The code and config for the pre-analysis step are completely customisable; the user can change these to modify input +files in any way they desire to achieve a particular output, and automating this kind of preparation improves the quality of +analyses. + +| + +.. _how_it_works_paa: + +How it works +************ + +---- + +Currently in the Oasis platform, as of August 2022, exposure must be converted into detailed data before being imported into the +platform for analysis. The format for this data is one building per row in the location file. This can be done outside of the +system, or alternatively the model developer, as part of the Oasis model assets, can provide a pre-analysis routine to generate a +modified OED location file from an input OED location file. + +The purpose of a pre-analysis routine is to provide flexibility to manipulate the OED input files before the model is run, for +augmentation as required by the model. An example pre-analysis ‘hook’ for the PiWind model can be found `here +`_. + +| + +.. 
_example_models_paa: + +Example models +************** + +---- + +Oasis currently offers two toy models that demonstrate the possible options for pre-analysis adjustment: +:doc:`Disaggregation <../../sections/disaggregation>` via `PiWind Postcode +`_, and :doc:`Geocoding <../../sections/geocoding>` via +`PiWind Pre Analysis `_. + +For more information on these models: + +* :doc:`Disaggregation <../../sections/disaggregation>` + +* :doc:`Geocoding <../../sections/geocoding>` \ No newline at end of file diff --git a/src/sections/pytools.rst b/src/sections/pytools.rst index 0675711..c3d1c3b 100644 --- a/src/sections/pytools.rst +++ b/src/sections/pytools.rst @@ -3,4 +3,679 @@ Pytools Pytools comprises the Python equivalents of some of the :doc:`../../sections/ktools` components. These have been developed to take advantage of Python libraries dedicated to efficient large-scale computation to make running the Oasis workflow more -performant and reliable. \ No newline at end of file +performant and reliable. + +| + +The components of Pytools are: + +* **modelpy** +* **gulpy** - more information can be found in the :ref:`gulpy-pytools` section +* **gulmc** - more information can be found in the :ref:`gulmc-pytools` section +* **fmpy** - more information can be found in the :doc:`../../sections/financial-module` section +* **plapy** - more information can be found in the :doc:`../../sections/post-loss-amplification` section + +| + +.. _gulpy-pytools: + +gulpy +***** + +---- + +``gulpy`` is the new tool for the computation of ground up losses that is set to replace ``gulcalc`` in the Oasis Loss Modelling +Framework. + +``gulpy`` is ready for production usage in oasislmf versions above ``1.26.0``. + +This document summarizes the changes introduced with ``gulpy`` in terms of command line arguments and features. + +| + +Command line arguments +###################### + +``gulpy`` offers the following command line arguments: + +.. 
code-block:: sh + + $ gulpy -h + usage: use "gulpy --help" for more information + + optional arguments: + -h, --help show this help message and exit + -a ALLOC_RULE back-allocation rule + -d output random numbers instead of gul (default: False). + -i FILE_IN, --file-in FILE_IN + filename of input stream. + -o FILE_OUT, --file-out FILE_OUT + filename of output stream. + -L LOSS_THRESHOLD Loss threshold (default: 1e-6) + -S SAMPLE_SIZE Sample size (default: 0). + -V, --version show program version number and exit + --ignore-file-type [IGNORE_FILE_TYPE [IGNORE_FILE_TYPE ...]] + the type of file to be loaded + --random-generator RANDOM_GENERATOR + random number generator + 0: numpy default (MT19937), 1: Latin Hypercube. Default: 1. + --run-dir RUN_DIR path to the run directory + --logging-level LOGGING_LEVEL + logging level (debug:10, info:20, warning:30, error:40, critical:50). Default: 30. + +| + +The following gulcalc arguments were ported to gulpy with the same meaning and requirements: + +.. code-block:: sh + + -a, -d, -h, -L, -S + +| + +The following gulcalc arguments were ported to gulpy but were renamed: + +.. code-block:: sh + + # in gulcalc # in gulpy + -v -V, --version + -i -o, --file-out + +| + +The following gulcalc arguments were not ported to gulpy: + +.. code-block:: sh + + -r, -R, -c, -j, -s, -A, -l, -b, -v + +| + +The following arguments were introduced with gulpy: + +.. code-block:: sh + + --file-in, --ignore-file-type, --random-generator, --run-dir, --logging-level + +| + +New random number generator: the Latin Hypercube Sampling algorithm +################################################################### + +To compute random loss samples, it is necessary to draw random values from the effective damageability probability distribution +function (PDF). 
Drawing random values from a given PDF is normally achieved by generating a random float value between 0 and 1 and +by taking the inverse of the cumulative distribution function (CDF) at that random value. The collection of random values +produced with this approach will be distributed according to the PDF. + +To generate random values ``gulcalc`` uses the `Mersenne Twister generator `_. In +``gulpy``, instead, we introduce the `Latin Hypercube Sampling (LHS) `_ as +the default algorithm to generate random values. Compared to the Mersenne Twister, LHS implements a sort of stratified random +number generation that more evenly probes the range between 0 and 1, which translates into faster convergence to the desired PDF. + +In other words, in order to probe a given PDF to the same accuracy, the LHS algorithm requires a smaller number of samples than +the Mersenne Twister. + +| + +Examples +######## + +| + +Setting the Output +"""""""""""""""""" + +In order to run the ground-up loss calculation and stream the output to stdout in binary format, the following commands are +equivalent: + +.. code-block:: sh + + # with gulcalc # with gulpy + gulcalc -a0 -S10 -i - gulpy -a0 -S10 + gulcalc -a1 -S20 -i - gulpy -a1 -S20 + gulcalc -a2 -S30 -i - gulpy -a2 -S30 + +| + +Alternatively, the binary output can be redirected to file with: + +.. 
code-block:: sh + + # with gulcalc # with gulpy # with gulpy [alternative] + gulcalc -a0 -S10 -i gul_out.bin gulpy -a0 -S10 -o gul_out.bin gulpy -a0 -S10 --file-out gul_out.bin + gulcalc -a1 -S20 -i gul_out.bin gulpy -a1 -S20 -o gul_out.bin gulpy -a1 -S20 --file-out gul_out.bin + gulcalc -a2 -S30 -i gul_out.bin gulpy -a2 -S30 -o gul_out.bin gulpy -a2 -S30 --file-out gul_out.bin + +| + +Choosing the random number generator +"""""""""""""""""""""""""""""""""""" + +By default, ``gulpy`` uses the LHS algorithm to draw random number samples, which is shown to require fewer samples than the +Mersenne Twister used by ``gulcalc`` when probing a given probability distribution function. + +If needed, the user can force gulpy to use a specific random number generator: + +.. code-block:: sh + + gulpy --random-generator 0 # uses Mersenne Twister (like gulcalc) + gulpy --random-generator 1 # uses Latin Hypercube Sampling algorithm (new in gulpy) + +| + +Performance +########### + +As of oasislmf version 1.0.26.rc1 ``gulpy`` is not used by default in the oasislmf MDK but it can be used by passing the ``--gulpy`` +argument, e.g.: + +.. code-block:: sh + + # using gulcalc # using gulpy + oasislmf model run oasislmf model run --gulpy + +| + +On a real windstorm model these are the execution times: + +.. 
code-block:: sh + + # command # info on this run # total execution time # uses # speedup + oasislmf model run [ 10 samples -a0 rule ] 3634 sec ~ 1h getmodel + gulcalc 1.0x [baseline for 10 samples] + oasislmf model run --modelpy [ 10 samples -a0 rule ] 1544 sec ~ 25 min modelpy + gulcalc 2.4x + oasislmf model run --modelpy --gulpy [ 10 samples -a0 rule ] 1508 sec ~ 25 min modelpy + gulpy 2.4x + oasislmf model run [ 250 samples -a0 rule ] 10710 sec ~ 3h getmodel + gulcalc 1.0x [baseline for 250 samples] + oasislmf model run --modelpy [ 250 samples -a0 rule ] 8617 sec ~ 2h 23 min modelpy + gulcalc 1.2x + oasislmf model run --modelpy --gulpy [ 250 samples -a0 rule ] 4969 sec ~ 1h 23 min modelpy + gulpy 2.2x + +| + +.. _gulmc-pytools: + +gulmc +***** + +---- + +``gulmc`` is a new tool that uses a "full Monte Carlo" approach for ground up loss calculation: instead of drawing loss +samples from the 'effective damageability' probability distribution (as done by calling ``eve | modelpy | gulpy``), it first +draws a sample of the hazard intensity, and then draws an independent sample of the damage from the vulnerability function +corresponding to the hazard intensity sample. + +``gulmc`` was first introduced in oasislmf v1.27.0 and is ready for production usage from oasislmf v ``1.28.0`` onwards. + +This document summarizes the changes introduced with ``gulmc`` with respect to ``gulpy``. + +.. note:: + + Features such as the Latin Hypercube Sampler introduced with ``gulpy`` are not discussed here as they are described at + length in the ``gulpy`` documentation. + +| + +Command line arguments +###################### + +``gulmc`` offers the following command line arguments: + +.. code-block:: bash + + $ gulmc -h + usage: use "gulmc --help" for more information + + options: + -h, --help show this help message and exit + -a ALLOC_RULE back-allocation rule. 
Default: 0 + -d DEBUG output the ground up loss (0), the random numbers used for hazard sampling (1), the random numbers used for damage sampling (2). Default: 0 + -i FILE_IN, --file-in FILE_IN + filename of input stream (list of events from `eve`). + -o FILE_OUT, --file-out FILE_OUT + filename of output stream (ground up losses). + -L LOSS_THRESHOLD Loss threshold. Default: 1e-6 + -S SAMPLE_SIZE Sample size. Default: 0 + -V, --version show program version number and exit + --effective-damageability + if passed true, the effective damageability is used to draw loss samples instead of full MC. Default: False + --ignore-correlation if passed true, peril correlation groups (if defined) are ignored for the generation of correlated samples. Default: False + --ignore-haz-correlation + if passed true, hazard correlation groups (if defined) are ignored for the generation of correlated samples. Default: False + --ignore-file-type [IGNORE_FILE_TYPE ...] + the type of file to be loaded. Default: set() + --data-server use tcp/sockets for IPC data sharing. + --logging-level LOGGING_LEVEL + logging level (debug:10, info:20, warning:30, error:40, critical:50). Default: 30 + --vuln-cache-size MAX_CACHED_VULN_CDF_SIZE_MB + Size in MB of the in-memory cache to store and reuse vulnerability cdf. Default: 200 + --peril-filter PERIL_FILTER [PERIL_FILTER ...] + Id of the peril to keep, if empty take all perils + --random-generator RANDOM_GENERATOR + random number generator + 0: numpy default (MT19937), 1: Latin Hypercube. Default: 1 + --run-dir RUN_DIR path to the run directory. Default: "." + +| + +While all of ``gulpy``'s command line arguments are present in ``gulmc`` with the same usage and functionality, the following +command line arguments have been introduced in ``gulmc``: + +.. 
code-block:: bash + + --effective-damageability + --ignore-correlation + --ignore-haz-correlation + --data-server + --vuln-cache-size + --peril-filter + +| + +Comparing ``gulpy`` and ``gulmc`` output +######################################## + +``gulmc`` runs the same algorithm as ``eve | modelpy | gulpy``, i.e., it runs the 'effective damageability' calculation mode, +with the same command line arguments. For example, running a model with 1000 samples, alloc rule 1, and streaming the binary +output to the ``output.bin`` file can be done with: + +.. code-block:: bash + + eve 1 1 | modelpy | gulpy -S1000 -a1 -o output.bin + +or + +.. code-block:: bash + + eve 1 1 | gulmc -S1000 -a1 -o output.bin + +| + +On the usage of ``modelpy`` and ``eve`` with ``gulmc`` +"""""""""""""""""""""""""""""""""""""""""""""""""""""" + +Due to internal refactoring, ``gulmc`` now incorporates the functionality performed by ``modelpy``, therefore ``modelpy`` should +not be used in a pipe with ``gulmc``: + +.. code-block:: bash + + eve 1 1 | modelpy | gulmc -S1000 -a1 -o output.bin # wrong usage, won't work + eve 1 1 | gulmc -S1000 -a1 -o output.bin # correct usage + + +**NOTE** Both ``gulpy`` and ``gulmc`` can read the events stream from binary file, i.e., without the need for ``eve``, with: + +.. code-block:: bash + + gulmc -i input/events.bin -S1000 -a1 -o output.bin + +| + +``gulmc`` handles hazard uncertainty +#################################### + +If the hazard intensity in the footprint has no uncertainty, i.e.: + +.. code-block:: csv + + event_id,areaperil_id,intensity_bin_id,probability + 1,4,1,1 + [...] + +then ``gulpy`` and ``gulmc`` produce the same outputs. However, if the hazard intensity has a probability distribution, e.g.: + +.. code-block:: csv + + event_id,areaperil_id,intensity_bin_id,probability + 1,4,1,2.0000000298e-01 + 1,4,2,6.0000002384e-01 + 1,4,3,2.0000000298e-01 + [...] 
+ +then, by default, ``gulmc`` runs the full Monte Carlo sampling of the hazard intensity, and then of damage. Reproducing the same results that ``gulpy`` produces can be achieved by using the ``--effective-damageability`` flag: + +.. code-block:: bash + + eve 1 1 | gulmc -S1000 -a1 -o output.bin --effective-damageability + +| + +Probing random values used for sampling +####################################### + +Since we now sample in two dimensions (hazard intensity and damage), the ``-d`` flag is revamped to output both sets of random values +used for sampling. While ``gulpy -d`` printed the random values used to sample the effective damageability distribution, in +``gulmc``: + +.. code-block:: bash + + gulmc -d1 [...] # prints the random values used for the hazard intensity sampling + gulmc -d2 [...] # prints the random values used for the damage sampling + +.. note:: + + if the ``--effective-damageability`` flag is used, only ``-d2`` is valid since there is no sampling of the hazard intensity, + and the random values printed are those used for the effective damageability sampling. + +.. note:: + + if ``-d1`` or ``-d2`` are passed, the only valid ``alloc_rule`` value is ``0``. This is because, when printing the random + values, back-allocation is not meaningful. ``alloc_rule=0`` is the default value or it can be set with ``-a0``. If a value + other than 0 is passed to ``-a``, an error will be thrown. + +| + +``gulmc`` supports *aggregate vulnerability* definitions +######################################################## + +``gulmc`` supports aggregate vulnerability functions, i.e., vulnerability functions that are composed of multiple individual +vulnerability functions. + +``gulmc`` can now efficiently reconstruct the aggregate vulnerability functions on-the-fly and compute the aggregate (aka blended, +aka weighted) vulnerability function. This new functionality works both in the "effective damageability" mode and in the full +Monte Carlo mode. 
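As a toy illustration of the weighting just described (all values invented, and not gulmc's internal code), a blended damage CDF can be built by weight-averaging the individual vulnerability PDFs and then cumulating:

```python
import numpy as np

# Toy damage PDFs over 4 damage bins for two individual vulnerability
# functions (illustrative values; each row sums to 1).
vuln_pdfs = {
    1: np.array([0.7, 0.2, 0.1, 0.0]),
    2: np.array([0.4, 0.3, 0.2, 0.1]),
}

# Illustrative weights for one areaperil_id, as in a weights table.
weights = {1: 138.0, 2: 224.0}

def blended_cdf(vuln_pdfs, weights):
    """Weight-average the individual PDFs, then cumulate into a CDF."""
    total = sum(weights.values())
    pdf = sum((w / total) * vuln_pdfs[vid] for vid, w in weights.items())
    return np.cumsum(pdf)

cdf = blended_cdf(vuln_pdfs, weights)
```

The resulting CDF can then be sampled exactly like an individual vulnerability CDF.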
+ +Aggregate vulnerability functions are defined using two new tables, to be stored in the ``static/`` directory of the model data: +``aggregate_vulnerability.csv`` (or ``.bin``) and ``weights.csv`` (or ``.bin``). Example tables: + +* an ``aggregate_vulnerability`` table that defines 3 aggregate vulnerability functions, made of 2, 3, and 4 individual + vulnerabilities, respectively: + +.. code-block:: csv + + aggregate_vulnerability_id,vulnerability_id + 100001,1 + 100001,2 + 100002,3 + 100002,4 + 100002,5 + 100003,6 + 100003,7 + 100003,8 + 100003,9 + +* a ``weights`` table that specifies weights for each of the individual vulnerability functions for each ``areaperil_id``: + +.. code-block:: csv + + areaperil_id,vulnerability_id,weight + 54,1,138 + 54,2,224 + 54,3,194 + 54,4,264 + 54,5,390 + 54,6,107 + [...] + 154,1,1 + 154,2,97 + 154,3,273 + 154,4,296 + [...] + +| + +* an ``items`` table that uses two of the aggregate vulnerability ids: + +.. code-block:: + + item_id,coverage_id,areaperil_id,vulnerability_id,group_id + 1,1,154,8,833720067 + 2,1,54,2,833720067 + 3,2,154,8,956003481 + 4,2,54,100001,956003481 + 5,4,154,100002,2030714556 + [...] + +| + +**Notes**: + +* if ``aggregate_vulnerability.csv`` or ``.bin`` is present, then ``weights.csv`` or ``weights.bin`` needs to be present too, or + ``gulmc`` raises an error. +* if ``aggregate_vulnerability.csv`` or ``.bin`` is not present, then ``gulmc`` runs normally, without any definition of aggregate + vulnerability. + +| + +Caching in ``gulmc`` +#################### + +In order to speed up the calculation of losses in the full Monte Carlo mode, we implement a simple caching mechanism whereby the +CDFs of the most commonly used vulnerability functions are stored in memory for efficient re-use. 
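A minimal sketch of such a size-capped cache (illustrative only; gulmc's actual implementation differs) could look like:

```python
import numpy as np
from collections import OrderedDict

class CdfCache:
    """Illustrative size-capped LRU cache for vulnerability CDF arrays."""

    def __init__(self, max_mb=200):
        self.max_bytes = max_mb * 1024 ** 2
        self.used = 0
        self.store = OrderedDict()

    def get(self, vuln_id, compute):
        if vuln_id in self.store:
            self.store.move_to_end(vuln_id)  # mark as most recently used
            return self.store[vuln_id]
        cdf = compute(vuln_id)
        # Evict least recently used entries until the new CDF fits.
        while self.store and self.used + cdf.nbytes > self.max_bytes:
            _, old = self.store.popitem(last=False)
            self.used -= old.nbytes
        if cdf.nbytes <= self.max_bytes:
            self.store[vuln_id] = cdf
            self.used += cdf.nbytes
        return cdf

# Hypothetical usage: compute a 100-bin CDF once, then reuse it from the cache.
cache = CdfCache(max_mb=1)
cdf = cache.get(8, lambda vid: np.cumsum(np.full(100, 0.01)))
```

Repeated `get` calls with the same `vuln_id` skip the recomputation, which is where the speed-up for narrowly peaked hazard distributions comes from.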
+ +The cache size is set as the minimum between the cache size specified by the user with the new ``--vuln-cache-size`` argument +(default: 200, units: MB) and the amount of memory needed to store all the vulnerability functions to be used in the calculations. + +The cache dramatically speeds up the execution when the hazard intensity distribution is narrowly peaked (i.e., when most of the +intensity falls in a few intensity bins), which implies that only a few vulnerability functions are used repeatedly. + +The cache only stores individual vulnerability function CDFs, not the aggregate/weighted CDFs, of which there would be too many to store. + +Example: allowing the vulnerability cache size to grow up to 1000 MB can be done with: + +.. code-block:: bash + + eve 1 1 | gulmc -S100 -a1 --vuln-cache-size=1000 + +| + +``gulmc`` supports hazard correlation +##################################### + +Hazard correlation parameters are defined analogously to damage correlation parameters. + +Before entering into details, these are **breaking changes** compared to previous versions: + +* group ids are now always hashed. This ensures results are fully reproducible. Therefore the ``hashed_group_id`` argument has been + dropped from the relevant functions. +* from this version, ``oasislmf model run`` will fail if an older model settings JSON file using ``group_fields`` is used instead of the + new schema, which uses ``damage_group_fields`` and ``hazard_group_fields`` as defined in the ``data_settings`` key. See more + details below. +* command line interface argument ``--group_id_cols`` for ``oasislmf model run`` has been renamed to ``--damage_group_id_cols``. A + new argument ``--hazard_group_id_cols`` has been introduced to specify the columns to use for defining group ids for the hazard + sampling. They respectively default to: + +.. 
code-block:: python + + DAMAGE_GROUP_ID_COLS = ["PortNumber", "AccNumber", "LocNumber"] + HAZARD_GROUP_ID_COLS = ["PortNumber", "AccNumber", "LocNumber"] + +| + +Update to the model settings JSON schema +"""""""""""""""""""""""""""""""""""""""" + +The oasislmf model settings JSON schema is updated to support the new feature with a breaking change. Previous +``correlation_settings`` and ``data_settings`` entries in the model settings such as: + +.. code-block:: json + + "correlation_settings": [ + {"peril_correlation_group": 1, "correlation_value": "0.7"}, + ], + "data_settings": { + "group_fields": ["PortNumber", "AccNumber", "LocNumber"], + }, + +are not supported anymore. The ``correlation_settings`` must contain a new key ``hazard_correlation_value`` and the +``correlation_value`` key is renamed to ``damage_correlation_value``: + +.. code-block:: json + + "correlation_settings": [ + {"peril_correlation_group": 1, "damage_correlation_value": "0.7", "hazard_correlation_value": "0.0"}, + {"peril_correlation_group": 2, "damage_correlation_value": "0.5", "hazard_correlation_value": "0.0"} + ], + +| + +Likewise, the ``data_settings`` entries are renamed from ``group_fields`` to ``damage_group_fields`` and now support +``hazard_group_fields``, which is an optional key: + +.. code-block:: json + + "data_settings": { + "damage_group_fields": ["PortNumber", "AccNumber", "LocNumber"], + "hazard_group_fields": ["PortNumber", "AccNumber", "LocNumber"] + }, + +| + +Correlations updated schema +""""""""""""""""""""""""""" + +The schema has been updated as follows in order to support correlation parameters: + +* if ``correlation_settings`` is not present, ``damage_correlation_value`` and ``hazard_correlation_value`` are assumed zero. +  Peril correlation groups (if defined in supported perils) are ignored. No errors are raised. Example of valid model settings: + +.. 
code-block:: json + + "lookup_settings":{ + "supported_perils":[ + {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1}, + {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1}, + {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"}, + {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"} + ] + }, + +| + +* if ``correlation_settings`` is present, it needs to contain ``damage_correlation_value`` and ``hazard_correlation_value`` for + each ``peril_correlation_group`` entry. + +Example of a valid model settings file: + +.. code-block:: json + + "lookup_settings":{ + "supported_perils":[ + {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1}, + {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1}, + {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"}, + {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"} + ] + }, + "correlation_settings": [ + {"peril_correlation_group": 1, "damage_correlation_value": "0.7", "hazard_correlation_value": "0.3"} + ], + +| + +Example of an invalid model settings file that raises a ``ValueError``: + +.. code-block:: json + + "lookup_settings":{ + "supported_perils":[ + {"id": "WSS", "desc": "Single Peril: Storm Surge", "peril_correlation_group": 1}, + {"id": "WTC", "desc": "Single Peril: Tropical Cyclone", "peril_correlation_group": 1}, + {"id": "WW1", "desc": "Group Peril: Windstorm with storm surge"}, + {"id": "WW2", "desc": "Group Peril: Windstorm w/o storm surge"} + ] + }, + "correlation_settings": [ + {"peril_correlation_group": 1} + ], + +| + +Correlations files updated schema for csv <-> binary conversion tools +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +The correlations.csv and .bin files are modified as they now contain two additional columns: ``hazard_group_id`` and +``hazard_correlation_value``. 
They also feature a renamed column from ``correlation_value`` to ``damage_correlation_value``. + +The ``oasislmf`` package ships conversion tools for the correlations files: ``correlationtobin`` to convert a correlations file +from csv to bin: + +.. code-block:: bash + + correlationtobin correlations.csv -o correlations.bin + +and ``correlationtocsv`` to convert a ``correlations.bin`` file to ``csv``. If ``-o `` is specified, it writes the csv +table to file: + +.. code-block:: bash + + correlationtocsv correlations.bin -o correlations.csv + +| + +If no ``-o `` is specified, it prints the csv table to stdout: + +.. code-block:: bash + + correlationtocsv correlations.bin + item_id,peril_correlation_group,damage_correlation_value,hazard_correlation_value + 1,1,0.4,0.0 + 2,1,0.4,0.0 + 3,1,0.4,0.0 + 4,1,0.4,0.0 + 5,1,0.4,0.0 + 6,1,0.4,0.0 + 7,1,0.4,0.0 + 8,1,0.4,0.0 + 9,1,0.4,0.0 + 10,2,0.7,0.9 + 11,2,0.7,0.9 + 12,2,0.7,0.9 + 13,2,0.7,0.9 + 14,2,0.7,0.9 + 15,2,0.7,0.9 + 16,2,0.7,0.9 + 17,2,0.7,0.9 + 18,2,0.7,0.9 + 19,2,0.7,0.9 + 20,2,0.7,0.9 + +| + +``gulmc`` supports *stochastic disaggregation* for items and fm files +##################################################################### + +* ``NumberOfBuildings`` from the location file is used to generate the expanded items file. + +* The ``IsAggregate`` flag value from the location file is used to generate the fm files. + +* Each disaggregated location has the same areaperil / vulnerability attributes as the parent coverage. + +* A new field (``disagg_id``) is needed in gul_summary_map and fm_summary_map to link disaggregated locations to the original location. + +* TIV, deductibles and limits are split equally. + +* The definition of site for the application of site terms depends on the value of ``IsAggregate``. 
+ +* where ``IsAggregate`` = 1, the site is the disaggregated location +* where ``IsAggregate`` = 0, the site is the non-disaggregated location + + +``gulmc`` supports *absolute damage (vulnerability) functions* +############################################################## + +In its current implementation, the damage bin dictionary file containing the definition of the damage bins for an entire model can +contain both relative and absolute damage bins, e.g.: + +.. code-block:: csv + + "bin_index","bin_from","bin_to","interpolation" + 1,0.000000,0.000000,0.000000 + 2,0.000000,0.100000,0.050000 + 3,0.100000,0.200000,0.150000 + 4,0.200000,0.300000,0.250000 + 5,0.300000,0.400000,0.350000 + 6,0.400000,0.500000,0.450000 + 7,0.500000,0.600000,0.550000 + 8,0.600000,0.700000,0.650000 + 9,0.700000,0.800000,0.750000 + 10,0.800000,0.900000,0.850000 + 11,0.900000,1.000000,0.950000 + 12,1.000000,1.000000,1.000000 + 13,1.000000,2.000000,1.500000 + 14,2.000000,3.000000,2.500000 + 15,3.000000,30.00000,16.50000 + +where bins 1 to 12 represent a relative damage, and bins 13 to 15 represent an absolute damage. + +For random losses falling in absolute damage bins that have a non-zero width (e.g., bins 13, 14, and 15), the loss is +interpolated using the same linear or parabolic interpolation algorithm already used for the relative damage bins. + +**IMPORTANT**: vulnerability functions are required to be **either entirely absolute or entirely relative**. *Mixed* +vulnerability functions, defined by a mixture of absolute and relative damage bins, are not supported. Currently there +is no automatic pre-run check that verifies that all vulnerability functions comply with this requirement; the user must check +this manually. 
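The per-bin linear interpolation described above can be sketched as follows (toy bin edges and probabilities; the kernel's parabolic interpolation option is not shown):

```python
import numpy as np

def sample_damage(u, bin_from, bin_to, cum_prob):
    """Linearly interpolate a damage value from a binned CDF.

    u        : uniform random draw in [0, 1)
    bin_from : lower damage edge of each bin (relative factor or absolute amount)
    bin_to   : upper damage edge of each bin
    cum_prob : cumulative probability at the upper edge of each bin
    """
    i = int(np.searchsorted(cum_prob, u))       # bin in which u falls
    lo = cum_prob[i - 1] if i > 0 else 0.0      # cumulative prob at lower edge
    width = cum_prob[i] - lo
    frac = (u - lo) / width if width > 0 else 0.0
    return bin_from[i] + frac * (bin_to[i] - bin_from[i])

# Two relative bins plus one absolute bin (edges above 1 are absolute amounts).
bin_from = np.array([0.0, 0.5, 1.0])
bin_to   = np.array([0.5, 1.0, 2.0])
cum_prob = np.array([0.6, 0.9, 1.0])

d = sample_damage(0.3, bin_from, bin_to, cum_prob)  # lands in the first bin
```

A draw of `u = 0.3` falls in the first bin (cumulative probability 0 to 0.6) and interpolates to a damage factor of 0.25; a draw above 0.9 falls in the absolute bin and interpolates between 1.0 and 2.0.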
\ No newline at end of file diff --git a/src/sections/releases.rst b/src/sections/releases.rst index d123500..f1733d2 100644 --- a/src/sections/releases.rst +++ b/src/sections/releases.rst @@ -43,7 +43,7 @@ Release Schedule the new version number, so on release 1.28.0 the code base is copied to a branch ``backports/1.28.x`` where backported features and fixes are applied. -* **After 2023** - Starting from 2023, we will transition to a yearly release cycle for our stable versions. Each year, we will +* **After 2023** - From 2024, we will transition to a yearly release cycle for our stable versions. Each year, we will release a new stable version with additional features. A full, detailed list of the changes from each release can be found `here `_. diff --git a/src/sections/sampling-methodology.rst b/src/sections/sampling-methodology.rst index 6c782d2..062940b 100644 --- a/src/sections/sampling-methodology.rst +++ b/src/sections/sampling-methodology.rst @@ -1,7 +1,232 @@ Sampling Methodology ==================== -Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec -lorem neque, interdum in ipsum nec, finibus dictum velit. Ut eu -efficitur arcu, id aliquam erat. In sit amet diam gravida, imperdiet -tellus eu, gravida nisl. \ No newline at end of file + +On this page +------------ + +* :ref:`introduction_sampling` +* :ref:`features_by_version` +* :ref:`effective_damageability` +* :ref:`monte_carlo_sampling` +* :ref:`numerical_calculations` +* :ref:`loss_allocation` +* :ref:`available_1.27` +* :ref:`latin_hypercube_sampling` +* :ref:`correlated_random_numbers` + + +| +.. 
_introduction_sampling: + +Introduction +************ + +---- + +This section explains the full ground up loss calculation, which consists of two main stages: + +1) calculation of probability distributions +2) the sampling method + + +This section covers the inputs to the ground up loss calculation and how they control the random numbers that are drawn, the seeding of random numbers, the algorithms used to draw random numbers, and the method of generating partially correlated random numbers in more recent releases of Oasis. + +| + +.. _features_by_version: + +Sampling features by version +**************************** + +---- + +More features have been added in the more recent oasislmf package versions. + +These can be summarized as follows: + +* 1.27 + * Latin Hypercube sampling + * Partially correlated random numbers for damage + * Full Monte Carlo Sampling + * Partially correlated random numbers for hazard intensity + +| + + +.. _effective_damageability: + +Effective damageability method +****************************** + +---- + +Effective damageability is the name of the method used for the construction of probability distributions. + +The model input files to this stage of the calculation are: + +* footprint +* vulnerability + +Hazard intensity uncertainty is represented in the footprint data, and damage uncertainty given the hazard intensity is represented in the vulnerability data. Both types of uncertainty are represented as discrete probability distributions. + +Effective damage is calculated during an analysis by combining ('convolving') the hazard intensity distribution with the conditional damage distribution. + +In the general case, the calculated effective damage distribution represents both uncertainty in the hazard intensity and in the level of damage given the intensity. + +However, it is common in models to have no hazard uncertainty distribution in the footprint. 
This is the case when each areaperil (representing a geographical area/cell) in the footprint is assigned a single hazard intensity bin with probability set to 1. The effective damage distribution is still generated, but it is the same as the conditional damage probability distribution in the vulnerability file for that single intensity bin. + +Under the effective damageability method, it is always the effective damage distribution that is sampled, but the sources of uncertainty that are represented may be damage only, or a combination of hazard intensity uncertainty and damage uncertainty, depending on the model files. + + +.. _monte_carlo_sampling: + +Monte Carlo sampling +******************** + +---- + +Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. + +The Oasis kernel performs Monte Carlo sampling of ground up loss from effective damage probability distributions by drawing random numbers. + +The probability distribution is provided by the effective damageability calculation described above, and the damage intervals are provided by a third model input file, the damage bin dictionary. + + +**Exposure inputs** + +The exposure data input files control all aspects of how ground up losses are sampled. The inputs are two related files: + +* items +* coverages + +Items are the smallest calculation unit in Oasis, and they represent the loss to an individual coverage of a single physical risk from a particular peril. The coverages file lists the exposure coverages (for example the building coverage of an individual site) with their total insured values. 
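The 'convolving' step of the effective damageability method described above can be illustrated with a toy example; the bin counts and probabilities below are invented for illustration only.

```python
import numpy as np

# Toy footprint entry: probabilities over 3 hazard intensity bins for one areaperil.
hazard_prob = np.array([0.5, 0.3, 0.2])

# Toy vulnerability data: conditional damage distributions over 4 damage bins,
# one row per hazard intensity bin (each row sums to 1).
damage_given_intensity = np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.4, 0.4, 0.2, 0.0],
    [0.1, 0.3, 0.4, 0.2],
])

# Effective damage distribution: for each damage bin, sum over intensity bins of
# P(intensity) * P(damage | intensity).
effective_damage = hazard_prob @ damage_given_intensity

assert abs(effective_damage.sum() - 1.0) < 1e-9  # still a probability distribution
```

In the special case of a single intensity bin with probability 1, the product simply returns the corresponding row of the vulnerability data, matching the behaviour described above.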
+ +The attributes of items are: + +* item_id - the unique identifier of the item +* coverage_id - identifier of the exposure coverage that an item is associated with (links to the coverages file) +* areaperil_id - identifier which determines which event footprints will impact the exposure and with what hazard intensity +* vulnerability_id - identifier which determines what the damage severity will be given the hazard intensity +* group_id - identifier that generates a set of random numbers that will be used to sample loss + +The attributes of coverages are: + +* coverage_id - the unique identifier of the exposure coverage +* tiv - the total insured value of the coverage + +For each item, the values of areaperil_id and vulnerability_id determine the inputs to the calculation of the effective damage distribution for each event, the group_id determines which set of random numbers will be used to sample damage, and the coverage_id determines which tiv the damage factor will be multiplied by to generate a loss sample. + +See correlation.rst for more information about how group_ids are generated. + +**Random number seeding** + +A random number set is seeded from the input keys 'event_id' and 'group_id'. This means that for each unique pair of values of 'event_id' and 'group_id', an independent set of random numbers is drawn. The size of the random number set is determined by the number of samples specified in the analysis settings. + +Seeded random number sets are repeatable. This means that for the same values of 'event_id' and 'group_id', the set of random numbers generated will always be the same. + +Wherever items are assigned the same group_id, the same set of random numbers will be used to sample ground up losses. The damage samples are fully correlated for these items, whereas they are uncorrelated with all items that have different assigned group_ids. 
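The seeding behaviour can be sketched as follows. The seed construction here is an invented stand-in for illustration; the Oasis kernel derives its seeds from event_id and group_id with its own hash scheme.

```python
import numpy as np

def random_set(event_id, group_id, num_samples):
    # Illustrative seed only: any deterministic combination of event_id and
    # group_id shows the idea; the kernel uses its own scheme.
    seed = (event_id * 100003 + group_id) % 2**32
    return np.random.default_rng(seed).uniform(size=num_samples)

# Same event_id and group_id -> identical random number set, so damage samples
# for items sharing a group_id are fully (rank) correlated.
a = random_set(event_id=1, group_id=10, num_samples=5)
b = random_set(event_id=1, group_id=10, num_samples=5)

# A different group_id -> an independent random number set.
c = random_set(event_id=1, group_id=11, num_samples=5)
```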
+ +Note that just because the random numbers used to sample from two items' damage distributions are the same does not mean the sampled damage factors will be the same. The damage factor also depends on the cumulative distribution function, which will vary across items. + +However, damage samples will have 'rank' correlation, meaning that the largest damage factors for two fully correlated items across the sample set will occur together, and the second largest damage factors will occur together, and so on. + +**Full correlation sampling across two different effective damage cdfs** + +.. image:: ../images/sampling2.png + :width: 600 + +| + +**Inverse transform sampling** + +Inverse transform sampling is a basic method for pseudo-random number sampling, i.e. for generating sample numbers at random from any probability distribution given its cumulative distribution function. + +In Oasis, all probability distributions are discrete. The cumulative probability is calculated for each damage interval threshold, and a random number (a value between 0 and 1) is matched to a damage bin when its value is between the cumulative probability lower and upper thresholds for the bin. Linear interpolation is performed between the lower and upper cumulative probability thresholds to calculate a damage factor between 0 and 1. + +**Inverse transform method for a discrete cumulative distribution function** + +.. image:: ../images/sampling1.png + :width: 600 + +| + +Each damage factor is multiplied by the total insured value of the exposed asset to produce a ground up loss sample. This is performed at the individual building coverage level, for each modelled peril, for every event in the model. This is repeated for the number of samples specified for the analysis. + +| + +.. _numerical_calculations: + +Numerical calculations +********************** + +---- + +| + +.. _loss_allocation: + +Loss allocation +*************** + +---- + +| + +.. 
_available_1.27: + +Available in OasisLMF 1.27 +************************** + +---- + +| + +.. _latin_hypercube_sampling: + +Latin Hypercube sampling +************************ + +---- + +| + +.. _correlated_random_numbers: + +Correlated random numbers +************************* + +---- + +A one-factor Gaussian copula generates correlated random numbers across group_ids for each peril correlation group and event. + +For an event, for each peril correlation group k and sample j, a random number Y_jk ~ N(0,1) is generated as the correlated random variable across groups. It is seeded from the event, sample j and peril correlation group k so that it is repeatable. + +For each event, sample j and group_id ik (ik = i locations times k peril groups), one random number, X_ijk ~ N(0,1), is generated as the noise/uncorrelated variable. The group_id is hashed from the location details and the peril correlation group id so that the random numbers are repeatable for the same item group and peril correlation group across analyses. + +The dependent variable Z_ijk ~ N(0,1) for peril correlation group k, sample j and group_id ik is + +Z_ijk = Y_jk √ρ_k + X_ijk √(1 − ρ_k) + +where ρ_k is the input correlation factor for peril correlation group k. + +The normal inverse function is used to transform independent uniform random numbers generated from the chosen RNG function (Mersenne Twister / Latin Hypercube) into the normally distributed random variables, X_ijk and Y_jk. The cumulative normal distribution function is used to transform the dependent normally distributed Z_ijk values to the uniform distribution; these are the correlated uniform random numbers to use for damage interpolation of the cdf. 
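A minimal sketch of this one-factor copula step, using the standard library's NormalDist for the normal inverse and cumulative transforms. The function name is invented, and the uniform inputs stand in for draws from the chosen RNG.

```python
from statistics import NormalDist

_nd = NormalDist()

def correlated_uniform(rho, y_uniform, x_uniform):
    """Combine a shared uniform (-> Y_jk) and an independent uniform (-> X_ijk)
    into one correlated uniform via Z = Y*sqrt(rho) + X*sqrt(1 - rho)."""
    y = _nd.inv_cdf(y_uniform)            # shared N(0,1) factor across groups
    x = _nd.inv_cdf(x_uniform)            # independent N(0,1) noise
    z = (rho ** 0.5) * y + ((1 - rho) ** 0.5) * x
    return _nd.cdf(z)                     # back to uniform for cdf interpolation

# rho = 0 reproduces the independent uniform; rho = 1 reproduces the shared one.
u0 = correlated_uniform(0.0, 0.3, 0.8)    # ≈ 0.8
u1 = correlated_uniform(1.0, 0.3, 0.8)    # ≈ 0.3
```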
+ +**Future enhancements** + +Further investigation and documentation is planned for enhancements to the correlation model, to include more options for coverage correlation within each group, and a chance of loss factor which would ensure the sampled correlations between losses are closer to the input correlation factor. The sampled correlation can be less than the input correlation factors in cases where zero losses are generated for some locations. + +These features are candidates for future implementation. + +**Correlation model extended to coverages** + +The one-factor Gaussian copula model can be extended to a multi-factor model to allow for different correlation factors between the coverages within an item group. The proposal is to enable a different factor to be entered for buildings and contents loss correlation, buildings and BI loss correlation, and contents and BI loss correlation. + +For i = 1, …, N groups, with peril group correlation ρ_k, buildings-contents correlation ρ_BC, buildings-BI correlation ρ_BBi and contents-BI correlation ρ_CBi, the correlated normally distributed random values can be generated from the following expressions. + +Z_(ijk,B) = Y_jk √ρ_k + Y_1ijk √ρ_BC + Y_2ijk √ρ_BBi + X_(ijk,B) √(1 − ρ_k − ρ_BC − ρ_BBi) + +Z_(ijk,C) = Y_jk √ρ_k + Y_1ijk √ρ_BC + Y_3ijk √ρ_CBi + X_(ijk,C) √(1 − ρ_k − ρ_BC − ρ_CBi) + +Z_(ijk,Bi) = Y_jk √ρ_k + Y_2ijk √ρ_BBi + Y_3ijk √ρ_CBi + X_(ijk,Bi) √(1 − ρ_k − ρ_BBi − ρ_CBi) + +where Y_jk, Y_1ijk, Y_2ijk, Y_3ijk, X_(ijk,B), X_(ijk,C), X_(ijk,Bi) are N(0,1) distributed. + +Y_jk is the global variable drawn once for all risks for each sample j and peril correlation group k; Y_1ijk, Y_2ijk, Y_3ijk are drawn for each group ik and sample j but are the same for each coverage; and X_(ijk,B), X_(ijk,C), X_(ijk,Bi) are drawn for each coverage, group ik and sample j. 
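The proposed three-coverage expressions can be sketched as follows, with a guard for the requirement that the residual term under each square root cannot go negative. The names are invented for illustration; this is a proposal, not current kernel behaviour.

```python
import math

def coverage_z(y_k, y1, y2, y3, x_b, x_c, x_bi,
               rho_k, rho_bc, rho_bbi, rho_cbi):
    """Return (Z_B, Z_C, Z_Bi) for one group and sample; all y*/x* arguments
    are independent N(0,1) draws."""
    residuals = (1 - rho_k - rho_bc - rho_bbi,   # buildings
                 1 - rho_k - rho_bc - rho_cbi,   # contents
                 1 - rho_k - rho_bbi - rho_cbi)  # BI
    if any(r < 0 for r in residuals):
        raise ValueError("correlation factors give a negative term under the square root")
    z_b = (y_k * math.sqrt(rho_k) + y1 * math.sqrt(rho_bc)
           + y2 * math.sqrt(rho_bbi) + x_b * math.sqrt(residuals[0]))
    z_c = (y_k * math.sqrt(rho_k) + y1 * math.sqrt(rho_bc)
           + y3 * math.sqrt(rho_cbi) + x_c * math.sqrt(residuals[1]))
    z_bi = (y_k * math.sqrt(rho_k) + y2 * math.sqrt(rho_bbi)
            + y3 * math.sqrt(rho_cbi) + x_bi * math.sqrt(residuals[2]))
    return z_b, z_c, z_bi
```

The squared coefficients in each expression sum to 1 by construction, so each Z remains N(0,1) when the guard passes.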
+ +There is not a free choice of each correlation factor between 0 and 1, because the last term in each of the above expressions cannot go negative under the square root. The requirement is that the correlation matrix of the coverage correlations must be positive definite, so further work is needed to establish the rules for how these correlations may be specified, and how to control the inputs to ensure the combinations of values entered are valid. A rule must also be specified for generating the ‘Other structure’ coverage random number, if this coverage is present. diff --git a/src/sections/versioning.rst b/src/sections/versioning.rst index ab7170a..f2c768c 100644 --- a/src/sections/versioning.rst +++ b/src/sections/versioning.rst @@ -3,8 +3,6 @@ Versioning | -.. _intro_versioning: - Introduction ************ @@ -14,8 +12,6 @@ This page lists what features were released with each verion. | -.. _1.28_versioning: - 1.28 **** @@ -34,8 +30,6 @@ This page lists what features were released with each verion. | -.. _1.27_versioning: - 1.27 **** @@ -56,8 +50,6 @@ This page lists what features were released with each verion. | -.. _1.26_versioning: - 1.26 **** @@ -71,35 +63,6 @@ This page lists what features were released with each verion. | -.. _1.25_versioning: - -1.25 -**** - ----- - -* Feature/docs -* Add supported OED versions to model metadata (model_settings.json) -* Feature/976 quantile -* Footprint server profiling - -| - -.. _1.24_versioning: - -1.24 -**** - ----- - -8* allow event subset to be passed in analysis settings -* Footprint server -* Enable the use of summary index files by ktools component aalcalc - -| - -.. _1.23_versioning: - 1.23 **** @@ -116,95 +79,6 @@ This page lists what features were released with each verion. | -.. 
_1.22_versioning: - -1.22 -**** - -* fmpy: areaperil_id 8 bytes support -* Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -* support parquet for OED -* stashing -* Step policies: support files with both step and non-step policies - ----- - -| - -.. _1.21_versioning: - -1.21 -**** - ----- - -* Max Ded back allocation -* fmpy: areaperil_id 8 bytes support -* Generate Quantile Event Loss Table (QELT) and Quantile Period Loss Table (QPLT) -* support parquet for OED - -| - -.. _1.20_versioning: - -1.20 -**** - ----- - -* Generate Moment Event Loss Table (MELT), Sample Event Loss Table (SELT), Moment Period Loss Table (MPLT) and Sample - Period Loss Table (SPLT) - -| - -.. _1.19_versioning: - -1.19 -**** - ----- - -* improve memory usage of fmpy - -| - -.. _1.18_versioning: - -1.18 -**** - ----- - -* correction for PolDed6All fields -* Add PALT to genbash -* Pol Fac Contracts - -| - -.. _1.17_versioning: - -1.17 -**** - ----- - -* Error handling for invalid oasislmf.json config files - -| - -.. _1.16_versioning: - -1.16 -**** - ----- - -* Store analysis run settings to outputs via the MDK - -| - -.. _1.15_versioning: - 1.15 **** @@ -216,173 +90,5 @@ This page lists what features were released with each verion. * The Group ids can now be set by the following internal oasis fields 'item_id', 'peril_id', 'coverage_id', and 'coverage_type_id' * Added validation for unsupported special conditions -* - -| - -.. _1.14_versioning: - -1.14 -**** - ----- - -**Nothing notable** - -| - -.. _1.13_versioning: - -1.13 -**** - ----- - -* Add CLI flags for lookup multiprocessing options -* Added fmpy support for stepped policies -* Added user defined return periods option to analysis_settings.json -* Enabled Fmpy to handle multiple input streams - -| - -.. 
_1.12_versioning: - -1.12 -**** - ----- - -* Peril Handling in Input Generation -* Added experimental financial module written in Python 'fmpy' -* Define relationships between event and occurrence in model_settings - -| - -.. _1.11_versioning: - -1.11 -**** - ----- - -**Nothing notable** - -| - -.. _1.10_versioning: - -1.10 -**** - ----- - -* Extract and apply default values for OED mapped FM terms -* Split calc. rules files -* Include unsupported coverages in type 2 financial terms calculation -* Integration of GUL-FM load balancer -* Refactor oasislmf package - -| - -.. _1.9_versioning: - -1.9 -**** - ----- - -* Add type 2 financial terms tests for multi-peril to regression test -* Added Scripts for generated example model data for testing - -| - -.. _1.8_versioning: - -1.8 -**** - ----- - -* Install complex_itemstobin and complex_itemstocsv by default -* Add FM Tests May 2020 -* Add JSON schema validation on CLI -* Add api client progressbars for OasisAtScale - -| - -.. _1.7_versioning: - -1.7 -**** - ----- - -* item file ordering of item_id -* extend calcrules -* Add exception wrapping to OasisException -* Pre-analysis exposure modification (CLI interface) - -| - -.. _1.6_versioning: - -1.6 -**** - ----- - -* Extend calcrules to cover more combinations of financial terms -* Improve performance in write_exposure_summary() -* Long description field to model_settings.json schema -* Total TIV sums in exposure report -* Group OED fields from model settings - -| - -.. _1.5_versioning: - -1.5 -**** - ----- - -* Step Policy features supported -* Command line option for setting group_id -* CLI option to set a complex model gulcalc command -* Update to the Model Settings schema - -| - -.. _1.4_versioning: - -1.4 -**** - ----- - -* all custom lookups now need to set a loc_id column in the loc. dataframe -* new gulcalc stream type - -| - -.. _1.3_versioning: - -1.3 -**** - ----- - -**Nothing notable** - -| - -.. 
_1.2_versioning: - -1.2 -**** - ----- - -**Nothing notable** | \ No newline at end of file