diff --git a/v1.1.0/.buildinfo b/v1.1.0/.buildinfo new file mode 100644 index 0000000..31ca022 --- /dev/null +++ b/v1.1.0/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: 57a4fb9b09d49636494a0ab425b8341e +tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/v1.1.0/_sources/datasets.rst.txt b/v1.1.0/_sources/datasets.rst.txt new file mode 100644 index 0000000..ab84625 --- /dev/null +++ b/v1.1.0/_sources/datasets.rst.txt @@ -0,0 +1,2 @@ +.. include:: ../DATASETS.md + :parser: myst_parser.sphinx_ \ No newline at end of file diff --git a/v1.1.0/_sources/explainer/attributions.md.txt b/v1.1.0/_sources/explainer/attributions.md.txt new file mode 100644 index 0000000..bb1d936 --- /dev/null +++ b/v1.1.0/_sources/explainer/attributions.md.txt @@ -0,0 +1,11 @@ +(attributions)= + +```{include} ../../explainer/attributions/README.md +``` + +```{eval-rst} + +.. automodule:: explainer.attributions.attributions + :members: + +``` diff --git a/v1.1.0/_sources/explainer/cam.md.txt b/v1.1.0/_sources/explainer/cam.md.txt new file mode 100644 index 0000000..4fc63cd --- /dev/null +++ b/v1.1.0/_sources/explainer/cam.md.txt @@ -0,0 +1,11 @@ +(cam)= + +```{include} ../../explainer/cam/README.md +``` + +```{eval-rst} + +.. automodule:: explainer.cam.cam + :members: + +``` diff --git a/v1.1.0/_sources/explainer/index.md.txt b/v1.1.0/_sources/explainer/index.md.txt new file mode 100644 index 0000000..a4a7893 --- /dev/null +++ b/v1.1.0/_sources/explainer/index.md.txt @@ -0,0 +1,44 @@ +(explainer)= +# Explainer + +Explainer is a Python module in Intel® Explainable AI Tools that provides explainability methods for PyTorch and TensorFlow models. + +## Goals + +````{grid} 3 + +```{grid-item-card} +:text-align: center +:class-header: sd-font-weight-bold +:class-body: sd-font-italic +{octicon}`workflow` Composable +^^^ +Add explainers to model methods with minimal code +``` + +```{grid-item-card} +:text-align: center +:class-header: sd-font-weight-bold +:class-body: sd-font-italic +{octicon}`stack` Extensible +^^^ +Easy to add new methods +``` + +```{grid-item-card} +:text-align: center +:class-header: sd-font-weight-bold +:class-body: sd-font-italic +{octicon}`package-dependencies` Community +^^^ +Contributions welcome +``` + +```` + +## Explainer Submodules + +* {ref}`attributions`: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions +* {ref}`cam`: Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (CAM) +* {ref}`metrics`: Gain insight into models with the measurements and visualizations needed during the machine learning workflow + diff --git a/v1.1.0/_sources/explainer/metrics.md.txt b/v1.1.0/_sources/explainer/metrics.md.txt new file mode 100644 index 0000000..f04898a --- /dev/null +++ b/v1.1.0/_sources/explainer/metrics.md.txt @@ -0,0 +1,13 @@ + +(metrics)= + +```{include} ../../explainer/metrics/README.md +``` +## API Reference + +```{eval-rst} + +.. 
automodule:: explainer.metrics.metrics + :members: + +``` diff --git a/v1.1.0/_sources/index.md.txt b/v1.1.0/_sources/index.md.txt new file mode 100644 index 0000000..9e3fa31 --- /dev/null +++ b/v1.1.0/_sources/index.md.txt @@ -0,0 +1,2 @@ +```{include} markdown/Welcome.md +``` diff --git a/v1.1.0/_sources/install.rst.txt b/v1.1.0/_sources/install.rst.txt new file mode 100644 index 0000000..e81d8ee --- /dev/null +++ b/v1.1.0/_sources/install.rst.txt @@ -0,0 +1,2 @@ +.. include:: markdown/Install.md + :parser: myst_parser.sphinx_ diff --git a/v1.1.0/_sources/legal.rst.txt b/v1.1.0/_sources/legal.rst.txt new file mode 100644 index 0000000..e12b4db --- /dev/null +++ b/v1.1.0/_sources/legal.rst.txt @@ -0,0 +1,2 @@ +.. include:: markdown/Legal.md + :parser: myst_parser.sphinx_ diff --git a/v1.1.0/_sources/markdown/Install.md.txt b/v1.1.0/_sources/markdown/Install.md.txt new file mode 100644 index 0000000..112e93c --- /dev/null +++ b/v1.1.0/_sources/markdown/Install.md.txt @@ -0,0 +1,125 @@ +## Installation +### Software Requirements +* Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS) +* Python 3.9, 3.10 +* Install required OS packages with `apt-get install build-essential python3-dev` +* git (only required for the "Developer Installation") +* Poetry + +### Developer Installation with Poetry + +Use these instructions to install the Intel AI Safety Python library with a clone of the +GitHub repository. This can be done instead of the basic pip install if you plan +on making code changes. + +1. Clone this repo and navigate to the repo directory. + +2. Allow Poetry to create a virtual environment in the `.venv` directory of the current directory. + + ```bash + poetry lock + ``` + In addition, you can explicitly tell Poetry which Python instance to use: + + ```bash + poetry env use /full/path/to/python + ``` + +3. Choose the `intel_ai_safety` subpackages and plugins that you wish to install. + + a. Install `intel_ai_safety` with all of its subpackages (e.g. `explainer` and `model_card_gen`) and plugins + ```bash + poetry install --extras all + ``` + + b. Install `intel_ai_safety` with just `explainer` + ```bash + poetry install --extras explainer + ``` + + c. Install `intel_ai_safety` with just `model_card_gen` + ```bash + poetry install --extras model-card + ``` + + d. Install `intel_ai_safety` with `explainer` and all of its plugins + ```bash + poetry install --extras explainer-all + ``` + + e. Install `intel_ai_safety` with `explainer` and just its PyTorch implementations + + ```bash + poetry install --extras explainer-pytorch + ``` + + f. Install `intel_ai_safety` with `explainer` and just its TensorFlow implementations + + ```bash + poetry install --extras explainer-tensorflow + ``` + +4. Activate the environment: + + ```bash + source .venv/bin/activate + ``` + +### Install to an existing environment with Poetry + +#### Create and activate a Python3 virtual environment +We encourage you to use a Python virtual environment (virtualenv or conda) for consistent package management. +There are two ways to do this: +1. Choose a virtual environment to use: + a. Using `virtualenv`: + ```bash + python3 -m virtualenv xai_env + source xai_env/bin/activate + ``` + + b. Or `conda`: + ```bash + conda create --name xai_env python=3.9 + conda activate xai_env + ``` +2. 
Install to the current environment + ```bash + poetry config virtualenvs.create false && poetry install --extras all + ``` + +### Additional Feature-Specific Steps +Notebooks may require additional dependencies listed in their associated documentation. + +### Verify Installation + +Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions: +```bash +python -c "from intel_ai_safety.explainer import version; print(version.__version__)" +python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)" +``` + +## Running Notebooks + +The following links have Jupyter* notebooks showing how to use the Explainer and Model Card Generator APIs in various ML domains and use cases: +* [Model Card Generator Notebooks]() +* [Explainer Notebooks]() + +## Support + +The Intel Explainable AI Tools team tracks bugs and enhancement requests using +[GitHub issues](https://github.com/intel/intel-xai-tools/issues). Before submitting a +suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported. + +*Other names and brands may be claimed as the property of others. [Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) + +#### DISCLAIMER +These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog. + +Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. + +#### License +Intel® Explainable AI Tools is licensed under Apache License Version 2.0. + +#### Datasets and Models +To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. [DATASETS]() +*Other names and brands may be claimed as the property of others. [Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) diff --git a/v1.1.0/_sources/markdown/Legal.md.txt b/v1.1.0/_sources/markdown/Legal.md.txt new file mode 100644 index 0000000..9d075d7 --- /dev/null +++ b/v1.1.0/_sources/markdown/Legal.md.txt @@ -0,0 +1,13 @@ +# Legal Information +## Disclaimer +These scripts are not intended for benchmarking Intel® platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog. + +Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. 
+ +## License +Intel® Explainable AI Tools is licensed under Apache License Version 2.0. + +## Datasets and Models +To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. [DATASETS]() + +Intel expressly disclaims the accuracy, adequacy, or completeness of any data, datasets or models, and is not liable for any errors, omissions, or defects in such content, or for any reliance thereon. Intel also expressly disclaims any warranty of non-infringement with respect to such data, dataset(s), or model(s). Intel is not liable for any liability or damages relating to your use of such data, datasets, or models. diff --git a/v1.1.0/_sources/markdown/Overview.md.txt b/v1.1.0/_sources/markdown/Overview.md.txt new file mode 100644 index 0000000..156cda3 --- /dev/null +++ b/v1.1.0/_sources/markdown/Overview.md.txt @@ -0,0 +1,14 @@ +## Overview + +The Intel® Explainable AI Tools are designed to help users detect and mitigate issues of fairness and interpretability, while running best on Intel hardware. +There are two Python* components in the repository: + +* [Model Card Generator](model_card_gen) + * Creates interactive HTML reports containing model performance and fairness metrics +* [Explainer](explainer) + * Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API including the following modules: + * [Attributions](plugins/explainers/attributions): Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions + * [CAM (Class Activation Mapping)](plugins/explainers/cam-pytorch): Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (CAM) + * [Metrics](plugins/explainers/metrics): Gain insight into models with the measurements and visualizations needed during the machine learning workflow + +*Other names and brands may be claimed as the property of others. [Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) diff --git a/v1.1.0/_sources/markdown/Welcome.md.txt b/v1.1.0/_sources/markdown/Welcome.md.txt new file mode 100644 index 0000000..c01c4d9 --- /dev/null +++ b/v1.1.0/_sources/markdown/Welcome.md.txt @@ -0,0 +1,144 @@ +# Intel® Explainable AI Tools + +This repository provides tools for data scientists and MLOps engineers who have requirements specific to AI model interpretability. + +## Overview + +The Intel Explainable AI Tools are designed to help users detect and mitigate issues of fairness and interpretability, while running best on Intel hardware. 
+There are two Python* components in the repository: + +* [Model Card Generator](model_card_gen) + * Creates interactive HTML reports containing model performance and fairness metrics +* [Explainer](explainer) + * Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API including the following modules: + * [Attributions](plugins/explainers/attributions): Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions + * [CAM (Class Activation Mapping)](plugins/explainers/cam-pytorch): Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (CAM) + * [Metrics](plugins/explainers/metrics): Gain insight into models with the measurements and visualizations needed during the machine learning workflow + +## Get Started + +### Requirements +* Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS) +* Python 3.9, 3.10 +* Install required OS packages with `apt-get install build-essential python3-dev` +* git (only required for the "Developer Installation") +* Poetry + +### Developer Installation with Poetry + +Use these instructions to install the Intel AI Safety Python library with a clone of the +GitHub repository. This can be done instead of the basic pip install if you plan +on making code changes. + +1. Clone this repo and navigate to the repo directory. + +2. Allow Poetry to create a virtual environment in the `.venv` directory of the current directory. + + ```bash + poetry lock + ``` + In addition, you can explicitly tell Poetry which Python instance to use: + + ```bash + poetry env use /full/path/to/python + ``` + +3. Choose the `intel_ai_safety` subpackages and plugins that you wish to install. + + a. Install `intel_ai_safety` with all of its subpackages (e.g. `explainer` and `model_card_gen`) and plugins + ```bash + poetry install --extras all + ``` + + b. Install `intel_ai_safety` with just `explainer` + ```bash + poetry install --extras explainer + ``` + + c. Install `intel_ai_safety` with just `model_card_gen` + ```bash + poetry install --extras model-card + ``` + + d. Install `intel_ai_safety` with `explainer` and all of its plugins + ```bash + poetry install --extras explainer-all + ``` + + e. Install `intel_ai_safety` with `explainer` and just its PyTorch implementations + + ```bash + poetry install --extras explainer-pytorch + ``` + + f. Install `intel_ai_safety` with `explainer` and just its TensorFlow implementations + + ```bash + poetry install --extras explainer-tensorflow + ``` + +4. Activate the environment: + + ```bash + source .venv/bin/activate + ``` + +### Install to an existing environment with Poetry + +#### Create and activate a Python3 virtual environment +We encourage you to use a Python virtual environment (virtualenv or conda) for consistent package management. +There are two ways to do this: +1. Choose a virtual environment to use: + a. Using `virtualenv`: + ```bash + python3 -m virtualenv xai_env + source xai_env/bin/activate + ``` + + b. Or `conda`: + ```bash + conda create --name xai_env python=3.9 + conda activate xai_env + ``` +2. Install to the current environment + ```bash + poetry config virtualenvs.create false && poetry install --extras all + ``` + +### Additional Feature-Specific Steps +Notebooks may require additional dependencies listed in their associated documentation. 
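+ +Before running the verification commands in the next section, you can sanity-check which optional components your chosen extras actually installed. The short script below is an illustrative sketch rather than part of the documented workflow: the only project-specific names it uses are the two `version` modules from the Verify Installation commands, and it simply treats a failed import as a component that was not installed. + +```python +# Illustrative sketch: report which intel_ai_safety components are importable. +# The two version module paths come from the Verify Installation commands below; +# everything else is plain standard-library Python. +from importlib import import_module + +components = { +    "Explainer": "intel_ai_safety.explainer.version", +    "Model Card Generator": "intel_ai_safety.model_card_gen.version", +} +for name, module_path in components.items(): +    try: +        print(f"{name}: {import_module(module_path).__version__}") +    except ImportError: +        print(f"{name}: not installed (re-run poetry install with a different --extras option)") +```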
+ +### Verify Installation + +Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions: +```bash +python -c "from intel_ai_safety.explainer import version; print(version.__version__)" +python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)" +``` + +## Running Notebooks + +The following links have Jupyter* notebooks showing how to use the Explainer and Model Card Generator APIs in various ML domains and use cases: +* [Model Card Generator Notebooks]() +* [Explainer Notebooks]() + +## Support + +The Intel Explainable AI Tools team tracks bugs and enhancement requests using +[GitHub issues](https://github.com/intel/intel-xai-tools/issues). Before submitting a +suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported. + +*Other names and brands may be claimed as the property of others. [Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) + +#### DISCLAIMER +These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog. + +Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. + +#### License +Intel® Explainable AI Tools is licensed under Apache License Version 2.0. + +#### Datasets and Models +To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. [DATASETS]() + +Intel expressly disclaims the accuracy, adequacy, or completeness of any data, datasets or models, and is not liable for any errors, omissions, or defects in such content, or for any reliance thereon. Intel also expressly disclaims any warranty of non-infringement with respect to such data, dataset(s), or model(s). Intel is not liable for any liability or damages relating to your use of such data, datasets, or models. diff --git a/v1.1.0/_sources/model_card_gen/api.rst.txt b/v1.1.0/_sources/model_card_gen/api.rst.txt new file mode 100644 index 0000000..c76584f --- /dev/null +++ b/v1.1.0/_sources/model_card_gen/api.rst.txt @@ -0,0 +1,10 @@ +API Reference +============= + +Model Card Generator +-------------------- + +.. currentmodule:: model_card_gen + +.. automodule:: model_card_gen.model_card_gen + :members: diff --git a/v1.1.0/_sources/model_card_gen/example.md.txt b/v1.1.0/_sources/model_card_gen/example.md.txt new file mode 100644 index 0000000..814975a --- /dev/null +++ b/v1.1.0/_sources/model_card_gen/example.md.txt @@ -0,0 +1,13 @@ +--- +sd_hide_title: true +--- +(mcg_example)= +# Example Model Card + +```{eval-rst} + +.. 
raw:: html + :file: ../../model_card_gen/docs/examples/html/compas_model_card.html + +``` + diff --git a/v1.1.0/_sources/model_card_gen/index.md.txt b/v1.1.0/_sources/model_card_gen/index.md.txt new file mode 100644 index 0000000..14ca1d6 --- /dev/null +++ b/v1.1.0/_sources/model_card_gen/index.md.txt @@ -0,0 +1,4 @@ +(mcg)= + +```{include} ../../model_card_gen/README.md +``` diff --git a/v1.1.0/_sources/notebooks.rst.txt b/v1.1.0/_sources/notebooks.rst.txt new file mode 100644 index 0000000..4ab6aa7 --- /dev/null +++ b/v1.1.0/_sources/notebooks.rst.txt @@ -0,0 +1,30 @@ +Example Notebooks +================= + +Explainer Notebooks +------------------- + +.. csv-table:: + :header: "Notebook", "Domain: Use Case", "Framework" + :widths: 50, 20, 30 + + :doc:`Explaining ResNet50 ImageNet Classification Using the CAM Explainer <notebooks/ExplainingImageClassification>`, "CV: Image Classification", "PyTorch*, TensorFlow* and Intel® Explainable AI API" + :doc:`Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer <notebooks/heart_disease>`, "Numerical/Categorical: Tabular Classification", "TensorFlow & Intel Explainable AI API" + :doc:`Explaining Custom CNN MNIST Classification Using the Attributions Explainer <notebooks/mnist>`, "CV: Image Classification", "PyTorch and Intel Explainable AI API" + :doc:`Multimodal Breast Cancer Detection Explainability using the Intel® Explainable AI API <notebooks/Multimodal_Cancer_Detection>`, "CV: Image Classification & NLP: Text Classification", "PyTorch, HuggingFace, Intel Explainable AI API & Intel® Transfer Learning Tool API" + :doc:`Explaining Custom NN NewsGroups Classification Using the Attributions Explainer <notebooks/partitionexplainer>`, "NLP: Text Classification", "PyTorch and Intel Explainable AI API" + :doc:`Explaining Fine Tuned Text Classifier with PyTorch using the Intel® Explainable AI API <notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions>`, "NLP: Text Classification", "PyTorch, HuggingFace, Intel Explainable AI API & Intel Transfer Learning Tool API" + :doc:`Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer <notebooks/TorchVision_CIFAR_Interpret>`, "CV: Image Classification", "PyTorch and Intel Explainable AI API" + +Model Card Generator Notebooks +------------------------------ + +.. csv-table:: + :header: "Notebook", "Domain: Use Case", "Framework" + :widths: 50, 20, 30 + + :doc:`Generating a Model Card with PyTorch <notebooks/adult-pytorch-model-card>`, "Numerical/Categorical: Tabular Classification", "PyTorch, TensorFlow and Intel Explainable AI API" + :doc:`Detecting Issues in Fairness by Generating a Model Card from TensorFlow Estimators <notebooks/compas-model-card-tfx>`, "Numerical/Categorical: Tabular Classification", "TensorFlow & Intel Explainable AI API" + :doc:`Creating Model Card for Toxic Comments Classification in TensorFlow <notebooks/toxicity-tfma-model-card>`, "Numerical/Categorical: Tabular Classification", "TensorFlow and Intel Explainable AI API" + +\*Other names and brands may be claimed as the property of others. 
`Trademarks <http://www.intel.com/content/www/us/en/legal/trademarks.html>`_ diff --git a/v1.1.0/_sources/notebooks/ExplainingImageClassification.nblink.txt b/v1.1.0/_sources/notebooks/ExplainingImageClassification.nblink.txt new file mode 100644 index 0000000..caf80cb --- /dev/null +++ b/v1.1.0/_sources/notebooks/ExplainingImageClassification.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/imagenet_with_cam/ExplainingImageClassification.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/Multimodal_Cancer_Detection.nblink.txt b/v1.1.0/_sources/notebooks/Multimodal_Cancer_Detection.nblink.txt new file mode 100644 index 0000000..3ef386f --- /dev/null +++ b/v1.1.0/_sources/notebooks/Multimodal_Cancer_Detection.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/multimodal_cancer_detection/Multimodal_Cancer_Detection.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.nblink.txt b/v1.1.0/_sources/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.nblink.txt new file mode 100644 index 0000000..fb34e71 --- /dev/null +++ b/v1.1.0/_sources/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/transfer_learning_text_classification/PyTorch_Text_Classifier_fine_tuning_with_Attributions.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/TorchVision_CIFAR_Interpret.nblink.txt b/v1.1.0/_sources/notebooks/TorchVision_CIFAR_Interpret.nblink.txt new file mode 100644 index 0000000..debeb5f --- /dev/null +++ b/v1.1.0/_sources/notebooks/TorchVision_CIFAR_Interpret.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/cifar_with_attributions/TorchVision_CIFAR_Interpret.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/adult-pytorch-model-card.nblink.txt b/v1.1.0/_sources/notebooks/adult-pytorch-model-card.nblink.txt new file mode 100644 index 0000000..e2c1c4f --- /dev/null +++ b/v1.1.0/_sources/notebooks/adult-pytorch-model-card.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/model_card_gen/model_card_generation_with_pytorch/adult-pytorch-model-card.ipynb" +} diff --git a/v1.1.0/_sources/notebooks/compas-model-card-tfx.nblink.txt b/v1.1.0/_sources/notebooks/compas-model-card-tfx.nblink.txt new file mode 100644 index 0000000..d23e845 --- /dev/null +++ b/v1.1.0/_sources/notebooks/compas-model-card-tfx.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/model_card_gen/compas_with_model_card_gen/compas-model-card-tfx.ipynb" +} diff --git a/v1.1.0/_sources/notebooks/heart_disease.nblink.txt b/v1.1.0/_sources/notebooks/heart_disease.nblink.txt new file mode 100644 index 0000000..7ef987a --- /dev/null +++ b/v1.1.0/_sources/notebooks/heart_disease.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/heart_disease_with_attributions/heart_disease.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/mnist.nblink.txt b/v1.1.0/_sources/notebooks/mnist.nblink.txt new file mode 100644 index 0000000..428fc96 --- /dev/null +++ b/v1.1.0/_sources/notebooks/mnist.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/mnist_with_attributions_and_metrics/mnist.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/partitionexplainer.nblink.txt b/v1.1.0/_sources/notebooks/partitionexplainer.nblink.txt new file mode 100644 index 0000000..2663e12 --- /dev/null +++ b/v1.1.0/_sources/notebooks/partitionexplainer.nblink.txt 
@@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/explainer/newsgroups_with_attributions_and_metrics/partitionexplainer.ipynb" +} \ No newline at end of file diff --git a/v1.1.0/_sources/notebooks/toxicity-tfma-model-card.nblink.txt b/v1.1.0/_sources/notebooks/toxicity-tfma-model-card.nblink.txt new file mode 100644 index 0000000..03163cb --- /dev/null +++ b/v1.1.0/_sources/notebooks/toxicity-tfma-model-card.nblink.txt @@ -0,0 +1,3 @@ +{ + "path": "../../notebooks/model_card_gen/toxic_comments_classification/toxicity-tfma-model-card.ipynb" +} diff --git a/v1.1.0/_sources/overview.rst.txt b/v1.1.0/_sources/overview.rst.txt new file mode 100644 index 0000000..93c06be --- /dev/null +++ b/v1.1.0/_sources/overview.rst.txt @@ -0,0 +1,2 @@ +.. include:: markdown/Overview.md + :parser: myst_parser.sphinx_ diff --git a/v1.1.0/_sphinx_design_static/design-style.1e8bd061cd6da7fc9cf755528e8ffc24.min.css b/v1.1.0/_sphinx_design_static/design-style.1e8bd061cd6da7fc9cf755528e8ffc24.min.css new file mode 100644 index 0000000..eb19f69 --- /dev/null +++ b/v1.1.0/_sphinx_design_static/design-style.1e8bd061cd6da7fc9cf755528e8ffc24.min.css @@ -0,0 +1 @@ +.sd-bg-primary{background-color:var(--sd-color-primary) !important}.sd-bg-text-primary{color:var(--sd-color-primary-text) !important}button.sd-bg-primary:focus,button.sd-bg-primary:hover{background-color:var(--sd-color-primary-highlight) !important}a.sd-bg-primary:focus,a.sd-bg-primary:hover{background-color:var(--sd-color-primary-highlight) !important}.sd-bg-secondary{background-color:var(--sd-color-secondary) !important}.sd-bg-text-secondary{color:var(--sd-color-secondary-text) !important}button.sd-bg-secondary:focus,button.sd-bg-secondary:hover{background-color:var(--sd-color-secondary-highlight) !important}a.sd-bg-secondary:focus,a.sd-bg-secondary:hover{background-color:var(--sd-color-secondary-highlight) !important}.sd-bg-success{background-color:var(--sd-color-success) !important}.sd-bg-text-success{color:var(--sd-color-success-text) !important}button.sd-bg-success:focus,button.sd-bg-success:hover{background-color:var(--sd-color-success-highlight) !important}a.sd-bg-success:focus,a.sd-bg-success:hover{background-color:var(--sd-color-success-highlight) !important}.sd-bg-info{background-color:var(--sd-color-info) !important}.sd-bg-text-info{color:var(--sd-color-info-text) !important}button.sd-bg-info:focus,button.sd-bg-info:hover{background-color:var(--sd-color-info-highlight) !important}a.sd-bg-info:focus,a.sd-bg-info:hover{background-color:var(--sd-color-info-highlight) !important}.sd-bg-warning{background-color:var(--sd-color-warning) !important}.sd-bg-text-warning{color:var(--sd-color-warning-text) !important}button.sd-bg-warning:focus,button.sd-bg-warning:hover{background-color:var(--sd-color-warning-highlight) !important}a.sd-bg-warning:focus,a.sd-bg-warning:hover{background-color:var(--sd-color-warning-highlight) !important}.sd-bg-danger{background-color:var(--sd-color-danger) !important}.sd-bg-text-danger{color:var(--sd-color-danger-text) !important}button.sd-bg-danger:focus,button.sd-bg-danger:hover{background-color:var(--sd-color-danger-highlight) !important}a.sd-bg-danger:focus,a.sd-bg-danger:hover{background-color:var(--sd-color-danger-highlight) !important}.sd-bg-light{background-color:var(--sd-color-light) !important}.sd-bg-text-light{color:var(--sd-color-light-text) !important}button.sd-bg-light:focus,button.sd-bg-light:hover{background-color:var(--sd-color-light-highlight) 
!important}a.sd-bg-light:focus,a.sd-bg-light:hover{background-color:var(--sd-color-light-highlight) !important}.sd-bg-muted{background-color:var(--sd-color-muted) !important}.sd-bg-text-muted{color:var(--sd-color-muted-text) !important}button.sd-bg-muted:focus,button.sd-bg-muted:hover{background-color:var(--sd-color-muted-highlight) !important}a.sd-bg-muted:focus,a.sd-bg-muted:hover{background-color:var(--sd-color-muted-highlight) !important}.sd-bg-dark{background-color:var(--sd-color-dark) !important}.sd-bg-text-dark{color:var(--sd-color-dark-text) !important}button.sd-bg-dark:focus,button.sd-bg-dark:hover{background-color:var(--sd-color-dark-highlight) !important}a.sd-bg-dark:focus,a.sd-bg-dark:hover{background-color:var(--sd-color-dark-highlight) !important}.sd-bg-black{background-color:var(--sd-color-black) !important}.sd-bg-text-black{color:var(--sd-color-black-text) !important}button.sd-bg-black:focus,button.sd-bg-black:hover{background-color:var(--sd-color-black-highlight) !important}a.sd-bg-black:focus,a.sd-bg-black:hover{background-color:var(--sd-color-black-highlight) !important}.sd-bg-white{background-color:var(--sd-color-white) !important}.sd-bg-text-white{color:var(--sd-color-white-text) !important}button.sd-bg-white:focus,button.sd-bg-white:hover{background-color:var(--sd-color-white-highlight) !important}a.sd-bg-white:focus,a.sd-bg-white:hover{background-color:var(--sd-color-white-highlight) !important}.sd-text-primary,.sd-text-primary>p{color:var(--sd-color-primary) !important}a.sd-text-primary:focus,a.sd-text-primary:hover{color:var(--sd-color-primary-highlight) !important}.sd-text-secondary,.sd-text-secondary>p{color:var(--sd-color-secondary) !important}a.sd-text-secondary:focus,a.sd-text-secondary:hover{color:var(--sd-color-secondary-highlight) !important}.sd-text-success,.sd-text-success>p{color:var(--sd-color-success) !important}a.sd-text-success:focus,a.sd-text-success:hover{color:var(--sd-color-success-highlight) !important}.sd-text-info,.sd-text-info>p{color:var(--sd-color-info) !important}a.sd-text-info:focus,a.sd-text-info:hover{color:var(--sd-color-info-highlight) !important}.sd-text-warning,.sd-text-warning>p{color:var(--sd-color-warning) !important}a.sd-text-warning:focus,a.sd-text-warning:hover{color:var(--sd-color-warning-highlight) !important}.sd-text-danger,.sd-text-danger>p{color:var(--sd-color-danger) !important}a.sd-text-danger:focus,a.sd-text-danger:hover{color:var(--sd-color-danger-highlight) !important}.sd-text-light,.sd-text-light>p{color:var(--sd-color-light) !important}a.sd-text-light:focus,a.sd-text-light:hover{color:var(--sd-color-light-highlight) !important}.sd-text-muted,.sd-text-muted>p{color:var(--sd-color-muted) !important}a.sd-text-muted:focus,a.sd-text-muted:hover{color:var(--sd-color-muted-highlight) !important}.sd-text-dark,.sd-text-dark>p{color:var(--sd-color-dark) !important}a.sd-text-dark:focus,a.sd-text-dark:hover{color:var(--sd-color-dark-highlight) !important}.sd-text-black,.sd-text-black>p{color:var(--sd-color-black) !important}a.sd-text-black:focus,a.sd-text-black:hover{color:var(--sd-color-black-highlight) !important}.sd-text-white,.sd-text-white>p{color:var(--sd-color-white) !important}a.sd-text-white:focus,a.sd-text-white:hover{color:var(--sd-color-white-highlight) !important}.sd-outline-primary{border-color:var(--sd-color-primary) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-primary:focus,a.sd-outline-primary:hover{border-color:var(--sd-color-primary-highlight) 
!important}.sd-outline-secondary{border-color:var(--sd-color-secondary) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-secondary:focus,a.sd-outline-secondary:hover{border-color:var(--sd-color-secondary-highlight) !important}.sd-outline-success{border-color:var(--sd-color-success) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-success:focus,a.sd-outline-success:hover{border-color:var(--sd-color-success-highlight) !important}.sd-outline-info{border-color:var(--sd-color-info) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-info:focus,a.sd-outline-info:hover{border-color:var(--sd-color-info-highlight) !important}.sd-outline-warning{border-color:var(--sd-color-warning) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-warning:focus,a.sd-outline-warning:hover{border-color:var(--sd-color-warning-highlight) !important}.sd-outline-danger{border-color:var(--sd-color-danger) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-danger:focus,a.sd-outline-danger:hover{border-color:var(--sd-color-danger-highlight) !important}.sd-outline-light{border-color:var(--sd-color-light) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-light:focus,a.sd-outline-light:hover{border-color:var(--sd-color-light-highlight) !important}.sd-outline-muted{border-color:var(--sd-color-muted) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-muted:focus,a.sd-outline-muted:hover{border-color:var(--sd-color-muted-highlight) !important}.sd-outline-dark{border-color:var(--sd-color-dark) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-dark:focus,a.sd-outline-dark:hover{border-color:var(--sd-color-dark-highlight) !important}.sd-outline-black{border-color:var(--sd-color-black) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-black:focus,a.sd-outline-black:hover{border-color:var(--sd-color-black-highlight) !important}.sd-outline-white{border-color:var(--sd-color-white) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-white:focus,a.sd-outline-white:hover{border-color:var(--sd-color-white-highlight) !important}.sd-bg-transparent{background-color:transparent !important}.sd-outline-transparent{border-color:transparent !important}.sd-text-transparent{color:transparent !important}.sd-p-0{padding:0 !important}.sd-pt-0,.sd-py-0{padding-top:0 !important}.sd-pr-0,.sd-px-0{padding-right:0 !important}.sd-pb-0,.sd-py-0{padding-bottom:0 !important}.sd-pl-0,.sd-px-0{padding-left:0 !important}.sd-p-1{padding:.25rem !important}.sd-pt-1,.sd-py-1{padding-top:.25rem !important}.sd-pr-1,.sd-px-1{padding-right:.25rem !important}.sd-pb-1,.sd-py-1{padding-bottom:.25rem !important}.sd-pl-1,.sd-px-1{padding-left:.25rem !important}.sd-p-2{padding:.5rem !important}.sd-pt-2,.sd-py-2{padding-top:.5rem !important}.sd-pr-2,.sd-px-2{padding-right:.5rem !important}.sd-pb-2,.sd-py-2{padding-bottom:.5rem !important}.sd-pl-2,.sd-px-2{padding-left:.5rem !important}.sd-p-3{padding:1rem !important}.sd-pt-3,.sd-py-3{padding-top:1rem !important}.sd-pr-3,.sd-px-3{padding-right:1rem !important}.sd-pb-3,.sd-py-3{padding-bottom:1rem !important}.sd-pl-3,.sd-px-3{padding-left:1rem !important}.sd-p-4{padding:1.5rem !important}.sd-pt-4,.sd-py-4{padding-top:1.5rem !important}.sd-pr-4,.sd-px-4{padding-right:1.5rem !important}.sd-pb-4,.sd-py-4{padding-bottom:1.5rem 
!important}.sd-pl-4,.sd-px-4{padding-left:1.5rem !important}.sd-p-5{padding:3rem !important}.sd-pt-5,.sd-py-5{padding-top:3rem !important}.sd-pr-5,.sd-px-5{padding-right:3rem !important}.sd-pb-5,.sd-py-5{padding-bottom:3rem !important}.sd-pl-5,.sd-px-5{padding-left:3rem !important}.sd-m-auto{margin:auto !important}.sd-mt-auto,.sd-my-auto{margin-top:auto !important}.sd-mr-auto,.sd-mx-auto{margin-right:auto !important}.sd-mb-auto,.sd-my-auto{margin-bottom:auto !important}.sd-ml-auto,.sd-mx-auto{margin-left:auto !important}.sd-m-0{margin:0 !important}.sd-mt-0,.sd-my-0{margin-top:0 !important}.sd-mr-0,.sd-mx-0{margin-right:0 !important}.sd-mb-0,.sd-my-0{margin-bottom:0 !important}.sd-ml-0,.sd-mx-0{margin-left:0 !important}.sd-m-1{margin:.25rem !important}.sd-mt-1,.sd-my-1{margin-top:.25rem !important}.sd-mr-1,.sd-mx-1{margin-right:.25rem !important}.sd-mb-1,.sd-my-1{margin-bottom:.25rem !important}.sd-ml-1,.sd-mx-1{margin-left:.25rem !important}.sd-m-2{margin:.5rem !important}.sd-mt-2,.sd-my-2{margin-top:.5rem !important}.sd-mr-2,.sd-mx-2{margin-right:.5rem !important}.sd-mb-2,.sd-my-2{margin-bottom:.5rem !important}.sd-ml-2,.sd-mx-2{margin-left:.5rem !important}.sd-m-3{margin:1rem !important}.sd-mt-3,.sd-my-3{margin-top:1rem !important}.sd-mr-3,.sd-mx-3{margin-right:1rem !important}.sd-mb-3,.sd-my-3{margin-bottom:1rem !important}.sd-ml-3,.sd-mx-3{margin-left:1rem !important}.sd-m-4{margin:1.5rem !important}.sd-mt-4,.sd-my-4{margin-top:1.5rem !important}.sd-mr-4,.sd-mx-4{margin-right:1.5rem !important}.sd-mb-4,.sd-my-4{margin-bottom:1.5rem !important}.sd-ml-4,.sd-mx-4{margin-left:1.5rem !important}.sd-m-5{margin:3rem !important}.sd-mt-5,.sd-my-5{margin-top:3rem !important}.sd-mr-5,.sd-mx-5{margin-right:3rem !important}.sd-mb-5,.sd-my-5{margin-bottom:3rem !important}.sd-ml-5,.sd-mx-5{margin-left:3rem !important}.sd-w-25{width:25% !important}.sd-w-50{width:50% !important}.sd-w-75{width:75% !important}.sd-w-100{width:100% !important}.sd-w-auto{width:auto !important}.sd-h-25{height:25% !important}.sd-h-50{height:50% !important}.sd-h-75{height:75% !important}.sd-h-100{height:100% !important}.sd-h-auto{height:auto !important}.sd-d-none{display:none !important}.sd-d-inline{display:inline !important}.sd-d-inline-block{display:inline-block !important}.sd-d-block{display:block !important}.sd-d-grid{display:grid !important}.sd-d-flex-row{display:-ms-flexbox !important;display:flex !important;flex-direction:row !important}.sd-d-flex-column{display:-ms-flexbox !important;display:flex !important;flex-direction:column !important}.sd-d-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}@media(min-width: 576px){.sd-d-sm-none{display:none !important}.sd-d-sm-inline{display:inline !important}.sd-d-sm-inline-block{display:inline-block !important}.sd-d-sm-block{display:block !important}.sd-d-sm-grid{display:grid !important}.sd-d-sm-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-sm-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}@media(min-width: 768px){.sd-d-md-none{display:none !important}.sd-d-md-inline{display:inline !important}.sd-d-md-inline-block{display:inline-block !important}.sd-d-md-block{display:block !important}.sd-d-md-grid{display:grid !important}.sd-d-md-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-md-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}@media(min-width: 992px){.sd-d-lg-none{display:none !important}.sd-d-lg-inline{display:inline 
!important}.sd-d-lg-inline-block{display:inline-block !important}.sd-d-lg-block{display:block !important}.sd-d-lg-grid{display:grid !important}.sd-d-lg-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-lg-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}@media(min-width: 1200px){.sd-d-xl-none{display:none !important}.sd-d-xl-inline{display:inline !important}.sd-d-xl-inline-block{display:inline-block !important}.sd-d-xl-block{display:block !important}.sd-d-xl-grid{display:grid !important}.sd-d-xl-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-xl-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}.sd-align-major-start{justify-content:flex-start !important}.sd-align-major-end{justify-content:flex-end !important}.sd-align-major-center{justify-content:center !important}.sd-align-major-justify{justify-content:space-between !important}.sd-align-major-spaced{justify-content:space-evenly !important}.sd-align-minor-start{align-items:flex-start !important}.sd-align-minor-end{align-items:flex-end !important}.sd-align-minor-center{align-items:center !important}.sd-align-minor-stretch{align-items:stretch !important}.sd-text-justify{text-align:justify !important}.sd-text-left{text-align:left !important}.sd-text-right{text-align:right !important}.sd-text-center{text-align:center !important}.sd-font-weight-light{font-weight:300 !important}.sd-font-weight-lighter{font-weight:lighter !important}.sd-font-weight-normal{font-weight:400 !important}.sd-font-weight-bold{font-weight:700 !important}.sd-font-weight-bolder{font-weight:bolder !important}.sd-font-italic{font-style:italic !important}.sd-text-decoration-none{text-decoration:none !important}.sd-text-lowercase{text-transform:lowercase !important}.sd-text-uppercase{text-transform:uppercase !important}.sd-text-capitalize{text-transform:capitalize !important}.sd-text-wrap{white-space:normal !important}.sd-text-nowrap{white-space:nowrap !important}.sd-text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.sd-fs-1,.sd-fs-1>p{font-size:calc(1.375rem + 1.5vw) !important;line-height:unset !important}.sd-fs-2,.sd-fs-2>p{font-size:calc(1.325rem + 0.9vw) !important;line-height:unset !important}.sd-fs-3,.sd-fs-3>p{font-size:calc(1.3rem + 0.6vw) !important;line-height:unset !important}.sd-fs-4,.sd-fs-4>p{font-size:calc(1.275rem + 0.3vw) !important;line-height:unset !important}.sd-fs-5,.sd-fs-5>p{font-size:1.25rem !important;line-height:unset !important}.sd-fs-6,.sd-fs-6>p{font-size:1rem !important;line-height:unset !important}.sd-border-0{border:0 solid !important}.sd-border-top-0{border-top:0 solid !important}.sd-border-bottom-0{border-bottom:0 solid !important}.sd-border-right-0{border-right:0 solid !important}.sd-border-left-0{border-left:0 solid !important}.sd-border-1{border:1px solid !important}.sd-border-top-1{border-top:1px solid !important}.sd-border-bottom-1{border-bottom:1px solid !important}.sd-border-right-1{border-right:1px solid !important}.sd-border-left-1{border-left:1px solid !important}.sd-border-2{border:2px solid !important}.sd-border-top-2{border-top:2px solid !important}.sd-border-bottom-2{border-bottom:2px solid !important}.sd-border-right-2{border-right:2px solid !important}.sd-border-left-2{border-left:2px solid !important}.sd-border-3{border:3px solid !important}.sd-border-top-3{border-top:3px solid !important}.sd-border-bottom-3{border-bottom:3px solid !important}.sd-border-right-3{border-right:3px solid 
!important}.sd-border-left-3{border-left:3px solid !important}.sd-border-4{border:4px solid !important}.sd-border-top-4{border-top:4px solid !important}.sd-border-bottom-4{border-bottom:4px solid !important}.sd-border-right-4{border-right:4px solid !important}.sd-border-left-4{border-left:4px solid !important}.sd-border-5{border:5px solid !important}.sd-border-top-5{border-top:5px solid !important}.sd-border-bottom-5{border-bottom:5px solid !important}.sd-border-right-5{border-right:5px solid !important}.sd-border-left-5{border-left:5px solid !important}.sd-rounded-0{border-radius:0 !important}.sd-rounded-1{border-radius:.2rem !important}.sd-rounded-2{border-radius:.3rem !important}.sd-rounded-3{border-radius:.5rem !important}.sd-rounded-pill{border-radius:50rem !important}.sd-rounded-circle{border-radius:50% !important}.shadow-none{box-shadow:none !important}.sd-shadow-sm{box-shadow:0 .125rem .25rem var(--sd-color-shadow) !important}.sd-shadow-md{box-shadow:0 .5rem 1rem var(--sd-color-shadow) !important}.sd-shadow-lg{box-shadow:0 1rem 3rem var(--sd-color-shadow) !important}@keyframes sd-slide-from-left{0%{transform:translateX(-100%)}100%{transform:translateX(0)}}@keyframes sd-slide-from-right{0%{transform:translateX(200%)}100%{transform:translateX(0)}}@keyframes sd-grow100{0%{transform:scale(0);opacity:.5}100%{transform:scale(1);opacity:1}}@keyframes sd-grow50{0%{transform:scale(0.5);opacity:.5}100%{transform:scale(1);opacity:1}}@keyframes sd-grow50-rot20{0%{transform:scale(0.5) rotateZ(-20deg);opacity:.5}75%{transform:scale(1) rotateZ(5deg);opacity:1}95%{transform:scale(1) rotateZ(-1deg);opacity:1}100%{transform:scale(1) rotateZ(0);opacity:1}}.sd-animate-slide-from-left{animation:1s ease-out 0s 1 normal none running sd-slide-from-left}.sd-animate-slide-from-right{animation:1s ease-out 0s 1 normal none running sd-slide-from-right}.sd-animate-grow100{animation:1s ease-out 0s 1 normal none running sd-grow100}.sd-animate-grow50{animation:1s ease-out 0s 1 normal none running sd-grow50}.sd-animate-grow50-rot20{animation:1s ease-out 0s 1 normal none running sd-grow50-rot20}.sd-badge{display:inline-block;padding:.35em .65em;font-size:.75em;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem}.sd-badge:empty{display:none}a.sd-badge{text-decoration:none}.sd-btn .sd-badge{position:relative;top:-1px}.sd-btn{background-color:transparent;border:1px solid transparent;border-radius:.25rem;cursor:pointer;display:inline-block;font-weight:400;font-size:1rem;line-height:1.5;padding:.375rem .75rem;text-align:center;text-decoration:none;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;vertical-align:middle;user-select:none;-moz-user-select:none;-ms-user-select:none;-webkit-user-select:none}.sd-btn:hover{text-decoration:none}@media(prefers-reduced-motion: reduce){.sd-btn{transition:none}}.sd-btn-primary,.sd-btn-outline-primary:hover,.sd-btn-outline-primary:focus{color:var(--sd-color-primary-text) !important;background-color:var(--sd-color-primary) !important;border-color:var(--sd-color-primary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-primary:hover,.sd-btn-primary:focus{color:var(--sd-color-primary-text) !important;background-color:var(--sd-color-primary-highlight) !important;border-color:var(--sd-color-primary-highlight) !important;border-width:1px !important;border-style:solid 
!important}.sd-btn-outline-primary{color:var(--sd-color-primary) !important;border-color:var(--sd-color-primary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-secondary,.sd-btn-outline-secondary:hover,.sd-btn-outline-secondary:focus{color:var(--sd-color-secondary-text) !important;background-color:var(--sd-color-secondary) !important;border-color:var(--sd-color-secondary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-secondary:hover,.sd-btn-secondary:focus{color:var(--sd-color-secondary-text) !important;background-color:var(--sd-color-secondary-highlight) !important;border-color:var(--sd-color-secondary-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-secondary{color:var(--sd-color-secondary) !important;border-color:var(--sd-color-secondary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-success,.sd-btn-outline-success:hover,.sd-btn-outline-success:focus{color:var(--sd-color-success-text) !important;background-color:var(--sd-color-success) !important;border-color:var(--sd-color-success) !important;border-width:1px !important;border-style:solid !important}.sd-btn-success:hover,.sd-btn-success:focus{color:var(--sd-color-success-text) !important;background-color:var(--sd-color-success-highlight) !important;border-color:var(--sd-color-success-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-success{color:var(--sd-color-success) !important;border-color:var(--sd-color-success) !important;border-width:1px !important;border-style:solid !important}.sd-btn-info,.sd-btn-outline-info:hover,.sd-btn-outline-info:focus{color:var(--sd-color-info-text) !important;background-color:var(--sd-color-info) !important;border-color:var(--sd-color-info) !important;border-width:1px !important;border-style:solid !important}.sd-btn-info:hover,.sd-btn-info:focus{color:var(--sd-color-info-text) !important;background-color:var(--sd-color-info-highlight) !important;border-color:var(--sd-color-info-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-info{color:var(--sd-color-info) !important;border-color:var(--sd-color-info) !important;border-width:1px !important;border-style:solid !important}.sd-btn-warning,.sd-btn-outline-warning:hover,.sd-btn-outline-warning:focus{color:var(--sd-color-warning-text) !important;background-color:var(--sd-color-warning) !important;border-color:var(--sd-color-warning) !important;border-width:1px !important;border-style:solid !important}.sd-btn-warning:hover,.sd-btn-warning:focus{color:var(--sd-color-warning-text) !important;background-color:var(--sd-color-warning-highlight) !important;border-color:var(--sd-color-warning-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-warning{color:var(--sd-color-warning) !important;border-color:var(--sd-color-warning) !important;border-width:1px !important;border-style:solid !important}.sd-btn-danger,.sd-btn-outline-danger:hover,.sd-btn-outline-danger:focus{color:var(--sd-color-danger-text) !important;background-color:var(--sd-color-danger) !important;border-color:var(--sd-color-danger) !important;border-width:1px !important;border-style:solid !important}.sd-btn-danger:hover,.sd-btn-danger:focus{color:var(--sd-color-danger-text) !important;background-color:var(--sd-color-danger-highlight) !important;border-color:var(--sd-color-danger-highlight) !important;border-width:1px 
!important;border-style:solid !important}.sd-btn-outline-danger{color:var(--sd-color-danger) !important;border-color:var(--sd-color-danger) !important;border-width:1px !important;border-style:solid !important}.sd-btn-light,.sd-btn-outline-light:hover,.sd-btn-outline-light:focus{color:var(--sd-color-light-text) !important;background-color:var(--sd-color-light) !important;border-color:var(--sd-color-light) !important;border-width:1px !important;border-style:solid !important}.sd-btn-light:hover,.sd-btn-light:focus{color:var(--sd-color-light-text) !important;background-color:var(--sd-color-light-highlight) !important;border-color:var(--sd-color-light-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-light{color:var(--sd-color-light) !important;border-color:var(--sd-color-light) !important;border-width:1px !important;border-style:solid !important}.sd-btn-muted,.sd-btn-outline-muted:hover,.sd-btn-outline-muted:focus{color:var(--sd-color-muted-text) !important;background-color:var(--sd-color-muted) !important;border-color:var(--sd-color-muted) !important;border-width:1px !important;border-style:solid !important}.sd-btn-muted:hover,.sd-btn-muted:focus{color:var(--sd-color-muted-text) !important;background-color:var(--sd-color-muted-highlight) !important;border-color:var(--sd-color-muted-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-muted{color:var(--sd-color-muted) !important;border-color:var(--sd-color-muted) !important;border-width:1px !important;border-style:solid !important}.sd-btn-dark,.sd-btn-outline-dark:hover,.sd-btn-outline-dark:focus{color:var(--sd-color-dark-text) !important;background-color:var(--sd-color-dark) !important;border-color:var(--sd-color-dark) !important;border-width:1px !important;border-style:solid !important}.sd-btn-dark:hover,.sd-btn-dark:focus{color:var(--sd-color-dark-text) !important;background-color:var(--sd-color-dark-highlight) !important;border-color:var(--sd-color-dark-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-dark{color:var(--sd-color-dark) !important;border-color:var(--sd-color-dark) !important;border-width:1px !important;border-style:solid !important}.sd-btn-black,.sd-btn-outline-black:hover,.sd-btn-outline-black:focus{color:var(--sd-color-black-text) !important;background-color:var(--sd-color-black) !important;border-color:var(--sd-color-black) !important;border-width:1px !important;border-style:solid !important}.sd-btn-black:hover,.sd-btn-black:focus{color:var(--sd-color-black-text) !important;background-color:var(--sd-color-black-highlight) !important;border-color:var(--sd-color-black-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-black{color:var(--sd-color-black) !important;border-color:var(--sd-color-black) !important;border-width:1px !important;border-style:solid !important}.sd-btn-white,.sd-btn-outline-white:hover,.sd-btn-outline-white:focus{color:var(--sd-color-white-text) !important;background-color:var(--sd-color-white) !important;border-color:var(--sd-color-white) !important;border-width:1px !important;border-style:solid !important}.sd-btn-white:hover,.sd-btn-white:focus{color:var(--sd-color-white-text) !important;background-color:var(--sd-color-white-highlight) !important;border-color:var(--sd-color-white-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-white{color:var(--sd-color-white) 
!important;border-color:var(--sd-color-white) !important;border-width:1px !important;border-style:solid !important}.sd-stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;content:""}.sd-hide-link-text{font-size:0}.sd-octicon,.sd-material-icon{display:inline-block;fill:currentColor;vertical-align:middle}.sd-avatar-xs{border-radius:50%;object-fit:cover;object-position:center;width:1rem;height:1rem}.sd-avatar-sm{border-radius:50%;object-fit:cover;object-position:center;width:3rem;height:3rem}.sd-avatar-md{border-radius:50%;object-fit:cover;object-position:center;width:5rem;height:5rem}.sd-avatar-lg{border-radius:50%;object-fit:cover;object-position:center;width:7rem;height:7rem}.sd-avatar-xl{border-radius:50%;object-fit:cover;object-position:center;width:10rem;height:10rem}.sd-avatar-inherit{border-radius:50%;object-fit:cover;object-position:center;width:inherit;height:inherit}.sd-avatar-initial{border-radius:50%;object-fit:cover;object-position:center;width:initial;height:initial}.sd-card{background-clip:border-box;background-color:var(--sd-color-card-background);border:1px solid var(--sd-color-card-border);border-radius:.25rem;color:var(--sd-color-card-text);display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;position:relative;word-wrap:break-word}.sd-card>hr{margin-left:0;margin-right:0}.sd-card-hover:hover{border-color:var(--sd-color-card-border-hover);transform:scale(1.01)}.sd-card-body{-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem 1rem}.sd-card-title{margin-bottom:.5rem}.sd-card-subtitle{margin-top:-0.25rem;margin-bottom:0}.sd-card-text:last-child{margin-bottom:0}.sd-card-link:hover{text-decoration:none}.sd-card-link+.card-link{margin-left:1rem}.sd-card-header{padding:.5rem 1rem;margin-bottom:0;background-color:var(--sd-color-card-header);border-bottom:1px solid var(--sd-color-card-border)}.sd-card-header:first-child{border-radius:calc(0.25rem - 1px) calc(0.25rem - 1px) 0 0}.sd-card-footer{padding:.5rem 1rem;background-color:var(--sd-color-card-footer);border-top:1px solid var(--sd-color-card-border)}.sd-card-footer:last-child{border-radius:0 0 calc(0.25rem - 1px) calc(0.25rem - 1px)}.sd-card-header-tabs{margin-right:-0.5rem;margin-bottom:-0.5rem;margin-left:-0.5rem;border-bottom:0}.sd-card-header-pills{margin-right:-0.5rem;margin-left:-0.5rem}.sd-card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1rem;border-radius:calc(0.25rem - 1px)}.sd-card-img,.sd-card-img-bottom,.sd-card-img-top{width:100%}.sd-card-img,.sd-card-img-top{border-top-left-radius:calc(0.25rem - 1px);border-top-right-radius:calc(0.25rem - 1px)}.sd-card-img,.sd-card-img-bottom{border-bottom-left-radius:calc(0.25rem - 1px);border-bottom-right-radius:calc(0.25rem - 1px)}.sd-cards-carousel{width:100%;display:flex;flex-wrap:nowrap;-ms-flex-direction:row;flex-direction:row;overflow-x:hidden;scroll-snap-type:x 
mandatory}.sd-cards-carousel.sd-show-scrollbar{overflow-x:auto}.sd-cards-carousel:hover,.sd-cards-carousel:focus{overflow-x:auto}.sd-cards-carousel>.sd-card{flex-shrink:0;scroll-snap-align:start}.sd-cards-carousel>.sd-card:not(:last-child){margin-right:3px}.sd-card-cols-1>.sd-card{width:90%}.sd-card-cols-2>.sd-card{width:45%}.sd-card-cols-3>.sd-card{width:30%}.sd-card-cols-4>.sd-card{width:22.5%}.sd-card-cols-5>.sd-card{width:18%}.sd-card-cols-6>.sd-card{width:15%}.sd-card-cols-7>.sd-card{width:12.8571428571%}.sd-card-cols-8>.sd-card{width:11.25%}.sd-card-cols-9>.sd-card{width:10%}.sd-card-cols-10>.sd-card{width:9%}.sd-card-cols-11>.sd-card{width:8.1818181818%}.sd-card-cols-12>.sd-card{width:7.5%}.sd-container,.sd-container-fluid,.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container-xl{margin-left:auto;margin-right:auto;padding-left:var(--sd-gutter-x, 0.75rem);padding-right:var(--sd-gutter-x, 0.75rem);width:100%}@media(min-width: 576px){.sd-container-sm,.sd-container{max-width:540px}}@media(min-width: 768px){.sd-container-md,.sd-container-sm,.sd-container{max-width:720px}}@media(min-width: 992px){.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container{max-width:960px}}@media(min-width: 1200px){.sd-container-xl,.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container{max-width:1140px}}.sd-row{--sd-gutter-x: 1.5rem;--sd-gutter-y: 0;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-top:calc(var(--sd-gutter-y) * -1);margin-right:calc(var(--sd-gutter-x) * -0.5);margin-left:calc(var(--sd-gutter-x) * -0.5)}.sd-row>*{box-sizing:border-box;flex-shrink:0;width:100%;max-width:100%;padding-right:calc(var(--sd-gutter-x) * 0.5);padding-left:calc(var(--sd-gutter-x) * 0.5);margin-top:var(--sd-gutter-y)}.sd-col{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-auto>*{flex:0 0 auto;width:auto}.sd-row-cols-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}@media(min-width: 576px){.sd-col-sm{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-sm-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-sm-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-sm-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-sm-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-sm-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-sm-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-sm-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-sm-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-sm-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-sm-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-sm-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-sm-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-sm-12>*{flex:0 0 auto;-ms-flex:0 0 
auto;width:8.3333333333%}}@media(min-width: 768px){.sd-col-md{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-md-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-md-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-md-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-md-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-md-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-md-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-md-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-md-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-md-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-md-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-md-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-md-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-md-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 992px){.sd-col-lg{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-lg-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-lg-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-lg-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-lg-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-lg-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-lg-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-lg-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-lg-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-lg-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-lg-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-lg-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-lg-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-lg-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 1200px){.sd-col-xl{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-xl-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-xl-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-xl-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-xl-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-xl-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-xl-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-xl-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-xl-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-xl-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-xl-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-xl-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-xl-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-xl-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}.sd-col-auto{flex:0 0 auto;-ms-flex:0 0 auto;width:auto}.sd-col-1{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}.sd-col-2{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-col-3{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-col-4{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-col-5{flex:0 0 auto;-ms-flex:0 0 auto;width:41.6666666667%}.sd-col-6{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-col-7{flex:0 0 auto;-ms-flex:0 0 auto;width:58.3333333333%}.sd-col-8{flex:0 0 auto;-ms-flex:0 0 auto;width:66.6666666667%}.sd-col-9{flex:0 0 auto;-ms-flex:0 0 auto;width:75%}.sd-col-10{flex:0 0 auto;-ms-flex:0 0 auto;width:83.3333333333%}.sd-col-11{flex:0 0 auto;-ms-flex:0 0 
auto;width:91.6666666667%}.sd-col-12{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-g-0,.sd-gy-0{--sd-gutter-y: 0}.sd-g-0,.sd-gx-0{--sd-gutter-x: 0}.sd-g-1,.sd-gy-1{--sd-gutter-y: 0.25rem}.sd-g-1,.sd-gx-1{--sd-gutter-x: 0.25rem}.sd-g-2,.sd-gy-2{--sd-gutter-y: 0.5rem}.sd-g-2,.sd-gx-2{--sd-gutter-x: 0.5rem}.sd-g-3,.sd-gy-3{--sd-gutter-y: 1rem}.sd-g-3,.sd-gx-3{--sd-gutter-x: 1rem}.sd-g-4,.sd-gy-4{--sd-gutter-y: 1.5rem}.sd-g-4,.sd-gx-4{--sd-gutter-x: 1.5rem}.sd-g-5,.sd-gy-5{--sd-gutter-y: 3rem}.sd-g-5,.sd-gx-5{--sd-gutter-x: 3rem}@media(min-width: 576px){.sd-col-sm-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-sm-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-sm-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-sm-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-sm-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-sm-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-sm-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-sm-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-sm-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-sm-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-sm-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-sm-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-sm-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-sm-0,.sd-gy-sm-0{--sd-gutter-y: 0}.sd-g-sm-0,.sd-gx-sm-0{--sd-gutter-x: 0}.sd-g-sm-1,.sd-gy-sm-1{--sd-gutter-y: 0.25rem}.sd-g-sm-1,.sd-gx-sm-1{--sd-gutter-x: 0.25rem}.sd-g-sm-2,.sd-gy-sm-2{--sd-gutter-y: 0.5rem}.sd-g-sm-2,.sd-gx-sm-2{--sd-gutter-x: 0.5rem}.sd-g-sm-3,.sd-gy-sm-3{--sd-gutter-y: 1rem}.sd-g-sm-3,.sd-gx-sm-3{--sd-gutter-x: 1rem}.sd-g-sm-4,.sd-gy-sm-4{--sd-gutter-y: 1.5rem}.sd-g-sm-4,.sd-gx-sm-4{--sd-gutter-x: 1.5rem}.sd-g-sm-5,.sd-gy-sm-5{--sd-gutter-y: 3rem}.sd-g-sm-5,.sd-gx-sm-5{--sd-gutter-x: 3rem}}@media(min-width: 768px){.sd-col-md-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-md-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-md-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-md-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-md-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-md-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-md-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-md-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-md-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-md-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-md-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-md-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-md-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-md-0,.sd-gy-md-0{--sd-gutter-y: 0}.sd-g-md-0,.sd-gx-md-0{--sd-gutter-x: 0}.sd-g-md-1,.sd-gy-md-1{--sd-gutter-y: 0.25rem}.sd-g-md-1,.sd-gx-md-1{--sd-gutter-x: 0.25rem}.sd-g-md-2,.sd-gy-md-2{--sd-gutter-y: 0.5rem}.sd-g-md-2,.sd-gx-md-2{--sd-gutter-x: 0.5rem}.sd-g-md-3,.sd-gy-md-3{--sd-gutter-y: 1rem}.sd-g-md-3,.sd-gx-md-3{--sd-gutter-x: 1rem}.sd-g-md-4,.sd-gy-md-4{--sd-gutter-y: 1.5rem}.sd-g-md-4,.sd-gx-md-4{--sd-gutter-x: 1.5rem}.sd-g-md-5,.sd-gy-md-5{--sd-gutter-y: 3rem}.sd-g-md-5,.sd-gx-md-5{--sd-gutter-x: 3rem}}@media(min-width: 992px){.sd-col-lg-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-lg-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-lg-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-lg-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-lg-4{-ms-flex:0 0 
auto;flex:0 0 auto;width:33.3333333333%}.sd-col-lg-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-lg-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-lg-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-lg-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-lg-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-lg-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-lg-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-lg-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-lg-0,.sd-gy-lg-0{--sd-gutter-y: 0}.sd-g-lg-0,.sd-gx-lg-0{--sd-gutter-x: 0}.sd-g-lg-1,.sd-gy-lg-1{--sd-gutter-y: 0.25rem}.sd-g-lg-1,.sd-gx-lg-1{--sd-gutter-x: 0.25rem}.sd-g-lg-2,.sd-gy-lg-2{--sd-gutter-y: 0.5rem}.sd-g-lg-2,.sd-gx-lg-2{--sd-gutter-x: 0.5rem}.sd-g-lg-3,.sd-gy-lg-3{--sd-gutter-y: 1rem}.sd-g-lg-3,.sd-gx-lg-3{--sd-gutter-x: 1rem}.sd-g-lg-4,.sd-gy-lg-4{--sd-gutter-y: 1.5rem}.sd-g-lg-4,.sd-gx-lg-4{--sd-gutter-x: 1.5rem}.sd-g-lg-5,.sd-gy-lg-5{--sd-gutter-y: 3rem}.sd-g-lg-5,.sd-gx-lg-5{--sd-gutter-x: 3rem}}@media(min-width: 1200px){.sd-col-xl-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-xl-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-xl-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-xl-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-xl-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-xl-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-xl-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-xl-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-xl-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-xl-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-xl-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-xl-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-xl-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-xl-0,.sd-gy-xl-0{--sd-gutter-y: 0}.sd-g-xl-0,.sd-gx-xl-0{--sd-gutter-x: 0}.sd-g-xl-1,.sd-gy-xl-1{--sd-gutter-y: 0.25rem}.sd-g-xl-1,.sd-gx-xl-1{--sd-gutter-x: 0.25rem}.sd-g-xl-2,.sd-gy-xl-2{--sd-gutter-y: 0.5rem}.sd-g-xl-2,.sd-gx-xl-2{--sd-gutter-x: 0.5rem}.sd-g-xl-3,.sd-gy-xl-3{--sd-gutter-y: 1rem}.sd-g-xl-3,.sd-gx-xl-3{--sd-gutter-x: 1rem}.sd-g-xl-4,.sd-gy-xl-4{--sd-gutter-y: 1.5rem}.sd-g-xl-4,.sd-gx-xl-4{--sd-gutter-x: 1.5rem}.sd-g-xl-5,.sd-gy-xl-5{--sd-gutter-y: 3rem}.sd-g-xl-5,.sd-gx-xl-5{--sd-gutter-x: 3rem}}.sd-flex-row-reverse{flex-direction:row-reverse !important}details.sd-dropdown{position:relative}details.sd-dropdown .sd-summary-title{font-weight:700;padding-right:3em !important;-moz-user-select:none;-ms-user-select:none;-webkit-user-select:none;user-select:none}details.sd-dropdown:hover{cursor:pointer}details.sd-dropdown .sd-summary-content{cursor:default}details.sd-dropdown summary{list-style:none;padding:1em}details.sd-dropdown summary .sd-octicon.no-title{vertical-align:middle}details.sd-dropdown[open] summary .sd-octicon.no-title{visibility:hidden}details.sd-dropdown summary::-webkit-details-marker{display:none}details.sd-dropdown summary:focus{outline:none}details.sd-dropdown .sd-summary-icon{margin-right:.5em}details.sd-dropdown .sd-summary-icon svg{opacity:.8}details.sd-dropdown summary:hover .sd-summary-up svg,details.sd-dropdown summary:hover .sd-summary-down svg{opacity:1;transform:scale(1.1)}details.sd-dropdown .sd-summary-up svg,details.sd-dropdown .sd-summary-down svg{display:block;opacity:.6}details.sd-dropdown .sd-summary-up,details.sd-dropdown 
.sd-summary-down{pointer-events:none;position:absolute;right:1em;top:1em}details.sd-dropdown[open]>.sd-summary-title .sd-summary-down{visibility:hidden}details.sd-dropdown:not([open])>.sd-summary-title .sd-summary-up{visibility:hidden}details.sd-dropdown:not([open]).sd-card{border:none}details.sd-dropdown:not([open])>.sd-card-header{border:1px solid var(--sd-color-card-border);border-radius:.25rem}details.sd-dropdown.sd-fade-in[open] summary~*{-moz-animation:sd-fade-in .5s ease-in-out;-webkit-animation:sd-fade-in .5s ease-in-out;animation:sd-fade-in .5s ease-in-out}details.sd-dropdown.sd-fade-in-slide-down[open] summary~*{-moz-animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out;-webkit-animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out;animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out}.sd-col>.sd-dropdown{width:100%}.sd-summary-content>.sd-tab-set:first-child{margin-top:0}@keyframes sd-fade-in{0%{opacity:0}100%{opacity:1}}@keyframes sd-slide-down{0%{transform:translate(0, -10px)}100%{transform:translate(0, 0)}}.sd-tab-set{border-radius:.125rem;display:flex;flex-wrap:wrap;margin:1em 0;position:relative}.sd-tab-set>input{opacity:0;position:absolute}.sd-tab-set>input:checked+label{border-color:var(--sd-color-tabs-underline-active);color:var(--sd-color-tabs-label-active)}.sd-tab-set>input:checked+label+.sd-tab-content{display:block}.sd-tab-set>input:not(:checked)+label:hover{color:var(--sd-color-tabs-label-hover);border-color:var(--sd-color-tabs-underline-hover)}.sd-tab-set>input:focus+label{outline-style:auto}.sd-tab-set>input:not(.focus-visible)+label{outline:none;-webkit-tap-highlight-color:transparent}.sd-tab-set>label{border-bottom:.125rem solid transparent;margin-bottom:0;color:var(--sd-color-tabs-label-inactive);border-color:var(--sd-color-tabs-underline-inactive);cursor:pointer;font-size:var(--sd-fontsize-tabs-label);font-weight:700;padding:1em 1.25em .5em;transition:color 250ms;width:auto;z-index:1}html .sd-tab-set>label:hover{color:var(--sd-color-tabs-label-active)}.sd-col>.sd-tab-set{width:100%}.sd-tab-content{box-shadow:0 -0.0625rem var(--sd-color-tabs-overline),0 .0625rem var(--sd-color-tabs-underline);display:none;order:99;padding-bottom:.75rem;padding-top:.75rem;width:100%}.sd-tab-content>:first-child{margin-top:0 !important}.sd-tab-content>:last-child{margin-bottom:0 !important}.sd-tab-content>.sd-tab-set{margin:0}.sd-sphinx-override,.sd-sphinx-override *{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}.sd-sphinx-override p{margin-top:0}:root{--sd-color-primary: #0071bc;--sd-color-secondary: #6c757d;--sd-color-success: #28a745;--sd-color-info: #17a2b8;--sd-color-warning: #f0b37e;--sd-color-danger: #dc3545;--sd-color-light: #f8f9fa;--sd-color-muted: #6c757d;--sd-color-dark: #212529;--sd-color-black: black;--sd-color-white: white;--sd-color-primary-highlight: #0060a0;--sd-color-secondary-highlight: #5c636a;--sd-color-success-highlight: #228e3b;--sd-color-info-highlight: #148a9c;--sd-color-warning-highlight: #cc986b;--sd-color-danger-highlight: #bb2d3b;--sd-color-light-highlight: #d3d4d5;--sd-color-muted-highlight: #5c636a;--sd-color-dark-highlight: #1c1f23;--sd-color-black-highlight: black;--sd-color-white-highlight: #d9d9d9;--sd-color-primary-text: #fff;--sd-color-secondary-text: #fff;--sd-color-success-text: #fff;--sd-color-info-text: #fff;--sd-color-warning-text: #212529;--sd-color-danger-text: #fff;--sd-color-light-text: #212529;--sd-color-muted-text: #fff;--sd-color-dark-text: 
#fff;--sd-color-black-text: #fff;--sd-color-white-text: #212529;--sd-color-shadow: rgba(0, 0, 0, 0.15);--sd-color-card-border: rgba(0, 0, 0, 0.125);--sd-color-card-border-hover: hsla(231, 99%, 66%, 1);--sd-color-card-background: transparent;--sd-color-card-text: inherit;--sd-color-card-header: transparent;--sd-color-card-footer: transparent;--sd-color-tabs-label-active: hsla(231, 99%, 66%, 1);--sd-color-tabs-label-hover: hsla(231, 99%, 66%, 1);--sd-color-tabs-label-inactive: hsl(0, 0%, 66%);--sd-color-tabs-underline-active: hsla(231, 99%, 66%, 1);--sd-color-tabs-underline-hover: rgba(178, 206, 245, 0.62);--sd-color-tabs-underline-inactive: transparent;--sd-color-tabs-overline: rgb(222, 222, 222);--sd-color-tabs-underline: rgb(222, 222, 222);--sd-fontsize-tabs-label: 1rem} diff --git a/v1.1.0/_sphinx_design_static/design-tabs.js b/v1.1.0/_sphinx_design_static/design-tabs.js new file mode 100644 index 0000000..36b38cf --- /dev/null +++ b/v1.1.0/_sphinx_design_static/design-tabs.js @@ -0,0 +1,27 @@ +var sd_labels_by_text = {}; + +function ready() { + const li = document.getElementsByClassName("sd-tab-label"); + for (const label of li) { + syncId = label.getAttribute("data-sync-id"); + if (syncId) { + label.onclick = onLabelClick; + if (!sd_labels_by_text[syncId]) { + sd_labels_by_text[syncId] = []; + } + sd_labels_by_text[syncId].push(label); + } + } +} + +function onLabelClick() { + // Activate other inputs with the same sync id. + syncId = this.getAttribute("data-sync-id"); + for (label of sd_labels_by_text[syncId]) { + if (label === this) continue; + label.previousElementSibling.checked = true; + } + window.localStorage.setItem("sphinx-design-last-tab", syncId); +} + +document.addEventListener("DOMContentLoaded", ready, false); diff --git a/v1.1.0/_static/_sphinx_javascript_frameworks_compat.js b/v1.1.0/_static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 0000000..8549469 --- /dev/null +++ b/v1.1.0/_static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,134 @@ +/* + * _sphinx_javascript_frameworks_compat.js + * ~~~~~~~~~~ + * + * Compatability shim for jQuery and underscores.js. + * + * WILL BE REMOVED IN Sphinx 6.0 + * xref RemovedInSphinx60Warning + * + */ + +/** + * select a different prefix for underscore + */ +$u = _.noConflict(); + + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. 
+ */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. + */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/v1.1.0/_static/basic.css b/v1.1.0/_static/basic.css new file mode 100644 index 0000000..4e9a9f1 --- /dev/null +++ b/v1.1.0/_static/basic.css @@ -0,0 +1,900 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + 
+div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + 
border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; + 
word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + 
+span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/v1.1.0/_static/css/badge_only.css b/v1.1.0/_static/css/badge_only.css new file mode 100644 index 0000000..c718cee --- /dev/null +++ b/v1.1.0/_static/css/badge_only.css @@ -0,0 +1 @@ +.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version 
.fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}} \ No newline at end of file diff --git a/v1.1.0/_static/css/fonts/Roboto-Slab-Bold.woff b/v1.1.0/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 0000000..6cb6000 Binary files /dev/null and b/v1.1.0/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/v1.1.0/_static/css/fonts/Roboto-Slab-Bold.woff2 b/v1.1.0/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 0000000..7059e23 Binary files /dev/null and b/v1.1.0/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/v1.1.0/_static/css/fonts/Roboto-Slab-Regular.woff b/v1.1.0/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 0000000..f815f63 Binary files /dev/null and b/v1.1.0/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/v1.1.0/_static/css/fonts/Roboto-Slab-Regular.woff2 b/v1.1.0/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 0000000..f2c76e5 Binary files /dev/null and b/v1.1.0/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/v1.1.0/_static/css/fonts/fontawesome-webfont.eot b/v1.1.0/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 0000000..e9f60ca Binary files /dev/null and b/v1.1.0/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/v1.1.0/_static/css/fonts/fontawesome-webfont.svg b/v1.1.0/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 0000000..855c845 --- /dev/null +++ b/v1.1.0/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. 
diff --git a/v1.1.0/_static/css/fonts/fontawesome-webfont.ttf b/v1.1.0/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 0000000..35acda2 Binary files /dev/null and b/v1.1.0/_static/css/fonts/fontawesome-webfont.ttf differ diff --git a/v1.1.0/_static/css/fonts/fontawesome-webfont.woff b/v1.1.0/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 0000000..400014a Binary files /dev/null and b/v1.1.0/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/v1.1.0/_static/css/fonts/fontawesome-webfont.woff2 b/v1.1.0/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 0000000..4d13fc6 Binary files /dev/null and b/v1.1.0/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/v1.1.0/_static/css/fonts/lato-bold-italic.woff b/v1.1.0/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 0000000..88ad05b Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-bold-italic.woff differ diff --git a/v1.1.0/_static/css/fonts/lato-bold-italic.woff2 b/v1.1.0/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 0000000..c4e3d80 Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/v1.1.0/_static/css/fonts/lato-bold.woff b/v1.1.0/_static/css/fonts/lato-bold.woff new file mode 100644 index 0000000..c6dff51 Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-bold.woff differ diff --git a/v1.1.0/_static/css/fonts/lato-bold.woff2 b/v1.1.0/_static/css/fonts/lato-bold.woff2 new file mode 100644 index 0000000..bb19504 Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-bold.woff2 differ diff --git a/v1.1.0/_static/css/fonts/lato-normal-italic.woff b/v1.1.0/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 0000000..76114bc Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-normal-italic.woff differ diff --git a/v1.1.0/_static/css/fonts/lato-normal-italic.woff2 b/v1.1.0/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 0000000..3404f37 Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-normal-italic.woff2
differ diff --git a/v1.1.0/_static/css/fonts/lato-normal.woff b/v1.1.0/_static/css/fonts/lato-normal.woff new file mode 100644 index 0000000..ae1307f Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-normal.woff differ diff --git a/v1.1.0/_static/css/fonts/lato-normal.woff2 b/v1.1.0/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 0000000..3bf9843 Binary files /dev/null and b/v1.1.0/_static/css/fonts/lato-normal.woff2 differ diff --git a/v1.1.0/_static/css/theme.css b/v1.1.0/_static/css/theme.css new file mode 100644 index 0000000..19a446a --- /dev/null +++ b/v1.1.0/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content 
.toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content .admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! 
+ * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content h5 .fa-pull-right.headerlink,.rst-content h6 
.fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt .pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root 
.fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success 
.wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa
-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magic:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown .caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-laptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:
before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-ellipsis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-apple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-
vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-square:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{content:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-b
ell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.fa-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-mars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-register
ed:before{content:""}.fa-creative-commons:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-tripadvisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{content:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{c
ontent:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:before,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:before{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption .headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li 
button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a .admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a .rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn .headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn 
.headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav .fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn 
button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content .fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download .btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption 
.btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content .btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini .headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content 
.wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content .wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint .admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint .wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content 
.wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content .wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content .wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso .admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content 
.wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content .wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso .wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container 
li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 
0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 .3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves 
input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group .wy-form-halves input[type=url],.wy-control-group .wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border 
.3s linear}input[type=datetime-local]{padding:.34375em .625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid #ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch 
span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form 
input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive 
table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol 
li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 
8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s 
ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions 
.rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions .rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and 
(max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content .toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso 
.last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li 
ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content 
.footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 
.rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content 
.wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 .rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content 
code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content .sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/v1.1.0/_static/design-style.1e8bd061cd6da7fc9cf755528e8ffc24.min.css b/v1.1.0/_static/design-style.1e8bd061cd6da7fc9cf755528e8ffc24.min.css new file mode 100644 index 0000000..eb19f69 --- /dev/null +++ b/v1.1.0/_static/design-style.1e8bd061cd6da7fc9cf755528e8ffc24.min.css @@ -0,0 +1 @@ +.sd-bg-primary{background-color:var(--sd-color-primary) !important}.sd-bg-text-primary{color:var(--sd-color-primary-text) !important}button.sd-bg-primary:focus,button.sd-bg-primary:hover{background-color:var(--sd-color-primary-highlight) !important}a.sd-bg-primary:focus,a.sd-bg-primary:hover{background-color:var(--sd-color-primary-highlight) !important}.sd-bg-secondary{background-color:var(--sd-color-secondary) !important}.sd-bg-text-secondary{color:var(--sd-color-secondary-text) !important}button.sd-bg-secondary:focus,button.sd-bg-secondary:hover{background-color:var(--sd-color-secondary-highlight) 
!important}a.sd-bg-secondary:focus,a.sd-bg-secondary:hover{background-color:var(--sd-color-secondary-highlight) !important}.sd-bg-success{background-color:var(--sd-color-success) !important}.sd-bg-text-success{color:var(--sd-color-success-text) !important}button.sd-bg-success:focus,button.sd-bg-success:hover{background-color:var(--sd-color-success-highlight) !important}a.sd-bg-success:focus,a.sd-bg-success:hover{background-color:var(--sd-color-success-highlight) !important}.sd-bg-info{background-color:var(--sd-color-info) !important}.sd-bg-text-info{color:var(--sd-color-info-text) !important}button.sd-bg-info:focus,button.sd-bg-info:hover{background-color:var(--sd-color-info-highlight) !important}a.sd-bg-info:focus,a.sd-bg-info:hover{background-color:var(--sd-color-info-highlight) !important}.sd-bg-warning{background-color:var(--sd-color-warning) !important}.sd-bg-text-warning{color:var(--sd-color-warning-text) !important}button.sd-bg-warning:focus,button.sd-bg-warning:hover{background-color:var(--sd-color-warning-highlight) !important}a.sd-bg-warning:focus,a.sd-bg-warning:hover{background-color:var(--sd-color-warning-highlight) !important}.sd-bg-danger{background-color:var(--sd-color-danger) !important}.sd-bg-text-danger{color:var(--sd-color-danger-text) !important}button.sd-bg-danger:focus,button.sd-bg-danger:hover{background-color:var(--sd-color-danger-highlight) !important}a.sd-bg-danger:focus,a.sd-bg-danger:hover{background-color:var(--sd-color-danger-highlight) !important}.sd-bg-light{background-color:var(--sd-color-light) !important}.sd-bg-text-light{color:var(--sd-color-light-text) !important}button.sd-bg-light:focus,button.sd-bg-light:hover{background-color:var(--sd-color-light-highlight) !important}a.sd-bg-light:focus,a.sd-bg-light:hover{background-color:var(--sd-color-light-highlight) !important}.sd-bg-muted{background-color:var(--sd-color-muted) !important}.sd-bg-text-muted{color:var(--sd-color-muted-text) !important}button.sd-bg-muted:focus,button.sd-bg-muted:hover{background-color:var(--sd-color-muted-highlight) !important}a.sd-bg-muted:focus,a.sd-bg-muted:hover{background-color:var(--sd-color-muted-highlight) !important}.sd-bg-dark{background-color:var(--sd-color-dark) !important}.sd-bg-text-dark{color:var(--sd-color-dark-text) !important}button.sd-bg-dark:focus,button.sd-bg-dark:hover{background-color:var(--sd-color-dark-highlight) !important}a.sd-bg-dark:focus,a.sd-bg-dark:hover{background-color:var(--sd-color-dark-highlight) !important}.sd-bg-black{background-color:var(--sd-color-black) !important}.sd-bg-text-black{color:var(--sd-color-black-text) !important}button.sd-bg-black:focus,button.sd-bg-black:hover{background-color:var(--sd-color-black-highlight) !important}a.sd-bg-black:focus,a.sd-bg-black:hover{background-color:var(--sd-color-black-highlight) !important}.sd-bg-white{background-color:var(--sd-color-white) !important}.sd-bg-text-white{color:var(--sd-color-white-text) !important}button.sd-bg-white:focus,button.sd-bg-white:hover{background-color:var(--sd-color-white-highlight) !important}a.sd-bg-white:focus,a.sd-bg-white:hover{background-color:var(--sd-color-white-highlight) !important}.sd-text-primary,.sd-text-primary>p{color:var(--sd-color-primary) !important}a.sd-text-primary:focus,a.sd-text-primary:hover{color:var(--sd-color-primary-highlight) !important}.sd-text-secondary,.sd-text-secondary>p{color:var(--sd-color-secondary) !important}a.sd-text-secondary:focus,a.sd-text-secondary:hover{color:var(--sd-color-secondary-highlight) 
!important}.sd-text-success,.sd-text-success>p{color:var(--sd-color-success) !important}a.sd-text-success:focus,a.sd-text-success:hover{color:var(--sd-color-success-highlight) !important}.sd-text-info,.sd-text-info>p{color:var(--sd-color-info) !important}a.sd-text-info:focus,a.sd-text-info:hover{color:var(--sd-color-info-highlight) !important}.sd-text-warning,.sd-text-warning>p{color:var(--sd-color-warning) !important}a.sd-text-warning:focus,a.sd-text-warning:hover{color:var(--sd-color-warning-highlight) !important}.sd-text-danger,.sd-text-danger>p{color:var(--sd-color-danger) !important}a.sd-text-danger:focus,a.sd-text-danger:hover{color:var(--sd-color-danger-highlight) !important}.sd-text-light,.sd-text-light>p{color:var(--sd-color-light) !important}a.sd-text-light:focus,a.sd-text-light:hover{color:var(--sd-color-light-highlight) !important}.sd-text-muted,.sd-text-muted>p{color:var(--sd-color-muted) !important}a.sd-text-muted:focus,a.sd-text-muted:hover{color:var(--sd-color-muted-highlight) !important}.sd-text-dark,.sd-text-dark>p{color:var(--sd-color-dark) !important}a.sd-text-dark:focus,a.sd-text-dark:hover{color:var(--sd-color-dark-highlight) !important}.sd-text-black,.sd-text-black>p{color:var(--sd-color-black) !important}a.sd-text-black:focus,a.sd-text-black:hover{color:var(--sd-color-black-highlight) !important}.sd-text-white,.sd-text-white>p{color:var(--sd-color-white) !important}a.sd-text-white:focus,a.sd-text-white:hover{color:var(--sd-color-white-highlight) !important}.sd-outline-primary{border-color:var(--sd-color-primary) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-primary:focus,a.sd-outline-primary:hover{border-color:var(--sd-color-primary-highlight) !important}.sd-outline-secondary{border-color:var(--sd-color-secondary) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-secondary:focus,a.sd-outline-secondary:hover{border-color:var(--sd-color-secondary-highlight) !important}.sd-outline-success{border-color:var(--sd-color-success) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-success:focus,a.sd-outline-success:hover{border-color:var(--sd-color-success-highlight) !important}.sd-outline-info{border-color:var(--sd-color-info) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-info:focus,a.sd-outline-info:hover{border-color:var(--sd-color-info-highlight) !important}.sd-outline-warning{border-color:var(--sd-color-warning) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-warning:focus,a.sd-outline-warning:hover{border-color:var(--sd-color-warning-highlight) !important}.sd-outline-danger{border-color:var(--sd-color-danger) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-danger:focus,a.sd-outline-danger:hover{border-color:var(--sd-color-danger-highlight) !important}.sd-outline-light{border-color:var(--sd-color-light) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-light:focus,a.sd-outline-light:hover{border-color:var(--sd-color-light-highlight) !important}.sd-outline-muted{border-color:var(--sd-color-muted) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-muted:focus,a.sd-outline-muted:hover{border-color:var(--sd-color-muted-highlight) !important}.sd-outline-dark{border-color:var(--sd-color-dark) !important;border-style:solid !important;border-width:1px 
!important}a.sd-outline-dark:focus,a.sd-outline-dark:hover{border-color:var(--sd-color-dark-highlight) !important}.sd-outline-black{border-color:var(--sd-color-black) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-black:focus,a.sd-outline-black:hover{border-color:var(--sd-color-black-highlight) !important}.sd-outline-white{border-color:var(--sd-color-white) !important;border-style:solid !important;border-width:1px !important}a.sd-outline-white:focus,a.sd-outline-white:hover{border-color:var(--sd-color-white-highlight) !important}.sd-bg-transparent{background-color:transparent !important}.sd-outline-transparent{border-color:transparent !important}.sd-text-transparent{color:transparent !important}.sd-p-0{padding:0 !important}.sd-pt-0,.sd-py-0{padding-top:0 !important}.sd-pr-0,.sd-px-0{padding-right:0 !important}.sd-pb-0,.sd-py-0{padding-bottom:0 !important}.sd-pl-0,.sd-px-0{padding-left:0 !important}.sd-p-1{padding:.25rem !important}.sd-pt-1,.sd-py-1{padding-top:.25rem !important}.sd-pr-1,.sd-px-1{padding-right:.25rem !important}.sd-pb-1,.sd-py-1{padding-bottom:.25rem !important}.sd-pl-1,.sd-px-1{padding-left:.25rem !important}.sd-p-2{padding:.5rem !important}.sd-pt-2,.sd-py-2{padding-top:.5rem !important}.sd-pr-2,.sd-px-2{padding-right:.5rem !important}.sd-pb-2,.sd-py-2{padding-bottom:.5rem !important}.sd-pl-2,.sd-px-2{padding-left:.5rem !important}.sd-p-3{padding:1rem !important}.sd-pt-3,.sd-py-3{padding-top:1rem !important}.sd-pr-3,.sd-px-3{padding-right:1rem !important}.sd-pb-3,.sd-py-3{padding-bottom:1rem !important}.sd-pl-3,.sd-px-3{padding-left:1rem !important}.sd-p-4{padding:1.5rem !important}.sd-pt-4,.sd-py-4{padding-top:1.5rem !important}.sd-pr-4,.sd-px-4{padding-right:1.5rem !important}.sd-pb-4,.sd-py-4{padding-bottom:1.5rem !important}.sd-pl-4,.sd-px-4{padding-left:1.5rem !important}.sd-p-5{padding:3rem !important}.sd-pt-5,.sd-py-5{padding-top:3rem !important}.sd-pr-5,.sd-px-5{padding-right:3rem !important}.sd-pb-5,.sd-py-5{padding-bottom:3rem !important}.sd-pl-5,.sd-px-5{padding-left:3rem !important}.sd-m-auto{margin:auto !important}.sd-mt-auto,.sd-my-auto{margin-top:auto !important}.sd-mr-auto,.sd-mx-auto{margin-right:auto !important}.sd-mb-auto,.sd-my-auto{margin-bottom:auto !important}.sd-ml-auto,.sd-mx-auto{margin-left:auto !important}.sd-m-0{margin:0 !important}.sd-mt-0,.sd-my-0{margin-top:0 !important}.sd-mr-0,.sd-mx-0{margin-right:0 !important}.sd-mb-0,.sd-my-0{margin-bottom:0 !important}.sd-ml-0,.sd-mx-0{margin-left:0 !important}.sd-m-1{margin:.25rem !important}.sd-mt-1,.sd-my-1{margin-top:.25rem !important}.sd-mr-1,.sd-mx-1{margin-right:.25rem !important}.sd-mb-1,.sd-my-1{margin-bottom:.25rem !important}.sd-ml-1,.sd-mx-1{margin-left:.25rem !important}.sd-m-2{margin:.5rem !important}.sd-mt-2,.sd-my-2{margin-top:.5rem !important}.sd-mr-2,.sd-mx-2{margin-right:.5rem !important}.sd-mb-2,.sd-my-2{margin-bottom:.5rem !important}.sd-ml-2,.sd-mx-2{margin-left:.5rem !important}.sd-m-3{margin:1rem !important}.sd-mt-3,.sd-my-3{margin-top:1rem !important}.sd-mr-3,.sd-mx-3{margin-right:1rem !important}.sd-mb-3,.sd-my-3{margin-bottom:1rem !important}.sd-ml-3,.sd-mx-3{margin-left:1rem !important}.sd-m-4{margin:1.5rem !important}.sd-mt-4,.sd-my-4{margin-top:1.5rem !important}.sd-mr-4,.sd-mx-4{margin-right:1.5rem !important}.sd-mb-4,.sd-my-4{margin-bottom:1.5rem !important}.sd-ml-4,.sd-mx-4{margin-left:1.5rem !important}.sd-m-5{margin:3rem !important}.sd-mt-5,.sd-my-5{margin-top:3rem !important}.sd-mr-5,.sd-mx-5{margin-right:3rem 
!important}.sd-mb-5,.sd-my-5{margin-bottom:3rem !important}.sd-ml-5,.sd-mx-5{margin-left:3rem !important}.sd-w-25{width:25% !important}.sd-w-50{width:50% !important}.sd-w-75{width:75% !important}.sd-w-100{width:100% !important}.sd-w-auto{width:auto !important}.sd-h-25{height:25% !important}.sd-h-50{height:50% !important}.sd-h-75{height:75% !important}.sd-h-100{height:100% !important}.sd-h-auto{height:auto !important}.sd-d-none{display:none !important}.sd-d-inline{display:inline !important}.sd-d-inline-block{display:inline-block !important}.sd-d-block{display:block !important}.sd-d-grid{display:grid !important}.sd-d-flex-row{display:-ms-flexbox !important;display:flex !important;flex-direction:row !important}.sd-d-flex-column{display:-ms-flexbox !important;display:flex !important;flex-direction:column !important}.sd-d-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}@media(min-width: 576px){.sd-d-sm-none{display:none !important}.sd-d-sm-inline{display:inline !important}.sd-d-sm-inline-block{display:inline-block !important}.sd-d-sm-block{display:block !important}.sd-d-sm-grid{display:grid !important}.sd-d-sm-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-sm-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}@media(min-width: 768px){.sd-d-md-none{display:none !important}.sd-d-md-inline{display:inline !important}.sd-d-md-inline-block{display:inline-block !important}.sd-d-md-block{display:block !important}.sd-d-md-grid{display:grid !important}.sd-d-md-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-md-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}@media(min-width: 992px){.sd-d-lg-none{display:none !important}.sd-d-lg-inline{display:inline !important}.sd-d-lg-inline-block{display:inline-block !important}.sd-d-lg-block{display:block !important}.sd-d-lg-grid{display:grid !important}.sd-d-lg-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-lg-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}@media(min-width: 1200px){.sd-d-xl-none{display:none !important}.sd-d-xl-inline{display:inline !important}.sd-d-xl-inline-block{display:inline-block !important}.sd-d-xl-block{display:block !important}.sd-d-xl-grid{display:grid !important}.sd-d-xl-flex{display:-ms-flexbox !important;display:flex !important}.sd-d-xl-inline-flex{display:-ms-inline-flexbox !important;display:inline-flex !important}}.sd-align-major-start{justify-content:flex-start !important}.sd-align-major-end{justify-content:flex-end !important}.sd-align-major-center{justify-content:center !important}.sd-align-major-justify{justify-content:space-between !important}.sd-align-major-spaced{justify-content:space-evenly !important}.sd-align-minor-start{align-items:flex-start !important}.sd-align-minor-end{align-items:flex-end !important}.sd-align-minor-center{align-items:center !important}.sd-align-minor-stretch{align-items:stretch !important}.sd-text-justify{text-align:justify !important}.sd-text-left{text-align:left !important}.sd-text-right{text-align:right !important}.sd-text-center{text-align:center !important}.sd-font-weight-light{font-weight:300 !important}.sd-font-weight-lighter{font-weight:lighter !important}.sd-font-weight-normal{font-weight:400 !important}.sd-font-weight-bold{font-weight:700 !important}.sd-font-weight-bolder{font-weight:bolder !important}.sd-font-italic{font-style:italic !important}.sd-text-decoration-none{text-decoration:none 
!important}.sd-text-lowercase{text-transform:lowercase !important}.sd-text-uppercase{text-transform:uppercase !important}.sd-text-capitalize{text-transform:capitalize !important}.sd-text-wrap{white-space:normal !important}.sd-text-nowrap{white-space:nowrap !important}.sd-text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.sd-fs-1,.sd-fs-1>p{font-size:calc(1.375rem + 1.5vw) !important;line-height:unset !important}.sd-fs-2,.sd-fs-2>p{font-size:calc(1.325rem + 0.9vw) !important;line-height:unset !important}.sd-fs-3,.sd-fs-3>p{font-size:calc(1.3rem + 0.6vw) !important;line-height:unset !important}.sd-fs-4,.sd-fs-4>p{font-size:calc(1.275rem + 0.3vw) !important;line-height:unset !important}.sd-fs-5,.sd-fs-5>p{font-size:1.25rem !important;line-height:unset !important}.sd-fs-6,.sd-fs-6>p{font-size:1rem !important;line-height:unset !important}.sd-border-0{border:0 solid !important}.sd-border-top-0{border-top:0 solid !important}.sd-border-bottom-0{border-bottom:0 solid !important}.sd-border-right-0{border-right:0 solid !important}.sd-border-left-0{border-left:0 solid !important}.sd-border-1{border:1px solid !important}.sd-border-top-1{border-top:1px solid !important}.sd-border-bottom-1{border-bottom:1px solid !important}.sd-border-right-1{border-right:1px solid !important}.sd-border-left-1{border-left:1px solid !important}.sd-border-2{border:2px solid !important}.sd-border-top-2{border-top:2px solid !important}.sd-border-bottom-2{border-bottom:2px solid !important}.sd-border-right-2{border-right:2px solid !important}.sd-border-left-2{border-left:2px solid !important}.sd-border-3{border:3px solid !important}.sd-border-top-3{border-top:3px solid !important}.sd-border-bottom-3{border-bottom:3px solid !important}.sd-border-right-3{border-right:3px solid !important}.sd-border-left-3{border-left:3px solid !important}.sd-border-4{border:4px solid !important}.sd-border-top-4{border-top:4px solid !important}.sd-border-bottom-4{border-bottom:4px solid !important}.sd-border-right-4{border-right:4px solid !important}.sd-border-left-4{border-left:4px solid !important}.sd-border-5{border:5px solid !important}.sd-border-top-5{border-top:5px solid !important}.sd-border-bottom-5{border-bottom:5px solid !important}.sd-border-right-5{border-right:5px solid !important}.sd-border-left-5{border-left:5px solid !important}.sd-rounded-0{border-radius:0 !important}.sd-rounded-1{border-radius:.2rem !important}.sd-rounded-2{border-radius:.3rem !important}.sd-rounded-3{border-radius:.5rem !important}.sd-rounded-pill{border-radius:50rem !important}.sd-rounded-circle{border-radius:50% !important}.shadow-none{box-shadow:none !important}.sd-shadow-sm{box-shadow:0 .125rem .25rem var(--sd-color-shadow) !important}.sd-shadow-md{box-shadow:0 .5rem 1rem var(--sd-color-shadow) !important}.sd-shadow-lg{box-shadow:0 1rem 3rem var(--sd-color-shadow) !important}@keyframes sd-slide-from-left{0%{transform:translateX(-100%)}100%{transform:translateX(0)}}@keyframes sd-slide-from-right{0%{transform:translateX(200%)}100%{transform:translateX(0)}}@keyframes sd-grow100{0%{transform:scale(0);opacity:.5}100%{transform:scale(1);opacity:1}}@keyframes sd-grow50{0%{transform:scale(0.5);opacity:.5}100%{transform:scale(1);opacity:1}}@keyframes sd-grow50-rot20{0%{transform:scale(0.5) rotateZ(-20deg);opacity:.5}75%{transform:scale(1) rotateZ(5deg);opacity:1}95%{transform:scale(1) rotateZ(-1deg);opacity:1}100%{transform:scale(1) rotateZ(0);opacity:1}}.sd-animate-slide-from-left{animation:1s ease-out 0s 1 normal none running 
sd-slide-from-left}.sd-animate-slide-from-right{animation:1s ease-out 0s 1 normal none running sd-slide-from-right}.sd-animate-grow100{animation:1s ease-out 0s 1 normal none running sd-grow100}.sd-animate-grow50{animation:1s ease-out 0s 1 normal none running sd-grow50}.sd-animate-grow50-rot20{animation:1s ease-out 0s 1 normal none running sd-grow50-rot20}.sd-badge{display:inline-block;padding:.35em .65em;font-size:.75em;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem}.sd-badge:empty{display:none}a.sd-badge{text-decoration:none}.sd-btn .sd-badge{position:relative;top:-1px}.sd-btn{background-color:transparent;border:1px solid transparent;border-radius:.25rem;cursor:pointer;display:inline-block;font-weight:400;font-size:1rem;line-height:1.5;padding:.375rem .75rem;text-align:center;text-decoration:none;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;vertical-align:middle;user-select:none;-moz-user-select:none;-ms-user-select:none;-webkit-user-select:none}.sd-btn:hover{text-decoration:none}@media(prefers-reduced-motion: reduce){.sd-btn{transition:none}}.sd-btn-primary,.sd-btn-outline-primary:hover,.sd-btn-outline-primary:focus{color:var(--sd-color-primary-text) !important;background-color:var(--sd-color-primary) !important;border-color:var(--sd-color-primary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-primary:hover,.sd-btn-primary:focus{color:var(--sd-color-primary-text) !important;background-color:var(--sd-color-primary-highlight) !important;border-color:var(--sd-color-primary-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-primary{color:var(--sd-color-primary) !important;border-color:var(--sd-color-primary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-secondary,.sd-btn-outline-secondary:hover,.sd-btn-outline-secondary:focus{color:var(--sd-color-secondary-text) !important;background-color:var(--sd-color-secondary) !important;border-color:var(--sd-color-secondary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-secondary:hover,.sd-btn-secondary:focus{color:var(--sd-color-secondary-text) !important;background-color:var(--sd-color-secondary-highlight) !important;border-color:var(--sd-color-secondary-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-secondary{color:var(--sd-color-secondary) !important;border-color:var(--sd-color-secondary) !important;border-width:1px !important;border-style:solid !important}.sd-btn-success,.sd-btn-outline-success:hover,.sd-btn-outline-success:focus{color:var(--sd-color-success-text) !important;background-color:var(--sd-color-success) !important;border-color:var(--sd-color-success) !important;border-width:1px !important;border-style:solid !important}.sd-btn-success:hover,.sd-btn-success:focus{color:var(--sd-color-success-text) !important;background-color:var(--sd-color-success-highlight) !important;border-color:var(--sd-color-success-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-success{color:var(--sd-color-success) !important;border-color:var(--sd-color-success) !important;border-width:1px !important;border-style:solid !important}.sd-btn-info,.sd-btn-outline-info:hover,.sd-btn-outline-info:focus{color:var(--sd-color-info-text) !important;background-color:var(--sd-color-info) 
!important;border-color:var(--sd-color-info) !important;border-width:1px !important;border-style:solid !important}.sd-btn-info:hover,.sd-btn-info:focus{color:var(--sd-color-info-text) !important;background-color:var(--sd-color-info-highlight) !important;border-color:var(--sd-color-info-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-info{color:var(--sd-color-info) !important;border-color:var(--sd-color-info) !important;border-width:1px !important;border-style:solid !important}.sd-btn-warning,.sd-btn-outline-warning:hover,.sd-btn-outline-warning:focus{color:var(--sd-color-warning-text) !important;background-color:var(--sd-color-warning) !important;border-color:var(--sd-color-warning) !important;border-width:1px !important;border-style:solid !important}.sd-btn-warning:hover,.sd-btn-warning:focus{color:var(--sd-color-warning-text) !important;background-color:var(--sd-color-warning-highlight) !important;border-color:var(--sd-color-warning-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-warning{color:var(--sd-color-warning) !important;border-color:var(--sd-color-warning) !important;border-width:1px !important;border-style:solid !important}.sd-btn-danger,.sd-btn-outline-danger:hover,.sd-btn-outline-danger:focus{color:var(--sd-color-danger-text) !important;background-color:var(--sd-color-danger) !important;border-color:var(--sd-color-danger) !important;border-width:1px !important;border-style:solid !important}.sd-btn-danger:hover,.sd-btn-danger:focus{color:var(--sd-color-danger-text) !important;background-color:var(--sd-color-danger-highlight) !important;border-color:var(--sd-color-danger-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-danger{color:var(--sd-color-danger) !important;border-color:var(--sd-color-danger) !important;border-width:1px !important;border-style:solid !important}.sd-btn-light,.sd-btn-outline-light:hover,.sd-btn-outline-light:focus{color:var(--sd-color-light-text) !important;background-color:var(--sd-color-light) !important;border-color:var(--sd-color-light) !important;border-width:1px !important;border-style:solid !important}.sd-btn-light:hover,.sd-btn-light:focus{color:var(--sd-color-light-text) !important;background-color:var(--sd-color-light-highlight) !important;border-color:var(--sd-color-light-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-light{color:var(--sd-color-light) !important;border-color:var(--sd-color-light) !important;border-width:1px !important;border-style:solid !important}.sd-btn-muted,.sd-btn-outline-muted:hover,.sd-btn-outline-muted:focus{color:var(--sd-color-muted-text) !important;background-color:var(--sd-color-muted) !important;border-color:var(--sd-color-muted) !important;border-width:1px !important;border-style:solid !important}.sd-btn-muted:hover,.sd-btn-muted:focus{color:var(--sd-color-muted-text) !important;background-color:var(--sd-color-muted-highlight) !important;border-color:var(--sd-color-muted-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-muted{color:var(--sd-color-muted) !important;border-color:var(--sd-color-muted) !important;border-width:1px !important;border-style:solid !important}.sd-btn-dark,.sd-btn-outline-dark:hover,.sd-btn-outline-dark:focus{color:var(--sd-color-dark-text) !important;background-color:var(--sd-color-dark) !important;border-color:var(--sd-color-dark) 
!important;border-width:1px !important;border-style:solid !important}.sd-btn-dark:hover,.sd-btn-dark:focus{color:var(--sd-color-dark-text) !important;background-color:var(--sd-color-dark-highlight) !important;border-color:var(--sd-color-dark-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-dark{color:var(--sd-color-dark) !important;border-color:var(--sd-color-dark) !important;border-width:1px !important;border-style:solid !important}.sd-btn-black,.sd-btn-outline-black:hover,.sd-btn-outline-black:focus{color:var(--sd-color-black-text) !important;background-color:var(--sd-color-black) !important;border-color:var(--sd-color-black) !important;border-width:1px !important;border-style:solid !important}.sd-btn-black:hover,.sd-btn-black:focus{color:var(--sd-color-black-text) !important;background-color:var(--sd-color-black-highlight) !important;border-color:var(--sd-color-black-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-black{color:var(--sd-color-black) !important;border-color:var(--sd-color-black) !important;border-width:1px !important;border-style:solid !important}.sd-btn-white,.sd-btn-outline-white:hover,.sd-btn-outline-white:focus{color:var(--sd-color-white-text) !important;background-color:var(--sd-color-white) !important;border-color:var(--sd-color-white) !important;border-width:1px !important;border-style:solid !important}.sd-btn-white:hover,.sd-btn-white:focus{color:var(--sd-color-white-text) !important;background-color:var(--sd-color-white-highlight) !important;border-color:var(--sd-color-white-highlight) !important;border-width:1px !important;border-style:solid !important}.sd-btn-outline-white{color:var(--sd-color-white) !important;border-color:var(--sd-color-white) !important;border-width:1px !important;border-style:solid !important}.sd-stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;content:""}.sd-hide-link-text{font-size:0}.sd-octicon,.sd-material-icon{display:inline-block;fill:currentColor;vertical-align:middle}.sd-avatar-xs{border-radius:50%;object-fit:cover;object-position:center;width:1rem;height:1rem}.sd-avatar-sm{border-radius:50%;object-fit:cover;object-position:center;width:3rem;height:3rem}.sd-avatar-md{border-radius:50%;object-fit:cover;object-position:center;width:5rem;height:5rem}.sd-avatar-lg{border-radius:50%;object-fit:cover;object-position:center;width:7rem;height:7rem}.sd-avatar-xl{border-radius:50%;object-fit:cover;object-position:center;width:10rem;height:10rem}.sd-avatar-inherit{border-radius:50%;object-fit:cover;object-position:center;width:inherit;height:inherit}.sd-avatar-initial{border-radius:50%;object-fit:cover;object-position:center;width:initial;height:initial}.sd-card{background-clip:border-box;background-color:var(--sd-color-card-background);border:1px solid var(--sd-color-card-border);border-radius:.25rem;color:var(--sd-color-card-text);display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;position:relative;word-wrap:break-word}.sd-card>hr{margin-left:0;margin-right:0}.sd-card-hover:hover{border-color:var(--sd-color-card-border-hover);transform:scale(1.01)}.sd-card-body{-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem 1rem}.sd-card-title{margin-bottom:.5rem}.sd-card-subtitle{margin-top:-0.25rem;margin-bottom:0}.sd-card-text:last-child{margin-bottom:0}.sd-card-link:hover{text-decoration:none}.sd-card-link+.card-link{margin-left:1rem}.sd-card-header{padding:.5rem 
1rem;margin-bottom:0;background-color:var(--sd-color-card-header);border-bottom:1px solid var(--sd-color-card-border)}.sd-card-header:first-child{border-radius:calc(0.25rem - 1px) calc(0.25rem - 1px) 0 0}.sd-card-footer{padding:.5rem 1rem;background-color:var(--sd-color-card-footer);border-top:1px solid var(--sd-color-card-border)}.sd-card-footer:last-child{border-radius:0 0 calc(0.25rem - 1px) calc(0.25rem - 1px)}.sd-card-header-tabs{margin-right:-0.5rem;margin-bottom:-0.5rem;margin-left:-0.5rem;border-bottom:0}.sd-card-header-pills{margin-right:-0.5rem;margin-left:-0.5rem}.sd-card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1rem;border-radius:calc(0.25rem - 1px)}.sd-card-img,.sd-card-img-bottom,.sd-card-img-top{width:100%}.sd-card-img,.sd-card-img-top{border-top-left-radius:calc(0.25rem - 1px);border-top-right-radius:calc(0.25rem - 1px)}.sd-card-img,.sd-card-img-bottom{border-bottom-left-radius:calc(0.25rem - 1px);border-bottom-right-radius:calc(0.25rem - 1px)}.sd-cards-carousel{width:100%;display:flex;flex-wrap:nowrap;-ms-flex-direction:row;flex-direction:row;overflow-x:hidden;scroll-snap-type:x mandatory}.sd-cards-carousel.sd-show-scrollbar{overflow-x:auto}.sd-cards-carousel:hover,.sd-cards-carousel:focus{overflow-x:auto}.sd-cards-carousel>.sd-card{flex-shrink:0;scroll-snap-align:start}.sd-cards-carousel>.sd-card:not(:last-child){margin-right:3px}.sd-card-cols-1>.sd-card{width:90%}.sd-card-cols-2>.sd-card{width:45%}.sd-card-cols-3>.sd-card{width:30%}.sd-card-cols-4>.sd-card{width:22.5%}.sd-card-cols-5>.sd-card{width:18%}.sd-card-cols-6>.sd-card{width:15%}.sd-card-cols-7>.sd-card{width:12.8571428571%}.sd-card-cols-8>.sd-card{width:11.25%}.sd-card-cols-9>.sd-card{width:10%}.sd-card-cols-10>.sd-card{width:9%}.sd-card-cols-11>.sd-card{width:8.1818181818%}.sd-card-cols-12>.sd-card{width:7.5%}.sd-container,.sd-container-fluid,.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container-xl{margin-left:auto;margin-right:auto;padding-left:var(--sd-gutter-x, 0.75rem);padding-right:var(--sd-gutter-x, 0.75rem);width:100%}@media(min-width: 576px){.sd-container-sm,.sd-container{max-width:540px}}@media(min-width: 768px){.sd-container-md,.sd-container-sm,.sd-container{max-width:720px}}@media(min-width: 992px){.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container{max-width:960px}}@media(min-width: 1200px){.sd-container-xl,.sd-container-lg,.sd-container-md,.sd-container-sm,.sd-container{max-width:1140px}}.sd-row{--sd-gutter-x: 1.5rem;--sd-gutter-y: 0;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-top:calc(var(--sd-gutter-y) * -1);margin-right:calc(var(--sd-gutter-x) * -0.5);margin-left:calc(var(--sd-gutter-x) * -0.5)}.sd-row>*{box-sizing:border-box;flex-shrink:0;width:100%;max-width:100%;padding-right:calc(var(--sd-gutter-x) * 0.5);padding-left:calc(var(--sd-gutter-x) * 0.5);margin-top:var(--sd-gutter-y)}.sd-col{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-auto>*{flex:0 0 auto;width:auto}.sd-row-cols-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-9>*{flex:0 0 auto;-ms-flex:0 0 
auto;width:11.1111111111%}.sd-row-cols-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}@media(min-width: 576px){.sd-col-sm{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-sm-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-sm-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-sm-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-sm-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-sm-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-sm-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-sm-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-sm-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-sm-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-sm-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-sm-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-sm-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-sm-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 768px){.sd-col-md{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-md-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-md-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-md-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-md-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-md-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-md-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-md-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-md-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-md-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-md-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-md-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-md-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-md-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 992px){.sd-col-lg{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-lg-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-lg-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-lg-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-lg-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-lg-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-lg-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-lg-6>*{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-row-cols-lg-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-lg-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-lg-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-lg-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-lg-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-lg-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}@media(min-width: 1200px){.sd-col-xl{flex:1 0 0%;-ms-flex:1 0 0%}.sd-row-cols-xl-auto{flex:1 0 auto;-ms-flex:1 0 auto;width:100%}.sd-row-cols-xl-1>*{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-row-cols-xl-2>*{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-row-cols-xl-3>*{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-row-cols-xl-4>*{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-row-cols-xl-5>*{flex:0 0 auto;-ms-flex:0 0 auto;width:20%}.sd-row-cols-xl-6>*{flex:0 0 auto;-ms-flex:0 0 
auto;width:16.6666666667%}.sd-row-cols-xl-7>*{flex:0 0 auto;-ms-flex:0 0 auto;width:14.2857142857%}.sd-row-cols-xl-8>*{flex:0 0 auto;-ms-flex:0 0 auto;width:12.5%}.sd-row-cols-xl-9>*{flex:0 0 auto;-ms-flex:0 0 auto;width:11.1111111111%}.sd-row-cols-xl-10>*{flex:0 0 auto;-ms-flex:0 0 auto;width:10%}.sd-row-cols-xl-11>*{flex:0 0 auto;-ms-flex:0 0 auto;width:9.0909090909%}.sd-row-cols-xl-12>*{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}}.sd-col-auto{flex:0 0 auto;-ms-flex:0 0 auto;width:auto}.sd-col-1{flex:0 0 auto;-ms-flex:0 0 auto;width:8.3333333333%}.sd-col-2{flex:0 0 auto;-ms-flex:0 0 auto;width:16.6666666667%}.sd-col-3{flex:0 0 auto;-ms-flex:0 0 auto;width:25%}.sd-col-4{flex:0 0 auto;-ms-flex:0 0 auto;width:33.3333333333%}.sd-col-5{flex:0 0 auto;-ms-flex:0 0 auto;width:41.6666666667%}.sd-col-6{flex:0 0 auto;-ms-flex:0 0 auto;width:50%}.sd-col-7{flex:0 0 auto;-ms-flex:0 0 auto;width:58.3333333333%}.sd-col-8{flex:0 0 auto;-ms-flex:0 0 auto;width:66.6666666667%}.sd-col-9{flex:0 0 auto;-ms-flex:0 0 auto;width:75%}.sd-col-10{flex:0 0 auto;-ms-flex:0 0 auto;width:83.3333333333%}.sd-col-11{flex:0 0 auto;-ms-flex:0 0 auto;width:91.6666666667%}.sd-col-12{flex:0 0 auto;-ms-flex:0 0 auto;width:100%}.sd-g-0,.sd-gy-0{--sd-gutter-y: 0}.sd-g-0,.sd-gx-0{--sd-gutter-x: 0}.sd-g-1,.sd-gy-1{--sd-gutter-y: 0.25rem}.sd-g-1,.sd-gx-1{--sd-gutter-x: 0.25rem}.sd-g-2,.sd-gy-2{--sd-gutter-y: 0.5rem}.sd-g-2,.sd-gx-2{--sd-gutter-x: 0.5rem}.sd-g-3,.sd-gy-3{--sd-gutter-y: 1rem}.sd-g-3,.sd-gx-3{--sd-gutter-x: 1rem}.sd-g-4,.sd-gy-4{--sd-gutter-y: 1.5rem}.sd-g-4,.sd-gx-4{--sd-gutter-x: 1.5rem}.sd-g-5,.sd-gy-5{--sd-gutter-y: 3rem}.sd-g-5,.sd-gx-5{--sd-gutter-x: 3rem}@media(min-width: 576px){.sd-col-sm-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-sm-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-sm-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-sm-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-sm-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-sm-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-sm-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-sm-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-sm-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-sm-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-sm-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-sm-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-sm-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-sm-0,.sd-gy-sm-0{--sd-gutter-y: 0}.sd-g-sm-0,.sd-gx-sm-0{--sd-gutter-x: 0}.sd-g-sm-1,.sd-gy-sm-1{--sd-gutter-y: 0.25rem}.sd-g-sm-1,.sd-gx-sm-1{--sd-gutter-x: 0.25rem}.sd-g-sm-2,.sd-gy-sm-2{--sd-gutter-y: 0.5rem}.sd-g-sm-2,.sd-gx-sm-2{--sd-gutter-x: 0.5rem}.sd-g-sm-3,.sd-gy-sm-3{--sd-gutter-y: 1rem}.sd-g-sm-3,.sd-gx-sm-3{--sd-gutter-x: 1rem}.sd-g-sm-4,.sd-gy-sm-4{--sd-gutter-y: 1.5rem}.sd-g-sm-4,.sd-gx-sm-4{--sd-gutter-x: 1.5rem}.sd-g-sm-5,.sd-gy-sm-5{--sd-gutter-y: 3rem}.sd-g-sm-5,.sd-gx-sm-5{--sd-gutter-x: 3rem}}@media(min-width: 768px){.sd-col-md-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-md-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-md-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-md-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-md-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-md-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-md-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-md-7{-ms-flex:0 0 auto;flex:0 0 
auto;width:58.3333333333%}.sd-col-md-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-md-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-md-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-md-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-md-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-md-0,.sd-gy-md-0{--sd-gutter-y: 0}.sd-g-md-0,.sd-gx-md-0{--sd-gutter-x: 0}.sd-g-md-1,.sd-gy-md-1{--sd-gutter-y: 0.25rem}.sd-g-md-1,.sd-gx-md-1{--sd-gutter-x: 0.25rem}.sd-g-md-2,.sd-gy-md-2{--sd-gutter-y: 0.5rem}.sd-g-md-2,.sd-gx-md-2{--sd-gutter-x: 0.5rem}.sd-g-md-3,.sd-gy-md-3{--sd-gutter-y: 1rem}.sd-g-md-3,.sd-gx-md-3{--sd-gutter-x: 1rem}.sd-g-md-4,.sd-gy-md-4{--sd-gutter-y: 1.5rem}.sd-g-md-4,.sd-gx-md-4{--sd-gutter-x: 1.5rem}.sd-g-md-5,.sd-gy-md-5{--sd-gutter-y: 3rem}.sd-g-md-5,.sd-gx-md-5{--sd-gutter-x: 3rem}}@media(min-width: 992px){.sd-col-lg-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-lg-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-lg-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-lg-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-lg-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-lg-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-lg-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-lg-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-lg-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-lg-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-lg-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-lg-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-lg-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-lg-0,.sd-gy-lg-0{--sd-gutter-y: 0}.sd-g-lg-0,.sd-gx-lg-0{--sd-gutter-x: 0}.sd-g-lg-1,.sd-gy-lg-1{--sd-gutter-y: 0.25rem}.sd-g-lg-1,.sd-gx-lg-1{--sd-gutter-x: 0.25rem}.sd-g-lg-2,.sd-gy-lg-2{--sd-gutter-y: 0.5rem}.sd-g-lg-2,.sd-gx-lg-2{--sd-gutter-x: 0.5rem}.sd-g-lg-3,.sd-gy-lg-3{--sd-gutter-y: 1rem}.sd-g-lg-3,.sd-gx-lg-3{--sd-gutter-x: 1rem}.sd-g-lg-4,.sd-gy-lg-4{--sd-gutter-y: 1.5rem}.sd-g-lg-4,.sd-gx-lg-4{--sd-gutter-x: 1.5rem}.sd-g-lg-5,.sd-gy-lg-5{--sd-gutter-y: 3rem}.sd-g-lg-5,.sd-gx-lg-5{--sd-gutter-x: 3rem}}@media(min-width: 1200px){.sd-col-xl-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto}.sd-col-xl-1{-ms-flex:0 0 auto;flex:0 0 auto;width:8.3333333333%}.sd-col-xl-2{-ms-flex:0 0 auto;flex:0 0 auto;width:16.6666666667%}.sd-col-xl-3{-ms-flex:0 0 auto;flex:0 0 auto;width:25%}.sd-col-xl-4{-ms-flex:0 0 auto;flex:0 0 auto;width:33.3333333333%}.sd-col-xl-5{-ms-flex:0 0 auto;flex:0 0 auto;width:41.6666666667%}.sd-col-xl-6{-ms-flex:0 0 auto;flex:0 0 auto;width:50%}.sd-col-xl-7{-ms-flex:0 0 auto;flex:0 0 auto;width:58.3333333333%}.sd-col-xl-8{-ms-flex:0 0 auto;flex:0 0 auto;width:66.6666666667%}.sd-col-xl-9{-ms-flex:0 0 auto;flex:0 0 auto;width:75%}.sd-col-xl-10{-ms-flex:0 0 auto;flex:0 0 auto;width:83.3333333333%}.sd-col-xl-11{-ms-flex:0 0 auto;flex:0 0 auto;width:91.6666666667%}.sd-col-xl-12{-ms-flex:0 0 auto;flex:0 0 auto;width:100%}.sd-g-xl-0,.sd-gy-xl-0{--sd-gutter-y: 0}.sd-g-xl-0,.sd-gx-xl-0{--sd-gutter-x: 0}.sd-g-xl-1,.sd-gy-xl-1{--sd-gutter-y: 0.25rem}.sd-g-xl-1,.sd-gx-xl-1{--sd-gutter-x: 0.25rem}.sd-g-xl-2,.sd-gy-xl-2{--sd-gutter-y: 0.5rem}.sd-g-xl-2,.sd-gx-xl-2{--sd-gutter-x: 0.5rem}.sd-g-xl-3,.sd-gy-xl-3{--sd-gutter-y: 1rem}.sd-g-xl-3,.sd-gx-xl-3{--sd-gutter-x: 1rem}.sd-g-xl-4,.sd-gy-xl-4{--sd-gutter-y: 1.5rem}.sd-g-xl-4,.sd-gx-xl-4{--sd-gutter-x: 1.5rem}.sd-g-xl-5,.sd-gy-xl-5{--sd-gutter-y: 
3rem}.sd-g-xl-5,.sd-gx-xl-5{--sd-gutter-x: 3rem}}.sd-flex-row-reverse{flex-direction:row-reverse !important}details.sd-dropdown{position:relative}details.sd-dropdown .sd-summary-title{font-weight:700;padding-right:3em !important;-moz-user-select:none;-ms-user-select:none;-webkit-user-select:none;user-select:none}details.sd-dropdown:hover{cursor:pointer}details.sd-dropdown .sd-summary-content{cursor:default}details.sd-dropdown summary{list-style:none;padding:1em}details.sd-dropdown summary .sd-octicon.no-title{vertical-align:middle}details.sd-dropdown[open] summary .sd-octicon.no-title{visibility:hidden}details.sd-dropdown summary::-webkit-details-marker{display:none}details.sd-dropdown summary:focus{outline:none}details.sd-dropdown .sd-summary-icon{margin-right:.5em}details.sd-dropdown .sd-summary-icon svg{opacity:.8}details.sd-dropdown summary:hover .sd-summary-up svg,details.sd-dropdown summary:hover .sd-summary-down svg{opacity:1;transform:scale(1.1)}details.sd-dropdown .sd-summary-up svg,details.sd-dropdown .sd-summary-down svg{display:block;opacity:.6}details.sd-dropdown .sd-summary-up,details.sd-dropdown .sd-summary-down{pointer-events:none;position:absolute;right:1em;top:1em}details.sd-dropdown[open]>.sd-summary-title .sd-summary-down{visibility:hidden}details.sd-dropdown:not([open])>.sd-summary-title .sd-summary-up{visibility:hidden}details.sd-dropdown:not([open]).sd-card{border:none}details.sd-dropdown:not([open])>.sd-card-header{border:1px solid var(--sd-color-card-border);border-radius:.25rem}details.sd-dropdown.sd-fade-in[open] summary~*{-moz-animation:sd-fade-in .5s ease-in-out;-webkit-animation:sd-fade-in .5s ease-in-out;animation:sd-fade-in .5s ease-in-out}details.sd-dropdown.sd-fade-in-slide-down[open] summary~*{-moz-animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out;-webkit-animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out;animation:sd-fade-in .5s ease-in-out,sd-slide-down .5s ease-in-out}.sd-col>.sd-dropdown{width:100%}.sd-summary-content>.sd-tab-set:first-child{margin-top:0}@keyframes sd-fade-in{0%{opacity:0}100%{opacity:1}}@keyframes sd-slide-down{0%{transform:translate(0, -10px)}100%{transform:translate(0, 0)}}.sd-tab-set{border-radius:.125rem;display:flex;flex-wrap:wrap;margin:1em 0;position:relative}.sd-tab-set>input{opacity:0;position:absolute}.sd-tab-set>input:checked+label{border-color:var(--sd-color-tabs-underline-active);color:var(--sd-color-tabs-label-active)}.sd-tab-set>input:checked+label+.sd-tab-content{display:block}.sd-tab-set>input:not(:checked)+label:hover{color:var(--sd-color-tabs-label-hover);border-color:var(--sd-color-tabs-underline-hover)}.sd-tab-set>input:focus+label{outline-style:auto}.sd-tab-set>input:not(.focus-visible)+label{outline:none;-webkit-tap-highlight-color:transparent}.sd-tab-set>label{border-bottom:.125rem solid transparent;margin-bottom:0;color:var(--sd-color-tabs-label-inactive);border-color:var(--sd-color-tabs-underline-inactive);cursor:pointer;font-size:var(--sd-fontsize-tabs-label);font-weight:700;padding:1em 1.25em .5em;transition:color 250ms;width:auto;z-index:1}html .sd-tab-set>label:hover{color:var(--sd-color-tabs-label-active)}.sd-col>.sd-tab-set{width:100%}.sd-tab-content{box-shadow:0 -0.0625rem var(--sd-color-tabs-overline),0 .0625rem var(--sd-color-tabs-underline);display:none;order:99;padding-bottom:.75rem;padding-top:.75rem;width:100%}.sd-tab-content>:first-child{margin-top:0 !important}.sd-tab-content>:last-child{margin-bottom:0 
!important}.sd-tab-content>.sd-tab-set{margin:0}.sd-sphinx-override,.sd-sphinx-override *{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}.sd-sphinx-override p{margin-top:0}:root{--sd-color-primary: #0071bc;--sd-color-secondary: #6c757d;--sd-color-success: #28a745;--sd-color-info: #17a2b8;--sd-color-warning: #f0b37e;--sd-color-danger: #dc3545;--sd-color-light: #f8f9fa;--sd-color-muted: #6c757d;--sd-color-dark: #212529;--sd-color-black: black;--sd-color-white: white;--sd-color-primary-highlight: #0060a0;--sd-color-secondary-highlight: #5c636a;--sd-color-success-highlight: #228e3b;--sd-color-info-highlight: #148a9c;--sd-color-warning-highlight: #cc986b;--sd-color-danger-highlight: #bb2d3b;--sd-color-light-highlight: #d3d4d5;--sd-color-muted-highlight: #5c636a;--sd-color-dark-highlight: #1c1f23;--sd-color-black-highlight: black;--sd-color-white-highlight: #d9d9d9;--sd-color-primary-text: #fff;--sd-color-secondary-text: #fff;--sd-color-success-text: #fff;--sd-color-info-text: #fff;--sd-color-warning-text: #212529;--sd-color-danger-text: #fff;--sd-color-light-text: #212529;--sd-color-muted-text: #fff;--sd-color-dark-text: #fff;--sd-color-black-text: #fff;--sd-color-white-text: #212529;--sd-color-shadow: rgba(0, 0, 0, 0.15);--sd-color-card-border: rgba(0, 0, 0, 0.125);--sd-color-card-border-hover: hsla(231, 99%, 66%, 1);--sd-color-card-background: transparent;--sd-color-card-text: inherit;--sd-color-card-header: transparent;--sd-color-card-footer: transparent;--sd-color-tabs-label-active: hsla(231, 99%, 66%, 1);--sd-color-tabs-label-hover: hsla(231, 99%, 66%, 1);--sd-color-tabs-label-inactive: hsl(0, 0%, 66%);--sd-color-tabs-underline-active: hsla(231, 99%, 66%, 1);--sd-color-tabs-underline-hover: rgba(178, 206, 245, 0.62);--sd-color-tabs-underline-inactive: transparent;--sd-color-tabs-overline: rgb(222, 222, 222);--sd-color-tabs-underline: rgb(222, 222, 222);--sd-fontsize-tabs-label: 1rem} diff --git a/v1.1.0/_static/design-tabs.js b/v1.1.0/_static/design-tabs.js new file mode 100644 index 0000000..36b38cf --- /dev/null +++ b/v1.1.0/_static/design-tabs.js @@ -0,0 +1,27 @@ +var sd_labels_by_text = {}; + +function ready() { + const li = document.getElementsByClassName("sd-tab-label"); + for (const label of li) { + syncId = label.getAttribute("data-sync-id"); + if (syncId) { + label.onclick = onLabelClick; + if (!sd_labels_by_text[syncId]) { + sd_labels_by_text[syncId] = []; + } + sd_labels_by_text[syncId].push(label); + } + } +} + +function onLabelClick() { + // Activate other inputs with the same sync id. + syncId = this.getAttribute("data-sync-id"); + for (label of sd_labels_by_text[syncId]) { + if (label === this) continue; + label.previousElementSibling.checked = true; + } + window.localStorage.setItem("sphinx-design-last-tab", syncId); +} + +document.addEventListener("DOMContentLoaded", ready, false); diff --git a/v1.1.0/_static/doctools.js b/v1.1.0/_static/doctools.js new file mode 100644 index 0000000..527b876 --- /dev/null +++ b/v1.1.0/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } 
+ } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/v1.1.0/_static/documentation_options.js b/v1.1.0/_static/documentation_options.js new file mode 100644 index 0000000..995f333 --- /dev/null +++ b/v1.1.0/_static/documentation_options.js @@ -0,0 +1,14 @@ +var DOCUMENTATION_OPTIONS = { + URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), + VERSION: '1.0.0', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/v1.1.0/_static/file.png b/v1.1.0/_static/file.png new file mode 100644 index 0000000..a858a41 Binary files /dev/null and b/v1.1.0/_static/file.png differ diff --git a/v1.1.0/_static/jquery-3.6.0.js b/v1.1.0/_static/jquery-3.6.0.js new file mode 100644 index 0000000..fc6c299 --- /dev/null +++ b/v1.1.0/_static/jquery-3.6.0.js @@ -0,0 +1,10881 @@ +/*! + * jQuery JavaScript Library v3.6.0 + * https://jquery.com/ + * + * Includes Sizzle.js + * https://sizzlejs.com/ + * + * Copyright OpenJS Foundation and other contributors + * Released under the MIT license + * https://jquery.org/license + * + * Date: 2021-03-02T17:08Z + */ +( function( global, factory ) { + + "use strict"; + + if ( typeof module === "object" && typeof module.exports === "object" ) { + + // For CommonJS and CommonJS-like environments where a proper `window` + // is present, execute the factory and get jQuery. + // For environments that do not have a `window` with a `document` + // (such as Node.js), expose a factory as module.exports. + // This accentuates the need for the creation of a real `window`. + // e.g. var jQuery = require("jquery")(window); + // See ticket #14549 for more info. + module.exports = global.document ? + factory( global, true ) : + function( w ) { + if ( !w.document ) { + throw new Error( "jQuery requires a window with a document" ); + } + return factory( w ); + }; + } else { + factory( global ); + } + +// Pass this if window is not defined yet +} )( typeof window !== "undefined" ? window : this, function( window, noGlobal ) { + +// Edge <= 12 - 13+, Firefox <=18 - 45+, IE 10 - 11, Safari 5.1 - 9+, iOS 6 - 9.1 +// throw exceptions when non-strict code (e.g., ASP.NET 4.5) accesses strict mode +// arguments.callee.caller (trac-13335). But as of jQuery 3.0 (2016), strict mode should be common +// enough that all such attempts are guarded in a try block. +"use strict"; + +var arr = []; + +var getProto = Object.getPrototypeOf; + +var slice = arr.slice; + +var flat = arr.flat ? 
function( array ) { + return arr.flat.call( array ); +} : function( array ) { + return arr.concat.apply( [], array ); +}; + + +var push = arr.push; + +var indexOf = arr.indexOf; + +var class2type = {}; + +var toString = class2type.toString; + +var hasOwn = class2type.hasOwnProperty; + +var fnToString = hasOwn.toString; + +var ObjectFunctionString = fnToString.call( Object ); + +var support = {}; + +var isFunction = function isFunction( obj ) { + + // Support: Chrome <=57, Firefox <=52 + // In some browsers, typeof returns "function" for HTML elements + // (i.e., `typeof document.createElement( "object" ) === "function"`). + // We don't want to classify *any* DOM node as a function. + // Support: QtWeb <=3.8.5, WebKit <=534.34, wkhtmltopdf tool <=0.12.5 + // Plus for old WebKit, typeof returns "function" for HTML collections + // (e.g., `typeof document.getElementsByTagName("div") === "function"`). (gh-4756) + return typeof obj === "function" && typeof obj.nodeType !== "number" && + typeof obj.item !== "function"; + }; + + +var isWindow = function isWindow( obj ) { + return obj != null && obj === obj.window; + }; + + +var document = window.document; + + + + var preservedScriptAttributes = { + type: true, + src: true, + nonce: true, + noModule: true + }; + + function DOMEval( code, node, doc ) { + doc = doc || document; + + var i, val, + script = doc.createElement( "script" ); + + script.text = code; + if ( node ) { + for ( i in preservedScriptAttributes ) { + + // Support: Firefox 64+, Edge 18+ + // Some browsers don't support the "nonce" property on scripts. + // On the other hand, just using `getAttribute` is not enough as + // the `nonce` attribute is reset to an empty string whenever it + // becomes browsing-context connected. + // See https://github.com/whatwg/html/issues/2369 + // See https://html.spec.whatwg.org/#nonce-attributes + // The `node.getAttribute` check was added for the sake of + // `jQuery.globalEval` so that it can fake a nonce-containing node + // via an object. + val = node[ i ] || node.getAttribute && node.getAttribute( i ); + if ( val ) { + script.setAttribute( i, val ); + } + } + } + doc.head.appendChild( script ).parentNode.removeChild( script ); + } + + +function toType( obj ) { + if ( obj == null ) { + return obj + ""; + } + + // Support: Android <=2.3 only (functionish RegExp) + return typeof obj === "object" || typeof obj === "function" ? + class2type[ toString.call( obj ) ] || "object" : + typeof obj; +} +/* global Symbol */ +// Defining this global in .eslintrc.json would create a danger of using the global +// unguarded in another place, it seems safer to define global only for this module + + + +var + version = "3.6.0", + + // Define a local copy of jQuery + jQuery = function( selector, context ) { + + // The jQuery object is actually just the init constructor 'enhanced' + // Need init if jQuery is called (just allow error to be thrown if not included) + return new jQuery.fn.init( selector, context ); + }; + +jQuery.fn = jQuery.prototype = { + + // The current version of jQuery being used + jquery: version, + + constructor: jQuery, + + // The default length of a jQuery object is 0 + length: 0, + + toArray: function() { + return slice.call( this ); + }, + + // Get the Nth element in the matched element set OR + // Get the whole matched element set as a clean array + get: function( num ) { + + // Return all the elements in a clean array + if ( num == null ) { + return slice.call( this ); + } + + // Return just the one element from the set + return num < 0 ? 
this[ num + this.length ] : this[ num ]; + }, + + // Take an array of elements and push it onto the stack + // (returning the new matched element set) + pushStack: function( elems ) { + + // Build a new jQuery matched element set + var ret = jQuery.merge( this.constructor(), elems ); + + // Add the old object onto the stack (as a reference) + ret.prevObject = this; + + // Return the newly-formed element set + return ret; + }, + + // Execute a callback for every element in the matched set. + each: function( callback ) { + return jQuery.each( this, callback ); + }, + + map: function( callback ) { + return this.pushStack( jQuery.map( this, function( elem, i ) { + return callback.call( elem, i, elem ); + } ) ); + }, + + slice: function() { + return this.pushStack( slice.apply( this, arguments ) ); + }, + + first: function() { + return this.eq( 0 ); + }, + + last: function() { + return this.eq( -1 ); + }, + + even: function() { + return this.pushStack( jQuery.grep( this, function( _elem, i ) { + return ( i + 1 ) % 2; + } ) ); + }, + + odd: function() { + return this.pushStack( jQuery.grep( this, function( _elem, i ) { + return i % 2; + } ) ); + }, + + eq: function( i ) { + var len = this.length, + j = +i + ( i < 0 ? len : 0 ); + return this.pushStack( j >= 0 && j < len ? [ this[ j ] ] : [] ); + }, + + end: function() { + return this.prevObject || this.constructor(); + }, + + // For internal use only. + // Behaves like an Array's method, not like a jQuery method. + push: push, + sort: arr.sort, + splice: arr.splice +}; + +jQuery.extend = jQuery.fn.extend = function() { + var options, name, src, copy, copyIsArray, clone, + target = arguments[ 0 ] || {}, + i = 1, + length = arguments.length, + deep = false; + + // Handle a deep copy situation + if ( typeof target === "boolean" ) { + deep = target; + + // Skip the boolean and the target + target = arguments[ i ] || {}; + i++; + } + + // Handle case when target is a string or something (possible in deep copy) + if ( typeof target !== "object" && !isFunction( target ) ) { + target = {}; + } + + // Extend jQuery itself if only one argument is passed + if ( i === length ) { + target = this; + i--; + } + + for ( ; i < length; i++ ) { + + // Only deal with non-null/undefined values + if ( ( options = arguments[ i ] ) != null ) { + + // Extend the base object + for ( name in options ) { + copy = options[ name ]; + + // Prevent Object.prototype pollution + // Prevent never-ending loop + if ( name === "__proto__" || target === copy ) { + continue; + } + + // Recurse if we're merging plain objects or arrays + if ( deep && copy && ( jQuery.isPlainObject( copy ) || + ( copyIsArray = Array.isArray( copy ) ) ) ) { + src = target[ name ]; + + // Ensure proper type for the source value + if ( copyIsArray && !Array.isArray( src ) ) { + clone = []; + } else if ( !copyIsArray && !jQuery.isPlainObject( src ) ) { + clone = {}; + } else { + clone = src; + } + copyIsArray = false; + + // Never move original objects, clone them + target[ name ] = jQuery.extend( deep, clone, copy ); + + // Don't bring in undefined values + } else if ( copy !== undefined ) { + target[ name ] = copy; + } + } + } + } + + // Return the modified object + return target; +}; + +jQuery.extend( { + + // Unique for each copy of jQuery on the page + expando: "jQuery" + ( version + Math.random() ).replace( /\D/g, "" ), + + // Assume jQuery is ready without the ready module + isReady: true, + + error: function( msg ) { + throw new Error( msg ); + }, + + noop: function() {}, + + isPlainObject: function( 
obj ) { + var proto, Ctor; + + // Detect obvious negatives + // Use toString instead of jQuery.type to catch host objects + if ( !obj || toString.call( obj ) !== "[object Object]" ) { + return false; + } + + proto = getProto( obj ); + + // Objects with no prototype (e.g., `Object.create( null )`) are plain + if ( !proto ) { + return true; + } + + // Objects with prototype are plain iff they were constructed by a global Object function + Ctor = hasOwn.call( proto, "constructor" ) && proto.constructor; + return typeof Ctor === "function" && fnToString.call( Ctor ) === ObjectFunctionString; + }, + + isEmptyObject: function( obj ) { + var name; + + for ( name in obj ) { + return false; + } + return true; + }, + + // Evaluates a script in a provided context; falls back to the global one + // if not specified. + globalEval: function( code, options, doc ) { + DOMEval( code, { nonce: options && options.nonce }, doc ); + }, + + each: function( obj, callback ) { + var length, i = 0; + + if ( isArrayLike( obj ) ) { + length = obj.length; + for ( ; i < length; i++ ) { + if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { + break; + } + } + } else { + for ( i in obj ) { + if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { + break; + } + } + } + + return obj; + }, + + // results is for internal usage only + makeArray: function( arr, results ) { + var ret = results || []; + + if ( arr != null ) { + if ( isArrayLike( Object( arr ) ) ) { + jQuery.merge( ret, + typeof arr === "string" ? + [ arr ] : arr + ); + } else { + push.call( ret, arr ); + } + } + + return ret; + }, + + inArray: function( elem, arr, i ) { + return arr == null ? -1 : indexOf.call( arr, elem, i ); + }, + + // Support: Android <=4.0 only, PhantomJS 1 only + // push.apply(_, arraylike) throws on ancient WebKit + merge: function( first, second ) { + var len = +second.length, + j = 0, + i = first.length; + + for ( ; j < len; j++ ) { + first[ i++ ] = second[ j ]; + } + + first.length = i; + + return first; + }, + + grep: function( elems, callback, invert ) { + var callbackInverse, + matches = [], + i = 0, + length = elems.length, + callbackExpect = !invert; + + // Go through the array, only saving the items + // that pass the validator function + for ( ; i < length; i++ ) { + callbackInverse = !callback( elems[ i ], i ); + if ( callbackInverse !== callbackExpect ) { + matches.push( elems[ i ] ); + } + } + + return matches; + }, + + // arg is for internal usage only + map: function( elems, callback, arg ) { + var length, value, + i = 0, + ret = []; + + // Go through the array, translating each of the items to their new values + if ( isArrayLike( elems ) ) { + length = elems.length; + for ( ; i < length; i++ ) { + value = callback( elems[ i ], i, arg ); + + if ( value != null ) { + ret.push( value ); + } + } + + // Go through every key on the object, + } else { + for ( i in elems ) { + value = callback( elems[ i ], i, arg ); + + if ( value != null ) { + ret.push( value ); + } + } + } + + // Flatten any nested arrays + return flat( ret ); + }, + + // A global GUID counter for objects + guid: 1, + + // jQuery.support is not used in Core but other projects attach their + // properties to it so it needs to exist. 
+ support: support +} ); + +if ( typeof Symbol === "function" ) { + jQuery.fn[ Symbol.iterator ] = arr[ Symbol.iterator ]; +} + +// Populate the class2type map +jQuery.each( "Boolean Number String Function Array Date RegExp Object Error Symbol".split( " " ), + function( _i, name ) { + class2type[ "[object " + name + "]" ] = name.toLowerCase(); + } ); + +function isArrayLike( obj ) { + + // Support: real iOS 8.2 only (not reproducible in simulator) + // `in` check used to prevent JIT error (gh-2145) + // hasOwn isn't used here due to false negatives + // regarding Nodelist length in IE + var length = !!obj && "length" in obj && obj.length, + type = toType( obj ); + + if ( isFunction( obj ) || isWindow( obj ) ) { + return false; + } + + return type === "array" || length === 0 || + typeof length === "number" && length > 0 && ( length - 1 ) in obj; +} +var Sizzle = +/*! + * Sizzle CSS Selector Engine v2.3.6 + * https://sizzlejs.com/ + * + * Copyright JS Foundation and other contributors + * Released under the MIT license + * https://js.foundation/ + * + * Date: 2021-02-16 + */ +( function( window ) { +var i, + support, + Expr, + getText, + isXML, + tokenize, + compile, + select, + outermostContext, + sortInput, + hasDuplicate, + + // Local document vars + setDocument, + document, + docElem, + documentIsHTML, + rbuggyQSA, + rbuggyMatches, + matches, + contains, + + // Instance-specific data + expando = "sizzle" + 1 * new Date(), + preferredDoc = window.document, + dirruns = 0, + done = 0, + classCache = createCache(), + tokenCache = createCache(), + compilerCache = createCache(), + nonnativeSelectorCache = createCache(), + sortOrder = function( a, b ) { + if ( a === b ) { + hasDuplicate = true; + } + return 0; + }, + + // Instance methods + hasOwn = ( {} ).hasOwnProperty, + arr = [], + pop = arr.pop, + pushNative = arr.push, + push = arr.push, + slice = arr.slice, + + // Use a stripped-down indexOf as it's faster than native + // https://jsperf.com/thor-indexof-vs-for/5 + indexOf = function( list, elem ) { + var i = 0, + len = list.length; + for ( ; i < len; i++ ) { + if ( list[ i ] === elem ) { + return i; + } + } + return -1; + }, + + booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|" + + "ismap|loop|multiple|open|readonly|required|scoped", + + // Regular expressions + + // http://www.w3.org/TR/css3-selectors/#whitespace + whitespace = "[\\x20\\t\\r\\n\\f]", + + // https://www.w3.org/TR/css-syntax-3/#ident-token-diagram + identifier = "(?:\\\\[\\da-fA-F]{1,6}" + whitespace + + "?|\\\\[^\\r\\n\\f]|[\\w-]|[^\0-\\x7f])+", + + // Attribute selectors: http://www.w3.org/TR/selectors/#attribute-selectors + attributes = "\\[" + whitespace + "*(" + identifier + ")(?:" + whitespace + + + // Operator (capture 2) + "*([*^$|!~]?=)" + whitespace + + + // "Attribute values must be CSS identifiers [capture 5] + // or strings [capture 3 or capture 4]" + "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + identifier + "))|)" + + whitespace + "*\\]", + + pseudos = ":(" + identifier + ")(?:\\((" + + + // To reduce the number of selectors needing tokenize in the preFilter, prefer arguments: + // 1. quoted (capture 3; capture 4 or capture 5) + "('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|" + + + // 2. simple (capture 6) + "((?:\\\\.|[^\\\\()[\\]]|" + attributes + ")*)|" + + + // 3. 
anything else (capture 2) + ".*" + + ")\\)|)", + + // Leading and non-escaped trailing whitespace, capturing some non-whitespace characters preceding the latter + rwhitespace = new RegExp( whitespace + "+", "g" ), + rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + + whitespace + "+$", "g" ), + + rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), + rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + + "*" ), + rdescend = new RegExp( whitespace + "|>" ), + + rpseudo = new RegExp( pseudos ), + ridentifier = new RegExp( "^" + identifier + "$" ), + + matchExpr = { + "ID": new RegExp( "^#(" + identifier + ")" ), + "CLASS": new RegExp( "^\\.(" + identifier + ")" ), + "TAG": new RegExp( "^(" + identifier + "|[*])" ), + "ATTR": new RegExp( "^" + attributes ), + "PSEUDO": new RegExp( "^" + pseudos ), + "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + + whitespace + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + + whitespace + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), + "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), + + // For use in libraries implementing .is() + // We use this for POS matching in `select` + "needsContext": new RegExp( "^" + whitespace + + "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + whitespace + + "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) + }, + + rhtml = /HTML$/i, + rinputs = /^(?:input|select|textarea|button)$/i, + rheader = /^h\d$/i, + + rnative = /^[^{]+\{\s*\[native \w/, + + // Easily-parseable/retrievable ID or TAG or CLASS selectors + rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, + + rsibling = /[+~]/, + + // CSS escapes + // http://www.w3.org/TR/CSS21/syndata.html#escaped-characters + runescape = new RegExp( "\\\\[\\da-fA-F]{1,6}" + whitespace + "?|\\\\([^\\r\\n\\f])", "g" ), + funescape = function( escape, nonHex ) { + var high = "0x" + escape.slice( 1 ) - 0x10000; + + return nonHex ? + + // Strip the backslash prefix from a non-hex escape sequence + nonHex : + + // Replace a hexadecimal escape sequence with the encoded Unicode code point + // Support: IE <=11+ + // For values outside the Basic Multilingual Plane (BMP), manually construct a + // surrogate pair + high < 0 ? 
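+ // "high" is the escaped code point minus 0x10000: a negative value means a BMP character (the offset is simply added back below), while a non-negative value is split into a 0xD800-based high surrogate and a 0xDC00-based low surrogate.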
+ String.fromCharCode( high + 0x10000 ) : + String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); + }, + + // CSS string/identifier serialization + // https://drafts.csswg.org/cssom/#common-serializing-idioms + rcssescape = /([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, + fcssescape = function( ch, asCodePoint ) { + if ( asCodePoint ) { + + // U+0000 NULL becomes U+FFFD REPLACEMENT CHARACTER + if ( ch === "\0" ) { + return "\uFFFD"; + } + + // Control characters and (dependent upon position) numbers get escaped as code points + return ch.slice( 0, -1 ) + "\\" + + ch.charCodeAt( ch.length - 1 ).toString( 16 ) + " "; + } + + // Other potentially-special ASCII characters get backslash-escaped + return "\\" + ch; + }, + + // Used for iframes + // See setDocument() + // Removing the function wrapper causes a "Permission Denied" + // error in IE + unloadHandler = function() { + setDocument(); + }, + + inDisabledFieldset = addCombinator( + function( elem ) { + return elem.disabled === true && elem.nodeName.toLowerCase() === "fieldset"; + }, + { dir: "parentNode", next: "legend" } + ); + +// Optimize for push.apply( _, NodeList ) +try { + push.apply( + ( arr = slice.call( preferredDoc.childNodes ) ), + preferredDoc.childNodes + ); + + // Support: Android<4.0 + // Detect silently failing push.apply + // eslint-disable-next-line no-unused-expressions + arr[ preferredDoc.childNodes.length ].nodeType; +} catch ( e ) { + push = { apply: arr.length ? + + // Leverage slice if possible + function( target, els ) { + pushNative.apply( target, slice.call( els ) ); + } : + + // Support: IE<9 + // Otherwise append directly + function( target, els ) { + var j = target.length, + i = 0; + + // Can't trust NodeList.length + while ( ( target[ j++ ] = els[ i++ ] ) ) {} + target.length = j - 1; + } + }; +} + +function Sizzle( selector, context, results, seed ) { + var m, i, elem, nid, match, groups, newSelector, + newContext = context && context.ownerDocument, + + // nodeType defaults to 9, since context defaults to document + nodeType = context ? 
context.nodeType : 9; + + results = results || []; + + // Return early from calls with invalid selector or context + if ( typeof selector !== "string" || !selector || + nodeType !== 1 && nodeType !== 9 && nodeType !== 11 ) { + + return results; + } + + // Try to shortcut find operations (as opposed to filters) in HTML documents + if ( !seed ) { + setDocument( context ); + context = context || document; + + if ( documentIsHTML ) { + + // If the selector is sufficiently simple, try using a "get*By*" DOM method + // (excepting DocumentFragment context, where the methods don't exist) + if ( nodeType !== 11 && ( match = rquickExpr.exec( selector ) ) ) { + + // ID selector + if ( ( m = match[ 1 ] ) ) { + + // Document context + if ( nodeType === 9 ) { + if ( ( elem = context.getElementById( m ) ) ) { + + // Support: IE, Opera, Webkit + // TODO: identify versions + // getElementById can match elements by name instead of ID + if ( elem.id === m ) { + results.push( elem ); + return results; + } + } else { + return results; + } + + // Element context + } else { + + // Support: IE, Opera, Webkit + // TODO: identify versions + // getElementById can match elements by name instead of ID + if ( newContext && ( elem = newContext.getElementById( m ) ) && + contains( context, elem ) && + elem.id === m ) { + + results.push( elem ); + return results; + } + } + + // Type selector + } else if ( match[ 2 ] ) { + push.apply( results, context.getElementsByTagName( selector ) ); + return results; + + // Class selector + } else if ( ( m = match[ 3 ] ) && support.getElementsByClassName && + context.getElementsByClassName ) { + + push.apply( results, context.getElementsByClassName( m ) ); + return results; + } + } + + // Take advantage of querySelectorAll + if ( support.qsa && + !nonnativeSelectorCache[ selector + " " ] && + ( !rbuggyQSA || !rbuggyQSA.test( selector ) ) && + + // Support: IE 8 only + // Exclude object elements + ( nodeType !== 1 || context.nodeName.toLowerCase() !== "object" ) ) { + + newSelector = selector; + newContext = context; + + // qSA considers elements outside a scoping root when evaluating child or + // descendant combinators, which is not what we want. + // In such cases, we work around the behavior by prefixing every selector in the + // list with an ID selector referencing the scope context. + // The technique has to be used as well when a leading combinator is used + // as such selectors are not recognized by querySelectorAll. + // Thanks to Andrew Dupont for this technique. + if ( nodeType === 1 && + ( rdescend.test( selector ) || rcombinators.test( selector ) ) ) { + + // Expand context for sibling selectors + newContext = rsibling.test( selector ) && testContext( context.parentNode ) || + context; + + // We can use :scope instead of the ID hack if the browser + // supports it & if we're not changing the context. + if ( newContext !== context || !support.scope ) { + + // Capture the context ID, setting it first if necessary + if ( ( nid = context.getAttribute( "id" ) ) ) { + nid = nid.replace( rcssescape, fcssescape ); + } else { + context.setAttribute( "id", ( nid = expando ) ); + } + } + + // Prefix every selector in the list + groups = tokenize( selector ); + i = groups.length; + while ( i-- ) { + groups[ i ] = ( nid ? 
"#" + nid : ":scope" ) + " " + + toSelector( groups[ i ] ); + } + newSelector = groups.join( "," ); + } + + try { + push.apply( results, + newContext.querySelectorAll( newSelector ) + ); + return results; + } catch ( qsaError ) { + nonnativeSelectorCache( selector, true ); + } finally { + if ( nid === expando ) { + context.removeAttribute( "id" ); + } + } + } + } + } + + // All others + return select( selector.replace( rtrim, "$1" ), context, results, seed ); +} + +/** + * Create key-value caches of limited size + * @returns {function(string, object)} Returns the Object data after storing it on itself with + * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) + * deleting the oldest entry + */ +function createCache() { + var keys = []; + + function cache( key, value ) { + + // Use (key + " ") to avoid collision with native prototype properties (see Issue #157) + if ( keys.push( key + " " ) > Expr.cacheLength ) { + + // Only keep the most recent entries + delete cache[ keys.shift() ]; + } + return ( cache[ key + " " ] = value ); + } + return cache; +} + +/** + * Mark a function for special use by Sizzle + * @param {Function} fn The function to mark + */ +function markFunction( fn ) { + fn[ expando ] = true; + return fn; +} + +/** + * Support testing using an element + * @param {Function} fn Passed the created element and returns a boolean result + */ +function assert( fn ) { + var el = document.createElement( "fieldset" ); + + try { + return !!fn( el ); + } catch ( e ) { + return false; + } finally { + + // Remove from its parent by default + if ( el.parentNode ) { + el.parentNode.removeChild( el ); + } + + // release memory in IE + el = null; + } +} + +/** + * Adds the same handler for all of the specified attrs + * @param {String} attrs Pipe-separated list of attributes + * @param {Function} handler The method that will be applied + */ +function addHandle( attrs, handler ) { + var arr = attrs.split( "|" ), + i = arr.length; + + while ( i-- ) { + Expr.attrHandle[ arr[ i ] ] = handler; + } +} + +/** + * Checks document order of two siblings + * @param {Element} a + * @param {Element} b + * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b + */ +function siblingCheck( a, b ) { + var cur = b && a, + diff = cur && a.nodeType === 1 && b.nodeType === 1 && + a.sourceIndex - b.sourceIndex; + + // Use IE sourceIndex if available on both nodes + if ( diff ) { + return diff; + } + + // Check if b follows a + if ( cur ) { + while ( ( cur = cur.nextSibling ) ) { + if ( cur === b ) { + return -1; + } + } + } + + return a ? 
1 : -1; +} + +/** + * Returns a function to use in pseudos for input types + * @param {String} type + */ +function createInputPseudo( type ) { + return function( elem ) { + var name = elem.nodeName.toLowerCase(); + return name === "input" && elem.type === type; + }; +} + +/** + * Returns a function to use in pseudos for buttons + * @param {String} type + */ +function createButtonPseudo( type ) { + return function( elem ) { + var name = elem.nodeName.toLowerCase(); + return ( name === "input" || name === "button" ) && elem.type === type; + }; +} + +/** + * Returns a function to use in pseudos for :enabled/:disabled + * @param {Boolean} disabled true for :disabled; false for :enabled + */ +function createDisabledPseudo( disabled ) { + + // Known :disabled false positives: fieldset[disabled] > legend:nth-of-type(n+2) :can-disable + return function( elem ) { + + // Only certain elements can match :enabled or :disabled + // https://html.spec.whatwg.org/multipage/scripting.html#selector-enabled + // https://html.spec.whatwg.org/multipage/scripting.html#selector-disabled + if ( "form" in elem ) { + + // Check for inherited disabledness on relevant non-disabled elements: + // * listed form-associated elements in a disabled fieldset + // https://html.spec.whatwg.org/multipage/forms.html#category-listed + // https://html.spec.whatwg.org/multipage/forms.html#concept-fe-disabled + // * option elements in a disabled optgroup + // https://html.spec.whatwg.org/multipage/forms.html#concept-option-disabled + // All such elements have a "form" property. + if ( elem.parentNode && elem.disabled === false ) { + + // Option elements defer to a parent optgroup if present + if ( "label" in elem ) { + if ( "label" in elem.parentNode ) { + return elem.parentNode.disabled === disabled; + } else { + return elem.disabled === disabled; + } + } + + // Support: IE 6 - 11 + // Use the isDisabled shortcut property to check for disabled fieldset ancestors + return elem.isDisabled === disabled || + + // Where there is no isDisabled, check manually + /* jshint -W018 */ + elem.isDisabled !== !disabled && + inDisabledFieldset( elem ) === disabled; + } + + return elem.disabled === disabled; + + // Try to winnow out elements that can't be disabled before trusting the disabled property. + // Some victims get caught in our net (label, legend, menu, track), but it shouldn't + // even exist on them, let alone have a boolean value. 
+ } else if ( "label" in elem ) { + return elem.disabled === disabled; + } + + // Remaining elements are neither :enabled nor :disabled + return false; + }; +} + +/** + * Returns a function to use in pseudos for positionals + * @param {Function} fn + */ +function createPositionalPseudo( fn ) { + return markFunction( function( argument ) { + argument = +argument; + return markFunction( function( seed, matches ) { + var j, + matchIndexes = fn( [], seed.length, argument ), + i = matchIndexes.length; + + // Match elements found at the specified indexes + while ( i-- ) { + if ( seed[ ( j = matchIndexes[ i ] ) ] ) { + seed[ j ] = !( matches[ j ] = seed[ j ] ); + } + } + } ); + } ); +} + +/** + * Checks a node for validity as a Sizzle context + * @param {Element|Object=} context + * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value + */ +function testContext( context ) { + return context && typeof context.getElementsByTagName !== "undefined" && context; +} + +// Expose support vars for convenience +support = Sizzle.support = {}; + +/** + * Detects XML nodes + * @param {Element|Object} elem An element or a document + * @returns {Boolean} True iff elem is a non-HTML XML node + */ +isXML = Sizzle.isXML = function( elem ) { + var namespace = elem && elem.namespaceURI, + docElem = elem && ( elem.ownerDocument || elem ).documentElement; + + // Support: IE <=8 + // Assume HTML when documentElement doesn't yet exist, such as inside loading iframes + // https://bugs.jquery.com/ticket/4833 + return !rhtml.test( namespace || docElem && docElem.nodeName || "HTML" ); +}; + +/** + * Sets document-related variables once based on the current document + * @param {Element|Object} [doc] An element or document object to use to set the document + * @returns {Object} Returns the current document + */ +setDocument = Sizzle.setDocument = function( node ) { + var hasCompare, subWindow, + doc = node ? node.ownerDocument || node : preferredDoc; + + // Return early if doc is invalid or already selected + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( doc == document || doc.nodeType !== 9 || !doc.documentElement ) { + return document; + } + + // Update global variables + document = doc; + docElem = document.documentElement; + documentIsHTML = !isXML( document ); + + // Support: IE 9 - 11+, Edge 12 - 18+ + // Accessing iframe documents after unload throws "permission denied" errors (jQuery #13936) + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( preferredDoc != document && + ( subWindow = document.defaultView ) && subWindow.top !== subWindow ) { + + // Support: IE 11, Edge + if ( subWindow.addEventListener ) { + subWindow.addEventListener( "unload", unloadHandler, false ); + + // Support: IE 9 - 10 only + } else if ( subWindow.attachEvent ) { + subWindow.attachEvent( "onunload", unloadHandler ); + } + } + + // Support: IE 8 - 11+, Edge 12 - 18+, Chrome <=16 - 25 only, Firefox <=3.6 - 31 only, + // Safari 4 - 5 only, Opera <=11.6 - 12.x only + // IE/Edge & older browsers don't support the :scope pseudo-class. + // Support: Safari 6.0 only + // Safari 6.0 supports :scope but it's an alias of :root there. 
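+ // assert() (defined above) creates a throwaway fieldset, runs the probe in a try/catch, and treats an exception as "unsupported". Engines without :scope throw inside querySelectorAll, and Safari 6's :root alias matches the appended div, so both correctly report false here.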
+ support.scope = assert( function( el ) { + docElem.appendChild( el ).appendChild( document.createElement( "div" ) ); + return typeof el.querySelectorAll !== "undefined" && + !el.querySelectorAll( ":scope fieldset div" ).length; + } ); + + /* Attributes + ---------------------------------------------------------------------- */ + + // Support: IE<8 + // Verify that getAttribute really returns attributes and not properties + // (excepting IE8 booleans) + support.attributes = assert( function( el ) { + el.className = "i"; + return !el.getAttribute( "className" ); + } ); + + /* getElement(s)By* + ---------------------------------------------------------------------- */ + + // Check if getElementsByTagName("*") returns only elements + support.getElementsByTagName = assert( function( el ) { + el.appendChild( document.createComment( "" ) ); + return !el.getElementsByTagName( "*" ).length; + } ); + + // Support: IE<9 + support.getElementsByClassName = rnative.test( document.getElementsByClassName ); + + // Support: IE<10 + // Check if getElementById returns elements by name + // The broken getElementById methods don't pick up programmatically-set names, + // so use a roundabout getElementsByName test + support.getById = assert( function( el ) { + docElem.appendChild( el ).id = expando; + return !document.getElementsByName || !document.getElementsByName( expando ).length; + } ); + + // ID filter and find + if ( support.getById ) { + Expr.filter[ "ID" ] = function( id ) { + var attrId = id.replace( runescape, funescape ); + return function( elem ) { + return elem.getAttribute( "id" ) === attrId; + }; + }; + Expr.find[ "ID" ] = function( id, context ) { + if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { + var elem = context.getElementById( id ); + return elem ? [ elem ] : []; + } + }; + } else { + Expr.filter[ "ID" ] = function( id ) { + var attrId = id.replace( runescape, funescape ); + return function( elem ) { + var node = typeof elem.getAttributeNode !== "undefined" && + elem.getAttributeNode( "id" ); + return node && node.value === attrId; + }; + }; + + // Support: IE 6 - 7 only + // getElementById is not reliable as a find shortcut + Expr.find[ "ID" ] = function( id, context ) { + if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { + var node, i, elems, + elem = context.getElementById( id ); + + if ( elem ) { + + // Verify the id attribute + node = elem.getAttributeNode( "id" ); + if ( node && node.value === id ) { + return [ elem ]; + } + + // Fall back on getElementsByName + elems = context.getElementsByName( id ); + i = 0; + while ( ( elem = elems[ i++ ] ) ) { + node = elem.getAttributeNode( "id" ); + if ( node && node.value === id ) { + return [ elem ]; + } + } + } + + return []; + } + }; + } + + // Tag + Expr.find[ "TAG" ] = support.getElementsByTagName ? 
+ function( tag, context ) { + if ( typeof context.getElementsByTagName !== "undefined" ) { + return context.getElementsByTagName( tag ); + + // DocumentFragment nodes don't have gEBTN + } else if ( support.qsa ) { + return context.querySelectorAll( tag ); + } + } : + + function( tag, context ) { + var elem, + tmp = [], + i = 0, + + // By happy coincidence, a (broken) gEBTN appears on DocumentFragment nodes too + results = context.getElementsByTagName( tag ); + + // Filter out possible comments + if ( tag === "*" ) { + while ( ( elem = results[ i++ ] ) ) { + if ( elem.nodeType === 1 ) { + tmp.push( elem ); + } + } + + return tmp; + } + return results; + }; + + // Class + Expr.find[ "CLASS" ] = support.getElementsByClassName && function( className, context ) { + if ( typeof context.getElementsByClassName !== "undefined" && documentIsHTML ) { + return context.getElementsByClassName( className ); + } + }; + + /* QSA/matchesSelector + ---------------------------------------------------------------------- */ + + // QSA and matchesSelector support + + // matchesSelector(:active) reports false when true (IE9/Opera 11.5) + rbuggyMatches = []; + + // qSA(:focus) reports false when true (Chrome 21) + // We allow this because of a bug in IE8/9 that throws an error + // whenever `document.activeElement` is accessed on an iframe + // So, we allow :focus to pass through QSA all the time to avoid the IE error + // See https://bugs.jquery.com/ticket/13378 + rbuggyQSA = []; + + if ( ( support.qsa = rnative.test( document.querySelectorAll ) ) ) { + + // Build QSA regex + // Regex strategy adopted from Diego Perini + assert( function( el ) { + + var input; + + // Select is set to empty string on purpose + // This is to test IE's treatment of not explicitly + // setting a boolean content attribute, + // since its presence should be enough + // https://bugs.jquery.com/ticket/12359 + docElem.appendChild( el ).innerHTML = "<a id='" + expando + "'></a>" + "<select id='" + expando + "-\r\\' msallowcapture=''>" + "<option selected=''></option></select>"; + + // Support: IE8, Opera 11-12.16 + // Nothing should be selected when empty strings follow ^= or $= or *= + // The test attribute must be unknown in Opera but "safe" for WinRT + // https://msdn.microsoft.com/en-us/library/ie/hh465388.aspx#attribute_section + if ( el.querySelectorAll( "[msallowcapture^='']" ).length ) { + rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" ); + } + + // Support: IE8 + // Boolean attributes and "value" are not treated correctly + if ( !el.querySelectorAll( "[selected]" ).length ) { + rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" ); + } + + // Support: Chrome<29, Android<4.4, Safari<7.0+, iOS<7.0+, PhantomJS<1.9.8+ + if ( !el.querySelectorAll( "[id~=" + expando + "-]" ).length ) { + rbuggyQSA.push( "~=" ); + } + + // Support: IE 11+, Edge 15 - 18+ + // IE 11/Edge don't find elements on a `[name='']` query in some cases. + // Adding a temporary attribute to the document before the selection works + // around the issue. + // Interestingly, IE 10 & older don't seem to have the issue.
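+ // The probe below mirrors that: an input with an explicitly empty name is appended, and the [name=''] pattern is blacklisted if the native query still cannot find it.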
+ input = document.createElement( "input" ); + input.setAttribute( "name", "" ); + el.appendChild( input ); + if ( !el.querySelectorAll( "[name='']" ).length ) { + rbuggyQSA.push( "\\[" + whitespace + "*name" + whitespace + "*=" + + whitespace + "*(?:''|\"\")" ); + } + + // Webkit/Opera - :checked should return selected option elements + // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked + // IE8 throws error here and will not see later tests + if ( !el.querySelectorAll( ":checked" ).length ) { + rbuggyQSA.push( ":checked" ); + } + + // Support: Safari 8+, iOS 8+ + // https://bugs.webkit.org/show_bug.cgi?id=136851 + // In-page `selector#id sibling-combinator selector` fails + if ( !el.querySelectorAll( "a#" + expando + "+*" ).length ) { + rbuggyQSA.push( ".#.+[+~]" ); + } + + // Support: Firefox <=3.6 - 5 only + // Old Firefox doesn't throw on a badly-escaped identifier. + el.querySelectorAll( "\\\f" ); + rbuggyQSA.push( "[\\r\\n\\f]" ); + } ); + + assert( function( el ) { + el.innerHTML = "<a href='' disabled='disabled'></a>" + "<select disabled='disabled'><option/></select>"; + + // Support: Windows 8 Native Apps + // The type and name attributes are restricted during .innerHTML assignment + var input = document.createElement( "input" ); + input.setAttribute( "type", "hidden" ); + el.appendChild( input ).setAttribute( "name", "D" ); + + // Support: IE8 + // Enforce case-sensitivity of name attribute + if ( el.querySelectorAll( "[name=d]" ).length ) { + rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" ); + } + + // FF 3.5 - :enabled/:disabled and hidden elements (hidden elements are still enabled) + // IE8 throws error here and will not see later tests + if ( el.querySelectorAll( ":enabled" ).length !== 2 ) { + rbuggyQSA.push( ":enabled", ":disabled" ); + } + + // Support: IE9-11+ + // IE's :disabled selector does not pick up the children of disabled fieldsets + docElem.appendChild( el ).disabled = true; + if ( el.querySelectorAll( ":disabled" ).length !== 2 ) { + rbuggyQSA.push( ":enabled", ":disabled" ); + } + + // Support: Opera 10 - 11 only + // Opera 10-11 does not throw on post-comma invalid pseudos + el.querySelectorAll( "*,:x" ); + rbuggyQSA.push( ",.*:" ); + } ); + } + + if ( ( support.matchesSelector = rnative.test( ( matches = docElem.matches || + docElem.webkitMatchesSelector || + docElem.mozMatchesSelector || + docElem.oMatchesSelector || + docElem.msMatchesSelector ) ) ) ) { + + assert( function( el ) { + + // Check to see if it's possible to do matchesSelector + // on a disconnected node (IE 9) + support.disconnectedMatch = matches.call( el, "*" ); + + // This should fail with an exception + // Gecko does not error, returns false instead + matches.call( el, "[s!='']:x" ); + rbuggyMatches.push( "!=", pseudos ); + } ); + } + + rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join( "|" ) ); + rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join( "|" ) ); + + /* Contains + ---------------------------------------------------------------------- */ + hasCompare = rnative.test( docElem.compareDocumentPosition ); + + // Element contains another + // Purposefully self-exclusive + // As in, an element does not contain itself + contains = hasCompare || rnative.test( docElem.contains ) ? + function( a, b ) { + var adown = a.nodeType === 9 ? a.documentElement : a, + bup = b && b.parentNode; + return a === bup || !!( bup && bup.nodeType === 1 && ( + adown.contains ?
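+ // Prefer the node's own contains(); the fallback tests bit 16 (DOCUMENT_POSITION_CONTAINED_BY) of compareDocumentPosition, i.e. whether bup lies inside a.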
+ adown.contains( bup ) : + a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 + ) ); + } : + function( a, b ) { + if ( b ) { + while ( ( b = b.parentNode ) ) { + if ( b === a ) { + return true; + } + } + } + return false; + }; + + /* Sorting + ---------------------------------------------------------------------- */ + + // Document order sorting + sortOrder = hasCompare ? + function( a, b ) { + + // Flag for duplicate removal + if ( a === b ) { + hasDuplicate = true; + return 0; + } + + // Sort on method existence if only one input has compareDocumentPosition + var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; + if ( compare ) { + return compare; + } + + // Calculate position if both inputs belong to the same document + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + compare = ( a.ownerDocument || a ) == ( b.ownerDocument || b ) ? + a.compareDocumentPosition( b ) : + + // Otherwise we know they are disconnected + 1; + + // Disconnected nodes + if ( compare & 1 || + ( !support.sortDetached && b.compareDocumentPosition( a ) === compare ) ) { + + // Choose the first element that is related to our preferred document + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( a == document || a.ownerDocument == preferredDoc && + contains( preferredDoc, a ) ) { + return -1; + } + + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( b == document || b.ownerDocument == preferredDoc && + contains( preferredDoc, b ) ) { + return 1; + } + + // Maintain original order + return sortInput ? + ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : + 0; + } + + return compare & 4 ? -1 : 1; + } : + function( a, b ) { + + // Exit early if the nodes are identical + if ( a === b ) { + hasDuplicate = true; + return 0; + } + + var cur, + i = 0, + aup = a.parentNode, + bup = b.parentNode, + ap = [ a ], + bp = [ b ]; + + // Parentless nodes are either documents or disconnected + if ( !aup || !bup ) { + + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + /* eslint-disable eqeqeq */ + return a == document ? -1 : + b == document ? 1 : + /* eslint-enable eqeqeq */ + aup ? -1 : + bup ? 1 : + sortInput ? + ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : + 0; + + // If the nodes are siblings, we can do a quick check + } else if ( aup === bup ) { + return siblingCheck( a, b ); + } + + // Otherwise we need full lists of their ancestors for comparison + cur = a; + while ( ( cur = cur.parentNode ) ) { + ap.unshift( cur ); + } + cur = b; + while ( ( cur = cur.parentNode ) ) { + bp.unshift( cur ); + } + + // Walk down the tree looking for a discrepancy + while ( ap[ i ] === bp[ i ] ) { + i++; + } + + return i ? + + // Do a sibling check if the nodes have a common ancestor + siblingCheck( ap[ i ], bp[ i ] ) : + + // Otherwise nodes in our document sort first + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. 
+ /* eslint-disable eqeqeq */ + ap[ i ] == preferredDoc ? -1 : + bp[ i ] == preferredDoc ? 1 : + /* eslint-enable eqeqeq */ + 0; + }; + + return document; +}; + +Sizzle.matches = function( expr, elements ) { + return Sizzle( expr, null, null, elements ); +}; + +Sizzle.matchesSelector = function( elem, expr ) { + setDocument( elem ); + + if ( support.matchesSelector && documentIsHTML && + !nonnativeSelectorCache[ expr + " " ] && + ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && + ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { + + try { + var ret = matches.call( elem, expr ); + + // IE 9's matchesSelector returns false on disconnected nodes + if ( ret || support.disconnectedMatch || + + // As well, disconnected nodes are said to be in a document + // fragment in IE 9 + elem.document && elem.document.nodeType !== 11 ) { + return ret; + } + } catch ( e ) { + nonnativeSelectorCache( expr, true ); + } + } + + return Sizzle( expr, document, null, [ elem ] ).length > 0; +}; + +Sizzle.contains = function( context, elem ) { + + // Set document vars if needed + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( ( context.ownerDocument || context ) != document ) { + setDocument( context ); + } + return contains( context, elem ); +}; + +Sizzle.attr = function( elem, name ) { + + // Set document vars if needed + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( ( elem.ownerDocument || elem ) != document ) { + setDocument( elem ); + } + + var fn = Expr.attrHandle[ name.toLowerCase() ], + + // Don't get fooled by Object.prototype properties (jQuery #13807) + val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? + fn( elem, name, !documentIsHTML ) : + undefined; + + return val !== undefined ? + val : + support.attributes || !documentIsHTML ? + elem.getAttribute( name ) : + ( val = elem.getAttributeNode( name ) ) && val.specified ? 
+ val.value : + null; +}; + +Sizzle.escape = function( sel ) { + return ( sel + "" ).replace( rcssescape, fcssescape ); +}; + +Sizzle.error = function( msg ) { + throw new Error( "Syntax error, unrecognized expression: " + msg ); +}; + +/** + * Document sorting and removing duplicates + * @param {ArrayLike} results + */ +Sizzle.uniqueSort = function( results ) { + var elem, + duplicates = [], + j = 0, + i = 0; + + // Unless we *know* we can detect duplicates, assume their presence + hasDuplicate = !support.detectDuplicates; + sortInput = !support.sortStable && results.slice( 0 ); + results.sort( sortOrder ); + + if ( hasDuplicate ) { + while ( ( elem = results[ i++ ] ) ) { + if ( elem === results[ i ] ) { + j = duplicates.push( i ); + } + } + while ( j-- ) { + results.splice( duplicates[ j ], 1 ); + } + } + + // Clear input after sorting to release objects + // See https://github.com/jquery/sizzle/pull/225 + sortInput = null; + + return results; +}; + +/** + * Utility function for retrieving the text value of an array of DOM nodes + * @param {Array|Element} elem + */ +getText = Sizzle.getText = function( elem ) { + var node, + ret = "", + i = 0, + nodeType = elem.nodeType; + + if ( !nodeType ) { + + // If no nodeType, this is expected to be an array + while ( ( node = elem[ i++ ] ) ) { + + // Do not traverse comment nodes + ret += getText( node ); + } + } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { + + // Use textContent for elements + // innerText usage removed for consistency of new lines (jQuery #11153) + if ( typeof elem.textContent === "string" ) { + return elem.textContent; + } else { + + // Traverse its children + for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { + ret += getText( elem ); + } + } + } else if ( nodeType === 3 || nodeType === 4 ) { + return elem.nodeValue; + } + + // Do not include comment or processing instruction nodes + + return ret; +}; + +Expr = Sizzle.selectors = { + + // Can be adjusted by the user + cacheLength: 50, + + createPseudo: markFunction, + + match: matchExpr, + + attrHandle: {}, + + find: {}, + + relative: { + ">": { dir: "parentNode", first: true }, + " ": { dir: "parentNode" }, + "+": { dir: "previousSibling", first: true }, + "~": { dir: "previousSibling" } + }, + + preFilter: { + "ATTR": function( match ) { + match[ 1 ] = match[ 1 ].replace( runescape, funescape ); + + // Move the given value to match[3] whether quoted or unquoted + match[ 3 ] = ( match[ 3 ] || match[ 4 ] || + match[ 5 ] || "" ).replace( runescape, funescape ); + + if ( match[ 2 ] === "~=" ) { + match[ 3 ] = " " + match[ 3 ] + " "; + } + + return match.slice( 0, 4 ); + }, + + "CHILD": function( match ) { + + /* matches from matchExpr["CHILD"] + 1 type (only|nth|...) + 2 what (child|of-type) + 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) + 4 xn-component of xn+y argument ([+-]?\d*n|) + 5 sign of xn-component + 6 x of xn-component + 7 sign of y-component + 8 y of y-component + */ + match[ 1 ] = match[ 1 ].toLowerCase(); + + if ( match[ 1 ].slice( 0, 3 ) === "nth" ) { + + // nth-* requires argument + if ( !match[ 3 ] ) { + Sizzle.error( match[ 0 ] ); + } + + // numeric x and y parameters for Expr.filter.CHILD + // remember that false/true cast respectively to 0/1 + match[ 4 ] = +( match[ 4 ] ? 
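+ // An explicit "xn" term yields a = sign + digits (an omitted coefficient counts as 1); otherwise a is 2 for "even"/"odd" and 0 for a bare number. The next assignment derives b the same way ("odd" casts to b = 1).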
+ match[ 5 ] + ( match[ 6 ] || 1 ) : + 2 * ( match[ 3 ] === "even" || match[ 3 ] === "odd" ) ); + match[ 5 ] = +( ( match[ 7 ] + match[ 8 ] ) || match[ 3 ] === "odd" ); + + // other types prohibit arguments + } else if ( match[ 3 ] ) { + Sizzle.error( match[ 0 ] ); + } + + return match; + }, + + "PSEUDO": function( match ) { + var excess, + unquoted = !match[ 6 ] && match[ 2 ]; + + if ( matchExpr[ "CHILD" ].test( match[ 0 ] ) ) { + return null; + } + + // Accept quoted arguments as-is + if ( match[ 3 ] ) { + match[ 2 ] = match[ 4 ] || match[ 5 ] || ""; + + // Strip excess characters from unquoted arguments + } else if ( unquoted && rpseudo.test( unquoted ) && + + // Get excess from tokenize (recursively) + ( excess = tokenize( unquoted, true ) ) && + + // advance to the next closing parenthesis + ( excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length ) ) { + + // excess is a negative index + match[ 0 ] = match[ 0 ].slice( 0, excess ); + match[ 2 ] = unquoted.slice( 0, excess ); + } + + // Return only captures needed by the pseudo filter method (type and argument) + return match.slice( 0, 3 ); + } + }, + + filter: { + + "TAG": function( nodeNameSelector ) { + var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); + return nodeNameSelector === "*" ? + function() { + return true; + } : + function( elem ) { + return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; + }; + }, + + "CLASS": function( className ) { + var pattern = classCache[ className + " " ]; + + return pattern || + ( pattern = new RegExp( "(^|" + whitespace + + ")" + className + "(" + whitespace + "|$)" ) ) && classCache( + className, function( elem ) { + return pattern.test( + typeof elem.className === "string" && elem.className || + typeof elem.getAttribute !== "undefined" && + elem.getAttribute( "class" ) || + "" + ); + } ); + }, + + "ATTR": function( name, operator, check ) { + return function( elem ) { + var result = Sizzle.attr( elem, name ); + + if ( result == null ) { + return operator === "!="; + } + if ( !operator ) { + return true; + } + + result += ""; + + /* eslint-disable max-len */ + + return operator === "=" ? result === check : + operator === "!=" ? result !== check : + operator === "^=" ? check && result.indexOf( check ) === 0 : + operator === "*=" ? check && result.indexOf( check ) > -1 : + operator === "$=" ? check && result.slice( -check.length ) === check : + operator === "~=" ? ( " " + result.replace( rwhitespace, " " ) + " " ).indexOf( check ) > -1 : + operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : + false; + /* eslint-enable max-len */ + + }; + }, + + "CHILD": function( type, what, _argument, first, last ) { + var simple = type.slice( 0, 3 ) !== "nth", + forward = type.slice( -4 ) !== "last", + ofType = what === "of-type"; + + return first === 1 && last === 0 ? + + // Shortcut for :nth-*(n) + function( elem ) { + return !!elem.parentNode; + } : + + function( elem, _context, xml ) { + var cache, uniqueCache, outerCache, node, nodeIndex, start, + dir = simple !== forward ? "nextSibling" : "previousSibling", + parent = elem.parentNode, + name = ofType && elem.nodeName.toLowerCase(), + useCache = !xml && !ofType, + diff = false; + + if ( parent ) { + + // :(first|last|only)-(child|of-type) + if ( simple ) { + while ( dir ) { + node = elem; + while ( ( node = node[ dir ] ) ) { + if ( ofType ? 
+ node.nodeName.toLowerCase() === name : + node.nodeType === 1 ) { + + return false; + } + } + + // Reverse direction for :only-* (if we haven't yet done so) + start = dir = type === "only" && !start && "nextSibling"; + } + return true; + } + + start = [ forward ? parent.firstChild : parent.lastChild ]; + + // non-xml :nth-child(...) stores cache data on `parent` + if ( forward && useCache ) { + + // Seek `elem` from a previously-cached index + + // ...in a gzip-friendly way + node = parent; + outerCache = node[ expando ] || ( node[ expando ] = {} ); + + // Support: IE <9 only + // Defend against cloned attroperties (jQuery gh-1709) + uniqueCache = outerCache[ node.uniqueID ] || + ( outerCache[ node.uniqueID ] = {} ); + + cache = uniqueCache[ type ] || []; + nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; + diff = nodeIndex && cache[ 2 ]; + node = nodeIndex && parent.childNodes[ nodeIndex ]; + + while ( ( node = ++nodeIndex && node && node[ dir ] || + + // Fallback to seeking `elem` from the start + ( diff = nodeIndex = 0 ) || start.pop() ) ) { + + // When found, cache indexes on `parent` and break + if ( node.nodeType === 1 && ++diff && node === elem ) { + uniqueCache[ type ] = [ dirruns, nodeIndex, diff ]; + break; + } + } + + } else { + + // Use previously-cached element index if available + if ( useCache ) { + + // ...in a gzip-friendly way + node = elem; + outerCache = node[ expando ] || ( node[ expando ] = {} ); + + // Support: IE <9 only + // Defend against cloned attroperties (jQuery gh-1709) + uniqueCache = outerCache[ node.uniqueID ] || + ( outerCache[ node.uniqueID ] = {} ); + + cache = uniqueCache[ type ] || []; + nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; + diff = nodeIndex; + } + + // xml :nth-child(...) + // or :nth-last-child(...) or :nth(-last)?-of-type(...) + if ( diff === false ) { + + // Use the same loop as above to seek `elem` from the start + while ( ( node = ++nodeIndex && node && node[ dir ] || + ( diff = nodeIndex = 0 ) || start.pop() ) ) { + + if ( ( ofType ? + node.nodeName.toLowerCase() === name : + node.nodeType === 1 ) && + ++diff ) { + + // Cache the index of each encountered element + if ( useCache ) { + outerCache = node[ expando ] || + ( node[ expando ] = {} ); + + // Support: IE <9 only + // Defend against cloned attroperties (jQuery gh-1709) + uniqueCache = outerCache[ node.uniqueID ] || + ( outerCache[ node.uniqueID ] = {} ); + + uniqueCache[ type ] = [ dirruns, diff ]; + } + + if ( node === elem ) { + break; + } + } + } + } + } + + // Incorporate the offset, then check against cycle size + diff -= last; + return diff === first || ( diff % first === 0 && diff / first >= 0 ); + } + }; + }, + + "PSEUDO": function( pseudo, argument ) { + + // pseudo-class names are case-insensitive + // http://www.w3.org/TR/selectors/#pseudo-classes + // Prioritize by case sensitivity in case custom pseudos are added with uppercase letters + // Remember that setFilters inherits from pseudos + var args, + fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || + Sizzle.error( "unsupported pseudo: " + pseudo ); + + // The user may use createPseudo to indicate that + // arguments are needed to create the filter function + // just as Sizzle does + if ( fn[ expando ] ) { + return fn( argument ); + } + + // But maintain support for old signatures + if ( fn.length > 1 ) { + args = [ pseudo, pseudo, "", argument ]; + return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? 
+ markFunction( function( seed, matches ) { + var idx, + matched = fn( seed, argument ), + i = matched.length; + while ( i-- ) { + idx = indexOf( seed, matched[ i ] ); + seed[ idx ] = !( matches[ idx ] = matched[ i ] ); + } + } ) : + function( elem ) { + return fn( elem, 0, args ); + }; + } + + return fn; + } + }, + + pseudos: { + + // Potentially complex pseudos + "not": markFunction( function( selector ) { + + // Trim the selector passed to compile + // to avoid treating leading and trailing + // spaces as combinators + var input = [], + results = [], + matcher = compile( selector.replace( rtrim, "$1" ) ); + + return matcher[ expando ] ? + markFunction( function( seed, matches, _context, xml ) { + var elem, + unmatched = matcher( seed, null, xml, [] ), + i = seed.length; + + // Match elements unmatched by `matcher` + while ( i-- ) { + if ( ( elem = unmatched[ i ] ) ) { + seed[ i ] = !( matches[ i ] = elem ); + } + } + } ) : + function( elem, _context, xml ) { + input[ 0 ] = elem; + matcher( input, null, xml, results ); + + // Don't keep the element (issue #299) + input[ 0 ] = null; + return !results.pop(); + }; + } ), + + "has": markFunction( function( selector ) { + return function( elem ) { + return Sizzle( selector, elem ).length > 0; + }; + } ), + + "contains": markFunction( function( text ) { + text = text.replace( runescape, funescape ); + return function( elem ) { + return ( elem.textContent || getText( elem ) ).indexOf( text ) > -1; + }; + } ), + + // "Whether an element is represented by a :lang() selector + // is based solely on the element's language value + // being equal to the identifier C, + // or beginning with the identifier C immediately followed by "-". + // The matching of C against the element's language value is performed case-insensitively. + // The identifier C does not have to be a valid language name." + // http://www.w3.org/TR/selectors/#lang-pseudo + "lang": markFunction( function( lang ) { + + // lang value must be a valid identifier + if ( !ridentifier.test( lang || "" ) ) { + Sizzle.error( "unsupported lang: " + lang ); + } + lang = lang.replace( runescape, funescape ).toLowerCase(); + return function( elem ) { + var elemLang; + do { + if ( ( elemLang = documentIsHTML ? 
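+ // In HTML documents the elem.lang property reflects only the element's own lang attribute, which is why the do/while loop walks up parentNode; XML documents fall back to explicit xml:lang / lang attributes.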
+ elem.lang : + elem.getAttribute( "xml:lang" ) || elem.getAttribute( "lang" ) ) ) { + + elemLang = elemLang.toLowerCase(); + return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; + } + } while ( ( elem = elem.parentNode ) && elem.nodeType === 1 ); + return false; + }; + } ), + + // Miscellaneous + "target": function( elem ) { + var hash = window.location && window.location.hash; + return hash && hash.slice( 1 ) === elem.id; + }, + + "root": function( elem ) { + return elem === docElem; + }, + + "focus": function( elem ) { + return elem === document.activeElement && + ( !document.hasFocus || document.hasFocus() ) && + !!( elem.type || elem.href || ~elem.tabIndex ); + }, + + // Boolean properties + "enabled": createDisabledPseudo( false ), + "disabled": createDisabledPseudo( true ), + + "checked": function( elem ) { + + // In CSS3, :checked should return both checked and selected elements + // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked + var nodeName = elem.nodeName.toLowerCase(); + return ( nodeName === "input" && !!elem.checked ) || + ( nodeName === "option" && !!elem.selected ); + }, + + "selected": function( elem ) { + + // Accessing this property makes selected-by-default + // options in Safari work properly + if ( elem.parentNode ) { + // eslint-disable-next-line no-unused-expressions + elem.parentNode.selectedIndex; + } + + return elem.selected === true; + }, + + // Contents + "empty": function( elem ) { + + // http://www.w3.org/TR/selectors/#empty-pseudo + // :empty is negated by element (1) or content nodes (text: 3; cdata: 4; entity ref: 5), + // but not by others (comment: 8; processing instruction: 7; etc.) + // nodeType < 6 works because attributes (2) do not appear as children + for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { + if ( elem.nodeType < 6 ) { + return false; + } + } + return true; + }, + + "parent": function( elem ) { + return !Expr.pseudos[ "empty" ]( elem ); + }, + + // Element/input types + "header": function( elem ) { + return rheader.test( elem.nodeName ); + }, + + "input": function( elem ) { + return rinputs.test( elem.nodeName ); + }, + + "button": function( elem ) { + var name = elem.nodeName.toLowerCase(); + return name === "input" && elem.type === "button" || name === "button"; + }, + + "text": function( elem ) { + var attr; + return elem.nodeName.toLowerCase() === "input" && + elem.type === "text" && + + // Support: IE<8 + // New HTML5 attribute values (e.g., "search") appear with elem.type === "text" + ( ( attr = elem.getAttribute( "type" ) ) == null || + attr.toLowerCase() === "text" ); + }, + + // Position-in-collection + "first": createPositionalPseudo( function() { + return [ 0 ]; + } ), + + "last": createPositionalPseudo( function( _matchIndexes, length ) { + return [ length - 1 ]; + } ), + + "eq": createPositionalPseudo( function( _matchIndexes, length, argument ) { + return [ argument < 0 ? argument + length : argument ]; + } ), + + "even": createPositionalPseudo( function( matchIndexes, length ) { + var i = 0; + for ( ; i < length; i += 2 ) { + matchIndexes.push( i ); + } + return matchIndexes; + } ), + + "odd": createPositionalPseudo( function( matchIndexes, length ) { + var i = 1; + for ( ; i < length; i += 2 ) { + matchIndexes.push( i ); + } + return matchIndexes; + } ), + + "lt": createPositionalPseudo( function( matchIndexes, length, argument ) { + var i = argument < 0 ? + argument + length : + argument > length ? 
+ length : + argument; + for ( ; --i >= 0; ) { + matchIndexes.push( i ); + } + return matchIndexes; + } ), + + "gt": createPositionalPseudo( function( matchIndexes, length, argument ) { + var i = argument < 0 ? argument + length : argument; + for ( ; ++i < length; ) { + matchIndexes.push( i ); + } + return matchIndexes; + } ) + } +}; + +Expr.pseudos[ "nth" ] = Expr.pseudos[ "eq" ]; + +// Add button/input type pseudos +for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { + Expr.pseudos[ i ] = createInputPseudo( i ); +} +for ( i in { submit: true, reset: true } ) { + Expr.pseudos[ i ] = createButtonPseudo( i ); +} + +// Easy API for creating new setFilters +function setFilters() {} +setFilters.prototype = Expr.filters = Expr.pseudos; +Expr.setFilters = new setFilters(); + +tokenize = Sizzle.tokenize = function( selector, parseOnly ) { + var matched, match, tokens, type, + soFar, groups, preFilters, + cached = tokenCache[ selector + " " ]; + + if ( cached ) { + return parseOnly ? 0 : cached.slice( 0 ); + } + + soFar = selector; + groups = []; + preFilters = Expr.preFilter; + + while ( soFar ) { + + // Comma and first run + if ( !matched || ( match = rcomma.exec( soFar ) ) ) { + if ( match ) { + + // Don't consume trailing commas as valid + soFar = soFar.slice( match[ 0 ].length ) || soFar; + } + groups.push( ( tokens = [] ) ); + } + + matched = false; + + // Combinators + if ( ( match = rcombinators.exec( soFar ) ) ) { + matched = match.shift(); + tokens.push( { + value: matched, + + // Cast descendant combinators to space + type: match[ 0 ].replace( rtrim, " " ) + } ); + soFar = soFar.slice( matched.length ); + } + + // Filters + for ( type in Expr.filter ) { + if ( ( match = matchExpr[ type ].exec( soFar ) ) && ( !preFilters[ type ] || + ( match = preFilters[ type ]( match ) ) ) ) { + matched = match.shift(); + tokens.push( { + value: matched, + type: type, + matches: match + } ); + soFar = soFar.slice( matched.length ); + } + } + + if ( !matched ) { + break; + } + } + + // Return the length of the invalid excess + // if we're just parsing + // Otherwise, throw an error or return tokens + return parseOnly ? + soFar.length : + soFar ? + Sizzle.error( selector ) : + + // Cache the tokens + tokenCache( selector, groups ).slice( 0 ); +}; + +function toSelector( tokens ) { + var i = 0, + len = tokens.length, + selector = ""; + for ( ; i < len; i++ ) { + selector += tokens[ i ].value; + } + return selector; +} + +function addCombinator( matcher, combinator, base ) { + var dir = combinator.dir, + skip = combinator.next, + key = skip || dir, + checkNonElements = base && key === "parentNode", + doneName = done++; + + return combinator.first ? 
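+
+ // "first" combinators ( ">", "+" ) may only match the element
+ // immediately adjacent in `dir`; " " and "~" scan the whole chain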
+ + // Check against closest ancestor/preceding element + function( elem, context, xml ) { + while ( ( elem = elem[ dir ] ) ) { + if ( elem.nodeType === 1 || checkNonElements ) { + return matcher( elem, context, xml ); + } + } + return false; + } : + + // Check against all ancestor/preceding elements + function( elem, context, xml ) { + var oldCache, uniqueCache, outerCache, + newCache = [ dirruns, doneName ]; + + // We can't set arbitrary data on XML nodes, so they don't benefit from combinator caching + if ( xml ) { + while ( ( elem = elem[ dir ] ) ) { + if ( elem.nodeType === 1 || checkNonElements ) { + if ( matcher( elem, context, xml ) ) { + return true; + } + } + } + } else { + while ( ( elem = elem[ dir ] ) ) { + if ( elem.nodeType === 1 || checkNonElements ) { + outerCache = elem[ expando ] || ( elem[ expando ] = {} ); + + // Support: IE <9 only + // Defend against cloned attroperties (jQuery gh-1709) + uniqueCache = outerCache[ elem.uniqueID ] || + ( outerCache[ elem.uniqueID ] = {} ); + + if ( skip && skip === elem.nodeName.toLowerCase() ) { + elem = elem[ dir ] || elem; + } else if ( ( oldCache = uniqueCache[ key ] ) && + oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { + + // Assign to newCache so results back-propagate to previous elements + return ( newCache[ 2 ] = oldCache[ 2 ] ); + } else { + + // Reuse newcache so results back-propagate to previous elements + uniqueCache[ key ] = newCache; + + // A match means we're done; a fail means we have to keep checking + if ( ( newCache[ 2 ] = matcher( elem, context, xml ) ) ) { + return true; + } + } + } + } + } + return false; + }; +} + +function elementMatcher( matchers ) { + return matchers.length > 1 ? + function( elem, context, xml ) { + var i = matchers.length; + while ( i-- ) { + if ( !matchers[ i ]( elem, context, xml ) ) { + return false; + } + } + return true; + } : + matchers[ 0 ]; +} + +function multipleContexts( selector, contexts, results ) { + var i = 0, + len = contexts.length; + for ( ; i < len; i++ ) { + Sizzle( selector, contexts[ i ], results ); + } + return results; +} + +function condense( unmatched, map, filter, context, xml ) { + var elem, + newUnmatched = [], + i = 0, + len = unmatched.length, + mapped = map != null; + + for ( ; i < len; i++ ) { + if ( ( elem = unmatched[ i ] ) ) { + if ( !filter || filter( elem, context, xml ) ) { + newUnmatched.push( elem ); + if ( mapped ) { + map.push( i ); + } + } + } + } + + return newUnmatched; +} + +function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { + if ( postFilter && !postFilter[ expando ] ) { + postFilter = setMatcher( postFilter ); + } + if ( postFinder && !postFinder[ expando ] ) { + postFinder = setMatcher( postFinder, postSelector ); + } + return markFunction( function( seed, results, context, xml ) { + var temp, i, elem, + preMap = [], + postMap = [], + preexisting = results.length, + + // Get initial elements from seed or context + elems = seed || multipleContexts( + selector || "*", + context.nodeType ? [ context ] : context, + [] + ), + + // Prefilter to get matcher input, preserving a map for seed-results synchronization + matcherIn = preFilter && ( seed || !selector ) ? + condense( elems, preMap, preFilter, context, xml ) : + elems, + + matcherOut = matcher ? + + // If we have a postFinder, or filtered seed, or non-seed postFilter or preexisting results, + postFinder || ( seed ? preFilter : preexisting || postFilter ) ? 
+ + // ...intermediate processing is necessary + [] : + + // ...otherwise use results directly + results : + matcherIn; + + // Find primary matches + if ( matcher ) { + matcher( matcherIn, matcherOut, context, xml ); + } + + // Apply postFilter + if ( postFilter ) { + temp = condense( matcherOut, postMap ); + postFilter( temp, [], context, xml ); + + // Un-match failing elements by moving them back to matcherIn + i = temp.length; + while ( i-- ) { + if ( ( elem = temp[ i ] ) ) { + matcherOut[ postMap[ i ] ] = !( matcherIn[ postMap[ i ] ] = elem ); + } + } + } + + if ( seed ) { + if ( postFinder || preFilter ) { + if ( postFinder ) { + + // Get the final matcherOut by condensing this intermediate into postFinder contexts + temp = []; + i = matcherOut.length; + while ( i-- ) { + if ( ( elem = matcherOut[ i ] ) ) { + + // Restore matcherIn since elem is not yet a final match + temp.push( ( matcherIn[ i ] = elem ) ); + } + } + postFinder( null, ( matcherOut = [] ), temp, xml ); + } + + // Move matched elements from seed to results to keep them synchronized + i = matcherOut.length; + while ( i-- ) { + if ( ( elem = matcherOut[ i ] ) && + ( temp = postFinder ? indexOf( seed, elem ) : preMap[ i ] ) > -1 ) { + + seed[ temp ] = !( results[ temp ] = elem ); + } + } + } + + // Add elements to results, through postFinder if defined + } else { + matcherOut = condense( + matcherOut === results ? + matcherOut.splice( preexisting, matcherOut.length ) : + matcherOut + ); + if ( postFinder ) { + postFinder( null, results, matcherOut, xml ); + } else { + push.apply( results, matcherOut ); + } + } + } ); +} + +function matcherFromTokens( tokens ) { + var checkContext, matcher, j, + len = tokens.length, + leadingRelative = Expr.relative[ tokens[ 0 ].type ], + implicitRelative = leadingRelative || Expr.relative[ " " ], + i = leadingRelative ? 1 : 0, + + // The foundational matcher ensures that elements are reachable from top-level context(s) + matchContext = addCombinator( function( elem ) { + return elem === checkContext; + }, implicitRelative, true ), + matchAnyContext = addCombinator( function( elem ) { + return indexOf( checkContext, elem ) > -1; + }, implicitRelative, true ), + matchers = [ function( elem, context, xml ) { + var ret = ( !leadingRelative && ( xml || context !== outermostContext ) ) || ( + ( checkContext = context ).nodeType ? + matchContext( elem, context, xml ) : + matchAnyContext( elem, context, xml ) ); + + // Avoid hanging onto element (issue #299) + checkContext = null; + return ret; + } ]; + + for ( ; i < len; i++ ) { + if ( ( matcher = Expr.relative[ tokens[ i ].type ] ) ) { + matchers = [ addCombinator( elementMatcher( matchers ), matcher ) ]; + } else { + matcher = Expr.filter[ tokens[ i ].type ].apply( null, tokens[ i ].matches ); + + // Return special upon seeing a positional matcher + if ( matcher[ expando ] ) { + + // Find the next relative operator (if any) for proper handling + j = ++i; + for ( ; j < len; j++ ) { + if ( Expr.relative[ tokens[ j ].type ] ) { + break; + } + } + return setMatcher( + i > 1 && elementMatcher( matchers ), + i > 1 && toSelector( + + // If the preceding token was a descendant combinator, insert an implicit any-element `*` + tokens + .slice( 0, i - 1 ) + .concat( { value: tokens[ i - 2 ].type === " " ? 
"*" : "" } ) + ).replace( rtrim, "$1" ), + matcher, + i < j && matcherFromTokens( tokens.slice( i, j ) ), + j < len && matcherFromTokens( ( tokens = tokens.slice( j ) ) ), + j < len && toSelector( tokens ) + ); + } + matchers.push( matcher ); + } + } + + return elementMatcher( matchers ); +} + +function matcherFromGroupMatchers( elementMatchers, setMatchers ) { + var bySet = setMatchers.length > 0, + byElement = elementMatchers.length > 0, + superMatcher = function( seed, context, xml, results, outermost ) { + var elem, j, matcher, + matchedCount = 0, + i = "0", + unmatched = seed && [], + setMatched = [], + contextBackup = outermostContext, + + // We must always have either seed elements or outermost context + elems = seed || byElement && Expr.find[ "TAG" ]( "*", outermost ), + + // Use integer dirruns iff this is the outermost matcher + dirrunsUnique = ( dirruns += contextBackup == null ? 1 : Math.random() || 0.1 ), + len = elems.length; + + if ( outermost ) { + + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + outermostContext = context == document || context || outermost; + } + + // Add elements passing elementMatchers directly to results + // Support: IE<9, Safari + // Tolerate NodeList properties (IE: "length"; Safari: ) matching elements by id + for ( ; i !== len && ( elem = elems[ i ] ) != null; i++ ) { + if ( byElement && elem ) { + j = 0; + + // Support: IE 11+, Edge 17 - 18+ + // IE/Edge sometimes throw a "Permission denied" error when strict-comparing + // two documents; shallow comparisons work. + // eslint-disable-next-line eqeqeq + if ( !context && elem.ownerDocument != document ) { + setDocument( elem ); + xml = !documentIsHTML; + } + while ( ( matcher = elementMatchers[ j++ ] ) ) { + if ( matcher( elem, context || document, xml ) ) { + results.push( elem ); + break; + } + } + if ( outermost ) { + dirruns = dirrunsUnique; + } + } + + // Track unmatched elements for set filters + if ( bySet ) { + + // They will have gone through all possible matchers + if ( ( elem = !matcher && elem ) ) { + matchedCount--; + } + + // Lengthen the array for every element, matched or not + if ( seed ) { + unmatched.push( elem ); + } + } + } + + // `i` is now the count of elements visited above, and adding it to `matchedCount` + // makes the latter nonnegative. + matchedCount += i; + + // Apply set filters to unmatched elements + // NOTE: This can be skipped if there are no unmatched elements (i.e., `matchedCount` + // equals `i`), unless we didn't visit _any_ elements in the above loop because we have + // no element matchers and no seed. + // Incrementing an initially-string "0" `i` allows `i` to remain a string only in that + // case, which will result in a "00" `matchedCount` that differs from `i` but is also + // numerically zero. 
+ if ( bySet && i !== matchedCount ) { + j = 0; + while ( ( matcher = setMatchers[ j++ ] ) ) { + matcher( unmatched, setMatched, context, xml ); + } + + if ( seed ) { + + // Reintegrate element matches to eliminate the need for sorting + if ( matchedCount > 0 ) { + while ( i-- ) { + if ( !( unmatched[ i ] || setMatched[ i ] ) ) { + setMatched[ i ] = pop.call( results ); + } + } + } + + // Discard index placeholder values to get only actual matches + setMatched = condense( setMatched ); + } + + // Add matches to results + push.apply( results, setMatched ); + + // Seedless set matches succeeding multiple successful matchers stipulate sorting + if ( outermost && !seed && setMatched.length > 0 && + ( matchedCount + setMatchers.length ) > 1 ) { + + Sizzle.uniqueSort( results ); + } + } + + // Override manipulation of globals by nested matchers + if ( outermost ) { + dirruns = dirrunsUnique; + outermostContext = contextBackup; + } + + return unmatched; + }; + + return bySet ? + markFunction( superMatcher ) : + superMatcher; +} + +compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { + var i, + setMatchers = [], + elementMatchers = [], + cached = compilerCache[ selector + " " ]; + + if ( !cached ) { + + // Generate a function of recursive functions that can be used to check each element + if ( !match ) { + match = tokenize( selector ); + } + i = match.length; + while ( i-- ) { + cached = matcherFromTokens( match[ i ] ); + if ( cached[ expando ] ) { + setMatchers.push( cached ); + } else { + elementMatchers.push( cached ); + } + } + + // Cache the compiled function + cached = compilerCache( + selector, + matcherFromGroupMatchers( elementMatchers, setMatchers ) + ); + + // Save selector and tokenization + cached.selector = selector; + } + return cached; +}; + +/** + * A low-level selection function that works with Sizzle's compiled + * selector functions + * @param {String|Function} selector A selector or a pre-compiled + * selector function built with Sizzle.compile + * @param {Element} context + * @param {Array} [results] + * @param {Array} [seed] A set of elements to match against + */ +select = Sizzle.select = function( selector, context, results, seed ) { + var i, tokens, token, type, find, + compiled = typeof selector === "function" && selector, + match = !seed && tokenize( ( selector = compiled.selector || selector ) ); + + results = results || []; + + // Try to minimize operations if there is only one selector in the list and no seed + // (the latter of which guarantees us context) + if ( match.length === 1 ) { + + // Reduce context if the leading compound selector is an ID + tokens = match[ 0 ] = match[ 0 ].slice( 0 ); + if ( tokens.length > 2 && ( token = tokens[ 0 ] ).type === "ID" && + context.nodeType === 9 && documentIsHTML && Expr.relative[ tokens[ 1 ].type ] ) { + + context = ( Expr.find[ "ID" ]( token.matches[ 0 ] + .replace( runescape, funescape ), context ) || [] )[ 0 ]; + if ( !context ) { + return results; + + // Precompiled matchers will still verify ancestry, so step up a level + } else if ( compiled ) { + context = context.parentNode; + } + + selector = selector.slice( tokens.shift().value.length ); + } + + // Fetch a seed set for right-to-left matching + i = matchExpr[ "needsContext" ].test( selector ) ? 
0 : tokens.length;
+ while ( i-- ) {
+ token = tokens[ i ];
+
+ // Abort if we hit a combinator
+ if ( Expr.relative[ ( type = token.type ) ] ) {
+ break;
+ }
+ if ( ( find = Expr.find[ type ] ) ) {
+
+ // Search, expanding context for leading sibling combinators
+ if ( ( seed = find(
+ token.matches[ 0 ].replace( runescape, funescape ),
+ rsibling.test( tokens[ 0 ].type ) && testContext( context.parentNode ) ||
+ context
+ ) ) ) {
+
+ // If seed is empty or no tokens remain, we can return early
+ tokens.splice( i, 1 );
+ selector = seed.length && toSelector( tokens );
+ if ( !selector ) {
+ push.apply( results, seed );
+ return results;
+ }
+
+ break;
+ }
+ }
+ }
+ }
+
+ // Compile and execute a filtering function if one is not provided
+ // Provide `match` to avoid retokenization if we modified the selector above
+ ( compiled || compile( selector, match ) )(
+ seed,
+ context,
+ !documentIsHTML,
+ results,
+ !context || rsibling.test( selector ) && testContext( context.parentNode ) || context
+ );
+ return results;
+};
+
+// One-time assignments
+
+// Sort stability
+support.sortStable = expando.split( "" ).sort( sortOrder ).join( "" ) === expando;
+
+// Support: Chrome 14-35+
+// Always assume duplicates if they aren't passed to the comparison function
+support.detectDuplicates = !!hasDuplicate;
+
+// Initialize against the default document
+setDocument();
+
+// Support: Webkit<537.32 - Safari 6.0.3/Chrome 25 (fixed in Chrome 27)
+// Detached nodes confoundingly follow *each other*
+support.sortDetached = assert( function( el ) {
+
+ // Should return 1, but returns 4 (following)
+ return el.compareDocumentPosition( document.createElement( "fieldset" ) ) & 1;
+} );
+
+// Support: IE<8
+// Prevent attribute/property "interpolation"
+// https://msdn.microsoft.com/en-us/library/ms536429%28VS.85%29.aspx
+if ( !assert( function( el ) {
+ el.innerHTML = "<a href='#'></a>";
+ return el.firstChild.getAttribute( "href" ) === "#";
+} ) ) {
+ addHandle( "type|href|height|width", function( elem, name, isXML ) {
+ if ( !isXML ) {
+ return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 );
+ }
+ } );
+}
+
+// Support: IE<9
+// Use defaultValue in place of getAttribute("value")
+if ( !support.attributes || !assert( function( el ) {
+ el.innerHTML = "<input/>";
+ el.firstChild.setAttribute( "value", "" );
+ return el.firstChild.getAttribute( "value" ) === "";
+} ) ) {
+ addHandle( "value", function( elem, _name, isXML ) {
+ if ( !isXML && elem.nodeName.toLowerCase() === "input" ) {
+ return elem.defaultValue;
+ }
+ } );
+}
+
+// Support: IE<9
+// Use getAttributeNode to fetch booleans when getAttribute lies
+if ( !assert( function( el ) {
+ return el.getAttribute( "disabled" ) == null;
+} ) ) {
+ addHandle( booleans, function( elem, name, isXML ) {
+ var val;
+ if ( !isXML ) {
+ return elem[ name ] === true ? name.toLowerCase() :
+ ( val = elem.getAttributeNode( name ) ) && val.specified ?
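+
+ // ...falling back to an attribute node explicitly specified in markup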
+ val.value : + null; + } + } ); +} + +return Sizzle; + +} )( window ); + + + +jQuery.find = Sizzle; +jQuery.expr = Sizzle.selectors; + +// Deprecated +jQuery.expr[ ":" ] = jQuery.expr.pseudos; +jQuery.uniqueSort = jQuery.unique = Sizzle.uniqueSort; +jQuery.text = Sizzle.getText; +jQuery.isXMLDoc = Sizzle.isXML; +jQuery.contains = Sizzle.contains; +jQuery.escapeSelector = Sizzle.escape; + + + + +var dir = function( elem, dir, until ) { + var matched = [], + truncate = until !== undefined; + + while ( ( elem = elem[ dir ] ) && elem.nodeType !== 9 ) { + if ( elem.nodeType === 1 ) { + if ( truncate && jQuery( elem ).is( until ) ) { + break; + } + matched.push( elem ); + } + } + return matched; +}; + + +var siblings = function( n, elem ) { + var matched = []; + + for ( ; n; n = n.nextSibling ) { + if ( n.nodeType === 1 && n !== elem ) { + matched.push( n ); + } + } + + return matched; +}; + + +var rneedsContext = jQuery.expr.match.needsContext; + + + +function nodeName( elem, name ) { + + return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); + +} +var rsingleTag = ( /^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i ); + + + +// Implement the identical functionality for filter and not +function winnow( elements, qualifier, not ) { + if ( isFunction( qualifier ) ) { + return jQuery.grep( elements, function( elem, i ) { + return !!qualifier.call( elem, i, elem ) !== not; + } ); + } + + // Single element + if ( qualifier.nodeType ) { + return jQuery.grep( elements, function( elem ) { + return ( elem === qualifier ) !== not; + } ); + } + + // Arraylike of elements (jQuery, arguments, Array) + if ( typeof qualifier !== "string" ) { + return jQuery.grep( elements, function( elem ) { + return ( indexOf.call( qualifier, elem ) > -1 ) !== not; + } ); + } + + // Filtered directly for both simple and complex selectors + return jQuery.filter( qualifier, elements, not ); +} + +jQuery.filter = function( expr, elems, not ) { + var elem = elems[ 0 ]; + + if ( not ) { + expr = ":not(" + expr + ")"; + } + + if ( elems.length === 1 && elem.nodeType === 1 ) { + return jQuery.find.matchesSelector( elem, expr ) ? [ elem ] : []; + } + + return jQuery.find.matches( expr, jQuery.grep( elems, function( elem ) { + return elem.nodeType === 1; + } ) ); +}; + +jQuery.fn.extend( { + find: function( selector ) { + var i, ret, + len = this.length, + self = this; + + if ( typeof selector !== "string" ) { + return this.pushStack( jQuery( selector ).filter( function() { + for ( i = 0; i < len; i++ ) { + if ( jQuery.contains( self[ i ], this ) ) { + return true; + } + } + } ) ); + } + + ret = this.pushStack( [] ); + + for ( i = 0; i < len; i++ ) { + jQuery.find( selector, self[ i ], ret ); + } + + return len > 1 ? jQuery.uniqueSort( ret ) : ret; + }, + filter: function( selector ) { + return this.pushStack( winnow( this, selector || [], false ) ); + }, + not: function( selector ) { + return this.pushStack( winnow( this, selector || [], true ) ); + }, + is: function( selector ) { + return !!winnow( + this, + + // If this is a positional/relative selector, check membership in the returned set + // so $("p:first").is("p:last") won't return true for a doc with two "p". + typeof selector === "string" && rneedsContext.test( selector ) ? 
+ jQuery( selector ) :
+ selector || [],
+ false
+ ).length;
+ }
+} );
+
+
+// Initialize a jQuery object
+
+
+// A central reference to the root jQuery(document)
+var rootjQuery,
+
+ // A simple way to check for HTML strings
+ // Prioritize #id over <tag> to avoid XSS via location.hash (#9521)
+ // Strict HTML recognition (#11290: must start with <)
+ // Shortcut simple #id case for speed
+ rquickExpr = /^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/,
+
+ init = jQuery.fn.init = function( selector, context, root ) {
+ var match, elem;
+
+ // HANDLE: $(""), $(null), $(undefined), $(false)
+ if ( !selector ) {
+ return this;
+ }
+
+ // Method init() accepts an alternate rootjQuery
+ // so migrate can support jQuery.sub (gh-2101)
+ root = root || rootjQuery;
+
+ // Handle HTML strings
+ if ( typeof selector === "string" ) {
+ if ( selector[ 0 ] === "<" &&
+ selector[ selector.length - 1 ] === ">" &&
+ selector.length >= 3 ) {
+
+ // Assume that strings that start and end with <> are HTML and skip the regex check
+ match = [ null, selector, null ];
+
+ } else {
+ match = rquickExpr.exec( selector );
+ }
+
+ // Match html or make sure no context is specified for #id
+ if ( match && ( match[ 1 ] || !context ) ) {
+
+ // HANDLE: $(html) -> $(array)
+ if ( match[ 1 ] ) {
+ context = context instanceof jQuery ? context[ 0 ] : context;
+
+ // Option to run scripts is true for back-compat
+ // Intentionally let the error be thrown if parseHTML is not present
+ jQuery.merge( this, jQuery.parseHTML(
+ match[ 1 ],
+ context && context.nodeType ? context.ownerDocument || context : document,
+ true
+ ) );
+
+ // HANDLE: $(html, props)
+ if ( rsingleTag.test( match[ 1 ] ) && jQuery.isPlainObject( context ) ) {
+ for ( match in context ) {
+
+ // Properties of context are called as methods if possible
+ if ( isFunction( this[ match ] ) ) {
+ this[ match ]( context[ match ] );
+
+ // ...and otherwise set as attributes
+ } else {
+ this.attr( match, context[ match ] );
+ }
+ }
+ }
+
+ return this;
+
+ // HANDLE: $(#id)
+ } else {
+ elem = document.getElementById( match[ 2 ] );
+
+ if ( elem ) {
+
+ // Inject the element directly into the jQuery object
+ this[ 0 ] = elem;
+ this.length = 1;
+ }
+ return this;
+ }
+
+ // HANDLE: $(expr, $(...))
+ } else if ( !context || context.jquery ) {
+ return ( context || root ).find( selector );
+
+ // HANDLE: $(expr, context)
+ // (which is just equivalent to: $(context).find(expr)
+ } else {
+ return this.constructor( context ).find( selector );
+ }
+
+ // HANDLE: $(DOMElement)
+ } else if ( selector.nodeType ) {
+ this[ 0 ] = selector;
+ this.length = 1;
+ return this;
+
+ // HANDLE: $(function)
+ // Shortcut for document ready
+ } else if ( isFunction( selector ) ) {
+ return root.ready !== undefined ?
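+
+ // Queue the function on the ready list when jQuery.fn.ready exists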
+ root.ready( selector ) : + + // Execute immediately if ready is not present + selector( jQuery ); + } + + return jQuery.makeArray( selector, this ); + }; + +// Give the init function the jQuery prototype for later instantiation +init.prototype = jQuery.fn; + +// Initialize central reference +rootjQuery = jQuery( document ); + + +var rparentsprev = /^(?:parents|prev(?:Until|All))/, + + // Methods guaranteed to produce a unique set when starting from a unique set + guaranteedUnique = { + children: true, + contents: true, + next: true, + prev: true + }; + +jQuery.fn.extend( { + has: function( target ) { + var targets = jQuery( target, this ), + l = targets.length; + + return this.filter( function() { + var i = 0; + for ( ; i < l; i++ ) { + if ( jQuery.contains( this, targets[ i ] ) ) { + return true; + } + } + } ); + }, + + closest: function( selectors, context ) { + var cur, + i = 0, + l = this.length, + matched = [], + targets = typeof selectors !== "string" && jQuery( selectors ); + + // Positional selectors never match, since there's no _selection_ context + if ( !rneedsContext.test( selectors ) ) { + for ( ; i < l; i++ ) { + for ( cur = this[ i ]; cur && cur !== context; cur = cur.parentNode ) { + + // Always skip document fragments + if ( cur.nodeType < 11 && ( targets ? + targets.index( cur ) > -1 : + + // Don't pass non-elements to Sizzle + cur.nodeType === 1 && + jQuery.find.matchesSelector( cur, selectors ) ) ) { + + matched.push( cur ); + break; + } + } + } + } + + return this.pushStack( matched.length > 1 ? jQuery.uniqueSort( matched ) : matched ); + }, + + // Determine the position of an element within the set + index: function( elem ) { + + // No argument, return index in parent + if ( !elem ) { + return ( this[ 0 ] && this[ 0 ].parentNode ) ? this.first().prevAll().length : -1; + } + + // Index in selector + if ( typeof elem === "string" ) { + return indexOf.call( jQuery( elem ), this[ 0 ] ); + } + + // Locate the position of the desired element + return indexOf.call( this, + + // If it receives a jQuery object, the first element is used + elem.jquery ? elem[ 0 ] : elem + ); + }, + + add: function( selector, context ) { + return this.pushStack( + jQuery.uniqueSort( + jQuery.merge( this.get(), jQuery( selector, context ) ) + ) + ); + }, + + addBack: function( selector ) { + return this.add( selector == null ? + this.prevObject : this.prevObject.filter( selector ) + ); + } +} ); + +function sibling( cur, dir ) { + while ( ( cur = cur[ dir ] ) && cur.nodeType !== 1 ) {} + return cur; +} + +jQuery.each( { + parent: function( elem ) { + var parent = elem.parentNode; + return parent && parent.nodeType !== 11 ? 
parent : null;
+ },
+ parents: function( elem ) {
+ return dir( elem, "parentNode" );
+ },
+ parentsUntil: function( elem, _i, until ) {
+ return dir( elem, "parentNode", until );
+ },
+ next: function( elem ) {
+ return sibling( elem, "nextSibling" );
+ },
+ prev: function( elem ) {
+ return sibling( elem, "previousSibling" );
+ },
+ nextAll: function( elem ) {
+ return dir( elem, "nextSibling" );
+ },
+ prevAll: function( elem ) {
+ return dir( elem, "previousSibling" );
+ },
+ nextUntil: function( elem, _i, until ) {
+ return dir( elem, "nextSibling", until );
+ },
+ prevUntil: function( elem, _i, until ) {
+ return dir( elem, "previousSibling", until );
+ },
+ siblings: function( elem ) {
+ return siblings( ( elem.parentNode || {} ).firstChild, elem );
+ },
+ children: function( elem ) {
+ return siblings( elem.firstChild );
+ },
+ contents: function( elem ) {
+ if ( elem.contentDocument != null &&
+
+ // Support: IE 11+
+ // <object> elements with no `data` attribute have an object
+ // `contentDocument` with a `null` prototype.
+ getProto( elem.contentDocument ) ) {
+
+ return elem.contentDocument;
+ }
+
+ // Support: IE 9 - 11 only, iOS 7 only, Android Browser <=4.3 only
+ // Treat the template element as a regular one in browsers that
+ // don't support it.
+ if ( nodeName( elem, "template" ) ) {
+ elem = elem.content || elem;
+ }
+
+ return jQuery.merge( [], elem.childNodes );
+ }
+}, function( name, fn ) {
+ jQuery.fn[ name ] = function( until, selector ) {
+ var matched = jQuery.map( this, fn, until );
+
+ if ( name.slice( -5 ) !== "Until" ) {
+ selector = until;
+ }
+
+ if ( selector && typeof selector === "string" ) {
+ matched = jQuery.filter( selector, matched );
+ }
+
+ if ( this.length > 1 ) {
+
+ // Remove duplicates
+ if ( !guaranteedUnique[ name ] ) {
+ jQuery.uniqueSort( matched );
+ }
+
+ // Reverse order for parents* and prev-derivatives
+ if ( rparentsprev.test( name ) ) {
+ matched.reverse();
+ }
+ }
+
+ return this.pushStack( matched );
+ };
+} );
+var rnothtmlwhite = ( /[^\x20\t\r\n\f]+/g );
+
+
+
+// Convert String-formatted options into Object-formatted ones
+function createOptions( options ) {
+ var object = {};
+ jQuery.each( options.match( rnothtmlwhite ) || [], function( _, flag ) {
+ object[ flag ] = true;
+ } );
+ return object;
+}
+
+/*
+ * Create a callback list using the following parameters:
+ *
+ * options: an optional list of space-separated options that will change how
+ * the callback list behaves or a more traditional option object
+ *
+ * By default a callback list will act like an event callback list and can be
+ * "fired" multiple times.
+ *
+ * Possible options:
+ *
+ * once: will ensure the callback list can only be fired once (like a Deferred)
+ *
+ * memory: will keep track of previous values and will call any callback added
+ * after the list has been fired right away with the latest "memorized"
+ * values (like a Deferred)
+ *
+ * unique: will ensure a callback can only be added once (no duplicate in the list)
+ *
+ * stopOnFalse: interrupt callings when a callback returns false
+ *
+ */
+jQuery.Callbacks = function( options ) {
+
+ // Convert options from String-formatted to Object-formatted if needed
+ // (we check in cache first)
+ options = typeof options === "string" ?
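+
+ // For example (hypothetical `ready` list), "once memory" replays the
+ // last fire to late subscribers and ignores any further fires:
+ //
+ //   var ready = jQuery.Callbacks( "once memory" );
+ //   ready.add( function( v ) { console.log( "first:", v ); } );
+ //   ready.fire( "go" );                       // logs "first: go"
+ //   ready.add( function( v ) { console.log( "late:", v ); } );  // logs "late: go" at once
+ //   ready.fire( "again" );                    // no-op: list already fired once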
+ createOptions( options ) : + jQuery.extend( {}, options ); + + var // Flag to know if list is currently firing + firing, + + // Last fire value for non-forgettable lists + memory, + + // Flag to know if list was already fired + fired, + + // Flag to prevent firing + locked, + + // Actual callback list + list = [], + + // Queue of execution data for repeatable lists + queue = [], + + // Index of currently firing callback (modified by add/remove as needed) + firingIndex = -1, + + // Fire callbacks + fire = function() { + + // Enforce single-firing + locked = locked || options.once; + + // Execute callbacks for all pending executions, + // respecting firingIndex overrides and runtime changes + fired = firing = true; + for ( ; queue.length; firingIndex = -1 ) { + memory = queue.shift(); + while ( ++firingIndex < list.length ) { + + // Run callback and check for early termination + if ( list[ firingIndex ].apply( memory[ 0 ], memory[ 1 ] ) === false && + options.stopOnFalse ) { + + // Jump to end and forget the data so .add doesn't re-fire + firingIndex = list.length; + memory = false; + } + } + } + + // Forget the data if we're done with it + if ( !options.memory ) { + memory = false; + } + + firing = false; + + // Clean up if we're done firing for good + if ( locked ) { + + // Keep an empty list if we have data for future add calls + if ( memory ) { + list = []; + + // Otherwise, this object is spent + } else { + list = ""; + } + } + }, + + // Actual Callbacks object + self = { + + // Add a callback or a collection of callbacks to the list + add: function() { + if ( list ) { + + // If we have memory from a past run, we should fire after adding + if ( memory && !firing ) { + firingIndex = list.length - 1; + queue.push( memory ); + } + + ( function add( args ) { + jQuery.each( args, function( _, arg ) { + if ( isFunction( arg ) ) { + if ( !options.unique || !self.has( arg ) ) { + list.push( arg ); + } + } else if ( arg && arg.length && toType( arg ) !== "string" ) { + + // Inspect recursively + add( arg ); + } + } ); + } )( arguments ); + + if ( memory && !firing ) { + fire(); + } + } + return this; + }, + + // Remove a callback from the list + remove: function() { + jQuery.each( arguments, function( _, arg ) { + var index; + while ( ( index = jQuery.inArray( arg, list, index ) ) > -1 ) { + list.splice( index, 1 ); + + // Handle firing indexes + if ( index <= firingIndex ) { + firingIndex--; + } + } + } ); + return this; + }, + + // Check if a given callback is in the list. + // If no argument is given, return whether or not list has callbacks attached. + has: function( fn ) { + return fn ? + jQuery.inArray( fn, list ) > -1 : + list.length > 0; + }, + + // Remove all callbacks from the list + empty: function() { + if ( list ) { + list = []; + } + return this; + }, + + // Disable .fire and .add + // Abort any current/pending executions + // Clear all callbacks and values + disable: function() { + locked = queue = []; + list = memory = ""; + return this; + }, + disabled: function() { + return !list; + }, + + // Disable .fire + // Also disable .add unless we have memory (since it would have no effect) + // Abort any pending executions + lock: function() { + locked = queue = []; + if ( !memory && !firing ) { + list = memory = ""; + } + return this; + }, + locked: function() { + return !!locked; + }, + + // Call all callbacks with the given context and arguments + fireWith: function( context, args ) { + if ( !locked ) { + args = args || []; + args = [ context, args.slice ? 
args.slice() : args ]; + queue.push( args ); + if ( !firing ) { + fire(); + } + } + return this; + }, + + // Call all the callbacks with the given arguments + fire: function() { + self.fireWith( this, arguments ); + return this; + }, + + // To know if the callbacks have already been called at least once + fired: function() { + return !!fired; + } + }; + + return self; +}; + + +function Identity( v ) { + return v; +} +function Thrower( ex ) { + throw ex; +} + +function adoptValue( value, resolve, reject, noValue ) { + var method; + + try { + + // Check for promise aspect first to privilege synchronous behavior + if ( value && isFunction( ( method = value.promise ) ) ) { + method.call( value ).done( resolve ).fail( reject ); + + // Other thenables + } else if ( value && isFunction( ( method = value.then ) ) ) { + method.call( value, resolve, reject ); + + // Other non-thenables + } else { + + // Control `resolve` arguments by letting Array#slice cast boolean `noValue` to integer: + // * false: [ value ].slice( 0 ) => resolve( value ) + // * true: [ value ].slice( 1 ) => resolve() + resolve.apply( undefined, [ value ].slice( noValue ) ); + } + + // For Promises/A+, convert exceptions into rejections + // Since jQuery.when doesn't unwrap thenables, we can skip the extra checks appearing in + // Deferred#then to conditionally suppress rejection. + } catch ( value ) { + + // Support: Android 4.0 only + // Strict mode functions invoked without .call/.apply get global-object context + reject.apply( undefined, [ value ] ); + } +} + +jQuery.extend( { + + Deferred: function( func ) { + var tuples = [ + + // action, add listener, callbacks, + // ... .then handlers, argument index, [final state] + [ "notify", "progress", jQuery.Callbacks( "memory" ), + jQuery.Callbacks( "memory" ), 2 ], + [ "resolve", "done", jQuery.Callbacks( "once memory" ), + jQuery.Callbacks( "once memory" ), 0, "resolved" ], + [ "reject", "fail", jQuery.Callbacks( "once memory" ), + jQuery.Callbacks( "once memory" ), 1, "rejected" ] + ], + state = "pending", + promise = { + state: function() { + return state; + }, + always: function() { + deferred.done( arguments ).fail( arguments ); + return this; + }, + "catch": function( fn ) { + return promise.then( null, fn ); + }, + + // Keep pipe for back-compat + pipe: function( /* fnDone, fnFail, fnProgress */ ) { + var fns = arguments; + + return jQuery.Deferred( function( newDefer ) { + jQuery.each( tuples, function( _i, tuple ) { + + // Map tuples (progress, done, fail) to arguments (done, fail, progress) + var fn = isFunction( fns[ tuple[ 4 ] ] ) && fns[ tuple[ 4 ] ]; + + // deferred.progress(function() { bind to newDefer or newDefer.notify }) + // deferred.done(function() { bind to newDefer or newDefer.resolve }) + // deferred.fail(function() { bind to newDefer or newDefer.reject }) + deferred[ tuple[ 1 ] ]( function() { + var returned = fn && fn.apply( this, arguments ); + if ( returned && isFunction( returned.promise ) ) { + returned.promise() + .progress( newDefer.notify ) + .done( newDefer.resolve ) + .fail( newDefer.reject ); + } else { + newDefer[ tuple[ 0 ] + "With" ]( + this, + fn ? 
[ returned ] : arguments + ); + } + } ); + } ); + fns = null; + } ).promise(); + }, + then: function( onFulfilled, onRejected, onProgress ) { + var maxDepth = 0; + function resolve( depth, deferred, handler, special ) { + return function() { + var that = this, + args = arguments, + mightThrow = function() { + var returned, then; + + // Support: Promises/A+ section 2.3.3.3.3 + // https://promisesaplus.com/#point-59 + // Ignore double-resolution attempts + if ( depth < maxDepth ) { + return; + } + + returned = handler.apply( that, args ); + + // Support: Promises/A+ section 2.3.1 + // https://promisesaplus.com/#point-48 + if ( returned === deferred.promise() ) { + throw new TypeError( "Thenable self-resolution" ); + } + + // Support: Promises/A+ sections 2.3.3.1, 3.5 + // https://promisesaplus.com/#point-54 + // https://promisesaplus.com/#point-75 + // Retrieve `then` only once + then = returned && + + // Support: Promises/A+ section 2.3.4 + // https://promisesaplus.com/#point-64 + // Only check objects and functions for thenability + ( typeof returned === "object" || + typeof returned === "function" ) && + returned.then; + + // Handle a returned thenable + if ( isFunction( then ) ) { + + // Special processors (notify) just wait for resolution + if ( special ) { + then.call( + returned, + resolve( maxDepth, deferred, Identity, special ), + resolve( maxDepth, deferred, Thrower, special ) + ); + + // Normal processors (resolve) also hook into progress + } else { + + // ...and disregard older resolution values + maxDepth++; + + then.call( + returned, + resolve( maxDepth, deferred, Identity, special ), + resolve( maxDepth, deferred, Thrower, special ), + resolve( maxDepth, deferred, Identity, + deferred.notifyWith ) + ); + } + + // Handle all other returned values + } else { + + // Only substitute handlers pass on context + // and multiple values (non-spec behavior) + if ( handler !== Identity ) { + that = undefined; + args = [ returned ]; + } + + // Process the value(s) + // Default process is resolve + ( special || deferred.resolveWith )( that, args ); + } + }, + + // Only normal processors (resolve) catch and reject exceptions + process = special ? + mightThrow : + function() { + try { + mightThrow(); + } catch ( e ) { + + if ( jQuery.Deferred.exceptionHook ) { + jQuery.Deferred.exceptionHook( e, + process.stackTrace ); + } + + // Support: Promises/A+ section 2.3.3.3.4.1 + // https://promisesaplus.com/#point-61 + // Ignore post-resolution exceptions + if ( depth + 1 >= maxDepth ) { + + // Only substitute handlers pass on context + // and multiple values (non-spec behavior) + if ( handler !== Thrower ) { + that = undefined; + args = [ e ]; + } + + deferred.rejectWith( that, args ); + } + } + }; + + // Support: Promises/A+ section 2.3.3.3.1 + // https://promisesaplus.com/#point-57 + // Re-resolve promises immediately to dodge false rejection from + // subsequent errors + if ( depth ) { + process(); + } else { + + // Call an optional hook to record the stack, in case of exception + // since it's otherwise lost when execution goes async + if ( jQuery.Deferred.getStackHook ) { + process.stackTrace = jQuery.Deferred.getStackHook(); + } + window.setTimeout( process ); + } + }; + } + + return jQuery.Deferred( function( newDefer ) { + + // progress_handlers.add( ... ) + tuples[ 0 ][ 3 ].add( + resolve( + 0, + newDefer, + isFunction( onProgress ) ? + onProgress : + Identity, + newDefer.notifyWith + ) + ); + + // fulfilled_handlers.add( ... 
) + tuples[ 1 ][ 3 ].add( + resolve( + 0, + newDefer, + isFunction( onFulfilled ) ? + onFulfilled : + Identity + ) + ); + + // rejected_handlers.add( ... ) + tuples[ 2 ][ 3 ].add( + resolve( + 0, + newDefer, + isFunction( onRejected ) ? + onRejected : + Thrower + ) + ); + } ).promise(); + }, + + // Get a promise for this deferred + // If obj is provided, the promise aspect is added to the object + promise: function( obj ) { + return obj != null ? jQuery.extend( obj, promise ) : promise; + } + }, + deferred = {}; + + // Add list-specific methods + jQuery.each( tuples, function( i, tuple ) { + var list = tuple[ 2 ], + stateString = tuple[ 5 ]; + + // promise.progress = list.add + // promise.done = list.add + // promise.fail = list.add + promise[ tuple[ 1 ] ] = list.add; + + // Handle state + if ( stateString ) { + list.add( + function() { + + // state = "resolved" (i.e., fulfilled) + // state = "rejected" + state = stateString; + }, + + // rejected_callbacks.disable + // fulfilled_callbacks.disable + tuples[ 3 - i ][ 2 ].disable, + + // rejected_handlers.disable + // fulfilled_handlers.disable + tuples[ 3 - i ][ 3 ].disable, + + // progress_callbacks.lock + tuples[ 0 ][ 2 ].lock, + + // progress_handlers.lock + tuples[ 0 ][ 3 ].lock + ); + } + + // progress_handlers.fire + // fulfilled_handlers.fire + // rejected_handlers.fire + list.add( tuple[ 3 ].fire ); + + // deferred.notify = function() { deferred.notifyWith(...) } + // deferred.resolve = function() { deferred.resolveWith(...) } + // deferred.reject = function() { deferred.rejectWith(...) } + deferred[ tuple[ 0 ] ] = function() { + deferred[ tuple[ 0 ] + "With" ]( this === deferred ? undefined : this, arguments ); + return this; + }; + + // deferred.notifyWith = list.fireWith + // deferred.resolveWith = list.fireWith + // deferred.rejectWith = list.fireWith + deferred[ tuple[ 0 ] + "With" ] = list.fireWith; + } ); + + // Make the deferred a promise + promise.promise( deferred ); + + // Call given func if any + if ( func ) { + func.call( deferred, deferred ); + } + + // All done! + return deferred; + }, + + // Deferred helper + when: function( singleValue ) { + var + + // count of uncompleted subordinates + remaining = arguments.length, + + // count of unprocessed arguments + i = remaining, + + // subordinate fulfillment data + resolveContexts = Array( i ), + resolveValues = slice.call( arguments ), + + // the primary Deferred + primary = jQuery.Deferred(), + + // subordinate callback factory + updateFunc = function( i ) { + return function( value ) { + resolveContexts[ i ] = this; + resolveValues[ i ] = arguments.length > 1 ? slice.call( arguments ) : value; + if ( !( --remaining ) ) { + primary.resolveWith( resolveContexts, resolveValues ); + } + }; + }; + + // Single- and empty arguments are adopted like Promise.resolve + if ( remaining <= 1 ) { + adoptValue( singleValue, primary.done( updateFunc( i ) ).resolve, primary.reject, + !remaining ); + + // Use .then() to unwrap secondary thenables (cf. gh-3000) + if ( primary.state() === "pending" || + isFunction( resolveValues[ i ] && resolveValues[ i ].then ) ) { + + return primary.then(); + } + } + + // Multiple arguments are aggregated like Promise.all array elements + while ( i-- ) { + adoptValue( resolveValues[ i ], updateFunc( i ), primary.reject ); + } + + return primary.promise(); + } +} ); + + +// These usually indicate a programmer mistake during development, +// warn about them ASAP rather than swallowing them by default. 
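+//
+// For example, a then-handler that calls an undefined function (hypothetical
+// `undefinedFn`) rejects the returned promise with a ReferenceError, and the
+// hook below additionally surfaces it via console.warn:
+//
+//   jQuery.Deferred().resolve().then( function() { undefinedFn(); } );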
+var rerrorNames = /^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/; + +jQuery.Deferred.exceptionHook = function( error, stack ) { + + // Support: IE 8 - 9 only + // Console exists when dev tools are open, which can happen at any time + if ( window.console && window.console.warn && error && rerrorNames.test( error.name ) ) { + window.console.warn( "jQuery.Deferred exception: " + error.message, error.stack, stack ); + } +}; + + + + +jQuery.readyException = function( error ) { + window.setTimeout( function() { + throw error; + } ); +}; + + + + +// The deferred used on DOM ready +var readyList = jQuery.Deferred(); + +jQuery.fn.ready = function( fn ) { + + readyList + .then( fn ) + + // Wrap jQuery.readyException in a function so that the lookup + // happens at the time of error handling instead of callback + // registration. + .catch( function( error ) { + jQuery.readyException( error ); + } ); + + return this; +}; + +jQuery.extend( { + + // Is the DOM ready to be used? Set to true once it occurs. + isReady: false, + + // A counter to track how many items to wait for before + // the ready event fires. See #6781 + readyWait: 1, + + // Handle when the DOM is ready + ready: function( wait ) { + + // Abort if there are pending holds or we're already ready + if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) { + return; + } + + // Remember that the DOM is ready + jQuery.isReady = true; + + // If a normal DOM Ready event fired, decrement, and wait if need be + if ( wait !== true && --jQuery.readyWait > 0 ) { + return; + } + + // If there are functions bound, to execute + readyList.resolveWith( document, [ jQuery ] ); + } +} ); + +jQuery.ready.then = readyList.then; + +// The ready event handler and self cleanup method +function completed() { + document.removeEventListener( "DOMContentLoaded", completed ); + window.removeEventListener( "load", completed ); + jQuery.ready(); +} + +// Catch cases where $(document).ready() is called +// after the browser event has already occurred. +// Support: IE <=9 - 10 only +// Older IE sometimes signals "interactive" too soon +if ( document.readyState === "complete" || + ( document.readyState !== "loading" && !document.documentElement.doScroll ) ) { + + // Handle it asynchronously to allow scripts the opportunity to delay ready + window.setTimeout( jQuery.ready ); + +} else { + + // Use the handy event callback + document.addEventListener( "DOMContentLoaded", completed ); + + // A fallback to window.onload, that will always work + window.addEventListener( "load", completed ); +} + + + + +// Multifunctional method to get and set values of a collection +// The value/s can optionally be executed if it's a function +var access = function( elems, fn, key, value, chainable, emptyGet, raw ) { + var i = 0, + len = elems.length, + bulk = key == null; + + // Sets many values + if ( toType( key ) === "object" ) { + chainable = true; + for ( i in key ) { + access( elems, fn, i, key[ i ], true, emptyGet, raw ); + } + + // Sets one value + } else if ( value !== undefined ) { + chainable = true; + + if ( !isFunction( value ) ) { + raw = true; + } + + if ( bulk ) { + + // Bulk operations run against the entire set + if ( raw ) { + fn.call( elems, value ); + fn = null; + + // ...except when executing function values + } else { + bulk = fn; + fn = function( elem, _key, value ) { + return bulk.call( jQuery( elem ), value ); + }; + } + } + + if ( fn ) { + for ( ; i < len; i++ ) { + fn( + elems[ i ], key, raw ? 
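+
+ // Raw (non-function) values are set as-is; function values are
+ // invoked below with the index and the current computed value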
+ value : + value.call( elems[ i ], i, fn( elems[ i ], key ) ) + ); + } + } + } + + if ( chainable ) { + return elems; + } + + // Gets + if ( bulk ) { + return fn.call( elems ); + } + + return len ? fn( elems[ 0 ], key ) : emptyGet; +}; + + +// Matches dashed string for camelizing +var rmsPrefix = /^-ms-/, + rdashAlpha = /-([a-z])/g; + +// Used by camelCase as callback to replace() +function fcamelCase( _all, letter ) { + return letter.toUpperCase(); +} + +// Convert dashed to camelCase; used by the css and data modules +// Support: IE <=9 - 11, Edge 12 - 15 +// Microsoft forgot to hump their vendor prefix (#9572) +function camelCase( string ) { + return string.replace( rmsPrefix, "ms-" ).replace( rdashAlpha, fcamelCase ); +} +var acceptData = function( owner ) { + + // Accepts only: + // - Node + // - Node.ELEMENT_NODE + // - Node.DOCUMENT_NODE + // - Object + // - Any + return owner.nodeType === 1 || owner.nodeType === 9 || !( +owner.nodeType ); +}; + + + + +function Data() { + this.expando = jQuery.expando + Data.uid++; +} + +Data.uid = 1; + +Data.prototype = { + + cache: function( owner ) { + + // Check if the owner object already has a cache + var value = owner[ this.expando ]; + + // If not, create one + if ( !value ) { + value = {}; + + // We can accept data for non-element nodes in modern browsers, + // but we should not, see #8335. + // Always return an empty object. + if ( acceptData( owner ) ) { + + // If it is a node unlikely to be stringify-ed or looped over + // use plain assignment + if ( owner.nodeType ) { + owner[ this.expando ] = value; + + // Otherwise secure it in a non-enumerable property + // configurable must be true to allow the property to be + // deleted when data is removed + } else { + Object.defineProperty( owner, this.expando, { + value: value, + configurable: true + } ); + } + } + } + + return value; + }, + set: function( owner, data, value ) { + var prop, + cache = this.cache( owner ); + + // Handle: [ owner, key, value ] args + // Always use camelCase key (gh-2257) + if ( typeof data === "string" ) { + cache[ camelCase( data ) ] = value; + + // Handle: [ owner, { properties } ] args + } else { + + // Copy the properties one-by-one to the cache object + for ( prop in data ) { + cache[ camelCase( prop ) ] = data[ prop ]; + } + } + return cache; + }, + get: function( owner, key ) { + return key === undefined ? + this.cache( owner ) : + + // Always use camelCase key (gh-2257) + owner[ this.expando ] && owner[ this.expando ][ camelCase( key ) ]; + }, + access: function( owner, key, value ) { + + // In cases where either: + // + // 1. No key was specified + // 2. A string key was specified, but no value provided + // + // Take the "read" path and allow the get method to determine + // which value to return, respectively either: + // + // 1. The entire cache object + // 2. The data stored at the key + // + if ( key === undefined || + ( ( key && typeof key === "string" ) && value === undefined ) ) { + + return this.get( owner, key ); + } + + // When the key is not a string, or both a key and value + // are specified, set or extend (existing objects) with either: + // + // 1. An object of properties + // 2. A key and value + // + this.set( owner, key, value ); + + // Since the "set" path can have two possible entry points + // return the expected data based on which path was taken[*] + return value !== undefined ? 
value : key; + }, + remove: function( owner, key ) { + var i, + cache = owner[ this.expando ]; + + if ( cache === undefined ) { + return; + } + + if ( key !== undefined ) { + + // Support array or space separated string of keys + if ( Array.isArray( key ) ) { + + // If key is an array of keys... + // We always set camelCase keys, so remove that. + key = key.map( camelCase ); + } else { + key = camelCase( key ); + + // If a key with the spaces exists, use it. + // Otherwise, create an array by matching non-whitespace + key = key in cache ? + [ key ] : + ( key.match( rnothtmlwhite ) || [] ); + } + + i = key.length; + + while ( i-- ) { + delete cache[ key[ i ] ]; + } + } + + // Remove the expando if there's no more data + if ( key === undefined || jQuery.isEmptyObject( cache ) ) { + + // Support: Chrome <=35 - 45 + // Webkit & Blink performance suffers when deleting properties + // from DOM nodes, so set to undefined instead + // https://bugs.chromium.org/p/chromium/issues/detail?id=378607 (bug restricted) + if ( owner.nodeType ) { + owner[ this.expando ] = undefined; + } else { + delete owner[ this.expando ]; + } + } + }, + hasData: function( owner ) { + var cache = owner[ this.expando ]; + return cache !== undefined && !jQuery.isEmptyObject( cache ); + } +}; +var dataPriv = new Data(); + +var dataUser = new Data(); + + + +// Implementation Summary +// +// 1. Enforce API surface and semantic compatibility with 1.9.x branch +// 2. Improve the module's maintainability by reducing the storage +// paths to a single mechanism. +// 3. Use the same single mechanism to support "private" and "user" data. +// 4. _Never_ expose "private" data to user code (TODO: Drop _data, _removeData) +// 5. Avoid exposing implementation details on user objects (eg. expando properties) +// 6. Provide a clear path for implementation upgrade to WeakMap in 2014 + +var rbrace = /^(?:\{[\w\W]*\}|\[[\w\W]*\])$/, + rmultiDash = /[A-Z]/g; + +function getData( data ) { + if ( data === "true" ) { + return true; + } + + if ( data === "false" ) { + return false; + } + + if ( data === "null" ) { + return null; + } + + // Only convert to a number if it doesn't change the string + if ( data === +data + "" ) { + return +data; + } + + if ( rbrace.test( data ) ) { + return JSON.parse( data ); + } + + return data; +} + +function dataAttr( elem, key, data ) { + var name; + + // If nothing was found internally, try to fetch any + // data from the HTML5 data-* attribute + if ( data === undefined && elem.nodeType === 1 ) { + name = "data-" + key.replace( rmultiDash, "-$&" ).toLowerCase(); + data = elem.getAttribute( name ); + + if ( typeof data === "string" ) { + try { + data = getData( data ); + } catch ( e ) {} + + // Make sure we set the data so it isn't changed later + dataUser.set( elem, key, data ); + } else { + data = undefined; + } + } + return data; +} + +jQuery.extend( { + hasData: function( elem ) { + return dataUser.hasData( elem ) || dataPriv.hasData( elem ); + }, + + data: function( elem, name, data ) { + return dataUser.access( elem, name, data ); + }, + + removeData: function( elem, name ) { + dataUser.remove( elem, name ); + }, + + // TODO: Now that all calls to _data and _removeData have been replaced + // with direct calls to dataPriv methods, these can be deprecated. 
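+ //
+ // For example, jQuery.data( elem, "role" ) reads the user-facing store
+ // (dataUser), while jQuery._data( elem, "events" ) reads the private
+ // store (dataPriv) used for internal bookkeeping ("role" being a
+ // hypothetical key).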
+ _data: function( elem, name, data ) { + return dataPriv.access( elem, name, data ); + }, + + _removeData: function( elem, name ) { + dataPriv.remove( elem, name ); + } +} ); + +jQuery.fn.extend( { + data: function( key, value ) { + var i, name, data, + elem = this[ 0 ], + attrs = elem && elem.attributes; + + // Gets all values + if ( key === undefined ) { + if ( this.length ) { + data = dataUser.get( elem ); + + if ( elem.nodeType === 1 && !dataPriv.get( elem, "hasDataAttrs" ) ) { + i = attrs.length; + while ( i-- ) { + + // Support: IE 11 only + // The attrs elements can be null (#14894) + if ( attrs[ i ] ) { + name = attrs[ i ].name; + if ( name.indexOf( "data-" ) === 0 ) { + name = camelCase( name.slice( 5 ) ); + dataAttr( elem, name, data[ name ] ); + } + } + } + dataPriv.set( elem, "hasDataAttrs", true ); + } + } + + return data; + } + + // Sets multiple values + if ( typeof key === "object" ) { + return this.each( function() { + dataUser.set( this, key ); + } ); + } + + return access( this, function( value ) { + var data; + + // The calling jQuery object (element matches) is not empty + // (and therefore has an element appears at this[ 0 ]) and the + // `value` parameter was not undefined. An empty jQuery object + // will result in `undefined` for elem = this[ 0 ] which will + // throw an exception if an attempt to read a data cache is made. + if ( elem && value === undefined ) { + + // Attempt to get data from the cache + // The key will always be camelCased in Data + data = dataUser.get( elem, key ); + if ( data !== undefined ) { + return data; + } + + // Attempt to "discover" the data in + // HTML5 custom data-* attrs + data = dataAttr( elem, key ); + if ( data !== undefined ) { + return data; + } + + // We tried really hard, but the data doesn't exist. + return; + } + + // Set the data... 
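+			// (Editor's illustration, not jQuery source: this branch is reached
+			// only when a value was supplied; access() then marks the call
+			// chainable via `arguments.length > 1` below, so .data( key, value )
+			// returns the jQuery set rather than the stored value.)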
+ this.each( function() { + + // We always store the camelCased key + dataUser.set( this, key, value ); + } ); + }, null, value, arguments.length > 1, null, true ); + }, + + removeData: function( key ) { + return this.each( function() { + dataUser.remove( this, key ); + } ); + } +} ); + + +jQuery.extend( { + queue: function( elem, type, data ) { + var queue; + + if ( elem ) { + type = ( type || "fx" ) + "queue"; + queue = dataPriv.get( elem, type ); + + // Speed up dequeue by getting out quickly if this is just a lookup + if ( data ) { + if ( !queue || Array.isArray( data ) ) { + queue = dataPriv.access( elem, type, jQuery.makeArray( data ) ); + } else { + queue.push( data ); + } + } + return queue || []; + } + }, + + dequeue: function( elem, type ) { + type = type || "fx"; + + var queue = jQuery.queue( elem, type ), + startLength = queue.length, + fn = queue.shift(), + hooks = jQuery._queueHooks( elem, type ), + next = function() { + jQuery.dequeue( elem, type ); + }; + + // If the fx queue is dequeued, always remove the progress sentinel + if ( fn === "inprogress" ) { + fn = queue.shift(); + startLength--; + } + + if ( fn ) { + + // Add a progress sentinel to prevent the fx queue from being + // automatically dequeued + if ( type === "fx" ) { + queue.unshift( "inprogress" ); + } + + // Clear up the last queue stop function + delete hooks.stop; + fn.call( elem, next, hooks ); + } + + if ( !startLength && hooks ) { + hooks.empty.fire(); + } + }, + + // Not public - generate a queueHooks object, or return the current one + _queueHooks: function( elem, type ) { + var key = type + "queueHooks"; + return dataPriv.get( elem, key ) || dataPriv.access( elem, key, { + empty: jQuery.Callbacks( "once memory" ).add( function() { + dataPriv.remove( elem, [ type + "queue", key ] ); + } ) + } ); + } +} ); + +jQuery.fn.extend( { + queue: function( type, data ) { + var setter = 2; + + if ( typeof type !== "string" ) { + data = type; + type = "fx"; + setter--; + } + + if ( arguments.length < setter ) { + return jQuery.queue( this[ 0 ], type ); + } + + return data === undefined ? 
+ this : + this.each( function() { + var queue = jQuery.queue( this, type, data ); + + // Ensure a hooks for this queue + jQuery._queueHooks( this, type ); + + if ( type === "fx" && queue[ 0 ] !== "inprogress" ) { + jQuery.dequeue( this, type ); + } + } ); + }, + dequeue: function( type ) { + return this.each( function() { + jQuery.dequeue( this, type ); + } ); + }, + clearQueue: function( type ) { + return this.queue( type || "fx", [] ); + }, + + // Get a promise resolved when queues of a certain type + // are emptied (fx is the type by default) + promise: function( type, obj ) { + var tmp, + count = 1, + defer = jQuery.Deferred(), + elements = this, + i = this.length, + resolve = function() { + if ( !( --count ) ) { + defer.resolveWith( elements, [ elements ] ); + } + }; + + if ( typeof type !== "string" ) { + obj = type; + type = undefined; + } + type = type || "fx"; + + while ( i-- ) { + tmp = dataPriv.get( elements[ i ], type + "queueHooks" ); + if ( tmp && tmp.empty ) { + count++; + tmp.empty.add( resolve ); + } + } + resolve(); + return defer.promise( obj ); + } +} ); +var pnum = ( /[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/ ).source; + +var rcssNum = new RegExp( "^(?:([+-])=|)(" + pnum + ")([a-z%]*)$", "i" ); + + +var cssExpand = [ "Top", "Right", "Bottom", "Left" ]; + +var documentElement = document.documentElement; + + + + var isAttached = function( elem ) { + return jQuery.contains( elem.ownerDocument, elem ); + }, + composed = { composed: true }; + + // Support: IE 9 - 11+, Edge 12 - 18+, iOS 10.0 - 10.2 only + // Check attachment across shadow DOM boundaries when possible (gh-3504) + // Support: iOS 10.0-10.2 only + // Early iOS 10 versions support `attachShadow` but not `getRootNode`, + // leading to errors. We need to check for `getRootNode`. + if ( documentElement.getRootNode ) { + isAttached = function( elem ) { + return jQuery.contains( elem.ownerDocument, elem ) || + elem.getRootNode( composed ) === elem.ownerDocument; + }; + } +var isHiddenWithinTree = function( elem, el ) { + + // isHiddenWithinTree might be called from jQuery#filter function; + // in that case, element will be second argument + elem = el || elem; + + // Inline style trumps all + return elem.style.display === "none" || + elem.style.display === "" && + + // Otherwise, check computed style + // Support: Firefox <=43 - 45 + // Disconnected elements can have computed display: none, so first confirm that elem is + // in the document. + isAttached( elem ) && + + jQuery.css( elem, "display" ) === "none"; + }; + + + +function adjustCSS( elem, prop, valueParts, tween ) { + var adjusted, scale, + maxIterations = 20, + currentValue = tween ? + function() { + return tween.cur(); + } : + function() { + return jQuery.css( elem, prop, "" ); + }, + initial = currentValue(), + unit = valueParts && valueParts[ 3 ] || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ), + + // Starting value computation is required for potential unit mismatches + initialInUnit = elem.nodeType && + ( jQuery.cssNumber[ prop ] || unit !== "px" && +initial ) && + rcssNum.exec( jQuery.css( elem, prop ) ); + + if ( initialInUnit && initialInUnit[ 3 ] !== unit ) { + + // Support: Firefox <=54 + // Halve the iteration target value to prevent interference from CSS upper bounds (gh-2144) + initial = initial / 2; + + // Trust units reported by jQuery.css + unit = unit || initialInUnit[ 3 ]; + + // Iteratively approximate from a nonzero starting point + initialInUnit = +initial || 1; + + while ( maxIterations-- ) { + + // Evaluate and update our best guess (doubling guesses that zero out). + // Finish if the scale equals or crosses 1 (making the old*new product non-positive). + jQuery.style( elem, prop, initialInUnit + unit ); + if ( ( 1 - scale ) * ( 1 - ( scale = currentValue() / initial || 0.5 ) ) <= 0 ) { + maxIterations = 0; + } + initialInUnit = initialInUnit / scale; + + } + + initialInUnit = initialInUnit * 2; + jQuery.style( elem, prop, initialInUnit + unit ); + + // Make sure we update the tween properties later on + valueParts = valueParts || []; + } + + if ( valueParts ) { + initialInUnit = +initialInUnit || +initial || 0; + + // Apply relative offset (+=/-=) if specified + adjusted = valueParts[ 1 ] ? + initialInUnit + ( valueParts[ 1 ] + 1 ) * valueParts[ 2 ] : + +valueParts[ 2 ]; + if ( tween ) { + tween.unit = unit; + tween.start = initialInUnit; + tween.end = adjusted; + } + } + return adjusted; +} + + +var defaultDisplayMap = {}; + +function getDefaultDisplay( elem ) { + var temp, + doc = elem.ownerDocument, + nodeName = elem.nodeName, + display = defaultDisplayMap[ nodeName ]; + + if ( display ) { + return display; + } + + temp = doc.body.appendChild( doc.createElement( nodeName ) ); + display = jQuery.css( temp, "display" ); + + temp.parentNode.removeChild( temp ); + + if ( display === "none" ) { + display = "block"; + } + defaultDisplayMap[ nodeName ] = display; + + return display; +} + +function showHide( elements, show ) { + var display, elem, + values = [], + index = 0, + length = elements.length; + + // Determine new display value for elements that need to change + for ( ; index < length; index++ ) { + elem = elements[ index ]; + if ( !elem.style ) { + continue; + } + + display = elem.style.display; + if ( show ) { + + // Since we force visibility upon cascade-hidden elements, an immediate (and slow) + // check is required in this first loop unless we have a nonempty display value (either + // inline or about-to-be-restored) + if ( display === "none" ) { + values[ index ] = dataPriv.get( elem, "display" ) || null; + if ( !values[ index ] ) { + elem.style.display = ""; + } + } + if ( elem.style.display === "" && isHiddenWithinTree( elem ) ) { + values[ index ] = getDefaultDisplay( elem ); + } + } else { + if ( display !== "none" ) { + values[ index ] = "none"; + + // Remember what we're overwriting + dataPriv.set( elem, "display", display ); + } + } + } + + // Set the display of the elements in a second loop to avoid constant reflow + for ( index = 0; index < length; index++ ) { + if ( values[ index ] != null ) { + elements[ index ].style.display = values[ index ]; + } + } + + return elements; +} + +jQuery.fn.extend( { + show: function() { + return showHide( this, true ); + }, + hide: function() { + return showHide( this ); + }, + toggle: function( state ) { + if ( typeof state === "boolean" ) { + return state ? 
this.show() : this.hide();
+	}
+
+	return this.each( function() {
+		if ( isHiddenWithinTree( this ) ) {
+			jQuery( this ).show();
+		} else {
+			jQuery( this ).hide();
+		}
+	} );
+}
+} );
+var rcheckableType = ( /^(?:checkbox|radio)$/i );
+
+var rtagName = ( /<([a-z][^\/\0>\x20\t\r\n\f]*)/i );
+
+var rscriptType = ( /^$|^module$|\/(?:java|ecma)script/i );
+
+
+
+( function() {
+	var fragment = document.createDocumentFragment(),
+		div = fragment.appendChild( document.createElement( "div" ) ),
+		input = document.createElement( "input" );
+
+	// Support: Android 4.0 - 4.3 only
+	// Check state lost if the name is set (#11217)
+	// Support: Windows Web Apps (WWA)
+	// `name` and `type` must use .setAttribute for WWA (#14901)
+	input.setAttribute( "type", "radio" );
+	input.setAttribute( "checked", "checked" );
+	input.setAttribute( "name", "t" );
+
+	div.appendChild( input );
+
+	// Support: Android <=4.1 only
+	// Older WebKit doesn't clone checked state correctly in fragments
+	support.checkClone = div.cloneNode( true ).cloneNode( true ).lastChild.checked;
+
+	// Support: IE <=11 only
+	// Make sure textarea (and checkbox) defaultValue is properly cloned
+	div.innerHTML = "<textarea>x</textarea>";
+	support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue;
+
+	// Support: IE <=9 only
+	// IE <=9 replaces <option> elements with their contents when inserted via innerHTML.
+	div.innerHTML = "<option></option>";
+	support.option = !!div.lastChild;
+} )();
+
+
+// We have to close these tags to support XHTML (#13200)
+var wrapMap = {
+
+	// XHTML parsers do not magically insert elements in the
+	// same way that tag soup parsers do. So we cannot shorten
+	// this by omitting <tbody> or other required elements.
+	thead: [ 1, "<table>", "</table>" ],
+	col: [ 2, "<table><colgroup>", "</colgroup></table>" ],
+	tr: [ 2, "<table><tbody>", "</tbody></table>" ],
+	td: [ 3, "<table><tbody><tr>", "</tr></tbody></table>" ],
+
+	_default: [ 0, "", "" ]
+};
+
+wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead;
+wrapMap.th = wrapMap.td;
+
+// Support: IE <=9 only
+if ( !support.option ) {
+	wrapMap.optgroup = wrapMap.option = [ 1, "<select multiple='multiple'>", "</select>" ];
+}
+
+
+function getAll( context, tag ) {
+
+	// Support: IE <=9 - 11 only
+	// Use typeof to avoid zero-argument method invocation on host objects (#15151)
+	var ret;
+
+	if ( typeof context.getElementsByTagName !== "undefined" ) {
+		ret = context.getElementsByTagName( tag || "*" );
+
+	} else if ( typeof context.querySelectorAll !== "undefined" ) {
+		ret = context.querySelectorAll( tag || "*" );
+
+	} else {
+		ret = [];
+	}
+
+	if ( tag === undefined || tag && nodeName( context, tag ) ) {
+		return jQuery.merge( [ context ], ret );
+	}
+
+	return ret;
+}
+
+
+// Mark scripts as having already been evaluated
+function setGlobalEval( elems, refElements ) {
+	var i = 0,
+		l = elems.length;
+
+	for ( ; i < l; i++ ) {
+		dataPriv.set(
+			elems[ i ],
+			"globalEval",
+			!refElements || dataPriv.get( refElements[ i ], "globalEval" )
+		);
+	}
+}
+
+
+var rhtml = /<|&#?\w+;/;
+
+function buildFragment( elems, context, scripts, selection, ignored ) {
+	var elem, tmp, tag, wrap, attached, j,
+		fragment = context.createDocumentFragment(),
+		nodes = [],
+		i = 0,
+		l = elems.length;
+
+	for ( ; i < l; i++ ) {
+		elem = elems[ i ];
+
+		if ( elem || elem === 0 ) {
+
+			// Add nodes directly
+			if ( toType( elem ) === "object" ) {
+
+				// Support: Android <=4.0 only, PhantomJS 1 only
+				// push.apply(_, arraylike) throws on ancient WebKit
+				jQuery.merge( nodes, elem.nodeType ? [ elem ] : elem );
+
+			// Convert non-html into a text node
+			} else if ( !rhtml.test( elem ) ) {
+				nodes.push( context.createTextNode( elem ) );
+
+			// Convert html into DOM nodes
+			} else {
+				tmp = tmp || fragment.appendChild( context.createElement( "div" ) );
+
+				// Deserialize a standard representation
+				tag = ( rtagName.exec( elem ) || [ "", "" ] )[ 1 ].toLowerCase();
+				wrap = wrapMap[ tag ] || wrapMap._default;
+				tmp.innerHTML = wrap[ 1 ] + jQuery.htmlPrefilter( elem ) + wrap[ 2 ];
+
+				// Descend through wrappers to the right content
+				j = wrap[ 0 ];
+				while ( j-- ) {
+					tmp = tmp.lastChild;
+				}
+
+				// Support: Android <=4.0 only, PhantomJS 1 only
+				// push.apply(_, arraylike) throws on ancient WebKit
+				jQuery.merge( nodes, tmp.childNodes );
+
+				// Remember the top-level container
+				tmp = fragment.firstChild;
+
+				// Ensure the created nodes are orphaned (#12392)
+				tmp.textContent = "";
+			}
+		}
+	}
+
+	// Remove wrapper from fragment
+	fragment.textContent = "";
+
+	i = 0;
+	while ( ( elem = nodes[ i++ ] ) ) {
+
+		// Skip elements already in the context collection (trac-4087)
+		if ( selection && jQuery.inArray( elem, selection ) > -1 ) {
+			if ( ignored ) {
+				ignored.push( elem );
+			}
+			continue;
+		}
+
+		attached = isAttached( elem );
+
+		// Append to fragment
+		tmp = getAll( fragment.appendChild( elem ), "script" );
+
+		// Preserve script evaluation history
+		if ( attached ) {
+			setGlobalEval( tmp );
+		}
+
+		// Capture executables
+		if ( scripts ) {
+			j = 0;
+			while ( ( elem = tmp[ j++ ] ) ) {
+				if ( rscriptType.test( elem.type || "" ) ) {
+					scripts.push( elem );
+				}
+			}
+		}
+	}
+
+	return fragment;
+}
+
+
+var rtypenamespace = /^([^.]*)(?:\.(.+)|)/;
+
+function returnTrue() {
+	return true;
+}
+
+function returnFalse() {
+	return false;
+}
+
+// Support: IE <=9 - 11+
+// focus() and blur() are asynchronous, except when they are no-op.
+// So expect focus to be synchronous when the element is already active, +// and blur to be synchronous when the element is not already active. +// (focus and blur are always synchronous in other supported browsers, +// this just defines when we can count on it). +function expectSync( elem, type ) { + return ( elem === safeActiveElement() ) === ( type === "focus" ); +} + +// Support: IE <=9 only +// Accessing document.activeElement can throw unexpectedly +// https://bugs.jquery.com/ticket/13393 +function safeActiveElement() { + try { + return document.activeElement; + } catch ( err ) { } +} + +function on( elem, types, selector, data, fn, one ) { + var origFn, type; + + // Types can be a map of types/handlers + if ( typeof types === "object" ) { + + // ( types-Object, selector, data ) + if ( typeof selector !== "string" ) { + + // ( types-Object, data ) + data = data || selector; + selector = undefined; + } + for ( type in types ) { + on( elem, type, selector, data, types[ type ], one ); + } + return elem; + } + + if ( data == null && fn == null ) { + + // ( types, fn ) + fn = selector; + data = selector = undefined; + } else if ( fn == null ) { + if ( typeof selector === "string" ) { + + // ( types, selector, fn ) + fn = data; + data = undefined; + } else { + + // ( types, data, fn ) + fn = data; + data = selector; + selector = undefined; + } + } + if ( fn === false ) { + fn = returnFalse; + } else if ( !fn ) { + return elem; + } + + if ( one === 1 ) { + origFn = fn; + fn = function( event ) { + + // Can use an empty set, since event contains the info + jQuery().off( event ); + return origFn.apply( this, arguments ); + }; + + // Use same guid so caller can remove using origFn + fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ ); + } + return elem.each( function() { + jQuery.event.add( this, types, fn, data, selector ); + } ); +} + +/* + * Helper functions for managing events -- not part of the public interface. + * Props to Dean Edwards' addEvent library for many of the ideas. + */ +jQuery.event = { + + global: {}, + + add: function( elem, types, handler, data, selector ) { + + var handleObjIn, eventHandle, tmp, + events, t, handleObj, + special, handlers, type, namespaces, origType, + elemData = dataPriv.get( elem ); + + // Only attach events to objects that accept data + if ( !acceptData( elem ) ) { + return; + } + + // Caller can pass in an object of custom data in lieu of the handler + if ( handler.handler ) { + handleObjIn = handler; + handler = handleObjIn.handler; + selector = handleObjIn.selector; + } + + // Ensure that invalid selectors throw exceptions at attach time + // Evaluate against documentElement in case elem is a non-element node (e.g., document) + if ( selector ) { + jQuery.find.matchesSelector( documentElement, selector ); + } + + // Make sure that the handler has a unique ID, used to find/remove it later + if ( !handler.guid ) { + handler.guid = jQuery.guid++; + } + + // Init the element's event structure and main handler, if this is the first + if ( !( events = elemData.events ) ) { + events = elemData.events = Object.create( null ); + } + if ( !( eventHandle = elemData.handle ) ) { + eventHandle = elemData.handle = function( e ) { + + // Discard the second event of a jQuery.event.trigger() and + // when an event is called after a page has unloaded + return typeof jQuery !== "undefined" && jQuery.event.triggered !== e.type ? 
+ jQuery.event.dispatch.apply( elem, arguments ) : undefined; + }; + } + + // Handle multiple events separated by a space + types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; + t = types.length; + while ( t-- ) { + tmp = rtypenamespace.exec( types[ t ] ) || []; + type = origType = tmp[ 1 ]; + namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); + + // There *must* be a type, no attaching namespace-only handlers + if ( !type ) { + continue; + } + + // If event changes its type, use the special event handlers for the changed type + special = jQuery.event.special[ type ] || {}; + + // If selector defined, determine special event api type, otherwise given type + type = ( selector ? special.delegateType : special.bindType ) || type; + + // Update special based on newly reset type + special = jQuery.event.special[ type ] || {}; + + // handleObj is passed to all event handlers + handleObj = jQuery.extend( { + type: type, + origType: origType, + data: data, + handler: handler, + guid: handler.guid, + selector: selector, + needsContext: selector && jQuery.expr.match.needsContext.test( selector ), + namespace: namespaces.join( "." ) + }, handleObjIn ); + + // Init the event handler queue if we're the first + if ( !( handlers = events[ type ] ) ) { + handlers = events[ type ] = []; + handlers.delegateCount = 0; + + // Only use addEventListener if the special events handler returns false + if ( !special.setup || + special.setup.call( elem, data, namespaces, eventHandle ) === false ) { + + if ( elem.addEventListener ) { + elem.addEventListener( type, eventHandle ); + } + } + } + + if ( special.add ) { + special.add.call( elem, handleObj ); + + if ( !handleObj.handler.guid ) { + handleObj.handler.guid = handler.guid; + } + } + + // Add to the element's handler list, delegates in front + if ( selector ) { + handlers.splice( handlers.delegateCount++, 0, handleObj ); + } else { + handlers.push( handleObj ); + } + + // Keep track of which events have ever been used, for event optimization + jQuery.event.global[ type ] = true; + } + + }, + + // Detach an event or set of events from an element + remove: function( elem, types, handler, selector, mappedTypes ) { + + var j, origCount, tmp, + events, t, handleObj, + special, handlers, type, namespaces, origType, + elemData = dataPriv.hasData( elem ) && dataPriv.get( elem ); + + if ( !elemData || !( events = elemData.events ) ) { + return; + } + + // Once for each type.namespace in types; type may be omitted + types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; + t = types.length; + while ( t-- ) { + tmp = rtypenamespace.exec( types[ t ] ) || []; + type = origType = tmp[ 1 ]; + namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); + + // Unbind all events (on this namespace, if provided) for the element + if ( !type ) { + for ( type in events ) { + jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); + } + continue; + } + + special = jQuery.event.special[ type ] || {}; + type = ( selector ? 
special.delegateType : special.bindType ) || type; + handlers = events[ type ] || []; + tmp = tmp[ 2 ] && + new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ); + + // Remove matching events + origCount = j = handlers.length; + while ( j-- ) { + handleObj = handlers[ j ]; + + if ( ( mappedTypes || origType === handleObj.origType ) && + ( !handler || handler.guid === handleObj.guid ) && + ( !tmp || tmp.test( handleObj.namespace ) ) && + ( !selector || selector === handleObj.selector || + selector === "**" && handleObj.selector ) ) { + handlers.splice( j, 1 ); + + if ( handleObj.selector ) { + handlers.delegateCount--; + } + if ( special.remove ) { + special.remove.call( elem, handleObj ); + } + } + } + + // Remove generic event handler if we removed something and no more handlers exist + // (avoids potential for endless recursion during removal of special event handlers) + if ( origCount && !handlers.length ) { + if ( !special.teardown || + special.teardown.call( elem, namespaces, elemData.handle ) === false ) { + + jQuery.removeEvent( elem, type, elemData.handle ); + } + + delete events[ type ]; + } + } + + // Remove data and the expando if it's no longer used + if ( jQuery.isEmptyObject( events ) ) { + dataPriv.remove( elem, "handle events" ); + } + }, + + dispatch: function( nativeEvent ) { + + var i, j, ret, matched, handleObj, handlerQueue, + args = new Array( arguments.length ), + + // Make a writable jQuery.Event from the native event object + event = jQuery.event.fix( nativeEvent ), + + handlers = ( + dataPriv.get( this, "events" ) || Object.create( null ) + )[ event.type ] || [], + special = jQuery.event.special[ event.type ] || {}; + + // Use the fix-ed jQuery.Event rather than the (read-only) native event + args[ 0 ] = event; + + for ( i = 1; i < arguments.length; i++ ) { + args[ i ] = arguments[ i ]; + } + + event.delegateTarget = this; + + // Call the preDispatch hook for the mapped type, and let it bail if desired + if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { + return; + } + + // Determine handlers + handlerQueue = jQuery.event.handlers.call( this, event, handlers ); + + // Run delegates first; they may want to stop propagation beneath us + i = 0; + while ( ( matched = handlerQueue[ i++ ] ) && !event.isPropagationStopped() ) { + event.currentTarget = matched.elem; + + j = 0; + while ( ( handleObj = matched.handlers[ j++ ] ) && + !event.isImmediatePropagationStopped() ) { + + // If the event is namespaced, then each handler is only invoked if it is + // specially universal or its namespaces are a superset of the event's. 
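+					// (Editor's illustration, not jQuery source: e.g.
+					// .trigger( "click.ns" ) sets event.rnamespace so only handlers
+					// bound with that namespace, such as .on( "click.ns", fn ), run;
+					// a plain .trigger( "click" ) leaves rnamespace falsy and runs all.)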
+ if ( !event.rnamespace || handleObj.namespace === false || + event.rnamespace.test( handleObj.namespace ) ) { + + event.handleObj = handleObj; + event.data = handleObj.data; + + ret = ( ( jQuery.event.special[ handleObj.origType ] || {} ).handle || + handleObj.handler ).apply( matched.elem, args ); + + if ( ret !== undefined ) { + if ( ( event.result = ret ) === false ) { + event.preventDefault(); + event.stopPropagation(); + } + } + } + } + } + + // Call the postDispatch hook for the mapped type + if ( special.postDispatch ) { + special.postDispatch.call( this, event ); + } + + return event.result; + }, + + handlers: function( event, handlers ) { + var i, handleObj, sel, matchedHandlers, matchedSelectors, + handlerQueue = [], + delegateCount = handlers.delegateCount, + cur = event.target; + + // Find delegate handlers + if ( delegateCount && + + // Support: IE <=9 + // Black-hole SVG instance trees (trac-13180) + cur.nodeType && + + // Support: Firefox <=42 + // Suppress spec-violating clicks indicating a non-primary pointer button (trac-3861) + // https://www.w3.org/TR/DOM-Level-3-Events/#event-type-click + // Support: IE 11 only + // ...but not arrow key "clicks" of radio inputs, which can have `button` -1 (gh-2343) + !( event.type === "click" && event.button >= 1 ) ) { + + for ( ; cur !== this; cur = cur.parentNode || this ) { + + // Don't check non-elements (#13208) + // Don't process clicks on disabled elements (#6911, #8165, #11382, #11764) + if ( cur.nodeType === 1 && !( event.type === "click" && cur.disabled === true ) ) { + matchedHandlers = []; + matchedSelectors = {}; + for ( i = 0; i < delegateCount; i++ ) { + handleObj = handlers[ i ]; + + // Don't conflict with Object.prototype properties (#13203) + sel = handleObj.selector + " "; + + if ( matchedSelectors[ sel ] === undefined ) { + matchedSelectors[ sel ] = handleObj.needsContext ? + jQuery( sel, this ).index( cur ) > -1 : + jQuery.find( sel, this, null, [ cur ] ).length; + } + if ( matchedSelectors[ sel ] ) { + matchedHandlers.push( handleObj ); + } + } + if ( matchedHandlers.length ) { + handlerQueue.push( { elem: cur, handlers: matchedHandlers } ); + } + } + } + } + + // Add the remaining (directly-bound) handlers + cur = this; + if ( delegateCount < handlers.length ) { + handlerQueue.push( { elem: cur, handlers: handlers.slice( delegateCount ) } ); + } + + return handlerQueue; + }, + + addProp: function( name, hook ) { + Object.defineProperty( jQuery.Event.prototype, name, { + enumerable: true, + configurable: true, + + get: isFunction( hook ) ? + function() { + if ( this.originalEvent ) { + return hook( this.originalEvent ); + } + } : + function() { + if ( this.originalEvent ) { + return this.originalEvent[ name ]; + } + }, + + set: function( value ) { + Object.defineProperty( this, name, { + enumerable: true, + configurable: true, + writable: true, + value: value + } ); + } + } ); + }, + + fix: function( originalEvent ) { + return originalEvent[ jQuery.expando ] ? + originalEvent : + new jQuery.Event( originalEvent ); + }, + + special: { + load: { + + // Prevent triggered image.load events from bubbling to window.load + noBubble: true + }, + click: { + + // Utilize native event to ensure correct state for checkable inputs + setup: function( data ) { + + // For mutual compressibility with _default, replace `this` access with a local var. + // `|| data` is dead code meant only to preserve the variable through minification. 
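+			// (Editor's illustration, not jQuery source: setup runs once, when the
+			// first click handler is bound to the element; for checkable inputs it
+			// installs the leverageNative controller defined later in this file.)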
+ var el = this || data; + + // Claim the first handler + if ( rcheckableType.test( el.type ) && + el.click && nodeName( el, "input" ) ) { + + // dataPriv.set( el, "click", ... ) + leverageNative( el, "click", returnTrue ); + } + + // Return false to allow normal processing in the caller + return false; + }, + trigger: function( data ) { + + // For mutual compressibility with _default, replace `this` access with a local var. + // `|| data` is dead code meant only to preserve the variable through minification. + var el = this || data; + + // Force setup before triggering a click + if ( rcheckableType.test( el.type ) && + el.click && nodeName( el, "input" ) ) { + + leverageNative( el, "click" ); + } + + // Return non-false to allow normal event-path propagation + return true; + }, + + // For cross-browser consistency, suppress native .click() on links + // Also prevent it if we're currently inside a leveraged native-event stack + _default: function( event ) { + var target = event.target; + return rcheckableType.test( target.type ) && + target.click && nodeName( target, "input" ) && + dataPriv.get( target, "click" ) || + nodeName( target, "a" ); + } + }, + + beforeunload: { + postDispatch: function( event ) { + + // Support: Firefox 20+ + // Firefox doesn't alert if the returnValue field is not set. + if ( event.result !== undefined && event.originalEvent ) { + event.originalEvent.returnValue = event.result; + } + } + } + } +}; + +// Ensure the presence of an event listener that handles manually-triggered +// synthetic events by interrupting progress until reinvoked in response to +// *native* events that it fires directly, ensuring that state changes have +// already occurred before other listeners are invoked. +function leverageNative( el, type, expectSync ) { + + // Missing expectSync indicates a trigger call, which must force setup through jQuery.event.add + if ( !expectSync ) { + if ( dataPriv.get( el, type ) === undefined ) { + jQuery.event.add( el, type, returnTrue ); + } + return; + } + + // Register the controller as a special universal handler for all event namespaces + dataPriv.set( el, type, false ); + jQuery.event.add( el, type, { + namespace: false, + handler: function( event ) { + var notAsync, result, + saved = dataPriv.get( this, type ); + + if ( ( event.isTrigger & 1 ) && this[ type ] ) { + + // Interrupt processing of the outer synthetic .trigger()ed event + // Saved data should be false in such cases, but might be a leftover capture object + // from an async native handler (gh-4350) + if ( !saved.length ) { + + // Store arguments for use when handling the inner native event + // There will always be at least one argument (an event object), so this array + // will not be confused with a leftover capture object. + saved = slice.call( arguments ); + dataPriv.set( this, type, saved ); + + // Trigger the native event and capture its result + // Support: IE <=9 - 11+ + // focus() and blur() are asynchronous + notAsync = expectSync( this, type ); + this[ type ](); + result = dataPriv.get( this, type ); + if ( saved !== result || notAsync ) { + dataPriv.set( this, type, false ); + } else { + result = {}; + } + if ( saved !== result ) { + + // Cancel the outer synthetic event + event.stopImmediatePropagation(); + event.preventDefault(); + + // Support: Chrome 86+ + // In Chrome, if an element having a focusout handler is blurred by + // clicking outside of it, it invokes the handler synchronously. 
If + // that handler calls `.remove()` on the element, the data is cleared, + // leaving `result` undefined. We need to guard against this. + return result && result.value; + } + + // If this is an inner synthetic event for an event with a bubbling surrogate + // (focus or blur), assume that the surrogate already propagated from triggering the + // native event and prevent that from happening again here. + // This technically gets the ordering wrong w.r.t. to `.trigger()` (in which the + // bubbling surrogate propagates *after* the non-bubbling base), but that seems + // less bad than duplication. + } else if ( ( jQuery.event.special[ type ] || {} ).delegateType ) { + event.stopPropagation(); + } + + // If this is a native event triggered above, everything is now in order + // Fire an inner synthetic event with the original arguments + } else if ( saved.length ) { + + // ...and capture the result + dataPriv.set( this, type, { + value: jQuery.event.trigger( + + // Support: IE <=9 - 11+ + // Extend with the prototype to reset the above stopImmediatePropagation() + jQuery.extend( saved[ 0 ], jQuery.Event.prototype ), + saved.slice( 1 ), + this + ) + } ); + + // Abort handling of the native event + event.stopImmediatePropagation(); + } + } + } ); +} + +jQuery.removeEvent = function( elem, type, handle ) { + + // This "if" is needed for plain objects + if ( elem.removeEventListener ) { + elem.removeEventListener( type, handle ); + } +}; + +jQuery.Event = function( src, props ) { + + // Allow instantiation without the 'new' keyword + if ( !( this instanceof jQuery.Event ) ) { + return new jQuery.Event( src, props ); + } + + // Event object + if ( src && src.type ) { + this.originalEvent = src; + this.type = src.type; + + // Events bubbling up the document may have been marked as prevented + // by a handler lower down the tree; reflect the correct value. + this.isDefaultPrevented = src.defaultPrevented || + src.defaultPrevented === undefined && + + // Support: Android <=2.3 only + src.returnValue === false ? + returnTrue : + returnFalse; + + // Create target properties + // Support: Safari <=6 - 7 only + // Target should not be a text node (#504, #13143) + this.target = ( src.target && src.target.nodeType === 3 ) ? 
+ src.target.parentNode : + src.target; + + this.currentTarget = src.currentTarget; + this.relatedTarget = src.relatedTarget; + + // Event type + } else { + this.type = src; + } + + // Put explicitly provided properties onto the event object + if ( props ) { + jQuery.extend( this, props ); + } + + // Create a timestamp if incoming event doesn't have one + this.timeStamp = src && src.timeStamp || Date.now(); + + // Mark it as fixed + this[ jQuery.expando ] = true; +}; + +// jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding +// https://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html +jQuery.Event.prototype = { + constructor: jQuery.Event, + isDefaultPrevented: returnFalse, + isPropagationStopped: returnFalse, + isImmediatePropagationStopped: returnFalse, + isSimulated: false, + + preventDefault: function() { + var e = this.originalEvent; + + this.isDefaultPrevented = returnTrue; + + if ( e && !this.isSimulated ) { + e.preventDefault(); + } + }, + stopPropagation: function() { + var e = this.originalEvent; + + this.isPropagationStopped = returnTrue; + + if ( e && !this.isSimulated ) { + e.stopPropagation(); + } + }, + stopImmediatePropagation: function() { + var e = this.originalEvent; + + this.isImmediatePropagationStopped = returnTrue; + + if ( e && !this.isSimulated ) { + e.stopImmediatePropagation(); + } + + this.stopPropagation(); + } +}; + +// Includes all common event props including KeyEvent and MouseEvent specific props +jQuery.each( { + altKey: true, + bubbles: true, + cancelable: true, + changedTouches: true, + ctrlKey: true, + detail: true, + eventPhase: true, + metaKey: true, + pageX: true, + pageY: true, + shiftKey: true, + view: true, + "char": true, + code: true, + charCode: true, + key: true, + keyCode: true, + button: true, + buttons: true, + clientX: true, + clientY: true, + offsetX: true, + offsetY: true, + pointerId: true, + pointerType: true, + screenX: true, + screenY: true, + targetTouches: true, + toElement: true, + touches: true, + which: true +}, jQuery.event.addProp ); + +jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateType ) { + jQuery.event.special[ type ] = { + + // Utilize native event if possible so blur/focus sequence is correct + setup: function() { + + // Claim the first handler + // dataPriv.set( this, "focus", ... ) + // dataPriv.set( this, "blur", ... ) + leverageNative( this, type, expectSync ); + + // Return false to allow normal processing in the caller + return false; + }, + trigger: function() { + + // Force setup before trigger + leverageNative( this, type ); + + // Return non-false to allow normal event-path propagation + return true; + }, + + // Suppress native focus or blur as it's already being fired + // in leverageNative. + _default: function() { + return true; + }, + + delegateType: delegateType + }; +} ); + +// Create mouseenter/leave events using mouseover/out and event-time checks +// so that event delegation works in jQuery. +// Do the same for pointerenter/pointerleave and pointerover/pointerout +// +// Support: Safari 7 only +// Safari sends mouseenter too often; see: +// https://bugs.chromium.org/p/chromium/issues/detail?id=470258 +// for the description of the bug (it existed in older Chrome versions as well). 
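+// (Editor's illustration, not jQuery source: the mapping below is what makes
+// delegated mouseenter possible, e.g. jQuery( document ).on( "mouseenter",
+// ".item", fn ) actually binds a "mouseover" listener whose handle() invokes
+// fn only when event.relatedTarget lies outside the matched ".item" element.)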
+jQuery.each( {
+	mouseenter: "mouseover",
+	mouseleave: "mouseout",
+	pointerenter: "pointerover",
+	pointerleave: "pointerout"
+}, function( orig, fix ) {
+	jQuery.event.special[ orig ] = {
+		delegateType: fix,
+		bindType: fix,
+
+		handle: function( event ) {
+			var ret,
+				target = this,
+				related = event.relatedTarget,
+				handleObj = event.handleObj;
+
+			// For mouseenter/leave call the handler if related is outside the target.
+			// NB: No relatedTarget if the mouse left/entered the browser window
+			if ( !related || ( related !== target && !jQuery.contains( target, related ) ) ) {
+				event.type = handleObj.origType;
+				ret = handleObj.handler.apply( this, arguments );
+				event.type = fix;
+			}
+			return ret;
+		}
+	};
+} );
+
+jQuery.fn.extend( {
+
+	on: function( types, selector, data, fn ) {
+		return on( this, types, selector, data, fn );
+	},
+	one: function( types, selector, data, fn ) {
+		return on( this, types, selector, data, fn, 1 );
+	},
+	off: function( types, selector, fn ) {
+		var handleObj, type;
+		if ( types && types.preventDefault && types.handleObj ) {
+
+			// ( event )  dispatched jQuery.Event
+			handleObj = types.handleObj;
+			jQuery( types.delegateTarget ).off(
+				handleObj.namespace ?
+					handleObj.origType + "." + handleObj.namespace :
+					handleObj.origType,
+				handleObj.selector,
+				handleObj.handler
+			);
+			return this;
+		}
+		if ( typeof types === "object" ) {
+
+			// ( types-object [, selector] )
+			for ( type in types ) {
+				this.off( type, selector, types[ type ] );
+			}
+			return this;
+		}
+		if ( selector === false || typeof selector === "function" ) {
+
+			// ( types [, fn] )
+			fn = selector;
+			selector = undefined;
+		}
+		if ( fn === false ) {
+			fn = returnFalse;
+		}
+		return this.each( function() {
+			jQuery.event.remove( this, types, fn, selector );
+		} );
+	}
+} );
+
+
+var
+
+	// Support: IE <=10 - 11, Edge 12 - 13 only
+	// In IE/Edge using regex groups here causes severe slowdowns.
+	// See https://connect.microsoft.com/IE/feedback/details/1736512/
+	rnoInnerhtml = /<script|<style|<link/i,
+
+	// checked="checked" or checked
+	rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i,
+
+	rcleanScript = /^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g;
+
+// Prefer a tbody over its parent table for containing new rows
+function manipulationTarget( elem, content ) {
+	if ( nodeName( elem, "table" ) &&
+		nodeName( content.nodeType !== 11 ? content : content.firstChild, "tr" ) ) {
+
+		return jQuery( elem ).children( "tbody" )[ 0 ] || elem;
+	}
+
+	return elem;
+}
+
+// Replace/restore the type attribute of script elements for safe DOM manipulation
+function disableScript( elem ) {
+	elem.type = ( elem.getAttribute( "type" ) !== null ) + "/" + elem.type;
+	return elem;
+}
+function restoreScript( elem ) {
+	if ( ( elem.type || "" ).slice( 0, 5 ) === "true/" ) {
+		elem.type = elem.type.slice( 5 );
+	} else {
+		elem.removeAttribute( "type" );
+	}
+
+	return elem;
+}
+
+function cloneCopyEvent( src, dest ) {
+	var i, l, type, pdataOld, udataOld, udataCur, events;
+
+	if ( dest.nodeType !== 1 ) {
+		return;
+	}
+
+	// 1. Copy private data: events, handlers, etc.
+	if ( dataPriv.hasData( src ) ) {
+		pdataOld = dataPriv.get( src );
+		events = pdataOld.events;
+
+		if ( events ) {
+			dataPriv.remove( dest, "handle events" );
+
+			for ( type in events ) {
+				for ( i = 0, l = events[ type ].length; i < l; i++ ) {
+					jQuery.event.add( dest, type, events[ type ][ i ] );
+				}
+			}
+		}
+	}
+
+	// 2. Copy user data
+	if ( dataUser.hasData( src ) ) {
+		udataOld = dataUser.access( src );
+		udataCur = jQuery.extend( {}, udataOld );
+
+		dataUser.set( dest, udataCur );
+	}
+}
+
+// Fix IE bugs, see support tests
+function fixInput( src, dest ) {
+	var nodeName = dest.nodeName.toLowerCase();
+
+	// Fails to persist the checked state of a cloned checkbox or radio button.
+	if ( nodeName === "input" && rcheckableType.test( src.type ) ) {
+		dest.checked = src.checked;
+
+	// Fails to return the selected option to the default selected state when cloning options
+	} else if ( nodeName === "input" || nodeName === "textarea" ) {
+		dest.defaultValue = src.defaultValue;
+	}
+}
+
+function domManip( collection, args, callback, ignored ) {
+
+	// Flatten any nested arrays
+	args = flat( args );
+
+	var fragment, first, scripts, hasScripts, node, doc,
+		i = 0,
+		l = collection.length,
+		iNoClone = l - 1,
+		value = args[ 0 ],
+		valueIsFunction = isFunction( value );
+
+	// We can't cloneNode fragments that contain checked, in WebKit
+	if ( valueIsFunction ||
+			( l > 1 && typeof value === "string" &&
+				!support.checkClone && rchecked.test( value ) ) ) {
+		return collection.each( function( index ) {
+			var self = collection.eq( index );
+			if ( valueIsFunction ) {
+				args[ 0 ] = value.call( this, index, self.html() );
+			}
+			domManip( self, args, callback, ignored );
+		} );
+	}
+
+	if ( l ) {
+		fragment = buildFragment( args, collection[ 0 ].ownerDocument, false, collection, ignored );
+		first = fragment.firstChild;
+
+		if ( fragment.childNodes.length === 1 ) {
+			fragment = first;
+		}
+
+		// Require either new content or an interest in ignored elements to invoke the callback
+		if ( first || ignored ) {
+			scripts = jQuery.map( getAll( fragment, "script" ), disableScript );
+			hasScripts = scripts.length;
+
+			// Use the original fragment for the last item
+			// instead of the first because it can end up
+			// being emptied incorrectly in certain situations (#8070).
+			for ( ; i < l; i++ ) {
+				node = fragment;
+
+				if ( i !== iNoClone ) {
+					node = jQuery.clone( node, true, true );
+
+					// Keep references to cloned scripts for later restoration
+					if ( hasScripts ) {
+
+						// Support: Android <=4.0 only, PhantomJS 1 only
+						// push.apply(_, arraylike) throws on ancient WebKit
+						jQuery.merge( scripts, getAll( node, "script" ) );
+					}
+				}
+
+				callback.call( collection[ i ], node, i );
+			}
+
+			if ( hasScripts ) {
+				doc = scripts[ scripts.length - 1 ].ownerDocument;
+
+				// Reenable scripts
+				jQuery.map( scripts, restoreScript );
+
+				// Evaluate executable scripts on first document insertion
+				for ( i = 0; i < hasScripts; i++ ) {
+					node = scripts[ i ];
+					if ( rscriptType.test( node.type || "" ) &&
+						!dataPriv.access( node, "globalEval" ) &&
+						jQuery.contains( doc, node ) ) {
+
+						if ( node.src && ( node.type || "" ).toLowerCase() !== "module" ) {
+
+							// Optional AJAX dependency, but won't run scripts if not present
+							if ( jQuery._evalUrl && !node.noModule ) {
+								jQuery._evalUrl( node.src, {
+									nonce: node.nonce || node.getAttribute( "nonce" )
+								}, doc );
+							}
+						} else {
+							DOMEval( node.textContent.replace( rcleanScript, "" ), node, doc );
+						}
+					}
+				}
+			}
+		}
+	}
+
+	return collection;
+}
+
+function remove( elem, selector, keepData ) {
+	var node,
+		nodes = selector ?
jQuery.filter( selector, elem ) : elem, + i = 0; + + for ( ; ( node = nodes[ i ] ) != null; i++ ) { + if ( !keepData && node.nodeType === 1 ) { + jQuery.cleanData( getAll( node ) ); + } + + if ( node.parentNode ) { + if ( keepData && isAttached( node ) ) { + setGlobalEval( getAll( node, "script" ) ); + } + node.parentNode.removeChild( node ); + } + } + + return elem; +} + +jQuery.extend( { + htmlPrefilter: function( html ) { + return html; + }, + + clone: function( elem, dataAndEvents, deepDataAndEvents ) { + var i, l, srcElements, destElements, + clone = elem.cloneNode( true ), + inPage = isAttached( elem ); + + // Fix IE cloning issues + if ( !support.noCloneChecked && ( elem.nodeType === 1 || elem.nodeType === 11 ) && + !jQuery.isXMLDoc( elem ) ) { + + // We eschew Sizzle here for performance reasons: https://jsperf.com/getall-vs-sizzle/2 + destElements = getAll( clone ); + srcElements = getAll( elem ); + + for ( i = 0, l = srcElements.length; i < l; i++ ) { + fixInput( srcElements[ i ], destElements[ i ] ); + } + } + + // Copy the events from the original to the clone + if ( dataAndEvents ) { + if ( deepDataAndEvents ) { + srcElements = srcElements || getAll( elem ); + destElements = destElements || getAll( clone ); + + for ( i = 0, l = srcElements.length; i < l; i++ ) { + cloneCopyEvent( srcElements[ i ], destElements[ i ] ); + } + } else { + cloneCopyEvent( elem, clone ); + } + } + + // Preserve script evaluation history + destElements = getAll( clone, "script" ); + if ( destElements.length > 0 ) { + setGlobalEval( destElements, !inPage && getAll( elem, "script" ) ); + } + + // Return the cloned set + return clone; + }, + + cleanData: function( elems ) { + var data, elem, type, + special = jQuery.event.special, + i = 0; + + for ( ; ( elem = elems[ i ] ) !== undefined; i++ ) { + if ( acceptData( elem ) ) { + if ( ( data = elem[ dataPriv.expando ] ) ) { + if ( data.events ) { + for ( type in data.events ) { + if ( special[ type ] ) { + jQuery.event.remove( elem, type ); + + // This is a shortcut to avoid jQuery.event.remove's overhead + } else { + jQuery.removeEvent( elem, type, data.handle ); + } + } + } + + // Support: Chrome <=35 - 45+ + // Assign undefined instead of using delete, see Data#remove + elem[ dataPriv.expando ] = undefined; + } + if ( elem[ dataUser.expando ] ) { + + // Support: Chrome <=35 - 45+ + // Assign undefined instead of using delete, see Data#remove + elem[ dataUser.expando ] = undefined; + } + } + } + } +} ); + +jQuery.fn.extend( { + detach: function( selector ) { + return remove( this, selector, true ); + }, + + remove: function( selector ) { + return remove( this, selector ); + }, + + text: function( value ) { + return access( this, function( value ) { + return value === undefined ? 
+ jQuery.text( this ) : + this.empty().each( function() { + if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { + this.textContent = value; + } + } ); + }, null, value, arguments.length ); + }, + + append: function() { + return domManip( this, arguments, function( elem ) { + if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { + var target = manipulationTarget( this, elem ); + target.appendChild( elem ); + } + } ); + }, + + prepend: function() { + return domManip( this, arguments, function( elem ) { + if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { + var target = manipulationTarget( this, elem ); + target.insertBefore( elem, target.firstChild ); + } + } ); + }, + + before: function() { + return domManip( this, arguments, function( elem ) { + if ( this.parentNode ) { + this.parentNode.insertBefore( elem, this ); + } + } ); + }, + + after: function() { + return domManip( this, arguments, function( elem ) { + if ( this.parentNode ) { + this.parentNode.insertBefore( elem, this.nextSibling ); + } + } ); + }, + + empty: function() { + var elem, + i = 0; + + for ( ; ( elem = this[ i ] ) != null; i++ ) { + if ( elem.nodeType === 1 ) { + + // Prevent memory leaks + jQuery.cleanData( getAll( elem, false ) ); + + // Remove any remaining nodes + elem.textContent = ""; + } + } + + return this; + }, + + clone: function( dataAndEvents, deepDataAndEvents ) { + dataAndEvents = dataAndEvents == null ? false : dataAndEvents; + deepDataAndEvents = deepDataAndEvents == null ? dataAndEvents : deepDataAndEvents; + + return this.map( function() { + return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); + } ); + }, + + html: function( value ) { + return access( this, function( value ) { + var elem = this[ 0 ] || {}, + i = 0, + l = this.length; + + if ( value === undefined && elem.nodeType === 1 ) { + return elem.innerHTML; + } + + // See if we can take a shortcut and just use innerHTML + if ( typeof value === "string" && !rnoInnerhtml.test( value ) && + !wrapMap[ ( rtagName.exec( value ) || [ "", "" ] )[ 1 ].toLowerCase() ] ) { + + value = jQuery.htmlPrefilter( value ); + + try { + for ( ; i < l; i++ ) { + elem = this[ i ] || {}; + + // Remove element nodes and prevent memory leaks + if ( elem.nodeType === 1 ) { + jQuery.cleanData( getAll( elem, false ) ); + elem.innerHTML = value; + } + } + + elem = 0; + + // If using innerHTML throws an exception, use the fallback method + } catch ( e ) {} + } + + if ( elem ) { + this.empty().append( value ); + } + }, null, value, arguments.length ); + }, + + replaceWith: function() { + var ignored = []; + + // Make the changes, replacing each non-ignored context element with the new content + return domManip( this, arguments, function( elem ) { + var parent = this.parentNode; + + if ( jQuery.inArray( this, ignored ) < 0 ) { + jQuery.cleanData( getAll( this ) ); + if ( parent ) { + parent.replaceChild( elem, this ); + } + } + + // Force callback invocation + }, ignored ); + } +} ); + +jQuery.each( { + appendTo: "append", + prependTo: "prepend", + insertBefore: "before", + insertAfter: "after", + replaceAll: "replaceWith" +}, function( name, original ) { + jQuery.fn[ name ] = function( selector ) { + var elems, + ret = [], + insert = jQuery( selector ), + last = insert.length - 1, + i = 0; + + for ( ; i <= last; i++ ) { + elems = i === last ? 
this : this.clone( true ); + jQuery( insert[ i ] )[ original ]( elems ); + + // Support: Android <=4.0 only, PhantomJS 1 only + // .get() because push.apply(_, arraylike) throws on ancient WebKit + push.apply( ret, elems.get() ); + } + + return this.pushStack( ret ); + }; +} ); +var rnumnonpx = new RegExp( "^(" + pnum + ")(?!px)[a-z%]+$", "i" ); + +var getStyles = function( elem ) { + + // Support: IE <=11 only, Firefox <=30 (#15098, #14150) + // IE throws on elements created in popups + // FF meanwhile throws on frame elements through "defaultView.getComputedStyle" + var view = elem.ownerDocument.defaultView; + + if ( !view || !view.opener ) { + view = window; + } + + return view.getComputedStyle( elem ); + }; + +var swap = function( elem, options, callback ) { + var ret, name, + old = {}; + + // Remember the old values, and insert the new ones + for ( name in options ) { + old[ name ] = elem.style[ name ]; + elem.style[ name ] = options[ name ]; + } + + ret = callback.call( elem ); + + // Revert the old values + for ( name in options ) { + elem.style[ name ] = old[ name ]; + } + + return ret; +}; + + +var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); + + + +( function() { + + // Executing both pixelPosition & boxSizingReliable tests require only one layout + // so they're executed at the same time to save the second computation. + function computeStyleTests() { + + // This is a singleton, we need to execute it only once + if ( !div ) { + return; + } + + container.style.cssText = "position:absolute;left:-11111px;width:60px;" + + "margin-top:1px;padding:0;border:0"; + div.style.cssText = + "position:relative;display:block;box-sizing:border-box;overflow:scroll;" + + "margin:auto;border:1px;padding:1px;" + + "width:60%;top:1%"; + documentElement.appendChild( container ).appendChild( div ); + + var divStyle = window.getComputedStyle( div ); + pixelPositionVal = divStyle.top !== "1%"; + + // Support: Android 4.0 - 4.3 only, Firefox <=3 - 44 + reliableMarginLeftVal = roundPixelMeasures( divStyle.marginLeft ) === 12; + + // Support: Android 4.0 - 4.3 only, Safari <=9.1 - 10.1, iOS <=7.0 - 9.3 + // Some styles come back with percentage values, even though they shouldn't + div.style.right = "60%"; + pixelBoxStylesVal = roundPixelMeasures( divStyle.right ) === 36; + + // Support: IE 9 - 11 only + // Detect misreporting of content dimensions for box-sizing:border-box elements + boxSizingReliableVal = roundPixelMeasures( divStyle.width ) === 36; + + // Support: IE 9 only + // Detect overflow:scroll screwiness (gh-3699) + // Support: Chrome <=64 + // Don't get tricked when zoom affects offsetWidth (gh-4029) + div.style.position = "absolute"; + scrollboxSizeVal = roundPixelMeasures( div.offsetWidth / 3 ) === 12; + + documentElement.removeChild( container ); + + // Nullify the div so it wouldn't be stored in the memory and + // it will also be a sign that checks already performed + div = null; + } + + function roundPixelMeasures( measure ) { + return Math.round( parseFloat( measure ) ); + } + + var pixelPositionVal, boxSizingReliableVal, scrollboxSizeVal, pixelBoxStylesVal, + reliableTrDimensionsVal, reliableMarginLeftVal, + container = document.createElement( "div" ), + div = document.createElement( "div" ); + + // Finish early in limited (non-browser) environments + if ( !div.style ) { + return; + } + + // Support: IE <=9 - 11 only + // Style of cloned element affects source element cloned (#8908) + div.style.backgroundClip = "content-box"; + div.cloneNode( true ).style.backgroundClip = ""; + 
support.clearCloneStyle = div.style.backgroundClip === "content-box"; + + jQuery.extend( support, { + boxSizingReliable: function() { + computeStyleTests(); + return boxSizingReliableVal; + }, + pixelBoxStyles: function() { + computeStyleTests(); + return pixelBoxStylesVal; + }, + pixelPosition: function() { + computeStyleTests(); + return pixelPositionVal; + }, + reliableMarginLeft: function() { + computeStyleTests(); + return reliableMarginLeftVal; + }, + scrollboxSize: function() { + computeStyleTests(); + return scrollboxSizeVal; + }, + + // Support: IE 9 - 11+, Edge 15 - 18+ + // IE/Edge misreport `getComputedStyle` of table rows with width/height + // set in CSS while `offset*` properties report correct values. + // Behavior in IE 9 is more subtle than in newer versions & it passes + // some versions of this test; make sure not to make it pass there! + // + // Support: Firefox 70+ + // Only Firefox includes border widths + // in computed dimensions. (gh-4529) + reliableTrDimensions: function() { + var table, tr, trChild, trStyle; + if ( reliableTrDimensionsVal == null ) { + table = document.createElement( "table" ); + tr = document.createElement( "tr" ); + trChild = document.createElement( "div" ); + + table.style.cssText = "position:absolute;left:-11111px;border-collapse:separate"; + tr.style.cssText = "border:1px solid"; + + // Support: Chrome 86+ + // Height set through cssText does not get applied. + // Computed height then comes back as 0. + tr.style.height = "1px"; + trChild.style.height = "9px"; + + // Support: Android 8 Chrome 86+ + // In our bodyBackground.html iframe, + // display for all div elements is set to "inline", + // which causes a problem only in Android 8 Chrome 86. + // Ensuring the div is display: block + // gets around this issue. + trChild.style.display = "block"; + + documentElement + .appendChild( table ) + .appendChild( tr ) + .appendChild( trChild ); + + trStyle = window.getComputedStyle( tr ); + reliableTrDimensionsVal = ( parseInt( trStyle.height, 10 ) + + parseInt( trStyle.borderTopWidth, 10 ) + + parseInt( trStyle.borderBottomWidth, 10 ) ) === tr.offsetHeight; + + documentElement.removeChild( table ); + } + return reliableTrDimensionsVal; + } + } ); +} )(); + + +function curCSS( elem, name, computed ) { + var width, minWidth, maxWidth, ret, + + // Support: Firefox 51+ + // Retrieving style before computed somehow + // fixes an issue with getting wrong values + // on detached elements + style = elem.style; + + computed = computed || getStyles( elem ); + + // getPropertyValue is needed for: + // .css('filter') (IE 9 only, #12537) + // .css('--customProperty) (#3144) + if ( computed ) { + ret = computed.getPropertyValue( name ) || computed[ name ]; + + if ( ret === "" && !isAttached( elem ) ) { + ret = jQuery.style( elem, name ); + } + + // A tribute to the "awesome hack by Dean Edwards" + // Android Browser returns percentage for some values, + // but width seems to be reliably pixels. 
+ // This is against the CSSOM draft spec: + // https://drafts.csswg.org/cssom/#resolved-values + if ( !support.pixelBoxStyles() && rnumnonpx.test( ret ) && rboxStyle.test( name ) ) { + + // Remember the original values + width = style.width; + minWidth = style.minWidth; + maxWidth = style.maxWidth; + + // Put in the new values to get a computed value out + style.minWidth = style.maxWidth = style.width = ret; + ret = computed.width; + + // Revert the changed values + style.width = width; + style.minWidth = minWidth; + style.maxWidth = maxWidth; + } + } + + return ret !== undefined ? + + // Support: IE <=9 - 11 only + // IE returns zIndex value as an integer. + ret + "" : + ret; +} + + +function addGetHookIf( conditionFn, hookFn ) { + + // Define the hook, we'll check on the first run if it's really needed. + return { + get: function() { + if ( conditionFn() ) { + + // Hook not needed (or it's not possible to use it due + // to missing dependency), remove it. + delete this.get; + return; + } + + // Hook needed; redefine it so that the support test is not executed again. + return ( this.get = hookFn ).apply( this, arguments ); + } + }; +} + + +var cssPrefixes = [ "Webkit", "Moz", "ms" ], + emptyStyle = document.createElement( "div" ).style, + vendorProps = {}; + +// Return a vendor-prefixed property or undefined +function vendorPropName( name ) { + + // Check for vendor prefixed names + var capName = name[ 0 ].toUpperCase() + name.slice( 1 ), + i = cssPrefixes.length; + + while ( i-- ) { + name = cssPrefixes[ i ] + capName; + if ( name in emptyStyle ) { + return name; + } + } +} + +// Return a potentially-mapped jQuery.cssProps or vendor prefixed property +function finalPropName( name ) { + var final = jQuery.cssProps[ name ] || vendorProps[ name ]; + + if ( final ) { + return final; + } + if ( name in emptyStyle ) { + return name; + } + return vendorProps[ name ] = vendorPropName( name ) || name; +} + + +var + + // Swappable if display is none or starts with table + // except "table", "table-cell", or "table-caption" + // See here for display values: https://developer.mozilla.org/en-US/docs/CSS/display + rdisplayswap = /^(none|table(?!-c[ea]).+)/, + rcustomProp = /^--/, + cssShow = { position: "absolute", visibility: "hidden", display: "block" }, + cssNormalTransform = { + letterSpacing: "0", + fontWeight: "400" + }; + +function setPositiveNumber( _elem, value, subtract ) { + + // Any relative (+/-) values have already been + // normalized at this point + var matches = rcssNum.exec( value ); + return matches ? + + // Guard against undefined "subtract", e.g., when used as in cssHooks + Math.max( 0, matches[ 2 ] - ( subtract || 0 ) ) + ( matches[ 3 ] || "px" ) : + value; +} + +function boxModelAdjustment( elem, dimension, box, isBorderBox, styles, computedVal ) { + var i = dimension === "width" ? 1 : 0, + extra = 0, + delta = 0; + + // Adjustment may not be necessary + if ( box === ( isBorderBox ? 
"border" : "content" ) ) { + return 0; + } + + for ( ; i < 4; i += 2 ) { + + // Both box models exclude margin + if ( box === "margin" ) { + delta += jQuery.css( elem, box + cssExpand[ i ], true, styles ); + } + + // If we get here with a content-box, we're seeking "padding" or "border" or "margin" + if ( !isBorderBox ) { + + // Add padding + delta += jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); + + // For "border" or "margin", add border + if ( box !== "padding" ) { + delta += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); + + // But still keep track of it otherwise + } else { + extra += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); + } + + // If we get here with a border-box (content + padding + border), we're seeking "content" or + // "padding" or "margin" + } else { + + // For "content", subtract padding + if ( box === "content" ) { + delta -= jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); + } + + // For "content" or "padding", subtract border + if ( box !== "margin" ) { + delta -= jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); + } + } + } + + // Account for positive content-box scroll gutter when requested by providing computedVal + if ( !isBorderBox && computedVal >= 0 ) { + + // offsetWidth/offsetHeight is a rounded sum of content, padding, scroll gutter, and border + // Assuming integer scroll gutter, subtract the rest and round down + delta += Math.max( 0, Math.ceil( + elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - + computedVal - + delta - + extra - + 0.5 + + // If offsetWidth/offsetHeight is unknown, then we can't determine content-box scroll gutter + // Use an explicit zero to avoid NaN (gh-3964) + ) ) || 0; + } + + return delta; +} + +function getWidthOrHeight( elem, dimension, extra ) { + + // Start with computed style + var styles = getStyles( elem ), + + // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-4322). + // Fake content-box until we know it's needed to know the true value. + boxSizingNeeded = !support.boxSizingReliable() || extra, + isBorderBox = boxSizingNeeded && + jQuery.css( elem, "boxSizing", false, styles ) === "border-box", + valueIsBorderBox = isBorderBox, + + val = curCSS( elem, dimension, styles ), + offsetProp = "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ); + + // Support: Firefox <=54 + // Return a confounding non-pixel value or feign ignorance, as appropriate. + if ( rnumnonpx.test( val ) ) { + if ( !extra ) { + return val; + } + val = "auto"; + } + + + // Support: IE 9 - 11 only + // Use offsetWidth/offsetHeight for when box sizing is unreliable. + // In those cases, the computed value can be trusted to be border-box. + if ( ( !support.boxSizingReliable() && isBorderBox || + + // Support: IE 10 - 11+, Edge 15 - 18+ + // IE/Edge misreport `getComputedStyle` of table rows with width/height + // set in CSS while `offset*` properties report correct values. + // Interestingly, in some cases IE 9 doesn't suffer from this issue. 
+ !support.reliableTrDimensions() && nodeName( elem, "tr" ) || + + // Fall back to offsetWidth/offsetHeight when value is "auto" + // This happens for inline elements with no explicit setting (gh-3571) + val === "auto" || + + // Support: Android <=4.1 - 4.3 only + // Also use offsetWidth/offsetHeight for misreported inline dimensions (gh-3602) + !parseFloat( val ) && jQuery.css( elem, "display", false, styles ) === "inline" ) && + + // Make sure the element is visible & connected + elem.getClientRects().length ) { + + isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box"; + + // Where available, offsetWidth/offsetHeight approximate border box dimensions. + // Where not available (e.g., SVG), assume unreliable box-sizing and interpret the + // retrieved value as a content box dimension. + valueIsBorderBox = offsetProp in elem; + if ( valueIsBorderBox ) { + val = elem[ offsetProp ]; + } + } + + // Normalize "" and auto + val = parseFloat( val ) || 0; + + // Adjust for the element's box model + return ( val + + boxModelAdjustment( + elem, + dimension, + extra || ( isBorderBox ? "border" : "content" ), + valueIsBorderBox, + styles, + + // Provide the current computed size to request scroll gutter calculation (gh-3589) + val + ) + ) + "px"; +} + +jQuery.extend( { + + // Add in style property hooks for overriding the default + // behavior of getting and setting a style property + cssHooks: { + opacity: { + get: function( elem, computed ) { + if ( computed ) { + + // We should always get a number back from opacity + var ret = curCSS( elem, "opacity" ); + return ret === "" ? "1" : ret; + } + } + } + }, + + // Don't automatically add "px" to these possibly-unitless properties + cssNumber: { + "animationIterationCount": true, + "columnCount": true, + "fillOpacity": true, + "flexGrow": true, + "flexShrink": true, + "fontWeight": true, + "gridArea": true, + "gridColumn": true, + "gridColumnEnd": true, + "gridColumnStart": true, + "gridRow": true, + "gridRowEnd": true, + "gridRowStart": true, + "lineHeight": true, + "opacity": true, + "order": true, + "orphans": true, + "widows": true, + "zIndex": true, + "zoom": true + }, + + // Add in properties whose names you wish to fix before + // setting or getting the value + cssProps: {}, + + // Get and set the style property on a DOM Node + style: function( elem, name, value, extra ) { + + // Don't set styles on text and comment nodes + if ( !elem || elem.nodeType === 3 || elem.nodeType === 8 || !elem.style ) { + return; + } + + // Make sure that we're working with the right name + var ret, type, hooks, + origName = camelCase( name ), + isCustomProp = rcustomProp.test( name ), + style = elem.style; + + // Make sure that we're working with the right name. We don't + // want to query the value if it is a CSS custom property + // since they are user-defined. 
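+ // Editor's illustrative note (not part of upstream jQuery): because
+ // custom properties skip this renaming, a hypothetical call such as
+ //   jQuery( elem ).css( "--brand-color", "#0068b5" );
+ //   jQuery( elem ).css( "--brand-color" );
+ // keeps the literal "--brand-color" name and is routed through
+ // style.setProperty() / getPropertyValue() rather than the camelCased,
+ // possibly vendor-prefixed property path below.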
+ if ( !isCustomProp ) { + name = finalPropName( origName ); + } + + // Gets hook for the prefixed version, then unprefixed version + hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; + + // Check if we're setting a value + if ( value !== undefined ) { + type = typeof value; + + // Convert "+=" or "-=" to relative numbers (#7345) + if ( type === "string" && ( ret = rcssNum.exec( value ) ) && ret[ 1 ] ) { + value = adjustCSS( elem, name, ret ); + + // Fixes bug #9237 + type = "number"; + } + + // Make sure that null and NaN values aren't set (#7116) + if ( value == null || value !== value ) { + return; + } + + // If a number was passed in, add the unit (except for certain CSS properties) + // The isCustomProp check can be removed in jQuery 4.0 when we only auto-append + // "px" to a few hardcoded values. + if ( type === "number" && !isCustomProp ) { + value += ret && ret[ 3 ] || ( jQuery.cssNumber[ origName ] ? "" : "px" ); + } + + // background-* props affect original clone's values + if ( !support.clearCloneStyle && value === "" && name.indexOf( "background" ) === 0 ) { + style[ name ] = "inherit"; + } + + // If a hook was provided, use that value, otherwise just set the specified value + if ( !hooks || !( "set" in hooks ) || + ( value = hooks.set( elem, value, extra ) ) !== undefined ) { + + if ( isCustomProp ) { + style.setProperty( name, value ); + } else { + style[ name ] = value; + } + } + + } else { + + // If a hook was provided get the non-computed value from there + if ( hooks && "get" in hooks && + ( ret = hooks.get( elem, false, extra ) ) !== undefined ) { + + return ret; + } + + // Otherwise just get the value from the style object + return style[ name ]; + } + }, + + css: function( elem, name, extra, styles ) { + var val, num, hooks, + origName = camelCase( name ), + isCustomProp = rcustomProp.test( name ); + + // Make sure that we're working with the right name. We don't + // want to modify the value if it is a CSS custom property + // since they are user-defined. + if ( !isCustomProp ) { + name = finalPropName( origName ); + } + + // Try prefixed name followed by the unprefixed name + hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; + + // If a hook was provided get the computed value from there + if ( hooks && "get" in hooks ) { + val = hooks.get( elem, true, extra ); + } + + // Otherwise, if a way to get the computed value exists, use that + if ( val === undefined ) { + val = curCSS( elem, name, styles ); + } + + // Convert "normal" to computed value + if ( val === "normal" && name in cssNormalTransform ) { + val = cssNormalTransform[ name ]; + } + + // Make numeric if forced or a qualifier was provided and val looks numeric + if ( extra === "" || extra ) { + num = parseFloat( val ); + return extra === true || isFinite( num ) ? num || 0 : val; + } + + return val; + } +} ); + +jQuery.each( [ "height", "width" ], function( _i, dimension ) { + jQuery.cssHooks[ dimension ] = { + get: function( elem, computed, extra ) { + if ( computed ) { + + // Certain elements can have dimension info if we invisibly show them + // but it must have a current display style that would benefit + return rdisplayswap.test( jQuery.css( elem, "display" ) ) && + + // Support: Safari 8+ + // Table columns in Safari have non-zero offsetWidth & zero + // getBoundingClientRect().width unless display is changed. + // Support: IE <=11 only + // Running getBoundingClientRect on a disconnected node + // in IE throws an error. 
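+ // Editor's illustrative note (not part of upstream jQuery): this
+ // swap() branch is why, e.g., jQuery( "#hidden" ).width() can still
+ // return a pixel measurement for a display:none element - the element
+ // is measured while cssShow ( position:absolute, visibility:hidden,
+ // display:block ) is temporarily applied, then restored.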
+ ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? + swap( elem, cssShow, function() { + return getWidthOrHeight( elem, dimension, extra ); + } ) : + getWidthOrHeight( elem, dimension, extra ); + } + }, + + set: function( elem, value, extra ) { + var matches, + styles = getStyles( elem ), + + // Only read styles.position if the test has a chance to fail + // to avoid forcing a reflow. + scrollboxSizeBuggy = !support.scrollboxSize() && + styles.position === "absolute", + + // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-3991) + boxSizingNeeded = scrollboxSizeBuggy || extra, + isBorderBox = boxSizingNeeded && + jQuery.css( elem, "boxSizing", false, styles ) === "border-box", + subtract = extra ? + boxModelAdjustment( + elem, + dimension, + extra, + isBorderBox, + styles + ) : + 0; + + // Account for unreliable border-box dimensions by comparing offset* to computed and + // faking a content-box to get border and padding (gh-3699) + if ( isBorderBox && scrollboxSizeBuggy ) { + subtract -= Math.ceil( + elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - + parseFloat( styles[ dimension ] ) - + boxModelAdjustment( elem, dimension, "border", false, styles ) - + 0.5 + ); + } + + // Convert to pixels if value adjustment is needed + if ( subtract && ( matches = rcssNum.exec( value ) ) && + ( matches[ 3 ] || "px" ) !== "px" ) { + + elem.style[ dimension ] = value; + value = jQuery.css( elem, dimension ); + } + + return setPositiveNumber( elem, value, subtract ); + } + }; +} ); + +jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, + function( elem, computed ) { + if ( computed ) { + return ( parseFloat( curCSS( elem, "marginLeft" ) ) || + elem.getBoundingClientRect().left - + swap( elem, { marginLeft: 0 }, function() { + return elem.getBoundingClientRect().left; + } ) + ) + "px"; + } + } +); + +// These hooks are used by animate to expand properties +jQuery.each( { + margin: "", + padding: "", + border: "Width" +}, function( prefix, suffix ) { + jQuery.cssHooks[ prefix + suffix ] = { + expand: function( value ) { + var i = 0, + expanded = {}, + + // Assumes a single number if not a string + parts = typeof value === "string" ? value.split( " " ) : [ value ]; + + for ( ; i < 4; i++ ) { + expanded[ prefix + cssExpand[ i ] + suffix ] = + parts[ i ] || parts[ i - 2 ] || parts[ 0 ]; + } + + return expanded; + } + }; + + if ( prefix !== "margin" ) { + jQuery.cssHooks[ prefix + suffix ].set = setPositiveNumber; + } +} ); + +jQuery.fn.extend( { + css: function( name, value ) { + return access( this, function( elem, name, value ) { + var styles, len, + map = {}, + i = 0; + + if ( Array.isArray( name ) ) { + styles = getStyles( elem ); + len = name.length; + + for ( ; i < len; i++ ) { + map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles ); + } + + return map; + } + + return value !== undefined ? + jQuery.style( elem, name, value ) : + jQuery.css( elem, name ); + }, name, value, arguments.length > 1 ); + } +} ); + + +function Tween( elem, options, prop, end, easing ) { + return new Tween.prototype.init( elem, options, prop, end, easing ); +} +jQuery.Tween = Tween; + +Tween.prototype = { + constructor: Tween, + init: function( elem, options, prop, end, easing, unit ) { + this.elem = elem; + this.prop = prop; + this.easing = easing || jQuery.easing._default; + this.options = options; + this.start = this.now = this.cur(); + this.end = end; + this.unit = unit || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ); + }, + cur: function() { + var hooks = Tween.propHooks[ this.prop ]; + + return hooks && hooks.get ? + hooks.get( this ) : + Tween.propHooks._default.get( this ); + }, + run: function( percent ) { + var eased, + hooks = Tween.propHooks[ this.prop ]; + + if ( this.options.duration ) { + this.pos = eased = jQuery.easing[ this.easing ]( + percent, this.options.duration * percent, 0, 1, this.options.duration + ); + } else { + this.pos = eased = percent; + } + this.now = ( this.end - this.start ) * eased + this.start; + + if ( this.options.step ) { + this.options.step.call( this.elem, this.now, this ); + } + + if ( hooks && hooks.set ) { + hooks.set( this ); + } else { + Tween.propHooks._default.set( this ); + } + return this; + } +}; + +Tween.prototype.init.prototype = Tween.prototype; + +Tween.propHooks = { + _default: { + get: function( tween ) { + var result; + + // Use a property on the element directly when it is not a DOM element, + // or when there is no matching style property that exists. + if ( tween.elem.nodeType !== 1 || + tween.elem[ tween.prop ] != null && tween.elem.style[ tween.prop ] == null ) { + return tween.elem[ tween.prop ]; + } + + // Passing an empty string as a 3rd parameter to .css will automatically + // attempt a parseFloat and fallback to a string if the parse fails. + // Simple values such as "10px" are parsed to Float; + // complex values such as "rotate(1rad)" are returned as-is. + result = jQuery.css( tween.elem, tween.prop, "" ); + + // Empty strings, null, undefined and "auto" are converted to 0. + return !result || result === "auto" ? 0 : result; + }, + set: function( tween ) { + + // Use step hook for back compat. + // Use cssHook if its there. + // Use .style if available and use plain properties where available. + if ( jQuery.fx.step[ tween.prop ] ) { + jQuery.fx.step[ tween.prop ]( tween ); + } else if ( tween.elem.nodeType === 1 && ( + jQuery.cssHooks[ tween.prop ] || + tween.elem.style[ finalPropName( tween.prop ) ] != null ) ) { + jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); + } else { + tween.elem[ tween.prop ] = tween.now; + } + } + } +}; + +// Support: IE <=9 only +// Panic based approach to setting things on disconnected nodes +Tween.propHooks.scrollTop = Tween.propHooks.scrollLeft = { + set: function( tween ) { + if ( tween.elem.nodeType && tween.elem.parentNode ) { + tween.elem[ tween.prop ] = tween.now; + } + } +}; + +jQuery.easing = { + linear: function( p ) { + return p; + }, + swing: function( p ) { + return 0.5 - Math.cos( p * Math.PI ) / 2; + }, + _default: "swing" +}; + +jQuery.fx = Tween.prototype.init; + +// Back compat <1.8 extension point +jQuery.fx.step = {}; + + + + +var + fxNow, inProgress, + rfxtypes = /^(?:toggle|show|hide)$/, + rrun = /queueHooks$/; + +function schedule() { + if ( inProgress ) { + if ( document.hidden === false && window.requestAnimationFrame ) { + window.requestAnimationFrame( schedule ); + } else { + window.setTimeout( schedule, jQuery.fx.interval ); + } + + jQuery.fx.tick(); + } +} + +// Animations created synchronously will run synchronously +function createFxNow() { + window.setTimeout( function() { + fxNow = undefined; + } ); + return ( fxNow = Date.now() ); +} + +// Generate parameters to create a standard animation +function genFx( type, includeWidth ) { + var which, + i = 0, + attrs = { height: type }; + + // If we include width, step value is 1 to do all cssExpand values, + // otherwise step value is 2 to skip over Left and Right + includeWidth = includeWidth ? 
1 : 0; + for ( ; i < 4; i += 2 - includeWidth ) { + which = cssExpand[ i ]; + attrs[ "margin" + which ] = attrs[ "padding" + which ] = type; + } + + if ( includeWidth ) { + attrs.opacity = attrs.width = type; + } + + return attrs; +} + +function createTween( value, prop, animation ) { + var tween, + collection = ( Animation.tweeners[ prop ] || [] ).concat( Animation.tweeners[ "*" ] ), + index = 0, + length = collection.length; + for ( ; index < length; index++ ) { + if ( ( tween = collection[ index ].call( animation, prop, value ) ) ) { + + // We're done with this property + return tween; + } + } +} + +function defaultPrefilter( elem, props, opts ) { + var prop, value, toggle, hooks, oldfire, propTween, restoreDisplay, display, + isBox = "width" in props || "height" in props, + anim = this, + orig = {}, + style = elem.style, + hidden = elem.nodeType && isHiddenWithinTree( elem ), + dataShow = dataPriv.get( elem, "fxshow" ); + + // Queue-skipping animations hijack the fx hooks + if ( !opts.queue ) { + hooks = jQuery._queueHooks( elem, "fx" ); + if ( hooks.unqueued == null ) { + hooks.unqueued = 0; + oldfire = hooks.empty.fire; + hooks.empty.fire = function() { + if ( !hooks.unqueued ) { + oldfire(); + } + }; + } + hooks.unqueued++; + + anim.always( function() { + + // Ensure the complete handler is called before this completes + anim.always( function() { + hooks.unqueued--; + if ( !jQuery.queue( elem, "fx" ).length ) { + hooks.empty.fire(); + } + } ); + } ); + } + + // Detect show/hide animations + for ( prop in props ) { + value = props[ prop ]; + if ( rfxtypes.test( value ) ) { + delete props[ prop ]; + toggle = toggle || value === "toggle"; + if ( value === ( hidden ? "hide" : "show" ) ) { + + // Pretend to be hidden if this is a "show" and + // there is still data from a stopped show/hide + if ( value === "show" && dataShow && dataShow[ prop ] !== undefined ) { + hidden = true; + + // Ignore all other no-op show/hide data + } else { + continue; + } + } + orig[ prop ] = dataShow && dataShow[ prop ] || jQuery.style( elem, prop ); + } + } + + // Bail out if this is a no-op like .hide().hide() + propTween = !jQuery.isEmptyObject( props ); + if ( !propTween && jQuery.isEmptyObject( orig ) ) { + return; + } + + // Restrict "overflow" and "display" styles during box animations + if ( isBox && elem.nodeType === 1 ) { + + // Support: IE <=9 - 11, Edge 12 - 15 + // Record all 3 overflow attributes because IE does not infer the shorthand + // from identically-valued overflowX and overflowY and Edge just mirrors + // the overflowX value there. 
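+ // Editor's illustrative note (not part of upstream jQuery): the three
+ // saved values are restored one by one by the anim.always() handler
+ // further below, after "overflow" has been forced to "hidden" so the
+ // box contents don't spill while width/height tween.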
+ opts.overflow = [ style.overflow, style.overflowX, style.overflowY ]; + + // Identify a display type, preferring old show/hide data over the CSS cascade + restoreDisplay = dataShow && dataShow.display; + if ( restoreDisplay == null ) { + restoreDisplay = dataPriv.get( elem, "display" ); + } + display = jQuery.css( elem, "display" ); + if ( display === "none" ) { + if ( restoreDisplay ) { + display = restoreDisplay; + } else { + + // Get nonempty value(s) by temporarily forcing visibility + showHide( [ elem ], true ); + restoreDisplay = elem.style.display || restoreDisplay; + display = jQuery.css( elem, "display" ); + showHide( [ elem ] ); + } + } + + // Animate inline elements as inline-block + if ( display === "inline" || display === "inline-block" && restoreDisplay != null ) { + if ( jQuery.css( elem, "float" ) === "none" ) { + + // Restore the original display value at the end of pure show/hide animations + if ( !propTween ) { + anim.done( function() { + style.display = restoreDisplay; + } ); + if ( restoreDisplay == null ) { + display = style.display; + restoreDisplay = display === "none" ? "" : display; + } + } + style.display = "inline-block"; + } + } + } + + if ( opts.overflow ) { + style.overflow = "hidden"; + anim.always( function() { + style.overflow = opts.overflow[ 0 ]; + style.overflowX = opts.overflow[ 1 ]; + style.overflowY = opts.overflow[ 2 ]; + } ); + } + + // Implement show/hide animations + propTween = false; + for ( prop in orig ) { + + // General show/hide setup for this element animation + if ( !propTween ) { + if ( dataShow ) { + if ( "hidden" in dataShow ) { + hidden = dataShow.hidden; + } + } else { + dataShow = dataPriv.access( elem, "fxshow", { display: restoreDisplay } ); + } + + // Store hidden/visible for toggle so `.stop().toggle()` "reverses" + if ( toggle ) { + dataShow.hidden = !hidden; + } + + // Show elements before animating them + if ( hidden ) { + showHide( [ elem ], true ); + } + + /* eslint-disable no-loop-func */ + + anim.done( function() { + + /* eslint-enable no-loop-func */ + + // The final step of a "hide" animation is actually hiding the element + if ( !hidden ) { + showHide( [ elem ] ); + } + dataPriv.remove( elem, "fxshow" ); + for ( prop in orig ) { + jQuery.style( elem, prop, orig[ prop ] ); + } + } ); + } + + // Per-property setup + propTween = createTween( hidden ? dataShow[ prop ] : 0, prop, anim ); + if ( !( prop in dataShow ) ) { + dataShow[ prop ] = propTween.start; + if ( hidden ) { + propTween.end = propTween.start; + propTween.start = 0; + } + } + } +} + +function propFilter( props, specialEasing ) { + var index, name, easing, value, hooks; + + // camelCase, specialEasing and expand cssHook pass + for ( index in props ) { + name = camelCase( index ); + easing = specialEasing[ name ]; + value = props[ index ]; + if ( Array.isArray( value ) ) { + easing = value[ 1 ]; + value = props[ index ] = value[ 0 ]; + } + + if ( index !== name ) { + props[ name ] = value; + delete props[ index ]; + } + + hooks = jQuery.cssHooks[ name ]; + if ( hooks && "expand" in hooks ) { + value = hooks.expand( value ); + delete props[ name ]; + + // Not quite $.extend, this won't overwrite existing keys. 
+ // Reusing 'index' because we have the correct "name" + for ( index in value ) { + if ( !( index in props ) ) { + props[ index ] = value[ index ]; + specialEasing[ index ] = easing; + } + } + } else { + specialEasing[ name ] = easing; + } + } +} + +function Animation( elem, properties, options ) { + var result, + stopped, + index = 0, + length = Animation.prefilters.length, + deferred = jQuery.Deferred().always( function() { + + // Don't match elem in the :animated selector + delete tick.elem; + } ), + tick = function() { + if ( stopped ) { + return false; + } + var currentTime = fxNow || createFxNow(), + remaining = Math.max( 0, animation.startTime + animation.duration - currentTime ), + + // Support: Android 2.3 only + // Archaic crash bug won't allow us to use `1 - ( 0.5 || 0 )` (#12497) + temp = remaining / animation.duration || 0, + percent = 1 - temp, + index = 0, + length = animation.tweens.length; + + for ( ; index < length; index++ ) { + animation.tweens[ index ].run( percent ); + } + + deferred.notifyWith( elem, [ animation, percent, remaining ] ); + + // If there's more to do, yield + if ( percent < 1 && length ) { + return remaining; + } + + // If this was an empty animation, synthesize a final progress notification + if ( !length ) { + deferred.notifyWith( elem, [ animation, 1, 0 ] ); + } + + // Resolve the animation and report its conclusion + deferred.resolveWith( elem, [ animation ] ); + return false; + }, + animation = deferred.promise( { + elem: elem, + props: jQuery.extend( {}, properties ), + opts: jQuery.extend( true, { + specialEasing: {}, + easing: jQuery.easing._default + }, options ), + originalProperties: properties, + originalOptions: options, + startTime: fxNow || createFxNow(), + duration: options.duration, + tweens: [], + createTween: function( prop, end ) { + var tween = jQuery.Tween( elem, animation.opts, prop, end, + animation.opts.specialEasing[ prop ] || animation.opts.easing ); + animation.tweens.push( tween ); + return tween; + }, + stop: function( gotoEnd ) { + var index = 0, + + // If we are going to the end, we want to run all the tweens + // otherwise we skip this part + length = gotoEnd ? 
animation.tweens.length : 0; + if ( stopped ) { + return this; + } + stopped = true; + for ( ; index < length; index++ ) { + animation.tweens[ index ].run( 1 ); + } + + // Resolve when we played the last frame; otherwise, reject + if ( gotoEnd ) { + deferred.notifyWith( elem, [ animation, 1, 0 ] ); + deferred.resolveWith( elem, [ animation, gotoEnd ] ); + } else { + deferred.rejectWith( elem, [ animation, gotoEnd ] ); + } + return this; + } + } ), + props = animation.props; + + propFilter( props, animation.opts.specialEasing ); + + for ( ; index < length; index++ ) { + result = Animation.prefilters[ index ].call( animation, elem, props, animation.opts ); + if ( result ) { + if ( isFunction( result.stop ) ) { + jQuery._queueHooks( animation.elem, animation.opts.queue ).stop = + result.stop.bind( result ); + } + return result; + } + } + + jQuery.map( props, createTween, animation ); + + if ( isFunction( animation.opts.start ) ) { + animation.opts.start.call( elem, animation ); + } + + // Attach callbacks from options + animation + .progress( animation.opts.progress ) + .done( animation.opts.done, animation.opts.complete ) + .fail( animation.opts.fail ) + .always( animation.opts.always ); + + jQuery.fx.timer( + jQuery.extend( tick, { + elem: elem, + anim: animation, + queue: animation.opts.queue + } ) + ); + + return animation; +} + +jQuery.Animation = jQuery.extend( Animation, { + + tweeners: { + "*": [ function( prop, value ) { + var tween = this.createTween( prop, value ); + adjustCSS( tween.elem, prop, rcssNum.exec( value ), tween ); + return tween; + } ] + }, + + tweener: function( props, callback ) { + if ( isFunction( props ) ) { + callback = props; + props = [ "*" ]; + } else { + props = props.match( rnothtmlwhite ); + } + + var prop, + index = 0, + length = props.length; + + for ( ; index < length; index++ ) { + prop = props[ index ]; + Animation.tweeners[ prop ] = Animation.tweeners[ prop ] || []; + Animation.tweeners[ prop ].unshift( callback ); + } + }, + + prefilters: [ defaultPrefilter ], + + prefilter: function( callback, prepend ) { + if ( prepend ) { + Animation.prefilters.unshift( callback ); + } else { + Animation.prefilters.push( callback ); + } + } +} ); + +jQuery.speed = function( speed, easing, fn ) { + var opt = speed && typeof speed === "object" ? 
jQuery.extend( {}, speed ) : { + complete: fn || !fn && easing || + isFunction( speed ) && speed, + duration: speed, + easing: fn && easing || easing && !isFunction( easing ) && easing + }; + + // Go to the end state if fx are off + if ( jQuery.fx.off ) { + opt.duration = 0; + + } else { + if ( typeof opt.duration !== "number" ) { + if ( opt.duration in jQuery.fx.speeds ) { + opt.duration = jQuery.fx.speeds[ opt.duration ]; + + } else { + opt.duration = jQuery.fx.speeds._default; + } + } + } + + // Normalize opt.queue - true/undefined/null -> "fx" + if ( opt.queue == null || opt.queue === true ) { + opt.queue = "fx"; + } + + // Queueing + opt.old = opt.complete; + + opt.complete = function() { + if ( isFunction( opt.old ) ) { + opt.old.call( this ); + } + + if ( opt.queue ) { + jQuery.dequeue( this, opt.queue ); + } + }; + + return opt; +}; + +jQuery.fn.extend( { + fadeTo: function( speed, to, easing, callback ) { + + // Show any hidden elements after setting opacity to 0 + return this.filter( isHiddenWithinTree ).css( "opacity", 0 ).show() + + // Animate to the value specified + .end().animate( { opacity: to }, speed, easing, callback ); + }, + animate: function( prop, speed, easing, callback ) { + var empty = jQuery.isEmptyObject( prop ), + optall = jQuery.speed( speed, easing, callback ), + doAnimation = function() { + + // Operate on a copy of prop so per-property easing won't be lost + var anim = Animation( this, jQuery.extend( {}, prop ), optall ); + + // Empty animations, or finishing resolves immediately + if ( empty || dataPriv.get( this, "finish" ) ) { + anim.stop( true ); + } + }; + + doAnimation.finish = doAnimation; + + return empty || optall.queue === false ? + this.each( doAnimation ) : + this.queue( optall.queue, doAnimation ); + }, + stop: function( type, clearQueue, gotoEnd ) { + var stopQueue = function( hooks ) { + var stop = hooks.stop; + delete hooks.stop; + stop( gotoEnd ); + }; + + if ( typeof type !== "string" ) { + gotoEnd = clearQueue; + clearQueue = type; + type = undefined; + } + if ( clearQueue ) { + this.queue( type || "fx", [] ); + } + + return this.each( function() { + var dequeue = true, + index = type != null && type + "queueHooks", + timers = jQuery.timers, + data = dataPriv.get( this ); + + if ( index ) { + if ( data[ index ] && data[ index ].stop ) { + stopQueue( data[ index ] ); + } + } else { + for ( index in data ) { + if ( data[ index ] && data[ index ].stop && rrun.test( index ) ) { + stopQueue( data[ index ] ); + } + } + } + + for ( index = timers.length; index--; ) { + if ( timers[ index ].elem === this && + ( type == null || timers[ index ].queue === type ) ) { + + timers[ index ].anim.stop( gotoEnd ); + dequeue = false; + timers.splice( index, 1 ); + } + } + + // Start the next in the queue if the last step wasn't forced. + // Timers currently will call their complete callbacks, which + // will dequeue but only if they were gotoEnd. + if ( dequeue || !gotoEnd ) { + jQuery.dequeue( this, type ); + } + } ); + }, + finish: function( type ) { + if ( type !== false ) { + type = type || "fx"; + } + return this.each( function() { + var index, + data = dataPriv.get( this ), + queue = data[ type + "queue" ], + hooks = data[ type + "queueHooks" ], + timers = jQuery.timers, + length = queue ? 
queue.length : 0; + + // Enable finishing flag on private data + data.finish = true; + + // Empty the queue first + jQuery.queue( this, type, [] ); + + if ( hooks && hooks.stop ) { + hooks.stop.call( this, true ); + } + + // Look for any active animations, and finish them + for ( index = timers.length; index--; ) { + if ( timers[ index ].elem === this && timers[ index ].queue === type ) { + timers[ index ].anim.stop( true ); + timers.splice( index, 1 ); + } + } + + // Look for any animations in the old queue and finish them + for ( index = 0; index < length; index++ ) { + if ( queue[ index ] && queue[ index ].finish ) { + queue[ index ].finish.call( this ); + } + } + + // Turn off finishing flag + delete data.finish; + } ); + } +} ); + +jQuery.each( [ "toggle", "show", "hide" ], function( _i, name ) { + var cssFn = jQuery.fn[ name ]; + jQuery.fn[ name ] = function( speed, easing, callback ) { + return speed == null || typeof speed === "boolean" ? + cssFn.apply( this, arguments ) : + this.animate( genFx( name, true ), speed, easing, callback ); + }; +} ); + +// Generate shortcuts for custom animations +jQuery.each( { + slideDown: genFx( "show" ), + slideUp: genFx( "hide" ), + slideToggle: genFx( "toggle" ), + fadeIn: { opacity: "show" }, + fadeOut: { opacity: "hide" }, + fadeToggle: { opacity: "toggle" } +}, function( name, props ) { + jQuery.fn[ name ] = function( speed, easing, callback ) { + return this.animate( props, speed, easing, callback ); + }; +} ); + +jQuery.timers = []; +jQuery.fx.tick = function() { + var timer, + i = 0, + timers = jQuery.timers; + + fxNow = Date.now(); + + for ( ; i < timers.length; i++ ) { + timer = timers[ i ]; + + // Run the timer and safely remove it when done (allowing for external removal) + if ( !timer() && timers[ i ] === timer ) { + timers.splice( i--, 1 ); + } + } + + if ( !timers.length ) { + jQuery.fx.stop(); + } + fxNow = undefined; +}; + +jQuery.fx.timer = function( timer ) { + jQuery.timers.push( timer ); + jQuery.fx.start(); +}; + +jQuery.fx.interval = 13; +jQuery.fx.start = function() { + if ( inProgress ) { + return; + } + + inProgress = true; + schedule(); +}; + +jQuery.fx.stop = function() { + inProgress = null; +}; + +jQuery.fx.speeds = { + slow: 600, + fast: 200, + + // Default speed + _default: 400 +}; + + +// Based off of the plugin by Clint Helfers, with permission. +// https://web.archive.org/web/20100324014747/http://blindsignals.com/index.php/2009/07/jquery-delay/ +jQuery.fn.delay = function( time, type ) { + time = jQuery.fx ? 
jQuery.fx.speeds[ time ] || time : time; + type = type || "fx"; + + return this.queue( type, function( next, hooks ) { + var timeout = window.setTimeout( next, time ); + hooks.stop = function() { + window.clearTimeout( timeout ); + }; + } ); +}; + + +( function() { + var input = document.createElement( "input" ), + select = document.createElement( "select" ), + opt = select.appendChild( document.createElement( "option" ) ); + + input.type = "checkbox"; + + // Support: Android <=4.3 only + // Default value for a checkbox should be "on" + support.checkOn = input.value !== ""; + + // Support: IE <=11 only + // Must access selectedIndex to make default options select + support.optSelected = opt.selected; + + // Support: IE <=11 only + // An input loses its value after becoming a radio + input = document.createElement( "input" ); + input.value = "t"; + input.type = "radio"; + support.radioValue = input.value === "t"; +} )(); + + +var boolHook, + attrHandle = jQuery.expr.attrHandle; + +jQuery.fn.extend( { + attr: function( name, value ) { + return access( this, jQuery.attr, name, value, arguments.length > 1 ); + }, + + removeAttr: function( name ) { + return this.each( function() { + jQuery.removeAttr( this, name ); + } ); + } +} ); + +jQuery.extend( { + attr: function( elem, name, value ) { + var ret, hooks, + nType = elem.nodeType; + + // Don't get/set attributes on text, comment and attribute nodes + if ( nType === 3 || nType === 8 || nType === 2 ) { + return; + } + + // Fallback to prop when attributes are not supported + if ( typeof elem.getAttribute === "undefined" ) { + return jQuery.prop( elem, name, value ); + } + + // Attribute hooks are determined by the lowercase version + // Grab necessary hook if one is defined + if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { + hooks = jQuery.attrHooks[ name.toLowerCase() ] || + ( jQuery.expr.match.bool.test( name ) ? boolHook : undefined ); + } + + if ( value !== undefined ) { + if ( value === null ) { + jQuery.removeAttr( elem, name ); + return; + } + + if ( hooks && "set" in hooks && + ( ret = hooks.set( elem, value, name ) ) !== undefined ) { + return ret; + } + + elem.setAttribute( name, value + "" ); + return value; + } + + if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { + return ret; + } + + ret = jQuery.find.attr( elem, name ); + + // Non-existent attributes return null, we normalize to undefined + return ret == null ? 
undefined : ret; + }, + + attrHooks: { + type: { + set: function( elem, value ) { + if ( !support.radioValue && value === "radio" && + nodeName( elem, "input" ) ) { + var val = elem.value; + elem.setAttribute( "type", value ); + if ( val ) { + elem.value = val; + } + return value; + } + } + } + }, + + removeAttr: function( elem, value ) { + var name, + i = 0, + + // Attribute names can contain non-HTML whitespace characters + // https://html.spec.whatwg.org/multipage/syntax.html#attributes-2 + attrNames = value && value.match( rnothtmlwhite ); + + if ( attrNames && elem.nodeType === 1 ) { + while ( ( name = attrNames[ i++ ] ) ) { + elem.removeAttribute( name ); + } + } + } +} ); + +// Hooks for boolean attributes +boolHook = { + set: function( elem, value, name ) { + if ( value === false ) { + + // Remove boolean attributes when set to false + jQuery.removeAttr( elem, name ); + } else { + elem.setAttribute( name, name ); + } + return name; + } +}; + +jQuery.each( jQuery.expr.match.bool.source.match( /\w+/g ), function( _i, name ) { + var getter = attrHandle[ name ] || jQuery.find.attr; + + attrHandle[ name ] = function( elem, name, isXML ) { + var ret, handle, + lowercaseName = name.toLowerCase(); + + if ( !isXML ) { + + // Avoid an infinite loop by temporarily removing this function from the getter + handle = attrHandle[ lowercaseName ]; + attrHandle[ lowercaseName ] = ret; + ret = getter( elem, name, isXML ) != null ? + lowercaseName : + null; + attrHandle[ lowercaseName ] = handle; + } + return ret; + }; +} ); + + + + +var rfocusable = /^(?:input|select|textarea|button)$/i, + rclickable = /^(?:a|area)$/i; + +jQuery.fn.extend( { + prop: function( name, value ) { + return access( this, jQuery.prop, name, value, arguments.length > 1 ); + }, + + removeProp: function( name ) { + return this.each( function() { + delete this[ jQuery.propFix[ name ] || name ]; + } ); + } +} ); + +jQuery.extend( { + prop: function( elem, name, value ) { + var ret, hooks, + nType = elem.nodeType; + + // Don't get/set properties on text, comment and attribute nodes + if ( nType === 3 || nType === 8 || nType === 2 ) { + return; + } + + if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { + + // Fix name and attach hooks + name = jQuery.propFix[ name ] || name; + hooks = jQuery.propHooks[ name ]; + } + + if ( value !== undefined ) { + if ( hooks && "set" in hooks && + ( ret = hooks.set( elem, value, name ) ) !== undefined ) { + return ret; + } + + return ( elem[ name ] = value ); + } + + if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { + return ret; + } + + return elem[ name ]; + }, + + propHooks: { + tabIndex: { + get: function( elem ) { + + // Support: IE <=9 - 11 only + // elem.tabIndex doesn't always return the + // correct value when it hasn't been explicitly set + // https://web.archive.org/web/20141116233347/http://fluidproject.org/blog/2008/01/09/getting-setting-and-removing-tabindex-values-with-javascript/ + // Use proper attribute retrieval(#12072) + var tabindex = jQuery.find.attr( elem, "tabindex" ); + + if ( tabindex ) { + return parseInt( tabindex, 10 ); + } + + if ( + rfocusable.test( elem.nodeName ) || + rclickable.test( elem.nodeName ) && + elem.href + ) { + return 0; + } + + return -1; + } + } + }, + + propFix: { + "for": "htmlFor", + "class": "className" + } +} ); + +// Support: IE <=11 only +// Accessing the selectedIndex property +// forces the browser to respect setting selected +// on the option +// The getter ensures a default option is selected +// when in an 
optgroup +// eslint rule "no-unused-expressions" is disabled for this code +// since it considers such accessions noop +if ( !support.optSelected ) { + jQuery.propHooks.selected = { + get: function( elem ) { + + /* eslint no-unused-expressions: "off" */ + + var parent = elem.parentNode; + if ( parent && parent.parentNode ) { + parent.parentNode.selectedIndex; + } + return null; + }, + set: function( elem ) { + + /* eslint no-unused-expressions: "off" */ + + var parent = elem.parentNode; + if ( parent ) { + parent.selectedIndex; + + if ( parent.parentNode ) { + parent.parentNode.selectedIndex; + } + } + } + }; +} + +jQuery.each( [ + "tabIndex", + "readOnly", + "maxLength", + "cellSpacing", + "cellPadding", + "rowSpan", + "colSpan", + "useMap", + "frameBorder", + "contentEditable" +], function() { + jQuery.propFix[ this.toLowerCase() ] = this; +} ); + + + + + // Strip and collapse whitespace according to HTML spec + // https://infra.spec.whatwg.org/#strip-and-collapse-ascii-whitespace + function stripAndCollapse( value ) { + var tokens = value.match( rnothtmlwhite ) || []; + return tokens.join( " " ); + } + + +function getClass( elem ) { + return elem.getAttribute && elem.getAttribute( "class" ) || ""; +} + +function classesToArray( value ) { + if ( Array.isArray( value ) ) { + return value; + } + if ( typeof value === "string" ) { + return value.match( rnothtmlwhite ) || []; + } + return []; +} + +jQuery.fn.extend( { + addClass: function( value ) { + var classes, elem, cur, curValue, clazz, j, finalValue, + i = 0; + + if ( isFunction( value ) ) { + return this.each( function( j ) { + jQuery( this ).addClass( value.call( this, j, getClass( this ) ) ); + } ); + } + + classes = classesToArray( value ); + + if ( classes.length ) { + while ( ( elem = this[ i++ ] ) ) { + curValue = getClass( elem ); + cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); + + if ( cur ) { + j = 0; + while ( ( clazz = classes[ j++ ] ) ) { + if ( cur.indexOf( " " + clazz + " " ) < 0 ) { + cur += clazz + " "; + } + } + + // Only assign if different to avoid unneeded rendering. + finalValue = stripAndCollapse( cur ); + if ( curValue !== finalValue ) { + elem.setAttribute( "class", finalValue ); + } + } + } + } + + return this; + }, + + removeClass: function( value ) { + var classes, elem, cur, curValue, clazz, j, finalValue, + i = 0; + + if ( isFunction( value ) ) { + return this.each( function( j ) { + jQuery( this ).removeClass( value.call( this, j, getClass( this ) ) ); + } ); + } + + if ( !arguments.length ) { + return this.attr( "class", "" ); + } + + classes = classesToArray( value ); + + if ( classes.length ) { + while ( ( elem = this[ i++ ] ) ) { + curValue = getClass( elem ); + + // This expression is here for better compressibility (see addClass) + cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); + + if ( cur ) { + j = 0; + while ( ( clazz = classes[ j++ ] ) ) { + + // Remove *all* instances + while ( cur.indexOf( " " + clazz + " " ) > -1 ) { + cur = cur.replace( " " + clazz + " ", " " ); + } + } + + // Only assign if different to avoid unneeded rendering. + finalValue = stripAndCollapse( cur ); + if ( curValue !== finalValue ) { + elem.setAttribute( "class", finalValue ); + } + } + } + } + + return this; + }, + + toggleClass: function( value, stateVal ) { + var type = typeof value, + isValidValue = type === "string" || Array.isArray( value ); + + if ( typeof stateVal === "boolean" && isValidValue ) { + return stateVal ? 
this.addClass( value ) : this.removeClass( value ); + } + + if ( isFunction( value ) ) { + return this.each( function( i ) { + jQuery( this ).toggleClass( + value.call( this, i, getClass( this ), stateVal ), + stateVal + ); + } ); + } + + return this.each( function() { + var className, i, self, classNames; + + if ( isValidValue ) { + + // Toggle individual class names + i = 0; + self = jQuery( this ); + classNames = classesToArray( value ); + + while ( ( className = classNames[ i++ ] ) ) { + + // Check each className given, space separated list + if ( self.hasClass( className ) ) { + self.removeClass( className ); + } else { + self.addClass( className ); + } + } + + // Toggle whole class name + } else if ( value === undefined || type === "boolean" ) { + className = getClass( this ); + if ( className ) { + + // Store className if set + dataPriv.set( this, "__className__", className ); + } + + // If the element has a class name or if we're passed `false`, + // then remove the whole classname (if there was one, the above saved it). + // Otherwise bring back whatever was previously saved (if anything), + // falling back to the empty string if nothing was stored. + if ( this.setAttribute ) { + this.setAttribute( "class", + className || value === false ? + "" : + dataPriv.get( this, "__className__" ) || "" + ); + } + } + } ); + }, + + hasClass: function( selector ) { + var className, elem, + i = 0; + + className = " " + selector + " "; + while ( ( elem = this[ i++ ] ) ) { + if ( elem.nodeType === 1 && + ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { + return true; + } + } + + return false; + } +} ); + + + + +var rreturn = /\r/g; + +jQuery.fn.extend( { + val: function( value ) { + var hooks, ret, valueIsFunction, + elem = this[ 0 ]; + + if ( !arguments.length ) { + if ( elem ) { + hooks = jQuery.valHooks[ elem.type ] || + jQuery.valHooks[ elem.nodeName.toLowerCase() ]; + + if ( hooks && + "get" in hooks && + ( ret = hooks.get( elem, "value" ) ) !== undefined + ) { + return ret; + } + + ret = elem.value; + + // Handle most common string cases + if ( typeof ret === "string" ) { + return ret.replace( rreturn, "" ); + } + + // Handle cases where value is null/undef or number + return ret == null ? "" : ret; + } + + return; + } + + valueIsFunction = isFunction( value ); + + return this.each( function( i ) { + var val; + + if ( this.nodeType !== 1 ) { + return; + } + + if ( valueIsFunction ) { + val = value.call( this, i, jQuery( this ).val() ); + } else { + val = value; + } + + // Treat null/undefined as ""; convert numbers to string + if ( val == null ) { + val = ""; + + } else if ( typeof val === "number" ) { + val += ""; + + } else if ( Array.isArray( val ) ) { + val = jQuery.map( val, function( value ) { + return value == null ? "" : value + ""; + } ); + } + + hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; + + // If set returns undefined, fall back to normal setting + if ( !hooks || !( "set" in hooks ) || hooks.set( this, val, "value" ) === undefined ) { + this.value = val; + } + } ); + } +} ); + +jQuery.extend( { + valHooks: { + option: { + get: function( elem ) { + + var val = jQuery.find.attr( elem, "value" ); + return val != null ? 
+ val : + + // Support: IE <=10 - 11 only + // option.text throws exceptions (#14686, #14858) + // Strip and collapse whitespace + // https://html.spec.whatwg.org/#strip-and-collapse-whitespace + stripAndCollapse( jQuery.text( elem ) ); + } + }, + select: { + get: function( elem ) { + var value, option, i, + options = elem.options, + index = elem.selectedIndex, + one = elem.type === "select-one", + values = one ? null : [], + max = one ? index + 1 : options.length; + + if ( index < 0 ) { + i = max; + + } else { + i = one ? index : 0; + } + + // Loop through all the selected options + for ( ; i < max; i++ ) { + option = options[ i ]; + + // Support: IE <=9 only + // IE8-9 doesn't update selected after form reset (#2551) + if ( ( option.selected || i === index ) && + + // Don't return options that are disabled or in a disabled optgroup + !option.disabled && + ( !option.parentNode.disabled || + !nodeName( option.parentNode, "optgroup" ) ) ) { + + // Get the specific value for the option + value = jQuery( option ).val(); + + // We don't need an array for one selects + if ( one ) { + return value; + } + + // Multi-Selects return an array + values.push( value ); + } + } + + return values; + }, + + set: function( elem, value ) { + var optionSet, option, + options = elem.options, + values = jQuery.makeArray( value ), + i = options.length; + + while ( i-- ) { + option = options[ i ]; + + /* eslint-disable no-cond-assign */ + + if ( option.selected = + jQuery.inArray( jQuery.valHooks.option.get( option ), values ) > -1 + ) { + optionSet = true; + } + + /* eslint-enable no-cond-assign */ + } + + // Force browsers to behave consistently when non-matching value is set + if ( !optionSet ) { + elem.selectedIndex = -1; + } + return values; + } + } + } +} ); + +// Radios and checkboxes getter/setter +jQuery.each( [ "radio", "checkbox" ], function() { + jQuery.valHooks[ this ] = { + set: function( elem, value ) { + if ( Array.isArray( value ) ) { + return ( elem.checked = jQuery.inArray( jQuery( elem ).val(), value ) > -1 ); + } + } + }; + if ( !support.checkOn ) { + jQuery.valHooks[ this ].get = function( elem ) { + return elem.getAttribute( "value" ) === null ? "on" : elem.value; + }; + } +} ); + + + + +// Return jQuery for attributes-only inclusion + + +support.focusin = "onfocusin" in window; + + +var rfocusMorph = /^(?:focusinfocus|focusoutblur)$/, + stopPropagationCallback = function( e ) { + e.stopPropagation(); + }; + +jQuery.extend( jQuery.event, { + + trigger: function( event, data, elem, onlyHandlers ) { + + var i, cur, tmp, bubbleType, ontype, handle, special, lastElement, + eventPath = [ elem || document ], + type = hasOwn.call( event, "type" ) ? event.type : event, + namespaces = hasOwn.call( event, "namespace" ) ? event.namespace.split( "." ) : []; + + cur = lastElement = tmp = elem = elem || document; + + // Don't do events on text and comment nodes + if ( elem.nodeType === 3 || elem.nodeType === 8 ) { + return; + } + + // focus/blur morphs to focusin/out; ensure we're not firing them right now + if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { + return; + } + + if ( type.indexOf( "." ) > -1 ) { + + // Namespaced trigger; create a regexp to match event type in handle() + namespaces = type.split( "." ); + type = namespaces.shift(); + namespaces.sort(); + } + ontype = type.indexOf( ":" ) < 0 && "on" + type; + + // Caller can pass in a jQuery.Event object, Object, or just an event type string + event = event[ jQuery.expando ] ? 
+ event : + new jQuery.Event( type, typeof event === "object" && event ); + + // Trigger bitmask: & 1 for native handlers; & 2 for jQuery (always true) + event.isTrigger = onlyHandlers ? 2 : 3; + event.namespace = namespaces.join( "." ); + event.rnamespace = event.namespace ? + new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ) : + null; + + // Clean up the event in case it is being reused + event.result = undefined; + if ( !event.target ) { + event.target = elem; + } + + // Clone any incoming data and prepend the event, creating the handler arg list + data = data == null ? + [ event ] : + jQuery.makeArray( data, [ event ] ); + + // Allow special events to draw outside the lines + special = jQuery.event.special[ type ] || {}; + if ( !onlyHandlers && special.trigger && special.trigger.apply( elem, data ) === false ) { + return; + } + + // Determine event propagation path in advance, per W3C events spec (#9951) + // Bubble up to document, then to window; watch for a global ownerDocument var (#9724) + if ( !onlyHandlers && !special.noBubble && !isWindow( elem ) ) { + + bubbleType = special.delegateType || type; + if ( !rfocusMorph.test( bubbleType + type ) ) { + cur = cur.parentNode; + } + for ( ; cur; cur = cur.parentNode ) { + eventPath.push( cur ); + tmp = cur; + } + + // Only add window if we got to document (e.g., not plain obj or detached DOM) + if ( tmp === ( elem.ownerDocument || document ) ) { + eventPath.push( tmp.defaultView || tmp.parentWindow || window ); + } + } + + // Fire handlers on the event path + i = 0; + while ( ( cur = eventPath[ i++ ] ) && !event.isPropagationStopped() ) { + lastElement = cur; + event.type = i > 1 ? + bubbleType : + special.bindType || type; + + // jQuery handler + handle = ( dataPriv.get( cur, "events" ) || Object.create( null ) )[ event.type ] && + dataPriv.get( cur, "handle" ); + if ( handle ) { + handle.apply( cur, data ); + } + + // Native handler + handle = ontype && cur[ ontype ]; + if ( handle && handle.apply && acceptData( cur ) ) { + event.result = handle.apply( cur, data ); + if ( event.result === false ) { + event.preventDefault(); + } + } + } + event.type = type; + + // If nobody prevented the default action, do it now + if ( !onlyHandlers && !event.isDefaultPrevented() ) { + + if ( ( !special._default || + special._default.apply( eventPath.pop(), data ) === false ) && + acceptData( elem ) ) { + + // Call a native DOM method on the target with the same name as the event. 
+ // Don't do default actions on window, that's where global variables be (#6170) + if ( ontype && isFunction( elem[ type ] ) && !isWindow( elem ) ) { + + // Don't re-trigger an onFOO event when we call its FOO() method + tmp = elem[ ontype ]; + + if ( tmp ) { + elem[ ontype ] = null; + } + + // Prevent re-triggering of the same event, since we already bubbled it above + jQuery.event.triggered = type; + + if ( event.isPropagationStopped() ) { + lastElement.addEventListener( type, stopPropagationCallback ); + } + + elem[ type ](); + + if ( event.isPropagationStopped() ) { + lastElement.removeEventListener( type, stopPropagationCallback ); + } + + jQuery.event.triggered = undefined; + + if ( tmp ) { + elem[ ontype ] = tmp; + } + } + } + } + + return event.result; + }, + + // Piggyback on a donor event to simulate a different one + // Used only for `focus(in | out)` events + simulate: function( type, elem, event ) { + var e = jQuery.extend( + new jQuery.Event(), + event, + { + type: type, + isSimulated: true + } + ); + + jQuery.event.trigger( e, null, elem ); + } + +} ); + +jQuery.fn.extend( { + + trigger: function( type, data ) { + return this.each( function() { + jQuery.event.trigger( type, data, this ); + } ); + }, + triggerHandler: function( type, data ) { + var elem = this[ 0 ]; + if ( elem ) { + return jQuery.event.trigger( type, data, elem, true ); + } + } +} ); + + +// Support: Firefox <=44 +// Firefox doesn't have focus(in | out) events +// Related ticket - https://bugzilla.mozilla.org/show_bug.cgi?id=687787 +// +// Support: Chrome <=48 - 49, Safari <=9.0 - 9.1 +// focus(in | out) events fire after focus & blur events, +// which is spec violation - http://www.w3.org/TR/DOM-Level-3-Events/#events-focusevent-event-order +// Related ticket - https://bugs.chromium.org/p/chromium/issues/detail?id=449857 +if ( !support.focusin ) { + jQuery.each( { focus: "focusin", blur: "focusout" }, function( orig, fix ) { + + // Attach a single capturing handler on the document while someone wants focusin/focusout + var handler = function( event ) { + jQuery.event.simulate( fix, event.target, jQuery.event.fix( event ) ); + }; + + jQuery.event.special[ fix ] = { + setup: function() { + + // Handle: regular nodes (via `this.ownerDocument`), window + // (via `this.document`) & document (via `this`). + var doc = this.ownerDocument || this.document || this, + attaches = dataPriv.access( doc, fix ); + + if ( !attaches ) { + doc.addEventListener( orig, handler, true ); + } + dataPriv.access( doc, fix, ( attaches || 0 ) + 1 ); + }, + teardown: function() { + var doc = this.ownerDocument || this.document || this, + attaches = dataPriv.access( doc, fix ) - 1; + + if ( !attaches ) { + doc.removeEventListener( orig, handler, true ); + dataPriv.remove( doc, fix ); + + } else { + dataPriv.access( doc, fix, attaches ); + } + } + }; + } ); +} +var location = window.location; + +var nonce = { guid: Date.now() }; + +var rquery = ( /\?/ ); + + + +// Cross-browser xml parsing +jQuery.parseXML = function( data ) { + var xml, parserErrorElem; + if ( !data || typeof data !== "string" ) { + return null; + } + + // Support: IE 9 - 11 only + // IE throws on parseFromString with invalid input. + try { + xml = ( new window.DOMParser() ).parseFromString( data, "text/xml" ); + } catch ( e ) {} + + parserErrorElem = xml && xml.getElementsByTagName( "parsererror" )[ 0 ]; + if ( !xml || parserErrorElem ) { + jQuery.error( "Invalid XML: " + ( + parserErrorElem ? 
+ jQuery.map( parserErrorElem.childNodes, function( el ) { + return el.textContent; + } ).join( "\n" ) : + data + ) ); + } + return xml; +}; + + +var + rbracket = /\[\]$/, + rCRLF = /\r?\n/g, + rsubmitterTypes = /^(?:submit|button|image|reset|file)$/i, + rsubmittable = /^(?:input|select|textarea|keygen)/i; + +function buildParams( prefix, obj, traditional, add ) { + var name; + + if ( Array.isArray( obj ) ) { + + // Serialize array item. + jQuery.each( obj, function( i, v ) { + if ( traditional || rbracket.test( prefix ) ) { + + // Treat each array item as a scalar. + add( prefix, v ); + + } else { + + // Item is non-scalar (array or object), encode its numeric index. + buildParams( + prefix + "[" + ( typeof v === "object" && v != null ? i : "" ) + "]", + v, + traditional, + add + ); + } + } ); + + } else if ( !traditional && toType( obj ) === "object" ) { + + // Serialize object item. + for ( name in obj ) { + buildParams( prefix + "[" + name + "]", obj[ name ], traditional, add ); + } + + } else { + + // Serialize scalar item. + add( prefix, obj ); + } +} + +// Serialize an array of form elements or a set of +// key/values into a query string +jQuery.param = function( a, traditional ) { + var prefix, + s = [], + add = function( key, valueOrFunction ) { + + // If value is a function, invoke it and use its return value + var value = isFunction( valueOrFunction ) ? + valueOrFunction() : + valueOrFunction; + + s[ s.length ] = encodeURIComponent( key ) + "=" + + encodeURIComponent( value == null ? "" : value ); + }; + + if ( a == null ) { + return ""; + } + + // If an array was passed in, assume that it is an array of form elements. + if ( Array.isArray( a ) || ( a.jquery && !jQuery.isPlainObject( a ) ) ) { + + // Serialize the form elements + jQuery.each( a, function() { + add( this.name, this.value ); + } ); + + } else { + + // If traditional, encode the "old" way (the way 1.3.2 or older + // did it), otherwise encode params recursively. + for ( prefix in a ) { + buildParams( prefix, a[ prefix ], traditional, add ); + } + } + + // Return the resulting serialization + return s.join( "&" ); +}; + +jQuery.fn.extend( { + serialize: function() { + return jQuery.param( this.serializeArray() ); + }, + serializeArray: function() { + return this.map( function() { + + // Can add propHook for "elements" to filter or add form elements + var elements = jQuery.prop( this, "elements" ); + return elements ? 
jQuery.makeArray( elements ) : this; + } ).filter( function() { + var type = this.type; + + // Use .is( ":disabled" ) so that fieldset[disabled] works + return this.name && !jQuery( this ).is( ":disabled" ) && + rsubmittable.test( this.nodeName ) && !rsubmitterTypes.test( type ) && + ( this.checked || !rcheckableType.test( type ) ); + } ).map( function( _i, elem ) { + var val = jQuery( this ).val(); + + if ( val == null ) { + return null; + } + + if ( Array.isArray( val ) ) { + return jQuery.map( val, function( val ) { + return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; + } ); + } + + return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; + } ).get(); + } +} ); + + +var + r20 = /%20/g, + rhash = /#.*$/, + rantiCache = /([?&])_=[^&]*/, + rheaders = /^(.*?):[ \t]*([^\r\n]*)$/mg, + + // #7653, #8125, #8152: local protocol detection + rlocalProtocol = /^(?:about|app|app-storage|.+-extension|file|res|widget):$/, + rnoContent = /^(?:GET|HEAD)$/, + rprotocol = /^\/\//, + + /* Prefilters + * 1) They are useful to introduce custom dataTypes (see ajax/jsonp.js for an example) + * 2) These are called: + * - BEFORE asking for a transport + * - AFTER param serialization (s.data is a string if s.processData is true) + * 3) key is the dataType + * 4) the catchall symbol "*" can be used + * 5) execution will start with transport dataType and THEN continue down to "*" if needed + */ + prefilters = {}, + + /* Transports bindings + * 1) key is the dataType + * 2) the catchall symbol "*" can be used + * 3) selection will start with transport dataType and THEN go to "*" if needed + */ + transports = {}, + + // Avoid comment-prolog char sequence (#10098); must appease lint and evade compression + allTypes = "*/".concat( "*" ), + + // Anchor tag for parsing the document origin + originAnchor = document.createElement( "a" ); + +originAnchor.href = location.href; + +// Base "constructor" for jQuery.ajaxPrefilter and jQuery.ajaxTransport +function addToPrefiltersOrTransports( structure ) { + + // dataTypeExpression is optional and defaults to "*" + return function( dataTypeExpression, func ) { + + if ( typeof dataTypeExpression !== "string" ) { + func = dataTypeExpression; + dataTypeExpression = "*"; + } + + var dataType, + i = 0, + dataTypes = dataTypeExpression.toLowerCase().match( rnothtmlwhite ) || []; + + if ( isFunction( func ) ) { + + // For each dataType in the dataTypeExpression + while ( ( dataType = dataTypes[ i++ ] ) ) { + + // Prepend if requested + if ( dataType[ 0 ] === "+" ) { + dataType = dataType.slice( 1 ) || "*"; + ( structure[ dataType ] = structure[ dataType ] || [] ).unshift( func ); + + // Otherwise append + } else { + ( structure[ dataType ] = structure[ dataType ] || [] ).push( func ); + } + } + } + }; +} + +// Base inspection function for prefilters and transports +function inspectPrefiltersOrTransports( structure, options, originalOptions, jqXHR ) { + + var inspected = {}, + seekingTransport = ( structure === transports ); + + function inspect( dataType ) { + var selected; + inspected[ dataType ] = true; + jQuery.each( structure[ dataType ] || [], function( _, prefilterOrFactory ) { + var dataTypeOrTransport = prefilterOrFactory( options, originalOptions, jqXHR ); + if ( typeof dataTypeOrTransport === "string" && + !seekingTransport && !inspected[ dataTypeOrTransport ] ) { + + options.dataTypes.unshift( dataTypeOrTransport ); + inspect( dataTypeOrTransport ); + return false; + } else if ( seekingTransport ) { + return !( selected = dataTypeOrTransport ); + } + } 
); + return selected; + } + + return inspect( options.dataTypes[ 0 ] ) || !inspected[ "*" ] && inspect( "*" ); +} + +// A special extend for ajax options +// that takes "flat" options (not to be deep extended) +// Fixes #9887 +function ajaxExtend( target, src ) { + var key, deep, + flatOptions = jQuery.ajaxSettings.flatOptions || {}; + + for ( key in src ) { + if ( src[ key ] !== undefined ) { + ( flatOptions[ key ] ? target : ( deep || ( deep = {} ) ) )[ key ] = src[ key ]; + } + } + if ( deep ) { + jQuery.extend( true, target, deep ); + } + + return target; +} + +/* Handles responses to an ajax request: + * - finds the right dataType (mediates between content-type and expected dataType) + * - returns the corresponding response + */ +function ajaxHandleResponses( s, jqXHR, responses ) { + + var ct, type, finalDataType, firstDataType, + contents = s.contents, + dataTypes = s.dataTypes; + + // Remove auto dataType and get content-type in the process + while ( dataTypes[ 0 ] === "*" ) { + dataTypes.shift(); + if ( ct === undefined ) { + ct = s.mimeType || jqXHR.getResponseHeader( "Content-Type" ); + } + } + + // Check if we're dealing with a known content-type + if ( ct ) { + for ( type in contents ) { + if ( contents[ type ] && contents[ type ].test( ct ) ) { + dataTypes.unshift( type ); + break; + } + } + } + + // Check to see if we have a response for the expected dataType + if ( dataTypes[ 0 ] in responses ) { + finalDataType = dataTypes[ 0 ]; + } else { + + // Try convertible dataTypes + for ( type in responses ) { + if ( !dataTypes[ 0 ] || s.converters[ type + " " + dataTypes[ 0 ] ] ) { + finalDataType = type; + break; + } + if ( !firstDataType ) { + firstDataType = type; + } + } + + // Or just use first one + finalDataType = finalDataType || firstDataType; + } + + // If we found a dataType + // We add the dataType to the list if needed + // and return the corresponding response + if ( finalDataType ) { + if ( finalDataType !== dataTypes[ 0 ] ) { + dataTypes.unshift( finalDataType ); + } + return responses[ finalDataType ]; + } +} + +/* Chain conversions given the request and the original response + * Also sets the responseXXX fields on the jqXHR instance + */ +function ajaxConvert( s, response, jqXHR, isSuccess ) { + var conv2, current, conv, tmp, prev, + converters = {}, + + // Work with a copy of dataTypes in case we need to modify it for conversion + dataTypes = s.dataTypes.slice(); + + // Create converters map with lowercased keys + if ( dataTypes[ 1 ] ) { + for ( conv in s.converters ) { + converters[ conv.toLowerCase() ] = s.converters[ conv ]; + } + } + + current = dataTypes.shift(); + + // Convert to each sequential dataType + while ( current ) { + + if ( s.responseFields[ current ] ) { + jqXHR[ s.responseFields[ current ] ] = response; + } + + // Apply the dataFilter if provided + if ( !prev && isSuccess && s.dataFilter ) { + response = s.dataFilter( response, s.dataType ); + } + + prev = current; + current = dataTypes.shift(); + + if ( current ) { + + // There's only work to do if current dataType is non-auto + if ( current === "*" ) { + + current = prev; + + // Convert response if prev dataType is non-auto and differs from current + } else if ( prev !== "*" && prev !== current ) { + + // Seek a direct converter + conv = converters[ prev + " " + current ] || converters[ "* " + current ]; + + // If none found, seek a pair + if ( !conv ) { + for ( conv2 in converters ) { + + // If conv2 outputs current + tmp = conv2.split( " " ); + if ( tmp[ 1 ] === current ) { + + // If prev 
can be converted to accepted input + conv = converters[ prev + " " + tmp[ 0 ] ] || + converters[ "* " + tmp[ 0 ] ]; + if ( conv ) { + + // Condense equivalence converters + if ( conv === true ) { + conv = converters[ conv2 ]; + + // Otherwise, insert the intermediate dataType + } else if ( converters[ conv2 ] !== true ) { + current = tmp[ 0 ]; + dataTypes.unshift( tmp[ 1 ] ); + } + break; + } + } + } + } + + // Apply converter (if not an equivalence) + if ( conv !== true ) { + + // Unless errors are allowed to bubble, catch and return them + if ( conv && s.throws ) { + response = conv( response ); + } else { + try { + response = conv( response ); + } catch ( e ) { + return { + state: "parsererror", + error: conv ? e : "No conversion from " + prev + " to " + current + }; + } + } + } + } + } + } + + return { state: "success", data: response }; +} + +jQuery.extend( { + + // Counter for holding the number of active queries + active: 0, + + // Last-Modified header cache for next request + lastModified: {}, + etag: {}, + + ajaxSettings: { + url: location.href, + type: "GET", + isLocal: rlocalProtocol.test( location.protocol ), + global: true, + processData: true, + async: true, + contentType: "application/x-www-form-urlencoded; charset=UTF-8", + + /* + timeout: 0, + data: null, + dataType: null, + username: null, + password: null, + cache: null, + throws: false, + traditional: false, + headers: {}, + */ + + accepts: { + "*": allTypes, + text: "text/plain", + html: "text/html", + xml: "application/xml, text/xml", + json: "application/json, text/javascript" + }, + + contents: { + xml: /\bxml\b/, + html: /\bhtml/, + json: /\bjson\b/ + }, + + responseFields: { + xml: "responseXML", + text: "responseText", + json: "responseJSON" + }, + + // Data converters + // Keys separate source (or catchall "*") and destination types with a single space + converters: { + + // Convert anything to text + "* text": String, + + // Text to html (true = no transformation) + "text html": true, + + // Evaluate text as a json expression + "text json": JSON.parse, + + // Parse text as xml + "text xml": jQuery.parseXML + }, + + // For options that shouldn't be deep extended: + // you can add your own custom options here if + // and when you create one that shouldn't be + // deep extended (see ajaxExtend) + flatOptions: { + url: true, + context: true + } + }, + + // Creates a full fledged settings object into target + // with both ajaxSettings and settings fields. + // If target is omitted, writes into ajaxSettings. + ajaxSetup: function( target, settings ) { + return settings ? 
+ + // Building a settings object + ajaxExtend( ajaxExtend( target, jQuery.ajaxSettings ), settings ) : + + // Extending ajaxSettings + ajaxExtend( jQuery.ajaxSettings, target ); + }, + + ajaxPrefilter: addToPrefiltersOrTransports( prefilters ), + ajaxTransport: addToPrefiltersOrTransports( transports ), + + // Main method + ajax: function( url, options ) { + + // If url is an object, simulate pre-1.5 signature + if ( typeof url === "object" ) { + options = url; + url = undefined; + } + + // Force options to be an object + options = options || {}; + + var transport, + + // URL without anti-cache param + cacheURL, + + // Response headers + responseHeadersString, + responseHeaders, + + // timeout handle + timeoutTimer, + + // Url cleanup var + urlAnchor, + + // Request state (becomes false upon send and true upon completion) + completed, + + // To know if global events are to be dispatched + fireGlobals, + + // Loop variable + i, + + // uncached part of the url + uncached, + + // Create the final options object + s = jQuery.ajaxSetup( {}, options ), + + // Callbacks context + callbackContext = s.context || s, + + // Context for global events is callbackContext if it is a DOM node or jQuery collection + globalEventContext = s.context && + ( callbackContext.nodeType || callbackContext.jquery ) ? + jQuery( callbackContext ) : + jQuery.event, + + // Deferreds + deferred = jQuery.Deferred(), + completeDeferred = jQuery.Callbacks( "once memory" ), + + // Status-dependent callbacks + statusCode = s.statusCode || {}, + + // Headers (they are sent all at once) + requestHeaders = {}, + requestHeadersNames = {}, + + // Default abort message + strAbort = "canceled", + + // Fake xhr + jqXHR = { + readyState: 0, + + // Builds headers hashtable if needed + getResponseHeader: function( key ) { + var match; + if ( completed ) { + if ( !responseHeaders ) { + responseHeaders = {}; + while ( ( match = rheaders.exec( responseHeadersString ) ) ) { + responseHeaders[ match[ 1 ].toLowerCase() + " " ] = + ( responseHeaders[ match[ 1 ].toLowerCase() + " " ] || [] ) + .concat( match[ 2 ] ); + } + } + match = responseHeaders[ key.toLowerCase() + " " ]; + } + return match == null ? null : match.join( ", " ); + }, + + // Raw string + getAllResponseHeaders: function() { + return completed ? 
responseHeadersString : null; + }, + + // Caches the header + setRequestHeader: function( name, value ) { + if ( completed == null ) { + name = requestHeadersNames[ name.toLowerCase() ] = + requestHeadersNames[ name.toLowerCase() ] || name; + requestHeaders[ name ] = value; + } + return this; + }, + + // Overrides response content-type header + overrideMimeType: function( type ) { + if ( completed == null ) { + s.mimeType = type; + } + return this; + }, + + // Status-dependent callbacks + statusCode: function( map ) { + var code; + if ( map ) { + if ( completed ) { + + // Execute the appropriate callbacks + jqXHR.always( map[ jqXHR.status ] ); + } else { + + // Lazy-add the new callbacks in a way that preserves old ones + for ( code in map ) { + statusCode[ code ] = [ statusCode[ code ], map[ code ] ]; + } + } + } + return this; + }, + + // Cancel the request + abort: function( statusText ) { + var finalText = statusText || strAbort; + if ( transport ) { + transport.abort( finalText ); + } + done( 0, finalText ); + return this; + } + }; + + // Attach deferreds + deferred.promise( jqXHR ); + + // Add protocol if not provided (prefilters might expect it) + // Handle falsy url in the settings object (#10093: consistency with old signature) + // We also use the url parameter if available + s.url = ( ( url || s.url || location.href ) + "" ) + .replace( rprotocol, location.protocol + "//" ); + + // Alias method option to type as per ticket #12004 + s.type = options.method || options.type || s.method || s.type; + + // Extract dataTypes list + s.dataTypes = ( s.dataType || "*" ).toLowerCase().match( rnothtmlwhite ) || [ "" ]; + + // A cross-domain request is in order when the origin doesn't match the current origin. + if ( s.crossDomain == null ) { + urlAnchor = document.createElement( "a" ); + + // Support: IE <=8 - 11, Edge 12 - 15 + // IE throws exception on accessing the href property if url is malformed, + // e.g. 
http://example.com:80x/ + try { + urlAnchor.href = s.url; + + // Support: IE <=8 - 11 only + // Anchor's host property isn't correctly set when s.url is relative + urlAnchor.href = urlAnchor.href; + s.crossDomain = originAnchor.protocol + "//" + originAnchor.host !== + urlAnchor.protocol + "//" + urlAnchor.host; + } catch ( e ) { + + // If there is an error parsing the URL, assume it is crossDomain, + // it can be rejected by the transport if it is invalid + s.crossDomain = true; + } + } + + // Convert data if not already a string + if ( s.data && s.processData && typeof s.data !== "string" ) { + s.data = jQuery.param( s.data, s.traditional ); + } + + // Apply prefilters + inspectPrefiltersOrTransports( prefilters, s, options, jqXHR ); + + // If request was aborted inside a prefilter, stop there + if ( completed ) { + return jqXHR; + } + + // We can fire global events as of now if asked to + // Don't fire events if jQuery.event is undefined in an AMD-usage scenario (#15118) + fireGlobals = jQuery.event && s.global; + + // Watch for a new set of requests + if ( fireGlobals && jQuery.active++ === 0 ) { + jQuery.event.trigger( "ajaxStart" ); + } + + // Uppercase the type + s.type = s.type.toUpperCase(); + + // Determine if request has content + s.hasContent = !rnoContent.test( s.type ); + + // Save the URL in case we're toying with the If-Modified-Since + // and/or If-None-Match header later on + // Remove hash to simplify url manipulation + cacheURL = s.url.replace( rhash, "" ); + + // More options handling for requests with no content + if ( !s.hasContent ) { + + // Remember the hash so we can put it back + uncached = s.url.slice( cacheURL.length ); + + // If data is available and should be processed, append data to url + if ( s.data && ( s.processData || typeof s.data === "string" ) ) { + cacheURL += ( rquery.test( cacheURL ) ? "&" : "?" ) + s.data; + + // #9682: remove data so that it's not used in an eventual retry + delete s.data; + } + + // Add or update anti-cache param if needed + if ( s.cache === false ) { + cacheURL = cacheURL.replace( rantiCache, "$1" ); + uncached = ( rquery.test( cacheURL ) ? "&" : "?" ) + "_=" + ( nonce.guid++ ) + + uncached; + } + + // Put hash and anti-cache on the URL that will be requested (gh-1732) + s.url = cacheURL + uncached; + + // Change '%20' to '+' if this is encoded form body content (gh-2658) + } else if ( s.data && s.processData && + ( s.contentType || "" ).indexOf( "application/x-www-form-urlencoded" ) === 0 ) { + s.data = s.data.replace( r20, "+" ); + } + + // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. + if ( s.ifModified ) { + if ( jQuery.lastModified[ cacheURL ] ) { + jqXHR.setRequestHeader( "If-Modified-Since", jQuery.lastModified[ cacheURL ] ); + } + if ( jQuery.etag[ cacheURL ] ) { + jqXHR.setRequestHeader( "If-None-Match", jQuery.etag[ cacheURL ] ); + } + } + + // Set the correct header, if data is being sent + if ( s.data && s.hasContent && s.contentType !== false || options.contentType ) { + jqXHR.setRequestHeader( "Content-Type", s.contentType ); + } + + // Set the Accepts header for the server, depending on the dataType + jqXHR.setRequestHeader( + "Accept", + s.dataTypes[ 0 ] && s.accepts[ s.dataTypes[ 0 ] ] ? + s.accepts[ s.dataTypes[ 0 ] ] + + ( s.dataTypes[ 0 ] !== "*" ? 
", " + allTypes + "; q=0.01" : "" ) : + s.accepts[ "*" ] + ); + + // Check for headers option + for ( i in s.headers ) { + jqXHR.setRequestHeader( i, s.headers[ i ] ); + } + + // Allow custom headers/mimetypes and early abort + if ( s.beforeSend && + ( s.beforeSend.call( callbackContext, jqXHR, s ) === false || completed ) ) { + + // Abort if not done already and return + return jqXHR.abort(); + } + + // Aborting is no longer a cancellation + strAbort = "abort"; + + // Install callbacks on deferreds + completeDeferred.add( s.complete ); + jqXHR.done( s.success ); + jqXHR.fail( s.error ); + + // Get transport + transport = inspectPrefiltersOrTransports( transports, s, options, jqXHR ); + + // If no transport, we auto-abort + if ( !transport ) { + done( -1, "No Transport" ); + } else { + jqXHR.readyState = 1; + + // Send global event + if ( fireGlobals ) { + globalEventContext.trigger( "ajaxSend", [ jqXHR, s ] ); + } + + // If request was aborted inside ajaxSend, stop there + if ( completed ) { + return jqXHR; + } + + // Timeout + if ( s.async && s.timeout > 0 ) { + timeoutTimer = window.setTimeout( function() { + jqXHR.abort( "timeout" ); + }, s.timeout ); + } + + try { + completed = false; + transport.send( requestHeaders, done ); + } catch ( e ) { + + // Rethrow post-completion exceptions + if ( completed ) { + throw e; + } + + // Propagate others as results + done( -1, e ); + } + } + + // Callback for when everything is done + function done( status, nativeStatusText, responses, headers ) { + var isSuccess, success, error, response, modified, + statusText = nativeStatusText; + + // Ignore repeat invocations + if ( completed ) { + return; + } + + completed = true; + + // Clear timeout if it exists + if ( timeoutTimer ) { + window.clearTimeout( timeoutTimer ); + } + + // Dereference transport for early garbage collection + // (no matter how long the jqXHR object will be used) + transport = undefined; + + // Cache response headers + responseHeadersString = headers || ""; + + // Set readyState + jqXHR.readyState = status > 0 ? 4 : 0; + + // Determine if successful + isSuccess = status >= 200 && status < 300 || status === 304; + + // Get response data + if ( responses ) { + response = ajaxHandleResponses( s, jqXHR, responses ); + } + + // Use a noop converter for missing script but not if jsonp + if ( !isSuccess && + jQuery.inArray( "script", s.dataTypes ) > -1 && + jQuery.inArray( "json", s.dataTypes ) < 0 ) { + s.converters[ "text script" ] = function() {}; + } + + // Convert no matter what (that way responseXXX fields are always set) + response = ajaxConvert( s, response, jqXHR, isSuccess ); + + // If successful, handle type chaining + if ( isSuccess ) { + + // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. 
+ if ( s.ifModified ) { + modified = jqXHR.getResponseHeader( "Last-Modified" ); + if ( modified ) { + jQuery.lastModified[ cacheURL ] = modified; + } + modified = jqXHR.getResponseHeader( "etag" ); + if ( modified ) { + jQuery.etag[ cacheURL ] = modified; + } + } + + // if no content + if ( status === 204 || s.type === "HEAD" ) { + statusText = "nocontent"; + + // if not modified + } else if ( status === 304 ) { + statusText = "notmodified"; + + // If we have data, let's convert it + } else { + statusText = response.state; + success = response.data; + error = response.error; + isSuccess = !error; + } + } else { + + // Extract error from statusText and normalize for non-aborts + error = statusText; + if ( status || !statusText ) { + statusText = "error"; + if ( status < 0 ) { + status = 0; + } + } + } + + // Set data for the fake xhr object + jqXHR.status = status; + jqXHR.statusText = ( nativeStatusText || statusText ) + ""; + + // Success/Error + if ( isSuccess ) { + deferred.resolveWith( callbackContext, [ success, statusText, jqXHR ] ); + } else { + deferred.rejectWith( callbackContext, [ jqXHR, statusText, error ] ); + } + + // Status-dependent callbacks + jqXHR.statusCode( statusCode ); + statusCode = undefined; + + if ( fireGlobals ) { + globalEventContext.trigger( isSuccess ? "ajaxSuccess" : "ajaxError", + [ jqXHR, s, isSuccess ? success : error ] ); + } + + // Complete + completeDeferred.fireWith( callbackContext, [ jqXHR, statusText ] ); + + if ( fireGlobals ) { + globalEventContext.trigger( "ajaxComplete", [ jqXHR, s ] ); + + // Handle the global AJAX counter + if ( !( --jQuery.active ) ) { + jQuery.event.trigger( "ajaxStop" ); + } + } + } + + return jqXHR; + }, + + getJSON: function( url, data, callback ) { + return jQuery.get( url, data, callback, "json" ); + }, + + getScript: function( url, callback ) { + return jQuery.get( url, undefined, callback, "script" ); + } +} ); + +jQuery.each( [ "get", "post" ], function( _i, method ) { + jQuery[ method ] = function( url, data, callback, type ) { + + // Shift arguments if data argument was omitted + if ( isFunction( data ) ) { + type = type || callback; + callback = data; + data = undefined; + } + + // The url can be an options object (which then must have .url) + return jQuery.ajax( jQuery.extend( { + url: url, + type: method, + dataType: type, + data: data, + success: callback + }, jQuery.isPlainObject( url ) && url ) ); + }; +} ); + +jQuery.ajaxPrefilter( function( s ) { + var i; + for ( i in s.headers ) { + if ( i.toLowerCase() === "content-type" ) { + s.contentType = s.headers[ i ] || ""; + } + } +} ); + + +jQuery._evalUrl = function( url, options, doc ) { + return jQuery.ajax( { + url: url, + + // Make this explicit, since user can override this through ajaxSetup (#11264) + type: "GET", + dataType: "script", + cache: true, + async: false, + global: false, + + // Only evaluate the response if it is successful (gh-4126) + // dataFilter is not invoked for failure responses, so using it instead + // of the default converter is kludgy but it works. 
+ converters: { + "text script": function() {} + }, + dataFilter: function( response ) { + jQuery.globalEval( response, options, doc ); + } + } ); +}; + + +jQuery.fn.extend( { + wrapAll: function( html ) { + var wrap; + + if ( this[ 0 ] ) { + if ( isFunction( html ) ) { + html = html.call( this[ 0 ] ); + } + + // The elements to wrap the target around + wrap = jQuery( html, this[ 0 ].ownerDocument ).eq( 0 ).clone( true ); + + if ( this[ 0 ].parentNode ) { + wrap.insertBefore( this[ 0 ] ); + } + + wrap.map( function() { + var elem = this; + + while ( elem.firstElementChild ) { + elem = elem.firstElementChild; + } + + return elem; + } ).append( this ); + } + + return this; + }, + + wrapInner: function( html ) { + if ( isFunction( html ) ) { + return this.each( function( i ) { + jQuery( this ).wrapInner( html.call( this, i ) ); + } ); + } + + return this.each( function() { + var self = jQuery( this ), + contents = self.contents(); + + if ( contents.length ) { + contents.wrapAll( html ); + + } else { + self.append( html ); + } + } ); + }, + + wrap: function( html ) { + var htmlIsFunction = isFunction( html ); + + return this.each( function( i ) { + jQuery( this ).wrapAll( htmlIsFunction ? html.call( this, i ) : html ); + } ); + }, + + unwrap: function( selector ) { + this.parent( selector ).not( "body" ).each( function() { + jQuery( this ).replaceWith( this.childNodes ); + } ); + return this; + } +} ); + + +jQuery.expr.pseudos.hidden = function( elem ) { + return !jQuery.expr.pseudos.visible( elem ); +}; +jQuery.expr.pseudos.visible = function( elem ) { + return !!( elem.offsetWidth || elem.offsetHeight || elem.getClientRects().length ); +}; + + + + +jQuery.ajaxSettings.xhr = function() { + try { + return new window.XMLHttpRequest(); + } catch ( e ) {} +}; + +var xhrSuccessStatus = { + + // File protocol always yields status code 0, assume 200 + 0: 200, + + // Support: IE <=9 only + // #1450: sometimes IE returns 1223 when it should be 204 + 1223: 204 + }, + xhrSupported = jQuery.ajaxSettings.xhr(); + +support.cors = !!xhrSupported && ( "withCredentials" in xhrSupported ); +support.ajax = xhrSupported = !!xhrSupported; + +jQuery.ajaxTransport( function( options ) { + var callback, errorCallback; + + // Cross domain only allowed if supported through XMLHttpRequest + if ( support.cors || xhrSupported && !options.crossDomain ) { + return { + send: function( headers, complete ) { + var i, + xhr = options.xhr(); + + xhr.open( + options.type, + options.url, + options.async, + options.username, + options.password + ); + + // Apply custom fields if provided + if ( options.xhrFields ) { + for ( i in options.xhrFields ) { + xhr[ i ] = options.xhrFields[ i ]; + } + } + + // Override mime type if needed + if ( options.mimeType && xhr.overrideMimeType ) { + xhr.overrideMimeType( options.mimeType ); + } + + // X-Requested-With header + // For cross-domain requests, seeing as conditions for a preflight are + // akin to a jigsaw puzzle, we simply never set it to be sure. + // (it can always be set on a per-request basis or even using ajaxSetup) + // For same-domain requests, won't change header if already provided. 
+ if ( !options.crossDomain && !headers[ "X-Requested-With" ] ) { + headers[ "X-Requested-With" ] = "XMLHttpRequest"; + } + + // Set headers + for ( i in headers ) { + xhr.setRequestHeader( i, headers[ i ] ); + } + + // Callback + callback = function( type ) { + return function() { + if ( callback ) { + callback = errorCallback = xhr.onload = + xhr.onerror = xhr.onabort = xhr.ontimeout = + xhr.onreadystatechange = null; + + if ( type === "abort" ) { + xhr.abort(); + } else if ( type === "error" ) { + + // Support: IE <=9 only + // On a manual native abort, IE9 throws + // errors on any property access that is not readyState + if ( typeof xhr.status !== "number" ) { + complete( 0, "error" ); + } else { + complete( + + // File: protocol always yields status 0; see #8605, #14207 + xhr.status, + xhr.statusText + ); + } + } else { + complete( + xhrSuccessStatus[ xhr.status ] || xhr.status, + xhr.statusText, + + // Support: IE <=9 only + // IE9 has no XHR2 but throws on binary (trac-11426) + // For XHR2 non-text, let the caller handle it (gh-2498) + ( xhr.responseType || "text" ) !== "text" || + typeof xhr.responseText !== "string" ? + { binary: xhr.response } : + { text: xhr.responseText }, + xhr.getAllResponseHeaders() + ); + } + } + }; + }; + + // Listen to events + xhr.onload = callback(); + errorCallback = xhr.onerror = xhr.ontimeout = callback( "error" ); + + // Support: IE 9 only + // Use onreadystatechange to replace onabort + // to handle uncaught aborts + if ( xhr.onabort !== undefined ) { + xhr.onabort = errorCallback; + } else { + xhr.onreadystatechange = function() { + + // Check readyState before timeout as it changes + if ( xhr.readyState === 4 ) { + + // Allow onerror to be called first, + // but that will not handle a native abort + // Also, save errorCallback to a variable + // as xhr.onerror cannot be accessed + window.setTimeout( function() { + if ( callback ) { + errorCallback(); + } + } ); + } + }; + } + + // Create the abort callback + callback = callback( "abort" ); + + try { + + // Do send the request (this may raise an exception) + xhr.send( options.hasContent && options.data || null ); + } catch ( e ) { + + // #14683: Only rethrow if this hasn't been notified as an error yet + if ( callback ) { + throw e; + } + } + }, + + abort: function() { + if ( callback ) { + callback(); + } + } + }; + } +} ); + + + + +// Prevent auto-execution of scripts when no explicit dataType was provided (See gh-2432) +jQuery.ajaxPrefilter( function( s ) { + if ( s.crossDomain ) { + s.contents.script = false; + } +} ); + +// Install script dataType +jQuery.ajaxSetup( { + accepts: { + script: "text/javascript, application/javascript, " + + "application/ecmascript, application/x-ecmascript" + }, + contents: { + script: /\b(?:java|ecma)script\b/ + }, + converters: { + "text script": function( text ) { + jQuery.globalEval( text ); + return text; + } + } +} ); + +// Handle cache's special case and crossDomain +jQuery.ajaxPrefilter( "script", function( s ) { + if ( s.cache === undefined ) { + s.cache = false; + } + if ( s.crossDomain ) { + s.type = "GET"; + } +} ); + +// Bind script tag hack transport +jQuery.ajaxTransport( "script", function( s ) { + + // This transport only deals with cross domain or forced-by-attrs requests + if ( s.crossDomain || s.scriptAttrs ) { + var script, callback; + return { + send: function( _, complete ) { + script = jQuery( " + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Datasets

+

This is a comprehensive list of the public datasets used by this repository; a brief loading example follows the table.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Name (Link/Source)

Framework

Use Case

Adult Income Dataset

PyTorch

Tabular Classification

CDD-CESM

PyTorch

Image & Text Classification

CIFAR-10 (TorchVision)

PyTorch

Image Classification

Civil Comments (TFDS)

TensorFlow

Text Classification

COMPAS Recidivism Risk Score Data and Analysis

TensorFlow

Tabular Classification

ImageNet (TorchVision)

PyTorch

Image Classification

IMDB Reviews

PyTorch

Text Classification

MNIST (TorchVision)

PyTorch

Image Classification

SMS Spam Collection

PyTorch

Text Classification

+
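As a quick illustration of how the TorchVision-hosted datasets above are obtained, the snippet below downloads MNIST and reads one sample. This is an editorial sketch rather than code from this repository; it uses only the standard torchvision API, and the ./data path is an arbitrary choice.

from torchvision import datasets, transforms

# Download the MNIST training split listed in the table above.
mnist = datasets.MNIST(
    root="./data",                    # arbitrary cache directory (assumption)
    train=True,                       # use the training split
    download=True,                    # fetch the files if not already cached
    transform=transforms.ToTensor(),  # PIL image -> 1x28x28 float tensor
)

image, label = mnist[0]               # first sample: tensor and integer label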
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/explainer/attributions.html b/v1.1.0/explainer/attributions.html new file mode 100644 index 0000000..8e57885 --- /dev/null +++ b/v1.1.0/explainer/attributions.html @@ -0,0 +1,123 @@ + + + + + + + <no title> — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/explainer/cam.html b/v1.1.0/explainer/cam.html new file mode 100644 index 0000000..ef51f70 --- /dev/null +++ b/v1.1.0/explainer/cam.html @@ -0,0 +1,123 @@ + + + + + + + <no title> — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/explainer/index.html b/v1.1.0/explainer/index.html new file mode 100644 index 0000000..d37c78d --- /dev/null +++ b/v1.1.0/explainer/index.html @@ -0,0 +1,176 @@ + + + + + + + Explainer — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Explainer

+

Explainer is a Python module in Intel® Explainable AI Tools that provides explainability methods for PyTorch and TensorFlow models.

+
+

Goals

+
+
+
+
+
+

Composable

+
+
+

Add explainers to model methods with minimal code

+
+
+
+
+
+
+

Extensible

+
+
+

Easy to add new methods

+
+
+
+
+
+
+

Community

+
+
+

Contributions welcome

+
+
+
+
+
+
+
+

Explainer Submodules

+
    +
  • attributions: visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • +
  • cam: create heatmaps for CNN image classifications using gradient-weighted class activation mapping (Grad-CAM)

  • +
  • metrics: gain insight into models with the measurements and visualizations needed during the machine learning workflow (a usage sketch follows this list)

  • +
+
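A minimal sketch of driving one of these submodules from Python is shown below. It assumes the intel_ai_safety.explainer import path used by this package; the kernel_explainer helper, its arguments, and the explain/visualize methods are illustrative assumptions rather than confirmed signatures, so consult the attributions reference for the real API.

# Hypothetical usage sketch -- the attributions API below is an assumption,
# not a confirmed signature; see the attributions reference for real usage.
import numpy as np
from intel_ai_safety.explainer import attributions  # assumed import path

def predict(X):
    # Stand-in for a trained model's predict function (two-class scores).
    return np.stack([X.sum(axis=1), -X.sum(axis=1)], axis=1)

X_background = np.random.rand(50, 4)   # reference rows for the explainer
X_test = np.random.rand(5, 4)          # rows to explain

explainer = attributions.kernel_explainer(predict, X_background)  # assumed helper
explanation = explainer.explain(X_test)                           # assumed method
explanation.visualize()                # assumed call plotting +/- attributions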
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/explainer/metrics.html b/v1.1.0/explainer/metrics.html new file mode 100644 index 0000000..5cc87e4 --- /dev/null +++ b/v1.1.0/explainer/metrics.html @@ -0,0 +1,129 @@ + + + + + + + API Refrence — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

API Reference

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/genindex.html b/v1.1.0/genindex.html new file mode 100644 index 0000000..116d7fe --- /dev/null +++ b/v1.1.0/genindex.html @@ -0,0 +1,121 @@ + + + + + + Index — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + +

Index

+ +
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/index.html b/v1.1.0/index.html new file mode 100644 index 0000000..7915278 --- /dev/null +++ b/v1.1.0/index.html @@ -0,0 +1,279 @@ + + + + + + + Intel® Explainable AI Tools — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Intel® Explainable AI Tools

+

This repository provides tools for data scientists and MLOps engineers who have requirements specific to AI model interpretability.

+
+

Overview

+

The Intel Explainable AI Tools are designed to help users detect and mitigate fairness and interpretability issues, and they run best on Intel hardware.
+There are two Python* components in the repository (a usage sketch follows the list):

+
    +
  • Model Card Generator

    +
      +
    • Creates interactive HTML reports containing model performance and fairness metrics

    • +
    +
  • +
  • Explainer

    +
      +
    • Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API, including the following modules:

      +
        +
      • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

      • +
      • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (Grad-CAM)

      • +
      • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

      • +
      +
    • +
    +
  • +
+
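For orientation, a sketch of what a Model Card Generator invocation looks like is given below. The ModelCardGen name, its constructor arguments, and the generate method are assumptions for illustration only; the Model Card Generator notebooks linked under Running Notebooks show the exact API.

# Hypothetical sketch -- class, argument, and method names are assumptions;
# see the Model Card Generator notebooks for the confirmed API.
from intel_ai_safety.model_card_gen import ModelCardGen  # assumed import path

mcg = ModelCardGen(                          # assumed class
    data_sets={"eval": "eval_data.csv"},     # illustrative dataset mapping
    model_path="saved_model/",               # illustrative model location
)
mcg.generate()                               # assumed call producing the HTML report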
+
+

Get Started

+
+

Requirements

+
    +
  • Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS)

  • +
  • Python 3.9, 3.10

  • +
  • Install required OS packages with apt-get install build-essential python3-dev

  • +
  • git (only required for the “Developer Installation”)

  • +
  • Poetry

  • +
+
+
+

Developer Installation with Poetry

+

Use these instructions to install the Intel AI Safety Python library with a clone of the
+GitHub repository. This can be done instead of the basic pip install if you plan
+on making code changes.

+
    +
  1. Clone this repo and navigate to the repo directory.

  2. +
  3. Allow Poetry to create a virtual environment contained in the .venv directory of the current directory.

    +
    poetry lock
    +
    +
    +

    In addition, you can explicitly tell Poetry which Python instance to use:

    +
    poetry env use /full/path/to/python
    +
    +
    +
  4. +
  5. Choose the intel_ai_safety subpackages and plugins that you wish to install.

    +

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    +
    poetry install --extras all
    +
    +
    +

    b. Install intel_ai_safety with just explainer

    +
    poetry install --extras explainer
    +
    +
    +

    c. Install intel_ai_safety with just model_card_gen

    +
    poetry install --extras model-card
    +
    +
    +

    d. Install intel_ai_safety with explainer and all of its plugins

    +
    poetry install --extras explainer-all
    +
    +
    +

    e. Install intel_ai_safety with explainer and just its PyTorch implementations

    +
    poetry install --extras explainer-pytorch
    +
    +
    +

    f. Install intel_ai_safety with explainer and just its TensorFlow implementations

    +
    poetry install --extras explainer-tensorflow
    +
    +
    +
  6. +
  7. Activate the environment:

    +
    source .venv/bin/activate
    +
    +
    +
  8. +
+
+
+

Install to an existing environment with Poetry

+
+

Create and activate a Python3 virtual environment

+

We encourage you to use a Python virtual environment (virtualenv or conda) for consistent package management.
+There are two ways to do this:

+
    +
  1. Choose a virtual environment to use:
+a. Using virtualenv:

    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
    +
    +
    +

    b. Or conda:

    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
    +
    +
    +
  2. +
  3. Install to the current environment

    +
    poetry config virtualenvs.create false && poetry install --extras all
    +
    +
    +
  4. +
+
+
+
+

Additional Feature-Specific Steps

+

Notebooks may require additional dependencies listed in their associated documentation.

+
+
+

Verify Installation

+

Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

+
python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
+python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
+
+
+
+
+
+

Running Notebooks

+

The following links have Jupyter* notebooks showing how to use the Explainer and Model Card Generator APIs in various ML domains and use cases:

+ +
+
+

Support

+

The Intel Explainable AI Tools team tracks bugs and enhancement requests using +GitHub issues. Before submitting a +suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

+

*Other names and brands may be claimed as the property of others. Trademarks

+
+

DISCLAIMER

+

These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog.

+

Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.

+
+
+

License

+

Intel® Explainable AI Tools is licensed under Apache License Version 2.0.

+
+
+

Datasets and Models

+

To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. DATASETS

+

Intel expressly disclaims the accuracy, adequacy, or completeness of any data, datasets or models, and is not liable for any errors, omissions, or defects in such content, or for any reliance thereon. Intel also expressly disclaims any warranty of non-infringement with respect to such data, dataset(s), or model(s). Intel is not liable for any liability or damages relating to your use of such data, datasets, or models.

+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/install.html b/v1.1.0/install.html new file mode 100644 index 0000000..502cd88 --- /dev/null +++ b/v1.1.0/install.html @@ -0,0 +1,252 @@ + + + + + + + Installation — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Installation

+
+

Software Requirements

+
    +
  • Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS)

  • +
  • Python 3.9, 3.10

  • +
  • Install required OS packages with apt-get install build-essential python3-dev

  • +
  • git (only required for the “Developer Installation”)

  • +
  • Poetry

  • +
+
+
+

Developer Installation with Poetry

+

Use these instructions to install the Intel AI Safety Python library with a clone of the
+GitHub repository. This can be done instead of the basic pip install if you plan
+on making code changes.

+
    +
  1. Clone this repo and navigate to the repo directory.

  2. +
  3. Allow Poetry to create a virtual environment contained in the .venv directory of the current directory.

    +
    poetry lock
    +
    +
    +

    In addition, you can explicitly tell Poetry which Python instance to use:

    +
    poetry env use /full/path/to/python
    +
    +
    +
  4. +
  5. Choose the intel_ai_safety subpackages and plugins that you wish to install.

    +

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    +
    poetry install --extras all
    +
    +
    +

    b. Install intel_ai_safety with just explainer

    +
    poetry install --extras explainer
    +
    +
    +

    c. Install intel_ai_safety with just model_card_gen

    +
    poetry install --extras model-card
    +
    +
    +

    d. Install intel_ai_safety with explainer and all of its plugins

    +
    poetry install --extras explainer-all
    +
    +
    +

    e. Install intel_ai_safety with explainer and just its PyTorch implementations

    +
    poetry install --extras explainer-pytorch
    +
    +
    +

    f. Install intel_ai_safety with explainer and just its TensorFlow implementations

    +
    poetry install --extras explainer-tensorflow
    +
    +
    +
  6. +
  7. Activate the environment:

    +
    source .venv/bin/activate
    +
    +
    +
  8. +
+
+
+

Install to an existing environment with Poetry

+
+

Create and activate a Python3 virtual environment

+

We encourage you to use a Python virtual environment (virtualenv or conda) for consistent package management.
+There are two ways to do this:

+
    +
  1. Choose a virtual environment to use:
+a. Using virtualenv:

    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
    +
    +
    +

    b. Or conda:

    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
    +
    +
    +
  2. +
  3. Install to the current environment

    +
    poetry config virtualenvs.create false && poetry install --extras all
    +
    +
    +
  4. +
+
+
+
+

Additional Feature-Specific Steps

+

Notebooks may require additional dependencies listed in their associated documentation.

+
+
+

Verify Installation

+

Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

+
python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
+python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
+
+
+
+
+
+

Running Notebooks

+

The following links have Jupyter* notebooks showing how to use the Explainer and Model Card Generator APIs in various ML domains and use cases:

+ +
+
+

Support

+

The Intel Explainable AI Tools team tracks bugs and enhancement requests using +GitHub issues. Before submitting a +suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

+

*Other names and brands may be claimed as the property of others. Trademarks

+
+

DISCLAIMER

+

These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog.

+

Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.

+
+
+

License

+

Intel® Explainable AI Tools is licensed under Apache License Version 2.0.

+
+
+

Datasets and Models

+

To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. DATASETS +*Other names and brands may be claimed as the property of others. Trademarks

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/legal.html b/v1.1.0/legal.html new file mode 100644 index 0000000..57ee57d --- /dev/null +++ b/v1.1.0/legal.html @@ -0,0 +1,137 @@ + + + + + + + Legal Information — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/markdown/Install.html b/v1.1.0/markdown/Install.html new file mode 100644 index 0000000..bc54e5c --- /dev/null +++ b/v1.1.0/markdown/Install.html @@ -0,0 +1,247 @@ + + + + + + + Installation — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Installation

+
+

Software Requirements

+
    +
  • Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS)

  • +
  • Python 3.9, 3.10

  • +
  • Install required OS packages with apt-get install build-essential python3-dev

  • +
  • git (only required for the “Developer Installation”)

  • +
  • Poetry

  • +
+
+
+

Developer Installation with Poetry

+

Use these instructions to install the Intel AI Safety Python library with a clone of the
+GitHub repository. This can be done instead of the basic pip install if you plan
+on making code changes.

+
    +
  1. Clone this repo and navigate to the repo directory.

  2. +
  3. Allow Poetry to create a virtual environment contained in the .venv directory of the current directory.

    +
    poetry lock
    +
    +
    +

    In addition, you can explicitly tell Poetry which Python instance to use:

    +
    poetry env use /full/path/to/python
    +
    +
    +
  4. +
  5. Choose the intel_ai_safety subpackages and plugins that you wish to install.

    +

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    +
    poetry install --extras all
    +
    +
    +

    b. Install intel_ai_safety with just explainer

    +
    poetry install --extras explainer
    +
    +
    +

    c. Install intel_ai_safety with just model_card_gen

    +
    poetry install --extras model-card
    +
    +
    +

    d. Install intel_ai_safety with explainer and all of its plugins

    +
    poetry install --extras explainer-all
    +
    +
    +

    e. Install intel_ai_safety with explainer and just its PyTorch implementations

    +
    poetry install --extras explainer-pytorch
    +
    +
    +

    f. Install intel_ai_safety with explainer and just its TensorFlow implementations

    +
    poetry install --extras explainer-tensorflow
    +
    +
    +
  6. +
  7. Activate the environment:

    +
    source .venv/bin/activate
    +
    +
    +
  8. +
+
+
+

Install to an existing environment with Poetry

+
+

Create and activate a Python3 virtual environment

+

We encourage you to use a Python virtual environment (virtualenv or conda) for consistent package management.
+There are two ways to do this:

+
    +
  1. Choose a virtual environment to use:
+a. Using virtualenv:

    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
    +
    +
    +

    b. Or conda:

    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
    +
    +
    +
  2. +
  3. Install to the current environment

    +
    poetry config virtualenvs.create false && poetry install --extras all
    +
    +
    +
  4. +
+
+
+
+

Additional Feature-Specific Steps

+

Notebooks may require additional dependencies listed in their associated documentation.

+
+
+

Verify Installation

+

Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

+
python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
+python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
+
+
+
+
+
+

Running Notebooks

+

The following links have Jupyter* notebooks showing how to use the Explainer and Model Card Generator APIs in various ML domains and use cases:

+ +
+
+

Support

+

The Intel Explainable AI Tools team tracks bugs and enhancement requests using +GitHub issues. Before submitting a +suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

+

*Other names and brands may be claimed as the property of others. Trademarks

+
+

DISCLAIMER

+

These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog.

+

Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.

+
+
+

License

+

Intel® Explainable AI Tools is licensed under Apache License Version 2.0.

+
+
+

Datasets and Models

+

To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. DATASETS +*Other names and brands may be claimed as the property of others. Trademarks

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/markdown/Legal.html b/v1.1.0/markdown/Legal.html new file mode 100644 index 0000000..9876f61 --- /dev/null +++ b/v1.1.0/markdown/Legal.html @@ -0,0 +1,134 @@ + + + + + + + Legal Information — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ + + + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/markdown/Overview.html b/v1.1.0/markdown/Overview.html new file mode 100644 index 0000000..812a28b --- /dev/null +++ b/v1.1.0/markdown/Overview.html @@ -0,0 +1,141 @@ + + + + + + + Overview — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Overview

+

The Intel® Explainable AI Tools are designed to help users detect and mitigate fairness and interpretability issues, and they run best on Intel hardware.
+There are two Python* components in the repository:

+
    +
  • Model Card Generator

    +
      +
    • Creates interactive HTML reports containing model performance and fairness metrics

    • +
    +
  • +
  • Explainer

    +
      +
    • Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API, including the following modules:

      +
        +
      • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

      • +
      • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (Grad-CAM)

      • +
      • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

      • +
      +
    • +
    +
  • +
+

*Other names and brands may be claimed as the property of others. Trademarks

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/markdown/Welcome.html b/v1.1.0/markdown/Welcome.html new file mode 100644 index 0000000..a8e8fb1 --- /dev/null +++ b/v1.1.0/markdown/Welcome.html @@ -0,0 +1,274 @@ + + + + + + + Intel® Explainable AI Tools — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Intel® Explainable AI Tools

+

This repository provides tools for data scientists and MLOps engineers who have requirements specific to AI model interpretability.

+
+

Overview

+

The Intel Explainable AI Tools are designed to help users detect and mitigate fairness and interpretability issues, and they run best on Intel hardware.
+There are two Python* components in the repository:

+
    +
  • Model Card Generator

    +
      +
    • Creates interactive HTML reports containing model performance and fairness metrics

    • +
    +
  • +
  • Explainer

    +
      +
    • Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API, including the following modules:

      +
        +
      • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

      • +
      • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (Grad-CAM)

      • +
      • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

      • +
      +
    • +
    +
  • +
+
+
+

Get Started

+
+

Requirements

+
    +
  • Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS)

  • +
  • Python 3.9, 3.10

  • +
  • Install required OS packages with apt-get install build-essential python3-dev

  • +
  • git (only required for the “Developer Installation”)

  • +
  • Poetry

  • +
+
+
+

Developer Installation with Poetry

+

Use these instructions to install the Intel AI Safety Python library with a clone of the
+GitHub repository. This can be done instead of the basic pip install if you plan
+on making code changes.

+
    +
  1. Clone this repo and navigate to the repo directory.

  2. +
  3. Allow Poetry to create a virtual environment contained in the .venv directory of the current directory.

    +
    poetry lock
    +
    +
    +

    In addition, you can explicitly tell Poetry which Python instance to use:

    +
    poetry env use /full/path/to/python
    +
    +
    +
  4. +
  5. Choose the intel_ai_safety subpackages and plugins that you wish to install.

    +

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    +
    poetry install --extras all
    +
    +
    +

    b. Install intel_ai_safety with just explainer

    +
    poetry install --extras explainer
    +
    +
    +

    c. Install intel_ai_safety with just model_card_gen

    +
    poetry install --extras model-card
    +
    +
    +

    d. Install intel_ai_safety with explainer and all of its plugins

    +
    poetry install --extras explainer-all
    +
    +
    +

    e. Install intel_ai_safety with explainer and just its PyTorch implementations

    +
    poetry install --extras explainer-pytorch
    +
    +
    +

    f. Install intel_ai_safety with explainer and just its TensorFlow implementations

    +
    poetry install --extras explainer-tensorflow
    +
    +
    +
  6. +
  7. Activate the environment:

    +
    source .venv/bin/activate
    +
    +
    +
+
+
+

Install to an existing environment with Poetry

+
+

Create and activate a Python3 virtual environment

+

We encourage you to use a Python virtual environment (virtualenv or conda) for consistent package management. +There are two ways to do this:

+
    +
  1. Choose a virtual environment to use: +a. Using virtualenv:

    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
    +
    +
    +

    b. Or conda:

    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
    +
    +
    +
  2. Install to the current environment:

    +
    poetry config virtualenvs.create false && poetry install --extras all
    +
    +
    +
+
+
+
+

Additional Feature-Specific Steps

+

Notebooks may require additional dependencies listed in their associated documentation.

+
+
+

Verify Installation

+

Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

+
python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
+python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
+
+
+
+
+
+

Running Notebooks

+

The following links have Jupyter* notebooks showing how to use the Explainer and Model Card Generator APIs in various ML domains and use cases:

+ +
+
+

Support

+

The Intel Explainable AI Tools team tracks bugs and enhancement requests using +GitHub issues. Before submitting a +suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

+

*Other names and brands may be claimed as the property of others. Trademarks

+
+

DISCLAIMER

+

These scripts are not intended for benchmarking Intel platforms. For any performance and/or benchmarking information on specific Intel platforms, visit https://www.intel.ai/blog.

+

Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.

+
+
+

License

+

Intel® Explainable AI Tools is licensed under Apache License Version 2.0.

+
+
+

Datasets and Models

+

To the extent that any data, datasets, or models are referenced by Intel or accessed using tools or code on this site, such data, datasets and models are provided by the third party indicated as the source of such content. Intel does not create the data, datasets, or models, does not provide a license to any third-party data, datasets, or models referenced, and does not warrant their accuracy or quality. By accessing such data, dataset(s) or model(s) you agree to the terms associated with that content and that your use complies with the applicable license. DATASETS

+

Intel expressly disclaims the accuracy, adequacy, or completeness of any data, datasets or models, and is not liable for any errors, omissions, or defects in such content, or for any reliance thereon. Intel also expressly disclaims any warranty of non-infringement with respect to such data, dataset(s), or model(s). Intel is not liable for any liability or damages relating to your use of such data, datasets, or models.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/model_card_gen/api.html b/v1.1.0/model_card_gen/api.html new file mode 100644 index 0000000..388c744 --- /dev/null +++ b/v1.1.0/model_card_gen/api.html @@ -0,0 +1,133 @@ + + + + + + + API Reference — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

API Reference

+
+

Model Card Generator

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/model_card_gen/example.html b/v1.1.0/model_card_gen/example.html new file mode 100644 index 0000000..06f7655 --- /dev/null +++ b/v1.1.0/model_card_gen/example.html @@ -0,0 +1,130 @@ + + + + + + + Example Model Card — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Example Model Card

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/model_card_gen/index.html b/v1.1.0/model_card_gen/index.html new file mode 100644 index 0000000..9f4afce --- /dev/null +++ b/v1.1.0/model_card_gen/index.html @@ -0,0 +1,388 @@ + + + + + + + Model Card Generator — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Model Card Generator

+

Model Card Generator allows users to create interactive HTML reports containing model performance and fairness metrics.

+

Model Card Sections

Section | Subsection | Description
Model Details | Overview | A brief, one-line description of the model card.
Model Details | Documentation | A thorough description of the model and its usage.
Model Details | Owners | The individuals or teams who own the model.
Model Details | Version | The version of the model.
Model Details | Licenses | The model's license for use.
Model Details | References | Links providing more information about the model.
Model Details | Citations | How to reference this model card.
Model Details | Path | The path where the model is stored.
Model Details | Graphics | Collection of overview graphics.
Model Parameters | Model Architecture | The architecture of the model.
Model Parameters | Data | The datasets used to train and evaluate the model.
Model Parameters | Input Format | The data format for inputs to the model.
Model Parameters | Input Format Map | The data format for inputs to the model, in key-value format.
Model Parameters | Output Format | The data format for outputs from the model.
Model Parameters | Output Format Map | The data format for outputs from the model, in key-value format.
Quantitative Analysis | Performance Metrics | The model performance metrics being reported.
Quantitative Analysis | Graphics | Collection of performance graphics.
Considerations | Users | Who are the intended users of the model?
Considerations | Use Cases | What are the intended use cases of the model?
Considerations | Limitations | What are the known technical limitations of the model? E.g. what kind(s) of data should the model be expected not to perform well on? What are the factors that might degrade model performance?
Considerations | Tradeoffs | What are the known tradeoffs in accuracy/performance of the model?
Considerations | Ethical Considerations | What are the ethical (or environmental) risks involved in the application of this model?
+
+

Install

+

Step 1: Clone the GitHub repository.

+
git clone https://github.com/Intel/intel-xai-tools.git
+
+
+

Step 2: Navigate to intel-xai-tools directory.

+
cd intel-xai-tools/
+
+
+

Step 3: Install package with pip.

+
pip install .
+
+
+
+
+

Run

+
+

Model Card Generator Inputs

+

The ModelCardGen.generate classmethod requires three inputs and returns a ModelCardGen class instance:

+
  • data_sets (dict): dictionary with the user-defined name of the dataset as the key and the path to the TFRecords or the raw DataFrame containing prediction values as the value (see the input-assembly sketch after this list).
    • For TensorFlow TFRecords: {'eval': TensorflowDataset(dataset_path='eval.tfrecord*')} (file glob pattern)
    • For PyTorch Datasets: {'eval': PytorchDataset(pytorch_dataset, feature_names=feature_names)}
    • For Pandas DataFrames: {'eval': pd.DataFrame({"y_true": y_true, "y_pred": y_pred})}
  • model_path (str): the path to the TensorFlow SavedModel; only required for TensorFlow models.
  • eval_config (tfma.EvalConfig or str): either the path to the proto config file used by the tfma evaluator or the proto string to be parsed (an example follows in the next section).
+
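As a minimal sketch (the variable names and values below are hypothetical), here is how the data_sets and eval_config inputs described above can be assembled for the raw-DataFrame case:

import pandas as pd

# Hypothetical ground-truth labels and prediction scores for an evaluation set
y_true = [0, 1, 1, 0]
y_pred = [0.1, 0.8, 0.6, 0.3]

# data_sets maps a user-defined dataset name to a DataFrame of predictions
data_sets = {'eval': pd.DataFrame({"y_true": y_true, "y_pred": y_pred})}

# eval_config points at a proto file; for raw DataFrames it must set both
# label_key and prediction_key (see the TFMA EvalConfig section below)
eval_config = 'eval_config.proto'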

TFMA EvalConfig

+

As an example of the eval_config parameter, let us review the following file, entitled “eval_config.proto”, defined for the COMPAS proxy model found in /notebooks/model_card_gen/compas_with_model_card_gen/compas-model-card-tfx.ipynb.

+

The model_specs section tells the evaluator that “label_key” is the ground truth label. The metrics_specs section defines the following metrics to be computed: “BinaryAccuracy”, “AUC”, “ConfusionMatrixPlot”, and “FairnessIndicators”. The slicing_specs section tells the evaluator to compute these metrics across all datapoints and to aggregate them grouped by the “race” feature.

+
model_specs {
+  label_key: 'is_recid'
+}
+metrics_specs {
+  metrics { class_name: "BinaryAccuracy" }
+  metrics { class_name: "AUC" }
+  metrics { class_name: "ConfusionMatrixPlot" }
+  metrics {
+    class_name: "FairnessIndicators"
+    config: '{"thresholds": [0.25, 0.5, 0.75]}'
+  }
+}
+# The overall slice
+slicing_specs {}
+slicing_specs {
+  feature_keys: 'race'
+}
+options {
+  include_default_metrics { value: false }
+}
+
+
+

If we are computing metrics on a raw DataFrame, we must add the “prediction_key” to model_specs as follows:

+
model_specs {
+    label_key: 'y_true'
+    prediction_key: 'y_pred'
+  }
+...
+
+
+

Populate Model Card user-defined fields

+

The Model Card object that is generated can be serialized/deserialized via the JSON schema defined in schema/v*/model_card.schema.json. You can use a Python dictionary to provide content to static fields like those contained in the “model_details” section of the mc variable below. Any field can be added to this dictionary of pre-defined fields as long as it coheres to the schema being used.

+
mc = {
+  "model_details": {
+    "name": "COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)",
+    "overview": "COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset, which contains approximately 18,000 criminal cases from Broward County, Florida between January, 2013 and December, 2014. The data contains information about 11,000 unique defendants, including criminal history demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism)",
+    "owners": [
+      {
+        "name": "Intel XAI Team",
+        "contact": "xai@intel.com"
+      }
+    ],
+    "references": [
+      {
+        "reference": "Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199."
+      },
+      {
+        "reference": "Chouldechova, A., G'Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046."
+      },
+      {
+        "reference": "Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207."
+      }
+    ],
+    "graphics": {
+      "description": " "
+    }
+  },
+  "quantitative_analysis": {
+    "graphics": {
+      "description": " "
+    }
+  },
+  "schema_version": "0.0.1"
+}
+
+
+

A more comprehensive JSON example, that includes formatting for Ethical Considerations, can be found here: +/model_card_gen/intel_ai_safety/model_card_gen/docs/examples/json/model_card_example.json.

+

Create Model Card

+

+from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
+
+model_path = 'compas/model'
+data_paths = {
+  'eval': 'compas/eval.tfrecord',
+  'train': 'compas/train.tfrecord'
+}
+eval_config = 'compas/eval_config.proto'
+
+mcg = ModelCardGen.generate(data_paths, model_path, eval_config, model_card=mc)
+
+
+
+
+
+

Test

+

Step 1: Test by installing test dependencies:

+
pip install ".[test]"
+
+
+

Step 2: Run tests

+
python -m pytest tests/
+
+
+
+

Markers

+

The following custom markers have been defined in the Model Card Generator tests:

+
@pytest.mark.tensorflow: test requires tensorflow to be installed
+
+@pytest.mark.pytorch: test requires pytorch and tensorflow-model-analysis to be installed
+
+@pytest.mark.common: test does not require a specific framework to be installed
+
+
+

Note that running PyTorch tests still requires TensorFlow libraries for model analysis.

+
+
+

Sample test commands using markers

+

Run only the TensorFlow tests:

+
python -m pytest tests/ -m tensorflow
+
+
+

Run the PyTorch and common tests:

+
python -m pytest tests/ -m "pytorch or common"
+
+
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks.html b/v1.1.0/notebooks.html new file mode 100644 index 0000000..f464eb8 --- /dev/null +++ b/v1.1.0/notebooks.html @@ -0,0 +1,206 @@ + + + + + + + Example Notebooks — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Example Notebooks

+
+

Explainer Notebooks

Notebook | Domain: Use Case | Framework
Explaining ResNet50 ImageNet Classification Using the CAM Explainer | CV: Image Classification | PyTorch*, TensorFlow* and Intel® Explainable AI API
Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer | Numerical/Categorical: Tabular Classification | TensorFlow & Intel Explainable AI API
Explaining Custom CNN MNIST Classification Using the Attributions Explainer | CV: Image Classification | PyTorch and Intel Explainable AI API
Multimodal Breast Cancer Detection Explainability using the Intel® Explainable AI API | CV: Image Classification & NLP: Text Classification | PyTorch, HuggingFace, Intel Explainable AI API & Intel® Transfer Learning Tool API
Explaining Custom NN NewsGroups Classification Using the Attributions Explainer | NLP: Text Classification | PyTorch and Intel Explainable AI API
Explaining Fine Tuned Text Classifier with PyTorch using the Intel® Explainable AI API | NLP: Text Classification | PyTorch, HuggingFace, Intel Explainable AI API & Intel Transfer Learning Tool API
Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer | CV: Image Classification | PyTorch and Intel Explainable AI API

+
+
+

Model Card Generator Notebooks

Notebook | Domain: Use Case | Framework
Generating a Model Card with PyTorch | Numerical/Categorical: Tabular Classification | PyTorch, TensorFlow and Intel Explainable AI API
Detecting Issues in Fairness by Generating a Model Card from TensorFlow Estimators | Numerical/Categorical: Tabular Classification | TensorFlow & Intel Explainable AI API
Creating Model Card for Toxic Comments Classification in TensorFlow | Numerical/Categorical: Tabular Classification | TensorFlow and Intel Explainable AI API

+

*Other names and brands may be claimed as the property of others. Trademarks

+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/ExplainingImageClassification.html b/v1.1.0/notebooks/ExplainingImageClassification.html new file mode 100644 index 0000000..4171d47 --- /dev/null +++ b/v1.1.0/notebooks/ExplainingImageClassification.html @@ -0,0 +1,281 @@ + + + + + + + Explaining ResNet50 ImageNet Classification Using the CAM Explainer — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +
+

Explaining ResNet50 ImageNet Classification Using the CAM Explainer

+
+

Objective

+

The goal of this notebook is to explore various CAM methods for image classification models. For now, we only support the XGradCAM method, a state-of-the-art CAM method.

+
+
+

Loading Intel XAI Tools PyTorch CAM Module

+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.cam import pt_cam as cam
+
+
+
+
+
+

Loading Notebook Modules

+
+
[ ]:
+
+
+
import torch
+import numpy as np
+from torchvision.models import resnet50, ResNet50_Weights
+import matplotlib.pyplot as plt
+
+
+
+
+

Using XGradCAM

+
+
+
+

Loading the input image

+

Load the input image as a numpy array in RGB order.

+
+
[ ]:
+
+
+
from PIL import Image
+import requests
+from io import BytesIO
+
+response = requests.get("https://raw.githubusercontent.com/jacobgil/pytorch-grad-cam/master/examples/both.png")
+image = np.array(Image.open(BytesIO(response.content)))
+plt.imshow(image)
+
+
+
+
+
+

Loading the Model

+

Load the trained model depending on how the model was saved. If you have your own trained model, load it from the model’s path using torch.load() (see the sketch after the next cell).

+
+
[ ]:
+
+
+
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2) # Let's use ResNet50 trained on ImageNet as our model
+
+
+
+
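If you are loading your own trained model instead, here is a minimal sketch (the path is hypothetical, and this assumes the entire model object was saved with torch.save(model, path)):

import torch

# Load a model that was saved in its entirety with torch.save(model, path)
model = torch.load('path/to/your_model.pt')
model.eval()  # switch to inference mode before computing CAM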

We need to choose the target layer (normally the last convolutional layer) to compute CAM for. Simply printing the model will give you some idea about the names of the layers and their specifications. Here are some common choices:
  • FasterRCNN: model.backbone
  • Resnet18 and 50: model.layer4
  • VGG and densenet161: model.features

+
+
[ ]:
+
+
+
target_layer = model.layer4
+
+
+
+

We need to specify the target class as an integer to compute CAM for. This can be specified with the class index in the range [0, NUM_OF_CLASSES-1] based on the training dataset. For example, the index of the class ‘tabby cat’ is 281 in ImageNet. If target_class is None, the highest scoring category will be used.

+
+
[ ]:
+
+
+
target_class = 281
+
+
+
+
+
+

Visualization

+
+
[ ]:
+
+
+
image_dims = (224, 224)
+xgc = cam.x_gradcam(model, target_layer, target_class, image, image_dims, 'cpu')
+xgc.visualize()
+
+
+
+
+

References

+

pytorch-grad-cam GitHub Project - https://github.com/jacobgil/pytorch-grad-cam

+
+
+
+

Loading Intel XAI Tools TensorFlow CAM Module

+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.cam import tf_cam as cam
+
+
+
+
+
+
+

Explaining Image Classification Models with TensorFlow

+
+
[ ]:
+
+
+
%matplotlib inline
+import numpy as np
+import matplotlib.pyplot as plt
+import tensorflow as tf
+from urllib.request import urlopen
+from tensorflow.keras.applications.resnet50 import ResNet50
+
+from PIL import Image
+import requests
+from io import BytesIO
+
+
+
+
+
[ ]:
+
+
+
response = requests.get("https://raw.githubusercontent.com/jacobgil/pytorch-grad-cam/master/examples/both.png")
+image = np.array(Image.open(BytesIO(response.content)))
+plt.imshow(image)
+
+
+
+
+
[ ]:
+
+
+
model = ResNet50()
+target_layer = model.get_layer("conv5_block3_out")
+target_class = 281
+
+
+
+
+
[ ]:
+
+
+
tfgc = cam.tf_gradcam(model, target_layer, target_class, image)
+tfgc.visualize()
+
+
+
+

https://github.com/ismailuddin/gradcam-tensorflow-2/blob/master/notebooks/GradCam.ipynb

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/ExplainingImageClassification.ipynb b/v1.1.0/notebooks/ExplainingImageClassification.ipynb new file mode 100644 index 0000000..faddacc --- /dev/null +++ b/v1.1.0/notebooks/ExplainingImageClassification.ipynb @@ -0,0 +1,298 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "# Explaining ResNet50 ImageNet Classification Using the CAM Explainer" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Objective\n", + "The goal of this notebook is to explore various CAM methods for image classification models.\n", + "For now, we only support XGradCAM method, which is the state-of-the-art CAM method." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Loading Intel XAI Tools PyTorch CAM Module" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.cam import pt_cam as cam" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Loading Notebook Modules" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import torch\n", + "import numpy as np\n", + "from torchvision.models import resnet50, ResNet50_Weights\n", + "import matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Using XGradCAM" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Loading the input image\n", + "Load the input image as a numpy array in RGB order." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from PIL import Image\n", + "import requests\n", + "from io import BytesIO\n", + "\n", + "response = requests.get(\"https://raw.githubusercontent.com/jacobgil/pytorch-grad-cam/master/examples/both.png\")\n", + "image = np.array(Image.open(BytesIO(response.content)))\n", + "plt.imshow(image)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Loading the Model\n", + "Load the trained model depending on how the model was saved. If you have your trained model, load it from the model's path using 'torch.load()'." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2) # Let's use ResNet50 trained on ImageNet as our model" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We need to choose the target layer (normally the last convolutional layer) to compute CAM for.\n", + "Simply printing the model will give you some idea about the name of layers and their specifications.\n", + "Here are some common choices:\n", + "- FasterRCNN: model.backbone\n", + "- Resnet18 and 50: model.layer4\n", + "- VGG and densenet161: model.features" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "target_layer = model.layer4" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We need to specify the target class as an integer to compute CAM for.\n", + "This can be specified with the class index in the range [0, NUM_OF_CLASSES-1] based on the training dataset.\n", + "For example, the index of the class 'tabby cat' is 281 in ImageNet. 
If targetClass is None, the highest scoring category\n", + "will be used." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "target_class = 281" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Visualization" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "image_dims = (224, 224)\n", + "xgc = cam.x_gradcam(model, target_layer, target_class, image, image_dims, 'cpu')\n", + "xgc.visualize()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## References\n", + "pytorch-grad-cam GitHub Project - https://github.com/jacobgil/pytorch-grad-cam" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Loading Intel XAI Tools TensorFlow CAM Module" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.cam import tf_cam as cam" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "# Explaining Image Classification Models with TensorFlow" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "import tensorflow as tf\n", + "from urllib.request import urlopen\n", + "from tensorflow.keras.applications.resnet50 import ResNet50\n", + "\n", + "from PIL import Image\n", + "import requests\n", + "from io import BytesIO" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "response = requests.get(\"https://raw.githubusercontent.com/jacobgil/pytorch-grad-cam/master/examples/both.png\")\n", + "image = np.array(Image.open(BytesIO(response.content)))\n", + "plt.imshow(image)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "model = ResNet50()\n", + "target_layer = model.get_layer(\"conv5_block3_out\")\n", + "target_class = 281" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "tfgc = cam.tf_gradcam(model, target_layer, target_class, image)\n", + "tfgc.visualize()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## References\n", + "https://github.com/ismailuddin/gradcam-tensorflow-2/blob/master/notebooks/GradCam.ipynb" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/v1.1.0/notebooks/Multimodal_Cancer_Detection.html b/v1.1.0/notebooks/Multimodal_Cancer_Detection.html new file mode 100644 index 0000000..1dd21ba --- /dev/null +++ b/v1.1.0/notebooks/Multimodal_Cancer_Detection.html @@ -0,0 +1,1272 @@ + + + + + + + Multimodal Breast Cancer Detection Explainability using the Intel® Explainable AI API — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +
+

Multimodal Breast Cancer Detection Explainability using the Intel® Explainable AI API

+

This application is a multimodal solution for predicting cancer diagnosis using categorized contrast enhanced mammography data and radiology notes. It trains two models - one for image classification and the other for text classification.

+
+

Import Dependencies and Setup Directories

+
+
[ ]:
+
+
+
# This notebook requires the latest version of intel-transfer-learning (v0.7.0)
+# The package and directions to install it can be found at its repo:
+# https://github.com/Intel/transfer-learning
+
+! pip install --no-cache-dir  nltk docx2txt openpyxl et-xmlfile schema
+
+
+
+
+
[ ]:
+
+
+
import numpy as np
+import os
+import pandas as pd
+import tensorflow as tf
+import torch
+
+from transformers import EvalPrediction, TrainingArguments, pipeline
+
+# tlt imports
+from tlt.datasets import dataset_factory
+from tlt.models import model_factory
+
+# explainability imports
+import matplotlib.pyplot as plt
+import plotly.express as px
+from plotly.subplots import make_subplots
+import plotly.graph_objects as go
+import nltk
+from nltk.corpus import words
+import string
+import shap
+import warnings
+warnings.filterwarnings( "ignore", module = "matplotlib\..*" )
+
+# Specify the root directory where the images and annotations are located
+dataset_dir = os.path.join(os.environ["DATASET_DIR"]) if "DATASET_DIR" in os.environ else \
+    os.path.join(os.environ["HOME"], "dataset")
+
+# Specify a directory for output
+output_dir = os.environ["OUTPUT_DIR"] if "OUTPUT_DIR" in os.environ else \
+    os.path.join(os.environ["HOME"], "output")
+
+print("Dataset directory:", dataset_dir)
+print("Output directory:", output_dir)
+
+
+
+
+
+

Dataset

+

Download the images and radiology annotations from https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=109379611 and save in the path <dataset_dir>/brca/data.

+
+
[ ]:
+
+
+
! python prepare_nlp_data.py --data_root {dataset_dir}/brca/data
+
+
+
+
+
[ ]:
+
+
+
! python prepare_vision_data.py --data_root {dataset_dir}/brca/data
+
+
+
+

Image files should have the .jpg extension and be arranged in subfolders for each class. The annotation file should be a .csv. The final brca dataset directory should look something like this:

+
brca
+  ├── data
+  │   ├── PKG - CDD-CESM
+  │   ├── Medical reports for cases .zip
+  │   ├── Radiology manual annotations.xlsx
+  │   └── Radiology_hand_drawn_segmentations_v2.csv
+  ├── annotation
+  │   └── annotation.csv
+  └── vision_images
+      ├── Benign
+      │   ├── P100_L_CM_CC.jpg
+      │   ├── P100_L_CM_MLO.jpg
+      │   └── ...
+      ├── Malignant
+      │   ├── P102_R_CM_CC.jpg
+      │   ├── P102_R_CM_MLO.jpg
+      │   └── ...
+      └── Normal
+          ├── P100_R_CM_CC.jpg
+          ├── P100_R_CM_MLO.jpg
+          └── ...
+
+
+
+
[ ]:
+
+
+
# User input needed - supply the path to the images in the dataset_dir according to your system
+source_image_path = os.path.join(dataset_dir, 'brca', 'data', 'vision_images')
+image_path = source_image_path
+
+# User input needed - supply the path and name of the annotation file in the dataset_dir
+source_annotation_path = os.path.join(dataset_dir, 'brca', 'data', 'annotation', 'annotation.csv')
+annotation_path = source_annotation_path
+
+
+
+
+

Optional: Group Data by Patient ID

+

This section is not required to run the workload, but it is helpful to assign all of a subject’s records to be entirely in the train set or test set. This section will do a random stratification based on patient ID and save new copies of the grouped data files.

+
+
[ ]:
+
+
+
from data_utils import split_images, split_annotation
+
+grouped_image_path = '{}_grouped'.format(source_image_path)
+
+if os.path.isdir(grouped_image_path):
+    print("Grouped directory already exists and will be used: {}".format(grouped_image_path))
+else:
+    split_images(source_image_path, grouped_image_path)
+
+train_image_path = os.path.join(grouped_image_path, 'train')
+test_image_path = os.path.join(grouped_image_path, 'test')
+
+
+
+
+
[ ]:
+
+
+
from data_utils import split_images, split_annotation
+
+file_dir, file_name = os.path.split(source_annotation_path)
+grouped_annotation_path = os.path.join(file_dir, '{}_grouped.csv'.format(os.path.splitext(file_name)[0]))
+
+if os.path.isfile(grouped_annotation_path):
+    print("Grouped annotation already exists and will be used: {}".format(grouped_annotation_path))
+else:
+    train_dataset, test_dataset = split_annotation(file_dir, file_name, train_image_path, test_image_path)
+    train_dataset.to_csv(grouped_annotation_path, index=False)
+    test_dataset.to_csv(grouped_annotation_path[:-4] + '_test.csv', index=False)
+    print('Grouped training annotation saved to: {}'.format(grouped_annotation_path))
+    print('Grouped testing annotation saved to: {}'.format(grouped_annotation_path[:-4] + '_test.csv'))
+
+train_annotation_path = grouped_annotation_path
+test_annotation_path = grouped_annotation_path[:-4] + '_test.csv'
+label_col = 0  # Index of the label column in the grouped data file
+
+
+
+
+
+
+

Model 1: Image Classification with PyTorch

+
+

Get the Model and Dataset

+

Call the model factory to get a pretrained model from PyTorch Hub and the dataset factory to load the images from their location. The get_model function returns a model object that will later be used for training. We will use resnet50 by default.

+
+
[ ]:
+
+
+
viz_model = model_factory.get_model(model_name="resnet50", framework='pytorch')
+
+# Load the dataset from the custom dataset path
+train_viz_dataset = dataset_factory.load_dataset(dataset_dir=train_image_path,
+                                       use_case='image_classification',
+                                       framework='pytorch')
+
+test_viz_dataset = dataset_factory.load_dataset(dataset_dir=test_image_path,
+                                       use_case='image_classification',
+                                       framework='pytorch')
+
+print("Class names:", str(train_viz_dataset.class_names))
+
+
+
+
+
+

Data Preparation

+

Once you have your dataset loaded, use the following cell to preprocess the dataset. We split the images into training and validation subsets, resize them to match the model, and then batch the images.

+
+
[ ]:
+
+
+
batch_size = 16
+# shuffle split the training dataset
+train_viz_dataset.shuffle_split(train_pct=.80, val_pct=.20, seed=3)
+train_viz_dataset.preprocess(viz_model.image_size, batch_size=batch_size)
+test_viz_dataset.preprocess(viz_model.image_size, batch_size=batch_size)
+
+
+
+
+
+

Image dataset analysis

+

Let’s take a look at the dataset and verify that we are loading the data correctly. This includes looking at the distributions amongst the training and validation and visual confirmation of the images themselves.

+
+
[ ]:
+
+
+
# Create a label map function and reverse label map for the dataset
+def label_map_func(label):
+        if label == 'Benign':
+            return 0
+        elif label == 'Malignant':
+            return 1
+        elif label == 'Normal':
+            return 2
+
+reverse_label_map = {0: 'Benign', 1: 'Malignant', 2: 'Normal'}
+
+
+
+
+
[ ]:
+
+
+
train_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}
+
+for x, y in train_viz_dataset.train_subset:
+    train_label_count[reverse_label_map[y]] += 1
+
+print('Training label distribution:')
+train_label_count
+
+
+
+
+
[ ]:
+
+
+
valid_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}
+
+for x, y in train_viz_dataset.validation_subset:
+    valid_label_count[reverse_label_map[y]] += 1
+
+print('Validation label distribution:')
+valid_label_count
+
+
+
+
+
[ ]:
+
+
+
test_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}
+
+for x, y in test_viz_dataset.dataset:
+    test_label_count[reverse_label_map[y]] += 1
+
+print('Testing label distribution:')
+test_label_count
+
+
+
+
+
[ ]:
+
+
+
# get dataset distributions
+form = {'type':'domain'}
+fig = make_subplots(rows=1, cols=3, specs=[[form, form, form]], subplot_titles=['Training', 'Validation', 'Testing'])
+fig.add_trace(go.Pie(values=list(train_label_count.values()), labels=list(train_label_count.keys())), 1, 1)
+fig.add_trace(go.Pie(values=list(valid_label_count.values()), labels=list(valid_label_count.keys())), 1, 2)
+fig.add_trace(go.Pie(values=list(test_label_count.values()), labels=list(test_label_count.keys())), 1, 3)
+
+fig.update_layout(height=600, width=800, title_text="Label Distributions")
+fig.show()
+
+
+
+
+
[ ]:
+
+
+
def get_examples(dataset, reverse_label_map, n=6):
+    # get n images from each label in dataset and return as dictionary
+
+    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
+
+    example_images = {'Benign': [], 'Malignant': [], 'Normal': []}
+    for x, y in loader:
+        for i, label in enumerate(y):
+            label_name = reverse_label_map[int(label)]
+            if len(example_images[label_name]) < n:
+                example_images[label_name].append(x[i])
+        if len(example_images['Malignant']) == n and\
+        len(example_images['Benign']) == n and\
+        len(example_images['Normal']) == n:
+            break
+    return example_images
+
+
+
+
+
[ ]:
+
+
+
# plot some training examples
+fig = plt.figure(figsize=(12,6))
+columns = 6
+rows = 3
+fig.suptitle('Training Torch Tensor examples', size=16)
+
+
+train_example_images = get_examples(train_viz_dataset.train_subset, reverse_label_map)
+for i in range(1, columns*rows +1):
+    idx = i - 1
+    if idx < 6:
+        img = train_example_images['Benign'][idx]
+    elif idx >= 6 and idx < 12:
+        img = train_example_images['Malignant'][idx - 6]
+    else:
+        img = train_example_images['Normal'][idx - 12]
+
+    fig.add_subplot(rows, columns, i)
+    plt.axis('off')
+    plt.tight_layout()
+    if idx == 0 or idx == 6 or idx == 12:
+        plt.axis('on')
+        label_name = reverse_label_map[int(idx/6)]
+        plt.ylabel(label_name, fontsize=16)
+        plt.tick_params(axis='x', bottom=False, labelbottom=False)
+        plt.tick_params(axis='y', left=False, labelleft=False)
+
+    plt.imshow(torch.movedim(img, 0, 2).detach().cpu().numpy().astype(np.uint8))
+
+plt.show()
+
+
+
+
+
[ ]:
+
+
+
# plot some validation images
+fig = plt.figure(figsize=(12,6))
+columns = 6
+rows = 3
+fig.suptitle('Validation Torch Tensor examples', size=16)
+
+
+valid_example_images = get_examples(train_viz_dataset.validation_subset, reverse_label_map)
+
+for i in range(1, columns*rows +1):
+    idx = i - 1
+    if idx < 6:
+        img = valid_example_images['Benign'][idx]
+    elif idx >= 6 and idx < 12:
+        img = valid_example_images['Malignant'][idx - 6]
+    else:
+        img = valid_example_images['Normal'][idx - 12]
+
+    fig.add_subplot(rows, columns, i)
+    plt.axis('off')
+    plt.tight_layout()
+    if idx == 0 or idx == 6 or idx == 12:
+        plt.axis('on')
+        label_name = reverse_label_map[int(idx/6)]
+        plt.ylabel(label_name, fontsize=16)
+        plt.tick_params(axis='x', bottom=False, labelbottom=False)
+        plt.tick_params(axis='y', left=False, labelleft=False)
+
+    plt.imshow(torch.movedim(img, 0, 2).detach().cpu().numpy().astype(np.uint8))
+
+plt.show()
+
+
+
+
+
+

Transfer Learning

+

This step calls the model’s train function with the dataset that was just prepared. The training function will get the PyTorch feature vector and add on a dense layer based on the number of classes in the dataset. The model is then compiled and trained based on the number of epochs specified in the argument. We also add two more dense layers using the extra_layers parameter.

+

To optionally insert additional dense layers between the base model and output layer, extra_layers=[1024, 512] will insert two dense layers, the first with 1024 neurons and the second with 512 neurons.

+
+
[ ]:
+
+
+
viz_history = viz_model.train(train_viz_dataset, output_dir=output_dir, epochs=5, seed=10, extra_layers=[1024, 512], ipex_optimize=False)
+
+
+
+
+
[ ]:
+
+
+
validation_viz_metrics = viz_model.evaluate(train_viz_dataset)
+test_viz_metrics = viz_model.evaluate(test_viz_dataset)
+print(validation_viz_metrics)
+print(test_viz_metrics)
+
+
+
+
+
+

Save the Computer Vision Model

+
+
[ ]:
+
+
+
saved_model_dir = viz_model.export(output_dir)
+
+
+
+
+
+

Error Analysis

+

Analyzing the errors via a confusion matrix and ROC and PR curves will help us identify if our model is exhibiting any label bias.

+
+
[ ]:
+
+
+
from scipy.special import softmax
+y_pred = []
+# get the logit predictions and then convert to probabilities
+for batch in test_viz_dataset.dataset:
+    y_pred.append(softmax(viz_model._model(batch[0][None, :]).detach().numpy())[0])
+
+y_true = [y for x, y in test_viz_dataset.dataset]
+
+
+
+
+
[ ]:
+
+
+
from intel_ai_safety.explainer import metrics
+viz_cm = metrics.confusion_matrix(y_true, y_pred, test_viz_dataset.class_names)
+viz_cm.visualize()
+print(viz_cm.report)
+
+
+
+
+
[ ]:
+
+
+
plotter = metrics.plot(y_true, y_pred, test_viz_dataset.class_names)
+plotter.pr_curve()
+
+
+
+
+
[ ]:
+
+
+
plotter.roc_curve()
+
+
+
+
+
+

Explainability

+
+
[ ]:
+
+
+
# convert the per-class probability predictions to index labels
+y_pred_labels = np.array(y_pred).argmax(axis=1)
+
+# get the malignant indexes and then the normal and benign prediction indexes
+mal_idxs = np.where(np.array(y_true) == label_map_func('Malignant'))[0].tolist()
+nor_preds = np.where(np.array(y_pred_labels) == label_map_func('Normal'))[0].tolist()
+ben_preds = np.where(np.array(y_pred_labels) == label_map_func('Benign'))[0].tolist()
+
+
+
+
+
[ ]:
+
+
+
# get mal examples that were misclassified as nor
+mal_classified_as_nor = list(set(mal_idxs).intersection(nor_preds))
+
+# get mal examples that were misclassified as ben
+mal_classified_as_ben = list(set(mal_idxs).intersection(ben_preds))
+
+
+
+
+
[ ]:
+
+
+
# get the images for all mals predicted as nors
+mal_as_nor_images = [test_viz_dataset.dataset[i][0] for i in mal_classified_as_nor]
+
+# get the images for all mals predicted as bens
+mal_as_ben_images = [test_viz_dataset.dataset[i][0] for i in mal_classified_as_ben]
+
+
+
+
+
[ ]:
+
+
+
from skimage import io
+# plot 14 mal_as_nor images
+fig = plt.figure(figsize=(12,6))
+columns = 7
+rows = 2
+
+for i in range(1, columns*rows +1):
+    if i == len(mal_as_nor_images):
+        break
+    idx = i - 1
+
+    fig.add_subplot(rows, columns, i)
+    plt.axis('off')
+    plt.tight_layout()
+
+    plt.imshow(torch.movedim(mal_as_nor_images[idx], 0, 2).detach().cpu().numpy().astype(np.uint8))
+
+fig.suptitle('Malignant predicted as Normal', fontsize=18)
+plt.tight_layout()
+plt.show()
+
+
+
+
+
[ ]:
+
+
+
# let's calculate GradCAM on the 0th, 3rd and 5th images since they
+# seem to have the clearest visual of a malignant tumor
+from intel_ai_safety.explainer.cam import pt_cam as cam
+
+images = [torch.movedim(mal_as_nor_images[0], 0, 2).detach().cpu().numpy().astype(np.uint8),
+          torch.movedim(mal_as_nor_images[3], 0, 2).detach().cpu().numpy().astype(np.uint8),
+          torch.movedim(mal_as_nor_images[5], 0, 2).detach().cpu().numpy().astype(np.uint8)]
+
+
+final_image_dim = (224, 224)
+targetLayer = viz_model._model.layer4
+xgc = cam.x_gradcam(viz_model._model, targetLayer,
+                      label_map_func('Normal'),
+                      images[0],
+                      final_image_dim,
+                      'cpu')
+
+xgc.visualize()
+
+xgc = cam.x_gradcam(viz_model._model, targetLayer,
+                      label_map_func('Normal'),
+                      images[1],
+                      final_image_dim,
+                      'cpu')
+
+xgc.visualize()
+
+xgc = cam.x_gradcam(viz_model._model, targetLayer,
+                      label_map_func('Normal'),
+                      images[2],
+                      final_image_dim,
+                      'cpu')
+
+xgc.visualize()
+
+
+
+
+
[ ]:
+
+
+
# plot 14 mal_as_ben images
+fig = plt.figure(figsize=(12,6))
+columns = 7
+rows = 2
+
+for i in range(1, columns*rows +1):
+    idx = i - 1
+    if idx == len(mal_as_ben_images):
+        break
+
+    fig.add_subplot(rows, columns, i)
+    plt.axis('off')
+    plt.tight_layout()
+
+    plt.imshow(torch.movedim(mal_as_ben_images[idx], 0, 2).detach().cpu().numpy().astype(np.uint8))
+
+fig.suptitle('Malignant predicted as Benign', fontsize=18)
+plt.tight_layout()
+plt.show()
+
+
+
+
+
[ ]:
+
+
+
# let's calculate GradCAM on the 0th, 1st and 2nd images since they
+# seem to have the clearest visual of a malignant tumor
+
+images = [torch.movedim(mal_as_ben_images[0], 0, 2).detach().cpu().numpy().astype(np.uint8),
+          torch.movedim(mal_as_ben_images[1], 0, 2).detach().cpu().numpy().astype(np.uint8),
+          torch.movedim(mal_as_ben_images[2], 0, 2).detach().cpu().numpy().astype(np.uint8)]
+
+
+
+final_image_dim = (224, 224)
+targetLayer = viz_model._model.layer4
+xgc = cam.x_gradcam(viz_model._model, targetLayer,
+                      label_map_func('Benign'),
+                      images[0],
+                      final_image_dim,
+                      'cpu')
+
+xgc.visualize()
+
+xgc = cam.x_gradcam(viz_model._model, targetLayer,
+                      label_map_func('Benign'),
+                      images[1],
+                      final_image_dim,
+                      'cpu')
+
+xgc.visualize()
+
+xgc = cam.x_gradcam(viz_model._model, targetLayer,
+                      label_map_func('Benign'),
+                      images[2],
+                      final_image_dim,
+                      'cpu')
+
+xgc.visualize()
+
+
+
+
+
+
+

Model 2: Text Classification with PyTorch

+
+

Get the Model and Dataset

+

Now we will call the model factory to get a pretrained model from HuggingFace and load the annotation file using the dataset factory. We will use clinical-bert for this part.

+
+
[ ]:
+
+
+
# Set up NLP parameters
+model_name = 'clinical-bert'
+seq_length = 64
+batch_size = 5
+quantization_criterion = 0.05
+quantization_max_trial = 50
+
+
+
+
+
[ ]:
+
+
+
nlp_model = model_factory.get_model(model_name=model_name, framework='pytorch')
+
+
+
+
+
[ ]:
+
+
+
# Create a label map function and reverse label map for the dataset
+def label_map_func(label):
+        if label == 'Benign':
+            return 0
+        elif label == 'Malignant':
+            return 1
+        elif label == 'Normal':
+            return 2
+
+reverse_label_map = {0: 'Benign', 1: 'Malignant', 2: 'Normal'}
+
+
+
+
+
[ ]:
+
+
+
os.path.split(os.path.splitext(train_annotation_path)[0] + '.csv')
+
+
+
+
+
[ ]:
+
+
+
train_file_dir, train_file_name =  os.path.split(os.path.splitext(train_annotation_path)[0] +'.csv')
+train_nlp_dataset = dataset_factory.load_dataset(dataset_dir=train_file_dir,
+                       use_case='text_classification',
+                       framework='pytorch',
+                       dataset_name='brca',
+                       csv_file_name=train_file_name,
+                       label_map_func=label_map_func,
+                       class_names=['Benign', 'Malignant', 'Normal'],
+                       header=True,
+                       label_col=label_col,
+                       shuffle_files=True,
+                       exclude_cols=[2])
+
+test_file_dir, test_file_name =  os.path.split(os.path.splitext(test_annotation_path)[0] +'.csv')
+test_nlp_dataset = dataset_factory.load_dataset(dataset_dir=test_file_dir,
+                       use_case='text_classification',
+                       framework='pytorch',
+                       dataset_name='brca',
+                       csv_file_name=test_file_name,
+                       label_map_func=label_map_func,
+                       class_names=['Benign', 'Malignant', 'Normal'],
+                       header=True,
+                       label_col=label_col,
+                       shuffle_files=True,
+                       exclude_cols=[2])
+
+
+
+
+
+

Data Preparation

+
+
[ ]:
+
+
+
train_nlp_dataset.preprocess(nlp_model.hub_name, batch_size=batch_size, max_length=seq_length)
+test_nlp_dataset.preprocess(nlp_model.hub_name, batch_size=batch_size, max_length=seq_length)
+train_nlp_dataset.shuffle_split(train_pct=0.67, val_pct=0.33, shuffle_files=False)
+
+
+
+
+
+

Corpus analysis

+

Let’s take a look at the word distribution across each label to get an idea of what BERT will be training on, as well as to make sure that our training and validation datasets are distributed similarly.

+
+
[ ]:
+
+
+
import plotly.express as px
+
+train_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}
+for label in train_nlp_dataset.train_subset['label']:
+    train_label_count[reverse_label_map[int(label)]] += 1
+
+print('Training label distribution:')
+train_label_count
+
+
+
+
+
[ ]:
+
+
+
valid_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}
+for label in train_nlp_dataset.validation_subset['label']:
+    valid_label_count[reverse_label_map[int(label)]] += 1
+
+print('Validation label distribution:')
+valid_label_count
+
+
+
+
+
[ ]:
+
+
+
test_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}
+for label in test_nlp_dataset.dataset['label']:
+    test_label_count[reverse_label_map[int(label)]] += 1
+
+print('Testing label distribution:')
+test_label_count
+
+
+
+
+
[ ]:
+
+
+
form = {'type':'domain'}
+
+fig = make_subplots(rows=1, cols=3, specs=[[form, form, form]], subplot_titles=['Training', 'Validation', 'Testing'])
+fig.add_trace(go.Pie(values=list(train_label_count.values()), labels=list(train_label_count.keys())), 1, 1)
+fig.add_trace(go.Pie(values=list(valid_label_count.values()), labels=list(valid_label_count.keys())), 1, 2)
+fig.add_trace(go.Pie(values=list(test_label_count.values()), labels=list(test_label_count.keys())), 1, 3)
+
+
+fig.update_layout(height=600, width=800, title_text="Label Distributions")
+fig.show()
+
+
+
+
+
[ ]:
+
+
+
nltk.download('punkt')
+nltk.download('words')
+
+def get_mc_df(words_list, n=50, ignored_words=[]):
+    '''
+    Gets the n most common non-punctuation tokens from a list of words and returns a pd DataFrame for Plotly
+    '''
+
+    frequency_dict = nltk.FreqDist(words_list)
+    most_common = frequency_dict.most_common(n=500)
+
+
+    final_fd = pd.DataFrame({'Token': [], 'Frequency': []})
+    cnt = 0
+    idx = 0
+    while cnt < n:
+        if most_common[idx][0] in string.punctuation:
+            print(f'{most_common[idx][0]} is not a word')
+        else:
+            final_fd.loc[len(final_fd.index)] = [most_common[idx][0], most_common[idx][1]]
+            cnt += 1
+        idx += 1
+
+    return final_fd
+
+
+
+
+
[ ]:
+
+
+
df = pd.read_csv(train_annotation_path)
+
+# get string arrays of symptoms for each label
+mal_text = list(df.loc[df['label'] == 'Malignant']['symptoms'])
+nor_text = list(df.loc[df['label'] == 'Normal']['symptoms'])
+ben_text = list(df.loc[df['label'] == 'Benign']['symptoms'])
+
+# get tokenized words for each
+mal_tokenized: list[str] = nltk.word_tokenize(" ".join(mal_text))
+nor_tokenized: list[str] = nltk.word_tokenize(" ".join(nor_text))
+ben_tokenized: list[str] = nltk.word_tokenize(" ".join(ben_text))
+
+# generate the dataframes necessary to plot distributions
+mal_fd = get_mc_df(mal_tokenized)
+nor_fd = get_mc_df(nor_tokenized)
+ben_fd = get_mc_df(ben_tokenized)
+
+
+
+
+
[ ]:
+
+
+
fig = px.bar(mal_fd, x="Token", y='Frequency', color='Frequency', title='Malignant word distribution')
+fig.update(layout_coloraxis_showscale=False)
+fig.show()
+
+
+
+
+
[ ]:
+
+
+
fig = px.bar(nor_fd, x="Token", y='Frequency', color='Frequency', title='Normal word distribution')
+fig.update(layout_coloraxis_showscale=False)
+fig.show()
+
+
+
+
+
[ ]:
+
+
+
fig = px.bar(ben_fd, x="Token", y='Frequency', color='Frequency', title='Benign word distribution')
+fig.update(layout_coloraxis_showscale=False)
+fig.show()
+
+
+
+
+
+

Transfer Learning

+

This step calls the model’s train function with the dataset that was just prepared. The training function will get the pretrained model from HuggingFace and add on a dense layer based on the number of classes in the dataset. The model is then trained using an instance of HuggingFace Trainer for the number of epochs specified. If desired, a native PyTorch loop can be invoked instead of Trainer by setting use_trainer=False.

+
+
[ ]:
+
+
+
import transformers
+transformers.set_seed(1)
+nlp_history = nlp_model.train(train_nlp_dataset, output_dir, epochs=3, use_trainer=True, seed=1)
+
+
+
+
+
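If you prefer the native PyTorch training loop mentioned above, the same call should work with the flag flipped (a sketch, assuming all other arguments stay the same):

nlp_history = nlp_model.train(train_nlp_dataset, output_dir, epochs=3, use_trainer=False, seed=1)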
+

Save the NLP Model

+
+
[ ]:
+
+
+
nlp_model.export(output_dir)
+
+
+
+
+
[ ]:
+
+
+
# This currently isn't showing the correct output for test
+train_nlp_metrics = nlp_model.evaluate(train_nlp_dataset)
+test_nlp_metrics = nlp_model.evaluate(test_nlp_dataset)
+
+
+
+
+
+

Error analysis

+

We can see that BERT has a much better accuracy than the CNN. Nonetheless, similar to the CNN, let’s see where BERT makes mistakes across the three classes using a confusion matrix and ROC and PR curves.

+
+
[ ]:
+
+
+
# get the raw predictions as per-class logits
+# NOTE: added a new flag to predict function
+logit_predictions = nlp_model.predict(test_nlp_dataset.dataset, return_raw=True)['logits']
+#convert logits to probability
+from scipy.special import softmax
+y_pred = softmax(logit_predictions.detach().numpy(), axis=1)
+y_true = test_nlp_dataset.dataset['label'].numpy().astype(int)
+
+
+
+
+
[ ]:
+
+
+
from intel_ai_safety.explainer import metrics
+
+nlp_cm = metrics.confusion_matrix(y_true, y_pred, test_nlp_dataset.class_names)
+nlp_cm.visualize()
+print(nlp_cm.report)
+
+
+
+
+
[ ]:
+
+
+
plotter = metrics.plot(y_true, y_pred, test_nlp_dataset.class_names)
+plotter.pr_curve()
+
+
+
+
+
[ ]:
+
+
+
plotter.roc_curve()
+
+
+
+
+
+

Explanation

+
+
[ ]:
+
+
+
mal_idxs = np.where(test_nlp_dataset.dataset['label'].numpy() == label_map_func('Malignant'))[0].tolist()
+ben_preds = np.where(nlp_model.predict(test_nlp_dataset.dataset).numpy() == label_map_func('Benign'))[0].tolist()
+
+# get mal examples that were misclassified as ben
+mal_classified_as_ben = list(set(mal_idxs).intersection(ben_preds))
+
+
+
+
+
[ ]:
+
+
+
mal_classified_as_ben_text = test_nlp_dataset.get_text(test_nlp_dataset.dataset[mal_classified_as_ben]['input_ids'])
+
+
+
+
+
[ ]:
+
+
+
# define a prediction function
+def f(x):
+    encoded_input = nlp_model._tokenizer(x.tolist(), padding=True, return_tensors='pt')
+    outputs = nlp_model._model(**encoded_input)
+    return softmax(outputs.logits.detach().numpy(), axis=1)
+
+
+
+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.attributions import attributions
+partition_explainer = attributions.partition_text_explainer(f, test_nlp_dataset.class_names, np.array(mal_classified_as_ben_text), r"\W+")
+partition_explainer.visualize()
+
+
+
+
+
+

Int8 Quantization

+

We can use the Intel® Extension for Transformers to quantize the trained model for faster inference. If you want to run this part of the notebook, make sure you have intel-extension-for-transformers installed in your environment.

+
+
[ ]:
+
+
+
! pip install --no-cache-dir intel-extension-for-transformers==1.4
+
+
+
+
+
[ ]:
+
+
+
from intel_extension_for_transformers.transformers.trainer import NLPTrainer
+from intel_extension_for_transformers.transformers import objectives, OptimizedModel, QuantizationConfig
+from intel_extension_for_transformers.transformers import metrics as nlptk_metrics
+
+
+
+
+
[ ]:
+
+
+
# Set up quantization config
+tune_metric = nlptk_metrics.Metric(
+    name="eval_accuracy",
+    greater_is_better=True,
+    is_relative=True,
+    criterion=quantization_criterion,
+    weight_ratio=None,
+)
+
+objective = objectives.Objective(
+    name="performance", greater_is_better=True, weight_ratio=None
+)
+
+quantization_config = QuantizationConfig(
+    approach="PostTrainingDynamic",
+    max_trials=quantization_max_trial,
+    metrics=[tune_metric],
+    objectives=[objective],
+)
+
+# Set up metrics computation
+def compute_metrics(p: EvalPrediction):
+    preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
+    preds = np.argmax(preds, axis=1)
+    return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
+
+
+
+
+
[ ]:
+
+
+
quantizer = NLPTrainer(model=nlp_model._model,
+                       train_dataset=train_nlp_dataset.train_subset,
+                       eval_dataset=train_nlp_dataset.validation_subset,
+                       compute_metrics=compute_metrics,
+                       tokenizer=train_nlp_dataset._tokenizer)
+quantized_model = quantizer.quantize(quant_config=quantization_config)
+
+
+
+
+
[ ]:
+
+
+
results = quantizer.evaluate()
+eval_acc = results.get("eval_accuracy")
+print("Final Eval Accuracy: {:.5f}".format(eval_acc))
+
+
+
+
+

Save the Quantized NLP Model

+
+
[ ]:
+
+
+
quantizer.save_model(os.path.join(output_dir, 'quantized_BERT'))
+nlp_model._model.config.save_pretrained(os.path.join(output_dir, 'quantized_BERT'))
+
+
+
+
+
+
+

Error analysis

+

The quantized BERT model has the same validation accuracy as its stock counterpart. This does not mean, however, that they perform the same. Let’s look at the confusion matrix and PR and ROC curves to see if the errors are different.

+
+
[ ]:
+
+
+
+# get the raw predictions as per-class logits
+# NOTE: added a new flag to predict function
+logit_predictions = quantizer.predict(test_nlp_dataset.dataset)[0]
+#convert logits to probability
+from scipy.special import softmax
+y_pred = softmax(logit_predictions, axis=1)
+y_true = test_nlp_dataset.dataset['label'].numpy().astype(int)
+
+
+
+
+
[ ]:
+
+
+
quant_cm = metrics.confusion_matrix(y_true, y_pred, test_nlp_dataset.class_names)
+quant_cm.visualize()
+print(quant_cm.report)
+
+
+
+
+
[ ]:
+
+
+
plotter = metrics.plot(y_true, y_pred, test_nlp_dataset.class_names)
+plotter.pr_curve()
+
+
+
+
+
+
+

Citations

+
+

Data Citation

+

Khaled R., Helal M., Alfarghaly O., Mokhtar O., Elkorany A., El Kassas H., Fahmy A. Categorized Digital Database for Low energy and Subtracted Contrast Enhanced Spectral Mammography images [Dataset]. (2021) The Cancer Imaging Archive. DOI: 10.7937/29kw-ae92

+
+
+

Publication Citation

+

Khaled, R., Helal, M., Alfarghaly, O., Mokhtar, O., Elkorany, A., El Kassas, H., & Fahmy, A. Categorized contrast enhanced mammography dataset for diagnostic and artificial intelligence research. (2022) Scientific Data, Volume 9, Issue 1. DOI: 10.1038/s41597-022-01238-0

+
+
+

TCIA Citation

+

Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository, Journal of Digital Imaging, Volume 26, Number 6, December, 2013, pp 1045-1057. DOI: 10.1007/s10278-013-9622-7

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/Multimodal_Cancer_Detection.ipynb b/v1.1.0/notebooks/Multimodal_Cancer_Detection.ipynb new file mode 100644 index 0000000..d28fb99 --- /dev/null +++ b/v1.1.0/notebooks/Multimodal_Cancer_Detection.ipynb @@ -0,0 +1,1512 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "2e3e807d", + "metadata": {}, + "source": [ + "# Multimodal Breast Cancer Detection Explainability using the Intel® Explainable AI API\n", + "\n", + "This application is a multimodal solution for predicting cancer diagnosis using categorized contrast enhanced mammography data and radiology notes. It trains two models - one for image classification and the other for text classification.\n", + "\n", + "## Import Dependencies and Setup Directories" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4c892925-7970-4e5d-9d3f-b413a7d7c401", + "metadata": {}, + "outputs": [], + "source": [ + "# This notebook requires the latest version of intel-transfer-learning (v0.7.0)\n", + "# The package and directions to install it can be found at its repo:\n", + "# https://github.com/Intel/transfer-learning\n", + "\n", + "! pip install --no-cache-dir nltk docx2txt openpyxl et-xmlfile schema" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ad8a9723-5dbe-44eb-9baa-eca188e435f2", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "import os\n", + "import pandas as pd\n", + "import tensorflow as tf\n", + "import torch\n", + "\n", + "from transformers import EvalPrediction, TrainingArguments, pipeline\n", + "\n", + "# tlt imports\n", + "from tlt.datasets import dataset_factory\n", + "from tlt.models import model_factory\n", + "\n", + "# explainability imports\n", + "import matplotlib.pyplot as plt\n", + "import plotly.express as px\n", + "from plotly.subplots import make_subplots\n", + "import plotly.graph_objects as go\n", + "import nltk\n", + "from nltk.corpus import words\n", + "import string\n", + "import shap\n", + "import warnings\n", + "warnings.filterwarnings( \"ignore\", module = \"matplotlib\\..*\" )\n", + "\n", + "# Specify the root directory where the images and annotations are located\n", + "dataset_dir = os.path.join(os.environ[\"DATASET_DIR\"]) if \"DATASET_DIR\" in os.environ else \\\n", + " os.path.join(os.environ[\"HOME\"], \"dataset\")\n", + "\n", + "# Specify a directory for output\n", + "output_dir = os.environ[\"OUTPUT_DIR\"] if \"OUTPUT_DIR\" in os.environ else \\\n", + " os.path.join(os.environ[\"HOME\"], \"output\")\n", + "\n", + "print(\"Dataset directory:\", dataset_dir)\n", + "print(\"Output directory:\", output_dir)" + ] + }, + { + "cell_type": "markdown", + "id": "bb53162b", + "metadata": { + "tags": [] + }, + "source": [ + "## Dataset\n", + "\n", + "Download the images and radiology annotations from https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=109379611 and save in the path `/brca/data`. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ad93d752-d650-4201-8ccd-0440218b09f4", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "! python prepare_nlp_data.py --data_root {dataset_dir}/brca/data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "27529580-9606-427b-ba37-2493a2a935e3", + "metadata": { + "jupyter": { + "source_hidden": true + }, + "tags": [] + }, + "outputs": [], + "source": [ + "! 
python prepare_vision_data.py --data_root {dataset_dir}/brca/data" + ] + }, + { + "cell_type": "markdown", + "id": "bce1a16c-3528-49e1-af6c-6c32bb122964", + "metadata": {}, + "source": [ + "Image files should have the .jpg extension and be arranged in subfolders for each class. The annotation file should be a .csv. The final brca dataset directory should look something like this:\n", + "\n", + "```\n", + "brca\n", + " ├── data\n", + " │ ├── PKG - CDD-CESM\n", + " │ ├── Medical reports for cases .zip\n", + " │ ├── Radiology manual annotations.xlsx\n", + " │ └── Radiology_hand_drawn_segmentations_v2.csv\n", + " ├── annotation\n", + " │ └── annotation.csv\n", + " └── vision_images\n", + " ├── Benign\n", + " │ ├── P100_L_CM_CC.jpg\n", + " │ ├── P100_L_CM_MLO.jpg\n", + " │ └── ...\n", + " ├── Malignant\n", + " │ ├── P102_R_CM_CC.jpg\n", + " │ ├── P102_R_CM_MLO.jpg\n", + " │ └── ...\n", + " └── Normal\n", + " ├── P100_R_CM_CC.jpg\n", + " ├── P100_R_CM_MLO.jpg\n", + " └── ...\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bd9c3ef2", + "metadata": {}, + "outputs": [], + "source": [ + "# User input needed - supply the path to the images in the dataset_dir according to your system\n", + "source_image_path = os.path.join(dataset_dir, 'brca', 'data', 'vision_images')\n", + "image_path = source_image_path\n", + "\n", + "# User input needed - supply the path and name of the annotation file in the dataset_dir\n", + "source_annotation_path = os.path.join(dataset_dir, 'brca', 'data', 'annotation', 'annotation.csv')\n", + "annotation_path = source_annotation_path" + ] + }, + { + "cell_type": "markdown", + "id": "245df47c", + "metadata": {}, + "source": [ + "### Optional: Group Data by Patient ID\n", + "\n", + "This section is not required to run the workload, but it is helpful to assign all of a subject's records to be entirely in the train set or test set. This section will do a random stratification based on patient ID and save new copies of the grouped data files." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "44dbd990", + "metadata": {}, + "outputs": [], + "source": [ + "from data_utils import split_images, split_annotation\n", + "\n", + "grouped_image_path = '{}_grouped'.format(source_image_path)\n", + "\n", + "if os.path.isdir(grouped_image_path):\n", + " print(\"Grouped directory already exists and will be used: {}\".format(grouped_image_path))\n", + "else:\n", + " split_images(source_image_path, grouped_image_path)\n", + "\n", + "train_image_path = os.path.join(grouped_image_path, 'train')\n", + "test_image_path = os.path.join(grouped_image_path, 'test')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0d21bdff", + "metadata": {}, + "outputs": [], + "source": [ + "from data_utils import split_images, split_annotation\n", + "\n", + "file_dir, file_name = os.path.split(source_annotation_path)\n", + "grouped_annotation_path = os.path.join(file_dir, '{}_grouped.csv'.format(os.path.splitext(file_name)[0]))\n", + "\n", + "if os.path.isfile(grouped_annotation_path):\n", + " print(\"Grouped annotation already exists and will be used: {}\".format(grouped_annotation_path))\n", + "else:\n", + " train_dataset, test_dataset = split_annotation(file_dir, file_name, train_image_path, test_image_path)\n", + " train_dataset.to_csv(grouped_annotation_path, index=False)\n", + " test_dataset.to_csv(grouped_annotation_path[:-4] + '_test.csv', index=False)\n", + " print('Grouped training annotation saved to: {}'.format(grouped_annotation_path))\n", + " print('Grouped testing annotation saved to: {}'.format(grouped_annotation_path[:-4] + '_test.csv'))\n", + "\n", + "train_annotation_path = grouped_annotation_path\n", + "test_annotation_path = grouped_annotation_path[:-4] + '_test.csv'\n", + "label_col = 0 # Index of the label column in the grouped data file" + ] + }, + { + "cell_type": "markdown", + "id": "01e9e5cf", + "metadata": { + "tags": [] + }, + "source": [ + "## Model 1: Image Classification with PyTorch\n", + "\n", + "### Get the Model and Dataset\n", + "Call the model factory to get a pretrained model from PyTorch Hub and the dataset factory to load the images from their location. The `get_model` function returns a model object that will later be used for training. We will use resnet50 by default." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d9c93b18", + "metadata": {}, + "outputs": [], + "source": [ + "viz_model = model_factory.get_model(model_name=\"resnet50\", framework='pytorch')\n", + "\n", + "# Load the dataset from the custom dataset path\n", + "train_viz_dataset = dataset_factory.load_dataset(dataset_dir=train_image_path,\n", + " use_case='image_classification',\n", + " framework='pytorch')\n", + "\n", + "test_viz_dataset = dataset_factory.load_dataset(dataset_dir=test_image_path,\n", + " use_case='image_classification',\n", + " framework='pytorch')\n", + "\n", + "print(\"Class names:\", str(train_viz_dataset.class_names))" + ] + }, + { + "cell_type": "markdown", + "id": "6472bedd", + "metadata": { + "tags": [] + }, + "source": [ + "### Data Preparation\n", + "Once you have your dataset loaded, use the following cell to preprocess the dataset. We split the images into training and validation subsets, resize them to match the model, and then batch the images." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "98dcf057", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "batch_size = 16\n", + "# shuffle split the training dataset\n", + "train_viz_dataset.shuffle_split(train_pct=.80, val_pct=.20, seed=3)\n", + "train_viz_dataset.preprocess(viz_model.image_size, batch_size=batch_size)\n", + "test_viz_dataset.preprocess(viz_model.image_size, batch_size=batch_size)" + ] + }, + { + "cell_type": "markdown", + "id": "df2cf08c-aecd-4746-9041-ec90cdb2795f", + "metadata": { + "tags": [] + }, + "source": [ + "### Image dataset analysis\n", + "\n", + "Let's take a look at the dataset and verify that we are loading the data correctly. This includes looking at the label distributions across the training, validation, and test subsets and visually confirming the images themselves." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4af33b10-8f92-4212-bc83-be83d987d7f6", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# Create a label map function and reverse label map for the dataset\n", + "def label_map_func(label):\n", + " if label == 'Benign':\n", + " return 0\n", + " elif label == 'Malignant':\n", + " return 1\n", + " elif label == 'Normal':\n", + " return 2\n", + " \n", + "reverse_label_map = {0: 'Benign', 1: 'Malignant', 2: 'Normal'}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b3b131c6-969c-4c7b-9e20-de0ab6de67af", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "train_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}\n", + "\n", + "for x, y in train_viz_dataset.train_subset:\n", + " train_label_count[reverse_label_map[y]] += 1\n", + "\n", + "print('Training label distribution:')\n", + "train_label_count" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4007cf93-868c-445f-b28f-f3383ecd90ad", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "valid_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}\n", + "\n", + "for x, y in train_viz_dataset.validation_subset:\n", + " valid_label_count[reverse_label_map[y]] += 1\n", + "\n", + "print('Validation label distribution:')\n", + "valid_label_count" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6d600e39-c5bc-4224-9798-7d3dea87844a", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "test_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}\n", + "\n", + "for x, y in test_viz_dataset.dataset:\n", + " test_label_count[reverse_label_map[y]] += 1\n", + "\n", + "print('Testing label distribution:')\n", + "test_label_count" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6e0bad41-d9aa-4c73-8d52-1a5905124bd3", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# get dataset distributions\n", + "form = {'type':'domain'}\n", + "fig = make_subplots(rows=1, cols=3, specs=[[form, form, form]], subplot_titles=['Training', 'Validation', 'Testing'])\n", + "fig.add_trace(go.Pie(values=list(train_label_count.values()), labels=list(train_label_count.keys())), 1, 1)\n", + "fig.add_trace(go.Pie(values=list(valid_label_count.values()), labels=list(valid_label_count.keys())), 1, 2)\n", + "fig.add_trace(go.Pie(values=list(test_label_count.values()), labels=list(test_label_count.keys())), 1, 3)\n", + "\n", + "fig.update_layout(height=600, width=800, title_text=\"Label Distributions\")\n", + "fig.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id":
"6acc455c-8461-4d35-a8f7-752aa693a44c", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "def get_examples(dataset, reverse_label_map, n=6):\n", + " # get n images from each label in dataset and return as dictionary\n", + " \n", + " loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)\n", + " \n", + " example_images = {'Benign': [], 'Malignant': [], 'Normal': []}\n", + " for x, y in loader:\n", + " for i, label in enumerate(y):\n", + " label_name = reverse_label_map[int(label)]\n", + " if len(example_images[label_name]) < n:\n", + " example_images[label_name].append(x[i])\n", + " if len(example_images['Malignant']) == n and\\\n", + " len(example_images['Benign']) == n and\\\n", + " len(example_images['Normal']) == n:\n", + " break\n", + " return example_images" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "780aecc6-ded0-4bdd-aa29-38ef1207ae4b", + "metadata": {}, + "outputs": [], + "source": [ + "# plot some training examples\n", + "fig = plt.figure(figsize=(12,6))\n", + "columns = 6\n", + "rows = 3\n", + "fig.suptitle('Training Torch Tensor examples', size=16)\n", + "\n", + "\n", + "train_example_images = get_examples(train_viz_dataset.train_subset, reverse_label_map)\n", + "for i in range(1, columns*rows +1):\n", + " idx = i - 1\n", + " if idx < 6: \n", + " img = train_example_images['Benign'][idx]\n", + " elif idx >= 6 and idx < 12:\n", + " img = train_example_images['Malignant'][idx - 6]\n", + " else:\n", + " img = train_example_images['Normal'][idx - 12]\n", + "\n", + " fig.add_subplot(rows, columns, i)\n", + " plt.axis('off')\n", + " plt.tight_layout()\n", + " if idx == 0 or idx == 6 or idx == 12:\n", + " plt.axis('on')\n", + " label_name = reverse_label_map[int(idx/6)]\n", + " plt.ylabel(label_name, fontsize=16)\n", + " plt.tick_params(axis='x', bottom=False, labelbottom=False)\n", + " plt.tick_params(axis='y', left=False, labelleft=False)\n", + "\n", + " plt.imshow(torch.movedim(img, 0, 2).detach().cpu().numpy().astype(np.uint8))\n", + " \n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c11500c8-f0c5-4732-86ca-907de1617cbd", + "metadata": {}, + "outputs": [], + "source": [ + "# plot some validation images\n", + "fig = plt.figure(figsize=(12,6))\n", + "columns = 6\n", + "rows = 3\n", + "fig.suptitle('Validation Torch Tensor examples', size=16)\n", + "\n", + "\n", + "valid_example_images = get_examples(train_viz_dataset.validation_subset, reverse_label_map)\n", + "\n", + "for i in range(1, columns*rows +1):\n", + " idx = i - 1\n", + " if idx < 6: \n", + " img = valid_example_images['Benign'][idx]\n", + " elif idx >= 6 and idx < 12:\n", + " img = valid_example_images['Malignant'][idx - 6]\n", + " else:\n", + " img = valid_example_images['Normal'][idx - 12]\n", + " \n", + " fig.add_subplot(rows, columns, i)\n", + " plt.axis('off')\n", + " plt.tight_layout()\n", + " if idx == 0 or idx == 6 or idx == 12:\n", + " plt.axis('on')\n", + " label_name = reverse_label_map[int(idx/6)]\n", + " plt.ylabel(label_name, fontsize=16)\n", + " plt.tick_params(axis='x', bottom=False, labelbottom=False)\n", + " plt.tick_params(axis='y', left=False, labelleft=False)\n", + " \n", + " plt.imshow(torch.movedim(img, 0, 2).detach().cpu().numpy().astype(np.uint8))\n", + " \n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "id": "f2f49c77", + "metadata": {}, + "source": [ + "### Transfer Learning\n", + "\n", + "This step calls the model's train function with the dataset that was just prepared. 
The training function will get the PyTorch feature vector and add on a dense layer based on the number of classes in the dataset. The model is then compiled and trained for the number of epochs specified in the argument. We also add two more dense layers using the `extra_layers` parameter.\n", + "\n", + "To optionally insert additional dense layers between the base model and output layer, `extra_layers=[1024, 512]` will insert two dense layers, the first with 1024 neurons and the second with 512 neurons." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "21a92e4e", + "metadata": {}, + "outputs": [], + "source": [ + "viz_history = viz_model.train(train_viz_dataset, output_dir=output_dir, epochs=5, seed=10, extra_layers=[1024, 512], ipex_optimize=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4db8d57d-a556-43fb-a7e9-3ba7a5328dd9", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "validation_viz_metrics = viz_model.evaluate(train_viz_dataset)\n", + "test_viz_metrics = viz_model.evaluate(test_viz_dataset)\n", + "print(validation_viz_metrics)\n", + "print(test_viz_metrics)" + ] + }, + { + "cell_type": "markdown", + "id": "bce6bafe", + "metadata": { + "tags": [] + }, + "source": [ + "### Save the Computer Vision Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "093905b2", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "saved_model_dir = viz_model.export(output_dir)" + ] + }, + { + "cell_type": "markdown", + "id": "10c4b35b-79d0-4acc-abae-145f81a5e4be", + "metadata": { + "tags": [] + }, + "source": [ + "### Error Analysis\n", + "\n", + "Analyzing the errors via a confusion matrix and ROC and PR curves will help us identify if our model is exhibiting any label bias." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b9f1659f-71f1-4d46-bae4-3dedaac0990b", + "metadata": {}, + "outputs": [], + "source": [ + "from scipy.special import softmax\n", + "y_pred = []\n", + "# get the logit predictions and then convert to probabilities\n", + "for batch in test_viz_dataset.dataset:\n", + " y_pred.append(softmax(viz_model._model(batch[0][None, :]).detach().numpy())[0])\n", + "\n", + "y_true = [y for x, y in test_viz_dataset.dataset]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f0249827-a1b7-45f2-9c03-3deca2482362", + "metadata": {}, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer import metrics\n", + "viz_cm = metrics.confusion_matrix(y_true, y_pred, test_viz_dataset.class_names)\n", + "viz_cm.visualize()\n", + "print(viz_cm.report)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4d6141e9-5e2a-4ada-9269-47df260b3532", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "plotter = metrics.plot(y_true, y_pred, test_viz_dataset.class_names)\n", + "plotter.pr_curve()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "77d215ac-054b-42c5-be2e-e4aa3233311e", + "metadata": {}, + "outputs": [], + "source": [ + "plotter.roc_curve()" + ] + }, + { + "cell_type": "markdown", + "id": "ff916933-c240-4ef2-b1ca-73c962297958", + "metadata": {}, + "source": [ + "### Explainability" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3dc8c2de-bb77-4183-a3d5-6e9372bc3bc7", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# convert the predicted class probabilities to index labels\n", + "y_pred_labels = np.array(y_pred).argmax(axis=1)\n", +
"\n", + "# get the malignant indexes and then the normal and benign prediction indexes\n", + "mal_idxs = np.where(np.array(y_true) == label_map_func('Malignant'))[0].tolist()\n", + "nor_preds = np.where(np.array(y_pred_labels) == label_map_func('Normal'))[0].tolist()\n", + "ben_preds = np.where(np.array(y_pred_labels) == label_map_func('Benign'))[0].tolist()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "af69d2d6-a675-45d5-bd3f-02f009853576", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# get mal examples that were misclassified as nor\n", + "mal_classified_as_nor = list(set(mal_idxs).intersection(nor_preds))\n", + "\n", + "# get mal examples that were misclassified as ben\n", + "mal_classified_as_ben = list(set(mal_idxs).intersection(ben_preds))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1133212d-461e-45a1-82dc-cf2b721a5ef6", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# get the images for all mals predicted as nors\n", + "mal_as_nor_images = [test_viz_dataset.dataset[i][0] for i in mal_classified_as_nor]\n", + "\n", + "# get the images for all mals predicted as bens\n", + "mal_as_ben_images = [test_viz_dataset.dataset[i][0] for i in mal_classified_as_ben]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5ba81869-f659-41fb-9a4f-af8f97c6c4c7", + "metadata": {}, + "outputs": [], + "source": [ + "from skimage import io\n", + "# plot 14 mal_as_nor images\n", + "fig = plt.figure(figsize=(12,6))\n", + "columns = 7\n", + "rows = 2\n", + "\n", + "for i in range(1, columns*rows +1):\n", + " if i == len(mal_as_nor_images):\n", + " break\n", + " idx = i - 1\n", + " \n", + " fig.add_subplot(rows, columns, i)\n", + " plt.axis('off')\n", + " plt.tight_layout()\n", + "\n", + " plt.imshow(torch.movedim(mal_as_nor_images[idx], 0, 2).detach().cpu().numpy().astype(np.uint8))\n", + "\n", + "fig.suptitle('Malignant predicted as Normal', fontsize=18)\n", + "plt.tight_layout()\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fda72f36-59c7-4e2c-afc4-e750b969a5ed", + "metadata": {}, + "outputs": [], + "source": [ + "# let's calculate gradcam on the 0th, 3rd and 5th images since they\n", + "# seem to have the clearest visual of a malignant tumor\n", + "from intel_ai_safety.explainer.cam import pt_cam as cam\n", + "\n", + "images = [torch.movedim(mal_as_nor_images[0], 0, 2).detach().cpu().numpy().astype(np.uint8),\n", + " torch.movedim(mal_as_nor_images[3], 0, 2).detach().cpu().numpy().astype(np.uint8),\n", + " torch.movedim(mal_as_nor_images[5], 0, 2).detach().cpu().numpy().astype(np.uint8)]\n", + "\n", + "\n", + "final_image_dim = (224, 224)\n", + "targetLayer = viz_model._model.layer4\n", + "xgc = cam.x_gradcam(viz_model._model, targetLayer, \n", + " label_map_func('Normal'), \n", + " images[0],\n", + " final_image_dim,\n", + " 'cpu')\n", + "\n", + "xgc.visualize()\n", + "\n", + "xgc = cam.x_gradcam(viz_model._model, targetLayer, \n", + " label_map_func('Normal'), \n", + " images[1],\n", + " final_image_dim,\n", + " 'cpu')\n", + "\n", + "xgc.visualize()\n", + "\n", + "xgc = cam.x_gradcam(viz_model._model, targetLayer, \n", + " label_map_func('Normal'), \n", + " images[2],\n", + " final_image_dim,\n", + " 'cpu')\n", + "\n", + "xgc.visualize()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "85e989c2-caf7-4c9b-8496-bf2cc4d72ecb", + "metadata": {}, + "outputs": [], + "source": [ + "# plot 14 mal_as_ben images\n", + "fig =
plt.figure(figsize=(12,6))\n", + "columns = 7\n", + "rows = 2\n", + "\n", + "for i in range(1, columns*rows +1):\n", + " idx = i - 1\n", + " if idx == len(mal_as_ben_images):\n", + " break\n", + " \n", + " fig.add_subplot(rows, columns, i)\n", + " plt.axis('off')\n", + " plt.tight_layout()\n", + "\n", + " plt.imshow(torch.movedim(mal_as_ben_images[idx], 0, 2).detach().cpu().numpy().astype(np.uint8))\n", + "\n", + "fig.suptitle('Malignant predicted as Benign', fontsize=18)\n", + "plt.tight_layout()\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f9cb7a71-3868-421b-9a0a-8e11890a6e15", + "metadata": {}, + "outputs": [], + "source": [ + "# let's calculate gradcam on the 0th, 1st and 2nd images since they\n", + "# seem to have the clearest visual of a malignant tumor\n", + "\n", + "images = [torch.movedim(mal_as_ben_images[0], 0, 2).detach().cpu().numpy().astype(np.uint8),\n", + " torch.movedim(mal_as_ben_images[1], 0, 2).detach().cpu().numpy().astype(np.uint8),\n", + " torch.movedim(mal_as_ben_images[2], 0, 2).detach().cpu().numpy().astype(np.uint8)]\n", + " \n", + " \n", + "\n", + "final_image_dim = (224, 224)\n", + "targetLayer = viz_model._model.layer4\n", + "xgc = cam.x_gradcam(viz_model._model, targetLayer, \n", + " label_map_func('Benign'), \n", + " images[0],\n", + " final_image_dim,\n", + " 'cpu')\n", + "\n", + "xgc.visualize()\n", + "\n", + "xgc = cam.x_gradcam(viz_model._model, targetLayer, \n", + " label_map_func('Benign'), \n", + " images[1],\n", + " final_image_dim,\n", + " 'cpu')\n", + "\n", + "xgc.visualize()\n", + "\n", + "xgc = cam.x_gradcam(viz_model._model, targetLayer, \n", + " label_map_func('Benign'), \n", + " images[2],\n", + " final_image_dim,\n", + " 'cpu')\n", + "\n", + "xgc.visualize()" + ] + }, + { + "cell_type": "markdown", + "id": "5621b571", + "metadata": { + "tags": [] + }, + "source": [ + "## Model 2: Text Classification with PyTorch\n", + "\n", + "### Get the Model and Dataset\n", + "Now we will call the model factory to get a pretrained model from HuggingFace and load the annotation file using the dataset factory. We will use clinical-bert for this part."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d18cebff", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up NLP parameters\n", + "model_name = 'clinical-bert'\n", + "seq_length = 64\n", + "batch_size = 5\n", + "quantization_criterion = 0.05\n", + "quantization_max_trial = 50 " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d939924f", + "metadata": {}, + "outputs": [], + "source": [ + "nlp_model = model_factory.get_model(model_name=model_name, framework='pytorch')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2e9dff00", + "metadata": {}, + "outputs": [], + "source": [ + "# Create a label map function and reverse label map for the dataset\n", + "def label_map_func(label):\n", + " if label == 'Benign':\n", + " return 0\n", + " elif label == 'Malignant':\n", + " return 1\n", + " elif label == 'Normal':\n", + " return 2\n", + " \n", + "reverse_label_map = {0: 'Benign', 1: 'Malignant', 2: 'Normal'}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "59e510e1-4ca5-49e6-9ec7-b59b5047061c", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "os.path.split(os.path.splitext(train_annotation_path)[0] + '.csv')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "879bad74", + "metadata": {}, + "outputs": [], + "source": [ + "train_file_dir, train_file_name = os.path.split(os.path.splitext(train_annotation_path)[0] +'.csv')\n", + "train_nlp_dataset = dataset_factory.load_dataset(dataset_dir=train_file_dir,\n", + " use_case='text_classification',\n", + " framework='pytorch',\n", + " dataset_name='brca',\n", + " csv_file_name=train_file_name,\n", + " label_map_func=label_map_func,\n", + " class_names=['Benign', 'Malignant', 'Normal'],\n", + " header=True,\n", + " label_col=label_col,\n", + " shuffle_files=True,\n", + " exclude_cols=[2])\n", + "\n", + "test_file_dir, test_file_name = os.path.split(os.path.splitext(test_annotation_path)[0] +'.csv')\n", + "test_nlp_dataset = dataset_factory.load_dataset(dataset_dir=test_file_dir,\n", + " use_case='text_classification',\n", + " framework='pytorch',\n", + " dataset_name='brca',\n", + " csv_file_name=test_file_name,\n", + " label_map_func=label_map_func,\n", + " class_names=['Benign', 'Malignant', 'Normal'],\n", + " header=True,\n", + " label_col=label_col,\n", + " shuffle_files=True,\n", + " exclude_cols=[2])" + ] + }, + { + "cell_type": "markdown", + "id": "e2b9ddba", + "metadata": {}, + "source": [ + "### Data Preparation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b166b757", + "metadata": {}, + "outputs": [], + "source": [ + "train_nlp_dataset.preprocess(nlp_model.hub_name, batch_size=batch_size, max_length=seq_length)\n", + "test_nlp_dataset.preprocess(nlp_model.hub_name, batch_size=batch_size, max_length=seq_length)\n", + "train_nlp_dataset.shuffle_split(train_pct=0.67, val_pct=0.33, shuffle_files=False)" + ] + }, + { + "cell_type": "markdown", + "id": "4d2543ce-604f-4666-87be-9c1994cc356c", + "metadata": { + "tags": [] + }, + "source": [ + "### Corpus analysis\n", + "Let's take a look at the word distribution across each label to get an idea of what BERT will be training on, as well as to make sure that our training and validation datasets are distributed similarly."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "36180cff-81a6-4f3d-b4ac-b3314152510b", + "metadata": {}, + "outputs": [], + "source": [ + "import plotly.express as px\n", + "\n", + "train_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}\n", + "for label in train_nlp_dataset.train_subset['label']:\n", + " train_label_count[reverse_label_map[int(label)]] += 1\n", + "\n", + "print('Training label distribution:')\n", + "train_label_count" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6cbac9b4-e776-4df8-bb39-1d9625d58ba4", + "metadata": {}, + "outputs": [], + "source": [ + "valid_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}\n", + "for label in train_nlp_dataset.validation_subset['label']:\n", + " valid_label_count[reverse_label_map[int(label)]] += 1\n", + "\n", + "print('Validation label distribution:')\n", + "valid_label_count" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fd17aec7-74b1-4146-bdf9-339fa31bb15e", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "test_label_count = {'Benign': 0, 'Malignant': 0, 'Normal': 0}\n", + "for label in test_nlp_dataset.dataset['label']:\n", + " test_label_count[reverse_label_map[int(label)]] += 1\n", + "\n", + "print('Testing label distribution:')\n", + "test_label_count" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bc4b4c72-268a-4ad5-a3aa-4c47cc9df000", + "metadata": {}, + "outputs": [], + "source": [ + "form = {'type':'domain'}\n", + "\n", + "fig = make_subplots(rows=1, cols=3, specs=[[form, form, form]], subplot_titles=['Training', 'Validation', 'Testing'])\n", + "fig.add_trace(go.Pie(values=list(train_label_count.values()), labels=list(train_label_count.keys())), 1, 1)\n", + "fig.add_trace(go.Pie(values=list(valid_label_count.values()), labels=list(valid_label_count.keys())), 1, 2)\n", + "fig.add_trace(go.Pie(values=list(test_label_count.values()), labels=list(test_label_count.keys())), 1, 3)\n", + "\n", + "\n", + "fig.update_layout(height=600, width=800, title_text=\"Label Distributions\")\n", + "fig.show()\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2af0e920-ea06-487e-ae70-e5709f7c5f12", + "metadata": {}, + "outputs": [], + "source": [ + "nltk.download('punkt')\n", + "nltk.download('words')\n", + "\n", + "def get_mc_df(words_list, n=50, ignored_words=[]):\n", + " '''\n", + " Gets the most common words from a list of words and returns a pd DataFrame for Plotly\n", + " '''\n", + "\n", + " frequency_dict = nltk.FreqDist(words_list)\n", + " most_common = frequency_dict.most_common(n=500)\n", + "\n", + " \n", + " final_fd = pd.DataFrame({'Token': [], 'Frequency': []})\n", + " cnt = 0\n", + " idx = 0\n", + " while(cnt < n):\n", + " if most_common[idx][0] in string.punctuation:\n", + " print(f'{most_common[idx][0]} is not a word')\n", + " else:\n", + " final_fd.loc[len(final_fd.index)] = [most_common[idx][0], most_common[idx][1]]\n", + " cnt += 1\n", + " idx += 1\n", + " \n", + " return final_fd\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d4b57320-a39c-4c40-9613-31af8575e825", + "metadata": {}, + "outputs": [], + "source": [ + "df = pd.read_csv(train_annotation_path)\n", + "\n", + "# get string arrays of symptoms for each label\n", + "mal_text = list(df.loc[df['label'] == 'Malignant']['symptoms'])\n", + "nor_text = list(df.loc[df['label'] == 'Normal']['symptoms'])\n", + "ben_text = list(df.loc[df['label'] == 'Benign']['symptoms'])\n", + "\n", + "# get
tokenized words for each\n", + "mal_tokenized: list[str] = nltk.word_tokenize(\" \".join(mal_text))\n", + "nor_tokenized: list[str] = nltk.word_tokenize(\" \".join(nor_text))\n", + "ben_tokenized: list[str] = nltk.word_tokenize(\" \".join(ben_text))\n", + "\n", + "# generate the dataframes necessary to plot distributions\n", + "mal_fd = get_mc_df(mal_tokenized)\n", + "nor_fd = get_mc_df(nor_tokenized)\n", + "ben_fd = get_mc_df(ben_tokenized)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8212ffc3-00f4-4be7-8424-cd3b54483398", + "metadata": {}, + "outputs": [], + "source": [ + "fig = px.bar(mal_fd, x=\"Token\", y='Frequency', color='Frequency', title='Malignant word distribution')\n", + "fig.update(layout_coloraxis_showscale=False)\n", + "fig.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ff05363a-1834-4502-aba8-7b88635463cb", + "metadata": {}, + "outputs": [], + "source": [ + "fig = px.bar(nor_fd, x=\"Token\", y='Frequency', color='Frequency', title='Normal word distribution')\n", + "fig.update(layout_coloraxis_showscale=False)\n", + "fig.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "705e076e-47a0-4b15-a4b9-9bfe9ddbe052", + "metadata": {}, + "outputs": [], + "source": [ + "fig = px.bar(ben_fd, x=\"Token\", y='Frequency', color='Frequency', title='Benign word distribution')\n", + "fig.update(layout_coloraxis_showscale=False)\n", + "fig.show()" + ] + }, + { + "cell_type": "markdown", + "id": "020303ee", + "metadata": {}, + "source": [ + "### Transfer Learning\n", + "\n", + "This step calls the model's train function with the dataset that was just prepared. The training function will get the pretrained model from HuggingFace and add on a dense layer based on the number of classes in the dataset. The model is then trained using an instance of HuggingFace Trainer for the number of epochs specified. If desired, a native PyTorch loop can be invoked instead of Trainer by setting `use_trainer=False`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "41fb0612", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import transformers\n", + "transformers.set_seed(1)\n", + "nlp_history = nlp_model.train(train_nlp_dataset, output_dir, epochs=3, use_trainer=True, seed=1)" + ] + }, + { + "cell_type": "markdown", + "id": "de70a029", + "metadata": {}, + "source": [ + "### Save the NLP Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ba08847d", + "metadata": {}, + "outputs": [], + "source": [ + "nlp_model.export(output_dir)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5031efd2-6674-416c-abc5-196008efd7e9", + "metadata": {}, + "outputs": [], + "source": [ + "# This currently isn't showing the correct output for test\n", + "train_nlp_metrics = nlp_model.evaluate(train_nlp_dataset)\n", + "test_nlp_metrics = nlp_model.evaluate(test_nlp_dataset)" + ] + }, + { + "cell_type": "markdown", + "id": "5ac80be5-d53f-4add-b10b-b477fa1e2350", + "metadata": { + "tags": [] + }, + "source": [ + "### Error analysis\n", + "\n", + "We can see that BERT has a much better accuracy than the CNN. Nonetheless, similar to the CNN, let's see where BERT makes mistakes across the three classes using a confusion matrix and ROC and PR curves."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3bd77544-21ca-4555-b796-9bb90a5ebf5e", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# get predictions in logits (one-hot-encoded)\n", + "# NOTE: added a new flag to predict function\n", + "logit_predictions = nlp_model.predict(test_nlp_dataset.dataset, return_raw=True)['logits']\n", + "#convert logits to probability\n", + "from scipy.special import softmax\n", + "y_pred = softmax(logit_predictions.detach().numpy(), axis=1)\n", + "y_true = test_nlp_dataset.dataset['label'].numpy().astype(int)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "29f2505e-87e7-4cd0-bc24-c3d36556c2ba", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer import metrics\n", + "\n", + "nlp_cm = metrics.confusion_matrix(y_true, y_pred, test_nlp_dataset.class_names)\n", + "nlp_cm.visualize()\n", + "print(nlp_cm.report)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8e2ca2f5-9473-4cc8-acf8-3c14fd1a9803", + "metadata": {}, + "outputs": [], + "source": [ + "plotter = metrics.plot(y_true, y_pred, test_nlp_dataset.class_names)\n", + "plotter.pr_curve()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "60074e34-2d5a-4d92-b6de-977cc22faa66", + "metadata": {}, + "outputs": [], + "source": [ + "plotter.roc_curve()" + ] + }, + { + "cell_type": "markdown", + "id": "d323fe46-877c-47b6-88ae-5207f91ad636", + "metadata": { + "tags": [] + }, + "source": [ + "### Explanation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "79c6b6ae-a556-453e-91ec-3b806969a7ed", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "mal_idxs = np.where(test_nlp_dataset.dataset['label'].numpy() == label_map_func('Malignant'))[0].tolist()\n", + "ben_preds = np.where(nlp_model.predict(test_nlp_dataset.dataset).numpy() == label_map_func('Benign'))[0].tolist()\n", + "\n", + "# get mal examples that were misclassified as ben\n", + "mal_classified_as_ben = list(set(mal_idxs).intersection(ben_preds))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eb972a96-7b3f-4729-8e25-2de239a4bd30", + "metadata": {}, + "outputs": [], + "source": [ + "mal_classified_as_ben_text = test_nlp_dataset.get_text(test_nlp_dataset.dataset[mal_classified_as_ben]['input_ids'])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "90c63607-3801-4ada-8988-7b5a30999e9f", + "metadata": {}, + "outputs": [], + "source": [ + "# define a prediction function\n", + "def f(x):\n", + " encoded_input = nlp_model._tokenizer(x.tolist(), padding=True, return_tensors='pt')\n", + " outputs = nlp_model._model(**encoded_input)\n", + " return softmax(outputs.logits.detach().numpy(), axis=1)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4ed89c66-1fbf-4036-88d6-39ee914748f0", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.attributions import attributions\n", + "partition_explainer = attributions.partition_text_explainer(f, test_nlp_dataset.class_names, np.array(mal_classified_as_ben_text), r\"\\W+\")\n", + "partition_explainer.visualize()" + ] + }, + { + "cell_type": "markdown", + "id": "45752dd6", + "metadata": { + "tags": [] + }, + "source": [ + "### Int8 Quantization\n", + "\n", + "We can use the [Intel® Extension for Transformers](https://github.com/intel/intel-extension-for-transformers) to quantize the trained model for
faster inference. If you want to run this part of the notebook, make sure you have `intel-extension-for-transformers` installed in your environment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a9ee82fd-da4e-4e5e-9627-3811fe9a9eff", + "metadata": {}, + "outputs": [], + "source": [ + "! pip install --no-cache-dir intel-extension-for-transformers==1.4" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ce0687ce", + "metadata": {}, + "outputs": [], + "source": [ + "from intel_extension_for_transformers.transformers.trainer import NLPTrainer\n", + "from intel_extension_for_transformers.transformers import objectives, OptimizedModel, QuantizationConfig\n", + "from intel_extension_for_transformers.transformers import metrics as nlptk_metrics" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f9557a68", + "metadata": {}, + "outputs": [], + "source": [ + "# Set up quantization config\n", + "tune_metric = nlptk_metrics.Metric(\n", + " name=\"eval_accuracy\",\n", + " greater_is_better=True,\n", + " is_relative=True,\n", + " criterion=quantization_criterion,\n", + " weight_ratio=None,\n", + ")\n", + "\n", + "objective = objectives.Objective(\n", + " name=\"performance\", greater_is_better=True, weight_ratio=None\n", + ")\n", + "\n", + "quantization_config = QuantizationConfig(\n", + " approach=\"PostTrainingDynamic\",\n", + " max_trials=quantization_max_trial,\n", + " metrics=[tune_metric],\n", + " objectives=[objective],\n", + ")\n", + "\n", + "# Set up metrics computation\n", + "def compute_metrics(p: EvalPrediction):\n", + " preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\n", + " preds = np.argmax(preds, axis=1)\n", + " return {\"accuracy\": (preds == p.label_ids).astype(np.float32).mean().item()}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f406d6db", + "metadata": {}, + "outputs": [], + "source": [ + "quantizer = NLPTrainer(model=nlp_model._model,\n", + " train_dataset=train_nlp_dataset.train_subset,\n", + " eval_dataset=train_nlp_dataset.validation_subset,\n", + " compute_metrics=compute_metrics,\n", + " tokenizer=train_nlp_dataset._tokenizer)\n", + "quantized_model = quantizer.quantize(quant_config=quantization_config)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "56e5f2f5", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "results = quantizer.evaluate()\n", + "eval_acc = results.get(\"eval_accuracy\")\n", + "print(\"Final Eval Accuracy: {:.5f}\".format(eval_acc))" + ] + }, + { + "cell_type": "markdown", + "id": "3a3611cc", + "metadata": { + "tags": [] + }, + "source": [ + "#### Save the Quantized NLP Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "20ab77de", + "metadata": {}, + "outputs": [], + "source": [ + "quantizer.save_model(os.path.join(output_dir, 'quantized_BERT'))\n", + "nlp_model._model.config.save_pretrained(os.path.join(output_dir, 'quantized_BERT'))" + ] + }, + { + "cell_type": "markdown", + "id": "ae1add70-c947-40f2-b401-7f5efa11d994", + "metadata": { + "tags": [] + }, + "source": [ + "### Error analysis\n", + "\n", + "The quantized BERT model has the same validation accuracy as its stock counterpart. This does not mean, however, that they perform the same. Let's look at the confusion matrix and PR and ROC curves to see if the errors are different."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "272cbea1-f475-4fb4-8ef0-0e25ca2fb5b3", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# get predictions in logits (one-hot-encoded)\n", + "# NOTE: added a new flag to predict function\n", + "logit_predictions = quantizer.predict(test_nlp_dataset.dataset)[0]\n", + "#convert logits to probability\n", + "from scipy.special import softmax\n", + "y_pred = softmax(logit_predictions, axis=1)\n", + "y_true = test_nlp_dataset.dataset['label'].numpy().astype(int)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d4506509-2bd1-4605-8850-b9e17ec371cb", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "quant_cm = metrics.confusion_matrix(y_true, y_pred, test_nlp_dataset.class_names)\n", + "quant_cm.visualize()\n", + "print(quant_cm.report)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dc92b3d7-6795-49f9-b8ba-7c03ce4f7a7e", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "plotter = metrics.plot(y_true, y_pred, test_nlp_dataset.class_names)\n", + "plotter.pr_curve()" + ] + }, + { + "cell_type": "markdown", + "id": "b69df1a0", + "metadata": { + "tags": [] + }, + "source": [ + "## Citations\n", + "\n", + "### Data Citation\n", + "Khaled R., Helal M., Alfarghaly O., Mokhtar O., Elkorany A., El Kassas H., Fahmy A. Categorized Digital Database for Low energy and Subtracted Contrast Enhanced Spectral Mammography images [Dataset]. (2021) The Cancer Imaging Archive. DOI: [10.7937/29kw-ae92](https://doi.org/10.7937/29kw-ae92)\n", + "\n", + "### Publication Citation\n", + "Khaled, R., Helal, M., Alfarghaly, O., Mokhtar, O., Elkorany, A., El Kassas, H., & Fahmy, A. Categorized contrast enhanced mammography dataset for diagnostic and artificial intelligence research. (2022) Scientific Data, Volume 9, Issue 1. DOI: [10.1038/s41597-022-01238-0](https://doi.org/10.1038/s41597-022-01238-0)\n", + "\n", + "### TCIA Citation\n", + "Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository, Journal of Digital Imaging, Volume 26, Number 6, December, 2013, pp 1045-1057. DOI: [10.1007/s10278-013-9622-7](https://doi.org/10.1007/s10278-013-9622-7)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html b/v1.1.0/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html new file mode 100644 index 0000000..dc3fbd9 --- /dev/null +++ b/v1.1.0/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html @@ -0,0 +1,866 @@ + + + + + + + Explaining Fine Tuned Text Classifier with PyTorch using the Intel® Explainable AI API — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +

Explaining Fine Tuned Text Classifier with PyTorch using the Intel® Explainable AI API

+

This notebook demonstrates fine tuning pretrained models from Hugging Face using text classification datasets from the Hugging Face Datasets catalog or a custom dataset. The notebook uses Intel® Extension for PyTorch*, which extends PyTorch with optimizations for an extra performance boost on Intel hardware.

+
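To make the extension's role concrete, its typical usage is a single optimization call on an eager-mode model. The snippet below is a minimal illustrative sketch, not code from this notebook; the ResNet model is only a stand-in:

import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Load any eager-mode PyTorch model and switch it to inference mode
model = models.resnet50().eval()

# ipex.optimize applies operator fusion and memory-layout optimizations for Intel CPUs
model = ipex.optimize(model)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))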

Please install the dependencies from the pytorch_requirements.txt file before executing this notebook.

+

The notebook performs the following steps:

1. Import dependencies and setup parameters
2. Prepare the dataset
3. Prepare the Model for Fine Tuning and Evaluation
4. Export the model
5. Reload the model and make predictions
6. Get Explanations with Intel® Explainable AI Tools

+
+

1. Import dependencies and setup parameters

+

This notebook assumes that you have already followed the instructions in the README.md to setup a PyTorch environment with all the dependencies required to run the notebook.

+
+
[ ]:
+
+
+
import intel_extension_for_pytorch as ipex
+import logging
+import numpy as np
+import os
+import pandas as pd
+import sys
+import torch
+import warnings
+import typing
+import pickle
+
+from tqdm.auto import tqdm
+from torch.optim import AdamW
+from torch.utils.data import DataLoader
+from datasets import ClassLabel, load_dataset, load_metric, Split
+from datasets import logging as datasets_logging
+from transformers.utils import logging as transformers_logging
+from transformers import (
+    AutoModelForSequenceClassification,
+    AutoTokenizer,
+    Trainer,
+    TrainingArguments,
+    get_scheduler
+)
+from tlt.utils.file_utils import download_and_extract_zip_file
+
+# Set the logging stream to stdout
+for handler in transformers_logging._get_library_root_logger().handlers:
+    handler.setStream(sys.stdout)
+
+sh = datasets_logging.logging.StreamHandler(sys.stdout)
+
+datasets_logging.set_verbosity_error()
+warnings.filterwarnings('ignore')
+os.environ["TRANSFORMERS_NO_ADVISORY_WARNINGS"] = "1"
+
+
+
+
+
[ ]:
+
+
+
# Specify the name of the Hugging Face pretrained model to use (https://huggingface.co/models)
+# For example:
+#   albert-base-v2
+#   bert-base-uncased
+#   distilbert-base-uncased
+#   distilbert-base-uncased-finetuned-sst-2-english
+#   roberta-base
+model_name = "distilbert-base-uncased"
+
+# Define an output directory
+output_dir = os.environ["OUTPUT_DIR"] if "OUTPUT_DIR" in os.environ else \
+    os.path.join(os.environ["HOME"], "output", model_name)
+
+# Define a dataset directory
+dataset_dir = os.environ["DATASET_DIR"] if "DATASET_DIR" in os.environ else \
+    os.path.join(os.environ["HOME"], "dataset")
+
+print("Model name:", model_name)
+print("Output directory:", output_dir)
+print("Dataset directory:", dataset_dir)
+
+
+
+
+
+

2. Prepare the dataset

+

The notebook has two options for getting a dataset:

* Option A: Use a dataset from the Hugging Face Datasets catalog
* Option B: Use a custom dataset (downloaded from another source or from your local system)

+

In both cases, the code ends up defining datasets.Dataset (https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset) objects for the train and evaluation splits.

+
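For orientation, a datasets.Dataset behaves like a list of dictionary examples with an attached features schema. A minimal sketch, using the IMDb dataset from Option A below:

from datasets import load_dataset

raw = load_dataset("imdb")   # DatasetDict with "train", "test", and "unsupervised" splits
train_ds = raw["train"]      # a datasets.Dataset
print(train_ds.features)     # column types, e.g. "text" (string) and "label" (ClassLabel)
print(train_ds[0])           # a single example, returned as a dict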

Execute the following cell to load the tokenizer and declare the base class used for the dataset setup.

+
+
[ ]:
+
+
+
# Load the tokenizer
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+class TextClassificationData():
+    """
+    Base class used for defining the text classification dataset being used. Defines Hugging Face datasets.Dataset
+    objects for train and evaluations splits, along with helper functions for preprocessing the dataset.
+    """
+
+    def __init__(self, dataset_name, tokenizer, sentence1_key, sentence2_key, label_key):
+        self.tokenizer = tokenizer
+        self.dataset_name = dataset_name
+        self.class_labels = None
+
+        # Tokenized train and eval ds
+        self.train_ds = None
+        self.eval_ds = None
+
+        # Column keys
+        self.sentence1_key = sentence1_key
+        self.sentence2_key = sentence2_key
+        self.label_key = label_key
+
+    def tokenize_function(self, examples):
+        # Define the tokenizer args, depending on if the data has 2 sentences or just 1
+        args = ((examples[self.sentence1_key],) if self.sentence2_key is None \
+                 else (examples[self.sentence1_key], examples[self.sentence2_key]))
+        return self.tokenizer(*args, padding="max_length", truncation=True)
+
+    def tokenize_dataset(self, dataset):
+        # Apply the tokenize function to the dataset
+        tokenized_dataset = dataset.map(self.tokenize_function, batched=True)
+
+        # Remove the raw text from the tokenized dataset
+        raw_text_columns = [self.sentence1_key, self.sentence2_key] if self.sentence2_key else [self.sentence1_key]
+        return tokenized_dataset.remove_columns(raw_text_columns)
+
+    def define_train_eval_splits(self, dataset, train_split_name, eval_split_name, train_size=None, eval_size=None):
+        self.train_ds = dataset[train_split_name].shuffle().select(range(train_size)) if train_size \
+            else dataset[train_split_name]
+        self.eval_ds = dataset[eval_split_name].shuffle().select(range(eval_size)) if eval_size \
+            else dataset[eval_split_name]
+
+    def get_label_names(self):
+        if self.class_labels:
+            return self.class_labels.names
+        else:
+            raise ValueError("Class labels were not defined")
+
+    def display_sample(self, split_name="train", sample_size=7):
+        # Display a sample of the raw data
+        sentence1_sample = self.dataset[split_name][self.sentence1_key][:sample_size]
+        sentence2_sample = self.dataset[split_name][self.sentence2_key][:sample_size] if self.sentence2_key else None
+        label_sample = self.dataset[split_name][self.label_key][:sample_size]
+        dataset_sample = zip(sentence1_sample, sentence2_sample, label_sample) if self.sentence2_key \
+            else zip(sentence1_sample, label_sample)
+
+        columns = [self.sentence1_key, self.sentence2_key, self.label_key] if self.sentence2_key else \
+            [self.sentence1_key, self.label_key]
+
+        # Display the sample using a dataframe
+        sample = pd.DataFrame(dataset_sample, columns=columns)
+        return sample.style.hide_index()
+
+
+
+

Now that the base class is defined, either run Option A to use the Hugging Face Dataset catalog or Option B for a custom dataset downloaded from online or from your local system.

+
+

Option A: Use a Hugging Face dataset

+

Hugging Face Datasets has a catalog of datasets that can be specified by name. Information about the dataset is available in the catalog (including information on the size of the dataset and the splits).

+

The next cell gets the IMDb movie review dataset using the Hugging Face datasets API. If the notebook is executed multiple times, the dataset is reused from the dataset directory cache, which shortens subsequent runs.

+

The IMDb dataset in Hugging Face has 3 splits: train, test, and unsupervised. This notebook will be using data from the train split for training and data from the test split for evaluation. The data has 2 columns: text (string with the movie review) and label (integer class label). The code in the next cell is set up to run using the IMDb dataset, so note that if a different dataset is being used, you may need to change the split names and/or the column names.

+
+
[ ]:
+
+
+
class HFDSTextClassificationData(TextClassificationData):
+    """
+    Class used for loading and preprocessing text classification datasets from the Hugging Face datasets catalog
+    """
+
+    def __init__(self, tokenizer, dataset_dir, dataset_name, train_size, eval_size, train_split_name,
+                 eval_split_name, sentence1_key, sentence2_key, label_key):
+        """
+        Initialize the HFDSTextClassificationData class for a text classification dataset from Hugging Face.
+
+        :param tokenizer: Tokenizer to preprocess the dataset
+        :param dataset_dir: Cache directory used when loading the dataset
+        :param dataset_name: Name of the dataset to load from the Hugging Face catalog
+        :param train_size: Size of the training dataset. For quicker training or debug, use a subset of the data.
+                           Set to `None` to use all the data.
+        :param eval_size: Size of the evaluation dataset.
+        :param train_split_name: String specifying which split to load for training (e.g. "train[:80%]"). See the
+                                 https://www.tensorflow.org/datasets/splits documentation for more information on
+                                 defining splits.
+        :param eval_split_name: String specifying the split to load for evaluation.
+        :param sentence1_key: Name of the sentence1 column
+        :param sentence2_key: Name of the sentence2 column or `None` if there's only one text column
+        :param label_key: Name of the label column
+        """
+
+        # Init base class
+        TextClassificationData.__init__(self, dataset_name, tokenizer, sentence1_key, sentence2_key, label_key)
+
+        # Load the dataset from the Hugging Face dataset API
+        self.dataset = load_dataset(dataset_name, cache_dir=dataset_dir)
+
+        # Tokenize the dataset
+        tokenized_dataset = self.tokenize_dataset(self.dataset)
+
+        # Get the training and eval dataset based on the specified dataset sizes
+        self.define_train_eval_splits(tokenized_dataset, train_split_name, eval_split_name, train_size, eval_size)
+
+        # Save the class label information to use later when predicting
+        self.class_labels = self.dataset[train_split_name].features[label_key]
+
+# Name of the Hugging Face dataset
+dataset_name = "imdb"
+
+# For quicker training and debug runs, use a subset of the dataset by specifying the size of the train/eval datasets.
+# Set the sizes to `None` to use the full dataset. The full IMDb dataset has 25,000 training and 25,000 test examples.
+train_dataset_size = 1000
+eval_dataset_size = 1000
+
+# Name of the columns in the dataset (the column names may vary if you are not using the IMDb dataset)
+sentence1_key = "text"
+sentence2_key = None
+label_key = "label"
+
+dataset = HFDSTextClassificationData(tokenizer, dataset_dir, dataset_name, train_dataset_size, eval_dataset_size,
+                                     Split.TRAIN, Split.TEST, sentence1_key, sentence2_key, label_key)
+
+# Print a sample of the data
+dataset.display_sample(Split.TRAIN, sample_size=5)
+
+
+
+
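To point the class at a different catalog dataset, only the dataset name, split names, and column keys need to change. For example, a hypothetical call for the AG News dataset, which also exposes "text"/"label" columns and train/test splits, might look like this:

dataset = HFDSTextClassificationData(tokenizer, dataset_dir, "ag_news",
                                     train_size=1000, eval_size=1000,
                                     train_split_name=Split.TRAIN, eval_split_name=Split.TEST,
                                     sentence1_key="text", sentence2_key=None, label_key="label")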

Skip ahead to Step 3, Prepare the Model for Fine Tuning and Evaluation, to continue using the dataset from the Hugging Face catalog.

+
+
+

Option B: Use a custom dataset

+

Instead of using a dataset from the Hugging Face dataset catalog, a custom dataset from your local system or a download can be used.

+

In this example, we download the SMS Spam Collection dataset. The zip file has a single tab-separated value file with two columns. The first column is the label (ham or spam) and the second column is the text of the SMS message:

+
<ham or spam>   <text>
+<ham or spam>   <text>
+<ham or spam>   <text>
+...
+
+
+
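Before wiring the file into the class below, it can be worth a quick sanity check of the extracted file. A side sketch with pandas, where the file path is an assumption about where the zip was extracted:

import pandas as pd

# The file is tab-separated: the first column is the label (ham/spam), the second is the SMS text
df = pd.read_csv("SMSSpamCollection.csv", sep="\t", names=["label", "text"])
print(df["label"].value_counts())   # rough class balance
print(df.head())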

If you are using a custom dataset that has a similarly formatted csv or tsv file, you can use the class defined below. Create your object by passing in custom values for csv file name, delimiter, the label map, mapping function, etc.

+
+
[ ]:
+
+
+
class CustomCsvTextClassificationData(TextClassificationData):
+    """
+    Class used for loading and preprocessing text classification datasets from CSV files
+    """
+
+    def __init__(self, tokenizer, dataset_name, dataset_dir, data_files, delimiter, label_names, sentence1_key, sentence2_key,
+                 label_key, train_percent=0.8, eval_percent=0.2, train_size=None, eval_size=None, map_function=None):
+        """
+        Initialize the CustomCsvTextClassificationData class for a text classification
+        dataset. The class uses the Hugging Face datasets API to load the CSV file
+        and split it into train and eval datasets based on the specified percentages.
+        If train_size and eval_size are also defined, the datasets are reduced to the
+        specified number of examples.
+
+        :param tokenizer: Tokenizer to preprocess the dataset
+        :param dataset_name: Dataset name for identification purposes
+        :param dataset_dir: Directory where the csv file(s) are located
+        :param data_files: List of data file names
+        :param delimiter: Delimiter for the csv files
+        :param label_names: List of label names
+        :param sentence1_key: Name of the sentence1 column
+        :param sentence2_key: Name of the sentence2 column or `None` if there's only one text column
+        :param label_key: Name of the label column
+        :param train_percent: Decimal value for the percentage of the dataset that should be used for training
+                              (e.g. 0.8 for 80%)
+        :param eval_percent: Decimal value for the percentage of the dataset that should be used for validation
+                             (e.g. 0.2 for 20%)
+        :param train_size: Size of the training dataset. For quicker training or debug, use a subset of the data.
+                           Set to `None` to use all the data.
+        :param eval_size: Size of the eval dataset. Set to `None` to use all the data.
+        :param map_function: (Optional) Map function to apply to the dataset. For example, if the csv file has string
+                             labels instead of numerical values, map function can do the conversion.
+        """
+        # Init base class
+        TextClassificationData.__init__(self, dataset_name, tokenizer, sentence1_key, sentence2_key, label_key)
+
+        if (train_percent + eval_percent) > 1:
+            raise ValueError("The combined value of the train percentage and eval percentage " \
+                             "cannot be greater than 1")
+
+        # Create a list of the column names
+        column_names = [label_key, sentence1_key, sentence2_key] if sentence2_key else [label_key, sentence1_key]
+
+        # Load the dataset using the Hugging Face API
+        self.dataset = load_dataset(dataset_dir, delimiter=delimiter, data_files=data_files, column_names=column_names)
+
+        # Optionally map the dataset labels using the map_function
+        if map_function:
+            self.dataset = self.dataset.map(map_function)
+
+        # Setup the class labels
+        self.class_labels = ClassLabel(num_classes=len(label_names), names=label_names)
+        self.dataset[Split.TRAIN].features[label_key] = self.class_labels
+
+        # Split the dataset based on the percentages defined
+        self.dataset = self.dataset[Split.TRAIN].train_test_split(train_size=train_percent, test_size=eval_percent)
+
+        # Tokenize the dataset
+        tokenized_dataset = self.tokenize_dataset(self.dataset)
+
+        # Get the training and eval dataset based on the specified dataset sizes
+        self.define_train_eval_splits(tokenized_dataset, Split.TRAIN, Split.TEST, train_size, eval_size)
+
+
+# Modify the variables below to use a different dataset or a csv file on your local system.
+# The dataset_dir and csv_name variables should point to a csv file with 2 columns (the label and the text)
+dataset_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip"
+dataset_dir = os.path.join(dataset_dir, "smsspamcollection")
+csv_name = "SMSSpamCollection"
+delimiter = "\t"
+label_names = ["ham", "spam"]
+
+# Rename the file to include the csv extension so that the dataset API knows how to load the file
+renamed_csv = "{}.csv".format(csv_name)
+
+# If we don't already have the csv file, download and extract the zip file to get it.
+if not os.path.exists(os.path.join(dataset_dir, csv_name)) and \
+                      not os.path.exists(os.path.join(dataset_dir, renamed_csv)):
+    download_and_extract_zip_file(dataset_url, dataset_dir)
+
+if not os.path.exists(os.path.join(dataset_dir, renamed_csv)):
+    os.rename(os.path.join(dataset_dir, csv_name), os.path.join(dataset_dir, renamed_csv))
+
+# Columns
+sentence1_key = "text"
+sentence2_key = None
+label_key = "label"
+
+# Map function to translate labels in the csv file to numerical values when loading the dataset
+def map_spam(example):
+    example["label"] = int(example["label"] == "spam")
+    return example
+
+dataset = CustomCsvTextClassificationData(tokenizer, "smsspamcollection", dataset_dir, [renamed_csv], delimiter,
+                                          label_names, sentence1_key, sentence2_key, label_key, train_size=1000,
+                                          eval_size=1000, map_function=map_spam)
+
+# Print a sample of the data
+dataset.display_sample(Split.TRAIN, 10)
+
+
+
+
+
+
+
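The same class can be reused for other local files. As a purely hypothetical sketch (the path, file name, and label names below are placeholders, not part of this notebook), a header-less comma-separated file with integer labels could be loaded like this:

```python
# Hypothetical local-file variant; "/path/to/data", "reviews.csv", and the
# label names are placeholders for your own dataset
local_data = CustomCsvTextClassificationData(
    tokenizer, "my-local-dataset", "/path/to/data", ["reviews.csv"], ",",
    ["negative", "positive"], "text", None, "label")
```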

3. Prepare the Model for Fine Tuning and Evaluation

+

The notebook has two options for training the model:

* Option A: Use the `Trainer <https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer>`__ API from Hugging Face.
* Option B: Use the native PyTorch API.

In both cases, the model ends up being a transformers model, and the appropriate API is selected based on the class constructor arguments.

+

Execute the following cell to declare the base class used for the Text Classification Model setup.

+
+
[ ]:
+
+
+
class TextClassificationModel():
+    """
+    Class used for model loading, training and evaluation.
+    """
+    def __init__(self,
+                 model_name: str,
+                 num_labels: int,
+                 training_args: TrainingArguments = None,
+                 ipex_optimize: bool = True,
+                 device: str = "cpu"):
+        """
+        Initialize the TextClassificationModel class for a text classification model with
+        PyTorch. The class uses the model_name to load the pre-trained PyTorch model from
+        Hugging Face. If training_args are given, the Trainer API is selected for
+        training and evaluation of the model; otherwise, the native PyTorch API is
+        used for model training and evaluation.
+
+        :param model_name: Name of the pre-trained model to load from Hugging Face
+        :param num_labels: Number of class labels
+        :param training_args: A TrainingArguments object if using the Trainer API to train
+                              the model. If None, native PyTorch API is used for training.
+        :param ipex_optimize: If True, the model is optimized to run on Intel hardware.
+        :param device: Device on which to run the PyTorch model.
+        """
+        self.model_name = model_name
+        self.num_labels = num_labels
+        self.training_args = training_args
+        self.device = device
+        self.trainer = None
+
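+        # Note: the train/eval splits below come from the global dataset object created in step 2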
+        self.train_ds = dataset.train_ds
+        self.eval_ds = dataset.eval_ds
+
+        # Load the model using the pretrained weights
+        self.model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
+
+        # Apply the ipex optimize function to the model
+        if ipex_optimize:
+            self.model = ipex.optimize(self.model)
+
+    def train(self,
+              dataset: TextClassificationData,
+              optimizers: typing.Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR],
+              num_train_epochs: int = 1,
+              batch_size: int = 16,
+              compute_metrics: typing.Callable = None,
+              shuffle_samples: bool = True
+             ):
+
+        # If training_args are given, we use the `Trainer` API to train the model
+        if self.training_args:
+            self.model.train()
+            self.trainer = Trainer(model=self.model,
+                                   args=self.training_args,
+                                   train_dataset=self.train_ds,
+                                   eval_dataset=self.eval_ds,
+                                   optimizers=optimizers,
+                                   compute_metrics=compute_metrics)
+            self.trainer.train()
+
+        # If training_args are not given, we use native PyTorch API to train the model
+        else:
+
+            # Rename the `label` column to `labels` because the model expects the argument to be named `labels`
+            self.train_ds = self.train_ds.rename_column("label", "labels")
+
+            # Set the format of the dataset to return PyTorch tensors instead of lists
+            self.train_ds.set_format("torch")
+
+            train_dataloader = DataLoader(self.train_ds, shuffle=shuffle_samples, batch_size=batch_size)
+
+            # Unpack the `optimizers` parameter to get optimizer and lr_scheduler
+            optimizer, lr_scheduler = optimizers[0], optimizers[1]
+
+            # Define number of training steps for the training progress bar
+            num_training_steps = num_train_epochs * len(train_dataloader)
+            progress_bar = tqdm(range(num_training_steps))
+
+            # Training loop
+            self.model.to(self.device)
+            self.model.train()
+            for epoch in range(num_train_epochs):
+                for batch in train_dataloader:
+                    batch = {k: v.to(self.device) for k, v in batch.items()}
+                    outputs = self.model(**batch)
+                    loss = outputs.loss
+                    loss.backward()
+
+                    optimizer.step()
+                    lr_scheduler.step()
+                    optimizer.zero_grad()
+                    progress_bar.update(1)
+
+    def evaluate(self, batch_size=16):
+
+        if self.trainer:
+            self.model.eval()
+            metrics = self.trainer.evaluate()
+            for key in metrics.keys():
+                print("{}: {}".format(key, metrics[key]))
+        else:
+            # Rename the `label` column to `labels` because the model expects the argument to be named `labels`
+            self.eval_ds = self.eval_ds.rename_column("label", "labels")
+
+            # Set the format of the dataset to return PyTorch tensors instead of lists
+            self.eval_ds.set_format("torch")
+
+            eval_dataloader = DataLoader(self.eval_ds, batch_size=batch_size)
+            progress_bar = tqdm(range(len(eval_dataloader)))
+
+            metric = load_metric("accuracy")
+            self.model.eval()
+            for batch in eval_dataloader:
+                batch = {k: v.to(self.device) for k, v in batch.items()}
+                with torch.no_grad():
+                    outputs = self.model(**batch)
+
+                logits = outputs.logits
+                predictions = torch.argmax(logits, dim=-1)
+                metric.add_batch(predictions=predictions, references=batch["labels"])
+                progress_bar.update(1)
+
+            print(metric.compute())
+
+    def predict(self, raw_input_text):
+        if isinstance(raw_input_text, str):
+            raw_input_text = [raw_input_text]
+
+        # Encode the raw text using the tokenizer
+        encoded_input = tokenizer(raw_input_text, padding=True, return_tensors='pt')
+
+        # Input the encoded text(s) to the model and get the predicted results
+        output = self.model(**encoded_input)
+        _, predictions = torch.max(output.logits, dim=1)
+
+        # Translate the predictions to class label strings
+        prediction_labels = dataset.class_labels.int2str(predictions)
+
+        # Create a dataframe to display the results
+        result_list = [list(x) for x in zip(raw_input_text, prediction_labels)]
+        result_df = pd.DataFrame(result_list, columns=["Input Text", "Predicted Label"])
+        return result_df.style.hide_index()
+
+    def parameters(self):
+        return self.model.parameters()
+
+    def save(self, output_dir):
+        self.model.save_pretrained(output_dir)
+
+    @classmethod
+    def load(cls, output_dir):
+        return cls(output_dir, num_labels=len(dataset.get_label_names()))
+
+
+
+

Now that the TextClassificationModel class is defined, either follow Option A to use the `Trainer <https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer>`__ API from Hugging Face or Option B to use the native PyTorch API.

+
+

Option A: Use the `Trainer <https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer>`__ API from Hugging Face

+

This step gets the pretrained model from Hugging Face and sets up the TrainingArguments and the Trainer. For simplicity, this example uses default values for most of the training args, but we specify our output directory and the number of training epochs. If your output directory already has checkpoints from a previous run, training will resume from the last checkpoint. The overwrite_output_dir training argument can be set to True if you want to overwrite previously generated checkpoints instead.
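For example, a variant of the TrainingArguments defined in the next cell that starts fresh instead of resuming could look like the following sketch (it reuses the output_dir and num_train_epochs variables from that cell):

```python
from transformers import TrainingArguments

# Sketch: overwrite any checkpoints from earlier runs instead of resuming
training_args = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=num_train_epochs,
    overwrite_output_dir=True,
)
```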

+
+

Note that a warning about some weights not being used is expected at this step. This is because the pretraining head from the original model is being replaced with a classification head.

+
+
+
[ ]:
+
+
+
num_train_epochs = 2
+batch_size = 16
+num_labels = len(dataset.get_label_names())
+
+# Define a TrainingArguments object for the Trainer API to use.
+training_args = TrainingArguments(output_dir=output_dir, num_train_epochs=num_train_epochs)
+
+# Get the model from Hugging Face. Since we are specifying training_args, the model is trained and
+# evaluated with the Trainer API.
+model = TextClassificationModel(model_name=model_name, num_labels=num_labels, training_args=training_args)
+
+# Define model training parameters
+learning_rate      = 5e-5
+optimizer          = AdamW(model.parameters(), lr=learning_rate)
+num_training_steps = num_train_epochs * len(dataset.train_ds)
+metric             = load_metric("accuracy")
+lr_scheduler       = get_scheduler(
+                        name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
+                     )
+
+# Helper function for the Trainer API to compute metrics
+def compute_metrics(eval_pred):
+    logits, labels = eval_pred
+    predictions = np.argmax(logits, axis=-1)
+    return metric.compute(predictions=predictions, references=labels)
+
+
+
+

Train and evaluate the model with the Trainer API

+
+
[ ]:
+
+
+
model.train(
+    dataset,
+    optimizers=(optimizer, lr_scheduler),
+    num_train_epochs=num_train_epochs,
+    batch_size=batch_size,
+    compute_metrics=compute_metrics
+)
+
+
+
+
+
[ ]:
+
+
+
model.evaluate()
+
+
+
+
+
+

Option B: Use the native PyTorch API

+

This step gets the pretrained model from Hugging Face and uses the native PyTorch API to train and evaluate the model.

+
+

Note that a warning about some weights not being used is expected at this step. This is because the pretraining head from the original model is being replaced with a classification head.

+
+
+
[ ]:
+
+
+
num_train_epochs = 2
+batch_size = 16
+num_labels = len(dataset.get_label_names())
+
+# Get the model from Hugging Face. Since we are not specifying training_args, the model is trained and
+# evaluated with the native PyTorch API.
+model = TextClassificationModel(model_name=model_name, num_labels=num_labels)
+
+# Define model training parameters
+learning_rate      = 5e-5
+optimizer          = AdamW(model.parameters(), lr=learning_rate)
+num_training_steps = num_train_epochs * len(dataset.train_ds)
+lr_scheduler       = get_scheduler(
+                        name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
+                     )
+
+
+
+

Train and evaluate the model with the native PyTorch API

+
+
[ ]:
+
+
+
model.train(
+    dataset,
+    optimizers=(optimizer, lr_scheduler),
+    num_train_epochs=num_train_epochs,
+    batch_size=batch_size
+)
+
+
+
+
+
[ ]:
+
+
+
model.evaluate()
+
+
+
+
+
+
+

4. Export the model

+
+
[ ]:
+
+
+
# Save the model to our output directory
+model.save(output_dir)
+
+
+
+
+
+

5. Reload the model and make predictions

+

The output directory is used to reload the model. In the next cell, we evaluate the reloaded model to verify that we get the same metrics that we saw after fine tuning.

+
+
[ ]:
+
+
+
reloaded_model = TextClassificationModel.load(output_dir)
+
+reloaded_model.evaluate()
+
+
+
+

Next, we demonstrate how to encode raw text input and get predictions from the reloaded model.

+
+
[ ]:
+
+
+
model = reloaded_model
+
+
+
+
+
[ ]:
+
+
+
# Setup some raw text input
+raw_text_input = ["It was okay. I finished it, but wouldn't watch it again.",
+                  "So bad",
+                  "Definitely not my favorite",
+                  "Highly recommended"]
+
+model.predict(raw_text_input)
+
+
+
+
+
+

6. Get Explanations with Intel Explainable AI Tools

+
+
[ ]:
+
+
+
from intel_ai_safety.explainer import attributions
+
+
+
+
+
[ ]:
+
+
+
from scipy.special import softmax
+# Define a prediction function
+def f(x):
+    encoded_input = tokenizer(x.tolist(), padding='max_length', max_length=512, truncation=True, return_tensors='pt')
+    outputs = model.model(**encoded_input)
+    return softmax(outputs.logits.detach().numpy(), axis=1)
+
+
+
+
+
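As a quick sanity check (an illustrative sketch that is not part of the original notebook; the example strings are hypothetical), f should return one row of class probabilities per input string:

```python
import numpy as np

# Each row is a softmax distribution over the class labels
probs = f(np.array(["Free entry! Text WIN to claim", "See you at lunch?"]))
print(probs.shape)        # expected: (2, number of labels)
print(probs.sum(axis=1))  # each row should sum to ~1.0
```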
[ ]:
+
+
+
from intel_ai_safety.explainer import attributions
+# Get shap values
+text_for_shap = dataset.dataset['test'][:10]['text']
+partition_explainer = attributions.partition_text_explainer(f, dataset.class_labels.names, text_for_shap, r"\W+")
+
+
+
+
+
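For reference, the wrapper call above is roughly what direct use of the shap package looks like. The sketch below is an assumption about what partition_text_explainer does internally, not a documented equivalence; the r"\W+" pattern tells SHAP's text masker to split inputs on runs of non-word characters.

```python
import shap

# Assumed raw-SHAP equivalent: a regex-splitting text masker, an Explainer
# over the prediction function f, then SHAP values and a text plot
masker = shap.maskers.Text(r"\W+")
explainer = shap.Explainer(f, masker, output_names=dataset.class_labels.names)
shap_values = explainer(text_for_shap)
shap.plots.text(shap_values)
```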
[ ]:
+
+
+
partition_explainer.visualize()
+
+
+
+
+
+

Citations

+
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
+  author    = {Maas, Andrew L.  and  Daly, Raymond E.  and  Pham, Peter T.  and  Huang, Dan  and  Ng, Andrew Y.  and  Potts, Christopher},
+  title     = {Learning Word Vectors for Sentiment Analysis},
+  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
+  month     = {June},
+  year      = {2011},
+  address   = {Portland, Oregon, USA},
+  publisher = {Association for Computational Linguistics},
+  pages     = {142--150},
+  url       = {http://www.aclweb.org/anthology/P11-1015}
+}
+
+@misc{misc_sms_spam_collection_228,
+  author       = {Almeida, Tiago},
+  title        = {{SMS Spam Collection}},
+  year         = {2012},
+  howpublished = {UCI Machine Learning Repository}
+}
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.ipynb b/v1.1.0/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.ipynb new file mode 100644 index 0000000..340e185 --- /dev/null +++ b/v1.1.0/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.ipynb @@ -0,0 +1,941 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "b65f0c82", + "metadata": {}, + "source": [ + "# Explaining Fine Tuned Text Classifier with PyTorch using the Intel® Explainable AI API\n", + "\n", + "This notebook demonstrates fine tuning pretrained models from [Hugging Face](https://huggingface.co) using text classification datasets from the [Hugging Face Datasets catalog](https://huggingface.co/datasets) or a custom dataset. The notebook uses [Intel® Extension for PyTorch*](https://github.com/intel/intel-extension-for-pytorch), which extends PyTorch with optimizations for an extra performance boost on Intel hardware.\n", + "\n", + "Please install the dependencies from the [pytorch_requirements.txt](/notebooks/pytorch_requirements.txt) file before executing this notebook.\n", + "\n", + "The notebook performs the following steps:\n", + "1. [Import dependencies and setup parameters](#1.-Import-dependencies-and-setup-parameters)\n", + "2. [Prepare the dataset](#2.-Prepare-the-dataset)\n", + "3. [Prepare the Model for Fine Tuning and Evaluation](#3.-Prepare-the-Model-for-Fine-Tuning-and-Evaluation)\n", + "4. [Export the model](#4.-Export-the-model)\n", + "5. [Reload the model and make predictions](#5.-Reload-the-model-and-make-predictions)\n", + "6. [Get Explainations with Intel Explainable AI Tools](#6.-Get-Explainations-with-Intel-Explainable-AI-Tools)" + ] + }, + { + "cell_type": "markdown", + "id": "454a6685", + "metadata": {}, + "source": [ + "## 1. Import dependencies and setup parameters\n", + "\n", + "This notebook assumes that you have already followed the instructions in the [README.md](/notebooks/README.md) to setup a PyTorch environment with all the dependencies required to run the notebook." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0b2b3bf9", + "metadata": {}, + "outputs": [], + "source": [ + "import intel_extension_for_pytorch as ipex\n", + "import logging\n", + "import numpy as np\n", + "import os\n", + "import pandas as pd\n", + "import sys\n", + "import torch\n", + "import warnings\n", + "import typing\n", + "import pickle\n", + "\n", + "from tqdm.auto import tqdm\n", + "from torch.optim import AdamW\n", + "from torch.utils.data import DataLoader\n", + "from datasets import ClassLabel, load_dataset, load_metric, Split\n", + "from datasets import logging as datasets_logging\n", + "from transformers.utils import logging as transformers_logging\n", + "from transformers import (\n", + " AutoModelForSequenceClassification,\n", + " AutoTokenizer,\n", + " Trainer,\n", + " TrainingArguments,\n", + " get_scheduler\n", + ")\n", + "from tlt.utils.file_utils import download_and_extract_zip_file\n", + "\n", + "# Set the logging stream to stdout\n", + "for handler in transformers_logging._get_library_root_logger().handlers:\n", + " handler.setStream(sys.stdout)\n", + "\n", + "sh = datasets_logging.logging.StreamHandler(sys.stdout)\n", + "\n", + "datasets_logging.set_verbosity_error()\n", + "warnings.filterwarnings('ignore')\n", + "os.environ[\"TRANSFORMERS_NO_ADVISORY_WARNINGS\"] = \"1\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0fdf13ec", + "metadata": {}, + "outputs": [], + "source": [ + "# Specify the name of the Hugging Face pretrained model to use (https://huggingface.co/models)\n", + "# For example: \n", + "# albert-base-v2\n", + "# bert-base-uncased\n", + "# distilbert-base-uncased\n", + "# distilbert-base-uncased-finetuned-sst-2-english\n", + "# roberta-base\n", + "model_name = \"distilbert-base-uncased\"\n", + "\n", + "# Define an output directory\n", + "output_dir = os.environ[\"OUTPUT_DIR\"] if \"OUTPUT_DIR\" in os.environ else \\\n", + " os.path.join(os.environ[\"HOME\"], \"output\", model_name)\n", + "\n", + "# Define a dataset directory\n", + "dataset_dir = os.environ[\"DATASET_DIR\"] if \"DATASET_DIR\" in os.environ else \\\n", + " os.path.join(os.environ[\"HOME\"], \"dataset\")\n", + "\n", + "print(\"Model name:\", model_name)\n", + "print(\"Output directory:\", output_dir)\n", + "print(\"Dataset directory:\", dataset_dir)" + ] + }, + { + "cell_type": "markdown", + "id": "2f258d4f", + "metadata": {}, + "source": [ + "## 2. Prepare the dataset\n", + "\n", + "The notebook has two options for getting a dataset:\n", + "* Option A: Use a dataset from the [Hugging Face Datasets catalog](https://huggingface.co/datasets)\n", + "* Option B: Use a custom dataset (downloaded from another source or from your local system)\n", + "\n", + "In both cases, the code ends up defining [`datasets.Dataset`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset) objects for the train and evaluation splits.\n", + "\n", + "Execute the following cell to load the tokenizer and declare the base class used for the dataset setup." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "649c5c22", + "metadata": {}, + "outputs": [], + "source": [ + "# Load the tokenizer\n", + "tokenizer = AutoTokenizer.from_pretrained(model_name)\n", + "\n", + "class TextClassificationData():\n", + " \"\"\"\n", + " Base class used for defining the text classification dataset being used. 
Defines Hugging Face datasets.Dataset\n", + " objects for train and evaluations splits, along with helper functions for preprocessing the dataset.\n", + " \"\"\"\n", + "\n", + " def __init__(self, dataset_name, tokenizer, sentence1_key, sentence2_key, label_key):\n", + " self.tokenizer = tokenizer\n", + " self.dataset_name = dataset_name\n", + " self.class_labels = None\n", + " \n", + " # Tokenized train and eval ds\n", + " self.train_ds = None\n", + " self.eval_ds = None\n", + " \n", + " # Column keys\n", + " self.sentence1_key = sentence1_key\n", + " self.sentence2_key = sentence2_key\n", + " self.label_key = label_key\n", + " \n", + " def tokenize_function(self, examples):\n", + " # Define the tokenizer args, depending on if the data has 2 sentences or just 1\n", + " args = ((examples[self.sentence1_key],) if self.sentence2_key is None \\\n", + " else (examples[self.sentence1_key], examples[self.sentence2_key]))\n", + " return self.tokenizer(*args, padding=\"max_length\", truncation=True)\n", + " \n", + " def tokenize_dataset(self, dataset):\n", + " # Apply the tokenize function to the dataset\n", + " tokenized_dataset = dataset.map(self.tokenize_function, batched=True)\n", + "\n", + " # Remove the raw text from the tokenized dataset\n", + " raw_text_columns = [self.sentence1_key, self.sentence2_key] if self.sentence2_key else [self.sentence1_key]\n", + " return tokenized_dataset.remove_columns(raw_text_columns)\n", + " \n", + " def define_train_eval_splits(self, dataset, train_split_name, eval_split_name, train_size=None, eval_size=None):\n", + " self.train_ds = dataset[train_split_name].shuffle().select(range(train_size)) if train_size \\\n", + " else tokenized_dataset[train_split_name] \n", + " self.eval_ds = dataset[eval_split_name].shuffle().select(range(eval_size)) if eval_size \\\n", + " else tokenized_dataset[eval_split_name]\n", + " \n", + " def get_label_names(self):\n", + " if self.class_labels:\n", + " return self.class_labels.names\n", + " else:\n", + " raise ValueError(\"Class labels were not defined\")\n", + " \n", + " def display_sample(self, split_name=\"train\", sample_size=7):\n", + " # Display a sample of the raw data\n", + " sentence1_sample = self.dataset[split_name][self.sentence1_key][:sample_size]\n", + " sentence2_sample = self.dataset[split_name][self.sentence2_key][:sample_size] if self.sentence2_key else None\n", + " label_sample = self.dataset[split_name][self.label_key][:sample_size]\n", + " dataset_sample = zip(sentence1_sample, sentence2_sample, label_sample) if self.sentence2_key \\\n", + " else zip(sentence1_sample, label_sample)\n", + "\n", + " columns = [self.sentence1_key, self.sentence2_key, self.label_key] if self.sentence2_key else \\\n", + " [self.sentence1_key, self.label_key]\n", + "\n", + " # Display the sample using a dataframe\n", + " sample = pd.DataFrame(dataset_sample, columns=columns)\n", + " return sample.style.hide_index()" + ] + }, + { + "cell_type": "markdown", + "id": "fd8512e1", + "metadata": {}, + "source": [ + "Now that the base class is defined, either run [Option A to use the Hugging Face Dataset catalog](#Option-A:-Use-a-Hugging-Face-dataset) or [Option B for a custom dataset](#Option-B:-Use-a-custom-dataset) downloaded from online or from your local system." + ] + }, + { + "cell_type": "markdown", + "id": "640e5611", + "metadata": {}, + "source": [ + "### Option A: Use a Hugging Face dataset\n", + "\n", + "[Hugging Face Datasets](https://huggingface.co/datasets) has a catalog of datasets that can be specified by name. 
Information about the dataset is available in the catalog (including information on the size of the dataset and the splits).\n", + "\n", + "The next cell gets the [IMDb movie review dataset](https://huggingface.co/datasets/imdb) using the Hugging Face datasets API. If the notebook is executed multiple times, the dataset will be used from the dataset directory, to speed up the time that it takes to run.\n", + "\n", + "The IMDb dataset in Hugging Face has 3 splits: `train`, `test`, and `unsupervised`. This notebook will be using data from the `train` split for training and data from the `test` split for evaluation. The data has 2 columns: `text` (string with the movie review) and `label` (integer class label). The code in the next cell is setup to run using the IMDb dataset, so note that if a different dataset is being used, you may need to change the split names and/or the column names." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9a1d5fc0", + "metadata": {}, + "outputs": [], + "source": [ + "class HFDSTextClassificationData(TextClassificationData):\n", + " \"\"\"\n", + " Class used for loading and preprocessing text classification datasets from the Hugging Face datasets catalog\n", + " \"\"\"\n", + " \n", + " def __init__(self, tokenizer, dataset_dir, dataset_name, train_size, eval_size, train_split_name,\n", + " eval_split_name, sentence1_key, sentence2_key, label_key):\n", + " \"\"\"\n", + " Initialize the HFDSTextClassificationData class for a text classification dataset from Hugging Face.\n", + " \n", + " :param tokenizer: Tokenizer to preprocess the dataset\n", + " :param dataset_dir: Cache directory used when loading the dataset\n", + " :param dataset_name: Name of the dataset to load from the Hugging Face catalog\n", + " :param train_size: Size of the training dataset. For quicker training or debug, use a subset of the data.\n", + " Set to `None` to use all the data.\n", + " :param eval_size: Size of the evaluation dataset.\n", + " :param train_split_name: String specifying which split to load for training (e.g. \"train[:80%]\"). 
See the\n", + " https://www.tensorflow.org/datasets/splits documentation for more information on\n", + " defining splits.\n", + " :param eval_split_name: String specifying the split to load for evaluation.\n", + " :param sentence1_key: Name of the sentence1 column\n", + " :param sentence2_key: Name of the sentence2 column or `None` if there's only one text column\n", + " :param label_key: Name of the label column\n", + " \"\"\"\n", + "\n", + " # Init base class\n", + " TextClassificationData.__init__(self, dataset_name, tokenizer, sentence1_key, sentence2_key, label_key) \n", + " \n", + " # Load the dataset from the Hugging Face dataset API\n", + " self.dataset = load_dataset(dataset_name, cache_dir=dataset_dir)\n", + "\n", + " # Tokenize the dataset\n", + " tokenized_dataset = self.tokenize_dataset(self.dataset)\n", + "\n", + " # Get the training and eval dataset based on the specified dataset sizes\n", + " self.define_train_eval_splits(tokenized_dataset, train_split_name, eval_split_name, train_size, eval_size)\n", + "\n", + " # Save the class label information to use later when predicting\n", + " self.class_labels = self.dataset[train_split_name].features[label_key]\n", + "\n", + "# Name of the Hugging Face dataset\n", + "dataset_name = \"imdb\"\n", + "\n", + "# For quicker training and debug runs, use a subset of the dataset by specifying the size of the train/eval datasets.\n", + "# Set the sizes `None` to use the full dataset. The full IMDb dataset has 25,000 training and 25,000 test examples.\n", + "train_dataset_size = 1000\n", + "eval_dataset_size = 1000\n", + "\n", + "# Name of the columns in the dataset (the column names may vary if you are not using the IMDb dataset)\n", + "sentence1_key = \"text\"\n", + "sentence2_key = None\n", + "label_key = \"label\"\n", + "\n", + "dataset = HFDSTextClassificationData(tokenizer, dataset_dir, dataset_name, train_dataset_size, eval_dataset_size,\n", + " Split.TRAIN, Split.TEST, sentence1_key, sentence2_key, label_key)\n", + "\n", + "# Print a sample of the data\n", + "dataset.display_sample(Split.TRAIN, sample_size=5)" + ] + }, + { + "cell_type": "markdown", + "id": "362625ec", + "metadata": {}, + "source": [ + "Skip to Step 3 [Get the model and setup the Trainer](#3.-Get-the-model-and-setup-the-Trainer) to continue using the dataset from the Hugging Face catalog." + ] + }, + { + "cell_type": "markdown", + "id": "28c0ba36", + "metadata": {}, + "source": [ + "### Option B: Use a custom dataset\n", + "\n", + "Instead of using a dataset from the Hugging Face dataset catalog, a custom dataset from your local system or a download can be used.\n", + "\n", + "In this example, we download the [SMS Spam Collection dataset](https://archive-beta.ics.uci.edu/ml/datasets/sms+spam+collection). The zip file has a single tab-separated value file with two columns. The first column is the label (`ham` or `spam`) and the second column is the text of the SMS message:\n", + "```\n", + "\t\n", + "\t\n", + "\t\n", + "...\n", + "```\n", + "If you are using a custom dataset that has a similarly formatted csv or tsv file, you can use the class defined below. Create your object by passing in custom values for csv file name, delimiter, the label map, mapping function, etc." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c706bfbf", + "metadata": {}, + "outputs": [], + "source": [ + "class CustomCsvTextClassificationData(TextClassificationData):\n", + " \"\"\"\n", + " Class used for loading and preprocessing text classification datasets from CSV files\n", + " \"\"\"\n", + " \n", + " def __init__(self, tokenizer, dataset_name, dataset_dir, data_files, delimiter, label_names, sentence1_key, sentence2_key,\n", + " label_key, train_percent=0.8, eval_percent=0.2, train_size=None, eval_size=None, map_function=None):\n", + " \"\"\"\n", + " Intialize the CustomCsvTextClassificationData class for a text classification\n", + " dataset. The classes uses the Hugging Face datasets API to load the CSV file,\n", + " and split it into a train and eval datasets based on the specified percentages.\n", + " If train_size and eval_size are also defined, the datasets are reduced to the\n", + " specified number of examples.\n", + " \n", + " :param tokenizer: Tokenizer to preprocess the dataset\n", + " :param dataset_name: Dataset name for identification purposes\n", + " :param dataset_dir: Directory where the csv file(s) are located\n", + " :param data_files: List of data file names\n", + " :param delimiter: Delimited for the csv files\n", + " :param label_names: List of label names\n", + " :param sentence1_key: Name of the sentence1 column\n", + " :param sentence2_key: Name of the sentence2 column or `None` if there's only one text column\n", + " :param label_key: Name of the label column\n", + " :param train_percent: Decimal value for the percentage of the dataset that should be used for training\n", + " (e.g. 0.8 for 80%)\n", + " :param eval_percent: Decimal value for the percentage of the dataset that should used for validation\n", + " (e.g. 0.2 for 20%)\n", + " :param train_size: Size of the training dataset. For quicker training or debug, use a subset of the data.\n", + " Set to `None` to use all the data.\n", + " :param eval_size: Size of the eval dataset. Set to `None` to use all the data.\n", + " :param map_function: (Optional) Map function to apply to the dataset. 
For example, if the csv file has string\n", + " labels instead of numerical values, map function can do the conversion.\n", + " \"\"\"\n", + " # Init base class\n", + " TextClassificationData.__init__(self, dataset_name, tokenizer, sentence1_key, sentence2_key, label_key)\n", + " \n", + " if (train_percent + eval_percent) > 1:\n", + " raise ValueError(\"The combined value of the train percentage and eval percentage \" \\\n", + " \"cannot be greater than 1\")\n", + " \n", + " # Create a list of the column names\n", + " column_names = [label_key, sentence1_key, sentence2_key] if sentence2_key else [label_key, sentence1_key]\n", + " \n", + " # Load the dataset using the Hugging Face API\n", + " self.dataset = load_dataset(dataset_dir, delimiter=delimiter, data_files=data_files, column_names=column_names)\n", + " \n", + " # Optionally map the dataset labels using the map_function\n", + " if map_function:\n", + " self.dataset = self.dataset.map(map_function)\n", + " \n", + " # Setup the class labels\n", + " self.class_labels = ClassLabel(num_classes=len(label_names), names=label_names)\n", + " self.dataset[Split.TRAIN].features[label_key] = self.class_labels\n", + " \n", + " # Split the dataset based on the percentages defined\n", + " self.dataset = self.dataset[Split.TRAIN].train_test_split(train_size=train_percent, test_size=eval_percent)\n", + " \n", + " # Tokenize the dataset\n", + " tokenized_dataset = self.tokenize_dataset(self.dataset)\n", + "\n", + " # Get the training and eval dataset based on the specified dataset sizes\n", + " self.define_train_eval_splits(tokenized_dataset, Split.TRAIN, Split.TEST, train_size, eval_size)\n", + "\n", + "\n", + "# Modify the variables below to use a different dataset or a csv file on your local system.\n", + "# The csv_path variable should be pointing to a csv file with 2 columns (the label and the text)\n", + "dataset_url = \"https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip\"\n", + "dataset_dir = os.path.join(dataset_dir, \"smsspamcollection\")\n", + "csv_name = \"SMSSpamCollection\"\n", + "delimiter = \"\\t\"\n", + "label_names = [\"ham\", \"spam\"]\n", + "\n", + "# Rename the file to include the csv extension so that the dataset API knows how to load the file\n", + "renamed_csv = \"{}.csv\".format(csv_name)\n", + "\n", + "# If we don't already have the csv file, download and extract the zip file to get it.\n", + "if not os.path.exists(os.path.join(dataset_dir, csv_name)) and \\\n", + " not os.path.exists(os.path.join(dataset_dir, renamed_csv)):\n", + " download_and_extract_zip_file(dataset_url, dataset_dir)\n", + "\n", + "if not os.path.exists(os.path.join(dataset_dir, renamed_csv)):\n", + " os.rename(os.path.join(dataset_dir, csv_name), os.path.join(dataset_dir, renamed_csv))\n", + " \n", + "# Columns\n", + "sentence1_key = \"text\"\n", + "sentence2_key = None\n", + "label_key = \"label\"\n", + "\n", + "# Map function to translate labels in the csv file to numerical values when loading the dataset\n", + "def map_spam(example):\n", + " example[\"label\"] = int(example[\"label\"] == \"spam\")\n", + " return example\n", + "\n", + "dataset = CustomCsvTextClassificationData(tokenizer, \"smsspamcollection\", dataset_dir, [renamed_csv], delimiter,\n", + " label_names, sentence1_key, sentence2_key, label_key, train_size=1000,\n", + " eval_size=1000, map_function=map_spam)\n", + "\n", + "# Print a sample of the data\n", + "dataset.display_sample(Split.TRAIN, 10)" + ] + }, + { + "cell_type": "markdown", + "id": 
"e3a24bcd", + "metadata": {}, + "source": [ + "## 3. Prepare the Model for Fine Tuning and Evaluation\n", + "\n", + "The notebook has two options to train the model.\n", + "\n", + "- Option A: Use the [`Trainer`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer) API from Hugging Face.\n", + "- Option B: Use the native PyTorch API.\n", + "\n", + "In both cases, the model ends up being a transformers model and depending on the class constructor arguments, the appropriate API is selected.\n", + "\n", + "Execute the following cell to declare the base class used for the Text Classification Model setup." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "797485aa", + "metadata": {}, + "outputs": [], + "source": [ + "class TextClassificationModel():\n", + " \"\"\"\n", + " Class used for model loading, training and evaluation.\n", + " \"\"\"\n", + " def __init__(self, \n", + " model_name: str, \n", + " num_labels: int, \n", + " training_args: TrainingArguments = None, \n", + " ipex_optimize: bool = True, \n", + " device: str = \"cpu\"):\n", + " \"\"\"\n", + " Initialize the TextClassificationModel class for a text classification model with\n", + " PyTorch. The class uses the model_name to load the pre-trained PyTorch model from\n", + " Hugging Face. If the training_args are given then the Trainer API is selected for\n", + " training and evaluation of the model otherwise native PyTorch API is selected for\n", + " model training and evaluation\n", + " \n", + " :param model_name: Name of the pre-trained model to load from Hugging Face\n", + " :param num_labels: Number of class labels\n", + " :param training_args: A TrainingArguments object if using the Trainer API to train\n", + " the model. If None, native PyTorch API is used for training.\n", + " :param ipex_optimize: If True, then the model is optimized to run on intel hardware.\n", + " :param device: Device to run on the PyTorch model.\n", + " \"\"\"\n", + " self.model_name = model_name\n", + " self.num_labels = num_labels\n", + " self.training_args = training_args\n", + " self.device = device\n", + " self.trainer = None\n", + " \n", + " self.train_ds = dataset.train_ds\n", + " self.eval_ds = dataset.eval_ds\n", + " \n", + " # Load the model using the pretrained weights\n", + " self.model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)\n", + " \n", + " # Apply the ipex optimize function to the model\n", + " if ipex_optimize:\n", + " self.model = ipex.optimize(self.model)\n", + " \n", + " def train(self, \n", + " dataset: TextClassificationData,\n", + " optimizers: typing.Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR],\n", + " num_train_epochs: int = 1,\n", + " batch_size: int = 16,\n", + " compute_metrics: typing.Callable = None,\n", + " shuffle_samples: bool = True\n", + " ):\n", + "\n", + " # If training_args are given, we use the `Trainer` API to train the model\n", + " if self.training_args:\n", + " self.model.train()\n", + " self.trainer = Trainer(model=self.model,\n", + " args=self.training_args,\n", + " train_dataset=self.train_ds,\n", + " eval_dataset=self.eval_ds,\n", + " optimizers=optimizers,\n", + " compute_metrics=compute_metrics)\n", + " self.trainer.train()\n", + " \n", + " # If training_args are not given, we use native PyTorch API to train the model\n", + " else:\n", + " \n", + " # Rename the `label` column to `labels` because the model expects the argument to be named `labels`\n", + " self.train_ds = 
self.train_ds.rename_column(\"label\", \"labels\")\n", + " \n", + " # Set the format of the dataset to return PyTorch tensors instead of lists\n", + " self.train_ds.set_format(\"torch\")\n", + " \n", + " train_dataloader = DataLoader(self.train_ds, shuffle=shuffle_samples, batch_size=batch_size)\n", + " \n", + " # Unpack the `optimizers` parameter to get optimizer and lr_scheduler\n", + " optimizer, lr_scheduler = optimizers[0], optimizers[1]\n", + " \n", + " # Define number of training steps for the training progress bar\n", + " num_training_steps = num_train_epochs * len(train_dataloader)\n", + " progress_bar = tqdm(range(num_training_steps))\n", + " \n", + " # Training loop\n", + " self.model.to(self.device)\n", + " self.model.train()\n", + " for epoch in range(num_train_epochs):\n", + " for batch in train_dataloader:\n", + " batch = {k: v.to(self.device) for k, v in batch.items()}\n", + " outputs = self.model(**batch)\n", + " loss = outputs.loss\n", + " loss.backward()\n", + "\n", + " optimizer.step()\n", + " lr_scheduler.step()\n", + " optimizer.zero_grad()\n", + " progress_bar.update(1)\n", + " \n", + " def evaluate(self, batch_size=16):\n", + " \n", + " if self.trainer:\n", + " self.model.eval()\n", + " metrics = self.trainer.evaluate()\n", + " for key in metrics.keys():\n", + " print(\"{}: {}\".format(key, metrics[key]))\n", + " else:\n", + " # Rename the `label` column to `labels` because the model expects the argument to be named `labels`\n", + " self.eval_ds = self.eval_ds.rename_column(\"label\", \"labels\")\n", + " \n", + " # Set the format of the dataset to return PyTorch tensors instead of lists\n", + " self.eval_ds.set_format(\"torch\")\n", + " \n", + " eval_dataloader = DataLoader(self.eval_ds, batch_size=batch_size)\n", + " progress_bar = tqdm(range(len(eval_dataloader)))\n", + " \n", + " metric = load_metric(\"accuracy\")\n", + " self.model.eval()\n", + " for batch in eval_dataloader:\n", + " batch = {k: v.to(self.device) for k, v in batch.items()}\n", + " with torch.no_grad():\n", + " outputs = self.model(**batch)\n", + "\n", + " logits = outputs.logits\n", + " predictions = torch.argmax(logits, dim=-1)\n", + " metric.add_batch(predictions=predictions, references=batch[\"labels\"])\n", + " progress_bar.update(1)\n", + "\n", + " print(metric.compute())\n", + " \n", + " def predict(self, raw_input_text):\n", + " if isinstance(raw_input_text, str):\n", + " raw_input_text = [raw_input_text]\n", + " \n", + " # Encode the raw text using the tokenizer\n", + " encoded_input = tokenizer(raw_input_text, padding=True, return_tensors='pt')\n", + " \n", + " # Input the encoded text(s) to the model and get the predicted results\n", + " output = self.model(**encoded_input)\n", + " _, predictions = torch.max(output.logits, dim=1)\n", + " \n", + " # Translate the predictions to class label strings\n", + " prediction_labels = dataset.class_labels.int2str(predictions)\n", + "\n", + " # Create a dataframe to display the results\n", + " result_list = [list(x) for x in zip(raw_text_input, prediction_labels)]\n", + " result_df = pd.DataFrame(result_list, columns=[\"Input Text\", \"Predicted Label\"])\n", + " return result_df.style.hide_index()\n", + " \n", + " def parameters(self):\n", + " return self.model.parameters()\n", + " \n", + " def save(self, output_dir):\n", + " self.model.save_pretrained(output_dir)\n", + " \n", + " @classmethod\n", + " def load(cls, output_dir):\n", + " return cls(output_dir, num_labels=len(dataset.get_label_names()))" + ] + }, + { + "cell_type": "markdown", + 
"id": "3e8f1edd", + "metadata": {}, + "source": [ + "Now that the `TextClassificationModel` class is defined, either use Option A to use the [`Trainer`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer) API from Hugging Face or Option B to use the native PyTorch API." + ] + }, + { + "cell_type": "markdown", + "id": "1a606f16", + "metadata": {}, + "source": [ + "### Option A: Use the [`Trainer`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer) API from Hugging Face\n", + "\n", + "This step gets the pretrained model from [Hugging Face](https://huggingface.co/models) and sets up the\n", + "[TrainingArguments](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.TrainingArguments) and the\n", + "[Trainer](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer). For simplicity, this example is using default values for most of the training args, but we are specifying our output directory and the number of training epochs. If your output directory already has checkpoints from a previous run,\n", + "training will resume from the last checkpoint. The `overwrite_output_dir` training argument can be set to\n", + "`True` if you want to instead overwrite previously generated checkpoints.\n", + "\n", + "> Note that it is expected to see a warning at this step about some weights not being used. This is because\n", + "> the pretraining head from the original model is being replaced with a classification head." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0d70a4f8", + "metadata": {}, + "outputs": [], + "source": [ + "num_train_epochs = 2\n", + "batch_size = 16\n", + "num_labels = len(dataset.get_label_names())\n", + "\n", + "# Define a TrainingArguments object for the Trainer API to use.\n", + "training_args = TrainingArguments(output_dir=output_dir, num_train_epochs=num_train_epochs)\n", + "\n", + "# Get the model from Hugging Face. 
Since we are specifying training_args, the model is trained and\n", + "# evaluated with the Trainer API.\n", + "model = TextClassificationModel(model_name=model_name, num_labels=num_labels, training_args=training_args)\n", + "\n", + "# Define model training parameters\n", + "learning_rate = 5e-5\n", + "optimizer = AdamW(model.parameters(), lr=learning_rate)\n", + "num_training_steps = num_train_epochs * len(dataset.train_ds)\n", + "metric = load_metric(\"accuracy\")\n", + "lr_scheduler = get_scheduler(\n", + " name=\"linear\", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps\n", + " )\n", + "\n", + "# Helper function for the Trainer API to compute metrics\n", + "def compute_metrics(eval_pred):\n", + " logits, labels = eval_pred\n", + " predictions = np.argmax(logits, axis=-1)\n", + " return metric.compute(predictions=predictions, references=labels)" + ] + }, + { + "cell_type": "markdown", + "id": "5fcabd97", + "metadata": {}, + "source": [ + "**Train and evaluate the model with the Trainer API**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e1256c40", + "metadata": {}, + "outputs": [], + "source": [ + "model.train(\n", + " dataset, \n", + " optimizers=(optimizer, lr_scheduler), \n", + " num_train_epochs=num_train_epochs, \n", + " batch_size=batch_size,\n", + " compute_metrics=compute_metrics\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4a7afe59", + "metadata": {}, + "outputs": [], + "source": [ + "model.evaluate()" + ] + }, + { + "cell_type": "markdown", + "id": "cce10679", + "metadata": {}, + "source": [ + "### Option B: Use the native PyTorch API\n", + "\n", + "This step gets the pretrained model from [Hugging Face](https://huggingface.co/models) and uses native PyTorch API to train and evaluate the model.\n", + "\n", + "> Note that it is expected to see a warning at this step about some weights not being used. This is because\n", + "> the pretraining head from the original model is being replaced with a classification head." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "202f6b01", + "metadata": {}, + "outputs": [], + "source": [ + "num_train_epochs = 2\n", + "batch_size = 16\n", + "num_labels = len(dataset.get_label_names())\n", + "\n", + "# Get the model from Hugging Face. 
Since we are not specifying training_args, the model is trained and\n", + "# evaluated with the native PyTorch API.\n", + "model = TextClassificationModel(model_name=model_name, num_labels=num_labels)\n", + "\n", + "# Define model training parameters\n", + "learning_rate = 5e-5\n", + "optimizer = AdamW(model.parameters(), lr=learning_rate)\n", + "num_training_steps = num_train_epochs * len(dataset.train_ds)\n", + "lr_scheduler = get_scheduler(\n", + " name=\"linear\", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps\n", + " )" + ] + }, + { + "cell_type": "markdown", + "id": "f4821e46", + "metadata": {}, + "source": [ + "**Train and evaluate the model with the native PyTorch API**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eeb2e694", + "metadata": {}, + "outputs": [], + "source": [ + "model.train(\n", + " dataset, \n", + " optimizers=(optimizer, lr_scheduler), \n", + " num_train_epochs=num_train_epochs, \n", + " batch_size=batch_size\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7ff3b884", + "metadata": {}, + "outputs": [], + "source": [ + "model.evaluate()" + ] + }, + { + "cell_type": "markdown", + "id": "b83b873f", + "metadata": {}, + "source": [ + "## 4. Export the model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "faa4beb3", + "metadata": {}, + "outputs": [], + "source": [ + "# Save the model to our output directory\n", + "model.save(output_dir)" + ] + }, + { + "cell_type": "markdown", + "id": "49449342", + "metadata": {}, + "source": [ + "## 5. Reload the model and make predictions\n", + "\n", + "The output directory is used to reload the model. In the next cell, we evalute the reloaded model to verify that we are getting the same metrics that we saw after fine tuning." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e339d50d", + "metadata": {}, + "outputs": [], + "source": [ + "reloaded_model = TextClassificationModel.load(output_dir)\n", + " \n", + "reloaded_model.evaluate()" + ] + }, + { + "cell_type": "markdown", + "id": "c70a5386", + "metadata": {}, + "source": [ + "Next, we demonstrate how encode raw text input and get predictions from the reloaded model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f084d0ce", + "metadata": {}, + "outputs": [], + "source": [ + "model = reloaded_model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "40d231c0", + "metadata": {}, + "outputs": [], + "source": [ + "# Setup some raw text input\n", + "raw_text_input = [\"It was okay. I finished it, but wouldn't watch it again.\",\n", + " \"So bad\",\n", + " \"Definitely not my favorite\",\n", + " \"Highly recommended\"]\n", + "\n", + "model.predict(raw_text_input)" + ] + }, + { + "cell_type": "markdown", + "id": "62bdc89b", + "metadata": {}, + "source": [ + "## 6. 
Get Explainations with Intel Explainable AI Tools" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7144cfca", + "metadata": {}, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer import attributions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "34224ef1", + "metadata": {}, + "outputs": [], + "source": [ + "from scipy.special import softmax\n", + "# Define a prediction function\n", + "def f(x):\n", + " encoded_input = tokenizer(x.tolist(), padding='max_length', max_length=512, truncation=True, return_tensors='pt')\n", + " outputs = model.model(**encoded_input)\n", + " return softmax(outputs.logits.detach().numpy(), axis=1)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dcfe9faa", + "metadata": {}, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer import attributions\n", + "# Get shap values\n", + "text_for_shap = dataset.dataset['test'][:10]['text']\n", + "partition_explainer = attributions.partition_text_explainer(f, dataset.class_labels.names, text_for_shap, r\"\\W+\", )" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7c1e308d", + "metadata": {}, + "outputs": [], + "source": [ + "partition_explainer.visualize()" + ] + }, + { + "cell_type": "markdown", + "id": "bee40324", + "metadata": {}, + "source": [ + "## Citations\n", + "\n", + "```\n", + "@InProceedings{maas-EtAl:2011:ACL-HLT2011,\n", + " author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},\n", + " title = {Learning Word Vectors for Sentiment Analysis},\n", + " booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},\n", + " month = {June},\n", + " year = {2011},\n", + " address = {Portland, Oregon, USA},\n", + " publisher = {Association for Computational Linguistics},\n", + " pages = {142--150},\n", + " url = {http://www.aclweb.org/anthology/P11-1015}\n", + "}\n", + "\n", + "@misc{misc_sms_spam_collection_228,\n", + " author = {Almeida, Tiago},\n", + " title = {{SMS Spam Collection}},\n", + " year = {2012},\n", + " howpublished = {UCI Machine Learning Repository}\n", + "}\n", + "```" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/notebooks/TorchVision_CIFAR_Interpret.html b/v1.1.0/notebooks/TorchVision_CIFAR_Interpret.html new file mode 100644 index 0000000..5e5d71a --- /dev/null +++ b/v1.1.0/notebooks/TorchVision_CIFAR_Interpret.html @@ -0,0 +1,321 @@ + + + + + + + Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
+
+
+
+ +
+

Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer

+
+
[ ]:
+
+
+
import matplotlib.pyplot as plt
+import matplotlib
+import numpy as np
+
+%matplotlib inline
+
+# PyTorch
+import torch
+import torchvision
+import torchvision.transforms as transforms
+import torchvision.transforms.functional as TF
+import torch.nn as nn
+import torch.nn.functional as F
+from torchvision import models
+import torch.optim as optim
+
+
+
+
+
[ ]:
+
+
+
transform = transforms.Compose(
+    [transforms.ToTensor(),
+     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
+
+trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
+                                        download=True, transform=transform)
+trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
+                                          shuffle=True, num_workers=2)
+
+testset = torchvision.datasets.CIFAR10(root='./data', train=False,
+                                       download=True, transform=transform)
+testloader = torch.utils.data.DataLoader(testset, batch_size=4,
+                                         shuffle=False, num_workers=2)
+
+classes = ('plane', 'car', 'bird', 'cat',
+           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
+
+
+
+
+
[ ]:
+
+
+
class Net(nn.Module):
+    def __init__(self):
+        super(Net, self).__init__()
+        self.conv1 = nn.Conv2d(3, 6, 5)
+        self.pool1 = nn.MaxPool2d(2, 2)
+        self.pool2 = nn.MaxPool2d(2, 2)
+        self.conv2 = nn.Conv2d(6, 16, 5)
+        self.fc1 = nn.Linear(16 * 5 * 5, 120)
+        self.fc2 = nn.Linear(120, 84)
+        self.fc3 = nn.Linear(84, 10)
+        self.relu1 = nn.ReLU()
+        self.relu2 = nn.ReLU()
+        self.relu3 = nn.ReLU()
+        self.relu4 = nn.ReLU()
+
+    def forward(self, x):
+        x = self.pool1(self.relu1(self.conv1(x)))
+        x = self.pool2(self.relu2(self.conv2(x)))
+        x = x.view(-1, 16 * 5 * 5)
+        x = self.relu3(self.fc1(x))
+        x = self.relu4(self.fc2(x))
+        x = self.fc3(x)
+        return x
+
+
+net = Net()
+
+
+
+
+
[ ]:
+
+
+
criterion = nn.CrossEntropyLoss()
+optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
+
+
+
+
+
[ ]:
+
+
+
USE_PRETRAINED_MODEL = False  # set to True only if a trained model has already been saved
+
+if USE_PRETRAINED_MODEL:
+    print("Using existing trained model")
+    from urllib.request import urlopen
+    import os.path
+    if os.path.isfile("models/cifar_torchvision.pt"):
+        print("File found, will be loaded")
+        net.load_state_dict(torch.load('models/cifar_torchvision.pt'))
+    else:
+        print("Please train the model first by setting USE_PRETRAINED_MODEL to False")
+else:
+    for epoch in range(1):  # loop over the dataset multiple times
+
+        running_loss = 0.0
+        for i, data in enumerate(trainloader, 0):
+            # get the inputs
+            inputs, labels = data
+            # zero the parameter gradients
+            optimizer.zero_grad()
+
+            # forward + backward + optimize
+            outputs = net(inputs)
+            loss = criterion(outputs, labels)
+            loss.backward()
+            optimizer.step()
+
+            # print statistics
+            running_loss += loss.item()
+            if i % 2000 == 1999:    # print every 2000 mini-batches
+                print('[%d, %5d] loss: %.3f' %
+                      (epoch + 1, i + 1, running_loss / 2000))
+                running_loss = 0.0
+
+    print('Finished Training')
+    import os
+    os.makedirs('models', exist_ok=True)
+    torch.save(net.state_dict(), 'models/cifar_torchvision.pt')
+
+
+
+
+
[ ]:
+
+
+
def imshow(img):
+    img = img / 2 + 0.5     # unnormalize
+    npimg = img.numpy()
+    plt.imshow(np.transpose(npimg, (1, 2, 0)))
+    plt.show()
+
+dataiter = iter(testloader)
+images, labels = next(dataiter)
+
+# print images
+imshow(torchvision.utils.make_grid(images))
+print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
+
+
+outputs = net(images)
+
+_, predicted = torch.max(outputs, 1)
+
+print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
+                              for j in range(4)))
+
+
+
+
+
[ ]:
+
+
+
ind = 3
+
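+# Pick a single test image and enable gradient tracking on it, which the
+# gradient-based attribution methods below require.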
+input = images[ind].unsqueeze(0)
+input.requires_grad = True
+
+
+
+
+
[ ]:
+
+
+
net.eval()
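+# switch the model to evaluation mode before computing attributions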
+
+
+
+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.attributions import pt_attributions as attributions
+from captum.attr import visualization as viz
+
+# Handling the original image
+print('Original Image')
+print('Predicted:', classes[predicted[ind]], ' Probability:', F.softmax(outputs, 1)[ind].max().item())
+original_image = np.transpose((images[ind].cpu().detach().numpy() / 2) + 0.5, (1, 2, 0))
+viz.visualize_image_attr(None, original_image, method="original_image", title="Original Image")
+
+# Entry Points
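+# Each call below computes attributions of the target label for the input image and visualizes them.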
+attributions.saliency(net).visualize(input,labels[ind],original_image,"Saliency")
+attributions.integratedgradients(net).visualize(input,labels[ind],original_image,"Integrated Gradients")
+attributions.deeplift(net).visualize(input,labels[ind],original_image,"Deep Lift")
+attributions.smoothgrad(net).visualize(input,labels[ind],original_image,"Smooth Grad")
+attributions.featureablation(net).visualize(input,labels[ind],original_image,"Feature Ablation")
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/TorchVision_CIFAR_Interpret.ipynb b/v1.1.0/notebooks/TorchVision_CIFAR_Interpret.ipynb new file mode 100644 index 0000000..c0ede31 --- /dev/null +++ b/v1.1.0/notebooks/TorchVision_CIFAR_Interpret.ipynb @@ -0,0 +1,262 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import matplotlib.pyplot as plt\n", + "import matplotlib\n", + "import numpy as np\n", + "\n", + "%matplotlib inline\n", + "\n", + "# PyTorch\n", + "import torch\n", + "import torchvision\n", + "import torchvision.transforms as transforms\n", + "import torchvision.transforms.functional as TF\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F\n", + "from torchvision import models\n", + "import torch.optim as optim" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "transform = transforms.Compose(\n", + " [transforms.ToTensor(),\n", + " transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n", + "\n", + "trainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n", + " download=True, transform=transform)\n", + "trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n", + " shuffle=True, num_workers=2)\n", + "\n", + "testset = torchvision.datasets.CIFAR10(root='./data', train=False,\n", + " download=True, transform=transform)\n", + "testloader = torch.utils.data.DataLoader(testset, batch_size=4,\n", + " shuffle=False, num_workers=2)\n", + "\n", + "classes = ('plane', 'car', 'bird', 'cat',\n", + " 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "class Net(nn.Module):\n", + " def __init__(self):\n", + " super(Net, self).__init__()\n", + " self.conv1 = nn.Conv2d(3, 6, 5)\n", + " self.pool1 = nn.MaxPool2d(2, 2)\n", + " self.pool2 = nn.MaxPool2d(2, 2)\n", + " self.conv2 = nn.Conv2d(6, 16, 5)\n", + " self.fc1 = nn.Linear(16 * 5 * 5, 120)\n", + " self.fc2 = nn.Linear(120, 84)\n", + " self.fc3 = nn.Linear(84, 10)\n", + " self.relu1 = nn.ReLU()\n", + " self.relu2 = nn.ReLU()\n", + " self.relu3 = nn.ReLU()\n", + " self.relu4 = nn.ReLU()\n", + "\n", + " def forward(self, x):\n", + " x = self.pool1(self.relu1(self.conv1(x)))\n", + " x = self.pool2(self.relu2(self.conv2(x)))\n", + " x = x.view(-1, 16 * 5 * 5)\n", + " x = self.relu3(self.fc1(x))\n", + " x = self.relu4(self.fc2(x))\n", + " x = self.fc3(x)\n", + " return x\n", + "\n", + "\n", + "net = Net()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "criterion = nn.CrossEntropyLoss()\n", + "optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "USE_PRETRAINED_MODEL = False # if the model is not saved, set false\n", + "\n", + "if USE_PRETRAINED_MODEL:\n", + " print(\"Using existing trained model\")\n", + " from urllib.request import urlopen\n", + " import os.path\n", + " if os.path.isfile(\"models/cifar_torchvision.pt\"):\n", + " print(\"File found, will be loaded\") \n", + " 
net.load_state_dict(torch.load('models/cifar_torchvision.pt'))\n", + " else:\n", + " print(\"Please train the model first by setting USE_PRETRAINED_MODEL to False\")\n", + "else:\n", + " for epoch in range(1): # loop over the dataset multiple times\n", + "\n", + " running_loss = 0.0\n", + " for i, data in enumerate(trainloader, 0):\n", + " # get the inputs\n", + " inputs, labels = data\n", + " # zero the parameter gradients\n", + " optimizer.zero_grad()\n", + "\n", + " # forward + backward + optimize\n", + " outputs = net(inputs)\n", + " loss = criterion(outputs, labels)\n", + " loss.backward()\n", + " optimizer.step()\n", + "\n", + " # print statistics\n", + " running_loss += loss.item()\n", + " if i % 2000 == 1999: # print every 2000 mini-batches\n", + " print('[%d, %5d] loss: %.3f' %\n", + " (epoch + 1, i + 1, running_loss / 2000))\n", + " running_loss = 0.0\n", + "\n", + " print('Finished Training')\n", + " torch.save(net.state_dict(), 'cifar_torchvision.pt')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "def imshow(img, transpose = True):\n", + " img = img / 2 + 0.5 # unnormalize\n", + " npimg = img.numpy()\n", + " plt.imshow(np.transpose(npimg, (1, 2, 0)))\n", + " plt.show()\n", + "\n", + "dataiter = iter(testloader)\n", + "images, labels = next(dataiter)\n", + "\n", + "# print images\n", + "imshow(torchvision.utils.make_grid(images))\n", + "print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))\n", + "\n", + "\n", + "outputs = net(images)\n", + "\n", + "_, predicted = torch.max(outputs, 1)\n", + "\n", + "print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]\n", + " for j in range(4)))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "ind = 3\n", + "\n", + "input = images[ind].unsqueeze(0)\n", + "input.requires_grad = True" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "net.eval()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.attributions import pt_attributions as attributions\n", + "from captum.attr import visualization as viz\n", + "\n", + "# handeling Original Image\n", + "print('Original Image')\n", + "print('Predicted:', classes[predicted[ind]], ' Probability:', torch.max(F.softmax(outputs, 1)).item())\n", + "original_image = np.transpose((images[ind].cpu().detach().numpy() / 2) + 0.5, (1, 2, 0))\n", + "viz.visualize_image_attr(None, original_image, method=\"original_image\", title=\"Original Image\")\n", + "\n", + "# Entry Points\n", + "attributions.saliency(net).visualize(input,labels[ind],original_image,\"Saliency\")\n", + "attributions.integratedgradients(net).visualize(input,labels[ind],original_image,\"Integrated Gradients\")\n", + "attributions.deeplift(net).visualize(input,labels[ind],original_image,\"Deep Lift\")\n", + "attributions.smoothgrad(net).visualize(input,labels[ind],original_image,\"Smooth Grad\")\n", + "attributions.featureablation(net).visualize(input,labels[ind],original_image,\"Feature Ablation\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": 
"text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + }, + "vscode": { + "interpreter": { + "hash": "96236a153fd3d7caad1c1cb01382c242720ec562c4aea791607b97e2527b6a8f" + } + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/v1.1.0/notebooks/adult-pytorch-model-card.html b/v1.1.0/notebooks/adult-pytorch-model-card.html new file mode 100644 index 0000000..ee305f5 --- /dev/null +++ b/v1.1.0/notebooks/adult-pytorch-model-card.html @@ -0,0 +1,461 @@ + + + + + + + Generating Model Card with PyTorch — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Generating Model Card with PyTorch

+

This notebook provides an example of generating a model card for a PyTorch model using the Intel Model Card Generator.

+
    +
  1. Data Collection and Preprocessing from Adult Dataset

  2. +
  3. Build Multilayer Neural Network using PyTorch

  4. +
  5. Train Model

  6. +
  7. Save Model

  8. +
  9. Generate Model Card with Intel Model Card Generator

  10. +
+
+
[ ]:
+
+
+
import pandas as pd
+import numpy as np
+import torch
+import torch.nn as nn
+from torch.nn.functional import relu
+import os
+from sklearn.datasets import fetch_openml
+from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
+from intel_ai_safety.model_card_gen.datasets import PytorchDataset
+from torch.utils.data import Dataset
+
+
+
+
+

1. Data Collection and Preprocessing

+
+
[ ]:
+
+
+
CATEGORICAL_FEATURE_KEYS = [
+    'workclass',
+    'marital-status',
+    'occupation',
+    'relationship',
+    'race',
+    'sex',
+    'native-country',
+]
+
+NUMERIC_FEATURE_KEYS = [
+    'age',
+    'capital-gain',
+    'capital-loss',
+    'hours-per-week',
+    'education-num'
+]
+
+
+DROP_COLUMNS = ['fnlwgt', 'education']
+
+LABEL_KEY = 'label'
+
+
+
+
+

Fetch Data from OpenML

+
+
[ ]:
+
+
+
data = fetch_openml(data_id=1590, as_frame=True)
+raw_data = data.data
+raw_data['label'] = data.target
+adult_data = raw_data.copy()
+
+
+
+
+
+

Drop Unneeded Columns and Encode Features

+
+
[ ]:
+
+
+
adult_data = adult_data.drop(DROP_COLUMNS, axis=1)
+adult_data = pd.get_dummies(adult_data, columns=CATEGORICAL_FEATURE_KEYS)
+adult_data['label'] = adult_data['label'].map({'<=50K': 0, '>50K': 1})
+
+
+
+
+
+

Prepare Features and Labels

+
+
[ ]:
+
+
+
# Convert features and labels to numpy arrays.
+labels = adult_data['label'].to_numpy()
+adult_data = adult_data.drop(['label'], axis=1)
+feature_names = list(adult_data.columns)
+
+
+
+
+
[ ]:
+
+
+
class AdultDataset(Dataset):
+    """Face Landmarks dataset."""
+
+    def __init__(self, df, labels, transform=None):
+        self.data = self.make_input_tensor(df)
+        self.labels = self.make_label_tensor(labels)
+        self.transform = transform
+
+    def __len__(self):
+        return len(self.data)
+
+    def make_input_tensor(self, df):
+        return torch.from_numpy(df.to_numpy()).type(torch.FloatTensor)
+
+    def make_label_tensor(self, label_array):
+        return torch.from_numpy(label_array)
+
+    def __getitem__(self, idx):
+        if torch.is_tensor(idx):
+            idx = idx.tolist()
+        sample = self.data[idx]
+        label = self.labels[idx]
+        if self.transform:
+            sample = self.transform(sample)
+        return sample, label
+
+
+
+
+
[ ]:
+
+
+
adult_dataset = AdultDataset(adult_data, labels)
+
+
+
+
+
+
+

2. Build Model

+
+
[ ]:
+
+
+
class AdultNN(nn.Module):
+
+    def __init__(self, num_features, num_classes):
+        super().__init__()
+        self.num_features = num_features
+        self.num_classes = num_classes
+
+        self.lin1 = torch.nn.Linear(self.num_features, 150)
+        self.lin4 = torch.nn.Linear(150, 150)
+        self.lin10 = torch.nn.Linear(150, self.num_classes)
+
+        self.dropout = nn.Dropout(0.25)
+
+    def forward(self, xin):
+        x = relu(self.lin1(xin))
+        x = relu(self.lin4(x))
+        x = self.dropout(x)
+        x = relu(self.lin10(x))
+        return x
+
+
+
+
+
[ ]:
+
+
+
torch.manual_seed(1)  # Set seed for reproducibility.
+
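+# Note: this redefinition replaces the AdultNN class defined above; the simpler
+# sigmoid network below is the one that is actually trained in section 3.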
+class AdultNN(nn.Module):
+    def __init__(self, feature_size, num_labels):
+        super().__init__()
+        self.linear1 = nn.Linear(feature_size, feature_size)
+        self.sigmoid1 = nn.Sigmoid()
+        self.linear2 = nn.Linear(feature_size, 8)
+        self.sigmoid2 = nn.Sigmoid()
+        self.linear3 = nn.Linear(8, 2)
+        self.softmax = nn.Softmax(dim=1)
+
+    def forward(self, x):
+        lin1_out = self.linear1(x)
+        sigmoid_out1 = self.sigmoid1(lin1_out)
+        sigmoid_out2 = self.sigmoid2(self.linear2(sigmoid_out1))
+        return self.softmax(self.linear3(sigmoid_out2))
+
+
+
+
+
+

3. Train Model

+
+
[ ]:
+
+
+
net = AdultNN(len(feature_names), 2)
+
+criterion = nn.CrossEntropyLoss()
+num_epochs = 500
+
+optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
+input_tensor, label_tensor = adult_dataset[:]
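+# Full-batch training: every epoch performs one forward/backward pass over the entire dataset.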
+for epoch in range(num_epochs):
+    output = net(input_tensor)
+    loss = criterion(output, label_tensor)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    if epoch % 20 == 0:
+        print('Epoch {}/{} => Loss: {:.2f}'.format(epoch+1, num_epochs, loss.item()))
+
+
+
+
+
+

4. Save Model

+

Save an offline TorchScript version of the model

+
+
[ ]:
+
+
+
torch.jit.save(torch.jit.script(net), 'adult_model.pt')
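+# TorchScript serialization lets the model be loaded later without the Python class definition.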
+
+
+
+
+
+

5. Generate Model Card

+
+

EvalConfig Input

+
+
[ ]:
+
+
+
_eval_config = 'eval_config.proto'
+
+
+
+
+
[ ]:
+
+
+
%%writefile {_eval_config}
+
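+# TFMA EvalConfig: binary accuracy, AUC, a confusion matrix plot, and fairness
+# indicators, computed overall (empty slicing_spec) and sliced by sex_Female.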
+model_specs {
+    label_key: 'label'
+    prediction_key: 'prediction'
+  }
+metrics_specs {
+    metrics {class_name: "BinaryAccuracy"}
+    metrics {class_name: "AUC"}
+    metrics {class_name: "ConfusionMatrixPlot"}
+#     metrics {class_name: "ConfusionMatrixAtThresholds"}
+    metrics {
+      class_name: "FairnessIndicators"
+#       config: '{"thresholds": [0.25, 0.5, 0.75]}'
+    }
+  }
+slicing_specs {}
+slicing_specs {
+        feature_keys: 'sex_Female'
+#         feature_keys: 'sex_Male'
+  }
+options {
+    include_default_metrics { value: false }
+  }
+
+
+
+
+
[ ]:
+
+
+
mc = {
+    "schema_version": "0.0.1",
+    "model_details": {
+        "name": "Adult Multilayer Neural Network",
+        "version": {
+            "name": "0.1",
+            "date": "2022-08-01"
+        },
+        "graphics": {},
+
+        "citations": [
+             {
+                "citation": 'Simoudis, Evangelos, Jiawei Han, and Usama Fayyad. Proceedings of the second international conference on knowledge discovery & data mining. No. CONF-960830-. AAAI Press, Menlo Park, CA (United States), 1996.'
+             },
+            {
+                "citation": 'Friedler, Sorelle A., et al. "A Comparative Study of Fairness-Enhancing Interventions in Machine Learning." Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, https://doi.org/10.1145/3287560.3287589.'
+            },
+            {
+                "citation": 'Lahoti, Preethi, et al. "Fairness without demographics through adversarially reweighted learning." Advances in neural information processing systems 33 (2020): 728-740.'
+            }
+        ],
+        "overview": 'This example model card is for a multilayer network trained "Adult" dataset from the UCI repository with the learning task of predicting whether a person has a salary greater or less than $50,000.',
+    }
+}
+
+
+
+
+
[ ]:
+
+
+
train_dataset = PytorchDataset(AdultDataset(adult_data, labels), feature_names=adult_data.columns)
+
+
+
+
+
[ ]:
+
+
+
mcg = ModelCardGen.generate(data_sets={'train': train_dataset},
+                      model_path='adult_model.pt',
+                      eval_config=_eval_config,
+                      model_card=mc)
+
+
+
+
+
[ ]:
+
+
+
mcg.export_html('census_mc.html')
+
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/adult-pytorch-model-card.ipynb b/v1.1.0/notebooks/adult-pytorch-model-card.ipynb new file mode 100644 index 0000000..a03cb64 --- /dev/null +++ b/v1.1.0/notebooks/adult-pytorch-model-card.ipynb @@ -0,0 +1,463 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "cd16e50d", + "metadata": {}, + "source": [ + "# Generating Model Card with PyTorch" + ] + }, + { + "cell_type": "markdown", + "id": "3a116383", + "metadata": {}, + "source": [ + "This notebook intends to provide an example of generating a model card for a PyTorch model using Intel Model Card Generator.\n", + "\n", + " 1. [Data Collection and Prerpocessing from Adult Dataset](#1.-Data-Collection-and-Prerpocessing)\n", + " 2. [Build Multilayer Neural NetWork using PyTorch](#2.-Build-Model)\n", + " 3. [Train Model](#3.-Train-Model)\n", + " 4. [Save Model](#4.-Save-Model)\n", + " 5. [Generate Model Card with Intel Model Card Generator](#5.-Generate-Model-Card)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "863845f1", + "metadata": {}, + "outputs": [], + "source": [ + "import pandas as pd\n", + "import numpy as np\n", + "import torch\n", + "import torch.nn as nn\n", + "from torch.nn.functional import relu\n", + "import os\n", + "from sklearn.datasets import fetch_openml\n", + "from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen\n", + "from intel_ai_safety.model_card_gen.datasets import PytorchDataset\n", + "from torch.utils.data import Dataset" + ] + }, + { + "cell_type": "markdown", + "id": "8a5a1e7a", + "metadata": {}, + "source": [ + "## 1. Data Collection and Preprocessing" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f9a4f04f", + "metadata": {}, + "outputs": [], + "source": [ + "CATEGORICAL_FEATURE_KEYS = [\n", + " 'workclass',\n", + " 'marital-status',\n", + " 'occupation',\n", + " 'relationship',\n", + " 'race',\n", + " 'sex',\n", + " 'native-country',\n", + "]\n", + "\n", + "NUMERIC_FEATURE_KEYS = [\n", + " 'age',\n", + " 'capital-gain',\n", + " 'capital-loss',\n", + " 'hours-per-week',\n", + " 'education-num'\n", + "]\n", + "\n", + "\n", + "DROP_COLUMNS = ['fnlwgt', 'education']\n", + "\n", + "LABEL_KEY = 'label'" + ] + }, + { + "cell_type": "markdown", + "id": "69777d31", + "metadata": {}, + "source": [ + "#### Fetch Data from OpenML" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "92436295", + "metadata": {}, + "outputs": [], + "source": [ + "data = fetch_openml(data_id=1590, as_frame=True)\n", + "raw_data = data.data\n", + "raw_data['label'] = data.target\n", + "adult_data = raw_data.copy()" + ] + }, + { + "cell_type": "markdown", + "id": "a0595f01", + "metadata": {}, + "source": [ + "#### Drop Unneeded Columns" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9657c9ab", + "metadata": {}, + "outputs": [], + "source": [ + "adult_data = adult_data.drop(DROP_COLUMNS, axis=1)\n", + "adult_data = pd.get_dummies(adult_data, columns=CATEGORICAL_FEATURE_KEYS)\n", + "adult_data['label'] = adult_data['label'].map({'<=50K': 0, '>50K': 1})" + ] + }, + { + "cell_type": "markdown", + "id": "74bdd119", + "metadata": {}, + "source": [ + "#### Train Test Split" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c4d12735", + "metadata": {}, + "outputs": [], + "source": [ + "# Convert features and labels to numpy arrays.\n", + "labels = adult_data['label'].to_numpy()\n", + "adult_data = adult_data.drop(['label'], axis=1)\n", + 
"feature_names = list(adult_data.columns)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4d71fa28", + "metadata": {}, + "outputs": [], + "source": [ + "class AdultDataset(Dataset):\n", + " \"\"\"Face Landmarks dataset.\"\"\"\n", + "\n", + " def __init__(self, df, labels, transform=None):\n", + " self.data = self.make_input_tensor(df)\n", + " self.labels = self.make_label_tensor(labels)\n", + " self.transform = transform\n", + "\n", + " def __len__(self):\n", + " return len(self.adult_df)\n", + " \n", + " def make_input_tensor(self, df):\n", + " return torch.from_numpy(df.to_numpy()).type(torch.FloatTensor)\n", + " \n", + " def make_label_tensor(self, label_array):\n", + " return torch.from_numpy(label_array)\n", + "\n", + " def __getitem__(self, idx):\n", + " if torch.is_tensor(idx):\n", + " idx = idx.tolist()\n", + " sample = self.data[idx]\n", + " label = self.labels[idx]\n", + " if self.transform:\n", + " sample = self.transform(sample)\n", + " return sample, label" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9d6f1472", + "metadata": {}, + "outputs": [], + "source": [ + "adult_dataset = AdultDataset(adult_data, labels)" + ] + }, + { + "cell_type": "markdown", + "id": "0ae44650", + "metadata": {}, + "source": [ + "## 2. Build Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "62bdb925", + "metadata": {}, + "outputs": [], + "source": [ + "class AdultNN(nn.Module):\n", + " \n", + " def __init__(self, num_features, num_classes):\n", + " super().__init__()\n", + " self.num_features = num_features\n", + " self.num_classes = num_classes\n", + " \n", + " self.lin1 = torch.nn.Linear(self.num_features, 150) \n", + " self.lin2 = torch.nn.Linear(50, 50) \n", + " self.lin3 = torch.nn.Linear(50, 50)\n", + " \n", + " self.lin4 = torch.nn.Linear(150, 150) \n", + " \n", + " self.lin5 = torch.nn.Linear(50, 50) \n", + " self.lin6 = torch.nn.Linear(50, 50)\n", + " self.lin10 = torch.nn.Linear(150, self.num_classes)\n", + " \n", + " self.prelu = nn.PReLU()\n", + " self.dropout = nn.Dropout(0.25)\n", + "\n", + " def forward(self, xin):\n", + " x = relu(self.lin1(xin))\n", + " x = relu(self.lin4(x)) \n", + " x = self.dropout(x)\n", + " x = relu(self.lin10(x)) \n", + " return x" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ee07279f", + "metadata": {}, + "outputs": [], + "source": [ + "torch.manual_seed(1) # Set seed for reproducibility.\n", + "\n", + "class AdultNN(nn.Module):\n", + " def __init__(self, feature_size, num_labels):\n", + " super().__init__()\n", + " self.linear1 = nn.Linear(feature_size, feature_size)\n", + " self.sigmoid1 = nn.Sigmoid()\n", + " self.linear2 = nn.Linear(feature_size, 8)\n", + " self.sigmoid2 = nn.Sigmoid()\n", + " self.linear3 = nn.Linear(8, 2)\n", + " self.softmax = nn.Softmax(dim=1)\n", + "\n", + " def forward(self, x):\n", + " lin1_out = self.linear1(x)\n", + " sigmoid_out1 = self.sigmoid1(lin1_out)\n", + " sigmoid_out2 = self.sigmoid2(self.linear2(sigmoid_out1))\n", + " return self.softmax(self.linear3(sigmoid_out2))" + ] + }, + { + "cell_type": "markdown", + "id": "488da543", + "metadata": {}, + "source": [ + "## 3. 
Train Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a6e0a24c", + "metadata": {}, + "outputs": [], + "source": [ + "net = AdultNN(len(feature_names), 2)\n", + "\n", + "criterion = nn.CrossEntropyLoss()\n", + "num_epochs = 500\n", + "\n", + "optimizer = torch.optim.Adam(net.parameters(), lr=0.001)\n", + "input_tensor, label_tensor = adult_dataset[:]\n", + "for epoch in range(num_epochs): \n", + " output = net(input_tensor)\n", + " loss = criterion(output, label_tensor)\n", + " optimizer.zero_grad()\n", + " loss.backward()\n", + " optimizer.step()\n", + " if epoch % 20 == 0:\n", + " print ('Epoch {}/{} => Loss: {:.2f}'.format(epoch+1, num_epochs, loss.item()))" + ] + }, + { + "cell_type": "markdown", + "id": "84abfd7a", + "metadata": {}, + "source": [ + "## 4. Save Model" + ] + }, + { + "cell_type": "markdown", + "id": "30eb0b42", + "metadata": {}, + "source": [ + "Save offline version of our module" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c9ac4e8d", + "metadata": {}, + "outputs": [], + "source": [ + "torch.jit.save(torch.jit.script(net), 'adult_model.pt')" + ] + }, + { + "cell_type": "markdown", + "id": "91af620e", + "metadata": {}, + "source": [ + "## 5. Generate Model Card" + ] + }, + { + "cell_type": "markdown", + "id": "2967b587", + "metadata": {}, + "source": [ + "#### EvalConfig Input" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ed9522a3", + "metadata": {}, + "outputs": [], + "source": [ + "_eval_config = 'eval_config.proto'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1f0f8609", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile {_eval_config}\n", + "\n", + "model_specs {\n", + " label_key: 'label'\n", + " prediction_key: 'prediction'\n", + " }\n", + "metrics_specs {\n", + " metrics {class_name: \"BinaryAccuracy\"}\n", + " metrics {class_name: \"AUC\"}\n", + " metrics {class_name: \"ConfusionMatrixPlot\"}\n", + "# metrics {class_name: \"ConfusionMatrixAtThresholds\"}\n", + " metrics {\n", + " class_name: \"FairnessIndicators\"\n", + "# config: '{\"thresholds\": [0.25, 0.5, 0.75]}'\n", + " }\n", + " }\n", + "slicing_specs {}\n", + "slicing_specs {\n", + " feature_keys: 'sex_Female'\n", + "# feature_keys: 'sex_Male'\n", + " }\n", + "options {\n", + " include_default_metrics { value: false }\n", + " }" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "12cc26dc", + "metadata": {}, + "outputs": [], + "source": [ + "mc = {\n", + " \"schema_version\": \"0.0.1\",\n", + " \"model_details\": {\n", + " \"name\": \"Adult Multilayer Neural Network\",\n", + " \"version\": {\n", + " \"name\": \"0.1\",\n", + " \"date\": \"2022-08-01\"\n", + " },\n", + " \"graphics\": {},\n", + "\n", + " \"citations\": [\n", + " {\n", + " \"citation\": 'Simoudis, Evangelos, Jiawei Han, and Usama Fayyad. Proceedings of the second international conference on knowledge discovery & data mining. No. CONF-960830-. AAAI Press, Menlo Park, CA (United States), 1996.'\n", + " },\n", + " {\n", + " \"citation\": 'Friedler, Sorelle A., et al. \"A Comparative Study of Fairness-Enhancing Interventions in Machine Learning.\" Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, https://doi.org/10.1145/3287560.3287589.'\n", + " },\n", + " {\n", + " \"citation\": 'Lahoti, Preethi, et al. 
\"Fairness without demographics through adversarially reweighted learning.\" Advances in neural information processing systems 33 (2020): 728-740.'\n", + " }\n", + " ],\n", + " \"overview\": 'This example model card is for a multilayer network trained \"Adult\" dataset from the UCI repository with the learning task of predicting whether a person has a salary greater or less than $50,000.',\n", + " }\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c36e9439", + "metadata": {}, + "outputs": [], + "source": [ + "train_dataset = PytorchDataset(AdultDataset(adult_data, labels), feature_names=adult_data.columns)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "51d56b07", + "metadata": {}, + "outputs": [], + "source": [ + "mcg = ModelCardGen.generate(data_sets={'train': train_dataset},\n", + " model_path='adult_model.pt', \n", + " eval_config=_eval_config,\n", + " model_card=mc)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f9794efc", + "metadata": {}, + "outputs": [], + "source": [ + "mcg.export_html('census_mc.html')" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/notebooks/compas-model-card-tfx.html b/v1.1.0/notebooks/compas-model-card-tfx.html new file mode 100644 index 0000000..bc1a659 --- /dev/null +++ b/v1.1.0/notebooks/compas-model-card-tfx.html @@ -0,0 +1,954 @@ + + + + + + + Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • + View page source +
  • +
+
+
+
+
+ +
+

Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators

+

In this notebook we will build a TFX pipeline to create a proxy model for COMPAS (originally published by Tensorflow Authors). First, we will train a tf.estimator with a defined eval_input_receiver_fn. This will allow us to run user-defined metrics with tensorflow-model-analysis on serialized tf.Example records.

+

After this pipeline has been created, we will show how Intel’s ModelCardGen class can take this tf.estimator, in the form of a SavedModel and TFRecords, to create a Model Card with interactive graphics.

+
+

Install Dependencies

+
+
[ ]:
+
+
+
!python -m pip install --no-cache-dir --no-deps \
+    docker==7.0.0 \
+    keras-tuner==1.4.7 \
+    kubernetes==29.0.0 \
+    ml-metadata==1.14.0 \
+    portpicker==1.6.0 \
+    tensorflow-transform==1.14.0 \
+    tfx==1.14.0
+
+
+
+
+
[ ]:
+
+
+
!mkdir -p compas/data/train compas/data/eval
+
+
+
+
+
+

Import Libraries

+
+
[ ]:
+
+
+
import os
+import tempfile
+import pandas as pd
+from sklearn.model_selection import train_test_split
+
+# Intel Model Card Generator
+from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
+from intel_ai_safety.model_card_gen.datasets import TensorflowDataset
+
+
+
+
+

Download and preprocess the dataset

+

The COMPAS dataset is a common case study in the ML fairness literature1, 2, 3, where it is used to apply techniques for identifying and remediating issues around fairness. ___

+
    +
  1. Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.

  2. +
  3. Chouldechova, A., G’Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.

  4. +
  5. Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.

  6. +
+
+
[ ]:
+
+
+
# Download the COMPAS dataset and setup the required filepaths.
+_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
+_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
+_DATA_FILEPATH = os.path.join('compas', 'data')
+
+_COMPAS_DF = pd.read_csv(_DATA_PATH)
+
+# To simplify the case study, we will only use the columns that will be used for
+# our model.
+_COLUMN_NAMES = [
+  'age',
+  'c_charge_desc',
+  'c_charge_degree',
+  'c_days_from_compas',
+  'is_recid',            # ground truth
+  'juv_fel_count',
+  'juv_misd_count',
+  'juv_other_count',
+  'priors_count',
+  'r_days_from_arrest',
+  'race',
+  'sex',
+  'vr_charge_desc',
+  'score_text',          # COMPAS prediction
+]
+
+_GROUND_TRUTH = 'is_recid'
+_COMPAS_SCORE = 'score_text'
+
+_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]
+
+# We will use 'is_recid' as our ground truth label, a boolean value
+# indicating whether a defendant committed another crime. Some rows contain -1,
+# indicating missing data; we drop these rows before training.
+_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]
+_COMPAS_DF = _COMPAS_DF.dropna(subset=['score_text'])
+_COMPAS_DF['score_text'] = _COMPAS_DF.score_text.map({'Low': 0, 'High': 1, 'Medium': 1})
+# is_recid field is ground truth to create a COMPAS proxy we will need to train on score_text
+# _COMPAS_DF = _COMPAS_DF.rename(columns={'is_recid': 'ground_truth', 'score_text': 'compas_score'})
+
+# Given the distribution between races in this dataset, we will only focus on
+# recidivism for African-Americans and Caucasians.
+_COMPAS_DF = _COMPAS_DF[
+  _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]
+
+X  = _COMPAS_DF[_COLUMN_NAMES]
+# To create a COMPAS proxy we would train on score_text (not to be confused with the ground truth is_recid field)
+# y = _COMPAS_DF[[_COMPAS_SCORE]]
+
+X_train, X_test = train_test_split(X, test_size=0.33, random_state=42)
+
+# Write the train/eval splits back to CSV files for our TFX pipeline.
+X_train.to_csv(os.path.join(_DATA_FILEPATH, 'train', 'train.csv'), index=False, na_rep='')
+X_test.to_csv(os.path.join(_DATA_FILEPATH, 'eval', 'eval.csv'), index=False, na_rep='')
+
+
+
+
+
+
+
+

TFX Pipeline Scripts

+

We opt to create a custom pipeline script so that we can transform the data and train a model, saving both as artifacts to use as inputs to the Model Card Generator.

+
+
[ ]:
+
+
+
_transformer_path = os.path.join('compas', 'transformer.py')
+
+
+
+
+
[ ]:
+
+
+
%%writefile {_transformer_path}
+import tensorflow as tf
+import tensorflow_transform as tft
+
+CATEGORICAL_FEATURE_KEYS = [
+    'sex',
+    'race',
+    'c_charge_desc',
+    'c_charge_degree',
+]
+
+INT_FEATURE_KEYS = [
+    'age',
+    'c_days_from_compas',
+    'juv_fel_count',
+    'juv_misd_count',
+    'juv_other_count',
+    'priors_count',
+]
+
+LABEL_KEY = 'is_recid'
+
+# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
+MAX_CATEGORICAL_FEATURE_VALUES = [
+    2,
+    6,
+    513,
+    14,
+]
+
+
+def transformed_name(key):
+  return '{}_xf'.format(key)
+
+
+def preprocessing_fn(inputs):
+  """tf.transform's callback function for preprocessing inputs.
+
+  Args:
+    inputs: Map from feature keys to raw features.
+
+  Returns:
+    Map from string feature key to transformed feature operations.
+  """
+  outputs = {}
+  for key in CATEGORICAL_FEATURE_KEYS:
+    outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
+        _fill_in_missing(inputs[key]),
+        vocab_filename=key)
+
+  for key in INT_FEATURE_KEYS:
+    outputs[transformed_name(key)] = tft.scale_to_z_score(
+        _fill_in_missing(inputs[key]))
+
+  # The target label indicates whether the defendant was charged with another crime.
+  outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
+  return outputs
+
+
+def _fill_in_missing(tensor_value):
+  """Replaces a missing values in a SparseTensor.
+
+  Fills in missing values of `tensor_value` with '' or 0, and converts to a
+  dense tensor.
+
+  Args:
+    tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
+      at most 1 in the second dimension.
+
+  Returns:
+    A rank 1 tensor where missing values of `tensor_value` are filled in.
+  """
+  if not isinstance(tensor_value, tf.sparse.SparseTensor):
+    return tensor_value
+  default_value = '' if tensor_value.dtype == tf.string else 0
+  sparse_tensor = tf.SparseTensor(
+      tensor_value.indices,
+      tensor_value.values,
+      [tensor_value.dense_shape[0], 1])
+  dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
+  return tf.squeeze(dense_tensor, axis=1)
+
+
+
+
+
[ ]:
+
+
+
_trainer_path = os.path.join('compas', 'trainer.py')
+
+
+
+
+
[ ]:
+
+
+
%%writefile {_trainer_path}
+
+import tensorflow as tf
+import tensorflow_model_analysis as tfma
+import tensorflow_transform as tft
+from tensorflow_transform.tf_metadata import schema_utils
+
+from transformer import *
+
+_BATCH_SIZE = 1000
+_LEARNING_RATE = 0.00001
+_MAX_CHECKPOINTS = 1
+_SAVE_CHECKPOINT_STEPS = 999
+
+
+def transformed_names(keys):
+  return [transformed_name(key) for key in keys]
+
+
+def transformed_name(key):
+  return '{}_xf'.format(key)
+
+
+def _gzip_reader_fn(filenames):
+  """Returns a record reader that can read gzip'ed files.
+
+  Args:
+    filenames: A tf.string tensor or tf.data.Dataset containing one or more
+      filenames.
+
+  Returns:
+    A TFRecordDataset that reads the gzip'ed TFRecord files.
+  """
+  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
+
+
+# Tf.Transform considers these features as "raw".
+def _get_raw_feature_spec(schema):
+  """Generates a feature spec from a Schema proto.
+
+  Args:
+    schema: A Schema proto.
+
+  Returns:
+    A feature spec defined as a dict whose keys are feature names and values are
+      instances of FixedLenFeature, VarLenFeature or SparseFeature.
+  """
+  return schema_utils.schema_as_feature_spec(schema).feature_spec
+
+
+def _example_serving_receiver_fn(tf_transform_output, schema):
+  """Builds the serving in inputs.
+
+  Args:
+    tf_transform_output: A TFTransformOutput.
+    schema: the schema of the input data.
+
+  Returns:
+    TensorFlow graph which parses examples, applying tf-transform to them.
+  """
+  raw_feature_spec = _get_raw_feature_spec(schema)
+  raw_feature_spec.pop(LABEL_KEY)
+
+  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
+      raw_feature_spec)
+  serving_input_receiver = raw_input_fn()
+
+  transformed_features = tf_transform_output.transform_raw_features(
+      serving_input_receiver.features)
+  transformed_features.pop(transformed_name(LABEL_KEY))
+  return tf.estimator.export.ServingInputReceiver(
+      transformed_features, serving_input_receiver.receiver_tensors)
+
+
+def _eval_input_receiver_fn(tf_transform_output, schema):
+  """Builds everything needed for the tf-model-analysis to run the model.
+
+  Args:
+    tf_transform_output: A TFTransformOutput.
+    schema: the schema of the input data.
+
+  Returns:
+    EvalInputReceiver function, which contains:
+      - TensorFlow graph which parses raw untransformed features, applies the
+          tf-transform preprocessing operators.
+      - Set of raw, untransformed features.
+      - Label against which predictions will be compared.
+  """
+  # Notice that the inputs are raw features, not transformed features here.
+  raw_feature_spec = _get_raw_feature_spec(schema)
+
+  serialized_tf_example = tf.compat.v1.placeholder(
+      dtype=tf.string, shape=[None], name='input_example_tensor')
+
+  # Add a parse_example operator to the tensorflow graph, which will parse
+  # raw, untransformed, tf examples.
+  features = tf.io.parse_example(
+      serialized=serialized_tf_example, features=raw_feature_spec)
+
+  transformed_features = tf_transform_output.transform_raw_features(features)
+  labels = transformed_features.pop(transformed_name(LABEL_KEY))
+
+  receiver_tensors = {'examples': serialized_tf_example}
+
+  return tfma.export.EvalInputReceiver(
+      features=transformed_features,
+      receiver_tensors=receiver_tensors,
+      labels=labels)
+
+
+def _input_fn(filenames, tf_transform_output, batch_size=200):
+  """Generates features and labels for training or evaluation.
+
+  Args:
+    filenames: List of CSV files to read data from.
+    tf_transform_output: A TFTransformOutput.
+    batch_size: First dimension size of the Tensors returned by input_fn.
+
+  Returns:
+    A (features, indices) tuple where features is a dictionary of
+      Tensors, and indices is a single Tensor of label indices.
+  """
+  transformed_feature_spec = (
+      tf_transform_output.transformed_feature_spec().copy())
+
+  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
+      filenames,
+      batch_size,
+      transformed_feature_spec,
+      shuffle=False,
+      reader=_gzip_reader_fn)
+
+  transformed_features = dataset.make_one_shot_iterator().get_next()
+
+  # We pop the label because we do not want to use it as a feature while we're
+  # training.
+  return transformed_features, transformed_features.pop(
+      transformed_name(LABEL_KEY))
+
+
+def _keras_model_builder():
+  """Build a keras model for COMPAS dataset classification.
+
+  Returns:
+    A compiled Keras model.
+  """
+  feature_columns = []
+  feature_layer_inputs = {}
+
+  for key in transformed_names(INT_FEATURE_KEYS):
+    feature_columns.append(tf.feature_column.numeric_column(key))
+    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
+
+  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
+                              MAX_CATEGORICAL_FEATURE_VALUES):
+    feature_columns.append(
+        tf.feature_column.indicator_column(
+            tf.feature_column.categorical_column_with_identity(
+                key, num_buckets=num_buckets)))
+    feature_layer_inputs[key] = tf.keras.Input(
+        shape=(1,), name=key, dtype=tf.dtypes.int32)
+
+  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
+  feature_layer_outputs = feature_columns_input(feature_layer_inputs)
+
+  dense_layers = tf.keras.layers.Dense(
+      20, activation='relu', name='dense_1')(feature_layer_outputs)
+  dense_layers = tf.keras.layers.Dense(
+      10, activation='relu', name='dense_2')(dense_layers)
+  output = tf.keras.layers.Dense(
+      1, name='predictions')(dense_layers)
+
+  model = tf.keras.Model(
+      inputs=[v for v in feature_layer_inputs.values()], outputs=output)
+
+  model.compile(
+      loss=tf.keras.losses.MeanAbsoluteError(),
+      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
+
+  return model
+
+
+# TFX will call this function.
+def trainer_fn(hparams, schema):
+  """Build the estimator using the high level API.
+
+  Args:
+    hparams: Hyperparameters used to train the model as name/value pairs.
+    schema: Holds the schema of the training examples.
+
+  Returns:
+    A dict of the following:
+      - estimator: The estimator that will be used for training and eval.
+      - train_spec: Spec for training.
+      - eval_spec: Spec for eval.
+      - eval_input_receiver_fn: Input function for eval.
+  """
+  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
+
+  train_input_fn = lambda: _input_fn(
+      hparams.train_files,
+      tf_transform_output,
+      batch_size=_BATCH_SIZE)
+
+  eval_input_fn = lambda: _input_fn(
+      hparams.eval_files,
+      tf_transform_output,
+      batch_size=_BATCH_SIZE)
+
+  train_spec = tf.estimator.TrainSpec(
+      train_input_fn,
+      max_steps=hparams.train_steps)
+
+  serving_receiver_fn = lambda: _example_serving_receiver_fn(
+      tf_transform_output, schema)
+
+  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
+  eval_spec = tf.estimator.EvalSpec(
+      eval_input_fn,
+      steps=hparams.eval_steps,
+      exporters=[exporter],
+      name='compas-eval')
+
+  run_config = tf.estimator.RunConfig(
+      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
+      keep_checkpoint_max=_MAX_CHECKPOINTS)
+
+  run_config = run_config.replace(model_dir=hparams.serving_model_dir)
+
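+  # Wrap the compiled Keras model as a tf.estimator so it plugs into the TFX Trainer.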
+  estimator = tf.keras.estimator.model_to_estimator(
+      keras_model=_keras_model_builder(), config=run_config)
+
+  # Create an input receiver for TFMA processing.
+  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
+
+  return {
+      'estimator': estimator,
+      'train_spec': train_spec,
+      'eval_spec': eval_spec,
+      'eval_input_receiver_fn': receiver_fn
+  }
+
+
+
+
+
[ ]:
+
+
+
_pipeline_path = os.path.join('compas', 'pipeline.py')
+
+
+
+
+
[ ]:
+
+
+
%%writefile {_pipeline_path}
+
+from typing import Optional
+import os
+
+import absl
+import tensorflow_model_analysis as tfma
+from tfx import v1 as tfx
+from tfx.components import (CsvExampleGen,
+                            Evaluator,
+                            Pusher,
+                            SchemaGen,
+                            StatisticsGen,
+                            Trainer,
+                            Transform)
+
+from tfx.components.trainer.executor import Executor
+from tfx.dsl.components.base import executor_spec
+
+from tfx.orchestration import pipeline
+from tfx.orchestration import metadata
+from tfx.proto import pusher_pb2
+from tfx.proto import trainer_pb2
+from tfx.proto import example_gen_pb2
+from tfx.orchestration.local.local_dag_runner import LocalDagRunner
+
+_pipeline_name = 'compas'
+_compas_root = os.path.join('.', 'compas')
+_data_path = os.path.join(_compas_root, 'data')
+# Python module file to inject customized logic into the TFX components. The
+# Transform and Trainer both require user-defined functions to run successfully.
+_transformer_file = os.path.join(_compas_root, 'transformer.py')
+_trainer_file = os.path.join(_compas_root, 'trainer.py')
+# Path which can be listened to by the model server.  Pusher will output the
+# trained model here.
+_serving_model_dir = os.path.join(_compas_root, 'serving_model', _pipeline_name)
+
+# Directory and data locations.  This example keeps all pipeline artifacts and
+# metadata under the local 'compas' directory, but you can store these files
+# anywhere on your local filesystem.
+_tfx_root = os.path.join('compas', 'tfx')
+_pipeline_root = os.path.join(_tfx_root, 'pipelines', _pipeline_name)
+# Sqlite ML-metadata db path.
+_metadata_path = os.path.join(_tfx_root, 'metadata', _pipeline_name,
+                              'metadata.db')
+
+def create_pipeline(
+    pipeline_name: str,
+    pipeline_root: str,
+    data_path: str,
+    preprocessing_module_file: str,
+    trainer_module_file: str,
+    train_args: tfx.proto.TrainArgs,
+    eval_args: tfx.proto.EvalArgs,
+    serving_model_dir: str,
+    metadata_path: str,
+    schema_path: Optional[str] = None,
+) -> tfx.dsl.Pipeline:
+  """Implements the compass pipeline with TFX."""
+
+  # Brings data into the pipeline or otherwise joins/converts training data.
+
+  input_config = tfx.proto.Input(splits=[
+                example_gen_pb2.Input.Split(name='train', pattern='train/*'),
+                example_gen_pb2.Input.Split(name='eval', pattern='eval/*')
+            ])
+  example_gen = CsvExampleGen(input_base=data_path, input_config=input_config)
+
+  # Computes statistics over data for visualization and example validation.
+  statistics_gen = StatisticsGen(
+      examples=example_gen.outputs['examples'])
+
+  if schema_path is None:
+    # Generates schema based on statistics files.
+    schema_gen = SchemaGen(
+        statistics=statistics_gen.outputs['statistics'])
+  else:
+    # Import user provided schema into the pipeline.
+    schema_gen = tfx.components.ImportSchemaGen(schema_file=schema_path)
+
+
+  # Performs transformations and feature engineering in training and serving.
+  transform = Transform(
+      examples=example_gen.outputs['examples'],
+      schema=schema_gen.outputs['schema'],
+      module_file=os.path.abspath(preprocessing_module_file))
+
+  # Uses user-provided Python function that implements a model.
+  trainer_args = {
+      'module_file': trainer_module_file,
+      'examples': transform.outputs['transformed_examples'],
+      'schema': schema_gen.outputs['schema'],
+      'custom_executor_spec' : executor_spec.ExecutorClassSpec(Executor),
+      'transform_graph': transform.outputs['transform_graph'],
+      'train_args': train_args,
+      'eval_args': eval_args,
+  }
+  trainer = Trainer(**trainer_args)
+
+  # Uses TFMA to compute evaluation statistics over features of a model and
+  # perform quality validation of a candidate model (compared to a baseline).
+  eval_config = tfma.EvalConfig(
+      model_specs=[
+          tfma.ModelSpec(
+              label_key='is_recid')
+      ],
+      slicing_specs=[
+          tfma.SlicingSpec(
+              feature_keys=['race'])
+      ],
+      metrics_specs=[
+          tfma.MetricsSpec(metrics=[
+              tfma.MetricConfig(
+                  class_name='BinaryAccuracy'),
+              tfma.MetricConfig(
+                  class_name='AUC'),
+              tfma.MetricConfig(
+                  class_name='FairnessIndicators',
+                  config='{"thresholds": [0.25, 0.5, 0.75]}')
+
+          ])
+      ])
+  evaluator = Evaluator(examples=example_gen.outputs['examples'],
+                        model=trainer.outputs['model'],
+                        eval_config=eval_config)
+
+  return pipeline.Pipeline(
+      pipeline_name=pipeline_name,
+      pipeline_root=pipeline_root,
+      components=[
+          example_gen,
+          statistics_gen,
+          schema_gen,
+          transform,
+          trainer,
+          evaluator,
+      ],
+      metadata_connection_config=metadata.sqlite_metadata_connection_config(
+          metadata_path)
+  )
+
+if __name__ == '__main__':
+  absl.logging.set_verbosity(absl.logging.INFO)
+
+  LocalDagRunner().run(
+      create_pipeline(
+          pipeline_name=_pipeline_name,
+          pipeline_root=_pipeline_root,
+          data_path=_data_path,
+          preprocessing_module_file= _transformer_file,
+          trainer_module_file=_trainer_file,
+          serving_model_dir=_serving_model_dir,
+          metadata_path=_metadata_path,
+          train_args=trainer_pb2.TrainArgs(num_steps=10000),
+          eval_args=trainer_pb2.EvalArgs(num_steps=5000))
+  )
+
+
+
+
+
[ ]:
+
+
+
!python {_pipeline_path}
+
+
+
+
+
[ ]:
+
+
+
from ml_metadata.metadata_store import metadata_store
+from ml_metadata.proto import metadata_store_pb2
+
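+# Connect to the SQLite ML Metadata store populated by the pipeline run above.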
+connection_config = metadata_store_pb2.ConnectionConfig()
+connection_config.sqlite.filename_uri = './compas/tfx/metadata/compas/metadata.db'
+connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE
+store = metadata_store.MetadataStore(connection_config)
+
+
+
+
+
[ ]:
+
+
+
data = store.get_artifacts_by_type("Examples")[0].uri
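+# [-1] selects the artifacts from the most recent pipeline run.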
+evaluator = store.get_artifacts_by_type("ModelEvaluation")[-1].uri
+model = store.get_artifacts_by_type("Model")[-1].uri
+
+
+
+
+
[ ]:
+
+
+
_model_path = os.path.join(model, 'Format-Serving')
+_data_paths = {'eval': TensorflowDataset(dataset_path=os.path.join(data, 'Split-eval', '*.gz')),
+               'train': TensorflowDataset(dataset_path=os.path.join(data, 'Split-train', '*.gz'))}
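+# 'Format-Serving' contains the exported SavedModel; the 'Split-*' dirs hold gzip'ed TFRecord examples.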
+
+
+
+
+
[ ]:
+
+
+
_project_path = os.path.join('.', 'compas')
+
+
+
+
+
[ ]:
+
+
+
_eval_config = os.path.join(_project_path, 'eval_config.proto')
+
+
+
+
+
[ ]:
+
+
+
%%writefile {_eval_config}
+
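+# TFMA EvalConfig for Model Card Generator: accuracy, AUC, a confusion matrix
+# plot, and fairness indicators, computed overall and sliced by race.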
+model_specs {
+    label_key: 'is_recid'
+  }
+metrics_specs {
+    metrics {class_name: "BinaryAccuracy"}
+    metrics {class_name: "AUC"}
+    metrics {class_name: "ConfusionMatrixPlot"}
+    metrics {
+      class_name: "FairnessIndicators"
+      config: '{"thresholds": [0.25, 0.5, 0.75]}'
+    }
+  }
+slicing_specs {}
+slicing_specs {
+        feature_keys: 'race'
+  }
+options {
+    include_default_metrics { value: false }
+  }
+
+
+
+
+
[ ]:
+
+
+
overview = ("COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)"
+" is a public dataset, which contains approximately 18,000 criminal cases from "
+"Broward County, Florida between January, 2013 and December, 2014. The data contains"
+" information about 11,000 unique defendants, including criminal history demographics,"
+" and a risk score intended to represent the defendant’s likelihood of reoffending"
+" (recidivism). A machine learning model trained on this data has been used by judges"
+" and parole officers to determine whether or not to set bail and whether or not to"
+" grant parole."
+
+"In 2016, an article published in ProPublica found that the COMPAS model was incorrectly"
+" predicting that African-American defendants would recidivate at much higher rates than"
+" their white counterparts while Caucasian would not recidivate at a much higher rate. "
+"For Caucasian defendants, the model made mistakes in the opposite direction, making incorrect predictions "
+"that they wouldn’t commit another crime. The authors went on to show that these biases were likely due to "
+"an uneven distribution in the data between African-Americans and Caucasian defendants. Specifically, the "
+"ground truth label of a negative example (a defendant would not commit another crime) and a positive example "
+"(defendant would commit another crime) were disproportionate between the two races. "
+"Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature "
+"1, 2, 3, with researchers using it to demonstrate techniques for identifying and remediating "
+"fairness concerns."
+
+"It is important to note that developing a machine learning model to predict pre-trial detention "
+"has a number of important ethical considerations. You can learn more about these issues in the "
+"Partnership on AI Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System."
+" The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that "
+"creates guidelines around AI.")
+
+
+
+
+
[ ]:
+
+
+
mc = {
+  "model_details": {
+    "name": "COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)",
+    "overview": overview,
+    "owners": [
+      {
+        "name": "Intel XAI Team",
+        "contact": "xai@intel.com"
+      }
+    ],
+    "references": [
+      {
+        "reference": "Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199."
+      },
+      {
+        "reference": "Chouldechova, A., G'Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046."
+      },
+      {
+        "reference": "Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207."
+      }
+    ],
+    "graphics": {
+      "description": " "
+    }
+  },
+  "quantitative_analysis": {
+    "graphics": {
+      "description": " "
+    }
+  },
+  "schema_version": "0.0.1"
+}
+
+
+
+
+
[ ]:
+
+
+
mcg = ModelCardGen.generate(data_sets=_data_paths,
+                            eval_config=_eval_config,
+                            model_path=_model_path,
+                            model_card=mc)
+
+
+
+
+

Display Model Card

+
+
[ ]:
+
+
+
mcg
+
+
+
+
+
[ ]:
+
+
+
mcg.export_html('compas_plotly.html')
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/compas-model-card-tfx.ipynb b/v1.1.0/notebooks/compas-model-card-tfx.ipynb new file mode 100644 index 0000000..206094d --- /dev/null +++ b/v1.1.0/notebooks/compas-model-card-tfx.ipynb @@ -0,0 +1,1028 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "c450c5ce", + "metadata": {}, + "source": [ + "# Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators" + ] + }, + { + "cell_type": "markdown", + "id": "a24fd3c1", + "metadata": {}, + "source": [ + "In this notebook we will create a TFX pipeline to create a Proxy model for COMPAS (originally published by [Tensorflow Authors](https://github.com/tensorflow/fairness-indicators/blob/r0.38.0/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb)). First, we will train a `tf.estimator` with defined `eval_input_reciever_fn`. This will allow us to run userdefined metrics with `tensorflow-model-analysis` on seralized `tf.Example`.\n", + "\n", + "After this pipeline has be created, we will show how Intel's `ModelCardGen` class can take this `tf.estimator` in the form of an SavedModel and TFRecord to create a Model Card with interactive graphics." + ] + }, + { + "cell_type": "markdown", + "id": "026ee99b", + "metadata": {}, + "source": [ + "### Install Dependencies" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3b8c11b8-2fef-47c0-a97f-5d45a137af36", + "metadata": {}, + "outputs": [], + "source": [ + "!python -m pip install --no-cache-dir --no-deps \\\n", + " docker==7.0.0 \\\n", + " keras-tuner==1.4.7 \\\n", + " kubernetes==29.0.0 \\\n", + " ml-metadata==1.14.0 \\\n", + " portpicker==1.6.0 \\\n", + " tensorflow-transform==1.14.0 \\\n", + " tfx==1.14.0" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "afebf888", + "metadata": {}, + "outputs": [], + "source": [ + "!mkdir -p compas/data/train compas/data/eval" + ] + }, + { + "cell_type": "markdown", + "id": "e09769e7", + "metadata": {}, + "source": [ + "### Import Libraries" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4c4f8793", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import tempfile\n", + "import pandas as pd\n", + "from sklearn.model_selection import train_test_split\n", + "\n", + "# Intel Model Card Genorator \n", + "from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen\n", + "from intel_ai_safety.model_card_gen.datasets import TensorflowDataset" + ] + }, + { + "cell_type": "markdown", + "id": "f6e52507", + "metadata": {}, + "source": [ + "## Download and preprocess the dataset" + ] + }, + { + "cell_type": "markdown", + "id": "f2e81e74", + "metadata": {}, + "source": [ + "The COMPAS dataset is a common case study in the ML fairness literature1, 2, 3, where it is use to apply techniques for identifying and remediating issues around fairness. \n", + "___\n", + "\n", + "1. Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.\n", + "\n", + "2. Chouldechova, A., G’Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.\n", + "\n", + "3. 
Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c275fc48", + "metadata": {}, + "outputs": [], + "source": [ + "# Download the COMPAS dataset and setup the required filepaths.\n", + "_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')\n", + "_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'\n", + "_DATA_FILEPATH = os.path.join('compas', 'data')\n", + "\n", + "_COMPAS_DF = pd.read_csv(_DATA_PATH)\n", + "\n", + "# To simpliy the case study, we will only use the columns that will be used for\n", + "# our model.\n", + "_COLUMN_NAMES = [\n", + " 'age',\n", + " 'c_charge_desc',\n", + " 'c_charge_degree',\n", + " 'c_days_from_compas',\n", + " 'is_recid', # ground truth\n", + " 'juv_fel_count',\n", + " 'juv_misd_count',\n", + " 'juv_other_count',\n", + " 'priors_count',\n", + " 'r_days_from_arrest',\n", + " 'race',\n", + " 'sex',\n", + " 'vr_charge_desc',\n", + " 'score_text', # COMPAS predction\n", + "]\n", + "\n", + "_GROUND_TRUTH = 'is_recid'\n", + "_COMPAS_SCORE = 'score_text'\n", + "\n", + "_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]\n", + "\n", + "# We will use 'is_recid' as our ground truth lable, which is boolean value\n", + "# indicating if a defendant committed another crime. There are some rows with -1\n", + "# indicating that there is no data. These rows we will drop from training.\n", + "_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]\n", + "_COMPAS_DF = _COMPAS_DF.dropna(subset=['score_text'])\n", + "_COMPAS_DF['score_text'] = _COMPAS_DF.score_text.map({'Low': 0, 'High': 1, 'Medium': 1})\n", + "# is_recid field is ground truth to create a COMPAS proxy we will need to train on score_text\n", + "# _COMPAS_DF = _COMPAS_DF.rename(columns={'is_recid': 'ground_truth', 'score_text': 'compas_score'})\n", + "\n", + "# Given the distribution between races in this dataset we will only focuse on\n", + "# recidivism for African-Americans and Caucasians.\n", + "_COMPAS_DF = _COMPAS_DF[\n", + " _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]\n", + "\n", + "X = _COMPAS_DF[_COLUMN_NAMES]\n", + "# to create a COMPAS proxy we will need to train on score_text not to be confused with ground truth is_recid field\n", + "# y = _COMPAS_DF[[_COMPAS_SCORE]]\n", + "\n", + "X_train, X_test = train_test_split(X, test_size=0.33, random_state=42)\n", + "\n", + "# Load the DataFrame back to a CSV file for our TFX model.\n", + "X_train.to_csv(os.path.join(_DATA_FILEPATH, 'train', 'train.csv'), index=False, na_rep='')\n", + "X_test.to_csv(os.path.join(_DATA_FILEPATH, 'eval', 'eval.csv'), index=False, na_rep='')" + ] + }, + { + "cell_type": "markdown", + "id": "6712ff05", + "metadata": {}, + "source": [ + "# TFX Pipeline Scripts" + ] + }, + { + "cell_type": "markdown", + "id": "23ffb2d6", + "metadata": {}, + "source": [ + "We opt to create a custom pipeline script so that we can transform data and train a model saved as artifacts to use in as input in Model Card Generator." 
+ ] + }, + { + "cell_type": "markdown", + "id": "3cbb20fa", + "metadata": {}, + "source": [ + "## Transformer" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4141ed39", + "metadata": {}, + "outputs": [], + "source": [ + "_transformer_path = os.path.join('compas', 'transformer.py')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "086ce007", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile {_transformer_path}\n", + "import tensorflow as tf\n", + "import tensorflow_transform as tft\n", + "\n", + "CATEGORICAL_FEATURE_KEYS = [\n", + " 'sex',\n", + " 'race',\n", + " 'c_charge_desc',\n", + " 'c_charge_degree',\n", + "]\n", + "\n", + "INT_FEATURE_KEYS = [\n", + " 'age',\n", + " 'c_days_from_compas',\n", + " 'juv_fel_count',\n", + " 'juv_misd_count',\n", + " 'juv_other_count',\n", + " 'priors_count',\n", + "]\n", + "\n", + "LABEL_KEY = 'is_recid'\n", + "\n", + "# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.\n", + "MAX_CATEGORICAL_FEATURE_VALUES = [\n", + " 2,\n", + " 6,\n", + " 513,\n", + " 14,\n", + "]\n", + "\n", + "\n", + "def transformed_name(key):\n", + " return '{}_xf'.format(key)\n", + "\n", + "\n", + "def preprocessing_fn(inputs):\n", + " \"\"\"tf.transform's callback function for preprocessing inputs.\n", + "\n", + " Args:\n", + " inputs: Map from feature keys to raw features.\n", + "\n", + " Returns:\n", + " Map from string feature key to transformed feature operations.\n", + " \"\"\"\n", + " outputs = {}\n", + " for key in CATEGORICAL_FEATURE_KEYS:\n", + " outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(\n", + " _fill_in_missing(inputs[key]),\n", + " vocab_filename=key)\n", + "\n", + " for key in INT_FEATURE_KEYS:\n", + " outputs[transformed_name(key)] = tft.scale_to_z_score(\n", + " _fill_in_missing(inputs[key]))\n", + "\n", + " # Target label will be to see if the defendant is charged for another crime.\n", + " outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])\n", + " return outputs\n", + "\n", + "\n", + "def _fill_in_missing(tensor_value):\n", + " \"\"\"Replaces a missing values in a SparseTensor.\n", + "\n", + " Fills in missing values of `tensor_value` with '' or 0, and converts to a\n", + " dense tensor.\n", + "\n", + " Args:\n", + " tensor_value: A `SparseTensor` of rank 2. 
Its dense shape should have size\n", + " at most 1 in the second dimension.\n", + "\n", + " Returns:\n", + " A rank 1 tensor where missing values of `tensor_value` are filled in.\n", + " \"\"\"\n", + " if not isinstance(tensor_value, tf.sparse.SparseTensor):\n", + " return tensor_value\n", + " default_value = '' if tensor_value.dtype == tf.string else 0\n", + " sparse_tensor = tf.SparseTensor(\n", + " tensor_value.indices,\n", + " tensor_value.values,\n", + " [tensor_value.dense_shape[0], 1])\n", + " dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)\n", + " return tf.squeeze(dense_tensor, axis=1)\n" + ] + }, + { + "cell_type": "markdown", + "id": "5a5eed0a", + "metadata": {}, + "source": [ + "## Trainer" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "25b700a9", + "metadata": {}, + "outputs": [], + "source": [ + "_trainer_path = os.path.join('compas', 'trainer.py')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c42e6152", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile {_trainer_path}\n", + "\n", + "import tensorflow as tf\n", + "import tensorflow_model_analysis as tfma\n", + "import tensorflow_transform as tft\n", + "from tensorflow_transform.tf_metadata import schema_utils\n", + "\n", + "from transformer import *\n", + "\n", + "_BATCH_SIZE = 1000\n", + "_LEARNING_RATE = 0.00001\n", + "_MAX_CHECKPOINTS = 1\n", + "_SAVE_CHECKPOINT_STEPS = 999\n", + "\n", + "\n", + "def transformed_names(keys):\n", + " return [transformed_name(key) for key in keys]\n", + "\n", + "\n", + "def transformed_name(key):\n", + " return '{}_xf'.format(key)\n", + "\n", + "\n", + "def _gzip_reader_fn(filenames):\n", + " \"\"\"Returns a record reader that can read gzip'ed files.\n", + "\n", + " Args:\n", + " filenames: A tf.string tensor or tf.data.Dataset containing one or more\n", + " filenames.\n", + "\n", + " Returns: A nested structure of tf.TypeSpec objects matching the structure of\n", + " an element of this dataset and specifying the type of individual components.\n", + " \"\"\"\n", + " return tf.data.TFRecordDataset(filenames, compression_type='GZIP')\n", + "\n", + "\n", + "# Tf.Transform considers these features as \"raw\".\n", + "def _get_raw_feature_spec(schema):\n", + " \"\"\"Generates a feature spec from a Schema proto.\n", + "\n", + " Args:\n", + " schema: A Schema proto.\n", + "\n", + " Returns:\n", + " A feature spec defined as a dict whose keys are feature names and values are\n", + " instances of FixedLenFeature, VarLenFeature or SparseFeature.\n", + " \"\"\"\n", + " return schema_utils.schema_as_feature_spec(schema).feature_spec\n", + "\n", + "\n", + "def _example_serving_receiver_fn(tf_transform_output, schema):\n", + " \"\"\"Builds the serving in inputs.\n", + "\n", + " Args:\n", + " tf_transform_output: A TFTransformOutput.\n", + " schema: the schema of the input data.\n", + "\n", + " Returns:\n", + " TensorFlow graph which parses examples, applying tf-transform to them.\n", + " \"\"\"\n", + " raw_feature_spec = _get_raw_feature_spec(schema)\n", + " raw_feature_spec.pop(LABEL_KEY)\n", + "\n", + " raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n", + " raw_feature_spec)\n", + " serving_input_receiver = raw_input_fn()\n", + "\n", + " transformed_features = tf_transform_output.transform_raw_features(\n", + " serving_input_receiver.features)\n", + " transformed_features.pop(transformed_name(LABEL_KEY))\n", + " return tf.estimator.export.ServingInputReceiver(\n", + " transformed_features, 
serving_input_receiver.receiver_tensors)\n", + "\n", + "\n", + "def _eval_input_receiver_fn(tf_transform_output, schema):\n", + " \"\"\"Builds everything needed for the tf-model-analysis to run the model.\n", + "\n", + " Args:\n", + " tf_transform_output: A TFTransformOutput.\n", + " schema: the schema of the input data.\n", + "\n", + " Returns:\n", + " EvalInputReceiver function, which contains:\n", + " - TensorFlow graph which parses raw untransformed features, applies the\n", + " tf-transform preprocessing operators.\n", + " - Set of raw, untransformed features.\n", + " - Label against which predictions will be compared.\n", + " \"\"\"\n", + " # Notice that the inputs are raw features, not transformed features here.\n", + " raw_feature_spec = _get_raw_feature_spec(schema)\n", + "\n", + " serialized_tf_example = tf.compat.v1.placeholder(\n", + " dtype=tf.string, shape=[None], name='input_example_tensor')\n", + "\n", + " # Add a parse_example operator to the tensorflow graph, which will parse\n", + " # raw, untransformed, tf examples.\n", + " features = tf.io.parse_example(\n", + " serialized=serialized_tf_example, features=raw_feature_spec)\n", + "\n", + " transformed_features = tf_transform_output.transform_raw_features(features)\n", + " labels = transformed_features.pop(transformed_name(LABEL_KEY))\n", + "\n", + " receiver_tensors = {'examples': serialized_tf_example}\n", + "\n", + " return tfma.export.EvalInputReceiver(\n", + " features=transformed_features,\n", + " receiver_tensors=receiver_tensors,\n", + " labels=labels)\n", + "\n", + "\n", + "def _input_fn(filenames, tf_transform_output, batch_size=200):\n", + " \"\"\"Generates features and labels for training or evaluation.\n", + "\n", + " Args:\n", + " filenames: List of CSV files to read data from.\n", + " tf_transform_output: A TFTransformOutput.\n", + " batch_size: First dimension size of the Tensors returned by input_fn.\n", + "\n", + " Returns:\n", + " A (features, indices) tuple where features is a dictionary of\n", + " Tensors, and indices is a single Tensor of label indices.\n", + " \"\"\"\n", + " transformed_feature_spec = (\n", + " tf_transform_output.transformed_feature_spec().copy())\n", + "\n", + " dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(\n", + " filenames,\n", + " batch_size,\n", + " transformed_feature_spec,\n", + " shuffle=False,\n", + " reader=_gzip_reader_fn)\n", + "\n", + " transformed_features = dataset.make_one_shot_iterator().get_next()\n", + "\n", + " # We pop the label because we do not want to use it as a feature while we're\n", + " # training.\n", + " return transformed_features, transformed_features.pop(\n", + " transformed_name(LABEL_KEY))\n", + "\n", + "\n", + "def _keras_model_builder():\n", + " \"\"\"Build a keras model for COMPAS dataset classification.\n", + " \n", + " Returns:\n", + " A compiled Keras model.\n", + " \"\"\"\n", + " feature_columns = []\n", + " feature_layer_inputs = {}\n", + "\n", + " for key in transformed_names(INT_FEATURE_KEYS):\n", + " feature_columns.append(tf.feature_column.numeric_column(key))\n", + " feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)\n", + "\n", + " for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),\n", + " MAX_CATEGORICAL_FEATURE_VALUES):\n", + " feature_columns.append(\n", + " tf.feature_column.indicator_column(\n", + " tf.feature_column.categorical_column_with_identity(\n", + " key, num_buckets=num_buckets)))\n", + " feature_layer_inputs[key] = tf.keras.Input(\n", + " shape=(1,), 
name=key, dtype=tf.dtypes.int32)\n", + "\n", + " feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)\n", + " feature_layer_outputs = feature_columns_input(feature_layer_inputs)\n", + "\n", + " dense_layers = tf.keras.layers.Dense(\n", + " 20, activation='relu', name='dense_1')(feature_layer_outputs)\n", + " dense_layers = tf.keras.layers.Dense(\n", + " 10, activation='relu', name='dense_2')(dense_layers)\n", + " output = tf.keras.layers.Dense(\n", + " 1, name='predictions')(dense_layers)\n", + "\n", + " model = tf.keras.Model(\n", + " inputs=[v for v in feature_layer_inputs.values()], outputs=output)\n", + "\n", + " model.compile(\n", + " loss=tf.keras.losses.MeanAbsoluteError(),\n", + " optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))\n", + "\n", + " return model\n", + "\n", + "\n", + "# TFX will call this function.\n", + "def trainer_fn(hparams, schema):\n", + " \"\"\"Build the estimator using the high level API.\n", + "\n", + " Args:\n", + " hparams: Hyperparameters used to train the model as name/value pairs.\n", + " schema: Holds the schema of the training examples.\n", + "\n", + " Returns:\n", + " A dict of the following:\n", + " - estimator: The estimator that will be used for training and eval.\n", + " - train_spec: Spec for training.\n", + " - eval_spec: Spec for eval.\n", + " - eval_input_receiver_fn: Input function for eval.\n", + " \"\"\"\n", + " tf_transform_output = tft.TFTransformOutput(hparams.transform_output)\n", + "\n", + " train_input_fn = lambda: _input_fn(\n", + " hparams.train_files,\n", + " tf_transform_output,\n", + " batch_size=_BATCH_SIZE)\n", + "\n", + " eval_input_fn = lambda: _input_fn(\n", + " hparams.eval_files,\n", + " tf_transform_output,\n", + " batch_size=_BATCH_SIZE)\n", + "\n", + " train_spec = tf.estimator.TrainSpec(\n", + " train_input_fn,\n", + " max_steps=hparams.train_steps)\n", + "\n", + " serving_receiver_fn = lambda: _example_serving_receiver_fn(\n", + " tf_transform_output, schema)\n", + "\n", + " exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)\n", + " eval_spec = tf.estimator.EvalSpec(\n", + " eval_input_fn,\n", + " steps=hparams.eval_steps,\n", + " exporters=[exporter],\n", + " name='compas-eval')\n", + "\n", + " run_config = tf.estimator.RunConfig(\n", + " save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,\n", + " keep_checkpoint_max=_MAX_CHECKPOINTS)\n", + "\n", + " run_config = run_config.replace(model_dir=hparams.serving_model_dir)\n", + "\n", + " estimator = tf.keras.estimator.model_to_estimator(\n", + " keras_model=_keras_model_builder(), config=run_config)\n", + "\n", + " # Create an input receiver for TFMA processing.\n", + " receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)\n", + "\n", + " return {\n", + " 'estimator': estimator,\n", + " 'train_spec': train_spec,\n", + " 'eval_spec': eval_spec,\n", + " 'eval_input_receiver_fn': receiver_fn\n", + " }" + ] + }, + { + "cell_type": "markdown", + "id": "c9eeb83b", + "metadata": {}, + "source": [ + "## Pipeline" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0cce0c5b", + "metadata": {}, + "outputs": [], + "source": [ + "_pipelie_path = os.path.join('compas', 'pipeline.py')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0108bf05", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile {_pipelie_path}\n", + "\n", + "from typing import Optional\n", + "import os\n", + "\n", + "import absl\n", + "import tensorflow_model_analysis as tfma\n", + "from tfx import v1 
as tfx\n", + "from tfx.components import (CsvExampleGen,\n", + " Evaluator,\n", + " Pusher,\n", + " SchemaGen,\n", + " StatisticsGen,\n", + " Trainer,\n", + " Transform)\n", + "\n", + "from tfx.components.trainer.executor import Executor\n", + "from tfx.dsl.components.base import executor_spec\n", + "\n", + "from tfx.orchestration import pipeline\n", + "from tfx.orchestration import metadata\n", + "from tfx.proto import pusher_pb2\n", + "from tfx.proto import trainer_pb2\n", + "from tfx.proto import example_gen_pb2\n", + "from tfx.orchestration.local.local_dag_runner import LocalDagRunner\n", + "\n", + "_pipeline_name = 'compas'\n", + "_compas_root = os.path.join('.', 'compas')\n", + "_data_path = os.path.join(_compas_root, 'data')\n", + "# Python module file to inject customized logic into the TFX components. The\n", + "# Transform and Trainer both require user-defined functions to run successfully.\n", + "_transformer_file = os.path.join(_compas_root, 'transformer.py')\n", + "_trainer_file = os.path.join(_compas_root, 'trainer.py')\n", + "# Path which can be listened to by the model server. Pusher will output the\n", + "# trained model here.\n", + "_serving_model_dir = os.path.join(_compas_root, 'serving_model', _pipeline_name)\n", + "\n", + "# Directory and data locations. This example assumes all of the chicago taxi\n", + "# example code and metadata library is relative to $HOME, but you can store\n", + "# these files anywhere on your local filesystem.\n", + "_tfx_root = os.path.join('compas', 'tfx')\n", + "_pipeline_root = os.path.join(_tfx_root, 'pipelines', _pipeline_name)\n", + "# Sqlite ML-metadata db path.\n", + "_metadata_path = os.path.join(_tfx_root, 'metadata', _pipeline_name,\n", + " 'metadata.db')\n", + "\n", + "def create_pipeline(\n", + " pipeline_name: str,\n", + " pipeline_root: str,\n", + " data_path: str,\n", + " preprocessing_module_file: str,\n", + " trainer_module_file: str,\n", + " train_args: tfx.proto.TrainArgs,\n", + " eval_args: tfx.proto.EvalArgs,\n", + " serving_model_dir: str,\n", + " metadata_path: str,\n", + " schema_path: Optional[str] = None,\n", + ") -> tfx.dsl.Pipeline:\n", + " \"\"\"Implements the compass pipeline with TFX.\"\"\"\n", + "\n", + " # Brings data into the pipeline or otherwise joins/converts training data.\n", + " \n", + " input = tfx.proto.Input(splits=[\n", + " example_gen_pb2.Input.Split(name='train', pattern='train/*'),\n", + " example_gen_pb2.Input.Split(name='eval', pattern='eval/*')\n", + " ])\n", + " example_gen = CsvExampleGen(input_base=data_path, input_config=input)\n", + "\n", + " # Computes statistics over data for visualization and example validation.\n", + " statistics_gen = StatisticsGen(\n", + " examples=example_gen.outputs['examples'])\n", + "\n", + " if schema_path is None:\n", + " # Generates schema based on statistics files.\n", + " schema_gen = SchemaGen(\n", + " statistics=statistics_gen.outputs['statistics'])\n", + " else:\n", + " # Import user provided schema into the pipeline.\n", + " schema_gen = tfx.components.ImportSchemaGen(schema_file=schema_path)\n", + " \n", + " \n", + " # Performs transformations and feature engineering in training and serving.\n", + " transform = Transform(\n", + " examples=example_gen.outputs['examples'],\n", + " schema=schema_gen.outputs['schema'],\n", + " module_file=os.path.abspath(preprocessing_module_file))\n", + " \n", + " # Uses user-provided Python function that implements a model.\n", + " trainer_args = {\n", + " 'module_file': trainer_module_file,\n", + " 'examples': 
transform.outputs['transformed_examples'],\n", + " 'schema': schema_gen.outputs['schema'],\n", + " 'custom_executor_spec' : executor_spec.ExecutorClassSpec(Executor),\n", + " 'transform_graph': transform.outputs['transform_graph'],\n", + " 'train_args': train_args,\n", + " 'eval_args': eval_args,\n", + " }\n", + " trainer = Trainer(**trainer_args)\n", + " \n", + " # Uses TFMA to compute a evaluation statistics over features of a model and\n", + " # perform quality validation of a candidate model (compared to a baseline).\n", + " eval_config = tfma.EvalConfig(\n", + " model_specs=[\n", + " tfma.ModelSpec(\n", + " label_key='is_recid')\n", + " ],\n", + " slicing_specs=[\n", + " tfma.SlicingSpec(\n", + " feature_keys=['race'])\n", + " ],\n", + " metrics_specs=[\n", + " tfma.MetricsSpec(metrics=[\n", + " tfma.MetricConfig(\n", + " class_name='BinaryAccuracy'),\n", + " tfma.MetricConfig(\n", + " class_name='AUC'),\n", + " tfma.MetricConfig(\n", + " class_name='FairnessIndicators',\n", + " config='{\"thresholds\": [0.25, 0.5, 0.75]}')\n", + " \n", + " ])\n", + " ])\n", + " evaluator = Evaluator(examples=example_gen.outputs['examples'],\n", + " model=trainer.outputs['model'],\n", + " eval_config=eval_config)\n", + "\n", + " return pipeline.Pipeline(\n", + " pipeline_name=pipeline_name,\n", + " pipeline_root=pipeline_root,\n", + " components=[\n", + " example_gen,\n", + " statistics_gen,\n", + " schema_gen,\n", + " transform,\n", + " trainer,\n", + " evaluator,\n", + " ],\n", + " metadata_connection_config=metadata.sqlite_metadata_connection_config(\n", + " metadata_path)\n", + " )\n", + "\n", + "if __name__ == '__main__':\n", + " absl.logging.set_verbosity(absl.logging.INFO)\n", + "\n", + " LocalDagRunner().run(\n", + " create_pipeline(\n", + " pipeline_name=_pipeline_name,\n", + " pipeline_root=_pipeline_root,\n", + " data_path=_data_path,\n", + " preprocessing_module_file= _transformer_file,\n", + " trainer_module_file=_trainer_file,\n", + " serving_model_dir=_serving_model_dir,\n", + " metadata_path=_metadata_path,\n", + " train_args=trainer_pb2.TrainArgs(num_steps=10000),\n", + " eval_args=trainer_pb2.EvalArgs(num_steps=5000))\n", + " )" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "420f287a", + "metadata": {}, + "outputs": [], + "source": [ + "!python {_pipelie_path}" + ] + }, + { + "cell_type": "markdown", + "id": "23c881b9", + "metadata": {}, + "source": [ + "## Model Card Inputs" + ] + }, + { + "cell_type": "markdown", + "id": "a01c14d7", + "metadata": {}, + "source": [ + "#### Retrieve URIs form MLMD" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6b40ef88", + "metadata": {}, + "outputs": [], + "source": [ + "from ml_metadata.metadata_store import metadata_store\n", + "from ml_metadata.proto import metadata_store_pb2\n", + "\n", + "connection_config = metadata_store_pb2.ConnectionConfig()\n", + "connection_config.sqlite.filename_uri = './compas/tfx/metadata/compas/metadata.db'\n", + "connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE\n", + "store = metadata_store.MetadataStore(connection_config)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "510a2789", + "metadata": {}, + "outputs": [], + "source": [ + "data = store.get_artifacts_by_type(\"Examples\")[0].uri\n", + "evaluator = store.get_artifacts_by_type(\"ModelEvaluation\")[-1].uri\n", + "model = store.get_artifacts_by_type(\"Model\")[-1].uri" + ] + }, + { + "cell_type": "markdown", + "id": "b073a8dc", + "metadata": {}, + "source": [ + "#### 
Model and Data Locations" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f70f3055", + "metadata": {}, + "outputs": [], + "source": [ + "_model_path = os.path.join(model, 'Format-Serving')\n", + "_data_paths = {'eval': TensorflowDataset(dataset_path=os.path.join(data, 'Split-eval', '*.gz')),\n", + " 'train': TensorflowDataset(dataset_path=os.path.join(data, 'Split-train', '*.gz'))}" + ] + }, + { + "cell_type": "markdown", + "id": "c7f6fc4b", + "metadata": {}, + "source": [ + "#### Metric Evaluation Config" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6bce3c27", + "metadata": {}, + "outputs": [], + "source": [ + "_project_path = os.path.join('.', 'compas')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "56334b24", + "metadata": {}, + "outputs": [], + "source": [ + "_eval_config = os.path.join(_project_path, 'eval_config.proto')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ee6c0e9f", + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile {_eval_config}\n", + "\n", + "model_specs {\n", + " label_key: 'is_recid'\n", + " }\n", + "metrics_specs {\n", + " metrics {class_name: \"BinaryAccuracy\"}\n", + " metrics {class_name: \"AUC\"}\n", + " metrics {class_name: \"ConfusionMatrixPlot\"}\n", + " metrics {\n", + " class_name: \"FairnessIndicators\"\n", + " config: '{\"thresholds\": [0.25, 0.5, 0.75]}'\n", + " }\n", + " }\n", + "slicing_specs {}\n", + "slicing_specs {\n", + " feature_keys: 'race'\n", + " }\n", + "options {\n", + " include_default_metrics { value: false }\n", + " }" + ] + }, + { + "cell_type": "markdown", + "id": "bb0a40d9", + "metadata": {}, + "source": [ + "#### User defined inputs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "afcc75a7", + "metadata": {}, + "outputs": [], + "source": [ + "overview = (\"COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)\"\n", + "\" is a public dataset, which contains approximately 18,000 criminal cases from \"\n", + "\"Broward County, Florida between January, 2013 and December, 2014. The data contains\"\n", + "\" information about 11,000 unique defendants, including criminal history demographics,\"\n", + "\" and a risk score intended to represent the defendant’s likelihood of reoffending\"\n", + "\" (recidivism). A machine learning model trained on this data has been used by judges\"\n", + "\" and parole officers to determine whether or not to set bail and whether or not to\"\n", + "\" grant parole.\"\n", + "\n", + "\"In 2016, an article published in ProPublica found that the COMPAS model was incorrectly\"\n", + "\" predicting that African-American defendants would recidivate at much higher rates than\"\n", + "\" their white counterparts while Caucasian would not recidivate at a much higher rate. \"\n", + "\"For Caucasian defendants, the model made mistakes in the opposite direction, making incorrect predictions \"\n", + "\"that they wouldn’t commit another crime. The authors went on to show that these biases were likely due to \"\n", + "\"an uneven distribution in the data between African-Americans and Caucasian defendants. Specifically, the \"\n", + "\"ground truth label of a negative example (a defendant would not commit another crime) and a positive example \"\n", + "\"(defendant would commit another crime) were disproportionate between the two races. 
\"\n", + "\"Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature \"\n", + "\"1, 2, 3, with researchers using it to demonstrate techniques for identifying and remediating \"\n", + "\"fairness concerns.\"\n", + "\n", + "\"It is important to note that developing a machine learning model to predict pre-trial detention \"\n", + "\"has a number of important ethical considerations. You can learn more about these issues in the \"\n", + "\"Partnership on AI Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System.\"\n", + "\" The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that \"\n", + "\"creates guidelines around AI.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "75b09fd2", + "metadata": {}, + "outputs": [], + "source": [ + "mc = {\n", + " \"model_details\": {\n", + " \"name\": \"COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)\",\n", + " \"overview\": overview,\n", + " \"owners\": [\n", + " {\n", + " \"name\": \"Intel XAI Team\",\n", + " \"contact\": \"xai@intel.com\"\n", + " }\n", + " ],\n", + " \"references\": [\n", + " {\n", + " \"reference\": \"Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.\"\n", + " },\n", + " {\n", + " \"reference\": \"Chouldechova, A., G'Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.\"\n", + " },\n", + " {\n", + " \"reference\": \"Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.\"\n", + " }\n", + " ],\n", + " \"graphics\": {\n", + " \"description\": \" \"\n", + " }\n", + " },\n", + " \"quantitative_analysis\": {\n", + " \"graphics\": {\n", + " \"description\": \" \"\n", + " }\n", + " },\n", + " \"schema_version\": \"0.0.1\"\n", + "}" + ] + }, + { + "cell_type": "markdown", + "id": "34bf9e91-3aad-44cc-94f8-f2f6277a062e", + "metadata": {}, + "source": [ + "## Generating Model Card from TFRecord" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "39a5be28", + "metadata": {}, + "outputs": [], + "source": [ + "mcg = ModelCardGen.generate(data_sets=_data_paths,\n", + " eval_config=_eval_config,\n", + " model_path=_model_path, \n", + " model_card=mc)" + ] + }, + { + "cell_type": "markdown", + "id": "91213a4e", + "metadata": {}, + "source": [ + "### Display Model Card" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "afcd6fbd", + "metadata": {}, + "outputs": [], + "source": [ + "mcg" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aef202f2", + "metadata": {}, + "outputs": [], + "source": [ + "mcg.export_html('compas_plotly.html')" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/notebooks/heart_disease.html b/v1.1.0/notebooks/heart_disease.html new file mode 100644 index 0000000..83a283e --- /dev/null +++ b/v1.1.0/notebooks/heart_disease.html @@ -0,0 +1,294 @@ + + + + + + + Explaining a Custom Neural Network Heart Disease Classification Using 
the Attributions Explainer — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +
+

Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer

+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.attributions import attributions
+
+import warnings
+warnings.filterwarnings('ignore')
+
+
+
+
+
[ ]:
+
+
+
import tensorflow as tf
+import pandas as pd
+
+tf.__version__
+
+
+
+
+
[ ]:
+
+
+
from sklearn.model_selection import train_test_split
+from sklearn.preprocessing import StandardScaler
+
+
+
+
+
[ ]:
+
+
+
file_url = "http://storage.googleapis.com/download.tensorflow.org/data/heart.csv"
+df = pd.read_csv(file_url)
+
+
+
+
+
[ ]:
+
+
+
# make target variable
+y = df.pop('target')
+
+
+
+
+
[ ]:
+
+
+
# prepare features
+list_numerical = ['age', 'thalach', 'trestbps', 'chol', 'oldpeak']
+
+X = df[list_numerical]
+
+
+
+
+

Data Splitting

+
+
[ ]:
+
+
+
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+
+
+
+
+

Feature Preprocessing

+
+
[ ]:
+
+
+
scaler = StandardScaler().fit(X_train[list_numerical])
+
+X_train[list_numerical] = scaler.transform(X_train[list_numerical])
+X_test[list_numerical] = scaler.transform(X_test[list_numerical])
+
+
+
+
+
+

Model

+
+
[ ]:
+
+
+
model = tf.keras.Sequential([
+    tf.keras.layers.Dense(10, activation='relu'),
+    tf.keras.layers.Dense(10, activation='relu'),
+    tf.keras.layers.Dense(1, activation='sigmoid')
+  ])
+
+
+
+
+
[ ]:
+
+
+
model.compile(optimizer="adam",
+              loss ="binary_crossentropy",
+              metrics=["accuracy"])
+
+model.fit(X_train, y_train,
+         epochs=15,
+         batch_size=13,
+         validation_data=(X_test, y_test)
+         )
+
+
+
+
+
+

Visualize the connectivity graph:

+
+
[ ]:
+
+
+
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
+
+
+
+
+
+
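
Note: plot_model depends on the optional pydot package and a Graphviz installation. If the graph above fails to render, one way to install the prerequisites (a sketch, assuming a Debian-based environment with apt) is:

+
+
[ ]:
+
+
+
# hypothetical setup step: pydot via pip, Graphviz binaries via apt
+!python -m pip install pydot
+!apt-get install -y graphviz
+
+
+
+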

Accuracy

+
+
[ ]:
+
+
+
loss, accuracy = model.evaluate(X_test, y_test)
+
+print("Accuracy", accuracy)
+
+
+
+
+
[ ]:
+
+
+
predictions = model.predict(X_train)
+
+
+
+
+
[ ]:
+
+
+
print(
+    "This particular patient had a %.1f percent probability "
+    "of having a heart disease, as evaluated by our model." % (100 * predictions[0][0],)
+)
+
+
+
+
+
[ ]:
+
+
+
# Let's look at the SHAP value estimates for this patient's features that produced this probability
+ke = attributions.kernel_explainer(model, X_train.iloc[1:101, :], X_train.iloc[0, :])
+ke.visualize()
+
+
+
+
+
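
For reference, the wrapper above corresponds roughly to the following direct use of the shap package (a sketch, assuming shap is installed; the background set and the explained instance mirror the call above):

+
+
[ ]:
+
+
+
import shap
+
+# background distribution: 100 training rows; instance to explain: the first patient
+background = X_train.iloc[1:101, :]
+explainer = shap.KernelExplainer(model.predict, background)
+shap_values = explainer.shap_values(X_train.iloc[0, :])
+
+
+
+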
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/heart_disease.ipynb b/v1.1.0/notebooks/heart_disease.ipynb new file mode 100644 index 0000000..317a181 --- /dev/null +++ b/v1.1.0/notebooks/heart_disease.ipynb @@ -0,0 +1,278 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.attributions import attributions\n", + "\n", + "import warnings\n", + "warnings.filterwarnings('ignore')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import tensorflow as tf\n", + "import pandas as pd\n", + "\n", + "tf.__version__" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from sklearn.model_selection import train_test_split\n", + "from sklearn.preprocessing import StandardScaler" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "file_url = \"http://storage.googleapis.com/download.tensorflow.org/data/heart.csv\"\n", + "df = pd.read_csv(file_url)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# make target variable\n", + "y = df.pop('target')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# prepare features\n", + "list_numerical = ['age', 'thalach', 'trestbps', 'chol', 'oldpeak']\n", + "\n", + "X = df[list_numerical]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data Splitting" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Feature Preprocessing" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "scaler = StandardScaler().fit(X_train[list_numerical]) \n", + "\n", + "X_train[list_numerical] = scaler.transform(X_train[list_numerical])\n", + "X_test[list_numerical] = scaler.transform(X_test[list_numerical])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "model = tf.keras.Sequential([\n", + " tf.keras.layers.Dense(10, activation='relu'),\n", + " tf.keras.layers.Dense(10, activation='relu'),\n", + " tf.keras.layers.Dense(1, activation='sigmoid')\n", + " ])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "model.compile(optimizer=\"adam\", \n", + " loss =\"binary_crossentropy\", \n", + " metrics=[\"accuracy\"])\n", + "\n", + "model.fit(X_train, y_train, \n", + " epochs=15, \n", + " batch_size=13,\n", + " validation_data=(X_test, y_test)\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize the connectivity graph:" + ] + }, + { + "cell_type": "code", + 
"execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "tf.keras.utils.plot_model(model, show_shapes=True, rankdir=\"LR\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Accuracy" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "loss, accuracy = model.evaluate(X_test, y_test)\n", + "\n", + "print(\"Accuracy\", accuracy)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "predictions = model.predict(X_train)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "print(\n", + " \"This particular patient had a %.1f percent probability \"\n", + " \"of having a heart disease, as evaluated by our model.\" % (100 * predictions[0][0],)\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# Let's look at the shap value estimations for this patient's features that resulted in this probability\n", + "ke = attributions.kernel_explainer(model, X_train.iloc[1:101, :], X_train.iloc[0, :])\n", + "ke.visualize()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + }, + "vscode": { + "interpreter": { + "hash": "47de4e6fcdf76d9f7e9823221d58331110ca8a86e4fcaa17b27f269bc08adee8" + } + }, + "widgets": { + "application/vnd.jupyter.widget-state+json": { + "state": {}, + "version_major": 2, + "version_minor": 0 + } + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/v1.1.0/notebooks/mnist.html b/v1.1.0/notebooks/mnist.html new file mode 100644 index 0000000..6635b41 --- /dev/null +++ b/v1.1.0/notebooks/mnist.html @@ -0,0 +1,332 @@ + + + + + + + Explaining Custom CNN MNIST Classification Using the Attributions Explainer — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +
+

Explaining Custom CNN MNIST Classification Using the Attributions Explainer

+
+

1. Design the CNN from scratch

+
+
[ ]:
+
+
+
import torch, torchvision
+from torchvision import datasets, transforms
+from torch import nn, optim
+from torch.nn import functional as F
+torch.manual_seed(0)
+
+import numpy as np
+
+batch_size = 128
+num_epochs = 1
+device = torch.device('cpu')
+
+class Net(nn.Module):
+    def __init__(self):
+        super(Net, self).__init__()
+
+        self.conv_layers = nn.Sequential(
+            nn.Conv2d(1, 10, kernel_size=5),
+            nn.MaxPool2d(2),
+            nn.ReLU(),
+            nn.Conv2d(10, 20, kernel_size=5),
+            nn.Dropout(),
+            nn.MaxPool2d(2),
+            nn.ReLU(),
+        )
+        self.fc_layers = nn.Sequential(
+            nn.Linear(320, 50),
+            nn.ReLU(),
+            nn.Dropout(),
+            nn.Linear(50, 10),
+            nn.Softmax(dim=1)
+        )
+
+    def forward(self, x):
+        x = self.conv_layers(x)
+        x = x.view(-1, 320)
+        x = self.fc_layers(x)
+        return x
+
+def train(model, device, train_loader, optimizer, epoch):
+    model.train()
+    for batch_idx, (data, target) in enumerate(train_loader):
+        data, target = data.to(device), target.to(device)
+        optimizer.zero_grad()
+        output = model(data)
+        loss = F.nll_loss(output.log(), target)
+        loss.backward()
+        optimizer.step()
+        if batch_idx % 100 == 0:
+            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
+                epoch, batch_idx * len(data), len(train_loader.dataset),
+                100. * batch_idx / len(train_loader), loss.item()))
+
+
+train_loader = torch.utils.data.DataLoader(
+    datasets.MNIST('mnist_data', train=True, download=True,
+                   transform=transforms.Compose([
+                       transforms.ToTensor()
+                   ])),
+    batch_size=batch_size, shuffle=True)
+
+test_loader = torch.utils.data.DataLoader(
+    datasets.MNIST('mnist_data', train=False, transform=transforms.Compose([
+                       transforms.ToTensor()
+                   ])),
+    batch_size=batch_size, shuffle=True)
+
+model = Net().to(device)
+optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
+
+
+
+
+
+

2. Train the CNN on the MNIST dataset

+
+
[ ]:
+
+
+
for epoch in range(1, num_epochs + 1):
+    train(model, device, train_loader, optimizer, epoch)
+
+
+
+
+
+

3. Predict the MNIST test data

+
+
[ ]:
+
+
+
# test the model
+model.eval()
+test_loss = 0
+correct = 0
+y_true = torch.empty(0)
+y_pred = torch.empty((0, 10))
+X_test = torch.empty((0, 1, 28, 28))
+
+with torch.no_grad():
+    for data, target in test_loader:
+        data, target = data.to(device), target.to(device)
+        output = model(data)
+        X_test = torch.cat((X_test, data))
+        y_true, y_pred = torch.cat((y_true, target)), torch.cat((y_pred, output))
+
+        test_loss += F.nll_loss(output.log(), target).item() # sum up batch loss
+        pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
+        correct += pred.eq(target.view_as(pred)).sum().item()
+
+test_loss /= len(test_loader.dataset)
+print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
+    test_loss, correct, len(test_loader.dataset),
+100. * correct / len(test_loader.dataset)))
+
+
+
+
+
+

4. Survey performance across all classes using the metrics_explainer plugin

+
+
[ ]:
+
+
+
from intel_ai_safety.explainer import metrics
+
+classes = np.array(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'])
+
+cm = metrics.confusion_matrix(y_true, y_pred, classes)
+cm.visualize()
+print(cm.report)
+
+
+
+
+
[ ]:
+
+
+
plotter = metrics.plot(y_true, y_pred, classes)
+plotter.pr_curve()
+
+
+
+
+
[ ]:
+
+
+
plotter.roc_curve()
+
+
+
+
+
+

5. Explain performance across the classes using the feature_attributions_explainer plugin

+
+

From (4), it can be observed from the confusion matrix that classes 4 and 9 perform poorly. Additionally, there is a high misclassification rate specifically between these two labels. In other words, it appears that the CNN is confusing 4’s with 9’s, and vice versa: 7.4% of all the 9 examples were misclassified as 4, and 10% of all the 4 examples were misclassified as 9.

+
+
+
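
These rates can be sanity-checked directly from the y_true and y_pred tensors collected in step 3 (a quick sketch):

+
+
[ ]:
+
+
+
# recompute the cross-confusion rates from the collected predictions
+y_hat = y_pred.argmax(dim=1)
+nine_as_four = (y_hat[y_true == 9] == 4).float().mean().item()
+four_as_nine = (y_hat[y_true == 4] == 9).float().mean().item()
+print('9 misclassified as 4: {:.1%}, 4 misclassified as 9: {:.1%}'.format(nine_as_four, four_as_nine))
+
+
+
+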

Let’s take a closer look at the pixel-based SHAP values for the test examples where the CNN predicts ‘9’ but the correct ground-truth label is ‘4’.

+
+
[ ]:
+
+
+
# get the prediction indices where the model predicted 9
+pred_idx = list(np.where(np.argmax(y_pred, axis=1) == 9)[0])
+# get the groundtruth indices where the true label is 4
+gt_idx = list(np.where(y_true == 4)[0])
+
+# collect the indices where the CNN misclassified 4 as 9
+matches = list(set(pred_idx).intersection(gt_idx))
+
+
+
+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.attributions import attributions
+# run the deep explainer
+deViz = attributions.deep_explainer(model, X_test[:100], X_test[matches[:6]], classes)
+deViz.visualize()
+
+
+
+
+
[ ]:
+
+
+
# instantiate and run the gradient explainer
+grViz = attributions.gradient_explainer(model, X_test[:100], X_test[matches[:6]], classes, 2)
+grViz.visualize()
+
+
+
+
+
+
+

6. Conclusion

+
+

From the deep and gradient explainer visuals, it can be observed that the CNN pays close attention to the top of the digit when distinguishing between a 4 and a 9. In the first and last rows of the gradient explainer visualization above, we can see that the 4’s are closed; this contributes positive SHAP values (red) toward the 9 classification. This begins to explain why the CNN confuses the two digits.

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/mnist.ipynb b/v1.1.0/notebooks/mnist.ipynb new file mode 100644 index 0000000..4a35684 --- /dev/null +++ b/v1.1.0/notebooks/mnist.ipynb @@ -0,0 +1,317 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "ac418890-656c-4063-b90c-151957c097b3", + "metadata": {}, + "source": [ + "# Explaining Custom CNN MNIST Classification Using the Attributions Explainer" + ] + }, + { + "cell_type": "markdown", + "id": "39d4b774-8c19-4c88-9131-b6d682032f89", + "metadata": {}, + "source": [ + "### 1. Design the CNN from scatch" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "70c16da9-d896-4c88-9984-72b0912d02fc", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import torch, torchvision\n", + "from torchvision import datasets, transforms\n", + "from torch import nn, optim\n", + "from torch.nn import functional as F\n", + "torch.manual_seed(0)\n", + "\n", + "import numpy as np\n", + "\n", + "batch_size = 128\n", + "num_epochs = 1\n", + "device = torch.device('cpu')\n", + "\n", + "class Net(nn.Module):\n", + " def __init__(self):\n", + " super(Net, self).__init__()\n", + "\n", + " self.conv_layers = nn.Sequential(\n", + " nn.Conv2d(1, 10, kernel_size=5),\n", + " nn.MaxPool2d(2),\n", + " nn.ReLU(),\n", + " nn.Conv2d(10, 20, kernel_size=5),\n", + " nn.Dropout(),\n", + " nn.MaxPool2d(2),\n", + " nn.ReLU(),\n", + " )\n", + " self.fc_layers = nn.Sequential(\n", + " nn.Linear(320, 50),\n", + " nn.ReLU(),\n", + " nn.Dropout(),\n", + " nn.Linear(50, 10),\n", + " nn.Softmax(dim=1)\n", + " )\n", + "\n", + " def forward(self, x):\n", + " x = self.conv_layers(x)\n", + " x = x.view(-1, 320)\n", + " x = self.fc_layers(x)\n", + " return x\n", + "\n", + "def train(model, device, train_loader, optimizer, epoch):\n", + " model.train()\n", + " for batch_idx, (data, target) in enumerate(train_loader):\n", + " data, target = data.to(device), target.to(device)\n", + " optimizer.zero_grad()\n", + " output = model(data)\n", + " loss = F.nll_loss(output.log(), target)\n", + " loss.backward()\n", + " optimizer.step()\n", + " if batch_idx % 100 == 0:\n", + " print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n", + " epoch, batch_idx * len(data), len(train_loader.dataset),\n", + " 100. * batch_idx / len(train_loader), loss.item()))\n", + "\n", + "\n", + "train_loader = torch.utils.data.DataLoader(\n", + " datasets.MNIST('mnist_data', train=True, download=True,\n", + " transform=transforms.Compose([\n", + " transforms.ToTensor()\n", + " ])),\n", + " batch_size=batch_size, shuffle=True)\n", + "\n", + "test_loader = torch.utils.data.DataLoader(\n", + " datasets.MNIST('mnist_data', train=False, transform=transforms.Compose([\n", + " transforms.ToTensor()\n", + " ])),\n", + " batch_size=batch_size, shuffle=True)\n", + "\n", + "model = Net().to(device)\n", + "optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)" + ] + }, + { + "cell_type": "markdown", + "id": "399cc3d8-0082-40e3-b2e3-4b7b2719864b", + "metadata": {}, + "source": [ + "### 2. Train the CNN on the MNIST dataset" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1edec3cb-8ee2-4dad-8ba9-f175f42422e7", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "for epoch in range(1, num_epochs + 1):\n", + " train(model, device, train_loader, optimizer, epoch)" + ] + }, + { + "cell_type": "markdown", + "id": "3827d21d-13d5-4cd4-a93a-aa639ec65abe", + "metadata": {}, + "source": [ + "### 3. 
Predict the MNIST test data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1b38063b-6b86-4b24-9b83-ca3a0b437a19", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# test the model\n", + "model.eval()\n", + "test_loss = 0\n", + "correct = 0\n", + "y_true = torch.empty(0)\n", + "y_pred = torch.empty((0, 10))\n", + "X_test = torch.empty((0, 1, 28, 28))\n", + "\n", + "with torch.no_grad():\n", + " for data, target in test_loader:\n", + " data, target = data.to(device), target.to(device)\n", + " output = model(data)\n", + " X_test = torch.cat((X_test, data))\n", + " y_true, y_pred = torch.cat((y_true, target)), torch.cat((y_pred, output))\n", + "\n", + " test_loss += F.nll_loss(output.log(), target).item() # sum up batch loss\n", + " pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability\n", + " correct += pred.eq(target.view_as(pred)).sum().item()\n", + "\n", + "test_loss /= len(test_loader.dataset)\n", + "print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n", + " test_loss, correct, len(test_loader.dataset),\n", + "100. * correct / len(test_loader.dataset)))" + ] + }, + { + "cell_type": "markdown", + "id": "1017d195-f344-4568-a684-fa22426922cc", + "metadata": {}, + "source": [ + "### 4. Survey performance across all classes using the metrics_explainer plugin" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d5bf5aa5-e9b2-4189-820c-651e0670bd5c", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer import metrics\n", + "\n", + "classes = np.array(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'])\n", + "\n", + "cm = metrics.confusion_matrix(y_true, y_pred, classes)\n", + "cm.visualize()\n", + "print(cm.report)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1ffd7e1c-43d2-4934-b445-1259ce1a066d", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "plotter = metrics.plot(y_true, y_pred, classes)\n", + "plotter.pr_curve()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dc55016f-e2d4-4b58-b553-528d4ab10f2f", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "plotter.roc_curve()" + ] + }, + { + "cell_type": "markdown", + "id": "aca78854-4a03-48b3-bbdc-1f1a02cbcf81", + "metadata": {}, + "source": [ + "### 5. Explain performance across the classes using the feature_attributions_explainer plugin" + ] + }, + { + "cell_type": "markdown", + "id": "402afd71-eee8-4d2b-a9a8-b31acac28531", + "metadata": {}, + "source": [ + "##### From (4), it can be observed from the confusion matrix that classes 4 and 9 perform poorly. Additionallly, there is a high misclassification rate exclusively amongst the two labels. In other words, it appears that the CNN if confusing 4's with 9's, and vice-versa. 7.4% of all the 9 examples were misclassified as 4, and 10% of all the 4 examples were misclassified as 9.\n", + "\n", + "##### Let's take a closer look at the pixel-based shap values for the test examples where the CNN predicts '9' when the correct groundtruth label is '4'." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0425e555-fd7f-49bb-9ee1-40abe2e85b35", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# get the prediction indices where the model predicted 9\n", + "pred_idx = list(np.where(np.argmax(y_pred, axis=1) == 9)[0])\n", + "# get the groundtruth indices where the true label is 4\n", + "gt_idx = list(np.where(y_true == 4)[0])\n", + "\n", + "# collect the indices where the CNN misclassified 4 as 9\n", + "matches = list(set(pred_idx).intersection(gt_idx))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fe568b27-52a8-47bf-a593-0e3224f16cfb", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.attributions import attributions\n", + "# run the deep explainer\n", + "deViz = attributions.deep_explainer(model, X_test[:100], X_test[matches[:6]], classes)\n", + "deViz.visualize()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "35e8a255-0236-462a-8a3b-c749280a06f2", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "# instatiate gradient explainer object\n", + "# run the deep explainer\n", + "grViz = attributions.gradient_explainer(model, X_test[:100], X_test[matches[:6]], classes, 2)\n", + "grViz.visualize()" + ] + }, + { + "cell_type": "markdown", + "id": "ea555792-d870-46bf-8b24-cc686afffcc1", + "metadata": {}, + "source": [ + "### 6. Conclusion" + ] + }, + { + "cell_type": "markdown", + "id": "d852d37c-f275-4628-b2dd-f9af78d258f4", + "metadata": {}, + "source": [ + "##### From the deep and gradient explainer visuals, it can be observed that the CNN pays close attention to the top of the digit in distinguishing between a 4 and a 9. On the first and last row of the above gradient explainer visualization we can the 4's are closed. The contributes to postiive shap values (red) for the 9 classification. This begins explaining why the CNN is confusing the two digits." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/notebooks/partitionexplainer.html b/v1.1.0/notebooks/partitionexplainer.html new file mode 100644 index 0000000..5bfa5c9 --- /dev/null +++ b/v1.1.0/notebooks/partitionexplainer.html @@ -0,0 +1,393 @@ + + + + + + + Explaining Custom NN NewsGroups Classification Using the Attributions Explainer — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +
+

Explaining Custom NN NewsGroups Classification Using the Attributions Explainer

+
+
[ ]:
+
+
+
from intel_ai_safety.explainer.attributions import attributions
+from intel_ai_safety.explainer.metrics import metrics
+
+
+
+
+
[ ]:
+
+
+
import warnings
+warnings.filterwarnings('ignore')
+import os
+os.environ['KMP_WARNINGS'] = 'off'
+
+import numpy as np
+from sklearn import datasets
+
+all_categories = ['alt.atheism','comp.graphics','comp.os.ms-windows.misc','comp.sys.ibm.pc.hardware',
+                  'comp.sys.mac.hardware','comp.windows.x', 'misc.forsale','rec.autos','rec.motorcycles',
+                  'rec.sport.baseball','rec.sport.hockey','sci.crypt','sci.electronics','sci.med',
+                  'sci.space','soc.religion.christian','talk.politics.guns','talk.politics.mideast',
+                  'talk.politics.misc','talk.religion.misc']
+
+selected_categories = ['alt.atheism','comp.graphics','rec.motorcycles','sci.space','talk.politics.misc']
+
+X_train_text, Y_train = datasets.fetch_20newsgroups(subset="train", categories=selected_categories, return_X_y=True)
+X_test_text , Y_test  = datasets.fetch_20newsgroups(subset="test", categories=selected_categories, return_X_y=True)
+
+X_train_text = np.array(X_train_text)
+X_test_text = np.array(X_test_text)
+
+classes = np.unique(Y_train)
+mapping = dict(zip(classes, selected_categories))
+
+len(X_train_text), len(X_test_text), classes, mapping
+
+
+
+
+

Vectorize Text Data

+
+
[ ]:
+
+
+
import sklearn
+import numpy as np
+from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
+
+vectorizer = TfidfVectorizer(max_features=50000)
+
+vectorizer.fit(np.concatenate((X_train_text, X_test_text)))
+X_train = vectorizer.transform(X_train_text)
+X_test = vectorizer.transform(X_test_text)
+
+X_train, X_test = X_train.toarray(), X_test.toarray()
+
+X_train.shape, X_test.shape
+
+
+
+
+
+

Define the Model

+
+
[ ]:
+
+
+
from tensorflow.keras.models import Sequential
+from tensorflow.keras import layers
+
+def create_model():
+    return Sequential([
+                        layers.Input(shape=X_train.shape[1:]),
+                        layers.Dense(128, activation="relu"),
+                        layers.Dense(64, activation="relu"),
+                        layers.Dense(len(classes), activation="softmax"),
+                    ])
+
+model = create_model()
+

+
+
+
+
[ ]:
+
+
+
model.summary()
+
+
+
+
+
+

Compile and Train Model

+
+
[ ]:
+
+
+
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
+history = model.fit(X_train, Y_train, batch_size=256, epochs=5, validation_data=(X_test, Y_test))
+
+
+
+
+
+

Evaluate Model Performance

+
+
[ ]:
+
+
+
from sklearn.metrics import accuracy_score
+train_preds = model.predict(X_train)
+test_preds = model.predict(X_test)
+
+print("Train Accuracy : {:.3f}".format(accuracy_score(Y_train, np.argmax(train_preds, axis=1))))
+print("Test  Accuracy : {:.3f}".format(accuracy_score(Y_test, np.argmax(test_preds, axis=1))))
+
+
+
+
+
[ ]:
+
+
+
cm = metrics.confusion_matrix(Y_test, test_preds, selected_categories)
+cm.visualize()
+print(cm.report)
+
+
+
+
+
[ ]:
+
+
+
plotter = metrics.plot(Y_test, test_preds, selected_categories)
+plotter.pr_curve()
+
+
+
+
+
[ ]:
+
+
+
plotter.roc_curve()
+
+
+
+
+
[ ]:
+
+
+
import re
+
+X_batch_text = X_test_text[1:3]
+X_batch = X_test[1:3]
+
+print("Samples : ")
+for text in X_batch_text:
+    print(re.split(r"\W+", text))
+    print()
+
+preds_proba = model.predict(X_batch)
+preds = preds_proba.argmax(axis=1)
+
+print("Actual    Target Values : {}".format([selected_categories[target] for target in Y_test[1:3]]))
+print("Predicted Target Values : {}".format([selected_categories[target] for target in preds]))
+print("Predicted Probabilities : {}".format(preds_proba.max(axis=1)))
+
+
+
+
+
+

SHAP Partition Explainer

+
+
+

Visualize SHAP Values for Correct Predictions

+
+
[ ]:
+
+
+
def make_predictions(X_batch_text):
+    X_batch = vectorizer.transform(X_batch_text).toarray()
+    preds = model.predict(X_batch)
+    return preds
+
+partition_explainer = attributions.partition_text_explainer(make_predictions, selected_categories, X_batch_text, r"\W+")
+
+
+
+
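+
The bar, waterfall, and force plot cells later on this page call the shap library directly on a shap_values object that is never defined in this notebook. The added cell below is a minimal bridge, assuming the partition explainer wrapper exposes the shap.Explanation it computed as a shap_values attribute (verify the attribute name against your installed version):
+
[ ]:
+
+
+
import shap  # the underlying SHAP library used by the plot calls below
+
+# Assumption: the wrapper stores its computed shap.Explanation here.
+shap_values = partition_explainer.shap_values
+
+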
+

Text Plot

+
+
[ ]:
+
+
+
partition_explainer.visualize()
+
+
+
+
+
+

Bar Plots

+
+

Bar Plot 1

+
+
[ ]:
+
+
+
shap.plots.bar(shap_values[0,:, selected_categories[preds[0]]], max_display=15,
+               order=shap.Explanation.argsort.flip)
+
+
+
+
+
+
+

Bar Plot 2

+
+
[ ]:
+
+
+
shap.plots.bar(shap_values[1,:, selected_categories[preds[1]]], max_display=15,
+               order=shap.Explanation.argsort.flip)
+
+
+
+
+
+
+

Waterfall Plots

+
+

Waterfall Plot 1

+
+
[ ]:
+
+
+
shap.waterfall_plot(shap_values[0][:, selected_categories[preds[0]]], max_display=15)
+
+
+
+
+
+

Waterfall Plot 2

+
+
[ ]:
+
+
+
shap.waterfall_plot(shap_values[1][:, selected_categories[preds[1]]], max_display=15)
+
+
+
+
+
+
+

Force Plot

+
+
[ ]:
+
+
+
import re
+tokens = re.split(r"\W+", X_batch_text[0].lower())
+shap.initjs()
+shap.force_plot(shap_values.base_values[0][preds[0]], shap_values[0][:, preds[0]].values,
+                feature_names = tokens[:-1], out_names=selected_categories[preds[0]])
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/notebooks/partitionexplainer.ipynb b/v1.1.0/notebooks/partitionexplainer.ipynb new file mode 100644 index 0000000..9631f7e --- /dev/null +++ b/v1.1.0/notebooks/partitionexplainer.ipynb @@ -0,0 +1,436 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "4a5b3e26-8af8-49a6-8d56-85ee0a0df736", + "metadata": {}, + "source": [ + "# [Explaining Custom NN NewsGroups Classification Using the Attributions Explainer](https://coderzcolumn.com/tutorials/artificial-intelligence/explain-text-classification-models-using-shap-values-keras)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0b16c4fc-6221-42a3-94d6-3636bd73aa73", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from intel_ai_safety.explainer.attributions import attributions\n", + "from intel_ai_safety.explainer.metrics import metrics" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a3aff356-2c45-4de1-8f9e-451a0ae8e915", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import warnings\n", + "warnings.filterwarnings('ignore')\n", + "import os\n", + "os.environ['KMP_WARNINGS'] = 'off'\n", + "\n", + "import numpy as np\n", + "from sklearn import datasets\n", + "\n", + "all_categories = ['alt.atheism','comp.graphics','comp.os.ms-windows.misc','comp.sys.ibm.pc.hardware',\n", + " 'comp.sys.mac.hardware','comp.windows.x', 'misc.forsale','rec.autos','rec.motorcycles',\n", + " 'rec.sport.baseball','rec.sport.hockey','sci.crypt','sci.electronics','sci.med',\n", + " 'sci.space','soc.religion.christian','talk.politics.guns','talk.politics.mideast',\n", + " 'talk.politics.misc','talk.religion.misc']\n", + "\n", + "selected_categories = ['alt.atheism','comp.graphics','rec.motorcycles','sci.space','talk.politics.misc']\n", + "\n", + "X_train_text, Y_train = datasets.fetch_20newsgroups(subset=\"train\", categories=selected_categories, return_X_y=True)\n", + "X_test_text , Y_test = datasets.fetch_20newsgroups(subset=\"test\", categories=selected_categories, return_X_y=True)\n", + "\n", + "X_train_text = np.array(X_train_text)\n", + "X_test_text = np.array(X_test_text)\n", + "\n", + "classes = np.unique(Y_train)\n", + "mapping = dict(zip(classes, selected_categories))\n", + "\n", + "len(X_train_text), len(X_test_text), classes, mapping" + ] + }, + { + "cell_type": "markdown", + "id": "8f5830fc-cfa0-4142-96a0-c214eb7d7f2f", + "metadata": {}, + "source": [ + "## Vectorize Text Data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "49bc8bd2-d943-42eb-be11-1df91c26120d", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "import sklearn\n", + "import numpy as np\n", + "from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n", + "\n", + "vectorizer = TfidfVectorizer(max_features=50000)\n", + "\n", + "vectorizer.fit(np.concatenate((X_train_text, X_test_text)))\n", + "X_train = vectorizer.transform(X_train_text)\n", + "X_test = vectorizer.transform(X_test_text)\n", + "\n", + "X_train, X_test = X_train.toarray(), X_test.toarray()\n", + "\n", + "X_train.shape, X_test.shape" + ] + }, + { + "cell_type": "markdown", + "id": "9411e976-c96d-45c0-8e46-34ba6c27fc88", + "metadata": {}, + "source": [ + "## Define the Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7801f652-d266-427a-af38-e66dcbaf4cc6", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from tensorflow.keras.models import Sequential\n", + "from 
tensorflow.keras import layers\n", + "\n", + "def create_model():\n", + " return Sequential([\n", + " layers.Input(shape=X_train.shape[1:]),\n", + " layers.Dense(128, activation=\"relu\"),\n", + " layers.Dense(64, activation=\"relu\"),\n", + " layers.Dense(len(classes), activation=\"softmax\"),\n", + " ])\n", + "\n", + "model = create_model()\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5318769b-d5d7-42de-8e4b-7a209344595a", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "model.summary()" + ] + }, + { + "cell_type": "markdown", + "id": "d9aef351-62cc-434a-a7df-41a7ca8c1fd0", + "metadata": {}, + "source": [ + "## Compile and Train Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "17e1c3ad-647a-47a6-ac08-910acf7a718b", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "model.compile(\"adam\", \"sparse_categorical_crossentropy\", metrics=[\"accuracy\"])\n", + "history = model.fit(X_train, Y_train, batch_size=256, epochs=5, validation_data=(X_test, Y_test))" + ] + }, + { + "cell_type": "markdown", + "id": "0445a068-b5d0-4ad8-a19b-1f97ee6f8a82", + "metadata": {}, + "source": [ + "## Evaluate Model Performance" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e989daac-a0bd-4659-ba86-a2d7b680e35c", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from sklearn.metrics import accuracy_score\n", + "train_preds = model.predict(X_train)\n", + "test_preds = model.predict(X_test)\n", + "\n", + "print(\"Train Accuracy : {:.3f}\".format(accuracy_score(Y_train, np.argmax(train_preds, axis=1))))\n", + "print(\"Test Accuracy : {:.3f}\".format(accuracy_score(Y_test, np.argmax(test_preds, axis=1))))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "650b4d22-c4af-4078-98bf-8496407af0d6", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "cm = metrics.confusion_matrix(Y_test, test_preds, selected_categories)\n", + "cm.visualize()\n", + "print(cm.report)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b1b7bf93-275c-4f35-a27a-b9a12d184b36", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "plotter = metrics.plot(Y_test, test_preds, selected_categories)\n", + "plotter.pr_curve()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0c45a0b9-55b9-4ae2-9462-cbfaac6b886e", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "plotter.roc_curve()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a6ad95c8-94db-4e18-9f8d-da8362d4d797", + "metadata": {}, + "outputs": [], + "source": [ + "import re\n", + "\n", + "X_batch_text = X_test_text[1:3]\n", + "X_batch = X_test[1:3]\n", + "\n", + "print(\"Samples : \")\n", + "for text in X_batch_text:\n", + " print(re.split(r\"\\W+\", text))\n", + " print()\n", + "\n", + "preds_proba = model.predict(X_batch)\n", + "preds = preds_proba.argmax(axis=1)\n", + "\n", + "print(\"Actual Target Values : {}\".format([selected_categories[target] for target in Y_test[1:3]]))\n", + "print(\"Predicted Target Values : {}\".format([selected_categories[target] for target in preds]))\n", + "print(\"Predicted Probabilities : {}\".format(preds_proba.max(axis=1)))" + ] + }, + { + "cell_type": "markdown", + "id": "720c09ac-ef40-40be-8a03-bc0041912668", + "metadata": {}, + "source": [ + "## SHAP Partition Explainer" + ] + }, + { + "cell_type": "markdown", + "id": "5abf7cfe-5c85-4f2a-94dd-0b2813e42985", + "metadata": {}, + 
"source": [ + "## Visualize SHAP Values Correct Predictions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "44966218-0abe-489c-8f21-fea9cf0b81a9", + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "def make_predictions(X_batch_text):\n", + " X_batch = vectorizer.transform(X_batch_text).toarray()\n", + " preds = model.predict(X_batch)\n", + " return preds\n", + "\n", + "partition_explainer = attributions.partition_text_explainer(make_predictions, selected_categories, X_batch_text, r\"\\W+\")" + ] + }, + { + "cell_type": "markdown", + "id": "af9179a3-1eea-4801-a72d-12e70eebd9e7", + "metadata": {}, + "source": [ + "### Text Plot" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bf79d100-43de-4b01-b823-b1ec1a80fda4", + "metadata": {}, + "outputs": [], + "source": [ + "partition_explainer.visualize()" + ] + }, + { + "cell_type": "markdown", + "id": "bb76b907-5985-4766-ad0c-87b3019e321a", + "metadata": {}, + "source": [ + "### Bar Plots" + ] + }, + { + "cell_type": "markdown", + "id": "2248ba63-312f-4fa6-a487-788d58d950e9", + "metadata": {}, + "source": [ + "#### Bar Plot 1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bd02eb33-36b1-4eee-a5d4-bf2e7632ae5d", + "metadata": {}, + "outputs": [], + "source": [ + "shap.plots.bar(shap_values[0,:, selected_categories[preds[0]]], max_display=15,\n", + " order=shap.Explanation.argsort.flip)" + ] + }, + { + "cell_type": "markdown", + "id": "736d04af-d999-43ff-a10a-50760907de51", + "metadata": {}, + "source": [ + "### Bar Plot 2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ff7cdea7-21dc-44e3-be30-c484d290a1b5", + "metadata": {}, + "outputs": [], + "source": [ + "shap.plots.bar(shap_values[1,:, selected_categories[preds[1]]], max_display=15,\n", + " order=shap.Explanation.argsort.flip)" + ] + }, + { + "cell_type": "markdown", + "id": "da6c2a46-fb58-4972-b48c-475437a17874", + "metadata": {}, + "source": [ + "## Waterfall Plots" + ] + }, + { + "cell_type": "markdown", + "id": "c2a4a486-a975-4465-8158-5f621abcea46", + "metadata": {}, + "source": [ + "### Waterfall Plot 1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c7af5d14-9fad-4ce6-b442-260bf35d62b3", + "metadata": {}, + "outputs": [], + "source": [ + "shap.waterfall_plot(shap_values[0][:, selected_categories[preds[0]]], max_display=15)" + ] + }, + { + "cell_type": "markdown", + "id": "8616e02f-1a66-42ef-8598-2c9e190da534", + "metadata": {}, + "source": [ + "### Waterfall Plot 2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "77536572-b7c6-4abc-a978-2a9ac92b1873", + "metadata": {}, + "outputs": [], + "source": [ + "shap.waterfall_plot(shap_values[1][:, selected_categories[preds[1]]], max_display=15)" + ] + }, + { + "cell_type": "markdown", + "id": "c56958cf-8983-415c-8065-5536b7a49be3", + "metadata": {}, + "source": [ + "## Force Plot" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d7e8e34b-7e99-4ebe-8fdd-22f83f01c7d2", + "metadata": {}, + "outputs": [], + "source": [ + "import re\n", + "tokens = re.split(\"\\W+\", X_batch_text[0].lower())\n", + "shap.initjs()\n", + "shap.force_plot(shap_values.base_values[0][preds[0]], shap_values[0][:, preds[0]].values,\n", + " feature_names = tokens[:-1], out_names=selected_categories[preds[0]])" + ] + } + ], + "metadata": { + "jupytext": { + "formats": "ipynb,md:myst", + "orphan": true + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + 
"name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/notebooks/toxicity-tfma-model-card.html b/v1.1.0/notebooks/toxicity-tfma-model-card.html new file mode 100644 index 0000000..ce35ff1 --- /dev/null +++ b/v1.1.0/notebooks/toxicity-tfma-model-card.html @@ -0,0 +1,545 @@ + + + + + + + Creating Model Card for Toxic Comments Classification in Tensorflow — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
+
+
+
+
+ +
+

Creating Model Card for Toxic Comments Classification in Tensorflow

+

Adapted from Tensorflow

+
+

Training Dependencies

+
+
[ ]:
+
+
+
import os
+import tempfile
+import numpy as np
+import pandas as pd
+from datetime import datetime
+
+import tensorflow_hub as hub
+import tensorflow as tf
+import tensorflow_model_analysis as tfma
+import tensorflow_data_validation as tfdv
+
+from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
+from tensorflow_model_analysis.addons.fairness.view import widget_view
+
+
+
+
+
+

Model Card Dependencies

+
+
[ ]:
+
+
+
from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
+from intel_ai_safety.model_card_gen.datasets import TensorflowDataset
+
+
+
+
+

Download Data

+
+
+
+

Data Description

+

This version of the CivilComments Dataset provides access to the primary seven labels that were annotated by crowd workers, the toxicity and other tags are a value between 0 and 1 indicating the fraction of annotators that assigned these attributes to the comment text.

+

The other tags are only available for a fraction of the input examples. They are currently ignored for the main dataset; the CivilCommentsIdentities set includes those labels, but only consists of the subset of the data with them. The other attributes that were part of the original CivilComments release are included only in the raw data. See the Kaggle documentation for more details about the available features.

+

The comments in this dataset come from an archive of the Civil Comments platform, a commenting plugin for independent news sites. These public comments were created from 2015 - 2017 and appeared on approximately 50 English-language news sites across the world. When Civil Comments shut down in 2017, they chose to make the public comments available in a lasting open archive to enable future research. The original data, published on figshare, includes the public comment text, some associated +metadata such as article IDs, timestamps and commenter-generated “civility” labels, but does not include user ids. Jigsaw extended this dataset by adding additional labels for toxicity, identity mentions, as well as covert offensiveness. This data set is an exact replica of the data released for the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This dataset is released under CC0, as is the underlying comment text.

+

For comments that have a parent_id also in the civil comments data, the text of the previous comment is provided as the “parent_text” feature. Note that the splits were made without regard to this information, so using previous comments may leak some information. The annotators did not have access to the parent text when making the labels.

+

source: https://www.tensorflow.org/datasets/catalog/civil_comments

+
@misc{pavlopoulos2020toxicity,
+    title={Toxicity Detection: Does Context Really Matter?},
+    author={John Pavlopoulos and Jeffrey Sorensen and Lucas Dixon and Nithum Thain and Ion Androutsopoulos},
+    year={2020}, eprint={2006.00998}, archivePrefix={arXiv}, primaryClass={cs.CL}
+}
+
+@article{DBLP:journals/corr/abs-1903-04561,
+  author    = {Daniel Borkan and
+               Lucas Dixon and
+               Jeffrey Sorensen and
+               Nithum Thain and
+               Lucy Vasserman},
+  title     = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text
+               Classification},
+  journal   = {CoRR},
+  volume    = {abs/1903.04561},
+  year      = {2019},
+  url       = {http://arxiv.org/abs/1903.04561},
+  archivePrefix = {arXiv},
+  eprint    = {1903.04561},
+  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
+  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},
+  bibsource = {dblp computer science bibliography, https://dblp.org}
+}
+
+@inproceedings{pavlopoulos-etal-2021-semeval,
+    title = "{S}em{E}val-2021 Task 5: Toxic Spans Detection",
+    author = "Pavlopoulos, John  and Sorensen, Jeffrey  and Laugier, L{\'e}o and Androutsopoulos, Ion",
+    booktitle = "Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)",
+    month = aug,
+    year = "2021",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.semeval-1.6",
+    doi = "10.18653/v1/2021.semeval-1.6",
+    pages = "59--69",
+}
+
+
+

Feature documentation:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Feature            Class    Dtype
article_id         Tensor   tf.int32
id                 Tensor   tf.string
identity_attack    Tensor   tf.float32
insult             Tensor   tf.float32
obscene            Tensor   tf.float32
parent_id          Tensor   tf.int32
parent_text        Text     tf.string
severe_toxicity    Tensor   tf.float32
sexual_explicit    Tensor   tf.float32
text               Text     tf.string
threat             Tensor   tf.float32
toxicity           Tensor   tf.float32

+
+
[ ]:
+
+
+
dataset_url = 'https://storage.googleapis.com/civil_comments_dataset/'
+
+train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
+                                        dataset_url + 'train_tf_processed.tfrecord')
+
+validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
+                                           dataset_url + 'validate_tf_processed.tfrecord')
+
+
+
+
+

Train Model

+
+
[ ]:
+
+
+
TEXT_FEATURE = 'comment_text'
+LABEL = 'toxicity'
+
+FEATURE_MAP = {
+    LABEL: tf.io.FixedLenFeature([], tf.float32),
+    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),
+
+    'sexual_orientation': tf.io.VarLenFeature(tf.string),
+    'gender': tf.io.VarLenFeature(tf.string),
+    'religion': tf.io.VarLenFeature(tf.string),
+    'race': tf.io.VarLenFeature(tf.string),
+    'disability': tf.io.VarLenFeature(tf.string)
+}
+
+
+
+
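+
As a quick illustration of the two feature types above (an added sketch, not part of the original notebook): FixedLenFeature entries parse to dense tensors, while VarLenFeature entries parse to tf.sparse.SparseTensor values, since a comment can carry zero or more identity tags.
+
[ ]:
+
+
+
# Parse one serialized record to inspect the resulting tensor types.
+raw_record = next(iter(tf.data.TFRecordDataset([train_tf_file]).take(1)))
+parsed = tf.io.parse_single_example(raw_record, FEATURE_MAP)
+print(type(parsed[LABEL]))     # dense tensor (FixedLenFeature)
+print(type(parsed['gender']))  # tf.sparse.SparseTensor (VarLenFeature)
+
+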
+
[ ]:
+
+
+
def train_input_fn():
+    def parse_function(serialized):
+        # parse_single_example works on tf.train.Example type
+        parsed_example = tf.io.parse_single_example(serialized=serialized, features=FEATURE_MAP)
+        # fighting the 92%-8% imbalance in the dataset
+    # adding a `weight` key, which doesn't exist yet (only FEATURE_MAP keys exist)
+        parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)  # 0.1 for non-toxic, 1.1 for toxic
+        return (parsed_example, parsed_example[LABEL])  # (x, y)
+
+
+    train_dataset = tf.data.TFRecordDataset(filenames=[train_tf_file]).map(parse_function).batch(512)
+    return train_dataset
+
+
+
+
+
+
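+
Concretely, the weight works out to toxicity + 0.1: a clearly non-toxic example (toxicity 0.0) receives weight 0.1 while a fully toxic one (toxicity 1.0) receives 1.1, so toxic examples count roughly eleven times as much in the loss, offsetting the 92%-8% class imbalance noted in the comment above.
+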
+

Build Model

+
+
[ ]:
+
+
+
# vectorizing through TFHub
+embedded_text_feature_column = hub.text_embedding_column(
+    key=TEXT_FEATURE,
+    module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')
+
+classifier = tf.estimator.DNNClassifier(
+    hidden_units=[500, 100],
+    weight_column='weight',
+    feature_columns=[embedded_text_feature_column],
+    optimizer=tf.keras.optimizers.legacy.Adagrad(learning_rate=0.003),
+    loss_reduction=tf.losses.Reduction.SUM,
+    n_classes=2)
+
+
+
+
+
+

Train Model

+
+
[ ]:
+
+
+
classifier.train(input_fn=train_input_fn, steps=1000)
+
+
+
+
+

Export in EvalSavedModel Format

+
+
[ ]:
+
+
+
MODEL_PATH = tempfile.gettempdir()
+
+def eval_input_receiver_fn():
+    serialized_tf_example = tf.compat.v1.placeholder(dtype=tf.string, shape=[None], name='input_example_placeholder')
+
+    receiver_tensors = {'examples': serialized_tf_example}
+    features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)
+    features['weight'] = tf.ones_like(features[LABEL])
+
+    return tfma.export.EvalInputReceiver(
+        features=features,
+        receiver_tensors=receiver_tensors,
+        labels=features[LABEL]
+    )
+
+tfma_export_dir = tfma.export.export_eval_savedmodel(
+    estimator = classifier,  # trained model
+    export_dir_base = MODEL_PATH,
+    eval_input_receiver_fn = eval_input_receiver_fn
+)
+
+
+
+
+
+

Making a Model Card

+
+
[ ]:
+
+
+
_model_path = tfma_export_dir
+_data_paths = {'eval': TensorflowDataset(validate_tf_file),
+               'train': TensorflowDataset(train_tf_file)}
+
+
+
+
+
[ ]:
+
+
+
_eval_config =  'eval_config.proto'
+
+
+
+
+
[ ]:
+
+
+
%%writefile {_eval_config}
+
+model_specs {
+# To use EvalSavedModel set `signature_name` to "eval".
+signature_name: "eval"
+}
+
+## Post training metric information. These will be merged with any built-in
+## metrics from training.
+metrics_specs {
+metrics { class_name: "BinaryAccuracy" }
+metrics { class_name: "Precision" }
+metrics { class_name: "Recall" }
+metrics { class_name: "ConfusionMatrixPlot" }
+metrics { class_name: "FairnessIndicators" }
+}
+
+## Slicing information
+slicing_specs {}  # overall slice
+slicing_specs {
+feature_keys: ["gender"]
+}
+
+
+
+
+
[ ]:
+
+
+
mc = {
+  "model_details": {
+    "name": "Detecting Toxic Comments",
+    "overview":  (
+    'The Conversation AI team, a research initiative founded by Jigsaw and Google '
+    '(both part of Alphabet), builds technology to protect voices in conversation. '
+    'A main area of focus is machine learning models that can identify toxicity in '
+    'online conversations, where toxicity is defined as anything *rude, disrespectful '
+    'or otherwise likely to make someone leave a discussion*. '
+    'This multi-headed model attempts to recognize toxicity and several subtypes of toxicity. '
+    'This model recognizes toxicity and minimizes this type of unintended bias '
+    'with respect to mentions of identities. Reducing unintended bias ensures we can detect toxicity '
+    'across a wide range of conversations. '),
+    "owners": [
+      {
+        "name": "Intel XAI Team",
+        "contact": "xai@intel.com"
+      }
+    ],
+
+    "references": [
+      {
+        "reference": "https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data"
+      },
+      {
+        "reference": "https://medium.com/jigsaw/unintended-bias-and-names-of-frequently-targeted-groups-8e0b81f80a23"
+      }
+    ],
+    "graphics": {
+      "description": " "
+    }
+  },
+  "considerations": {
+      "limitations": [
+            {"description": ('Overrepresented Identities in Data:\n'
+                    'Identity terms for more frequently targeted groups '
+                   '(e.g. words like “black”, “muslim”, “feminist”, “woman”, “gay” etc)'
+                   ' often have higher scores because comments about those groups are '
+                   'over-represented in abusive and toxic comments.')
+            },
+           {"description": ('False Positive Rate:\n'
+                    'The names of targeted groups appear far more often in abusive '
+                    'comments. For example, in many forums unfortunately it’s common '
+                    'to use the word “gay” as an insult, or for someone to attack a '
+                    'commenter for being gay, but it is much rarer for the word gay to '
+                    'appear in positive, affirming statements (e.g. “I am a proud gay man”). '
+                    'When the training data used to train machine learning models contain these '
+                    'comments, ML models adopt the biases that exist in these underlying distributions, '
+                    'picking up negative connotations as they go. When there’s insufficient diversity '
+                    'in the data, the models can over-generalize and make these kinds of errors.')
+            },
+           {"description": ('Imbalanced Data:\n'
+                     'We developed new ways to balance the training '
+                     'data so that the model sees enough toxic and non-toxic examples '
+                     'containing identity terms in such a way that it can more effectively '
+                     'learn to distinguish toxic from non-toxic uses. You can learn more '
+                     'about this in our paper published at the AI, Ethics, and Society Conference.')
+            },
+        ]
+    },
+
+  "quantitative_analysis": {
+    "graphics": {
+      "description": " "
+    }
+  },
+  "schema_version": "0.0.1"
+}
+
+
+
+
+
[ ]:
+
+
+
mcg = ModelCardGen.generate(data_sets=_data_paths,
+                            eval_config=_eval_config,
+                            model_path=_model_path,
+                            model_card=mc)
+
+
+
+
+
[ ]:
+
+
+
mcg
+
+
+
+
+
+
+ + +
+
+ +
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/v1.1.0/notebooks/toxicity-tfma-model-card.ipynb b/v1.1.0/notebooks/toxicity-tfma-model-card.ipynb
new file mode 100644
index 0000000..257ff88
--- /dev/null
+++ b/v1.1.0/notebooks/toxicity-tfma-model-card.ipynb
@@ -0,0 +1,518 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "7e2893de",
+ "metadata": {},
+ "source": [
+ "# Creating Model Card for Toxic Comments Classification in Tensorflow"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "eb56f9e1",
+ "metadata": {},
+ "source": [
+ "Adapted from [Tensorflow](https://colab.research.google.com/github/google/eng-edu/blob/main/ml/pc/exercises/fairness_text_toxicity_part1.ipynb?utm_source=practicum-fairness&utm_campaign=colab-external&utm_medium=referral&utm_content=fairnessexercise1-colab#scrollTo=2z_xzJ40j9Q-) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "64c2921a",
+ "metadata": {},
+ "source": [
+ "#### Training Dependencies"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "13462d67",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import tempfile\n",
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "from datetime import datetime\n",
+ "\n",
+ "import tensorflow_hub as hub\n",
+ "import tensorflow as tf\n",
+ "import tensorflow_model_analysis as tfma\n",
+ "import tensorflow_data_validation as tfdv\n",
+ "\n",
+ "from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\n",
+ "from tensorflow_model_analysis.addons.fairness.view import widget_view"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b4cd35c4",
+ "metadata": {},
+ "source": [
+ "#### Model Card Dependencies"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4f1d8721",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen\n",
+ "from intel_ai_safety.model_card_gen.datasets import TensorflowDataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d23dba98",
+ "metadata": {},
+ "source": [
+ "## Download Data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6e4ccc39",
+ "metadata": {},
+ "source": [
+ "#### Data Description"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ec83155f",
+ "metadata": {},
+ "source": [
+ "This version of the CivilComments Dataset provides access to the primary seven labels that were annotated by crowd workers, the toxicity and other tags are a value between 0 and 1 indicating the fraction of annotators that assigned these attributes to the comment text.\n",
+ "\n",
+ "The other tags are only available for a fraction of the input examples. They are currently ignored for the main dataset; the CivilCommentsIdentities set includes those labels, but only consists of the subset of the data with them. The other attributes that were part of the original CivilComments release are included only in the raw data. See the Kaggle documentation for more details about the available features.\n",
+ "\n",
+ "The comments in this dataset come from an archive of the Civil Comments platform, a commenting plugin for independent news sites. These public comments were created from 2015 - 2017 and appeared on approximately 50 English-language news sites across the world. When Civil Comments shut down in 2017, they chose to make the public comments available in a lasting open archive to enable future research. 
The original data, published on figshare, includes the public comment text, some associated metadata such as article IDs, timestamps and commenter-generated \"civility\" labels, but does not include user ids. Jigsaw extended this dataset by adding additional labels for toxicity, identity mentions, as well as covert offensiveness. This data set is an exact replica of the data released for the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This dataset is released under CC0, as is the underlying comment text.\n", + "\n", + "For comments that have a parent_id also in the civil comments data, the text of the previous comment is provided as the \"parent_text\" feature. Note that the splits were made without regard to this information, so using previous comments may leak some information. The annotators did not have access to the parent text when making the labels.\n", + "\n", + "*source*: https://www.tensorflow.org/datasets/catalog/civil_comments" + ] + }, + { + "cell_type": "markdown", + "id": "9e72a0fb", + "metadata": {}, + "source": [ + "```\n", + "@misc{pavlopoulos2020toxicity,\n", + " title={Toxicity Detection: Does Context Really Matter?},\n", + " author={John Pavlopoulos and Jeffrey Sorensen and Lucas Dixon and Nithum Thain and Ion Androutsopoulos},\n", + " year={2020}, eprint={2006.00998}, archivePrefix={arXiv}, primaryClass={cs.CL}\n", + "}\n", + "\n", + "@article{DBLP:journals/corr/abs-1903-04561,\n", + " author = {Daniel Borkan and\n", + " Lucas Dixon and\n", + " Jeffrey Sorensen and\n", + " Nithum Thain and\n", + " Lucy Vasserman},\n", + " title = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text\n", + " Classification},\n", + " journal = {CoRR},\n", + " volume = {abs/1903.04561},\n", + " year = {2019},\n", + " url = {http://arxiv.org/abs/1903.04561},\n", + " archivePrefix = {arXiv},\n", + " eprint = {1903.04561},\n", + " timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},\n", + " biburl = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},\n", + " bibsource = {dblp computer science bibliography, https://dblp.org}\n", + "}\n", + "\n", + "@inproceedings{pavlopoulos-etal-2021-semeval,\n", + " title = \"{S}em{E}val-2021 Task 5: Toxic Spans Detection\",\n", + " author = \"Pavlopoulos, John and Sorensen, Jeffrey and Laugier, L{'e}o and Androutsopoulos, Ion\",\n", + " booktitle = \"Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)\",\n", + " month = aug,\n", + " year = \"2021\",\n", + " address = \"Online\",\n", + " publisher = \"Association for Computational Linguistics\",\n", + " url = \"https://aclanthology.org/2021.semeval-1.6\",\n", + " doi = \"10.18653/v1/2021.semeval-1.6\",\n", + " pages = \"59--69\",\n", + "}\n", + "\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "a2a482dc", + "metadata": {}, + "source": [ + "**Feature documentation**:\n", + "\n", + "|Feature|Class|Dtype|\n", + "|-------|:---:|:---:|\n", + "|article_id|\tTensor|\t\ttf.int32|\n", + "|id|\tTensor|\t\ttf.string|\n", + "|identity_attack|\tTensor|\t\ttf.float32|\n", + "|insult|\tTensor|\t\ttf.float32|\n", + "|obscene|\tTensor|\t\ttf.float32|\n", + "|parent_id|\tTensor|\t\ttf.int32|\n", + "|parent_text|\tText|\t\ttf.string|\n", + "|severe_toxicity|\tTensor|\t\ttf.float32|\n", + "|sexual_explicit|\tTensor|\t\ttf.float32|\n", + "|text|\tText|\t\ttf.string|\n", + "|threat|\tTensor|\t\ttf.float32|\n", + "|toxicity|\tTensor|\t\ttf.float32|" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a79713ca", + "metadata": 
{}, + "outputs": [], + "source": [ + "dataset_url = 'https://storage.googleapis.com/civil_comments_dataset/'\n", + "\n", + "train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',\n", + " dataset_url + 'train_tf_processed.tfrecord')\n", + "\n", + "validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',\n", + " dataset_url + 'validate_tf_processed.tfrecord')" + ] + }, + { + "cell_type": "markdown", + "id": "4c66decc", + "metadata": {}, + "source": [ + "## Train Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "00fced58", + "metadata": {}, + "outputs": [], + "source": [ + "TEXT_FEATURE = 'comment_text'\n", + "LABEL = 'toxicity'\n", + "\n", + "FEATURE_MAP = {\n", + " LABEL: tf.io.FixedLenFeature([], tf.float32),\n", + " TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),\n", + " \n", + " 'sexual_orientation': tf.io.VarLenFeature(tf.string),\n", + " 'gender': tf.io.VarLenFeature(tf.string),\n", + " 'religion': tf.io.VarLenFeature(tf.string),\n", + " 'race': tf.io.VarLenFeature(tf.string),\n", + " 'disability': tf.io.VarLenFeature(tf.string)\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "49471282", + "metadata": {}, + "outputs": [], + "source": [ + "def train_input_fn():\n", + " def parse_function(serialized):\n", + " # parse_single_example works on tf.train.Example type\n", + " parsed_example = tf.io.parse_single_example(serialized=serialized, features=FEATURE_MAP)\n", + " # fighting the 92%-8% imbalance in the dataset\n", + " # adding `weight` label, doesn't exist already (only FEATURE_MAP keys exist)\n", + " parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1) # 0.1 for non-toxic, 1.1 for toxic\n", + " return (parsed_example, parsed_example[LABEL]) # (x, y)\n", + " \n", + "\n", + " train_dataset = tf.data.TFRecordDataset(filenames=[train_tf_file]).map(parse_function).batch(512)\n", + " return train_dataset" + ] + }, + { + "cell_type": "markdown", + "id": "7aabaf8c", + "metadata": {}, + "source": [ + "#### Build Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "da5f12c9", + "metadata": {}, + "outputs": [], + "source": [ + "# vectorizing through TFHub\n", + "embedded_text_feature_column = hub.text_embedding_column(\n", + " key=TEXT_FEATURE,\n", + " module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')\n", + "\n", + "classifier = tf.estimator.DNNClassifier(\n", + " hidden_units=[500, 100],\n", + " weight_column='weight',\n", + " feature_columns=[embedded_text_feature_column],\n", + " optimizer=tf.keras.optimizers.legacy.Adagrad(learning_rate=0.003),\n", + " loss_reduction=tf.losses.Reduction.SUM,\n", + " n_classes=2)" + ] + }, + { + "cell_type": "markdown", + "id": "54e45a4a", + "metadata": {}, + "source": [ + "#### Train Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d0352b47", + "metadata": {}, + "outputs": [], + "source": [ + "classifier.train(input_fn=train_input_fn, steps=1000)" + ] + }, + { + "cell_type": "markdown", + "id": "83f47eaa", + "metadata": {}, + "source": [ + "## Export in EvalSavedModel Format" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f8e9e645", + "metadata": {}, + "outputs": [], + "source": [ + "MODEL_PATH = tempfile.gettempdir()\n", + "\n", + "def eval_input_receiver_fn():\n", + " serialized_tf_example = tf.compat.v1.placeholder(dtype=tf.string, shape=[None], name='input_example_placeholder')\n", + " \n", + " receiver_tensors = {'examples': serialized_tf_example}\n", + " 
features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)\n",
+ " features['weight'] = tf.ones_like(features[LABEL])\n",
+ " \n",
+ " return tfma.export.EvalInputReceiver(\n",
+ " features=features,\n",
+ " receiver_tensors=receiver_tensors,\n",
+ " labels=features[LABEL]\n",
+ " )\n",
+ "\n",
+ "tfma_export_dir = tfma.export.export_eval_savedmodel(\n",
+ " estimator = classifier, # trained model\n",
+ " export_dir_base = MODEL_PATH,\n",
+ " eval_input_receiver_fn = eval_input_receiver_fn\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c603c7be",
+ "metadata": {},
+ "source": [
+ "## Making a Model Card"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0bc1445c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "_model_path = tfma_export_dir\n",
+ "_data_paths = {'eval': TensorflowDataset(validate_tf_file),\n",
+ " 'train': TensorflowDataset(train_tf_file)}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "766ebda1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "_eval_config = 'eval_config.proto'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "630aff50",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%writefile {_eval_config}\n",
+ "\n",
+ "model_specs {\n",
+ "# To use EvalSavedModel set `signature_name` to \"eval\".\n",
+ "signature_name: \"eval\"\n",
+ "}\n",
+ "\n",
+ "## Post training metric information. These will be merged with any built-in\n",
+ "## metrics from training.\n",
+ "metrics_specs {\n",
+ "metrics { class_name: \"BinaryAccuracy\" }\n",
+ "metrics { class_name: \"Precision\" }\n",
+ "metrics { class_name: \"Recall\" }\n",
+ "metrics { class_name: \"ConfusionMatrixPlot\" }\n",
+ "metrics { class_name: \"FairnessIndicators\" }\n",
+ "}\n",
+ "\n",
+ "## Slicing information\n",
+ "slicing_specs {} # overall slice\n",
+ "slicing_specs {\n",
+ "feature_keys: [\"gender\"]\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "de5d057d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "mc = {\n",
+ " \"model_details\": {\n",
+ " \"name\": \"Detecting Toxic Comments\",\n",
+ " \"overview\": (\n",
+ " 'The Conversation AI team, a research initiative founded by Jigsaw and Google '\n",
+ " '(both part of Alphabet), builds technology to protect voices in conversation. '\n",
+ " 'A main area of focus is machine learning models that can identify toxicity in '\n",
+ " 'online conversations, where toxicity is defined as anything *rude, disrespectful '\n",
+ " 'or otherwise likely to make someone leave a discussion*. '\n",
+ " 'This multi-headed model attempts to recognize toxicity and several subtypes of toxicity. '\n",
+ " 'This model recognizes toxicity and minimizes this type of unintended bias '\n",
+ " 'with respect to mentions of identities. Reducing unintended bias ensures we can detect toxicity '\n",
+ " 'across a wide range of conversations. 
'),\n",
+ " \"owners\": [\n",
+ " {\n",
+ " \"name\": \"Intel XAI Team\",\n",
+ " \"contact\": \"xai@intel.com\"\n",
+ " }\n",
+ " ],\n",
+ "\n",
+ " \"references\": [\n",
+ " {\n",
+ " \"reference\": \"https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data\"\n",
+ " },\n",
+ " {\n",
+ " \"reference\": \"https://medium.com/jigsaw/unintended-bias-and-names-of-frequently-targeted-groups-8e0b81f80a23\"\n",
+ " }\n",
+ " ],\n",
+ " \"graphics\": {\n",
+ " \"description\": \" \"\n",
+ " }\n",
+ " },\n",
+ " \"considerations\": { \n",
+ " \"limitations\": [\n",
+ " {\"description\": ('Overrepresented Identities in Data:\\n'\n",
+ " 'Identity terms for more frequently targeted groups '\n",
+ " '(e.g. words like “black”, “muslim”, “feminist”, “woman”, “gay” etc)'\n",
+ " ' often have higher scores because comments about those groups are '\n",
+ " 'over-represented in abusive and toxic comments.')\n",
+ " },\n",
+ " {\"description\": ('False Positive Rate:\\n'\n",
+ " 'The names of targeted groups appear far more often in abusive '\n",
+ " 'comments. For example, in many forums unfortunately it’s common '\n",
+ " 'to use the word “gay” as an insult, or for someone to attack a '\n",
+ " 'commenter for being gay, but it is much rarer for the word gay to '\n",
+ " 'appear in positive, affirming statements (e.g. “I am a proud gay man”). '\n",
+ " 'When the training data used to train machine learning models contain these '\n",
+ " 'comments, ML models adopt the biases that exist in these underlying distributions, '\n",
+ " 'picking up negative connotations as they go. When there’s insufficient diversity '\n",
+ " 'in the data, the models can over-generalize and make these kinds of errors.')\n",
+ " },\n",
+ " {\"description\": ('Imbalanced Data:\\n'\n",
+ " 'We developed new ways to balance the training '\n",
+ " 'data so that the model sees enough toxic and non-toxic examples '\n",
+ " 'containing identity terms in such a way that it can more effectively '\n",
+ " 'learn to distinguish toxic from non-toxic uses. 
You can learn more '\n", + " 'about this in our paper published at the AI, Ethics, and Society Conference.')\n", + " },\n", + " ]\n", + " },\n", + " \n", + " \"quantitative_analysis\": {\n", + " \"graphics\": {\n", + " \"description\": \" \"\n", + " }\n", + " },\n", + " \"schema_version\": \"0.0.1\"\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d815585f", + "metadata": {}, + "outputs": [], + "source": [ + "mcg = ModelCardGen.generate(data_sets=_data_paths,\n", + " eval_config=_eval_config,\n", + " model_path=_model_path, \n", + " model_card=mc)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "39ab58e5", + "metadata": {}, + "outputs": [], + "source": [ + "mcg" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/v1.1.0/objects.inv b/v1.1.0/objects.inv new file mode 100644 index 0000000..c618f55 Binary files /dev/null and b/v1.1.0/objects.inv differ diff --git a/v1.1.0/overview.html b/v1.1.0/overview.html new file mode 100644 index 0000000..b545348 --- /dev/null +++ b/v1.1.0/overview.html @@ -0,0 +1,146 @@ + + + + + + + Overview — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Overview

+

The Intel® Explainable AI Tools are designed to help users detect and mitigate issues of fairness and interpretability, while running best on Intel hardware. +There are two Python* components in the repository:

+
    +
  • Model Card Generator

    +
      +
    • Creates interactive HTML reports containing model performance and fairness metrics

    • +
    +
  • +
  • Explainer

    +
      +
    • Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API, including the following modules (see the import sketch after this list):

      +
        +
      • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

      • +
      • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weighted class activation mapping (Grad-CAM)

      • +
      • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

      • +
      +
    • +
    +
  • +
+
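+
For orientation, here is a minimal sketch of the import surface behind the modules listed above, as used throughout the example notebooks in this repository (the cam module path is assumed by analogy with the other submodules):
+
+from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
+from intel_ai_safety.explainer.attributions import attributions
+from intel_ai_safety.explainer.cam import cam
+from intel_ai_safety.explainer.metrics import metrics
+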

*Other names and brands may be claimed as the property of others. Trademarks

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/v1.1.0/search.html b/v1.1.0/search.html new file mode 100644 index 0000000..756ec93 --- /dev/null +++ b/v1.1.0/search.html @@ -0,0 +1,136 @@ + + + + + + Search — Intel® Explainable AI Tools 1.0.0 documentation + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + + + +
+ +
+ +
+
+ +
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/v1.1.0/searchindex.js b/v1.1.0/searchindex.js new file mode 100644 index 0000000..e511f8d --- /dev/null +++ b/v1.1.0/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"docnames": ["datasets", "explainer/attributions", "explainer/cam", "explainer/index", "explainer/metrics", "index", "install", "legal", "markdown/Install", "markdown/Legal", "markdown/Overview", "markdown/Welcome", "model_card_gen/api", "model_card_gen/example", "model_card_gen/index", "notebooks", "notebooks/ExplainingImageClassification", "notebooks/Multimodal_Cancer_Detection", "notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions", "notebooks/TorchVision_CIFAR_Interpret", "notebooks/adult-pytorch-model-card", "notebooks/compas-model-card-tfx", "notebooks/heart_disease", "notebooks/mnist", "notebooks/partitionexplainer", "notebooks/toxicity-tfma-model-card", "overview"], "filenames": ["datasets.rst", "explainer/attributions.md", "explainer/cam.md", "explainer/index.md", "explainer/metrics.md", "index.md", "install.rst", "legal.rst", "markdown/Install.md", "markdown/Legal.md", "markdown/Overview.md", "markdown/Welcome.md", "model_card_gen/api.rst", "model_card_gen/example.md", "model_card_gen/index.md", "notebooks.rst", "notebooks/ExplainingImageClassification.nblink", "notebooks/Multimodal_Cancer_Detection.nblink", "notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.nblink", "notebooks/TorchVision_CIFAR_Interpret.nblink", "notebooks/adult-pytorch-model-card.nblink", "notebooks/compas-model-card-tfx.nblink", "notebooks/heart_disease.nblink", "notebooks/mnist.nblink", "notebooks/partitionexplainer.nblink", "notebooks/toxicity-tfma-model-card.nblink", "overview.rst"], "titles": ["Datasets", "<no title>", "<no title>", "Explainer", "API Refrence", "Intel\u00ae Explainable AI Tools", "Installation", "Legal Information", "Installation", "Legal Information", "Overview", "Intel\u00ae Explainable AI Tools", "API Reference", "Example Model Card", "Model Card Generator", "Example Notebooks", "Explaining ResNet50 ImageNet Classification Using the CAM Explainer", "Multimodal Breast Cancer Detection Explainability using the Intel\u00ae Explainable AI API", "Explaining Fine Tuned Text Classifier with PyTorch using the Intel\u00ae Explainable AI API", "Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer", "Generating Model Card with PyTorch", "Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators", "Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer", "Explaining Custom CNN MNIST Classification Using the Attributions Explainer", "Explaining Custom NN NewsGroups Classification Using the Attributions Explainer", "Creating Model Card for Toxic Comments Classification in Tensorflow", "Overview"], "terms": {"thi": [0, 5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 20, 21, 22, 25], "i": [0, 3, 5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 19, 20, 21, 25], "comprehens": [0, 14], "list": [0, 5, 6, 8, 11, 17, 18, 20, 21, 23], "public": [0, 14, 21, 25], "us": [0, 3, 5, 6, 7, 8, 9, 10, 11, 15, 20, 21, 25, 26], "repositori": [0, 5, 6, 8, 10, 11, 14, 17, 18, 20, 26], "name": [0, 5, 6, 8, 10, 11, 14, 15, 16, 17, 18, 20, 21, 25, 26], "link": [0, 5, 6, 8, 11, 14], "sourc": [0, 5, 6, 7, 8, 9, 11, 18, 25], "framework": [0, 14, 15, 17], "case": [0, 5, 6, 8, 11, 14, 15, 17, 18, 21], "adult": [0, 20], "incom": 0, "pytorch": [0, 3, 5, 6, 8, 10, 11, 14, 15, 19, 26], "tabular": [0, 3, 5, 10, 11, 15, 26], 
"classif": [0, 3, 5, 10, 11, 15, 18, 21, 26], "cdd": [0, 17], "cesm": [0, 17], "imag": [0, 3, 5, 10, 11, 15, 19, 26], "text": [0, 15, 25], "cifar": [0, 15], "10": [0, 5, 6, 8, 11, 15, 17, 18, 20, 21, 22, 25], "torchvis": [0, 16, 19, 23], "tensorflow": [0, 3, 5, 6, 8, 10, 11, 14, 15, 17, 18, 22, 24, 26], "civil": [0, 25], "comment": [0, 15], "tfd": 0, "compa": [0, 14, 21], "recidiv": [0, 14, 21], "risk": [0, 14, 21], "score": [0, 14, 16, 21, 25], "data": [0, 5, 6, 7, 8, 9, 11, 14, 18, 19, 21], "analysi": [0, 14, 18, 21], "imagenet": [0, 15], "imdb": [0, 18], "review": [0, 14, 18], "mnist": [0, 15], "sm": [0, 18], "spam": [0, 18], "collect": [0, 14, 18, 23], "python": [3, 5, 6, 8, 10, 11, 14, 17, 21, 26], "modul": [3, 5, 10, 11, 17, 19, 20, 21, 23, 26], "intel": [3, 6, 7, 8, 9, 10, 14, 15, 20, 21, 25, 26], "ai": [3, 6, 7, 8, 9, 10, 15, 21, 25, 26], "tool": [3, 6, 7, 8, 9, 10, 14, 15, 21, 26], "provid": [3, 5, 6, 7, 8, 9, 11, 14, 20, 21, 25], "method": [3, 5, 10, 11, 16, 19, 26], "model": [3, 10, 19, 23, 26], "compos": [3, 19, 23], "add": [3, 14, 17, 21, 25], "minim": [3, 25], "code": [3, 5, 6, 7, 8, 9, 11, 18, 21], "extens": [3, 17, 18], "easi": 3, "new": [3, 17, 25], "commun": 3, "contribut": [3, 5, 6, 7, 8, 9, 11], "welcom": 3, "attribut": [3, 5, 10, 11, 15, 17, 18, 25, 26], "visual": [3, 5, 10, 11, 17, 18, 19, 21, 26], "neg": [3, 5, 10, 11, 21, 25, 26], "posit": [3, 5, 10, 11, 21, 25, 26], "featur": [3, 10, 14, 16, 17, 18, 19, 20, 21, 25, 26], "pixel": [3, 5, 10, 11, 26], "word": [3, 5, 10, 11, 17, 18, 25, 26], "token": [3, 5, 10, 11, 17, 18, 24, 26], "predict": [3, 5, 10, 11, 14, 17, 19, 20, 21, 22, 26], "cam": [3, 5, 10, 11, 15, 17, 26], "creat": [3, 7, 9, 10, 14, 15, 17, 18, 21, 26], "heatmap": [3, 5, 10, 11, 26], "cnn": [3, 5, 10, 11, 15, 17, 26], "gradient": [3, 5, 10, 11, 19, 26], "weight": [3, 5, 10, 11, 16, 18, 25, 26], "class": [3, 5, 10, 11, 14, 16, 17, 18, 19, 20, 21, 24, 25, 26], "activ": [3, 10, 21, 22, 24, 26], "map": [3, 5, 10, 11, 14, 17, 18, 20, 21, 24, 25, 26], "api": [3, 5, 6, 8, 10, 11, 15, 21, 26], "refrenc": 3, "gain": [3, 5, 10, 11, 20, 26], "insight": [3, 5, 10, 11, 26], "measur": [3, 5, 10, 11, 25, 26], "need": [3, 5, 10, 11, 16, 17, 18, 21, 26], "dure": [3, 5, 10, 11, 26], "machin": [3, 5, 10, 11, 18, 20, 21, 25, 26], "learn": [3, 5, 10, 11, 14, 15, 18, 20, 21, 25, 26], "workflow": [3, 5, 10, 11, 26], "scientist": [5, 11], "mlop": [5, 11], "engin": [5, 11, 21], "have": [5, 6, 8, 11, 14, 16, 17, 18, 21, 22, 25], "interpret": [5, 10, 11, 26], "The": [5, 6, 8, 10, 11, 14, 16, 17, 18, 21, 25, 26], "ar": [5, 6, 7, 8, 9, 10, 11, 14, 16, 17, 18, 21, 25, 26], "design": [5, 10, 11, 26], "help": [5, 10, 11, 17, 26], "user": [5, 10, 11, 14, 17, 21, 25, 26], "detect": [5, 10, 11, 15, 25, 26], "mitig": [5, 10, 11, 26], "against": [5, 10, 11, 21, 26], "issu": [5, 6, 8, 10, 11, 15, 17, 26], "fair": [5, 10, 11, 14, 15, 20, 25, 26], "while": [5, 10, 11, 17, 21, 26], "best": [5, 10, 11, 26], "hardwar": [5, 10, 11, 18, 24, 26], "There": [5, 6, 8, 10, 11, 21, 26], "two": [5, 6, 8, 10, 11, 17, 18, 21, 26], "compon": [5, 10, 11, 21, 26], "card": [5, 6, 8, 10, 11, 26], "gener": [5, 6, 8, 10, 11, 17, 18, 25, 26], "interact": [5, 10, 11, 14, 21, 26], "html": [5, 10, 11, 14, 20, 21, 26], "report": [5, 6, 8, 10, 11, 14, 17, 21, 23, 24, 26], "contain": [5, 6, 8, 10, 11, 14, 21, 25, 26], "perform": [5, 6, 7, 8, 9, 10, 11, 14, 17, 18, 21, 26], "metric": [5, 10, 11, 14, 17, 18, 20, 21, 22, 23, 24, 25, 26], "post": [5, 10, 11, 25, 26], "hoc": [5, 10, 11, 26], "distil": [5, 10, 11, 26], 
"examin": [5, 10, 11, 26], "behavior": [5, 10, 11, 26], "both": [5, 10, 11, 16, 18, 21, 25, 26], "via": [5, 10, 11, 14, 17, 26], "simpl": [5, 10, 11, 26], "includ": [5, 10, 11, 14, 17, 18, 21, 25, 26], "follow": [5, 6, 8, 10, 11, 14, 17, 18, 21, 26], "linux": [5, 6, 8, 11], "system": [5, 6, 8, 11, 17, 18, 20, 21], "wsl2": [5, 6, 8, 11], "window": [5, 6, 8, 11, 24], "valid": [5, 6, 8, 11, 17, 18, 21], "ubuntu": [5, 6, 8, 11], "20": [5, 6, 8, 11, 17, 18, 20, 21, 23], "04": [5, 6, 8, 11], "22": [5, 6, 8, 11], "lt": [5, 6, 8, 11], "3": [5, 6, 8, 11, 14, 17, 19, 21, 24], "9": [5, 6, 8, 11, 17, 19], "o": [5, 6, 8, 11, 17, 18, 19, 20, 21, 24, 25], "packag": [5, 6, 8, 11, 14, 17], "apt": [5, 6, 8, 11], "build": [5, 6, 8, 11, 21], "essenti": [5, 6, 8, 11], "dev": [5, 6, 8, 11, 25], "git": [5, 6, 8, 11, 14], "onli": [5, 6, 8, 11, 14, 16, 18, 21, 25], "instruct": [5, 6, 8, 11, 18], "safeti": [5, 6, 8, 11], "librari": [5, 6, 8, 11], "clone": [5, 6, 8, 11, 14], "github": [5, 6, 8, 11, 14, 16, 17], "can": [5, 6, 8, 11, 14, 16, 17, 18, 21, 25], "done": [5, 6, 8, 11], "instead": [5, 6, 8, 11, 17, 18], "basic": [5, 6, 8, 11], "pip": [5, 6, 8, 11, 14, 17, 21], "you": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 21, 25], "plan": [5, 6, 8, 11], "make": [5, 6, 8, 11, 17, 21, 22], "chang": [5, 6, 8, 11, 18], "repo": [5, 6, 8, 11, 17], "navig": [5, 6, 8, 11, 14], "directori": [5, 6, 8, 11, 14, 18, 21], "allow": [5, 6, 8, 11, 14, 21], "envion": [5, 6, 8, 11], "venv": [5, 6, 8, 11], "current": [5, 6, 8, 11, 17, 25], "lock": [5, 6, 8, 11], "In": [5, 6, 8, 11, 14, 18, 21], "addtion": [5, 6, 8, 11], "explicitli": [5, 6, 8, 11], "tell": [5, 6, 8, 11, 14], "which": [5, 6, 8, 11, 14, 16, 18, 21], "instanc": [5, 6, 8, 11, 14, 17, 21], "env": [5, 6, 8, 11], "full": [5, 6, 8, 11, 18], "path": [5, 6, 8, 11, 14, 16, 17, 18, 19, 21], "choos": [5, 6, 8, 11, 16], "intel_ai_safeti": [5, 6, 8, 11, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], "subpackag": [5, 6, 8, 11], "plugin": [5, 6, 8, 11, 25], "wish": [5, 6, 8, 11], "all": [5, 6, 8, 11, 14, 17, 18, 21], "its": [5, 6, 8, 11, 14, 17], "e": [5, 6, 8, 11, 14, 18, 25], "g": [5, 6, 8, 11, 14, 18, 21, 25], "model_card_gen": [5, 6, 8, 11, 14, 20, 21, 25], "extra": [5, 6, 8, 11, 18], "b": [5, 6, 8, 11, 17], "just": [5, 6, 8, 11, 17, 18], "c": [5, 6, 8, 11, 14, 21, 25], "d": [5, 6, 8, 11, 17, 18, 19], "implement": [5, 6, 8, 11, 21], "f": [5, 6, 8, 11, 14, 17, 18, 19, 21, 23], "tensroflow": [5, 6, 8, 11], "bin": [5, 6, 8, 11], "we": [5, 6, 8, 11, 14, 16, 17, 18, 21, 25], "encourag": [5, 6, 8, 11], "virtualenv": [5, 6, 8, 11], "conda": [5, 6, 8, 11], "consist": [5, 6, 8, 11, 25], "manag": [5, 6, 8, 11, 14, 21], "wai": [5, 6, 8, 11, 25], "do": [5, 6, 8, 11, 17, 18, 21], "m": [5, 6, 8, 11, 14, 17, 21, 24], "xai_env": [5, 6, 8, 11], "Or": [5, 6, 8, 11], "config": [5, 6, 8, 11, 14, 17, 20, 21], "fals": [5, 6, 8, 11, 14, 17, 19, 20, 21, 23, 25], "mai": [5, 6, 8, 10, 11, 15, 18, 25, 26], "depend": [5, 6, 8, 11, 14, 16], "associ": [5, 6, 7, 8, 9, 11, 18, 25], "document": [5, 6, 8, 11, 14, 18, 25], "your": [5, 6, 7, 8, 9, 11, 16, 17, 18, 21], "wa": [5, 6, 8, 11, 16, 17, 18, 21], "success": [5, 6, 8, 11], "command": [5, 6, 8, 11], "displai": [5, 6, 8, 11, 18], "version": [5, 6, 7, 8, 9, 11, 14, 17, 20, 25], "from": [5, 6, 8, 11, 14, 15, 16, 17, 19, 22, 24, 25], "import": [5, 6, 8, 11, 14, 16, 19, 20, 22, 23, 24, 25], "print": [5, 6, 8, 11, 16, 17, 18, 19, 20, 22, 23, 24], "__version__": [5, 6, 8, 11, 22], "jupyt": [5, 6, 8, 11], "show": [5, 6, 8, 11, 17, 19, 21], "how": [5, 6, 8, 11, 14, 16, 18, 21], 
"variou": [5, 6, 8, 11, 16], "ml": [5, 6, 8, 11, 18, 21, 25], "domain": [5, 6, 8, 11, 15, 17], "team": [5, 6, 8, 11, 14, 21, 25], "track": [5, 6, 8, 11], "bug": [5, 6, 8, 11], "enhanc": [5, 6, 8, 11, 17, 20], "request": [5, 6, 8, 11, 16, 19], "befor": [5, 6, 8, 11, 18], "submit": [5, 6, 8, 11], "suggest": [5, 6, 8, 11], "search": [5, 6, 8, 11], "see": [5, 6, 8, 11, 17, 18, 21, 25], "ha": [5, 6, 8, 11, 17, 18, 20, 21], "alreadi": [5, 6, 8, 11, 17, 18, 25], "been": [5, 6, 8, 11, 14, 21], "other": [5, 6, 8, 10, 11, 15, 17, 25, 26], "brand": [5, 6, 8, 10, 11, 15, 26], "claim": [5, 6, 8, 10, 11, 15, 26], "properti": [5, 6, 8, 10, 11, 15, 26], "trademark": [5, 6, 8, 10, 11, 15, 26], "These": [5, 6, 7, 8, 9, 11, 21, 25], "script": [5, 6, 7, 8, 9, 11, 20], "intend": [5, 6, 7, 8, 9, 11, 14, 20, 21], "benchmark": [5, 6, 7, 8, 9, 11], "platform": [5, 6, 7, 8, 9, 11, 25], "For": [5, 6, 7, 8, 9, 11, 14, 16, 18, 21, 25], "ani": [5, 6, 7, 8, 9, 11, 14, 17, 25], "inform": [5, 6, 8, 11, 14, 17, 18, 20, 21, 25], "visit": [5, 6, 7, 8, 9, 11], "http": [5, 6, 7, 8, 9, 11, 14, 16, 17, 20, 21, 22, 25], "www": [5, 6, 7, 8, 9, 11, 18, 25], "blog": [5, 6, 7, 8, 9, 11], "commit": [5, 6, 7, 8, 9, 11, 21], "respect": [5, 6, 7, 8, 9, 11, 25], "human": [5, 6, 7, 8, 9, 11, 18], "right": [5, 6, 7, 8, 9, 11], "avoid": [5, 6, 7, 8, 9, 11], "complic": [5, 6, 7, 8, 9, 11], "abus": [5, 6, 7, 8, 9, 11, 25], "polici": [5, 6, 7, 8, 9, 11], "reflect": [5, 6, 7, 8, 9, 11], "global": [5, 6, 7, 8, 9, 11], "principl": [5, 6, 7, 8, 9, 11], "accordingli": [5, 6, 7, 8, 9, 11], "access": [5, 6, 7, 8, 9, 11, 25], "materi": [5, 6, 7, 8, 9, 11], "agre": [5, 6, 7, 8, 9, 11], "product": [5, 6, 7, 8, 9, 11], "applic": [5, 6, 7, 8, 9, 11, 14, 16, 17, 21], "caus": [5, 6, 7, 8, 9, 11], "violat": [5, 6, 7, 8, 9, 11], "an": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 20, 21, 25], "internation": [5, 6, 7, 8, 9, 11], "recogn": [5, 6, 7, 8, 9, 11, 25], "under": [5, 6, 7, 8, 9, 11, 25], "apach": [5, 6, 7, 8, 9, 11], "2": [5, 6, 7, 8, 9, 11, 14, 16, 19, 21, 22, 25], "0": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], "To": [5, 6, 7, 8, 9, 11, 17, 21, 25], "extent": [5, 6, 7, 8, 9, 11], "referenc": [5, 6, 7, 8, 9, 11], "site": [5, 6, 7, 8, 9, 11, 25], "third": [5, 6, 7, 8, 9, 11], "parti": [5, 6, 7, 8, 9, 11], "indic": [5, 6, 7, 8, 9, 11, 21, 23, 25], "content": [5, 6, 7, 8, 9, 11, 14, 16], "doe": [5, 6, 7, 8, 9, 11, 14, 17, 25], "warrant": [5, 6, 7, 8, 9, 11], "accuraci": [5, 6, 7, 8, 9, 11, 14, 17, 18, 23, 24], "qualiti": [5, 6, 7, 8, 9, 11, 21], "By": [5, 6, 7, 8, 9, 11], "": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 21, 22, 25], "term": [5, 6, 7, 8, 9, 11, 25], "compli": [5, 6, 7, 8, 9, 11], "expressli": [5, 7, 9, 11], "adequaci": [5, 7, 9, 11], "complet": [5, 7, 9, 11], "liabl": [5, 7, 9, 11], "error": [5, 7, 9, 11, 25], "omiss": [5, 7, 9, 11], "defect": [5, 7, 9, 11], "relianc": [5, 7, 9, 11], "thereon": [5, 7, 9, 11], "also": [5, 7, 9, 11, 17, 18, 25], "warranti": [5, 7, 9, 11], "non": [5, 7, 9, 11, 25], "infring": [5, 7, 9, 11], "liabil": [5, 7, 9, 11], "damag": [5, 7, 9, 11], "relat": [5, 7, 9, 11], "get": [6, 8, 16, 19, 23], "explain": [6, 7, 8, 9, 10, 26], "specif": [7, 9, 14, 16, 21], "run": [10, 17, 18, 21, 23, 26], "section": [14, 17], "subsect": 14, "decript": 14, "detail": [14, 25], "overview": [14, 20, 21, 25], "A": [14, 17, 20, 21, 25], "brief": 14, "one": [14, 17, 18, 21], "line": 14, "descript": [14, 21], "thorough": 14, "usag": 14, "owner": [14, 21, 25], "individu": [14, 21], "who": 14, "own": 14, "schema": [14, 17, 21], "licens": 
14, "refer": [14, 18, 21, 25], "more": [14, 17, 18, 21, 25], "about": [14, 16, 18, 21, 25], "citat": [14, 20], "where": [14, 17, 18, 21, 25], "store": [14, 21], "graphic": [14, 20, 21, 24, 25], "paramet": [14, 17, 19, 20, 23], "architectur": 14, "dataset": [14, 16, 19, 20, 24, 25], "train": [14, 16, 17, 18, 19, 21], "evalu": [14, 17, 21, 22, 25], "format": [14, 17, 18, 20, 21, 23, 24], "kei": [14, 17, 18, 21, 25], "valu": [14, 17, 18, 20, 21, 22, 25], "output": [14, 17, 18, 19, 20, 21, 23], "quantit": 14, "being": [14, 18, 25], "colleciton": 14, "consider": [14, 21, 25], "what": [14, 17], "limit": [14, 25], "known": 14, "technic": 14, "kind": [14, 25], "should": [14, 17, 18, 21], "expect": [14, 18], "well": [14, 17, 25], "factor": 14, "might": 14, "degrad": 14, "tradeoff": 14, "ethic": [14, 21, 25], "environment": 14, "involv": 14, "step": [14, 17, 18, 19, 20, 21, 23, 25], "1": [14, 16, 19, 21, 22, 25], "com": [14, 16, 17, 21, 22, 25], "xai": [14, 21, 25], "cd": 14, "modelcardgen": [14, 20, 21, 25], "classmethod": [14, 18], "requir": [14, 17, 18, 21], "three": [14, 17], "return": [14, 17, 18, 19, 20, 21, 23, 24, 25], "data_set": [14, 20, 21, 25], "dict": [14, 21, 24], "dictionari": [14, 17, 21], "defin": [14, 17, 18, 21, 25], "tfrecord": [14, 21, 25], "raw": [14, 16, 18, 21, 25], "datafram": [14, 17, 18, 21], "eval": [14, 17, 18, 19, 21, 23, 25], "tensorflowdataset": [14, 21, 25], "dataset_path": [14, 21], "file": [14, 17, 18, 19, 21], "glob": 14, "pattern": [14, 21], "pytorchdataset": [14, 20], "pytorch_dataset": 14, "feature_nam": [14, 20, 24], "panda": [14, 17, 18, 20, 21, 22, 25], "pd": [14, 17, 18, 20, 21, 22, 25], "darafram": 14, "y_true": [14, 17, 23], "y_pred": [14, 17, 23], "ypred": 14, "model_path": [14, 20, 21, 25], "str": [14, 17, 18, 21], "field": [14, 21], "repres": [14, 21, 25], "savedmodel": [14, 21], "eval_config": [14, 20, 21, 25], "tfma": [14, 21, 25], "evalconfig": [14, 21], "either": [14, 18], "proto": [14, 20, 21, 25], "string": [14, 17, 18, 21, 25], "pars": [14, 21], "exampl": [14, 16, 17, 18, 20, 21, 25], "let": [14, 16, 17, 22], "u": [14, 17, 21], "entitl": 14, "proxi": [14, 21], "found": [14, 17, 19, 21, 25], "notebook": [14, 17, 18, 20, 21], "compas_with_model_card_gen": 14, "tfx": 14, "ipynb": [14, 16], "model_spec": [14, 20, 21, 25], "label_kei": [14, 18, 20, 21], "ground": [14, 21], "truth": [14, 21], "label": [14, 17, 18, 19, 20, 21, 25], "metric_spec": 14, "comput": [14, 16, 18, 21, 25], "binaryaccuraci": [14, 20, 21, 25], "auc": [14, 20, 21], "confusionmatrixplot": [14, 20, 21, 25], "fairnessind": [14, 20, 21, 25], "slicing_spec": [14, 20, 21, 25], "accross": [14, 25], "datapoint": 14, "aggreg": 14, "group": [14, 25], "race": [14, 20, 21, 25], "is_recid": [14, 21], "metrics_spec": [14, 20, 21, 25], "class_nam": [14, 17, 20, 21, 25], "threshold": [14, 20, 21], "25": [14, 18, 20, 21], "5": [14, 17, 19, 21, 24, 25], "75": [14, 20, 21], "overal": [14, 25], "slice": [14, 25], "feature_kei": [14, 20, 21, 25], "option": [14, 20, 21], "include_default_metr": [14, 20, 21], "If": [14, 16, 17, 18], "must": 14, "prediction_kei": [14, 20], "popul": 14, "object": [14, 17, 18, 21, 23], "serial": [14, 21, 25], "deseri": 14, "json": 14, "v": [14, 18, 21], "model_card": [14, 20, 21, 25], "static": 14, "like": [14, 17, 21, 25], "those": [14, 25], "model_detail": [14, 20, 21, 25], "mc": [14, 20, 21, 25], "variabl": [14, 18, 22], "below": [14, 18], "ad": [14, 17, 25], "pre": [14, 18, 21], "long": 14, "coher": 14, "correct": [14, 17, 21], "offend": [14, 21], "profil": [14, 21], 
"altern": [14, 21], "sanction": [14, 21], "approxim": [14, 21, 25], "18": [14, 17, 21], "000": [14, 18, 20, 21], "crimin": [14, 21], "broward": [14, 21], "counti": [14, 21], "florida": [14, 21], "between": [14, 17, 21, 25], "januari": [14, 21], "2013": [14, 17, 21], "decemb": [14, 17, 21], "2014": [14, 21], "11": [14, 21], "uniqu": [14, 21, 24], "defend": [14, 21], "histori": [14, 21, 24], "demograph": [14, 20, 21], "likelihood": [14, 21], "reoffend": [14, 21], "contact": [14, 21, 25], "wadsworth": [14, 21], "vera": [14, 21], "piech": [14, 21], "2017": [14, 21, 25], "achiev": [14, 21], "through": [14, 20, 21, 25], "adversari": [14, 20, 21], "arxiv": [14, 21, 25], "org": [14, 18, 20, 21, 22, 25], "ab": [14, 21, 25], "1807": [14, 21], "00199": [14, 21], "chouldechova": [14, 21], "sell": [14, 21], "fairer": [14, 21], "accur": [14, 21], "whom": [14, 21], "1707": [14, 21], "00046": [14, 21], "berk": [14, 21], "et": [14, 17, 20, 21], "al": [14, 20, 21], "justic": [14, 21], "assess": [14, 21], "state": [14, 16, 20, 21], "art": [14, 16, 21], "1703": [14, 21], "09207": [14, 21], "quantitative_analysi": [14, 21, 25], "schema_vers": [14, 20, 21, 25], "here": [14, 16, 21], "doc": 14, "model_card_exampl": 14, "data_path": [14, 21], "mcg": [14, 20, 21, 25], "_data_path": [14, 21, 25], "_model_path": [14, 21, 25], "_eval_config": [14, 20, 21, 25], "pytest": 14, "custom": [14, 15, 17, 21], "mark": 14, "common": [14, 16, 17, 21, 25], "note": [14, 17, 18, 21, 25], "still": 14, "libarari": 14, "resnet50": [15, 17], "cv": 15, "neural": [15, 20], "network": [15, 20], "heart": 15, "diseas": 15, "numer": [15, 18], "categor": [15, 17], "multimod": 15, "breast": 15, "cancer": 15, "nlp": 15, "huggingfac": [15, 17], "transfer": 15, "nn": [15, 19, 20, 23], "newsgroup": 15, "fine": 15, "tune": 15, "classifi": [15, 25], "estim": [15, 22, 25], "toxic": 15, "goal": 16, "explor": 16, "now": [16, 17, 18], "support": 16, "pt_cam": [16, 17], "torch": [16, 17, 18, 19, 20, 23], "numpi": [16, 17, 18, 19, 20, 23, 24, 25], "np": [16, 17, 18, 19, 20, 23, 24, 25], "resnet50_weight": 16, "matplotlib": [16, 17, 19], "pyplot": [16, 17, 19], "plt": [16, 17, 19], "arrai": [16, 17, 20, 23, 24], "rgb": 16, "order": [16, 24], "pil": 16, "io": [16, 17, 21, 25], "bytesio": 16, "respons": 16, "githubusercont": 16, "jacobgil": 16, "grad": [16, 19], "master": 16, "png": 16, "open": [16, 25], "imshow": [16, 17, 19], "save": [16, 18, 19, 21], "imagenet1k_v2": 16, "our": [16, 17, 18, 20, 21, 22, 25], "target": [16, 20, 21, 22, 23, 24, 25], "layer": [16, 17, 21, 22, 24], "normal": [16, 17, 19], "last": [16, 18, 25], "convolut": 16, "simpli": 16, "give": 16, "some": [16, 17, 18, 21, 25], "idea": [16, 17], "choic": 16, "fasterrcnn": 16, "backbon": 16, "resnet18": 16, "50": [16, 17, 20, 23, 25], "layer4": [16, 17], "vgg": 16, "densenet161": 16, "target_lay": 16, "specifi": [16, 17, 18, 21], "integ": [16, 18], "index": [16, 17, 21, 23], "rang": [16, 17, 18, 19, 20, 23, 25], "num_of_class": 16, "base": [16, 17, 18, 21], "tabbi": 16, "cat": [16, 19, 23], "281": 16, "targetclass": 16, "none": [16, 17, 18, 19, 20, 21, 25], "highest": 16, "categori": [16, 24], "target_class": 16, "image_dim": 16, "224": [16, 17], "xgc": [16, 17], "x_gradcam": [16, 17], "cpu": [16, 17, 18, 19, 23], "project": 16, "tf_cam": 16, "inlin": [16, 19], "tf": [16, 17, 19, 21, 22, 25], "urllib": [16, 19], "urlopen": [16, 19], "kera": [16, 21, 22, 24, 25], "get_lay": 16, "conv5_block3_out": 16, "tfgc": 16, "tf_gradcam": 16, "ismailuddin": 16, "gradcam": [16, 17], "blob": 16, "solut": 
17, "diagnosi": 17, "contrast": 17, "mammographi": 17, "radiologi": 17, "It": [17, 18, 21], "latest": 17, "v0": 17, "7": [17, 18, 21], "direct": [17, 21], "instal": [17, 18], "cach": [17, 18, 21], "dir": [17, 21], "nltk": 17, "docx2txt": 17, "openpyxl": 17, "xmlfile": 17, "transform": [17, 19, 20, 21, 22, 23, 24], "evalpredict": 17, "trainingargu": [17, 18], "pipelin": 17, "tlt": [17, 18], "dataset_factori": 17, "model_factori": 17, "plotli": 17, "express": 17, "px": 17, "subplot": 17, "make_subplot": 17, "graph_object": 17, "go": [17, 25], "shap": [17, 18, 22], "warn": [17, 18, 22, 24], "filterwarn": [17, 18, 22, 24], "ignor": [17, 18, 22, 24, 25], "root": [17, 19], "annot": [17, 25], "locat": [17, 18, 21], "dataset_dir": [17, 18], "join": [17, 18, 19, 21], "environ": [17, 18, 24], "els": [17, 18, 19, 21], "home": [17, 18, 21], "output_dir": [17, 18], "download": [17, 18, 19, 22, 23], "wiki": 17, "cancerimagingarch": 17, "net": [17, 19, 20, 23], "page": [17, 18, 25], "viewpag": 17, "action": 17, "pageid": 17, "109379611": 17, "brca": 17, "prepare_nlp_data": 17, "py": [17, 21], "data_root": 17, "prepare_vision_data": 17, "jpg": 17, "arrang": 17, "subfold": 17, "each": 17, "csv": [17, 18, 21, 22], "final": 17, "look": [17, 22], "someth": 17, "pkg": 17, "medic": 17, "zip": [17, 18, 21, 24], "manual": 17, "xlsx": 17, "radiology_hand_drawn_segmentations_v2": 17, "vision_imag": 17, "benign": 17, "p100_l_cm_cc": 17, "p100_l_cm_mlo": 17, "malign": 17, "p102_r_cm_cc": 17, "p102_r_cm_mlo": 17, "p100_r_cm_cc": 17, "p100_r_cm_mlo": 17, "input": [17, 18, 19, 21, 24, 25], "suppli": 17, "accord": 17, "source_image_path": 17, "image_path": 17, "source_annotation_path": 17, "annotation_path": 17, "workload": 17, "assign": [17, 25], "subject": 17, "record": [17, 21], "entir": 17, "set": [17, 18, 19, 20, 21, 23, 25], "test": [17, 18, 24], "random": 17, "stratif": 17, "copi": [17, 20, 21], "data_util": 17, "split_imag": 17, "split_annot": 17, "grouped_image_path": 17, "_group": 17, "isdir": 17, "exist": [17, 18, 19, 25], "train_image_path": 17, "test_image_path": 17, "file_dir": 17, "file_nam": 17, "split": [17, 18, 21, 24, 25], "grouped_annotation_path": 17, "splitext": 17, "isfil": [17, 19], "train_dataset": [17, 18, 20, 25], "test_dataset": 17, "to_csv": [17, 21], "4": [17, 19, 21], "_test": 17, "train_annotation_path": 17, "test_annotation_path": 17, "label_col": 17, "column": [17, 18, 21], "call": [17, 21], "factori": 17, "pretrain": [17, 18], "hub": [17, 25], "load": [17, 18, 19, 21], "get_model": 17, "function": [17, 18, 19, 20, 21, 23], "later": [17, 18], "default": [17, 18], "viz_model": 17, "model_nam": [17, 18], "train_viz_dataset": 17, "load_dataset": [17, 18], "use_cas": 17, "image_classif": 17, "test_viz_dataset": 17, "onc": 17, "cell": [17, 18], "preprocess": [17, 18], "subset": [17, 18, 21, 24, 25], "resiz": 17, "them": [17, 21, 25], "match": [17, 21, 23], "batch": [17, 18, 19, 23, 25], "batch_siz": [17, 18, 19, 21, 22, 23, 24], "16": [17, 19], "shuffl": [17, 18, 19, 21, 23], "shuffle_split": 17, "train_pct": 17, "80": [17, 18], "val_pct": 17, "seed": [17, 20], "image_s": 17, "take": [17, 18, 21], "verifi": [17, 18], "correctli": 17, "distribut": [17, 21, 25], "amongst": 17, "confirm": 17, "themselv": 17, "revers": 17, "def": [17, 18, 19, 20, 21, 23, 24, 25], "label_map_func": 17, "elif": 17, "reverse_label_map": 17, "train_label_count": 17, "x": [17, 18, 19, 20, 21, 22, 23, 24, 25], "y": [17, 18, 21, 22, 25], "train_subset": 17, "valid_label_count": 17, "validation_subset": 17, 
"test_label_count": 17, "datsaet": 17, "distrubt": 17, "form": [17, 21, 25], "type": [17, 18, 20, 21, 25], "fig": 17, "row": [17, 21], "col": 17, "spec": [17, 21], "subplot_titl": 17, "add_trac": 17, "pie": 17, "update_layout": 17, "height": 17, "600": 17, "width": 17, "800": 17, "title_text": 17, "get_exampl": 17, "n": [17, 23, 25], "6": [17, 19, 21, 25], "loader": 17, "util": [17, 18, 19, 20, 22, 23, 25], "dataload": [17, 18, 19, 23], "example_imag": 17, "enumer": [17, 19, 23], "label_nam": [17, 18], "int": [17, 18], "len": [17, 18, 20, 23, 24], "append": [17, 21], "break": 17, "plot": [17, 23], "figur": 17, "figsiz": 17, "12": 17, "suptitl": 17, "tensor": [17, 18, 21, 25], "size": [17, 18, 21], "train_example_imag": 17, "idx": [17, 20], "img": [17, 19], "add_subplot": 17, "axi": [17, 18, 20, 21, 23, 24], "off": [17, 24], "tight_layout": 17, "ylabel": 17, "fontsiz": 17, "tick_param": 17, "bottom": 17, "labelbottom": 17, "left": 17, "labelleft": 17, "movedim": 17, "detach": [17, 18, 19], "astyp": 17, "uint8": 17, "valid_example_imag": 17, "vector": [17, 18, 25], "dens": [17, 21, 22, 24], "number": [17, 18, 21], "compil": [17, 21, 22], "epoch": [17, 18, 19, 20, 22, 23, 24], "argument": [17, 18], "extra_lay": 17, "insert": 17, "addit": [17, 25], "1024": 17, "512": [17, 18, 25], "first": [17, 18, 19, 21], "neuron": 17, "second": [17, 18, 20, 21], "viz_histori": 17, "ipex_optim": [17, 18], "validation_viz_metr": 17, "test_viz_metr": 17, "saved_model_dir": 17, "export": [17, 21], "analyz": 17, "confus": [17, 21], "matrix": 17, "roc": 17, "pr": 17, "curv": 17, "identifi": [17, 21, 25], "exibit": 17, "bia": [17, 25], "scipi": [17, 18], "special": [17, 18], "softmax": [17, 18, 19, 20, 23, 24], "logit": [17, 18], "convert": [17, 20, 21], "probabl": [17, 19, 22, 23, 24], "_model": 17, "viz_cm": 17, "confusion_matrix": [17, 23, 24], "plotter": [17, 23, 24], "pr_curv": [17, 23, 24], "roc_curv": [17, 23, 24], "hot": 17, "encod": [17, 18], "y_pred_label": 17, "argmax": [17, 18, 23, 24], "mal_idx": 17, "tolist": [17, 18, 20], "nor_pr": 17, "ben_pr": 17, "mal": 17, "were": [17, 18, 21, 25], "misclassifi": 17, "ben": 17, "mal_classified_as_nor": 17, "intersect": [17, 23], "mal_classified_as_ben": 17, "nor": 17, "mal_as_nor_imag": 17, "mal_as_ben_imag": 17, "skimag": 17, "14": [17, 21], "mal_as_nor": 17, "calcul": 17, "0th": 17, "1st": 17, "10th": 17, "sinc": [17, 18, 21], "thei": [17, 21, 25], "seem": 17, "tnhe": 17, "clearest": 17, "tumor": 17, "final_image_dim": 17, "targetlay": 17, "mal_as_ben": 17, "5th": 17, "11th": 17, "clinic": 17, "bert": [17, 18], "part": [17, 25], "up": [17, 18, 23, 25], "seq_length": 17, "64": [17, 24], "quantization_criterion": 17, "05": 17, "quantization_max_tri": 17, "nlp_model": 17, "train_file_dir": 17, "train_file_nam": 17, "train_nlp_dataset": 17, "text_classif": 17, "dataset_nam": [17, 18], "csv_file_nam": 17, "header": 17, "true": [17, 18, 19, 20, 22, 23, 24], "shuffle_fil": 17, "exclude_col": 17, "test_file_dir": 17, "test_file_nam": 17, "test_nlp_dataset": 17, "hub_nam": 17, "max_length": [17, 18], "67": 17, "33": [17, 20, 21], "across": [17, 25], "sure": 17, "similarli": [17, 18], "punkt": 17, "get_mc_df": 17, "words_list": 17, "ignored_word": 17, "most": [17, 18, 21], "frequency_dict": 17, "freqdist": 17, "most_common": 17, "500": [17, 20, 25], "final_fd": 17, "frequenc": 17, "cnt": 17, "punctuat": 17, "loc": 17, "df": [17, 20, 22], "read_csv": [17, 21, 22], "symptom": 17, "mal_text": 17, "nor_text": 17, "ben_text": 17, "mal_token": 17, "word_token": 17, 
"nor_token": 17, "ben_token": 17, "necesarri": 17, "mal_fd": 17, "nor_fd": 17, "ben_fd": 17, "bar": [17, 18], "color": 17, "titl": [17, 18, 19, 25], "updat": [17, 18], "layout_coloraxis_showscal": 17, "trainer": [17, 21], "desir": 17, "nativ": [17, 20], "loop": [17, 18, 19], "invok": 17, "use_train": 17, "set_se": 17, "nlp_histori": 17, "isn": 17, "t": [17, 18, 21, 25], "train_nlp_metr": 17, "test_nlp_metr": 17, "much": [17, 21, 25], "better": 17, "than": [17, 18, 20, 21], "nonetheless": 17, "similar": 17, "mistak": [17, 21], "flag": 17, "logit_predict": 17, "return_raw": 17, "nlp_cm": 17, "mal_classified_as_ben_text": 17, "get_text": 17, "input_id": 17, "encoded_input": [17, 18], "_token": 17, "pad": [17, 18], "return_tensor": [17, 18], "pt": [17, 18, 19, 20], "partition_explain": [17, 18, 24], "partition_text_explain": [17, 18, 24], "r": [17, 18, 24], "w": [17, 18, 24], "faster": 17, "infer": 17, "want": [17, 18, 21], "intel_extension_for_transform": 17, "nlptrainer": 17, "optimizedmodel": 17, "quantizationconfig": 17, "nlptk_metric": 17, "tune_metr": 17, "eval_accuraci": 17, "greater_is_bett": 17, "is_rel": 17, "criterion": [17, 19, 20], "weight_ratio": 17, "quantization_config": 17, "approach": 17, "posttrainingdynam": 17, "max_trial": 17, "compute_metr": [17, 18], "p": [17, 21], "pred": [17, 23, 24], "isinst": [17, 18, 21], "tupl": [17, 18, 21], "label_id": 17, "float32": [17, 25], "mean": 17, "item": [17, 18, 19, 20, 21, 23], "eval_dataset": [17, 18], "quantized_model": 17, "quant_config": 17, "result": [17, 18, 22], "eval_acc": 17, "5f": 17, "save_model": 17, "quantized_bert": 17, "save_pretrain": [17, 18], "same": [17, 18], "stock": 17, "counterpart": [17, 21], "howev": 17, "differ": [17, 18], "quant_cm": 17, "khale": 17, "helal": 17, "alfarghali": 17, "mokhtar": 17, "elkorani": 17, "el": 17, "kassa": 17, "h": 17, "fahmi": 17, "digit": 17, "databas": [17, 18], "low": [17, 21], "energi": 17, "subtract": 17, "spectral": 17, "2021": [17, 25], "archiv": [17, 18, 25], "doi": [17, 20, 25], "7937": 17, "29kw": 17, "ae92": 17, "diagnost": 17, "artifici": 17, "intellig": 17, "research": [17, 21, 25], "2022": [17, 20], "scientif": 17, "volum": [17, 25], "1038": 17, "s41597": 17, "022": 17, "01238": 17, "clark": 17, "k": [17, 18], "vendt": 17, "smith": 17, "freymann": 17, "j": [17, 19], "kirbi": 17, "koppel": 17, "moor": 17, "phillip": 17, "maffitt": 17, "pringl": 17, "tarbox": 17, "l": [17, 18, 25], "prior": 17, "maintain": 17, "oper": [17, 21], "journal": [17, 25], "26": 17, "pp": 17, "1045": 17, "1057": 17, "1007": 17, "s10278": 17, "013": 17, "9622": 17, "demonstr": [18, 21], "catalog": [18, 25], "extend": [18, 25], "optim": [18, 19, 20, 21, 22, 23, 25], "boost": 18, "pleas": [18, 19], "pytorch_requir": 18, "txt": 18, "execut": 18, "assum": [18, 21], "readm": 18, "md": 18, "intel_extension_for_pytorch": 18, "ipex": 18, "log": [18, 21, 23], "sy": [18, 24], "pickl": 18, "tqdm": 18, "auto": [18, 24], "adamw": 18, "classlabel": 18, "load_metr": 18, "datasets_log": 18, "transformers_log": 18, "automodelforsequenceclassif": 18, "autotoken": 18, "get_schedul": 18, "file_util": 18, "download_and_extract_zip_fil": 18, "stream": 18, "stdout": 18, "handler": 18, "_get_library_root_logg": 18, "setstream": 18, "sh": 18, "streamhandl": 18, "set_verbosity_error": 18, "transformers_no_advisory_warn": 18, "albert": 18, "v2": 18, "uncas": 18, "distilbert": 18, "finetun": 18, "sst": 18, "english": [18, 25], "roberta": 18, "anoth": [18, 21], "local": [18, 21], "end": 18, "package_refer": 18, "declar": 18, 
"from_pretrain": 18, "textclassificationdata": 18, "along": 18, "helper": 18, "__init__": [18, 19, 20, 23], "self": [18, 19, 20, 23], "sentence1_kei": 18, "sentence2_kei": 18, "class_label": 18, "train_d": 18, "eval_d": 18, "tokenize_funct": 18, "arg": [18, 21], "sentenc": 18, "truncat": 18, "tokenize_dataset": 18, "appli": [18, 21], "tokenized_dataset": 18, "remov": 18, "raw_text_column": 18, "remove_column": 18, "define_train_eval_split": 18, "train_split_nam": 18, "eval_split_nam": 18, "train_siz": 18, "eval_s": 18, "select": 18, "get_label_nam": 18, "rais": 18, "valueerror": 18, "display_sampl": 18, "split_nam": 18, "sample_s": 18, "sampl": [18, 20, 24], "sentence1_sampl": 18, "sentence2_sampl": 18, "label_sampl": 18, "dataset_sampl": 18, "style": 18, "hide_index": 18, "onlin": [18, 25], "avail": [18, 25], "next": [18, 19], "movi": 18, "multipl": [18, 19], "time": [18, 19], "speed": 18, "unsupervis": 18, "so": [18, 21, 25], "hfdstextclassificationdata": 18, "initi": [18, 25], "param": 18, "when": [18, 25], "quicker": 18, "debug": 18, "sentence1": 18, "sentence2": 18, "init": 18, "cache_dir": 18, "train_dataset_s": 18, "1000": [18, 21, 25], "eval_dataset_s": 18, "vari": 18, "skip": 18, "continu": 18, "singl": [18, 21], "tab": 18, "separ": 18, "ham": 18, "messag": 18, "tsv": 18, "pass": 18, "delimit": 18, "etc": [18, 25], "customcsvtextclassificationdata": 18, "data_fil": 18, "train_perc": 18, "8": [18, 20, 23, 25], "eval_perc": 18, "map_funct": 18, "intial": 18, "percentag": 18, "reduc": [18, 25], "identif": 18, "purpos": 18, "decim": 18, "convers": [18, 25], "combin": 18, "cannot": 18, "greater": [18, 20], "column_nam": 18, "num_class": [18, 20], "train_test_split": [18, 21, 22], "test_siz": [18, 21, 22], "modifi": 18, "csv_path": 18, "point": [18, 19], "dataset_url": [18, 25], "ic": 18, "uci": [18, 20], "edu": 18, "00228": 18, "smsspamcollect": 18, "csv_name": 18, "renam": [18, 21], "know": 18, "renamed_csv": 18, "don": 18, "extract": 18, "translat": 18, "map_spam": 18, "constructor": 18, "appropri": 18, "textclassificationmodel": 18, "num_label": [18, 20], "training_arg": 18, "bool": 18, "devic": [18, 23], "given": [18, 21], "otherwis": [18, 21, 25], "lr_schedul": 18, "lambdalr": 18, "num_train_epoch": 18, "callabl": 18, "shuffle_sampl": 18, "becaus": [18, 21, 25], "rename_column": 18, "set_format": 18, "train_dataload": 18, "unpack": 18, "progress": 18, "num_training_step": 18, "progress_bar": 18, "loss": [18, 19, 20, 21, 22, 23, 25], "backward": [18, 19, 20, 23], "zero_grad": [18, 19, 20, 23], "eval_dataload": 18, "no_grad": [18, 23], "dim": [18, 20, 23], "add_batch": 18, "raw_input_text": 18, "_": [18, 19], "max": [18, 19, 23, 24], "prediction_label": 18, "int2str": 18, "result_list": 18, "raw_text_input": 18, "result_df": 18, "cl": [18, 25], "simplic": 18, "checkpoint": 18, "previou": [18, 25], "resum": 18, "overwrite_output_dir": 18, "overwrit": 18, "previous": 18, "head": [18, 25], "origin": [18, 19, 21, 25], "replac": [18, 21], "learning_r": [18, 21, 25], "5e": 18, "lr": [18, 19, 20, 22, 23], "linear": [18, 19, 20, 23], "num_warmup_step": 18, "eval_pr": 18, "evalut": 18, "saw": 18, "after": [18, 21], "reloaded_model": 18, "okai": 18, "finish": [18, 19], "wouldn": [18, 21], "watch": 18, "again": 18, "bad": 18, "definit": 18, "my": 18, "favorit": 18, "highli": 18, "recommend": 18, "text_for_shap": 18, "inproceed": [18, 25], "maa": 18, "etal": [18, 25], "2011": 18, "acl": 18, "hlt2011": 18, "author": [18, 21, 25], "andrew": 18, "dali": 18, "raymond": 18, "pham": 18, "peter": 18, 
"huang": 18, "dan": 18, "ng": 18, "pott": 18, "christoph": 18, "sentiment": 18, "booktitl": [18, 25], "proceed": [18, 20, 25], "49th": 18, "annual": 18, "meet": 18, "linguist": [18, 25], "languag": [18, 25], "technologi": [18, 25], "month": [18, 25], "june": 18, "year": [18, 25], "address": [18, 25], "portland": 18, "oregon": 18, "usa": 18, "publish": [18, 21, 25], "142": 18, "150": [18, 20], "url": [18, 25], "aclweb": 18, "anthologi": 18, "p11": 18, "1015": 18, "misc": [18, 24, 25], "misc_sms_spam_collection_228": 18, "almeida": 18, "tiago": 18, "2012": 18, "howpublish": 18, "totensor": [19, 23], "trainset": 19, "cifar10": 19, "trainload": 19, "num_work": 19, "testset": 19, "testload": 19, "plane": 19, "car": 19, "bird": 19, "deer": 19, "dog": 19, "frog": 19, "hors": 19, "ship": 19, "truck": 19, "super": [19, 20, 23], "conv1": 19, "conv2d": [19, 23], "pool1": 19, "maxpool2d": [19, 23], "pool2": 19, "conv2": 19, "fc1": 19, "120": 19, "fc2": 19, "84": 19, "fc3": 19, "relu1": 19, "relu": [19, 20, 21, 22, 23, 24], "relu2": 19, "relu3": 19, "relu4": 19, "forward": [19, 20, 23], "view": [19, 23, 25], "crossentropyloss": [19, 20], "sgd": [19, 23], "001": [19, 20], "momentum": [19, 23], "use_pretrained_model": 19, "cifar_torchvis": 19, "load_state_dict": 19, "over": [19, 21, 25], "running_loss": 19, "zero": 19, "statist": [19, 21], "2000": 19, "1999": 19, "everi": 19, "mini": 19, "5d": 19, "3f": [19, 24], "state_dict": 19, "transpos": 19, "unnorm": 19, "npimg": 19, "datait": 19, "iter": 19, "make_grid": 19, "groundtruth": 19, "ind": 19, "unsqueez": 19, "requires_grad": 19, "pt_attribut": 19, "captum": 19, "attr": 19, "viz": 19, "handel": 19, "original_imag": 19, "visualize_image_attr": 19, "entri": 19, "salienc": 19, "integratedgradi": 19, "integr": 19, "deeplift": 19, "deep": 19, "lift": 19, "smoothgrad": 19, "smooth": 19, "featureabl": 19, "ablat": 19, "prerpocess": 20, "multilay": 20, "sklearn": [20, 21, 22, 24], "fetch_openml": 20, "categorical_feature_kei": [20, 21], "workclass": 20, "marit": 20, "statu": 20, "occup": 20, "relationship": 20, "sex": [20, 21], "countri": 20, "numeric_feature_kei": 20, "ag": [20, 21, 22], "capit": 20, "hour": 20, "per": 20, "week": 20, "educ": 20, "num": 20, "drop_column": 20, "fnlwgt": 20, "data_id": 20, "1590": 20, "as_fram": 20, "raw_data": 20, "adult_data": 20, "get_dummi": 20, "50k": 20, "to_numpi": 20, "adultdataset": 20, "face": 20, "landmark": 20, "make_input_tensor": 20, "make_label_tensor": 20, "__len__": 20, "adult_df": 20, "from_numpi": 20, "floattensor": 20, "label_arrai": 20, "__getitem__": 20, "is_tensor": 20, "adult_dataset": 20, "adultnn": 20, "num_featur": 20, "lin1": 20, "lin2": 20, "lin3": 20, "lin4": 20, "lin5": 20, "lin6": 20, "lin10": 20, "prelu": 20, "dropout": [20, 23], "xin": 20, "manual_se": [20, 23], "reproduc": 20, "feature_s": 20, "linear1": 20, "sigmoid1": 20, "sigmoid": [20, 22], "linear2": 20, "sigmoid2": 20, "linear3": 20, "lin1_out": 20, "sigmoid_out1": 20, "sigmoid_out2": 20, "num_epoch": [20, 23], "adam": [20, 21, 22, 24], "input_tensor": 20, "label_tensor": 20, "2f": 20, "offlin": 20, "jit": 20, "adult_model": 20, "writefil": [20, 21, 25], "confusionmatrixatthreshold": 20, "sex_femal": 20, "sex_mal": 20, "date": 20, "08": 20, "01": [20, 23, 25], "simoudi": 20, "evangelo": 20, "jiawei": 20, "han": 20, "usama": 20, "fayyad": 20, "intern": [20, 25], "confer": [20, 25], "knowledg": 20, "discoveri": 20, "mine": 20, "No": 20, "conf": 20, "960830": 20, "aaai": 20, "press": 20, "menlo": 20, "park": 20, "ca": 20, "unit": 20, "1996": 
20, "friedler": 20, "sorel": 20, "compar": [20, 21], "studi": [20, 21], "intervent": 20, "account": 20, "transpar": 20, "2019": [20, 25], "1145": 20, "3287560": 20, "3287589": 20, "lahoti": 20, "preethi": 20, "without": [20, 25], "reweight": 20, "advanc": 20, "process": [20, 21], "2020": [20, 25], "728": 20, "740": 20, "task": [20, 25], "whether": [20, 21], "person": 20, "salari": 20, "less": 20, "export_html": [20, 21], "census_mc": 20, "eval_input_reciever_fn": 21, "userdefin": 21, "seral": 21, "dep": 21, "docker": 21, "tuner": 21, "kubernet": 21, "29": 21, "metadata": [21, 25], "portpick": 21, "mkdir": 21, "tempfil": [21, 25], "model_select": [21, 22], "genor": 21, "literature1": 21, "techniqu": 21, "remedi": 21, "around": 21, "___": 21, "setup": 21, "filepath": 21, "_data_root": 21, "mkdtemp": 21, "prefix": 21, "storag": [21, 22, 25], "googleapi": [21, 22, 25], "compas_dataset": 21, "cox": 21, "violent": 21, "_data_filepath": 21, "_compas_df": 21, "simplii": 21, "_column_nam": 21, "c_charge_desc": 21, "c_charge_degre": 21, "c_days_from_compa": 21, "juv_fel_count": 21, "juv_misd_count": 21, "juv_other_count": 21, "priors_count": 21, "r_days_from_arrest": 21, "vr_charge_desc": 21, "score_text": 21, "predction": 21, "_ground_truth": 21, "_compas_scor": 21, "labl": 21, "boolean": 21, "crime": 21, "drop": 21, "dropna": 21, "high": 21, "medium": [21, 25], "ground_truth": 21, "compas_scor": 21, "focus": 21, "african": 21, "american": 21, "caucasian": 21, "isin": 21, "x_train": [21, 22, 24], "x_test": [21, 22, 23, 24], "random_st": [21, 22], "42": [21, 22], "back": 21, "na_rep": 21, "opt": 21, "artifact": 21, "_transformer_path": 21, "tensorflow_transform": 21, "tft": 21, "int_feature_kei": 21, "within": 21, "max_categorical_feature_valu": 21, "513": 21, "transformed_nam": 21, "_xf": 21, "preprocessing_fn": 21, "callback": 21, "compute_and_apply_vocabulari": 21, "_fill_in_miss": 21, "vocab_filenam": 21, "scale_to_z_scor": 21, "charg": 21, "tensor_valu": 21, "miss": 21, "sparsetensor": 21, "fill": 21, "rank": 21, "Its": 21, "shape": [21, 24, 25], "dimens": 21, "spars": 21, "default_valu": 21, "dtype": [21, 25], "sparse_tensor": 21, "dense_shap": 21, "dense_tensor": 21, "to_dens": 21, "squeez": 21, "_trainer_path": 21, "tensorflow_model_analysi": [21, 25], "tf_metadata": 21, "schema_util": 21, "_batch_siz": 21, "_learning_r": 21, "00001": 21, "_max_checkpoint": 21, "_save_checkpoint_step": 21, "999": 21, "_gzip_reader_fn": 21, "filenam": [21, 25], "reader": 21, "read": 21, "gzip": 21, "ed": 21, "nest": 21, "structur": 21, "typespec": 21, "element": 21, "tfrecorddataset": [21, 25], "compression_typ": 21, "consid": 21, "_get_raw_feature_spec": 21, "whose": 21, "fixedlenfeatur": [21, 25], "varlenfeatur": [21, 25], "sparsefeatur": 21, "schema_as_feature_spec": 21, "feature_spec": 21, "_example_serving_receiver_fn": 21, "tf_transform_output": 21, "serv": 21, "tftransformoutput": 21, "graph": 21, "raw_feature_spec": 21, "pop": [21, 22], "raw_input_fn": 21, "build_parsing_serving_input_receiver_fn": 21, "serving_input_receiv": 21, "transformed_featur": 21, "transform_raw_featur": 21, "servinginputreceiv": 21, "receiver_tensor": [21, 25], "_eval_input_receiver_fn": 21, "everyth": 21, "evalinputreceiv": [21, 25], "untransform": 21, "notic": 21, "serialized_tf_exampl": [21, 25], "compat": [21, 25], "v1": [21, 25], "placehold": [21, 25], "input_example_tensor": 21, "parse_exampl": [21, 25], "_input_fn": 21, "200": 21, "input_fn": [21, 25], "transformed_feature_spec": 21, "experiment": 21, 
"make_batched_features_dataset": 21, "make_one_shot_iter": 21, "get_next": 21, "re": [21, 24], "_keras_model_build": 21, "feature_column": [21, 25], "feature_layer_input": 21, "numeric_column": 21, "num_bucket": 21, "indicator_column": 21, "categorical_column_with_ident": 21, "int32": [21, 25], "feature_columns_input": 21, "densefeatur": 21, "feature_layer_output": 21, "dense_lay": 21, "dense_1": 21, "dense_2": 21, "meanabsoluteerror": 21, "trainer_fn": 21, "hparam": 21, "level": 21, "hyperparamet": 21, "pair": 21, "hold": 21, "train_spec": 21, "eval_spec": 21, "eval_input_receiver_fn": [21, 25], "transform_output": 21, "train_input_fn": [21, 25], "lambda": 21, "train_fil": 21, "eval_input_fn": 21, "eval_fil": 21, "trainspec": 21, "max_step": 21, "train_step": 21, "serving_receiver_fn": 21, "finalexport": 21, "evalspec": 21, "eval_step": 21, "run_config": 21, "runconfig": 21, "save_checkpoints_step": 21, "keep_checkpoint_max": 21, "model_dir": 21, "serving_model_dir": 21, "model_to_estim": 21, "keras_model": 21, "receiv": 21, "receiver_fn": 21, "_pipelie_path": 21, "absl": 21, "csvexamplegen": 21, "pusher": 21, "schemagen": 21, "statisticsgen": 21, "executor": 21, "dsl": 21, "executor_spec": 21, "orchestr": 21, "pusher_pb2": 21, "trainer_pb2": 21, "example_gen_pb2": 21, "local_dag_runn": 21, "localdagrunn": 21, "_pipeline_nam": 21, "_compas_root": 21, "inject": 21, "logic": 21, "successfulli": 21, "_transformer_fil": 21, "_trainer_fil": 21, "listen": 21, "server": 21, "_serving_model_dir": 21, "serving_model": 21, "chicago": 21, "taxi": 21, "rel": 21, "anywher": 21, "filesystem": 21, "_tfx_root": 21, "_pipeline_root": 21, "sqlite": 21, "db": 21, "_metadata_path": 21, "create_pipelin": 21, "pipeline_nam": 21, "pipeline_root": 21, "preprocessing_module_fil": 21, "trainer_module_fil": 21, "train_arg": 21, "trainarg": 21, "eval_arg": 21, "evalarg": 21, "metadata_path": 21, "schema_path": 21, "compass": 21, "bring": 21, "example_gen": 21, "input_bas": 21, "input_config": 21, "statistics_gen": 21, "schema_gen": 21, "importschemagen": 21, "schema_fil": 21, "module_fil": 21, "abspath": 21, "trainer_arg": 21, "transformed_exampl": 21, "custom_executor_spec": 21, "executorclassspec": 21, "transform_graph": 21, "candid": 21, "baselin": 21, "modelspec": 21, "slicingspec": 21, "metricsspec": 21, "metricconfig": 21, "metadata_connection_config": 21, "sqlite_metadata_connection_config": 21, "__name__": 21, "__main__": 21, "set_verbos": 21, "info": 21, "num_step": 21, "10000": 21, "5000": 21, "ml_metadata": 21, "metadata_stor": 21, "metadata_store_pb2": 21, "connection_config": 21, "connectionconfig": 21, "filename_uri": 21, "connection_mod": 21, "readwrite_opencr": 21, "metadatastor": 21, "get_artifacts_by_typ": 21, "uri": 21, "modelevalu": 21, "gz": 21, "_project_path": 21, "judg": 21, "parol": 21, "offic": 21, "determin": 21, "bail": 21, "grant": 21, "2016": 21, "articl": [21, 25], "propublica": 21, "incorrectli": 21, "would": 21, "higher": [21, 25], "rate": [21, 25], "white": 21, "made": [21, 25], "opposit": 21, "incorrect": 21, "went": 21, "bias": [21, 25], "due": 21, "uneven": 21, "disproportion": 21, "appear": [21, 25], "frequent": [21, 25], "literatur": 21, "concern": 21, "develop": [21, 25], "trial": 21, "detent": 21, "partnership": 21, "algorithm": 21, "multi": [21, 25], "stakehold": 21, "organ": 21, "googl": [21, 25], "member": 21, "guidelin": 21, "compas_plotli": 21, "standardscal": 22, "file_url": 22, "prepar": 22, "list_numer": 22, "thalach": 22, "trestbp": 22, "chol": 22, "oldpeak": 22, 
"y_train": [22, 24], "y_test": [22, 24], "scaler": 22, "fit": [22, 24], "sequenti": [22, 23, 24], "binary_crossentropi": 22, "15": [22, 24], "13": 22, "validation_data": [22, 24], "plot_model": 22, "show_shap": 22, "rankdir": 22, "particular": 22, "patient": 22, "had": 22, "1f": 22, "percent": 22, "100": [22, 23, 25], "ke": 22, "kernel_explain": 22, "iloc": 22, "101": 22, "128": [23, 24], "conv_lay": 23, "kernel_s": 23, "fc_layer": 23, "320": 23, "train_load": 23, "batch_idx": 23, "nll_loss": 23, "0f": 23, "tloss": 23, "6f": 23, "mnist_data": 23, "test_load": 23, "test_loss": 23, "empti": 23, "28": 23, "sum": [23, 25], "keepdim": 23, "eq": 23, "view_a": 23, "ntest": 23, "averag": 23, "4f": 23, "cm": [23, 24], "pred_idx": 23, "gt_idx": 23, "deviz": 23, "deep_explain": 23, "instati": 23, "grviz": 23, "gradient_explain": 23, "kmp_warn": 24, "all_categori": 24, "alt": 24, "atheism": 24, "comp": 24, "ibm": 24, "pc": 24, "mac": 24, "forsal": 24, "rec": [24, 25], "motorcycl": 24, "sport": 24, "basebal": 24, "hockei": 24, "sci": 24, "crypt": 24, "electron": 24, "med": 24, "space": 24, "soc": 24, "religion": [24, 25], "christian": 24, "talk": 24, "polit": 24, "gun": 24, "mideast": 24, "selected_categori": 24, "x_train_text": 24, "fetch_20newsgroup": 24, "return_x_i": 24, "x_test_text": 24, "feature_extract": 24, "countvector": 24, "tfidfvector": 24, "max_featur": 24, "50000": 24, "concaten": 24, "toarrai": 24, "create_model": 24, "summari": 24, "sparse_categorical_crossentropi": 24, "256": 24, "accuracy_scor": 24, "train_pr": 24, "test_pr": 24, "x_batch_text": 24, "x_batch": 24, "preds_proba": 24, "actual": 24, "make_predict": 24, "shap_valu": 24, "max_displai": 24, "explan": 24, "argsort": 24, "flip": 24, "waterfall_plot": 24, "lower": 24, "initj": 24, "force_plot": 24, "base_valu": 24, "out_nam": 24, "adapt": 25, "datetim": 25, "tensorflow_hub": 25, "tensorflow_data_valid": 25, "tfdv": 25, "addon": 25, "post_export_metr": 25, "fairness_ind": 25, "widget_view": 25, "civilcom": 25, "primari": 25, "seven": 25, "crowd": 25, "worker": 25, "tag": 25, "fraction": 25, "main": 25, "civilcommentsident": 25, "releas": 25, "kaggl": 25, "come": 25, "independ": 25, "2015": 25, "world": 25, "shut": 25, "down": 25, "chose": 25, "enabl": 25, "futur": 25, "figshar": 25, "id": 25, "timestamp": 25, "jigsaw": 25, "ident": 25, "mention": 25, "covert": 25, "offens": 25, "exact": 25, "replica": 25, "unintend": 25, "challeng": 25, "cc0": 25, "underli": 25, "parent_id": 25, "parent_text": 25, "regard": 25, "leak": 25, "did": 25, "parent": 25, "civil_com": 25, "pavlopoulos2020tox": 25, "context": 25, "realli": 25, "matter": 25, "john": 25, "pavlopoulo": 25, "jeffrei": 25, "sorensen": 25, "luca": 25, "dixon": 25, "nithum": 25, "thain": 25, "ion": 25, "androutsopoulo": 25, "eprint": 25, "2006": 25, "00998": 25, "archiveprefix": 25, "primaryclass": 25, "dblp": 25, "corr": 25, "1903": 25, "04561": 25, "daniel": 25, "borkan": 25, "luci": 25, "vasserman": 25, "nuanc": 25, "real": 25, "sun": 25, "31": 25, "mar": 25, "19": 25, "24": 25, "0200": 25, "biburl": 25, "bib": 25, "bibsourc": 25, "scienc": 25, "bibliographi": 25, "semev": 25, "em": 25, "val": 25, "span": 25, "laugier": 25, "15th": 25, "workshop": 25, "semant": 25, "aug": 25, "aclanthologi": 25, "18653": 25, "59": 25, "69": 25, "article_id": 25, "identity_attack": 25, "insult": 25, "obscen": 25, "severe_tox": 25, "sexual_explicit": 25, "threat": 25, "civil_comments_dataset": 25, "train_tf_fil": 25, "get_fil": 25, "train_tf_process": 25, "validate_tf_fil": 25, 
"validate_tf_process": 25, "text_featur": 25, "comment_text": 25, "feature_map": 25, "sexual_orient": 25, "gender": 25, "disabl": 25, "parse_funct": 25, "parse_single_exampl": 25, "work": 25, "parsed_exampl": 25, "fight": 25, "92": 25, "imbal": 25, "doesn": 25, "tfhub": 25, "embedded_text_feature_column": 25, "text_embedding_column": 25, "module_spec": 25, "nnlm": 25, "en": 25, "dim128": 25, "dnnclassifi": 25, "hidden_unit": 25, "weight_column": 25, "legaci": 25, "adagrad": 25, "003": 25, "loss_reduct": 25, "reduct": 25, "n_class": 25, "gettempdir": 25, "input_example_placehold": 25, "ones_lik": 25, "tfma_export_dir": 25, "export_eval_savedmodel": 25, "export_dir_bas": 25, "signature_nam": 25, "merg": 25, "built": 25, "precis": 25, "recal": 25, "alphabet": 25, "protect": 25, "voic": 25, "area": 25, "focu": 25, "anyth": 25, "rude": 25, "disrespect": 25, "someon": 25, "leav": 25, "discuss": 25, "attemp": 25, "sever": 25, "subtyp": 25, "ensur": 25, "wide": 25, "8e0b81f80a23": 25, "overrepres": 25, "black": 25, "muslim": 25, "feminist": 25, "woman": 25, "gai": 25, "often": 25, "far": 25, "mani": 25, "forum": 25, "unfortun": 25, "attack": 25, "rarer": 25, "affirm": 25, "statement": 25, "am": 25, "proud": 25, "man": 25, "adopt": 25, "pick": 25, "connot": 25, "insuffici": 25, "divers": 25, "imbalenc": 25, "balanc": 25, "enough": 25, "effect": 25, "distinguish": 25, "paper": 25, "societi": 25}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"dataset": [0, 5, 6, 7, 8, 9, 11, 17, 18, 21, 23], "explain": [3, 5, 11, 15, 16, 17, 18, 19, 22, 23, 24], "goal": 3, "submodul": 3, "api": [4, 12, 17, 18], "refrenc": 4, "intel": [5, 11, 16, 17, 18], "ai": [5, 11, 17, 18], "tool": [5, 11, 16, 18], "overview": [5, 10, 11, 26], "get": [5, 11, 17, 18], "start": [5, 11], "requir": [5, 6, 8, 11], "develop": [5, 6, 8, 11], "instal": [5, 6, 8, 11, 14, 21], "poetri": [5, 6, 8, 11], "exist": [5, 6, 8, 11], "enviorn": [5, 6, 8, 11], "creat": [5, 6, 8, 11, 25], "activ": [5, 6, 8, 11], "python3": [5, 6, 8, 11], "virtual": [5, 6, 8, 11], "environ": [5, 6, 8, 11], "addit": [5, 6, 8, 11], "featur": [5, 6, 8, 11, 22], "specif": [5, 6, 8, 11], "step": [5, 6, 8, 11], "verifi": [5, 6, 8, 11], "run": [5, 6, 8, 11, 14], "notebook": [5, 6, 8, 11, 15, 16], "support": [5, 6, 8, 11], "disclaim": [5, 6, 7, 8, 9, 11], "licens": [5, 6, 7, 8, 9, 11], "model": [5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 24, 25], "softwar": [6, 8], "legal": [7, 9], "inform": [7, 9], "refer": [12, 16], "card": [12, 13, 14, 15, 20, 21, 25], "gener": [12, 14, 15, 20, 21], "exampl": [13, 15, 23], "input": [14, 16, 20], "test": [14, 20, 23], "marker": 14, "sampl": 14, "command": 14, "us": [14, 16, 17, 18, 19, 22, 23, 24], "resnet50": 16, "imagenet": 16, "classif": [16, 17, 19, 22, 23, 24, 25], "cam": 16, "object": 16, "load": 16, "xai": 16, "pytorch": [16, 17, 18, 20], "modul": 16, "xgradcam": 16, "imag": [16, 17], "visual": [16, 22, 23, 24], "tensorflow": [16, 21, 25], "multimod": 17, "breast": 17, "cancer": 17, "detect": [17, 21], "import": [17, 18, 21], "depend": [17, 18, 21, 25], "setup": [17, 18], "directori": 17, "option": [17, 18], "group": 17, "data": [17, 20, 22, 23, 24, 25], "patient": 17, "id": 17, "1": [17, 18, 20, 23, 24], "prepar": [17, 18], "analysi": 17, "transfer": 17, "learn": 17, "save": [17, 20], "comput": 17, "vision": 17, "error": 17, "2": [17, 18, 20, 23, 24], "text": [17, 18, 24], "corpu": 17, "nlp": 17, "explan": 17, "int8": 17, "quantiz": 17, "citat": [17, 18], "public": 17, "tcia": 17, "fine": 18, "tune": 
18, "classifi": 18, "paramet": 18, "A": 18, "hug": 18, "face": 18, "b": 18, "custom": [18, 19, 22, 23, 24], "3": [18, 20, 23], "evalu": [18, 24], "trainer": 18, "http": 18, "huggingfac": 18, "co": 18, "doc": 18, "transform": 18, "v4": 18, "16": 18, "en": 18, "main_class": 18, "__": 18, "from": [18, 20, 21, 23], "nativ": 18, "4": [18, 20, 23], "export": [18, 25], "5": [18, 20, 23], "reload": 18, "make": [18, 25], "predict": [18, 23, 24], "6": [18, 23], "cnn": [19, 23], "cifar": 19, "10": [19, 23], "attribut": [19, 22, 23, 24], "collect": 20, "preprocess": [20, 21, 22], "fetch": 20, "openml": 20, "drop": 20, "unneed": 20, "column": 20, "train": [20, 23, 24, 25], "split": [20, 22], "build": [20, 25], "evalconfig": 20, "issu": 21, "fair": 21, "estim": 21, "librari": 21, "download": [21, 25], "tfx": 21, "pipelin": 21, "script": 21, "displai": 21, "neural": 22, "network": 22, "heart": 22, "diseas": 22, "connect": 22, "graph": 22, "accuraci": 22, "mnist": 23, "design": 23, "scatch": 23, "survei": 23, "perform": [23, 24], "across": 23, "all": 23, "class": 23, "metrics_explain": 23, "plugin": 23, "feature_attributions_explain": 23, "can": 23, "observ": 23, "confus": 23, "matrix": 23, "9": 23, "poorli": 23, "additionallli": 23, "i": 23, "high": 23, "misclassif": 23, "rate": 23, "exclus": 23, "amongst": 23, "two": 23, "label": 23, "In": 23, "other": 23, "word": 23, "appear": 23, "": 23, "vice": 23, "versa": 23, "7": 23, "were": 23, "misclassifi": 23, "let": 23, "take": 23, "closer": 23, "look": 23, "pixel": 23, "base": 23, "shap": [23, 24], "valu": [23, 24], "where": 23, "when": 23, "correct": [23, 24], "groundtruth": 23, "conclus": 23, "deep": 23, "gradient": 23, "pai": 23, "close": 23, "attent": 23, "top": 23, "digit": 23, "distinguish": 23, "between": 23, "On": 23, "first": 23, "last": 23, "row": 23, "abov": 23, "we": 23, "ar": 23, "The": 23, "contribut": 23, "postiiv": 23, "red": 23, "thi": 23, "begin": 23, "why": 23, "nn": 24, "newsgroup": 24, "vector": 24, "defin": 24, "compil": 24, "partit": 24, "plot": 24, "bar": 24, "waterfal": 24, "forc": 24, "toxic": 25, "comment": 25, "descript": 25, "evalsavedmodel": 25, "format": 25}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "nbsphinx": 4, "sphinx.ext.intersphinx": 1, "sphinx.ext.viewcode": 1, "sphinx": 57}, "alltitles": {"Datasets": [[0, "datasets"]], "Explainer": [[3, "explainer"]], "Goals": [[3, "goals"]], "Explainer Submodules": [[3, "explainer-submodules"]], "API Refrence": [[4, "api-refrence"]], "Intel\u00ae Explainable AI Tools": [[5, "intel-explainable-ai-tools"], [11, "intel-explainable-ai-tools"]], "Overview": [[5, "overview"], [10, "overview"], [11, "overview"], [26, "overview"]], "Get Started": [[5, "get-started"], [11, "get-started"]], "Requirements": [[5, "requirements"], [11, "requirements"]], "Developer Installation with Poetry": [[5, "developer-installation-with-poetry"], [6, "developer-installation-with-poetry"], [8, "developer-installation-with-poetry"], [11, "developer-installation-with-poetry"]], "Install to existing enviornment with Poetry": [[5, "install-to-existing-enviornment-with-poetry"], [6, "install-to-existing-enviornment-with-poetry"], [8, "install-to-existing-enviornment-with-poetry"], [11, "install-to-existing-enviornment-with-poetry"]], "Create and activate a Python3 virtual 
environment": [[5, "create-and-activate-a-python3-virtual-environment"], [6, "create-and-activate-a-python3-virtual-environment"], [8, "create-and-activate-a-python3-virtual-environment"], [11, "create-and-activate-a-python3-virtual-environment"]], "Additional Feature-Specific Steps": [[5, "additional-feature-specific-steps"], [6, "additional-feature-specific-steps"], [8, "additional-feature-specific-steps"], [11, "additional-feature-specific-steps"]], "Verify Installation": [[5, "verify-installation"], [6, "verify-installation"], [8, "verify-installation"], [11, "verify-installation"]], "Running Notebooks": [[5, "running-notebooks"], [6, "running-notebooks"], [8, "running-notebooks"], [11, "running-notebooks"]], "Support": [[5, "support"], [6, "support"], [8, "support"], [11, "support"]], "DISCLAIMER": [[5, "disclaimer"], [6, "disclaimer"], [8, "disclaimer"], [11, "disclaimer"]], "License": [[5, "license"], [6, "license"], [7, "license"], [8, "license"], [9, "license"], [11, "license"]], "Datasets and Models": [[5, "datasets-and-models"], [6, "datasets-and-models"], [7, "datasets-and-models"], [8, "datasets-and-models"], [9, "datasets-and-models"], [11, "datasets-and-models"]], "Installation": [[6, "installation"], [8, "installation"]], "Software Requirements": [[6, "software-requirements"], [8, "software-requirements"]], "Legal Information": [[7, "legal-information"], [9, "legal-information"]], "Disclaimer": [[7, "disclaimer"], [9, "disclaimer"]], "API Reference": [[12, "api-reference"]], "Model Card Generator": [[12, "model-card-generator"], [14, "model-card-generator"]], "Example Model Card": [[13, "example-model-card"]], "Install": [[14, "install"]], "Run": [[14, "run"]], "Model Card Generator Inputs": [[14, "model-card-generator-inputs"]], "Test": [[14, "test"]], "Markers": [[14, "markers"]], "Sample test commands using markers": [[14, "sample-test-commands-using-markers"]], "Example Notebooks": [[15, "example-notebooks"]], "Explainer Notebooks": [[15, "explainer-notebooks"]], "Model Card Generator Notebooks": [[15, "model-card-generator-notebooks"]], "Explaining ResNet50 ImageNet Classification Using the CAM Explainer": [[16, "Explaining-ResNet50-ImageNet-Classification-Using-the-CAM-Explainer"]], "Objective": [[16, "Objective"]], "Loading Intel XAI Tools PyTorch CAM Module": [[16, "Loading-Intel-XAI-Tools-PyTorch-CAM-Module"]], "Loading Notebook Modules": [[16, "Loading-Notebook-Modules"]], "Using XGradCAM": [[16, "Using-XGradCAM"]], "Loading the input image": [[16, "Loading-the-input-image"]], "Loading the Model": [[16, "Loading-the-Model"]], "Visualization": [[16, "Visualization"]], "References": [[16, "References"]], "Loading Intel XAI Tools TensorFlow CAM Module": [[16, "Loading-Intel-XAI-Tools-TensorFlow-CAM-Module"]], "Explaining Image Classification Models with TensorFlow": [[16, "Explaining-Image-Classification-Models-with-TensorFlow"]], "Multimodal Breast Cancer Detection Explainability using the Intel\u00ae Explainable AI API": [[17, "Multimodal-Breast-Cancer-Detection-Explainability-using-the-Intel\u00ae-Explainable-AI-API"]], "Import Dependencies and Setup Directories": [[17, "Import-Dependencies-and-Setup-Directories"]], "Dataset": [[17, "Dataset"]], "Optional: Group Data by Patient ID": [[17, "Optional:-Group-Data-by-Patient-ID"]], "Model 1: Image Classification with PyTorch": [[17, "Model-1:-Image-Classification-with-PyTorch"]], "Get the Model and Dataset": [[17, "Get-the-Model-and-Dataset"], [17, "id1"]], "Data Preparation": [[17, "Data-Preparation"], [17, "id2"]], 
"Image dataset analysis": [[17, "Image-dataset-analysis"]], "Transfer Learning": [[17, "Transfer-Learning"], [17, "id3"]], "Save the Computer Vision Model": [[17, "Save-the-Computer-Vision-Model"]], "Error Analysis": [[17, "Error-Analysis"]], "Explainability": [[17, "Explainability"]], "Model 2: Text Classification with PyTorch": [[17, "Model-2:-Text-Classification-with-PyTorch"]], "Corpus analysis": [[17, "Corpus-analysis"]], "Save the NLP Model": [[17, "Save-the-NLP-Model"]], "Error analysis": [[17, "Error-analysis"], [17, "id5"]], "Explanation": [[17, "Explanation"]], "Int8 Quantization": [[17, "Int8-Quantization"]], "Save the Quantized NLP Model": [[17, "Save-the-Quantized-NLP-Model"]], "Citations": [[17, "Citations"], [18, "Citations"]], "Data Citation": [[17, "Data-Citation"]], "Publication Citation": [[17, "Publication-Citation"]], "TCIA Citation": [[17, "TCIA-Citation"]], "Explaining Fine Tuned Text Classifier with PyTorch using the Intel\u00ae Explainable AI API": [[18, "Explaining-Fine-Tuned-Text-Classifier-with-PyTorch-using-the-Intel\u00ae-Explainable-AI-API"]], "1. Import dependencies and setup parameters": [[18, "1.-Import-dependencies-and-setup-parameters"]], "2. Prepare the dataset": [[18, "2.-Prepare-the-dataset"]], "Option A: Use a Hugging Face dataset": [[18, "Option-A:-Use-a-Hugging-Face-dataset"]], "Option B: Use a custom dataset": [[18, "Option-B:-Use-a-custom-dataset"]], "3. Prepare the Model for Fine Tuning and Evaluation": [[18, "3.-Prepare-the-Model-for-Fine-Tuning-and-Evaluation"]], "Option A: Use the `Trainer `__ API from Hugging Face": [[18, "Option-A:-Use-the-`Trainer-`__-API-from-Hugging-Face"]], "Option B: Use the native PyTorch API": [[18, "Option-B:-Use-the-native-PyTorch-API"]], "4. Export the model": [[18, "4.-Export-the-model"]], "5. Reload the model and make predictions": [[18, "5.-Reload-the-model-and-make-predictions"]], "6. Get Explainations with Intel Explainable AI Tools": [[18, "6.-Get-Explainations-with-Intel-Explainable-AI-Tools"]], "Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer": [[19, "Explaining-Custom-CNN-CIFAR-10-Classification-Using-the-Attributions-Explainer"]], "Generating Model Card with PyTorch": [[20, "Generating-Model-Card-with-PyTorch"]], "1. Data Collection and Preprocessing": [[20, "1.-Data-Collection-and-Preprocessing"]], "Fetch Data from OpenML": [[20, "Fetch-Data-from-OpenML"]], "Drop Unneeded Columns": [[20, "Drop-Unneeded-Columns"]], "Train Test Split": [[20, "Train-Test-Split"]], "2. Build Model": [[20, "2.-Build-Model"]], "3. Train Model": [[20, "3.-Train-Model"]], "4. Save Model": [[20, "4.-Save-Model"]], "5. 
Generate Model Card": [[20, "5.-Generate-Model-Card"]], "EvalConfig Input": [[20, "EvalConfig-Input"]], "Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators": [[21, "Detecting-Issues-in-Fairness-by-Generating-Model-Card-from-Tensorflow-Estimators"]], "Install Dependencies": [[21, "Install-Dependencies"]], "Import Libraries": [[21, "Import-Libraries"]], "Download and preprocess the dataset": [[21, "Download-and-preprocess-the-dataset"]], "TFX Pipeline Scripts": [[21, "TFX-Pipeline-Scripts"]], "Display Model Card": [[21, "Display-Model-Card"]], "Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer": [[22, "Explaining-a-Custom-Neural-Network-Heart-Disease-Classification-Using-the-Attributions-Explainer"]], "Data Splitting": [[22, "Data-Splitting"]], "Feature Preprocessing": [[22, "Feature-Preprocessing"]], "Model": [[22, "Model"]], "Visualize the connectivity graph:": [[22, "Visualize-the-connectivity-graph:"]], "Accuracy": [[22, "Accuracy"]], "Explaining Custom CNN MNIST Classification Using the Attributions Explainer": [[23, "Explaining-Custom-CNN-MNIST-Classification-Using-the-Attributions-Explainer"]], "1. Design the CNN from scatch": [[23, "1.-Design-the-CNN-from-scatch"]], "2. Train the CNN on the MNIST dataset": [[23, "2.-Train-the-CNN-on-the-MNIST-dataset"]], "3. Predict the MNIST test data": [[23, "3.-Predict-the-MNIST-test-data"]], "4. Survey performance across all classes using the metrics_explainer plugin": [[23, "4.-Survey-performance-across-all-classes-using-the-metrics_explainer-plugin"]], "5. Explain performance across the classes using the feature_attributions_explainer plugin": [[23, "5.-Explain-performance-across-the-classes-using-the-feature_attributions_explainer-plugin"]], "From (4), it can be observed from the confusion matrix that classes 4 and 9 perform poorly. Additionallly, there is a high misclassification rate exclusively amongst the two labels. In other words, it appears that the CNN if confusing 4\u2019s with 9\u2019s, and vice-versa. 7.4% of all the 9 examples were misclassified as 4, and 10% of all the 4 examples were misclassified as 9.": [[23, "From-(4),-it-can-be-observed-from-the-confusion-matrix-that-classes-4-and-9-perform-poorly.-Additionallly,-there-is-a-high-misclassification-rate-exclusively-amongst-the-two-labels.-In-other-words,-it-appears-that-the-CNN-if-confusing-4's-with-9's,-and-vice-versa.-7.4%-of-all-the-9-examples-were-misclassified-as-4,-and-10%-of-all-the-4-examples-were-misclassified-as-9."]], "Let\u2019s take a closer look at the pixel-based shap values for the test examples where the CNN predicts \u20189\u2019 when the correct groundtruth label is \u20184\u2019.": [[23, "Let's-take-a-closer-look-at-the-pixel-based-shap-values-for-the-test-examples-where-the-CNN-predicts-'9'-when-the-correct-groundtruth-label-is-'4'."]], "6. Conclusion": [[23, "6.-Conclusion"]], "From the deep and gradient explainer visuals, it can be observed that the CNN pays close attention to the top of the digit in distinguishing between a 4 and a 9. On the first and last row of the above gradient explainer visualization we can the 4\u2019s are closed. The contributes to postiive shap values (red) for the 9 classification. 
This begins explaining why the CNN is confusing the two digits.": [[23, "From-the-deep-and-gradient-explainer-visuals,-it-can-be-observed-that-the-CNN-pays-close-attention-to-the-top-of-the-digit-in-distinguishing-between-a-4-and-a-9.-On-the-first-and-last-row-of-the-above-gradient-explainer-visualization-we-can-the-4's-are-closed.-The-contributes-to-postiive-shap-values-(red)-for-the-9-classification.-This-begins-explaining-why-the-CNN-is-confusing-the-two-digits."]], "Explaining Custom NN NewsGroups Classification Using the Attributions Explainer": [[24, "Explaining-Custom-NN-NewsGroups-Classification-Using-the-Attributions-Explainer"]], "Vectorize Text Data": [[24, "Vectorize-Text-Data"]], "Define the Model": [[24, "Define-the-Model"]], "Compile and Train Model": [[24, "Compile-and-Train-Model"]], "Evaluate Model Performance": [[24, "Evaluate-Model-Performance"]], "SHAP Partition Explainer": [[24, "SHAP-Partition-Explainer"]], "Visualize SHAP Values for Correct Predictions": [[24, "Visualize-SHAP-Values-Correct-Predictions"]], "Text Plot": [[24, "Text-Plot"]], "Bar Plots": [[24, "Bar-Plots"]], "Bar Plot 1": [[24, "Bar-Plot-1"]], "Bar Plot 2": [[24, "Bar-Plot-2"]], "Waterfall Plots": [[24, "Waterfall-Plots"]], "Waterfall Plot 1": [[24, "Waterfall-Plot-1"]], "Waterfall Plot 2": [[24, "Waterfall-Plot-2"]], "Force Plot": [[24, "Force-Plot"]], "Creating Model Card for Toxic Comments Classification in TensorFlow": [[25, "Creating-Model-Card-for-Toxic-Comments-Classification-in-Tensorflow"]], "Training Dependencies": [[25, "Training-Dependencies"]], "Model Card Dependencies": [[25, "Model-Card-Dependencies"]], "Download Data": [[25, "Download-Data"]], "Data Description": [[25, "Data-Description"]], "Train Model": [[25, "Train-Model"], [25, "id1"]], "Build Model": [[25, "Build-Model"]], "Export in EvalSavedModel Format": [[25, "Export-in-EvalSavedModel-Format"]], "Making a Model Card": [[25, "Making-a-Model-Card"]]}, "indexentries": {}}) \ No newline at end of file