diff --git a/main/_sources/markdown/Install.md.txt b/main/_sources/markdown/Install.md.txt index 80b2f99..112e93c 100644 --- a/main/_sources/markdown/Install.md.txt +++ b/main/_sources/markdown/Install.md.txt @@ -1,9 +1,10 @@ ## Installation ### Software Requirements * Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS) -* Python 3.8, 3.9, 3.10 +* Python 3.9, 3.10 * Install required OS packages with `apt-get install build-essential python3-dev` * git (only required for the "Developer Installation") +* Poetry ### Developer Installation with Poetry @@ -15,52 +16,52 @@ on making code changes. 2. Allow poetry to create virtual envionment contained in `.venv` directory of current directory. - ``` + ```bash poetry lock ``` In addtion, you can explicitly tell poetry which python instance to use - ``` + ```bash poetry env use /full/path/to/python ``` 3. Choose the `intel_ai_safety` subpackages and plugins that you wish to install. a. Install `intel_ai_safety` with all of its subpackages (e.g. `explainer` and `model_card_gen`) and plugins - ``` + ```bash poetry install --extras all ``` b. Install `intel_ai_safety` with just `explainer` - ``` + ```bash poetry install --extras explainer ``` c. Install `intel_ai_safety` with just `model_card_gen` - ``` + ```bash poetry install --extras model-card ``` d. Install `intel_ai_safety` with `explainer` and all of its plugins - ``` + ```bash poetry install --extras explainer-all ``` e. Install `intel_ai_safety` with `explainer` and just its pytorch implementations - ``` + ```bash poetry install --extras explainer-pytorch ``` - f. Install `intel_ai_safety` with `explainer` and just its pytorch implementations + f. Install `intel_ai_safety` with `explainer` and just its tensroflow implementations - ``` + ```bash poetry install --extras explainer-tensorflow ``` -4. Activate the enviornment: +4. Activate the environment: - ``` + ```bash source .venv/bin/activate ``` @@ -71,18 +72,18 @@ We encourage you to use a python virtual environment (virtualenv or conda) for c There are two ways to do this: 1. Choose a virtual enviornment to use: a. Using `virtualenv`: - ``` - python3.9 -m virtualenv xai_env + ```bash + python3 -m virtualenv xai_env source xai_env/bin/activate ``` b. Or `conda`: - ``` + ```bash conda create --name xai_env python=3.9 conda activate xai_env ``` 2. Install to current enviornment - ``` + ```bash poetry config virtualenvs.create false && poetry install --extras all ``` @@ -92,7 +93,7 @@ Notebooks may require additional dependencies listed in their associated documen ### Verify Installation Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions: -``` +```bash python -c "from intel_ai_safety.explainer import version; print(version.__version__)" python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)" ``` @@ -106,7 +107,7 @@ The following links have Jupyter* notebooks showing how to use the Explainer and ## Support The Intel Explainable AI Tools team tracks bugs and enhancement requests using -[GitHub issues](https://github.com/intelai/intel-xai-tools/issues). Before submitting a +[GitHub issues](https://github.com/intel/intel-xai-tools/issues). Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported. *Other names and brands may be claimed as the property of others. 
[Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) diff --git a/main/_sources/markdown/Overview.md.txt b/main/_sources/markdown/Overview.md.txt index 1df549b..156cda3 100644 --- a/main/_sources/markdown/Overview.md.txt +++ b/main/_sources/markdown/Overview.md.txt @@ -3,12 +3,12 @@ The Intel® Explainable AI Tools are designed to help users detect and mitigate against issues of fairness and interpretability, while running best on Intel hardware. There are two Python* components in the repository: -* [Model Card Generator](intel_ai_safety/model_card_gen) +* [Model Card Generator](model_card_gen) * Creates interactive HTML reports containing model performance and fairness metrics * [Explainer](explainer) * Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API including the following modules: - * [Attributions](explainer/intel_ai_safety/explainer/attributions/): Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions - * [CAM (Class Activation Mapping)](explainer/intel_ai_safety/explainer/cam/): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping - * [Metrics](explainer/intel_ai_safety/explainer/metrics/): Gain insight into models with the measurements and visualizations needed during the machine learning workflow + * [Attributions](plugins/explainers/attributions): Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions + * [CAM (Class Activation Mapping)](plugins/explainers/cam-pytorch): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping + * [Metrics](plugins/explainers/metrics): Gain insight into models with the measurements and visualizations needed during the machine learning workflow *Other names and brands may be claimed as the property of others. [Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) diff --git a/main/_sources/markdown/Welcome.md.txt b/main/_sources/markdown/Welcome.md.txt index 52fe559..c01c4d9 100644 --- a/main/_sources/markdown/Welcome.md.txt +++ b/main/_sources/markdown/Welcome.md.txt @@ -7,21 +7,22 @@ This repository provides tools for data scientists and MLOps engineers that have The Intel Explainable AI Tools are designed to help users detect and mitigate against issues of fairness and interpretability, while running best on Intel hardware. 
There are two Python* components in the repository: -* [Model Card Generator](intel_ai_safety/model_card_gen) +* [Model Card Generator](model_card_gen) * Creates interactive HTML reports containing model performance and fairness metrics * [Explainer](explainer) * Runs post-hoc model distillation and visualization methods to examine predictive behavior for both TensorFlow* and PyTorch* models via a simple Python API including the following modules: - * [Attributions](explainer/intel_ai_safety/explainer/attributions/): Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions - * [CAM (Class Activation Mapping)](explainer/intel_ai_safety/explainer/cam/): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping - * [Metrics](explainer/intel_ai_safety/explainer/metrics/): Gain insight into models with the measurements and visualizations needed during the machine learning workflow + * [Attributions](plugins/explainers/attributions): Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions + * [CAM (Class Activation Mapping)](plugins/explainers/cam-pytorch): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping + * [Metrics](plugins/explainers/metrics): Gain insight into models with the measurements and visualizations needed during the machine learning workflow ## Get Started ### Requirements * Linux system or WSL2 on Windows (validated on Ubuntu* 20.04/22.04 LTS) -* Python 3.8, 3.9, 3.10 +* Python 3.9, 3.10 * Install required OS packages with `apt-get install build-essential python3-dev` * git (only required for the "Developer Installation") +* Poetry ### Developer Installation with Poetry @@ -33,52 +34,52 @@ on making code changes. 2. Allow poetry to create virtual envionment contained in `.venv` directory of current directory. - ``` + ```bash poetry lock ``` In addtion, you can explicitly tell poetry which python instance to use - ``` + ```bash poetry env use /full/path/to/python ``` 3. Choose the `intel_ai_safety` subpackages and plugins that you wish to install. a. Install `intel_ai_safety` with all of its subpackages (e.g. `explainer` and `model_card_gen`) and plugins - ``` + ```bash poetry install --extras all ``` b. Install `intel_ai_safety` with just `explainer` - ``` + ```bash poetry install --extras explainer ``` c. Install `intel_ai_safety` with just `model_card_gen` - ``` + ```bash poetry install --extras model-card ``` d. Install `intel_ai_safety` with `explainer` and all of its plugins - ``` + ```bash poetry install --extras explainer-all ``` e. Install `intel_ai_safety` with `explainer` and just its pytorch implementations - ``` + ```bash poetry install --extras explainer-pytorch ``` - f. Install `intel_ai_safety` with `explainer` and just its pytorch implementations + f. Install `intel_ai_safety` with `explainer` and just its tensroflow implementations - ``` + ```bash poetry install --extras explainer-tensorflow ``` -4. Activate the enviornment: +4. Activate the environment: - ``` + ```bash source .venv/bin/activate ``` @@ -89,18 +90,18 @@ We encourage you to use a python virtual environment (virtualenv or conda) for c There are two ways to do this: 1. Choose a virtual enviornment to use: a. Using `virtualenv`: - ``` - python3.9 -m virtualenv xai_env + ```bash + python3 -m virtualenv xai_env source xai_env/bin/activate ``` b. 
Or `conda`: - ``` + ```bash conda create --name xai_env python=3.9 conda activate xai_env ``` 2. Install to current enviornment - ``` + ```bash poetry config virtualenvs.create false && poetry install --extras all ``` @@ -110,7 +111,7 @@ Notebooks may require additional dependencies listed in their associated documen ### Verify Installation Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions: -``` +```bash python -c "from intel_ai_safety.explainer import version; print(version.__version__)" python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)" ``` @@ -124,7 +125,7 @@ The following links have Jupyter* notebooks showing how to use the Explainer and ## Support The Intel Explainable AI Tools team tracks bugs and enhancement requests using -[GitHub issues](https://github.com/intelai/intel-xai-tools/issues). Before submitting a +[GitHub issues](https://github.com/intel/intel-xai-tools/issues). Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported. *Other names and brands may be claimed as the property of others. [Trademarks](http://www.intel.com/content/www/us/en/legal/trademarks.html) diff --git a/main/datasets.html b/main/datasets.html index 67def79..de5900c 100644 --- a/main/datasets.html +++ b/main/datasets.html @@ -58,7 +58,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/explainer/attributions.html b/main/explainer/attributions.html index 0dff375..8e57885 100644 --- a/main/explainer/attributions.html +++ b/main/explainer/attributions.html @@ -60,7 +60,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/explainer/cam.html b/main/explainer/cam.html index fbb9f2f..ef51f70 100644 --- a/main/explainer/cam.html +++ b/main/explainer/cam.html @@ -60,7 +60,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/explainer/index.html b/main/explainer/index.html index 4bcbc71..d37c78d 100644 --- a/main/explainer/index.html +++ b/main/explainer/index.html @@ -63,7 +63,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/explainer/metrics.html b/main/explainer/metrics.html index 17585bc..5cc87e4 100644 --- a/main/explainer/metrics.html +++ b/main/explainer/metrics.html @@ -63,7 +63,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/genindex.html b/main/genindex.html index 1014e69..116d7fe 100644 --- a/main/genindex.html +++ b/main/genindex.html @@ -57,7 +57,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/index.html b/main/index.html index 55a4419..7915278 100644 --- a/main/index.html +++ b/main/index.html @@ -59,7 +59,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -94,7 +94,7 @@

    Overview -
  • Model Card Generator

    +
  • Model Card Generator

    @@ -103,9 +103,9 @@

    Overview
    Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • -
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • -
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • +
  • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • +
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • +
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • @@ -118,9 +118,10 @@

    Get Started

    @@ -131,42 +132,42 @@

    Developer Installation with Poetry
  • Clone this repo and navigate to the repo directory.

  • Allow poetry to create a virtual environment contained in the .venv directory of the current directory.

    -
    poetry lock
    +
    poetry lock
     

    In addition, you can explicitly tell poetry which python instance to use

    -
    poetry env use /full/path/to/python
    +
    poetry env use /full/path/to/python
     
  • Choose the intel_ai_safety subpackages and plugins that you wish to install.

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    -
    poetry install --extras all
    +
    poetry install --extras all
     

    b. Install intel_ai_safety with just explainer

    -
    poetry install --extras explainer
    +
    poetry install --extras explainer
     

    c. Install intel_ai_safety with just model_card_gen

    -
    poetry install --extras model-card
    +
    poetry install --extras model-card
     

    d. Install intel_ai_safety with explainer and all of its plugins

    -
    poetry install --extras explainer-all
    +
    poetry install --extras explainer-all
     

    e. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-pytorch
    +
    poetry install --extras explainer-pytorch
     
    -

    f. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-tensorflow
    +

    f. Install intel_ai_safety with explainer and just its tensorflow implementations

    +
    poetry install --extras explainer-tensorflow
     
  • -
  • Activate the enviornment:

    -
    source .venv/bin/activate
    +
  • Activate the environment:

    +
    source .venv/bin/activate
     
  • @@ -181,18 +182,18 @@

    Create and activate a Python3 virtual environment
  • Choose a virtual environment to use: a. Using virtualenv:

    -
    python3.9 -m virtualenv xai_env
    -source xai_env/bin/activate
    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
     

    b. Or conda:

    -
    conda create --name xai_env python=3.9
    -conda activate xai_env
    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
     
  • Install to the current environment

    -
    poetry config virtualenvs.create false && poetry install --extras all
    +
    poetry config virtualenvs.create false && poetry install --extras all
     
  • @@ -206,8 +207,8 @@

    Additional Feature-Specific Steps

    Verify Installation

    Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

    -
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    -python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
    +
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    +python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
     

  • @@ -223,7 +224,7 @@

    Running Notebooks

    Support

    The Intel Explainable AI Tools team tracks bugs and enhancement requests using -GitHub issues. Before submitting a +GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

    *Other names and brands may be claimed as the property of others. Trademarks
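Taken together, the developer installation walked through on this page boils down to roughly the following shell session. This is a minimal sketch rather than part of the diff above: it assumes the repository URL shown elsewhere in this changeset (https://github.com/Intel/intel-xai-tools) and picks `--extras all`, which is only one of the options listed; multiple extras can also be combined in a single `poetry install` call.

```bash
# Sketch of the developer installation flow described above (assumptions noted in comments).
# Poetry itself is a prerequisite; it can be installed e.g. with `pipx install poetry`.
git clone https://github.com/Intel/intel-xai-tools.git
cd intel-xai-tools

poetry lock                           # resolve dependencies; the virtual environment lives in ./.venv
poetry env use /full/path/to/python   # optional: pin the python instance poetry should use

# Install everything, or pick a subset, e.g.: poetry install --extras "explainer-pytorch model-card"
poetry install --extras all

source .venv/bin/activate             # activate the environment
python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
```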

    diff --git a/main/install.html b/main/install.html index bd344ee..502cd88 100644 --- a/main/install.html +++ b/main/install.html @@ -60,7 +60,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -93,9 +93,10 @@

    Installation

    @@ -106,42 +107,42 @@

    Developer Installation with Poetry
  • Clone this repo and navigate to the repo directory.

  • Allow poetry to create a virtual environment contained in the .venv directory of the current directory.

    -
    poetry lock
    +
    poetry lock
     

    In addition, you can explicitly tell poetry which python instance to use

    -
    poetry env use /full/path/to/python
    +
    poetry env use /full/path/to/python
     
  • Choose the intel_ai_safety subpackages and plugins that you wish to install.

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    -
    poetry install --extras all
    +
    poetry install --extras all
     

    b. Install intel_ai_safety with just explainer

    -
    poetry install --extras explainer
    +
    poetry install --extras explainer
     

    c. Install intel_ai_safety with just model_card_gen

    -
    poetry install --extras model-card
    +
    poetry install --extras model-card
     

    d. Install intel_ai_safety with explainer and all of its plugins

    -
    poetry install --extras explainer-all
    +
    poetry install --extras explainer-all
     

    e. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-pytorch
    +
    poetry install --extras explainer-pytorch
     
    -

    f. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-tensorflow
    +

    f. Install intel_ai_safety with explainer and just its tensorflow implementations

    +
    poetry install --extras explainer-tensorflow
     
  • -
  • Activate the enviornment:

    -
    source .venv/bin/activate
    +
  • Activate the environment:

    +
    source .venv/bin/activate
     
  • @@ -156,18 +157,18 @@

    Create and activate a Python3 virtual environment
  • Choose a virtual environment to use: a. Using virtualenv:

    -
    python3.9 -m virtualenv xai_env
    -source xai_env/bin/activate
    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
     

    b. Or conda:

    -
    conda create --name xai_env python=3.9
    -conda activate xai_env
    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
     
  • Install to the current environment

    -
    poetry config virtualenvs.create false && poetry install --extras all
    +
    poetry config virtualenvs.create false && poetry install --extras all
     
  • @@ -181,8 +182,8 @@

    Additional Feature-Specific Steps

    Verify Installation

    Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

    -
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    -python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
    +
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    +python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
     

  • @@ -198,7 +199,7 @@

    Running Notebooks

    Support

    The Intel Explainable AI Tools team tracks bugs and enhancement requests using -GitHub issues. Before submitting a +GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

    *Other names and brands may be claimed as the property of others. Trademarks
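For the standard (non-developer) installation covered above, the same steps can be strung together as follows. This is a sketch rather than part of the documentation diff: it assumes the `virtualenv` option (the `conda` variant is noted in the comments), and `--extras all` is again just one of the listed choices.

```bash
# Sketch of the standard installation into an existing virtual environment.
python3 -m virtualenv xai_env                # or: conda create --name xai_env python=3.9
source xai_env/bin/activate                  #     conda activate xai_env

# Install into the already-active environment instead of a poetry-managed one
poetry config virtualenvs.create false && poetry install --extras all

# Verify the installation
python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
```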

    diff --git a/main/legal.html b/main/legal.html index 700026f..57ee57d 100644 --- a/main/legal.html +++ b/main/legal.html @@ -59,7 +59,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/markdown/Install.html b/main/markdown/Install.html index 4d023de..bc54e5c 100644 --- a/main/markdown/Install.html +++ b/main/markdown/Install.html @@ -58,7 +58,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -91,9 +91,10 @@

    Installation

    @@ -104,42 +105,42 @@

    Developer Installation with Poetry
  • Clone this repo and navigate to the repo directory.

  • Allow poetry to create a virtual environment contained in the .venv directory of the current directory.

    -
    poetry lock
    +
    poetry lock
     

    In addition, you can explicitly tell poetry which python instance to use

    -
    poetry env use /full/path/to/python
    +
    poetry env use /full/path/to/python
     
  • Choose the intel_ai_safety subpackages and plugins that you wish to install.

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    -
    poetry install --extras all
    +
    poetry install --extras all
     

    b. Install intel_ai_safety with just explainer

    -
    poetry install --extras explainer
    +
    poetry install --extras explainer
     

    c. Install intel_ai_safety with just model_card_gen

    -
    poetry install --extras model-card
    +
    poetry install --extras model-card
     

    d. Install intel_ai_safety with explainer and all of its plugins

    -
    poetry install --extras explainer-all
    +
    poetry install --extras explainer-all
     

    e. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-pytorch
    +
    poetry install --extras explainer-pytorch
     
    -

    f. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-tensorflow
    +

    f. Install intel_ai_safety with explainer and just its tensorflow implementations

    +
    poetry install --extras explainer-tensorflow
     
  • -
  • Activate the enviornment:

    -
    source .venv/bin/activate
    +
  • Activate the environment:

    +
    source .venv/bin/activate
     
  • @@ -154,18 +155,18 @@

    Create and activate a Python3 virtual environment
  • Choose a virtual environment to use: a. Using virtualenv:

    -
    python3.9 -m virtualenv xai_env
    -source xai_env/bin/activate
    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
     

    b. Or conda:

    -
    conda create --name xai_env python=3.9
    -conda activate xai_env
    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
     
  • Install to the current environment

    -
    poetry config virtualenvs.create false && poetry install --extras all
    +
    poetry config virtualenvs.create false && poetry install --extras all
     
  • @@ -179,8 +180,8 @@

    Additional Feature-Specific Steps

    Verify Installation

    Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

    -
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    -python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
    +
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    +python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
     

  • @@ -196,7 +197,7 @@

    Running Notebooks

    Support

    The Intel Explainable AI Tools team tracks bugs and enhancement requests using -GitHub issues. Before submitting a +GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

    *Other names and brands may be claimed as the property of others. Trademarks

    diff --git a/main/markdown/Legal.html b/main/markdown/Legal.html index ab902f2..9876f61 100644 --- a/main/markdown/Legal.html +++ b/main/markdown/Legal.html @@ -58,7 +58,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/markdown/Overview.html b/main/markdown/Overview.html index 2b65636..812a28b 100644 --- a/main/markdown/Overview.html +++ b/main/markdown/Overview.html @@ -58,7 +58,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -90,7 +90,7 @@

    Overview -
  • Model Card Generator

    +
  • Model Card Generator

    • Creates interactive HTML reports containing model performance and fairness metrics

    @@ -99,9 +99,9 @@

    Overview
    Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • -
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • -
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • +
  • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • +
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • +
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • diff --git a/main/markdown/Welcome.html b/main/markdown/Welcome.html index d27bb47..a8e8fb1 100644 --- a/main/markdown/Welcome.html +++ b/main/markdown/Welcome.html @@ -58,7 +58,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -93,7 +93,7 @@

    Overview -
  • Model Card Generator

    +
  • Model Card Generator

    • Creates interactive HTML reports containing model performance and fairness metrics

    @@ -102,9 +102,9 @@

    Overview
    Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • -
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • -
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • +
  • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • +
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • +
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • @@ -117,9 +117,10 @@

    Get Started

    @@ -130,42 +131,42 @@

    Developer Installation with Poetry
  • Clone this repo and navigate to the repo directory.

  • Allow poetry to create a virtual environment contained in the .venv directory of the current directory.

    -
    poetry lock
    +
    poetry lock
     

    In addition, you can explicitly tell poetry which python instance to use

    -
    poetry env use /full/path/to/python
    +
    poetry env use /full/path/to/python
     
  • Choose the intel_ai_safety subpackages and plugins that you wish to install.

    a. Install intel_ai_safety with all of its subpackages (e.g. explainer and model_card_gen) and plugins

    -
    poetry install --extras all
    +
    poetry install --extras all
     

    b. Install intel_ai_safety with just explainer

    -
    poetry install --extras explainer
    +
    poetry install --extras explainer
     

    c. Install intel_ai_safety with just model_card_gen

    -
    poetry install --extras model-card
    +
    poetry install --extras model-card
     

    d. Install intel_ai_safety with explainer and all of its plugins

    -
    poetry install --extras explainer-all
    +
    poetry install --extras explainer-all
     

    e. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-pytorch
    +
    poetry install --extras explainer-pytorch
     
    -

    f. Install intel_ai_safety with explainer and just its pytorch implementations

    -
    poetry install --extras explainer-tensorflow
    +

    f. Install intel_ai_safety with explainer and just its tensorflow implementations

    +
    poetry install --extras explainer-tensorflow
     
  • -
  • Activate the enviornment:

    -
    source .venv/bin/activate
    +
  • Activate the environment:

    +
    source .venv/bin/activate
     
  • @@ -180,18 +181,18 @@

    Create and activate a Python3 virtual environment
  • Choose a virtual environment to use: a. Using virtualenv:

    -
    python3.9 -m virtualenv xai_env
    -source xai_env/bin/activate
    +
    python3 -m virtualenv xai_env
    +source xai_env/bin/activate
     

    b. Or conda:

    -
    conda create --name xai_env python=3.9
    -conda activate xai_env
    +
    conda create --name xai_env python=3.9
    +conda activate xai_env
     
  • Install to the current environment

    -
    poetry config virtualenvs.create false && poetry install --extras all
    +
    poetry config virtualenvs.create false && poetry install --extras all
     
  • @@ -205,8 +206,8 @@

    Additional Feature-Specific Steps

    Verify Installation

    Verify that your installation was successful by using the following commands, which display the Explainer and Model Card Generator versions:

    -
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    -python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
    +
    python -c "from intel_ai_safety.explainer import version; print(version.__version__)"
    +python -c "from intel_ai_safety.model_card_gen import version; print(version.__version__)"
     

  • @@ -222,7 +223,7 @@

    Running Notebooks

    Support

    The Intel Explainable AI Tools team tracks bugs and enhancement requests using -GitHub issues. Before submitting a +GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

    *Other names and brands may be claimed as the property of others. Trademarks

    diff --git a/main/model_card_gen/api.html b/main/model_card_gen/api.html index 107b844..388c744 100644 --- a/main/model_card_gen/api.html +++ b/main/model_card_gen/api.html @@ -64,7 +64,7 @@
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/model_card_gen/example.html b/main/model_card_gen/example.html index 35a9e34..06f7655 100644 --- a/main/model_card_gen/example.html +++ b/main/model_card_gen/example.html @@ -64,7 +64,7 @@
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/model_card_gen/index.html b/main/model_card_gen/index.html index 32bce91..9f4afce 100644 --- a/main/model_card_gen/index.html +++ b/main/model_card_gen/index.html @@ -64,7 +64,7 @@
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -201,7 +201,7 @@

    Install

    Step 1: Clone the GitHub repository.

    -
    git clone https://github.com/IntelAI/intel-xai-tools.git
    +
    git clone https://github.com/Intel/intel-xai-tools.git
     

    Step 2: Navigate to intel-xai-tools directory.

    @@ -232,7 +232,7 @@

    Model Card Generator Inputs
    To understand the eval_config parameter, let us review the following file, entitled “eval_config.proto”, defined for the COMPAS proxy model and found in /notebooks/model_card_gen/compas_with_model_card_gen/compas-model-card-tfx.ipynb.

    In the model_specs section it tells the evaluator that “label_key” is the ground-truth label. In the metrics_specs section it defines the following metrics to be computed: “BinaryAccuracy”, “AUC”, “ConfusionMatrixPlot”, and “FairnessIndicators”. In the slicing_specs section it tells the evaluator to compute these metrics across all datapoints and to aggregate them grouped by the “race” feature.

    -
    model_specs {
    +
    model_specs {
         label_key: 'is_recid'
       }
     metrics_specs {
    @@ -255,7 +255,7 @@ 

    Model Card Generator Inputs
    Alternatively, predictions can be evaluated directly by defining model_specs as follows

    -
    model_specs {
    +
    model_specs {
         label_key: 'y_true'
         prediction_key: 'y_pred'
       }
    @@ -302,7 +302,7 @@ 

    Model Card Generator Inputs
    An example can be found in /model_card_gen/intel_ai_safety/model_card_gen/docs/examples/json/model_card_example.json.

    Create Model Card

    
    -from model_card_gen.model_card_gen import ModelCardGen
    +from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
     
     model_path = 'compas/model'
     data_paths = {
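The “Create Model Card” snippet above is truncated here. As a rough illustration of how the inputs described on this page fit together, a minimal sketch might look like the following; the `generate` classmethod, the `TensorflowDataset` import path, and the concrete file paths are assumptions inferred from the COMPAS notebook referenced above, not verbatim from this diff.

```python
# Hypothetical sketch of Model Card Generator usage; names marked as assumed may differ.
from intel_ai_safety.model_card_gen.model_card_gen import ModelCardGen
from intel_ai_safety.model_card_gen.datasets import TensorflowDataset  # assumed import path

_model_path = 'compas/model'                           # TensorFlow SavedModel directory
_data_paths = {
    'eval': TensorflowDataset(dataset_path='compas/eval.tfrecord*'),  # hypothetical TFRecord glob
}
_eval_config = 'eval_config.proto'                     # TFMA EvalConfig, as a proto file or string

# Assumed classmethod signature: build the model card from the data, model, and eval config.
mcg = ModelCardGen.generate(data_sets=_data_paths,
                            model_path=_model_path,
                            eval_config=_eval_config)
```

Per the description above, the resulting model card object is what gets rendered into the interactive HTML report.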
    diff --git a/main/notebooks.html b/main/notebooks.html
    index 5350d3c..f464eb8 100644
    --- a/main/notebooks.html
    +++ b/main/notebooks.html
    @@ -62,7 +62,7 @@
     
     
     
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/ExplainingImageClassification.html b/main/notebooks/ExplainingImageClassification.html index dcde242..4171d47 100644 --- a/main/notebooks/ExplainingImageClassification.html +++ b/main/notebooks/ExplainingImageClassification.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/Multimodal_Cancer_Detection.html b/main/notebooks/Multimodal_Cancer_Detection.html index 6dca4d1..1dd21ba 100644 --- a/main/notebooks/Multimodal_Cancer_Detection.html +++ b/main/notebooks/Multimodal_Cancer_Detection.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -99,7 +99,7 @@

    Import Dependencies and Setup Directories
    # This notebook requires the latest version of intel-transfer-learning (v0.7.0)
     # The package and directions to install it can be found at its repo:
    -# https://github.com/IntelAI/transfer-learning
    +# https://github.com/Intel/transfer-learning
     
     ! pip install --no-cache-dir  nltk docx2txt openpyxl et-xmlfile schema
     
    diff --git a/main/notebooks/Multimodal_Cancer_Detection.ipynb b/main/notebooks/Multimodal_Cancer_Detection.ipynb index c7e5ea6..d28fb99 100644 --- a/main/notebooks/Multimodal_Cancer_Detection.ipynb +++ b/main/notebooks/Multimodal_Cancer_Detection.ipynb @@ -21,7 +21,7 @@ "source": [ "# This notebook requires the latest version of intel-transfer-learning (v0.7.0)\n", "# The package and directions to install it can be found at its repo:\n", - "# https://github.com/IntelAI/transfer-learning\n", + "# https://github.com/Intel/transfer-learning\n", "\n", "! pip install --no-cache-dir nltk docx2txt openpyxl et-xmlfile schema" ] diff --git a/main/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html b/main/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html index 16cafcc..dc3fbd9 100644 --- a/main/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html +++ b/main/notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/TorchVision_CIFAR_Interpret.html b/main/notebooks/TorchVision_CIFAR_Interpret.html index 427893b..5e5d71a 100644 --- a/main/notebooks/TorchVision_CIFAR_Interpret.html +++ b/main/notebooks/TorchVision_CIFAR_Interpret.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/adult-pytorch-model-card.html b/main/notebooks/adult-pytorch-model-card.html index f0553df..ee305f5 100644 --- a/main/notebooks/adult-pytorch-model-card.html +++ b/main/notebooks/adult-pytorch-model-card.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/compas-model-card-tfx.html b/main/notebooks/compas-model-card-tfx.html index 77a2749..bc1a659 100644 --- a/main/notebooks/compas-model-card-tfx.html +++ b/main/notebooks/compas-model-card-tfx.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/heart_disease.html b/main/notebooks/heart_disease.html index 65ed8ac..83a283e 100644 --- a/main/notebooks/heart_disease.html +++ b/main/notebooks/heart_disease.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/mnist.html b/main/notebooks/mnist.html index 8d3721e..6635b41 100644 --- a/main/notebooks/mnist.html +++ b/main/notebooks/mnist.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/partitionexplainer.html b/main/notebooks/partitionexplainer.html index 210267c..5bfa5c9 100644 --- a/main/notebooks/partitionexplainer.html +++ b/main/notebooks/partitionexplainer.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/notebooks/toxicity-tfma-model-card.html b/main/notebooks/toxicity-tfma-model-card.html index 40dfb2c..ce35ff1 100644 --- a/main/notebooks/toxicity-tfma-model-card.html +++ b/main/notebooks/toxicity-tfma-model-card.html @@ -61,7 +61,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/overview.html b/main/overview.html index 6e76c43..b545348 100644 --- a/main/overview.html +++ b/main/overview.html @@ -60,7 +60,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • @@ -92,7 +92,7 @@

    Overview -
  • Model Card Generator

    +
  • Model Card Generator

    • Creates interactive HTML reports containing model performance and fairness metrics

    @@ -101,9 +101,9 @@

    Overview
    Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • -
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • -
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • +
  • Attributions: Visualize negative and positive attributions of tabular features, pixels, and word tokens for predictions

  • +
  • CAM (Class Activation Mapping): Create heatmaps for CNN image classifications using gradient-weight class activation CAM mapping

  • +
  • Metrics: Gain insight into models with the measurements and visualizations needed during the machine learning workflow

  • diff --git a/main/search.html b/main/search.html index ff91cc8..756ec93 100644 --- a/main/search.html +++ b/main/search.html @@ -60,7 +60,7 @@
  • Model Card Generator
  • Example Notebooks
  • Legal Information
  • -
  • GitHub Repository
  • +
  • GitHub Repository
  • diff --git a/main/searchindex.js b/main/searchindex.js index 2cc95ea..e511f8d 100644 --- a/main/searchindex.js +++ b/main/searchindex.js @@ -1 +1 @@ -Search.setIndex({"docnames": ["datasets", "explainer/attributions", "explainer/cam", "explainer/index", "explainer/metrics", "index", "install", "legal", "markdown/Install", "markdown/Legal", "markdown/Overview", "markdown/Welcome", "model_card_gen/api", "model_card_gen/example", "model_card_gen/index", "notebooks", "notebooks/ExplainingImageClassification", "notebooks/Multimodal_Cancer_Detection", "notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions", "notebooks/TorchVision_CIFAR_Interpret", "notebooks/adult-pytorch-model-card", "notebooks/compas-model-card-tfx", "notebooks/heart_disease", "notebooks/mnist", "notebooks/partitionexplainer", "notebooks/toxicity-tfma-model-card", "overview"], "filenames": ["datasets.rst", "explainer/attributions.md", "explainer/cam.md", "explainer/index.md", "explainer/metrics.md", "index.md", "install.rst", "legal.rst", "markdown/Install.md", "markdown/Legal.md", "markdown/Overview.md", "markdown/Welcome.md", "model_card_gen/api.rst", "model_card_gen/example.md", "model_card_gen/index.md", "notebooks.rst", "notebooks/ExplainingImageClassification.nblink", "notebooks/Multimodal_Cancer_Detection.nblink", "notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.nblink", "notebooks/TorchVision_CIFAR_Interpret.nblink", "notebooks/adult-pytorch-model-card.nblink", "notebooks/compas-model-card-tfx.nblink", "notebooks/heart_disease.nblink", "notebooks/mnist.nblink", "notebooks/partitionexplainer.nblink", "notebooks/toxicity-tfma-model-card.nblink", "overview.rst"], "titles": ["Datasets", "<no title>", "<no title>", "Explainer", "API Refrence", "Intel\u00ae Explainable AI Tools", "Installation", "Legal Information", "Installation", "Legal Information", "Overview", "Intel\u00ae Explainable AI Tools", "API Reference", "Example Model Card", "Model Card Generator", "Example Notebooks", "Explaining ResNet50 ImageNet Classification Using the CAM Explainer", "Multimodal Breast Cancer Detection Explainability using the Intel\u00ae Explainable AI API", "Explaining Fine Tuned Text Classifier with PyTorch using the Intel\u00ae Explainable AI API", "Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer", "Generating Model Card with PyTorch", "Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators", "Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer", "Explaining Custom CNN MNIST Classification Using the Attributions Explainer", "Explaining Custom NN NewsGroups Classification Using the Attributions Explainer", "Creating Model Card for Toxic Comments Classification in Tensorflow", "Overview"], "terms": {"thi": [0, 5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 20, 21, 22, 25], "i": [0, 3, 5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 19, 20, 21, 25], "comprehens": [0, 14], "list": [0, 5, 6, 8, 11, 17, 18, 20, 21, 23], "public": [0, 14, 21, 25], "us": [0, 3, 5, 6, 7, 8, 9, 10, 11, 15, 20, 21, 25, 26], "repositori": [0, 5, 6, 8, 10, 11, 14, 17, 18, 20, 26], "name": [0, 5, 6, 8, 10, 11, 14, 15, 16, 17, 18, 20, 21, 25, 26], "link": [0, 5, 6, 8, 11, 14], "sourc": [0, 5, 6, 7, 8, 9, 11, 18, 25], "framework": [0, 14, 15, 17], "case": [0, 5, 6, 8, 11, 14, 15, 17, 18, 21], "adult": [0, 20], "incom": 0, "pytorch": [0, 3, 5, 6, 8, 10, 11, 14, 15, 19, 26], "tabular": [0, 3, 5, 10, 11, 15, 26], "classif": [0, 3, 5, 10, 11, 15, 18, 21, 26], "cdd": 
[0, 17], "cesm": [0, 17], "imag": [0, 3, 5, 10, 11, 15, 19, 26], "text": [0, 15, 25], "cifar": [0, 15], "10": [0, 5, 6, 8, 11, 15, 17, 18, 20, 21, 22, 25], "torchvis": [0, 16, 19, 23], "tensorflow": [0, 3, 5, 6, 8, 10, 11, 14, 15, 17, 18, 22, 24, 26], "civil": [0, 25], "comment": [0, 15], "tfd": 0, "compa": [0, 14, 21], "recidiv": [0, 14, 21], "risk": [0, 14, 21], "score": [0, 14, 16, 21, 25], "data": [0, 5, 6, 7, 8, 9, 11, 14, 18, 19, 21], "analysi": [0, 14, 18, 21], "imagenet": [0, 15], "imdb": [0, 18], "review": [0, 14, 18], "mnist": [0, 15], "sm": [0, 18], "spam": [0, 18], "collect": [0, 14, 18, 23], "python": [3, 5, 6, 8, 10, 11, 14, 17, 21, 26], "modul": [3, 5, 10, 11, 17, 19, 20, 21, 23, 26], "intel": [3, 6, 7, 8, 9, 10, 14, 15, 20, 21, 25, 26], "ai": [3, 6, 7, 8, 9, 10, 15, 21, 25, 26], "tool": [3, 6, 7, 8, 9, 10, 14, 15, 21, 26], "provid": [3, 5, 6, 7, 8, 9, 11, 14, 20, 21, 25], "method": [3, 5, 10, 11, 16, 19, 26], "model": [3, 10, 19, 23, 26], "compos": [3, 19, 23], "add": [3, 14, 17, 21, 25], "minim": [3, 25], "code": [3, 5, 6, 7, 8, 9, 11, 18, 21], "extens": [3, 17, 18], "easi": 3, "new": [3, 17, 25], "commun": 3, "contribut": [3, 5, 6, 7, 8, 9, 11], "welcom": 3, "attribut": [3, 5, 10, 11, 15, 17, 18, 25, 26], "visual": [3, 5, 10, 11, 17, 18, 19, 21, 26], "neg": [3, 5, 10, 11, 21, 25, 26], "posit": [3, 5, 10, 11, 21, 25, 26], "featur": [3, 10, 14, 16, 17, 18, 19, 20, 21, 25, 26], "pixel": [3, 5, 10, 11, 26], "word": [3, 5, 10, 11, 17, 18, 25, 26], "token": [3, 5, 10, 11, 17, 18, 24, 26], "predict": [3, 5, 10, 11, 14, 17, 19, 20, 21, 22, 26], "cam": [3, 5, 10, 11, 15, 17, 26], "creat": [3, 7, 9, 10, 14, 15, 17, 18, 21, 26], "heatmap": [3, 5, 10, 11, 26], "cnn": [3, 5, 10, 11, 15, 17, 26], "gradient": [3, 5, 10, 11, 19, 26], "weight": [3, 5, 10, 11, 16, 18, 25, 26], "class": [3, 5, 10, 11, 14, 16, 17, 18, 19, 20, 21, 24, 25, 26], "activ": [3, 10, 21, 22, 24, 26], "map": [3, 5, 10, 11, 14, 17, 18, 20, 21, 24, 25, 26], "api": [3, 5, 6, 8, 10, 11, 15, 21, 26], "refrenc": 3, "gain": [3, 5, 10, 11, 20, 26], "insight": [3, 5, 10, 11, 26], "measur": [3, 5, 10, 11, 25, 26], "need": [3, 5, 10, 11, 16, 17, 18, 21, 26], "dure": [3, 5, 10, 11, 26], "machin": [3, 5, 10, 11, 18, 20, 21, 25, 26], "learn": [3, 5, 10, 11, 14, 15, 18, 20, 21, 25, 26], "workflow": [3, 5, 10, 11, 26], "scientist": [5, 11], "mlop": [5, 11], "engin": [5, 11, 21], "have": [5, 6, 8, 11, 14, 16, 17, 18, 21, 22, 25], "interpret": [5, 10, 11, 26], "The": [5, 6, 8, 10, 11, 14, 16, 17, 18, 21, 25, 26], "ar": [5, 6, 7, 8, 9, 10, 11, 14, 16, 17, 18, 21, 25, 26], "design": [5, 10, 11, 26], "help": [5, 10, 11, 17, 26], "user": [5, 10, 11, 14, 17, 21, 25, 26], "detect": [5, 10, 11, 15, 25, 26], "mitig": [5, 10, 11, 26], "against": [5, 10, 11, 21, 26], "issu": [5, 6, 8, 10, 11, 15, 17, 26], "fair": [5, 10, 11, 14, 15, 20, 25, 26], "while": [5, 10, 11, 17, 21, 26], "best": [5, 10, 11, 26], "hardwar": [5, 10, 11, 18, 24, 26], "There": [5, 6, 8, 10, 11, 21, 26], "two": [5, 6, 8, 10, 11, 17, 18, 21, 26], "compon": [5, 10, 11, 21, 26], "card": [5, 6, 8, 10, 11, 26], "gener": [5, 6, 8, 10, 11, 17, 18, 25, 26], "interact": [5, 10, 11, 14, 21, 26], "html": [5, 10, 11, 14, 20, 21, 26], "report": [5, 6, 8, 10, 11, 14, 17, 21, 23, 24, 26], "contain": [5, 6, 8, 10, 11, 14, 21, 25, 26], "perform": [5, 6, 7, 8, 9, 10, 11, 14, 17, 18, 21, 26], "metric": [5, 10, 11, 14, 17, 18, 20, 21, 22, 23, 24, 25, 26], "post": [5, 10, 11, 25, 26], "hoc": [5, 10, 11, 26], "distil": [5, 10, 11, 26], "examin": [5, 10, 11, 26], "behavior": [5, 10, 11, 26], 
"both": [5, 10, 11, 16, 18, 21, 25, 26], "via": [5, 10, 11, 14, 17, 26], "simpl": [5, 10, 11, 26], "includ": [5, 10, 11, 14, 17, 18, 21, 25, 26], "follow": [5, 6, 8, 10, 11, 14, 17, 18, 21, 26], "linux": [5, 6, 8, 11], "system": [5, 6, 8, 11, 17, 18, 20, 21], "wsl2": [5, 6, 8, 11], "window": [5, 6, 8, 11, 24], "valid": [5, 6, 8, 11, 17, 18, 21], "ubuntu": [5, 6, 8, 11], "20": [5, 6, 8, 11, 17, 18, 20, 21, 23], "04": [5, 6, 8, 11], "22": [5, 6, 8, 11], "lt": [5, 6, 8, 11], "3": [5, 6, 8, 11, 14, 17, 19, 21, 24], "8": [5, 6, 8, 11, 18, 20, 23, 25], "9": [5, 6, 8, 11, 17, 19], "o": [5, 6, 8, 11, 17, 18, 19, 20, 21, 24, 25], "packag": [5, 6, 8, 11, 14, 17], "apt": [5, 6, 8, 11], "build": [5, 6, 8, 11, 21], "essenti": [5, 6, 8, 11], "dev": [5, 6, 8, 11, 25], "git": [5, 6, 8, 11, 14], "onli": [5, 6, 8, 11, 14, 16, 18, 21, 25], "instruct": [5, 6, 8, 11, 18], "safeti": [5, 6, 8, 11], "librari": [5, 6, 8, 11], "clone": [5, 6, 8, 11, 14], "github": [5, 6, 8, 11, 14, 16, 17], "can": [5, 6, 8, 11, 14, 16, 17, 18, 21, 25], "done": [5, 6, 8, 11], "instead": [5, 6, 8, 11, 17, 18], "basic": [5, 6, 8, 11], "pip": [5, 6, 8, 11, 14, 17, 21], "you": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 21, 25], "plan": [5, 6, 8, 11], "make": [5, 6, 8, 11, 17, 21, 22], "chang": [5, 6, 8, 11, 18], "repo": [5, 6, 8, 11, 17], "navig": [5, 6, 8, 11, 14], "directori": [5, 6, 8, 11, 14, 18, 21], "allow": [5, 6, 8, 11, 14, 21], "envion": [5, 6, 8, 11], "venv": [5, 6, 8, 11], "current": [5, 6, 8, 11, 17, 25], "lock": [5, 6, 8, 11], "In": [5, 6, 8, 11, 14, 18, 21], "addtion": [5, 6, 8, 11], "explicitli": [5, 6, 8, 11], "tell": [5, 6, 8, 11, 14], "which": [5, 6, 8, 11, 14, 16, 18, 21], "instanc": [5, 6, 8, 11, 14, 17, 21], "env": [5, 6, 8, 11], "full": [5, 6, 8, 11, 18], "path": [5, 6, 8, 11, 14, 16, 17, 18, 19, 21], "choos": [5, 6, 8, 11, 16], "intel_ai_safeti": [5, 6, 8, 11, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], "subpackag": [5, 6, 8, 11], "plugin": [5, 6, 8, 11, 25], "wish": [5, 6, 8, 11], "all": [5, 6, 8, 11, 14, 17, 18, 21], "its": [5, 6, 8, 11, 14, 17], "e": [5, 6, 8, 11, 14, 18, 25], "g": [5, 6, 8, 11, 14, 18, 21, 25], "model_card_gen": [5, 6, 8, 11, 14, 20, 21, 25], "extra": [5, 6, 8, 11, 18], "b": [5, 6, 8, 11, 17], "just": [5, 6, 8, 11, 17, 18], "c": [5, 6, 8, 11, 14, 21, 25], "d": [5, 6, 8, 11, 17, 18, 19], "implement": [5, 6, 8, 11, 21], "f": [5, 6, 8, 11, 14, 17, 18, 19, 21, 23], "bin": [5, 6, 8, 11], "we": [5, 6, 8, 11, 14, 16, 17, 18, 21, 25], "encourag": [5, 6, 8, 11], "virtualenv": [5, 6, 8, 11], "conda": [5, 6, 8, 11], "consist": [5, 6, 8, 11, 25], "manag": [5, 6, 8, 11, 14, 21], "wai": [5, 6, 8, 11, 25], "do": [5, 6, 8, 11, 17, 18, 21], "m": [5, 6, 8, 11, 14, 17, 21, 24], "xai_env": [5, 6, 8, 11], "Or": [5, 6, 8, 11], "config": [5, 6, 8, 11, 14, 17, 20, 21], "fals": [5, 6, 8, 11, 14, 17, 19, 20, 21, 23, 25], "mai": [5, 6, 8, 10, 11, 15, 18, 25, 26], "depend": [5, 6, 8, 11, 14, 16], "associ": [5, 6, 7, 8, 9, 11, 18, 25], "document": [5, 6, 8, 11, 14, 18, 25], "your": [5, 6, 7, 8, 9, 11, 16, 17, 18, 21], "wa": [5, 6, 8, 11, 16, 17, 18, 21], "success": [5, 6, 8, 11], "command": [5, 6, 8, 11], "displai": [5, 6, 8, 11, 18], "version": [5, 6, 7, 8, 9, 11, 14, 17, 20, 25], "from": [5, 6, 8, 11, 14, 15, 16, 17, 19, 22, 24, 25], "import": [5, 6, 8, 11, 14, 16, 19, 20, 22, 23, 24, 25], "print": [5, 6, 8, 11, 16, 17, 18, 19, 20, 22, 23, 24], "__version__": [5, 6, 8, 11, 22], "jupyt": [5, 6, 8, 11], "show": [5, 6, 8, 11, 17, 19, 21], "how": [5, 6, 8, 11, 14, 16, 18, 21], "variou": [5, 6, 8, 11, 16], "ml": [5, 6, 8, 11, 18, 
21, 25], "domain": [5, 6, 8, 11, 15, 17], "team": [5, 6, 8, 11, 14, 21, 25], "track": [5, 6, 8, 11], "bug": [5, 6, 8, 11], "enhanc": [5, 6, 8, 11, 17, 20], "request": [5, 6, 8, 11, 16, 19], "befor": [5, 6, 8, 11, 18], "submit": [5, 6, 8, 11], "suggest": [5, 6, 8, 11], "search": [5, 6, 8, 11], "see": [5, 6, 8, 11, 17, 18, 21, 25], "ha": [5, 6, 8, 11, 17, 18, 20, 21], "alreadi": [5, 6, 8, 11, 17, 18, 25], "been": [5, 6, 8, 11, 14, 21], "other": [5, 6, 8, 10, 11, 15, 17, 25, 26], "brand": [5, 6, 8, 10, 11, 15, 26], "claim": [5, 6, 8, 10, 11, 15, 26], "properti": [5, 6, 8, 10, 11, 15, 26], "trademark": [5, 6, 8, 10, 11, 15, 26], "These": [5, 6, 7, 8, 9, 11, 21, 25], "script": [5, 6, 7, 8, 9, 11, 20], "intend": [5, 6, 7, 8, 9, 11, 14, 20, 21], "benchmark": [5, 6, 7, 8, 9, 11], "platform": [5, 6, 7, 8, 9, 11, 25], "For": [5, 6, 7, 8, 9, 11, 14, 16, 18, 21, 25], "ani": [5, 6, 7, 8, 9, 11, 14, 17, 25], "inform": [5, 6, 8, 11, 14, 17, 18, 20, 21, 25], "visit": [5, 6, 7, 8, 9, 11], "http": [5, 6, 7, 8, 9, 11, 14, 16, 17, 20, 21, 22, 25], "www": [5, 6, 7, 8, 9, 11, 18, 25], "blog": [5, 6, 7, 8, 9, 11], "commit": [5, 6, 7, 8, 9, 11, 21], "respect": [5, 6, 7, 8, 9, 11, 25], "human": [5, 6, 7, 8, 9, 11, 18], "right": [5, 6, 7, 8, 9, 11], "avoid": [5, 6, 7, 8, 9, 11], "complic": [5, 6, 7, 8, 9, 11], "abus": [5, 6, 7, 8, 9, 11, 25], "polici": [5, 6, 7, 8, 9, 11], "reflect": [5, 6, 7, 8, 9, 11], "global": [5, 6, 7, 8, 9, 11], "principl": [5, 6, 7, 8, 9, 11], "accordingli": [5, 6, 7, 8, 9, 11], "access": [5, 6, 7, 8, 9, 11, 25], "materi": [5, 6, 7, 8, 9, 11], "agre": [5, 6, 7, 8, 9, 11], "product": [5, 6, 7, 8, 9, 11], "applic": [5, 6, 7, 8, 9, 11, 14, 16, 17, 21], "caus": [5, 6, 7, 8, 9, 11], "violat": [5, 6, 7, 8, 9, 11], "an": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 20, 21, 25], "internation": [5, 6, 7, 8, 9, 11], "recogn": [5, 6, 7, 8, 9, 11, 25], "under": [5, 6, 7, 8, 9, 11, 25], "apach": [5, 6, 7, 8, 9, 11], "2": [5, 6, 7, 8, 9, 11, 14, 16, 19, 21, 22, 25], "0": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], "To": [5, 6, 7, 8, 9, 11, 17, 21, 25], "extent": [5, 6, 7, 8, 9, 11], "referenc": [5, 6, 7, 8, 9, 11], "site": [5, 6, 7, 8, 9, 11, 25], "third": [5, 6, 7, 8, 9, 11], "parti": [5, 6, 7, 8, 9, 11], "indic": [5, 6, 7, 8, 9, 11, 21, 23, 25], "content": [5, 6, 7, 8, 9, 11, 14, 16], "doe": [5, 6, 7, 8, 9, 11, 14, 17, 25], "warrant": [5, 6, 7, 8, 9, 11], "accuraci": [5, 6, 7, 8, 9, 11, 14, 17, 18, 23, 24], "qualiti": [5, 6, 7, 8, 9, 11, 21], "By": [5, 6, 7, 8, 9, 11], "": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 21, 22, 25], "term": [5, 6, 7, 8, 9, 11, 25], "compli": [5, 6, 7, 8, 9, 11], "expressli": [5, 7, 9, 11], "adequaci": [5, 7, 9, 11], "complet": [5, 7, 9, 11], "liabl": [5, 7, 9, 11], "error": [5, 7, 9, 11, 25], "omiss": [5, 7, 9, 11], "defect": [5, 7, 9, 11], "relianc": [5, 7, 9, 11], "thereon": [5, 7, 9, 11], "also": [5, 7, 9, 11, 17, 18, 25], "warranti": [5, 7, 9, 11], "non": [5, 7, 9, 11, 25], "infring": [5, 7, 9, 11], "liabil": [5, 7, 9, 11], "damag": [5, 7, 9, 11], "relat": [5, 7, 9, 11], "get": [6, 8, 16, 19, 23], "explain": [6, 7, 8, 9, 10, 26], "specif": [7, 9, 14, 16, 21], "run": [10, 17, 18, 21, 23, 26], "section": [14, 17], "subsect": 14, "decript": 14, "detail": [14, 25], "overview": [14, 20, 21, 25], "A": [14, 17, 20, 21, 25], "brief": 14, "one": [14, 17, 18, 21], "line": 14, "descript": [14, 21], "thorough": 14, "usag": 14, "owner": [14, 21, 25], "individu": [14, 21], "who": 14, "own": 14, "schema": [14, 17, 21], "licens": 14, "refer": [14, 18, 21, 25], "more": [14, 17, 18, 
21, 25], "about": [14, 16, 18, 21, 25], "citat": [14, 20], "where": [14, 17, 18, 21, 25], "store": [14, 21], "graphic": [14, 20, 21, 24, 25], "paramet": [14, 17, 19, 20, 23], "architectur": 14, "dataset": [14, 16, 19, 20, 24, 25], "train": [14, 16, 17, 18, 19, 21], "evalu": [14, 17, 21, 22, 25], "format": [14, 17, 18, 20, 21, 23, 24], "kei": [14, 17, 18, 21, 25], "valu": [14, 17, 18, 20, 21, 22, 25], "output": [14, 17, 18, 19, 20, 21, 23], "quantit": 14, "being": [14, 18, 25], "colleciton": 14, "consider": [14, 21, 25], "what": [14, 17], "limit": [14, 25], "known": 14, "technic": 14, "kind": [14, 25], "should": [14, 17, 18, 21], "expect": [14, 18], "well": [14, 17, 25], "factor": 14, "might": 14, "degrad": 14, "tradeoff": 14, "ethic": [14, 21, 25], "environment": 14, "involv": 14, "step": [14, 17, 18, 19, 20, 21, 23, 25], "1": [14, 16, 19, 21, 22, 25], "com": [14, 16, 17, 21, 22, 25], "intelai": [14, 17], "xai": [14, 21, 25], "cd": 14, "modelcardgen": [14, 20, 21, 25], "classmethod": [14, 18], "requir": [14, 17, 18, 21], "three": [14, 17], "return": [14, 17, 18, 19, 20, 21, 23, 24, 25], "data_set": [14, 20, 21, 25], "dict": [14, 21, 24], "dictionari": [14, 17, 21], "defin": [14, 17, 18, 21, 25], "tfrecord": [14, 21, 25], "raw": [14, 16, 18, 21, 25], "datafram": [14, 17, 18, 21], "eval": [14, 17, 18, 19, 21, 23, 25], "tensorflowdataset": [14, 21, 25], "dataset_path": [14, 21], "file": [14, 17, 18, 19, 21], "glob": 14, "pattern": [14, 21], "pytorchdataset": [14, 20], "pytorch_dataset": 14, "feature_nam": [14, 20, 24], "panda": [14, 17, 18, 20, 21, 22, 25], "pd": [14, 17, 18, 20, 21, 22, 25], "darafram": 14, "y_true": [14, 17, 23], "y_pred": [14, 17, 23], "ypred": 14, "model_path": [14, 20, 21, 25], "str": [14, 17, 18, 21], "field": [14, 21], "repres": [14, 21, 25], "savedmodel": [14, 21], "eval_config": [14, 20, 21, 25], "tfma": [14, 21, 25], "evalconfig": [14, 21], "either": [14, 18], "proto": [14, 20, 21, 25], "string": [14, 17, 18, 21, 25], "pars": [14, 21], "exampl": [14, 16, 17, 18, 20, 21, 25], "let": [14, 16, 17, 22], "u": [14, 17, 21], "entitl": 14, "proxi": [14, 21], "found": [14, 17, 19, 21, 25], "notebook": [14, 17, 18, 20, 21], "compas_with_model_card_gen": 14, "tfx": 14, "ipynb": [14, 16], "model_spec": [14, 20, 21, 25], "label_kei": [14, 18, 20, 21], "ground": [14, 21], "truth": [14, 21], "label": [14, 17, 18, 19, 20, 21, 25], "metric_spec": 14, "comput": [14, 16, 18, 21, 25], "binaryaccuraci": [14, 20, 21, 25], "auc": [14, 20, 21], "confusionmatrixplot": [14, 20, 21, 25], "fairnessind": [14, 20, 21, 25], "slicing_spec": [14, 20, 21, 25], "accross": [14, 25], "datapoint": 14, "aggreg": 14, "group": [14, 25], "race": [14, 20, 21, 25], "is_recid": [14, 21], "metrics_spec": [14, 20, 21, 25], "class_nam": [14, 17, 20, 21, 25], "threshold": [14, 20, 21], "25": [14, 18, 20, 21], "5": [14, 17, 19, 21, 24, 25], "75": [14, 20, 21], "overal": [14, 25], "slice": [14, 25], "feature_kei": [14, 20, 21, 25], "option": [14, 20, 21], "include_default_metr": [14, 20, 21], "If": [14, 16, 17, 18], "must": 14, "prediction_kei": [14, 20], "popul": 14, "object": [14, 17, 18, 21, 23], "serial": [14, 21, 25], "deseri": 14, "json": 14, "v": [14, 18, 21], "model_card": [14, 20, 21, 25], "static": 14, "like": [14, 17, 21, 25], "those": [14, 25], "model_detail": [14, 20, 21, 25], "mc": [14, 20, 21, 25], "variabl": [14, 18, 22], "below": [14, 18], "ad": [14, 17, 25], "pre": [14, 18, 21], "long": 14, "coher": 14, "correct": [14, 17, 21], "offend": [14, 21], "profil": [14, 21], "altern": [14, 21], "sanction": 
[14, 21], "approxim": [14, 21, 25], "18": [14, 17, 21], "000": [14, 18, 20, 21], "crimin": [14, 21], "broward": [14, 21], "counti": [14, 21], "florida": [14, 21], "between": [14, 17, 21, 25], "januari": [14, 21], "2013": [14, 17, 21], "decemb": [14, 17, 21], "2014": [14, 21], "11": [14, 21], "uniqu": [14, 21, 24], "defend": [14, 21], "histori": [14, 21, 24], "demograph": [14, 20, 21], "likelihood": [14, 21], "reoffend": [14, 21], "contact": [14, 21, 25], "wadsworth": [14, 21], "vera": [14, 21], "piech": [14, 21], "2017": [14, 21, 25], "achiev": [14, 21], "through": [14, 20, 21, 25], "adversari": [14, 20, 21], "arxiv": [14, 21, 25], "org": [14, 18, 20, 21, 22, 25], "ab": [14, 21, 25], "1807": [14, 21], "00199": [14, 21], "chouldechova": [14, 21], "sell": [14, 21], "fairer": [14, 21], "accur": [14, 21], "whom": [14, 21], "1707": [14, 21], "00046": [14, 21], "berk": [14, 21], "et": [14, 17, 20, 21], "al": [14, 20, 21], "justic": [14, 21], "assess": [14, 21], "state": [14, 16, 20, 21], "art": [14, 16, 21], "1703": [14, 21], "09207": [14, 21], "quantitative_analysi": [14, 21, 25], "schema_vers": [14, 20, 21, 25], "here": [14, 16, 21], "doc": 14, "model_card_exampl": 14, "data_path": [14, 21], "mcg": [14, 20, 21, 25], "_data_path": [14, 21, 25], "_model_path": [14, 21, 25], "_eval_config": [14, 20, 21, 25], "pytest": 14, "custom": [14, 15, 17, 21], "mark": 14, "common": [14, 16, 17, 21, 25], "note": [14, 17, 18, 21, 25], "still": 14, "libarari": 14, "resnet50": [15, 17], "cv": 15, "neural": [15, 20], "network": [15, 20], "heart": 15, "diseas": 15, "numer": [15, 18], "categor": [15, 17], "multimod": 15, "breast": 15, "cancer": 15, "nlp": 15, "huggingfac": [15, 17], "transfer": 15, "nn": [15, 19, 20, 23], "newsgroup": 15, "fine": 15, "tune": 15, "classifi": [15, 25], "estim": [15, 22, 25], "toxic": 15, "goal": 16, "explor": 16, "now": [16, 17, 18], "support": 16, "pt_cam": [16, 17], "torch": [16, 17, 18, 19, 20, 23], "numpi": [16, 17, 18, 19, 20, 23, 24, 25], "np": [16, 17, 18, 19, 20, 23, 24, 25], "resnet50_weight": 16, "matplotlib": [16, 17, 19], "pyplot": [16, 17, 19], "plt": [16, 17, 19], "arrai": [16, 17, 20, 23, 24], "rgb": 16, "order": [16, 24], "pil": 16, "io": [16, 17, 21, 25], "bytesio": 16, "respons": 16, "githubusercont": 16, "jacobgil": 16, "grad": [16, 19], "master": 16, "png": 16, "open": [16, 25], "imshow": [16, 17, 19], "save": [16, 18, 19, 21], "imagenet1k_v2": 16, "our": [16, 17, 18, 20, 21, 22, 25], "target": [16, 20, 21, 22, 23, 24, 25], "layer": [16, 17, 21, 22, 24], "normal": [16, 17, 19], "last": [16, 18, 25], "convolut": 16, "simpli": 16, "give": 16, "some": [16, 17, 18, 21, 25], "idea": [16, 17], "choic": 16, "fasterrcnn": 16, "backbon": 16, "resnet18": 16, "50": [16, 17, 20, 23, 25], "layer4": [16, 17], "vgg": 16, "densenet161": 16, "target_lay": 16, "specifi": [16, 17, 18, 21], "integ": [16, 18], "index": [16, 17, 21, 23], "rang": [16, 17, 18, 19, 20, 23, 25], "num_of_class": 16, "base": [16, 17, 18, 21], "tabbi": 16, "cat": [16, 19, 23], "281": 16, "targetclass": 16, "none": [16, 17, 18, 19, 20, 21, 25], "highest": 16, "categori": [16, 24], "target_class": 16, "image_dim": 16, "224": [16, 17], "xgc": [16, 17], "x_gradcam": [16, 17], "cpu": [16, 17, 18, 19, 23], "project": 16, "tf_cam": 16, "inlin": [16, 19], "tf": [16, 17, 19, 21, 22, 25], "urllib": [16, 19], "urlopen": [16, 19], "kera": [16, 21, 22, 24, 25], "get_lay": 16, "conv5_block3_out": 16, "tfgc": 16, "tf_gradcam": 16, "ismailuddin": 16, "gradcam": [16, 17], "blob": 16, "solut": 17, "diagnosi": 17, "contrast": 
17, "mammographi": 17, "radiologi": 17, "It": [17, 18, 21], "latest": 17, "v0": 17, "7": [17, 18, 21], "direct": [17, 21], "instal": [17, 18], "cach": [17, 18, 21], "dir": [17, 21], "nltk": 17, "docx2txt": 17, "openpyxl": 17, "xmlfile": 17, "transform": [17, 19, 20, 21, 22, 23, 24], "evalpredict": 17, "trainingargu": [17, 18], "pipelin": 17, "tlt": [17, 18], "dataset_factori": 17, "model_factori": 17, "plotli": 17, "express": 17, "px": 17, "subplot": 17, "make_subplot": 17, "graph_object": 17, "go": [17, 25], "shap": [17, 18, 22], "warn": [17, 18, 22, 24], "filterwarn": [17, 18, 22, 24], "ignor": [17, 18, 22, 24, 25], "root": [17, 19], "annot": [17, 25], "locat": [17, 18, 21], "dataset_dir": [17, 18], "join": [17, 18, 19, 21], "environ": [17, 18, 24], "els": [17, 18, 19, 21], "home": [17, 18, 21], "output_dir": [17, 18], "download": [17, 18, 19, 22, 23], "wiki": 17, "cancerimagingarch": 17, "net": [17, 19, 20, 23], "page": [17, 18, 25], "viewpag": 17, "action": 17, "pageid": 17, "109379611": 17, "brca": 17, "prepare_nlp_data": 17, "py": [17, 21], "data_root": 17, "prepare_vision_data": 17, "jpg": 17, "arrang": 17, "subfold": 17, "each": 17, "csv": [17, 18, 21, 22], "final": 17, "look": [17, 22], "someth": 17, "pkg": 17, "medic": 17, "zip": [17, 18, 21, 24], "manual": 17, "xlsx": 17, "radiology_hand_drawn_segmentations_v2": 17, "vision_imag": 17, "benign": 17, "p100_l_cm_cc": 17, "p100_l_cm_mlo": 17, "malign": 17, "p102_r_cm_cc": 17, "p102_r_cm_mlo": 17, "p100_r_cm_cc": 17, "p100_r_cm_mlo": 17, "input": [17, 18, 19, 21, 24, 25], "suppli": 17, "accord": 17, "source_image_path": 17, "image_path": 17, "source_annotation_path": 17, "annotation_path": 17, "workload": 17, "assign": [17, 25], "subject": 17, "record": [17, 21], "entir": 17, "set": [17, 18, 19, 20, 21, 23, 25], "test": [17, 18, 24], "random": 17, "stratif": 17, "copi": [17, 20, 21], "data_util": 17, "split_imag": 17, "split_annot": 17, "grouped_image_path": 17, "_group": 17, "isdir": 17, "exist": [17, 18, 19, 25], "train_image_path": 17, "test_image_path": 17, "file_dir": 17, "file_nam": 17, "split": [17, 18, 21, 24, 25], "grouped_annotation_path": 17, "splitext": 17, "isfil": [17, 19], "train_dataset": [17, 18, 20, 25], "test_dataset": 17, "to_csv": [17, 21], "4": [17, 19, 21], "_test": 17, "train_annotation_path": 17, "test_annotation_path": 17, "label_col": 17, "column": [17, 18, 21], "call": [17, 21], "factori": 17, "pretrain": [17, 18], "hub": [17, 25], "load": [17, 18, 19, 21], "get_model": 17, "function": [17, 18, 19, 20, 21, 23], "later": [17, 18], "default": [17, 18], "viz_model": 17, "model_nam": [17, 18], "train_viz_dataset": 17, "load_dataset": [17, 18], "use_cas": 17, "image_classif": 17, "test_viz_dataset": 17, "onc": 17, "cell": [17, 18], "preprocess": [17, 18], "subset": [17, 18, 21, 24, 25], "resiz": 17, "them": [17, 21, 25], "match": [17, 21, 23], "batch": [17, 18, 19, 23, 25], "batch_siz": [17, 18, 19, 21, 22, 23, 24], "16": [17, 19], "shuffl": [17, 18, 19, 21, 23], "shuffle_split": 17, "train_pct": 17, "80": [17, 18], "val_pct": 17, "seed": [17, 20], "image_s": 17, "take": [17, 18, 21], "verifi": [17, 18], "correctli": 17, "distribut": [17, 21, 25], "amongst": 17, "confirm": 17, "themselv": 17, "revers": 17, "def": [17, 18, 19, 20, 21, 23, 24, 25], "label_map_func": 17, "elif": 17, "reverse_label_map": 17, "train_label_count": 17, "x": [17, 18, 19, 20, 21, 22, 23, 24, 25], "y": [17, 18, 21, 22, 25], "train_subset": 17, "valid_label_count": 17, "validation_subset": 17, "test_label_count": 17, "datsaet": 17, 
"distrubt": 17, "form": [17, 21, 25], "type": [17, 18, 20, 21, 25], "fig": 17, "row": [17, 21], "col": 17, "spec": [17, 21], "subplot_titl": 17, "add_trac": 17, "pie": 17, "update_layout": 17, "height": 17, "600": 17, "width": 17, "800": 17, "title_text": 17, "get_exampl": 17, "n": [17, 23, 25], "6": [17, 19, 21, 25], "loader": 17, "util": [17, 18, 19, 20, 22, 23, 25], "dataload": [17, 18, 19, 23], "example_imag": 17, "enumer": [17, 19, 23], "label_nam": [17, 18], "int": [17, 18], "len": [17, 18, 20, 23, 24], "append": [17, 21], "break": 17, "plot": [17, 23], "figur": 17, "figsiz": 17, "12": 17, "suptitl": 17, "tensor": [17, 18, 21, 25], "size": [17, 18, 21], "train_example_imag": 17, "idx": [17, 20], "img": [17, 19], "add_subplot": 17, "axi": [17, 18, 20, 21, 23, 24], "off": [17, 24], "tight_layout": 17, "ylabel": 17, "fontsiz": 17, "tick_param": 17, "bottom": 17, "labelbottom": 17, "left": 17, "labelleft": 17, "movedim": 17, "detach": [17, 18, 19], "astyp": 17, "uint8": 17, "valid_example_imag": 17, "vector": [17, 18, 25], "dens": [17, 21, 22, 24], "number": [17, 18, 21], "compil": [17, 21, 22], "epoch": [17, 18, 19, 20, 22, 23, 24], "argument": [17, 18], "extra_lay": 17, "insert": 17, "addit": [17, 25], "1024": 17, "512": [17, 18, 25], "first": [17, 18, 19, 21], "neuron": 17, "second": [17, 18, 20, 21], "viz_histori": 17, "ipex_optim": [17, 18], "validation_viz_metr": 17, "test_viz_metr": 17, "saved_model_dir": 17, "export": [17, 21], "analyz": 17, "confus": [17, 21], "matrix": 17, "roc": 17, "pr": 17, "curv": 17, "identifi": [17, 21, 25], "exibit": 17, "bia": [17, 25], "scipi": [17, 18], "special": [17, 18], "softmax": [17, 18, 19, 20, 23, 24], "logit": [17, 18], "convert": [17, 20, 21], "probabl": [17, 19, 22, 23, 24], "_model": 17, "viz_cm": 17, "confusion_matrix": [17, 23, 24], "plotter": [17, 23, 24], "pr_curv": [17, 23, 24], "roc_curv": [17, 23, 24], "hot": 17, "encod": [17, 18], "y_pred_label": 17, "argmax": [17, 18, 23, 24], "mal_idx": 17, "tolist": [17, 18, 20], "nor_pr": 17, "ben_pr": 17, "mal": 17, "were": [17, 18, 21, 25], "misclassifi": 17, "ben": 17, "mal_classified_as_nor": 17, "intersect": [17, 23], "mal_classified_as_ben": 17, "nor": 17, "mal_as_nor_imag": 17, "mal_as_ben_imag": 17, "skimag": 17, "14": [17, 21], "mal_as_nor": 17, "calcul": 17, "0th": 17, "1st": 17, "10th": 17, "sinc": [17, 18, 21], "thei": [17, 21, 25], "seem": 17, "tnhe": 17, "clearest": 17, "tumor": 17, "final_image_dim": 17, "targetlay": 17, "mal_as_ben": 17, "5th": 17, "11th": 17, "clinic": 17, "bert": [17, 18], "part": [17, 25], "up": [17, 18, 23, 25], "seq_length": 17, "64": [17, 24], "quantization_criterion": 17, "05": 17, "quantization_max_tri": 17, "nlp_model": 17, "train_file_dir": 17, "train_file_nam": 17, "train_nlp_dataset": 17, "text_classif": 17, "dataset_nam": [17, 18], "csv_file_nam": 17, "header": 17, "true": [17, 18, 19, 20, 22, 23, 24], "shuffle_fil": 17, "exclude_col": 17, "test_file_dir": 17, "test_file_nam": 17, "test_nlp_dataset": 17, "hub_nam": 17, "max_length": [17, 18], "67": 17, "33": [17, 20, 21], "across": [17, 25], "sure": 17, "similarli": [17, 18], "punkt": 17, "get_mc_df": 17, "words_list": 17, "ignored_word": 17, "most": [17, 18, 21], "frequency_dict": 17, "freqdist": 17, "most_common": 17, "500": [17, 20, 25], "final_fd": 17, "frequenc": 17, "cnt": 17, "punctuat": 17, "loc": 17, "df": [17, 20, 22], "read_csv": [17, 21, 22], "symptom": 17, "mal_text": 17, "nor_text": 17, "ben_text": 17, "mal_token": 17, "word_token": 17, "nor_token": 17, "ben_token": 17, "necesarri": 
17, "mal_fd": 17, "nor_fd": 17, "ben_fd": 17, "bar": [17, 18], "color": 17, "titl": [17, 18, 19, 25], "updat": [17, 18], "layout_coloraxis_showscal": 17, "trainer": [17, 21], "desir": 17, "nativ": [17, 20], "loop": [17, 18, 19], "invok": 17, "use_train": 17, "set_se": 17, "nlp_histori": 17, "isn": 17, "t": [17, 18, 21, 25], "train_nlp_metr": 17, "test_nlp_metr": 17, "much": [17, 21, 25], "better": 17, "than": [17, 18, 20, 21], "nonetheless": 17, "similar": 17, "mistak": [17, 21], "flag": 17, "logit_predict": 17, "return_raw": 17, "nlp_cm": 17, "mal_classified_as_ben_text": 17, "get_text": 17, "input_id": 17, "encoded_input": [17, 18], "_token": 17, "pad": [17, 18], "return_tensor": [17, 18], "pt": [17, 18, 19, 20], "partition_explain": [17, 18, 24], "partition_text_explain": [17, 18, 24], "r": [17, 18, 24], "w": [17, 18, 24], "faster": 17, "infer": 17, "want": [17, 18, 21], "intel_extension_for_transform": 17, "nlptrainer": 17, "optimizedmodel": 17, "quantizationconfig": 17, "nlptk_metric": 17, "tune_metr": 17, "eval_accuraci": 17, "greater_is_bett": 17, "is_rel": 17, "criterion": [17, 19, 20], "weight_ratio": 17, "quantization_config": 17, "approach": 17, "posttrainingdynam": 17, "max_trial": 17, "compute_metr": [17, 18], "p": [17, 21], "pred": [17, 23, 24], "isinst": [17, 18, 21], "tupl": [17, 18, 21], "label_id": 17, "float32": [17, 25], "mean": 17, "item": [17, 18, 19, 20, 21, 23], "eval_dataset": [17, 18], "quantized_model": 17, "quant_config": 17, "result": [17, 18, 22], "eval_acc": 17, "5f": 17, "save_model": 17, "quantized_bert": 17, "save_pretrain": [17, 18], "same": [17, 18], "stock": 17, "counterpart": [17, 21], "howev": 17, "differ": [17, 18], "quant_cm": 17, "khale": 17, "helal": 17, "alfarghali": 17, "mokhtar": 17, "elkorani": 17, "el": 17, "kassa": 17, "h": 17, "fahmi": 17, "digit": 17, "databas": [17, 18], "low": [17, 21], "energi": 17, "subtract": 17, "spectral": 17, "2021": [17, 25], "archiv": [17, 18, 25], "doi": [17, 20, 25], "7937": 17, "29kw": 17, "ae92": 17, "diagnost": 17, "artifici": 17, "intellig": 17, "research": [17, 21, 25], "2022": [17, 20], "scientif": 17, "volum": [17, 25], "1038": 17, "s41597": 17, "022": 17, "01238": 17, "clark": 17, "k": [17, 18], "vendt": 17, "smith": 17, "freymann": 17, "j": [17, 19], "kirbi": 17, "koppel": 17, "moor": 17, "phillip": 17, "maffitt": 17, "pringl": 17, "tarbox": 17, "l": [17, 18, 25], "prior": 17, "maintain": 17, "oper": [17, 21], "journal": [17, 25], "26": 17, "pp": 17, "1045": 17, "1057": 17, "1007": 17, "s10278": 17, "013": 17, "9622": 17, "demonstr": [18, 21], "catalog": [18, 25], "extend": [18, 25], "optim": [18, 19, 20, 21, 22, 23, 25], "boost": 18, "pleas": [18, 19], "pytorch_requir": 18, "txt": 18, "execut": 18, "assum": [18, 21], "readm": 18, "md": 18, "intel_extension_for_pytorch": 18, "ipex": 18, "log": [18, 21, 23], "sy": [18, 24], "pickl": 18, "tqdm": 18, "auto": [18, 24], "adamw": 18, "classlabel": 18, "load_metr": 18, "datasets_log": 18, "transformers_log": 18, "automodelforsequenceclassif": 18, "autotoken": 18, "get_schedul": 18, "file_util": 18, "download_and_extract_zip_fil": 18, "stream": 18, "stdout": 18, "handler": 18, "_get_library_root_logg": 18, "setstream": 18, "sh": 18, "streamhandl": 18, "set_verbosity_error": 18, "transformers_no_advisory_warn": 18, "albert": 18, "v2": 18, "uncas": 18, "distilbert": 18, "finetun": 18, "sst": 18, "english": [18, 25], "roberta": 18, "anoth": [18, 21], "local": [18, 21], "end": 18, "package_refer": 18, "declar": 18, "from_pretrain": 18, "textclassificationdata": 
18, "along": 18, "helper": 18, "__init__": [18, 19, 20, 23], "self": [18, 19, 20, 23], "sentence1_kei": 18, "sentence2_kei": 18, "class_label": 18, "train_d": 18, "eval_d": 18, "tokenize_funct": 18, "arg": [18, 21], "sentenc": 18, "truncat": 18, "tokenize_dataset": 18, "appli": [18, 21], "tokenized_dataset": 18, "remov": 18, "raw_text_column": 18, "remove_column": 18, "define_train_eval_split": 18, "train_split_nam": 18, "eval_split_nam": 18, "train_siz": 18, "eval_s": 18, "select": 18, "get_label_nam": 18, "rais": 18, "valueerror": 18, "display_sampl": 18, "split_nam": 18, "sample_s": 18, "sampl": [18, 20, 24], "sentence1_sampl": 18, "sentence2_sampl": 18, "label_sampl": 18, "dataset_sampl": 18, "style": 18, "hide_index": 18, "onlin": [18, 25], "avail": [18, 25], "next": [18, 19], "movi": 18, "multipl": [18, 19], "time": [18, 19], "speed": 18, "unsupervis": 18, "so": [18, 21, 25], "hfdstextclassificationdata": 18, "initi": [18, 25], "param": 18, "when": [18, 25], "quicker": 18, "debug": 18, "sentence1": 18, "sentence2": 18, "init": 18, "cache_dir": 18, "train_dataset_s": 18, "1000": [18, 21, 25], "eval_dataset_s": 18, "vari": 18, "skip": 18, "continu": 18, "singl": [18, 21], "tab": 18, "separ": 18, "ham": 18, "messag": 18, "tsv": 18, "pass": 18, "delimit": 18, "etc": [18, 25], "customcsvtextclassificationdata": 18, "data_fil": 18, "train_perc": 18, "eval_perc": 18, "map_funct": 18, "intial": 18, "percentag": 18, "reduc": [18, 25], "identif": 18, "purpos": 18, "decim": 18, "convers": [18, 25], "combin": 18, "cannot": 18, "greater": [18, 20], "column_nam": 18, "num_class": [18, 20], "train_test_split": [18, 21, 22], "test_siz": [18, 21, 22], "modifi": 18, "csv_path": 18, "point": [18, 19], "dataset_url": [18, 25], "ic": 18, "uci": [18, 20], "edu": 18, "00228": 18, "smsspamcollect": 18, "csv_name": 18, "renam": [18, 21], "know": 18, "renamed_csv": 18, "don": 18, "extract": 18, "translat": 18, "map_spam": 18, "constructor": 18, "appropri": 18, "textclassificationmodel": 18, "num_label": [18, 20], "training_arg": 18, "bool": 18, "devic": [18, 23], "given": [18, 21], "otherwis": [18, 21, 25], "lr_schedul": 18, "lambdalr": 18, "num_train_epoch": 18, "callabl": 18, "shuffle_sampl": 18, "becaus": [18, 21, 25], "rename_column": 18, "set_format": 18, "train_dataload": 18, "unpack": 18, "progress": 18, "num_training_step": 18, "progress_bar": 18, "loss": [18, 19, 20, 21, 22, 23, 25], "backward": [18, 19, 20, 23], "zero_grad": [18, 19, 20, 23], "eval_dataload": 18, "no_grad": [18, 23], "dim": [18, 20, 23], "add_batch": 18, "raw_input_text": 18, "_": [18, 19], "max": [18, 19, 23, 24], "prediction_label": 18, "int2str": 18, "result_list": 18, "raw_text_input": 18, "result_df": 18, "cl": [18, 25], "simplic": 18, "checkpoint": 18, "previou": [18, 25], "resum": 18, "overwrite_output_dir": 18, "overwrit": 18, "previous": 18, "head": [18, 25], "origin": [18, 19, 21, 25], "replac": [18, 21], "learning_r": [18, 21, 25], "5e": 18, "lr": [18, 19, 20, 22, 23], "linear": [18, 19, 20, 23], "num_warmup_step": 18, "eval_pr": 18, "evalut": 18, "saw": 18, "after": [18, 21], "reloaded_model": 18, "okai": 18, "finish": [18, 19], "wouldn": [18, 21], "watch": 18, "again": 18, "bad": 18, "definit": 18, "my": 18, "favorit": 18, "highli": 18, "recommend": 18, "text_for_shap": 18, "inproceed": [18, 25], "maa": 18, "etal": [18, 25], "2011": 18, "acl": 18, "hlt2011": 18, "author": [18, 21, 25], "andrew": 18, "dali": 18, "raymond": 18, "pham": 18, "peter": 18, "huang": 18, "dan": 18, "ng": 18, "pott": 18, "christoph": 18, 
"sentiment": 18, "booktitl": [18, 25], "proceed": [18, 20, 25], "49th": 18, "annual": 18, "meet": 18, "linguist": [18, 25], "languag": [18, 25], "technologi": [18, 25], "month": [18, 25], "june": 18, "year": [18, 25], "address": [18, 25], "portland": 18, "oregon": 18, "usa": 18, "publish": [18, 21, 25], "142": 18, "150": [18, 20], "url": [18, 25], "aclweb": 18, "anthologi": 18, "p11": 18, "1015": 18, "misc": [18, 24, 25], "misc_sms_spam_collection_228": 18, "almeida": 18, "tiago": 18, "2012": 18, "howpublish": 18, "totensor": [19, 23], "trainset": 19, "cifar10": 19, "trainload": 19, "num_work": 19, "testset": 19, "testload": 19, "plane": 19, "car": 19, "bird": 19, "deer": 19, "dog": 19, "frog": 19, "hors": 19, "ship": 19, "truck": 19, "super": [19, 20, 23], "conv1": 19, "conv2d": [19, 23], "pool1": 19, "maxpool2d": [19, 23], "pool2": 19, "conv2": 19, "fc1": 19, "120": 19, "fc2": 19, "84": 19, "fc3": 19, "relu1": 19, "relu": [19, 20, 21, 22, 23, 24], "relu2": 19, "relu3": 19, "relu4": 19, "forward": [19, 20, 23], "view": [19, 23, 25], "crossentropyloss": [19, 20], "sgd": [19, 23], "001": [19, 20], "momentum": [19, 23], "use_pretrained_model": 19, "cifar_torchvis": 19, "load_state_dict": 19, "over": [19, 21, 25], "running_loss": 19, "zero": 19, "statist": [19, 21], "2000": 19, "1999": 19, "everi": 19, "mini": 19, "5d": 19, "3f": [19, 24], "state_dict": 19, "transpos": 19, "unnorm": 19, "npimg": 19, "datait": 19, "iter": 19, "make_grid": 19, "groundtruth": 19, "ind": 19, "unsqueez": 19, "requires_grad": 19, "pt_attribut": 19, "captum": 19, "attr": 19, "viz": 19, "handel": 19, "original_imag": 19, "visualize_image_attr": 19, "entri": 19, "salienc": 19, "integratedgradi": 19, "integr": 19, "deeplift": 19, "deep": 19, "lift": 19, "smoothgrad": 19, "smooth": 19, "featureabl": 19, "ablat": 19, "prerpocess": 20, "multilay": 20, "sklearn": [20, 21, 22, 24], "fetch_openml": 20, "categorical_feature_kei": [20, 21], "workclass": 20, "marit": 20, "statu": 20, "occup": 20, "relationship": 20, "sex": [20, 21], "countri": 20, "numeric_feature_kei": 20, "ag": [20, 21, 22], "capit": 20, "hour": 20, "per": 20, "week": 20, "educ": 20, "num": 20, "drop_column": 20, "fnlwgt": 20, "data_id": 20, "1590": 20, "as_fram": 20, "raw_data": 20, "adult_data": 20, "get_dummi": 20, "50k": 20, "to_numpi": 20, "adultdataset": 20, "face": 20, "landmark": 20, "make_input_tensor": 20, "make_label_tensor": 20, "__len__": 20, "adult_df": 20, "from_numpi": 20, "floattensor": 20, "label_arrai": 20, "__getitem__": 20, "is_tensor": 20, "adult_dataset": 20, "adultnn": 20, "num_featur": 20, "lin1": 20, "lin2": 20, "lin3": 20, "lin4": 20, "lin5": 20, "lin6": 20, "lin10": 20, "prelu": 20, "dropout": [20, 23], "xin": 20, "manual_se": [20, 23], "reproduc": 20, "feature_s": 20, "linear1": 20, "sigmoid1": 20, "sigmoid": [20, 22], "linear2": 20, "sigmoid2": 20, "linear3": 20, "lin1_out": 20, "sigmoid_out1": 20, "sigmoid_out2": 20, "num_epoch": [20, 23], "adam": [20, 21, 22, 24], "input_tensor": 20, "label_tensor": 20, "2f": 20, "offlin": 20, "jit": 20, "adult_model": 20, "writefil": [20, 21, 25], "confusionmatrixatthreshold": 20, "sex_femal": 20, "sex_mal": 20, "date": 20, "08": 20, "01": [20, 23, 25], "simoudi": 20, "evangelo": 20, "jiawei": 20, "han": 20, "usama": 20, "fayyad": 20, "intern": [20, 25], "confer": [20, 25], "knowledg": 20, "discoveri": 20, "mine": 20, "No": 20, "conf": 20, "960830": 20, "aaai": 20, "press": 20, "menlo": 20, "park": 20, "ca": 20, "unit": 20, "1996": 20, "friedler": 20, "sorel": 20, "compar": [20, 21], "studi": 
[20, 21], "intervent": 20, "account": 20, "transpar": 20, "2019": [20, 25], "1145": 20, "3287560": 20, "3287589": 20, "lahoti": 20, "preethi": 20, "without": [20, 25], "reweight": 20, "advanc": 20, "process": [20, 21], "2020": [20, 25], "728": 20, "740": 20, "task": [20, 25], "whether": [20, 21], "person": 20, "salari": 20, "less": 20, "export_html": [20, 21], "census_mc": 20, "eval_input_reciever_fn": 21, "userdefin": 21, "seral": 21, "dep": 21, "docker": 21, "tuner": 21, "kubernet": 21, "29": 21, "metadata": [21, 25], "portpick": 21, "mkdir": 21, "tempfil": [21, 25], "model_select": [21, 22], "genor": 21, "literature1": 21, "techniqu": 21, "remedi": 21, "around": 21, "___": 21, "setup": 21, "filepath": 21, "_data_root": 21, "mkdtemp": 21, "prefix": 21, "storag": [21, 22, 25], "googleapi": [21, 22, 25], "compas_dataset": 21, "cox": 21, "violent": 21, "_data_filepath": 21, "_compas_df": 21, "simplii": 21, "_column_nam": 21, "c_charge_desc": 21, "c_charge_degre": 21, "c_days_from_compa": 21, "juv_fel_count": 21, "juv_misd_count": 21, "juv_other_count": 21, "priors_count": 21, "r_days_from_arrest": 21, "vr_charge_desc": 21, "score_text": 21, "predction": 21, "_ground_truth": 21, "_compas_scor": 21, "labl": 21, "boolean": 21, "crime": 21, "drop": 21, "dropna": 21, "high": 21, "medium": [21, 25], "ground_truth": 21, "compas_scor": 21, "focus": 21, "african": 21, "american": 21, "caucasian": 21, "isin": 21, "x_train": [21, 22, 24], "x_test": [21, 22, 23, 24], "random_st": [21, 22], "42": [21, 22], "back": 21, "na_rep": 21, "opt": 21, "artifact": 21, "_transformer_path": 21, "tensorflow_transform": 21, "tft": 21, "int_feature_kei": 21, "within": 21, "max_categorical_feature_valu": 21, "513": 21, "transformed_nam": 21, "_xf": 21, "preprocessing_fn": 21, "callback": 21, "compute_and_apply_vocabulari": 21, "_fill_in_miss": 21, "vocab_filenam": 21, "scale_to_z_scor": 21, "charg": 21, "tensor_valu": 21, "miss": 21, "sparsetensor": 21, "fill": 21, "rank": 21, "Its": 21, "shape": [21, 24, 25], "dimens": 21, "spars": 21, "default_valu": 21, "dtype": [21, 25], "sparse_tensor": 21, "dense_shap": 21, "dense_tensor": 21, "to_dens": 21, "squeez": 21, "_trainer_path": 21, "tensorflow_model_analysi": [21, 25], "tf_metadata": 21, "schema_util": 21, "_batch_siz": 21, "_learning_r": 21, "00001": 21, "_max_checkpoint": 21, "_save_checkpoint_step": 21, "999": 21, "_gzip_reader_fn": 21, "filenam": [21, 25], "reader": 21, "read": 21, "gzip": 21, "ed": 21, "nest": 21, "structur": 21, "typespec": 21, "element": 21, "tfrecorddataset": [21, 25], "compression_typ": 21, "consid": 21, "_get_raw_feature_spec": 21, "whose": 21, "fixedlenfeatur": [21, 25], "varlenfeatur": [21, 25], "sparsefeatur": 21, "schema_as_feature_spec": 21, "feature_spec": 21, "_example_serving_receiver_fn": 21, "tf_transform_output": 21, "serv": 21, "tftransformoutput": 21, "graph": 21, "raw_feature_spec": 21, "pop": [21, 22], "raw_input_fn": 21, "build_parsing_serving_input_receiver_fn": 21, "serving_input_receiv": 21, "transformed_featur": 21, "transform_raw_featur": 21, "servinginputreceiv": 21, "receiver_tensor": [21, 25], "_eval_input_receiver_fn": 21, "everyth": 21, "evalinputreceiv": [21, 25], "untransform": 21, "notic": 21, "serialized_tf_exampl": [21, 25], "compat": [21, 25], "v1": [21, 25], "placehold": [21, 25], "input_example_tensor": 21, "parse_exampl": [21, 25], "_input_fn": 21, "200": 21, "input_fn": [21, 25], "transformed_feature_spec": 21, "experiment": 21, "make_batched_features_dataset": 21, "make_one_shot_iter": 21, "get_next": 21, 
"re": [21, 24], "_keras_model_build": 21, "feature_column": [21, 25], "feature_layer_input": 21, "numeric_column": 21, "num_bucket": 21, "indicator_column": 21, "categorical_column_with_ident": 21, "int32": [21, 25], "feature_columns_input": 21, "densefeatur": 21, "feature_layer_output": 21, "dense_lay": 21, "dense_1": 21, "dense_2": 21, "meanabsoluteerror": 21, "trainer_fn": 21, "hparam": 21, "level": 21, "hyperparamet": 21, "pair": 21, "hold": 21, "train_spec": 21, "eval_spec": 21, "eval_input_receiver_fn": [21, 25], "transform_output": 21, "train_input_fn": [21, 25], "lambda": 21, "train_fil": 21, "eval_input_fn": 21, "eval_fil": 21, "trainspec": 21, "max_step": 21, "train_step": 21, "serving_receiver_fn": 21, "finalexport": 21, "evalspec": 21, "eval_step": 21, "run_config": 21, "runconfig": 21, "save_checkpoints_step": 21, "keep_checkpoint_max": 21, "model_dir": 21, "serving_model_dir": 21, "model_to_estim": 21, "keras_model": 21, "receiv": 21, "receiver_fn": 21, "_pipelie_path": 21, "absl": 21, "csvexamplegen": 21, "pusher": 21, "schemagen": 21, "statisticsgen": 21, "executor": 21, "dsl": 21, "executor_spec": 21, "orchestr": 21, "pusher_pb2": 21, "trainer_pb2": 21, "example_gen_pb2": 21, "local_dag_runn": 21, "localdagrunn": 21, "_pipeline_nam": 21, "_compas_root": 21, "inject": 21, "logic": 21, "successfulli": 21, "_transformer_fil": 21, "_trainer_fil": 21, "listen": 21, "server": 21, "_serving_model_dir": 21, "serving_model": 21, "chicago": 21, "taxi": 21, "rel": 21, "anywher": 21, "filesystem": 21, "_tfx_root": 21, "_pipeline_root": 21, "sqlite": 21, "db": 21, "_metadata_path": 21, "create_pipelin": 21, "pipeline_nam": 21, "pipeline_root": 21, "preprocessing_module_fil": 21, "trainer_module_fil": 21, "train_arg": 21, "trainarg": 21, "eval_arg": 21, "evalarg": 21, "metadata_path": 21, "schema_path": 21, "compass": 21, "bring": 21, "example_gen": 21, "input_bas": 21, "input_config": 21, "statistics_gen": 21, "schema_gen": 21, "importschemagen": 21, "schema_fil": 21, "module_fil": 21, "abspath": 21, "trainer_arg": 21, "transformed_exampl": 21, "custom_executor_spec": 21, "executorclassspec": 21, "transform_graph": 21, "candid": 21, "baselin": 21, "modelspec": 21, "slicingspec": 21, "metricsspec": 21, "metricconfig": 21, "metadata_connection_config": 21, "sqlite_metadata_connection_config": 21, "__name__": 21, "__main__": 21, "set_verbos": 21, "info": 21, "num_step": 21, "10000": 21, "5000": 21, "ml_metadata": 21, "metadata_stor": 21, "metadata_store_pb2": 21, "connection_config": 21, "connectionconfig": 21, "filename_uri": 21, "connection_mod": 21, "readwrite_opencr": 21, "metadatastor": 21, "get_artifacts_by_typ": 21, "uri": 21, "modelevalu": 21, "gz": 21, "_project_path": 21, "judg": 21, "parol": 21, "offic": 21, "determin": 21, "bail": 21, "grant": 21, "2016": 21, "articl": [21, 25], "propublica": 21, "incorrectli": 21, "would": 21, "higher": [21, 25], "rate": [21, 25], "white": 21, "made": [21, 25], "opposit": 21, "incorrect": 21, "went": 21, "bias": [21, 25], "due": 21, "uneven": 21, "disproportion": 21, "appear": [21, 25], "frequent": [21, 25], "literatur": 21, "concern": 21, "develop": [21, 25], "trial": 21, "detent": 21, "partnership": 21, "algorithm": 21, "multi": [21, 25], "stakehold": 21, "organ": 21, "googl": [21, 25], "member": 21, "guidelin": 21, "compas_plotli": 21, "standardscal": 22, "file_url": 22, "prepar": 22, "list_numer": 22, "thalach": 22, "trestbp": 22, "chol": 22, "oldpeak": 22, "y_train": [22, 24], "y_test": [22, 24], "scaler": 22, "fit": [22, 24], 
"sequenti": [22, 23, 24], "binary_crossentropi": 22, "15": [22, 24], "13": 22, "validation_data": [22, 24], "plot_model": 22, "show_shap": 22, "rankdir": 22, "particular": 22, "patient": 22, "had": 22, "1f": 22, "percent": 22, "100": [22, 23, 25], "ke": 22, "kernel_explain": 22, "iloc": 22, "101": 22, "128": [23, 24], "conv_lay": 23, "kernel_s": 23, "fc_layer": 23, "320": 23, "train_load": 23, "batch_idx": 23, "nll_loss": 23, "0f": 23, "tloss": 23, "6f": 23, "mnist_data": 23, "test_load": 23, "test_loss": 23, "empti": 23, "28": 23, "sum": [23, 25], "keepdim": 23, "eq": 23, "view_a": 23, "ntest": 23, "averag": 23, "4f": 23, "cm": [23, 24], "pred_idx": 23, "gt_idx": 23, "deviz": 23, "deep_explain": 23, "instati": 23, "grviz": 23, "gradient_explain": 23, "kmp_warn": 24, "all_categori": 24, "alt": 24, "atheism": 24, "comp": 24, "ibm": 24, "pc": 24, "mac": 24, "forsal": 24, "rec": [24, 25], "motorcycl": 24, "sport": 24, "basebal": 24, "hockei": 24, "sci": 24, "crypt": 24, "electron": 24, "med": 24, "space": 24, "soc": 24, "religion": [24, 25], "christian": 24, "talk": 24, "polit": 24, "gun": 24, "mideast": 24, "selected_categori": 24, "x_train_text": 24, "fetch_20newsgroup": 24, "return_x_i": 24, "x_test_text": 24, "feature_extract": 24, "countvector": 24, "tfidfvector": 24, "max_featur": 24, "50000": 24, "concaten": 24, "toarrai": 24, "create_model": 24, "summari": 24, "sparse_categorical_crossentropi": 24, "256": 24, "accuracy_scor": 24, "train_pr": 24, "test_pr": 24, "x_batch_text": 24, "x_batch": 24, "preds_proba": 24, "actual": 24, "make_predict": 24, "shap_valu": 24, "max_displai": 24, "explan": 24, "argsort": 24, "flip": 24, "waterfall_plot": 24, "lower": 24, "initj": 24, "force_plot": 24, "base_valu": 24, "out_nam": 24, "adapt": 25, "datetim": 25, "tensorflow_hub": 25, "tensorflow_data_valid": 25, "tfdv": 25, "addon": 25, "post_export_metr": 25, "fairness_ind": 25, "widget_view": 25, "civilcom": 25, "primari": 25, "seven": 25, "crowd": 25, "worker": 25, "tag": 25, "fraction": 25, "main": 25, "civilcommentsident": 25, "releas": 25, "kaggl": 25, "come": 25, "independ": 25, "2015": 25, "world": 25, "shut": 25, "down": 25, "chose": 25, "enabl": 25, "futur": 25, "figshar": 25, "id": 25, "timestamp": 25, "jigsaw": 25, "ident": 25, "mention": 25, "covert": 25, "offens": 25, "exact": 25, "replica": 25, "unintend": 25, "challeng": 25, "cc0": 25, "underli": 25, "parent_id": 25, "parent_text": 25, "regard": 25, "leak": 25, "did": 25, "parent": 25, "civil_com": 25, "pavlopoulos2020tox": 25, "context": 25, "realli": 25, "matter": 25, "john": 25, "pavlopoulo": 25, "jeffrei": 25, "sorensen": 25, "luca": 25, "dixon": 25, "nithum": 25, "thain": 25, "ion": 25, "androutsopoulo": 25, "eprint": 25, "2006": 25, "00998": 25, "archiveprefix": 25, "primaryclass": 25, "dblp": 25, "corr": 25, "1903": 25, "04561": 25, "daniel": 25, "borkan": 25, "luci": 25, "vasserman": 25, "nuanc": 25, "real": 25, "sun": 25, "31": 25, "mar": 25, "19": 25, "24": 25, "0200": 25, "biburl": 25, "bib": 25, "bibsourc": 25, "scienc": 25, "bibliographi": 25, "semev": 25, "em": 25, "val": 25, "span": 25, "laugier": 25, "15th": 25, "workshop": 25, "semant": 25, "aug": 25, "aclanthologi": 25, "18653": 25, "59": 25, "69": 25, "article_id": 25, "identity_attack": 25, "insult": 25, "obscen": 25, "severe_tox": 25, "sexual_explicit": 25, "threat": 25, "civil_comments_dataset": 25, "train_tf_fil": 25, "get_fil": 25, "train_tf_process": 25, "validate_tf_fil": 25, "validate_tf_process": 25, "text_featur": 25, "comment_text": 25, "feature_map": 25, 
"sexual_orient": 25, "gender": 25, "disabl": 25, "parse_funct": 25, "parse_single_exampl": 25, "work": 25, "parsed_exampl": 25, "fight": 25, "92": 25, "imbal": 25, "doesn": 25, "tfhub": 25, "embedded_text_feature_column": 25, "text_embedding_column": 25, "module_spec": 25, "nnlm": 25, "en": 25, "dim128": 25, "dnnclassifi": 25, "hidden_unit": 25, "weight_column": 25, "legaci": 25, "adagrad": 25, "003": 25, "loss_reduct": 25, "reduct": 25, "n_class": 25, "gettempdir": 25, "input_example_placehold": 25, "ones_lik": 25, "tfma_export_dir": 25, "export_eval_savedmodel": 25, "export_dir_bas": 25, "signature_nam": 25, "merg": 25, "built": 25, "precis": 25, "recal": 25, "alphabet": 25, "protect": 25, "voic": 25, "area": 25, "focu": 25, "anyth": 25, "rude": 25, "disrespect": 25, "someon": 25, "leav": 25, "discuss": 25, "attemp": 25, "sever": 25, "subtyp": 25, "ensur": 25, "wide": 25, "8e0b81f80a23": 25, "overrepres": 25, "black": 25, "muslim": 25, "feminist": 25, "woman": 25, "gai": 25, "often": 25, "far": 25, "mani": 25, "forum": 25, "unfortun": 25, "attack": 25, "rarer": 25, "affirm": 25, "statement": 25, "am": 25, "proud": 25, "man": 25, "adopt": 25, "pick": 25, "connot": 25, "insuffici": 25, "divers": 25, "imbalenc": 25, "balanc": 25, "enough": 25, "effect": 25, "distinguish": 25, "paper": 25, "societi": 25}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"dataset": [0, 5, 6, 7, 8, 9, 11, 17, 18, 21, 23], "explain": [3, 5, 11, 15, 16, 17, 18, 19, 22, 23, 24], "goal": 3, "submodul": 3, "api": [4, 12, 17, 18], "refrenc": 4, "intel": [5, 11, 16, 17, 18], "ai": [5, 11, 17, 18], "tool": [5, 11, 16, 18], "overview": [5, 10, 11, 26], "get": [5, 11, 17, 18], "start": [5, 11], "requir": [5, 6, 8, 11], "develop": [5, 6, 8, 11], "instal": [5, 6, 8, 11, 14, 21], "poetri": [5, 6, 8, 11], "exist": [5, 6, 8, 11], "enviorn": [5, 6, 8, 11], "creat": [5, 6, 8, 11, 25], "activ": [5, 6, 8, 11], "python3": [5, 6, 8, 11], "virtual": [5, 6, 8, 11], "environ": [5, 6, 8, 11], "addit": [5, 6, 8, 11], "featur": [5, 6, 8, 11, 22], "specif": [5, 6, 8, 11], "step": [5, 6, 8, 11], "verifi": [5, 6, 8, 11], "run": [5, 6, 8, 11, 14], "notebook": [5, 6, 8, 11, 15, 16], "support": [5, 6, 8, 11], "disclaim": [5, 6, 7, 8, 9, 11], "licens": [5, 6, 7, 8, 9, 11], "model": [5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 24, 25], "softwar": [6, 8], "legal": [7, 9], "inform": [7, 9], "refer": [12, 16], "card": [12, 13, 14, 15, 20, 21, 25], "gener": [12, 14, 15, 20, 21], "exampl": [13, 15, 23], "input": [14, 16, 20], "test": [14, 20, 23], "marker": 14, "sampl": 14, "command": 14, "us": [14, 16, 17, 18, 19, 22, 23, 24], "resnet50": 16, "imagenet": 16, "classif": [16, 17, 19, 22, 23, 24, 25], "cam": 16, "object": 16, "load": 16, "xai": 16, "pytorch": [16, 17, 18, 20], "modul": 16, "xgradcam": 16, "imag": [16, 17], "visual": [16, 22, 23, 24], "tensorflow": [16, 21, 25], "multimod": 17, "breast": 17, "cancer": 17, "detect": [17, 21], "import": [17, 18, 21], "depend": [17, 18, 21, 25], "setup": [17, 18], "directori": 17, "option": [17, 18], "group": 17, "data": [17, 20, 22, 23, 24, 25], "patient": 17, "id": 17, "1": [17, 18, 20, 23, 24], "prepar": [17, 18], "analysi": 17, "transfer": 17, "learn": 17, "save": [17, 20], "comput": 17, "vision": 17, "error": 17, "2": [17, 18, 20, 23, 24], "text": [17, 18, 24], "corpu": 17, "nlp": 17, "explan": 17, "int8": 17, "quantiz": 17, "citat": [17, 18], "public": 17, "tcia": 17, "fine": 18, "tune": 18, "classifi": 18, "paramet": 18, "A": 18, "hug": 18, "face": 18, "b": 18, "custom": 
[18, 19, 22, 23, 24], "3": [18, 20, 23], "evalu": [18, 24], "trainer": 18, "http": 18, "huggingfac": 18, "co": 18, "doc": 18, "transform": 18, "v4": 18, "16": 18, "en": 18, "main_class": 18, "__": 18, "from": [18, 20, 21, 23], "nativ": 18, "4": [18, 20, 23], "export": [18, 25], "5": [18, 20, 23], "reload": 18, "make": [18, 25], "predict": [18, 23, 24], "6": [18, 23], "cnn": [19, 23], "cifar": 19, "10": [19, 23], "attribut": [19, 22, 23, 24], "collect": 20, "preprocess": [20, 21, 22], "fetch": 20, "openml": 20, "drop": 20, "unneed": 20, "column": 20, "train": [20, 23, 24, 25], "split": [20, 22], "build": [20, 25], "evalconfig": 20, "issu": 21, "fair": 21, "estim": 21, "librari": 21, "download": [21, 25], "tfx": 21, "pipelin": 21, "script": 21, "displai": 21, "neural": 22, "network": 22, "heart": 22, "diseas": 22, "connect": 22, "graph": 22, "accuraci": 22, "mnist": 23, "design": 23, "scatch": 23, "survei": 23, "perform": [23, 24], "across": 23, "all": 23, "class": 23, "metrics_explain": 23, "plugin": 23, "feature_attributions_explain": 23, "can": 23, "observ": 23, "confus": 23, "matrix": 23, "9": 23, "poorli": 23, "additionallli": 23, "i": 23, "high": 23, "misclassif": 23, "rate": 23, "exclus": 23, "amongst": 23, "two": 23, "label": 23, "In": 23, "other": 23, "word": 23, "appear": 23, "": 23, "vice": 23, "versa": 23, "7": 23, "were": 23, "misclassifi": 23, "let": 23, "take": 23, "closer": 23, "look": 23, "pixel": 23, "base": 23, "shap": [23, 24], "valu": [23, 24], "where": 23, "when": 23, "correct": [23, 24], "groundtruth": 23, "conclus": 23, "deep": 23, "gradient": 23, "pai": 23, "close": 23, "attent": 23, "top": 23, "digit": 23, "distinguish": 23, "between": 23, "On": 23, "first": 23, "last": 23, "row": 23, "abov": 23, "we": 23, "ar": 23, "The": 23, "contribut": 23, "postiiv": 23, "red": 23, "thi": 23, "begin": 23, "why": 23, "nn": 24, "newsgroup": 24, "vector": 24, "defin": 24, "compil": 24, "partit": 24, "plot": 24, "bar": 24, "waterfal": 24, "forc": 24, "toxic": 25, "comment": 25, "descript": 25, "evalsavedmodel": 25, "format": 25}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "nbsphinx": 4, "sphinx.ext.intersphinx": 1, "sphinx.ext.viewcode": 1, "sphinx": 57}, "alltitles": {"Datasets": [[0, "datasets"]], "Explainer": [[3, "explainer"]], "Goals": [[3, "goals"]], "Explainer Submodules": [[3, "explainer-submodules"]], "API Refrence": [[4, "api-refrence"]], "Intel\u00ae Explainable AI Tools": [[5, "intel-explainable-ai-tools"], [11, "intel-explainable-ai-tools"]], "Overview": [[5, "overview"], [10, "overview"], [11, "overview"], [26, "overview"]], "Get Started": [[5, "get-started"], [11, "get-started"]], "Requirements": [[5, "requirements"], [11, "requirements"]], "Developer Installation with Poetry": [[5, "developer-installation-with-poetry"], [6, "developer-installation-with-poetry"], [8, "developer-installation-with-poetry"], [11, "developer-installation-with-poetry"]], "Install to existing enviornment with Poetry": [[5, "install-to-existing-enviornment-with-poetry"], [6, "install-to-existing-enviornment-with-poetry"], [8, "install-to-existing-enviornment-with-poetry"], [11, "install-to-existing-enviornment-with-poetry"]], "Create and activate a Python3 virtual environment": [[5, "create-and-activate-a-python3-virtual-environment"], [6, 
"create-and-activate-a-python3-virtual-environment"], [8, "create-and-activate-a-python3-virtual-environment"], [11, "create-and-activate-a-python3-virtual-environment"]], "Additional Feature-Specific Steps": [[5, "additional-feature-specific-steps"], [6, "additional-feature-specific-steps"], [8, "additional-feature-specific-steps"], [11, "additional-feature-specific-steps"]], "Verify Installation": [[5, "verify-installation"], [6, "verify-installation"], [8, "verify-installation"], [11, "verify-installation"]], "Running Notebooks": [[5, "running-notebooks"], [6, "running-notebooks"], [8, "running-notebooks"], [11, "running-notebooks"]], "Support": [[5, "support"], [6, "support"], [8, "support"], [11, "support"]], "DISCLAIMER": [[5, "disclaimer"], [6, "disclaimer"], [8, "disclaimer"], [11, "disclaimer"]], "License": [[5, "license"], [6, "license"], [7, "license"], [8, "license"], [9, "license"], [11, "license"]], "Datasets and Models": [[5, "datasets-and-models"], [6, "datasets-and-models"], [7, "datasets-and-models"], [8, "datasets-and-models"], [9, "datasets-and-models"], [11, "datasets-and-models"]], "Installation": [[6, "installation"], [8, "installation"]], "Software Requirements": [[6, "software-requirements"], [8, "software-requirements"]], "Legal Information": [[7, "legal-information"], [9, "legal-information"]], "Disclaimer": [[7, "disclaimer"], [9, "disclaimer"]], "API Reference": [[12, "api-reference"]], "Model Card Generator": [[12, "model-card-generator"], [14, "model-card-generator"]], "Example Model Card": [[13, "example-model-card"]], "Install": [[14, "install"]], "Run": [[14, "run"]], "Model Card Generator Inputs": [[14, "model-card-generator-inputs"]], "Test": [[14, "test"]], "Markers": [[14, "markers"]], "Sample test commands using markers": [[14, "sample-test-commands-using-markers"]], "Example Notebooks": [[15, "example-notebooks"]], "Explainer Notebooks": [[15, "explainer-notebooks"]], "Model Card Generator Notebooks": [[15, "model-card-generator-notebooks"]], "Explaining ResNet50 ImageNet Classification Using the CAM Explainer": [[16, "Explaining-ResNet50-ImageNet-Classification-Using-the-CAM-Explainer"]], "Objective": [[16, "Objective"]], "Loading Intel XAI Tools PyTorch CAM Module": [[16, "Loading-Intel-XAI-Tools-PyTorch-CAM-Module"]], "Loading Notebook Modules": [[16, "Loading-Notebook-Modules"]], "Using XGradCAM": [[16, "Using-XGradCAM"]], "Loading the input image": [[16, "Loading-the-input-image"]], "Loading the Model": [[16, "Loading-the-Model"]], "Visualization": [[16, "Visualization"]], "References": [[16, "References"]], "Loading Intel XAI Tools TensorFlow CAM Module": [[16, "Loading-Intel-XAI-Tools-TensorFlow-CAM-Module"]], "Explaining Image Classification Models with TensorFlow": [[16, "Explaining-Image-Classification-Models-with-TensorFlow"]], "Multimodal Breast Cancer Detection Explainability using the Intel\u00ae Explainable AI API": [[17, "Multimodal-Breast-Cancer-Detection-Explainability-using-the-Intel\u00ae-Explainable-AI-API"]], "Import Dependencies and Setup Directories": [[17, "Import-Dependencies-and-Setup-Directories"]], "Dataset": [[17, "Dataset"]], "Optional: Group Data by Patient ID": [[17, "Optional:-Group-Data-by-Patient-ID"]], "Model 1: Image Classification with PyTorch": [[17, "Model-1:-Image-Classification-with-PyTorch"]], "Get the Model and Dataset": [[17, "Get-the-Model-and-Dataset"], [17, "id1"]], "Data Preparation": [[17, "Data-Preparation"], [17, "id2"]], "Image dataset analysis": [[17, "Image-dataset-analysis"]], "Transfer 
Learning": [[17, "Transfer-Learning"], [17, "id3"]], "Save the Computer Vision Model": [[17, "Save-the-Computer-Vision-Model"]], "Error Analysis": [[17, "Error-Analysis"]], "Explainability": [[17, "Explainability"]], "Model 2: Text Classification with PyTorch": [[17, "Model-2:-Text-Classification-with-PyTorch"]], "Corpus analysis": [[17, "Corpus-analysis"]], "Save the NLP Model": [[17, "Save-the-NLP-Model"]], "Error analysis": [[17, "Error-analysis"], [17, "id5"]], "Explanation": [[17, "Explanation"]], "Int8 Quantization": [[17, "Int8-Quantization"]], "Save the Quantized NLP Model": [[17, "Save-the-Quantized-NLP-Model"]], "Citations": [[17, "Citations"], [18, "Citations"]], "Data Citation": [[17, "Data-Citation"]], "Publication Citation": [[17, "Publication-Citation"]], "TCIA Citation": [[17, "TCIA-Citation"]], "Explaining Fine Tuned Text Classifier with PyTorch using the Intel\u00ae Explainable AI API": [[18, "Explaining-Fine-Tuned-Text-Classifier-with-PyTorch-using-the-Intel\u00ae-Explainable-AI-API"]], "1. Import dependencies and setup parameters": [[18, "1.-Import-dependencies-and-setup-parameters"]], "2. Prepare the dataset": [[18, "2.-Prepare-the-dataset"]], "Option A: Use a Hugging Face dataset": [[18, "Option-A:-Use-a-Hugging-Face-dataset"]], "Option B: Use a custom dataset": [[18, "Option-B:-Use-a-custom-dataset"]], "3. Prepare the Model for Fine Tuning and Evaluation": [[18, "3.-Prepare-the-Model-for-Fine-Tuning-and-Evaluation"]], "Option A: Use the `Trainer `__ API from Hugging Face": [[18, "Option-A:-Use-the-`Trainer-`__-API-from-Hugging-Face"]], "Option B: Use the native PyTorch API": [[18, "Option-B:-Use-the-native-PyTorch-API"]], "4. Export the model": [[18, "4.-Export-the-model"]], "5. Reload the model and make predictions": [[18, "5.-Reload-the-model-and-make-predictions"]], "6. Get Explainations with Intel Explainable AI Tools": [[18, "6.-Get-Explainations-with-Intel-Explainable-AI-Tools"]], "Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer": [[19, "Explaining-Custom-CNN-CIFAR-10-Classification-Using-the-Attributions-Explainer"]], "Generating Model Card with PyTorch": [[20, "Generating-Model-Card-with-PyTorch"]], "1. Data Collection and Preprocessing": [[20, "1.-Data-Collection-and-Preprocessing"]], "Fetch Data from OpenML": [[20, "Fetch-Data-from-OpenML"]], "Drop Unneeded Columns": [[20, "Drop-Unneeded-Columns"]], "Train Test Split": [[20, "Train-Test-Split"]], "2. Build Model": [[20, "2.-Build-Model"]], "3. Train Model": [[20, "3.-Train-Model"]], "4. Save Model": [[20, "4.-Save-Model"]], "5. 
Generate Model Card": [[20, "5.-Generate-Model-Card"]], "EvalConfig Input": [[20, "EvalConfig-Input"]], "Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators": [[21, "Detecting-Issues-in-Fairness-by-Generating-Model-Card-from-Tensorflow-Estimators"]], "Install Dependencies": [[21, "Install-Dependencies"]], "Import Libraries": [[21, "Import-Libraries"]], "Download and preprocess the dataset": [[21, "Download-and-preprocess-the-dataset"]], "TFX Pipeline Scripts": [[21, "TFX-Pipeline-Scripts"]], "Display Model Card": [[21, "Display-Model-Card"]], "Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer": [[22, "Explaining-a-Custom-Neural-Network-Heart-Disease-Classification-Using-the-Attributions-Explainer"]], "Data Splitting": [[22, "Data-Splitting"]], "Feature Preprocessing": [[22, "Feature-Preprocessing"]], "Model": [[22, "Model"]], "Visualize the connectivity graph:": [[22, "Visualize-the-connectivity-graph:"]], "Accuracy": [[22, "Accuracy"]], "Explaining Custom CNN MNIST Classification Using the Attributions Explainer": [[23, "Explaining-Custom-CNN-MNIST-Classification-Using-the-Attributions-Explainer"]], "1. Design the CNN from scatch": [[23, "1.-Design-the-CNN-from-scatch"]], "2. Train the CNN on the MNIST dataset": [[23, "2.-Train-the-CNN-on-the-MNIST-dataset"]], "3. Predict the MNIST test data": [[23, "3.-Predict-the-MNIST-test-data"]], "4. Survey performance across all classes using the metrics_explainer plugin": [[23, "4.-Survey-performance-across-all-classes-using-the-metrics_explainer-plugin"]], "5. Explain performance across the classes using the feature_attributions_explainer plugin": [[23, "5.-Explain-performance-across-the-classes-using-the-feature_attributions_explainer-plugin"]], "From (4), it can be observed from the confusion matrix that classes 4 and 9 perform poorly. Additionallly, there is a high misclassification rate exclusively amongst the two labels. In other words, it appears that the CNN if confusing 4\u2019s with 9\u2019s, and vice-versa. 7.4% of all the 9 examples were misclassified as 4, and 10% of all the 4 examples were misclassified as 9.": [[23, "From-(4),-it-can-be-observed-from-the-confusion-matrix-that-classes-4-and-9-perform-poorly.-Additionallly,-there-is-a-high-misclassification-rate-exclusively-amongst-the-two-labels.-In-other-words,-it-appears-that-the-CNN-if-confusing-4's-with-9's,-and-vice-versa.-7.4%-of-all-the-9-examples-were-misclassified-as-4,-and-10%-of-all-the-4-examples-were-misclassified-as-9."]], "Let\u2019s take a closer look at the pixel-based shap values for the test examples where the CNN predicts \u20189\u2019 when the correct groundtruth label is \u20184\u2019.": [[23, "Let's-take-a-closer-look-at-the-pixel-based-shap-values-for-the-test-examples-where-the-CNN-predicts-'9'-when-the-correct-groundtruth-label-is-'4'."]], "6. Conclusion": [[23, "6.-Conclusion"]], "From the deep and gradient explainer visuals, it can be observed that the CNN pays close attention to the top of the digit in distinguishing between a 4 and a 9. On the first and last row of the above gradient explainer visualization we can the 4\u2019s are closed. The contributes to postiive shap values (red) for the 9 classification. 
This begins explaining why the CNN is confusing the two digits.": [[23, "From-the-deep-and-gradient-explainer-visuals,-it-can-be-observed-that-the-CNN-pays-close-attention-to-the-top-of-the-digit-in-distinguishing-between-a-4-and-a-9.-On-the-first-and-last-row-of-the-above-gradient-explainer-visualization-we-can-the-4's-are-closed.-The-contributes-to-postiive-shap-values-(red)-for-the-9-classification.-This-begins-explaining-why-the-CNN-is-confusing-the-two-digits."]], "Explaining Custom NN NewsGroups Classification Using the Attributions Explainer": [[24, "Explaining-Custom-NN-NewsGroups-Classification-Using-the-Attributions-Explainer"]], "Vectorize Text Data": [[24, "Vectorize-Text-Data"]], "Define the Model": [[24, "Define-the-Model"]], "Compile and Train Model": [[24, "Compile-and-Train-Model"]], "Evaluate Model Performance": [[24, "Evaluate-Model-Performance"]], "SHAP Partition Explainer": [[24, "SHAP-Partition-Explainer"]], "Visualize SHAP Values Correct Predictions": [[24, "Visualize-SHAP-Values-Correct-Predictions"]], "Text Plot": [[24, "Text-Plot"]], "Bar Plots": [[24, "Bar-Plots"]], "Bar Plot 1": [[24, "Bar-Plot-1"]], "Bar Plot 2": [[24, "Bar-Plot-2"]], "Waterfall Plots": [[24, "Waterfall-Plots"]], "Waterfall Plot 1": [[24, "Waterfall-Plot-1"]], "Waterfall Plot 2": [[24, "Waterfall-Plot-2"]], "Force Plot": [[24, "Force-Plot"]], "Creating Model Card for Toxic Comments Classification in Tensorflow": [[25, "Creating-Model-Card-for-Toxic-Comments-Classification-in-Tensorflow"]], "Training Dependencies": [[25, "Training-Dependencies"]], "Model Card Dependencies": [[25, "Model-Card-Dependencies"]], "Download Data": [[25, "Download-Data"]], "Data Description": [[25, "Data-Description"]], "Train Model": [[25, "Train-Model"], [25, "id1"]], "Build Model": [[25, "Build-Model"]], "Export in EvalSavedModel Format": [[25, "Export-in-EvalSavedModel-Format"]], "Making a Model Card": [[25, "Making-a-Model-Card"]]}, "indexentries": {}}) \ No newline at end of file +Search.setIndex({"docnames": ["datasets", "explainer/attributions", "explainer/cam", "explainer/index", "explainer/metrics", "index", "install", "legal", "markdown/Install", "markdown/Legal", "markdown/Overview", "markdown/Welcome", "model_card_gen/api", "model_card_gen/example", "model_card_gen/index", "notebooks", "notebooks/ExplainingImageClassification", "notebooks/Multimodal_Cancer_Detection", "notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions", "notebooks/TorchVision_CIFAR_Interpret", "notebooks/adult-pytorch-model-card", "notebooks/compas-model-card-tfx", "notebooks/heart_disease", "notebooks/mnist", "notebooks/partitionexplainer", "notebooks/toxicity-tfma-model-card", "overview"], "filenames": ["datasets.rst", "explainer/attributions.md", "explainer/cam.md", "explainer/index.md", "explainer/metrics.md", "index.md", "install.rst", "legal.rst", "markdown/Install.md", "markdown/Legal.md", "markdown/Overview.md", "markdown/Welcome.md", "model_card_gen/api.rst", "model_card_gen/example.md", "model_card_gen/index.md", "notebooks.rst", "notebooks/ExplainingImageClassification.nblink", "notebooks/Multimodal_Cancer_Detection.nblink", "notebooks/PyTorch_Text_Classifier_fine_tuning_with_Attributions.nblink", "notebooks/TorchVision_CIFAR_Interpret.nblink", "notebooks/adult-pytorch-model-card.nblink", "notebooks/compas-model-card-tfx.nblink", "notebooks/heart_disease.nblink", "notebooks/mnist.nblink", "notebooks/partitionexplainer.nblink", "notebooks/toxicity-tfma-model-card.nblink", "overview.rst"], "titles": ["Datasets", "<no 
title>", "<no title>", "Explainer", "API Refrence", "Intel\u00ae Explainable AI Tools", "Installation", "Legal Information", "Installation", "Legal Information", "Overview", "Intel\u00ae Explainable AI Tools", "API Reference", "Example Model Card", "Model Card Generator", "Example Notebooks", "Explaining ResNet50 ImageNet Classification Using the CAM Explainer", "Multimodal Breast Cancer Detection Explainability using the Intel\u00ae Explainable AI API", "Explaining Fine Tuned Text Classifier with PyTorch using the Intel\u00ae Explainable AI API", "Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer", "Generating Model Card with PyTorch", "Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators", "Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer", "Explaining Custom CNN MNIST Classification Using the Attributions Explainer", "Explaining Custom NN NewsGroups Classification Using the Attributions Explainer", "Creating Model Card for Toxic Comments Classification in Tensorflow", "Overview"], "terms": {"thi": [0, 5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 20, 21, 22, 25], "i": [0, 3, 5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 19, 20, 21, 25], "comprehens": [0, 14], "list": [0, 5, 6, 8, 11, 17, 18, 20, 21, 23], "public": [0, 14, 21, 25], "us": [0, 3, 5, 6, 7, 8, 9, 10, 11, 15, 20, 21, 25, 26], "repositori": [0, 5, 6, 8, 10, 11, 14, 17, 18, 20, 26], "name": [0, 5, 6, 8, 10, 11, 14, 15, 16, 17, 18, 20, 21, 25, 26], "link": [0, 5, 6, 8, 11, 14], "sourc": [0, 5, 6, 7, 8, 9, 11, 18, 25], "framework": [0, 14, 15, 17], "case": [0, 5, 6, 8, 11, 14, 15, 17, 18, 21], "adult": [0, 20], "incom": 0, "pytorch": [0, 3, 5, 6, 8, 10, 11, 14, 15, 19, 26], "tabular": [0, 3, 5, 10, 11, 15, 26], "classif": [0, 3, 5, 10, 11, 15, 18, 21, 26], "cdd": [0, 17], "cesm": [0, 17], "imag": [0, 3, 5, 10, 11, 15, 19, 26], "text": [0, 15, 25], "cifar": [0, 15], "10": [0, 5, 6, 8, 11, 15, 17, 18, 20, 21, 22, 25], "torchvis": [0, 16, 19, 23], "tensorflow": [0, 3, 5, 6, 8, 10, 11, 14, 15, 17, 18, 22, 24, 26], "civil": [0, 25], "comment": [0, 15], "tfd": 0, "compa": [0, 14, 21], "recidiv": [0, 14, 21], "risk": [0, 14, 21], "score": [0, 14, 16, 21, 25], "data": [0, 5, 6, 7, 8, 9, 11, 14, 18, 19, 21], "analysi": [0, 14, 18, 21], "imagenet": [0, 15], "imdb": [0, 18], "review": [0, 14, 18], "mnist": [0, 15], "sm": [0, 18], "spam": [0, 18], "collect": [0, 14, 18, 23], "python": [3, 5, 6, 8, 10, 11, 14, 17, 21, 26], "modul": [3, 5, 10, 11, 17, 19, 20, 21, 23, 26], "intel": [3, 6, 7, 8, 9, 10, 14, 15, 20, 21, 25, 26], "ai": [3, 6, 7, 8, 9, 10, 15, 21, 25, 26], "tool": [3, 6, 7, 8, 9, 10, 14, 15, 21, 26], "provid": [3, 5, 6, 7, 8, 9, 11, 14, 20, 21, 25], "method": [3, 5, 10, 11, 16, 19, 26], "model": [3, 10, 19, 23, 26], "compos": [3, 19, 23], "add": [3, 14, 17, 21, 25], "minim": [3, 25], "code": [3, 5, 6, 7, 8, 9, 11, 18, 21], "extens": [3, 17, 18], "easi": 3, "new": [3, 17, 25], "commun": 3, "contribut": [3, 5, 6, 7, 8, 9, 11], "welcom": 3, "attribut": [3, 5, 10, 11, 15, 17, 18, 25, 26], "visual": [3, 5, 10, 11, 17, 18, 19, 21, 26], "neg": [3, 5, 10, 11, 21, 25, 26], "posit": [3, 5, 10, 11, 21, 25, 26], "featur": [3, 10, 14, 16, 17, 18, 19, 20, 21, 25, 26], "pixel": [3, 5, 10, 11, 26], "word": [3, 5, 10, 11, 17, 18, 25, 26], "token": [3, 5, 10, 11, 17, 18, 24, 26], "predict": [3, 5, 10, 11, 14, 17, 19, 20, 21, 22, 26], "cam": [3, 5, 10, 11, 15, 17, 26], "creat": [3, 7, 9, 10, 14, 15, 17, 18, 21, 26], "heatmap": [3, 5, 10, 11, 26], "cnn": [3, 5, 10, 11, 
15, 17, 26], "gradient": [3, 5, 10, 11, 19, 26], "weight": [3, 5, 10, 11, 16, 18, 25, 26], "class": [3, 5, 10, 11, 14, 16, 17, 18, 19, 20, 21, 24, 25, 26], "activ": [3, 10, 21, 22, 24, 26], "map": [3, 5, 10, 11, 14, 17, 18, 20, 21, 24, 25, 26], "api": [3, 5, 6, 8, 10, 11, 15, 21, 26], "refrenc": 3, "gain": [3, 5, 10, 11, 20, 26], "insight": [3, 5, 10, 11, 26], "measur": [3, 5, 10, 11, 25, 26], "need": [3, 5, 10, 11, 16, 17, 18, 21, 26], "dure": [3, 5, 10, 11, 26], "machin": [3, 5, 10, 11, 18, 20, 21, 25, 26], "learn": [3, 5, 10, 11, 14, 15, 18, 20, 21, 25, 26], "workflow": [3, 5, 10, 11, 26], "scientist": [5, 11], "mlop": [5, 11], "engin": [5, 11, 21], "have": [5, 6, 8, 11, 14, 16, 17, 18, 21, 22, 25], "interpret": [5, 10, 11, 26], "The": [5, 6, 8, 10, 11, 14, 16, 17, 18, 21, 25, 26], "ar": [5, 6, 7, 8, 9, 10, 11, 14, 16, 17, 18, 21, 25, 26], "design": [5, 10, 11, 26], "help": [5, 10, 11, 17, 26], "user": [5, 10, 11, 14, 17, 21, 25, 26], "detect": [5, 10, 11, 15, 25, 26], "mitig": [5, 10, 11, 26], "against": [5, 10, 11, 21, 26], "issu": [5, 6, 8, 10, 11, 15, 17, 26], "fair": [5, 10, 11, 14, 15, 20, 25, 26], "while": [5, 10, 11, 17, 21, 26], "best": [5, 10, 11, 26], "hardwar": [5, 10, 11, 18, 24, 26], "There": [5, 6, 8, 10, 11, 21, 26], "two": [5, 6, 8, 10, 11, 17, 18, 21, 26], "compon": [5, 10, 11, 21, 26], "card": [5, 6, 8, 10, 11, 26], "gener": [5, 6, 8, 10, 11, 17, 18, 25, 26], "interact": [5, 10, 11, 14, 21, 26], "html": [5, 10, 11, 14, 20, 21, 26], "report": [5, 6, 8, 10, 11, 14, 17, 21, 23, 24, 26], "contain": [5, 6, 8, 10, 11, 14, 21, 25, 26], "perform": [5, 6, 7, 8, 9, 10, 11, 14, 17, 18, 21, 26], "metric": [5, 10, 11, 14, 17, 18, 20, 21, 22, 23, 24, 25, 26], "post": [5, 10, 11, 25, 26], "hoc": [5, 10, 11, 26], "distil": [5, 10, 11, 26], "examin": [5, 10, 11, 26], "behavior": [5, 10, 11, 26], "both": [5, 10, 11, 16, 18, 21, 25, 26], "via": [5, 10, 11, 14, 17, 26], "simpl": [5, 10, 11, 26], "includ": [5, 10, 11, 14, 17, 18, 21, 25, 26], "follow": [5, 6, 8, 10, 11, 14, 17, 18, 21, 26], "linux": [5, 6, 8, 11], "system": [5, 6, 8, 11, 17, 18, 20, 21], "wsl2": [5, 6, 8, 11], "window": [5, 6, 8, 11, 24], "valid": [5, 6, 8, 11, 17, 18, 21], "ubuntu": [5, 6, 8, 11], "20": [5, 6, 8, 11, 17, 18, 20, 21, 23], "04": [5, 6, 8, 11], "22": [5, 6, 8, 11], "lt": [5, 6, 8, 11], "3": [5, 6, 8, 11, 14, 17, 19, 21, 24], "9": [5, 6, 8, 11, 17, 19], "o": [5, 6, 8, 11, 17, 18, 19, 20, 21, 24, 25], "packag": [5, 6, 8, 11, 14, 17], "apt": [5, 6, 8, 11], "build": [5, 6, 8, 11, 21], "essenti": [5, 6, 8, 11], "dev": [5, 6, 8, 11, 25], "git": [5, 6, 8, 11, 14], "onli": [5, 6, 8, 11, 14, 16, 18, 21, 25], "instruct": [5, 6, 8, 11, 18], "safeti": [5, 6, 8, 11], "librari": [5, 6, 8, 11], "clone": [5, 6, 8, 11, 14], "github": [5, 6, 8, 11, 14, 16, 17], "can": [5, 6, 8, 11, 14, 16, 17, 18, 21, 25], "done": [5, 6, 8, 11], "instead": [5, 6, 8, 11, 17, 18], "basic": [5, 6, 8, 11], "pip": [5, 6, 8, 11, 14, 17, 21], "you": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 21, 25], "plan": [5, 6, 8, 11], "make": [5, 6, 8, 11, 17, 21, 22], "chang": [5, 6, 8, 11, 18], "repo": [5, 6, 8, 11, 17], "navig": [5, 6, 8, 11, 14], "directori": [5, 6, 8, 11, 14, 18, 21], "allow": [5, 6, 8, 11, 14, 21], "envion": [5, 6, 8, 11], "venv": [5, 6, 8, 11], "current": [5, 6, 8, 11, 17, 25], "lock": [5, 6, 8, 11], "In": [5, 6, 8, 11, 14, 18, 21], "addtion": [5, 6, 8, 11], "explicitli": [5, 6, 8, 11], "tell": [5, 6, 8, 11, 14], "which": [5, 6, 8, 11, 14, 16, 18, 21], "instanc": [5, 6, 8, 11, 14, 17, 21], "env": [5, 6, 8, 11], "full": [5, 6, 8, 11, 18], 
"path": [5, 6, 8, 11, 14, 16, 17, 18, 19, 21], "choos": [5, 6, 8, 11, 16], "intel_ai_safeti": [5, 6, 8, 11, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], "subpackag": [5, 6, 8, 11], "plugin": [5, 6, 8, 11, 25], "wish": [5, 6, 8, 11], "all": [5, 6, 8, 11, 14, 17, 18, 21], "its": [5, 6, 8, 11, 14, 17], "e": [5, 6, 8, 11, 14, 18, 25], "g": [5, 6, 8, 11, 14, 18, 21, 25], "model_card_gen": [5, 6, 8, 11, 14, 20, 21, 25], "extra": [5, 6, 8, 11, 18], "b": [5, 6, 8, 11, 17], "just": [5, 6, 8, 11, 17, 18], "c": [5, 6, 8, 11, 14, 21, 25], "d": [5, 6, 8, 11, 17, 18, 19], "implement": [5, 6, 8, 11, 21], "f": [5, 6, 8, 11, 14, 17, 18, 19, 21, 23], "tensroflow": [5, 6, 8, 11], "bin": [5, 6, 8, 11], "we": [5, 6, 8, 11, 14, 16, 17, 18, 21, 25], "encourag": [5, 6, 8, 11], "virtualenv": [5, 6, 8, 11], "conda": [5, 6, 8, 11], "consist": [5, 6, 8, 11, 25], "manag": [5, 6, 8, 11, 14, 21], "wai": [5, 6, 8, 11, 25], "do": [5, 6, 8, 11, 17, 18, 21], "m": [5, 6, 8, 11, 14, 17, 21, 24], "xai_env": [5, 6, 8, 11], "Or": [5, 6, 8, 11], "config": [5, 6, 8, 11, 14, 17, 20, 21], "fals": [5, 6, 8, 11, 14, 17, 19, 20, 21, 23, 25], "mai": [5, 6, 8, 10, 11, 15, 18, 25, 26], "depend": [5, 6, 8, 11, 14, 16], "associ": [5, 6, 7, 8, 9, 11, 18, 25], "document": [5, 6, 8, 11, 14, 18, 25], "your": [5, 6, 7, 8, 9, 11, 16, 17, 18, 21], "wa": [5, 6, 8, 11, 16, 17, 18, 21], "success": [5, 6, 8, 11], "command": [5, 6, 8, 11], "displai": [5, 6, 8, 11, 18], "version": [5, 6, 7, 8, 9, 11, 14, 17, 20, 25], "from": [5, 6, 8, 11, 14, 15, 16, 17, 19, 22, 24, 25], "import": [5, 6, 8, 11, 14, 16, 19, 20, 22, 23, 24, 25], "print": [5, 6, 8, 11, 16, 17, 18, 19, 20, 22, 23, 24], "__version__": [5, 6, 8, 11, 22], "jupyt": [5, 6, 8, 11], "show": [5, 6, 8, 11, 17, 19, 21], "how": [5, 6, 8, 11, 14, 16, 18, 21], "variou": [5, 6, 8, 11, 16], "ml": [5, 6, 8, 11, 18, 21, 25], "domain": [5, 6, 8, 11, 15, 17], "team": [5, 6, 8, 11, 14, 21, 25], "track": [5, 6, 8, 11], "bug": [5, 6, 8, 11], "enhanc": [5, 6, 8, 11, 17, 20], "request": [5, 6, 8, 11, 16, 19], "befor": [5, 6, 8, 11, 18], "submit": [5, 6, 8, 11], "suggest": [5, 6, 8, 11], "search": [5, 6, 8, 11], "see": [5, 6, 8, 11, 17, 18, 21, 25], "ha": [5, 6, 8, 11, 17, 18, 20, 21], "alreadi": [5, 6, 8, 11, 17, 18, 25], "been": [5, 6, 8, 11, 14, 21], "other": [5, 6, 8, 10, 11, 15, 17, 25, 26], "brand": [5, 6, 8, 10, 11, 15, 26], "claim": [5, 6, 8, 10, 11, 15, 26], "properti": [5, 6, 8, 10, 11, 15, 26], "trademark": [5, 6, 8, 10, 11, 15, 26], "These": [5, 6, 7, 8, 9, 11, 21, 25], "script": [5, 6, 7, 8, 9, 11, 20], "intend": [5, 6, 7, 8, 9, 11, 14, 20, 21], "benchmark": [5, 6, 7, 8, 9, 11], "platform": [5, 6, 7, 8, 9, 11, 25], "For": [5, 6, 7, 8, 9, 11, 14, 16, 18, 21, 25], "ani": [5, 6, 7, 8, 9, 11, 14, 17, 25], "inform": [5, 6, 8, 11, 14, 17, 18, 20, 21, 25], "visit": [5, 6, 7, 8, 9, 11], "http": [5, 6, 7, 8, 9, 11, 14, 16, 17, 20, 21, 22, 25], "www": [5, 6, 7, 8, 9, 11, 18, 25], "blog": [5, 6, 7, 8, 9, 11], "commit": [5, 6, 7, 8, 9, 11, 21], "respect": [5, 6, 7, 8, 9, 11, 25], "human": [5, 6, 7, 8, 9, 11, 18], "right": [5, 6, 7, 8, 9, 11], "avoid": [5, 6, 7, 8, 9, 11], "complic": [5, 6, 7, 8, 9, 11], "abus": [5, 6, 7, 8, 9, 11, 25], "polici": [5, 6, 7, 8, 9, 11], "reflect": [5, 6, 7, 8, 9, 11], "global": [5, 6, 7, 8, 9, 11], "principl": [5, 6, 7, 8, 9, 11], "accordingli": [5, 6, 7, 8, 9, 11], "access": [5, 6, 7, 8, 9, 11, 25], "materi": [5, 6, 7, 8, 9, 11], "agre": [5, 6, 7, 8, 9, 11], "product": [5, 6, 7, 8, 9, 11], "applic": [5, 6, 7, 8, 9, 11, 14, 16, 17, 21], "caus": [5, 6, 7, 8, 9, 11], "violat": [5, 6, 7, 
8, 9, 11], "an": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 20, 21, 25], "internation": [5, 6, 7, 8, 9, 11], "recogn": [5, 6, 7, 8, 9, 11, 25], "under": [5, 6, 7, 8, 9, 11, 25], "apach": [5, 6, 7, 8, 9, 11], "2": [5, 6, 7, 8, 9, 11, 14, 16, 19, 21, 22, 25], "0": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], "To": [5, 6, 7, 8, 9, 11, 17, 21, 25], "extent": [5, 6, 7, 8, 9, 11], "referenc": [5, 6, 7, 8, 9, 11], "site": [5, 6, 7, 8, 9, 11, 25], "third": [5, 6, 7, 8, 9, 11], "parti": [5, 6, 7, 8, 9, 11], "indic": [5, 6, 7, 8, 9, 11, 21, 23, 25], "content": [5, 6, 7, 8, 9, 11, 14, 16], "doe": [5, 6, 7, 8, 9, 11, 14, 17, 25], "warrant": [5, 6, 7, 8, 9, 11], "accuraci": [5, 6, 7, 8, 9, 11, 14, 17, 18, 23, 24], "qualiti": [5, 6, 7, 8, 9, 11, 21], "By": [5, 6, 7, 8, 9, 11], "": [5, 6, 7, 8, 9, 11, 14, 16, 17, 18, 21, 22, 25], "term": [5, 6, 7, 8, 9, 11, 25], "compli": [5, 6, 7, 8, 9, 11], "expressli": [5, 7, 9, 11], "adequaci": [5, 7, 9, 11], "complet": [5, 7, 9, 11], "liabl": [5, 7, 9, 11], "error": [5, 7, 9, 11, 25], "omiss": [5, 7, 9, 11], "defect": [5, 7, 9, 11], "relianc": [5, 7, 9, 11], "thereon": [5, 7, 9, 11], "also": [5, 7, 9, 11, 17, 18, 25], "warranti": [5, 7, 9, 11], "non": [5, 7, 9, 11, 25], "infring": [5, 7, 9, 11], "liabil": [5, 7, 9, 11], "damag": [5, 7, 9, 11], "relat": [5, 7, 9, 11], "get": [6, 8, 16, 19, 23], "explain": [6, 7, 8, 9, 10, 26], "specif": [7, 9, 14, 16, 21], "run": [10, 17, 18, 21, 23, 26], "section": [14, 17], "subsect": 14, "decript": 14, "detail": [14, 25], "overview": [14, 20, 21, 25], "A": [14, 17, 20, 21, 25], "brief": 14, "one": [14, 17, 18, 21], "line": 14, "descript": [14, 21], "thorough": 14, "usag": 14, "owner": [14, 21, 25], "individu": [14, 21], "who": 14, "own": 14, "schema": [14, 17, 21], "licens": 14, "refer": [14, 18, 21, 25], "more": [14, 17, 18, 21, 25], "about": [14, 16, 18, 21, 25], "citat": [14, 20], "where": [14, 17, 18, 21, 25], "store": [14, 21], "graphic": [14, 20, 21, 24, 25], "paramet": [14, 17, 19, 20, 23], "architectur": 14, "dataset": [14, 16, 19, 20, 24, 25], "train": [14, 16, 17, 18, 19, 21], "evalu": [14, 17, 21, 22, 25], "format": [14, 17, 18, 20, 21, 23, 24], "kei": [14, 17, 18, 21, 25], "valu": [14, 17, 18, 20, 21, 22, 25], "output": [14, 17, 18, 19, 20, 21, 23], "quantit": 14, "being": [14, 18, 25], "colleciton": 14, "consider": [14, 21, 25], "what": [14, 17], "limit": [14, 25], "known": 14, "technic": 14, "kind": [14, 25], "should": [14, 17, 18, 21], "expect": [14, 18], "well": [14, 17, 25], "factor": 14, "might": 14, "degrad": 14, "tradeoff": 14, "ethic": [14, 21, 25], "environment": 14, "involv": 14, "step": [14, 17, 18, 19, 20, 21, 23, 25], "1": [14, 16, 19, 21, 22, 25], "com": [14, 16, 17, 21, 22, 25], "xai": [14, 21, 25], "cd": 14, "modelcardgen": [14, 20, 21, 25], "classmethod": [14, 18], "requir": [14, 17, 18, 21], "three": [14, 17], "return": [14, 17, 18, 19, 20, 21, 23, 24, 25], "data_set": [14, 20, 21, 25], "dict": [14, 21, 24], "dictionari": [14, 17, 21], "defin": [14, 17, 18, 21, 25], "tfrecord": [14, 21, 25], "raw": [14, 16, 18, 21, 25], "datafram": [14, 17, 18, 21], "eval": [14, 17, 18, 19, 21, 23, 25], "tensorflowdataset": [14, 21, 25], "dataset_path": [14, 21], "file": [14, 17, 18, 19, 21], "glob": 14, "pattern": [14, 21], "pytorchdataset": [14, 20], "pytorch_dataset": 14, "feature_nam": [14, 20, 24], "panda": [14, 17, 18, 20, 21, 22, 25], "pd": [14, 17, 18, 20, 21, 22, 25], "darafram": 14, "y_true": [14, 17, 23], "y_pred": [14, 17, 23], "ypred": 14, "model_path": [14, 20, 21, 25], "str": [14, 17, 18, 
21], "field": [14, 21], "repres": [14, 21, 25], "savedmodel": [14, 21], "eval_config": [14, 20, 21, 25], "tfma": [14, 21, 25], "evalconfig": [14, 21], "either": [14, 18], "proto": [14, 20, 21, 25], "string": [14, 17, 18, 21, 25], "pars": [14, 21], "exampl": [14, 16, 17, 18, 20, 21, 25], "let": [14, 16, 17, 22], "u": [14, 17, 21], "entitl": 14, "proxi": [14, 21], "found": [14, 17, 19, 21, 25], "notebook": [14, 17, 18, 20, 21], "compas_with_model_card_gen": 14, "tfx": 14, "ipynb": [14, 16], "model_spec": [14, 20, 21, 25], "label_kei": [14, 18, 20, 21], "ground": [14, 21], "truth": [14, 21], "label": [14, 17, 18, 19, 20, 21, 25], "metric_spec": 14, "comput": [14, 16, 18, 21, 25], "binaryaccuraci": [14, 20, 21, 25], "auc": [14, 20, 21], "confusionmatrixplot": [14, 20, 21, 25], "fairnessind": [14, 20, 21, 25], "slicing_spec": [14, 20, 21, 25], "accross": [14, 25], "datapoint": 14, "aggreg": 14, "group": [14, 25], "race": [14, 20, 21, 25], "is_recid": [14, 21], "metrics_spec": [14, 20, 21, 25], "class_nam": [14, 17, 20, 21, 25], "threshold": [14, 20, 21], "25": [14, 18, 20, 21], "5": [14, 17, 19, 21, 24, 25], "75": [14, 20, 21], "overal": [14, 25], "slice": [14, 25], "feature_kei": [14, 20, 21, 25], "option": [14, 20, 21], "include_default_metr": [14, 20, 21], "If": [14, 16, 17, 18], "must": 14, "prediction_kei": [14, 20], "popul": 14, "object": [14, 17, 18, 21, 23], "serial": [14, 21, 25], "deseri": 14, "json": 14, "v": [14, 18, 21], "model_card": [14, 20, 21, 25], "static": 14, "like": [14, 17, 21, 25], "those": [14, 25], "model_detail": [14, 20, 21, 25], "mc": [14, 20, 21, 25], "variabl": [14, 18, 22], "below": [14, 18], "ad": [14, 17, 25], "pre": [14, 18, 21], "long": 14, "coher": 14, "correct": [14, 17, 21], "offend": [14, 21], "profil": [14, 21], "altern": [14, 21], "sanction": [14, 21], "approxim": [14, 21, 25], "18": [14, 17, 21], "000": [14, 18, 20, 21], "crimin": [14, 21], "broward": [14, 21], "counti": [14, 21], "florida": [14, 21], "between": [14, 17, 21, 25], "januari": [14, 21], "2013": [14, 17, 21], "decemb": [14, 17, 21], "2014": [14, 21], "11": [14, 21], "uniqu": [14, 21, 24], "defend": [14, 21], "histori": [14, 21, 24], "demograph": [14, 20, 21], "likelihood": [14, 21], "reoffend": [14, 21], "contact": [14, 21, 25], "wadsworth": [14, 21], "vera": [14, 21], "piech": [14, 21], "2017": [14, 21, 25], "achiev": [14, 21], "through": [14, 20, 21, 25], "adversari": [14, 20, 21], "arxiv": [14, 21, 25], "org": [14, 18, 20, 21, 22, 25], "ab": [14, 21, 25], "1807": [14, 21], "00199": [14, 21], "chouldechova": [14, 21], "sell": [14, 21], "fairer": [14, 21], "accur": [14, 21], "whom": [14, 21], "1707": [14, 21], "00046": [14, 21], "berk": [14, 21], "et": [14, 17, 20, 21], "al": [14, 20, 21], "justic": [14, 21], "assess": [14, 21], "state": [14, 16, 20, 21], "art": [14, 16, 21], "1703": [14, 21], "09207": [14, 21], "quantitative_analysi": [14, 21, 25], "schema_vers": [14, 20, 21, 25], "here": [14, 16, 21], "doc": 14, "model_card_exampl": 14, "data_path": [14, 21], "mcg": [14, 20, 21, 25], "_data_path": [14, 21, 25], "_model_path": [14, 21, 25], "_eval_config": [14, 20, 21, 25], "pytest": 14, "custom": [14, 15, 17, 21], "mark": 14, "common": [14, 16, 17, 21, 25], "note": [14, 17, 18, 21, 25], "still": 14, "libarari": 14, "resnet50": [15, 17], "cv": 15, "neural": [15, 20], "network": [15, 20], "heart": 15, "diseas": 15, "numer": [15, 18], "categor": [15, 17], "multimod": 15, "breast": 15, "cancer": 15, "nlp": 15, "huggingfac": [15, 17], "transfer": 15, "nn": [15, 19, 20, 23], "newsgroup": 15, 
"fine": 15, "tune": 15, "classifi": [15, 25], "estim": [15, 22, 25], "toxic": 15, "goal": 16, "explor": 16, "now": [16, 17, 18], "support": 16, "pt_cam": [16, 17], "torch": [16, 17, 18, 19, 20, 23], "numpi": [16, 17, 18, 19, 20, 23, 24, 25], "np": [16, 17, 18, 19, 20, 23, 24, 25], "resnet50_weight": 16, "matplotlib": [16, 17, 19], "pyplot": [16, 17, 19], "plt": [16, 17, 19], "arrai": [16, 17, 20, 23, 24], "rgb": 16, "order": [16, 24], "pil": 16, "io": [16, 17, 21, 25], "bytesio": 16, "respons": 16, "githubusercont": 16, "jacobgil": 16, "grad": [16, 19], "master": 16, "png": 16, "open": [16, 25], "imshow": [16, 17, 19], "save": [16, 18, 19, 21], "imagenet1k_v2": 16, "our": [16, 17, 18, 20, 21, 22, 25], "target": [16, 20, 21, 22, 23, 24, 25], "layer": [16, 17, 21, 22, 24], "normal": [16, 17, 19], "last": [16, 18, 25], "convolut": 16, "simpli": 16, "give": 16, "some": [16, 17, 18, 21, 25], "idea": [16, 17], "choic": 16, "fasterrcnn": 16, "backbon": 16, "resnet18": 16, "50": [16, 17, 20, 23, 25], "layer4": [16, 17], "vgg": 16, "densenet161": 16, "target_lay": 16, "specifi": [16, 17, 18, 21], "integ": [16, 18], "index": [16, 17, 21, 23], "rang": [16, 17, 18, 19, 20, 23, 25], "num_of_class": 16, "base": [16, 17, 18, 21], "tabbi": 16, "cat": [16, 19, 23], "281": 16, "targetclass": 16, "none": [16, 17, 18, 19, 20, 21, 25], "highest": 16, "categori": [16, 24], "target_class": 16, "image_dim": 16, "224": [16, 17], "xgc": [16, 17], "x_gradcam": [16, 17], "cpu": [16, 17, 18, 19, 23], "project": 16, "tf_cam": 16, "inlin": [16, 19], "tf": [16, 17, 19, 21, 22, 25], "urllib": [16, 19], "urlopen": [16, 19], "kera": [16, 21, 22, 24, 25], "get_lay": 16, "conv5_block3_out": 16, "tfgc": 16, "tf_gradcam": 16, "ismailuddin": 16, "gradcam": [16, 17], "blob": 16, "solut": 17, "diagnosi": 17, "contrast": 17, "mammographi": 17, "radiologi": 17, "It": [17, 18, 21], "latest": 17, "v0": 17, "7": [17, 18, 21], "direct": [17, 21], "instal": [17, 18], "cach": [17, 18, 21], "dir": [17, 21], "nltk": 17, "docx2txt": 17, "openpyxl": 17, "xmlfile": 17, "transform": [17, 19, 20, 21, 22, 23, 24], "evalpredict": 17, "trainingargu": [17, 18], "pipelin": 17, "tlt": [17, 18], "dataset_factori": 17, "model_factori": 17, "plotli": 17, "express": 17, "px": 17, "subplot": 17, "make_subplot": 17, "graph_object": 17, "go": [17, 25], "shap": [17, 18, 22], "warn": [17, 18, 22, 24], "filterwarn": [17, 18, 22, 24], "ignor": [17, 18, 22, 24, 25], "root": [17, 19], "annot": [17, 25], "locat": [17, 18, 21], "dataset_dir": [17, 18], "join": [17, 18, 19, 21], "environ": [17, 18, 24], "els": [17, 18, 19, 21], "home": [17, 18, 21], "output_dir": [17, 18], "download": [17, 18, 19, 22, 23], "wiki": 17, "cancerimagingarch": 17, "net": [17, 19, 20, 23], "page": [17, 18, 25], "viewpag": 17, "action": 17, "pageid": 17, "109379611": 17, "brca": 17, "prepare_nlp_data": 17, "py": [17, 21], "data_root": 17, "prepare_vision_data": 17, "jpg": 17, "arrang": 17, "subfold": 17, "each": 17, "csv": [17, 18, 21, 22], "final": 17, "look": [17, 22], "someth": 17, "pkg": 17, "medic": 17, "zip": [17, 18, 21, 24], "manual": 17, "xlsx": 17, "radiology_hand_drawn_segmentations_v2": 17, "vision_imag": 17, "benign": 17, "p100_l_cm_cc": 17, "p100_l_cm_mlo": 17, "malign": 17, "p102_r_cm_cc": 17, "p102_r_cm_mlo": 17, "p100_r_cm_cc": 17, "p100_r_cm_mlo": 17, "input": [17, 18, 19, 21, 24, 25], "suppli": 17, "accord": 17, "source_image_path": 17, "image_path": 17, "source_annotation_path": 17, "annotation_path": 17, "workload": 17, "assign": [17, 25], "subject": 17, "record": [17, 
21], "entir": 17, "set": [17, 18, 19, 20, 21, 23, 25], "test": [17, 18, 24], "random": 17, "stratif": 17, "copi": [17, 20, 21], "data_util": 17, "split_imag": 17, "split_annot": 17, "grouped_image_path": 17, "_group": 17, "isdir": 17, "exist": [17, 18, 19, 25], "train_image_path": 17, "test_image_path": 17, "file_dir": 17, "file_nam": 17, "split": [17, 18, 21, 24, 25], "grouped_annotation_path": 17, "splitext": 17, "isfil": [17, 19], "train_dataset": [17, 18, 20, 25], "test_dataset": 17, "to_csv": [17, 21], "4": [17, 19, 21], "_test": 17, "train_annotation_path": 17, "test_annotation_path": 17, "label_col": 17, "column": [17, 18, 21], "call": [17, 21], "factori": 17, "pretrain": [17, 18], "hub": [17, 25], "load": [17, 18, 19, 21], "get_model": 17, "function": [17, 18, 19, 20, 21, 23], "later": [17, 18], "default": [17, 18], "viz_model": 17, "model_nam": [17, 18], "train_viz_dataset": 17, "load_dataset": [17, 18], "use_cas": 17, "image_classif": 17, "test_viz_dataset": 17, "onc": 17, "cell": [17, 18], "preprocess": [17, 18], "subset": [17, 18, 21, 24, 25], "resiz": 17, "them": [17, 21, 25], "match": [17, 21, 23], "batch": [17, 18, 19, 23, 25], "batch_siz": [17, 18, 19, 21, 22, 23, 24], "16": [17, 19], "shuffl": [17, 18, 19, 21, 23], "shuffle_split": 17, "train_pct": 17, "80": [17, 18], "val_pct": 17, "seed": [17, 20], "image_s": 17, "take": [17, 18, 21], "verifi": [17, 18], "correctli": 17, "distribut": [17, 21, 25], "amongst": 17, "confirm": 17, "themselv": 17, "revers": 17, "def": [17, 18, 19, 20, 21, 23, 24, 25], "label_map_func": 17, "elif": 17, "reverse_label_map": 17, "train_label_count": 17, "x": [17, 18, 19, 20, 21, 22, 23, 24, 25], "y": [17, 18, 21, 22, 25], "train_subset": 17, "valid_label_count": 17, "validation_subset": 17, "test_label_count": 17, "datsaet": 17, "distrubt": 17, "form": [17, 21, 25], "type": [17, 18, 20, 21, 25], "fig": 17, "row": [17, 21], "col": 17, "spec": [17, 21], "subplot_titl": 17, "add_trac": 17, "pie": 17, "update_layout": 17, "height": 17, "600": 17, "width": 17, "800": 17, "title_text": 17, "get_exampl": 17, "n": [17, 23, 25], "6": [17, 19, 21, 25], "loader": 17, "util": [17, 18, 19, 20, 22, 23, 25], "dataload": [17, 18, 19, 23], "example_imag": 17, "enumer": [17, 19, 23], "label_nam": [17, 18], "int": [17, 18], "len": [17, 18, 20, 23, 24], "append": [17, 21], "break": 17, "plot": [17, 23], "figur": 17, "figsiz": 17, "12": 17, "suptitl": 17, "tensor": [17, 18, 21, 25], "size": [17, 18, 21], "train_example_imag": 17, "idx": [17, 20], "img": [17, 19], "add_subplot": 17, "axi": [17, 18, 20, 21, 23, 24], "off": [17, 24], "tight_layout": 17, "ylabel": 17, "fontsiz": 17, "tick_param": 17, "bottom": 17, "labelbottom": 17, "left": 17, "labelleft": 17, "movedim": 17, "detach": [17, 18, 19], "astyp": 17, "uint8": 17, "valid_example_imag": 17, "vector": [17, 18, 25], "dens": [17, 21, 22, 24], "number": [17, 18, 21], "compil": [17, 21, 22], "epoch": [17, 18, 19, 20, 22, 23, 24], "argument": [17, 18], "extra_lay": 17, "insert": 17, "addit": [17, 25], "1024": 17, "512": [17, 18, 25], "first": [17, 18, 19, 21], "neuron": 17, "second": [17, 18, 20, 21], "viz_histori": 17, "ipex_optim": [17, 18], "validation_viz_metr": 17, "test_viz_metr": 17, "saved_model_dir": 17, "export": [17, 21], "analyz": 17, "confus": [17, 21], "matrix": 17, "roc": 17, "pr": 17, "curv": 17, "identifi": [17, 21, 25], "exibit": 17, "bia": [17, 25], "scipi": [17, 18], "special": [17, 18], "softmax": [17, 18, 19, 20, 23, 24], "logit": [17, 18], "convert": [17, 20, 21], "probabl": [17, 19, 22, 23, 
24], "_model": 17, "viz_cm": 17, "confusion_matrix": [17, 23, 24], "plotter": [17, 23, 24], "pr_curv": [17, 23, 24], "roc_curv": [17, 23, 24], "hot": 17, "encod": [17, 18], "y_pred_label": 17, "argmax": [17, 18, 23, 24], "mal_idx": 17, "tolist": [17, 18, 20], "nor_pr": 17, "ben_pr": 17, "mal": 17, "were": [17, 18, 21, 25], "misclassifi": 17, "ben": 17, "mal_classified_as_nor": 17, "intersect": [17, 23], "mal_classified_as_ben": 17, "nor": 17, "mal_as_nor_imag": 17, "mal_as_ben_imag": 17, "skimag": 17, "14": [17, 21], "mal_as_nor": 17, "calcul": 17, "0th": 17, "1st": 17, "10th": 17, "sinc": [17, 18, 21], "thei": [17, 21, 25], "seem": 17, "tnhe": 17, "clearest": 17, "tumor": 17, "final_image_dim": 17, "targetlay": 17, "mal_as_ben": 17, "5th": 17, "11th": 17, "clinic": 17, "bert": [17, 18], "part": [17, 25], "up": [17, 18, 23, 25], "seq_length": 17, "64": [17, 24], "quantization_criterion": 17, "05": 17, "quantization_max_tri": 17, "nlp_model": 17, "train_file_dir": 17, "train_file_nam": 17, "train_nlp_dataset": 17, "text_classif": 17, "dataset_nam": [17, 18], "csv_file_nam": 17, "header": 17, "true": [17, 18, 19, 20, 22, 23, 24], "shuffle_fil": 17, "exclude_col": 17, "test_file_dir": 17, "test_file_nam": 17, "test_nlp_dataset": 17, "hub_nam": 17, "max_length": [17, 18], "67": 17, "33": [17, 20, 21], "across": [17, 25], "sure": 17, "similarli": [17, 18], "punkt": 17, "get_mc_df": 17, "words_list": 17, "ignored_word": 17, "most": [17, 18, 21], "frequency_dict": 17, "freqdist": 17, "most_common": 17, "500": [17, 20, 25], "final_fd": 17, "frequenc": 17, "cnt": 17, "punctuat": 17, "loc": 17, "df": [17, 20, 22], "read_csv": [17, 21, 22], "symptom": 17, "mal_text": 17, "nor_text": 17, "ben_text": 17, "mal_token": 17, "word_token": 17, "nor_token": 17, "ben_token": 17, "necesarri": 17, "mal_fd": 17, "nor_fd": 17, "ben_fd": 17, "bar": [17, 18], "color": 17, "titl": [17, 18, 19, 25], "updat": [17, 18], "layout_coloraxis_showscal": 17, "trainer": [17, 21], "desir": 17, "nativ": [17, 20], "loop": [17, 18, 19], "invok": 17, "use_train": 17, "set_se": 17, "nlp_histori": 17, "isn": 17, "t": [17, 18, 21, 25], "train_nlp_metr": 17, "test_nlp_metr": 17, "much": [17, 21, 25], "better": 17, "than": [17, 18, 20, 21], "nonetheless": 17, "similar": 17, "mistak": [17, 21], "flag": 17, "logit_predict": 17, "return_raw": 17, "nlp_cm": 17, "mal_classified_as_ben_text": 17, "get_text": 17, "input_id": 17, "encoded_input": [17, 18], "_token": 17, "pad": [17, 18], "return_tensor": [17, 18], "pt": [17, 18, 19, 20], "partition_explain": [17, 18, 24], "partition_text_explain": [17, 18, 24], "r": [17, 18, 24], "w": [17, 18, 24], "faster": 17, "infer": 17, "want": [17, 18, 21], "intel_extension_for_transform": 17, "nlptrainer": 17, "optimizedmodel": 17, "quantizationconfig": 17, "nlptk_metric": 17, "tune_metr": 17, "eval_accuraci": 17, "greater_is_bett": 17, "is_rel": 17, "criterion": [17, 19, 20], "weight_ratio": 17, "quantization_config": 17, "approach": 17, "posttrainingdynam": 17, "max_trial": 17, "compute_metr": [17, 18], "p": [17, 21], "pred": [17, 23, 24], "isinst": [17, 18, 21], "tupl": [17, 18, 21], "label_id": 17, "float32": [17, 25], "mean": 17, "item": [17, 18, 19, 20, 21, 23], "eval_dataset": [17, 18], "quantized_model": 17, "quant_config": 17, "result": [17, 18, 22], "eval_acc": 17, "5f": 17, "save_model": 17, "quantized_bert": 17, "save_pretrain": [17, 18], "same": [17, 18], "stock": 17, "counterpart": [17, 21], "howev": 17, "differ": [17, 18], "quant_cm": 17, "khale": 17, "helal": 17, "alfarghali": 17, 
"mokhtar": 17, "elkorani": 17, "el": 17, "kassa": 17, "h": 17, "fahmi": 17, "digit": 17, "databas": [17, 18], "low": [17, 21], "energi": 17, "subtract": 17, "spectral": 17, "2021": [17, 25], "archiv": [17, 18, 25], "doi": [17, 20, 25], "7937": 17, "29kw": 17, "ae92": 17, "diagnost": 17, "artifici": 17, "intellig": 17, "research": [17, 21, 25], "2022": [17, 20], "scientif": 17, "volum": [17, 25], "1038": 17, "s41597": 17, "022": 17, "01238": 17, "clark": 17, "k": [17, 18], "vendt": 17, "smith": 17, "freymann": 17, "j": [17, 19], "kirbi": 17, "koppel": 17, "moor": 17, "phillip": 17, "maffitt": 17, "pringl": 17, "tarbox": 17, "l": [17, 18, 25], "prior": 17, "maintain": 17, "oper": [17, 21], "journal": [17, 25], "26": 17, "pp": 17, "1045": 17, "1057": 17, "1007": 17, "s10278": 17, "013": 17, "9622": 17, "demonstr": [18, 21], "catalog": [18, 25], "extend": [18, 25], "optim": [18, 19, 20, 21, 22, 23, 25], "boost": 18, "pleas": [18, 19], "pytorch_requir": 18, "txt": 18, "execut": 18, "assum": [18, 21], "readm": 18, "md": 18, "intel_extension_for_pytorch": 18, "ipex": 18, "log": [18, 21, 23], "sy": [18, 24], "pickl": 18, "tqdm": 18, "auto": [18, 24], "adamw": 18, "classlabel": 18, "load_metr": 18, "datasets_log": 18, "transformers_log": 18, "automodelforsequenceclassif": 18, "autotoken": 18, "get_schedul": 18, "file_util": 18, "download_and_extract_zip_fil": 18, "stream": 18, "stdout": 18, "handler": 18, "_get_library_root_logg": 18, "setstream": 18, "sh": 18, "streamhandl": 18, "set_verbosity_error": 18, "transformers_no_advisory_warn": 18, "albert": 18, "v2": 18, "uncas": 18, "distilbert": 18, "finetun": 18, "sst": 18, "english": [18, 25], "roberta": 18, "anoth": [18, 21], "local": [18, 21], "end": 18, "package_refer": 18, "declar": 18, "from_pretrain": 18, "textclassificationdata": 18, "along": 18, "helper": 18, "__init__": [18, 19, 20, 23], "self": [18, 19, 20, 23], "sentence1_kei": 18, "sentence2_kei": 18, "class_label": 18, "train_d": 18, "eval_d": 18, "tokenize_funct": 18, "arg": [18, 21], "sentenc": 18, "truncat": 18, "tokenize_dataset": 18, "appli": [18, 21], "tokenized_dataset": 18, "remov": 18, "raw_text_column": 18, "remove_column": 18, "define_train_eval_split": 18, "train_split_nam": 18, "eval_split_nam": 18, "train_siz": 18, "eval_s": 18, "select": 18, "get_label_nam": 18, "rais": 18, "valueerror": 18, "display_sampl": 18, "split_nam": 18, "sample_s": 18, "sampl": [18, 20, 24], "sentence1_sampl": 18, "sentence2_sampl": 18, "label_sampl": 18, "dataset_sampl": 18, "style": 18, "hide_index": 18, "onlin": [18, 25], "avail": [18, 25], "next": [18, 19], "movi": 18, "multipl": [18, 19], "time": [18, 19], "speed": 18, "unsupervis": 18, "so": [18, 21, 25], "hfdstextclassificationdata": 18, "initi": [18, 25], "param": 18, "when": [18, 25], "quicker": 18, "debug": 18, "sentence1": 18, "sentence2": 18, "init": 18, "cache_dir": 18, "train_dataset_s": 18, "1000": [18, 21, 25], "eval_dataset_s": 18, "vari": 18, "skip": 18, "continu": 18, "singl": [18, 21], "tab": 18, "separ": 18, "ham": 18, "messag": 18, "tsv": 18, "pass": 18, "delimit": 18, "etc": [18, 25], "customcsvtextclassificationdata": 18, "data_fil": 18, "train_perc": 18, "8": [18, 20, 23, 25], "eval_perc": 18, "map_funct": 18, "intial": 18, "percentag": 18, "reduc": [18, 25], "identif": 18, "purpos": 18, "decim": 18, "convers": [18, 25], "combin": 18, "cannot": 18, "greater": [18, 20], "column_nam": 18, "num_class": [18, 20], "train_test_split": [18, 21, 22], "test_siz": [18, 21, 22], "modifi": 18, "csv_path": 18, "point": [18, 19], 
"dataset_url": [18, 25], "ic": 18, "uci": [18, 20], "edu": 18, "00228": 18, "smsspamcollect": 18, "csv_name": 18, "renam": [18, 21], "know": 18, "renamed_csv": 18, "don": 18, "extract": 18, "translat": 18, "map_spam": 18, "constructor": 18, "appropri": 18, "textclassificationmodel": 18, "num_label": [18, 20], "training_arg": 18, "bool": 18, "devic": [18, 23], "given": [18, 21], "otherwis": [18, 21, 25], "lr_schedul": 18, "lambdalr": 18, "num_train_epoch": 18, "callabl": 18, "shuffle_sampl": 18, "becaus": [18, 21, 25], "rename_column": 18, "set_format": 18, "train_dataload": 18, "unpack": 18, "progress": 18, "num_training_step": 18, "progress_bar": 18, "loss": [18, 19, 20, 21, 22, 23, 25], "backward": [18, 19, 20, 23], "zero_grad": [18, 19, 20, 23], "eval_dataload": 18, "no_grad": [18, 23], "dim": [18, 20, 23], "add_batch": 18, "raw_input_text": 18, "_": [18, 19], "max": [18, 19, 23, 24], "prediction_label": 18, "int2str": 18, "result_list": 18, "raw_text_input": 18, "result_df": 18, "cl": [18, 25], "simplic": 18, "checkpoint": 18, "previou": [18, 25], "resum": 18, "overwrite_output_dir": 18, "overwrit": 18, "previous": 18, "head": [18, 25], "origin": [18, 19, 21, 25], "replac": [18, 21], "learning_r": [18, 21, 25], "5e": 18, "lr": [18, 19, 20, 22, 23], "linear": [18, 19, 20, 23], "num_warmup_step": 18, "eval_pr": 18, "evalut": 18, "saw": 18, "after": [18, 21], "reloaded_model": 18, "okai": 18, "finish": [18, 19], "wouldn": [18, 21], "watch": 18, "again": 18, "bad": 18, "definit": 18, "my": 18, "favorit": 18, "highli": 18, "recommend": 18, "text_for_shap": 18, "inproceed": [18, 25], "maa": 18, "etal": [18, 25], "2011": 18, "acl": 18, "hlt2011": 18, "author": [18, 21, 25], "andrew": 18, "dali": 18, "raymond": 18, "pham": 18, "peter": 18, "huang": 18, "dan": 18, "ng": 18, "pott": 18, "christoph": 18, "sentiment": 18, "booktitl": [18, 25], "proceed": [18, 20, 25], "49th": 18, "annual": 18, "meet": 18, "linguist": [18, 25], "languag": [18, 25], "technologi": [18, 25], "month": [18, 25], "june": 18, "year": [18, 25], "address": [18, 25], "portland": 18, "oregon": 18, "usa": 18, "publish": [18, 21, 25], "142": 18, "150": [18, 20], "url": [18, 25], "aclweb": 18, "anthologi": 18, "p11": 18, "1015": 18, "misc": [18, 24, 25], "misc_sms_spam_collection_228": 18, "almeida": 18, "tiago": 18, "2012": 18, "howpublish": 18, "totensor": [19, 23], "trainset": 19, "cifar10": 19, "trainload": 19, "num_work": 19, "testset": 19, "testload": 19, "plane": 19, "car": 19, "bird": 19, "deer": 19, "dog": 19, "frog": 19, "hors": 19, "ship": 19, "truck": 19, "super": [19, 20, 23], "conv1": 19, "conv2d": [19, 23], "pool1": 19, "maxpool2d": [19, 23], "pool2": 19, "conv2": 19, "fc1": 19, "120": 19, "fc2": 19, "84": 19, "fc3": 19, "relu1": 19, "relu": [19, 20, 21, 22, 23, 24], "relu2": 19, "relu3": 19, "relu4": 19, "forward": [19, 20, 23], "view": [19, 23, 25], "crossentropyloss": [19, 20], "sgd": [19, 23], "001": [19, 20], "momentum": [19, 23], "use_pretrained_model": 19, "cifar_torchvis": 19, "load_state_dict": 19, "over": [19, 21, 25], "running_loss": 19, "zero": 19, "statist": [19, 21], "2000": 19, "1999": 19, "everi": 19, "mini": 19, "5d": 19, "3f": [19, 24], "state_dict": 19, "transpos": 19, "unnorm": 19, "npimg": 19, "datait": 19, "iter": 19, "make_grid": 19, "groundtruth": 19, "ind": 19, "unsqueez": 19, "requires_grad": 19, "pt_attribut": 19, "captum": 19, "attr": 19, "viz": 19, "handel": 19, "original_imag": 19, "visualize_image_attr": 19, "entri": 19, "salienc": 19, "integratedgradi": 19, "integr": 19, "deeplift": 
19, "deep": 19, "lift": 19, "smoothgrad": 19, "smooth": 19, "featureabl": 19, "ablat": 19, "prerpocess": 20, "multilay": 20, "sklearn": [20, 21, 22, 24], "fetch_openml": 20, "categorical_feature_kei": [20, 21], "workclass": 20, "marit": 20, "statu": 20, "occup": 20, "relationship": 20, "sex": [20, 21], "countri": 20, "numeric_feature_kei": 20, "ag": [20, 21, 22], "capit": 20, "hour": 20, "per": 20, "week": 20, "educ": 20, "num": 20, "drop_column": 20, "fnlwgt": 20, "data_id": 20, "1590": 20, "as_fram": 20, "raw_data": 20, "adult_data": 20, "get_dummi": 20, "50k": 20, "to_numpi": 20, "adultdataset": 20, "face": 20, "landmark": 20, "make_input_tensor": 20, "make_label_tensor": 20, "__len__": 20, "adult_df": 20, "from_numpi": 20, "floattensor": 20, "label_arrai": 20, "__getitem__": 20, "is_tensor": 20, "adult_dataset": 20, "adultnn": 20, "num_featur": 20, "lin1": 20, "lin2": 20, "lin3": 20, "lin4": 20, "lin5": 20, "lin6": 20, "lin10": 20, "prelu": 20, "dropout": [20, 23], "xin": 20, "manual_se": [20, 23], "reproduc": 20, "feature_s": 20, "linear1": 20, "sigmoid1": 20, "sigmoid": [20, 22], "linear2": 20, "sigmoid2": 20, "linear3": 20, "lin1_out": 20, "sigmoid_out1": 20, "sigmoid_out2": 20, "num_epoch": [20, 23], "adam": [20, 21, 22, 24], "input_tensor": 20, "label_tensor": 20, "2f": 20, "offlin": 20, "jit": 20, "adult_model": 20, "writefil": [20, 21, 25], "confusionmatrixatthreshold": 20, "sex_femal": 20, "sex_mal": 20, "date": 20, "08": 20, "01": [20, 23, 25], "simoudi": 20, "evangelo": 20, "jiawei": 20, "han": 20, "usama": 20, "fayyad": 20, "intern": [20, 25], "confer": [20, 25], "knowledg": 20, "discoveri": 20, "mine": 20, "No": 20, "conf": 20, "960830": 20, "aaai": 20, "press": 20, "menlo": 20, "park": 20, "ca": 20, "unit": 20, "1996": 20, "friedler": 20, "sorel": 20, "compar": [20, 21], "studi": [20, 21], "intervent": 20, "account": 20, "transpar": 20, "2019": [20, 25], "1145": 20, "3287560": 20, "3287589": 20, "lahoti": 20, "preethi": 20, "without": [20, 25], "reweight": 20, "advanc": 20, "process": [20, 21], "2020": [20, 25], "728": 20, "740": 20, "task": [20, 25], "whether": [20, 21], "person": 20, "salari": 20, "less": 20, "export_html": [20, 21], "census_mc": 20, "eval_input_reciever_fn": 21, "userdefin": 21, "seral": 21, "dep": 21, "docker": 21, "tuner": 21, "kubernet": 21, "29": 21, "metadata": [21, 25], "portpick": 21, "mkdir": 21, "tempfil": [21, 25], "model_select": [21, 22], "genor": 21, "literature1": 21, "techniqu": 21, "remedi": 21, "around": 21, "___": 21, "setup": 21, "filepath": 21, "_data_root": 21, "mkdtemp": 21, "prefix": 21, "storag": [21, 22, 25], "googleapi": [21, 22, 25], "compas_dataset": 21, "cox": 21, "violent": 21, "_data_filepath": 21, "_compas_df": 21, "simplii": 21, "_column_nam": 21, "c_charge_desc": 21, "c_charge_degre": 21, "c_days_from_compa": 21, "juv_fel_count": 21, "juv_misd_count": 21, "juv_other_count": 21, "priors_count": 21, "r_days_from_arrest": 21, "vr_charge_desc": 21, "score_text": 21, "predction": 21, "_ground_truth": 21, "_compas_scor": 21, "labl": 21, "boolean": 21, "crime": 21, "drop": 21, "dropna": 21, "high": 21, "medium": [21, 25], "ground_truth": 21, "compas_scor": 21, "focus": 21, "african": 21, "american": 21, "caucasian": 21, "isin": 21, "x_train": [21, 22, 24], "x_test": [21, 22, 23, 24], "random_st": [21, 22], "42": [21, 22], "back": 21, "na_rep": 21, "opt": 21, "artifact": 21, "_transformer_path": 21, "tensorflow_transform": 21, "tft": 21, "int_feature_kei": 21, "within": 21, "max_categorical_feature_valu": 21, "513": 21, 
"transformed_nam": 21, "_xf": 21, "preprocessing_fn": 21, "callback": 21, "compute_and_apply_vocabulari": 21, "_fill_in_miss": 21, "vocab_filenam": 21, "scale_to_z_scor": 21, "charg": 21, "tensor_valu": 21, "miss": 21, "sparsetensor": 21, "fill": 21, "rank": 21, "Its": 21, "shape": [21, 24, 25], "dimens": 21, "spars": 21, "default_valu": 21, "dtype": [21, 25], "sparse_tensor": 21, "dense_shap": 21, "dense_tensor": 21, "to_dens": 21, "squeez": 21, "_trainer_path": 21, "tensorflow_model_analysi": [21, 25], "tf_metadata": 21, "schema_util": 21, "_batch_siz": 21, "_learning_r": 21, "00001": 21, "_max_checkpoint": 21, "_save_checkpoint_step": 21, "999": 21, "_gzip_reader_fn": 21, "filenam": [21, 25], "reader": 21, "read": 21, "gzip": 21, "ed": 21, "nest": 21, "structur": 21, "typespec": 21, "element": 21, "tfrecorddataset": [21, 25], "compression_typ": 21, "consid": 21, "_get_raw_feature_spec": 21, "whose": 21, "fixedlenfeatur": [21, 25], "varlenfeatur": [21, 25], "sparsefeatur": 21, "schema_as_feature_spec": 21, "feature_spec": 21, "_example_serving_receiver_fn": 21, "tf_transform_output": 21, "serv": 21, "tftransformoutput": 21, "graph": 21, "raw_feature_spec": 21, "pop": [21, 22], "raw_input_fn": 21, "build_parsing_serving_input_receiver_fn": 21, "serving_input_receiv": 21, "transformed_featur": 21, "transform_raw_featur": 21, "servinginputreceiv": 21, "receiver_tensor": [21, 25], "_eval_input_receiver_fn": 21, "everyth": 21, "evalinputreceiv": [21, 25], "untransform": 21, "notic": 21, "serialized_tf_exampl": [21, 25], "compat": [21, 25], "v1": [21, 25], "placehold": [21, 25], "input_example_tensor": 21, "parse_exampl": [21, 25], "_input_fn": 21, "200": 21, "input_fn": [21, 25], "transformed_feature_spec": 21, "experiment": 21, "make_batched_features_dataset": 21, "make_one_shot_iter": 21, "get_next": 21, "re": [21, 24], "_keras_model_build": 21, "feature_column": [21, 25], "feature_layer_input": 21, "numeric_column": 21, "num_bucket": 21, "indicator_column": 21, "categorical_column_with_ident": 21, "int32": [21, 25], "feature_columns_input": 21, "densefeatur": 21, "feature_layer_output": 21, "dense_lay": 21, "dense_1": 21, "dense_2": 21, "meanabsoluteerror": 21, "trainer_fn": 21, "hparam": 21, "level": 21, "hyperparamet": 21, "pair": 21, "hold": 21, "train_spec": 21, "eval_spec": 21, "eval_input_receiver_fn": [21, 25], "transform_output": 21, "train_input_fn": [21, 25], "lambda": 21, "train_fil": 21, "eval_input_fn": 21, "eval_fil": 21, "trainspec": 21, "max_step": 21, "train_step": 21, "serving_receiver_fn": 21, "finalexport": 21, "evalspec": 21, "eval_step": 21, "run_config": 21, "runconfig": 21, "save_checkpoints_step": 21, "keep_checkpoint_max": 21, "model_dir": 21, "serving_model_dir": 21, "model_to_estim": 21, "keras_model": 21, "receiv": 21, "receiver_fn": 21, "_pipelie_path": 21, "absl": 21, "csvexamplegen": 21, "pusher": 21, "schemagen": 21, "statisticsgen": 21, "executor": 21, "dsl": 21, "executor_spec": 21, "orchestr": 21, "pusher_pb2": 21, "trainer_pb2": 21, "example_gen_pb2": 21, "local_dag_runn": 21, "localdagrunn": 21, "_pipeline_nam": 21, "_compas_root": 21, "inject": 21, "logic": 21, "successfulli": 21, "_transformer_fil": 21, "_trainer_fil": 21, "listen": 21, "server": 21, "_serving_model_dir": 21, "serving_model": 21, "chicago": 21, "taxi": 21, "rel": 21, "anywher": 21, "filesystem": 21, "_tfx_root": 21, "_pipeline_root": 21, "sqlite": 21, "db": 21, "_metadata_path": 21, "create_pipelin": 21, "pipeline_nam": 21, "pipeline_root": 21, "preprocessing_module_fil": 21, 
"trainer_module_fil": 21, "train_arg": 21, "trainarg": 21, "eval_arg": 21, "evalarg": 21, "metadata_path": 21, "schema_path": 21, "compass": 21, "bring": 21, "example_gen": 21, "input_bas": 21, "input_config": 21, "statistics_gen": 21, "schema_gen": 21, "importschemagen": 21, "schema_fil": 21, "module_fil": 21, "abspath": 21, "trainer_arg": 21, "transformed_exampl": 21, "custom_executor_spec": 21, "executorclassspec": 21, "transform_graph": 21, "candid": 21, "baselin": 21, "modelspec": 21, "slicingspec": 21, "metricsspec": 21, "metricconfig": 21, "metadata_connection_config": 21, "sqlite_metadata_connection_config": 21, "__name__": 21, "__main__": 21, "set_verbos": 21, "info": 21, "num_step": 21, "10000": 21, "5000": 21, "ml_metadata": 21, "metadata_stor": 21, "metadata_store_pb2": 21, "connection_config": 21, "connectionconfig": 21, "filename_uri": 21, "connection_mod": 21, "readwrite_opencr": 21, "metadatastor": 21, "get_artifacts_by_typ": 21, "uri": 21, "modelevalu": 21, "gz": 21, "_project_path": 21, "judg": 21, "parol": 21, "offic": 21, "determin": 21, "bail": 21, "grant": 21, "2016": 21, "articl": [21, 25], "propublica": 21, "incorrectli": 21, "would": 21, "higher": [21, 25], "rate": [21, 25], "white": 21, "made": [21, 25], "opposit": 21, "incorrect": 21, "went": 21, "bias": [21, 25], "due": 21, "uneven": 21, "disproportion": 21, "appear": [21, 25], "frequent": [21, 25], "literatur": 21, "concern": 21, "develop": [21, 25], "trial": 21, "detent": 21, "partnership": 21, "algorithm": 21, "multi": [21, 25], "stakehold": 21, "organ": 21, "googl": [21, 25], "member": 21, "guidelin": 21, "compas_plotli": 21, "standardscal": 22, "file_url": 22, "prepar": 22, "list_numer": 22, "thalach": 22, "trestbp": 22, "chol": 22, "oldpeak": 22, "y_train": [22, 24], "y_test": [22, 24], "scaler": 22, "fit": [22, 24], "sequenti": [22, 23, 24], "binary_crossentropi": 22, "15": [22, 24], "13": 22, "validation_data": [22, 24], "plot_model": 22, "show_shap": 22, "rankdir": 22, "particular": 22, "patient": 22, "had": 22, "1f": 22, "percent": 22, "100": [22, 23, 25], "ke": 22, "kernel_explain": 22, "iloc": 22, "101": 22, "128": [23, 24], "conv_lay": 23, "kernel_s": 23, "fc_layer": 23, "320": 23, "train_load": 23, "batch_idx": 23, "nll_loss": 23, "0f": 23, "tloss": 23, "6f": 23, "mnist_data": 23, "test_load": 23, "test_loss": 23, "empti": 23, "28": 23, "sum": [23, 25], "keepdim": 23, "eq": 23, "view_a": 23, "ntest": 23, "averag": 23, "4f": 23, "cm": [23, 24], "pred_idx": 23, "gt_idx": 23, "deviz": 23, "deep_explain": 23, "instati": 23, "grviz": 23, "gradient_explain": 23, "kmp_warn": 24, "all_categori": 24, "alt": 24, "atheism": 24, "comp": 24, "ibm": 24, "pc": 24, "mac": 24, "forsal": 24, "rec": [24, 25], "motorcycl": 24, "sport": 24, "basebal": 24, "hockei": 24, "sci": 24, "crypt": 24, "electron": 24, "med": 24, "space": 24, "soc": 24, "religion": [24, 25], "christian": 24, "talk": 24, "polit": 24, "gun": 24, "mideast": 24, "selected_categori": 24, "x_train_text": 24, "fetch_20newsgroup": 24, "return_x_i": 24, "x_test_text": 24, "feature_extract": 24, "countvector": 24, "tfidfvector": 24, "max_featur": 24, "50000": 24, "concaten": 24, "toarrai": 24, "create_model": 24, "summari": 24, "sparse_categorical_crossentropi": 24, "256": 24, "accuracy_scor": 24, "train_pr": 24, "test_pr": 24, "x_batch_text": 24, "x_batch": 24, "preds_proba": 24, "actual": 24, "make_predict": 24, "shap_valu": 24, "max_displai": 24, "explan": 24, "argsort": 24, "flip": 24, "waterfall_plot": 24, "lower": 24, "initj": 24, "force_plot": 24, 
"base_valu": 24, "out_nam": 24, "adapt": 25, "datetim": 25, "tensorflow_hub": 25, "tensorflow_data_valid": 25, "tfdv": 25, "addon": 25, "post_export_metr": 25, "fairness_ind": 25, "widget_view": 25, "civilcom": 25, "primari": 25, "seven": 25, "crowd": 25, "worker": 25, "tag": 25, "fraction": 25, "main": 25, "civilcommentsident": 25, "releas": 25, "kaggl": 25, "come": 25, "independ": 25, "2015": 25, "world": 25, "shut": 25, "down": 25, "chose": 25, "enabl": 25, "futur": 25, "figshar": 25, "id": 25, "timestamp": 25, "jigsaw": 25, "ident": 25, "mention": 25, "covert": 25, "offens": 25, "exact": 25, "replica": 25, "unintend": 25, "challeng": 25, "cc0": 25, "underli": 25, "parent_id": 25, "parent_text": 25, "regard": 25, "leak": 25, "did": 25, "parent": 25, "civil_com": 25, "pavlopoulos2020tox": 25, "context": 25, "realli": 25, "matter": 25, "john": 25, "pavlopoulo": 25, "jeffrei": 25, "sorensen": 25, "luca": 25, "dixon": 25, "nithum": 25, "thain": 25, "ion": 25, "androutsopoulo": 25, "eprint": 25, "2006": 25, "00998": 25, "archiveprefix": 25, "primaryclass": 25, "dblp": 25, "corr": 25, "1903": 25, "04561": 25, "daniel": 25, "borkan": 25, "luci": 25, "vasserman": 25, "nuanc": 25, "real": 25, "sun": 25, "31": 25, "mar": 25, "19": 25, "24": 25, "0200": 25, "biburl": 25, "bib": 25, "bibsourc": 25, "scienc": 25, "bibliographi": 25, "semev": 25, "em": 25, "val": 25, "span": 25, "laugier": 25, "15th": 25, "workshop": 25, "semant": 25, "aug": 25, "aclanthologi": 25, "18653": 25, "59": 25, "69": 25, "article_id": 25, "identity_attack": 25, "insult": 25, "obscen": 25, "severe_tox": 25, "sexual_explicit": 25, "threat": 25, "civil_comments_dataset": 25, "train_tf_fil": 25, "get_fil": 25, "train_tf_process": 25, "validate_tf_fil": 25, "validate_tf_process": 25, "text_featur": 25, "comment_text": 25, "feature_map": 25, "sexual_orient": 25, "gender": 25, "disabl": 25, "parse_funct": 25, "parse_single_exampl": 25, "work": 25, "parsed_exampl": 25, "fight": 25, "92": 25, "imbal": 25, "doesn": 25, "tfhub": 25, "embedded_text_feature_column": 25, "text_embedding_column": 25, "module_spec": 25, "nnlm": 25, "en": 25, "dim128": 25, "dnnclassifi": 25, "hidden_unit": 25, "weight_column": 25, "legaci": 25, "adagrad": 25, "003": 25, "loss_reduct": 25, "reduct": 25, "n_class": 25, "gettempdir": 25, "input_example_placehold": 25, "ones_lik": 25, "tfma_export_dir": 25, "export_eval_savedmodel": 25, "export_dir_bas": 25, "signature_nam": 25, "merg": 25, "built": 25, "precis": 25, "recal": 25, "alphabet": 25, "protect": 25, "voic": 25, "area": 25, "focu": 25, "anyth": 25, "rude": 25, "disrespect": 25, "someon": 25, "leav": 25, "discuss": 25, "attemp": 25, "sever": 25, "subtyp": 25, "ensur": 25, "wide": 25, "8e0b81f80a23": 25, "overrepres": 25, "black": 25, "muslim": 25, "feminist": 25, "woman": 25, "gai": 25, "often": 25, "far": 25, "mani": 25, "forum": 25, "unfortun": 25, "attack": 25, "rarer": 25, "affirm": 25, "statement": 25, "am": 25, "proud": 25, "man": 25, "adopt": 25, "pick": 25, "connot": 25, "insuffici": 25, "divers": 25, "imbalenc": 25, "balanc": 25, "enough": 25, "effect": 25, "distinguish": 25, "paper": 25, "societi": 25}, "objects": {}, "objtypes": {}, "objnames": {}, "titleterms": {"dataset": [0, 5, 6, 7, 8, 9, 11, 17, 18, 21, 23], "explain": [3, 5, 11, 15, 16, 17, 18, 19, 22, 23, 24], "goal": 3, "submodul": 3, "api": [4, 12, 17, 18], "refrenc": 4, "intel": [5, 11, 16, 17, 18], "ai": [5, 11, 17, 18], "tool": [5, 11, 16, 18], "overview": [5, 10, 11, 26], "get": [5, 11, 17, 18], "start": [5, 11], "requir": [5, 
6, 8, 11], "develop": [5, 6, 8, 11], "instal": [5, 6, 8, 11, 14, 21], "poetri": [5, 6, 8, 11], "exist": [5, 6, 8, 11], "enviorn": [5, 6, 8, 11], "creat": [5, 6, 8, 11, 25], "activ": [5, 6, 8, 11], "python3": [5, 6, 8, 11], "virtual": [5, 6, 8, 11], "environ": [5, 6, 8, 11], "addit": [5, 6, 8, 11], "featur": [5, 6, 8, 11, 22], "specif": [5, 6, 8, 11], "step": [5, 6, 8, 11], "verifi": [5, 6, 8, 11], "run": [5, 6, 8, 11, 14], "notebook": [5, 6, 8, 11, 15, 16], "support": [5, 6, 8, 11], "disclaim": [5, 6, 7, 8, 9, 11], "licens": [5, 6, 7, 8, 9, 11], "model": [5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 24, 25], "softwar": [6, 8], "legal": [7, 9], "inform": [7, 9], "refer": [12, 16], "card": [12, 13, 14, 15, 20, 21, 25], "gener": [12, 14, 15, 20, 21], "exampl": [13, 15, 23], "input": [14, 16, 20], "test": [14, 20, 23], "marker": 14, "sampl": 14, "command": 14, "us": [14, 16, 17, 18, 19, 22, 23, 24], "resnet50": 16, "imagenet": 16, "classif": [16, 17, 19, 22, 23, 24, 25], "cam": 16, "object": 16, "load": 16, "xai": 16, "pytorch": [16, 17, 18, 20], "modul": 16, "xgradcam": 16, "imag": [16, 17], "visual": [16, 22, 23, 24], "tensorflow": [16, 21, 25], "multimod": 17, "breast": 17, "cancer": 17, "detect": [17, 21], "import": [17, 18, 21], "depend": [17, 18, 21, 25], "setup": [17, 18], "directori": 17, "option": [17, 18], "group": 17, "data": [17, 20, 22, 23, 24, 25], "patient": 17, "id": 17, "1": [17, 18, 20, 23, 24], "prepar": [17, 18], "analysi": 17, "transfer": 17, "learn": 17, "save": [17, 20], "comput": 17, "vision": 17, "error": 17, "2": [17, 18, 20, 23, 24], "text": [17, 18, 24], "corpu": 17, "nlp": 17, "explan": 17, "int8": 17, "quantiz": 17, "citat": [17, 18], "public": 17, "tcia": 17, "fine": 18, "tune": 18, "classifi": 18, "paramet": 18, "A": 18, "hug": 18, "face": 18, "b": 18, "custom": [18, 19, 22, 23, 24], "3": [18, 20, 23], "evalu": [18, 24], "trainer": 18, "http": 18, "huggingfac": 18, "co": 18, "doc": 18, "transform": 18, "v4": 18, "16": 18, "en": 18, "main_class": 18, "__": 18, "from": [18, 20, 21, 23], "nativ": 18, "4": [18, 20, 23], "export": [18, 25], "5": [18, 20, 23], "reload": 18, "make": [18, 25], "predict": [18, 23, 24], "6": [18, 23], "cnn": [19, 23], "cifar": 19, "10": [19, 23], "attribut": [19, 22, 23, 24], "collect": 20, "preprocess": [20, 21, 22], "fetch": 20, "openml": 20, "drop": 20, "unneed": 20, "column": 20, "train": [20, 23, 24, 25], "split": [20, 22], "build": [20, 25], "evalconfig": 20, "issu": 21, "fair": 21, "estim": 21, "librari": 21, "download": [21, 25], "tfx": 21, "pipelin": 21, "script": 21, "displai": 21, "neural": 22, "network": 22, "heart": 22, "diseas": 22, "connect": 22, "graph": 22, "accuraci": 22, "mnist": 23, "design": 23, "scatch": 23, "survei": 23, "perform": [23, 24], "across": 23, "all": 23, "class": 23, "metrics_explain": 23, "plugin": 23, "feature_attributions_explain": 23, "can": 23, "observ": 23, "confus": 23, "matrix": 23, "9": 23, "poorli": 23, "additionallli": 23, "i": 23, "high": 23, "misclassif": 23, "rate": 23, "exclus": 23, "amongst": 23, "two": 23, "label": 23, "In": 23, "other": 23, "word": 23, "appear": 23, "": 23, "vice": 23, "versa": 23, "7": 23, "were": 23, "misclassifi": 23, "let": 23, "take": 23, "closer": 23, "look": 23, "pixel": 23, "base": 23, "shap": [23, 24], "valu": [23, 24], "where": 23, "when": 23, "correct": [23, 24], "groundtruth": 23, "conclus": 23, "deep": 23, "gradient": 23, "pai": 23, "close": 23, "attent": 23, "top": 23, "digit": 23, "distinguish": 23, "between": 23, "On": 23, "first": 23, 
"last": 23, "row": 23, "abov": 23, "we": 23, "ar": 23, "The": 23, "contribut": 23, "postiiv": 23, "red": 23, "thi": 23, "begin": 23, "why": 23, "nn": 24, "newsgroup": 24, "vector": 24, "defin": 24, "compil": 24, "partit": 24, "plot": 24, "bar": 24, "waterfal": 24, "forc": 24, "toxic": 25, "comment": 25, "descript": 25, "evalsavedmodel": 25, "format": 25}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "nbsphinx": 4, "sphinx.ext.intersphinx": 1, "sphinx.ext.viewcode": 1, "sphinx": 57}, "alltitles": {"Datasets": [[0, "datasets"]], "Explainer": [[3, "explainer"]], "Goals": [[3, "goals"]], "Explainer Submodules": [[3, "explainer-submodules"]], "API Refrence": [[4, "api-refrence"]], "Intel\u00ae Explainable AI Tools": [[5, "intel-explainable-ai-tools"], [11, "intel-explainable-ai-tools"]], "Overview": [[5, "overview"], [10, "overview"], [11, "overview"], [26, "overview"]], "Get Started": [[5, "get-started"], [11, "get-started"]], "Requirements": [[5, "requirements"], [11, "requirements"]], "Developer Installation with Poetry": [[5, "developer-installation-with-poetry"], [6, "developer-installation-with-poetry"], [8, "developer-installation-with-poetry"], [11, "developer-installation-with-poetry"]], "Install to existing enviornment with Poetry": [[5, "install-to-existing-enviornment-with-poetry"], [6, "install-to-existing-enviornment-with-poetry"], [8, "install-to-existing-enviornment-with-poetry"], [11, "install-to-existing-enviornment-with-poetry"]], "Create and activate a Python3 virtual environment": [[5, "create-and-activate-a-python3-virtual-environment"], [6, "create-and-activate-a-python3-virtual-environment"], [8, "create-and-activate-a-python3-virtual-environment"], [11, "create-and-activate-a-python3-virtual-environment"]], "Additional Feature-Specific Steps": [[5, "additional-feature-specific-steps"], [6, "additional-feature-specific-steps"], [8, "additional-feature-specific-steps"], [11, "additional-feature-specific-steps"]], "Verify Installation": [[5, "verify-installation"], [6, "verify-installation"], [8, "verify-installation"], [11, "verify-installation"]], "Running Notebooks": [[5, "running-notebooks"], [6, "running-notebooks"], [8, "running-notebooks"], [11, "running-notebooks"]], "Support": [[5, "support"], [6, "support"], [8, "support"], [11, "support"]], "DISCLAIMER": [[5, "disclaimer"], [6, "disclaimer"], [8, "disclaimer"], [11, "disclaimer"]], "License": [[5, "license"], [6, "license"], [7, "license"], [8, "license"], [9, "license"], [11, "license"]], "Datasets and Models": [[5, "datasets-and-models"], [6, "datasets-and-models"], [7, "datasets-and-models"], [8, "datasets-and-models"], [9, "datasets-and-models"], [11, "datasets-and-models"]], "Installation": [[6, "installation"], [8, "installation"]], "Software Requirements": [[6, "software-requirements"], [8, "software-requirements"]], "Legal Information": [[7, "legal-information"], [9, "legal-information"]], "Disclaimer": [[7, "disclaimer"], [9, "disclaimer"]], "API Reference": [[12, "api-reference"]], "Model Card Generator": [[12, "model-card-generator"], [14, "model-card-generator"]], "Example Model Card": [[13, "example-model-card"]], "Install": [[14, "install"]], "Run": [[14, "run"]], "Model Card Generator Inputs": [[14, "model-card-generator-inputs"]], "Test": [[14, "test"]], 
"Markers": [[14, "markers"]], "Sample test commands using markers": [[14, "sample-test-commands-using-markers"]], "Example Notebooks": [[15, "example-notebooks"]], "Explainer Notebooks": [[15, "explainer-notebooks"]], "Model Card Generator Notebooks": [[15, "model-card-generator-notebooks"]], "Explaining ResNet50 ImageNet Classification Using the CAM Explainer": [[16, "Explaining-ResNet50-ImageNet-Classification-Using-the-CAM-Explainer"]], "Objective": [[16, "Objective"]], "Loading Intel XAI Tools PyTorch CAM Module": [[16, "Loading-Intel-XAI-Tools-PyTorch-CAM-Module"]], "Loading Notebook Modules": [[16, "Loading-Notebook-Modules"]], "Using XGradCAM": [[16, "Using-XGradCAM"]], "Loading the input image": [[16, "Loading-the-input-image"]], "Loading the Model": [[16, "Loading-the-Model"]], "Visualization": [[16, "Visualization"]], "References": [[16, "References"]], "Loading Intel XAI Tools TensorFlow CAM Module": [[16, "Loading-Intel-XAI-Tools-TensorFlow-CAM-Module"]], "Explaining Image Classification Models with TensorFlow": [[16, "Explaining-Image-Classification-Models-with-TensorFlow"]], "Multimodal Breast Cancer Detection Explainability using the Intel\u00ae Explainable AI API": [[17, "Multimodal-Breast-Cancer-Detection-Explainability-using-the-Intel\u00ae-Explainable-AI-API"]], "Import Dependencies and Setup Directories": [[17, "Import-Dependencies-and-Setup-Directories"]], "Dataset": [[17, "Dataset"]], "Optional: Group Data by Patient ID": [[17, "Optional:-Group-Data-by-Patient-ID"]], "Model 1: Image Classification with PyTorch": [[17, "Model-1:-Image-Classification-with-PyTorch"]], "Get the Model and Dataset": [[17, "Get-the-Model-and-Dataset"], [17, "id1"]], "Data Preparation": [[17, "Data-Preparation"], [17, "id2"]], "Image dataset analysis": [[17, "Image-dataset-analysis"]], "Transfer Learning": [[17, "Transfer-Learning"], [17, "id3"]], "Save the Computer Vision Model": [[17, "Save-the-Computer-Vision-Model"]], "Error Analysis": [[17, "Error-Analysis"]], "Explainability": [[17, "Explainability"]], "Model 2: Text Classification with PyTorch": [[17, "Model-2:-Text-Classification-with-PyTorch"]], "Corpus analysis": [[17, "Corpus-analysis"]], "Save the NLP Model": [[17, "Save-the-NLP-Model"]], "Error analysis": [[17, "Error-analysis"], [17, "id5"]], "Explanation": [[17, "Explanation"]], "Int8 Quantization": [[17, "Int8-Quantization"]], "Save the Quantized NLP Model": [[17, "Save-the-Quantized-NLP-Model"]], "Citations": [[17, "Citations"], [18, "Citations"]], "Data Citation": [[17, "Data-Citation"]], "Publication Citation": [[17, "Publication-Citation"]], "TCIA Citation": [[17, "TCIA-Citation"]], "Explaining Fine Tuned Text Classifier with PyTorch using the Intel\u00ae Explainable AI API": [[18, "Explaining-Fine-Tuned-Text-Classifier-with-PyTorch-using-the-Intel\u00ae-Explainable-AI-API"]], "1. Import dependencies and setup parameters": [[18, "1.-Import-dependencies-and-setup-parameters"]], "2. Prepare the dataset": [[18, "2.-Prepare-the-dataset"]], "Option A: Use a Hugging Face dataset": [[18, "Option-A:-Use-a-Hugging-Face-dataset"]], "Option B: Use a custom dataset": [[18, "Option-B:-Use-a-custom-dataset"]], "3. Prepare the Model for Fine Tuning and Evaluation": [[18, "3.-Prepare-the-Model-for-Fine-Tuning-and-Evaluation"]], "Option A: Use the `Trainer `__ API from Hugging Face": [[18, "Option-A:-Use-the-`Trainer-`__-API-from-Hugging-Face"]], "Option B: Use the native PyTorch API": [[18, "Option-B:-Use-the-native-PyTorch-API"]], "4. Export the model": [[18, "4.-Export-the-model"]], "5. 
Reload the model and make predictions": [[18, "5.-Reload-the-model-and-make-predictions"]], "6. Get Explainations with Intel Explainable AI Tools": [[18, "6.-Get-Explainations-with-Intel-Explainable-AI-Tools"]], "Explaining Custom CNN CIFAR-10 Classification Using the Attributions Explainer": [[19, "Explaining-Custom-CNN-CIFAR-10-Classification-Using-the-Attributions-Explainer"]], "Generating Model Card with PyTorch": [[20, "Generating-Model-Card-with-PyTorch"]], "1. Data Collection and Preprocessing": [[20, "1.-Data-Collection-and-Preprocessing"]], "Fetch Data from OpenML": [[20, "Fetch-Data-from-OpenML"]], "Drop Unneeded Columns": [[20, "Drop-Unneeded-Columns"]], "Train Test Split": [[20, "Train-Test-Split"]], "2. Build Model": [[20, "2.-Build-Model"]], "3. Train Model": [[20, "3.-Train-Model"]], "4. Save Model": [[20, "4.-Save-Model"]], "5. Generate Model Card": [[20, "5.-Generate-Model-Card"]], "EvalConfig Input": [[20, "EvalConfig-Input"]], "Detecting Issues in Fairness by Generating Model Card from Tensorflow Estimators": [[21, "Detecting-Issues-in-Fairness-by-Generating-Model-Card-from-Tensorflow-Estimators"]], "Install Dependencies": [[21, "Install-Dependencies"]], "Import Libraries": [[21, "Import-Libraries"]], "Download and preprocess the dataset": [[21, "Download-and-preprocess-the-dataset"]], "TFX Pipeline Scripts": [[21, "TFX-Pipeline-Scripts"]], "Display Model Card": [[21, "Display-Model-Card"]], "Explaining a Custom Neural Network Heart Disease Classification Using the Attributions Explainer": [[22, "Explaining-a-Custom-Neural-Network-Heart-Disease-Classification-Using-the-Attributions-Explainer"]], "Data Splitting": [[22, "Data-Splitting"]], "Feature Preprocessing": [[22, "Feature-Preprocessing"]], "Model": [[22, "Model"]], "Visualize the connectivity graph:": [[22, "Visualize-the-connectivity-graph:"]], "Accuracy": [[22, "Accuracy"]], "Explaining Custom CNN MNIST Classification Using the Attributions Explainer": [[23, "Explaining-Custom-CNN-MNIST-Classification-Using-the-Attributions-Explainer"]], "1. Design the CNN from scatch": [[23, "1.-Design-the-CNN-from-scatch"]], "2. Train the CNN on the MNIST dataset": [[23, "2.-Train-the-CNN-on-the-MNIST-dataset"]], "3. Predict the MNIST test data": [[23, "3.-Predict-the-MNIST-test-data"]], "4. Survey performance across all classes using the metrics_explainer plugin": [[23, "4.-Survey-performance-across-all-classes-using-the-metrics_explainer-plugin"]], "5. Explain performance across the classes using the feature_attributions_explainer plugin": [[23, "5.-Explain-performance-across-the-classes-using-the-feature_attributions_explainer-plugin"]], "From (4), it can be observed from the confusion matrix that classes 4 and 9 perform poorly. Additionallly, there is a high misclassification rate exclusively amongst the two labels. In other words, it appears that the CNN if confusing 4\u2019s with 9\u2019s, and vice-versa. 
7.4% of all the 9 examples were misclassified as 4, and 10% of all the 4 examples were misclassified as 9.": [[23, "From-(4),-it-can-be-observed-from-the-confusion-matrix-that-classes-4-and-9-perform-poorly.-Additionallly,-there-is-a-high-misclassification-rate-exclusively-amongst-the-two-labels.-In-other-words,-it-appears-that-the-CNN-if-confusing-4's-with-9's,-and-vice-versa.-7.4%-of-all-the-9-examples-were-misclassified-as-4,-and-10%-of-all-the-4-examples-were-misclassified-as-9."]], "Let\u2019s take a closer look at the pixel-based shap values for the test examples where the CNN predicts \u20189\u2019 when the correct groundtruth label is \u20184\u2019.": [[23, "Let's-take-a-closer-look-at-the-pixel-based-shap-values-for-the-test-examples-where-the-CNN-predicts-'9'-when-the-correct-groundtruth-label-is-'4'."]], "6. Conclusion": [[23, "6.-Conclusion"]], "From the deep and gradient explainer visuals, it can be observed that the CNN pays close attention to the top of the digit in distinguishing between a 4 and a 9. On the first and last row of the above gradient explainer visualization we can the 4\u2019s are closed. The contributes to postiive shap values (red) for the 9 classification. This begins explaining why the CNN is confusing the two digits.": [[23, "From-the-deep-and-gradient-explainer-visuals,-it-can-be-observed-that-the-CNN-pays-close-attention-to-the-top-of-the-digit-in-distinguishing-between-a-4-and-a-9.-On-the-first-and-last-row-of-the-above-gradient-explainer-visualization-we-can-the-4's-are-closed.-The-contributes-to-postiive-shap-values-(red)-for-the-9-classification.-This-begins-explaining-why-the-CNN-is-confusing-the-two-digits."]], "Explaining Custom NN NewsGroups Classification Using the Attributions Explainer": [[24, "Explaining-Custom-NN-NewsGroups-Classification-Using-the-Attributions-Explainer"]], "Vectorize Text Data": [[24, "Vectorize-Text-Data"]], "Define the Model": [[24, "Define-the-Model"]], "Compile and Train Model": [[24, "Compile-and-Train-Model"]], "Evaluate Model Performance": [[24, "Evaluate-Model-Performance"]], "SHAP Partition Explainer": [[24, "SHAP-Partition-Explainer"]], "Visualize SHAP Values Correct Predictions": [[24, "Visualize-SHAP-Values-Correct-Predictions"]], "Text Plot": [[24, "Text-Plot"]], "Bar Plots": [[24, "Bar-Plots"]], "Bar Plot 1": [[24, "Bar-Plot-1"]], "Bar Plot 2": [[24, "Bar-Plot-2"]], "Waterfall Plots": [[24, "Waterfall-Plots"]], "Waterfall Plot 1": [[24, "Waterfall-Plot-1"]], "Waterfall Plot 2": [[24, "Waterfall-Plot-2"]], "Force Plot": [[24, "Force-Plot"]], "Creating Model Card for Toxic Comments Classification in Tensorflow": [[25, "Creating-Model-Card-for-Toxic-Comments-Classification-in-Tensorflow"]], "Training Dependencies": [[25, "Training-Dependencies"]], "Model Card Dependencies": [[25, "Model-Card-Dependencies"]], "Download Data": [[25, "Download-Data"]], "Data Description": [[25, "Data-Description"]], "Train Model": [[25, "Train-Model"], [25, "id1"]], "Build Model": [[25, "Build-Model"]], "Export in EvalSavedModel Format": [[25, "Export-in-EvalSavedModel-Format"]], "Making a Model Card": [[25, "Making-a-Model-Card"]]}, "indexentries": {}}) \ No newline at end of file diff --git a/versions.html b/versions.html index c67b04a..ba86984 100644 --- a/versions.html +++ b/versions.html @@ -9,6 +9,7 @@

    Intel® Explainable AI Tools Documentation

    Pick a version