Translating XAI to Image-Like Multivariate Time Series

This repository contains the official implementation of the paper Translating XAI to Image-Like Multivariate Time Series. The project translates eXplainable AI (XAI) methods developed for image data to interpret the decisions of a machine learning model for crash detection based on insurance telematics data. The XAI methods investigated are Grad-CAM, Integrated Gradients, and LIME. The project was developed within a collaboration between Assicurazioni Generali Italia, ELIS Innovation Hub, and University Campus Bio-Medico of Rome. The model to be explained is deployed by the insurance company and, to simulate a real-world scenario, is treated as a black box that cannot be modified.

Installation

Clone the repository

git clone https://github.com/ltronchin/translating-xai-mts.git
cd translating-xai-mts

Install dependencies

Using virtualenv

  1. Create a Python virtual environment (optional):
python -m venv translating-xai-mts
  2. Activate the virtual environment:
source translating-xai-mts/bin/activate
  3. Install the dependencies:
pip install -r requirements.txt

Using Conda

Conda users can create a new environment with the following command:

conda env create -f environment.yml

The code was tested with Python 3.9.0, PyTorch 2.0.1, and TensorFlow 2.13.0. For more information about the requirements, please check the requirements.txt or environment.yml files.

Usage

  1. The yaml files under the configs folder contain all the settings used for each XAI method, as well as other parameters;
  2. We cannot share all the data used in the paper for privacy reasons. However, we provide dummy samples, created as perturbations of the original signals, in the data_share folder. Note that the network is assumed to be a black box that we cannot change;
  3. To run the code and explain the network on the dummy samples, change the following paths in the yaml file:
fold_dir: ./data/processed/holdout -> fold_dir: ./data_share/processed/holdout
data_dir: ./data/data_raw/holdout  -> data_dir: ./data_share/data_raw/holdout
  4. Use the script main.py to obtain explanations from Grad-CAM, LIME, and Integrated Gradients:
python src/main.py

If something goes wrong, first check the config file and the data paths.
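As a quick sanity check after editing the config, you can load it and verify that the data paths resolve. This is a minimal sketch in Python; the filename configs/config.yaml and the flat position of the fold_dir and data_dir keys are assumptions, so adapt them to the actual yaml layout:

import os
import yaml  # pip install pyyaml

# NOTE: the filename below is a hypothetical placeholder; point it at the
# yaml file you actually edited under the configs folder.
with open("configs/config.yaml") as f:
    cfg = yaml.safe_load(f)

# Assumes fold_dir and data_dir are top-level keys; adjust if they are nested.
for key in ("fold_dir", "data_dir"):
    path = cfg.get(key, "")
    print(f"{key}: {path} -> exists: {os.path.isdir(path)}")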

Other datasets

If you want to use your own dataset, you can follow the steps below:

  1. Create a folder with your own data, using the following structure:
data
├── data_raw
│   ├── exp-name
│   │   ├── mode-1
│   │   │   ├── train
│   │   │   │   ├── 0.npy
│   │   │   │   ├── ...
│   │   │   ├── valid
│   │   │   │   ├── 10.npy
│   │   │   │   ├── ...
│   │   │   ├── test
│   │   │   │   ├── 42.npy
│   │   │   │   ├── ...
│   │   ├── mode-2
│   │   │   ├── train
│   │   │   │   ├── 0.npy
│   │   │   │   ├── ...
│   │   │   ├── valid
│   │   │   │   ├── 10.npy
│   │   │   │   ├── ...
│   │   │   ├── test
│   │   │   │   ├── 42.npy
│   │   │   │   ├── ...
├── processed
│   ├── exp-name
│   │   ├── train.txt
│   │   ├── valid.txt
│   │   ├── test.txt

In data_raw, you place your data as numpy arrays. In processed, you place the lists of samples to be used for training, validation, and testing. Check the data_share folder for an example and the data-loading scripts for more details;
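If it helps, the layout above can be bootstrapped with a few lines of numpy. The sketch below writes random arrays as stand-ins for real signals; the signal shapes, the sample ids, and the exp-name placeholder are illustrative assumptions:

import os
import numpy as np

root = "data"
exp = "exp-name"  # replace with your experiment name
splits = {"train": [0, 1], "valid": [10], "test": [42]}  # sample ids are assumptions

# Shapes are placeholders: e.g. one multivariate and one univariate signal.
for mode, shape in (("mode-1", (300, 3)), ("mode-2", (300, 1))):
    for split, ids in splits.items():
        d = os.path.join(root, "data_raw", exp, mode, split)
        os.makedirs(d, exist_ok=True)
        for i in ids:
            np.save(os.path.join(d, f"{i}.npy"), np.random.randn(*shape))

# Write the split lists; the assumption here is that each .txt file lists
# one sample id per line.
proc = os.path.join(root, "processed", exp)
os.makedirs(proc, exist_ok=True)
for split, ids in splits.items():
    with open(os.path.join(proc, f"{split}.txt"), "w") as f:
        f.writelines(f"{i}\n" for i in ids)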

  2. Train your own model using TensorFlow and save it in the folder exp-name/models, under the root directory of the project (a sketch follows this list). If you want to use PyTorch, you can change the code accordingly;
  3. Update the yaml file accordingly;
  4. Run the script main.py to obtain explanations from Grad-CAM, LIME, and Integrated Gradients:
python src/main.py
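For step 2, a minimal TensorFlow sketch is shown below. The toy architecture, the input shape, and the saved filename model.h5 are all assumptions; only the exp-name/models destination comes from the step above:

import os
import tensorflow as tf

# Hypothetical toy architecture: the real black box model is provided by the
# insurance company and is not part of this repository.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(300, 3)),  # e.g. a multivariate signal (shape assumed)
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # crash / non-crash
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(x_train, y_train, ...)  # train on your own data

out_dir = os.path.join("exp-name", "models")
os.makedirs(out_dir, exist_ok=True)
model.save(os.path.join(out_dir, "model.h5"))  # filename is an assumption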

Note that the XAI engine is designed for telematics data: each sample contains a multivariate acceleration signal (mode-1) and a univariate speed magnitude signal (mode-2). Moreover, the XAI methods were adapted to explain the specific black box model provided by the insurance company, so if your data or architecture differ, you may need to adapt the code accordingly.
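For orientation, the sketch below shows what one of these methods, Integrated Gradients, computes on a single sample with a generic TensorFlow model. It is the textbook formulation under assumed shapes, not the adapted implementation shipped in this repository:

import tensorflow as tf

def integrated_gradients(model, x, baseline=None, steps=50):
    # x: one sample of shape (T, C); the baseline defaults to all zeros,
    # a common (but not mandatory) choice.
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    if baseline is None:
        baseline = tf.zeros_like(x)
    # Interpolate between baseline and input along a straight-line path.
    alphas = tf.linspace(0.0, 1.0, steps + 1)
    interpolated = baseline[None] + alphas[:, None, None] * (x - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)  # assumed scalar output per sample
    grads = tape.gradient(preds, interpolated)
    # Trapezoidal rule approximates the path integral of the gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x - baseline) * avg_grads  # attribution with the input's shape

A useful property to check: summing the returned attributions should approximately equal the difference between the model output at the input and at the baseline (the completeness axiom).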

Results

Here we report the results obtained when explaining a crash sample and a non-crash sample with Grad-CAM, Integrated Gradients, and LIME.

Crash Sample

Explanation heatmaps from Grad-CAM, Integrated Gradients, and LIME (figures available in the repository).

Non-Crash Sample

Explanation heatmaps from Grad-CAM, Integrated Gradients, and LIME (figures available in the repository).

Acknowledgements

This code is based on the following papers and repositories:

Papers

Repositories

Contact for Issues

If you have any questions, or are just interested in a virtual coffee about eXplainable AI, please don't hesitate to reach out at [email protected].

May the AI be with you!

License

This code is released under the MIT License.
