Support AISEDetection per-bounding box method (#55)
* Support AISEDetection

* Fix bugs

* fix code quality

* doc strings

* Unit test

* minor

* fix bbox bug and tests

* tests

* fix docs and sample

* fix run_detection

* comments

* refactor aise directory structure

* isort

* fix import issues
negvet authored Aug 29, 2024
1 parent 4ce1903 commit 634caed
Showing 18 changed files with 753 additions and 161 deletions.
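For orientation before the per-file diffs, a condensed sketch of the per-bounding-box black-box flow this commit enables. It mirrors the explain_black_box() function added to examples/run_detection.py below; paths are placeholders and this preprocess_fn is a stand-in (the real body is elided in the diff).

```python
import cv2
import numpy as np
import openvino as ov

import openvino_xai as xai
from openvino_xai.explainer.explainer import ExplainMode
from openvino_xai.methods.black_box.base import Preset


def preprocess_fn(x: np.ndarray) -> np.ndarray:
    # Placeholder: the real example resizes/lays out the input for the model.
    return x[None]


def postprocess_fn(x):
    """Returns boxes, scores, labels (keys follow the example's model; adjust to your model's output names)."""
    return x["boxes"][0][:, :4], x["boxes"][0][:, 4], x["labels"][0]


model = ov.Core().read_model("detector.xml")  # placeholder model path

explainer = xai.Explainer(
    model=model,
    task=xai.Task.DETECTION,
    preprocess_fn=preprocess_fn,
    postprocess_fn=postprocess_fn,
    explain_mode=ExplainMode.BLACKBOX,
)

explanation = explainer(
    cv2.imread("image.jpg"),  # placeholder image path
    targets=[0],              # indices of detected boxes: one saliency map per box
    overlay=True,
    preset=Preset.SPEED,      # speed/quality trade-off for the black-box run
)
explanation.save("outputs/", "image_")
```

Each entry in `targets` selects one predicted box; the per-box map is keyed by that index, and the box itself is drawn on the overlay via the new metadata plumbing shown further down.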
CHANGELOG.md (3 changes: 2 additions & 1 deletion)
@@ -18,13 +18,14 @@
* Refactor OpenVINO imports by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/45
* Support OV IR / ONNX model file for Explainer by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/47
* Try CNN -> ViT assumption for IR insertion by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/48
* Enable AISE: Adaptive Input Sampling for Explanation of Black-box Models by @negvet in https://github.com/openvinotoolkit/openvino_xai/pull/49
* Enable AISE for classification: Adaptive Input Sampling for Explanation of Black-box Models by @negvet in https://github.com/openvinotoolkit/openvino_xai/pull/49
* Upgrade OpenVINO to 2024.3.0 by @goodsong81 in https://github.com/openvinotoolkit/openvino_xai/pull/52
* Add saliency map visualization with explanation.plot() by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/53
* Enable flexible naming for saved saliency maps and include confidence scores by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/51
* Add [Pointing Game](https://link.springer.com/article/10.1007/s11263-017-1059-x) saliency map quality metric by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/54
* Add [Insertion-Deletion AUC](https://arxiv.org/abs/1806.07421) saliency map quality metric by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/56
* Add [ADCC](https://arxiv.org/abs/2104.10252) saliency map quality metric by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/57
* Enable AISE for detection: Adaptive Input Sampling for Explanation of Black-box Models by @negvet in https://github.com/openvinotoolkit/openvino_xai/pull/55

### Known Issues

README.md (9 changes: 5 additions & 4 deletions)
@@ -71,10 +71,11 @@ At the moment, *Image Classification* and *Object Detection* tasks are supported
|-----------------|----------------------|-----------|---------------------|-------|
| Computer Vision | Image Classification | White-Box | ReciproCAM | [arxiv](https://arxiv.org/abs/2209.14074) / [src](openvino_xai/methods/white_box/recipro_cam.py) |
| | | | VITReciproCAM | [arxiv](https://arxiv.org/abs/2310.02588) / [src](openvino_xai/methods/white_box/recipro_cam.py) |
| | | | ActivationMap | experimental / [src](openvino_xai/methods/white_box/activation_map.py) |
| | | Black-Box | AISE | [src](openvino_xai/methods/black_box/aise.py) |
| | | | RISE | [arxiv](https://arxiv.org/abs/1806.07421v3) / [src](openvino_xai/methods/black_box/rise.py) |
| | Object Detection | White-Box | ClassProbabilityMap | experimental / [src](openvino_xai/methods/white_box/det_class_probability_map.py) |
| | | | ActivationMap | experimental / [src](openvino_xai/methods/white_box/activation_map.py) |
| | | Black-Box | AISEClassification | [src](openvino_xai/methods/black_box/aise.py) |
| | | | RISE | [arxiv](https://arxiv.org/abs/1806.07421v3) / [src](openvino_xai/methods/black_box/rise.py) |
| | Object Detection | White-Box | ClassProbabilityMap | experimental / [src](openvino_xai/methods/white_box/det_class_probability_map.py) |
| | | Black-Box | AISEDetection | [src](openvino_xai/methods/black_box/aise.py) |

### Supported explainable models

docs/source/user-guide.md (2 changes: 1 addition & 1 deletion)
@@ -252,7 +252,7 @@ explanation.save("output_path", "name_")
Black-box mode does not update the model (treating the model as a black box).
Black-box approaches are based on the perturbation of the input data and measurement of the model's output change.

For black-box mode we support 2 algorithms: **AISE** (by default) and [**RISE**](https://arxiv.org/abs/1806.07421). AISE is more effective for generating saliency maps for a few specific classes. RISE - to generate maps for all classes at once.
For black-box mode we support 2 algorithms: **AISE** (by default) and [**RISE**](https://arxiv.org/abs/1806.07421). AISE is more effective for generating saliency maps for a few specific classes, while RISE generates maps for all classes at once. AISE is supported for both classification and detection tasks.

Pros:
- **Flexible** - can be applied to any custom model.
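The perturb-and-measure description in the excerpt above can be made concrete with a deliberately naive sketch. This is not the library's AISE or RISE implementation (AISE samples the input adaptively per target); `model_score` is a stand-in for preprocess, inference, and postprocess reduced to a single scalar score for one target.

```python
import numpy as np


def toy_black_box_saliency(image: np.ndarray, model_score, num_masks: int = 200, cells: int = 8) -> np.ndarray:
    """Naive mask-based saliency: occlude random regions, weight each mask by the model score."""
    h, w = image.shape[:2]  # expects an HxWx3 image
    rng = np.random.default_rng(0)
    saliency = np.zeros((h, w), dtype=np.float32)
    total = 1e-8
    for _ in range(num_masks):
        # Low-resolution random mask, coarsely upsampled to the image size
        low = rng.random((cells, cells)).astype(np.float32)
        mask = np.kron(low, np.ones((h // cells + 1, w // cells + 1), dtype=np.float32))[:h, :w]
        perturbed = (image.astype(np.float32) * mask[..., None]).astype(image.dtype)
        score = float(model_score(perturbed))  # how well the target survives this perturbation
        saliency += score * mask                # pixels kept in high-scoring masks accumulate weight
        total += score
    return saliency / total
```

In the same spirit, the black-box methods only need preprocess_fn/postprocess_fn access to the model, which is why they work on an unmodified ov.Model.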
examples/run_detection.py (64 changes: 58 additions & 6 deletions)
@@ -12,6 +12,7 @@
import openvino_xai as xai
from openvino_xai.common.utils import logger
from openvino_xai.explainer.explainer import ExplainMode
from openvino_xai.methods.black_box.base import Preset


def get_argument_parser():
@@ -31,20 +31,22 @@ def preprocess_fn(x: np.ndarray) -> np.ndarray:
return x


def main(argv):
def postprocess_fn(x) -> np.ndarray:
"""Returns boxes, scores, labels."""
return x["boxes"][0][:, :4], x["boxes"][0][:, 4], x["labels"][0]


def explain_white_box(args):
"""
White-box scenario.
Insertion of the XAI branch into the Model API wrapper, thus Model API wrapper has additional 'saliency_map' output.
Insertion of the XAI branch into the model, so that the model has an additional 'saliency_map' output.
"""

parser = get_argument_parser()
args = parser.parse_args(argv)

# Create ov.Model
model: ov.Model
model = ov.Core().read_model(args.model_path)

# OTX YOLOX
# # OTX YOLOX
# cls_head_output_node_names = [
# "/bbox_head/multi_level_conv_cls.0/Conv/WithoutBiases",
# "/bbox_head/multi_level_conv_cls.1/Conv/WithoutBiases",
@@ -75,6 +78,7 @@ def main(argv):
explanation = explainer(
image,
targets=[0, 1, 2], # target classes to explain
overlay=True,
)

logger.info(
@@ -88,5 +92,53 @@
explanation.save(output, Path(args.image_path).stem)


def explain_black_box(args):
"""
Black-box scenario.
"""

# Create ov.Model
model: ov.Model
model = ov.Core().read_model(args.model_path)

# Create explainer object
explainer = xai.Explainer(
model=model,
task=xai.Task.DETECTION,
preprocess_fn=preprocess_fn,
postprocess_fn=postprocess_fn,
explain_mode=ExplainMode.BLACKBOX, # defaults to AUTO
)

# Prepare input image and explanation parameters, can be different for each explain call
image = cv2.imread(args.image_path)

# Generate explanation
explanation = explainer(
image,
targets=[0], # target boxes to explain
overlay=True,
preset=Preset.SPEED,
)

logger.info(
f"Generated {len(explanation.saliency_map)} detection "
f"saliency maps of layout {explanation.layout} with shape {explanation.shape}."
)

# Save saliency maps for visual inspection
if args.output is not None:
output = Path(args.output) / "detection_black_box"
explanation.save(output, f"{Path(args.image_path).stem}_")


def main(argv):
parser = get_argument_parser()
args = parser.parse_args(argv)

explain_white_box(args)
explain_black_box(args)


if __name__ == "__main__":
main(sys.argv[1:])
openvino_xai/explainer/explainer.py (1 change: 1 addition & 0 deletions)
@@ -222,6 +222,7 @@ def explain(
saliency_map=saliency_map,
targets=targets,
label_names=label_names,
metadata=self.method.metadata,
)
return self._visualize(
original_input_image,
openvino_xai/explainer/explanation.py (5 changes: 4 additions & 1 deletion)
@@ -4,12 +4,13 @@
import os
from enum import Enum
from pathlib import Path
from typing import Dict, List
from typing import Any, Dict, List

import cv2
import matplotlib.pyplot as plt
import numpy as np

from openvino_xai.common.parameters import Task
from openvino_xai.common.utils import logger
from openvino_xai.explainer.utils import (
convert_targets_to_numpy,
@@ -36,6 +37,7 @@ def __init__(
saliency_map: np.ndarray | Dict[int | str, np.ndarray],
targets: np.ndarray | List[int | str] | int | str,
label_names: List[str] | None = None,
metadata: Dict[Task, Any] | None = None,
):
targets = convert_targets_to_numpy(targets)

@@ -58,6 +60,7 @@
self._saliency_map = self._select_target_saliency_maps(targets, label_names)

self.label_names = label_names
self.metadata = metadata

@property
def saliency_map(self) -> Dict[int | str, np.ndarray]:
openvino_xai/explainer/visualizer.py (39 changes: 32 additions & 7 deletions)
@@ -1,11 +1,12 @@
# Copyright (C) 2023-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from typing import Dict, List, Tuple
from typing import Any, Dict, List, Tuple

import cv2
import numpy as np

from openvino_xai.common.parameters import Task
from openvino_xai.common.utils import format_to_bhwc, infer_size_from_image, scaling
from openvino_xai.explainer.explanation import (
COLOR_MAPPED_LAYOUTS,
@@ -66,7 +67,7 @@

class Visualizer:
"""
Visualizer implements post-processing for the saliency map in explanation result.
Visualizer implements post-processing for the saliency maps in the explanation.
"""

def __call__(
@@ -130,7 +131,7 @@ def visualize(
original_input_image = format_to_bhwc(original_input_image)

saliency_map_dict = explanation.saliency_map
class_idx_to_return = list(saliency_map_dict.keys())
indices_to_return = list(saliency_map_dict.keys())

# Convert to numpy array to use vectorized scale (0 ~ 255) operation and speed up lots of classes scenario
saliency_map_np = np.array(list(saliency_map_dict.values()))
@@ -146,6 +147,7 @@
saliency_map_np = self._apply_overlay(
explanation, saliency_map_np, original_input_image, output_size, overlay_weight
)
saliency_map_np = self._apply_metadata(explanation.metadata, saliency_map_np, indices_to_return)
else:
if resize:
if original_input_image is None and output_size is None:
@@ -157,7 +159,30 @@
saliency_map_np = self._apply_colormap(explanation, saliency_map_np)

# Convert back to dict
return self._update_explanation_with_processed_sal_map(explanation, saliency_map_np, class_idx_to_return)
return self._update_explanation_with_processed_sal_map(explanation, saliency_map_np, indices_to_return)

@staticmethod
def _apply_metadata(metadata: Dict[Task, Any], saliency_map_np: np.ndarray, indices: List[int | str]):
# TODO (negvet): support when indices are strings
if metadata:
if Task.DETECTION in metadata:
for smap_i, target_index in zip(range(len(saliency_map_np)), indices):
saliency_map = saliency_map_np[smap_i]
box, score, label_index = metadata[Task.DETECTION][target_index]
x1, y1, x2, y2 = box
cv2.rectangle(saliency_map, (int(x1), int(y1)), (int(x2), int(y2)), color=(255, 0, 0), thickness=2)
box_label = f"{label_index}|{score:.2f}"
box_label_loc = int(x1), int(y1 - 5)
cv2.putText(
saliency_map,
box_label,
org=box_label_loc,
fontFace=1,
fontScale=1,
color=(255, 0, 0),
thickness=2,
)
return saliency_map_np

@staticmethod
def _apply_scaling(explanation: Explanation, saliency_map_np: np.ndarray) -> np.ndarray:
@@ -222,15 +247,15 @@ def _apply_overlay(
def _update_explanation_with_processed_sal_map(
explanation: Explanation,
saliency_map_np: np.ndarray,
class_idx: List,
target_indices: List,
) -> Explanation:
dict_sal_map: Dict[int | str, np.ndarray] = {}
if explanation.layout in ONE_MAP_LAYOUTS:
dict_sal_map["per_image_map"] = saliency_map_np[0]
saliency_map_np = dict_sal_map
elif explanation.layout in MULTIPLE_MAP_LAYOUTS:
for idx, class_sal in zip(class_idx, saliency_map_np):
dict_sal_map[idx] = class_sal
for index, sal_map in zip(target_indices, saliency_map_np):
dict_sal_map[index] = sal_map
else:
raise ValueError
explanation.saliency_map = dict_sal_map
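For reference, a small sketch of the metadata layout that `_apply_metadata()` above unpacks; the values are invented, and in practice they are filled in by the black-box detection method.

```python
from openvino_xai.common.parameters import Task

# target index -> ((x1, y1, x2, y2), confidence score, label index); values are made up
metadata = {
    Task.DETECTION: {
        0: ((25, 40, 180, 220), 0.87, 1),
        1: ((200, 60, 310, 240), 0.55, 3),
    }
}
# For each saliency map whose key appears in `indices`, the box is drawn on the map and
# labelled "<label_index>|<score>" just above its top-left corner.
```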
openvino_xai/methods/__init__.py (6 changes: 4 additions & 2 deletions)
@@ -3,7 +3,8 @@
"""
XAI algorithms.
"""
from openvino_xai.methods.black_box.aise import AISE
from openvino_xai.methods.black_box.aise.classification import AISEClassification
from openvino_xai.methods.black_box.aise.detection import AISEDetection
from openvino_xai.methods.black_box.rise import RISE
from openvino_xai.methods.white_box.activation_map import ActivationMap
from openvino_xai.methods.white_box.base import WhiteBoxMethod
@@ -24,5 +25,6 @@
"ViTReciproCAM",
"DetClassProbabilityMap",
"RISE",
"AISE",
"AISEClassification",
"AISEDetection",
]
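Downstream code that imported the old single entry point needs a one-line change; a sketch of the before/after import:

```python
# Before this commit:
#   from openvino_xai.methods import AISE
# After this commit, the export is split per task:
from openvino_xai.methods import AISEClassification, AISEDetection
```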
openvino_xai/methods/base.py (5 changes: 4 additions & 1 deletion)
@@ -1,12 +1,14 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import collections
from abc import ABC, abstractmethod
from typing import Callable, Dict, Mapping
from typing import Any, Callable, Dict, Mapping

import numpy as np
import openvino as ov

from openvino_xai.common.parameters import Task
from openvino_xai.common.utils import IdentityPreprocessFN


@@ -23,6 +25,7 @@ def __init__(
self._model_compiled = None
self.preprocess_fn = preprocess_fn
self._device_name = device_name
self.metadata: Dict[Task, Any] = collections.defaultdict(dict)

@property
def model_compiled(self) -> ov.CompiledModel | None:
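On the producer side, a hypothetical sketch of how a detection method built on this base class could fill the new attribute (the real AISEDetection code lives in files not shown on this page):

```python
import numpy as np

from openvino_xai.common.parameters import Task


def record_box_metadata(method, target_index: int, box: np.ndarray, score: float, label_index: int) -> None:
    # `method` is an instance of the base class edited above. Because `metadata` is a
    # defaultdict(dict), the Task.DETECTION sub-dict does not need to be created first.
    method.metadata[Task.DETECTION][target_index] = (box, score, label_index)
```

The Visualizer then unpacks exactly this (box, score, label_index) tuple in `_apply_metadata()`.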