Captum v0.5.0 Release
The Captum v0.5.0 release introduces a new functional pillar, Influential Examples, along with a few code improvements and bug fixes.
Influential Examples
Influential Examples implements the method TracInCP. It calculates the influence score of a given training example on a given test example, which approximately answers the question: "if the given training example were removed from the training data, how much would the model's loss on the given test example change?". TracInCP can be used for:
- identifying proponents/opponents, which are the training examples with the most positive/negative influence on a given test example
- identifying mis-labelled data
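The influence score above can be sketched in plain Python for a toy linear model with squared loss. This is a hedged illustration of the TracIn idea only, not Captum's implementation: the score is the sum, over saved checkpoints, of the learning rate times the dot product of the training example's and test example's loss gradients.

```python
# Toy sketch of the TracInCP influence score (illustration only, not
# Captum's implementation). For a linear model w . x with squared loss:
#   influence(z_train, z_test) ~= sum over checkpoints t of
#       lr_t * grad(loss, w_t, z_train) . grad(loss, w_t, z_test)

def grad_sq_loss(w, x, y):
    """Gradient of (w . x - y)^2 with respect to w."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2.0 * err * xi for xi in x]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def tracin_influence(checkpoints, lrs, train_example, test_example):
    """Approximate influence of train_example on test_example."""
    x_tr, y_tr = train_example
    x_te, y_te = test_example
    return sum(
        lr * dot(grad_sq_loss(w, x_tr, y_tr), grad_sq_loss(w, x_te, y_te))
        for w, lr in zip(checkpoints, lrs)
    )

# A training example whose gradients align with the test example's gets a
# positive score (a proponent); opposing gradients give a negative score
# (an opponent).
ckpts = [[0.0, 0.0], [0.5, 0.0]]   # weights saved at two checkpoints
lrs = [0.1, 0.1]                   # learning rates at those checkpoints
proponent = tracin_influence(ckpts, lrs, ([1.0, 0.0], 1.0), ([1.0, 0.0], 1.0))
opponent = tracin_influence(ckpts, lrs, ([-1.0, 0.0], 1.0), ([1.0, 0.0], 1.0))
```

Mislabelled data can then be surfaced by ranking each training example by its influence on its own loss: mislabelled points tend to receive unusually high self-influence.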
Captum currently offers the following variant implementations of TracInCP:
- `TracInCP` - Computes influence scores using gradients at all specified layers. Can be used for identifying proponents/opponents and identifying mis-labelled data. Both computations take time linear in training data size.
- `TracInCPFast` - Like `TracInCP`, but computes influence scores using only gradients in the last fully-connected layer, and is expedited using a computational trick.
- `TracInCPFastRandProj` - Version of `TracInCPFast` which is specialized for computing proponents/opponents. In particular, pre-processing enables computation of proponents/opponents in constant time. The tradeoff is the linear time and memory required for pre-processing. Random projections can be used to reduce memory usage. This class should not be used for identifying mis-labelled data.
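The random-projection trick behind `TracInCPFastRandProj` can be sketched as follows. This is a minimal illustration of the idea, not Captum's implementation: per-example gradient vectors are projected to a low dimension with a random Gaussian matrix, which approximately preserves inner products (Johnson-Lindenstrauss), so proponents/opponents can be retrieved by fast nearest-neighbor search over the small projected vectors rather than a linear scan of full gradients.

```python
import math
import random

# Minimal sketch of random-projection pre-processing (illustration only):
# project each per-example gradient into a low dimension; dot products,
# and hence influence scores, are approximately preserved, so top
# proponents can be found by nearest-neighbor search in the small space.

def make_projection(in_dim, out_dim, seed=0):
    """Random Gaussian matrix, scaled so dot products are preserved in expectation."""
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(out_dim)
    return [[rng.gauss(0.0, scale) for _ in range(in_dim)] for _ in range(out_dim)]

def project(vec, proj):
    """Map a full-dimensional gradient vector to the projected space."""
    return [sum(r * v for r, v in zip(row, vec)) for row in proj]
```

At query time, only the projected test gradient needs to be compared against the pre-computed projected training gradients, giving fast proponent lookup at the cost of the linear-time pre-processing pass noted above.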
A tutorial demonstrating its usage is available at https://captum.ai/tutorials/TracInCP_Tutorial

Notable Changes
- Minimum required PyTorch version is now v1.6.0 (#876)
- Enabled argument `model_id` in `TCAV` and removed `AV` from the public concept module (PR #811)
- Added new configurable argument `attribute_to_layer_input` in `TCAV`, which applies to both layer activation and attribution (#864)
- Renamed the argument `raw_input` to `raw_input_ids` in the visualization util `VisualizationDataRecord` (PR #804)
- Added support for a configurable `eps` argument in `DeepLift` (PR #835)
- Captum now leverages `register_full_backward_hook`, introduced in PyTorch v1.8.0. Attribution to neuron output in `NeuronDeepLift`, `NeuronGuidedBackprop`, and `NeuronDeconvolution` is deprecated and will be removed in the next major release, v0.6.0 (PR #837)
- Fixed the issue that Lime and KernelShap fail to handle empty tensor inputs like `tensor([[],[],[]])` (PR #812)
- Fixed the bug that `visualization_transform` of `ImageFeature` in Captum Insights is not applied (PR #871)