A runtime approach that mitigates DNN mis-predictions caused by unexpected inputs to the DNN.
Code release and supplementary materials for:
"Repairing Failure-inducing Inputs with Input Reflection"
The 37th IEEE/ACM International Conference on Automated Software Engineering (ASE 2022)
Yan Xiao · Yun Lin · Ivan Beschastnikh · Changsheng Sun · David S. Rosenblum · Jin Song Dong
- `resnet.py`: code for ResNet-20 to train the subject models
- `train_model.py`: code for ConvNet and VGG-16 to train the subject models
- `special_transformation.py`: code for the input transformations
- `seek_degree.py`: find the corner degrees for each transformation
- `train.py`: train InputReflector
- `triplet_loss`: loss functions
- `eval.py`: obtain distances between given instances and the training data
- `collect_auroc_sia.py`: generate AUROC from the distances
- `search_threshold_quad.py`: search for the best threshold for detecting deviated data on the validation dataset and calculate the model accuracy after calling InputReflector
- To install dependencies: `pip install -r requirements.txt`
- To train the subject models and InputReflector: `bash log.sh`
- To evaluate the performance of InputReflector: `bash log_eval.sh`
The loss of the Quadruplet network consists of two parts (Line 15 in Algorithm 2). The first part is the traditional triplet loss, which serves as the main constraint. The second part is auxiliary to the first: it follows the structure of the traditional triplet loss but is computed over different triplets. We use two different margins, one per part, to balance the two constraints. We now discuss how to mine triplets for each loss.
First, a 2D matrix of the pairwise distances between all the embeddings in a batch is calculated and stored (line 1). Given an anchor, we define the hardest positive example as the example that has the same label as the anchor and whose distance from the anchor is the largest among all positive examples (lines 2-4). Similarly, the hardest negative example has a different label than the anchor and the smallest distance from the anchor among all negative examples (lines 5-8). The hardest positive example and the hardest negative example, together with the anchor, form a triplet whose triplet loss is minimized (line 9). After convergence, the maximum intra-class distance is required to be smaller than the minimum inter-class distance with respect to the same anchor.
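The batch-hard mining above can be sketched as follows. This is a minimal NumPy illustration, not the repository's TensorFlow implementation; the function name and the margin value are illustrative assumptions.

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=1.0):
    """Sketch of batch-hard triplet mining (lines 1-9 of the algorithm).

    For each anchor, pick the hardest positive (largest same-label
    distance) and the hardest negative (smallest different-label
    distance), then apply the triplet hinge loss.
    The margin value here is a placeholder, not taken from the paper.
    """
    # Line 1: pairwise Euclidean distance matrix between all embeddings.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))

    n = len(labels)
    eye = np.eye(n, dtype=bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye  # same label, not self
    neg_mask = labels[:, None] != labels[None, :]           # different label

    # Lines 2-4: hardest positive = largest same-label distance per anchor.
    hardest_pos = np.where(pos_mask, dist, -np.inf).max(axis=1)
    # Lines 5-8: hardest negative = smallest different-label distance per anchor.
    hardest_neg = np.where(neg_mask, dist, np.inf).min(axis=1)

    # Line 9: triplet hinge loss averaged over anchors.
    return float(np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean())
```

After convergence this loss drives `hardest_pos` below `hardest_neg` (minus the margin) for every anchor, which is exactly the intra-class vs. inter-class constraint stated above.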
To push negative pairs further away from positive pairs, one more auxiliary loss is introduced. Its aim is to make the maximum intra-class distance smaller than the minimum inter-class distance regardless of whether the pairs share the same anchor. This loss constrains the distance between positive pairs (i.e., samples with the same label) to be less than the distance between any negative pair (i.e., two samples whose labels differ from each other and from the label of the corresponding positive samples). To mine such triplets, the valid triplets are first filtered out on line 10, where the anchor, the positive sample, and the two negative samples are all distinct and satisfy these label constraints.
Then, the hardest negative pairs, whose distance is the minimum among all negative pairs in each batch, are sampled during training (lines 11-13). Finally, the auxiliary loss is minimized to further enlarge the inter-class variations (line 14).
@inproceedings{xiao2022repairing,
title={Repairing Failure-inducing Inputs with Input Reflection},
author={Xiao, Yan and Lin, Yun and Beschastnikh, Ivan and Sun, Changsheng and Rosenblum, David S and Dong, Jin Song},
booktitle={The 37th IEEE/ACM International Conference on Automated Software Engineering (ASE)},
year={2022},
organization={IEEE}
}
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
For questions, please contact [email protected].