A self-checking tool for deep neural networks that detects potentially incorrect model decisions at runtime and generates advice to auto-correct them.
Published at the 43rd International Conference on Software Engineering (ICSE), May 2021.
Yan Xiao · Ivan Beschastnikh · David S. Rosenblum · Changsheng Sun · Sebastian Elbaum · Yun Lin · Jin Song Dong
In this paper we describe a self-checking system, called SelfChecker, that triggers an alarm if the internal layer features of the model are inconsistent with the final prediction. SelfChecker also provides advice in the form of an alternative prediction. This archive includes code for generating probability density functions, performing layer selection, and running the alarm and advice analyses.
We will update .py files later.
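As a rough illustration of the density-function idea (a hedged sketch, not the repository's actual implementation), the snippet below fits a per-class Gaussian kernel density estimate over one-dimensional layer features and infers a class by maximum density. The feature values, bandwidth, and function names here are illustrative assumptions.

```python
import numpy as np

def gaussian_kde_logpdf(train, x, bandwidth=0.5):
    """Log density of a Gaussian KDE (1-D) built from training features."""
    diffs = (x - train[:, None]) / bandwidth            # (n_train, n_x)
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return np.log(kernels.mean(axis=0) + 1e-12)

def infer_class(per_class_features, x):
    """Pick the class whose KDE assigns x the highest density."""
    scores = {c: gaussian_kde_logpdf(f, np.array([x]))[0]
              for c, f in per_class_features.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Hypothetical 1-D layer activations for two classes
feats = {0: rng.normal(0.0, 1.0, 200), 1: rng.normal(5.0, 1.0, 200)}
print(infer_class(feats, 0.3))  # lies in the class-0 cluster
print(infer_class(feats, 4.8))  # lies in the class-1 cluster
```

In the actual tool, such densities are estimated per (class, layer) combination and stored under tmp/.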
utils.py
- Utility functions for logging.
main_kde.py
- Obtains density functions for the combinations of classes and layers, and inferred classes.
kdes_generation.py
- Contains functions for generating density functions and inferred classes.
layer_selection_agree.py
- Layer selection for alarm.
layer_selection_condition.py
- Layer selection for advice.
layer_selection_condition_neg.py
- Layer selection for advice.
sc.py
- Alarm and advice analysis.
models/
- Folder containing pre-trained models.
tmp/
- Folder storing density functions and inferred classes.
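The alarm-and-advice decision can be sketched as follows (a minimal illustration with assumed names; the actual selection criteria live in the layer_selection_*.py scripts): each selected layer contributes an inferred class, an alarm fires when the layers' consensus disagrees with the model's final prediction, and the consensus class is offered as advice.

```python
from collections import Counter

def self_check(model_pred, layer_inferred):
    """Return (alarm, advice): alarm fires if the majority class inferred
    from the selected layers disagrees with the model's prediction."""
    consensus, _ = Counter(layer_inferred).most_common(1)[0]
    alarm = consensus != model_pred
    advice = consensus if alarm else model_pred
    return alarm, advice

# Hypothetical inferred classes from three selected layers
print(self_check(model_pred=3, layer_inferred=[7, 7, 3]))  # (True, 7)
print(self_check(model_pred=3, layer_inferred=[3, 3, 7]))  # (False, 3)
```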
conda env create -f sc.yml
conda activate sc
- We provide a pre-trained ConvNet model on CIFAR-10: python sc.py
- To run the whole project:
bash exe_train.sh
bash exe_deploy.sh
@inproceedings{xiao2021self,
title={Self-checking deep neural networks in deployment},
author={Xiao, Yan and Beschastnikh, Ivan and Rosenblum, David S and Sun, Changsheng and Elbaum, Sebastian and Lin, Yun and Dong, Jin Song},
booktitle={2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)},
pages={372--384},
year={2021},
organization={IEEE}
}
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
For any questions, please contact [email protected]