APART

Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training

We are in an early-release beta. Expect some adventures and rough edges.

Table of Contents

  • Introduction
  • Quick Start Guide
  • Citation

Introduction

We analyse FGSM-generated perturbations for Pre-ResNet18. As visualized in our paper, although FGSM-generated perturbations can surgically doctor the image at the 20th epoch, they deteriorate into random noise at the 30th epoch. As the perturbations deteriorate, the robust performance of FGSM adversarial training drops to zero.
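For concreteness, the following is a minimal PyTorch sketch of the kind of FGSM perturbation generation analysed here; the pixel range [0, 1] and the budget epsilon = 8/255 are illustrative assumptions, not the repository's exact configuration.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb a batch (x, y) along the signed loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single signed-gradient step of size epsilon, clipped to valid pixels.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()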

This phenomenon (robustness drop) has been widely observed when adversarial training runs for too long. While common wisdom attributes it to overfitting, our analyses suggest that the primary cause of the robustness drop is perturbation underfitting. Guided by this analysis, we propose APART, an adaptive adversarial training framework that parameterizes perturbation generation and progressively strengthens the perturbations. APART is not only about 4 times faster than PGD-10 training, but also suffers less from robustness drop and achieves better performance. A schematic sketch of training against a progressively strengthened attack follows below.
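To make "progressively strengthens" concrete, here is a schematic of single-step adversarial training under a growing perturbation budget, reusing fgsm_perturb from the sketch above. The linear schedule, epoch count, and budget are hypothetical illustrations of the general idea, not the APART algorithm itself (which parameterizes perturbation generation rather than following a fixed schedule).

import torch
import torch.nn.functional as F

def adv_train_epoch(model, loader, optimizer, epsilon, device="cuda"):
    """One epoch of single-step adversarial training at budget `epsilon`."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_perturb(model, x, y, epsilon)  # from the sketch above
        optimizer.zero_grad()  # discard gradients accumulated by the attack
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()

# Hypothetical linear schedule: grow the attack budget across epochs instead
# of keeping it fixed, so the perturbations do not underfit late in training.
num_epochs, eps_max = 30, 8 / 255
for epoch in range(num_epochs):
    epsilon = eps_max * (epoch + 1) / num_epochs
    # adv_train_epoch(model, loader, optimizer, epsilon)  # model/loader assumed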

Quick Start Guide

The code is partly forked from the ATTA adversarial training repository and modified for APART.

Prerequisites

  • Python 3.6.3
  • PyTorch 1.3.1, torchvision 0.6.0
  • Apex 0.1.0

Examples for training and evaluation

python train.py --layerwise --gpuid 0
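
The command above covers training; for evaluation, robust accuracy under a PGD-10 attack is a common benchmark (the paper compares APART against PGD-10 training). The sketch below is a generic PyTorch PGD-10 evaluator with assumed hyperparameters (epsilon = 8/255, step size 2/255), not the repository's evaluation script.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Generic PGD attack with a random start; hyperparameters are assumptions."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    delta = (x + delta).clamp(0, 1) - x
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascent step on the sign of the gradient, projected back into the
        # epsilon-ball and the valid pixel range.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = ((x + delta).clamp(0, 1) - x).detach()
    return (x + delta).detach()

def robust_accuracy(model, loader, device="cuda"):
    """Accuracy on PGD-10 adversarial examples over a data loader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total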

Citation

Please cite the following paper if you find our model useful. Thanks!

Zichao Li*, Liyuan Liu*, Chengyu Dong, and Jingbo Shang. Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training. arXiv preprint arXiv:2010.08034 (2020).

@article{li2020apart,
  title={Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training},
  author={Li, Zichao and Liu, Liyuan and Dong, Chengyu and Shang, Jingbo},
  journal={arXiv preprint arXiv:2010.08034},
  year={2020}
}
