We propose DHA, which achieves joint optimization of Data augmentation policy, Hyper-parameter and Architecture.
🌟 End-to-End Training: First to efficiently and jointly realize Data Augmentation, Hyper-parameter Optimization, and Neural Architecture Search in an end-to-end manner without retraining.
🌟 State-of-the-art: State-of-the-art accuracy on ImageNet with both cell-based and MobileNet-like architecture search spaces.
🌟 Findings: Demonstrates the advantages of joint training over optimizing each AutoML component in sequence (a toy sketch of joint training follows below).
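To make the joint-training idea concrete, here is a minimal, hypothetical sketch of updating an augmentation parameter, a hyper-parameter, and architecture weights in one interleaved loop with no retraining stage. All names (`TinySearchModel`, `aug_std`, `log_wd`, the toy search space, and the random batches) are illustrative assumptions only; they do not reproduce the ISTA-based optimization used in this repository.

```python
# Hypothetical sketch of one-stage joint optimization (not the repository's method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySearchModel(nn.Module):
    """Toy search space: two candidate ops mixed by learnable architecture weights."""
    def __init__(self, dim=16, num_classes=10):
        super().__init__()
        self.op_a = nn.Linear(dim, dim)
        self.op_b = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.alpha = nn.Parameter(torch.zeros(2))   # architecture parameters
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)            # continuous relaxation of the op choice
        return self.head(w[0] * self.op_a(x) + w[1] * self.op_b(x))

def random_batch(n=32, dim=16, classes=10):
    # Stand-in for real training / validation batches.
    return torch.randn(n, dim), torch.randint(0, classes, (n,))

model = TinySearchModel()
aug_std = nn.Parameter(torch.tensor(0.1))           # augmentation policy parameter (noise scale)
log_wd = nn.Parameter(torch.tensor(-6.0))           # hyper-parameter: log weight-decay coefficient

weights = [p for n, p in model.named_parameters() if n != "alpha"]
w_opt = torch.optim.SGD(weights, lr=0.05)                          # updates network weights
h_opt = torch.optim.Adam([model.alpha, aug_std, log_wd], lr=1e-3)  # updates the AutoML variables

def loss_on(batch):
    x, y = batch
    x = x + aug_std * torch.randn_like(x)           # differentiable augmentation
    task = F.cross_entropy(model(x), y)
    l2 = sum((p ** 2).sum() for p in weights)
    return task + torch.exp(log_wd) * l2            # learned weight decay enters the loss

for step in range(100):
    # Weight step on a training batch.
    w_opt.zero_grad()
    loss_on(random_batch()).backward()
    w_opt.step()

    # Joint step for augmentation, hyper-parameter and architecture on a fresh batch
    # (standing in for a validation batch), interleaved with weight training,
    # i.e. no separate retraining phase.
    h_opt.zero_grad()
    loss_on(random_batch()).backward()
    h_opt.step()
```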
Supported datasets: CIFAR-10, CIFAR-100, ImageNet, Sports8, MIT67, Flowers102
Train the normal two-stage ISTA model:
python search_darts.py --conf_path conf/ista/flowers102_ista.json --Running ista_nor
Train the normal single-stage ISTA model:
python search_darts.py --conf_path conf/ista/flowers102_ista_single.json --Running ista_single_nor
Train the DHA model:
python search_darts.py --conf_path conf/ista/flowers102_ista_single.json --Running ista_single_doal
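The two-stage ISTA run is the baseline that separates search from retraining, the single-stage run trains weights and architecture together, and the DHA run additionally optimizes the augmentation policy and hyper-parameters jointly, i.e. the end-to-end setting highlighted above. To run on the other supported datasets, pass the corresponding dataset config via `--conf_path`.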
If you find our work useful or interesting, please cite our paper:
@article{zhou2021dha,
title={{DHA}: End-to-end joint optimization of data augmentation policy, hyper-parameter and architecture},
author={Zhou, Kaichen and Hong, Lanqing and Hu, Shoukang and Zhou, Fengwei and Ru, Binxin and Feng, Jiashi and Li, Zhenguo},
journal={arXiv preprint arXiv:2109.05765},
year={2021}
}