Official code for the ACMMM 2021 paper "Fully Quantized Image Super-Resolution Networks".
https://arxiv.org/abs/2011.14265
Python 3.6.5
torch 1.2.0
torchvision 0.4.0
For the full list of requirements, please refer to requirements.txt.
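As a minimal setup sketch (assuming a standard pip environment; the pinned versions are the ones listed above):

pip install torch==1.2.0 torchvision==0.4.0
pip install -r requirements.txt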
Following the existing SR settings, the models are trained on the DIV2K dataset, which can be downloaded from here (7.1GB).
For model evaluation, the benchmark datasets (250MB) can be downloaded from the hyperlink provided; they include Set5, Set14, B100 and Urban100. We also adopt DIV2K images 801 to 900 for DIV2K evaluation.
The data path can be changed in train.sh.
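As a purely hypothetical sketch (the variable name inside train.sh is an assumption and may differ; check the script itself), pointing the script at a local DIV2K copy could look like:

# inside train.sh -- the variable name below is an assumption, not the script's actual interface
DATA_PATH=/path/to/DIV2K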
For full-precision SRResNet model training, run:
CUDA_VISIBLE_DEVICES=0 bash train.sh config/config.baseline.train-scratch.div2k.fp.srresnet $name
For FQSR model training (SRResNet-based), run:
CUDA_VISIBLE_DEVICES=0 bash train.sh config/config.lsq.finetune.div2k.bit.srresnet $name
For the best results, the resume path of the full-precision checkpoint can be specified in the corresponding config/config.lsq.finetune.div2k.bit.srresnet file, so that quantized training starts from the full-precision weights.
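A hypothetical sketch of that entry, assuming the config exposes a resume-style field (the key name and path below are assumptions; check the config file for its exact format):

# in config/config.lsq.finetune.div2k.bit.srresnet -- key name and path are assumptions
resume: /path/to/full_precision_srresnet_checkpoint.pth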
Similarly, for full-precision EDSR model training, run:
CUDA_VISIBLE_DEVICES=0 bash train.sh config/config.baseline.train-scratch.div2k.fp.edsr $name
For FQSR model training (EDSR-based), run:
CUDA_VISIBLE_DEVICES=0 bash train.sh config/config.lsq.finetune.div2k.bit.edsr $name
For model evaluation, the resume path of the model under test can be specified in the corresponding config/config.lsq.finetune.div2k.bit.srresnet file. Remember to turn on the --test_only option.
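As a sketch of a full evaluation call, assuming train.sh forwards extra options to the underlying script (if it does not, enable --test_only inside the config instead):

CUDA_VISIBLE_DEVICES=0 bash train.sh config/config.lsq.finetune.div2k.bit.srresnet $name --test_only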
Enjoy!!
If this repository is useful for your research, please consider citing:
@article{wang2020fully,
  title={Fully Quantized Image Super-Resolution Networks},
  author={Wang, Hu and Chen, Peng and Zhuang, Bohan and Shen, Chunhua},
  journal={arXiv preprint arXiv:2011.14265},
  year={2020}
}