🔥 You may be interested in our new work:
- Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration, accepted by AAAI 2024.
- Exploiting Self-Supervised Constraints in Image Super-Resolution, to appear at ICME 2024 (Oral).
@ARTICLE{10176303,
  author={Wu, Gang and Jiang, Junjun and Liu, Xianming},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  title={A Practical Contrastive Learning Framework for Single-Image Super-Resolution},
  year={2023},
  volume={},
  number={},
  pages={1-12},
  doi={10.1109/TNNLS.2023.3290038}
}
Contrastive learning has achieved remarkable success on various high-level tasks, but fewer contrastive learning-based methods have been proposed for low-level tasks. It is challenging to directly adopt vanilla contrastive learning techniques designed for high-level visual tasks to low-level image restoration problems, because the acquired global visual representations are insufficient for low-level tasks that require rich texture and context information. In this paper, we investigate contrastive learning-based single image super-resolution (SISR) from two perspectives: positive and negative sample construction and feature embedding. Existing methods take naive sample construction approaches (e.g., treating the low-quality input as a negative sample and the ground truth as a positive sample) and adopt a prior model (e.g., a pre-trained VGG model) to obtain the feature embedding. To this end, we propose a practical contrastive learning framework for SISR, named PCL-SR. We generate many informative positive and hard negative samples in frequency space. Instead of utilizing an additional pre-trained network, we design a simple but effective embedding network inherited from the discriminator network, which is more task-friendly. Compared with existing benchmark methods, we re-train them with our proposed PCL-SR framework and achieve superior performance. Extensive experiments and thorough ablation studies demonstrate the effectiveness and technical contributions of PCL-SR.
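To make the sample-construction idea concrete, below is a minimal sketch of one way to generate frequency-space hard negatives from the ground truth by low-pass filtering its spectrum, so each negative keeps the global content of the HR image but loses the high-frequency detail that SR must recover. The cutoff radii and the FFT-based circular filter are illustrative assumptions, not the exact procedure used in PCL-SR.

```python
import torch

def lowpass_negatives(hr, cutoffs=(0.5, 0.25, 0.125)):
    """Generate hard negatives by keeping only the low-frequency part of the
    ground-truth spectrum (illustrative sketch; cutoffs are assumptions)."""
    _, _, h, w = hr.shape
    spec = torch.fft.fftshift(torch.fft.fft2(hr), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, h, device=hr.device),
        torch.linspace(-1, 1, w, device=hr.device),
        indexing="ij",
    )
    radius = torch.sqrt(yy**2 + xx**2)            # normalized distance from the spectrum center
    negatives = []
    for cut in cutoffs:
        mask = (radius <= cut).to(spec.dtype)     # circular low-pass mask
        filtered = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
        negatives.append(filtered)
    return negatives
```

A contrastive objective then pulls the SR output toward the HR positive and pushes it away from such blurred negatives in an embedding space; in PCL-SR that embedding comes from a network inherited from the discriminator rather than from a pre-trained VGG.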
Download the DIV2K training data (800 training + 100 validation images). For more information, please refer to EDSR(PyTorch) and RCAN.
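If it helps to double-check the data location, the snippet below verifies the folder layout that EDSR(PyTorch)-style code usually expects under its data root; the root path is a placeholder and the exact sub-folders depend on your configuration, so treat this as a sanity check rather than the required setup.

```python
import os

dir_data = "/path/to/datasets"                 # placeholder; point this at your data root
expected = [
    "DIV2K/DIV2K_train_HR",                    # 0001.png ... HR images
    "DIV2K/DIV2K_train_LR_bicubic/X2",         # bicubic LR counterparts per scale
    "DIV2K/DIV2K_train_LR_bicubic/X3",
    "DIV2K/DIV2K_train_LR_bicubic/X4",
]
for sub in expected:
    path = os.path.join(dir_data, sub)
    print(("ok      " if os.path.isdir(path) else "missing ") + path)
```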
We adopt the official implementations of EDSR(PyTorch), RCAN, and HAN.
Our contrastive loss with a GAN-like framework is implemented in src/loss/adversarial.py, and the VGG-based contrastive loss is in src/loss/cl.py.
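For orientation, here is a minimal sketch of what a VGG-feature contrastive term generally looks like: an L1 pull/push ratio over a few VGG-19 activations. The chosen layers, the distance, and the omission of ImageNet input normalization are assumptions made for brevity and are not necessarily identical to what src/loss/cl.py implements.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class VGGContrastive(nn.Module):
    """Illustrative VGG-feature contrastive term: pull the SR output toward the
    HR positive and push it away from the negatives (not the exact cl.py code)."""

    def __init__(self, layers=(3, 8, 17)):       # relu1_2, relu2_2, relu3_4 (assumed choice)
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        self.slices = nn.ModuleList(nn.Sequential(*vgg[: l + 1]) for l in layers)
        for p in self.parameters():
            p.requires_grad_(False)               # the embedding is frozen

    def _feats(self, x):
        return [s(x) for s in self.slices]

    def forward(self, sr, hr, negatives, eps=1e-7):
        # negatives: list of HR-shaped image tensors (e.g., low-passed HR images)
        f_sr, f_hr = self._feats(sr), self._feats(hr)
        f_negs = [self._feats(n) for n in negatives]
        loss = 0.0
        for i in range(len(self.slices)):
            d_pos = F.l1_loss(f_sr[i], f_hr[i].detach())                      # stay close to the positive
            d_neg = sum(F.l1_loss(f_sr[i], fn[i].detach()) for fn in f_negs)  # move away from negatives
            loss = loss + d_pos / (d_neg + eps)
        return loss
```

PCL-SR's key departure from this recipe is to replace the pre-trained VGG embedding with the task-friendly embedding network inherited from the discriminator (the GAN-like variant in src/loss/adversarial.py).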
To reproduce our results, please integrate our code into their official implementations and re-train the models.
More methods and other low-level tasks will be tested in the future.
Test datasets can be found in EDSR(PyTorch). PSNR and SSIM metric scripts can be found here.
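As a rough stand-in until you run the official metric scripts, the snippet below computes PSNR the way SR benchmarks conventionally do, on the Y channel of YCbCr with `scale` border pixels cropped; rounding and border handling in the official scripts may differ slightly, so treat numbers from this sketch as approximate.

```python
import numpy as np

def rgb_to_y(img):
    # Luma channel of an RGB uint8 image (H, W, 3), ITU-R BT.601 coefficients.
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr, hr, scale):
    """PSNR on the Y channel with `scale` border pixels cropped (common SR convention)."""
    sr_y = rgb_to_y(sr)[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr)[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```

SSIM can be computed analogously on the same cropped Y channel (for example with skimage.metrics.structural_similarity), but the linked scripts remain the reference for the reported numbers.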
Our pre-trained models have been released; please download them from Google Drive and test them accordingly.
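Before plugging a downloaded checkpoint into the corresponding official test code, a quick inspection of its parameter names can confirm which architecture it matches; the file name below is a placeholder for whichever checkpoint you downloaded.

```python
import torch

ckpt_path = "pcl_sr_model.pt"                            # placeholder file name
state = torch.load(ckpt_path, map_location="cpu")
if isinstance(state, dict) and "state_dict" in state:    # unwrap if the weights are nested
    state = state["state_dict"]
for name, tensor in list(state.items())[:10]:            # peek at the first few entries
    print(name, tuple(tensor.shape))
print(f"{len(state)} tensors in total")
```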
Main results.
Some visual examples are presented below.
Urban100 Samples
Manga109 Samples
Robustness on ResSRSet
We thank the authors for sharing the code of EDSR (PyTorch), RCAN, HAN, and NLSN.