This repository is for the rawSR algorithm introduced in the TPAMI paper "Exploiting Raw Images for Real-Scene Super-Resolution".
Conference version: "Towards Real Scene Super-Resolution with Raw Images", CVPR 2019.
Our model is trained and tested in the following environment on Ubuntu:

Python v2.7.5 with the following packages (a quick version check is sketched after the list):
- tensorflow-gpu: v1.9.0
- rawpy: v0.12.0
- numpy: v1.15.3
- scipy: v1.1.0
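A minimal sketch for that version check, assuming the pip distribution names (in particular 'tensorflow-gpu') match the packages above:

```python
# Quick environment check: confirm the required packages are installed
# and print their versions (works on both Python 2.7 and 3).
import pkg_resources

for pkg in ["tensorflow-gpu", "rawpy", "numpy", "scipy"]:
    try:
        print("%s: %s" % (pkg, pkg_resources.get_distribution(pkg).version))
    except pkg_resources.DistributionNotFound:
        print("%s: NOT INSTALLED" % pkg)
```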
Fig. 5. The image restoration branch adopts the proposed DCA blocks in an encoder-decoder framework and reconstructs high-resolution linear color measurements from the degraded low-resolution raw input.

Fig. 7. Network architecture of the color correction branch. Our model predicts the pixel-wise transformations A and B with a reference color image.
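To illustrate how the predicted maps in Fig. 7 could be used, the sketch below applies A and B as a per-pixel affine transformation; the function name and shapes are hypothetical, and the exact formulation is the one defined in the paper:

```python
import numpy as np

def apply_color_correction(linear_rgb, A, B):
    """Hypothetical illustration of a pixel-wise color transformation.

    linear_rgb: restored high-resolution linear image, shape (H, W, 3)
    A, B:       per-pixel maps predicted by the color correction branch,
                each of shape (H, W, 3)

    Assumes the transformation is applied as an element-wise affine map;
    see the paper for the exact definition used by rawSR.
    """
    return A * linear_rgb + B

# Toy usage with tensors standing in for network outputs.
H, W = 4, 4
linear = np.random.rand(H, W, 3).astype(np.float32)
A = np.ones((H, W, 3), dtype=np.float32)   # identity gain
B = np.zeros((H, W, 3), dtype=np.float32)  # zero offset
out = apply_color_correction(linear, A, B)
assert np.allclose(out, linear)  # identity transform leaves the input unchanged
```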
Prepare training data

Begin to train
- (optional) Download the pretrained weights and place them in './log_dir'.
- Run the following script to train our model:
  python train_and_test.py
- For different purposes, you can edit './parameters.py' and change the parameters according to the annotations (see the sketch after this list). The default setting trains the model from scratch (epoch 0) without pretrained weights, and the validation images are used to evaluate model performance every 10 epochs.
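For orientation, './parameters.py' might look roughly like this; only 'REAL', 'REAL_DATA_PATH', and 'RESULT_PATH' are named in this README, and the remaining names and values are hypothetical placeholders:

```python
# parameters.py (hypothetical sketch; only REAL, REAL_DATA_PATH, and
# RESULT_PATH are confirmed by this README, the rest is illustrative).

REAL = False                       # True: test on real raw images
REAL_DATA_PATH = './Dataset/REAL'  # where real raw inputs are read from
RESULT_PATH = './results'          # where testing outputs are written

# Hypothetical training knobs, named here only for illustration:
START_EPOCH = 0       # default: train from scratch
VALIDATE_EVERY = 10   # validate every 10 epochs (per the README)
PRETRAINED_DIR = './log_dir'
```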
Prepare testing data

Synthetic data

Real data
- If you wish to test real data, you can prepare the raw images yourself or download some examples from [Google Drive].
- Place the downloaded dataset or your prepared raw images (e.g., .CR2, .RAW, .NEF) in './Dataset/' under a folder named 'REAL', or modify 'REAL_DATA_PATH' in 'parameters.py' to point to the corresponding path. A loading sketch follows this list.
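If you prepare the raw images yourself, a minimal rawpy sketch for inspecting a file before placing it in './Dataset/REAL' could look like this (the file name is a placeholder, and the repository's own pipeline may pack the Bayer data differently):

```python
import numpy as np
import rawpy

# Load a camera raw file (e.g., .CR2 or .NEF); the file name is a placeholder.
raw = rawpy.imread('./Dataset/REAL/example.CR2')

# Raw Bayer mosaic as captured by the sensor, before demosaicing.
bayer = raw.raw_image_visible.astype(np.float32)
print('Bayer shape: %s' % (bayer.shape,))

# Quick demosaiced preview via LibRaw's default pipeline; this is for
# inspection only, since the model consumes the raw input directly.
rgb = raw.postprocess()
print('RGB preview shape: %s' % (rgb.shape,))
```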
Begin to test

Synthetic data

Real image
- Set 'REAL' in 'parameters.py' to True.
- Download the pretrained models from [Google Drive] [x2], [x4], and place them in './log_dir'.
- Then run the following script for testing (a small driver sketch follows this list):
  python train_and_test.py
- Note that the testing results can be found in the path defined by 'RESULT_PATH' in 'parameters.py'.
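Putting these steps together, a small driver script might look like the following; it only wraps the commands above, and the sanity check on './log_dir' is illustrative:

```python
import os
import subprocess

# Assumes the pretrained x2/x4 models were downloaded to ./log_dir
# and that REAL has been set to True in parameters.py.
if not os.path.isdir('./log_dir') or not os.listdir('./log_dir'):
    raise RuntimeError('No pretrained weights found in ./log_dir')

# Run the repository's combined train/test entry point.
subprocess.check_call(['python', 'train_and_test.py'])

# The outputs are written to the path given by RESULT_PATH in parameters.py.
```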
Quantitative comparisons

TABLE 1. Quantitative evaluations on the synthetic dataset. "Blind" represents images with variable blur kernels, and "Non-blind" denotes a fixed kernel.
Visual comparisons

Synthetic data
Fig. 8. Results on the proposed synthetic dataset. References for the baseline methods, including SID [1], SRDenseNet [5], and RDN [6], can be found in Table 1. "GT" represents the ground truth.

Real data
Fig. 11. Comparison with the state of the art on real-captured images. Since the outputs are of ultra-high resolution, ranging from 6048 × 8064 to 12416 × 17472, we only show image patches cropped from the tiny green boxes in (a). The input images from top to bottom are captured by Sony, Canon, iPhone 6s Plus, and Nikon cameras, respectively.
All the results of our algorithm mentioned above can be found on [Google Drive]: [Blind], [Non-blind], [Real].
[1] C. Chen, Q. Chen, J. Xu, and V. Koltun. Learning to see in the dark. In CVPR, 2018.
[2] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In ECCV, 2014.
[3] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, 2016.
[4] E. Schwartz, R. Giryes, and A. M. Bronstein. DeepISP: Toward learning an end-to-end image processing pipeline. TIP, 2018.
[5] T. Tong, G. Li, X. Liu, and Q. Gao. Image super-resolution using dense skip connections. In ICCV, 2017.
[6] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu. Residual dense network for image super-resolution. In CVPR, 2018.
Please consider citing these papers if you find the code and data useful in your research:
@inproceedings{xu2019towards,
  title={Towards real scene super-resolution with raw images},
  author={Xu, Xiangyu and Ma, Yongrui and Sun, Wenxiu},
  booktitle={CVPR},
  year={2019}
}

@article{xu2021exploiting,
  title={Exploiting Raw Images for Real-Scene Super-Resolution},
  author={Xu, Xiangyu and Ma, Yongrui and Sun, Wenxiu and Yang, Ming-Hsuan},
  journal={TPAMI},
  year={2021}
}