Not Just Streaks: Towards Ground Truth for Single Image Deraining (ECCV'22)

Yunhao Ba1*, Howard Zhang1*, Ethan Yang1, Akira Suzuki1, Arnold Pfahnl1, Chethan Chinder Chandrappa1, Celso de Melo2, Suya You2, Stefano Soatto1, Alex Wong3, Achuta Kadambi1

University of California, Los Angeles1, US Army Research Laboratory2, Yale University3

Project Webpage

https://visual.ee.ucla.edu/gt_rain.htm/

Abstract

We propose a large-scale dataset of real-world rainy and clean image pairs and a method to remove degradations, induced by rain streaks and rain accumulation, from the image. As there exists no real-world dataset for deraining, current state-of-the-art methods rely on synthetic data and thus are limited by the sim2real domain gap; moreover, rigorous evaluation remains a challenge due to the absence of a real paired dataset. We fill this gap by collecting the first real paired deraining dataset through meticulous control of non-rain variations. Our dataset enables paired training and quantitative evaluation for diverse real-world rain phenomena (e.g. rain streaks and rain accumulation). To learn a representation invariant to rain phenomena, we propose a deep neural network that reconstructs the underlying scene by minimizing a rain-invariant loss between rainy and clean images. Extensive experiments demonstrate that our model outperforms the state-of-the-art deraining methods on real rainy images under various conditions.

Citation

@InProceedings{ba2022gt-rain,
      author={Ba, Yunhao and Zhang, Howard and Yang, Ethan and Suzuki, Akira and Pfahnl, Arnold and Chandrappa, Chethan Chinder and de Melo, Celso and You, Suya and Soatto, Stefano and Wong, Alex and Kadambi, Achuta},
      title={Not Just Streaks: Towards Ground Truth for Single Image Deraining},
      booktitle={ECCV},
      year={2022}
}

Dataset

The dataset can be found here.

Requirements

All code was tested on Google Colab with the following:

  • Ubuntu 18.04.6
  • CUDA 11.2
  • Python 3.7.13
  • OpenCV-Python 4.6.0
  • PyTorch 1.12.1
  • scikit-image 0.18.3
  • piq 0.7.0
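
On a fresh Colab runtime, PyTorch and OpenCV are typically preinstalled, so in practice only the remaining packages need to be added. The cell below is a minimal sketch assuming the versions listed above; relax the pins if they conflict with Colab's defaults.

```python
# Minimal Colab setup cell (a sketch): install the packages that are usually
# not preinstalled, pinned to the versions this code was tested with.
!pip install piq==0.7.0 scikit-image==0.18.3

# Sanity-check the environment against the versions listed above.
import torch, cv2, skimage, piq
print(torch.__version__, cv2.__version__, skimage.__version__, piq.__version__)
```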

Setup

Download the dataset from the link above and change the parameters in the training and testing code to point to the appropriate directories.
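
For reference, the parameters section of each notebook might look something like the following. The variable names and paths here are illustrative assumptions (only resume_train and model_path are named in this README); adapt them to wherever you placed the downloaded data.

```python
# Hypothetical parameters cell; replace the paths with your own download location.
rain_dir     = '/content/drive/MyDrive/GT-RAIN/train'  # hypothetical path to the paired training scenes
test_dir     = '/content/drive/MyDrive/GT-RAIN/test'   # hypothetical path to the paired test set
model_path   = 'model/model_checkpoint.pth'            # provided final weights
resume_train = False                                   # set True to resume from model_path
```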

Running

Training: After setting up the directory structure as specified above, simply run the training loop at the bottom of training_deraining_code.ipynb. Additionally, model weights can be loaded from previous checkpoints by changing resume_train and model_path in the parameters section.
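
As a rough illustration of the resume logic controlled by resume_train and model_path, weights can be reloaded along the following lines; the placeholder model and checkpoint layout below are assumptions, not the notebook's exact code.

```python
import torch
import torch.nn as nn

resume_train = True
model_path = 'model/model_checkpoint.pth'  # final weights shipped in this repo

model = nn.Identity()  # placeholder; the notebook constructs the actual deraining network here
if resume_train:
    checkpoint = torch.load(model_path, map_location='cpu')
    # Some checkpoints wrap the weights in a 'state_dict' entry; fall back to the raw dict otherwise.
    state_dict = checkpoint.get('state_dict', checkpoint) if isinstance(checkpoint, dict) else checkpoint
    model.load_state_dict(state_dict, strict=False)  # strict=False because the placeholder has no parameters
```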

Testing: We provide two testing modes in testing_deraining_code.ipynb: one for a generic test set, configured by specifying separate folders for the input rainy images and the corresponding ground truths in the parameter section, and one for our own test set, which can be downloaded from the dataset link above. Our final model weights are located at model/model_checkpoint.pth.
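
For the generic test-set mode, evaluation amounts to pairing each derained output with its ground truth and computing image-quality metrics. The sketch below uses piq (listed in the requirements) and assumes matching filenames in two hypothetical folders, which is not necessarily how the notebook organizes its inputs and outputs.

```python
# Hedged evaluation sketch: report average PSNR/SSIM between derained outputs
# and ground-truth images paired by filename. Folder names are assumptions.
import os
import cv2
import torch
import piq

derained_dir = 'results/derained'  # hypothetical folder of derained outputs
gt_dir = 'data/test/gt'            # hypothetical folder of ground-truth images

def to_tensor(path):
    # Load an image as a 1x3xHxW float tensor in [0, 1] (RGB order).
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float() / 255.0

psnrs, ssims = [], []
for name in sorted(os.listdir(gt_dir)):
    pred = to_tensor(os.path.join(derained_dir, name))
    gt = to_tensor(os.path.join(gt_dir, name))
    psnrs.append(piq.psnr(pred, gt, data_range=1.0).item())
    ssims.append(piq.ssim(pred, gt, data_range=1.0).item())

print(f'PSNR: {sum(psnrs) / len(psnrs):.2f}  SSIM: {sum(ssims) / len(ssims):.4f}')
```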

Disclaimer

Please only use the code and dataset for research purposes.

Contact

Yunhao Ba
UCLA, Electrical and Computer Engineering Department
[email protected]

Howard Zhang
UCLA, Electrical and Computer Engineering Department
[email protected]