This is the implementation of "DiffHDR: Towards High-quality HDR Deghosting with Conditional Diffusion Models", Q. Yan, T. Hu, Y. Sun, H. Tang, Y. Zhu, W. Dong, L. Van Gool, and Y. Zhang, IEEE Transactions on Circuits and Systems for Video Technology, 2023. [arXiv]
Abstract: High Dynamic Range (HDR) images can be recovered from several Low Dynamic Range (LDR) images by existing Deep Neural Network (DNN) techniques. Despite the remarkable progress, DNN-based methods still generate ghosting artifacts when the LDR images exhibit saturation and large motion, which hinders potential applications in real-world scenarios. To address this challenge, we formulate the HDR deghosting problem as an image generation problem that leverages LDR features as the diffusion model's condition, consisting of a feature condition generator and a noise predictor. The feature condition generator employs attention and a Domain Feature Alignment (DFA) layer to transform the intermediate features and avoid ghosting artifacts. With the learned features as conditions, the noise predictor leverages the stochastic iterative denoising process of diffusion models to generate an HDR image by steering the sampling process. Furthermore, to mitigate semantic confusion caused by the saturation problem of LDR images, we design a sliding window noise estimator to sample smooth noise in a patch-based manner. In addition, an image space loss is proposed to avoid color distortion in the estimated HDR results. We empirically evaluate our model on benchmark datasets for HDR imaging. The results demonstrate that our approach achieves state-of-the-art performance and generalizes well to real-world images.
The framework of the proposed method. The top figure illustrates the diffusion process, while the bottom figure shows the reverse process, which involves a feature conditional generator and a noise predictor. The feature conditional generator incorporates implicitly aligned LDR features into the noise predictor through an affine transformation that guides the generation process.
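The affine transformation above is FiLM-style feature modulation: the condition features are mapped to per-channel scale and shift parameters that modulate the noise predictor's intermediate activations. Below is a minimal sketch of that idea; the layer name `DFALayer`, the channel sizes, and the convolutional mapping are illustrative assumptions, not the repository's actual module.

```python
import torch
import torch.nn as nn

class DFALayer(nn.Module):
    """Minimal sketch of affine feature conditioning (FiLM-style).

    Aligned LDR condition features are mapped to per-channel scale and
    shift parameters that modulate the noise predictor's intermediate
    activations. Names and sizes here are illustrative only.
    """

    def __init__(self, cond_channels: int, feat_channels: int):
        super().__init__()
        # Predict scale (gamma) and shift (beta) from the condition features.
        self.to_scale_shift = nn.Conv2d(cond_channels, 2 * feat_channels,
                                        kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=1)
        return feat * (1 + gamma) + beta  # affine modulation of the features


# Toy usage: modulate 64-channel features with a 32-channel condition.
dfa = DFALayer(cond_channels=32, feat_channels=64)
feat = torch.randn(1, 64, 32, 32)
cond = torch.randn(1, 32, 32, 32)
out = dfa(feat, cond)  # shape (1, 64, 32, 32)
```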
Requirements
- Python 3.9
- PyTorch 1.13
- CUDA 11.7 on Ubuntu 18.04
conda env create -n DiffHDR -f environment.txt
- Download the dataset (training and test sets) from Kalantari17's dataset
- Please ensure the data structure is as below (a minimal loading sketch follows the tree):
./data/Training
|--001
| |--short.tif
| |--medium.tif
| |--long.tif
| |--exposure.txt
| |--HDRImg.hdr
|--002
...
./data/Test (includes 15 scenes from `EXTRA` and `PAPER`)
|--001
| |--short.tif
| |--medium.tif
| |--long.tif
| |--exposure.txt
| |--HDRImg.hdr
...
|--015
| |--short.tif
| |--medium.tif
| |--long.tif
| |--exposure.txt
| |--HDRImg.hdr
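For reference, a scene in this layout can be loaded roughly as follows. This is an illustrative sketch, not the repository's dataloader: the gamma value, the LDR-to-HDR-domain mapping, and the 6-channel per-exposure input are common conventions for Kalantari17 data and are assumptions here.

```python
import os
import cv2
import numpy as np

GAMMA = 2.2  # assumed gamma for mapping LDR values into the HDR domain

def read_scene(scene_dir):
    """Load one scene: short/medium/long LDRs, exposures, and HDR ground truth."""
    ldr_paths = [os.path.join(scene_dir, f"{n}.tif")
                 for n in ("short", "medium", "long")]
    # exposure.txt stores exposure biases in stops; convert to linear factors.
    expos = 2.0 ** np.loadtxt(os.path.join(scene_dir, "exposure.txt"))
    inputs = []
    for path, t in zip(ldr_paths, expos):
        ldr = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0
        ldr = cv2.cvtColor(ldr, cv2.COLOR_BGR2RGB)
        hdr_domain = (ldr ** GAMMA) / t  # map each LDR into the HDR domain
        inputs.append(np.concatenate([ldr, hdr_domain], axis=-1))  # 6 channels
    gt = cv2.imread(os.path.join(scene_dir, "HDRImg.hdr"), cv2.IMREAD_UNCHANGED)
    gt = cv2.cvtColor(gt, cv2.COLOR_BGR2RGB).astype(np.float32)
    return inputs, gt
```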
- Prepare the cropped training set by running gen_crop_data.py from HDR-Transformer (optional; it produces better results than random cropping):
python gen_crop_data.py
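If you skip the script, the underlying idea is simply to pre-slice each scene into fixed-size overlapping patches instead of sampling random crops at train time. A rough sketch (the patch size and stride are assumptions, not the script's actual values):

```python
import numpy as np

def crop_patches(img: np.ndarray, patch_size: int = 128, stride: int = 64):
    """Slice an H x W x C image into overlapping patches (illustrative values)."""
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches
```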
To train the model, run:
python train_diffusion.py
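Conceptually, each training step follows the standard DDPM noise-prediction objective, combined with the paper's image-space loss applied to a tonemapped x0 estimate. The sketch below is schematic rather than a copy of train_diffusion.py; the model interface `model(x_t, t, cond)`, the μ-law constant, and the loss weight `lambda_img` are assumptions.

```python
import math
import torch
import torch.nn.functional as F

MU = 5000.0  # mu-law tonemapping constant (a common choice in HDR work)

def mu_tonemap(x):
    """Map linear HDR values in [0, 1] to the tonemapped domain."""
    return torch.log(1 + MU * x) / math.log(1 + MU)

def train_step(model, x0, cond, alphas_cumprod, optimizer, lambda_img=1.0):
    """One schematic DDPM training step with an auxiliary image-space loss."""
    b = x0.size(0)
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_t.sqrt() * x0 + (1 - a_t).sqrt() * noise  # forward diffusion q(x_t | x_0)
    pred_noise = model(x_t, t, cond)                  # conditioned noise predictor
    # Recover an x0 estimate so the loss can also be applied in image space.
    x0_hat = (x_t - (1 - a_t).sqrt() * pred_noise) / a_t.sqrt()
    loss = F.mse_loss(pred_noise, noise) \
         + lambda_img * F.l1_loss(mu_tonemap(x0_hat.clamp(0, 1)), mu_tonemap(x0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```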
To evaluate DiffHDR using a pre-trained checkpoint with the current version of the repository, run:
python eval_diffusion.py --config "hdr.yml" --resume 'Hdr_ddpm3000000.pth.tar' --sampling_timesteps 25 --grid_r 64
You can download pre-trained models from Google Drive (Pre-trained Models). In addition, we provide inference results obtained with the pre-trained models using p=512, r=64, and T=25.
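Here p, r (`--grid_r`), and T (`--sampling_timesteps`) refer to the sliding-window noise estimator: at each reverse step, noise is predicted on overlapping p×p patches and the overlapping predictions are averaged into a smooth full-image estimate. A minimal sketch of that averaging, assuming uniform patch weighting and the same hypothetical interface `model(x_t, t, cond)` as above:

```python
import torch

def sliding_window_noise(model, x_t, t, cond, p=512, r=64):
    """Average per-patch noise predictions over an overlapping grid.

    p: patch size, r: grid step (matches --grid_r). Illustrative only;
    the repository's estimator may weight overlapping patches differently.
    """
    _, _, h, w = x_t.shape
    noise = torch.zeros_like(x_t)
    count = torch.zeros_like(x_t)
    ys = list(range(0, max(h - p, 0) + 1, r))
    xs = list(range(0, max(w - p, 0) + 1, r))
    # Ensure the last patches reach the bottom/right image borders.
    if ys[-1] != max(h - p, 0):
        ys.append(max(h - p, 0))
    if xs[-1] != max(w - p, 0):
        xs.append(max(w - p, 0))
    for y in ys:
        for x in xs:
            pred = model(x_t[:, :, y:y + p, x:x + p], t,
                         cond[:, :, y:y + p, x:x + p])
            noise[:, :, y:y + p, x:x + p] += pred
            count[:, :, y:y + p, x:x + p] += 1
    return noise / count  # averaged full-image noise estimate
```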
If you find this repo useful, please consider citing:
@article{yan2023towards,
title={Towards high-quality HDR deghosting with conditional diffusion models},
author={Yan, Qingsen and Hu, Tao and Sun, Yuan and Tang, Hao and Zhu, Yu and Dong, Wei and Van Gool, Luc and Zhang, Yanning},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2023},
publisher={IEEE}
}
Our work is inspired by the following works and uses parts of their official implementations:
Thanks to their great work!