Problems with model training #1
I am happy you are interested in our work. May I ask when this problem occurs: during early training, or later? I have retrained the code on the DDN dataset and did not encounter the problem.
Thank you very much for your reply. To be precise, we encountered this problem during training. We trained on a new dataset and tested the checkpoints saved at 10, 100, and 500 epochs respectively, and the output was similar to this image. To rule out the influence of the training data, we trained directly on Rain100L, and the results were similar. To rule out the test code, we tested with the model you provided and found that its results are normal. The relevant code is the dataset-loading part in main.py and the load_image function in dataset.py (a sketch of a typical loader follows below).
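For reference, here is a minimal sketch of what a `load_image` function in dataset.py and its sanity check might look like in a PyTorch derain project. The file names, normalization, and channel order are assumptions, not the repository's actual code:

```python
# Hypothetical sketch of dataset.py's load_image; the real repository code
# may differ in file layout, normalization, and channel order.
import cv2
import numpy as np
import torch

def load_image(path):
    """Read an image, convert BGR->RGB, scale to [0, 1], return a CHW float tensor."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)          # HWC, BGR, uint8
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)        # HWC, RGB
    img = img.astype(np.float32) / 255.0              # scale to [0, 1]
    return torch.from_numpy(img).permute(2, 0, 1)     # CHW float tensor

# Quick check that the loader itself is not the problem: write the tensor
# back to disk and inspect it visually.
if __name__ == "__main__":
    t = load_image("example_rainy.png")               # hypothetical sample path
    back = (t.permute(1, 2, 0).numpy() * 255.0).clip(0, 255).astype(np.uint8)
    cv2.imwrite("loader_check.png", cv2.cvtColor(back, cv2.COLOR_RGB2BGR))
```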
I also encountered the same problem. Have you solved it?
Thanks a lot for your contributions. We are very interested in your work, but after retraining the model (with everything adjusted to match the parameters in the paper), we found the new model's output unacceptable: it appeared to be outputting the wavelet components. It is worth mentioning that we did not find any problem in the output of the provided pre-trained model, which left us confused. We then switched to another dataset and the issue still occurred, so we are sure it is not a problem with the model's input and output. Now we are at a loss; can you help us?
Our training environment is torch 1.7 and CUDA 10.1 on a Tesla V100.
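Since the retrained model's output looks like wavelet sub-bands while the provided checkpoint looks fine, one quick diagnostic is to treat the output channels as LL/LH/HL/HH coefficients and run an inverse 2-D DWT. This is only a sketch, not the repository's own code: the use of pywt, the Haar wavelet, the sub-band order, and the saved-output file name are all assumptions. If the reconstruction looks like a plausible derained image, the network is learning and the inverse wavelet step is likely missing or mis-wired in the retrained inference path:

```python
# Hypothetical diagnostic: interpret a 4-sub-band network output as wavelet
# coefficients and reconstruct the image with an inverse 2-D DWT.
# The sub-band order (LL, LH, HL, HH) and the 'haar' wavelet are assumptions.
import numpy as np
import pywt

def reconstruct_from_subbands(subbands):
    """subbands: array of shape (4, H, W) holding LL, LH, HL, HH."""
    ll, lh, hl, hh = subbands
    return pywt.idwt2((ll, (lh, hl, hh)), wavelet="haar")

# Example usage with a saved model output (hypothetical file name):
# out = np.load("retrained_output.npy")      # shape (4, H, W)
# img = reconstruct_from_subbands(out)
# If img looks like a clean image, the inverse transform is probably being
# skipped somewhere in the retrained model's inference path.
```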