Ratio between fake and reconstructed #32

Hi, thanks for the great work! I want to ask: if I want to change the ratio of fake and reconstructed images while training, what should I do? Can you help me with that? (Previously, I used your code to train a model and got a low reconstruction loss, but the fake loss is not as low as the reconstruction loss.)

Comments
Hi @PoRuSung, Thank you for your interest in our work!
If you simply want to change the ratio (importance) of the reconstruction and translation losses, then you can adjust `lambda_cyc`. However, I am not sure if that answers your question. In particular, if you are referring to a runtime ratio of the fake and reco losses, or some other ratio, then a different procedure may be needed. If that is the case, could you please elaborate on the problem you encounter, and we will try to diagnose it.
@usert5432 Hi, thanks for your reply!
But I wonder about the difference between the losses you mentioned above (fake and reco) and the reconstruction and translation losses; currently my understanding isn't sufficient to distinguish these two. Would you kindly explain a little bit for me? Thank you very much.
Hi @PoRuSung, Thank you for the elaboration.
Unfortunately, setting `lambda_cyc` only changes the abstract relative importance of the reconstruction (cycle-consistency) loss with respect to the translation (GAN) loss.
However, this abstract "importance" does not directly translate to the actual numerical values of the losses. For example, if one trains CycleGAN with a large `lambda_cyc`, the reconstruction loss can still end up numerically larger than the translation loss, because the two terms are computed with different loss functions on different scales. Now, if you care about the "importance" of the fake and reco losses, then `lambda_cyc` is the parameter to adjust; a sketch of how it enters the objective follows below.
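For concreteness, here is a minimal sketch of how such a weight typically enters a CycleGAN-style generator objective (a PyTorch illustration, not the repository's actual code; the `lambda_cyc` name is taken from the config file shared later in this thread):

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, real, reconstructed, lambda_cyc=10.0):
    """Combine the translation (GAN) and reconstruction (cycle) terms.

    `lambda_cyc` only sets the relative *weight* of the two terms; the raw
    numerical values still depend on the loss functions themselves (an L1
    cycle loss and an adversarial loss live on different scales), so a low
    reconstruction loss does not imply an equally low fake loss.
    """
    # Translation ("fake") term: push the discriminator to score fakes as real.
    loss_gan = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))

    # Reconstruction ("reco") term: cycle-consistency between the input
    # and its reconstruction after a round trip A -> B -> A.
    loss_cyc = F.l1_loss(reconstructed, real)

    return loss_gan + lambda_cyc * loss_cyc
```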
Hi @usert5432, thank you for your detailed explanation, I'll try adjusting `lambda_cyc`.
Hi @usert5432, I would like to ask another question about the training image size. I set the training image size to 960*592 (which is divisible by 16), but encountered an error message during training.
Hi @PoRuSung,
Sure. I suspect this issue may be happening because you perform a parameter transfer from a model that was pre-trained on images of size (256, 256). Could you please double-check that and/or share your training config?
Hi @usert5432, here is my config file: d2b_lambda_cyc_zero_translation.zip
Hi @PoRuSung, thank you for sharing the config file! Indeed, it looks like the training attempts to use pre-trained encoders. However, "afhq/pretrain" was pre-trained on images of size (256, 256) and thus cannot simply be transferred to images of size (592, 960). I think this error can be fixed by disabling the pre-training (i.e., not performing the parameter transfer from the pre-trained model).
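To illustrate why the transfer fails, here is a toy sketch (a hypothetical PyTorch module, not the project's actual encoder) showing how a resolution-dependent parameter such as a learned positional embedding breaks `load_state_dict` when the image size changes:

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for an encoder with a learned positional embedding
    whose shape depends on the input image size."""

    def __init__(self, image_size, patch=16, dim=64):
        super().__init__()
        n_tokens = (image_size[0] // patch) * (image_size[1] // patch)
        self.pos_emb = nn.Parameter(torch.zeros(1, n_tokens, dim))

pretrained = TinyEncoder((256, 256))   # 16 * 16 = 256 tokens
model      = TinyEncoder((592, 960))   # 37 * 60 = 2220 tokens

# Raises RuntimeError: size mismatch for pos_emb
# (checkpoint shape [1, 256, 64] vs. model shape [1, 2220, 64]).
model.load_state_dict(pretrained.state_dict())
```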
Hi @usert5432, thanks for your reply! I'm just wondering whether training without the pre-trained model would affect the training performance.
Yes, unfortunately, removing the pre-training will likely worsen the performance. You may want to run a custom pre-training on large images. |
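If you do attempt a custom pre-training at the target resolution, the general shape of such a run might look like the following (a hypothetical self-supervised masked-reconstruction loop, sketched under the assumption that pre-training resembles masked-image reconstruction; all names are illustrative and this is not the repository's pre-training script):

```python
import torch
import torch.nn.functional as F

def pretrain_step(generator, images, mask_frac=0.4):
    """One self-supervised pre-training step at the target resolution.

    Random pixels are masked out and the generator is trained to
    reconstruct them, so resolution-dependent parameters (such as
    positional embeddings) are learned directly at size (592, 960).
    """
    mask = (torch.rand_like(images[:, :1]) > mask_frac).float()
    recon = generator(images * mask)
    return F.l1_loss(recon, images)

# Usage sketch, with `loader` yielding batches of shape (B, C, 592, 960):
# for images in loader:
#     loss = pretrain_step(generator, images)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```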