Referenced code: DSGAN/Inpainting/train.py, line 223 at commit 9747be1
Awesome work! I was amazed by how much your simple trick improved diversity.
Small question: going through your code (see above) and the paper, I think there is a minor typo in equation (6) of your paper. Shouldn't the feature matching loss be taken with respect to the generated images?
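For concreteness, here is a rough sketch of what such a feature-matching term usually looks like, written with my own notation rather than the paper's (D^(i) is the i-th intermediate discriminator layer with N_i elements, x a ground-truth image, z the latent code), with the features evaluated on the generator output G(z):

```latex
% Sketch only -- notation assumed, not the paper's equation (6).
% Feature matching compares intermediate discriminator activations
% on a real image x and on a generated image G(z).
\mathcal{L}_{\mathrm{FM}} =
  \mathbb{E}_{x, z} \,
  \sum_{i=1}^{T} \frac{1}{N_i}
  \left\lVert D^{(i)}(x) - D^{(i)}\!\bigl(G(z)\bigr) \right\rVert_1
```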
Also, have you tried the latent truncation trick from the BigGAN paper, i.e. sampling from a standard normal during training and from a truncated normal at test time? I feel this might be an easy way to further improve the quality of the generations.
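In case it helps, a minimal sketch of that sampling scheme in PyTorch; the function name, z_dim, and the 0.5 threshold below are placeholders I made up, not names from this repo:

```python
import torch

def sample_z(batch_size, z_dim, truncation=None):
    """Draw latent codes: a standard normal during training and, if a
    truncation threshold is given, a truncated normal for test-time sampling
    (BigGAN-style: entries outside [-truncation, truncation] are redrawn)."""
    z = torch.randn(batch_size, z_dim)
    if truncation is not None:
        out_of_range = z.abs() > truncation
        while out_of_range.any():
            # Resample only the offending entries until all lie in range.
            z[out_of_range] = torch.randn(int(out_of_range.sum()))
            out_of_range = z.abs() > truncation
    return z

# Training step (no truncation):
# z = sample_z(batch_size=16, z_dim=128)
# Test-time sampling (smaller thresholds trade diversity for fidelity):
# z = sample_z(batch_size=16, z_dim=128, truncation=0.5)
```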