
batch sizes don't match paper #4

Open

jcpeterson opened this issue Nov 22, 2017 · 3 comments
Comments

@jcpeterson

Why are the default batch sizes used? The original paper uses 16 for resolutions 4x4 through 128x128, which should be faster overall than what is currently used.
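For reference, the paper's schedule mentioned above can be expressed as a resolution-to-batch-size table. The helper below is a hypothetical sketch (the function name and fallback behavior are my assumptions, not code from this repository):

```python
# Batch sizes the paper reports for resolutions 4x4 through 128x128,
# as cited in the comment above. The fallback for other resolutions
# is an illustrative assumption.
PAPER_BATCH_SIZES = {4: 16, 8: 16, 16: 16, 32: 16, 64: 16, 128: 16}

def batch_size_for(resolution, fallback=16):
    """Look up the batch size to use at a given training resolution."""
    return PAPER_BATCH_SIZES.get(resolution, fallback)
```

A smaller fallback could be passed for higher resolutions when GPU memory is tight.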

@github-pengge (Owner)

I did not use the same batch sizes as the paper because I ran the code on a 1080 GPU with only 8 GB of memory, not a P100, which has 16 GB. Besides, the dataset is also different: I used the CelebA dataset (cropped and aligned), and I'm now switching to CelebA-HQ.

@jcpeterson (Author)

I see. I also noticed the learning rate seems a bit high for both the G and D. Any reason why?

@github-pengge (Owner)

I found that the learning rate was not fixed in the official code, so I'm changing the learning-rate scheduler now.
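A minimal sketch of what a per-resolution learning-rate schedule might look like; the base rate and override values here are placeholders I made up for illustration, not the rates from the official code or this repository:

```python
BASE_LR = 1e-3  # placeholder base rate (an assumption)

# Hypothetical per-resolution overrides: the idea is that the rate varies
# with resolution rather than staying fixed. These particular numbers
# are illustrative only.
LR_OVERRIDES = {512: 5e-4, 1024: 2.5e-4}

def learning_rate_at(resolution):
    """Return the learning rate to use at a given training resolution."""
    return LR_OVERRIDES.get(resolution, BASE_LR)
```

In a PyTorch training loop, such a function could feed the `lr` field of each `param_group` on the optimizer whenever the training resolution changes.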
