
cons_vae make parallel doubles output #1

Open
varoudis opened this issue Oct 14, 2016 · 1 comment

Comments

@varoudis

As discussed, this is the example from Keras using the multi_gpu util.
The autoencoder's output is doubled after the merge (in make_parallel).

https://gist.github.com/varoudis/d6a71f08f3d309cc3b7583f00616d9c0

@kuza55 (Owner) commented Oct 16, 2016

So, the problem seems to be the hard-coded batch size.

The util function assumes that if it takes a slice of the input and passes it to the model, the model will produce an output of the same (new) batch size, which isn't true when the batch size is hard-coded.
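A toy sketch of the failure mode (assumptions: the real make_parallel slices the batch per GPU and concatenates the replica outputs; the `model` function here is a hypothetical stand-in for a Keras graph, such as a VAE sampling layer, whose output size is baked to a hard-coded batch size rather than following the rows actually fed in):

```python
import numpy as np

BATCH = 100   # batch size hard-coded inside the model
LATENT = 8    # e.g. a sampling layer drawing (BATCH, LATENT) noise

def model(x):
    """Toy model: output row count follows the hard-coded BATCH,
    ignoring how many rows x actually has."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((BATCH, LATENT))  # ignores x.shape[0]!

def make_parallel(model, x, n_gpus=2):
    """Simplified make_parallel: slice the batch, run each slice
    through the model, concatenate the outputs."""
    slices = np.array_split(x, n_gpus)  # each slice: BATCH // n_gpus rows
    return np.concatenate([model(s) for s in slices], axis=0)

x = np.zeros((BATCH, 32))
y = make_parallel(model, x)
print(x.shape[0], y.shape[0])  # 100 200 -- the output has doubled
```

Each of the two replicas emits a full 100-row output instead of 50, so the concatenation yields 200 rows, matching the doubling seen in the gist.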

One way to fix this may be to double the input and output of the model rather than trying to slice it up: if you want to train with a batch size of 100, you create a model with batch size 50, pass it to the make_parallel function, and it will double everything up properly.
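Continuing the toy sketch above, the workaround amounts to hard-coding the per-GPU batch so that each slice matches the size the model expects (again, `model` and `make_parallel` are hypothetical simplifications, not the actual library code):

```python
import numpy as np

FULL_BATCH = 100
PER_GPU = FULL_BATCH // 2  # build the model around the per-GPU batch (50)
LATENT = 8

def model(x):
    """Toy model whose batch size is hard-coded to PER_GPU."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((PER_GPU, LATENT))

def make_parallel(model, x, n_gpus=2):
    """Simplified make_parallel: slice, run replicas, concatenate."""
    slices = np.array_split(x, n_gpus)  # each slice now has PER_GPU rows
    return np.concatenate([model(s) for s in slices], axis=0)

x = np.zeros((FULL_BATCH, 32))
y = make_parallel(model, x)
print(y.shape[0])  # 100 -- output rows match the input batch again
```

Because each 50-row slice now lines up with the model's hard-coded batch, concatenating the two replica outputs gives back exactly 100 rows.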
