So, the problem seems to be the hard-coded batch size.
The util function assumes that if it takes a slice of the input and passes it through the model, the model will produce output at that same (sliced) batch size, which isn't true when the batch size is hard-coded. A sketch of that slicing follows.
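For reference, here is a minimal sketch of the kind of per-GPU slicing such a utility performs, assuming the TensorFlow backend; `get_slice` and the arithmetic are illustrative, not the exact code from the gist:

```python
import tensorflow as tf

def get_slice(data, idx, parts):
    # Take the idx-th 1/parts chunk of the batch (axis 0). This silently
    # assumes the model downstream accepts, and emits, a batch of
    # batch_size // parts, which a hard-coded batch_shape violates.
    shape = tf.shape(data)
    size = tf.concat([shape[:1] // parts, shape[1:]], axis=0)
    start = tf.concat([(shape[:1] // parts) * idx, shape[1:] * 0], axis=0)
    return tf.slice(data, start, size)
```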
One way to fix this may be to double up the model's input and output rather than try to slice it: if you want to train at a batch size of 100, you create a batch-size-50 model and pass it to the make_parallel function, which will then double everything up properly (sketch below).
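A hedged sketch of that workaround; the autoencoder layout is illustrative, and the `make_parallel(model, gpu_count)` signature is assumed to match the utility linked below:

```python
from keras.layers import Input, Dense
from keras.models import Model

HALF_BATCH = 50   # the batch size hard-coded into the model
GPUS = 2          # make_parallel replicates across 2 GPUs

# Build the autoencoder at half the intended training batch size.
inp = Input(batch_shape=(HALF_BATCH, 784))
encoded = Dense(32, activation='relu')(inp)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(inp, decoded)

# make_parallel splits each incoming batch across the GPUs, so the
# effective training batch becomes HALF_BATCH * GPUS = 100.
parallel = make_parallel(autoencoder, gpu_count=GPUS)
parallel.compile(optimizer='adadelta', loss='binary_crossentropy')
# parallel.fit(x_train, x_train, batch_size=HALF_BATCH * GPUS, ...)
```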
As discussed, this is the autoencoder example from Keras run with the multi_gpu util.
The autoencoder's output batch is doubled after the merge step (inside make_parallel):
https://gist.github.com/varoudis/d6a71f08f3d309cc3b7583f00616d9c0