Working with varying cell-state sizes #19
AFAIK it is not mathematically possible to leave the shape of the convolution kernels unspecified. If you just want to lazily initialize the shapes at runtime you could use
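A minimal sketch of lazily reading shapes at run time in TensorFlow 1.x (illustrative only, not necessarily the snippet this comment referred to):

```python
import tensorflow as tf

# Static shape leaves the spatial dimensions unknown at graph-construction time.
inputs = tf.placeholder(tf.float32, [None, None, None, 3])  # [batch, height, width, channels]

# tf.shape returns the concrete dimensions lazily, once the graph is run.
dynamic_shape = tf.shape(inputs)
batch_size, height, width = dynamic_shape[0], dynamic_shape[1], dynamic_shape[2]
```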
Well, it is not the convolution kernel size I am talking about. It is the shape of the input:
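A hedged illustration of such an input, with only the spatial dimensions left open (the channel count of 1024 is taken from the error message quoted below):

```python
import tensorflow as tf

# The kernel size stays fixed; only the input's height and width are unspecified.
inputs = tf.placeholder(tf.float32, [None, None, None, 1024])  # [batch, height, width, channels]
```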
Ah, my bad. Yes, that should be possible. Could you try with
Sorry for the delay. I was away on vacation. And, no luck with
@anjany Hello, have you solved this problem?
or similar should work, but you get hit with https://github.com/tensorflow/tensorflow/blob/c81830af5d488de600a4f62392c63059e310c017/tensorflow/python/ops/rnn.py#L699-L702
When working with variable-shaped inputs (defined by [batch_size, None, None, channels]), I get the following error in dynamic_rnn (at line 115 of rnn_cell_impl.py in TF 1.2.1) during the graph-construction phase:
Provided a prefix or suffix of None: Tensor("rnn_7/strided_slice:0", shape=(), dtype=int32) and (?, ?, 1024)
I get the same error when I work with your example by changing the 'shape' to [None, None] instead of [640, 480]. So, is there a way to work with inputs of varying dimensions? (Note that for a given unrolled RNN, these dimensions would be fixed.)
I guess this might be a related commit: https://github.com/tensorflow/tensorflow/commit/54efd636b504aad368eea254eca2970a16d457f6
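For reference, a minimal sketch of the failing setup, assuming the README-style ConvLSTMCell(shape, filters, kernel) constructor from this repository and tf.nn.dynamic_rnn (the exact signature and import path are assumptions):

```python
import tensorflow as tf
from cell import ConvLSTMCell  # assumed import path, following the repository's README example

channels = 1024
filters = 12
kernel = [3, 3]
shape = [None, None]  # spatial dimensions left undefined instead of e.g. [640, 480]

# 5-D input for dynamic_rnn: [batch, time, height, width, channels].
inputs = tf.placeholder(tf.float32, [None, None] + shape + [channels])

cell = ConvLSTMCell(shape, filters, kernel)

# In TF 1.2.1 this raises during graph construction:
# "Provided a prefix or suffix of None: ... and (?, ?, 1024)"
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=inputs.dtype)
```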