Error with `backward` on RNN with cuda #220

Many thanks for the authors' work, but I am running into trouble when using torchjd with an RNN: it seems that RNN is not supported in vmap. May I ask if someone can provide some suggestions? Thanks in advance.
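For context, here is a minimal sketch of the kind of setup the report describes: an RNN on CUDA whose losses are combined through torchjd. All names, dimensions, and the aggregator choice are illustrative assumptions, and the `torchjd.backward` call follows the library's documented basic-usage pattern rather than the reporter's actual code; check the exact signature against your installed version.

```python
# Illustrative sketch only (not the reporter's actual code): an RNN on CUDA whose
# two losses are aggregated with torchjd. The error surfaces in the backward pass
# because torchjd batches the differentiation internally (with vmap), which the
# CUDA RNN implementation does not support.
import torch
from torch import nn

from torchjd import backward            # assumed entry point, as in the basic usage example
from torchjd.aggregation import UPGrad  # assumed aggregator choice

device = torch.device("cuda")  # switching this to "cpu" is the workaround suggested below

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True).to(device)
head1 = nn.Linear(16, 1).to(device)
head2 = nn.Linear(16, 1).to(device)

x = torch.randn(32, 10, 8, device=device)   # (batch, sequence, features)
y1 = torch.randn(32, 1, device=device)
y2 = torch.randn(32, 1, device=device)

out, _ = rnn(x)
last = out[:, -1]                            # output at the last time step
loss1 = nn.functional.mse_loss(head1(last), y1)
loss2 = nn.functional.mse_loss(head2(last), y2)

# The exact signature of torchjd.backward may vary across versions; this follows
# the library's documented basic usage and should be checked against your install.
backward([loss1, loss2], UPGrad())
```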
Comments

Thank you for your interest in our library; we will investigate that and hopefully provide a solution.
Thanks a lot, we will look into it.

```python
# This is an example of Python code
i = 0
while True:
    i += 1
```
Thanks! I managed to reproduce your issue (see #221). It seems that the CUDA implementation of RNNs is not compatible with batching, which we use internally in torchjd. As a short-term solution, you can set your device to "cpu" instead of "cuda". This will, however, slow down the training. @PierreQuinton, I think we should make […]
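As a concrete illustration of the suggested short-term workaround, assuming a setup like the sketch above, the only change is the target device:

```python
import torch
from torch import nn

# Short-term workaround from the comment above: keep the model and data on CPU so
# that torchjd's internal batching never reaches the CUDA RNN implementation.
# Training will be slower, as noted.
device = torch.device("cpu")  # instead of torch.device("cuda")

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True).to(device)
x = torch.randn(32, 10, 8, device=device)
out, _ = rnn(x)  # the rest of the training loop is unchanged
```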
With #222, we now don't rely on vmap when calling `backward` on RNN with cuda. @lth456321 We will release v0.4.0 soon after, so you should be able to make this change and get your code working if you update torchjd. Keep us updated if this solves your problem. Thanks again for your issue; it has led to significant improvements of the library!