RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) #251
Comments
I think this issue has been fixed by PR #246. Can you confirm? The changes will be included in an upcoming patch release.
@armand33 I have encountered the same error while working on a project of my own. The code is mostly the same as the 'Simplest training' example, with the addition of an evaluation at the end, also following the 'Model Evaluation' example. In the code below I am using the uniform negative sampler to test whether the same problem occurs with other negative samplers.
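The script itself was not captured in this thread. As a rough reconstruction (dataset, hyper-parameters, and variable names below are placeholders taken from the TorchKGE docs, not the commenter's actual code), the docs' 'Simplest training' example with a `UniformNegativeSampler` swapped in and a link-prediction evaluation appended would look roughly like this:

```python
# Rough reconstruction, not the commenter's actual script: TorchKGE's
# 'Simplest training' example with UniformNegativeSampler and a
# link-prediction evaluation appended. Dataset and hyper-parameters
# are placeholders from the docs.
from torch import cuda
from torch.optim import Adam

from torchkge.evaluation import LinkPredictionEvaluator
from torchkge.models import TransEModel
from torchkge.sampling import UniformNegativeSampler
from torchkge.utils import MarginLoss, DataLoader
from torchkge.utils.datasets import load_fb15k

kg_train, kg_val, kg_test = load_fb15k()

emb_dim, lr, n_epochs, b_size, margin = 100, 0.0004, 1000, 32768, 0.5

model = TransEModel(emb_dim, kg_train.n_ent, kg_train.n_rel,
                    dissimilarity_type='L2')
criterion = MarginLoss(margin)

if cuda.is_available():
    cuda.empty_cache()
    model.cuda()
    criterion.cuda()

optimizer = Adam(model.parameters(), lr=lr, weight_decay=1e-5)
sampler = UniformNegativeSampler(kg_train)

# use_cuda='all' keeps every batch on the GPU; this is the setting that
# triggers the device-mismatch RuntimeError discussed in this issue.
dataloader = DataLoader(kg_train, batch_size=b_size, use_cuda='all')

for epoch in range(n_epochs):
    running_loss = 0.0
    for batch in dataloader:
        h, t, r = batch[0], batch[1], batch[2]
        n_h, n_t = sampler.corrupt_batch(h, t, r)

        optimizer.zero_grad()
        pos, neg = model(h, t, r, n_h, n_t)
        loss = criterion(pos, neg)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print('Epoch {} | mean loss: {:.5f}'
          .format(epoch + 1, running_loss / len(dataloader)))

model.normalize_parameters()

# Evaluation at the end, following the docs' 'Model Evaluation' page.
evaluator = LinkPredictionEvaluator(model, kg_test)
evaluator.evaluate(b_size=32)
evaluator.print_results()
```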
Hi @GeorgeKonstantinosDimou, thanks for the message. Have you tried updating your version of TorchKGE? The patch from PR #246 has been included in the latest release.
Greetings @armand33, yes, version 0.17.7 unfortunately still has the same problem.
Hi @GeorgeKonstantinosDimou and @ADIthaker, I looked into your issue and successfully reproduced it. It seems you are using the example code from the website. If you just change the …
Hope this clears your doubt.
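For anyone landing here, one workaround for this class of device mismatch (not necessarily the exact change meant in the truncated comment above) is to keep the dataloader and the negative sampler on the CPU and move each batch to the GPU only for the forward and backward pass, so the model itself still trains on CUDA. A minimal sketch, reusing the names from the training example above:

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# use_cuda=None (the default) keeps batches on the CPU, so the sampler
# never indexes a CPU tensor with CUDA indices.
dataloader = DataLoader(kg_train, batch_size=b_size)

for batch in dataloader:
    h, t, r = batch[0], batch[1], batch[2]

    # Negative sampling happens on the CPU, on the same device as the
    # tensors it indexes.
    n_h, n_t = sampler.corrupt_batch(h, t, r)

    # Move everything to the GPU only for the forward/backward pass.
    h, t, r = h.to(device), t.to(device), r.to(device)
    n_h, n_t = n_h.to(device), n_t.to(device)

    optimizer.zero_grad()
    pos, neg = model(h, t, r, n_h, n_t)
    loss = criterion(pos, neg)
    loss.backward()
    optimizer.step()
```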
Description
I was trying to run the 'Simplest training' example available on the TorchKGE site.
For some reason, it keeps giving me an error saying that all my tensors should be on the same device, even though I simply copy-pasted the example and swapped in my own dataset.
What I Did
The code works only if I change the `use_cuda` parameter in `dataloader = DataLoader(train, batch_size=batch_size, use_cuda="all")` to `None`, i.e. when I shift my dataloader to the CPU, which slows down training.
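Concretely, the difference between the failing and the working configuration is a single argument (a sketch reusing the variable names from the report above; `train` and `batch_size` are assumed to be defined as in the example):

```python
from torchkge.utils import DataLoader

# Fails with the RuntimeError above: every batch lives on the GPU,
# but the negative sampler still indexes CPU tensors with them.
dataloader = DataLoader(train, batch_size=batch_size, use_cuda='all')

# Works, at the cost of keeping every batch on the CPU, which slows
# training down.
dataloader = DataLoader(train, batch_size=batch_size, use_cuda=None)
```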