RunTimeError: output must be a cuda tensor #49
Comments
Me too! How do I solve it? Help!
I got the same issue here.
Did anyone figure out what the problem was? I have the same issue.
I got the same error too. Did anyone fix this issue?
The benchmark works for me. FYI, it requires a custom CUDA kernel modification. I fixed the issue by building from source, since my CUDA version and environment differ from the author's. If you need details on how to fix it: https://www.yodiw.com/solve-punica-installation-and-output-must-be-a-cuda-tensor-error/ cc @HARISHSENTHIL @Teofil98 @MichaelYuan2 @danjuan-77 @iskander-sauma-assessio
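For anyone else hitting this, a rebuild along those lines is sketched below. This is not the exact procedure from that post: the repository URL, the --no-build-isolation flag, and the TORCH_CUDA_ARCH_LIST value (8.0 here, i.e. an A100-class GPU) are assumptions and should be adapted to your own GPU and CUDA setup.

# Sketch: rebuild punica's CUDA kernels from source for the local GPU (adjust as needed)
git clone --recursive https://github.com/punica-ai/punica.git
cd punica
# TORCH_CUDA_ARCH_LIST tells the PyTorch extension build which compute capability to compile for
env TORCH_CUDA_ARCH_LIST="8.0" pip install -v --no-build-isolation .

After rebuilding, re-running the benchmark command from the issue should pick up the freshly compiled kernels.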
Hi!
I tried running the text generation benchmark:
python -m benchmarks.bench_textgen_lora --system punica --batch-size 32
but when I did, I got a runtime error stating that the output must be a CUDA tensor.

I am not sure whether this error is on my side or in the code itself. This is the error I was given:
The CUDA version I use is 12.4, with Python 3.10.12, ninja 1.11.1.git.kitware.jobserver-1, and torch 2.2.2.
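Not part of the original report, but a quick sanity check (plain PyTorch, no punica-specific calls) to confirm that this torch build sees the GPU and to read off the compute capability the kernels need to target:

# Prints CUDA availability, the CUDA version torch was built against, and the GPU's compute capability
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda, torch.cuda.get_device_capability())"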