ROCm Testing #49
At least on Linux, you'll want to check if …
Thanks for the advice! I'd use rocminfo to check, but it seems like torch bundles ROCm with it (according to @Baysul), which means the rocminfo utility can't be found on some systems.
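Since rocminfo may not be on the PATH, one fallback is to parse `lspci` output for the GPU vendor. A minimal sketch (the function names and the vendor-string matching heuristic are my own assumptions, not tabbyAPI's actual code; note that, as pointed out below, integrated AMD graphics will also match):

```python
import subprocess

def detect_gpu_vendor(lspci_output: str) -> str:
    """Guess the GPU vendor from lspci text.

    Returns "nvidia", "amd", or "unknown". Integrated AMD graphics
    (e.g. on Ryzen APUs) also match, so this is only a heuristic.
    """
    for line in lspci_output.splitlines():
        # GPUs show up as "VGA compatible controller" or "3D controller".
        if "VGA compatible controller" in line or "3D controller" in line:
            lower = line.lower()
            if "nvidia" in lower:
                return "nvidia"
            if "amd" in lower or "advanced micro devices" in lower or "ati" in lower:
                return "amd"
    return "unknown"

def detect_from_system() -> str:
    # Hypothetical wrapper: shells out to lspci, fails soft if it's missing.
    try:
        out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        return "unknown"
    return detect_gpu_vendor(out)
```

A system with both an NVIDIA card and AMD integrated graphics would match whichever entry comes first, which is exactly the ambiguity described in the next comment.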
When I try that command on some of my systems, it seems to pick up my integrated graphics. I remember last year, when I started playing with llama, I had to fight with and edit ooba's startup script because it would see my 7950X3D CPU, install ROCm, and crash trying to load the model into limited shared memory while ignoring my NVIDIA GPU. If it's an option, I don't think it would be bad to simply ask the user interactively during the script whether they're using NVIDIA or AMD.
That's a good point, but the start script applies to new and existing users alike, and asking every time whether NVIDIA or AMD is being used can be cumbersome. Still, it looks like this is the only certain way to write a start script that gets ROCm or CUDA right 100% of the time.
Fixed in #88. There's no truly platform-agnostic way to detect the GPU, and on top of that, pytorch can install runtimes by itself as well. Fall back to asking the user inside the start script and saving the preference.
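The ask-once-and-save approach could be sketched like this (the function name, prompt text, and preferences file are hypothetical illustrations, not the actual start-script code from #88):

```python
from pathlib import Path

PREF_FILE = Path("gpu_lib.txt")  # hypothetical preferences file

def get_gpu_choice(ask=input, pref_file=PREF_FILE) -> str:
    """Return 'cuda' or 'rocm', prompting the user only on first run."""
    if pref_file.exists():
        saved = pref_file.read_text().strip()
        if saved in ("cuda", "rocm"):
            return saved
    while True:
        answer = ask("GPU vendor? [nvidia/amd]: ").strip().lower()
        if answer in ("nvidia", "amd"):
            choice = "cuda" if answer == "nvidia" else "rocm"
            pref_file.write_text(choice)  # remember for subsequent runs
            return choice
```

Saving the answer addresses the "cumbersome to ask every time" concern: existing users are only prompted once, after which the script reads the stored preference.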
ROCm is supported in tabbyAPI, but there's no real way to test the start scripts. Figure out how to detect whether an AMD GPU is present on a system so the appropriate requirements can be installed.
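On Linux specifically, one signal worth checking is the `/dev/kfd` device node, which the amdkfd/ROCm kernel driver exposes; its presence suggests a ROCm-capable AMD GPU stack is loaded. A minimal sketch (the function name and its use as a detection step are my own suggestion, not what the project settled on):

```python
from pathlib import Path

def rocm_kernel_driver_present(dev_path: str = "/dev/kfd") -> bool:
    # /dev/kfd is the ROCm compute device node on Linux; if it exists,
    # the amdgpu/ROCm kernel driver is likely loaded. This says nothing
    # about whether the userspace ROCm runtime is installed.
    return Path(dev_path).exists()
```

This is still Linux-only and can miss or mis-report edge cases, which is consistent with the conclusion above that prompting the user is the only 100%-reliable option.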