Just want to report that the "beginner" installation with a clean clone of the repo does not work properly on all Linux systems, at least not on mine. In my case, it failed to infer the CUDA version and as a result did not install all required dependencies.
The issue is that my CUDA installation is in `/opt/cuda`, so the installation script cannot infer the CUDA version and falls back to `requirements-nowheel.txt`, which does not include exllamav2.
As a result, `cuda_version = pathlib.Path(CUDA_PATH).name` from the Python code does not work on my system.
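For context, the failure is easy to demonstrate: the path-based lookup only works when the directory name itself carries a version. A minimal sketch, assuming the common `/usr/local/cuda-12.3` layout is what the script expects:

```python
import pathlib

# Versioned install path: the directory name contains the version.
print(pathlib.Path("/usr/local/cuda-12.3").name)  # -> "cuda-12.3"

# Arch's pacman package installs to a plain, unversioned path,
# so there is no version to parse from the name.
print(pathlib.Path("/opt/cuda").name)  # -> "cuda"
```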
I'm using the regular CUDA package installed via pacman: `cuda-12.3.2-1`, so nothing special.
I assume there must be another way to infer the installed CUDA version, though it doesn't seem as trivial as expected. I found this related StackOverflow question: https://stackoverflow.com/q/9727688
Parsing `nvcc` output might be viable, but apparently `nvcc` is not always available:

```
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0
```
Alternatively, one could parse `cublas_version.txt` from the installation, though I don't know whether that file is present on every platform:

```
$ cat /opt/cuda/cublas_version.txt
CUBLAS Version 12.3.4.1
```
Maybe there's some Python code floating around somewhere that handles this properly on all systems, but I haven't found it. Other projects seem to either require a manual, platform-specific installation of torch, or use an interactive installation script that asks for the GPU vendor.