-
@sprappcom try building for all arch, not just a single compute capability.
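A minimal sketch of what that could look like with the CMake flow and the `GGML_CUDA` option; the architecture list below is only an example set, adjust it to your GPUs:

```sh
# Configure a CUDA build for several compute capabilities instead of
# only compute_89 (example list, not exhaustive):
cmake -B build -DGGML_CUDA=ON \
      -DCMAKE_CUDA_ARCHITECTURES="61;70;75;80;86;89"
cmake --build build --config Release -j
```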
-
Could you type `ldd ./llama-cli` and `nvidia-smi` in the terminal? It looks to me like either your driver version is not compatible with CUDA, or CUDA is not compatible with the driver. You might need to update your driver or CUDA. My driver is 545.23.08 and my CUDA is 12.2.
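For reference, a quick way to run those checks (the `grep` just narrows the `ldd` output):

```sh
# Show the shared libraries llama-cli links against; a working CUDA
# build normally pulls in libcudart/libcublas:
ldd ./llama-cli | grep -iE "cuda|cublas"

# Show the installed driver version and the highest CUDA version
# that driver supports (reported in the header of the output):
nvidia-smi
```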
-
@sprappcom Please, did you end up resolving the problem? I have the exact same problem and I have tried almost everything; nothing seems to work!
-
I exported the arch as compute_89 and compiled with CUDA via make; everything builds, but when I run inference with llama-cli it's not using GPU RAM etc. (I've checked; it's pure CPU only). What does this mean, and how do I remedy it?

Ubuntu 24.04, nvcc 12.5, latest llama.cpp, laptop with an AMD Ryzen 7000 series CPU and an RTX 4060 GPU.
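One check worth doing here (a sketch; the model path is a placeholder): explicitly request full offload and watch VRAM while the model loads, since inference stays on the CPU whenever no layers end up offloaded:

```sh
# Hypothetical model path; -ngl 99 asks llama-cli to offload up to
# 99 layers (effectively the whole model) to the GPU:
./llama-cli -m ./models/model.gguf -p "Hello" -n 32 -ngl 99

# While it runs, VRAM usage in a second terminal should rise well
# above idle if the CUDA backend is actually being used:
nvidia-smi
```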