Is your feature request related to a problem? Please describe.
Since some CUDA-based applications (like PyTorch, TensorFlow, etc.) have released new versions built on CUDA Toolkit 12.x, which require CUDA Driver 12.0+ to run, it's important to display the current CUDA driver version on each of my GPU servers, just like nvidia-smi does. This way, I can determine whether the CUDA driver is new enough.
Describe the solution you'd like
Display CUDA Driver Version along with the NVIDIA Driver Version.
Describe alternatives you've considered
None.
Additional context
The reference nvidia-smi header:
You are getting something wrong here: the CUDA version shown in the nvidia-smi output has nothing to do with the actual CUDA runtime used on your system; it is just the highest CUDA version that the currently installed NVIDIA driver supports.
So in your case, NVIDIA driver 535.113 supports CUDA 12.2 and below, but your ML environments might still be running on CUDA 11.x.
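For reference, both the driver version and the maximum CUDA version it supports can be read programmatically through NVML, which is the same interface nvidia-smi uses. A minimal sketch, assuming the nvidia-ml-py (pynvml) bindings are installed:

```python
import pynvml  # provided by the nvidia-ml-py package

pynvml.nvmlInit()
try:
    # Driver version string, e.g. "535.113.01"
    driver_version = pynvml.nvmlSystemGetDriverVersion()

    # Highest CUDA version supported by this driver, encoded as an integer,
    # e.g. 12020 means CUDA 12.2 (major * 1000 + minor * 10)
    cuda_driver_version = pynvml.nvmlSystemGetCudaDriverVersion()
    major, minor = cuda_driver_version // 1000, (cuda_driver_version % 1000) // 10

    print(f"NVIDIA driver:       {driver_version}")
    print(f"Max supported CUDA:  {major}.{minor}")
finally:
    pynvml.nvmlShutdown()
```

Note that depending on the bindings' version, the driver version string may be returned as bytes rather than str.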
To check the actual CUDA runtime version in the current environment:
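For example, with PyTorch (a minimal sketch, assuming PyTorch is installed in the environment being checked; `nvcc --version` similarly reports the toolkit version if the CUDA toolkit itself is installed):

```python
import torch

# CUDA version PyTorch was built against, e.g. "11.8"; None for CPU-only builds
print("PyTorch built with CUDA:", torch.version.cuda)

# Whether a usable CUDA device is actually available at runtime
print("CUDA available at runtime:", torch.cuda.is_available())
```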