Using docker build makes torch.cuda.is_available() return False! #588
-
**Describe the bug**

I have a GPU attached to my system. I ran the Dockerfile and expected the image to be built with GPU support, but it isn't:

```
root@00921839c781:/anomalib# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_Oct_11_21:27:02_PDT_2021
Cuda compilation tools, release 11.4, V11.4.152
Build cuda_11.4.r11.4/compiler.30521435_0
root@00921839c781:/anomalib# nvidia-smi
bash: nvidia-smi: command not found
root@00921839c781:/anomalib# python
>>> import torch
>>> torch.cuda.is_available()
False
```

**To Reproduce**

**Expected behaviour**

As it's based on nvidia-cuda, I expected it to work out of the box if a GPU is attached to the system.

**Hardware and Software Configuration**

**Additional context**
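A quick way to narrow this down inside the container (a stdlib-only sketch, nothing anomalib-specific): `nvcc` is baked into the CUDA base image, but `nvidia-smi` and the driver libraries are injected at *run* time by the NVIDIA container runtime. If `nvidia-smi` is missing, as in the output above, `torch.cuda.is_available()` will be `False` no matter what the image contains.

```python
# Container sanity check: does this container actually see the host's
# GPU driver tooling? shutil.which() searches PATH the same way the
# shell does, so this mirrors running `nvidia-smi` by hand.
import shutil


def gpu_runtime_visible() -> bool:
    """True if the host driver utilities were mounted into this container."""
    return shutil.which("nvidia-smi") is not None


if __name__ == "__main__":
    # False here means the container was started without GPU access
    # (e.g. missing --gpus flag or NVIDIA Container Toolkit not set up),
    # not that the image was built incorrectly.
    print("driver visible:", gpu_runtime_visible())
```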
Replies: 4 comments
-
@ashwinvaidya17, will you be able to have a look at this?
-
@innat, have you tried passing the gpu flag?
-
Yes, I tried that already. I tried to run this Dockerfile on GCP (Architecture: x86_64, Docker version 20.10.18, build b40c2f6). A GPU is attached and the necessary drivers are installed (auto). Is there anything else I need to take care of manually regarding the NVIDIA Container Toolkit? Is it working in your environment?
-
That's weird. I just built a new image and it is working. From your screenshot I can see that you passed the gpu flag at the end. Try passing it before the `-it` flag.
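To illustrate why the flag position matters (the image name `anomalib` below is a placeholder, not necessarily the tag the thread used): `docker run` parses its own options only up to the image name, so anything placed after the image is handed to the container as its command arguments rather than interpreted by Docker.

```shell
# docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
# Options after IMAGE are NOT seen by docker itself.
good="docker run --gpus all -it anomalib"  # --gpus consumed by docker: GPU exposed
bad="docker run -it anomalib --gpus all"   # --gpus passed into the container: no GPU
echo "$good"
```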