
Cuda Causes Trainer to Hang #14

Open
raul1968 opened this issue Mar 29, 2019 · 2 comments


raul1968 commented Mar 29, 2019

Weird question: I have 2 CUDA cards and the code only uses one. It runs great on my laptop because it's on the CPU. Can you tell me how to have it identify the 2nd card, or force CPU?

RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 8.00 GiB total capacity; 6.15 GiB already allocated; 58.05 MiB free; 83.38 MiB cached)

I removed CUDA from the workstation and it works great on CPU.

ktaebum (Owner) commented Jul 22, 2019

Sorry for the late reply :(
If you want to use the second CUDA card, first check that

$ nvidia-smi

shows both cards.
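
If both cards show up there, it is also worth confirming that PyTorch itself sees them. A quick sketch, assuming a CUDA-enabled PyTorch build:

import torch

# Both physical cards should be visible to PyTorch as well
print(torch.cuda.is_available())   # expect True
print(torch.cuda.device_count())   # expect 2 on your workstation
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
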
If you want to force the CPU, just replace

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with

device = 'cpu'
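
If instead you want the trainer to use the second card, one option (just a sketch; train.py and model are placeholders, not names from this repo) is to pin the device explicitly:

device = torch.device('cuda:1' if torch.cuda.device_count() > 1 else 'cpu')
model = model.to(device)   # move the model (and later each batch) to that device

or hide the first card from PyTorch through the environment, so the unchanged code sees your second card as cuda:0:

$ CUDA_VISIBLE_DEVICES=1 python train.py
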

raul1968 (Author) commented

nvidia-smi gave me:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:06:00.0 Off | N/A |
| 28% 33C P8 5W / 151W | 2MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 1070 Off | 00000000:07:00.0 On | N/A |
| 0% 49C P8 15W / 151W | 1106MiB / 8118MiB | 3% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 1 1352 G /usr/lib/xorg/Xorg 28MiB |
| 1 1394 G /usr/bin/gnome-shell 50MiB |
| 1 1749 G /usr/lib/xorg/Xorg 184MiB |
| 1 2225 G ...uest-channel-token=12045571343934226885 690MiB |
| 1 17690 G /usr/bin/gnome-shell 148MiB |
+-----------------------------------------------------------------------------+

Thank you. I'll change the code.
