I've tried everything: gc.collect(), torch.cuda.empty_cache(), deleting every possible tensor and variable as soon as it is no longer needed, and setting the batch size to 1; nothing seems to work.
I've rewritten the data loader and the model training pipeline and made them as simple as I possibly can, but it always runs out of memory.
I'm using the CodeSearchNet dataset for Python and trying to train from scratch.
I'm running PyTorch 1.9.1 with CUDA 11.1 on a 16 GB GPU instance on AWS EC2 with 32 GB of RAM and Ubuntu 18.04.
I've rewritten the code to make it more memory-efficient, since the code in the repository loaded the whole .bin file of the dataset at once.
But I can't train the model, even with a batch size of 1.
With a batch size of 8 it crashes after 46 iterations; with a batch size of 1 it gets up to 48k iterations but then crashes.
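For reference, here is a generic illustration (not my actual code) of the kind of pattern that produces exactly this slow growth over thousands of iterations: appending the live loss tensor, which still carries its autograd graph, instead of a plain Python float.

```python
import torch

# Hypothetical illustration: keeping the loss *tensor* alive retains its
# whole autograd graph, so memory grows a little every iteration.
losses_bad = []  # holds live tensors + graphs -> slow leak
losses_ok = []   # holds plain floats -> safe

x = torch.randn(4, 8, requires_grad=True)
for step in range(3):
    loss = (x * float(step)).sum()
    losses_bad.append(loss)        # graph retained across iterations
    losses_ok.append(loss.item())  # detached scalar, graph can be freed
```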
GPU Traceback :
Traceback (most recent call last):
  File "train.py", line 150, in <module>
    train(args)
  File "train.py", line 98, in train
    loss.backward(retain_graph=False)
  File "/usr/local/lib/python3.6/dist-packages/comet_ml/monkey_patching.py", line 312, in wrapper
    return_value = original(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 149, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 7.27 GiB (GPU 0; 14.76 GiB total capacity; 10.46 GiB already allocated; 2.98 GiB free; 10.52 GiB reserved in total by PyTorch)
CPU Traceback :
Iteration : 672, Loss : 702.1222534179688
Killed
dmesg output for CPU :
[2059991.491436] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice,task=python3,pid=25315,uid=0
[2059991.491542] Out of memory: Killed process 25315 (python3) total-vm:53312244kB, anon-rss:31451456kB, file-rss:74816kB, shmem-rss:12296kB, UID:0 pgtables:69068kB oom_score_adj:0
[2059992.056260] oom_reaper: reaped process 25315 (python3), now anon-rss:0kB, file-rss:74732kB, shmem-rss:12296kB
Here's my data loader :
Here's the training code :
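In outline, the loop applies the memory-hygiene points mentioned above: grads are released with set_to_none=True and only a detached float is accumulated for logging. The model and objective here are stand-ins, not the actual pipeline:

```python
import torch
from torch import nn

def train_epoch(model, loader, optimizer, device="cpu"):
    """Minimal sketch of the loop; model/objective are placeholders."""
    model.train()
    running = 0.0
    steps = 0
    for batch in loader:
        batch = batch.to(device)
        optimizer.zero_grad(set_to_none=True)  # free old grad tensors
        loss = model(batch).pow(2).mean()      # stand-in objective
        loss.backward()
        optimizer.step()
        running += loss.item()                 # plain float: no graph kept
        steps += 1
    return running / max(steps, 1)
```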