Hi. Thanks for the great code.

I modified your code to use my own image dataset, and I'm failing with an out-of-memory error:
Traceback (most recent calls WITHOUT Sacred internals):
  File "experiments/patch_experiment.py", line 128, in main
    learning_loop.train_step(observer, c.inner_loop_steps, c.mnist_classes)
  File "/lhi/GTN/GTN_clean/gtn/models/learning_loop.py", line 146, in train_step
    self.optimizer_teacher.step()
  File "/opt/conda/lib/python3.7/site-packages/torch/optim/adam.py", line 103, in step
    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.93 GiB total capacity; 9.53 GiB already allocated; 1.85 GiB free; 9.62 GiB reserved in total by PyTorch)
I'm using 64x64 images, so I changed the learner/teacher input and output sizes to 64. I also reduced batch_size to 4 and test_batch_size to 4, but I still run out of memory.
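One thing worth noting is that shrinking the batch size may not be enough here: doubling the input side from 32 to 64 roughly quadruples per-image activation memory, and in a meta-learning loop the activations of every unrolled inner-loop step are held at once, so the footprint also scales with inner_loop_steps. A rough back-of-envelope sketch (the 32-pixel baseline and the pixel-count scaling assumption are illustrative, not taken from the GTN code):

```python
def activation_scale(old_side: int, new_side: int) -> float:
    """Ratio of per-image activation memory between two square input sizes,
    assuming convolutional feature-map memory scales with pixel count."""
    return (new_side ** 2) / (old_side ** 2)


if __name__ == "__main__":
    # Going from a 32x32 input to 64x64 quadruples per-image activation memory,
    # so batch_size=4 at 64x64 costs about as much as batch_size=16 at 32x32.
    print(activation_scale(32, 64))  # prints 4.0
```

On top of reducing the batch size, reducing c.inner_loop_steps (visible in the traceback above) would shorten the unrolled graph and may be the more effective lever.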