Cuda memory #44
Comments
@shersoni610 I also had the same problem. I solved it by setting num_workers=0 in the DataLoader() in pytorch/main.py. I also tried reducing the training batch size, but in the end I kept it at 32 and it works.
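The change described above can be sketched as follows. The dataset here is a random stand-in for illustration; in pytorch/main.py the DataLoader wraps the repo's own dataset, so only the `batch_size` and `num_workers` arguments carry over:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data; in pytorch/main.py the DataLoader wraps the repo's dataset.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
dataset = TensorDataset(images, labels)

loader = DataLoader(
    dataset,
    batch_size=16,   # try values below 32 if the GPU runs out of memory
    shuffle=True,
    num_workers=0,   # no worker subprocesses, as suggested above
)

batch_images, batch_labels = next(iter(loader))
print(batch_images.shape)  # torch.Size([16, 3, 32, 32])
```

Smaller batches shrink the activation memory held per forward/backward pass, which is usually the dominant cost in these OOM errors.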
Hello,
I tried changing num_workers = 0, but I do not know how to change the batch size. The code still fails on a 6 GB Titan card.
Thanks
…On Mon, Nov 25, 2019 at 12:49 AM zxczrx123 ***@***.***> wrote:
@shersoni610 <https://github.com/shersoni610> I also had the same problem.
My environment: Win10 (with some code changes so it runs on Windows), one 1080 Ti, Anaconda Python 3.6, CUDA 9.0, cuDNN 7.5, PyTorch 1.1.
I solved it by setting *num_workers=0* in the DataLoader() in pytorch/main.py. I also tried reducing the training batch size, but in the end I kept 32 and it works.
I'm having the same problem on a meager GT 1030 (RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 1.95 GiB total capacity; 947.23 MiB already allocated; 24.25 MiB free; 1.02 GiB reserved in total by PyTorch)). Changing the number of workers does not help. By the way, what's the point of setting them to zero? Any help with changing the batch size? Thank you
Setting the default value of the test_batch_size argument in main.py from 16 to 8 worked for me
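For anyone unsure where that lives: scripts like main.py typically define their batch sizes with argparse, so you can either edit the default or override it per run on the command line. A minimal sketch; the argument names here follow the comment above, but check the actual parser in main.py:

```python
import argparse

# Argument names follow the comment above; verify against the parser in main.py.
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--test_batch_size", type=int, default=8)  # lowered from 16

args = parser.parse_args([])  # use the defaults
print(args.test_batch_size)   # 8

args = parser.parse_args(["--test_batch_size", "4"])  # or override per run
print(args.test_batch_size)   # 4
```

Overriding on the command line (e.g. `python main.py --test_batch_size 4`) avoids editing the source at all.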
Hello,
I am trying to run the PyTorch example on a 6 GB CUDA card and I get the following message:
RuntimeError: CUDA out of memory. Tried to allocate 640.00 MiB (GPU 0; 5.94 GiB total capacity; 4.54 GiB already allocated; 415.44 MiB free; 143.32 MiB cached)
How can we run the examples on 6 GB cards?
Thanks
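If it helps with debugging, the quantities in that error message can be printed directly from PyTorch. A small sketch (runs on any machine and falls back gracefully when no GPU is present; `memory_reserved` requires PyTorch 1.4+):

```python
import torch

def cuda_memory_summary(device: int = 0) -> str:
    # Report the same quantities the OOM error mentions, in MiB.
    if not torch.cuda.is_available():
        return "no CUDA device available"
    allocated = torch.cuda.memory_allocated(device) / 2**20
    reserved = torch.cuda.memory_reserved(device) / 2**20
    total = torch.cuda.get_device_properties(device).total_memory / 2**20
    return (f"{allocated:.2f} MiB allocated; "
            f"{reserved:.2f} MiB reserved; "
            f"{total:.2f} MiB total capacity")

print(cuda_memory_summary())
```

Calling this before and after building the model makes it easier to see whether the model weights or the per-batch activations are what exhausts the card.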