
CUDA out of memory when running the code from example #286

Open
aleh4d opened this issue Mar 31, 2021 · 3 comments

Comments


aleh4d commented Mar 31, 2021

I tried to run the code from the example on the fast-bert page, but got a GPU out-of-memory error:

Exception has occurred: RuntimeError
CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 6.00 GiB total capacity; 4.35 GiB already allocated; 84.91 MiB free; 4.44 GiB reserved in total by PyTorch)

How can I make fast-bert use less GPU memory? Which parameters should I set?


TingNLP commented Apr 13, 2021

Modify max_seq_length and batch_size_per_gpu.
You can refer to https://github.com/google-research/bert

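The suggestion above can be made concrete. A minimal sketch, assuming fast-bert's BertDataBunch keyword names (batch_size_per_gpu, max_seq_length) with purely illustrative values; the toy cost function only illustrates why shrinking both parameters helps so much and is not a real memory profiler:

```python
# Hedged sketch: kwargs that shrink GPU memory use. The parameter names
# batch_size_per_gpu and max_seq_length come from fast-bert's BertDataBunch;
# the values are illustrative starting points, not verified settings.
low_memory_databunch_kwargs = {
    "batch_size_per_gpu": 4,   # halve again if you still hit OOM
    "max_seq_length": 128,     # 512 is BERT's maximum; many tasks need far less
}

# Rough rule of thumb: activation memory grows linearly with batch size and
# between linearly and quadratically with sequence length (the attention
# score matrices are seq_len x seq_len), so cutting both helps a lot.
def relative_activation_cost(batch_size, seq_len, hidden=768):
    hidden_states = batch_size * seq_len * hidden  # token representations
    attention = batch_size * seq_len * seq_len     # attention score matrices
    return hidden_states + attention

baseline = relative_activation_cost(16, 512)
reduced = relative_activation_cost(
    low_memory_databunch_kwargs["batch_size_per_gpu"],
    low_memory_databunch_kwargs["max_seq_length"],
)
print(f"reduced settings use ~{reduced / baseline:.1%} of baseline activation memory")
```

With these illustrative numbers the reduced settings need only a few percent of the baseline activation memory, which is usually enough to fit a 6 GiB card.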


JayDew commented Apr 21, 2021

Could you please share what values worked for you? I also cannot run the examples.


Elzic6 commented May 10, 2021

The only way that works for me (each time I have the same issue) is to do a "factory reset runtime".
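A runtime reset frees leaked GPU memory, but it does not stop the next run from filling it again. Another common workaround is gradient accumulation, which fast-bert appears to expose as a grad_accumulation_steps argument on its learner (check your version); the pure-Python toy below only demonstrates the arithmetic, namely that splitting a batch into micro-batches and summing scaled gradients yields the same mean gradient at a fraction of the peak memory:

```python
# Toy illustration of gradient accumulation (plain floats stand in for
# per-example gradients; no GPU or fast-bert required).
def mean_gradient(per_example_grads):
    # what one big batch would compute in a single step
    return sum(per_example_grads) / len(per_example_grads)

def accumulated_mean_gradient(per_example_grads, micro_batch):
    # process small micro-batches, accumulating scaled sums before "stepping";
    # peak memory is bounded by micro_batch instead of the full batch size
    total = 0.0
    for i in range(0, len(per_example_grads), micro_batch):
        chunk = per_example_grads[i:i + micro_batch]
        total += sum(chunk) / len(per_example_grads)
    return total

grads = [0.5, -1.0, 2.0, 0.25, -0.75, 1.5, 0.0, 0.5]
print(mean_gradient(grads), accumulated_mean_gradient(grads, micro_batch=2))
```

Both calls print the same value, so lowering batch_size_per_gpu while raising the accumulation steps keeps the effective batch size (and usually the accuracy) unchanged.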
