
CUDA out of memory #15

Open
yfpeng1234 opened this issue Sep 11, 2024 · 5 comments
@yfpeng1234

Dear author,
I encountered CUDA out of memory when processing libero_goal. I can successfully process libero_spatial and libero_object, but with libero_goal I get "CUDA out of memory" on the third demonstration. By the way, my GPU has 24 GB of memory.
Thanks!

@yfpeng1234
Author

The same thing happens when processing libero_10 and libero_90. I guess the demonstrations in these subsets have longer trajectories than those in libero_spatial and libero_object, so my GPU with 24 GB of memory cannot handle them.

@Brianwongcw

Hi, Peng. I am encountering the same error while preprocessing the data and am trying to figure it out:

Exception CUDA out of memory. Tried to allocate 2.91 GiB. GPU 0 has a total capacity of 15.71 GiB of which 2.57 GiB is free. Including non-PyTorch memory, this process has 13.12 GiB memory in use. Of the allocated memory 12.96 GiB is allocated by PyTorch, and 25.24 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) when processing ./data/atm_libero/libero_object/pick_up_the_ketchup_and_place_it_in_the_basket_demo/demo_0.hdf5

If there are any updates, we can discuss further
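The allocator setting suggested at the end of the log can be tried without code changes by exporting `PYTORCH_CUDA_ALLOC_CONF` in the shell, or equivalently at the very top of the preprocessing entry point (a sketch; note it only takes effect if it runs before the first CUDA allocation in the process):

```python
import os

# Enable expandable segments, as suggested by the OOM message, to reduce
# allocator fragmentation. This must be set before torch touches CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

This only helps with fragmentation, though; it will not rescue a workload whose peak allocation genuinely exceeds the GPU's capacity.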

@yfpeng1234
Author

That's great! I'm also trying to figure it out.

@btbuyccycc

I'm trying to figure it out as well

@AlvinWen428
Collaborator

Hi, thanks for your interest in our work. The data preprocessing with CoTracker indeed requires a large amount of computation, and it is highly related to the video length. I preprocessed the dataset with A100 GPUs. If you cannot get GPUs with more memory, I suggest trying half precision or chunking the videos into multiple clips.
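The two suggestions above can be combined roughly as follows (a minimal sketch: `track_in_chunks` and the bare `model(clip)` call are hypothetical stand-ins for the actual CoTracker preprocessing code, which takes additional arguments):

```python
import torch

def track_in_chunks(model, video, chunk_len=64, overlap=8,
                    device_type="cuda", dtype=torch.float16):
    """Run a tracking model over a long video in overlapping clips.

    `video` is a (T, C, H, W) tensor; `model(clip)` stands in for the
    real CoTracker interface.
    """
    assert overlap < chunk_len, "overlap must be smaller than chunk_len"
    outputs = []
    t = 0
    while t < video.shape[0]:
        clip = video[t : t + chunk_len]
        # Autocast runs eligible ops in half precision, roughly halving
        # activation memory; no_grad drops the autograd graph entirely.
        with torch.no_grad(), torch.autocast(device_type, dtype=dtype):
            outputs.append(model(clip))
        # Advance with a small overlap so tracks can be stitched across
        # clip boundaries afterwards.
        t += chunk_len - overlap
    return outputs
```

Both knobs cap peak activation memory, which grows with clip length, so together they may bring the libero_goal / libero_10 / libero_90 demos within a 24 GB budget; the overlap frames give you common points for stitching tracks between clips.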
