OSError: [Errno 24] Too many open files #298
Comments
Hi @zichunxx, I will have a look in the next few days after some deadlines.
Have you tried with another buffer, like the standard one?
No problem! I will try to fix it before you are done with your deadline.
I have tried with that one too.
Hi @zichunxx, I tried yesterday on my machine and reached more than 200k steps without errors: how many steps can you run before the error is raised?
Hi! The above error is triggered at 5000 steps with a buffer size of 4990. Besides, I found this error only occurs when I run the above program in the system terminal with the conda env activated. If I run it in the VSCode terminal, the error does not happen within 5000 steps, which puzzles me a lot.
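(For reference, not from the thread: a difference between two terminals is consistent with them having different per-process open-file limits, since each live memmap holds a file descriptor. A minimal stdlib check, which could be run in both terminals to compare:)

```python
import resource

# Each live memmap keeps a file descriptor open, so the per-process
# open-file limit matters. Compare this value between the system
# terminal and the VSCode terminal:
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# The soft limit can be raised up to the hard limit without root;
# 4096 here is an arbitrary illustrative target:
new_soft = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, new_soft), hard))
```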
Hi!
I tried to store episodes with `EpisodeBuffer` and `memmap=True` to release RAM pressure but met this error. For the traceback, the following minimal code snippet can reproduce the error, where `memmap_dir` should be given. Could you please tell me what causes this problem?
Many thanks for considering my request.
Update:
This problem seems to be triggered by saving too many episodes on disk. (Please correct me if I'm wrong.)
I tried `EpisodeBuffer` because image observations consume almost all RAM (64 GB) during training, especially with frame stacking. I want to complete this training run without upgrading the hardware, so I tried to relieve the RAM pressure with `memmap=True` but ran into the above problem. Any advice? Thanks in advance.
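(For reference, not from the thread: the "too many episodes on disk" guess matches how memory-mapped buffers generally behave, since each mapped episode file keeps an open descriptor for as long as the array is alive. A standalone sketch, with plain `open()` standing in for the memmaps and the limit lowered deliberately so the failure is quick and harmless, reproduces the same `OSError: [Errno 24]`:)

```python
import errno
import os
import resource
import tempfile

# Lower the soft open-file limit on purpose so the failure is quick
# to reproduce (real runs hit the default limit, often 1024, once
# enough episode files are memory-mapped at the same time).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
low = 256 if hard == resource.RLIM_INFINITY else min(256, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (low, hard))

tmpdir = tempfile.mkdtemp()
handles = []
errno_seen = None
try:
    # Stand-in for one memmapped episode file per iteration: each
    # successful open() consumes one descriptor and keeps it alive.
    for i in range(1000):
        handles.append(open(os.path.join(tmpdir, f"ep_{i}.dat"), "wb"))
except OSError as e:
    errno_seen = e.errno  # EMFILE == 24, the error from this issue
finally:
    for f in handles:
        f.close()  # dropping the handle releases its descriptor
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(f"opened {len(handles)} files, then hit errno {errno_seen}")
```

If this is indeed the cause, raising the soft limit or keeping fewer episodes mapped at once would both postpone the error.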