Opening the shared memory failed, os error 24 #54
Comments
Would be great if you could share the code as well. Thanks :) Specifically:
Ok, I submitted the related PR
Thanks for reporting!
You're talking about #55, right?

Regarding the error: Did you see any warnings in the logs? There are some situations where we will unmap shared memory regions after some timeout if the receiver did not react as expected. If this happened, you should see a warning in the log output. (@haixuanTao Do we have the tracing to stdout enabled for Python by default?)

Given that the shared memory allocation failed too, it is more likely that the issue is the number of open files. There is typically a limit on the number of open file handles, which you can query using

To fix this properly, we should reduce the number of allocated shared memory regions and reuse the same region for multiple messages. I opened dora-rs/dora#268 for that.
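From a Python node, the open-file-descriptor limit mentioned above can be inspected with the standard-library `resource` module (Unix only). This is only a diagnostic sketch; raising the soft limit is a workaround, not the proper fix discussed in dora-rs/dora#268.

```python
import resource

# Query the per-process limit on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file soft limit: {soft}, hard limit: {hard}")

# Workaround only: the soft limit can be raised up to the hard
# limit without privileges, buying headroom for more shm regions.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

If `os error 24` (`EMFILE`, too many open files) appears once the process approaches the soft limit, that supports the open-file-handle explanation.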
@phil-opp, so trace goes to stdout with
Ok good. And the default log level is
The default is the same as the Tokio tracing default, which is
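For reference, crates built on `tracing-subscriber` typically read the `RUST_LOG` environment variable to override the default level. Assuming dora's Python bindings do the same (an assumption, not confirmed in this thread), verbosity could be raised before starting the node:

```python
import os

# Assumption: dora reads RUST_LOG via tracing-subscriber's EnvFilter.
# Set this before the dora runtime initializes its logger.
os.environ["RUST_LOG"] = "warn"
```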
The original reason for triggering the #54 problem is that the bytes data (numpy array) sent by `send_output` is relatively large. I have now replaced the sent content according to haixuanTao's suggestion, so this problem no longer appears in the code. To reproduce the problem, the relevant code is:

```python
prediction = torch.nn.functional.interpolate(
    prediction.unsqueeze(1),
    size=img.shape[:2],
    mode="bicubic",
    align_corners=False,
).squeeze()
depth_output = prediction.cpu().numpy()
print("depth_output: ", depth_output)
send_output("depth_frame", depth_output.tobytes(), dora_input["metadata"])
```

The content of `depth_output` is relatively large, which makes it more likely to trigger this problem.
This would be a good idea in my opinion. We're using warnings in dora to log abnormal events that are not critical yet, but should still be observed by users.
@meua Thanks a lot for the info!
What's the status of this? Can we still reproduce the "failed to map shared memory input" error with the latest version? |
I don't have time to test it right now; I will verify it later when I get a chance.
Describe the bug
To Reproduce
Steps to reproduce the behavior:
```shell
dora up
dora start graphs/tutorials/webcam_single_dpt_frame.yaml --attach --hot-reload --name webcam-midas
```
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots or Video
Environments (please complete the following information):
Linux jia 5.15.0-69-generic #76~20.04.1-Ubuntu SMP Mon Mar 20 15:54:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux