IndexError: wait set index too big when calling service early in node lifetime #1133
Comments
The rework of … I don't have direct experience with the rest of what is going on here. It does seem somewhat suspicious to me that the QoSEventHandler class is storing an …
OK, it helps at least to know that it's not a known issue affecting Humble only. Sorry that I don't have a reproducible example; I'll keep an eye out for more info I can collect! But I had also been thinking that the wait set might just be generally subject to a race condition, reproducible example or not…
Good news! This seemed to be caused by a bug on our end, where a node was being spun in two threads. So I'll close this, thanks!
Hi @dhood, I'm facing the same issue, and I'm also running an additional thread. Can you please share how you fixed it? It would be very helpful.
Ideally you redesign your system so that nodes get passed up to an executor and spun in a single place, like this: https://github.com/ros2/examples/blob/master/rclpy/executors/examples_rclpy_executors/composed.py#L33. For us, we were waiting for an action to complete, so I added a function WAIT_until_future_complete that is similar to spin_until_future_complete but doesn't spin the node; it just sleeps in small increments, assuming something else is responsible for spinning the node (a sketch of that kind of helper follows below). Or, if you want a quick hack to avoid a redesign, you could replace your spin_until_future_complete call with a try/catch that ignores the error and spins again.
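A minimal sketch of what such a helper might look like (this is not the actual implementation from that project; the poll period and timeout handling here are assumptions):

```python
import time


def WAIT_until_future_complete(future, timeout_sec=None, poll_period=0.05):
    """Block until `future` is done without spinning any node.

    Assumes another thread (e.g. the executor the node was added to) is
    responsible for spinning, so this function only sleeps and polls.
    """
    deadline = None if timeout_sec is None else time.monotonic() + timeout_sec
    while not future.done():
        if deadline is not None and time.monotonic() >= deadline:
            return False  # timed out; the future may still complete later
        time.sleep(poll_period)
    return True
```

It would be called from the thread that made the service or action request, while the executor that owns the node keeps spinning elsewhere.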
It seems I have a similar issue in Jazzy when I spin a node with … However, I don't encounter this issue if I use … @dhood, is it still recommended to use one …
I've found that …
Bug report
While spinning in the following code, I received a traceback ending with:
IndexError: wait set index too big
I'm using a multithreaded executor with multiple nodes in the same file, which was launched in a separate process by ros2 launch during a test (a rough sketch of this kind of setup is included below).
Full traceback:
I see there's been a rework of the qos_event.py file between Humble and Iron. I'm on Humble; any chance this is something the team has experienced themselves, and that it has been addressed in Iron?
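For reference, the setup looks roughly like the sketch below. This is not the actual test code, just an illustration of multiple nodes added to one MultiThreadedExecutor and spun in a single place; the node names are placeholders.

```python
import rclpy
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node


def main():
    rclpy.init()
    # Placeholder nodes standing in for the real nodes in the test file.
    node_a = Node('node_a')
    node_b = Node('node_b')

    executor = MultiThreadedExecutor()
    executor.add_node(node_a)
    executor.add_node(node_b)
    try:
        executor.spin()
    finally:
        executor.shutdown()
        node_a.destroy_node()
        node_b.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```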
Required Info:
- Operating System: Ubuntu 22.04
- Installation type: binaries (Humble)
- Version or commit hash: rclpy 3.3.8-2jammy.20230426.045804
- DDS implementation: Cyclone DDS
Steps to reproduce issue
I believe it to be a matter of chance; it's not reliably reproducible.
It happened for me when a node sent a service call immediately after its bringup, so maybe discovery triggered it. Perhaps the logging services were getting connected?
Expected behavior
The wait set has the appropriate size; alternatively, perhaps it would be acceptable for rclpy to catch this error and try to wait again rather than raising? A user-side version of that idea is sketched below.
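For illustration, a user-side version of that "catch and wait again" idea might look like the following sketch. The retry loop is only an illustration of the suggestion, not existing rclpy behavior, and it assumes the call that raised was spin_until_future_complete.

```python
import rclpy


def wait_with_retry(node, future):
    """Keep waiting on `future`, retrying if the wait set race raises IndexError.

    `node` is the node whose callbacks service the future; `future` is a
    pending rclpy Future (e.g. returned by a service or action call).
    """
    while rclpy.ok():
        try:
            rclpy.spin_until_future_complete(node, future, timeout_sec=1.0)
            if future.done():
                return future.result()
        except IndexError as exc:  # "wait set index too big"
            node.get_logger().warn(f'Wait set race while spinning, retrying: {exc}')
    return None
```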
Actual behavior
My process was terminated because of the exception.