
reenable emulate_tty by default #286

Open · wants to merge 1 commit into base: rolling
Conversation

@wjwwood (Member) commented Jul 25, 2019

This is a follow up of #277

Signed-off-by: William Woodall <[email protected]>
@wjwwood wjwwood added the enhancement New feature or request label Jul 25, 2019
@wjwwood wjwwood self-assigned this Jul 25, 2019
@@ -390,6 +392,8 @@ def __on_process_stdin(
    def __on_process_stdout(
        self, event: ProcessIO
    ) -> Optional[SomeActionsType]:
        if self.__stdout_buffer.closed:
            raise RuntimeError('OnProcessIO (stdout) received after stdout buffer closed.')
Member Author
With this check, I get tracebacks like this:

[DEBUG] [launch]: Traceback (most recent call last):
  File "/home/william/ros2_ws/build/launch/launch/launch_service.py", line 350, in run
    self.__loop_from_run_thread.run_until_complete(run_loop_task)
  File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "/home/william/ros2_ws/build/launch/launch/launch_service.py", line 240, in __run_loop
    await process_one_event_task
  File "/usr/lib/python3.6/asyncio/coroutines.py", line 126, in send
    return self.gen.send(value)
  File "/home/william/ros2_ws/build/launch/launch/launch_service.py", line 177, in _process_one_event
    await self.__process_event(next_event)
  File "/usr/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
    return self.gen.send(None)
  File "/home/william/ros2_ws/build/launch/launch/launch_service.py", line 186, in __process_event
    entities = event_handler.handle(event, self.__context)
  File "/home/william/ros2_ws/build/launch/launch/event_handlers/on_process_io.py", line 73, in handle
    return self.__on_stdout(event)
  File "/home/william/ros2_ws/build/launch/launch/actions/execute_process.py", line 396, in __on_process_stdout
    raise RuntimeError('OnProcessIO (stdout) received after stdout buffer closed.')
RuntimeError: OnProcessIO (stdout) received after stdout buffer closed.

[ERROR] [launch]: Caught exception in launch (see debug for traceback): OnProcessIO (stdout) received after stdout buffer closed.

I believe this is because the line buffering logic was not designed to allow ProcessIO events to occur after the ProcessExit event arrives. @ivanpauno do you have any ideas about how we could change it to allow for that race condition?

Member

I'm flushing the buffers after a ProcessExit event:

OnProcessExit(
    target_action=self,
    on_exit=self.__flush_buffers,
),
The method for flushing the buffers leaves them in the wrong state for handling another ProcessIO event:

def __flush_buffers(self, event, context):
    with self.__stdout_buffer as buf:
        line = buf.getvalue()
        if line != '':
            self.__stdout_logger.info(
                self.__output_format.format(line=line, this=self)
            )
    with self.__stderr_buffer as buf:
        line = buf.getvalue()
        if line != '':
            self.__stderr_logger.info(
                self.__output_format.format(line=line, this=self)
            )
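The wrong state comes from the `with` blocks above: `io.StringIO`'s context manager closes the buffer when the block exits, so any later ProcessIO event hits a closed buffer. A minimal standalone demonstration:

```python
import io

# io.StringIO's context manager closes the buffer on exit. That is why
# a flush written as `with self.__stdout_buffer as buf:` leaves the
# buffer unusable for any ProcessIO event that arrives afterwards.
buf = io.StringIO()
buf.write('partial line')
with buf as b:
    line = b.getvalue()  # read the pending partial line
# leaving the with-block closed the buffer for good
print(buf.closed)  # True
```

Any subsequent `buf.write(...)` or `buf.getvalue()` would raise `ValueError: I/O operation on closed file`, which is what the `.closed` check in the diff guards against.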

One easy change is to restore the buffers to a correct state after handling the ProcessExit event.
That would at least avoid raising an error on a new ProcessIO event.
The only bad thing: I think it's impossible to ensure the buffer will be completely flushed after the process exits.
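That first option could look something like this (a minimal sketch; `flush_and_reset` and its parameters are hypothetical names, not launch's actual API):

```python
import io

def flush_and_reset(buffer, log_line):
    """Hypothetical helper: flush any pending partial line, then hand
    back a fresh buffer so later ProcessIO events are still accepted."""
    with buffer as buf:
        line = buf.getvalue()
        if line != '':
            log_line(line)
    # the with-block closed `buffer`; return a new one in a usable state
    return io.StringIO()
```

In `__flush_buffers` the action would then reassign the attribute, e.g. `self.__stdout_buffer = flush_and_reset(self.__stdout_buffer, self.__stdout_logger.info)`.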

Another option: Ignore ProcessIO events after flushing the buffers, instead of raising an error.
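The second option would be a small change in `__on_process_stdout`; a standalone sketch (function name and parameters are illustrative, not launch's actual signature):

```python
import io

def on_process_stdout(buffer, text):
    """Hypothetical variant: drop ProcessIO events that arrive after
    the buffer has been flushed and closed, instead of raising."""
    if buffer.closed:
        return None  # out-of-band output after ProcessExit: ignore it
    buffer.write(text)
    return buffer.getvalue()
```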

Member

Raising an error here, as you proposed, is also fine. If the daemon node of launch_testing is causing this race condition, I think the best solution is to stop using a daemon node.

Contributor

> Ignore ProcessIO events after flushing the buffers, instead of raising an error.

Hmm, if we cannot guarantee order, neither of those options will do. I don't know how many or how often we get this sort of out-of-band ProcessIO events, but we could stop buffering after ProcessExit.

Member

> Hmm, if we cannot guarantee order, neither of those options will do. I don't know how many or how often we get this sort of out-of-band ProcessIO events, but we could stop buffering after ProcessExit.

Sounds reasonable.
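A sketch of "stop buffering after ProcessExit" (class and method names are hypothetical; `emit` stands in for whatever forwards a complete line to the logger): before exit, output is line-buffered as today; after exit, late ProcessIO text is passed straight through instead of being accumulated.

```python
import io

class OutputBuffer:
    """Hypothetical sketch: line-buffer process output until the
    process exits, then pass late output through unbuffered."""

    def __init__(self, emit):
        self._buf = io.StringIO()
        self._exited = False
        self._emit = emit

    def on_io(self, text):
        if self._exited:
            self._emit(text)  # late event: no buffering, no error
            return
        self._buf.write(text)
        data = self._buf.getvalue()
        if '\n' in data:
            *lines, rest = data.split('\n')
            for line in lines:
                self._emit(line)
            self._buf = io.StringIO()  # keep only the partial tail
            self._buf.write(rest)

    def on_exit(self):
        rest = self._buf.getvalue()
        if rest:
            self._emit(rest)  # flush the pending partial line
        self._exited = True
```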

Member Author

> If the daemon node of launch_testing is causing this race condition, I think the best solution is to stop using a daemon node.

That's not the cause; the cause is using a pty for the subprocess's stdout and stderr, which means there is (apparently) different timing for when data becomes available on those pipes versus when the process exits. Perhaps instead we should not flush until stdout and stderr are closed (I think you can set up a protocol event for when that happens).
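The timing gap is easy to reproduce outside of launch (POSIX only; a minimal sketch using the stdlib `pty` module):

```python
import os
import pty
import subprocess

# A pty pair: the child writes to the slave end, we read the master.
master, slave = pty.openpty()
proc = subprocess.Popen(['echo', 'hello'], stdout=slave, stderr=slave)
proc.wait()                    # the child has fully exited here...
os.close(slave)
data = os.read(master, 1024)   # ...yet its output is still pending
os.close(master)
print(data)                    # includes b'hello'
```

The child's output sits in the pty's kernel buffer after the process is gone, so a ProcessIO event can legitimately arrive after ProcessExit.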

Member Author

Specifically this one: https://docs.python.org/3/library/asyncio-protocol.html#asyncio.SubprocessProtocol.pipe_connection_lost

I'll need to think about the lifetime of these objects and make sure things stick around long enough.
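A sketch of what that could look like at the protocol level (`DeferredFlushProtocol` and the `flush` callable are hypothetical; the hook itself, `asyncio.SubprocessProtocol.pipe_connection_lost`, is real): the final flush is deferred until both the process has exited and both output pipes have closed, regardless of the order those events arrive in.

```python
import asyncio

class DeferredFlushProtocol(asyncio.SubprocessProtocol):
    """Sketch: only flush once BOTH the process has exited and its
    stdout/stderr pipes have reported pipe_connection_lost."""

    def __init__(self, flush):
        self._flush = flush
        self._open_pipes = {1, 2}  # fds asyncio uses for stdout/stderr
        self._exited = False

    def pipe_data_received(self, fd, data):
        pass  # line buffering would happen here, as today

    def pipe_connection_lost(self, fd, exc):
        self._open_pipes.discard(fd)
        self._maybe_flush()

    def process_exited(self):
        self._exited = True
        self._maybe_flush()

    def _maybe_flush(self):
        if self._exited and not self._open_pipes:
            self._flush()
```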

launch_testing/launch_testing/test_runner.py (review thread resolved)
@hidmic
Contributor

hidmic commented Jun 22, 2022

@wjwwood does this still apply? Or should we drop it?

@audrow audrow changed the base branch from master to rolling June 28, 2022 14:19
@tonynajjar
Contributor

Hello, this PR has been open for 5 years; could we know if you still want to go through with it? If not, can it be closed?
I'm asking because this is a "wide reach" feature, in the sense that many ROS users have this flag enabled for all their nodes; it would be nice to get a conclusion on whether this will become the default or not. Thank you

@Ryanf55

Ryanf55 commented Oct 11, 2024

People are still getting hung up on this online:
https://robotics.stackexchange.com/questions/104557/ros2-launch-no-way-to-get-log-displayed-in-console

Without this, mixing different loggers in large apps causes some logs to be dropped.
