
test: fix hang when running tests that need parsing with --interactive #13980

Open

pks-t wants to merge 4 commits into master from pks-tap-fix-hang-with-interactive
Conversation

@pks-t (Contributor) commented Dec 5, 2024

When running tests with --interactive we don't redirect stdin, stdout or stderr, and instead pass them on to the user's console. Because the output is not redirected, we hang whenever the test in question needs parsing, as is the case for TAP output: we cannot read the process's stdout.

Fix this hang by not parsing output when running in interactive mode.
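To illustrate why there is nothing to parse in that situation, here is a minimal asyncio sketch (a simplified model, not Meson's actual code; echo merely stands in for a TAP-emitting test):

import asyncio

async def main() -> None:
    # Interactive mode, roughly: the child inherits the console instead
    # of writing to a pipe, which is what stdout=None gives us here.
    proc = await asyncio.create_subprocess_exec(
        'echo', 'ok 1 - example',
        stdout=None,
    )
    # Without a pipe there is no stream to parse: proc.stdout is None,
    # and a parser waiting for that output would block indefinitely.
    assert proc.stdout is None
    await proc.wait()

asyncio.run(main())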

@pks-t (Contributor, Author) commented Dec 5, 2024

I had a bit of a hard time coming up with a testcase for this, partly because the test harness does not seem to be prepared to pass arbitrary options to meson test. Help with that would be appreciated.

@pks-t force-pushed the pks-tap-fix-hang-with-interactive branch from 4d27f58 to 217629d on December 5, 2024 07:35
@bonzini (Collaborator) commented Dec 6, 2024

The problem here is that TAP tests with "not ok" results must fail even if their return code is 0, so your patch effectively changes the result.
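To illustrate with a made-up TAP stream, a test can print

1..2
ok 1 - setup
not ok 2 - feature X

and still exit with code 0, yet it has to be reported as failed. I think you need to add an "IGNORED" result and use it for parsed tests in interactive mode. Something like this on top of your patch: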

diff --git a/mesonbuild/mtest.py b/mesonbuild/mtest.py
index 556451c5e..d875cc033 100644
--- a/mesonbuild/mtest.py
+++ b/mesonbuild/mtest.py
@@ -243,6 +243,7 @@ class TestResult(enum.Enum):
     EXPECTEDFAIL = 'EXPECTEDFAIL'
     UNEXPECTEDPASS = 'UNEXPECTEDPASS'
     ERROR = 'ERROR'
+    IGNORED = 'IGNORED'
 
     @staticmethod
     def maxlen() -> int:
@@ -264,7 +265,7 @@ class TestResult(enum.Enum):
     def colorize(self, s: str) -> mlog.AnsiDecorator:
         if self.is_bad():
             decorator = mlog.red
-        elif self in (TestResult.SKIP, TestResult.EXPECTEDFAIL):
+        elif self in (TestResult.SKIP, TestResult.IGNORED, TestResult.EXPECTEDFAIL):
             decorator = mlog.yellow
         elif self.is_finished():
             decorator = mlog.green
@@ -821,7 +822,8 @@ class JunitBuilder(TestLogger):
                                {TestResult.INTERRUPT, TestResult.ERROR})),
                 failures=str(sum(1 for r in test.results if r.result in
                                  {TestResult.FAIL, TestResult.UNEXPECTEDPASS, TestResult.TIMEOUT})),
-                skipped=str(sum(1 for r in test.results if r.result is TestResult.SKIP)),
+                skipped=str(sum(1 for r in test.results if r.result in
+                                {TestResult.SKIP, TestResult.IGNORED})),
                 time=str(test.duration),
             )
 
@@ -831,6 +833,10 @@ class JunitBuilder(TestLogger):
                 testcase = et.SubElement(suite, 'testcase', name=str(subtest), classname=suitename)
                 if subtest.result is TestResult.SKIP:
                     et.SubElement(testcase, 'skipped')
+                elif subtest.result is TestResult.IGNORED:
+                    # shouldn't happen
+                    skip = et.SubElement(testcase, 'skipped')
+                    skip.text = 'Test output was not parsed.'
                 elif subtest.result is TestResult.ERROR:
                     et.SubElement(testcase, 'error')
                 elif subtest.result is TestResult.FAIL:
@@ -866,6 +872,10 @@ class JunitBuilder(TestLogger):
             if test.res is TestResult.SKIP:
                 et.SubElement(testcase, 'skipped')
                 suite.attrib['skipped'] = str(int(suite.attrib['skipped']) + 1)
+            elif test.res is TestResult.IGNORED:
+                skip = et.SubElement(testcase, 'skipped')
+                skip.text = 'Test output was not parsed.'
+                suite.attrib['skipped'] = str(int(suite.attrib['skipped']) + 1)
             elif test.res is TestResult.ERROR:
                 et.SubElement(testcase, 'error')
                 suite.attrib['errors'] = str(int(suite.attrib['errors']) + 1)
@@ -954,7 +964,7 @@ class TestRun:
         if self.results:
             # running or succeeded
             passed = sum(x.result.is_ok() for x in self.results)
-            ran = sum(x.result is not TestResult.SKIP for x in self.results)
+            ran = sum(x.result not in {TestResult.SKIP, TestResult.IGNORED} for x in self.results)
             if passed == ran:
                 return f'{passed} subtests passed'
             else:
@@ -974,6 +984,9 @@ class TestRun:
     def _complete(self) -> None:
         if self.res == TestResult.RUNNING:
             self.res = TestResult.OK
+        if self.needs_parsing and self.console_mode is ConsoleMode.INTERACTIVE:
+            # TODO self.console_mode does not exist
+            self.res = TestResult.IGNORED
         assert isinstance(self.res, TestResult)
         if self.should_fail and self.res in (TestResult.OK, TestResult.FAIL):
             self.res = TestResult.UNEXPECTEDPASS if self.res is TestResult.OK else TestResult.EXPECTEDFAIL
@@ -1593,6 +1606,7 @@ class TestHarness:
         self.unexpectedpass_count = 0
         self.success_count = 0
         self.skip_count = 0
+        self.ignored_count = 0
         self.timeout_count = 0
         self.test_count = 0
         self.name_max_len = 0
@@ -1736,6 +1750,8 @@ class TestHarness:
             self.timeout_count += 1
         elif result.res is TestResult.SKIP:
             self.skip_count += 1
+        elif result.res is TestResult.IGNORED:
+            self.ignored_count += 1
         elif result.res is TestResult.OK:
             self.success_count += 1
         elif result.res in {TestResult.FAIL, TestResult.ERROR, TestResult.INTERRUPT}:
@@ -1794,6 +1810,8 @@ class TestHarness:
         return prefix + left + middle + right
 
     def summary(self) -> str:
+        # TODO add ignored_count, but perhaps only if ConsoleMode.INTERACTIVE?
+        # or perhaps hide all zero lines except for OK and FAIL?
         return textwrap.dedent('''
             Ok:                 {:<4}
             Expected Fail:      {:<4}

@bonzini (Collaborator) commented Dec 31, 2024

Would you like to try the approach in the comment above?

@pks-t force-pushed the pks-tap-fix-hang-with-interactive branch from 217629d to eff716f on January 2, 2025 12:55
@pks-t requested a review from jpakkane as a code owner on January 2, 2025 12:55
pks-t added 4 commits January 2, 2025 13:56
After a test run finishes we print a summary that sums up test counts
by type, e.g. failed or skipped tests. In many cases though most of
the counts are expected to be zero, and the summary is thus needlessly
cluttered with irrelevant lines. This list of mostly-irrelevant results
will grow in a subsequent commit where we introduce "Ignored" test
results.

Prepare for this by filtering results: instead of unconditionally
printing every result, we now only print those results for which we
have at least one counted test. The exceptions are "Ok:" and "Fail:",
which are always printed.

Suggested-by: Paolo Bonzini <[email protected]>
Signed-off-by: Patrick Steinhardt <[email protected]>
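A minimal sketch of the filtering this commit describes (the labels and the counts mapping are illustrative, not Meson's actual attribute names):

def summary(counts):
    # Hide all-zero result lines, but always show "Ok:" and "Fail:".
    lines = []
    for label, count in counts.items():
        if count > 0 or label in ('Ok', 'Fail'):
            lines.append('{:<20}{:<4}'.format(label + ':', count))
    return '\n'.join(lines)

# Only the "Ok:", "Fail:" and "Ignored:" lines survive here:
print(summary({'Ok': 5, 'Expected Fail': 0, 'Fail': 1,
               'Skipped': 0, 'Ignored': 2}))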
Move the `console_mode` property into the TestRun class. This will be
required by a subsequent commit, where we start to ignore test results
for parsed interactive tests.

Signed-off-by: Patrick Steinhardt <[email protected]>
When running tests in interactive mode, the standard file streams
remain connected to the executing terminal so that the user can
interact with the tests. As a consequence, Meson itself no longer has
access to those streams, which is problematic for any test type that
requires parsing, as for example with the TAP protocol. Meson is
essentially flying blind in those cases because the test result cannot
be determined from the test's exit code alone, but can only reliably
be derived from the parsed output.

One obvious solution to this problem would be to splice the test
output so that both Meson and the user's terminal have access to it.
But when running in interactive mode it is quite likely that the test
itself will be driven from the command line, and the chance is high
that the resulting data on stdout cannot be parsed properly anymore.
This is for example the case in the Git project, where interactive
mode is typically used to drop the user into a shell or to invoke a
debugger.

Continuing to treat the output as properly formatted, parseable data
is thus likely a dead end in many use cases. Instead, introduce a new
"IGNORED" test result: when executing tests in interactive mode, and
when the test type indicates that it requires parsing, we do not try
to parse the output at all, but mark the test result as ignored.

Suggested-by: Paolo Bonzini <[email protected]>
Signed-off-by: Patrick Steinhardt <[email protected]>
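A condensed sketch of the check this commit adds (names follow the diff suggested earlier in this thread, with console_mode living on TestRun as per the previous commit; this is not the full method):

def _complete(self):
    if self.res is TestResult.RUNNING:
        self.res = TestResult.OK
    # Interactive parsed tests never had their output read, so their
    # real (e.g. TAP) results are unknown; mark them as ignored rather
    # than trusting the exit code.
    if self.needs_parsing and self.console_mode is ConsoleMode.INTERACTIVE:
        self.res = TestResult.IGNORED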
When running tests with `--interactive` we don't redirect stdin,
stdout or stderr, and instead pass them on to the user's console.
Because the output is not redirected, we hang whenever the test in
question needs parsing, as is the case for TAP output: we cannot read
the process's stdout.

Fix this hang by not parsing output when running in interactive mode.

Signed-off-by: Patrick Steinhardt <[email protected]>
@pks-t force-pushed the pks-tap-fix-hang-with-interactive branch from eff716f to 748f18c on January 2, 2025 12:58
@pks-t (Contributor, Author) commented Jan 2, 2025

@bonzini Thanks for giving me a nudge. I wanted to get to this, but hadn't yet found the time. I've now got a version that seems to work alright.
