test: fix hang when running tests that need parsing with --interactive #13980

Open · wants to merge 1 commit into master from pks-tap-fix-hang-with-interactive
Conversation

@pks-t commented Dec 5, 2024

When running tests with --interactive we don't redirect stdin, stdout, or stderr, and instead pass them through to the user's console. This causes us to hang whenever the test's output needs parsing, as is the case for TAP output, because there is no pipe from which to read the process's stdout.

Fix this hang by not parsing output when running in interactive mode.
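The mechanism behind the hang can be illustrated with a minimal standalone sketch (my own example, not Meson's actual code): when a child process inherits the parent's stdout rather than writing to a pipe, the parent simply has no stream to parse TAP output from.

```python
import subprocess
import sys

# Illustrative sketch: a child that inherits the parent's stdout leaves
# Popen.stdout as None, so there is nothing for a TAP parser to read.
inherited = subprocess.Popen([sys.executable, "-c", "pass"])
inherited.wait()
print(inherited.stdout)  # None

# With an explicit pipe, the parent can read (and parse) the output.
piped = subprocess.Popen(
    [sys.executable, "-c", "print('ok 1')"],
    stdout=subprocess.PIPE, text=True,
)
out, _ = piped.communicate()
print(repr(out.strip()))  # 'ok 1'
```

In interactive mode the first situation applies by design, which is why the parsing step has nothing to consume and blocks.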

@pks-t (Author) commented Dec 5, 2024

I had a hard time coming up with a test case for this, partly because the test harness does not seem to be set up to pass arbitrary options to meson test. Help in that context would be appreciated.

@pks-t force-pushed the pks-tap-fix-hang-with-interactive branch from 4d27f58 to 217629d on December 5, 2024
@bonzini (Contributor) commented Dec 6, 2024

The problem here is that TAP tests with `not ok` results must fail even if their return code is 0, so your patch effectively changes the result. I think you need to add an IGNORED result and use it for parsed tests in interactive mode. Something like this on top of your patch:

diff --git a/mesonbuild/mtest.py b/mesonbuild/mtest.py
index 556451c5e..d875cc033 100644
--- a/mesonbuild/mtest.py
+++ b/mesonbuild/mtest.py
@@ -243,6 +243,7 @@ class TestResult(enum.Enum):
     EXPECTEDFAIL = 'EXPECTEDFAIL'
     UNEXPECTEDPASS = 'UNEXPECTEDPASS'
     ERROR = 'ERROR'
+    IGNORED = 'IGNORED'
 
     @staticmethod
     def maxlen() -> int:
@@ -264,7 +265,7 @@ class TestResult(enum.Enum):
     def colorize(self, s: str) -> mlog.AnsiDecorator:
         if self.is_bad():
             decorator = mlog.red
-        elif self in (TestResult.SKIP, TestResult.EXPECTEDFAIL):
+        elif self in (TestResult.SKIP, TestResult.IGNORED, TestResult.EXPECTEDFAIL):
             decorator = mlog.yellow
         elif self.is_finished():
             decorator = mlog.green
@@ -821,7 +822,8 @@ class JunitBuilder(TestLogger):
                                {TestResult.INTERRUPT, TestResult.ERROR})),
                 failures=str(sum(1 for r in test.results if r.result in
                                  {TestResult.FAIL, TestResult.UNEXPECTEDPASS, TestResult.TIMEOUT})),
-                skipped=str(sum(1 for r in test.results if r.result is TestResult.SKIP)),
+                skipped=str(sum(1 for r in test.results if r.result in
+                                {TestResult.SKIP, TestResult.IGNORED})),
                 time=str(test.duration),
             )
 
@@ -831,6 +833,10 @@ class JunitBuilder(TestLogger):
                 testcase = et.SubElement(suite, 'testcase', name=str(subtest), classname=suitename)
                 if subtest.result is TestResult.SKIP:
                     et.SubElement(testcase, 'skipped')
+                elif subtest.result is TestResult.IGNORED:
+                    # shouldn't happen
+                    skip = et.SubElement(testcase, 'skipped')
+                    skip.text = 'Test output was not parsed.'
                 elif subtest.result is TestResult.ERROR:
                     et.SubElement(testcase, 'error')
                 elif subtest.result is TestResult.FAIL:
@@ -866,6 +872,10 @@ class JunitBuilder(TestLogger):
             if test.res is TestResult.SKIP:
                 et.SubElement(testcase, 'skipped')
                 suite.attrib['skipped'] = str(int(suite.attrib['skipped']) + 1)
+            elif test.res is TestResult.IGNORED:
+                skip = et.SubElement(testcase, 'skipped')
+                skip.text = 'Test output was not parsed.'
+                suite.attrib['skipped'] = str(int(suite.attrib['skipped']) + 1)
             elif test.res is TestResult.ERROR:
                 et.SubElement(testcase, 'error')
                 suite.attrib['errors'] = str(int(suite.attrib['errors']) + 1)
@@ -954,7 +964,7 @@ class TestRun:
         if self.results:
             # running or succeeded
             passed = sum(x.result.is_ok() for x in self.results)
-            ran = sum(x.result is not TestResult.SKIP for x in self.results)
+            ran = sum(x.result not in {TestResult.SKIP, TestResult.IGNORED} for x in self.results)
             if passed == ran:
                 return f'{passed} subtests passed'
             else:
@@ -974,6 +984,9 @@ class TestRun:
     def _complete(self) -> None:
         if self.res == TestResult.RUNNING:
             self.res = TestResult.OK
+        if self.needs_parsing and self.console_mode is ConsoleMode.INTERACTIVE:
+            # TODO self.console_mode does not exist
+            self.res = TestResult.IGNORED
         assert isinstance(self.res, TestResult)
         if self.should_fail and self.res in (TestResult.OK, TestResult.FAIL):
             self.res = TestResult.UNEXPECTEDPASS if self.res is TestResult.OK else TestResult.EXPECTEDFAIL
@@ -1593,6 +1606,7 @@ class TestHarness:
         self.unexpectedpass_count = 0
         self.success_count = 0
         self.skip_count = 0
+        self.ignored_count = 0
         self.timeout_count = 0
         self.test_count = 0
         self.name_max_len = 0
@@ -1736,6 +1750,8 @@ class TestHarness:
             self.timeout_count += 1
         elif result.res is TestResult.SKIP:
             self.skip_count += 1
+        elif result.res is TestResult.IGNORED:
+            self.ignored_count += 1
         elif result.res is TestResult.OK:
             self.success_count += 1
         elif result.res in {TestResult.FAIL, TestResult.ERROR, TestResult.INTERRUPT}:
@@ -1794,6 +1810,8 @@ class TestHarness:
         return prefix + left + middle + right
 
     def summary(self) -> str:
+        # TODO add ignored_count, but perhaps only if ConsoleMode.INTERACTIVE?
+        # or perhaps hide all zero lines except for OK and FAIL?
         return textwrap.dedent('''
             Ok:                 {:<4}
             Expected Fail:      {:<4}
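The TAP semantics motivating the IGNORED result can be demonstrated with a short standalone script (a hypothetical example, not Meson code): a producer can report a failing subtest via a `not ok` line while still exiting with status 0, so a harness that skips parsing would wrongly report success.

```python
import subprocess
import sys

# Hypothetical TAP producer: one passing and one failing subtest, yet the
# process exits normally (status 0).
tap_child = "print('1..2'); print('ok 1'); print('not ok 2')"
proc = subprocess.run(
    [sys.executable, "-c", tap_child],
    capture_output=True, text=True,
)
# Only by parsing stdout can the harness detect the failure.
has_failure = any(
    line.startswith("not ok") for line in proc.stdout.splitlines()
)
print(proc.returncode, has_failure)  # 0 True
```

This is why simply skipping the parse in interactive mode silently flips such tests from failed to passed, whereas marking them IGNORED keeps the result honest.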
