[CI] Verbose flag check #725

Merged: 5 commits into nod-ai:main from verbosity_test_check on Sep 6, 2024

Conversation

@newling (Contributor) commented Aug 29, 2024

It'd actually be nice to have a line printed for each test that runs. Next step: set up a few levels of verbosity:

0: nothing
1: a line for each test run
2: default, sensible
3: debug

Need to understand what `count` does in

 parser.add_argument("-v", "--verbose", action="count", default=0) 
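
A minimal sketch of how those levels might be applied in the runner (the helper and test names here are hypothetical, purely for illustration):

```python
def log(level, threshold, message):
    # Print only when the requested verbosity meets the threshold.
    if level >= threshold:
        print(message)

verbose = 2  # 0: nothing, 1: a line per test, 2: default, 3: debug
for name in ["matmul_example", "conv2d_example"]:  # hypothetical test names
    log(verbose, 1, f"Running {name}")
    log(verbose, 3, f"debug: full command and environment for {name}")
```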

@newling newling changed the title [CI] Checking if CI test is working [CI] Verbose check Aug 29, 2024
@newling newling changed the title [CI] Verbose check [CI] Verbose flag check Aug 29, 2024
@newling newling requested a review from makslevental August 29, 2024 23:36
@makslevental (Collaborator) commented:

> Need to understand what `count` does in

`-v` -> `args.verbose == 1`, while `-vv` -> `args.verbose == 2`.
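
To make that concrete, a quick standalone check of what `action="count"` returns (not part of the CI script itself):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0)

# Each occurrence of -v increments the stored value; the default applies
# when the flag is absent.
print(parser.parse_args([]).verbose)       # 0
print(parser.parse_args(["-v"]).verbose)   # 1
print(parser.parse_args(["-vv"]).verbose)  # 2
```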

@makslevental (Collaborator) commented:

In general, what I think when I see us fiddling with this kind of stuff is that we just need to port to pytest. The only reason I haven't done it yet is because pytest doesn't handle segfaults well despite claims to the contrary. At least so I recall - it's worth trying out for us. But actually https://github.com/pytest-dev/pytest-xdist/issues/126 gives us the solution: just use catchsegv.sh (or write our own signal handler).
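
As a rough sketch of the "write our own signal handler" idea under the current shell-out model (the helper name is hypothetical), a segfaulting test subprocess can be detected from its return code instead of taking down the harness:

```python
import signal
import subprocess

def run_test_command(cmd):
    # On POSIX, a negative return code means the child was killed by that signal.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode == -signal.SIGSEGV:
        print(f"SEGFAULT while running: {' '.join(cmd)}")
        return False
    return proc.returncode == 0
```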

@newling newling force-pushed the verbosity_test_check branch from 0743369 to f537d4a Compare September 5, 2024 18:09
@newling (Contributor, Author) commented Sep 5, 2024

> In general, what I think when I see us fiddling with this kind of stuff is that we just need to port to pytest. The only reason I haven't done it yet is because pytest doesn't handle segfaults well despite claims to the contrary. At least so I recall - it's worth trying out for us. But actually https://github.com/pytest-dev/pytest-xdist/issues/126 gives us the solution: just use catchsegv.sh (or write our own signal handler).

Ok. I'm not sure what 'this kind of stuff' is that you're referring to here. I'm not going to do this anytime soon and would like to maintain this script for as long as we need it.

@makslevental (Collaborator) commented:

> Ok. I'm not sure what 'this kind of stuff' is that you're referring to here. I'm not going to do this anytime soon and would like to maintain this script for as long as we need it.

We've been fiddling with this script a lot lately, right? Flags, shell-out adjustments, changing how the tests are layered. It signals to me that the script is not a substitute for a full-fledged testing framework that handles these things gracefully. In addition, a real testing framework could give us other useful features, like perf regression tracking via some kind of standardized logging of compile times, runtimes, etc. (see the sketch below).

In IREE itself I believe they're revamping the testing and benchmarking framework so I'm waiting until then to see whether we can use it or if we'll need our own.
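
For example, a pytest conftest hook along these lines could log per-test runtimes for regression comparison (a sketch, assuming a pytest port; the output file name is arbitrary):

```python
# conftest.py
import json

_durations = {}

def pytest_runtest_logreport(report):
    # Record the wall time of each test's call phase.
    if report.when == "call":
        _durations[report.nodeid] = report.duration

def pytest_sessionfinish(session, exitstatus):
    # Dump timings so successive CI runs can be compared for perf regressions.
    with open("test_durations.json", "w") as f:
        json.dump(_durations, f, indent=2)
```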

@makslevental (Collaborator) commented Sep 5, 2024

Just for comparison, this is what pytest looks like:

https://github.com/makslevental/mlir-python-extras/tree/main/tests

and here's what the test output looks like when everything passes

https://github.com/makslevental/mlir-python-extras/actions/runs/10714821483/job/29709174975#step:7:16

And when something fails

https://github.com/makslevental/mlir-python-extras/actions/runs/10448085182/job/28927992854

We could even leverage the iree/mlir python bindings to do runtime tests rather than shelling out:

https://github.com/makslevental/mlir-python-extras/blob/main/tests/test_runtime.py

Anyway, it's second-order stuff, but I think it's inevitable that we'll need a more serious testing framework.

@newling (Contributor, Author) commented Sep 5, 2024

Ok. I agree that we can do better by using pytest. For now, though, can you please accept the PRs I make improving this script? I'm not planning on adding any serious bells and whistles, but this PR fixes a broken test.

@newling (Contributor, Author) commented Sep 5, 2024

Bit the bullet and am doing this in parallel: #752

@newling newling closed this Sep 5, 2024
@newling newling reopened this Sep 5, 2024
@makslevental (Collaborator) commented:

> Ok. I agree that we can do better by using pytest. For now, though, can you please accept the PRs I make improving this script? I'm not planning on adding any serious bells and whistles, but this PR fixes a broken test.

My bad, I didn't realize you were like actually actually waiting on me to approve. Sorry!

@makslevental (Collaborator) left a review comment


Sorry for keeping you waiting

@newling newling merged commit 50034c7 into nod-ai:main Sep 6, 2024
6 checks passed
@newling newling deleted the verbosity_test_check branch December 12, 2024 23:07