
tests supposed to run with pytest shouldn't fall back to unittest in case of problems #321

Open · rotu opened this issue Mar 12, 2020 · 9 comments
Labels: bug (Something isn't working)

rotu (Contributor) commented Mar 12, 2020

http://build.ros2.org/view/Fci/job/Fci__nightly-cyclonedds_ubuntu_focal_amd64/40/consoleText

Although the underlying issue was fixed in #317, colcon test incorrectly reported that testing completed successfully, so the problem was never flagged on CI.

Starting >>> rqt
[967.160s] ERROR:colcon.colcon_core.task.python.test:Exception in Python testing step extension 'pytest': 'NoneType' object is not iterable
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/colcon_core/task/python/test/__init__.py", line 54, in test
    matched = extension.match(self.context, env, setup_py_data)
  File "/usr/lib/python3/dist-packages/colcon_core/task/python/test/pytest.py", line 49, in match
    return has_test_dependency(setup_py_data, 'pytest')
  File "/usr/lib/python3/dist-packages/colcon_core/task/python/test/__init__.py", line 214, in has_test_dependency
    for d in tests_require:
TypeError: 'NoneType' object is not iterable


----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
--- stderr: rqt
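
(For context: the TypeError above comes from has_test_dependency iterating over tests_require when the package's setup.py declares no test dependencies, leaving it None. A minimal sketch of the kind of guard that avoids the crash — illustrative only; the actual fix landed via #317 and the real function may differ:)

```python
def has_test_dependency(setup_py_data, name):
    """Sketch: check whether `name` appears among the test dependencies.

    `tests_require` may be missing from setup.py entirely, in which case
    it must be treated as an empty list -- iterating over None is exactly
    the TypeError shown in the log above.
    """
    # Assumption: setup_py_data behaves like a dict of setup() keywords.
    tests_require = setup_py_data.get('tests_require') or []
    # startswith() tolerates version specifiers such as 'pytest>=3'.
    return any(dep == name or dep.startswith(name) for dep in tests_require)
```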
dirk-thomas added the question (Further information is requested) label Mar 12, 2020
dirk-thomas (Member) commented Mar 12, 2020

There are multiple ways you can run tests for a Python package. When the first extension fails, colcon attempts to use the next extension, which in this case passes:

Ran 0 tests in 0.000s

OK

I don't think colcon has a way to distinguish this failure. It already prints a big error message. The only option I see is to not fall through to the next extension, and imo that would be a step backwards in terms of robustness.
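
(For readers following along: the fall-through being described lives in colcon_core/task/python/test/__init__.py, as shown in the traceback. Roughly paraphrased — with illustrative names rather than the verbatim source — it looks like this:)

```python
import logging

logger = logging.getLogger('colcon.python_testing')  # illustrative name

def run_first_matching_extension(extensions, context, env, setup_py_data):
    """Paraphrase of colcon's current behavior: extensions arrive sorted
    by priority; the first one whose match() returns True runs the step.
    """
    for name, extension in extensions:
        try:
            matched = extension.match(context, env, setup_py_data)
        except Exception as e:
            # The error is logged loudly, but the loop simply moves on to
            # the next extension -- this is the fallback under discussion.
            logger.error(
                "Exception in Python testing step extension %r: %s", name, e)
            continue
        if matched:
            return extension.step(context, env, setup_py_data)
    return None
```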

rotu (Contributor, Author) commented Mar 12, 2020

I think colcon should continue to run further tests (unless --abort-on-error is set) and return a non-zero exit code when it finishes. Possibly it should also add 1 to the error count in the test suite, and possibly add a test case in the XML output to represent the error encountered.
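
(To make the last suggestion concrete, here is a sketch of what recording such an error in the JUnit-style XML could look like. Everything below is hypothetical; write_extension_error_report is not an existing colcon API:)

```python
import xml.etree.ElementTree as ET

def write_extension_error_report(path, extension_name, traceback_text):
    """Hypothetical helper: record an extension crash as a test case with
    an <error> element, so CI dashboards cannot miss it."""
    suite = ET.Element(
        'testsuite', name='colcon.python_testing',
        tests='1', errors='1', failures='0', skipped='0')
    case = ET.SubElement(
        suite, 'testcase', classname='colcon.python_testing',
        name='extension.' + extension_name)
    error = ET.SubElement(
        case, 'error', message="extension '%s' crashed" % extension_name)
    error.text = traceback_text  # the full traceback becomes the error body
    ET.ElementTree(suite).write(path, encoding='utf-8', xml_declaration=True)
```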

dirk-thomas (Member) commented

> I think colcon should continue to run further tests (unless --abort-on-error is set)

I am not referring to tests from other packages. colcon did run the tests it found (zero of them, using python setup.py test) for that specific package. The first extension which can run Python tests had an error, but a second extension ran the tests it found just fine.

So my question again: do you suggest getting rid of the fallback behavior? As I mentioned above, that fallback behavior is very much desired in a lot of other cases because it increases robustness. How else do you suggest colcon should distinguish this case from an intentional fall-through to another extension (which does the requested job just fine)?

rotu (Contributor, Author) commented Mar 13, 2020

  1. I think any exception other than a SkipExtensionException should cause an error message and a nonzero return code (a sketch of that behavior follows this list). Maybe every extension that returns True from match() should run. (By the way, setup.py test is now deprecated, so using it to kick off pytest isn't something to worry about for long.)
  2. Unfortunately, the fall-through isn't really appropriate here. If a test suite is supposed to run with pytest and instead runs with unittest, it's very unlikely the results are at all sensible.
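
(A sketch of what item 1 proposes, contrasted with the current behavior shown earlier. SkipExtensionException does exist in colcon, but the stand-in class and this handling are assumptions, not the current code:)

```python
class SkipExtensionException(Exception):
    """Stand-in for colcon's exception of the same name; the real import
    path is deliberately omitted here."""

def run_first_matching_extension(extensions, context, env, setup_py_data):
    """Proposed behavior: only an explicit SkipExtensionException falls
    through to the next extension; any other exception propagates and
    fails the test step with a nonzero return code."""
    for name, extension in extensions:
        try:
            matched = extension.match(context, env, setup_py_data)
        except SkipExtensionException:
            continue  # intentional opt-out: try the next extension
        # Any other exception is NOT caught here, so it surfaces as a
        # real failure instead of silently triggering a fallback.
        if matched:
            return extension.step(context, env, setup_py_data)
    return None
```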

dirk-thomas (Member) commented

> I think any exception other than a SkipExtensionException should cause an error message and a nonzero return code.

See #322 (comment) for why I think this goes against the philosophy of being as robust against failures as possible.

> Maybe every extension that returns True from match() should run.

No, that is not a viable option. All extensions are considered in priority order, and the highest-priority one which matches is responsible for performing the task. Multiple extensions performing the same task doesn't make any sense.

> Unfortunately, the fall-through isn't really appropriate here. If a test suite is supposed to run with pytest and instead runs with unittest, it's very unlikely the results are at all sensible.

This is the essential point. Therefore I think a solution should address this exact case: the two extensions in question shouldn't both be considered in this situation.
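
(One shape such a solution could take, sketched under the assumption that the lower-priority extension's match() can consult the same setup.py data; the class name is hypothetical and this is not the actual patch:)

```python
# has_test_dependency is the helper visible in the traceback above.
from colcon_core.task.python.test import has_test_dependency

class SetuppyTestStep:  # hypothetical name for the lower-priority extension
    def match(self, context, env, setup_py_data):
        # If the package declares pytest as a test dependency, never take
        # over for the pytest extension: a crash there should then surface
        # as a failure instead of being masked by this fallback.
        if has_test_dependency(setup_py_data, 'pytest'):
            return False
        return True
```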

rotu (Contributor, Author) commented Mar 25, 2020

I don't know that I agree. One extension running functional tests and another running performance tests seems like a plausible use case. Or maybe a package has some pytest tests and some CMake tests.

Trundling on despite errors is not a good design choice. This error has to be made visible; at least visible enough that there's a chance of catching such a bug amidst a mountain of CI output.

dirk-thomas added the bug (Something isn't working) label and removed the question (Further information is requested) label Mar 25, 2020
dirk-thomas changed the title from "Colcon test passes even though test discovery crashes" to "tests supposed to run with pytest shouldn't fall back to unittest in case of problems" Mar 25, 2020
dirk-thomas (Member) commented

> One extension running functional tests and another running performance tests seems like a plausible use case. Or maybe a package has some pytest tests and some CMake tests.

Neither of those cases currently exists, nor is either even possible. A single extension is in charge of performing the action. It can choose to delegate the work to other extensions, but that doesn't change the fact that a single entry point is chosen to do the work.

> Trundling on despite errors is not a good design choice.

It feels to me that you want to describe things in the most negative way possible, even though I have commented extensively in this thread on why things are done the way they are (which you happily ignore afterwards). I also clearly stated that the fallback from pytest to unittest is undesired and should be addressed.

I changed the title to what I see as the acceptance criteria for this to be closed.

rotu (Contributor, Author) commented Mar 25, 2020

> It feels to me that you want to describe things in the most negative way possible, even though I have commented extensively in this thread on why things are done the way they are (which you happily ignore afterwards). I also clearly stated that the fallback from pytest to unittest is undesired and should be addressed.

I'm sorry. I did not intend to be negative, and I did not intend to raise hackles. I'm sure you understand how important it is to CI that tests are run as designed and the results made visible. I'm going to back off this issue and let you resolve it as you see fit. I'm sure I don't have the same insight into the architecture of colcon as you do.

rotu (Contributor, Author) commented Mar 25, 2020

Also updated the ticket description to fix the link to the log.
