coverage: include tests #39
base: master
Conversation
I don't think we need coverage of the As you see, it fails the tests because of implementation details and other hacks. As long as
It is useful to see whether tests are accidentally not executed.
Yeah, you are right, this can be useful to avoid some mistakes. I need to see how well it integrates. For example, here: Also, the

```python
if foo:
    bar()
baz()
```
Regarding the test example, you could use an assert for the attribute name and always raise AttributeError? Given the extra assertion this would improve the test. As for
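That suggestion could look something like this minimal sketch (the class and attribute names are hypothetical, not Loguru's actual test code):

```python
class FakeSink:
    """Hypothetical test double: assert which attribute is requested,
    then always raise AttributeError, as the test expects."""

    def __getattr__(self, name):
        # Requesting any other attribute fails the assertion,
        # which is the extra check mentioned above.
        assert name == "stop"  # hypothetical attribute name
        raise AttributeError(name)
```

The assert documents which attribute the code under test is expected to probe, while the AttributeError preserves the behavior the test relies on.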
We could maybe split this into two PRs: one for covering tests and then another for the branch coverage, which could be worked on over time then?
Sorry, I struggle to visualize your solution. I need to simulate the fact that
Oh, yes that makes sense, I see. Thanks!
I very much would prefer to avoid such comments in the source code.
Sure, why not. That will be easier to work on. Again, thanks for all your quality improvements. 👍
Branch updated from 62a07f7 to fc0d144.
Then it appears to just be a matter of the other attributes not being tested/covered, doesn't it? I've changed this PR/branch to only include tests in coverage, and use
Let's wait to enable branch coverage until after this is merged.
Yes, but theoretically, the other attributes don't need to be tested. They are just there in case some function within Loguru needs to use them. It appears that this is not the case, but it's an implementation detail; we don't know it from the point of view of the tests. Also, the line here is unreachable anyway, as the test is about wrong function arguments: tests/test_add_option#L364. I guess this is a situation where there is no other solution than using
For unreachable code I suggest using

Please feel free to push anything you have already while investigating - I've not looked closely at the coverage report yet myself.
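As an aside (not necessarily the tool being referred to above), coverage.py's standard way to exclude an intentionally unreachable line is a `# pragma: no cover` comment, sketched here on a hypothetical function:

```python
def level_no(name):
    # Hypothetical example, not Loguru's actual code.
    if name == "DEBUG":
        return 10
    if name == "INFO":
        return 20
    # Defensive branch the tests never reach; excluded from coverage:
    raise ValueError(name)  # pragma: no cover
```

The pragma keeps the defensive check in place without the uncovered line dragging the report below 100%.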
Nice trick. I will update your branch when I have some time.
I changed my mind again. As explained in this StackOverflow comment, I think coverage is not the appropriate tool to detect problems in the test suite. My trouble with this:
So, ideally there would be some tool to ensure that all test functions are run. Unfortunately, I didn't find any such tool. So, maybe we can use coverage of tests as a clue for noticing problems, but I think it should not fail CI checks.
I can see that it's not a perfect solution, but it appears to be the only option there is so far. A possible option might be to report coverage twice, once with and once without tests, and use flags on codecov to report this (e.g. "code" and "tests"). This will not help with the overall coverage report (e.g. for diffs), but would display them differently.
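A rough sketch of what such a flag setup might look like in `.codecov.yml` (flag and status names are illustrative, not the actual configuration from this PR):

```yaml
coverage:
  status:
    project:
      loguru:
        flags:
          - loguru
      tests:
        flags:
          - tests
```

The two reports would then be uploaded separately, e.g. with `codecov -F loguru` and `codecov -F tests`, so codecov can display each flag's coverage on its own.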
It also works without 100% coverage, because you can browse the coverage for the tests themselves.
It also helps with dead code there in general, e.g. compatibility code that is not being used anymore after dropping support for older versions of Python etc.
Why not? You do not have to be too careful about this, but then you get feedback if something is not being executed.
I think using
Mostly they would not be affected, but if so, it is easy to fix - and it is about maintainability in general, so also in their interest.
Thanks for taking the time to answer my concerns. I understand and I see how this can be useful.
Are PRs not built on Travis anymore? https://travis-ci.org/Delgan/loguru/pull_requests I don't think we can configure codecov, but I've pushed some trivial fixes for now.
Oops, I played with the Travis webhook the other day, it seems I broke it. Thanks for the improvements. 👍
Cool!
```diff
@@ -28,5 +28,6 @@ def a(x):
             f(0)
         except ZeroDivisionError:
             logger.exception("")
+            raise
     except Exception:
         logger.exception("")
```
The outer exception is not covered, but tests fail with this.
I think that might indicate some missing test (i.e. you want to also test the exception being re-raised / another exception being caught in the outer block).
Actually, the whole nested try blocks could be removed. This is not testing anything useful. Previously these test cases were generated automatically through some kind of template strings. This is no longer the case; I did not realize I could remove it.
btw: in general a good metric for PRs is whether the diff is covered and there are no unexpected changes (and coverage does not decrease), not whether coverage is at 100% (but we're almost there again anyway).
Only two places are left: one win32-specific branch, and the one commented on above already.
What about the win32 branch? It could maybe also be covered by simulating win32, until there is something like AppVeyor to test it for real.
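Simulating win32 could look roughly like this (the function is hypothetical; Loguru's actual win32 branch may check something else, e.g. `sys.platform` instead of `os.name`):

```python
import os
from unittest import mock

def supports_ansi():
    # Illustrative win32-only branch, not Loguru's actual code.
    if os.name == "nt":
        return False
    return True

# Pretend to be on Windows regardless of the host platform,
# so the win32 branch is executed (and counted as covered):
with mock.patch.object(os, "name", "nt"):
    assert supports_ansi() is False
```

This exercises the branch on any platform, though real-Windows CI would still be needed to verify the behavior is actually correct there.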
@blueyed Sorry, I missed your comments. Thanks for the hard work. It's ok if some branches in the tests are not covered. As I said, I don't plan to make sure that every line is executed each time I publish a new commit, so there is no reason to push for 100% here. I may take a look at the reports from time to time to make sure there is no problem or missed test, though. The only thing left to settle is to find a way for the coverage reports of
That is not really possible. I've added a change to report it twice to codecov, with flags "loguru" and "tests", so it will at least show this separately - but there is still only a single total coverage.
Hey @blueyed. So, with these changes, coverage of the project is no longer checked? That's problematic. :/ It seems to me that we are moving away from the primary purpose of this suggestion. Quoting you:
So, what about some kind of
This could also go into .codecov.yml, or be configured through their UI.
Ref: https://docs.codecov.io/docs/codecov-yaml
Ref: https://docs.codecov.io/docs/coverage-configuration
Ref: https://docs.codecov.io/docs/pull-request-comments
That was a bug, should be fixed now.
I do not think that is really possible without using coverage. But using a pytest plugin sounds like a nice idea for this, maybe this could be OTOH
It would be a useful feature for codecov to allow for an extra status that would allow handling tests separately: codecov/project would then only look at "loguru/", but "codecov/tests" could be configured to look at "tests/".
For now, using the flags this would get reported differently in the comment, similar to how it already looks on https://codecov.io/gh/Delgan/loguru/compare/acaeeb31689b75a003bb378f46de12f3ebfcacd2...820e180bafd1d252f1fb66d10eaa5600e478ee76.
I don't understand. How does using
Maybe; this could be proposed. But a first proof-of-concept restricted to the Loguru package would be fine too.
This sounds too complicated; it isn't worth the trouble in my opinion.
That would be nice, but I don't expect
How do you define "called functions"? While a pytest plugin that e.g. wraps every function found in collected test files to track whether it was called would be possible, it would still not catch
Ok, apart from the custom Github status - would you be comfortable with failing the CI job then?
But for now I think this PR is good by itself, and we should iterate based on what comes up.
But the codecov config should be changed to not require 100%, just the current percentage (i.e. that it does not decrease).
This is not exactly what I had in mind.
Sorry, I think I'm a little bit lost. To which case are you referring? If the code of
Indeed, the project is configured to fail the job if
I think the "pytest plugin" solution should be explored; this seems to satisfy all we need.
Travis does not fail - only codecov complains; like with a custom Github status.
Oh, yeah I see, thanks. I was referring to the "CI job" as a whole, like "whatever is automatically run at push and returns an error or success status". Not sure this is an appropriate definition, though. Failing the Travis job because of a coverage problem seems acceptable. Still, I would prefer the "pytest plugin" solution.
No description provided.