Test Coverage Discrepancy #186
Hi @raman-nareika,

My guess is that this is because you deleted covered lines (the code duplication). You might be hitting this use case from the FAQ; read it carefully and hopefully it will make sense for your scenario.

If you're going to fail a build, fail it because the number of uncovered lines increased, not because the coverage rate (percentage) decreased. In your scenario you still improved your code base; you're just measuring its quality with the wrong metric. The only time you can rely on the coverage rate (percentage) is when the original code base was at 100% coverage and the rate decreased.

Let me know if this is the reason why you are seeing this "Coverage Discrepancy".
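The rule of thumb above can be sketched as a simple build gate. This is a hypothetical helper, not part of pycobertura's API; the miss counts are made up:

```python
def coverage_gate(misses_before: int, misses_after: int) -> bool:
    """Pass the build as long as the number of uncovered lines
    did not increase, regardless of what happened to the
    coverage percentage."""
    return misses_after <= misses_before

# Deleting covered lines lowers the coverage percentage but leaves
# the miss count unchanged, so the gate still passes:
print(coverage_gate(misses_before=200, misses_after=200))  # True
```

The point of gating on the miss count rather than the rate is that it cannot be tricked by code shrinking or growing around the uncovered lines.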
I removed these lines as part of the same PR (2 commits) and not relative to the
If you are asking about the discrepancy with the percentages at the top of your screenshots under the "method health" section, then I'm unfamiliar with this UI. Does it come from pycobertura? What data source is used to display those coverage percentages? On the other hand, the "Code Coverage Diff Summary" sections in your screenshots seem accurate. Below I'm going to focus on those numbers.
That's likely because covered lines were removed, which shrank the size of the code base and thus shifted the overall covered/uncovered ratio. So in the first screenshot I see "TOTAL +0.39%" and in the second screenshot I see "TOTAL +0.34%". That seems accurate, and this behavior relates to the FAQ link I shared in my last message. But to restate the example here: if you have a code base like this:
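For instance, a hypothetical five-line function (the names are illustrative, not from the actual code base), where one line is never exercised by the tests:

```python
def describe(n):
    if n < 0:               # covered
        return "negative"   # NOT covered (the tests never pass a negative)
    text = "number"         # covered
    text = text.upper()     # covered -- the duplicated line
    return f"{text}: {n}"   # covered
```

Counting the five lines in the function body, four are covered: 80%.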
... which represents 80% coverage (4 out of 5 lines covered), and later you change it to this:
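A hypothetical after-state (illustrative names), where the duplicated covered line has been deleted:

```python
def describe(n):
    if n < 0:               # covered
        return "negative"   # still NOT covered
    text = "NUMBER"         # covered -- duplication removed
    return f"{text}: {n}"   # covered
```

Now three of the four lines are covered: 75%, even though no tested behavior was lost.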
... then the coverage decreases to 75% (3 out of 4 lines covered). While your example is more complex, that's what's happening. In the first screenshot, you had +39 new statements (TOTAL Stmts) and -1 uncovered line (TOTAL Miss). Then in the second screenshot, it's +33 new lines (TOTAL Stmts) and -1 uncovered line (TOTAL Miss). That would explain the drop in coverage rate according to the example above.
From what I see, the coverage did change accordingly as you added tests: it says "TOTAL +0.34%" (2nd screenshot) and "TOTAL +0.77%" (3rd screenshot). So coverage increased. We can tell because the number of statements between the 2nd and 3rd screenshots remained the same at +33 statements (TOTAL Stmts), but the number of uncovered lines (TOTAL Miss) went from -1 to -17, and fewer uncovered lines means more coverage. Does that make sense?
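To make the arithmetic concrete, here is a sketch with a hypothetical baseline (1000 statements, 200 uncovered); only the deltas match the screenshots:

```python
def rate(stmts: int, miss: int) -> float:
    """Coverage rate as a percentage of covered statements."""
    return 100 * (stmts - miss) / stmts

# Hypothetical baseline -- the absolute numbers are made up.
base_stmts, base_miss = 1000, 200
before = rate(base_stmts, base_miss)                 # 80.0%

# 2nd screenshot: +33 statements, -1 uncovered line
after_rename = rate(base_stmts + 33, base_miss - 1)

# 3rd screenshot: still +33 statements, but -17 uncovered lines
after_tests = rate(base_stmts + 33, base_miss - 17)

# Fewer uncovered lines over the same statement count
# means a strictly higher coverage rate:
print(before < after_rename < after_tests)  # True
```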
Hello,
On Wednesday, I created the pull request, and the overall test coverage went up (the Code Coverage Diff Summary showed a +0.39% total, and the total coverage increased by 0.04%). On Thursday, I renamed the file I'd added for consistency (ConversationParticipantsService -> EmailAddressService) and got rid of code duplication, so Stmts changed from 16 to 12 and the coverage decreased (see the second screenshot). Then I added one test for an uncovered method in the existing class (SecureMessageResource). The Code Coverage Diff Summary increased again, while the total coverage didn't (+0.77% vs. -0.03%, see the third screenshot).
Could anyone explain how it's possible that the Cover column shows an increase in coverage, but the overall coverage decreased?
Wednesday PR
Renamed the class
Added one test for the uncovered method from the existing class