[FIX] Fix incorrect PR comment generation #540
Conversation
Add a test checking that the info used to generate a PR comment handles variant output files correctly. Regression tests that output one of the allowed variants of the output file should be treated as passing, in addition to tests that output the original version of the output file.
Refactor the code for getting the info about a test run for use in a PR comment into a separate function. This makes the logic easier to test on its own, instead of having to test the entire process of creating a PR comment at once.
Currently, when generating the stats and the list of failed tests for PR comments, tests are only marked as passed if they exactly match the original output file. This commit changes that by also treating a regression test as passed if it produces an acceptable variant of the output file.
Add another test to make sure that the change in 4e78b55 didn't accidentally cause the PR comment handling to mark all tests as passing even if they result in an invalid variant file.
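Taken together, the behavior these commits describe can be sketched roughly as follows (the names below are illustrative, not the actual sample-platform functions): a test counts as passed when its output matches the original output file or any of its allowed variants, and anything else still counts as failed.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ExpectedOutput:
    """Illustrative container: the hash of the original output file plus
    the hashes of variants that are also accepted as correct."""
    original_hash: str
    variant_hashes: List[str]


def regression_test_passed(actual_hash: str, expected: ExpectedOutput) -> bool:
    """A test passes if its output matches the original file or any
    allowed variant; anything else (including an invalid variant) fails."""
    return (actual_hash == expected.original_hash
            or actual_hash in expected.variant_hashes)


expected = ExpectedOutput("abc123", ["def456"])
assert regression_test_passed("def456", expected)      # allowed variant: passes
assert not regression_test_passed("bad999", expected)  # invalid output: still fails
```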
I don't think the CI failure is related to this PR. From what I can tell, it appears to be related to actions/runner-images#2104, and there seems to be a relatively simple solution in actions/runner-images#2104 (comment). I'll send a PR with that fix.
I created #541 just now to fix that. The CI run for that PR seems to have gotten past installing dependencies successfully, so I think it should fix the installation error.
@pranavrajpal can you take a look at the CI? Some tests fail.
nosetests appears to be treating get_info_about_test_for_pr_comment as a test to run because it has 'test' in the name. That causes the test suite to fail, because nose calls the function without its required argument. This renames the function so that nosetests no longer treats it as a test.
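For context, nose's default collector picks up any module-level callable whose name matches its test pattern (roughly, 'test' at the start of the name or after '_', '.', or '-') and calls it with no arguments. A rough sketch of the two usual ways to avoid that (the names here are placeholders, not the exact ones used in the PR):

```python
# Option 1: rename the helper so nose's default name pattern no longer
# matches it (the approach taken in this PR).
def get_pr_comment_info(run):          # placeholder name
    ...


# Option 2: keep the original name but mark the helper explicitly so the
# collector skips it.
def get_info_about_test_for_pr_comment(run):
    ...


get_info_about_test_for_pr_comment.__test__ = False
```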
Codecov Report
@@            Coverage Diff             @@
##           master     #540      +/-   ##
==========================================
+ Coverage   80.98%   81.05%   +0.07%
==========================================
  Files          36       36
  Lines        3591     3605      +14
  Branches      408      410       +2
==========================================
+ Hits         2908     2922      +14
  Misses        549      549
  Partials      134      134
Flags with carried forward coverage won't be shown.
Continue to review full report at Codecov.
I think I understand why the tests are failing, but I don't think it's related to this PR. When the test suite runs its setup code, an HMAC expiry time is set relative to the current time. Later, during the test run, that time is compared to the current time to see if the HMAC has expired yet (sample-platform/mod_auth/controllers.py, line 378 in eca27bd), which would fail if that point were reached any later than 1 second after the setup code ran. That would cause it to never get to the template rendering, which would make the test fail. If you add a sleep to one of the affected tests, the failure reproduces reliably. The fix should be pretty simple: just set the expiry date far enough in the future that a slow test run can't overtake it.
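A minimal, self-contained sketch of that failure mode (made-up names; the real check is the one referenced in mod_auth/controllers.py): if the fixture computes an expiry only one second ahead, any delay of more than a second before the check runs makes the HMAC look expired.

```python
import time
from datetime import datetime, timedelta

# Stand-in for the test setup: the expiry is computed relative to "now"
# at fixture-creation time, only one second in the future.
expiry = datetime.now() + timedelta(seconds=1)


def hmac_still_valid() -> bool:
    """Stand-in for the expiry check: reject the HMAC once the stored
    expiry time has passed."""
    return datetime.now() <= expiry


time.sleep(2)               # simulate a slow test run
assert hmac_still_valid()   # fails: more than one second has elapsed
# A fix along the lines suggested above: compute the expiry far enough in
# the future (minutes rather than a second) that test duration can't
# overtake it.
```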
If it's not caused by your PR then I'd prefer a separate PR to fix it :)
I created #543 to fix that bug and one other related bug I found while fixing it, but, for reasons I don't understand, the CI seems to be failing when trying to generate the coverage report. Could you take a look and see if you know what's going wrong?
The CI appears to be passing now. Is there anything else you want me to change/fix?
Nice work 👍 Can you move the data classes to models.py and then import them as required?
Move dataclasses used for representing info about PR comments to models.py so that all classes for storing data are placed together.
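To illustrate the layout being requested (module and class names are placeholders, not the exact ones in the PR), the data container is defined in models.py and controllers.py only imports and populates it:

```python
# models.py (illustrative): data-only classes live here.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestRunInfo:
    """Summary of a regression run used when building the PR comment."""
    passed: int = 0
    failed: int = 0
    failed_test_names: List[str] = field(default_factory=list)


# controllers.py (illustrative): imports the class and keeps only the
# request-handling and comment-building logic.
#   from .models import TestRunInfo
#   info = TestRunInfo(passed=10, failed=2, failed_test_names=["sample-42"])
```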
I've moved the data classes to models.py. I'm slightly confused about how you decide what goes in controllers.py and what goes in models.py. I originally thought models.py was meant for database models and controllers.py was for everything else. Is models.py meant for any class that stores data, with controllers.py holding the rest of the actual logic, or is there some other distinction between what goes in each file?
I think you can go through mod_sample; afaik Mediainfoparser.py contains custom classes. It's not a big deal, but it's better if the classes aren't defined in controllers.py. @canihavesomecoffee has a say on this.
Ok, keeping all of the classes out of controllers.py so that all of the actual logic is kept together makes sense. The data classes are now in models.py and the CI seems to be passing, so let me know if you want me to change anything else.
LGTM. |
In general the models were for database (ORM) purposes, but I'd rather see the classes under model than tacked to the controller file. The controller file should be reserved for logic wrt handling requests :)
Please prefix your pull request with one of the following: [FEATURE] [FIX] [IMPROVEMENT].
In raising this pull request, I confirm the following (please check boxes):
My familiarity with the project is as follows (check one):
Fixes (part of) #535 (see below for more info)
This fixes a bug in the generation of the PR comment that caused it to mark regression tests as failing if they produced one of the allowed variants of the expected output file, instead of correctly marking these tests as passing. I also added a few tests for this bug.
I don't think this is a complete fix for #535 because I don't think it completely accounts for the difference between CCExtractor/ccextractor#1329 (comment) and https://sampleplatform.ccextractor.org/test/3224. In that comment, the table and the list of failed tests show different results, but the changes I made and the bug I found affect the results for both the table and the list of failed tests in the same way.
Judging from the template for the PR comment, that would probably be caused by tests and failed_tests in the template getting out of sync, but I can't tell what the cause of that might be just from looking at the code. I also don't know how to debug that problem because I can't tell what regression tests were incorrectly marked as failing.
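To make the "out of sync" concern concrete, here is a rough illustration (not the actual template or data model): if the summary stats and the failure list handed to the template are computed with different pass/fail logic, the table and the list can disagree.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Result:
    name: str
    passed: bool


results: List[Result] = [Result("sample-1", True), Result("sample-2", False)]

# Both values end up in the comment template; if "passed" is decided
# differently for the two of them (e.g. one accounts for variant files
# and the other does not), the table and the failure list diverge.
template_context = {
    "tests": {"passed": sum(r.passed for r in results),
              "failed": sum(not r.passed for r in results)},
    "failed_tests": [r.name for r in results if not r.passed],
}
```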