[Build] Set up publication of Test results as comment on pull-requests #3144
Conversation
@cdietrich I assume the simplest would be to submit this and fix problems if they occur?
```yaml
runs-on: ubuntu-latest
steps:
  - name: Upload
    uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6
```
is there a reason to use an exact tag?
In general that's suggested practice to harden supply-chain security: otherwise it would be possible to publish a new artifact under an existing tag, which is exactly what one opts into when just using the major version.
I can change it to just use actions/upload-artifact@v4. Without dependabot being set up it is otherwise tedious to update.
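For illustration, the two pinning styles under discussion look like this as a workflow step (a sketch; the SHA is the one already used in this PR):

```yaml
steps:
  - name: Upload
    # Pinned to an immutable commit SHA, hardened against tag re-pointing;
    # the trailing comment documents which release the SHA corresponds to:
    uses: actions/upload-artifact@834a144ee995460fba8ed112a2fc961b36a5ec5a # v4.3.6

    # Alternative: floating major-version tag (picks up new releases
    # automatically, easier to maintain without dependabot):
    # uses: actions/upload-artifact@v4
```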
Btw., since you asked in the xtext-reference projects: one can configure dependabot to update only the workflows:
https://github.com/eclipse-m2e/m2e-core/blob/master/.github/dependabot.yml
It can also handle these specific commit IDs.
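A minimal `.github/dependabot.yml` restricted to workflow updates could look like this (a sketch modeled on the m2e example linked above; the weekly interval is an assumption). Dependabot will then bump SHA-pinned actions and keep the `# vX.Y.Z` comment in sync:

```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"           # scans .github/workflows
    schedule:
      interval: "weekly"
```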
Let's merge it and see how it goes!
Thanks @HannesWell
@HannesWell this seems to have broken the build at "Run actions/upload-artifact@v4". Any idea? https://github.com/eclipse/xtext/actions/runs/10334822997/job/28608707561?pr=3143
This is from your PR.
Is it possible that the matrix and the build-maven-artifacts jobs clash?
The 'Publish Unit Test Results' introduced with eclipse-xtext#3144 will fail if test-failures are found.
This now works much better with
@LorenzoBettini, @cdietrich I'll answer your remarks here to keep the other PRs clean of this discussion. In response to #3150 (comment)
Not yet, but I already noticed this too. :/ Unfortunately I could not really find any debug output from that collection job itself. My first guess is that it might be the errors during the shutdown of the Maven build JVM, as one can see as annotations in https://github.com/eclipse/xtext/actions/runs/10347229487. In general it has to be something in the logs that the action interprets as an error. In response to #2976 (comment):
If there are real test-failures there is a button that leads to the failure output. But in this case I think this isn't a false positive failure.
I cannot guarantee it, but I would expect that it also works for PRs from branches of the same repository. The jobs still run, just triggered by the push instead of the PR event. The processing is just triggered after all jobs in the
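For context, the publishing workflow described here is typically triggered via a `workflow_run` event, which fires once the producing workflow completes regardless of whether it was started by a push or a pull request. A minimal sketch (the workflow name "Build" is an assumption, it must match the `name:` of the build workflow):

```yaml
on:
  workflow_run:
    workflows: ["Build"]   # name of the workflow that uploads the test artifacts
    types: [completed]
```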
Well, if you would ask me in a project where I'm in charge, I would suggest everyone to use forks, regardless of whether that person is a committer or not. But I'm not in charge here, so my opinion does not count :) I just find it a bit disturbing to have all the current work of committers in my repo. Of course I can just fetch the master and then get rid of that. Maybe it's just a matter of what one is used to.
I have investigated the uploaded artifacts in detail and there are indeed test-failures in the runs. One can find them when searching the build-logs, for example for
IIRC, that's just an output log, not a real test failure. Test reports should not show real failures. Alternatively, we enabled rerun for flaky tests, so that might be a failure from one of the first runs. If the plug-in is not able to report flakes, then it might be useless.
Maybe the test does not work on Windows.
The test-report.xml shows a failure in its header:
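For reference, the failure counter sits in the root element of a surefire-style report; the values and names below are made up for illustration:

```xml
<testsuite name="org.example.SomeTest" tests="42" failures="1" errors="0" skipped="2">
  ...
</testsuite>
```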
That the actual builds do not fail is because of #3148, which I suggested based on the assumption that the test-collect job will fail if there are test-failures. And then one can see immediately from the run results whether run-failures are test-failures. But surprisingly, in #3148 (comment) the result is as expected, and the annotations are also applied as expected when one clicks on the details. Maybe it just didn't work before because not everything was available then? 🤔
Looks like I haven't refreshed for too long:
I think the build should fail if there are test failures, so the Maven option you added is wrong to me. I'd like to see the environment (build) with test failures, not to have a failure only on the test-collect build. For example, we know macOS is flaky, so its failure could be ignored most of the time. In the current state, it's harder to detect that. At least, that's what I'd find useful; as it stands I don't see benefits from the test report. At least, that's my POV. Unfortunately, I currently don't have time (holiday) to look into that further.

Maybe a test report on a per-build matrix basis would be best. If it's too much work, at least a test report for Linux Java 17 and Linux Java 21. To me, it's crucial now to remove that Maven option. Personally, I have a strong opinion on this point.
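A per-matrix report could be published directly from each build job instead of a single collect job. Assuming the action in use is EnricoMi/publish-unit-test-result-action (the default check name "Publish Unit Test Results" suggests it) and that the matrix defines `os` and `java` dimensions, a hedged sketch of such a step:

```yaml
- name: Publish Test Report (${{ matrix.os }}, Java ${{ matrix.java }})
  uses: EnricoMi/publish-unit-test-result-action@v2
  if: always()   # publish even when the build step failed
  with:
    check_name: "Test Results (${{ matrix.os }}, Java ${{ matrix.java }})"
    files: "**/target/surefire-reports/*.xml"
```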
Actually I thought this would be a quick win, since it works well in the projects I'm involved in, and I actually also didn't want to spend much time on this. But I have no clear idea why it's different here. Multiple builds should not be a problem (Eclipse Platform also has a matrix build). Maybe it's that there are really multiple separate jobs rather than one job executed as a matrix?
In general you can see the originating workflow if the failure annotations work. But of course it's one click more and you don't see it immediately.
@HannesWell now the test report job always fails when extracting the archives. I wonder whether our archives are too big.
The latest executions of the
Because there are two Linux builds with two Java versions.
But I thought the archives would be different because of the Java version used in the name.
Yes, but we have three 'Linux' results. If you search e.g. https://github.com/eclipse/xtext/actions/runs/10422949927/job/28868593443 for
Here I see just 1
I'm wondering as well why this happens.
At this job, the headline says 10 artifacts although only two are listed at the bottom, for example in: What's definitely wrong is that I forgot the condition for this one.
The GitHub event must be published for this action to work properly, if I understand its documentation correctly.
Yes, but only once per event. If you push a branch directly to the repo it is currently uploaded twice: |
"id": 10421428353, in the same
That explains the failure, but I don't understand why it happens. Can you also check a run with a PR from a fork? Maybe this is indeed caused by pushing to the repo directly.
Is that from the single event file?
The workflow run is the same, see the JSON above. The sizes are drastically different.
Next attempt: #3160
At least the publication of the test-results seems to work now, but the indication is still not working. Either the indicated test-failures are false (the corresponding build jobs should fail then) or posting the failure annotations does not work. But I don't understand why this also happens for PRs from forks. AFAICT the setup is almost the same as e.g. in m2e, and there it works as expected: https://github.com/eclipse-m2e/m2e-core I have to admit that I'm slowly losing my patience with this.
From Windows:

```shell
find . -name "*.txt" | xargs grep "Failures: [1-9]"
```
I see some
@cdietrich it's only in the shell script that we don't use anymore. @HannesWell I'd tend to revert this action that doesn't seem to work well, at least in our configuration. In the future I can try to look for alternative solutions.
Maybe there is something wrong with the flaky annotation. cc @szarnekow
The issue seems to imply that flaky tests are supported when reported by Maven. So everything should be fine when there are flakes. |
There is still an open PR for that.
Now I understand what you mean by flaky tests. I have to admit I didn't use
The log correctly shows flakes reported by surefire: it also tells you which run finally succeeds, or in some cases that they finally fail. So I confirm that Maven and Tycho do the right thing. The above issue has been reopened, saying that such flakes are currently not supported by the action. It would be enough if the PR didn't report failures in such cases, but I didn't find a way to achieve that.
Guess we could try to XSLT/sed the XML tags for flaky to /dev/null.
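One way to send the flaky tags "to /dev/null" without fragile text matching would be to drop the `flakyFailure`/`flakyError` elements that surefire writes for rerun-passed tests, so the published report only shows persistent failures. A hedged Python sketch (the element names are my understanding of the surefire rerun report schema, and this is an illustrative helper, not part of the PR):

```python
import xml.etree.ElementTree as ET

def strip_flaky(report_xml: str) -> str:
    """Remove <flakyFailure>/<flakyError> children from each <testcase>,
    so tests that passed on rerun are reported as plain successes."""
    root = ET.fromstring(report_xml)
    for testcase in root.iter("testcase"):
        for tag in ("flakyFailure", "flakyError"):
            # findall returns a list, so removing while looping is safe
            for flaky in testcase.findall(tag):
                testcase.remove(flaky)
    return ET.tostring(root, encoding="unicode")

# Made-up sample report: one test that failed once, then passed on rerun.
sample = """<testsuite tests="1" failures="0" errors="0">
  <testcase name="testFoo" classname="org.example.FooTest">
    <flakyFailure message="boom" type="AssertionError">stack trace</flakyFailure>
  </testcase>
</testsuite>"""

print(strip_flaky(sample))
```

This could run as a small cleanup step between downloading the artifacts and invoking the publish action.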
Thank you @LorenzoBettini, with #3164, at least the checks are now green again if there is no persistent failure. |
As suggested in #3139 (comment).
@cdietrich, @LorenzoBettini unfortunately this is only picked up after it is submitted to the main branch (since I don't have write access to this repo and the new workflow runs when another one is completed).
But as far as I can tell this is the same config as we use in