Include Olivier's latest upstream changes, regenerate and run tests #793
Conversation
Olivier, is this ok?
I would say that this is not really needed, since it is part of PR #788. But if you want to wait for the approval of that PR, then yes, we can keep this as a temporary solution.
… where a index issue was not spotted" for madgraph5#781. I will regenerate and run tests
…ter Olivier's commit 8a18cc242 "better handling of the run_card"
./MG5_debug: AttributeError: 'PLUGIN_ProcessExporter' object has no attribute 'run_card_class'
…changes, which should fix madgraph5#781
Apart from codegen logs, there are changes in banner.py, but also one change in matrix1.f for ggttggg. This may indeed be the fix for madgraph5#781.
…nstall actions/gh-actions-cache gives 'HTTP 403: API rate limit exceeded')
…le - usual failures in ggttg f/m and gqttq f (madgraph5#783), no change in performance
…h5#781 in ggttggg, no change in performance
Hi @oliviermattelaer, thanks. Sorry, I had not looked at #788 yet; I did now. But there are still a few things I need to check and change in there. Is it ok to merge this one in the meantime (today) and then work on #788 next week when you are at the workshop? Thanks, Andrea
Then maybe it makes more sense to not include run_card_class at all here, and to change the gpucpp to not complain if this does not exist (let me do that)
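The tolerance Olivier suggests here, not crashing when a plugin exporter never defines run_card_class, can be sketched with a `getattr` default. The class and function names below are illustrative stand-ins, not the actual gpucpp code:

```python
# Hypothetical sketch: tolerate a missing run_card_class attribute instead of
# raising AttributeError, as in the ./MG5_debug traceback quoted above.
class ProcessExporter:
    """Stand-in for a plugin exporter that defines no run_card_class."""
    pass

def get_run_card_class(exporter, default=None):
    # getattr with a default returns None (or a caller-chosen fallback)
    # when the plugin does not define run_card_class at all.
    return getattr(exporter, "run_card_class", default)

exporter = ProcessExporter()
print(get_run_card_class(exporter))  # None rather than a crash
```

The same effect could be had with `hasattr` checks at each use site, but a single accessor with a default keeps the fallback policy in one place.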
The best would be to apply this patch:
But I'm confused about what you did to the MG5aMC branch.
(and apply the patch after 4 and/or after 5). Cheers, Olivier
Hi @oliviermattelaer, what I did was not a merge but a cherry-pick of a single commit.
This means I picked up your commit, and it now appears in the other branch as a new commit. If you do a merge, in principle it will not be picked up again... Sorry for the confusion, I hope this clarifies. Apart from that, I have then updated madgraph4gpu to use that commit as submodule.
By the way, I cherry-picked the one relevant to ggttggg and #781. The patch you mention above is the one for the run card, which I was not interested in testing or picking up yet.
PS: The reason I did a cherry-pick and not a merge is that I would not know which branch to merge... I prefer to let you do that. Here I just wanted to test the ggttggg patch, so I just picked that.
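The cherry-pick behaviour described above, the same patch reappearing under a new commit hash, can be reproduced in a throwaway repository. Everything below is a self-contained demo in a temporary directory, not the real mg5amcnlo/madgraph4gpu workflow:

```shell
# Demo: a cherry-picked commit carries the same patch but a different sha,
# which is why it "appears as" a new commit on the other branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "demo"
echo base > file.txt
git add file.txt && git commit -qm "base"
base_sha=$(git rev-parse HEAD)
git checkout -qb upstream            # upstream branch carries the fix
echo fix >> file.txt
git commit -qam "the fix"
fix_sha=$(git rev-parse HEAD)
git checkout -qb local "$base_sha"   # local branch starts from the same base
git cherry-pick "$fix_sha" >/dev/null
pick_sha=$(git rev-parse HEAD)
test "$fix_sha" != "$pick_sha" && echo "same patch, different sha"
```

A later merge of upstream into local then sees both commits; git usually resolves the duplicated patch silently, though it can occasionally surface as a spurious conflict.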
Will cherry-picking like that not create unnecessary conflicts? Andrea, where are the instructions to update all the generated code? (Or is the CI/CD doing that automatically?) Olivier
This completed my first version of the gpucpp PR, but I later also included the 3.5.2 upgrade.
Revert "[actions/gpucpp] TEMPORARILY disable testsuite on PRs (gh extension install actions/gh-actions-cache gives 'HTTP 403: API rate limit exceeded')"
This reverts commit 1fd1c4c.
…rmatting (cosmetics only)
…as made this unnecessary in 3.5.2)
Revert "[gpucpp] in CODEGEN output.py, add run_card_class to avoid crashes after Olivier's commit 8a18cc242 "better handling of the run_card""
This reverts commit 8c654cf.
…changes, merging to 3.5.2. Most changes are in the version comments (from 3.5.1 to 3.5.2). There are also some minor changes in genps.f, but they look like bug fixes (nincoming instead of a hardcoded 2).
… 3.5.2 - usual failures in ggttg f/m and gqttq f (madgraph5#783), no change in performance
STARTED AT Thu Nov 9 05:26:21 PM CET 2023
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Thu Nov 9 05:54:46 PM CET 2023 [Status=2]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Thu Nov 9 06:05:38 PM CET 2023 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Thu Nov 9 06:15:05 PM CET 2023 [Status=2]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Thu Nov 9 06:18:20 PM CET 2023 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Thu Nov 9 06:21:32 PM CET 2023 [Status=0]
…o 3.5.2, no change in functionality or performance
STARTED AT Thu Nov 9 06:24:51 PM CET 2023
ENDED AT Thu Nov 9 10:43:11 PM CET 2023 Status=0
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
Thanks @oliviermattelaer - as discussed offline, I have now removed run_card_class (I have reverted an earlier change). I have now moved to your latest mg5amcnlo, which merges 3.5.2. I have regenerated all processes and rerun all tests, and everything is fine, so I will merge this.
I have created a script. Anyway, the CI now does a codegen from scratch, and does NOT check that the code is the same as that in the repo. So there is no need for you to regenerate the code and recommit it; I can do that. As you prefer.
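The consistency check the CI is said not to do could be sketched as a regenerate-and-diff step. All paths below are invented stand-ins (not the real madgraph4gpu layout or codegen command), with plain directories simulating the committed and freshly generated trees:

```shell
# Hypothetical CI step: compare freshly generated code against the repo,
# failing the job if the committed code is stale.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/committed" "$tmp/regenerated"
# Stand-ins for the committed tree and the output of a fresh codegen run.
echo "code v1" > "$tmp/committed/matrix1.f"
echo "code v1" > "$tmp/regenerated/matrix1.f"
# diff -r exits non-zero on any divergence, which would fail the CI job.
if diff -r "$tmp/committed" "$tmp/regenerated" >/dev/null; then
  echo "generated code matches the repository"
else
  echo "generated code is stale; regenerate and recommit" >&2
  exit 1
fi
```

Running codegen from scratch tests that generation works, but only a diff like this would catch a repo whose committed code has drifted from the generator's output.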
Ok, the new CI tests succeed - I am merging this. (As discussed, let's keep #788 for next week instead.)
This is mainly about adding Olivier's fix for #781
I am running some tests and will merge this tomorrow if ok
(@oliviermattelaer please note: I had to also add a run_card_class=None... please review in case)