
Include Olivier's latest upstream changes, regenerate and run tests #793

Merged: 14 commits, Nov 10, 2023

Conversation

@valassi (Member) commented Nov 8, 2023

This is mainly about adding Olivier's fix for #781

I am running some tests and will merge this tomorrow if all is well.

(@oliviermattelaer please note: I also had to add a run_card_class=None... please review, just in case.)

@valassi valassi self-assigned this Nov 8, 2023
@valassi (Member Author) commented Nov 8, 2023

Olivier is this ok?
f8d7b10

@oliviermattelaer (Member)

I would say that this is not really needed, since it is already part of PR #788.

But if you would rather not wait for that PR to be approved, then yes, we can keep this as a temporary solution.

… where a index issue was not spotted" for madgraph5#781

I will regenerate and run tests
…ter Olivier's commit 8a18cc242 "better handling of the run_card"

  ./MG5_debug:AttributeError: 'PLUGIN_ProcessExporter' object has no attribute 'run_card_class'
…changes, which should fix madgraph5#781

Apart from codegen logs, there are changes in banner.py, but also one change in matrix1.f for ggttggg.
This may indeed be the fix for madgraph5#781
…nstall actions/gh-actions-cache gives 'HTTP 403: API rate limit exceeded')
…le - usual failures in ggttg f/m and gqttq f (madgraph5#783), no change in performance
@valassi (Member Author) commented Nov 9, 2023

Hi @oliviermattelaer, thanks. Sorry, I had not looked at #788 yet; I have now. But there are still a few things I need to check and change in there.

Is it ok to merge this one in the meantime (today), and then work on #788 next week while you are at the workshop?

Thanks Andrea

@oliviermattelaer (Member)

Then maybe it makes more sense not to include run_card_class here at all, and to change gpucpp not to complain if it does not exist (let me do that)

@oliviermattelaer (Member)

The best would be to apply this patch:

--- a/madgraph/iolibs/export_v4.py
+++ b/madgraph/iolibs/export_v4.py
@@ -480,7 +480,7 @@ class ProcessExporterFortran(VirtualExporter):
         if second_exporter:
             self.has_second_exporter = second_exporter

-        if self.has_second_exporter:
+        if self.has_second_exporter and hasattr(self.has_second_exporter, 'run_card_class'):
             with misc.TMP_variable(self, 'run_card_class', self.has_second_exporter.run_card_class):
                 self.create_run_card(matrix_elements, history)
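The guarded condition above follows a standard defensive pattern: test for an optional attribute with `hasattr` before entering a context manager that temporarily sets it. A minimal self-contained sketch of the idea (`tmp_variable`, `Exporter`, and `create_run_card` below are hypothetical stand-ins for MG5aMC's `misc.TMP_variable` and exporter machinery, not the real implementation):

```python
from contextlib import contextmanager

@contextmanager
def tmp_variable(obj, name, value):
    # Hypothetical stand-in for misc.TMP_variable: temporarily set
    # obj.<name> to value, restoring the previous state on exit.
    sentinel = object()
    old = getattr(obj, name, sentinel)
    setattr(obj, name, value)
    try:
        yield
    finally:
        if old is sentinel:
            delattr(obj, name)
        else:
            setattr(obj, name, old)

class Exporter:
    """Toy exporter; may or may not define run_card_class."""
    pass

def create_run_card(exporter):
    second = getattr(exporter, 'has_second_exporter', None)
    # The patched condition: only proceed if the second exporter
    # actually provides a run_card_class attribute.
    if second and hasattr(second, 'run_card_class'):
        with tmp_variable(exporter, 'run_card_class', second.run_card_class):
            return exporter.run_card_class  # stand-in for the real work
    return None

plain = Exporter()                 # no run_card_class attribute
plugin = Exporter()
plugin.run_card_class = 'MyRunCard'

first = Exporter()
first.has_second_exporter = plain
assert create_run_card(first) is None       # skipped, no AttributeError

first.has_second_exporter = plugin
assert create_run_card(first) == 'MyRunCard'
```

With the guard in place, an exporter that never defines run_card_class (the situation behind the AttributeError in MG5_debug above) is simply skipped instead of crashing.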

But I'm confused about what you did to the MG5aMC branch.
I would say that the sequence of operations would be:

  1. git switch gpucpp
  2. git merge 3.x
  3. apply the above patch
  4. git push to gpucpp (or to another branch and do a PR)
  5. update the submodule

(and apply the patch after 4 and/or after 5).
I was planning to do the first four steps today; I thought you had done that yesterday, but it looks like that is not the case.
Could you tell me what you did?

Cheers,

Olivier

@valassi (Member Author) commented Nov 9, 2023

But I'm confused about what you did to the MG5aMC branch.

Hi @oliviermattelaer what I did was not a merge but a cherry-pick of a single commit

git cherry-pick ac7de8d

This means I picked up your
mg5amcnlo/mg5amcnlo@ac7de8d

And now this appears in the other branch as
mg5amcnlo/mg5amcnlo@d7a466d

If you do a merge, in principle it will not be picked up again...

Sorry for the confusion, I hope this clarifies.

Apart from that, I then updated madgraph4gpu to use that commit as the submodule.
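For context on why the cherry-pick shows up under a new hash: a git commit ID is the SHA-1 of the whole commit object, including the parent pointer and the committer metadata, so re-applying the same change onto a different parent necessarily produces a different ID (here, ac7de8d became d7a466d). A simplified sketch of that derivation (real commit objects also carry author lines and timestamps):

```python
import hashlib

def commit_id(tree, parent, message, committer):
    # Simplified model: a git commit ID is the SHA-1 of the commit
    # object, which includes the parent pointer, not just the diff.
    body = f"tree {tree}\nparent {parent}\ncommitter {committer}\n\n{message}"
    obj = f"commit {len(body)}\0{body}".encode()
    return hashlib.sha1(obj).hexdigest()

# The same change (same tree, same message, same committer) applied
# on two different parents yields two different commit IDs -- which
# is why a cherry-pick always creates a new SHA.
original = commit_id("abc123", "parent-on-3.x", "fix index issue", "olivier")
picked   = commit_id("abc123", "parent-on-gpucpp", "fix index issue", "olivier")
assert original != picked
```

This is also why cherry-pick and a later merge interact awkwardly: the history ends up with two distinct commits carrying the same change, which is behind the concern about spurious conflicts discussed below.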

@valassi (Member Author) commented Nov 9, 2023

The best would be to apply this patch:

--- a/madgraph/iolibs/export_v4.py
+++ b/madgraph/iolibs/export_v4.py
@@ -480,7 +480,7 @@ class ProcessExporterFortran(VirtualExporter):
         if second_exporter:
             self.has_second_exporter = second_exporter

-        if self.has_second_exporter:
+        if self.has_second_exporter and hasattr(self.has_second_exporter, 'run_card_class'):
             with misc.TMP_variable(self, 'run_card_class', self.has_second_exporter.run_card_class):
                 self.create_run_card(matrix_elements, history)

But I'm confused about what you did to the MG5aMC branch. I would say that the sequence of operations would be:

1. git switch gpucpp

2. git merge 3.x

3. apply the above patch

4. git push to gpucpp (or to another branch and do a PR)

5. update the submodule

(and apply the patch after 4 and/or after 5). I was planning to do the first four steps today; I thought you had done that yesterday, but it looks like that is not the case. Could you tell me what you did?

Cheers,

Olivier

By the way, I cherry-picked the commit relevant to ggttggg and #781. The patch you mention above is the one for the run card, which I was not yet interested in testing or picking up.

@valassi (Member Author) commented Nov 9, 2023

PS: The reason I did a cherry-pick and not a merge is that I would not know which branch to merge... I prefer to let you do that. Here I just wanted to test the ggttggg patch, so I picked only that commit.

@oliviermattelaer (Member)

Won't cherry-picking like that create unnecessary conflicts?
In any case, I have done the merge of 3.x into the gpucpp branch, and no conflict was generated.
And I have applied the above patch on top of it.

Andrea, where are the instructions to update all the generated code? (Or does the CI/CD do that automatically?)

Olivier

This completed my first version of the gpucpp PR, but I later also included the 3.5.2 upgrade

Revert "[actions/gpucpp] TEMPORARILY disable testsuite on PRs (gh extension install actions/gh-actions-cache gives 'HTTP 403: API rate limit exceeded')"
This reverts commit 1fd1c4c.
…as made this unnecessary in 3.5.2)

Revert "[gpucpp] in CODEGEN output.py, add run_card_class to avoid crashes after Olivier's commit 8a18cc242 "better handling of the run_card""
This reverts commit 8c654cf.
…changes, merging to 3.5.2

Most changes are in the version comments (from 3.5.1 to 3.5.2)
There are also some minor changes in genps.f, but they look like bug fixes (nincoming instead of a hardcoded 2)
… 3.5.2 - usual failures in ggttg f/m and gqttq f (madgraph5#783), no change in performance

STARTED  AT Thu Nov  9 05:26:21 PM CET 2023
./tput/teeThroughputX.sh -mix -hrd -makej -eemumu -ggtt -ggttg -ggttgg -gqttq -ggttggg -makeclean
ENDED(1) AT Thu Nov  9 05:54:46 PM CET 2023 [Status=2]
./tput/teeThroughputX.sh -flt -hrd -makej -eemumu -ggtt -ggttgg -inlonly -makeclean
ENDED(2) AT Thu Nov  9 06:05:38 PM CET 2023 [Status=0]
./tput/teeThroughputX.sh -makej -eemumu -ggtt -ggttg -gqttq -ggttgg -ggttggg -flt -bridge -makeclean
ENDED(3) AT Thu Nov  9 06:15:05 PM CET 2023 [Status=2]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -rmbhst
ENDED(4) AT Thu Nov  9 06:18:20 PM CET 2023 [Status=0]
./tput/teeThroughputX.sh -eemumu -ggtt -ggttgg -flt -curhst
ENDED(5) AT Thu Nov  9 06:21:32 PM CET 2023 [Status=0]
…o 3.5.2, no change in functionality or performance

STARTED AT Thu Nov  9 06:24:51 PM CET 2023
ENDED   AT Thu Nov  9 10:43:11 PM CET 2023

Status=0

24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_eemumu_mad/log_eemumu_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttggg_mad/log_ggttggg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttgg_mad/log_ggttgg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggttg_mad/log_ggttg_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_ggtt_mad/log_ggtt_mad_m_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_d_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_f_inl0_hrd0.txt
24 /data/avalassi/GPU2023/madgraph4gpuX/epochX/cudacpp/tmad/logs_gqttq_mad/log_gqttq_mad_m_inl0_hrd0.txt
@valassi (Member Author) commented Nov 10, 2023

Then maybe it makes more sense not to include run_card_class here at all, and to change gpucpp not to complain if it does not exist (let me do that)

Thanks @oliviermattelaer - as discussed offline, I have now removed run_card_class (reverting my earlier change):
aae8ef1
Now this works fine, thanks to your changes in mg5amcnlo.

I have now moved to your latest mg5amcnlo, which merges 3.5.2. I have regenerated all processes and rerun all tests, and everything is fine, so I will merge this.

Andrea where is the instruction to update all the generated code? (or is the CI/CD doing that automatically?)

I have created a script epochX/cudacpp/CODEGEN/allGenerateAndCompare.sh which regenerates all 15 processes in the repo. I have now been using that routinely for a few weeks.

Anyway, the CI now does a codegen from scratch, and does NOT check that the generated code is the same as that in the repo. So there is no need for you to regenerate the code and recommit it; I can do that. As you prefer.

@valassi (Member Author) commented Nov 10, 2023

Ok the new CI tests succeed - I am merging this.

(As discussed, let's keep #788 for next week instead.)

@valassi valassi merged commit a059006 into madgraph5:master Nov 10, 2023

Successfully merging this pull request may close these issues.

madevent fails for ggttggg - AMP(1893) is used but dimensions are AMP(1890)