Performance on JUWELS-Booster #6
Can you also test the
Yes, for some reason I couldn't get the input file for the rcemip case... will try once more.
Send me a message on Slack if I need to clarify something, maybe the
I managed to run all three tests on the DAS.
Ah I first needed to run
This is running without the 'cloud optics' flag btw.
Would it make sense if I also provide the same table for DAS-5? We have a node with an A100 (but older and slower Xeon CPUs).
You can tell me if those numbers are in line with mine, Alessio.
Fixed my latest table, which was transposed.
I did a quick benchmark on an AWS P3 V100 instance:
After using Fortran compiler flags (!) I get the following figures:
So the A100s (at least the ones in JUWELS-Booster) appear somewhat slower than the V100 card. This may indicate the code needs some re-tuning...
We have not done much tuning yet. We have rushed a little in getting a reference implementation ready that gives identical results to the CPU version, but there is probably still a lot to gain in kernel tuning.
Some ad-hoc tuning of the kernel block sizes reduces the timings to 3051 ms (lw) and 2669 ms (sw), so yes, there is definitely headroom for speedup by tuning on A100s.
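The ad-hoc block-size tuning described above can be sketched as a simple sweep; this is a minimal, hypothetical Python illustration (the `run_kernel` callable stands in for launching and synchronizing one of the actual CUDA kernels, and is not part of the real code):

```python
import time

def tune_block_size(run_kernel, candidates=(32, 64, 128, 256, 512)):
    """Time one launch per candidate block size and return the fastest.

    `run_kernel` is a hypothetical launcher: it takes a block size,
    runs the kernel with that configuration, and blocks until done
    (for a real CUDA kernel this would include a device synchronize).
    """
    timings = {}
    for block_size in candidates:
        start = time.perf_counter()
        run_kernel(block_size)  # stand-in for launch + synchronize
        timings[block_size] = time.perf_counter() - start
    # Return the candidate with the shortest wall-clock time.
    return min(timings, key=timings.get)
```

In practice each configuration would be launched several times (with warm-up runs discarded) before trusting the timings, since single-launch measurements on a GPU are noisy.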
Further manual tuning established
I have completed my tests on JUWELS-Booster, which has 2 × 24-core AMD EPYC Rome CPUs and 4 × NVIDIA A100 GPUs per node. I see virtually no speedup yet between the CPU and GPU versions.
For the CUDA runs I used the cuda branch.