
Rationalize Frontier compiler entries between SCREAM and E3SM. #6773

Open · rljacob opened this issue Nov 22, 2024 · 10 comments

@rljacob (Member) commented Nov 22, 2024

Opening this issue to discuss how to make sure all scream cases build with the "frontier" machine description and at least one compiler entry, so we can remove "frontier-scream-gpu" from config_machines.xml.

See https://acme-climate.atlassian.net/wiki/spaces/EIDMG/pages/3446079573/How+to+describe+heterogenous+node+machines+with+CIME for background.

In E3SM-Project/scream#2969 (comment) it was noted that scream's "craygnuamdgpu" tells the user that it "Uses Cray wrappers with Gnu compilers for the host, and uses the AMD Hip compiler for the GPU". We should follow that convention.

@rljacob (Member, Author) commented Nov 22, 2024

@bartgol (Contributor) commented Nov 22, 2024

Cosmetic comment: I personally find the string craygnuamdgpu hard to parse. For "combo" compilers such as this, it may be more readable to use hostcompiler-devicecompiler, that is, to add a dash in there. In this case, craygnu-amdgpu, so that the mach+compiler combo would be frontier_craygnu-amdgpu.

@grnydawn (Contributor) commented

To drive the merging of Frontier compiler entries, I think it would be useful to choose one or two Scream test cases within the E3SM machine/compiler configurations and resolve any issues encountered during their build and execution on Frontier. If necessary, we may create a new Scream test case.

@rljacob
Copy link
Member Author

rljacob commented Nov 23, 2024

Use any of the scream test cases that already exist, such as ne30_ne30.F2010-SCREAMv1, which is in the e3sm_scream_v1_medres suite in https://github.com/E3SM-Project/E3SM/blob/49fdbe3661f2b8c95d8459081f500fae3a069ba0/cime_config/tests.py#L704C6-L704C28

Those all pass on frontier when using --machine frontier-scream-gpu --compiler craygnuamdgpu

You could start by just copying the craygnuamdgpu compiler entry to the frontier machine file while we figure out how to name these.
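For concreteness, copying the compiler over would likely start with adding it to the frontier machine's supported-compiler list in config_machines.xml. A sketch of what that might look like (the surrounding values here are illustrative, not the actual Frontier entry):

```xml
<machine MACH="frontier">
  <!-- Hypothetical excerpt: only the COMPILERS line is the point here -->
  <DESC>ORNL Frontier, AMD EPYC CPUs with MI250X GPUs</DESC>
  <COMPILERS>crayclang,gnu,craygnuamdgpu</COMPILERS>
</machine>
```

The per-compiler build settings (flags, wrappers) would still need to be carried over alongside this list entry.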

@sarats (Member) commented Dec 9, 2024

FYI, discussed this topic extensively during the Perf/Infra call today: https://acme-climate.atlassian.net/wiki/spaces/EP/pages/4825645058/2024-12-09+Performance+Infrastructure+Meeting+Notes

@jgfouca and @grnydawn to coordinate and work together to consolidate config.

@rljacob (Member, Author) commented Jan 9, 2025

During the EAMxx dev call, we decided to use a "-" between the cpu and gpu compilers instead of putting "gpu" in the compiler name. If there is no dash, that means it's a cpu-only compile.
craycray-amd = cray wrapper around cray compiler for host, amd compiler on gpu
craygnu-amd = cray wrapper around gnu compiler, amd compiler on gpu
gnu-amd = use gfortran directly on host, amd compiler on gpu.
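The convention above is simple enough to split mechanically. A hypothetical helper illustrating the rule (the function name and return shape are my own, not part of any E3SM or CIME tooling):

```python
def parse_compiler_name(name: str):
    """Split a compiler identifier into (host, device) parts.

    Per the proposed convention: no dash means a CPU-only build,
    so the device part is None; otherwise the name is "host-device".
    """
    host, sep, device = name.partition("-")
    return (host, device if sep else None)

# Examples matching the list above:
print(parse_compiler_name("craygnu-amd"))  # ('craygnu', 'amd')
print(parse_compiler_name("gnu"))          # ('gnu', None)
```

A build system could then branch on whether the device part is present rather than grepping for a "gpu" substring in the compiler name.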

@ndkeen (Contributor) commented Jan 9, 2025

I've tried to bring up these topics before, but wanted to again state for the record:
For example, on perlmutter we have gnu and gnugpu, and while I don't think "gpu" belongs in the compiler name -- the compiler name is gnu -- we do need a way to know whether the exe runs on GPU nodes. So I think we need a different way (one not currently supported in CIME) to specify the hardware to be used. A new variable could then be tested against in cmake.
As we don't have that, making the change you suggest there (i.e. not having the gpu string in the compiler name) could lead to confusing cmake tests, where you basically just need to ask whether the exe is for GPU or not.

I also think that on frontier we can simply use gnu and gnugpu and then build with GNU fortran/C++ in the same way we do on perlmutter. Of course, I defer to the POCs, who could build differently if they wanted -- but even still, I'm not sure it is a good idea to have complicated-sounding compiler names. We could still have gnu or amd, with the details of how it's built kept in the config files.

@bartgol (Contributor) commented Jan 9, 2025

@rljacob what if we used openacc? Should the compiler be called gnu-openacc? gnu-gnu? And what if we used openacc for f90, and nvcc for C++? Should we call it gnu-nvcc? gnu-openacc-nvcc?

@rljacob (Member, Author) commented Jan 9, 2025

We have 3 things: wrapper, host compiler, device compiler. That would add a fourth thing: GPU programming model. Why would that be necessary?

@bartgol (Contributor) commented Jan 9, 2025

I'm just trying to understand what happens if we use nvcc for C++ and openacc for f90, both running on device. NVCC is a compiler, not a programming model. So would you do gnu-gnu-nvcc, since you use two different GPU compilers?

Re: openacc. It is a programming model, so what would you use -- gnu-gnu for code that uses openacc (or openmp-target) for the accelerator?
