-
Hi @bastonero, thank you, glad to hear! I think it has mostly been established that newer PyTorch versions work fine for AMD GPUs; some relevant threads are:
The newest versions may also be fine on CUDA, but that has not been fully confirmed yet. 1.x versions other than 1.11 should be avoided. If you run into issues, or find anything that works (or doesn't work) unexpectedly, please post that info here!
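For anyone debugging this, a quick way to check which PyTorch build and GPU backend an environment (or container) actually provides is something like:

```bash
# Sanity check: print the PyTorch version, whether it is a CUDA or
# ROCm/HIP build, and whether it can see a GPU at all.
python -c "
import torch
print('PyTorch version :', torch.__version__)
print('CUDA build      :', torch.version.cuda)                   # None for ROCm builds
print('ROCm/HIP build  :', getattr(torch.version, 'hip', None))  # None for CUDA builds
print('GPU visible     :', torch.cuda.is_available())
"
```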
-
Hi, on LUMI-G, there is already a container available for nequip and pair_allegro. The procedure is supposed to be the following:
Then:
Now you can do (typically before nequip-train or lmp -in ... in your job script):
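The exact commands depend on the module CSC provides for the container, so the lines below are only a sketch with placeholder names:

```bash
# Sketch only: the module tree and module name are placeholders, not
# necessarily the actual nequip/pair_allegro module on LUMI.
module use /appl/local/csc/modulefiles   # CSC-provided module tree on LUMI (assumed path)
module load nequip                       # placeholder name for the container module
```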
With these modules loaded, the commands nequip-train, nequip-evaluate, nequip-deploy, nequip-benchmark and lmp (LAMMPS with pair_allegro) are available. Typical job script using LAMMPS:
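As a rough sketch only (the account, partition, module name, and resource numbers are placeholders to adapt to your project and to the actual module):

```bash
#!/bin/bash
#SBATCH --account=project_XXXXXXXXX   # placeholder project id
#SBATCH --partition=standard-g        # LUMI-G GPU partition (check the current name)
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=8             # one MPI rank per GCD
#SBATCH --time=01:00:00

# Make the container wrappers available (placeholder module name, see the sketch above).
module use /appl/local/csc/modulefiles
module load nequip

# Run LAMMPS with pair_allegro on a placeholder input file.
srun lmp -in in.lammps
```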
Please let me know whether this works and whether you run into any problems. I need to write up this procedure for my team as well, so it would be a bonus if you can check that it works.
-
Dear Developers,
Thanks first of all for the amazing work; it is really useful and easy to use.
I am trying to install NequIP on LUMI-G, where they suggest installing Python packages via Singularity containers. Unfortunately, the ROCm version in those containers is already newer than what is compatible with the supported PyTorch range (1.10.0 < version < 1.12.0). It would be great to know whether these constraints still apply, or whether you have already done some tests (e.g. with PyTorch 2.x).
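For reference, the versions inside a container can be checked with something like this (the image path is just a placeholder):

```bash
# Placeholder image path; replace with the actual Singularity image on LUMI.
singularity exec /path/to/pytorch_rocm.sif python -c "
import torch
print('PyTorch :', torch.__version__)
print('ROCm/HIP:', getattr(torch.version, 'hip', None))
"
```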
Thanks again for any help.