
Issue on integrating lammps with MACE potential #507

Open
aishwaryo opened this issue Jul 8, 2024 · 10 comments
Comments

@aishwaryo

Dear developers, I was attempting to install the CPU implementation of lammps+MACE from https://mace-docs.readthedocs.io/en/latest/guide/lammps.html, which clones and builds the "mace" branch of the repo.

At the cmake step I am getting the following error:

CMake Error at Modules/Packages/ML-MACE.cmake:3 (find_package):
By not providing "FindTorch.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "Torch", but
CMake did not find one.

Could not find a package configuration file provided by "Torch" with any of
the following names:

TorchConfig.cmake
torch-config.cmake

Add the installation prefix of "Torch" to CMAKE_PREFIX_PATH or set
"Torch_DIR" to a directory containing one of the above files. If "Torch"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:526 (include)

-- Configuring incomplete, errors occurred!

Could you please assist in resolving this?

@stargolike

Have you installed torch on your machine? I met this error too, and after I installed a suitable torch version the error disappeared.

@wcwitt
Collaborator

wcwitt commented Jul 9, 2024

Adding to @stargolike's comment, have you followed this part of the instructions (using the appropriate libtorch version)?

wget https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-1.13.0%2Bcpu.zip
unzip libtorch-shared-with-deps-1.13.0+cpu.zip
rm libtorch-shared-with-deps-1.13.0+cpu.zip
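If libtorch is already unpacked but CMake still cannot find Torch, the usual fix is to point CMAKE_PREFIX_PATH (or Torch_DIR) at the unpacked directory when configuring. A minimal sketch, assuming libtorch was unzipped one level above the build directory (the path is an assumption, adjust it to wherever you extracted the archive):

# the libtorch directory must contain share/cmake/Torch/TorchConfig.cmake
cmake -D CMAKE_PREFIX_PATH=$(pwd)/../libtorch ../cmake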

@aishwaryo
Author

Hi! I fixed this issue by installing libtorch correctly. However, now when I try to run a simulation I get the error:

"pair_mace does not support vflag_atom."

It possibly comes from a compute stress/atom. I am attaching the LAMMPS script and .pt files as well. This script has previously been tested with other potentials and at least runs. Any help is appreciated!
send.zip
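For context: in LAMMPS the per-atom virial flag (vflag_atom) is requested by per-atom stress computes such as compute stress/atom, so commenting that line out, or switching to a global pressure output, should avoid the error. A hedged sketch (the compute ID is illustrative, not taken from the attached script):

# per-atom virial request; with pair_mace this triggers "does not support vflag_atom"
# compute         satom all stress/atom NULL
# a global pressure printed via thermo does not need per-atom virials
thermo_style    custom step temp pe press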

@wcwitt
Collaborator

wcwitt commented Jul 10, 2024

We don't currently offer a way to extract the local stress. Is that essential for your application?

@gabor1
Collaborator

gabor1 commented Jul 10, 2024

It would be nice to put that on our todo list

@bernstei
Collaborator

bernstei commented Jul 10, 2024 via email

@pobo95

pobo95 commented Jul 17, 2024

@aishwaryo You can solve the vflag_atom issue by using my repository and this code to pair with LAMMPS. @wcwitt I would like to open a pull request with this code, but I don't know which branch is the correct one. Could you make a new one?

@aishwaryo
Author

@pobo95 I have tried using your repo and code to pair with LAMMPS. However, while training I am getting the following across all epochs.

INFO: Epoch 12: loss=0.0000, RMSE_E_per_atom=243.9 meV, RMSE_F=0.0 meV / A

This does not happen when using the original MACE repo. Could you suggest a way forward? I have also attached the log and the xyz file so you can check.
combined.zip

MACE_model_run-123.log

@pobo95

pobo95 commented Jul 21, 2024

@aishwaryo

Maybe you didn't use a test set?

Can you share the result with my data set (train_validate.zip) using this line?

mace_run_train --name="mace" --train_file="train.xyz" --valid_fraction=0.05 --test_file="validate.xyz" --E0s={14:-0.08755009} --model="MACE" --num_interactions=2 --hidden_irreps='64x0e + 64x1o' --max_L=1 --correlation=2 --r_max=5.0 --forces_weight=1000 --energy_weight=10 --batch_size=5 --valid_batch_size=2 --max_num_epochs=5000 --compute_stress="true" --start_swa=450 --scheduler_patience=5 --patience=15 --eval_interval=3 --ema --swa --swa_forces_weight=10 --default_dtype="float64" --device=cuda --seed=123 --restart_latest --save_cpu --error_table=PerAtomRMSEstressvirials

@pobo95
Copy link

pobo95 commented Jul 21, 2024

@aishwaryo

When I tested with your data set using this line:

mace_run_train --name="mace" --train_file="combined.xyz" --valid_fraction=0.05 --E0s=average --model="MACE" --num_interactions=2 --hidden_irreps='64x0e + 64x1o' --max_L=1 --correlation=2 --r_max=5.0 --forces_weight=1000 --energy_weight=10 --batch_size=5 --valid_batch_size=2 --max_num_epochs=5000 --compute_stress="true" --start_swa=450 --scheduler_patience=5 --patience=15 --eval_interval=3 --ema --swa --swa_forces_weight=10 --default_dtype="float64" --device=cuda --seed=123 --restart_latest --save_cpu --error_table=PerAtomRMSEstressvirials

It seems fine.

When I checked your log file, it looks like you didn't set max_L.

That could be the reason too.
