How to use LAMMPS to train a larger system? #476
Comments
And I also found a problem: for the same system and model, ASE can run it, while LAMMPS cannot.
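(For context, running the same model through ASE typically looks like the minimal sketch below. This is an assumption about the user's setup, not their actual script; the model path and structure file are placeholders.)

```python
# Minimal ASE sketch (assumes a trained MACE model saved as "mace.model";
# "system.xyz" and the model path are hypothetical placeholders)
from ase.io import read
from mace.calculators import MACECalculator

atoms = read("system.xyz")                               # load the structure
calc = MACECalculator(model_paths="mace.model", device="cuda")
atoms.calc = calc

print(atoms.get_potential_energy())  # single-point evaluation on the GPU
```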
If you want to fit a larger number of atoms on a GPU, you should try to decrease your cutoff. What is your cutoff size?
Dear Ilyes, I am using the default cutoff, and this is my LAMMPS input file.
I understand your meaning; I should change it.
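(The input file itself appears to have been attached as an image in the original thread. For readers, a typical MACE-LAMMPS input has roughly the shape below; this is a sketch only, assuming the LAMMPS-MACE interface from the mace repository, with placeholder file and element names.)

```
# Sketch of a MACE pair-style setup in LAMMPS (file names are placeholders)
units         metal
atom_style    atomic
boundary      p p p
read_data     system.data

pair_style    mace no_domain_decomposition
pair_coeff    * * mace.model-lammps.pt O H

timestep      0.0005
fix           1 all nvt temp 300.0 300.0 0.1
run           1000
```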
I meant during training, you should try to use a smaller cutoff.
Sorry, I can't understand. This is my config, and there are no parameters about the cutoff in it.
What Ilyes meant was that when you train your MACE model, you should use a smaller cutoff. The …
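(For reference: the cutoff is the `r_max` argument of MACE's training script, and it defaults to 5.0 Å when not set explicitly, which is likely why it does not appear in the user's config. A sketch of retraining with a smaller value follows; file names and the specific numbers are illustrative.)

```
# Sketch: retrain with a smaller cutoff (r_max defaults to 5.0 A if omitted;
# the name, data file, and value 4.0 are placeholders)
mace_run_train \
    --name="mace_small_cutoff" \
    --train_file="train.xyz" \
    --hidden_irreps="64x0e + 64x1o" \
    --r_max=4.0 \
    --device=cuda
```

Note that with several message-passing layers, the effective receptive field of the model is roughly the number of interaction layers times `r_max`, so shrinking the cutoff reduces memory use more than linearly.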
Discussed in #475
Originally posted by stargolike June 20, 2024
I used a system of 200 atoms for training, with hidden irreps '64x0e+64x1o'.
But when I want to use LAMMPS for MD simulation, I can only run a system of about 1k atoms. If I use a larger system, I get an out-of-memory error.
The graphics card I am currently using is an RTX 8000, which works fine for training.
I would like to ask what this memory usage depends on, and how I can run a larger system in MD.
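(On what the memory depends on: in message-passing models like MACE, GPU memory scales roughly with the number of edges in the neighbor graph, and the per-atom neighbor count grows with the cube of the cutoff at fixed density. A back-of-envelope sketch is below; the density value is an assumed placeholder, not taken from the thread.)

```python
import math

def edge_estimate(n_atoms: int, r_max: float, density: float = 0.05) -> float:
    """Rough edge count: atoms x average neighbors within r_max.

    density is the number density in atoms/Angstrom^3 (0.05 is a generic
    placeholder; substitute your own system's value).
    """
    neighbors = density * (4.0 / 3.0) * math.pi * r_max**3
    return n_atoms * neighbors

for r in (5.0, 4.0, 3.0):
    print(f"r_max={r} A: ~{edge_estimate(10_000, r):,.0f} edges for 10k atoms")

# Halving the cutoff cuts the per-atom neighbor count by ~8x, which is why a
# smaller training cutoff lets the same GPU hold a much larger MD system.
```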