How can I use LAMMPS to run MD on a larger system? #475
stargolike asked this question in Q&A
I trained on a system of 200 atoms, with hidden irreps '64x0e + 64x1o'.
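For context, those irreps fix the per-atom feature width, which is one of the main drivers of activation memory. A minimal sketch of what that width works out to, assuming the e3nn package that MACE's irreps notation comes from:

```python
# Sketch: per-atom feature width implied by hidden irreps '64x0e + 64x1o'.
# Assumes the e3nn package (which MACE builds on for its irreps notation).
from e3nn import o3

hidden = o3.Irreps("64x0e + 64x1o")  # 64 even scalars + 64 odd (l=1) vectors
print(hidden.dim)                    # 64*1 + 64*3 = 256 features per atom
```

Halving the channel count (e.g. '32x0e + 32x1o') roughly halves this width and the corresponding per-atom memory.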
But when I run MD with LAMMPS, I can only handle a system of about 1,000 atoms; anything larger crashes with an out-of-memory error.
The GPU I am currently using is an RTX 8000, which is sufficient for training.
I would like to ask: what does this memory usage depend on, and how can I run MD on a larger system?
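My understanding is that for a message-passing potential like MACE, evaluation memory grows roughly with the number of atoms times the average neighbours per atom (set by r_max), times the feature width, times the number of interaction layers. One way to check whether the model itself fits at a given size, independent of LAMMPS, is a single-point evaluation through ASE. A minimal sketch, assuming the mace-torch package is installed (keyword names such as model_paths may differ between mace versions) and using hypothetical file names:

```python
# Sketch: single-point evaluation to probe GPU memory vs. system size.
# Assumptions: mace-torch and ASE installed; file names are hypothetical.
from ase.io import read
from mace.calculators import MACECalculator

atoms = read("large_system.xyz")        # hypothetical larger structure
calc = MACECalculator(
    model_paths="mace_model.model",     # hypothetical trained model file
    device="cuda",
    default_dtype="float32",            # float32 roughly halves memory vs. float64
)
atoms.calc = calc
print(atoms.get_potential_energy())     # OOM here points at the model, not LAMMPS
```

If this already runs out of memory, the usual levers seem to be a smaller r_max, narrower hidden irreps, float32 evaluation, or spreading the atoms over several GPUs via MPI domain decomposition in LAMMPS (one rank per GPU).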