For interfacing with an MD code we need to repeat the same set of steps for each code (see the sketch after this list):

- load/initialize the model
- convert the neighbour list so our model can be applied
- adapt the units of the MD code to the model
- apply the model to compute energies/forces/stress, optionally a local (per-atom) version
- adapt the units of the model back to the MD code
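As a rough illustration of these steps, here is a minimal C++ sketch of what such a per-code glue layer could look like. Everything in it (the `ModelWrapper` class, `convert_neighbour_list`, the unit-conversion factors) is hypothetical and only meant to show the flow, not any actual API.

```cpp
// Hypothetical sketch of the per-MD-code glue layer described above.
// None of these types come from a real library; they only illustrate the flow.
#include <string>
#include <vector>

struct MDNeighbourList { /* neighbour list in the MD code's native layout */ };
struct ModelNeighbourList { /* neighbour list in the layout our model expects */ };

class ModelWrapper {
public:
    // 1. load/initialize the model (e.g. from an exported file)
    explicit ModelWrapper(const std::string& path) { /* load weights, cutoff, ... */ }

    // 2. convert the MD code's neighbour list into our format
    ModelNeighbourList convert_neighbour_list(const MDNeighbourList&) const { return {}; }

    // 3.-5. adapt units, run the model, convert the results back
    void compute(const std::vector<double>& positions_md_units,
                 const MDNeighbourList& nl,
                 std::vector<double>& forces_md_units,
                 double& energy_md_units) const {
        const double length_to_model = 1.0;  // MD length unit -> model length unit (assumed 1 here)
        const double energy_to_md    = 1.0;  // model energy unit -> MD energy unit (assumed 1 here)

        // 3. units MD -> model
        std::vector<double> positions_model(positions_md_units);
        for (auto& x : positions_model) x *= length_to_model;

        // 2. neighbour list conversion
        ModelNeighbourList model_nl = convert_neighbour_list(nl);

        // 4. apply the model (energies/forces/stress, optionally per-atom)
        double energy_model = 0.0;
        std::vector<double> forces_model(positions_model.size(), 0.0);
        // ... evaluate the model on positions_model and model_nl here ...

        // 5. units model -> MD
        energy_md_units = energy_model * energy_to_md;
        forces_md_units = forces_model;
        for (auto& f : forces_md_units) f *= energy_to_md / length_to_model;
    }
};
```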
Because the same thing has to be done for every MD code, there are projects like the OpenKIM API that offer implemented interfaces to several MD codes. So we implement our models against their API once and then automatically get access to several MD codes (ASE, LAMMPS, GULP, DL_POLY). The workflow will probably look like this: write your model and train it in Python, then export it; in our OpenKIM interface we load the exported model and continue with the steps above. The API implementation has to be done in C or C++, which requires different implementations for torch and non-torch models.
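For the torch case, loading the exported model from C++ is straightforward with libtorch. The sketch below is not the actual KIM API: only `torch::jit::load` and `Module::forward` are real libtorch calls, while the function names and the argument layout (positions only, no species/neighbour list/cell) are assumptions for illustration.

```cpp
// Minimal sketch: loading an exported TorchScript model from C++ inside a
// hypothetical model-driver compute routine. Only torch::jit::load and
// Module::forward are real libtorch calls; everything else is assumed.
#include <torch/script.h>
#include <string>
#include <vector>

torch::jit::script::Module load_exported_model(const std::string& path) {
    // The model was written and trained in Python, then exported with
    // torch.jit.script(...).save(path).
    return torch::jit::load(path);
}

double compute_energy(torch::jit::script::Module& model,
                      const std::vector<double>& positions_flat,
                      int64_t n_atoms) {
    // Pack positions into an (n_atoms, 3) tensor; a real driver would also
    // pass species, the neighbour list, the cell, etc.
    torch::Tensor positions = torch::from_blob(
        const_cast<double*>(positions_flat.data()),
        {n_atoms, 3}, torch::kFloat64).clone();

    torch::Tensor energy = model.forward({positions}).toTensor();
    return energy.item<double>();
}
```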
Since one key point of our library is that we support models for a variety of representations, I wonder how a short+long range model like SOAP+LODE would be interfaced (for any MD code). I had the feeling when going through the OpenKIM header files that they only support short-range models. I would also like to research how message passing in graph neural networks affects the interfaces: either you include some additional MPI communication or you need to increase the cutoff. I did not have the feeling that OpenKIM supports this.
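To make the cutoff point concrete, here is a back-of-the-envelope illustration: with message passing, information propagates one cutoff radius per layer, so without extra communication the MD code would have to build neighbour lists (and ghost-atom regions) out to `n_layers * cutoff`. The numeric cutoff value is an assumption for illustration only.

```cpp
// Effective interaction range of a message-passing model: one cutoff radius
// per message-passing layer, so the neighbour list / ghost region must cover
// n_layers * cutoff unless additional communication is added.
#include <cstdio>

int main() {
    const double model_cutoff = 5.0;  // Angstrom, assumed value for illustration
    for (int n_layers = 1; n_layers <= 4; ++n_layers) {
        const double effective_range = n_layers * model_cutoff;
        std::printf("layers = %d -> effective interaction range = %.1f A\n",
                    n_layers, effective_range);
    }
    return 0;
}
```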
It seems like something we want to support in the long term for people who use OpenKIM in their workflow, but we don't really benefit from it in any way at the moment.