Modify ASE NEB to make it parallel via workflow engines #1576
-
@tomdemeyere: In principle, the pattern you're asking about is definitely doable. What you are describing is a dynamic workflow, where the DAG is not known until runtime. This is very similar to the slab recipes.
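As a sketch of what "dynamic workflow" means here — the fan-out width is only known at runtime — consider the following minimal, engine-agnostic example. The names `make_structures`, `relax`, and `dynamic_flow` are hypothetical, and `ThreadPoolExecutor` merely stands in for whatever workflow engine quacc is configured with:

```python
from concurrent.futures import ThreadPoolExecutor

def make_structures(n):
    # The number of tasks (e.g. slabs or NEB images) is only
    # known here, at runtime -- so the DAG cannot be drawn up front.
    return list(range(n))

def relax(structure):
    # Hypothetical per-structure calculation; in quacc this would
    # be a decorated job, not a plain function.
    return structure * 2

def dynamic_flow(n):
    structures = make_structures(n)  # DAG width decided at runtime
    with ThreadPoolExecutor() as pool:
        # Dispatch one task per structure and gather the results.
        return list(pool.map(relax, structures))
```

The key point is that the fan-out happens inside the flow body, after `make_structures` has run, rather than being fixed when the workflow is defined.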
-
Giving this more thought (this would be very nice): for basic NEB and DynNEB, I was able to verify that all `Atoms.get_forces()` and `Atoms.get_potential_energy()` calls come from `ase.mep.neb.BaseNEB.get_forces()` (I placed assert statements in `Atoms.get_...()` using the `inspect` module to check the caller). I didn't try AutoNEB for now, as I don't know how to tame the beast.

The interesting bit in the `NEB.get_forces()` function:

```python
if not self.parallel:
    # Do all images - one at a time:
    for i in range(1, self.nimages - 1):
        forces[i - 1] = images[i].get_forces()
        energies[i] = images[i].get_potential_energy()
```

Can this be changed into calling a subflow that would dispatch all calculations in parallel? But later these values are assigned to attributes:

```python
self.energies = energies
self.real_forces = np.zeros((self.nimages, self.natoms, 3))
self.real_forces[1:-1] = forces
```

This means we need to resolve the futures before those assignments happen. I am not sure what the concurrency-friendly way of doing that would be. I can imagine a function somewhere in quacc that forces all workflow engines to stop and resolve, although that is probably a terrible concept. If we can do this, it's pretty much done: we would just need to subclass the particular class where `get_forces()` belongs.
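To make the resolution point concrete, here is a minimal sketch of what a parallel replacement for that loop could look like. `concurrent.futures` stands in for the workflow engine, plain dicts stand in for ASE `Atoms` objects, and `_single_point` / `get_forces_parallel` are hypothetical names, not quacc or ASE API:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def _single_point(image):
    # Hypothetical per-image task: in ASE this would call
    # image.get_forces() and image.get_potential_energy().
    return image["forces"], image["energy"]

def get_forces_parallel(images, nimages, natoms):
    # Dispatch all interior images at once instead of one at a time.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(_single_point, images[i])
                   for i in range(1, nimages - 1)]
        # Resolve every future *before* the attribute assignments,
        # mirroring the ordering of the serial code above.
        results = [f.result() for f in futures]

    energies = np.zeros(nimages)
    for i, (_, energy) in enumerate(results, start=1):
        energies[i] = energy
    real_forces = np.zeros((nimages, natoms, 3))
    real_forces[1:-1] = np.array([forces for forces, _ in results])
    return energies, real_forces
```

The explicit `f.result()` gather is the "stop and resolve" step: everything downstream of it sees concrete arrays, so the rest of `get_forces()` would not need to change.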
-
Not all calculators are compatible with the parallel version of ASE's NEB. Similarly, it can be complicated to cleanly use mpi4py in some HPC environments. It might be possible to make NEB inherently parallel using a workflow engine instead.
At some point, the NEB code loops serially over the interior images, calling `get_forces()` and `get_potential_energy()` on each one at a time. If we could call a function with a `@job` decorator inside that for loop, without having to change the whole thing, that would do the trick. However, I understand that this is far from trivial, since that is not how workflows work. In my understanding, it would require manually resolving the futures after the for loop, since the rest of the code depends on these results. Writing the idea down here, even though it unfortunately might not even be possible.
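A toy version of the `@job`-in-a-loop idea: the `job` decorator below is a hypothetical stand-in that returns `concurrent.futures` futures (real workflow engines behave differently under the hood, but the dispatch-then-resolve shape is the same):

```python
from concurrent.futures import Future, ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

def job(fn):
    # Toy stand-in for a workflow engine's @job decorator:
    # calling the function returns a Future instead of a value.
    def wrapper(*args, **kwargs) -> Future:
        return _pool.submit(fn, *args, **kwargs)
    return wrapper

@job
def single_point(i):
    # Hypothetical per-image calculation.
    return i + 1

# Dispatch inside the for loop, exactly as in the serial code...
futures = [single_point(i) for i in range(1, 4)]
# ...then manually resolve the futures before anything downstream
# reads the results.
results = [f.result() for f in futures]
```

The awkward part is that second step: the surrounding NEB code would have to know it is holding futures rather than numbers, which is exactly why this is not a drop-in change.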