Is your feature request related to a problem? Please describe.
If I run more than a few dozen NWChem simulations concurrently, the memory usage of my workflow engine becomes unmanageably large. Writing the stdout to disk instead of caching it in RAM might save me a lot of memory and reduce my problems.
Writing to disk would also let me 'tail' the results of the simulation while it's running (though that might result in me wasting time watching it run 🙃)
Describe the solution you'd like
Option to write stdout to disk instead of storing it in memory, at least for NWChem.
Describe alternatives you've considered
Cutting memory usage in my workflow engine in other places (e.g., using threads instead of processes for each Python worker)
Additional context
I only see this when running >60 NWChem tasks in a single HPC job.