If someone has a more sophisticated approach to writing output files to /local/scratch and transferring them to persistent project storage on Eagle or Grand, beyond a simple cp/mv/rsync at the end of the PBS job script, that would be a great addition to https://docs.alcf.anl.gov/polaris/queueing-and-running-jobs/example-job-scripts or somewhere else. A sketch of the simple baseline is below.
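For reference, here is a minimal sketch of the baseline approach being described: write to the node-local SSD during the run and copy back at the end of the PBS script. The queue name, project name, `my_app` executable, and the `/eagle/MYPROJECT` destination path are all placeholders, and whether users can create subdirectories under /local/scratch is assumed, not confirmed.

```bash
#!/bin/bash -l
#PBS -l select=1:system=polaris
#PBS -l walltime=00:30:00
#PBS -q debug            # placeholder queue
#PBS -A MYPROJECT        # placeholder project allocation

# Node-local SSD scratch (per-node, not persistent across jobs);
# creating a per-job subdirectory here is an assumption.
LOCAL_SCRATCH=/local/scratch/$PBS_JOBID
# Persistent project storage; /eagle/MYPROJECT is an assumed mount point.
DEST=/eagle/MYPROJECT/$USER/runs/$PBS_JOBID

mkdir -p "$LOCAL_SCRATCH" "$DEST"
cd "$LOCAL_SCRATCH"

# Run the application, directing its output to the node-local SSD.
mpiexec -n 4 --ppn 4 "$PBS_O_WORKDIR"/my_app --output-dir "$LOCAL_SCRATCH"

# Stage results back to persistent storage before the job ends.
rsync -a "$LOCAL_SCRATCH"/ "$DEST"/
```

Note that the batch script itself runs only on the first node of the allocation, so a plain rsync like this only captures that node's SSD; for multi-node jobs the copy would need to be launched on every node (e.g. via mpiexec with one rank per node), which is part of what a more sophisticated recipe would have to cover.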
ALCF doesn't have any software to do this. There are some ECP projects that could potentially help, but I'm not sure whether any of them have been tested on Polaris.
Even absent special software, we should add example use cases and job scripts.
NERSC doesn't seem to have such documentation either, since their node-local SSDs are limited to a few large-memory nodes on Perlmutter and Cori.
Obviously they have the all-flash Perlmutter Lustre $SCRATCH, which we don't have, and the old/discouraged https://docs.nersc.gov/filesystems/cori-burst-buffer/, which isn't quite analogous since it isn't node-local and also supported MPI-IO.
Do we know of any users on Polaris who currently make use of the SSDs? I asked around, and only found folks with ThetaKNL experience.