Singularity on Archer2
This page assumes that you already have a Singularity container with Firedrake, called `firedrake.sif`, and that you now want to run it on Archer2. See the Singularity page for how to build a Firedrake container.
These commands must be run before launching the container (either in a job script or from an interactive session). If the container fails with link errors, check whether the LD_LIBRARY_PATH or BIND variables need updating against https://docs.archer2.ac.uk/user-guide/containers/#running-parallel-mpi-jobs-using-singularity-containers (a quick check of what the container actually sees is sketched after the commands below).
```
module purge
module load load-epcc-module
module load PrgEnv-gnu
module swap cray-mpich cray-mpich-abi
module load cray-dsmml
module load cray-libsci
module load xpmem
module list

# Mark all directories as safe for git inside the container
# (avoids git's "dubious ownership" errors when the container user
# does not own the bound directories).
cat <<EOF >.gitconfig
[safe]
directory = *
EOF

# Set the LD_LIBRARY_PATH environment variable within the Singularity container
# to ensure that it uses the correct MPI libraries.
export SINGULARITYENV_LD_LIBRARY_PATH="/opt/cray/pe/mpich/8.1.23/ofi/gnu/9.1/lib-abi-mpich:/opt/cray/pe/mpich/8.1.23/gtl/lib:/opt/cray/libfabric/1.12.1.2.2.0.0/lib64:/opt/cray/pe/gcc-libs:/opt/cray/pe/gcc-libs:/opt/cray/pe/lib64:/opt/cray/pe/lib64:/opt/cray/xpmem/default/lib64:/usr/lib64/libibverbs:/usr/lib64:/usr/lib64"

# This makes sure HPE Cray Slingshot interconnect libraries are available
# from inside the container.
export SINGULARITY_BIND="/opt/cray,/var/spool,/opt/cray/pe/mpich/8.1.23/ofi/gnu/9.1/lib-abi-mpich:/opt/cray/pe/mpich/8.1.23/gtl/lib,/etc/host.conf,/etc/libibverbs.d/mlx5.driver,/etc/libnl/classid,/etc/resolv.conf,/opt/cray/libfabric/1.12.1.2.2.0.0/lib64/libfabric.so.1,/opt/cray/pe/gcc-libs/libatomic.so.1,/opt/cray/pe/gcc-libs/libgcc_s.so.1,/opt/cray/pe/gcc-libs/libgfortran.so.5,/opt/cray/pe/gcc-libs/libquadmath.so.0,/opt/cray/pe/lib64/libpals.so.0,/opt/cray/pe/lib64/libpmi2.so.0,/opt/cray/pe/lib64/libpmi.so.0,/opt/cray/xpmem/default/lib64/libxpmem.so.0,/run/munge/munge.socket.2,/usr/lib64/libibverbs/libmlx5-rdmav34.so,/usr/lib64/libibverbs.so.1,/usr/lib64/libkeyutils.so.1,/usr/lib64/liblnetconfig.so.4,/usr/lib64/liblustreapi.so,/usr/lib64/libmunge.so.2,/usr/lib64/libnl-3.so.200,/usr/lib64/libnl-genl-3.so.200,/usr/lib64/libnl-route-3.so.200,/usr/lib64/librdmacm.so.1,/usr/lib64/libyaml-0.so.2"

# Set environment variables inside the Singularity container for Firedrake et al.
# Don't multithread.
export SINGULARITYENV_OMP_NUM_THREADS=1

# Use the MPI compilers from the Firedrake container.
export SINGULARITYENV_PYOP2_CC=/home/firedrake/firedrake/bin/mpicc
export SINGULARITYENV_PYOP2_CXX=/home/firedrake/firedrake/bin/mpicxx

# Save caches locally so they persist across `singularity run` calls.
export SINGULARITYENV_PYOP2_CACHE_DIR=/home/firedrake/work/.cache/pyop2_cache
export SINGULARITYENV_FIREDRAKE_TSFC_KERNEL_CACHE_DIR=/home/firedrake/work/.cache/tsfc
```
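If you do hit link errors, a quick first check is to confirm that these variables are actually visible inside the container. The following is only a sketch (the paths are the ones exported above and may need updating as the Cray PE versions on Archer2 change):
```
# SINGULARITYENV_* and SINGULARITY_BIND are honoured by `exec` as well as `run`.
# The first command should echo the Cray library paths exported above;
# the second should list the host's MPICH ABI libraries bound into the container.
singularity exec firedrake.sif bash -c 'echo $LD_LIBRARY_PATH'
singularity exec firedrake.sif ls /opt/cray/pe/mpich/8.1.23/ofi/gnu/9.1/lib-abi-mpich
```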
PYOP2_CACHE_DIR and FIREDRAKE_TSFC_KERNEL_CACHE_DIR default to the $HOME directory on the host, which is automatically mounted inside the Singularity container. This is an issue on Archer2, since $HOME is not mounted on the compute nodes. Instead we create the caches in the directory that the container is run from, assuming that `--bind $PWD:/home/firedrake/work` was passed to `singularity run`. Saving the caches here also means that they persist between container invocations, so we don't end up recompiling all the kernels every time the container runs.
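Because of that bind, the container paths above correspond to `$PWD/.cache` in the host run directory. A small illustrative sketch for inspecting or resetting the caches between runs (nothing here is required; the directories are created on demand):
```
# /home/firedrake/work inside the container is $PWD on the host, so the
# kernel caches live under $PWD/.cache in the run directory.
du -sh .cache/pyop2_cache .cache/tsfc   # see how large the caches have grown
# rm -rf .cache                         # force all kernels to recompile next run
```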
Run the reduced set of tests from the documentation page in an interactive session.
- Launch a single-core interactive job (we currently can't run the parallel tests on Archer2 because cray-mpich won't allow nested calls to MPI_Init). The Archer2 documentation shows two methods for launching interactive jobs; use the first method, using `salloc` (https://docs.archer2.ac.uk/user-guide/scheduler/#using-salloc-to-reserve-resources):
```
salloc --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --exclusive \
    --time=00:20:00 --partition=standard --qos=short --account=<account number>
```
NB: As of May 2023, when trying to run a container from an interactive job launched using the second method, `srun` directly (https://docs.archer2.ac.uk/user-guide/scheduler/#using-srun-directly), SLURM was failing to reallocate the resources from the interactive job to the `srun <options> singularity <options>` call, so nothing was actually run.
- Run the commands above to set up the modules and environment ready for the Singularity container.
- Check that firedrake is available inside the container:
```
srun --oversubscribe --hint=nomultithread --distribution=block:block --ntasks=1 \
    singularity run --bind $PWD:/home/firedrake/work --home $PWD firedrake.sif \
    /home/firedrake/firedrake/bin/python \
    -c "from firedrake import *"
```
- Run the smoke-tests:
```
srun --oversubscribe --hint=nomultithread --distribution=block:block --ntasks=1 \
    singularity run --bind $PWD:/home/firedrake/work --home $PWD firedrake.sif \
    /home/firedrake/firedrake/bin/python \
    -m pytest /home/firedrake/firedrake/src/firedrake/tests/regression/ -v \
    -o cache_dir=/home/firedrake/work/.cache \
    -k "not parallel and (poisson_strong or stokes_mini or dg_advection)"
```
- Hope everything has worked.
Run Firedrake in parallel from an interactive job.
- Launch an interactive job (enter the required `nodes` and `ntasks-per-node` arguments):
```
salloc --nodes=${nnodes} --ntasks-per-node=${ntasks} --cpus-per-task=1 --exclusive \
    --time=00:20:00 --partition=standard --qos=short --account=<account number>
```
- Run the commands above to set up the modules and environment ready for the Singularity container.
- Navigate to the directory where you want to run.
- Run your script (`$SIFDIR` is an environment variable for the directory that contains the Singularity image):
```
srun --oversubscribe --hint=nomultithread --distribution=block:block \
    --nodes=${nnodes} --ntasks-per-node=${ntasks} \
    singularity run --bind $PWD:/home/firedrake/work --home $PWD $SIFDIR/firedrake.sif \
    /home/firedrake/firedrake/bin/python \
    myscript.py --my_args
```
- Hope everything has worked.
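The same steps can also be combined into a non-interactive batch script. The sketch below is illustrative only: the resource numbers, QOS, run directory and `SIFDIR` value are placeholders to adapt, and the module loads and `SINGULARITYENV_*`/`SINGULARITY_BIND` exports are exactly those from the top of this page (elided here for brevity).
```
#!/bin/bash
#SBATCH --job-name=firedrake-singularity
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --time=00:20:00
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --account=<account number>

# Module loads and SINGULARITYENV_*/SINGULARITY_BIND exports from the top of
# this page go here (or keep them in a separate file and source it).

# Placeholder locations: adapt to your own work directory and image location.
SIFDIR=/work/<project>/<project>/<username>/containers
cd /work/<project>/<project>/<username>/my-run-directory

# srun inherits --nodes and --ntasks-per-node from the SBATCH directives above.
srun --oversubscribe --hint=nomultithread --distribution=block:block \
    singularity run --bind $PWD:/home/firedrake/work --home $PWD $SIFDIR/firedrake.sif \
    /home/firedrake/firedrake/bin/python \
    myscript.py --my_args
```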