
The University at Buffalo Center for Computational Research (CCR) is UB's supercomputing center. See the CCR documentation for links to help you get started at CCR.

The directions assume you are using the new software/module environment.

Build the latest compatible version of PETSc.

CCR uses EasyBuild for building software. The ABLATE project publishes an ABLATE-specific EasyBuild recipe whenever it is updated.

  1. ssh onto CCR and request a compile node
ssh [email protected]
ssh compile
  2. Download the EasyBuild file specific to CCR and the toolchain from https://ablate.dev/content/installation/ConfigFile.html to CCR.

  3. Build PETSc using the EasyBuild file. This may take some time.

module load easybuild
# Replace PETSC-xxx-yyy.eb with the name of the latest PETSc EasyBuild file
eb PETSC-xxx-yyy.eb --ignore-checksums -f
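
Once the build finishes, the new module should be visible. A quick check (a minimal sketch; the exact module name and version depend on the recipe you built):

# Confirm the new PETSc module is available (name/version will vary)
module avail petsc

# Load it and confirm the variables the module sets
module load petsc
echo $PETSC_DIR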

Building a release version of ABLATE using EasyBuild. This method can be used to build the latest version of ABLATE. Note that this workflow is not complete yet.

  1. ssh onto CCR and request a compile node if not already connected
ssh [email protected]
ssh compile
  2. Download the ABLATE EasyBuild file specific to CCR and the toolchain from https://ablate.dev/content/installation/ConfigFile.html to CCR.

  3. Build ABLATE using the EasyBuild file. This may take some time.

module load easybuild
# Replace ABLATE-xxx-yyy.eb with the name of the latest ABLATE EasyBuild file
eb ABLATE-xxx-yyy.eb --ignore-checksums -f
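
If the build succeeds, the resulting module should be loadable the same way. This is a sketch assuming the recipe installs the ablate executable on your PATH and that it supports the --version flag; the module name and version depend on the recipe:

# Confirm the new ABLATE module is available (name/version will vary)
module avail ABLATE

# Load it; the ablate executable should then be on your PATH
module load ABLATE
ablate --version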

Building ABLATE from source. This method can be used to build ABLATE from custom source files.

  1. ssh onto CCR and request a compile node if not already connected
ssh [email protected]
ssh compile
  2. Load the PETSc module
# Load the correct toolchain, foss/2021b is shown as an example
module load foss/2021b
# Load the latest petsc
module load petsc 
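A quick check that the petsc module set up the variables the steps below rely on (both PETSC_DIR and PETSC_ARCH are used later on this page):

# Both should print non-empty paths set by the petsc module
echo "PETSC_DIR=$PETSC_DIR"
echo "PETSC_ARCH=$PETSC_ARCH"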
  3. Clone ABLATE
# CD into the CHREST project space.
cd /projects/academic/chrest/USERNAME

git clone https://github.com/USERNAME/ablate.git
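
If you want to build a specific branch or tag instead of the default branch, check it out after cloning. The branch name below is only a placeholder:

# Check out the branch or tag you want to build (my-feature-branch is a placeholder)
cd ablate
git checkout my-feature-branch
cd ..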
  4. Configure and build ABLATE
# Create and move into an ablateOpt build directory
mkdir /projects/academic/chrest/USERNAME/ablateOpt
cd /projects/academic/chrest/USERNAME/ablateOpt

# Export these environment variables every time ablate is built
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
export CXX_ADDITIONAL_FLAGS="-L$EBROOTGCCCORE/lib64 -Wl,-rpath -Wl,$EBROOTGCCCORE/lib64 -Wl,-rpath -Wl,$EBROOTFLEXIBLAS/lib"

# configure ablate
cmake -DCMAKE_BUILD_TYPE=Release -B . -S ../ablate 

# build ablate
make -j 8
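
Before submitting any jobs, a quick sanity check from the compile node confirms the executable runs and linked correctly. This assumes your ABLATE version supports the --version flag:

# The ablate executable is placed in the build directory (see the run script below)
./ablate --version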
  5. Script to run all framework tests (ablateTest.sbatch)
#!/bin/sh
#SBATCH --partition=general-compute --qos=general-compute
#SBATCH --time=00:15:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
##SBATCH --constraint=IB
#SBATCH --mem=3000
#SBATCH --job-name="ablate_framework_test"
#SBATCH --output=ablate_framework_test-srun.out
#SBATCH [email protected]
#SBATCH --mail-type=ALL

# Print the current environment
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR

echo "working directory = "$SLURM_SUBMIT_DIR

# Load the correct toolchain, foss/2021b is shown as an example
module load foss/2021b
# Load the latest petsc
module load petsc 

# setup petsc
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"  

# The initial srun will trigger the SLURM prologue on the compute nodes.
NPROCS=`srun --nodes=${SLURM_NNODES} bash -c 'hostname' |wc -l`
echo NPROCS=$NPROCS

# Tell the tests what mpi command to use
export TEST_MPI_COMMAND=srun

# change to your build directory, either debug or release
cd /projects/academic/chrest/USERNAME/ablateOpt
echo "current directory ="$PWD

# Run all tests
ctest
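
Submit the script with standard SLURM commands; the job's output will land in the file named by the --output directive above:

sbatch ablateTest.sbatch

# Check the job's status in the queue
squeue -u $USER

# Once it completes, review the results
cat ablate_framework_test-srun.out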
  6. Example script to run ABLATE with an input file
#!/bin/bash
#SBATCH --partition=debug --qos=debug
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --job-name="exampleAblate"
#SBATCH [email protected]
#SBATCH --mail-type=END

# Print the current environment
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR

echo "working directory = "$SLURM_SUBMIT_DIR

# Load the required modules
# Load the correct toolchain, foss/2021b is shown as an example
module load foss/2021b
# Load the latest petsc
module load petsc

# The initial srun will trigger the SLURM prologue on the compute nodes.
NPROCS=`srun --nodes=${SLURM_NNODES} bash -c 'hostname' |wc -l`
echo NPROCS=$NPROCS

# setup petsc
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"  

# Make a temp directory so that tchem has a place to write its files
mkdir tmp_$SLURM_JOBID
cd tmp_$SLURM_JOBID

# Run ABLATE with Input
echo "Start Time " `date +%s`
srun -n $NPROCS  /projects/academic/chrest/USERNAME/ablateOpt/ablate --input /panasas/scratch/grp-chrest/example.yaml -yaml::environment::title example_$SLURM_JOBID

echo "End Time " `date +%s`