Build ~ UB CCR
The University at Buffalo Center for Computational Research (CCR) is UB's Supercomputing center. The following are useful links to get started at CCR:
- CCR Getting Started
- CCR Knowledge Base
- CCR OnDemand - an integrated, single access point for all of your HPC resources
- CCR Coldfront - resource allocation management tool built by CCR
The directions assume you are using the new software/module environment.
CCR uses EasyBuild to build software. ABLATE publishes an ABLATE-specific EasyBuild recipe whenever it is updated.
- ssh onto CCR and request a compile node
ssh [email protected]
ssh compile
- Download the EasyBuild file specific to CCR and the toolchain from ablate.dev/content/installation/ConfigFile.html to CCR.
- Build PETSc using the EasyBuild file. This may take some time.
module load easybuild
# Update the PETSc EasyBuild file name to match the latest recipe
eb PETSC-xxx-yyy.eb --ignore-checksums -f
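Once the build finishes, you can confirm that the new module is visible to Lmod (a quick optional check; the exact module name and version depend on the recipe you built):
# Search for the freshly built PETSc module
module spider petsc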
Building the Release Version of ABLATE using EasyBuild
This method can be used to build the latest release of ABLATE. This method is not complete yet.
- ssh onto CCR and request a compile node if not already connected
ssh [email protected]
ssh compile
- Download the ABLATE EasyBuild file specific to CCR and the toolchain from ablate.dev/content/installation/ConfigFile.html to CCR.
- Build ABLATE using the EasyBuild file. This may take some time.
module load easybuild
# Update the ABLATE EasyBuild file name to match the latest recipe
eb ABLATE-xxx-yyy.eb --ignore-checksums -f
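As with PETSc, you can verify that the ABLATE module was registered once the build completes (the module name shown here is an assumption and may differ depending on the recipe):
# Confirm the ABLATE module is now available to Lmod
module spider ablate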
Building ABLATE from Source
- ssh onto CCR and request a compile node if not already connected
ssh [email protected]
ssh compile
- Load the PETSc Module
# Load the correct toolchain, foss/2021b is shown as an example
module load foss/2021b
# Load the latest petsc
module load petsc
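The exports used later rely on PETSC_DIR and PETSC_ARCH being set by the PETSc module, so a quick sanity check after loading can save a failed configure (optional):
# Verify the PETSc module exported the paths used by the cmake configure step
echo "PETSC_DIR=${PETSC_DIR} PETSC_ARCH=${PETSC_ARCH}"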
- Clone ABLATE
# CD into the CHREST project space.
cd /projects/academic/chrest/USERNAME
git clone https://github.com/USERNAME/ablate.git
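If you cloned from a personal fork, it can also help to track the main ABLATE repository as an upstream remote so you can pull in updates later (an optional setup; UBCHREST/ablate is the primary repository):
cd ablate
# Track the main ABLATE repository so upstream changes can be fetched into your fork
git remote add upstream https://github.com/UBCHREST/ablate.git
git fetch upstream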
- Configure and Build ABLATE
# Create and move into an ablateOpt build directory
mkdir /projects/academic/chrest/USERNAME/ablateOpt
cd /projects/academic/chrest/USERNAME/ablateOpt
# Export these environment variables every time ablate is built
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
export CXX_ADDITIONAL_FLAGS="-L$EBROOTGCCCORE/lib64 -Wl,-rpath -Wl,$EBROOTGCCCORE/lib64 -Wl,-rpath -Wl,$EBROOTFLEXIBLAS/lib"
# configure ablate
cmake -DCMAKE_BUILD_TYPE=Release -B . -S ../ablate
# build ablate
make -j 8
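Before submitting the Slurm scripts below, you can confirm that the test suite was configured by listing the registered tests from the build directory (ctest -N lists tests without running them):
# List the tests registered with CTest without executing them
ctest -N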
- Script to run all framework tests (ablateTest.sbatch)
#!/bin/sh
#SBATCH --partition=general-compute --qos=general-compute
#SBATCH --time=00:15:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
##SBATCH --constraint=IB
#SBATCH --mem=3000
#SBATCH --job-name="ablate_framework_test"
#SBATCH --output=ablate_framework_test-srun.out
#SBATCH [email protected]
#SBATCH --mail-type=ALL
# Print the current environment
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
echo "working directory = "$SLURM_SUBMIT_DIR
# Load the correct toolchain, foss/2021b is shown as an example
module load foss/2021b
# Load the latest petsc
module load petsc
# setup petsc
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
# The initial srun will trigger the SLURM prologue on the compute nodes.
NPROCS=`srun --nodes=${SLURM_NNODES} bash -c 'hostname' |wc -l`
echo NPROCS=$NPROCS
# Tell the tests what mpi command to use
export TEST_MPI_COMMAND=srun
# change to your build directory, either debug or release
cd /projects/academic/chrest/USERNAME/ablateOpt
echo "current directory ="$PWD
# Run all tests
ctest
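Submit the test script from the build directory and monitor the job (assuming the script above is saved as ablateTest.sbatch; update the email address and USERNAME paths first):
sbatch ablateTest.sbatch
# Check the state of your queued and running jobs
squeue -u $USER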
- Example Script to run ABLATE with input
#!/bin/bash
#SBATCH --partition=debug --qos=debug
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --job-name="exampleAblate"
#SBATCH [email protected]
#SBATCH --mail-type=END
# Print the current environment
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
echo "working directory = "$SLURM_SUBMIT_DIR
# Load the required modules
# Load the correct toolchain, foss/2021b is shown as an example
module load foss/2021b
# Load the latest petsc
module load petsc
# The initial srun will trigger the SLURM prologue on the compute nodes.
NPROCS=`srun --nodes=${SLURM_NNODES} bash -c 'hostname' |wc -l`
echo NPROCS=$NPROCS
# setup petsc
export PKG_CONFIG_PATH="${PETSC_DIR}/${PETSC_ARCH}/lib/pkgconfig:$PKG_CONFIG_PATH"
export HDF5_ROOT="${PETSC_DIR}/${PETSC_ARCH}"
# Make a temp directory so that TChem has a place to write its files
mkdir tmp_$SLURM_JOBID
cd tmp_$SLURM_JOBID
# Run ABLATE with Input
echo "Start Time " `date +%s`
srun -n $NPROCS /projects/academic/chrest/USERNAME/ablateOpt/ablate --input /panasas/scratch/grp-chrest/example.yaml -yaml::environment::title example_$SLURM_JOBID
echo "End Time " `date +%s`