Matt McGurn edited this page Mar 9, 2022 · 22 revisions

The University at Buffalo Center for Computational Research (CCR) is UB's supercomputing center. The following notes will help you get started with CHREST software at CCR.

Notes:

  1. For the most recent development issues, notes, and workarounds for building PETSc or ABLATE on CCR check the ABLATE repo wiki.

  2. Loading pre-built CHREST modules may also load any additional modules necessary to use the given software. This may change any preloaded modules you may have.

  3. You must have read access to the CHREST project directory to use the CHREST pre-built modules. If you do not have access to the project directory, you will need to either use the libraries generally available on CCR or build the libraries yourself and install them in your home directory.

  4. To make the CHREST pre-built libraries available at every login, add

  module use /projects/academic/chrest/modules

to your .bash_profile. Alternatively, run the same command in a terminal to make the modules available temporarily in that session.
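For example, the line can be appended to .bash_profile from the command line; the grep guard below is an extra safeguard (not part of the original instructions) that keeps the line from being duplicated if it is already present:

```shell
# Add the CHREST module path to ~/.bash_profile exactly once.
MODULE_LINE='module use /projects/academic/chrest/modules'
grep -qxF "$MODULE_LINE" ~/.bash_profile 2>/dev/null || echo "$MODULE_LINE" >> ~/.bash_profile
```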

  5. The default for all CHREST modules is the debug version.

  6. Most pre-built CHREST modules are versioned as version_day-month-year_GitCommit, which encodes the version of the software, the date it was compiled, and the (short) Git commit hash of the compiled source.
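As a sketch of that naming scheme, a hypothetical version string can be split on underscores into its three parts:

```shell
# Hypothetical example string in the version_day-month-year_GitCommit format.
v='3.16.4_09-03-2022_ab12cd3'
IFS=_ read -r version build_date commit <<EOF
$v
EOF
echo "version=$version date=$build_date commit=$commit"
# -> version=3.16.4 date=09-03-2022 commit=ab12cd3
```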

Available CHREST modules

The following are available CHREST modules:

  1. PETSc

  2. Ablate

  3. petscXdmfGenerator

Loading all CHREST modules

Loading all available CHREST modules can be done via

  module load chrest/release

for the release version or

  module load chrest/debug

for the debug version.

Loading pre-built ABLATE

To load a pre-built version of ABLATE, enter in the terminal

  module load ablate/release

for the release version or enter

  module load ablate/debug

for the debug version. These will load all required modules for ABLATE, set the ABLATE_DIR environment variable, and add ABLATE_DIR to your PATH.

All available versions of ABLATE can be seen by using

  module spider ablate

in the terminal window.

Loading pre-built PETSc via CHREST modules

To compile ABLATE against pre-built PETSc modules enter in the terminal

  module load petsc-chrest/release

for the release version of PETSc or enter

  module load petsc-chrest/debug

for the debug version of PETSc. These will load all required modules for PETSc and add the PETSC_DIR and PETSC_ARCH environment variables.
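To confirm a petsc-chrest module loaded correctly, it is worth echoing the two variables it is supposed to set (the fallback text below is just a placeholder marker):

```shell
# Print the PETSc environment set by the module, or a marker if unset.
echo "PETSC_DIR=${PETSC_DIR:-<not set>}"
echo "PETSC_ARCH=${PETSC_ARCH:-<not set>}"
```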

All available versions of PETSc pre-compiled for use with ABLATE can be seen by using

  module spider petsc-chrest

in the terminal window.

Building ABLATE on CCR

  1. Clone and checkout the desired branch

      git clone [email protected]:UBCHREST/ablate.git

    or

      git clone https://github.com/UBCHREST/ablate.git
  2. Load the required PETSc module (see above)

  3. Create debug and release build directories

      mkdir ablate-debug
      mkdir ablate-release
  4. Configure and build

      ## debug mode
      cmake -DCMAKE_BUILD_TYPE=Debug -B ablate-debug -S ablate
      make -C ablate-debug
    
      ## release
      cmake -DCMAKE_BUILD_TYPE=Release -B ablate-release -S ablate
      make -C ablate-release
    

Submitting Jobs to CCR

CCR uses SLURM for scheduling, so jobs are described and submitted with SLURM job scripts. Details about SLURM commands and submitting a SLURM job script are provided by CCR. The following example script runs all ABLATE tests. Save the script in either ablate-debug or ablate-release to test the corresponding build, then submit the job with the terminal command

  sbatch ablateTest.sh
Script to run all framework tests (ablateTest.sh)
  #!/bin/sh
  #SBATCH --partition=general-compute --qos=general-compute
  #SBATCH --time=00:15:00
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=2
  ##SBATCH --constraint=IB
  #SBATCH --mem=3000
  #SBATCH --job-name="ablate_framework_test"
  #SBATCH --output=ablate_framework_test-srun.out
  #SBATCH [email protected]
  #SBATCH --mail-type=ALL

  # Print the current environment
  echo "SLURM_JOBID=$SLURM_JOBID"
  echo "SLURM_JOB_NODELIST=$SLURM_JOB_NODELIST"
  echo "SLURM_NNODES=$SLURM_NNODES"
  echo "SLURMTMPDIR=$SLURMTMPDIR"

  echo "working directory = $SLURM_SUBMIT_DIR"

  # Load the required PETSc module. Change to a specific version if necessary
  module load petsc-chrest/debug

  # The initial srun will trigger the SLURM prologue on the compute nodes.
  NPROCS=`srun --nodes=${SLURM_NNODES} bash -c 'hostname' |wc -l`
  echo NPROCS=$NPROCS

  # The PMI library is necessary for srun
  export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi.so

  # Tell the tests what mpi command to use
  export TEST_MPI_COMMAND=srun

  # change to the submission directory (your build directory, either ablate-debug or ablate-release)
  cd $SLURM_SUBMIT_DIR
  echo "current directory ="$PWD

  # Run all tests
  ctest
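Once submitted, the job id printed by sbatch ("Submitted batch job N") can be captured and used to monitor or cancel the run; the awk field index below assumes that standard message format:

```shell
# Capture the job id from sbatch's output, then monitor the job.
jobid=$(sbatch ablateTest.sh | awk '{print $4}')
squeue -j "$jobid"        # show the job's state in the queue
# scancel "$jobid"        # uncomment to cancel the job
```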