
Using FDS on CSC (puhti.csc.fi)

These notes provide basic information about using FDS on puhti.csc.fi, one of the two supercomputers of CSC, Finland. It is assumed that you have an account and have logged in for the first time.

  1. Clean your modules

    module purge
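    You can verify that no modules remain loaded with, for example:

    $ module list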
    
  2. Clone the firemodels/fds repo to one of your /scratch/project_xxx folders as you would on any Linux cluster (a minimal sketch follows below). Use these notes, and follow GitHub's instructions for generating SSH keys.
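    A minimal sketch of the clone, assuming your SSH key is already registered with GitHub and project_xxx stands for your actual project number:

    $ cd /scratch/project_xxx
    $ mkdir -p firemodels && cd firemodels
    $ git clone git@github.com:firemodels/fds.git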

  3. Load the Intel compiler, MPI, and MKL modules:

    $ module load intel-oneapi-compilers
    $ module load intel-oneapi-mpi
    $ module load intel-oneapi-mkl
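    To check that the toolchain is now available, you can, for example, query the Intel MPI Fortran wrapper (assuming these modules provide the classic mpiifort wrapper):

    $ which mpiifort
    $ mpiifort --version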
    
  4. In the directory Build/impi_intel_linux, run the compile script:

    ./make_fds.sh
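    If the build succeeds, the executable appears in the same directory; a quick sanity check (the binary name below assumes the current impi_intel_linux target):

    $ ls -l fds_impi_intel_linux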
    
  5. Prepare a SLURM script like this in a folder under your /scratch/project_xxx (in this example, the name of the script is job_name_script.sh):

    #!/bin/bash
    #SBATCH --job-name=ductFlow
    #SBATCH --account=project_xxx
    #SBATCH --time=05:00:00
    #SBATCH --mem-per-cpu=1G
    #SBATCH --partition=small
    #SBATCH --ntasks=8
    #SBATCH --cpus-per-task=4
    #SBATCH --ntasks-per-node=8
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    srun /projappl/project_xxx/firemodels/fds/Build/impi_intel_linux/fds_impi_intel_linux duct_flow.fds
    
    

    This script starts 8 MPI tasks (one per mesh for an 8-mesh case) with four OpenMP threads per task. Puhti has 40 cores per node, so all tasks (i.e. 32 threads) fit into a single node. One GB of memory is reserved for each core, and five hours of wall time for the whole job.
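    As a sketch of how this layout scales, a hypothetical 16-mesh case with four threads per task would need two nodes (8 tasks, i.e. 32 cores, per node):

    #SBATCH --ntasks=16
    #SBATCH --cpus-per-task=4
    #SBATCH --ntasks-per-node=8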

    Add the following command to the end of job_name_script.sh to print out the requested and maximum used memory of the job:

    sacct -o reqmem,maxrss -j $SLURM_JOBID
    

    Another way to collect information about the job is the seff command, as shown below.
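    For example, after the job has finished (123456 stands for the job ID printed by sbatch):

    $ seff 123456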

  6. Run the job by submitting the script:

    $ sbatch job_name_script.sh
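    sbatch prints the ID of the submitted job. You can then follow the job in the queue and, if necessary, cancel it (123456 below stands for the printed job ID):

    $ squeue -u $USER
    $ scancel 123456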
    
  7. To run the FDS verification suite, there are two options:

    To run using the queuing system, go to fds/Verification/scripts and run

    $ module load intel-oneapi-mpi
    $ export SLURM_MEMPERCPU=4G
    $ export SBATCH_ACCOUNT='project_XXX'
    $ export USE_MAX_CORES=1
    $ ./Run_FDS_Cases.sh -q small -w 06:00:00
    

    These commands first set the memory requirement and billing project for the compute jobs, and then run both the normal and the benchmarking cases on the 'small' partition, assuming each job finishes within 6 h. To check how close to the 4 GB limit you got, you can list the memory usages between 3 and 4 GB since date YYYY-MM-DD:

    sacct -o maxrss -S YYYY-MM-DD --units=G | grep '3...G'
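    To inspect a single job instead, you can include its name in the query, for instance (the job name here is just an example):

    $ sacct -o jobname,reqmem,maxrss --units=G -S YYYY-MM-DD | grep duct_flow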
    

    Another option is to use an interactive shell:

    $ sinteractive --account project_XXX --time 48:00:00 --mem 16000 --tmp 100
    $ ./Run_FDS_Cases.sh -q none 
    

    Once the cases have run, create the Smokeview images by running

    $ ./Make_FDS_Pictures.sh -p ~/bin -X
    
  8. To generate plots, start an interactive shell as above, and run

    $ module load matlab
    $ matlab -nodesktop
    >> FDS_verification_script
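    FDS_verification_script is a MATLAB script, so it is run at the MATLAB prompt; it is assumed here that you start MATLAB in the directory that holds the script (fds/Utilities/Matlab in the firemodels/fds repo). A minimal non-interactive sketch under the same assumption:

    $ cd fds/Utilities/Matlab
    $ module load matlab
    $ matlab -nodisplay -r "FDS_verification_script; exit"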
    
  9. Alternatively to step 7, copy the results to your local workstation using

    • SFTP,
    • Win-SSHFS (version 1.6.0 was found to work), a program that mounts a remote directory as a local drive, e.g. your work $WRKDIR folder (host=puhti.csc.fi, Directory=/scratch/username). Using Win-SSHFS, you can read the FDS result files directly from your Windows PC, but it is quite slow, or
    • rsync (found at least in Cygwin):
    $ cd fds/Manuals
    $ rsync -rltvzu -e ssh username@puhti.csc.fi:~/work/DONOTREMOVE/Firemodels_Fork/fds/Manuals/ .
    $ cd ../Verification
    $ rsync -rltvzu --include '*/*.csv' --include '*/*.txt' --exclude '*/*.*' -e ssh username@puhti.csc.fi:/projappl/project_xxx/firemodels/fds/Verification/ .
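    To preview what rsync would transfer without copying anything, you can add the -n (dry-run) flag, for example:

    $ rsync -rltvzun --include '*/*.csv' --include '*/*.txt' --exclude '*/*.*' -e ssh username@puhti.csc.fi:/projappl/project_xxx/firemodels/fds/Verification/ .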
    