A basic "Hello world" example to output text to console from nodes over a network using MPI.
A cluster at IIIT has four SLURM nodes. We want to run one process on each
node, and run 32
threads using OpenMP. In future, such a setup would allow
us to run distributed algorithms that utilize each node's memory efficiently and
minimize communication cost (within the same node). Output is saved in gist.
Technical help from Semparithi Aravindan.
Note: You can just copy main.sh to your system and run it. For the code, refer to main.cxx.
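A hybrid MPI+OpenMP hello world that produces output like the log below might look roughly like this. This is only a minimal sketch, not the actual main.cxx; the MPI_THREAD_FUNNELED thread level and the exact printf formats are assumptions.

```c++
// Sketch: one MPI process per node, each spawning OpenMP threads.
#include <cstdio>
#include <cstdlib>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
  // Request an MPI threading level that tolerates OpenMP threads per process.
  int provided;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Name of the node this process is running on.
  char name[MPI_MAX_PROCESSOR_NAME];
  int len;
  MPI_Get_processor_name(name, &len);

  // OMP_NUM_THREADS is expected to be set by the batch script (e.g. to 32).
  const char *threads = getenv("OMP_NUM_THREADS");
  printf("P%02d: NAME=%s\n", rank, name);
  printf("P%02d: OMP_NUM_THREADS=%s\n", rank, threads ? threads : "(unset)");

  // Every OpenMP thread on every node prints its own hello.
  #pragma omp parallel
  {
    int t = omp_get_thread_num();
    printf("P%02d.T%02d: Hello MPI\n", rank, t);
  }

  MPI_Finalize();
  return 0;
}
```

Because the threads print concurrently, the "Pxx.Txx" lines appear in a nondeterministic order, as in the log below.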
$ scl enable gcc-toolset-11 bash
$ sbatch main.sh
# ==========================================
# SLURM_JOB_ID = 3373
# SLURM_NODELIST = node[01-04]
# SLURM_JOB_GPUS =
# ==========================================
# Cloning into 'hello-mpi'...
# [node01.local:2180262] MCW rank 0 is not bound (or bound to all available processors)
# [node02.local:3790641] MCW rank 1 is not bound (or bound to all available processors)
# [node04.local:3758212] MCW rank 3 is not bound (or bound to all available processors)
# [node03.local:3287974] MCW rank 2 is not bound (or bound to all available processors)
# P00: NAME=node01.local
# P00: OMP_NUM_THREADS=32
# P02: NAME=node03.local
# P02: OMP_NUM_THREADS=32
# P03: NAME=node04.local
# P03: OMP_NUM_THREADS=32
# P01: NAME=node02.local
# P01: OMP_NUM_THREADS=32
# P00.T00: Hello MPI
# P00.T24: Hello MPI
# P00.T16: Hello MPI
# P00.T26: Hello MPI
# P00.T05: Hello MPI
# P00.T29: Hello MPI
# P00.T22: Hello MPI
# P00.T06: Hello MPI
# P00.T17: Hello MPI
# P00.T23: Hello MPI
# P00.T25: Hello MPI
# P00.T13: Hello MPI
# P00.T01: Hello MPI
# P00.T09: Hello MPI
# P00.T03: Hello MPI
# P00.T02: Hello MPI
# P00.T31: Hello MPI
# P03.T00: Hello MPI
# P03.T24: Hello MPI
# P03.T05: Hello MPI
# P03.T21: Hello MPI
# P03.T04: Hello MPI
# ...
- MPI Basics : Tom Nurkkala
- OpenMPI tutorial coding in Fortran 90 - 01 Hello World! : yinjianz
- Mod-09 Lec-40 MPI programming : Prof. Matthew Jacob
- MPI/OpenMP Hybrid Programming : Neil Stringfellow
- Introduction to MPI Programming, part 1 : Hristo Iliev
- Hybrid MPI+OpenMP programming : Dr. Jussi Enkovaara
- Running an MPI Cluster within a LAN : Dwaraka Nath
- Return values of MPI calls : RIP Tutorial
- MPI Error Handling : Dartmouth College
- Does storing mpi rank enhance the performance : Cosmin Ioniță
- MPI error handler not getting called when exception occurs : Hristo Iliev
- Assert function for MPI Programs : Gilles Gouaillardet
- In MPI, how to make the following program Wait till all calculations are completed : Gilles
- MPI_Abort() vs exit() : R.. GitHub STOP HELPING ICE
- MPI_Datatype : RookieHPC
- MPI_Error_string : DeinoMPI
- MPI_Error_string : MPICH
- MPI_Comm_size : MPICH
- MPI_Comm_rank : MPICH
- MPI_Get_processor_name : MPICH