137 changes: 94 additions & 43 deletions docs/slurm/commands.md
@@ -1,69 +1,120 @@
# Main Slurm commands

## Submitting jobs

<!--submit-start-->

Jobs in the [Slurm scheduler](/slurm/) are executed in batch or interactive mode. Batch jobs are executed asynchronously in the background, whereas interactive jobs allow the user to issue commands interactively and even start a shell session. In both cases, users must request resources for their job, including a finite amount of time for which they may occupy the compute resources.

=== "sbatch (passive job)"
Jobs executed in batch or interactive mode may contain `srun` commands to launch [job steps](). The job steps can run in sequence or in parallel given that enough resources are available in the job allocation or that resources can be shared. Access to resources such as nodes, memory, and accelerator devices, can be requested with appropriate [partition](/partitions/) and [constraint]() options.

### Executing a job in batch mode with `sbatch`

<!--sbatch-start-->

_Batch job scripts_ are submitted to the scheduler with the [`sbatch`](https://slurm.schedmd.com/sbatch.html) command.

- The command adds a resource allocation request to the scheduler job queue together with a _copy_ of the job launcher script to execute in the allocation. The command then exits.
- When the requested resources are available, a job is launched and the job script is executed on the first node of the allocated resources.
- The job allocation is freed when the job script finishes or the allocation times out.

The execution of the job script is thus asynchronous to the execution of the `sbatch` command.

!!! info "Typical `sbatch` (batch job) options"
To submit a bash job script to be executed asynchronously by the scheduler use the following `sbatch` command.
```bash
### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
sbatch -p <partition> [--qos <qos>] [-A <account>] [...] <path/to/launcher.sh>
sbatch --partition=<partition> [--qos=<qos>] [--account=<account>] [...] <path/to/launcher_script.sh>
```
=== "srun (interactive job)"
```bash
### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
srun -p <partition> [--qos <qos>] [-A <account>] [...] ---pty bash
Upon job submission, Slurm print a message with the job's ID; the job ID is used to identify this job in all Slurm interactions.

!!! warning "Accessing script from a submission script"
If you reference any other script or program from the submission script, the ensure that the file referenced is accessible.

- Use the full path to the file referenced.
- Ensure that the file is stored in a networked file system and accessible from every node.

!!! example "Example job submission"
```console
$ sbatch <path/to/launcher_script.sh>
submitted batch job 864933
```
`srun` is also used within your launcher script to initiate _job steps_.
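
The launcher script contains `#SBATCH` directives that define the requested resources, followed by the commands to run. Below is a minimal sketch of such a script; the partition, QOS, resource values, and `<program>` placeholder are examples to adapt to your own workflow, not a prescription of the site configuration.
```bash
#!/bin/bash -l
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=batch         # partition to submit to
#SBATCH --qos=normal              # quality of service
#SBATCH --nodes=1                 # number of nodes
#SBATCH --ntasks-per-node=4       # tasks (processes) per node
#SBATCH --cpus-per-task=1         # cores per task
#SBATCH --time=0-02:00:00         # walltime (D-HH:MM:SS)

# Launch a job step on the allocated resources
srun <program> [<arguments>]
```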
<!--sbatch-end-->

### Executing a job in interactive mode with `salloc`

_Interactive jobs_ launch a command in an allocation of compute nodes with the [`salloc`](https://slurm.schedmd.com/salloc.html) command.

- The `salloc` command submits a resource allocation request to the scheduler job queue and blocks until the resources are available.
- When the requested resources are available, a job is launched and the command provided to `salloc` is executed on the first node of the allocated resources.
- The allocation is freed when the command terminates, or the allocation times out.

The main difference between `salloc` and `sbatch` is that `salloc` runs for the whole duration of the command executed in the allocation; in other words, `salloc` is a blocking version of `sbatch`.

=== "salloc (request allocation/interactive job)"
!!! info "Typical `salloc` (interactive job) options"
To start an interactive job scheduler use the following `salloc` command.
```bash
# Request interactive jobs/allocations
### /!\ Adapt <partition>, <qos>, <account> and <command> accordingly
salloc -p <partition> [--qos <qos>] [-A <account>] [...] <command>
sbatch --partition=<partition> [--qos=<qos>] [--account=<account>] [--x11] [...] [<commmand>]
```
The `salloc` command will block until the requested resources are available, and when resources are available then it will the launch the `<command>` in the first node of the allocation. Upon job submission, Slurm print a message with the job's ID; the job ID is used to identify this job in all Slurm interactions.

!!! example "Example interactive job submission"
```console
$ salloc --partition=batch --qos=normal --nodes=1 --time=8:00:00 bash --login -c 'echo "Hello, world!"'
salloc: Granted job allocation 9824090
salloc: Waiting for resource configuration
salloc: Nodes aion-0085 are ready for job
Hello, world!
salloc: Relinquishing job allocation 9824090
$
```

#### Launching interactive shell sessions with `salloc`

The `<command>` argument of `salloc` is optional. The default behavior at our site when no `<command>` is provided is for `salloc` to launch an interactive shell on the first node of the allocation. The shell session is configured to overlap, that is, it does not consume any resources needed by other [job steps](/slurm/job_steps/). The interactive session terminates with the `exit` command or when the allocation times out.

!!! example "Example of launching an interactive shell"
```console
0 [username@access1 ~]$ salloc --partition=batch --qos=normal --nodes=1 --time=8:00:00
salloc: Granted job allocation 9805184
salloc: Nodes aion-0207 are ready for job
0 [username@aion-0207 ~](9805184 1N/T/1CN)$
```

??? info "Configuring the default behavior of `salloc` when no command is provided"
If no command is provided then the behavior of `salloc` depends on the configuration of Slurm. The [LaunchParameters](https://slurm.schedmd.com/slurm.conf.html#OPT_LaunchParameters) option of Slurm configuration ([`slurm.conf`](https://slurm.schedmd.com/slurm.conf.html)) is a comma separated list of options for the job launch plugin. The `use_interactive_step` option has `salloc` launch a shell on the first node of the allocation; otherwise `salloc` launches a shell locally, in the machine where it was invoked.

```bash
# /!\ ADAPT path to launcher accordingly
$ sbatch <path/to/launcher>.sh
Submitted batch job 864933
```
<!--sbatch-end-->
The [InteractiveStepOptions](https://slurm.schedmd.com/slurm.conf.html#OPT_InteractiveStepOptions) of Slurm configuration determines the command run by `salloc` when `use_interactive_step` is included in LaunchParameters. The default value is
```
--interactive --preserve-env --pty $SHELL"
```
where `--interactive` creates an "interactive step" that will not consume resources so that other job steps may run in parallel with the interactive step running the shell. The [`--pty` option](https://slurm.schedmd.com/srun.html#OPT_pty) is required when creating an implicit reservation for an interactive shell.

Note that `--interactive` is an internal potion and is not meant to be used outside setting the InteractiveStepOptions.
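
    As a rough sketch, the corresponding `slurm.conf` lines with the default interactive step options would look as follows; the exact syntax and quoting on a given site may differ, so treat this as an illustration rather than the actual site configuration.
    ```
    LaunchParameters=use_interactive_step
    InteractiveStepOptions="--interactive --preserve-env --pty $SHELL"
    ```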

To create an allocation without launching any command, use the `--no-shell` option. With this option, `salloc` exits immediately after allocating the job resources, without running a command. Job steps can still be launched in the job allocation using the `srun` command with the `--jobid=<job allocation id>` option.
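
For illustration, a minimal sketch of this workflow is shown below; the job ID and node name in the output are hypothetical, and the allocation is released with `scancel` once it is no longer needed.
```console
$ salloc --no-shell --partition=batch --qos=normal --nodes=1 --time=1:00:00
salloc: Granted job allocation 9824091
$ srun --jobid=9824091 hostname
aion-0085
$ scancel 9824091
```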

#### Implicit interactive job creation with `srun`

The [`srun`](https://slurm.schedmd.com/srun.html) command is used to initiate parallel job steps within a job allocation. However, if `srun` is invoked outside an allocation, then

- `srun` automatically allocates a job in a blocking manner similar to `salloc`, and
- when the requested resources become available, it launches a single job step to run the provided command.

!!! info "Launching interactive jobs with `srun`"
To create an implicit job allocation and launch a job step with `srun` provide the usual options usually required by `salloc` or `sbatch`.
```bash
srun --partition=<partition> [--qos=<qos>] [--account=<account>] [...] <command>
```

!!! info "Launching interactive shells with `srun`"
To launch an interactive shell in an implicit job allocation, use the [`--pty` option](https://slurm.schedmd.com/srun.html#OPT_pty).
```bash
srun --partition=<partition> [--qos=<qos>] [--account=<account>] [...] --pty bash --login
```
The `--pty` option instructs `srun` to execute the command in [terminal mode](https://en.wikipedia.org/wiki/Terminal_mode) in a [pseudoterminal](https://en.wikipedia.org/wiki/Pseudoterminal), so you can interact with bash as if it was launched in your terminal.
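
    For illustration, a sketch of such a session is shown below; the node name and prompt are illustrative and will differ on the actual systems.
    ```console
    $ srun --partition=interactive --nodes=1 --time=0:30:00 --pty bash --login
    [username@aion-0041 ~]$ hostname
    aion-0041
    [username@aion-0041 ~]$ exit
    $
    ```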

!!! note
    In an interactive shell session created implicitly with `srun`, the shell occupies one of the available tasks (`SLURM_NTASKS`), in contrast to jobs launched with `salloc` without a command argument.

<!--submit-end-->

92 changes: 61 additions & 31 deletions docs/slurm/index.md
@@ -1,53 +1,83 @@
# Slurm Resource and Job Management System

UL HPC uses [Slurm](https://slurm.schedmd.com/) (formerly an acronym for _Simple Linux Utility for Resource Management_) for cluster and workload management. The Slurm workload manager performs three main functions:

- allocates access to [resources](#jobs-and-resources) for fixed time intervals,
- provides a framework for starting, executing, and monitoring work on allocated resources, and
- maintains a priority queue that schedules and regulates access to resources.

[:fontawesome-solid-right-to-bracket: Official docs](https://slurm.schedmd.com/documentation.html){: .md-button .md-button--link }
[:fontawesome-solid-right-to-bracket: Official FAQ](https://slurm.schedmd.com/faq.html){: .md-button .md-button--link }
[:fontawesome-solid-right-to-bracket: ULHPC Tutorial/Getting Started](https://ulhpc-tutorials.readthedocs.io/en/latest/beginners/){: .md-button .md-button--link }

??? info "IEEE ISPDC22: ULHPC Slurm 2.0"
    If you want more details on the RJMS optimizations performed upon Aion acquisition, check out our [IEEE ISPDC22](https://orbilu.uni.lu/handle/10993/51494) conference paper (21<sup>st</sup> IEEE Int. Symp. on Parallel and Distributed Computing) presented in Basel (Switzerland) on July 13, 2022.

    > __IEEE Reference Format__ | [ORBilu entry](https://orbilu.uni.lu/handle/10993/51494) | [slides](https://hpc-docs.uni.lu/slurm/2022-07-13-IEEE-ISPDC22.pdf) <br/>
    > Sebastien Varrette, Emmanuel Kieffer, and Frederic Pinel, "Optimizing the Resource and Job Management System of an Academic HPC and Research Computing Facility". _In 21st IEEE Intl. Symp. on Parallel and Distributed Computing (ISPDC'22)_, Basel, Switzerland, 2022.

[![](https://hpc-docs.uni.lu/slurm/images/2022-ULHPC-user-guide.png)](https://hpc-docs.uni.lu/slurm/2022-ULHPC-user-guide.pdf)

## Overview of the configuration of Slurm on UL HPC clusters

<!--tldr-start-->

The main Slurm configuration options that affect the resources available for jobs on [UL HPC systems](/systems/) are the following.

- [__Queues/Partitions__](/slurm/partitions) group nodes according to the set of hardware _features_ they implement.
    - `batch`: default dual-CPU nodes. Limited to _max_:
        - 64 nodes, and
        - 2 days walltime.
    - `gpu`: GPU nodes. Limited to _max_:
        - 4 nodes, and
        - 2 days walltime.
    - `bigmem`: large-memory nodes. Limited to _max_:
        - 1 node, and
        - 2 days walltime.
    - `interactive`: _floating partition_ across all node types allowing higher-priority allocations for quick tests. Best used in interactive allocations for code development, testing, and debugging. Limited to _max_:
        - 2 nodes, and
        - 2h walltime.
- [__Queue policies/Quality of Service (QoS's)__](/slurm/qos) apply restrictions to resource access and modify job priority on top of (overriding) the access restrictions and priority modifications applied by partitions.
    - _Cross-partition QoS's_ are tied to a priority level.
        - `low`: Priority 10 and _max_ 300 jobs per user.
        - `normal`: Priority 100 and _max_ 100 jobs per user.
        - `high`: Priority 200 and _max_ 50 jobs per user.
        - `urgent`: Priority 1000 and _max_ 20 jobs per user.
    - _Special QoS's_ control priority access to special hardware.
        - `iris-hopper`: Priority 100 and _max_ 100 jobs per user.
    - _Long_ QoS's have an extended max walltime (`MaxWall`) of _14 days_ and are defined per cluster/partition combination (`<cluster>-<partition>-long`).
        - `aion-batch-long`: _max_ 16 nodes and 8 jobs per user.
        - `iris-batch-long`: _max_ 16 nodes and 8 jobs per user.
        - `iris-gpu-long`: _max_ 2 nodes and 4 jobs per user.
        - `iris-bigmem-long`: _max_ 2 nodes and 4 jobs per user.
        - `iris-hopper-long`: _max_ 1 GPU and 100 jobs per user.
    - A special _preemptible QoS_ for [best-effort](/jobs/best-effort) jobs.
        - `besteffort`: jobs in the best-effort QoS can be interrupted by jobs in any other QoS. Processes running at the time of interruption are killed, so executables used in best-effort jobs require a [custom checkpoint-restart mechanism](https://docs.nersc.gov/development/checkpoint-restart/).
- [__Accounts__](/slurm/accounts) organize user access to resources hierarchically. Accounts are associated with organizations (like faculties), supervisors (multiple associations are possible), and activities (like projects and trainings).
    - A default account is associated with all users affiliated with the University of Luxembourg.
    - Users not associated with the University of Luxembourg must be granted access to an account and specify that account when allocating resources for a job.
    - Users must use the proper account, as resource usage is [tracked](/policies/usage-charging) and reported.
- [__Federated scheduling__](https://slurm.schedmd.com/federation.html) supports scheduling jobs across both `iris` and `aion`.
    - A global policy (coherent job IDs, global scheduling, etc.) is enforced across all UL HPC systems.
    - Jobs can be submitted from one cluster to another using the `-M, --cluster (aion|iris)` option.
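
As a sketch of how these options combine at submission time, the following command submits a batch job to the `iris` cluster using a long QoS; the account and launcher script path are placeholders.

```bash
# Submit to the iris cluster, selecting the batch partition, the corresponding
# long QoS (14 days MaxWall), and the account used for usage tracking
sbatch --cluster=iris --partition=batch --qos=iris-batch-long \
       --account=<account> --time=7-00:00:00 <path/to/launcher_script.sh>
```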

??? info "Features, partitions, and floating partitions"
_Features_ in Slurm are tags that correspond to hardware capabilities of nodes. For instance the `volta` flag in UL HPC system denotes that the node has GPUs of the Volta architecture.

_Partitions_ are collections of nodes that usually have a homogeneous set of features. For instance all nodes of the GPU partition in UL HPC system have GPUs of the Volta architecture. As a result, partitions tend to be mutually exclusive sets.

_Floating partitions_ contain nodes from multiple partitions. As a result, floating partitions have nodes with variable features. The `-C, --constraint` flag is available to filter nodes in floating partitions according to their features.
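
    For example, a sketch of requesting an interactive shell on a node of the `interactive` floating partition that advertises the `volta` feature (assuming that partition includes such nodes):
    ```bash
    srun --partition=interactive --constraint=volta --nodes=1 --time=0:30:00 --pty bash --login
    ```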

<!--tldr-end-->

For more details, see the appropriate pages in the left menu.
## Jobs and resources

A _job_ is the minimal independent unit of work in a Slurm cluster: an allocation of resources, such as compute nodes, assigned to a user for a certain amount of time. Jobs can be _interactive_ or _passive_ (e.g., a batch script) scheduled for later execution.

The resources that the scheduler manages are physical entities such as nodes, CPU cores, GPUs, and access to special devices, but also system resources such as memory and I/O operations.

!!! question "What characterize a job?"
A user _jobs_ have the following key characteristics:
3 changes: 3 additions & 0 deletions docs/slurm/job_steps.md
@@ -0,0 +1,3 @@
# Job steps

Job steps are processes launched within a job that consume the job's resources. Job steps are initiated with the `srun` command. The job steps in a job can also be executed in parallel, provided that enough resources are available.
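
For illustration, a minimal sketch of a batch script that runs two job steps in parallel is shown below; the program names are placeholders, and depending on the Slurm version additional options such as `--exact` may be needed for the steps to run concurrently.

```bash
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=0-01:00:00

# Two job steps sharing the job allocation, each using 2 of the 4 tasks
srun --ntasks=2 <program_a> &
srun --ntasks=2 <program_b> &

# Wait for both background job steps to finish before the job ends
wait
```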