Commit
deploy: ce57072
sellth committed Aug 16, 2024
1 parent 0f90d91 commit 7780a2d
Showing 7 changed files with 14 additions and 14 deletions.
8 changes: 4 additions & 4 deletions hpc-tutorial/episode-0/index.html
@@ -3251,9 +3251,9 @@ <h2 id="legend">Legend<a class="headerlink" href="#legend" title="Permanent link
<p>While file paths are highlighted like this: <code>/data/cephfs-1/work/projects/cubit/current</code>.</p>
<h2 id="instant-gratification">Instant Gratification<a class="headerlink" href="#instant-gratification" title="Permanent link">&para;</a></h2>
<p>After connecting to the cluster, you are located on a login node.
-To get to your first compute node, type <code>srun --time 7-00 --mem=8G --ntasks=8 --pty bash -i</code>, which will launch an interactive Bash session on a free remote node running for up to 7 days, enabling you to use 8 cores and 8 GB of memory. Typing <code>exit</code> will bring you back to the login node.</p>
-<div class="highlight"><pre><span></span><code>$ srun -p long --time 7-00 --mem=8G --ntasks=8 --pty bash -i
-med0107 $ exit
+To get to your first compute node, type <code>srun --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i</code>, which will launch an interactive Bash session on a free remote node running for up to 7 days, enabling you to use 8 cores and 8 GB of memory. Typing <code>exit</code> will bring you back to the login node.</p>
+<div class="highlight"><pre><span></span><code>hpc-login-1$ srun -p long --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
+hpc-cpu-1$ exit
$
</code></pre></div>
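The flags in the interactive srun call map one-to-one onto the resources described above. As a small sketch (the partition name and resource values are just the tutorial's examples, not site requirements), the command can be assembled from shell variables before running it:

```shell
# Assemble the interactive srun command from its parts.
# All values mirror the tutorial example; adjust them to your needs.
partition=long   # queue/partition to use (-p)
walltime=7-00    # 7 days, 0 hours (--time)
memory=8G        # total memory for the session (--mem)
cores=8          # CPU cores for the single task (--cpus-per-task)
cmd="srun -p ${partition} --time ${walltime} --mem=${memory} --cpus-per-task=${cores} --pty bash -i"
echo "${cmd}"
# → srun -p long --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
```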
<p>See?
@@ -3263,7 +3263,7 @@ <h2 id="preparation">Preparation<a class="headerlink" href="#preparation" title=
In general, users on the cluster manage their own software with the help of conda.
If you haven't done so yet, please <a href="../../best-practice/software-installation-with-conda/">follow the instructions for installing conda</a> first.
The only premise is that you are able to <a href="../../connecting/advanced-ssh/linux/">log into the cluster</a>.
-Also make sure that you are logged in to a compute node using <code>srun -p medium --time 1-00 --mem=4G --ntasks=1 --pty bash -i</code>.</p>
+Also make sure that you are logged in to a compute node using <code>srun -p medium --time 1-00 --mem=4G --cpus-per-task=1 --pty bash -i</code>.</p>
<p>Now we will create a new environment, so as not to interfere
with your current or planned software stack, and install into it all the
software that we need during the tutorial. Run the following commands:</p>
2 changes: 1 addition & 1 deletion hpc-tutorial/episode-1/index.html
@@ -3315,7 +3315,7 @@ <h1 id="first-steps-episode-1">First Steps: Episode 1<a class="headerlink" href=
Here we will build a small pipeline with alignment and variant calling.
The premise is that you have the tools installed as described in <a href="../episode-0/">Episode 0</a>. For this episode, please make sure that you
are on a compute node. As a reminder, the command to access a compute node with the required resources is</p>
-<div class="highlight"><pre><span></span><code>$ srun --time 7-00 --mem=8G --ntasks=8 --pty bash -i
+<div class="highlight"><pre><span></span><code>$ srun --time 7-00 --mem=8G --cpus-per-task=8 --pty bash -i
</code></pre></div>
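Before running the pipeline steps you can double-check that you are actually on a compute node. This sketch relies only on the standard Slurm behavior that SLURM_JOB_ID is exported inside an allocation and unset on the login node; the message wording is invented here:

```shell
# Slurm exports SLURM_JOB_ID inside an allocation; on the login node it is unset.
if [ -n "${SLURM_JOB_ID:-}" ]; then
    echo "on compute node $(hostname -s), job ${SLURM_JOB_ID}"
else
    echo "not inside a Slurm job - run the srun command above first"
fi
```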
<h2 id="tutorial-input-files">Tutorial Input Files<a class="headerlink" href="#tutorial-input-files" title="Permanent link">&para;</a></h2>
<p>We will provide you with some example FASTQ files, but you can use your own if you like.
8 changes: 4 additions & 4 deletions hpc-tutorial/episode-2/index.html
@@ -3250,8 +3250,8 @@ <h2 id="the-sbatch-command">The <code>sbatch</code> Command<a class="headerlink"
<span class="c1"># Set the file to write the stdout and stderr to (if -e is not set; -o or --output).</span>
<span class="c1">#SBATCH --output=logs/%x-%j.log</span>

-<span class="c1"># Set the number of cores (-n or --ntasks).</span>
-<span class="c1">#SBATCH --ntasks=8</span>
+<span class="c1"># Set the number of cores (-c or --cpus-per-task).</span>
+<span class="c1">#SBATCH --cpus-per-task=8</span>

<span class="c1"># Force allocation of all cores on ONE node.</span>
<span class="c1">#SBATCH --nodes=1</span>
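Assembled into a complete file, the directives shown in this diff give a minimal batch script. This is a sketch: the job name and the --mem/--time values are illustrative assumptions; only the --output, --cpus-per-task, and --nodes lines come from the tutorial.

```shell
# Write a minimal sbatch script using the updated directive style.
mkdir -p logs
cat > job.sh <<'EOF'
#!/bin/bash
# Job name; %x in the log pattern below expands to it (name is an assumption).
#SBATCH --job-name=first-steps
# File for stdout and stderr (if -e is not set); %j is the job ID.
#SBATCH --output=logs/%x-%j.log
# Set the number of cores (-c or --cpus-per-task).
#SBATCH --cpus-per-task=8
# Force allocation of all cores on ONE node.
#SBATCH --nodes=1
# Illustrative resource values, not from the tutorial:
#SBATCH --mem=8G
#SBATCH --time=04:00:00
echo "running with ${SLURM_CPUS_PER_TASK:-unset} cores on $(hostname)"
EOF
# Submit (on the cluster) with: sbatch job.sh
grep -c '^#SBATCH' job.sh
# → 6
```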
@@ -3293,8 +3293,8 @@ <h2 id="the-sbatch-command">The <code>sbatch</code> Command<a class="headerlink"
<span class="c1"># Set the file to write the stdout and stderr to (if -e is not set; -o or --output).</span>
<span class="c1">#SBATCH --output=logs/%x-%j.log</span>

-<span class="c1"># Set the number of cores (-n or --ntasks).</span>
-<span class="c1">#SBATCH --ntasks=8</span>
+<span class="c1"># Set the number of cores (-c or --cpus-per-task).</span>
+<span class="c1">#SBATCH --cpus-per-task=8</span>

<span class="c1"># Force allocation of all cores on ONE node.</span>
<span class="c1">#SBATCH --nodes=1</span>
4 changes: 2 additions & 2 deletions hpc-tutorial/episode-4/index.html
@@ -3154,8 +3154,8 @@ <h1 id="first-steps-episode-4">First Steps: Episode 4<a class="headerlink" href=
<span class="c1"># Set the file to write the stdout and stderr to (if -e is not set; -o or --output).</span>
<span class="c1">#SBATCH --output=logs/%x-%j.log</span>

-<span class="c1"># Set the number of cores (-n or --ntasks).</span>
-<span class="c1">#SBATCH --ntasks=2</span>
+<span class="c1"># Set the number of cores (-c or --cpus-per-task).</span>
+<span class="c1">#SBATCH --cpus-per-task=2</span>

<span class="c1"># Force allocation of the two cores on ONE node.</span>
<span class="c1">#SBATCH --nodes=1</span>
2 changes: 1 addition & 1 deletion search/search_index.json

Large diffs are not rendered by default.

Binary file modified sitemap.xml.gz
Binary file not shown.
4 changes: 2 additions & 2 deletions slurm/commands-sbatch/index.html
@@ -3253,8 +3253,8 @@ <h2 id="important-arguments">Important Arguments<a class="headerlink" href="#imp
This is listed here as an important argument because the maximum number of nodes allocatable in any partition but <code>mpi</code> is set to one (1).
This is done as there are few users on the BIH HPC who actually use multi-node parallelism.
Rather, most users will use multi-core parallelism and might forget to limit the number of nodes, which causes inefficient allocation of resources.</li>
-<li><code>--ntasks</code>
--- This corresponds to the number of threads allocated to each node.</li>
+<li><code>--cpus-per-task</code>
+-- This corresponds to the number of CPU cores allocated to each task.</li>
<li><code>--mem</code>
-- The memory to allocate for the job.
As you can define minimal and maximal numbers of tasks/CPUs/cores, you could also specify <code>--mem-per-cpu</code> to get more flexible scheduling of your job.</li>
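As a worked example of how --cpus-per-task, --mem, and --mem-per-cpu interact (all values invented for illustration): requesting 8 CPUs at 2G per CPU is equivalent to requesting 16G total.

```shell
# Two equivalent ways to request the same memory (commands are printed, not submitted).
cpus=8
per_cpu_g=2
total_g=$((cpus * per_cpu_g))   # 8 * 2 = 16
echo "sbatch --nodes=1 --cpus-per-task=${cpus} --mem=${total_g}G job.sh"
echo "sbatch --nodes=1 --cpus-per-task=${cpus} --mem-per-cpu=${per_cpu_g}G job.sh"
```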
