---
title: "ManGO, HPC and Open OnDemand"
format: pdf
---
This document gives you instructions on how to:

- [Set up the conda environment](#conda) in the HPC
- [Authenticate to ManGO](#mango) with the command line
- [Request a job](#jupyter) in Open OnDemand with JupyterLab
# Install conda and create an environment {#conda}
In the interactive Shell that gives you access to the HPC, go to the Data directory. This is always recommended when you access the HPC via the command line because it has more storage than the home directory.
```sh
cd $VSC_DATA
```
The `$VSC_DATA` path is equivalent to `/data/leuven/3xx/vsc3xxxx`, where `vsc3xxxx` is your VSC account number and `3xx` its first three digits.
::: callout-tip
## Scratch
For even more (but temporary) storage, you can access SCRATCH, with `cd $VSC_SCRATCH`. If you want to download large parts of your datasets, it's best to do that there.
This path is equivalent to `/scratch/leuven/3xx/vsc3xxxx`.
:::
[Install Miniconda](https://docs.vscentrum.be/en/latest/software/python_package_management.html?highlight=miniconda#install-miniconda) and if it's successful (which can be checked with `which conda`) create an environment.
```sh
# installation
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p $VSC_DATA/miniconda3
export PATH="${VSC_DATA}/miniconda3/bin:${PATH}"
```
To create an environment called "irods", run the code below. Here we already install `numpy` (which you will probably need) and `ipykernel` (which you definitely need in order to use JupyterLab).
```sh
conda create -n irods numpy ipykernel
```
Activate the environment and install the Python iRODS client with `pip`; this package cannot be installed by simply chaining `python-irodsclient` to the `conda create` call.
```sh
conda activate irods
pip install python-irodsclient
```
If everything has been successful, you can now write Python scripts that communicate with ManGO (once you have [authenticated](#mango))!
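As a quick check that the installation itself worked, you can run a minimal import test with the environment active. This is only a sketch: it verifies that the packages load and does not contact ManGO yet.

```python
# Run with the "irods" conda environment active; this only checks imports,
# no connection to ManGO is made at this point.
from irods.session import iRODSSession  # provided by python-irodsclient
import numpy as np

print("numpy", np.__version__, "and python-irodsclient are available")
```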
The final step is to register the environment as a Jupyter kernel in the right directory, so that Open OnDemand can find it.
```sh
python -m ipykernel install --prefix=${VSC_HOME}/.local/ --name 'irods'
```
# Authenticate to ManGO {#mango}
You can authenticate to ManGO by copying the corresponding snippet from "How to connect" in <https://mango.kuleuven.be>. It will look something like the following (with your username and temporary password in place of `"USERNAME"` and `'TOKEN'`).
```sh
mkdir -p ~/.irods
cat > ~/.irods/irods_environment.json <<'EOF'
{
"irods_host": "ghum.irods.icts.kuleuven.be",
"irods_port": 1247,
"irods_zone_name": "ghum",
"irods_authentication_scheme": "pam_password",
"irods_encryption_algorithm": "AES-256-CBC",
"irods_encryption_salt_size": 8,
"irods_encryption_key_size": 32,
"irods_encryption_num_hash_rounds": 8,
"irods_user_name": "USERNAME",
"irods_ssl_ca_certificate_file": "",
"irods_ssl_verify_server": "cert",
"irods_client_server_negotiation": "request_server_negotiation",
"irods_client_server_policy": "CS_NEG_REQUIRE",
"irods_default_resource": "default",
"irods_cwd": "/ghum/home"
}
EOF
iinit -h | grep Version | grep -v -q 4.2. || sed -i 's/"irods_authentication_scheme": "pam_password"/"irods_authentication_scheme": "PAM"/' ~/.irods/irods_environment.json
echo 'TOKEN' | iinit --ttl 168 >/dev/null && echo "You are now authenticated to irods. Your session is valid for 168 hours."
```
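To verify that the authentication worked, you can use the Python client you installed earlier. The snippet below is a minimal sketch: it assumes the environment file and `iinit` step above succeeded, and the collection path (here the zone home `/ghum/home`) may need to be adjusted to your own project.

```python
# Minimal connectivity check with python-irodsclient, reusing the environment
# file and the credentials cached by `iinit` above.
import os
from irods.session import iRODSSession

env_file = os.path.expanduser("~/.irods/irods_environment.json")
with iRODSSession(irods_env_file=env_file) as session:
    print("Connected to zone", session.zone, "as", session.username)
    coll = session.collections.get("/ghum/home")  # adjust to your own collection
    for sub in coll.subcollections:
        print(sub.path)
```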
# Request a job in Open OnDemand {#jupyter}
In <https://ondemand.hpc.kuleuven.be/>, go to "My Interactive Sessions > Jupyter Lab" and fill in the form with the values below.

Field | CPU job | GPU job
----- | ----- | -----
Cluster | genius | genius
Account | lp_bib_hack | lp_bib_hack
Partition | bigmem | gpu_p100
Number of hours | (whatever you need) | (whatever you need)
Number of cores | 1 or more | 9 per GPU
Required memory per core (MB) | 3400 | 5000 (or default)
Number of nodes | 1 | 1
Number of GPUs | 0 | up to 4
Reservation | lp_bib_hack | lp_bib_hack
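Once the session starts, open a notebook and select the "irods" kernel that you registered earlier. As a quick sanity check (a sketch, assuming the environment and kernel created above), you can run a cell like this:

```python
# Run in a notebook cell with the "irods" kernel selected; confirms that the
# kernel is using the conda environment created in the first section.
import sys
import numpy as np
from irods.session import iRODSSession  # from python-irodsclient

print("Python executable:", sys.executable)  # should point inside $VSC_DATA/miniconda3
print("numpy version:", np.__version__)
```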