form.yml (forked from OSC/bc_osc_jupyter)
---
cluster: "owens"
form:
  - bc_account
  - jupyterlab_switch
  - bc_num_hours
  - node_type
  - cuda_version
  - num_cores
  - version
  - bc_email_on_started
attributes:
  jupyterlab_switch:
    widget: "check_box"
    label: "Use JupyterLab instead of Jupyter Notebook?"
    help: |
      JupyterLab is the next generation of Jupyter, and is completely compatible with existing Jupyter Notebooks.
  num_cores:
    widget: "number_field"
    label: "Number of cores"
    value: 1
    help: |
      Number of cores on node type (4 GB per core unless requesting whole
      node). Leave blank if requesting full node.
    min: 1
    max: 28
    step: 1
  bc_account:
    label: "Project"
    help: "You can leave this blank if **not** in multiple projects."
  cuda_version:
    widget: "select"
    label: "CUDA Version"
    help: |
      CUDA is Nvidia's GPU-specific parallel computing framework. A GPU node
      is required to make use of this functionality.
    options:
      - [ "cuda/8.0.44", "cuda/8.0.44" ]
      - [ "cuda/8.0.61", "cuda/8.0.61" ]
      - [ "cuda/9.0.176", "cuda/9.0.176" ]
      - [ "cuda/9.1.85", "cuda/9.1.85" ]
      - [ "cuda/9.2.88", "cuda/9.2.88" ]
      - [ "cuda/10.0.130", "cuda/10.0.130" ]
  node_type:
    widget: select
    label: "Node type"
    help: |
      - **any** - (*1-28 cores*) Use any available Owens node. This reduces the
        wait time as there are no node requirements.
      - **gpu** - (*1-28 cores*) Use an Owens node that has an [NVIDIA Tesla P100
        GPU] and loads the requested [CUDA] module. There are 160 of these nodes
        on Owens.
      - **hugemem** - (*48 cores*) Use an Owens node that has 1.5TB of
        available RAM as well as 48 cores. There are 16 of these nodes on
        Owens.
      - **debug** - (*1-28 cores*) For short sessions (≤ 1 hour) the debug queue
        will have the shortest wait time. This is only accessible during 8AM -
        6PM, Monday - Friday. There are 6 of these nodes on Owens.

      [NVIDIA Tesla P100 GPU]: http://www.nvidia.com/object/tesla-p100.html
      [CUDA]: https://developer.nvidia.com/cuda-zone
    options:
      - [
          "any", "any",
          data-min-ppn: 0,
          data-max-ppn: 28,
          data-can-show-cuda: false
        ]
      - [
          "gpu", "gpu",
          data-min-ppn: 0,
          data-max-ppn: 28,
          data-can-show-cuda: true
        ]
      - [
          "hugemem", "hugemem",
          data-min-ppn: 48,
          data-max-ppn: 48,
          data-can-show-cuda: false
        ]
      - [
          "debug", "debug",
          data-min-ppn: 0,
          data-max-ppn: 28,
          data-can-show-cuda: false
        ]
  version: "jupyter/python3.5"