=== About =========================================================

This package contains various scripts and code for creating
"universal" parallel environments (PEs). These PEs can handle
jobs using any of the following tested MPI implementations:
* OpenMPI
* HPMPI
* Intel MPI
* MVAPICH/MVAPICH2
* MPICH/MPICH2
* LAM/MPI

Supported applications that use MPI but have their own methods of
executing parallel jobs:
* ANSYS 13
* Comsol 4.x

Other non-MPI message-passing/memory-sharing implementations that
will be supported include:
* Linda (of Gaussian fame)

=== Description of Files ==========================================

startpe.sh   - Called by start_proc_args; sets up the necessary
               environment, including machine files in $TMPDIR,
               mpdboot symlinks, etc. Available machine files:

                   machines.mpich
                   machines.mvapich
                   machines.mpich2
                   machines.mvapich2
                   machines.intelmpi
                   machines.hpmpi
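
               A minimal sketch of pointing an mpirun at one of
               these generated files (the executable name is just
               a placeholder):

                   mpirun -np $NSLOTS \
                          -machinefile $TMPDIR/machines.mpich \
                          ./myexecutable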

stoppe.sh    - Cleans up the mess created by startpe.sh.

pe_env_setup - Only very recent gridengine releases allow running
               parallel jobs from within an interactive qlogin
               session. This script provides that functionality;
               simply do:

                   qlogin -pe mpi.4 8
                   ...
                   user@node:$ source $SGE_ROOT/gepetools/pe_env_setup

               You can now load modules and run tightly-integrated
               MPI jobs using mpirun/mpiexec.
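
               For example, continuing the session above (the
               module name is borrowed from the OpenMPI example
               below; any of the supported MPIs should work):

                   user@node:$ module add mpi/openmpi/1.4.4
                   user@node:$ mpirun -np $NSLOTS ./myexecutable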

mpdboot      - Takes the pain out of launching your MPD daemons
               across the cluster. Call it prior to mpirun/mpiexec
               with any MPD-enabled MPI implementation. Uses
               tight integration (see example 2 below).

extJobInfo   - How about some friggin' visibility into what your
               MPI processes are doing? Simply do:

                   source $SGE_ROOT/gepetools/extJobInfo

               prior to your mpirun/mpiexec in your submit script.
               You'll be blessed with a file

                   ${JOB_NAME}.${JOB_ID}.extJobInfo

               which contains process info for the job's child
               processes (currently master node only), including
               memory, CPU, and state information.
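
               A minimal submit-script sketch (the job name,
               resource request, and module are placeholders):

                   #$ -N myjob
                   #$ -l nodes=2,ppn=4
                   module add mpi/mpich2/1.4
                   source $SGE_ROOT/gepetools/extJobInfo
                   mpdboot
                   mpirun -np $NSLOTS ./myexecutable
                   # process info appears in myjob.$JOB_ID.extJobInfo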

=== Installation ==================================================

This package can be extracted anywhere but the final installation
directory. It's best if the installation directory is on a shared
filesystem.

Just run:

    ./install.sh <install_dir>
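
For example, to match the paths used elsewhere in this README:

    ./install.sh $SGE_ROOT/gepetools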

=== Example Jobs Using the JSV Code ===============================

You should be ready to submit jobs; examples below:
1. MPICH Example
#$ ... SGE directives ...
#$ -l nodes=4,ppn=4 # 16 slots total, 4 ppn
#$ ...
# Doesn't everyone use modules?
module purge
module add mpi/mpich/1.2.7
mpirun -np $NSLOTS -machinefile $MPICH_HOSTS myexecutable
###
2. MPICH2/MVAPICH2 pre-hydra Example
#$ ...
#$ -l pcpus=32 # 32 slots, round-robin
#$ ...
module purge
module add mpi/mpich2/1.4
mpdboot    # start the MPD ring across the allocated nodes
mpirun -np $NSLOTS myexecutable
###
3. MPICH2/MVAPICH2/IntelMPI w/ Hydra
#$ ...
#$ -l nodes=8,ppn=8 # 64 slots, 8 ppn
#$ ...
module purge
module add mpi/intel/1.4
mpiexec -n $NSLOTS myexecutable
###
4. OpenMPI
#$ ...
#$ -l nodes=8,ppn=12 # 96 slots, 12 ppn
#$ ...
module purge
module add mpi/openmpi/1.4.4
mpirun myexecutable
###
5. LAM
#$ ...
#$ -l nodes=2,ppn=4
#$ ...
module purge
module add mpi/lam/7.1.4
lamboot $LAM_HOSTS    # boot the LAM runtime on the allocated nodes
lamnodes              # list the nodes in the LAM ring
mpirun -np $NSLOTS myexecutable
lamclean              # remove leftover user processes from the nodes
###