vignettes/Parallel-computing.Rmd
# Cluster computing

`SimDesign` code may be released to a computing system which supports parallel cluster computations using
the industry standard Message Passing Interface (MPI) form. This simply
requires that the computers be set up using the usual MPI requirements (typically, running some flavor
of Linux, have password-less open-SSH access, IP addresses have been added to the `/etc/hosts` file
and `slave2` in the ssh `config` file.

`mpirun -np 16 -H localhost,slave1,slave2 R --slave -f simulation.R`

A similar setup can also be used via the recently supported `future` interface (see below).
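As a rough sketch, the `simulation.R` script passed to `mpirun` above might take the following form (this assumes the `doMPI` package is installed and working, and that the `Design`, `Generate`, `Analyse`, and `Summarise` objects are those defined earlier in the vignette):

```r
library(doMPI)
library(SimDesign)

# spawn one worker per MPI process created by mpirun
cl <- startMPIcluster()
registerDoMPI(cl)

res <- runSimulation(design=Design, replications=1000, generate=Generate,
                     analyse=Analyse, summarise=Summarise, MPI=TRUE)

# the job runs non-interactively, so persist the results to disk
saveRDS(res, "mpi-results.rds")

# shut the MPI workers down cleanly
closeCluster(cl)
mpi.quit()
```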
# Network computing

If you have access to a set of computers which can be linked via secure-shell (ssh) on the same LAN network then
Final <- runSimulation(..., cl=cl)
stopCluster(cl)
```

A similar setup can also be used via the recently supported `future` interface (see below).
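Conceptually, the `cl` object passed above can be constructed with the `parallel` package. A minimal sketch, assuming the hypothetical hosts `localhost` and `slave1` (reachable via password-less ssh, as described earlier) contribute 4 and 8 cores, respectively:

```r
library(parallel)

# hypothetical machine specification: 4 local cores plus 8 cores on 'slave1';
# each repeated host name becomes one R worker process on that machine
spec <- c(rep('localhost', 4), rep('slave1', 8))
cl <- makeCluster(spec, type = 'PSOCK')

# pass the cluster object to runSimulation() via the cl argument,
# then release the workers with stopCluster(cl) when finished
```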
# Poor man's cluster computing for independent nodes

In the event that you do not have access to a Beowulf-type cluster (described in the section on
jobs manually.

# Using the `future` framework

The `future` framework (see `help(future, package = 'future')`) can also be used for distributing the
asynchronous function evaluations by changing the logical input in `runSimulation(..., parallel = TRUE/FALSE)` to the character vector `runSimulation(..., parallel = 'future')`, while the computation plan is pre-specified via `future::plan()`. For example, to initialize a computational plan with two local parallel workers one can use the following:
```{r eval=FALSE}
library(future)
plan(multisession, workers = 2)

res <- runSimulation(design=Design, replications=1000, generate=Generate,
                     analyse=Analyse, summarise=Summarise,
                     parallel = 'future')
```
The benefit of using the `future` framework is the automatic support of many distinct back-ends, such as HPC clusters that control the distribution of jobs via Slurm or TORQUE (e.g., see the `future.batchtools` package).
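For instance, a Slurm-managed cluster could be targeted as follows (a sketch only, assuming `future.batchtools` is installed and a suitable `slurm.tmpl` template file exists on the cluster):

```r
library(future.batchtools)

# each future becomes a Slurm job, configured by the template file
plan(batchtools_slurm, template = 'slurm.tmpl')

res <- runSimulation(design=Design, replications=1000, generate=Generate,
                     analyse=Analyse, summarise=Summarise,
                     parallel = 'future')
```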
For progress reporting the `progressr` package is required, and is intended to act as a wrapper around `runSimulation()`. Specifically, wrap the function `with_progress()` around `runSimulation()` after having specified the type of `handler()` to use, as in the following.
```{r eval=FALSE}
library(progressr)

# RStudio style handler (if using RStudio)
handlers("rstudio")

# or use the cli package for terminal-based progress
handlers('cli')

# see help(progressr) for additional options and details

# to use progressr, wrap/pipe inside with_progress()
res <- with_progress(runSimulation(design=Design, replications=1000, generate=Generate,
                     analyse=Analyse, summarise=Summarise,
                     parallel = 'future'))
```
Finally, when the parallel computations are complete be sure to manually reset the computation plan to free any workers via

```{r eval=FALSE}
plan(sequential) # release workers
```