@@ -40,7 +40,7 @@ HPC tested:
We highly recommend using [renv](https://rstudio.github.io/renv/index.html)
when working with an HPC.

- ## Creating a New Workflow
+ ## Creating a new workflow

```{r, eval = FALSE}
library(slurmworkflow)
@@ -65,7 +65,7 @@ Calling `create_workflow()` results in the creation of the *workflow directory*:
*workflow summary* is returned and stored in the `wf` variable. We'll use it to
add elements to the workflow.
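
As a minimal sketch of what that initial call can look like (the workflow name
and the sbatch option values below are assumptions for illustration, not the
vignette's exact values):

```{r, eval = FALSE}
# Minimal sketch: create the workflow and keep its summary in `wf`.
# The name and the default sbatch options are assumptions; see
# ?create_workflow for the real arguments.
wf <- create_workflow(
  wf_name = "test_slurmworkflow",
  default_sbatch_opts = list(
    "partition" = "epimodel",      # assumed partition name
    "mail-type" = "FAIL",          # a default that a later step can override
    "mail-user" = "user@emory.edu" # hypothetical address
  )
)
```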

- ## Adding a Step to the Workflow
+ ## Adding a step to the workflow

The first step that we use on most of our *workflows* ensures that our local
project and the HPC are in sync.
@@ -125,7 +125,7 @@ setup_lines <- c(
)
```
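
A sketch of registering these lines as the first *step*, assuming the
`step_tmpl_bash_lines()` template and illustrative sbatch options (the
argument names should be checked against `?add_workflow_step`):

```{r, eval = FALSE}
# Sketch: run `setup_lines` as a bash step on the HPC before anything else.
# `step_tmpl_bash_lines()` and the sbatch option values are assumptions.
wf <- add_workflow_step(
  wf_summary = wf,
  step_tmpl = step_tmpl_bash_lines(setup_lines),
  sbatch_opts = list(
    "mem" = "1G",        # syncing is light on resources
    "cpus-per-task" = 1,
    "time" = 10          # minutes
  )
)
```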

- ### Run Code From an R Script
+ ### Run code from an R script

Our next step will run the following script on the HPC.
@@ -185,7 +185,7 @@ As before, we use the `add_workflow_step()` function. But we change the
For the `sbatch` options, we ask here for 1 CPU, 4GB of RAM and a maximum of 10
minutes.
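
A hedged sketch of such a step, assuming a `step_tmpl_rscript()` template and
a hypothetical script path:

```{r, eval = FALSE}
# Sketch: run an R script with the resources described above
# (1 CPU, 4GB of RAM, at most 10 minutes). The template name and the
# script path are assumptions for illustration.
wf <- add_workflow_step(
  wf_summary = wf,
  step_tmpl = step_tmpl_rscript("R/networks_estimation.R"),
  sbatch_opts = list(
    "cpus-per-task" = 1,
    "mem" = "4G",
    "time" = 10
  )
)
```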

- ### Iterating Over Values in an R Script
+ ### Iterating over values in an R script

One common task on an HPC is to run the same code many times and only vary the
values of some arguments.
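
As a sketch of the idea using `step_tmpl_map()` (named later in this
vignette; its exact signature is an assumption here, and `run_one()` is a
hypothetical function):

```{r, eval = FALSE}
# Sketch: run the same function once per parameter value, one slurm task
# per value. `run_one()` is hypothetical; check ?step_tmpl_map for the
# actual interface.
run_one <- function(scenario) {
  message("running scenario: ", scenario)
  # ... the actual computation would go here ...
}

wf <- add_workflow_step(
  wf_summary = wf,
  step_tmpl = step_tmpl_map(
    FUN = run_one,
    scenario = c("low", "medium", "high")
  ),
  sbatch_opts = list("cpus-per-task" = 1, "time" = 30)
)
```
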
@@ -276,7 +276,7 @@ jobs where each job is a set of around 30 parallel simulations. Therefore, we
have two levels of parallelization here: one in
[slurm](https://slurm.schedmd.com/) and one in the script itself.

- ### Running an R Function Directly
+ ### Running an R function directly

Sometimes we want to run a simple function directly without storing it in an
R script. The `step_tmpl_do_call()` and `step_tmpl_map()` templates do exactly that for
@@ -313,15 +313,15 @@ Finally, as this will be our last step, we override the `mail-type`
`sbatch_opts` to receive a mail when this *step* finishes, whatever the outcome.
This way we receive a mail telling us that the *workflow* is finished.
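
A sketch of that override (`process_results()` is a placeholder and the
argument names follow `do.call()`, which is an assumption; only the
`sbatch_opts` override is the point here):

```{r, eval = FALSE}
# Sketch: override the default `mail-type` on the final step so slurm
# sends a mail on success or failure. `process_results` is hypothetical,
# and the `what` argument name is an assumption mirroring do.call().
wf <- add_workflow_step(
  wf_summary = wf,
  step_tmpl = step_tmpl_do_call(what = process_results),
  sbatch_opts = list("mail-type" = "END,FAIL")
)
```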

- ## Using the Workflow on an HPC
+ ## Using the workflow on an HPC

Now that our workflow is created, how do we actually run the code on the HPC?

We assume that we are working on a project called "test_proj", that this
project was cloned on the HPC at the following path: "~/projects/test_proj" and
that the "~/projects/test_proj/workflows/" directory exists.

- ### Sending the Workflow to the HPC
+ ### Sending the workflow to the HPC

The following commands are to be run from your local computer.
@@ -346,7 +346,7 @@ RStudio terminal.
Note that it's `workflows\networks_estimation`. Windows uses backslashes for
directories and Unix OSes use forward slashes.

- #### Running the Workflow From the HPC
+ #### Running the workflow from the HPC

For this step, you must be at the command line on the HPC. This means that you
have run: `ssh <user>@clogin01.sph.emory.edu` from your local computer.
@@ -382,7 +382,7 @@ You can check the state of your running workflow as usual with `squeue -u <user>`
The logs for the workflows are in "workflows/test_slurmworkflow/log/".

- ### The "start_workflow.sh" Script
+ ### The "start_workflow.sh" script

This start script additionally allows you to start a workflow at a specific
step with the `-s` argument.
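
For example, a sketch of restarting at the third step (the path and the step
number are illustrative):

```sh
# Sketch: start from step 3 instead of from the beginning.
./workflows/test_slurmworkflow/start_workflow.sh -s 3
```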