
Commit 04935f0

remove title case in vignette
1 parent 69aae22 commit 04935f0

File tree

1 file changed: +9 −9 lines


vignettes/slurmworkflow.Rmd

Lines changed: 9 additions & 9 deletions
@@ -40,7 +40,7 @@ HPC tested:
 We highly recommend using [renv](https://rstudio.github.io/renv/index.html)
 when working with an HPC.
 
-## Creating a New Workflow
+## Creating a new workflow
 
 ```{r, eval = FALSE}
 library(slurmworkflow)
@@ -65,7 +65,7 @@ Calling `create_workflow()` result in the creation of the *workflow directory*:
 *workflow summary* is returned and stored in the `wf` variable. We'll use it to
 add elements to the workflow.
 
-## Adding a Step to the Workflow
+## Adding a step to the workflow
 
 The first step that we use on most of our *workflows* ensures that our local
 project and the HPC are in sync.
@@ -125,7 +125,7 @@ setup_lines <- c(
 )
 ```
 
-### Run Code From an R Script
+### Run code from an R script
 
 Our next step will run the following script on the HPC.
 
@@ -185,7 +185,7 @@ As before we use the `add_workflow_step()` function. But we change the
 For the `sbatch` options, we ask here for 1 CPU, 4GB of RAM and a maximum of 10
 minutes.
 
-### Iterating Over Values in an R Script
+### Iterating over values in an R script
 
 One common task on an HPC is to run the same code many time and only vary the
 value of some arguments.
@@ -276,7 +276,7 @@ jobs where each job is a set of around 30 parallel simulations. Therefore, we
 here have 2 levels of parallelization. One in
 [slurm](https://slurm.schedmd.com/) and one in the script itself.
 
-### Running an R Function Directly
+### Running an R function directly
 
 Sometimes we want to run a simple function directly without storing it into an
 R script. The `step_tmpl_do_call()` and `step_tmpl_map()` do exactly that for
@@ -313,15 +313,15 @@ Finally, as this will be our last step, we override the `mail-type`
 `sbatch_opts` to receive a mail when this *step* finishes, whatever the outcome.
 This way we receive a mail telling us that the *workflow* is finished.
 
-## Using the Workflow on an HPC
+## Using the workflow on an HPC
 
 Now that our workflow is created how to actually run the code on the HPC?
 
 We assume that we are working on a project called "test_proj", that this
 project was cloned on the HPC at the following path: "~/projects/test_proj" and
 that the "~/projects/test_proj/workflows/" directory exists.
 
-### Sending the Workflow to the HPC
+### Sending the workflow to the HPC
 
 The following commands are to be run from your local computer.
 
@@ -346,7 +346,7 @@ RStudio terminal.
 Note that it's `workflows\networks_estimation`. Windows uses back-slashes for
 directories and Unix OSes uses forward-slashes.
 
-#### Running the Workflow From the HPC
+#### Running the workflow from the HPC
 
 For this step, you must be at the command line on the HPC. This means that you
 have run: `ssh <user>@clogin01.sph.emory.edu` from your local computer.
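The send-then-run sequence covered by these two hunks can be sketched as a shell dry run. The host and directory names are the examples used in the vignette's diff context above; the user name is a placeholder, and `echo` stands in for actually executing the commands, so this is an illustration of the shape of the workflow rather than the package's documented procedure:

```shell
# Dry-run sketch: copy a workflow directory to the HPC, then log in to run it.
# HPC_USER is hypothetical; host and paths come from the vignette's examples.
HPC_USER="alice"                          # placeholder for <user>
HPC_HOST="clogin01.sph.emory.edu"         # login node named in the vignette
WF_DIR="workflows/networks_estimation"    # workflow directory built locally
DEST_DIR="projects/test_proj/workflows/"  # must already exist on the HPC

echo scp -r "$WF_DIR" "$HPC_USER@$HPC_HOST:~/$DEST_DIR"  # send the workflow
echo ssh "$HPC_USER@$HPC_HOST"                           # then work on the HPC
```

On Windows the local path would be written `workflows\networks_estimation`, as the hunk above notes.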
@@ -382,7 +382,7 @@ You can check the state of your running workflow as usual with `squeue -u <user>
 
 The logs for the workflows are in "workflows/test_slurmworkflow/log/".
 
-### The "start_workflow.sh" Script
+### The "start_workflow.sh" script
 
 This start script additionally allows you to start a workflow at a specific
 step with the `-s` argument.
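The `-s` behaviour described in this hunk can be illustrated with another dry run. The script path is the one named in the vignette's log-directory example; the `-s` flag is documented in the diff context above, but the step number here is purely illustrative:

```shell
# Dry-run sketch of starting a workflow from the HPC shell.
# The vignette documents a "-s" argument for starting at a specific step;
# step 3 below is a made-up example value.
START_SCRIPT="workflows/test_slurmworkflow/start_workflow.sh"

echo "$START_SCRIPT"        # start the workflow from its first step
echo "$START_SCRIPT -s 3"   # hypothetical restart at step 3
```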

0 commit comments
