
Typos #1063


Merged: 4 commits, Nov 3, 2023

4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -25,7 +25,7 @@ All over the handbook, version notes or information relating to datalad versions
 #### Advanced
 
 #### Usecases
-- A new usecase about encrypted workflows is now part of the handbook ([#895][])
+- A new use case about encrypted workflows is now part of the handbook ([#895][])
 
 #### Miscellaneous additions
 - The Makefile in the source repository received a more intuitive and fine-grained structure ([#901][])
@@ -100,7 +100,7 @@ It includes contributions the new contributors @eort, @mslw, @tguiot, @jhpb7 and

 - The GitHub project of the handbook now uses templates for easier issue generation. ([#768][])
 - A number of CSS improvements fix the rendering of bullet points ([#770][])
-- The ML usecase was minified to speed up builds ([#790][])
+- The ML use case was minified to speed up builds ([#790][])
 - A new code list for the DGPA workshop was added ([#820][])
 
 ## v0.15 (November 25 2021) -- LaTeX improvements
2 changes: 1 addition & 1 deletion docs/basics/101-109-rerun.rst
@@ -239,7 +239,7 @@ Finally, save this note.

 Note that :dlcmd:`rerun` can re-execute the run records of both a :dlcmd:`run`
 or a :dlcmd:`rerun` command,
-but not with any other type of datalad command in your history
+but not with any other type of DataLad command in your history
 such as a :dlcmd:`save` on results or outputs after you executed a script.
 Therefore, make it a
 habit to record the execution of scripts by plugging it into :dlcmd:`run`.
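
For context, recording a script execution and re-executing it later looks roughly like this (a minimal sketch; the script path and commit hash are hypothetical):

```
$ datalad run -m "run analysis" "python code/myscript.py"
$ datalad rerun 0b1a6c3   # the SHASUM of the run commit; HEAD works too
```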
4 changes: 2 additions & 2 deletions docs/basics/101-110-run2.rst
@@ -111,7 +111,7 @@ How can that be?
 Just as the ``.mp3`` files, the ``.jpg`` file content is not present
 locally after a :dlcmd:`clone`, and we did not :dlcmd:`get` it yet!
 
-This is where the ``-i``/``--input`` option for a datalad run becomes useful.
+This is where the ``-i``/``--input`` option for a ``datalad run`` becomes useful.
 The content of everything that is specified as an ``input`` will be retrieved
 prior to running the command.
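
A minimal sketch of such a call (the file names and the conversion command are illustrative):

```
$ datalad run -m "resize cover" \
    --input "covers/cover.jpg" \
    --output "covers/cover_small.jpg" \
    "convert -resize 400x400 covers/cover.jpg covers/cover_small.jpg"
```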

@@ -504,4 +504,4 @@ Apart from displaying the command that will be ran, you will learn *where* the c

 .. [#f1] In shell programming, commands exit with a specific code that indicates
    whether they failed, and if so, how. Successful commands have the exit code zero. All failures
-   have exit codes greater than zero.
+   have exit codes greater than zero.
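
Exit codes can be checked with the shell variable `$?` (a quick illustration; the exact non-zero value varies by command and platform):

```
$ ls nosuchfile
ls: cannot access 'nosuchfile': No such file or directory
$ echo $?     # non-zero: the command failed
2
$ echo hello
hello
$ echo $?     # zero: the command succeeded
0
```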
2 changes: 1 addition & 1 deletion docs/basics/101-113-summary.rst
@@ -12,7 +12,7 @@ command, and discovered the concept of *locked* content.
 track of what you do in your dataset by capturing all :term:`provenance`.
 
 * A :dlcmd:`run` command generates a ``run record`` in the commit. This :term:`run record` can be used
-  by datalad to re-execute a command with :dlcmd:`rerun SHASUM`, where SHASUM is the
+  by DataLad to re-execute a command with :dlcmd:`rerun SHASUM`, where SHASUM is the
   commit hash of the :dlcmd:`run` command that should be re-executed.
 
 * If a :dlcmd:`run` or :dlcmd:`rerun` does not modify any content, it will not write a
4 changes: 2 additions & 2 deletions docs/basics/101-116-sharelocal.rst
@@ -52,8 +52,8 @@ how a dataset can be obtained from a path (instead of a URL as shown in the sect
 :ref:`installds`). Thirdly, ``DataLad-101`` is a dataset that can
 showcase many different properties of a dataset already, but it will
 be an additional learning experience to see how the different parts
-of the dataset -- text files, larger files, datalad subdataset,
-:dlcmd:`run` commands -- will appear upon installation when shared.
+of the dataset -- text files, larger files, subdatasets,
+:term:`run record`\s -- will appear upon installation when shared.
 And lastly, you will likely "share a dataset with yourself" whenever you
 will be using a particular dataset of your own creation as input for
 one or more projects.
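
For context, obtaining a dataset from a local path uses the same clone mechanics as a URL (a minimal sketch; the paths are illustrative):

```
$ datalad clone ../original/DataLad-101 DataLad-101
```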
6 changes: 3 additions & 3 deletions docs/basics/101-121-siblings.rst
@@ -26,7 +26,7 @@ a script on `dataset nesting <https://raw.githubusercontent.com/datalad/datalad.
 Because he found this very helpful in understanding dataset
 nesting concepts, he decided to download it from GitHub, and saved it in the ``code/`` directory.
 
-He does it using the datalad command :dlcmd:`download-url`
+He does it using the DataLad command :dlcmd:`download-url`
 that you experienced in section :ref:`createDS` already: This command will
 download a file just as ``wget``, but it can also take a commit message
 and will save the download right to the history of the dataset that you specify,
@@ -51,7 +51,7 @@ and run the following command
    -O code/nested_repos.sh \
    https://raw.githubusercontent.com/datalad/datalad.org/7e8e39b1/content/asciicast/seamless_nested_repos.sh
 
-Run a quick datalad status:
+Run a quick ``datalad status``:
 
 .. runrecord:: _examples/DL-101-121-102
    :language: console
@@ -86,7 +86,7 @@ Do we need to install the installed dataset of our room mate
 as a copy again?
 
 No, luckily, it's simpler and less convoluted. What we have to
-do is to *register* a datalad :term:`sibling`: A reference to our room mate's
+do is to *register* a DataLad :term:`sibling`: A reference to our room mate's
 dataset in our own, original dataset.
 
 .. gitusernote:: Remote siblings
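
Registering such a sibling might look like this (a minimal sketch; the sibling name and path are hypothetical):

```
$ datalad siblings add --dataset . \
    --name roommate --url ../mock_user/DataLad-101
```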
2 changes: 1 addition & 1 deletion docs/basics/101-127-yoda.rst
@@ -211,7 +211,7 @@ for a whole analysis dataset. At one point you might also write a
 scientific paper about your analysis in a paper project, and the
 whole analysis project can easily become a modular component in a paper
 project, to make sharing paper, code, data, and results easy.
-The usecase :ref:`usecase_reproducible_paper` contains a step-by-step instruction on
+The use case :ref:`usecase_reproducible_paper` contains a step-by-step instruction on
 how to build and share such a reproducible paper, if you want to learn
 more.

2 changes: 1 addition & 1 deletion docs/basics/101-136-filesystem.rst
@@ -601,7 +601,7 @@ provenance record is lost:
 Nevertheless, copying files with :dlcmd:`copy-file` is easier and safer
 than moving them with standard Unix commands, especially so for annexed files.
 A more detailed introduction to :dlcmd:`copy-file` and a concrete
-usecase can be found in the online version of the handbook.
+use case can be found in the online version of the handbook.
 
 Let's clean up:

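
A typical :dlcmd:`copy-file` call copies a file, including available annexed content, into another dataset and saves it there (a minimal sketch; the paths are hypothetical):

```
$ datalad copy-file results/figure1.png -d ../other_dataset
```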
8 changes: 4 additions & 4 deletions docs/basics/101-180-FAQ.rst
@@ -305,7 +305,7 @@ If you do not want to invent a description yourself, you can run
 After cloning a dataset, you can retrieve file contents by running
 
 ```
-datalad get <path/to/directory/or/file>`
+datalad get <path/to/directory/or/file>
 ```
 
 This command will trigger a download of the files, directories, or
@@ -340,8 +340,8 @@ If you do not want to invent a description yourself, you can run

 ### Find out what has been done
 
-DataLad datasets contain their history in the ``git log``.
-By running ``git log`` (or a tool that displays Git history) in the dataset or on
+DataLad datasets contain their history in the `git log`.
+By running `git log` (or a tool that displays Git history) in the dataset or on
 specific files, you can find out what has been done to the dataset or to individual files
 by whom, and when.
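
For instance, the history of a single file can be inspected with standard Git (the file name is illustrative):

```
git log --oneline -- code/nested_repos.sh
```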

@@ -391,7 +391,7 @@ If you do not want to invent a description yourself, you can run
 subdatasets, run 'datalad get -n <path/to/subdataset>'
 
 Afterwards, you can browse the retrieved metadata to find out about
-subdataset contents, and retrieve individual files with `datalad get`.
+subdataset contents, and retrieve individual files with 'datalad get'.
 If you use 'datalad get <path/to/subdataset>', all contents of the
 subdataset will be downloaded at once.

2 changes: 1 addition & 1 deletion docs/beyond_basics/101-146-providers.rst
@@ -29,7 +29,7 @@ authentication, the procedure is always the same:
 Upon first access via any downloading command, users will be prompted for their
 credentials from the command line. Subsequent downloads handle authentication
 in the background as long as the credentials stay valid. An example of this
-credential management is shown in the usecase :ref:`usecase_HCP_dataset`:
+credential management is shown in the use case :ref:`usecase_HCP_dataset`:
 Data is stored in S3 buckets that require authentication with AWS credentials.
 The first :dlcmd:`get` to retrieve any of the data will prompt for
 the credentials from the terminal. If the given credentials are valid, the
12 changes: 6 additions & 6 deletions docs/beyond_basics/101-147-riastores.rst
@@ -244,7 +244,7 @@ A few examples are:
 - Clean up unnecessary files and minimize a (or all) repository with :term:`Git`\s
   `garbage collection (gc) <https://git-scm.com/docs/git-gc>`_ command.
 
-The usecase :ref:`usecase_datastore` demonstrates the advantages of this in a
+The use case :ref:`usecase_datastore` demonstrates the advantages of this in a
 large scientific institute with central data management.
 Due to the git-annex ora-remote special remote, datasets can be exported and
 stored as archives to save disk space.
@@ -288,7 +288,7 @@ on where the RIA store (should) exists, or rather, which file transfer protocol
 .. find-out-more:: RIA stores with HTTP access
 
    Setting up RIA store with access via HTTP requires additional server-side configurations for Git.
-   `Git's http-backend documentation <https://git-scm.com/docs/git-http-backend>`_ can point you the relevant configurations for your webserver and usecase.
+   `Git's http-backend documentation <https://git-scm.com/docs/git-http-backend>`_ can point you the relevant configurations for your web server and usecase.
 
    Note that it is always required to specify an :term:`absolute path` in the URL!

@@ -584,7 +584,7 @@ in the findoutmore below:

 .. find-out-more:: On cloning datasets with subdatasets from RIA stores
 
-   The usecase :ref:`usecase_HCP_dataset`
+   The use case :ref:`usecase_HCP_dataset`
    details a RIA-store based publication of a large dataset, split into a nested
    dataset hierarchy with about 4500 subdatasets in total. But how can links to
    subdatasets work, if datasets in a RIA store are stored in a flat hierarchy,
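
For context, cloning from a RIA store uses a `ria+` URL plus a dataset ID or alias (a rough sketch; host, store path, and alias are placeholders):

```
$ datalad clone 'ria+ssh://example.org/path/to/store#~hcp-structural' hcp
```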
@@ -662,7 +662,7 @@ hide the technical layers of the RIA setup. For example, custom procedures provi
 at dataset creation could automatically perform a sibling setup in a RIA store,
 and also create an associated GitLab repository with a publication dependency to
 the RIA store to ease publishing data or cloning the dataset.
-The usecase :ref:`usecase_datastore` details the setup of RIA stores in a
+The use case :ref:`usecase_datastore` details the setup of RIA stores in a
 scientific institute and demonstrates this example.
 
 To simplify repository access beyond using aliases, the datasets stored in a RIA
@@ -675,7 +675,7 @@ From a user's perspective, the RIA store would thus stay completely hidden.

 Standard maintenance tasks by data stewards with knowledge about RIA stores and
 access to it can be performed easily or even in an automated fashion. The
-usecase :ref:`usecase_datastore` showcases some examples of those operations.
+use case :ref:`usecase_datastore` showcases some examples of those operations.
 
 Summary
 ^^^^^^^
@@ -713,7 +713,7 @@ procedures.

 .. todo::
 
-   Link UKBiobank on supercomputer usecase once ready
+   Link UKBiobank on supercomputer use case once ready
 
 shows how this feature can come in handy.

2 changes: 1 addition & 1 deletion docs/beyond_basics/101-148-clonepriority.rst
@@ -89,7 +89,7 @@ Instead of adding configurations with precise URLs you can also make use of temp
 A placeholder takes the form ``{placeholdername}`` and can reference any property that can be inferred from the parent dataset's knowledge about the target superset, specifically any subdataset information that exists as a key-value pair within ``.gitmodules``.
 For convenience, an existing `datalad-id` record is made available under the shortened name `id`.
 In all likelihood, the list of available placeholders will be expanded in the future.
-Do you have a usecase and need a specific placeholder?
+Do you have a use case and need a specific placeholder?
 `Reach out to us <https://github.com/datalad/datalad/issues/new>`_, we may be able to add the placeholders you need!
 
 When could this be useful?
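
Such a templated clone candidate might be configured roughly like this (a sketch under the assumption of a RIA store sibling; the store URL and the cost prefix in the configuration name are illustrative):

```
$ git config -f .datalad/config \
    datalad.get.subdataset-source-candidate-100mystore \
    'ria+ssh://example.org/path/to/store#{id}'
```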
2 changes: 1 addition & 1 deletion docs/beyond_basics/101-149-copyfile.rst
@@ -329,4 +329,4 @@ Although it requires some Unix-y command line magic, it can be automated for lar

 .. rubric:: Footnotes
 
-.. [#f1] You can read about the human connectome dataset in the usecase :ref:`usecase_HCP_dataset`.
+.. [#f1] You can read about the human connectome dataset in the use case :ref:`usecase_HCP_dataset`.
2 changes: 1 addition & 1 deletion docs/beyond_basics/101-160-gobig.rst
@@ -8,7 +8,7 @@ Version controlling or analyzing data in datasets with a total size of up to a
 few hundred GB, with some tens of thousands of files at maximum? Usually, this
 should work fine. If you want to go beyond this scale, however, you should read
 this section to learn how to properly scale up. As a general rule, consider this
-section relevant once you have a usecase in which you would go substantially
+section relevant once you have a use case in which you would go substantially
 beyond 100k files in a single dataset.
 
 The contents of this chapter exist thanks to some pioneers that took a leap and
2 changes: 1 addition & 1 deletion docs/beyond_basics/101-168-dvc.rst
@@ -25,7 +25,7 @@ This tutorial consists of the following steps:

 This handbook section demonstrates how DataLad could be used as an alternative to DVC.
 We demonstrate each step with DVC according to their tutorial, and then recreate a corresponding DataLad workflow.
-The usecase :ref:`usecase_ML` demonstrates a similar analysis in a completely DataLad-centric fashion.
+The use case :ref:`usecase_ML` demonstrates a similar analysis in a completely DataLad-centric fashion.
 If you want to, you can code along, or simply read through the presentation of DVC and DataLad commands.
 Some familiarity with DataLad can be helpful, but if you have never used DataLad, footnotes in each section can point you relevant chapters for more insights on a command or concept.
 If you have never used DVC, `its documentation <https://dvc.org/doc>`_ (including the `command reference <https://dvc.org/doc/command-reference>`_) can answer further questions.
2 changes: 1 addition & 1 deletion docs/beyond_basics/101-169-cluster.rst
@@ -11,7 +11,7 @@ We hope to grow this chapter further, so please `get in touch <https://github.co
 Pointers to content in other chapters
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To find out more about centralized storage solutions, you may want to checkout the usecase :ref:`usecase_datastore` or the section :ref:`riastore`.
+To find out more about centralized storage solutions, you may want to checkout the use case :ref:`usecase_datastore` or the section :ref:`riastore`.
 
 DataLad installation on a cluster
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2 changes: 1 addition & 1 deletion docs/beyond_basics/101-171-enki.rst
@@ -14,7 +14,7 @@ Walkthrough: Parallel ENKI preprocessing with fMRIprep

 The previous section has been an overview on parallel, provenance-tracked computations in DataLad datasets.
 While the general workflow entails a complete setup, it is usually easier to understand it by seeing it applied to a concrete usecase.
-Its even more informative if that usecase includes some complexities that do not exist in the "picture-perfect" example but are likely to arise in real life.
+Its even more informative if that use case includes some complexities that do not exist in the "picture-perfect" example but are likely to arise in real life.
 Therefore, the following walk-through in this section is a write-up of an existing and successfully executed analysis.
 
 The analysis
2 changes: 1 addition & 1 deletion docs/code_from_chapters/ABCD.rst
@@ -224,7 +224,7 @@ Datasets can be nested in superdataset-subdataset hierarchies.
 This overcomes scaling issues.
 Some dataset that we work with including ABCD become incredibly large, and when they exceed a few 100k files version control tools can struggle and break.
 By nesting datasets, and you will see concrete examples later, you can overcome this and split a dataset into manageable pieces.
-If you are interested in finding out more, take a look into the usecase :ref:`usecase_HCP_dataset` or the chapter :ref:`chapter_gobig`.
+If you are interested in finding out more, take a look into the use case :ref:`usecase_HCP_dataset` or the chapter :ref:`chapter_gobig`.
 
 But it also helps to link datasets as modular units together, and maximizes the potential for reuse of the individual datasets.
 In the context of data analysis, it is especially helpful to do this to link input data to an analysis dataset -- it helps to reuse data in multiple analysis, to link input data in a precise version, and to create an intuitively structured dataset layout.
2 changes: 1 addition & 1 deletion docs/code_from_chapters/OHBM_OSR.rst
@@ -55,7 +55,7 @@ One of the analysis components for this and most other workflows is data.
 DataLad makes it easy to "install" data as if it would be software, and the Datalad 0.13 release comes with some even more exiting features for data consumption than what DataLad can already do.
 
 For example, the human connectome project (HCP) data exists as a datalad dataset on :term:`Github` now. You can find and install it at `github.com/datalad-datasets/human-connectome-project-openaccess <https://github.com/datalad-datasets/human-connectome-project-openaccess>`_.
-If you are interested in the creation of this dataset, the usecase :ref:`usecase_hcp_dataset` will talk about the details.
+If you are interested in the creation of this dataset, the use case :ref:`usecase_hcp_dataset` will talk about the details.
 Beyond access to the full HCP data, there are also subsets of the HCP data being created and transformed into BIDS-like formats, and the newly introduced feature of RIA stores makes it possible to install these HCP data subsets in specific versions, for example BIDS formatted.
 You can read up on this new feature in the section :ref:`riastore`.
 Here is how to install the "structural preprocessed" subset of the HCP dataset that has been transformed into a bids like format from a public datalad RIA store into a directory called ``.source``:
2 changes: 1 addition & 1 deletion docs/code_from_chapters/dgpa.rst
@@ -187,7 +187,7 @@ Datasets can be nested in superdataset-subdataset hierarchies.
 This overcomes scaling issues.
 Sometimes datasets that we work with become incredibly large, and when they exceed a few 100k files version control tools can struggle and break.
 By nesting datasets, you can overcome this and split a dataset into manageable pieces.
-If you are interested in finding out more, take a look into the usecase :ref:`usecase_HCP_dataset` or the chapter :ref:`chapter_gobig`.
+If you are interested in finding out more, take a look into the use case :ref:`usecase_HCP_dataset` or the chapter :ref:`chapter_gobig`.
 
 But it also helps to link datasets as modular units together, and maximizes the potential for reuse of the individual datasets.
 In the context of data analysis, it is especially helpful to do this to link input data to an analysis dataset -- it helps to reuse data in multiple analysis, to link input data in a precise version, and to create an intuitively structured dataset layout.
2 changes: 1 addition & 1 deletion docs/code_from_chapters/yale.rst
@@ -181,7 +181,7 @@ Datasets can be nested in superdataset-subdataset hierarchies.
 This overcomes scaling issues.
 Some datasets that we work with, including ABCD, become incredibly large, and when they exceed a few 100k files version control tools can struggle and break.
 By nesting datasets, you can overcome this and split a dataset into manageable pieces.
-If you are interested in finding out more, take a look into the usecase :ref:`usecase_HCP_dataset` or the chapter :ref:`chapter_gobig`.
+If you are interested in finding out more, take a look into the use case :ref:`usecase_HCP_dataset` or the chapter :ref:`chapter_gobig`.
 
 But it also helps to link datasets as modular units together, and maximizes the potential for reuse of the individual datasets.
 In the context of data analysis, it is especially helpful to do this to link input data to an analysis dataset -- it helps to reuse data in multiple analysis, to link input data in a precise version, and to create an intuitively structured dataset layout.
4 changes: 2 additions & 2 deletions docs/contributing.rst
@@ -281,7 +281,7 @@ Beyond Basics
 related to any narrative. Readers are encouraged to read chapters or sections
 that fit their needs in whichever order they prefer.
 
-- Care should be taken to not turn content that could be a usecase into an
+- Care should be taken to not turn content that could be a use case into an
   advanced chapter.


@@ -344,7 +344,7 @@ The leading integer indicates the category of reference:

 1: Command references
 2: Concept references
-3: Usecase references
+3: Use case references
 
 The later integers are consecutively numbered in order of creation. If you want
 to create a new reference, just create a reference one integer higher than the