Commit 63a29ea

docs: fix spelling mistakes in the project documentation

Added jsoref to list of Dioptra contributors.

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>

1 parent c289747

61 files changed (+113 -112 lines)

.github/workflows/pip-compile.yml (+1 -1)

@@ -91,7 +91,7 @@ jobs:
         run: |
           python3 -m tox run -e py311-linux-${{ matrix.architecture }}-${{ matrix.requirements }}

-      - name: run tox (MacOS, Python 3.11)
+      - name: run tox (macOS, Python 3.11)
         if: ${{ (matrix.os == 'macos-13' || matrix.os == 'macos-latest') && matrix.python-version == '3.11' }}
         run: |
           python3 -m tox run -e py311-macos-${{ matrix.architecture }}-${{ matrix.requirements }}

CONTRIBUTING.md (+1 -1)

@@ -72,7 +72,7 @@ See the [Dioptra Commit Style Guide](./COMMIT_STYLE_GUIDE.md).

 #### Squashing

-All final commits will be squashed, therefore when squashing your branch, it’s important to make sure you update the commit message. If you’re using Github’s UI it will by default create a new commit message which is a combination of all commits and **does not follow the commit guidelines**.
+All final commits will be squashed, therefore when squashing your branch, it’s important to make sure you update the commit message. If you’re using GitHub’s UI it will by default create a new commit message which is a combination of all commits and **does not follow the commit guidelines**.

 If you’re working locally, it often can be useful to `--amend` a commit, or utilize `rebase -i` to reorder, squash, and reword your commits.

CONTRIBUTORS.md (+1 -0)

@@ -18,3 +18,4 @@ lbarbMITRE
 cminiter
 pscemama-mitre
 alexb1200
+jsoref

DEVELOPER.md (+9 -9)

@@ -17,12 +17,12 @@ Ensure that you have Python 3.11 installed and that it is available in your PATH
 | linux-arm64-py3.11-requirements-dev.txt | Linux | arm64 |||
 | linux-arm64-py3.11-requirements-dev-tensorflow.txt | Linux | arm64 |||
 | linux-arm64-py3.11-requirements-dev-pytorch.txt | Linux | arm64 |||
-| macos-amd64-py3.11-requirements-dev.txt | MacOS | x86-64 |||
-| macos-amd64-py3.11-requirements-dev-tensorflow.txt | MacOS | x86-64 |||
-| macos-amd64-py3.11-requirements-dev-pytorch.txt | MacOS | x86-64 |||
-| macos-arm64-py3.11-requirements-dev.txt | MacOS | arm64 |||
-| macos-arm64-py3.11-requirements-dev-tensorflow.txt | MacOS | arm64 |||
-| macos-arm64-py3.11-requirements-dev-pytorch.txt | MacOS | arm64 |||
+| macos-amd64-py3.11-requirements-dev.txt | macOS | x86-64 |||
+| macos-amd64-py3.11-requirements-dev-tensorflow.txt | macOS | x86-64 |||
+| macos-amd64-py3.11-requirements-dev-pytorch.txt | macOS | x86-64 |||
+| macos-arm64-py3.11-requirements-dev.txt | macOS | arm64 |||
+| macos-arm64-py3.11-requirements-dev-tensorflow.txt | macOS | arm64 |||
+| macos-arm64-py3.11-requirements-dev-pytorch.txt | macOS | arm64 |||
 | win-amd64-py3.11-requirements-dev.txt | Windows | x86-64 |||
 | win-amd64-py3.11-requirements-dev-tensorflow.txt | Windows | x86-64 |||
 | win-amd64-py3.11-requirements-dev-pytorch.txt | Windows | x86-64 |||

@@ -34,7 +34,7 @@ python -m venv .venv
 ```

 Activate the virtual environment after creating it.
-To activate it on MacOS/Linux:
+To activate it on macOS/Linux:

 ```sh
 source .venv/bin/activate

@@ -53,7 +53,7 @@ python -m pip install --upgrade pip pip-tools
 ```

 Finally, use `pip-sync` to install the dependencies in your chosen requirements file and install `dioptra` in development mode.
-On MacOS/Linux:
+On macOS/Linux:

 ```sh
 # Replace "linux-amd64-py3.11-requirements-dev.txt" with your chosen file

@@ -133,7 +133,7 @@ make code-check

 This project has a [commit style guide](./COMMIT_STYLE_GUIDE.md) that is enforced using the `gitlint` tool.
 Developers are expected to run `gitlint` and validate their commit message before opening a Pull Request.
-After commiting your contribution, activate your virtual environment if you haven't already and run:
+After committing your contribution, activate your virtual environment if you haven't already and run:

 ```sh
 python -m tox run -e gitlint

Makefile (+1 -1)

@@ -537,7 +537,7 @@ endif
 	$(call save_sentinel_file,$@)

 #################################################################################
-# AUTO-GENERATED PROJECT BUILD RECEIPES                                         #
+# AUTO-GENERATED PROJECT BUILD RECIPES                                          #
 #################################################################################

 $(call generate_full_docker_image_recipe,MLFLOW_TRACKING,CONTAINER_IMAGE_TAG)

RELEASE.md (+2 -2)

@@ -14,9 +14,9 @@
    - node
    - redis

-3. Edit `examples/scripts/venvs/examples-setup-requirements.txt` and set an upper bound constraint on each of the packages listed (if one isn't set already). The upper bounds can be determined by creating the a virtual environment using this file from the `dev` branch and testing that the instructions in `examples/README.md` work. Once the repo maintainer confirms that the environment works and the user can run the provided scripts and submit jobs from the Jupyter notebook, run `python -m pip freeze` to check what is currently installed. Use the known working versions to set the upper bound constraint.
+3. Edit `examples/scripts/venvs/examples-setup-requirements.txt` and set an upper bound constraint on each of the packages listed (if one isn't set already). The upper bounds can be determined by creating a virtual environment using this file from the `dev` branch and testing that the instructions in `examples/README.md` work. Once the repo maintainer confirms that the environment works and the user can run the provided scripts and submit jobs from the Jupyter notebook, run `python -m pip freeze` to check what is currently installed. Use the known working versions to set the upper bound constraint.

-4. Fetch the latest requirement files generated by GitHub Actions [here](https://github.com/usnistgov/dioptra/actions/workflows/pip-compile.yml). Download the `requirements-files` zip, unpack it, and move the files with `*requirements-dev*` into the `requirements/` folder, and the rest into the `docker/requirements` folder. In addition, get someone with an M1/M2 Mac to regenerate the MacOS ARM64 requirements files.
+4. Fetch the latest requirement files generated by GitHub Actions [here](https://github.com/usnistgov/dioptra/actions/workflows/pip-compile.yml). Download the `requirements-files` zip, unpack it, and move the files with `*requirements-dev*` into the `requirements/` folder, and the rest into the `docker/requirements` folder. In addition, get someone with an M1/M2 Mac to regenerate the macOS ARM64 requirements files.

 5. Commit the changes using the message `build: set container tags and package upper bounds for merge to main`

cookiecutter-templates/cookiecutter-dioptra-deployment/{{cookiecutter.__project_slug}}/README.md (+1 -1)

@@ -93,7 +93,7 @@ The following subsections explain how to:
 - Assign GPUs to specific worker containers
 - Integrate custom containers in the Dioptra deployment

-In addition to the above, you may want to further customize the the Docker Compose configuration via the `docker-compose.override.yml` file to suit your needs, such as allocating explicit CPUs you want each container to use.
+In addition to the above, you may want to further customize the Docker Compose configuration via the `docker-compose.override.yml` file to suit your needs, such as allocating explicit CPUs you want each container to use.
 An example template file (`docker-compose.override.yml.template`) is provided as part of the deployment as a starting point.
 This can be copied to `docker-compose.override.yml` and modified.
 See the [Compose specification documentation](https://docs.docker.com/compose/compose-file/) for the full list of available options.

cookiecutter-templates/cookiecutter-dioptra-deployment/{{cookiecutter.__project_slug}}/docker-compose.override.yml.template (+1 -1)

@@ -27,7 +27,7 @@
 # configuration changes.
 #
 # A datasets directory is configured in the main docker-compose.yml file in the
-# cookicutter deployment generation. It is recommended that datasets_directory
+# cookiecutter deployment generation. It is recommended that datasets_directory
 # be left blank if mounts are being configured here.
 # ------------------------------------------------------------------------------

docker/ca-certificates/README.md (+1 -1)

@@ -20,4 +20,4 @@ There are some common situations where it is necessary to provide one or more ex
 2. You are building the containers in a corporate environment that has its own certificate authority and the containers need access to resources or repository mirrors on the corporate network.

 If these situations do not apply to you, or if you are unsure if they apply to you, then it is recommended that you try to build the containers without adding anything to this folder first.
-If the build process fails due to an HTTPS or SSL error, then that is a a telltale sign that you need to add extra CA certificates to this folder.
+If the build process fails due to an HTTPS or SSL error, then that is a telltale sign that you need to add extra CA certificates to this folder.

docs/source/getting-started/installation.rst (+11 -11)

@@ -37,7 +37,7 @@ The minimum requirements for installing the ``dioptra`` Python package on your h

 - CPU: An x86-64 processor
 - RAM: 4GB or higher
-- Operating System: Windows 10, MacOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
+- Operating System: Windows 10, macOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
 - Python 3.7 or above (3.7 and 3.8 are actively tested)

 .. _quickstart-create-environment:

@@ -57,7 +57,7 @@ There are two install options to start using `Conda Environments <https://docs.c
 The following links will provide an installation package for version 2020.11 of `Anaconda <https://docs.anaconda.com/>`_ on your host machine (must meet all :ref:`quickstart-system-requirements`).

 - `Anaconda for Windows <https://repo.anaconda.com/archive/Anaconda3-2020.11-Windows-x86_64.exe>`_
-- `Anaconda for MacOS <https://repo.anaconda.com/archive/Anaconda3-2020.11-MacOSX-x86_64.pkg>`_
+- `Anaconda for macOS <https://repo.anaconda.com/archive/Anaconda3-2020.11-macOSX-x86_64.pkg>`_
 - `Anaconda for Linux <https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh>`_

 If your host machine does not meet the :ref:`quickstart-system-requirements`, then go to the `Anaconda Installation Documents <https://docs.anaconda.com/anaconda/install/>`_ for more help.

@@ -67,7 +67,7 @@ There are two install options to start using `Conda Environments <https://docs.c
 The following links will provide an installation package for the latest version of `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_ on your host machine (must meet all :ref:`quickstart-system-requirements`).

 - `Miniconda for Windows <https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe>`_
-- `Miniconda for MacOS <https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.pkg>`_
+- `Miniconda for macOS <https://repo.anaconda.com/miniconda/Miniconda3-latest-macOSX-x86_64.pkg>`_
 - `Miniconda for Linux <https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh>`_

 If your host machine does not meet the :ref:`quickstart-system-requirements`, then go to the `Miniconda Installation Documents <https://docs.conda.io/en/latest/miniconda.html>`_ for more help.

@@ -82,9 +82,9 @@ The minimum requirements for test-driving the Testbed architecture locally on yo

 - CPU: Intel or AMD-based x86-64 processor with 4+ physical cores at 1.90GHz or higher (recommended)
 - RAM: 16GB or higher (recommended)
-- Operating System: Windows 10, MacOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
+- Operating System: Windows 10, macOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
 - GNU/Linux-compatible environment, see :ref:`quickstart-gnu-linux-environment`
-- Docker Desktop 3.1.0 or later (Windows/MacOS)
+- Docker Desktop 3.1.0 or later (Windows/macOS)

 .. _quickstart-gnu-linux-environment:

@@ -94,14 +94,14 @@ GNU/Linux Environments
 A host device that uses a GNU/Linux environment can be the following:

 - Most Linux distributions
-- MacOS/OS X with Homebrew_
+- macOS/OS X with Homebrew_
 - Windows with the `Windows Subsystem for Linux`_
 - A virtual machine running a Linux distribution

 .. note::

    Dioptra was developed for use with native GNU/Linux environments.
-   When using MacOS/OS X or Windows there is a chance you will encounter errors that are specific to your system's setup that are not covered in this documentation.
+   When using macOS/OS X or Windows there is a chance you will encounter errors that are specific to your system's setup that are not covered in this documentation.
    To resolve such issues, first look at the external documentation linked (i.e. Homebrew_ and `Windows Subsystem for Linux`_) before submitting a bug report.
    Also, when using a virtual machine it is likely the performance can be throttled because of the CPU and Memory allocations set at the time the virtual machine was configured.
    If performance becomes an issue when using a virtual machine, consider increasing the CPU and Memory resources allocated to the machine.

@@ -121,7 +121,7 @@ To clone the repository, open a new **Terminal** session for your operating syst

       Use the keyboard shortcut :kbd:`ctrl` + :kbd:`alt` + :kbd:`t` to open the **Terminal**.

-   .. tab-item:: MacOS
+   .. tab-item:: macOS

       Use the keyboard shortcut :kbd:`command` + :kbd:`space` to open the **Spotlight Search**, type ``Terminal`` into the search bar, and click the *Terminal* application under *Top Hit* at the top of your results.

@@ -138,10 +138,10 @@ Next, navigate to the directory where you will clone the repository,

 .. attention::

-   Windows Subsystem for Linux (WSL) and MacOS users may encounter performance and file permission issues depending on the directory where the repository is cloned.
+   Windows Subsystem for Linux (WSL) and macOS users may encounter performance and file permission issues depending on the directory where the repository is cloned.
    This problem is due to the way that Docker is implemented on these operating systems.
-   For WSL users, these issues may occur if you clone the repository within any folder on the Windows filesystem under ``/mnt/c``, while for MacOS users it may occur if the repository is cloned within the ``Downloads`` or ``Documents`` directory.
-   For this reason, WSL and MacOS users are both encouraged to create and clone the repository into a projects directory in their home directory,
+   For WSL users, these issues may occur if you clone the repository within any folder on the Windows filesystem under ``/mnt/c``, while for macOS users it may occur if the repository is cloned within the ``Downloads`` or ``Documents`` directory.
+   For this reason, WSL and macOS users are both encouraged to create and clone the repository into a projects directory in their home directory,

 .. code-block:: sh

docs/source/getting-started/running-dioptra.rst (+2 -2)

@@ -66,7 +66,7 @@ Cruft will now run and prompt you to configure the deployment. See the :ref:`App

 We recommend identifying a location to store datasets you will want to use with Dioptra at this point and setting the ``datasets_directory`` variable accordingly. See the :ref:`Downloading the datasets <getting-started-acquiring-datasets>` section for more details.

-Once you have configured your deployment, continue following the instructions for initialzing and starting your deployment below.
+Once you have configured your deployment, continue following the instructions for initializing and starting your deployment below.

 .. code:: sh

@@ -337,7 +337,7 @@ The following subsections explain how to:
 - Assign GPUs to specific worker containers
 - Integrate custom containers in the Dioptra deployment

-In addition to the above, you may want to further customize the the Docker Compose configuration via the ``docker-compose.override.yml`` file to suit your needs, such as allocating explicit CPUs you want each container to use.
+In addition to the above, you may want to further customize the Docker Compose configuration via the ``docker-compose.override.yml`` file to suit your needs, such as allocating explicit CPUs you want each container to use.
 An example template file (``docker-compose.override.yml.template``) is provided as part of the deployment as a starting point.
 This can be copied to ``docker-compose.override.yml`` and modified.
 See the `Compose specification documentation <https://docs.docker.com/compose/compose-file/>`__ for the full list of available options.

docs/source/glossary.rst (+1 -1)

@@ -46,7 +46,7 @@ Glossary
       A copy of the paper is available on the arXiv at https://arxiv.org/abs/1511.07528.

    JSON
-      An acronym for the term *Javascript Object Notation*.
+      An acronym for the term *JavaScript Object Notation*.
       JSON is a lightweight data-interchange format that is completely language independent despite being based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.
       For more information, see https://www.json.org.

docs/source/overview/executive-summary.rst (+1 -1)

@@ -171,7 +171,7 @@ The architecture is built entirely from open-source resources making it easy for
 Assumptions / System Requirements
 ---------------------------------

-Most of the built-in demonstrations in the testbed assume the testbed is deployed on Unix-based operating systems (e.g., Linux, MacOS).
+Most of the built-in demonstrations in the testbed assume the testbed is deployed on Unix-based operating systems (e.g., Linux, macOS).
 Those familiar with the Windows Subsystem for Linux (WSL) should be able to deploy it on Windows, but this mode is not explicitly supported at this time.
 Most included demos perform computationally intensive calculations requiring access to significant computational resources such as Graphics Processing Units (GPUs).
 The architecture has been tested on a :term:`NVIDIA DGX` server with 4 GPUs.

docs/source/user-guide/custom-task-plugins.rst (+1 -1)

@@ -140,7 +140,7 @@ For local tasks we will use a different notation for both creating and invoking
 Creating a Local Task
 ---------------------

-In general the major difference besides location of local task plugins is that the the `@task` decorator now replaces the `@pyplugs.register` decorator.
+In general the major difference besides location of local task plugins is that the `@task` decorator now replaces the `@pyplugs.register` decorator.
 The task decorator is imported from the prefect library:

 .. code-block:: python

docs/source/user-guide/entry-points.rst (+1 -1)

@@ -143,7 +143,7 @@ uri
 Executable Script
 -----------------

-The entry point script, in principle, is just an executable Python script that accepts command-line options, so Testbed users can get started quickly by using their pre-existing Python scripts.
+The entry point script, in principle, is just an executable Python script that accepts command-line options, so Testbed users can get started quickly by using their preexisting Python scripts.
 However, if users wish to make use of the Testbed's powerful job tracking and task plugin capabilities, they will need to adopt the Testbed's standard for writing entry point scripts outlined in this section.

 .. attention::

examples/task-plugins/dioptra_custom/feature_squeezing/cw_inf_plugin.py (+1 -1)

@@ -97,7 +97,7 @@ def create_adversarial_cw_inf_dataset(
         color_mode=color_mode,
         class_mode=label_mode,
         batch_size=batch_size,
-        shuffle=True,  # alse,
+        shuffle=True,  # false,
     )
     num_images = data_flow.n
     img_filenames = [Path(x) for x in data_flow.filenames]

examples/task-plugins/dioptra_custom/feature_squeezing/cw_l2_plugin.py (+1 -1)

@@ -102,7 +102,7 @@ def create_adversarial_cw_l2_dataset(
         color_mode=color_mode,
         class_mode=label_mode,
         batch_size=batch_size,
-        shuffle=True,  # alse,
+        shuffle=True,  # false,
     )
     num_images = data_flow.n
     img_filenames = [Path(x) for x in data_flow.filenames]

examples/task-plugins/dioptra_custom/feature_squeezing/jsma_plugin.py (+1 -1)

@@ -93,7 +93,7 @@ def create_adversarial_jsma_dataset(
         color_mode=color_mode,
         class_mode=label_mode,
         batch_size=batch_size,
-        shuffle=True,  # alse,
+        shuffle=True,  # false,
     )
     num_images = data_flow.n
     img_filenames = [Path(x) for x in data_flow.filenames]

examples/task-plugins/dioptra_custom/feature_squeezing/squeeze_plugin.py (+1 -1)

@@ -84,7 +84,7 @@ def feature_squeeze(
         run_id=run_id,
     )

-    batch_size = 32  # There is currently a bug preventing batch size from getting passsed in correctly
+    batch_size = 32  # There is currently a bug preventing batch size from getting passed in correctly
     tensorflow_global_seed: int = rng.integers(low=0, high=2**31 - 1)
     dataset_seed: int = rng.integers(low=0, high=2**31 - 1)