CONTRIBUTING.md (1 addition, 1 deletion)

@@ -72,7 +72,7 @@ See the [Dioptra Commit Style Guide](./COMMIT_STYLE_GUIDE.md).
#### Squashing

-All final commits will be squashed, therefore when squashing your branch, it’s important to make sure you update the commit message. If you’re using Github’s UI it will by default create a new commit message which is a combination of all commits and **does not follow the commit guidelines**.
+All final commits will be squashed, therefore when squashing your branch, it’s important to make sure you update the commit message. If you’re using GitHub’s UI it will by default create a new commit message which is a combination of all commits and **does not follow the commit guidelines**.

If you’re working locally, it often can be useful to `--amend` a commit, or utilize `rebase -i` to reorder, squash, and reword your commits.
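
For the local route described above, a minimal sketch of one possible squash workflow (the branch names `my-feature` and `dev` are placeholders, not taken from the guide):

```sh
# Possible local squash workflow; "my-feature" and "dev" are placeholder branch names.
git switch my-feature
git rebase -i dev                               # mark later commits as "squash"/"fixup" and reword the result
git commit --amend                              # optionally polish the final commit message
git push --force-with-lease origin my-feature   # update the pull request branch after rewriting history
```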
Copy file name to clipboardExpand all lines: RELEASE.md
RELEASE.md (2 additions, 2 deletions)

@@ -14,9 +14,9 @@
- node
- redis

-3. Edit `examples/scripts/venvs/examples-setup-requirements.txt` and set an upper bound constraint on each of the packages listed (if one isn't set already). The upper bounds can be determined by creating the a virtual environment using this file from the `dev` branch and testing that the instructions in `examples/README.md` work. Once the repo maintainer confirms that the environment works and the user can run the provided scripts and submit jobs from the Jupyter notebook, run `python -m pip freeze` to check what is currently installed. Use the known working versions to set the upper bound constraint.
+3. Edit `examples/scripts/venvs/examples-setup-requirements.txt` and set an upper bound constraint on each of the packages listed (if one isn't set already). The upper bounds can be determined by creating a virtual environment using this file from the `dev` branch and testing that the instructions in `examples/README.md` work. Once the repo maintainer confirms that the environment works and the user can run the provided scripts and submit jobs from the Jupyter notebook, run `python -m pip freeze` to check what is currently installed. Use the known working versions to set the upper bound constraint.

-4. Fetch the latest requirement files generated by GitHub Actions [here](https://github.com/usnistgov/dioptra/actions/workflows/pip-compile.yml). Download the `requirements-files` zip, unpack it, and move the files with `*requirements-dev*` into the `requirements/` folder, and the rest into the `docker/requirements` folder. In addition, get someone with an M1/M2 Mac to regenerate the MacOS ARM64 requirements files.
+4. Fetch the latest requirement files generated by GitHub Actions [here](https://github.com/usnistgov/dioptra/actions/workflows/pip-compile.yml). Download the `requirements-files` zip, unpack it, and move the files with `*requirements-dev*` into the `requirements/` folder, and the rest into the `docker/requirements` folder. In addition, get someone with an M1/M2 Mac to regenerate the macOS ARM64 requirements files.

5. Commit the changes using the message `build: set container tags and package upper bounds for merge to main`
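
A rough sketch of the upper-bound check in step 3 (the virtual environment name is a placeholder; only the requirements file path and the `pip freeze` command come from the instructions above):

```sh
# Sketch of how the upper bounds in step 3 might be determined on the dev branch.
# The environment name ".venv-examples" is a placeholder, not part of the release notes.
python -m venv .venv-examples
source .venv-examples/bin/activate
python -m pip install -r examples/scripts/venvs/examples-setup-requirements.txt
# ...work through examples/README.md, run the provided scripts, and submit a job from the notebook...
python -m pip freeze    # record the known-working versions and use them as upper bound constraints
```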
cookiecutter-templates/cookiecutter-dioptra-deployment/{{cookiecutter.__project_slug}}/README.md (1 addition, 1 deletion)

@@ -93,7 +93,7 @@ The following subsections explain how to:
- Assign GPUs to specific worker containers
- Integrate custom containers in the Dioptra deployment

-In addition to the above, you may want to further customize the the Docker Compose configuration via the `docker-compose.override.yml` file to suit your needs, such as allocating explicit CPUs you want each container to use.
+In addition to the above, you may want to further customize the Docker Compose configuration via the `docker-compose.override.yml` file to suit your needs, such as allocating explicit CPUs you want each container to use.

An example template file (`docker-compose.override.yml.template`) is provided as part of the deployment as a starting point.
This can be copied to `docker-compose.override.yml` and modified.
See the [Compose specification documentation](https://docs.docker.com/compose/compose-file/) for the full list of available options.
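
A small sketch of how such an override is typically put in place, run from inside the generated deployment directory (whose name depends on your cookiecutter answers):

```sh
# Sketch: start a local override from the provided template and edit it as needed.
# Run from the generated deployment directory; the editor choice is just an example.
cp docker-compose.override.yml.template docker-compose.override.yml
"${EDITOR:-vi}" docker-compose.override.yml   # e.g., pin CPUs per container or add dataset mounts
```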
cookiecutter-templates/cookiecutter-dioptra-deployment/{{cookiecutter.__project_slug}}/docker-compose.override.yml.template (1 addition, 1 deletion)

@@ -27,7 +27,7 @@
# configuration changes.
#
# A datasets directory is configured in the main docker-compose.yml file in the
-# cookicutter deployment generation. It is recommended that datasets_directory
+# cookiecutter deployment generation. It is recommended that datasets_directory
# be left blank if mounts are being configured here.
docker/ca-certificates/README.md (1 addition, 1 deletion)

@@ -20,4 +20,4 @@ There are some common situations where it is necessary to provide one or more ex
2. You are building the containers in a corporate environment that has its own certificate authority and the containers need access to resources or repository mirrors on the corporate network.

If these situations do not apply to you, or if you are unsure if they apply to you, then it is recommended that you try to build the containers without adding anything to this folder first.
-If the build process fails due to an HTTPS or SSL error, then that is a a telltale sign that you need to add extra CA certificates to this folder.
+If the build process fails due to an HTTPS or SSL error, then that is a telltale sign that you need to add extra CA certificates to this folder.
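
If that error does appear, the remedy described above amounts to dropping the extra certificates into this folder before rebuilding; a hedged sketch (the certificate filename is a placeholder):

```sh
# Sketch: add an extra CA certificate before rebuilding the containers.
# "corporate-root-ca.crt" is a placeholder for your organization's CA certificate file.
cp corporate-root-ca.crt docker/ca-certificates/
```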
docs/source/getting-started/installation.rst (11 additions, 11 deletions)
@@ -37,7 +37,7 @@ The minimum requirements for installing the ``dioptra`` Python package on your h
- CPU: An x86-64 processor
- RAM: 4GB or higher
-- Operating System: Windows 10, MacOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
+- Operating System: Windows 10, macOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
- Python 3.7 or above (3.7 and 3.8 are actively tested)

.. _quickstart-create-environment:
@@ -57,7 +57,7 @@ There are two install options to start using `Conda Environments <https://docs.c
The following links will provide an installation package for version 2020.11 of `Anaconda <https://docs.anaconda.com/>`_ on your host machine (must meet all :ref:`quickstart-system-requirements`).

- `Anaconda for Windows <https://repo.anaconda.com/archive/Anaconda3-2020.11-Windows-x86_64.exe>`_
-- `Anaconda for MacOS<https://repo.anaconda.com/archive/Anaconda3-2020.11-MacOSX-x86_64.pkg>`_
+- `Anaconda for macOS<https://repo.anaconda.com/archive/Anaconda3-2020.11-macOSX-x86_64.pkg>`_
- `Anaconda for Linux <https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh>`_

If your host machine does not meet the :ref:`quickstart-system-requirements`, then go to the `Anaconda Installation Documents <https://docs.anaconda.com/anaconda/install/>`_ for more help.
@@ -67,7 +67,7 @@ There are two install options to start using `Conda Environments <https://docs.c
The following links will provide an installation package for the latest version of `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_ on your host machine (must meet all :ref:`quickstart-system-requirements`).

- `Miniconda for Windows <https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe>`_
-- `Miniconda for MacOS<https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.pkg>`_
+- `Miniconda for macOS<https://repo.anaconda.com/miniconda/Miniconda3-latest-macOSX-x86_64.pkg>`_
- `Miniconda for Linux <https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh>`_

If your host machine does not meet the :ref:`quickstart-system-requirements`, then go to the `Miniconda Installation Documents <https://docs.conda.io/en/latest/miniconda.html>`_ for more help.
@@ -82,9 +82,9 @@ The minimum requirements for test-driving the Testbed architecture locally on yo
- CPU: Intel or AMD-based x86-64 processor with 4+ physical cores at 1.90GHz or higher (recommended)
- RAM: 16GB or higher (recommended)
-- Operating System: Windows 10, MacOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
+- Operating System: Windows 10, macOS 10.14 or newer, Linux (Ubuntu 20.04 LTS recommended)
- GNU/Linux-compatible environment, see :ref:`quickstart-gnu-linux-environment`
-- Docker Desktop 3.1.0 or later (Windows/MacOS)
+- Docker Desktop 3.1.0 or later (Windows/macOS)

.. _quickstart-gnu-linux-environment:

@@ -94,14 +94,14 @@ GNU/Linux Environments
A host device that uses a GNU/Linux environment can be the following:

- Most Linux distributions
-- MacOS/OS X with Homebrew_
+- macOS/OS X with Homebrew_
- Windows with the `Windows Subsystem for Linux`_
- A virtual machine running a Linux distribution

.. note::

Dioptra was developed for use with native GNU/Linux environments.
-When using MacOS/OS X or Windows there is a chance you will encounter errors that are specific to your system's setup that are not covered in this documentation.
+When using macOS/OS X or Windows there is a chance you will encounter errors that are specific to your system's setup that are not covered in this documentation.
To resolve such issues, first look at the external documentation linked (i.e. Homebrew_ and `Windows Subsystem for Linux`_) before submitting a bug report.
Also, when using a virtual machine it is likely the performance can be throttled because of the CPU and Memory allocations set at the time the virtual machine was configured.
If performance becomes an issue when using a virtual machine, consider increasing the CPU and Memory resources allocated to the machine.
@@ -121,7 +121,7 @@ To clone the repository, open a new **Terminal** session for your operating syst
Use the keyboard shortcut :kbd:`ctrl` + :kbd:`alt` + :kbd:`t` to open the **Terminal**.

-.. tab-item:: MacOS
+.. tab-item:: macOS

Use the keyboard shortcut :kbd:`command` + :kbd:`space` to open the **Spotlight Search**, type ``Terminal`` into the search bar, and click the *Terminal* application under *Top Hit* at the top of your results.

@@ -138,10 +138,10 @@ Next, navigate to the directory where you will clone the repository,
.. attention::

-Windows Subsystem for Linux (WSL) and MacOS users may encounter performance and file permission issues depending on the directory where the repository is cloned.
+Windows Subsystem for Linux (WSL) and macOS users may encounter performance and file permission issues depending on the directory where the repository is cloned.
This problem is due to the way that Docker is implemented on these operating systems.
-For WSL users, these issues may occur if you clone the repository within any folder on the Windows filesystem under ``/mnt/c``, while for MacOS users it may occur if the repository is cloned within the ``Downloads`` or ``Documents`` directory.
-For this reason, WSL and MacOS users are both encouraged to create and clone the repository into a projects directory in their home directory,
+For WSL users, these issues may occur if you clone the repository within any folder on the Windows filesystem under ``/mnt/c``, while for macOS users it may occur if the repository is cloned within the ``Downloads`` or ``Documents`` directory.
+For this reason, WSL and macOS users are both encouraged to create and clone the repository into a projects directory in their home directory,
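
Following that recommendation, a minimal sketch (the `~/projects` path mirrors the wording above, and the repository URL is assumed from the usnistgov/dioptra links elsewhere in this changeset):

```sh
# Sketch: clone into a projects directory under the home directory, as recommended
# for WSL and macOS users. The URL is assumed from the usnistgov/dioptra repository.
mkdir -p ~/projects
cd ~/projects
git clone https://github.com/usnistgov/dioptra.git
```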
docs/source/getting-started/running-dioptra.rst (2 additions, 2 deletions)
@@ -66,7 +66,7 @@ Cruft will now run and prompt you to configure the deployment. See the :ref:`App
We recommend identifying a location to store datasets you will want to use with Dioptra at this point and setting the ``datasets_directory`` variable accordingly. See the :ref:`Downloading the datasets <getting-started-acquiring-datasets>` section for more details.

-Once you have configured your deployment, continue following the instructions for initialzing and starting your deployment below.
+Once you have configured your deployment, continue following the instructions for initializing and starting your deployment below.

.. code:: sh

@@ -337,7 +337,7 @@ The following subsections explain how to:
- Assign GPUs to specific worker containers
- Integrate custom containers in the Dioptra deployment

-In addition to the above, you may want to further customize the the Docker Compose configuration via the ``docker-compose.override.yml`` file to suit your needs, such as allocating explicit CPUs you want each container to use.
+In addition to the above, you may want to further customize the Docker Compose configuration via the ``docker-compose.override.yml`` file to suit your needs, such as allocating explicit CPUs you want each container to use.

An example template file (``docker-compose.override.yml.template``) is provided as part of the deployment as a starting point.
This can be copied to ``docker-compose.override.yml`` and modified.
See the `Compose specification documentation <https://docs.docker.com/compose/compose-file/>`__ for the full list of available options.
docs/source/glossary.rst (1 addition, 1 deletion)

@@ -46,7 +46,7 @@ Glossary
A copy of the paper is available on the arXiv at https://arxiv.org/abs/1511.07528.

JSON
-An acronym for the term *Javascript Object Notation*.
+An acronym for the term *JavaScript Object Notation*.
JSON is a lightweight data-interchange format that is completely language independent despite being based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.
docs/source/overview/executive-summary.rst (1 addition, 1 deletion)

@@ -171,7 +171,7 @@ The architecture is built entirely from open-source resources making it easy for
Assumptions / System Requirements
---------------------------------

-Most of the built-in demonstrations in the testbed assume the testbed is deployed on Unix-based operating systems (e.g., Linux, MacOS).
+Most of the built-in demonstrations in the testbed assume the testbed is deployed on Unix-based operating systems (e.g., Linux, macOS).
Those familiar with the Windows Subsystem for Linux (WSL) should be able to deploy it on Windows, but this mode is not explicitly supported at this time.
Most included demos perform computationally intensive calculations requiring access to significant computational resources such as Graphics Processing Units (GPUs).
The architecture has been tested on a :term:`NVIDIA DGX` server with 4 GPUs.
docs/source/user-guide/custom-task-plugins.rst (1 addition, 1 deletion)

@@ -140,7 +140,7 @@ For local tasks we will use a different notation for both creating and invoking
Creating a Local Task
---------------------

-In general the major difference besides location of local task plugins is that the the `@task` decorator now replaces the `@pyplugs.register` decorator.
+In general the major difference besides location of local task plugins is that the `@task` decorator now replaces the `@pyplugs.register` decorator.
The task decorator is imported from the prefect library:
docs/source/user-guide/entry-points.rst (1 addition, 1 deletion)

@@ -143,7 +143,7 @@ uri
Executable Script
-----------------

-The entry point script, in principle, is just an executable Python script that accepts command-line options, so Testbed users can get started quickly by using their pre-existing Python scripts.
+The entry point script, in principle, is just an executable Python script that accepts command-line options, so Testbed users can get started quickly by using their preexisting Python scripts.
However, if users wish to make use of the Testbed's powerful job tracking and task plugin capabilities, they will need to adopt the Testbed's standard for writing entry point scripts outlined in this section.