README.md

## Installation

We recommend using [Python virtual environments (venv)](https://docs.python.org/3/library/venv.html) or some other environment manager (e.g. conda) to manage the MC/DC installation.
This avoids the need for admin access when installing MC/DC's dependencies and allows greater configurability for developers.
For most users working in a venv, MC/DC can be installed via pip:
```bash
pip install mcdc
```
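
If you are not already working inside a virtual environment, a minimal sketch of that workflow looks like the following (the environment name `mcdc-venv` is just a placeholder):

```bash
# Create and activate a virtual environment (any name works)
python -m venv mcdc-venv
source mcdc-venv/bin/activate

# Install MC/DC into the active environment
pip install mcdc
```
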
For developers or users on HPC machines, `mpi4py` is often already distributed as part of the machine's provided venv.

### Common issues with `mpi4py`
The pip distribution of `mpi4py` commonly has errors when building due to incompatible local MPI dependencies it builds against. While pip does have some remedies for this, we recommend the following (a short sketch follows this list):
* **Mac users:** we recommend installing `openmpi` [via Homebrew](https://formulae.brew.sh/formula/open-mpi) (note that a more reliable `mpi4py` distribution can also be [found on Homebrew](https://formulae.brew.sh/formula/mpi4py)); alternatively, you can use `conda` if you don't have admin privileges;
* **Linux users:** we recommend installing `openmpi` via a root package manager if possible (e.g. `sudo apt install openmpi`) or a conda distribution (e.g. `conda install openmpi`);
* **HPC users and developers on any system:** on HPC systems that do not supply a suitable venv, `mpi4py` may need to be built against the system's existing MPI installation. Installing MC/DC using the included [install script](https://mcdc.readthedocs.io/en/latest/install.html) will handle that for you by installing dependencies with conda rather than pip. It also takes care of the [Numba patch](https://github.com/CEMeNT-PSAAP/MCDC/blob/main/patch_numba.sh) and can configure the [continuous energy data library](https://github.com/CEMeNT-PSAAP/MCDC/blob/main/config_cont_energy.sh), if you have access.
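
As a rough sketch of the first two routes (the package names are the Homebrew and conda-forge ones linked above; adjust for your system):

```bash
# macOS (Homebrew): install Open MPI, then build mpi4py against it
brew install open-mpi
pip install mpi4py

# Alternative without admin privileges (macOS or Linux): conda-forge packages
conda install -c conda-forge openmpi mpi4py
```
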
### Numba Config
Running MC/DC performantly in [Numba mode](#numba-mode) requires a patch to a single Numba file. If you installed MC/DC with the [install script](https://mcdc.readthedocs.io/en/latest/install.html), this patch has already been taken care of. If you installed via pip, we have a patch script that will make the necessary changes for you:
1. Download the `patch_numba.sh` file [here](https://github.com/CEMeNT-PSAAP/MCDC/blob/main/patch_numba.sh) (if you've cloned MC/DC's GitHub repository, you already have this file in your MCDC/ directory).
2. In your active conda environment, run `bash patch_numba.sh`.
*If you manage your environment with conda, you will not need admin privileges*.
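
For example, assuming you are working from a clone of the repository (so `patch_numba.sh` is already in your MCDC/ directory), the patch step is simply:

```bash
# Run inside the environment where Numba is installed
cd MCDC
bash patch_numba.sh
```
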
This means that all appropriate modules must be loaded prior to executing.
On Quartz, the default modules are sufficient (``intel-classic`` and ``mvapich2``).
On Lassen, ``module load gcc/8 cuda/11.8``. Then,

.. code-block:: sh

On local machines, mpi4py will be installed using conda,

bash install.sh

To confirm that everything is properly installed, execute ``pytest`` from the MCDC directory.
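
As a combined illustration on Lassen, assuming the modules listed above and that you are in the cloned MCDC directory:

.. code-block:: sh

   # Load the toolchain, run the bundled installer, then verify the install
   module load gcc/8 cuda/11.8
   bash install.sh
   pytest
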
-------------------------------------

or run the script after installation as a standalone operation with

Both of these operations will clone the internal directory to your MCDC directory, untar the compressed folder, and then set an environment variable in your bash profile.
NOTE: this assumes you are using a bash shell.
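
As a sketch of the standalone route, assuming the script in question is the ``config_cont_energy.sh`` file linked from the README and that it is run from the MCDC directory:

.. code-block:: sh

   # Assumed invocation: clones the access-controlled data library, untars it,
   # and sets an environment variable in your bash profile
   bash config_cont_energy.sh
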
---------------------------------
GPU Operability (MC/DC+Harmonize)
---------------------------------

MC/DC supports most of its Numba-enabled features for GPU compilation and execution.
When targeting GPUs, MC/DC uses the `Harmonize <https://github.com/CEMeNT-PSAAP/harmonize>`_ library as its GPU runtime, i.e. the layer that actually executes MC/DC's functions on the device.
How Harmonize works gets a little involved, but in short, it uses two major scheduling schemes: an event scheduler similar to those implemented in OpenMC and Shift, and a novel asynchronous scheduler.
For more information on Harmonize and how we compile MC/DC with it, see this `TOMACS article describing the async scheduler <https://doi.org/10.1145/3626957>`_ or our publications from the American Nuclear Society Math and Comp Meeting in 2025.

If you encounter problems with configuration, please file `Github issues <https://github.com/CEMeNT-PSAAP/MCDC/issues>`_ promptly,
especially when on supported supercomputers (LLNL's `Tioga <https://hpc.llnl.gov/hardware/compute-platforms/tioga>`_, `El Capitan <https://hpc.llnl.gov/documentation/user-guides/using-el-capitan-systems>`_, and `Lassen <https://hpc.llnl.gov/hardware/compute-platforms/lassen>`_).

Nvidia GPUs
^^^^^^^^^^^

To compile and execute MC/DC on Nvidia GPUs, first ensure you have the `Harmonize prerequisites <https://github.com/CEMeNT-PSAAP/harmonize/blob/main/install.sh>`_ (CUDA=11.8, Numba>=0.58.0) and a working MC/DC version >=0.10.0. Then,

#. Clone the harmonize repo: ``git clone https://github.com/CEMeNT-PSAAP/harmonize.git``
#. Install into the proper Python env: ``pip install -e .``

Operability should now be enabled.
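
Putting the two steps together (run inside the Python environment where MC/DC is installed):

.. code-block:: sh

   # Clone Harmonize and install it as an editable package into the active env
   git clone https://github.com/CEMeNT-PSAAP/harmonize.git
   cd harmonize
   pip install -e .
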
AMD GPUs
^^^^^^^^

The prerequisites for AMD operability are slightly more complex and
require a patch to Numba to allow for AMD target-triple LLVM-IR.
It is recommended that this be done within a Python virtual environment (venv).

To compile and execute MC/DC on AMD GPUs, first ensure you have the `Harmonize prerequisites <https://github.com/CEMeNT-PSAAP/harmonize/blob/main/install.sh>`_ (ROCm=6.0.0, Numba>=0.58.0) and a working MC/DC version >=0.11.0. Then (a combined sketch follows these steps),

#. Patch Numba to enable HIP (`instructions here <https://github.com/ROCm/numba-hip>`_)
#. Clone harmonize and `switch to the AMD <https://github.com/CEMeNT-PSAAP/harmonize/tree/amd_event_interop_revamp>`_ branch with ``git switch amd_event_interop_revamp``
#. Install Harmonize with ``pip install -e .`` or using `Harmonize's install script <https://github.com/CEMeNT-PSAAP/harmonize/tree/main>`_
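
A combined sketch of these steps (the Numba HIP patch itself is not shown; follow the numba-hip instructions linked above):

.. code-block:: sh

   # Patch Numba for HIP first, per https://github.com/ROCm/numba-hip
   # Then clone Harmonize, switch to the AMD branch, and install it
   git clone https://github.com/CEMeNT-PSAAP/harmonize.git
   cd harmonize
   git switch amd_event_interop_revamp
   pip install -e .
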
docs/source/pubs.rst

Software Engineering in MC/DC Publications
-------------------------------------------

J. P. Morgan, I. Variansyah, B. Cuneo, T. S. Palmer and K. E. Niemeyer. 2024. UNDER REVIEW. Performance Portable Monte Carlo Neutron Transport in MCDC via Numba. Preprint DOI 10.48550/arXiv.2306.07847.

B. Cuneo and I. Variansyah. “An Alternative to Stride-Based RNG for Monte Carlo Transport.” In Transactions of The American Nuclear Society, volume 130 (1), pp. 423–426 (2024). DOI 10.13182/T130-44927.

J. P. Morgan, I. Variansyah, S. Pasmann, K. B. Clements, B. Cuneo, A. Mote,
C. Goodman, C. Shaw, J. Northrop, R. Pankaj, E. Lame, B. Whewell,
R. McClarren, T. S. Palmer, L. Chen, D. Anistratov, C. T. Kelley,