Commit 7d5842c — merging with dev

2 parents efb2501 + ac5be7c


69 files changed: +20095 −1029 lines

.github/workflows/docs_test.yml

Lines changed: 33 additions & 0 deletions

```yaml
name: Test Build the Docs

on: [push, pull_request]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: ["ubuntu-latest"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up python 3.11
        uses: actions/setup-python@v3
        with:
          python-version: "3.11"
      - name: debug
        run: |
          pwd
          ls
      - uses: mpi4py/setup-mpi@v1
      - name: Install dependencies
        run: |
          pip install --user . mcdc[docs]
          pip list
      - name: Patch Numba
        run: |
          bash .github/workflows/patch.sh
      - name: Build the Docs
        run: |
          cd docs
          make html
```

README.md

Lines changed: 5 additions & 5 deletions

````diff
@@ -17,25 +17,25 @@ Our documentation on installation, contribution, and a brief user guide is on [R
 
 ## Installation
 
-We recommend using [`conda`](https://conda.io/projects/conda/en/latest/user-guide/install/index.html) or some other environment manager to manage the MC/DC installation.
+We recommend using [Python virtual environments (venv)](https://docs.python.org/3/library/venv.html) or some other environment manager (e.g. conda) to manage the MC/DC installation.
 This avoids the need for admin access when installing MC/DC's dependencies and allows greater configurability for developers.
-For most users working on a single machine of which they are administrators, MC/DC can be installed via pip:
+For most users working in a venv, MC/DC can be installed via pip:
 ```bash
 pip install mcdc
 ```
-For developers or users on HPC machines, we recommend that you *not* use the pip distribution and instead install MC/DC and its dependencies via the included [install script](https://mcdc.readthedocs.io/en/latest/install.html), which builds `mpi4py` from source and uses conda to manage your environment. *This is the most reliable way to install and configure MC/DC*. It also takes care of the [Numba patch]() and can configure the [continuous energy data library](), if you have access.
+For developers or users on HPC machines, `mpi4py` is often distributed as part of an HPC machine's given venv.
 
 ### Common issues with `mpi4py`
 
 The `pip mpi4py` distribution commonly has errors when building due to incompatible local MPI dependencies it builds off of. While pip does have some remedy for this, we recommend the following:
 * **Mac users:** we recommend `openmpi` be [installed via homebrew](https://formulae.brew.sh/formula/open-mpi) (note that a more reliable mpi4py distribution can also be [found on homebrew](https://formulae.brew.sh/formula/mpi4py)); alternatively, you can use `conda` if you don't have admin privileges;
 * **Linux users:** we recommend `openmpi` be installed via a root package manager if possible (e.g. `sudo apt install openmpi`) or a conda distribution (e.g. `conda install openmpi`)
-* **HPC users and developers on any system:** On HPC systems in particular, `mpi4py` must be built using the system's existing `mpi` installation. Installing MC/DC using the [install script](https://mcdc.readthedocs.io/en/latest/install.html) we've included will handle that for you by installing dependencies using conda rather than pip. It also takes care of the [Numba patch]() and can configure the [continuous energy data library](), if you have access.
+* **HPC users and developers on any system:** On HPC systems that do not supply a suitable venv, `mpi4py` may need to be built using the system's existing `mpi` installation. Installing MC/DC using the [install script](https://mcdc.readthedocs.io/en/latest/install.html) we've included will handle that for you by installing dependencies using conda rather than pip. It also takes care of the [Numba patch](https://github.com/CEMeNT-PSAAP/MCDC/blob/main/patch_numba.sh) and can configure the [continuous energy data library](https://github.com/CEMeNT-PSAAP/MCDC/blob/main/config_cont_energy.sh), if you have access.
 
 ### Numba Config
 
 Running MC/DC performantly in [Numba mode](#numba-mode) requires a patch to a single Numba file. If you installed MC/DC with the [install script](https://mcdc.readthedocs.io/en/latest/install.html), this patch has already been taken care of. If you installed via pip, we have a patch script that will make the necessary changes for you:
-1. Download the `patch.sh` file [here]() (If you've cloned MC/DC's GitHub repository, you already have this file in your MCDC/ directory).
+1. Download the `patch.sh` file [here](https://github.com/CEMeNT-PSAAP/MCDC/blob/main/patch_numba.sh) (If you've cloned MC/DC's GitHub repository, you already have this file in your MCDC/ directory).
 2. In your active conda environment, run `bash patch_numba.sh`.
 *If you manage your environment with conda, you will not need admin privileges*.
````
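As an aside to the venv recommendation in the README changes above (this check is not part of the README itself), Python can report whether a virtual environment is currently active before you run `pip install mcdc`: inside a venv, `sys.prefix` points at the environment while `sys.base_prefix` still points at the base interpreter. A minimal sketch, with `in_venv` being a name invented here:

```python
import sys

def in_venv() -> bool:
    # In an active venv, sys.prefix is redirected to the environment
    # directory, while sys.base_prefix still names the base interpreter.
    return sys.prefix != sys.base_prefix

# True inside an activated venv, False for a bare system interpreter.
print(in_venv())
```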

docs/source/conf.py

Lines changed: 3 additions & 1 deletion

```diff
@@ -18,9 +18,11 @@
 # On Read the Docs, need to mock any python packages that would require c
 from unittest.mock import MagicMock
 
-MOCK_MODULES = ["mpi4py", "colorama", "mpi4py.util.dtlib"]
+MOCK_MODULES = ["mpi4py", "colorama", "mpi4py.util.dtlib", "sympy", "matplotlib.pyplot"]
 sys.modules.update((mod_name, MagicMock()) for mod_name in MOCK_MODULES)
+from mpi4py import MPI
 
+MPI.COMM_WORLD.Get_size.return_value = 1
 
 
 # -- Project information -----------------------------------------------------
```
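The `MagicMock` trick in the `conf.py` change above is a standard Sphinx pattern for packages that need compiled extensions at import time (Read the Docs builders have no MPI). The sketch below demonstrates the mechanism with a made-up module name — `fake_mpi4py` is hypothetical, chosen so the example does not shadow a real install:

```python
import sys
from unittest.mock import MagicMock

# Register stand-in modules before anything imports them, mirroring what
# conf.py does for mpi4py, sympy, and matplotlib.pyplot on Read the Docs.
MOCK_MODULES = ["fake_mpi4py", "fake_mpi4py.util.dtlib"]
sys.modules.update((mod_name, MagicMock()) for mod_name in MOCK_MODULES)

import fake_mpi4py  # served from sys.modules; no real package required

# Attribute access on a MagicMock auto-creates child mocks, and return
# values can be pinned, as conf.py does for MPI.COMM_WORLD.Get_size().
fake_mpi4py.COMM_WORLD.Get_size.return_value = 1
print(fake_mpi4py.COMM_WORLD.Get_size())  # prints 1
```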

docs/source/index.rst

Lines changed: 1 addition & 1 deletion

```diff
@@ -81,7 +81,7 @@ A full exhaustive list of publications can be found on the `CEMeNT site <https:/
    :maxdepth: 1
 
    install
-   user
+   user/index
   contribution
    theory/index
    pythonapi/index
```

docs/source/install.rst

Lines changed: 114 additions & 38 deletions

```diff
@@ -4,48 +4,93 @@
 Installation Guide
 ===================
 
-Developers in MC/DC (on any machine) or users on HPC machines should install using the installation script included with the source code;
-start by :ref:`creating-a-conda-environment`.
-Installing from source via the installation script is the most resilient way to get properly configured dependencies.
-Most other users can install using pip.
+Whether installing MC/DC as a user or from source as a developer,
+we recommend doing so using an environment manager like venv or conda.
+This will avoid the need for any admin access and keep dependencies clean.
+
+In general, :ref:`creating-a-venv-environment` and :ref:`installing-with-pip` are easier and recommended.
+Creating a conda environment and :ref:`installing-with-conda` is more robust and reliable, but is also more difficult.
+A conda environment is necessary to install MC/DC on LLNL's Lassen machine.
+
+
+
+.. _creating-a-venv-environment:
+
+---------------------------
+Creating a venv environment
+---------------------------
+
+Python `virtual environments <https://docs.python.org/3.11/library/venv.html>`_ are the easy and
+recommended way to get MC/DC operating on personal machines as well as HPCs;
+all you need is a working Python version with venv installed.
+Particularly on HPCs, using a Python virtual environment is convenient because
+system admins will have already configured venv and the pip within it to load packages and dependencies
+from the proper sources.
+HPCs often use a module system, so before doing anything else,
+``module load python/<version_number>``.
+
+A Python virtual environment can (usually) be created using
+
+.. code-block:: sh
+
+   python -m venv <name_of_venv>
+
+Once you have created a venv, you will need to activate it
+
+.. code-block:: sh
+
+   source <name_of_venv>/bin/activate
+
+and will need to do so every time a new terminal instance is launched.
+Once your environment is active, you can move on to :ref:`installing-with-pip`.
+
+
+.. _installing-with-pip:
 
 -------------------
 Installing with pip
 -------------------
-Users who:
-
-#. are unix based (macOS, linux, etc.),
-#. have a working version of openMPI (from conda, brew, or apt),
-#. are using an environment manager like conda or have administrator privileges, and
-#. plan to *use* MC/DC, not develop features for MC/DC
-
-can install using pip.
-We recommend doing so within an active conda (or other environment manager) environment,
-which avoids the need for any admin access and keeps dependencies clean.
+Assuming you have a working Python environment, you can install using pip.
+Doing so within an active venv or conda environment avoids the need for any admin access
+and keeps dependencies clean.
 
+If you would like to run MC/DC as published in the main branch *and*
+do not need to develop in MC/DC, you can install from PyPI:
+
 .. code-block:: sh
 
    pip install mcdc
 
-Now you're ready to run in pure Python mode!
+If you would like to execute a version of MC/DC from a specific branch or
+*do* plan to develop in MC/DC, you'll need to install from source:
 
-.. _creating-a-conda-environment:
+#. Clone the MC/DC repo: ``git clone https://github.com/CEMeNT-PSAAP/MCDC.git``
+#. Go to your new MC/DC directory: ``cd MCDC``
+#. Install the package from your MC/DC files: ``pip install -e .``
+#. Run the included script that makes a necessary numba patch: ``bash patch_numba.sh``
 
------------------------------------
-Creating an MC/DC Conda environment
------------------------------------
+This should install all needed dependencies without a hitch.
+The ``-e`` flag installs MC/DC as an editable package, meaning that any changes
+you make to the MC/DC source files, including checking out a different
+branch, will be immediately reflected without needing to do any re-installation.
 
+.. _installing-with-conda:
+
+--------------------------
+Installing MC/DC via conda
+--------------------------
+
+Conda is the most robust (works even on bespoke systems) option to install MC/DC.
 `Conda <https://conda.io/en/latest/>`_ is an open source package and environment management system
 that runs on Windows, macOS, and Linux. It allows for easy installing and switching between multiple
 versions of software packages and their dependencies.
-We can't force you to use it, but we do *highly* recommend it, particularly
-if you plan on running MC/DC in `numba mode <https://numba.pydata.org/>`_.
-**The included installation script will fail if executed outside of a conda environment.**
+Conda is really useful on systems with non-standard hardware (e.g. not x86 CPUs) like Lassen, where
+mpi4py is often the most troublesome dependency.
 
 First, ``conda`` should be installed with `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_
 or `Anaconda <https://www.anaconda.com/>`_. HPC instructions:
 
-`Quartz <https://hpc.llnl.gov/hardware/compute-platforms/quartz>`_ (LLNL, x86_64),
+`Dane <https://hpc.llnl.gov/hardware/compute-platforms/dane>`_ (LLNL, x86_64),
 
 .. code-block:: sh
 
@@ -62,39 +107,32 @@ or `Anaconda <https://www.anaconda.com/>`_. HPC instructions:
 
 
 Then create and activate a new conda environment called *mcdc-env* in
-which to install MC/DC. This creates an environment with python3.11
-installed.
+which to install MC/DC. This creates an environment with python3.12
+installed:
 
 .. code-block:: sh
 
-   conda create -n mcdc-env python=3.11
+   conda create -n mcdc-env python=3.12
    conda activate mcdc-env
 
--------------------------------------------
-Installing from Source on Linux or Mac OS X
--------------------------------------------
-
-All MC/DC source code is hosted on `Github <https://github.com/CEMeNT-PSAAP/MCDC>`_.
-If you have `git <https://git-scm.com>`_, you can download MC/DC by entering the
-following commands in a terminal:
+Then, MC/DC can be installed from source by first cloning the MC/DC repository:
 
 .. code-block:: sh
 
    git clone https://github.com/CEMeNT-PSAAP/MCDC.git
   cd MCDC
 
-
-The MC/DC repository includes the script ``install.sh``, which will
+then using the ``install.sh`` within it. The install script will
 build MC/DC and all of its dependencies and execute any necessary patches.
-This has been tested on Quartz, Lassen, and Apple M2 (as of 11/01/2023).
+This has been tested on Quartz, Dane, Tioga, Lassen, and Apple M2.
 The ``install.sh`` script **will fail outside of a conda environment**.
 
 On HPC machines, the script will install mpi4py
 `from source <https://mpi4py.readthedocs.io/en/stable/install.html#using-distutils>`_.
 This means that all appropriate modules must be loaded prior to executing.
 
 On Quartz, the default modules are sufficient (``intel-classic`` and ``mvapich2``).
-On Lassen, ``module load gcc/8 cuda/11.3``. Then,
+On Lassen, ``module load gcc/8 cuda/11.8``. Then,
 
 .. code-block:: sh
 
@@ -107,7 +145,6 @@ On local machines, mpi4py will be installed using conda,
 
    bash install.sh
 
-
 To confirm that everything is properly installed, execute ``pytest`` from the MCDC directory.
 
 -------------------------------------
@@ -132,3 +169,42 @@ or run the script after installation as a stand-alone operation with
 
 Both these operations will clone the internal directory to your MCDC directory, untar the compressed folder, then set an environment variable in your bash script.
 NOTE: this does assume you are using bash shell.
+
+
+---------------------------------
+GPU Operability (MC/DC+Harmonize)
+---------------------------------
+
+MC/DC supports most of its Numba-enabled features for GPU compilation and execution.
+When targeting GPUs, MC/DC uses the `Harmonize <https://github.com/CEMeNT-PSAAP/harmonize>`_ library as its GPU runtime, a.k.a. the thing that actually executes MC/DC functions.
+How Harmonize works gets a little involved, but in short,
+Harmonize acts as MC/DC's GPU runtime by using two major scheduling schemes: an event scheduler similar to those implemented in OpenMC and Shift, plus a novel scheduler.
+For more information on Harmonize and how we compile MC/DC with it, see this `TOMACS article describing the async scheduler <https://doi.org/10.1145/3626957>`_ or our publications in the American Nuclear Society's Math and Comp Meeting in 2025.
+
+If you encounter problems with configuration, please file `GitHub issues promptly <https://github.com/CEMeNT-PSAAP/MCDC/issues>`_,
+especially when on supported supercomputers (LLNL's `Tioga <https://hpc.llnl.gov/hardware/compute-platforms/tioga>`_, `El Capitan <https://hpc.llnl.gov/documentation/user-guides/using-el-capitan-systems>`_, and `Lassen <https://hpc.llnl.gov/hardware/compute-platforms/lassen>`_).
+
+Nvidia GPUs
+^^^^^^^^^^^
+
+To compile and execute MC/DC on Nvidia GPUs, first ensure you have the `Harmonize prereqs <https://github.com/CEMeNT-PSAAP/harmonize/blob/main/install.sh>`_ (CUDA=11.8, Numba>=0.58.0) and a working MC/DC version >=0.10.0. Then,
+
+#. Clone the harmonize repo: ``git clone https://github.com/CEMeNT-PSAAP/harmonize.git``
+#. Install into the proper Python env: ``pip install -e .``
+
+Operability should now be enabled.
+
+AMD GPUs
+^^^^^^^^
+
+The prerequisites for AMD operability are slightly more complex and
+require a patch to Numba to allow for AMD target triple LLVM-IR.
+It is recommended that this is done within a Python venv virtual environment.
+
+To compile and execute MC/DC on AMD GPUs, first ensure you have the `Harmonize prereqs <https://github.com/CEMeNT-PSAAP/harmonize/blob/main/install.sh>`_ (ROCm=6.0.0, Numba>=0.58.0) and a working MC/DC version >=0.11.0. Then,
+
+#. Patch Numba to enable HIP (`instructions here <https://github.com/ROCm/numba-hip>`_)
+#. Clone harmonize and `switch to the AMD <https://github.com/CEMeNT-PSAAP/harmonize/tree/amd_event_interop_revamp>`_ branch with ``git switch amd_event_interop_revamp``
+#. Install Harmonize with ``pip install -e .`` or using `Harmonize's install script <https://github.com/CEMeNT-PSAAP/harmonize/tree/main>`_
+
+Operability should now be enabled.
```
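The event scheduler mentioned in the additions above (the style used by OpenMC and Shift) groups particles by the operation they need next and processes each group as a batch, which maps well onto GPU hardware. The following toy Monte Carlo loop illustrates only that batching idea; it is not Harmonize's implementation, and every name in it is invented for the sketch:

```python
import random

def event_based_transport(n_particles, absorb_prob=0.3, max_steps=200, seed=1):
    """Toy event-based scheduler: rather than following one particle's
    full history at a time, push all in-flight particles through the
    same 'event' together as a batch. Illustrative only."""
    rng = random.Random(seed)
    in_flight = list(range(n_particles))  # indices of live particles
    absorbed = []
    for _ in range(max_steps):
        if not in_flight:
            break
        # Event batch: sample every in-flight particle's collision
        # outcome at once; absorbed particles retire from the bank.
        survivors = []
        for p in in_flight:
            (absorbed if rng.random() < absorb_prob else survivors).append(p)
        in_flight = survivors
    return absorbed, in_flight

absorbed, in_flight = event_based_transport(1000)
# Particle count is conserved across events:
print(len(absorbed) + len(in_flight))  # prints 1000
```

A history-based scheme would instead run one particle to completion before starting the next; the event-based form trades that simplicity for wide, uniform batches of work.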

docs/source/pubs.rst

Lines changed: 4 additions & 0 deletions

```diff
@@ -45,6 +45,10 @@ Conference on Physics of Reactors. Pittsburgh, Pennsylvania, USA (2022).
 Software Engineering in MC/DC Publications
 -------------------------------------------
 
+J. P. Morgan, I. Variansyah, B. Cuneo, T. S. Palmer and K. E. Niemeyer. 2024. UNDER REVIEW. Performance Portable Monte Carlo Neutron Transport in MCDC via Numba. Preprint DOI 10.48550/arXiv.2306.07847.
+
+B. Cuneo and I. Variansyah. “An Alternative to Stride-Based RNG for Monte Carlo Transport.” In Transactions of The American Nuclear Society, volume 130 (1), pp. 423–426 (2024). DOI 10.13182/T130-44927.
+
 J. P. Morgan, I. Variansyah, S. Pasmann, K. B. Clements, B. Cuneo, A. Mote,
 C. Goodman, C. Shaw, J. Northrop, R. Pankaj, E. Lame, B. Whewell,
 R. McClarren, T. S. Palmer, L. Chen, D. Anistratov, C. T. Kelley,
```

docs/source/user/cpu.rst

Lines changed: 62 additions & 0 deletions

```rst
.. _cpu:

=====================
Running MC/DC on CPUs
=====================

Executing MC/DC in something like a Jupyter notebook is possible but not recommended,
especially when using MPI and/or Numba.
The instructions below assume you have an existing MC/DC installation.
MPI can be quite tricky to configure on an HPC; if you're having trouble,
consult our :ref:`install`, your HPC admin, or our `GitHub issues page <https://github.com/CEMeNT-PSAAP/MCDC/issues>`_.

Pure Python Mode
----------------

To run in pure Python mode (slower, no acceleration):

.. code-block:: sh

   python input.py

Numba Mode
----------

.. code-block:: sh

   python input.py --mode=numba

When running in Numba mode, a significant amount of time is spent compiling Python functions into performant binaries.
Only the functions used in a specific simulation will be compiled.
These binaries will be cached, meaning that in subsequent runs of the same simulation the compilation step can be avoided.
The cache can be used as an effective ahead-of-time compilation scheme where binaries are compiled once and shared between machines.
For more information on caching, see :ref:`Caching` and `Numba Caching <https://numba.readthedocs.io/en/stable/developer/caching.html>`_.

MC/DC also has the ability to run Numba in a debugging mode.
This will result in less performant code and longer compile times but will allow for better error messages from Numba and other packages.

.. code-block:: sh

   python input.py --mode=numba_debug

For more information on the exact behavior of this option, see :ref:`Debugging`.

Using MPI
---------

MC/DC can be executed using MPI with or without Numba acceleration.
If Numba mode is enabled, the ``jit`` compilation, which is executed on all threads, can take between 30 s and 2 min.
For smaller problems, Numba compilation time could exceed runtime, and pure Python mode could be preferable.
Below, ``--mode`` can equal ``python`` or ``numba``. MC/DC gets MPI functionality via `mpi4py <https://mpi4py.readthedocs.io/en/stable/>`_.
As an example, to run on 36 processes with `SLURM <https://slurm.schedmd.com/documentation.html>`_:

.. code-block:: sh

   srun -n 36 python input.py --mode=<python/numba>

For systems that do not use SLURM (i.e., a local system), try ``mpiexec`` or ``mpirun`` in its stead.

CPU Profiling
-------------
```
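The new ``CPU Profiling`` heading is the last line of this diff, so its body is not shown here. As a general-purpose sketch (not taken from MC/DC's docs), Python's built-in ``cProfile`` can profile a pure-Python run; ``simulate`` below is a hypothetical stand-in for an input script's work, not an MC/DC function:

```python
import cProfile
import io
import pstats

def simulate(n):
    # Hypothetical stand-in for a pure-Python transport loop.
    total = 0.0
    for i in range(n):
        total += (i % 7) * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
simulate(100_000)
profiler.disable()

# Summarize the five most expensive entries by cumulative time.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
print("simulate" in report.getvalue())  # prints True
```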
