diff --git a/Docs/source/dataanalysis/formats.rst b/Docs/source/dataanalysis/formats.rst
index 85518abebb7..a7836d41fef 100644
--- a/Docs/source/dataanalysis/formats.rst
+++ b/Docs/source/dataanalysis/formats.rst
@@ -19,7 +19,7 @@ When using the AMReX `plotfile` format, users can set the ``amrex.async_out=1``
option to perform the IO in a non-blocking fashion, meaning that the simulation
will continue to run while an IO thread controls writing the data to disk.
This can significantly reduce the overall time spent in IO. This is primarily intended for
-large runs on supercomputers such as Summit and Cori; depending on the MPI
+large runs on supercomputers (e.g. at OLCF or NERSC); depending on the MPI
implementation you are using, you may not see a benefit on your workstation.

When writing plotfiles, each rank will write to a separate file, up to some maximum number
diff --git a/Docs/source/install/hpc.rst b/Docs/source/install/hpc.rst
index d6e901276d7..9617f2a7fd6 100644
--- a/Docs/source/install/hpc.rst
+++ b/Docs/source/install/hpc.rst
@@ -33,7 +33,6 @@ This section documents quick-start guides for a selection of supercomputers that
:maxdepth: 1

hpc/adastra
- hpc/cori
hpc/crusher
hpc/frontier
hpc/fugaku
diff --git a/Docs/source/install/hpc/cori.rst b/Docs/source/install/hpc/cori.rst
deleted file mode 100644
index 35421982142..00000000000
--- a/Docs/source/install/hpc/cori.rst
+++ /dev/null
@@ -1,412 +0,0 @@
-.. _building-cori:
-
-Cori (NERSC)
-============
-
-The `Cori cluster `_ is located at NERSC.
-
-
-Introduction
-------------
-
-If you are new to this system, **please see the following resources**:
-
-* `GPU nodes `__
-
-* `Cori user guide `__
-* Batch system: `Slurm `__
-* `Jupyter service `__
-* `Production directories `__:
-
- * ``$SCRATCH``: per-user production directory, purged every 30 days (20TB)
- * ``/global/cscratch1/sd/m3239``: shared production directory for users in the project ``m3239``, purged every 30 days (50TB)
- * ``/global/cfs/cdirs/m3239/``: community file system for users in the project ``m3239`` (100TB)
-
-Installation
-------------
-
-Use the following commands to download the WarpX source code and switch to the correct branch:
-
-.. code-block:: bash
-
- git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
-
-KNL
-^^^
-
-We use the following modules and environments on the system (``$HOME/knl_warpx.profile``).
-
-.. literalinclude:: ../../../../Tools/machines/cori-nersc/knl_warpx.profile.example
- :language: bash
- :caption: You can copy this file from ``Tools/machines/cori-nersc/knl_warpx.profile.example``.
-
-And install ADIOS2, BLAS++ and LAPACK++:
-
-.. code-block:: bash
-
- source $HOME/knl_warpx.profile
-
- # c-blosc (I/O compression)
- git clone -b v1.21.1 https://github.com/Blosc/c-blosc.git src/c-blosc
- rm -rf src/c-blosc-knl-build
- cmake -S src/c-blosc -B src/c-blosc-knl-build -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DDEACTIVATE_AVX2=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/knl/c-blosc-1.12.1-install
- cmake --build src/c-blosc-knl-build --target install --parallel 16
-
- # ADIOS2
- git clone -b v2.7.1 https://github.com/ornladios/ADIOS2.git src/adios2
- rm -rf src/adios2-knl-build
- cmake -S src/adios2 -B src/adios2-knl-build -DADIOS2_USE_Blosc=ON -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DADIOS2_USE_ZeroMQ=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/knl/adios2-2.7.1-install
- cmake --build src/adios2-knl-build --target install --parallel 16
-
- # BLAS++ (for PSATD+RZ)
- git clone https://github.com/icl-utk-edu/blaspp.git src/blaspp
- rm -rf src/blaspp-knl-build
- cmake -S src/blaspp -B src/blaspp-knl-build -Duse_openmp=ON -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/knl/blaspp-master-install
- cmake --build src/blaspp-knl-build --target install --parallel 16
-
- # LAPACK++ (for PSATD+RZ)
- git clone https://github.com/icl-utk-edu/lapackpp.git src/lapackpp
- rm -rf src/lapackpp-knl-build
- CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-knl-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/knl/lapackpp-master-install
- cmake --build src/lapackpp-knl-build --target install --parallel 16
-
-For PICMI and Python workflows, also install a virtual environment:
-
-.. code-block:: bash
-
- # establish Python dependencies
- python3 -m pip install --user --upgrade pip
- python3 -m pip install --user virtualenv
-
- python3 -m venv $HOME/sw/knl/venvs/knl_warpx
- source $HOME/sw/knl/venvs/knl_warpx/bin/activate
-
- python3 -m pip install --upgrade pip
- python3 -m pip install --upgrade wheel
- python3 -m pip install --upgrade cython
- python3 -m pip install --upgrade numpy
- python3 -m pip install --upgrade pandas
- python3 -m pip install --upgrade scipy
- MPICC="cc -shared" python3 -m pip install -U --no-cache-dir -v mpi4py
- python3 -m pip install --upgrade openpmd-api
- python3 -m pip install --upgrade matplotlib
- python3 -m pip install --upgrade yt
- # optional: for libEnsemble
- #python3 -m pip install -r $HOME/src/warpx/Tools/LibEnsemble/requirements.txt
-
-Haswell
-^^^^^^^
-
-We use the following modules and environments on the system (``$HOME/haswell_warpx.profile``).
-
-.. literalinclude:: ../../../../Tools/machines/cori-nersc/haswell_warpx.profile.example
- :language: bash
- :caption: You can copy this file from ``Tools/machines/cori-nersc/haswell_warpx.profile.example``.
-
-And install ADIOS2, BLAS++ and LAPACK++:
-
-.. code-block:: bash
-
- source $HOME/haswell_warpx.profile
-
- # c-blosc (I/O compression)
- git clone -b v1.21.1 https://github.com/Blosc/c-blosc.git src/c-blosc
- rm -rf src/c-blosc-haswell-build
- cmake -S src/c-blosc -B src/c-blosc-haswell-build -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DDEACTIVATE_AVX2=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/haswell/c-blosc-1.12.1-install
- cmake --build src/c-blosc-haswell-build --target install --parallel 16
-
- # ADIOS2
- git clone -b v2.7.1 https://github.com/ornladios/ADIOS2.git src/adios2
- rm -rf src/adios2-haswell-build
- cmake -S src/adios2 -B src/adios2-haswell-build -DADIOS2_USE_Blosc=ON -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DADIOS2_USE_ZeroMQ=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/haswell/adios2-2.7.1-install
- cmake --build src/adios2-haswell-build --target install --parallel 16
-
- # BLAS++ (for PSATD+RZ)
- git clone https://github.com/icl-utk-edu/blaspp.git src/blaspp
- rm -rf src/blaspp-haswell-build
- cmake -S src/blaspp -B src/blaspp-haswell-build -Duse_openmp=ON -Duse_cmake_find_blas=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/blaspp-master-haswell-install
- cmake --build src/blaspp-haswell-build --target install --parallel 16
-
- # LAPACK++ (for PSATD+RZ)
- git clone https://github.com/icl-utk-edu/lapackpp.git src/lapackpp
- rm -rf src/lapackpp-haswell-build
- CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S src/lapackpp -B src/lapackpp-haswell-build -Duse_cmake_find_lapack=ON -DBLAS_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DLAPACK_LIBRARIES=${CRAY_LIBSCI_PREFIX_DIR}/lib/libsci_gnu.a -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=$HOME/sw/haswell/lapackpp-master-install
- cmake --build src/lapackpp-haswell-build --target install --parallel 16
-
-For PICMI and Python workflows, also install a virtual environment:
-
-.. code-block:: bash
-
- # establish Python dependencies
- python3 -m pip install --user --upgrade pip
- python3 -m pip install --user virtualenv
-
- python3 -m venv $HOME/sw/haswell/venvs/haswell_warpx
- source $HOME/sw/haswell/venvs/haswell_warpx/bin/activate
-
- python3 -m pip install --upgrade pip
- python3 -m pip install --upgrade wheel
- python3 -m pip install --upgrade cython
- python3 -m pip install --upgrade numpy
- python3 -m pip install --upgrade pandas
- python3 -m pip install --upgrade scipy
- MPICC="cc -shared" python3 -m pip install -U --no-cache-dir -v mpi4py
- python3 -m pip install --upgrade openpmd-api
- python3 -m pip install --upgrade matplotlib
- python3 -m pip install --upgrade yt
- # optional: for libEnsemble
- #python3 -m pip install -r $HOME/src/warpx/Tools/LibEnsemble/requirements.txt
-
-GPU (V100)
-^^^^^^^^^^
-
-Cori provides a partition with `18 nodes that include V100 (16 GB) GPUs `__.
-We use the following modules and environments on the system (``$HOME/gpu_warpx.profile``).
-You can copy this file from ``Tools/machines/cori-nersc/gpu_warpx.profile.example``:
-
-.. literalinclude:: ../../../../Tools/machines/cori-nersc/gpu_warpx.profile.example
- :language: bash
- :caption: You can copy this file from ``Tools/machines/cori-nersc/gpu_warpx.profile.example``.
-
-And install ADIOS2:
-
-.. code-block:: bash
-
- source $HOME/gpu_warpx.profile
-
- # c-blosc (I/O compression)
- git clone -b v1.21.1 https://github.com/Blosc/c-blosc.git src/c-blosc
- rm -rf src/c-blosc-gpu-build
- cmake -S src/c-blosc -B src/c-blosc-gpu-build -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DDEACTIVATE_AVX2=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/cori_gpu/c-blosc-1.12.1-install
- cmake --build src/c-blosc-gpu-build --target install --parallel 16
-
- git clone -b v2.7.1 https://github.com/ornladios/ADIOS2.git src/adios2
- rm -rf src/adios2-gpu-build
- cmake -S src/adios2 -B src/adios2-gpu-build -DADIOS2_USE_Blosc=ON -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DADIOS2_USE_ZeroMQ=OFF -DCMAKE_INSTALL_PREFIX=$HOME/sw/cori_gpu/adios2-2.7.1-install
- cmake --build src/adios2-gpu-build --target install --parallel 16
-
-For PICMI and Python workflows, also install a virtual environment:
-
-.. code-block:: bash
-
- # establish Python dependencies
- python3 -m pip install --user --upgrade pip
- python3 -m pip install --user virtualenv
-
- python3 -m venv $HOME/sw/cori_gpu/venvs/gpu_warpx
- source $HOME/sw/cori_gpu/venvs/gpu_warpx/bin/activate
-
- python3 -m pip install --upgrade pip
- python3 -m pip install --upgrade wheel
- python3 -m pip install --upgrade cython
- python3 -m pip install --upgrade numpy
- python3 -m pip install --upgrade pandas
- python3 -m pip install --upgrade scipy
- python3 -m pip install -U --no-cache-dir -v mpi4py
- python3 -m pip install --upgrade openpmd-api
- python3 -m pip install --upgrade matplotlib
- python3 -m pip install --upgrade yt
- # optional: for libEnsemble
- #python3 -m pip install -r $HOME/src/warpx/Tools/LibEnsemble/requirements.txt
-
-Building WarpX
---------------
-
-We recommend to store the above lines in individual ``warpx.profile`` files, as suggested above.
-If you want to run on either of the three partitions of Cori, open a new terminal, log into Cori and *source* the environment you want to work with:
-
-.. code-block:: bash
-
- # KNL:
- source $HOME/knl_warpx.profile
-
- # Haswell:
- #source $HOME/haswell_warpx.profile
-
- # GPU:
- #source $HOME/gpu_warpx.profile
-
-.. warning::
-
- Consider that all three Cori partitions are *incompatible*.
-
- Do not *source* multiple ``...warpx.profile`` files in the same terminal session.
- Open a new terminal and log into Cori again, if you want to switch the targeted Cori partition.
-
- If you re-submit an already compiled simulation that you ran on another day or in another session, *make sure to source* the corresponding ``...warpx.profile`` again after login!
-
-Then, ``cd`` into the directory ``$HOME/src/warpx`` and use the following commands to compile:
-
-.. code-block:: bash
-
- cd $HOME/src/warpx
- rm -rf build
-
- # append if you target GPUs: -DWarpX_COMPUTE=CUDA
- cmake -S . -B build -DWarpX_DIMS=3
- cmake --build build -j 16
-
-The general :ref:`cmake compile-time options ` apply as usual.
-
-**That's it!**
-A 3D WarpX executable is now in ``build/bin/`` and :ref:`can be run ` with a :ref:`3D example inputs file `.
-Most people execute the binary directly or copy it out to a location in ``$SCRATCH``.
-
-The general :ref:`cmake compile-time options and instructions for Python (PICMI) bindings ` apply as usual:
-
-.. code-block:: bash
-
- # PICMI build
- cd $HOME/src/warpx
-
- # install or update dependencies
- python3 -m pip install -r requirements.txt
-
- # compile parallel PICMI interfaces with openPMD support and 3D, 2D, 1D and RZ
- WARPX_MPI=ON BUILD_PARALLEL=16 python3 -m pip install --force-reinstall --no-deps -v .
-
-
-.. _building-cori-tests:
-
-Testing
--------
-
-To run all tests (here on KNL), do:
-
-* change in ``Regressions/WarpX-tests.ini`` from ``mpiexec`` to ``srun``: ``MPIcommand = srun -n @nprocs@ @command@``
-
-.. code-block:: bash
-
- # set test directory to a shared directory available on all nodes
- # note: the tests will create the directory automatically
- export WARPX_CI_TMP="$HOME/warpx-regression-tests"
-
- # compile with more cores
- export WARPX_CI_NUM_MAKE_JOBS=16
-
- # run all integration tests
- # note: we set MPICC as a build-setting for mpi4py on KNL/Haswell
- MPICC="cc -shared" ./run_test.sh
-
-
-.. _running-cpp-cori:
-
-Running
--------
-
-Navigate (i.e. ``cd``) into one of the production directories (e.g. ``$SCRATCH``) before executing the instructions below.
-
-KNL
-^^^
-
-The batch script below can be used to run a WarpX simulation on 2 KNL nodes on
-the supercomputer Cori at NERSC. Replace descriptions between chevrons ``<>``
-by relevant values, for instance ```` could be ``laserWakefield``.
-
-Do not forget to first ``source $HOME/knl_warpx.profile`` if you have not done so already for this terminal session.
-
-For PICMI Python runs, the ```` has to read ``python3`` and the ```` is the path to your PICMI input script.
-
-.. literalinclude:: ../../../../Tools/machines/cori-nersc/cori_knl.sbatch
- :language: bash
- :caption: You can copy this file from ``Tools/machines/cori-nersc/cori_knl.sbatch``.
-
-To run a simulation, copy the lines above to a file ``cori_knl.sbatch`` and run
-
-.. code-block:: bash
-
- sbatch cori_knl.sbatch
-
-to submit the job.
-
-For a 3D simulation with a few (1-4) particles per cell using FDTD Maxwell
-solver on Cori KNL for a well load-balanced problem (in our case laser
-wakefield acceleration simulation in a boosted frame in the quasi-linear
-regime), the following set of parameters provided good performance:
-
-* ``amr.max_grid_size=64`` and ``amr.blocking_factor=64`` so that the size of
- each grid is fixed to ``64**3`` (we are not using load-balancing here).
-
-* **8 MPI ranks per KNL node**, with ``OMP_NUM_THREADS=8`` (that is 64 threads
- per KNL node, i.e. 1 thread per physical core, and 4 cores left to the
- system).
-
-* **2 grids per MPI**, *i.e.*, 16 grids per KNL node.
-
-Haswell
-^^^^^^^
-
-The batch script below can be used to run a WarpX simulation on 1 `Haswell node `_ on the supercomputer Cori at NERSC.
-
-Do not forget to first ``source $HOME/haswell_warpx.profile`` if you have not done so already for this terminal session.
-
-.. literalinclude:: ../../../../Tools/machines/cori-nersc/cori_haswell.sbatch
- :language: bash
- :caption: You can copy this file from ``Tools/machines/cori-nersc/cori_haswell.sbatch``.
-
-To run a simulation, copy the lines above to a file ``cori_haswell.sbatch`` and
-run
-
-.. code-block:: bash
-
- sbatch cori_haswell.sbatch
-
-to submit the job.
-
-For a 3D simulation with a few (1-4) particles per cell using FDTD Maxwell
-solver on Cori Haswell for a well load-balanced problem (in our case laser
-wakefield acceleration simulation in a boosted frame in the quasi-linear
-regime), the following set of parameters provided good performance:
-
-* **4 MPI ranks per Haswell node** (2 MPI ranks per `Intel Xeon E5-2698 v3 `_), with ``OMP_NUM_THREADS=16`` (which uses `2x hyperthreading `_)
-
-GPU (V100)
-^^^^^^^^^^
-
-Do not forget to first ``source $HOME/gpu_warpx.profile`` if you have not done so already for this terminal session.
-
-Due to the limited amount of GPU development nodes, just request a single node with the above defined ``getNode`` function.
-For single-node runs, try to run one grid per GPU.
-
-A multi-node batch script template can be found below:
-
-.. literalinclude:: ../../../../Tools/machines/cori-nersc/cori_gpu.sbatch
- :language: bash
- :caption: You can copy this file from ``Tools/machines/cori-nersc/cori_gpu.sbatch``.
-
-
-.. _post-processing-cori:
-
-Post-Processing
----------------
-
-For post-processing, most users use Python via NERSC's `Jupyter service `__ (`Docs `__).
-
-As a one-time preparatory setup, `create your own Conda environment as described in NERSC docs `__.
-In this manual, we often use this ``conda create`` line over the officially documented one:
-
-.. code-block:: bash
-
- conda create -n myenv -c conda-forge python mamba ipykernel ipympl==0.8.6 matplotlib numpy pandas yt openpmd-viewer openpmd-api h5py fast-histogram dask dask-jobqueue pyarrow
-
-We then follow the `Customizing Kernels with a Helper Shell Script `__ section to finalize the setup of using this conda-environment as a custom Jupyter kernel.
-
-``kernel_helper.sh`` should read:
-
-.. code-block:: bash
-
- #!/bin/bash
- module load python
- source activate myenv
- exec "$@"
-
-When opening a Jupyter notebook, just select the name you picked for your custom kernel on the top right of the notebook.
-
-Additional software can be installed later on, e.g., in a Jupyter cell using ``!mamba install -c conda-forge ...``.
-Software that is not available via conda can be installed via ``!python -m pip install ...``.
-
-.. warning::
-
- Jan 6th, 2022 (NERSC-INC0179165 and `ipympl #416 `__):
- Above, we fixated the ``ipympl`` version to *not* take the latest release of `Matplotlib Jupyter Widgets `__.
- This is an intentional work-around; the ``ipympl`` version needs to exactly fit the version pre-installed on the Jupyter base system.
diff --git a/Docs/source/install/hpc/perlmutter.rst b/Docs/source/install/hpc/perlmutter.rst
index c5d6a9e1898..e9ebcd4e1de 100644
--- a/Docs/source/install/hpc/perlmutter.rst
+++ b/Docs/source/install/hpc/perlmutter.rst
@@ -13,7 +13,7 @@ If you are new to this system, **please see the following resources**:

* `NERSC user guide `__
* Batch system: `Slurm `__
-* `Jupyter service `__
+* `Jupyter service `__ (`documentation `__)
* `Filesystems `__:

* ``$HOME``: per-user directory, use only for inputs, source and scripts; backed up (40GB)
@@ -271,11 +271,36 @@ Running
Post-Processing
---------------

-For post-processing, most users use Python via NERSC's `Jupyter service `__ (`Docs `__).
+For post-processing, most users use Python via NERSC's `Jupyter service `__ (`documentation `__).

-Please follow the same process as for :ref:`NERSC Cori post-processing `.
-**Important:** The *environment + Jupyter kernel* must separate from the one you create for Cori.
+As a one-time preparatory setup, log into Perlmutter via SSH and do *not* source the WarpX profile script above.
+Create your own Conda environment and `Jupyter kernel `__ for post-processing:

-The Perlmutter ``$PSCRATCH`` filesystem is only available on *Perlmutter* Jupyter nodes.
-Likewise, Cori's ``$SCRATCH`` filesystem is only available on *Cori* Jupyter nodes.
-You can use the Community FileSystem (CFS) from everywhere.
+.. code-block:: bash
+
+ module load python
+
+ conda config --set auto_activate_base false
+
+ # create conda environment
+ rm -rf $HOME/.conda/envs/warpx-pm-postproc
+ conda create --yes -n warpx-pm-postproc -c conda-forge mamba conda-libmamba-solver
+ conda activate warpx-pm-postproc
+ conda config --set solver libmamba
+ mamba install --yes -c conda-forge python ipykernel ipympl matplotlib numpy pandas yt openpmd-viewer openpmd-api h5py fast-histogram dask dask-jobqueue pyarrow
+
+ # create Jupyter kernel
+ rm -rf $HOME/.local/share/jupyter/kernels/warpx-pm-postproc/
+ python -m ipykernel install --user --name warpx-pm-postproc --display-name WarpX-PM-PostProcessing
+ echo -e '#!/bin/bash\nmodule load python\nsource activate warpx-pm-postproc\nexec "$@"' > $HOME/.local/share/jupyter/kernels/warpx-pm-postproc/kernel-helper.sh
+ chmod a+rx $HOME/.local/share/jupyter/kernels/warpx-pm-postproc/kernel-helper.sh
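+ # prepend the kernel-helper.sh wrapper to the kernel's "argv" (via jq), so that
+ # the Python module and the conda environment are loaded every time this kernel starts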
+ KERNEL_STR=$(jq '.argv |= ["{resource_dir}/kernel-helper.sh"] + .' $HOME/.local/share/jupyter/kernels/warpx-pm-postproc/kernel.json | jq '.argv[1] = "python"')
+ echo ${KERNEL_STR} | jq > $HOME/.local/share/jupyter/kernels/warpx-pm-postproc/kernel.json
+
+ exit
+
+
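+As an optional sanity check (a sketch, not part of the required setup), you can log in again and confirm that the kernel is registered and that its ``kernel.json`` now calls the helper script:
+
+.. code-block:: bash
+
+ module load python
+ conda activate warpx-pm-postproc
+
+ # "warpx-pm-postproc" should appear in the list of registered kernels
+ jupyter kernelspec list
+
+ # the "argv" entry should now start with ".../kernel-helper.sh"
+ cat $HOME/.local/share/jupyter/kernels/warpx-pm-postproc/kernel.json
+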
+When opening a Jupyter notebook on `https://jupyter.nersc.gov <https://jupyter.nersc.gov>`__, just select ``WarpX-PM-PostProcessing`` from the list of available kernels on the top right of the notebook.
+
+Additional software can be installed later on, e.g., in a Jupyter cell using ``!mamba install -y -c conda-forge ...``.
+Software that is not available via conda can be installed via ``!python -m pip install ...``.
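+
+For example, a hypothetical later addition of one more plotting package (the package name below is purely illustrative) could look like the following sketch when run from a login shell; from a running notebook, the ``!mamba install`` form above achieves the same:
+
+.. code-block:: bash
+
+ # activate the post-processing environment created above
+ module load python
+ conda activate warpx-pm-postproc
+
+ # add an extra conda-forge package (illustrative example)
+ mamba install -y -c conda-forge seaborn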