                 __    ____  _____
   ____   ___   / /__ / __ \/ ___/
  / __ \ / _ \ / //_// /_/ /\__ \
 / / / //  __// ,<  / _, _/___/ /
/_/ /_/ \___//_/|_|/_/ |_|/____/
COPYRIGHT (c) 2019-2023 UCHICAGO ARGONNE, LLC
This is a fork of NekRS that contains a series of efforts to integrate AI/ML capabilities within the flow solver. It is maintained by the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. Please explore the other branches to see which capabilities are available and how to use them on ALCF machines. This is the master branch, which is the same as the latest release of the official NekRS repository.
nekRS is a fast and scalable computational fluid dynamics (CFD) solver targeting HPC applications. The code started as an early fork of libParanumal in 2019.
Capabilities:
- Incompressible and low Mach-number Navier-Stokes + scalar transport
- High-order curvilinear conformal spectral elements in space
- Variable time step 2nd/3rd order semi-implicit time integration
- MPI + OCCA (backends: CUDA, HIP, OPENCL, SERIAL/C++)
- LES and RANS turbulence models
- Arbitrary-Lagrangian-Eulerian moving mesh
- Lagrangian phase model
- Overlapping overset grids
- Conjugate fluid-solid heat transfer
- Various boundary conditions
- VisIt & Paraview support for data analysis and visualization
- Legacy interface to Nek5000
Requirements:
- Linux, macOS (Microsoft Windows and WSL are not supported)
- C++17/C99 compatible compiler
- GNU/Intel/NVHPC Fortran compiler
- MPI-3.1 or later
- CMake version 3.18 or later
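To verify the toolchain against these requirements, a quick check (assuming standard MPI compiler wrappers and CMake are already on the PATH):

```bash
# Quick toolchain check against the requirements listed above.
mpicc --version    # C compiler behind the MPI wrapper
mpif90 --version   # Fortran compiler (GNU/Intel/NVHPC)
cmake --version    # must report 3.18 or later
```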
Download the latest release from
https://github.com/argonne-lcf/nekRS-ML/archive/refs/heads/master.zip
or clone our GitHub repository:
https://github.com/argonne-lcf/nekRS-ML.git
The master branch always points to the latest stable release, while the next branch provides an early preview of the upcoming release (do not use it in a production environment).
To build and install the code run:
CC=mpicc CXX=mpic++ FC=mpif77 ./nrsconfig [-DCMAKE_INSTALL_PREFIX=$HOME/.local/nekrs]
Build settings can be customized through CMake options passed to nrsconfig.
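For example, -D options are forwarded to CMake, so standard CMake variables can be set alongside the install prefix; a minimal sketch (compilers and paths are illustrative only):

```bash
# Forward extra CMake options through nrsconfig; -D flags go straight to CMake.
CC=mpicc CXX=mpic++ FC=mpif77 ./nrsconfig \
  -DCMAKE_INSTALL_PREFIX=$HOME/.local/nekrs \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo
```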
In case of an update, please remove the previous build and installation directories first.
Assuming you run bash and your install directory is $HOME/.local/nekrs, add the following lines to your $HOME/.bash_profile:
export NEKRS_HOME=$HOME/.local/nekrs
export PATH=$NEKRS_HOME/bin:$PATH
Then type source $HOME/.bash_profile in the current terminal window.
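A quick sanity check after sourcing the profile (assuming the installation placed the nekrs executable in $NEKRS_HOME/bin):

```bash
# Confirm the environment points at the installation.
echo $NEKRS_HOME     # should print your install prefix
command -v nekrs     # should resolve to $NEKRS_HOME/bin/nekrs
```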
We try hard not to break userland, but the code is evolving quickly, so things might change from one version to another without being backward compatible. Please consult RELEASE.md before using the code.
To test the installation, run one of the examples:
cd $NEKRS_HOME/examples/turbPipePeriodic
mpirun -np 2 nekrs --setup turbPipe.par
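The file passed to --setup is an INI-style case (.par) file; the case directory also provides the mesh (.re2) and the user source files. A minimal sketch of what such a file can look like is shown below (section and key names are typical but version-dependent, and the values are illustrative, not the actual turbPipe settings):

```ini
# Illustrative .par sketch only -- see the shipped examples for real settings.
[GENERAL]
polynomialOrder = 7
timeStepper = tombo2
stopAt = numSteps
numSteps = 1000
dt = 6.0e-03

[PRESSURE]
residualTol = 1e-04

[VELOCITY]
boundaryTypeMap = wall
density = 1.0
# a negative viscosity is conventionally read as 1/|value| (assumption; check the docs/examples)
viscosity = -19000
residualTol = 1e-06
```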
For convenience we provide various launch scripts in the bin directory.
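For instance, assuming a wrapper named nrsmpi is among them (name and argument order are an assumption based on stock nekRS; check bin/ for what your build actually installs), the run above could also be launched as:

```bash
# nrsmpi <casename> <number_of_MPI_ranks>  -- assumed wrapper, see bin/ for the actual scripts
nrsmpi turbPipe 2
```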
To build on an ALCF GPU machine such as Polaris, set the environment with
module swap PrgEnv-nvhpc PrgEnv-gnu
module load cudatoolkit-standalone
module load cmake
module unload cray-libsci
export CRAY_ACCEL_TARGET=nvidia80
and build the code with
CC=cc CXX=CC FC=ftn ./nrsconfig -DCMAKE_INSTALL_PREFIX=</path/to/install/dir>
where </path/to/install/dir> can be a user's home directory or a project space.
Then run the examples with
export NEKRS_HOME=</path/to/install/dir>
export PATH=$NEKRS_HOME/bin:$PATH
cd examples/ktauChannel
mpiexec -n 4 --ppn 4 --cpu-bind numa nekrs --setup channel.par
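The module list above (PrgEnv, cudatoolkit-standalone, CRAY_ACCEL_TARGET=nvidia80) suggests an ALCF Polaris-style PBS system. Under that assumption, a batch version of the same run might look like the following sketch; project, queue, walltime, and filesystem selections are placeholders to adapt to your allocation:

```bash
#!/bin/bash
#PBS -A <your_project>            # placeholder allocation
#PBS -q debug                     # placeholder queue
#PBS -l select=1:system=polaris   # one node (Polaris-style PBS assumed)
#PBS -l walltime=00:30:00
#PBS -l filesystems=home:eagle    # adjust to the filesystems your case needs

cd $PBS_O_WORKDIR

export NEKRS_HOME=</path/to/install/dir>
export PATH=$NEKRS_HOME/bin:$PATH

# one MPI rank per GPU on a single node, matching the interactive example above
mpiexec -n 4 --ppn 4 --cpu-bind numa nekrs --setup channel.par
```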
For documentation, see our readthedocs page. For now it is only a placeholder; we hope to improve it soon.
Please visit GitHub Discussions, where we offer help, find solutions, share ideas, and follow discussions.
Our project is hosted on GitHub. To learn how to contribute, see CONTRIBUTING.md.
All bugs are reported and tracked through Issues. If you are having trouble installing the code or getting your case to run properly, you should first visit our discussion group.
nekRS is released under the BSD 3-clause license (see the LICENSE file).
All new contributions must be made under the BSD 3-clause license.
Reference: Fischer et al., "NekRS, a GPU-Accelerated Spectral Element Navier-Stokes Solver," Parallel Computing, 2022.
This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy's Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation's exascale computing imperative.