L3STER (pronounced like the delicious English cheese) stands for "Least-Squares Scalable SpecTral/hp Element fRamework". L3STER provides a scalable, flexible framework for the solution of systems of partial differential equations. Thanks to the use of the least-squares finite element method, no weak formulation is needed - you can directly implement any first-order system of PDEs.
The guiding philosophy of the project is: "From a set of PDEs and a mesh to a working simulation within an afternoon!"
Features of the library include:
- A modern implementation leveraging C++23
- Scalability using hybrid parallelism (MPI + multithreading)
- Computational efficiency thanks to a high-order discretization
- Mesh import from Gmsh
- Results export to VTK, simple postprocessing (flux integrals etc.) available natively
- Easy setup (all dependencies available in Spack)
If you'd like to use L3STER but need a different I/O format, please drop us an issue!
If your equation is of a higher order, you'll first need to recast it as a first-order system by introducing auxiliary unknowns (e.g. gradients). At the end of the day, each equation takes the form of:

$$A_0 u + \sum_{i=1}^{d} A_i \frac{\partial u}{\partial x_i} = f$$

where $u$ is the vector of unknown fields, $d$ is the number of spatial dimensions, and the matrices $A_0, A_1, \dots, A_d$ as well as the right-hand side $f$ may depend on space, time, and previously computed fields.
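As a quick illustration of the recasting step (generic least-squares FEM practice, not L3STER-specific syntax), consider the Poisson equation $-\nabla \cdot \nabla T = s$. It is second order, so we introduce the auxiliary unknown $q = \nabla T$ and solve the equivalent first-order system

$$q - \nabla T = 0, \qquad -\nabla \cdot q = s,$$

which fits the form above with the unknown vector $u = (T, q_1, \dots, q_d)$.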
You can also use the approach outlined above to describe an arbitrary boundary condition. In L3STER, the only difference between domain equations and boundary conditions is that when defining BCs, you gain access to the boundary normal vector.
Only Dirichlet BCs are treated in a special fashion: they are imposed strongly on the resulting algebraic system. It is possible to define them in the equation sense as well, but this is not recommended.
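For example (again just an illustration of the formulation, not library syntax), a Neumann condition for the Poisson system above is simply one more first-order equation which happens to use the boundary normal $n$:

$$q \cdot n = g \quad \text{on } \Gamma_N,$$

i.e. the normal component of $q$ is prescribed on the boundary $\Gamma_N$, with $g$ playing the role of the source term $f$.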
Note that this formulation does not contain a time derivative. If you are solving an unsteady problem, you'll first need to discretize your problem in time. For example, you can use the backward Euler scheme:

$$\frac{\partial u}{\partial t} \approx \frac{u^{(n+1)} - u^{(n)}}{\Delta t}$$

You can then add $\frac{1}{\Delta t}$ to the corresponding diagonal entries of $A_0$ and $\frac{u^{(n)}}{\Delta t}$ to the source term $f$, and solve for $u^{(n+1)}$ at each time step.
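Spelled out for an equation whose unknowns all carry a time derivative (an illustrative special case), the system you actually implement at each step reads

$$\left(A_0 + \frac{1}{\Delta t} I\right) u^{(n+1)} + \sum_{i=1}^{d} A_i \frac{\partial u^{(n+1)}}{\partial x_i} = f + \frac{u^{(n)}}{\Delta t}.$$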
If your problem is non-linear, you'll first need to linearize it, e.g., using Newton's method.
You can then iterate to obtain your solution.
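To sketch the linearization step (standard Newton linearization, not tied to any particular L3STER API), a non-linear convective term $u \cdot \nabla u$ can be expanded about the previous iterate $u^{(k)}$ as

$$u \cdot \nabla u \approx u^{(k)} \cdot \nabla u + u \cdot \nabla u^{(k)} - u^{(k)} \cdot \nabla u^{(k)},$$

which is linear in the new iterate $u$ and therefore fits the first-order form above. The last term is known and moves to the source term $f$; you repeat until the difference between consecutive iterates is sufficiently small.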
L3STER provides a convenient way of accessing previously computed fields (and their derivatives) when defining your equations. This mechanism can also be used for time-stepping (where the previous value(s) of $u$ enter the source term) and for non-linear iterations (where the equations depend on the previous iterate).
L3STER is a header-only library, which means you don't need to install it. Simply point your CMake project at the directory where L3STER resides and use the provided target:
# In your project's CMakeLists.txt
add_subdirectory( path/to/L3STER L3STER-bin )
target_link_libraries( my-executable-target L3STER )
That being said, L3STER has several dependencies, which will need to be installed first:
- CMake 3.24 or newer
- A C++23 compiler (GCC 13 or newer will work)
- MPI
- Hwloc
- Metis
- Trilinos 14.0 or newer. The following packages are currently used:
  - Kokkos (which can be built separately from Trilinos)
  - Tpetra
  - Belos - optional, needed for iterative solvers
  - Amesos2 - optional, needed for direct solvers
  - Ifpack2 - optional, used for preconditioners (L3STER provides a few simple ones natively)
- Intel oneTBB
- Eigen version 3.4
All of these dependencies are available via Spack. You can easily install them as follows:
# Get spack and set up the shell
git clone -c feature.manyFiles=true https://github.com/spack/spack.git
cd spack
git checkout tags/releases/latest
. share/spack/setup-env.sh # consider adding this to your .bashrc
# Find some common packages so that spack doesn't have to build them from scratch (saves time)
spack external find binutils cmake coreutils curl diffutils findutils git gmake openssh perl python sed tar
# If you have a sufficiently recent compiler, skip this section
spack install gcc
spack load gcc
spack compiler find
# Create a spack environment for L3STER dependencies and install them
# Some of the libraries listed above are not mentioned explicitly, they will be built as dependencies of other packages
spack env create l3ster
spacktivate l3ster
spack add eigen intel-oneapi-tbb parmetis kokkos+openmp trilinos cxxstd=17 +openmp +amesos2 +belos +tpetra +ifpack2
spack concretize
spack install
# Cleanup to save disc space
spack gc -y
spack clean -a
When using L3STER, all you need to do is call `spacktivate l3ster` before invoking CMake.
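A typical workflow might then look like the following sketch (the project path and build directory name are just placeholders):
# Activate the Spack environment containing L3STER's dependencies
spacktivate l3ster
# Configure and build your project with the MPI compiler wrapper
cmake -S /path/to/my-project -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=mpic++
cmake --build build -j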
Your cluster administrators may provide a global Spack instance, which you can take advantage of using Spack chaining. If there is no global instance, you should still use the MPI installation provided by the admins rather than building your own; please consult the Spack documentation on how to use external packages.
L3STER follows the MPI+X paradigm (hybrid parallelism). It uses TBB for multithreading and MPI for multiprocessing. It is recommended that you launch one MPI rank per CPU (socket), not per CPU core.
Trilinos uses OpenMP for multithreading. L3STER and Trilinos parallel regions never overlap, so oversubscription is not an issue.
L3STER uses Hwloc to detect your machine's topology and limits the SMT parallelism where appropriate. You don't need to worry about hyperthreading; L3STER will just do the right thing.
On desktops, where presumably you have only one CPU, you can launch your application directly. Parallelism is achieved via multithreading on a single MPI rank.
./my-l3ster-app
Note that you still need to build with MPI (sorry).
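On a multi-socket workstation without a job scheduler, you can follow the one-rank-per-socket recommendation by hand. With Open MPI, for example, a launch could look roughly like this (binding flags differ between MPI implementations):
# Two ranks on a dual-socket machine, each bound to its own socket
mpirun -n 2 --map-by socket --bind-to socket ./my-l3ster-app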
Example slurm script demonstrating L3STER usage:
#!/bin/bash
#SBATCH -N [number of nodes you'd like to run on]
#SBATCH -n [number of nodes you'd like to run on multiplied by number of sockets/node (often 2)]
#SBATCH -c [number of cores per socket]
#SBATCH --ntasks-per-socket 1
# Set up environment via spack and/or system modules
cd /my/project/dir
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=mpic++ .. || exit 1
cmake --build . || exit 1
srun my-l3ster-app
We are working on fully documenting the L3STER library. For the time being, please refer to the examples.