Merge pull request #3 from CasparJungbacker/main
Update information about compiling
CasparJungbacker authored Jan 3, 2025
2 parents c04a53b + 72cc179 commit 218a974
Showing 3 changed files with 88 additions and 256 deletions.
29 changes: 12 additions & 17 deletions book/running/compilation.ipynb
@@ -22,28 +22,23 @@
 "```{code} shell\n",
 "make \n",
 "```\n",
-"After successfull compilation, the executable `dales4.4` (@team: number to change) is located in the subdirectory `src/`. Compilation of the code may be sped up using the `-j <nprocs>` specifier, with `nprocs` being the amount of parallel processes.\n",
+"After successful compilation, the executable `dales` is located in the subdirectory `bin/`. Compilation of the code may be sped up using the `-j <nprocs>` specifier, with `nprocs` being the number of parallel processes.\n",
 "\n",
 "## Compilation options\n",
-"It is possible to specify optional features at the compilation stage of the model. These optional features can be activated by adding them as specifyers to the cmake command, for example, \n",
+"It is possible to specify optional features at the compilation stage of the model. These features are activated by passing options to the cmake command in the form `-D<option>=<value>`. For example:\n",
 "```{code} shell\n",
 "cmake ../dales -DCMAKE_BUILD_TYPE=Debug\n",
 "```\n",
-"will produce a debug build. The debug build is much slower than the release build but contains more error checks. A list of other compilation options is shown below\n",
-"- Choose between single and double precision for the main prognostic fields and for the Poisson solver (from v4.4, default `64`)\n",
-" ```{code} shell\n",
-" -DFIELD_PRECISION=32\n",
-" ```\n",
-" ```{code} shell\n",
-" -DPOIS_PRECISION=32\n",
-" ```\n",
-"- Optional alternative Poisson solvers (since v4.3, default `False`)\n",
-" ```{code} shell\n",
-" -DUSE_FFTW=True\n",
-" ```\n",
-" ```{code} shell\n",
-" -DUSE_HYPRE=True\n",
-" ```\n",
+"will produce a debug build. The debug build is much slower than the release build but contains more error checks. A list of commonly used options is given below.\n",
+"\n",
+"| Option | Description | Allowed values | Default |\n",
+"| ------ | ----------- | -------------- | ------- |\n",
+"| `-DENABLE_FFTW` | Build with FFTW | On/Off | On |\n",
+"| `-DENABLE_HYPRE` | Build with HYPRE | On/Off | Off |\n",
+"| `-DENABLE_FP32_FIELDS` | Use single-precision floating-point numbers for prognostic fields (momentum, temperature, et cetera) | On/Off | Off |\n",
+"| `-DENABLE_FP32_POIS` | Use single-precision floating-point numbers for the Poisson solver | On/Off | Off |\n",
+"| `-DENABLE_ACC` | Build with GPU support through OpenACC | On/Off | Off |\n",
+"\n",
+"```{caution}\n",
+"To use HYPRE or FFTW, the library must both be enabled at compile time and selected at runtime by setting the &SOLVER section of the namoptions input file. `FIX LINK: See Alternative Poisson solvers`.\n",
+"```\n",
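As a hypothetical example combining the options from the table above, a release build with FFTW enabled and single-precision prognostic fields could be configured as follows (the directory layout, a build directory next to the `dales` source tree, is an assumption):

```shell
# Assumed layout: running from a build directory beside the dales source tree
cmake ../dales -DCMAKE_BUILD_TYPE=Release -DENABLE_FFTW=On -DENABLE_FP32_FIELDS=On
make -j 4
```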
239 changes: 0 additions & 239 deletions book/running/gpu.ipynb

This file was deleted.

76 changes: 76 additions & 0 deletions book/running/gpu.md
@@ -0,0 +1,76 @@
# Using the GPU
DALES now has the option to use graphics processing units (GPUs) to accelerate computations.

## Prerequisites

- One or more NVIDIA GPUs
- An OpenACC-compatible compiler
- cuFFT
- A GPU-aware MPI library

The easiest and recommended way to get started on the GPU is to download the NVIDIA HPC SDK. This SDK contains the nvfortran compiler (which supports OpenACC), cuFFT, and a GPU-aware build of the OpenMPI library. You can download it [here](https://developer.nvidia.com/hpc-sdk-downloads).
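After installing the SDK and adding its `bin` directory to your `PATH` (the exact install path varies per system and SDK version), a quick sanity check is to ask the compilers for their versions:

```shell
# Both commands should report an NVIDIA HPC SDK version
nvfortran --version
nvc --version
```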

## Compiling

To compile DALES for the GPU, OpenACC has to be enabled when configuring the build with CMake. It is important that CMake uses the correct compiler, so that the compilation flags are set up correctly. Using the `which` command, check that `mpif90` points to the MPI Fortran compiler wrapper bundled with the HPC SDK:

```{code} bash
$ which mpif90
/opt/nvidia/hpc_sdk/Linux_x86_64/2023/comm_libs/mpi/bin/mpif90
```

If the output looks similar, you're all set. The next step is to configure the GPU build using CMake:

```{code} bash
cd dales
mkdir build
cd build
export FC=mpif90
cmake -DENABLE_ACC=On ..
make -j
```

## Problems with NetCDF-Fortran

The NVIDIA compiler may complain about your installation of NetCDF-Fortran. If this is the case, you will see an error about "Old or corrupt module files" during the compilation phase. The solution is to compile the NetCDF-Fortran bindings yourself, using the NVIDIA compiler. This section briefly explains how to do this.

You can download the NetCDF-Fortran source code from GitHub using `wget`:

```{code} bash
wget https://github.com/Unidata/netcdf-fortran/archive/refs/tags/v4.6.1.tar.gz
tar xvf v4.6.1.tar.gz
```

Next, we need to export some variables such that NetCDF-Fortran compiles correctly. You can use `nc-config`, which comes bundled with NetCDF-C, to automatically determine the required compiler flags:
```{code} bash
cd netcdf-fortran-4.6.1
export CC=nvc  # the NVIDIA C compiler (not nvcc, which is the CUDA compiler)
export FC=nvfortran
export CPPFLAGS="$(nc-config --cflags)"
export LDFLAGS="$(nc-config --libs)"
```

Finally, we can compile and install NetCDF-Fortran. Here, we use `~/software/netcdf-fortran` as the installation location:
```{code} bash
mkdir -p $HOME/software/netcdf-fortran
./configure --prefix=$HOME/software/netcdf-fortran --disable-shared
make install
```
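Before installing, the library's own Autotools test suite can optionally be run to verify the build (a standard Autotools target; this assumes the `configure` step above completed without errors):

```shell
# Optional: run the NetCDF-Fortran self-tests before `make install`
make check
```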

To then compile DALES with this new library, we need to point CMake to its location:

```{code} bash
export FC=mpif90
cmake -DENABLE_ACC=On -DNetCDF_Fortran_ROOT=$HOME/software/netcdf-fortran ..
make -j
```

## Running

If you have compiled DALES successfully with OpenACC enabled, running it is not very different from the CPU version. DALES can be launched using the `mpirun` command:

```{code} bash
mpirun -np N <path-to-DALES> <path-to-namoptions>
```

where N should match the number of GPUs you want to use.
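As a concrete sketch (the executable path, case file name, and GPU count are illustrative assumptions), a run on four GPUs with one MPI rank per GPU could look like:

```shell
# Hypothetical invocation: 4 MPI ranks, one per GPU
mpirun -np 4 ./build/bin/dales namoptions.001
```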
