diff --git a/CMakeLists.txt b/CMakeLists.txt index 5e92d5c664..058eb79c3c 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -566,6 +566,9 @@ if (MFC_DOCUMENTATION) install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/docs/res" DESTINATION "docs/mfc") + install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/docs/robots.txt" + DESTINATION "docs/mfc") + install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/docs/index.html" DESTINATION "docs/mfc") endif() diff --git a/README.md b/README.md index 6cf468dd08..a5dbea6ad4 100644 --- a/README.md +++ b/README.md @@ -20,8 +20,8 @@

Welcome to the home of MFC! -MFC simulates compressible multi-component and multi-phase flows, amongst other things. -It scales ideally to exascale; [tens of thousands of GPUs on NVIDIA- and AMD-GPU Machines](#is-this-really-exascale), like Oak Ridge Summit and Frontier. +MFC simulates compressible multi-component and multi-phase flows, [amongst other things](#what-else-can-this-thing-do). +It scales ideally to exascale; [tens of thousands of GPUs on NVIDIA- and AMD-GPU machines](#is-this-really-exascale), like Oak Ridge Summit and Frontier. MFC is written in Fortran and makes use of metaprogramming to keep the code short (about 20K lines). Get in touch with the maintainers, like Spencer, if you have questions! @@ -96,7 +96,7 @@ They are organized below, just click the drop-downs! * Multi- and single-phase * Phase change via p, pT, and pTg schemes * Grids - * 1-3D Cartesian, Cylindrical, Axi-symmetric. + * 1-3D Cartesian, cylindrical, axi-symmetric. * Arbitrary grid stretching for multiple domain regions available. * Complex/arbitrary geometries via immersed boundary methods * STL geometry files supported @@ -162,12 +162,10 @@ If you use MFC, consider citing it: ## License -Copyright 2021-2024. +Copyright 2021-2024 Spencer Bryngelson and Tim Colonius. MFC is under the MIT license (see [LICENSE](LICENSE) file for full text). ## Acknowledgements - -

- MFC development was supported by multiple current and past grants from the US Department of Defense, National Institute of Health (NIH), Department of Energy (DOE), and the National Science Foundation (NSF). - MFC computations use OLCF Frontier, Summit, and Wombat under allocation CFD154 (PI Bryngelson) and ACCESS-CI under allocations TG-CTS120005 (PI Colonius) and TG-PHY210084 (PI Bryngelson). -

+ +Multiple federal sponsors have supported MFC development, including the US Department of Defense (DOD), National Institutes of Health (NIH), Department of Energy (DOE), and National Science Foundation (NSF). +MFC computations use OLCF Frontier, Summit, and Wombat under allocation CFD154 (PI Bryngelson) and ACCESS-CI under allocations TG-CTS120005 (PI Colonius) and TG-PHY210084 (PI Bryngelson). diff --git a/docs/documentation/case.md b/docs/documentation/case.md index 8721a037fe..e76e82d5b5 100644 --- a/docs/documentation/case.md +++ b/docs/documentation/case.md @@ -23,7 +23,7 @@ print(json.dumps({ Thus, you can run your case file with Python to view the computed case dictionary that will be processed by MFC when you run: ```console -$ python3 my_case_file.py +python3 my_case_file.py ``` This is particularly useful when computations are done in Python to generate the case. @@ -188,7 +188,7 @@ The code outputs error messages when an empty region is left in the domain. Some parameters, as described above, can be defined by analytical functions in the input file. For example, one can define the following patch: -```json +```console 'patch_icpp(2)%geometry' : 15, 'patch_icpp(2)%x_centroid' : 0.25, 'patch_icpp(2)%length_x' : 9.5, @@ -348,7 +348,7 @@ Details of implementation of viscosity in MFC can be found in [Coralic (2015)](r | `adv_alphan` | Logical | Equations for all $N$ volume fractions (instead of $N-1$) | | `mpp_lim` | Logical | Mixture physical parameters limits | | `mixture_err` | Logical | Mixture properties correction | -| `time_stepper` | Integer | Runge--Kutta order [1--3] | +| `time_stepper` | Integer | Runge-Kutta order [1-3] | | `weno_order` | Integer | WENO order [1,3,5] | | `weno_eps` | Real | WENO perturbation (avoid division by zero) | | `mapped_weno` | Logical | WENO with mapping of nonlinear weights | @@ -360,8 +360,8 @@ Details of implementation of viscosity in MFC can be found in [Coralic (2015)](r | `weno_Re_flux` | Logical | Compute velocity gradient using scaler divergence theorem | | `weno_avg` | Logical | Arithmetic mean of left and right, WENO-reconstructed, cell-boundary values | -- \* Options that work only with `model_eqns` $=2$. -- † Options that work only with `cyl_coord` $=$ `False`. +- \* Options that work only with `model_eqns =2`. +- † Options that work only with `cyl_coord = 'F'`. - ‡ Options that work only with `bc_[x,y,z]%[beg,end] = -15` and/or `bc_[x,y,z]%[beg,end] = -16` The table lists simulation algorithm parameters. @@ -379,7 +379,7 @@ Tangential velocities require viscosity, `weno_avg = T`, and `bc_[x,y,z]%beg = - Tangential velocities require viscosity, `weno_avg = T`, and `bc_[x,y,z]\%end = 16` to work properly. Normal velocities require `bc_[x,y,z]\%end = -15` or `\bc_[x,y,z]\%end = -16` to work properly. - `model_eqns` specifies the choice of the multi-component model that is used to formulate the dynamics of the flow using integers from 1 through 3. -`model_eqns` $=$ 1, 2, and 3 correspond to $\Gamma$-$\Pi_\infty$ model ([Johnsen, 2008](references.md#Johnsen08)), 5-equation model ([Allaire et al., 2002](references.md#Allaire02)), and 6-equation model ([Saurel et al., 2009](references.md#Saurel09)), respectively. +`model_eqns = 1`, `2`, and `3` correspond to $\Gamma$-$\Pi_\infty$ model ([Johnsen, 2008](references.md#Johnsen08)), 5-equation model ([Allaire et al., 2002](references.md#Allaire02)), and 6-equation model ([Saurel et al., 2009](references.md#Saurel09)), respectively. 
The difference of the two models is assessed by ([Schmidmayer et al., 2019](references.md#Schmidmayer19)). Note that some code parameters are only compatible with 5-equation model. @@ -392,14 +392,14 @@ If this parameter is set false, the void fraction of $N$-th component is compute $$ \alpha_N=1-\sum^{N-1}_{i=1} \alpha_i $$ where $\alpha_i$ is the void fraction of $i$-th component. -When a single-component flow is simulated, it requires that `adv_alphan` $=$ `True`. +When a single-component flow is simulated, it requires that `adv_alphan = 'T'`. - `mpp_lim` activates correction of solutions to avoid a negative void fraction of each component in each grid cell, such that $\alpha_i>\varepsilon$ is satisfied at each time step. - `mixture_err` activates correction of solutions to avoid imaginary speed of sound at each grid cell. - `time_stepper` specifies the order of the Runge-Kutta (RK) time integration scheme that is used for temporal integration in simulation, from the 1st to 5th order by corresponding integer. -Note that `time_stepper` $=$ 3 specifies the total variation diminishing (TVD), third order RK scheme ([Gottlieb and Shu, 1998](references.md#Gottlieb98)). +Note that `time_stepper = 3` specifies the total variation diminishing (TVD), third order RK scheme ([Gottlieb and Shu, 1998](references.md#Gottlieb98)). - `weno_order` specifies the order of WENO scheme that is used for spatial reconstruction of variables by an integer of 1, 3, and 5, that correspond to the 1st, 3rd, and 5th order, respectively. @@ -408,18 +408,18 @@ Practically, `weno_eps` $<10^{-6}$ is used. - `mapped_weno` activates mapping of the nonlinear WENO weights to the more accurate nonlinear weights in order to reinstate the optimal order of accuracy of the reconstruction in the proximity of critical points ([Henrick et al., 2005](references.md#Henrick05)). -- `null_weights` activates nullification of the nonlinear WENO weights at the buffer regions outside the domain boundaries when the Riemann extrapolation boundary condition is specified (`bc_[x,y,z]\%beg[end]}` $=-4$). +- `null_weights` activates nullification of the nonlinear WENO weights at the buffer regions outside the domain boundaries when the Riemann extrapolation boundary condition is specified (`bc_[x,y,z]\%beg[end]} = -4`). - `mp_weno` activates monotonicity preservation in the WENO reconstruction (MPWENO) such that the values of reconstructed variables do not reside outside the range spanned by WENO stencil ([Balsara and Shu, 2000](references.md#Balsara00); [Suresh and Huynh, 1997](references.md#Suresh97)). - `riemann_solver` specifies the choice of the Riemann solver that is used in simulation by an integer from 1 through 3. -`riemann_solver` $=$ 1,2, and 3 correspond to HLL, HLLC, and Exact Riemann solver, respectively ([Toro, 2013](references.md#Toro13)). +`riemann_solver = 1`, `2`, and `3` correspond to HLL, HLLC, and Exact Riemann solver, respectively ([Toro, 2013](references.md#Toro13)). - `avg_state` specifies the choice of the method to compute averaged variables at the cell-boundaries from the left and the right states in the Riemann solver by an integer of 1 or 2. -`avg_state` $=$ 1 and 2 correspond to Roe- and arithmetic averages, respectively. +`avg_state = 1` and `2` correspond to Roe- and arithmetic averages, respectively. - `wave_speeds` specifies the choice of the method to compute the left, right, and middle wave speeds in the Riemann solver by an integer of 1 and 2. 
-`wave_speeds` $=$ 1 and 2 correspond to the direct method ([Batten et al., 1997](references.md#Batten97)), and indirect method that approximates the pressures and velocity ([Toro, 2013](references.md#Toro13)), respectively. +`wave_speeds = 1` and `2` correspond to the direct method ([Batten et al., 1997](references.md#Batten97)), and indirect method that approximates the pressures and velocity ([Toro, 2013](references.md#Toro13)), respectively. - `weno_Re_flux` activates the scaler divergence theorem in computing the velocity gradients using WENO-reconstructed cell boundary values. If this option is false, velocity gradient is computed using finite difference scheme of order 2 which is independent of the WENO order. @@ -461,9 +461,9 @@ This option requires `weno_Re_flux` to be true because cell boundary values are The table lists formatted database output parameters. The parameters define variables that are outputted from simulation and file types and formats of data as well as options for post-processing. -- `format` specifies the choice of the file format of data file outputted by MFC by an integer of 1 and 2. `format` $=$ 1 and 2 correspond to Silo-HDF5 format and binary format, respectively. +- `format` specifies the choice of the file format of data file outputted by MFC by an integer of 1 and 2. `format = 1` and `2` correspond to Silo-HDF5 format and binary format, respectively. -- `precision` specifies the choice of the floating-point format of the data file outputted by MFC by an integer of 1 and 2. `precision` $=$ 1 and 2 correspond to single-precision and double-precision formats, respectively. +- `precision` specifies the choice of the floating-point format of the data file outputted by MFC by an integer of 1 and 2. `precision = 1` and `2` correspond to single-precision and double-precision formats, respectively. - `parallel_io` activates parallel input/output (I/O) of data files. It is highly recommended to activate this option in a parallel environment. With parallel I/O, MFC inputs and outputs a single file throughout pre-process, simulation, and post-process, regardless of the number of processors used. @@ -481,7 +481,7 @@ If `file_per_process` is true, then pre_process, simulation, and post_process mu - `schlieren_alpha(i)` specifies the intensity of the numerical Schlieren of $i$-th component. - `fd_order` specifies the order of the finite difference scheme that is used to compute the vorticity from the velocity field and the numerical schlieren from the density field by an integer of 1, 2, and 4. -`fd_order` $=$ 1, 2, and 4 correspond to the first, second, and fourth-order finite difference schemes, respectively. +`fd_order = 1`, `2`, and `4` correspond to the first, second, and fourth-order finite difference schemes, respectively. - `probe_wrt` activates output of state variables at coordinates specified by `probe(i)%[x;y,z]`. @@ -510,7 +510,7 @@ Details of the acoustic source model can be found in [Maeda and Colonius (2017)] - `num_mono` defines the total number of source planes by an integer. - `Mono(i)%pulse` specifies the choice of the acoustic waveform generated from $i$-th source plane by an integer. -`Mono(i)%pulse` $=$ 1, 2, and 3 correspond to sinusoidal wave, Gaussian wave, and square wave, respectively. +`Mono(i)%pulse = 1`, `2`, and `3` correspond to sinusoidal wave, Gaussian wave, and square wave, respectively. - `Mono(i)%npulse` defines the number of cycles of the acoustic wave generated from $i$-th source plane by an integer. 
@@ -519,11 +519,11 @@ Details of the acoustic source model can be found in [Maeda and Colonius (2017)] - `Mono(i)%length` defines the characteristic wavelength of the acoustic wave generated from $i$-th source plane. - `Mono(i)%support` specifies the choice of the geometry of acoustic source distribution of $i$-th source plane by an integer from 1 through 3:\\ -`Mono(i)%support` $=1$ specifies an infinite source plane that is normal to the $x$-axis and intersects with the axis at $x=$ `Mono(i)%loc(1)` in 1-D simulation.\\ -`Mono(i)%support` $=2$ specifies a semi-infinite source plane in 2-D simulation. +`Mono(i)%support =1` specifies an infinite source plane that is normal to the $x$-axis and intersects with the axis at $x=$ `Mono(i)%loc(1)` in 1-D simulation.\\ +`Mono(i)%support =2` specifies a semi-infinite source plane in 2-D simulation. The $i$-th source plane is determined by the point at [`Mono(i)%loc(1)`, `Mono(i)%loc(2)`] and the normal vector [$\mathrm{cos}$(`Mono(i)%dir`), $\mathrm{sin}$(`Mono(i)%dir`)] that consists of this point. The source plane is defined in the finite region of the domain: $x\in[-\infty,\infty]$ and $y\in$[-`mymono_length`/2, `mymono_length`/2].\\ -`Mono(i)%support` $=3$ specifies a semi-infinite source plane in 3-D simulation. +`Mono(i)%support =3` specifies a semi-infinite source plane in 3-D simulation. The $i$-th source plane is determined by the point at [`Mono(i)%loc(1)`, `Mono(i)%loc(2)`, `Mono(i)%loc(3)`] and the normal vector [$\mathrm{cos}$(`Mono(i)%dir`), $\mathrm{sin}$(`Mono(i)%dir`), 1] that consists of this point. The source plane is defined in the finite region of the domain: $x\in[-\infty,\infty]$ and $y,z\in$[-`mymono_length`/2, `mymono_length`/2]. There are a few additional spatial support types available for special source types and coordinate systems tabulated in [Monopole supports](#monopole-supports). @@ -571,7 +571,7 @@ This table lists the ensemble-averaged bubble model parameters. - `bubbles` activates the ensemble-averaged bubble model. - `bubble_model` specified a model for spherical bubble dynamics by an integer of 1 and 2. -`bubble_model` $=$ 1 and 2 correspond to the Gilmore and the Keller-Miksis equations, respectively. +`bubble_model = 1`, `2`, and `3` correspond to the Gilmore, Keller-Miksis, and Rayleigh-Plesset models. - `polytropic` activates polytropic gas compression in the bubble. When `polytropic` is set `False`, the gas compression is modeled as non-polytropic due to heat and mass transfer across the bubble wall with constant heat and mass transfer coefficients based on ([Preston et al., 2007](references.md#Preston07)). @@ -579,14 +579,14 @@ When `polytropic` is set `False`, the gas compression is modeled as non-polytrop - `polydisperse` activates polydispersity in the bubble model by means of a probability density function (PDF) of the equiliibrium bubble radius. - `thermal` specifies a model for heat transfer across the bubble interface by an integer from 1 through 3. -`thermal` $=$ 1, 2, and 3 correspond to no heat transfer (adiabatic gas compression), isothermal heat transfer, and heat transfer with a constant heat transfer coefficient based on [Preston et al., 2007](references.md#Preston07), respectively. +`thermal = 1`, `2`, and `3` correspond to no heat transfer (adiabatic gas compression), isothermal heat transfer, and heat transfer with a constant heat transfer coefficient based on [Preston et al., 2007](references.md#Preston07), respectively. - `R0ref` specifies the reference bubble radius. 
- `nb` specifies the number of discrete bins that define the probability density function (PDF) of the equilibrium bubble radius. - `R0_type` specifies the quadrature rule for integrating the log-normal PDF of equilibrium bubble radius for polydisperse populations. -`R0_type` $=$ 1 corresponds to Simpson's rule. +`R0_type = 1` corresponds to Simpson's rule. - `poly_sigma` specifies the standard deviation of the log-normal PDF of equilibrium bubble radius for polydisperse populations. @@ -599,7 +599,7 @@ Implementation of the parameters into the model follow [Ando (2010)](references. - `qbmm` activates quadrature by method of moments, which assumes a PDF for bubble radius and velocity. -- `dist_type` specifies the initial joint PDF of initial bubble radius and bubble velocity required in qbmm. `dist_type` $=$ 1 and 2 correspond to binormal and lognormal-normal distributions respectively. +- `dist_type` specifies the initial joint PDF of initial bubble radius and bubble velocity required in qbmm. `dist_type = 1` and `2` correspond to binormal and lognormal-normal distributions respectively. - `sigR` specifies the standard deviation of the PDF of bubble radius required in qbmm. @@ -635,7 +635,7 @@ The parameters are optionally used to define initial velocity profiles and pertu - `vel_profile` activates setting the mean streamwise velocity to hyperbolic tangent profile. This option works only for 2D and 3D cases. - `instability_wave` activates the perturbation of initial velocity by instability waves obtained from linear stability analysis for a mixing layer with hyperbolic tangent mean streamwise velocity profile. -This option only works for 2D and 3D cases, together with `vel_profile = TRUE`. +This option only works for 2D and 3D cases, together with `vel_profile = 'T'`. ### 11. Phase Change Model | Parameter | Type | Description | @@ -675,7 +675,7 @@ This option only works for 2D and 3D cases, together with `vel_profile = TRUE`. | -15 | Normal | Slip wall | | -16 | Normal | No-slip wall | -*: This boundary condition is only used for `bc_y%beg` when using cylindrical coordinates (`cyl_coord = 'T'` and 3d). For axisymmetric problems, use `bc_y%beg = -2` with `cyl_coord = 'T'` in 2D. +*: This boundary condition is only used for `bc_y%beg` when using cylindrical coordinates (`cyl_coord = 'T'` and 3D). For axisymmetric problems, use `bc_y%beg = -2` with `cyl_coord = 'T'` in 2D. The boundary condition supported by the MFC are listed in table [Boundary Conditions](#boundary-conditions). Their number (`#`) corresponds to the input value in `input.py` labeled `bc_[x,y,z]%[beg,end]` (see table [Simulation Algorithm Parameters](#5-simulation-algorithm)). diff --git a/docs/documentation/getting-started.md b/docs/documentation/getting-started.md index eef5d6b7e8..ab97efae3d 100644 --- a/docs/documentation/getting-started.md +++ b/docs/documentation/getting-started.md @@ -5,9 +5,8 @@ You can either download MFC's [latest release from GitHub](https://github.com/MFlowCode/MFC/releases/latest) or clone the repository: ```console -$ git clone https://github.com/MFlowCode/MFC.git -$ cd MFC -$ git checkout +git clone https://github.com/MFlowCode/MFC.git +cd MFC ``` ## Build Environment @@ -21,15 +20,15 @@ Please select your desired configuration from the list bellow: - **On supported clusters:** Load environment modules ```console -$ . ./mfc.sh load +. 
./mfc.sh load ``` - **Via [Aptitude](https://wiki.debian.org/Aptitude):** ```console -$ sudo apt update -$ sudo apt upgrade -$ sudo apt install tar wget make cmake gcc g++ \ +sudo apt update +sudo apt upgrade +sudo apt install tar wget make cmake gcc g++ \ python3 python3-dev \ "openmpi-*" libopenmpi-dev ``` @@ -37,8 +36,8 @@ $ sudo apt install tar wget make cmake gcc g++ \ - **Via [Pacman](https://wiki.archlinux.org/title/pacman):** ```console -$ sudo pacman -Syu -$ sudo pacman -S base-devel coreutils \ +sudo pacman -Syu +sudo pacman -S base-devel coreutils \ git ninja gcc-fortran \ cmake openmpi python3 \ python-pip openssh \ @@ -88,35 +87,32 @@ You can now follow the appropriate instructions for your distribution.
-MacOS (x86 and Apple Silicon)
+MacOS
-**Note:** macOS remains the most difficult platform to consistently compile MFC on. -If you run into issues, we suggest you try using Docker (instructions above). - - - **MacOS v10.15 (Catalina) or newer [ZSH]** (Verify with `echo $SHELL`) + - **If you use [ZSH]** (Verify with `echo $SHELL`) ```console -$ touch ~/.zshrc -$ open ~/.zshrc +touch ~/.zshrc +open ~/.zshrc ``` - - **Older than MacOS v10.15 (Catalina) [BASH]** (Verify with `echo $SHELL`) + - **If you use [BASH]** (Verify with `echo $SHELL`) ```console -$ touch ~/.bash_profile -$ open ~/.bash_profile +touch ~/.bash_profile +open ~/.bash_profile ``` An editor should open. Please paste the following lines into it before saving the file. -If you wish to use a version of GNU's GCC other than 11, modify the first assignment. +If you wish to use a version of GNU's GCC other than 13, modify the first assignment. These lines ensure that LLVM's Clang, and Apple's modified version of GCC, won't be used to compile MFC. Further reading on `open-mpi` incompatibility with `clang`-based `gcc` on macOS: [here](https://stackoverflow.com/questions/27930481/how-to-build-openmpi-with-homebrew-and-gcc-4-9). -We do *not* support `clang` due to conflicts with our Silo dependency. +We do *not* support `clang` due to conflicts with the Silo dependency. ```console # === MFC MPI Installation === -export MFC_GCC_VER=11 +export MFC_GCC_VER=13 export OMPI_MPICC=gcc-$MFC_GCC_VER export OMPI_CXX=g++-$MFC_GCC_VER export OMPI_FC=gfortran-$MFC_GCC_VER @@ -126,11 +122,11 @@ export FC=gfortran-$MFC_GCC_VER # === MFC MPI Installation === ``` -**Close the open editor and terminal window**. Open a **new terminal** window before executing the commands bellow. +**Close the open editor and terminal window**. Open a **new terminal** window before executing the commands below. ```console -$ brew install wget make python make cmake coreutils gcc@$MFC_GCC_VER -$ HOMEBREW_MAKE_JOBS=$(nproc) brew install --cc=gcc-$MFC_GCC_VER --verbose --build-from-source open-mpi +brew install wget make python make cmake coreutils gcc@$MFC_GCC_VER +HOMEBREW_MAKE_JOBS=$(nproc) brew install --cc=gcc-$MFC_GCC_VER --verbose --build-from-source open-mpi ``` They will download the dependencies MFC requires to build itself. `open-mpi` will be compiled from source, using the version of GCC we specified above with the environment variables `HOMEBREW_CC` and `HOMEBREW_CXX`. @@ -149,22 +145,22 @@ First install Docker and Git: - macOS: `brew install git docker` (requires [Homebrew](https://brew.sh/)). - Other systems: ```console -$ sudo apt install git docker # Debian / Ubuntu via Aptitude -$ sudo pacman -S git docker # Arch Linux via Pacman +sudo apt install git docker # Debian / Ubuntu via Aptitude +sudo pacman -S git docker # Arch Linux via Pacman ``` Once Docker and Git are installed on your system, clone MFC with ```console -$ git clone https://github.com/MFlowCode/MFC -$ cd MFC +git clone https://github.com/MFlowCode/MFC +cd MFC ``` To fetch the prebuilt Docker image and enter an interactive bash session with the recommended settings applied, run ```console -$ ./mfc.sh docker # If on \*nix/macOS + ./mfc.sh docker # If on \*nix/macOS .\mfc.bat docker # If on Windows ``` @@ -201,21 +197,21 @@ For a detailed list of options, arguments, and features, please refer to `./mfc. Most first-time users will want to build MFC using 8 threads (or more!) 
with MPI support: ```console -$ ./mfc.sh build -j 8 +./mfc.sh build -j 8 ``` Examples: -- Build MFC using 8 threads with MPI and GPU acceleration: `$ ./mfc.sh build --gpu -j 8`. -- Build MFC using a single thread without MPI, GPU, and Debug support: `$ ./mfc.sh build --no-mpi`. -- Build MFC's `simulation` code in Debug mode with MPI and GPU support: `$ ./mfc.sh build --debug --gpu -t simulation`. +- Build MFC using 8 threads with MPI and GPU acceleration: `./mfc.sh build --gpu -j 8`. +- Build MFC using a single thread without MPI, GPU, and Debug support: `./mfc.sh build --no-mpi`. +- Build MFC's `simulation` code in Debug mode with MPI and GPU support: `./mfc.sh build --debug --gpu -t simulation`. ## Running the Test Suite Run MFC's test suite with 8 threads: ```console -$ ./mfc.sh test -j 8 +./mfc.sh test -j 8 ``` Please refer to the [Testing](testing.md) document for more information. @@ -225,7 +221,7 @@ Please refer to the [Testing](testing.md) document for more information. MFC has example cases in the `examples` folder. You can run such a case interactively using 2 tasks by typing: ```console -$ ./mfc.sh run examples/2D_shockbubble/case.py -n 2 +./mfc.sh run examples/2D_shockbubble/case.py -n 2 ``` Please refer to the [Running](running.md) document for more information on `case.py` files and how to run them. diff --git a/docs/documentation/running.md b/docs/documentation/running.md index ceaa4a0f1b..53b62c21e3 100644 --- a/docs/documentation/running.md +++ b/docs/documentation/running.md @@ -33,7 +33,7 @@ Please refer to `./mfc.sh run -h` for a complete list of arguments and options, To run all stages of MFC, that is [pre_process](https://github.com/MFlowCode/MFC/tree/master/src/pre_process/), [simulation](https://github.com/MFlowCode/MFC/tree/master/src/simulation/), and [post_process](https://github.com/MFlowCode/MFC/tree/master/src/post_process/) on the sample case [2D_shockbubble](https://github.com/MFlowCode/MFC/tree/master/examples/2D_shockbubble/), ```console -$ ./mfc.sh run examples/2D_shockbubble/case.py +./mfc.sh run examples/2D_shockbubble/case.py ``` If you want to run a subset of the available stages, you can use the `-t` argument. @@ -46,59 +46,59 @@ For example, - Running [pre_process](https://github.com/MFlowCode/MFC/tree/master/src/pre_process/) with 2 cores: ```console -$ ./mfc.sh run examples/2D_shockbubble/case.py -t pre_process -n 2 +./mfc.sh run examples/2D_shockbubble/case.py -t pre_process -n 2 ``` - Running [simulation](https://github.com/MFlowCode/MFC/tree/master/src/simulation/) and [post_process](https://github.com/MFlowCode/MFC/tree/master/src/post_process/) using 4 cores: ```console -$ ./mfc.sh run examples/2D_shockbubble/case.py -t simulation post_process -n 4 +./mfc.sh run examples/2D_shockbubble/case.py -t simulation post_process -n 4 ``` ## Batch Execution The MFC detects which scheduler your system is using and handles the creation and execution of batch scripts. The batch engine is requested via the `-e batch` option. -The number of nodes can be specified with the `-N` (i.e `--nodes`) option. +The number of nodes can be specified with the `-N` (i.e., `--nodes`) option. We provide a list of (baked-in) submission batch scripts in the `toolchain/templates` folder. ```console -$ ./mfc.sh run examples/2D_shockbubble/case.py -e batch -N 2 -n 4 -t simulation -c +./mfc.sh run examples/2D_shockbubble/case.py -e batch -N 2 -n 4 -t simulation -c ``` Other useful arguments include: -- `-# ` to name your job. 
(i.e `--name`) -- `-@ sample@example.com` to receive emails from the scheduler. (i.e `--email`) -- `-w hh:mm:ss` to specify the job's maximum allowed walltime. (i.e `--walltime`) -- `-a ` to identify the account to be charged for the job. (i.e `--account`) -- `-p ` to select the job's partition. (i.e `--partition`) +- `-# ` to name your job. (i.e., `--name`) +- `-@ sample@example.com` to receive emails from the scheduler. (i.e., `--email`) +- `-w hh:mm:ss` to specify the job's maximum allowed walltime. (i.e., `--walltime`) +- `-a ` to identify the account to be charged for the job. (i.e., `--account`) +- `-p ` to select the job's partition. (i.e., `--partition`) As an example, one might request GPUs on a SLURM system using the following: **Disclaimer**: IBM's JSRUN on LSF-managed computers does not use the traditional node-based approach to -allocate resources. Therefore, the MFC constructs equivalent resource-sets in task and GPU count. +allocate resources. Therefore, the MFC constructs equivalent resource sets in the task and GPU count. ### Profiling with NVIDIA Nsight -MFC provides two different argument to facilitate profiling with NVIDIA Nsight. -**Please ensure that the used argument is placed at the end so that their respective flags can be appended.** +MFC provides two different arguments to facilitate profiling with NVIDIA Nsight. +**Please ensure the used argument is placed at the end so their respective flags can be appended.** - Nsight Systems (Nsys): `./mfc.sh run ... --nsys [nsys flags]` allows one to visualize MFC's system-wide performance with [NVIDIA Nsight Systems](https://developer.nvidia.com/nsight-systems). -NSys is best for getting a general understanding of the order and execution times of major subroutines (WENO, Riemann, etc.) in MFC. +NSys is best for understanding the order and execution times of major subroutines (WENO, Riemann, etc.) in MFC. When used, `--nsys` will run the simulation and generate `.nsys-rep` files in the case directory for all targets. -These files can then be imported into Nsight System's GUI, which can be downloaded [here](https://developer.nvidia.com/nsight-systems/get-started#latest-Platforms). It is best to run case files with a few timesteps so that the report files remain small. Learn more about NVIDIA Nsight Systems [here](https://docs.nvidia.com/nsight-systems/UserGuide/index.html). +These files can then be imported into Nsight System's GUI, which can be downloaded [here](https://developer.nvidia.com/nsight-systems/get-started#latest-Platforms). It is best to run case files with a few timesteps to keep the report files small. Learn more about NVIDIA Nsight Systems [here](https://docs.nvidia.com/nsight-systems/UserGuide/index.html). - Nsight Compute (NCU): `./mfc.sh run ... --ncu [ncu flags]` allows one to conduct kernel-level profiling with [NVIDIA Nsight Compute](https://developer.nvidia.com/nsight-compute). NCU provides profiling information for every subroutine called and is more detailed than NSys. When used, `--ncu` will output profiling information for all subroutines, including elapsed clock cycles, memory used, and more after the simulation is run. -Please note that adding this argument will significantly slow down the simulation and should only be used on case files with a few timesteps. +Adding this argument will significantly slow the simulation and should only be used on case files with a few timesteps. Learn more about NVIDIA Nsight Compute [here](https://docs.nvidia.com/nsight-compute/NsightCompute/index.html). 
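As a concrete illustration, a minimal profiling pass over the bundled 2D shock-bubble case might look like the sketch below; it assumes the case file has been trimmed to only a few timesteps, as recommended above, and omits any optional Nsight flags.

```console
./mfc.sh run examples/2D_shockbubble/case.py -n 2 -t simulation --nsys   # system-wide timeline; writes .nsys-rep files to the case directory
./mfc.sh run examples/2D_shockbubble/case.py -n 2 -t simulation --ncu    # kernel-level profile; much slower, prints results after the run
```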
### Restarting Cases When running a simulation, MFC generates a `./restart_data` folder in the case directory that contains `lustre_*.dat` files that can be used to restart a simulation from saved timesteps. -This allows a user to run a simulation to some timestep $X$, then later continue it to run to another timestep $Y$, where $Y > X$. +This allows a user to simulate some timestep $X$, then continue it to run to another timestep $Y$, where $Y > X$. The user can also choose to add new patches at the intermediate timestep. If you want to restart a simulation, @@ -108,29 +108,29 @@ If you want to restart a simulation, - `t_step_stop` : $t_f$ - `t_step_save` : $SF$ in which $t_i$ is the starting time, $t_f$ is the final time, and $SF$ is the saving frequency time. -- Run pre-process and simulation on the case. +- Run `pre_process` and `simulation` on the case. - `./mfc.sh run case.py -t pre_process simulation ` -- As the simulation runs, it will create LUSTRE files for each saved timestep in `./restart_data`. -- When the simulation stops, choose any LUSTRE file as the restarting point (lustre_ $t_s$.dat) -- Create a new duplicate input file, (ex. `restart_case.py`), on which it should: +- As the simulation runs, it will create Lustre files for each saved timestep in `./restart_data`. +- When the simulation stops, choose any Lustre file as the restarting point (lustre_ $t_s$.dat) +- Create a new duplicate input file (e.g., `restart_case.py`), which should have: 1. For the Computational Domain Parameters - - Have the following removed BUT `m`, `n`, and `p`: - - All domaing/mesh information + - Have the following removed __except__ `m`, `n`, and `p`: + - All domain/mesh information - `(xyz)_domain%beg` - `(xyz)_domain%end` - `stretch_(xyz)` - `a_(xyz)` - `(xyz)_a` - `(xyz)_b` - - Have the following altered: - - `t_step_start` : $t_s$ # this is the point at which the simulation will restart - - `t_step_stop` : $t_{f2}$ # your new final simulation time, which can be the same as $t_f$ - - `t_step_save` : ${SF}_2$ # if interested in changing the saving frequency - - Have the following ADDED: - - `old_ic` : 'T' # to specify that we have initial conditions from previous simulations - - `old_grid` : 'T' # to specify that we have a grid from previous simulations (maybe I do not need m, n, and p, then?) - - `t_step_old` : $t_i$ # this is the time step used as the `t_step_start` of the original `case.py` file + - Alter the following: + - `t_step_start` : $t_s$ (the point at which the simulation will restart) + - `t_step_stop` : $t_{f2}$ (new final simulation time, which can be the same as $t_f$) + - `t_step_save` : ${SF}_2$ (if interested in changing the saving frequency) + - Add the following: + - `old_ic` : 'T' (to specify that we have initial conditions from previous simulations) + - `old_grid` : 'T' (to specify that we have a grid from previous simulations) + - `t_step_old` : $t_i$ (the time step used as the `t_step_start` of the original `case.py` file) 2. For the Simulation Algorithm Parameters - Substitute `num_patches` to reflect the number of ADDED patches in the `restart_case.py` file. If no patches are added, set `num_patches: 0` @@ -145,7 +145,7 @@ in which $t_i$ is the starting time, $t_f$ is the final time, and $SF$ is the sa 4. 
For Fluid properties - Keep information about the fluid properties -- Run pre-process and simulation on restart_case.py +- Run pre-process and simulation on `restart_case.py` - `./mfc.sh run restart_case.py -t pre_process simulation ` - Run the post_process @@ -153,17 +153,17 @@ in which $t_i$ is the starting time, $t_f$ is the final time, and $SF$ is the sa - One way is to set `t_step_stop` to the restarting point $t_s$ in `case.py`. Then, run the commands below. The first command will run on timesteps $[t_i, t_s]$. The second command will run on $[t_s, t_{f2}]$. Therefore, the whole range $[t_i, t_{f2}]$ will be post processed. ```console -$ ./mfc.sh run case.py -t post_process -$ ./mfc.sh run restart_case.py -t post_process +./mfc.sh run case.py -t post_process +./mfc.sh run restart_case.py -t post_process ``` -We have provided an example `case.py` and `restart_case.py` in `/examples/1D_vacuum_restart/`. This simulation is a duplicate of the `1D_vacuum` case. It demonstrates stopping at timestep 7000, adding a new patch, and restarting the simulation. To test this code, run: +We have provided example `case.py` and `restart_case.py` files in `/examples/1D_vacuum_restart/`. This simulation is a duplicate of the `1D_vacuum` case. It demonstrates stopping at timestep 7000, adding a new patch, and restarting the simulation. To test this code, run: ```console -$ ./mfc.sh run examples/1D_vacuum_restart/case.py -t pre_process simulation -$ ./mfc.sh run examples/1D_vacuum_restart/restart_case.py -t pre_process simulation -$ ./mfc.sh run examples/1D_vacuum_restart/case.py -t post_process -$ ./mfc.sh run examples/1D_vacuum_restart/restart_case.py -t post_process +./mfc.sh run examples/1D_vacuum_restart/case.py -t pre_process simulation +./mfc.sh run examples/1D_vacuum_restart/restart_case.py -t pre_process simulation +./mfc.sh run examples/1D_vacuum_restart/case.py -t post_process +./mfc.sh run examples/1D_vacuum_restart/restart_case.py -t post_process ``` ### Example Runs @@ -171,6 +171,6 @@ - Oak Ridge National Laboratory's [Summit](https://www.olcf.ornl.gov/summit/): ```console -$ ./mfc.sh run examples/2D_shockbubble/case.py -e batch \ +./mfc.sh run examples/2D_shockbubble/case.py -e batch \ -N 2 -n 4 -t simulation -a -c summit ``` diff --git a/docs/documentation/testing.md b/docs/documentation/testing.md index 331bacd30b..4c9b710b98 100644 --- a/docs/documentation/testing.md +++ b/docs/documentation/testing.md @@ -2,7 +2,7 @@ To run MFC's test suite, run ```console -$ ./mfc.sh test -j +./mfc.sh test -j ``` It will generate and run test cases, comparing their output to that of previous runs from versions of MFC considered to be accurate. @@ -12,7 +12,7 @@ Run `./mfc.sh test -h` for a full list of accepted arguments. Most notably, you can consult the full list of tests by running ``` -$ ./mfc.sh test -l +./mfc.sh test -l ``` To restrict to a given range, use the `--from` (`-f`) and `--to` (`-t`) options. @@ -22,7 +22,7 @@ To run a (non-contiguous) subset of tests, use the `--only` (`-o`) option instea To (re)generate *golden files*, append the `--generate` option: ```console -$ ./mfc.sh test --generate -j 8 +./mfc.sh test --generate -j 8 ``` It is recommended that a range be specified when generating golden files for new test cases, as described in the previous section, in an effort not to regenerate the golden files of existing test cases.
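For example, one might regenerate only the golden files of a newly added, contiguous block of tests with something like the sketch below, where `<first_new_test>` and `<last_new_test>` are placeholders for identifiers taken from the output of `./mfc.sh test -l`:

```console
./mfc.sh test --from <first_new_test> --to <last_new_test> --generate -j 8
```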
@@ -88,7 +88,7 @@ Finally, the case is appended to the `cases` list, which will be returned by the To test updated post process code, append the `-a` or `--test-all` option: ```console -$ ./mfc.sh test -a -j 8 +./mfc.sh test -a -j 8 ``` This argument will re-run the test stack with `parallel_io=True`, which generates silo_hdf5 files. diff --git a/docs/documentation/visualization.md b/docs/documentation/visualization.md index d1087203c7..7299b59f97 100644 --- a/docs/documentation/visualization.md +++ b/docs/documentation/visualization.md @@ -24,26 +24,26 @@ For analysis and processing of the database using VisIt's capability, the user i If `parallel_io = F` then MFC will output the conservative variables to a directory `D/`. If multiple cores are used ($\mathtt{ppn > 1}$) then a separate file is created for each core. -If there is only one coordinate dimension ($n = 0$} and $p = 0$) then the primivative variables will also be written to `D/`. +If there is only one coordinate dimension (`n = 0` and `p = 0`) then the primivative variables will also be written to `D/`. The file names correspond to the variables associated with each equation solved by MFC. They are written at every `t_step_save` time step. The conservative variables are -$$ {(\rho \alpha)}_1, \dots, (\rho\alpha)_{N_c}, \rho u_1, \dots, \rho u_{N_d}, E, \alpha_1, \dots, \alpha_{N_c} $$ +$$ {(\rho \alpha)}\_{1}, \dots, (\rho\alpha)\_{N\_c}, \rho u\_{1}, \dots, \rho u\_{N\_d}, E, \alpha\_1, \dots, \alpha\_{N\_c} $$ and the primitive variables are -$$ {(\rho \alpha)}_1, \dots, (\rho\alpha)_{N_c}, u_1, \dots, u_{N_d}, p, \alpha_1, \dots, \alpha_{N_c} $$ +$$ {(\rho \alpha)}\_1, \dots, (\rho\alpha)\_{N\_c}, u\_1, \dots, u\_{N\_d}, p, \alpha\_1, \dots, \alpha\_{N\_c} $$ where $N_c$ are the number of components `num_fluids` and $N_d$ is the number of spatial dimensions. -There are exceptions: if `model_eqns` $=3$, then the six-equation model appends these variables with the internal energies of each component. -If there are sub-grid bubbles `bubbles` $=$ `T`, then the bubble variables are also written. +There are exceptions: if `model_eqns = 3`, then the six-equation model appends these variables with the internal energies of each component. +If there are sub-grid bubbles `bubbles = T`, then the bubble variables are also written. These depend on the bubble dynamics model used. -If `polytropic` $=$ `T` then the conservative variables are appended by +If `polytropic = T` then the conservative variables are appended by -$$ n_b R_1, n_b {\\dot R}_1, \dots, n_b R_{N_b}, n_b {\\dot R}_{N_b} $$ +$$ n\_b R\_1, n\_b {\\dot R}\_1, \dots, n\_b R\_{N\_b}, n\_b {\\dot R}\_{N\_b} $$ where $n_B$ is the bubble number density and $N_b$ are the number of bubble sizes (see matching variable in the input file, `Nb`). The primitive bubble variables do not include $n_B$: -$$ R_1, {\\dot R}_1, \dots, R_{N_b}, {\\dot R}_{N_b} $$ +$$ R\_1, {\\dot R}\_1, \dots, R\_{N\_b}, {\\dot R}\_{N\_b} $$ diff --git a/docs/index.html b/docs/index.html index f7867e2074..5d2242926c 100644 --- a/docs/index.html +++ b/docs/index.html @@ -333,10 +333,11 @@

- MFC development was supported by multiple current and past grants from the US Office of Naval Research (ONR), the US National Institute of Health (NIH), and the US National Science Foundation (NSF). MFC computations utilize the Extreme Science and Engineering Discovery Environment (XSEDE), under allocations TG-CTS120005 (PI Colonius) and TG-PHY210084 (PI Bryngelson) and OLCF Summit under allocation CFD154 (PI Bryngelson). + Multiple federal sponsors have supported MFC development, including the US Department of Defense (DOD), National Institutes of Health (NIH), Department of Energy (DOE), and National Science Foundation (NSF). + MFC computations use OLCF Frontier, Summit, and Wombat under allocation CFD154 (PI Bryngelson) and ACCESS-CI under allocations TG-CTS120005 (PI Colonius) and TG-PHY210084 (PI Bryngelson).