My PhD thesis, "Reduced order modeling of thermal convection flows: a reservoir computing approach", is published by TU Ilmenau as an open-access document, free of charge. The corresponding technical appendix (numerical implementations, source files, and code) can be found on Zenodo.
The thesis revolves around the application of machine learning models to direct numerical simulations of thermal convection flows and evaluates their potential for subgrid-scale turbulence parameterization.
This thesis explores the potential of machine learning (ML) algorithms to enhance subgrid-scale parameterizations in large-scale atmospheric simulations. Traditional approaches often rely on simplifications or computationally expensive methods. This work introduces a more physically consistent and computationally efficient approach that uses Reservoir Computing (RC) and data reduction techniques to extract subgrid-scale features from direct numerical simulations (DNS) of thermal convection. To this end, the high-fidelity simulation data are pre-processed by a Proper Orthogonal Decomposition (POD) or an Autoencoder (AE) network to reduce the data dimensionality. An RC model is subsequently trained in this reduced data space to predict future flow states without solving the governing nonlinear equations of motion.

The combined POD-RC model's predictions are thoroughly validated against the original simulations. The model is found to accurately replicate the spatial organization, structural features, and low-order statistics of dry and moist convection flows, opening new ways for the dynamic parameterization of subgrid-scale transport in larger-scale circulation models. Furthermore, this work investigates the generalization property of an AE-RC model based on a flux-driven two-dimensional turbulent convection system; the machine learning model is found to correctly reproduce the spatial features and statistical properties of the physical fields.

Finally, this work focuses on the parameterization of the convective boundary layer (CBL) by means of a Generative Adversarial Network (GAN) trained on high-fidelity DNS data of a three-dimensional CBL. It is shown that a physics-informed rescaling of the limited amount of training data enables the method to reproduce the CBL growth and the related pattern formation.
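The POD-RC pipeline described above can be sketched in a few lines. The following is a minimal illustration with synthetic snapshot data, not the thesis implementation: the reservoir sizes, scalings, and the particular echo state network formulation (tanh reservoir with a ridge-regression readout) are assumptions chosen for brevity.

```python
# Illustrative POD-RC sketch on synthetic data (all sizes/scalings are assumptions).
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "DNS" snapshots: n_time samples of an n_grid-dimensional field.
n_time, n_grid, n_modes = 400, 64, 8
t = np.linspace(0, 8 * np.pi, n_time)
snapshots = np.sin(np.outer(t, np.arange(1, n_grid + 1) / n_grid))

# Step 1: POD via SVD, keeping the leading spatial modes (data reduction).
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:n_modes].T                       # spatial POD modes
a = (snapshots - mean) @ modes               # time series of POD coefficients
a /= np.abs(a).max()                         # normalize for the reservoir input

# Step 2: echo state network (reservoir computing) on the POD coefficients.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_modes))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

states = np.zeros((n_time - 1, n_res))
r = np.zeros(n_res)
for k in range(n_time - 1):
    r = np.tanh(W_in @ a[k] + W @ r)         # reservoir state update
    states[k] = r

# Ridge-regression readout: map reservoir state -> next POD coefficients,
# so no governing equations are solved for the prediction step.
beta = 1e-6
W_out = np.linalg.solve(states.T @ states + beta * np.eye(n_res),
                        states.T @ a[1:]).T

pred = states @ W_out.T                      # one-step-ahead predictions
err = np.linalg.norm(pred - a[1:]) / np.linalg.norm(a[1:])
print(f"relative one-step prediction error: {err:.3e}")
```

A predicted flow field is then recovered by projecting back, e.g. `mean + pred[k] @ modes.T` (up to the normalization applied above), which is how low-order statistics of the RC prediction can be compared against the DNS.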
The GAN results agree with standard mass-flux schemes and additionally provide the granule-type horizontal organization of the turbulent flow, which cannot be obtained with the mass-flux approach. Although the primary focus is not on implementing ML-based parameterization schemes in large-scale models, this work advances our understanding of the potential and limitations of the aforementioned models in the context of climate modeling and numerical weather prediction.