diff --git a/contrib/get_all_static_data.sh b/contrib/get_all_static_data.sh index 87fcd20bf..b0e8006dd 100755 --- a/contrib/get_all_static_data.sh +++ b/contrib/get_all_static_data.sh @@ -21,7 +21,7 @@ for file in "${data_files[@]}"; do mkdir -p $BASEDIR/scm/data/$file cd $BASEDIR/scm/data/$file echo "Retrieving $file" - wget https://github.com/NCAR/ccpp-scm/releases/download/v5.1.0/${file}.tar.gz + wget https://github.com/NCAR/ccpp-scm/releases/download/v6.0.0/${file}.tar.gz tar -xf ${file}.tar.gz rm -f ${file}.tar.gz done diff --git a/contrib/get_mg_inccn_data.sh b/contrib/get_mg_inccn_data.sh index 08b0ec321..4619ea2dc 100755 --- a/contrib/get_mg_inccn_data.sh +++ b/contrib/get_mg_inccn_data.sh @@ -16,7 +16,7 @@ BASEDIR=$MYDIR/.. # Change to directory containing the physics input data, download and extract archive cd $BASEDIR/scm/data/physics_input_data/ -wget https://github.com/NCAR/ccpp-scm/releases/download/v5.0.0-alpha/MG_INCCN_data.tar +wget https://github.com/NCAR/ccpp-scm/releases/download/v6.0.0/MG_INCCN_data.tar tar -xvf MG_INCCN_data.tar rm -f MG_INCCN_data.tar cd $BASEDIR/ diff --git a/contrib/get_thompson_tables.sh b/contrib/get_thompson_tables.sh index d658950ed..552ac800a 100755 --- a/contrib/get_thompson_tables.sh +++ b/contrib/get_thompson_tables.sh @@ -15,8 +15,8 @@ BASEDIR=$MYDIR/.. # Change to directory containing the physics input data, download and extract archive cd $BASEDIR/scm/data/physics_input_data/ -wget https://github.com/NCAR/ccpp-scm/releases/download/v5.0.0/thompson_tables2.tar -tar -xvf thompson_tables2.tar -rm -f thompson_tables2.tar +wget https://github.com/NCAR/ccpp-scm/releases/download/v6.0.0/thompson_tables.tar +tar -xvf thompson_tables.tar +rm -f thompson_tables.tar cd $BASEDIR/ diff --git a/scm/doc/TechGuide/acknow.tex b/scm/doc/TechGuide/acknow.tex index e443162f3..2ea8edbda 100644 --- a/scm/doc/TechGuide/acknow.tex +++ b/scm/doc/TechGuide/acknow.tex @@ -12,7 +12,7 @@ \vspace*{1cm}\par For referencing this document please use:\\ \vspace*{1cm}\par -Firl, G., L. Carson, L. Bernardet, D. Heinzeller, and M. Harrold, 2021. Common Community Physics Package Single Column Model v5.0.0 User and Technical Guide. 39pp. Available at https://dtcenter.org/GMTB/v5.0.0/scm-ccpp-guide-v5.0.0.pdf +Firl, G., D. Swales, L. Carson, L. Bernardet, D. Heinzeller, and M. Harrold, 2022. Common Community Physics Package Single Column Model v6.0.0 User and Technical Guide. 39pp. Available at https://dtcenter.org/GMTB/v6.0.0/scm-ccpp-guide-v6.0.0.pdf \end{flushleft} \end{titlepage} diff --git a/scm/doc/TechGuide/chap_cases.tex b/scm/doc/TechGuide/chap_cases.tex index cd72f3e5b..399c0d707 100644 --- a/scm/doc/TechGuide/chap_cases.tex +++ b/scm/doc/TechGuide/chap_cases.tex @@ -8,14 +8,10 @@ \subsection{Case configuration namelist parameters} \label{subsection: case config} The \execout{case\_config} namelist expects the following parameters: \begin{itemize} -\item \execout{model\_name} - \begin{itemize} - \item This controls which vertical coordinates to use. Valid values are \exec{'FV3'} or \exec{`GFS'}. Here, \exec{`GFS'} refers to vertical coordinates used in the GSM. - \end{itemize} -\item \execout{n\_columns} - \begin{itemize} - \item The code can be used to run a single column or multiple \emph{independent} columns using the same or different physics suites. Specify an integer, \exec{n}. NOTE: As of this release, only \execout{n\_columns} $= 1$ is supported.
- \end{itemize} +\item \execout{vert\_coord\_file} + \begin{itemize} + \item File containing FV3 vertical grid coefficients. + \end{itemize} \item \execout{case\_name} \begin{itemize} \item Identifier for which dataset (initialization and forcing) to load. This string must correspond to a dataset included in the directory \execout{ccpp-scm/scm/data/processed\_case\_input/} (without the file extension). \end{itemize} @@ -44,18 +40,6 @@ \subsection{Case configuration namelist parameters} \begin{itemize} \item A string representing the path (relative to the build directory) to which output should be written. (OPTIONAL) \end{itemize} -\item \execout{output\_file} - \begin{itemize} - \item A string representing the name of the NetCDF output file to be written (no \exec{.nc} extension expected). - \end{itemize} -\item \execout{case\_data\_dir} - \begin{itemize} - \item A string representing the path (relative to the build directory) where case initialization and forcing data files can be found. - \end{itemize} -\item \execout{vert\_coord\_data\_dir} - \begin{itemize} - \item A string representing the path (relative to the build directory) where vertical coordinate data files can be found (for \execout{model\_name}=\exec{`GFS'} only). - \end{itemize} \item \execout{thermo\_forcing\_type} \begin{itemize} \item An integer representing how forcing for temperature and moisture state variables is applied (1 $=$ total advective tendencies, 2 $=$ horizontal advective tendencies with prescribed vertical motion, 3 $=$ relaxation to observed profiles with vertical motion prescribed) \end{itemize} @@ -112,9 +96,13 @@ \subsection{Case configuration namelist parameters} \begin{itemize} \item An integer representing the grid size of the UFS atmosphere initial conditions; the integer represents the number of grid points in each horizontal direction of each cube tile \end{itemize} +\item \execout{input\_type} + \begin{itemize} + \item 0 $\Rightarrow$ original DTC format, 1 $\Rightarrow$ DEPHY-SCM format. + \end{itemize} \end{itemize} -\subsection{Case input data file} +\subsection{Case input data file (CCPP-SCM format)} \label{subsection: case input} The initialization and forcing data for each case is stored in a NetCDF (version 4) file within the \execout{ccpp-scm/scm/data/processed\_case\_input} directory. Each file has two dimensions (\execout{time} and \execout{levels}) and is organized into 3 groups: scalars, initial, and forcing. Not all fields are required for all cases. For example the fields \execout{sh\_flux\_sfc} and \execout{lh\_flux\_sfc} are only needed if the variable \execout{sfc\_flx\_spec} $=$ \exec{.true.} in the case configuration file and state nudging variables are only required if \execout{thermo\_forcing\_type} $=$ \exec{3} or \execout{mom\_forcing\_type} $=$ \exec{3}. Using an active LSM (Noah, NoahMP, RUC) requires many more variables than are listed here. Example files for using with Noah and NoahMP LSMs are included in \execout{ccpp-scm/scm/data/processed\_case\_input/fv3\_model\_point\_noah[mp].nc}.
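To verify that a new or modified case file matches this layout, it can help to inspect the file header directly. A minimal check using the standard \execout{ncdump} utility is sketched below (this assumes the NetCDF command-line tools are installed; \execout{case\_name.nc} is a placeholder for an actual file in the directory):
\begin{lstlisting}[language=bash]
cd ccpp-scm/scm/data/processed_case_input
# -h prints only the header: the time/levels dimensions, the scalars,
# initial, and forcing groups, and per-variable metadata (no data values)
ncdump -h case_name.nc
\end{lstlisting}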
@@ -122,9 +110,19 @@ \subsection{Case input data file} \lstinputlisting[ basicstyle=\scriptsize\ttfamily, label=lst_case_input_netcdf_header, - caption=example NetCDF file header for case initialization and forcing data + caption=example NetCDF file (CCPP-SCM format) header for case initialization and forcing data ]{./arm_case_header.txt} +\subsection{Case input data file (DEPHY format)} +\label{subsection: case input dephy} +The Development and Evaluation of Physics in atmospheric models (DEPHY) format is an internationally adopted data format intended for use by SCMs and LESs. The initialization and forcing data for each case is stored in a NetCDF (version 4) file within the \execout{ccpp-scm/scm/data/processed\_case\_input} directory. Each file has four dimensions (\execout{time}, \execout{levels}, \execout{longitude}, \execout{latitude}) and contains the initial conditions and forcing data. Just as when using the CCPP-SCM formatted inputs (subsection \ref{subsection: case input}), not all fields are required for all cases. More information on the DEPHY format requirements can be found at \href{https://github.com/GdR-DEPHY/DEPHY-SCM}{DEPHY}. + +\lstinputlisting[ + basicstyle=\scriptsize\ttfamily, + label=lst_case_input_dephy_netcdf_header, + caption=example NetCDF file (DEPHY format) header for case initialization and forcing data + ]{./dephy_case_header.txt} + \section{Included Cases} Several cases are included in the repository to serve as examples for users to create their own and for basic research. All case configuration namelist files for included cases can be found in \execout{ccpp-scm/scm/etc/case\_config} and represent the following observational field campaigns: \begin{itemize} @@ -143,7 +141,7 @@ \section{Included Cases} \section{How to set up new cases} -Setting up a new case involves preparing the two types of files listed above. For the case initialization and forcing data file, this typically involves writing a custom script or program to parse the data from its original format to the format that the SCM expects, listed above. An example of this type of script written in Python is included in \execout{/ccpp-scm/scm/etc/scripts/twpice\_forcing\_file\_generator.py}. The script reads in the data as supplied from its source, converts any necessary variables, and writes a NetCDF (version 4) file in the format described in subsection \ref{subsection: case input}. For reference, the following formulas are used: +Setting up a new case involves preparing the two types of files listed above. For the case initialization and forcing data file, this typically involves writing a custom script or program to parse the data from its original format to the format that the SCM expects, listed above. An example of this type of script written in Python is included in \execout{/ccpp-scm/scm/etc/scripts/twpice\_forcing\_file\_generator.py}. The script reads in the data as supplied from its source, converts any necessary variables, and writes a NetCDF (version 4) file in the format described in subsections \ref{subsection: case input} and \ref{subsection: case input dephy}. For reference, the following formulas are used: \begin{equation} \theta_{il} = \theta - \frac{\theta}{T}\left(\frac{L_v}{c_p}q_l + \frac{L_s}{c_p}q_i\right) \end{equation} @@ -187,7 +185,7 @@ \section{How to set up new cases} For the case configuration file, it is most efficient to copy an existing file in \execout{ccpp-scm/scm/etc/case\_config} and edit it to suit one's case.
Recall from subsection \ref{subsection: case config} that this file is used to configure the SCM framework parameters for a given case. Be sure to check that model timing parameters such as the time step and output frequency are appropriate for the physics suite being used. There is likely some stability criterion that governs the maximum time step based on the chosen parameterizations and number of vertical levels (grid spacing). The \execout{case\_name} parameter should match the name of the case input data file that was configured for the case (without the file extension). The \execout{runtime} parameter should be less than or equal to the length of the forcing data unless the desired behavior of the simulation is to proceed with the last specified forcing values after the length of the forcing data has been surpassed. The initial date and time should fall within the forcing period specified in the case input data file. If the case input data is specified to a lower altitude than the vertical domain, the remainder of the column will be filled in with values from a reference profile. There is a tropical profile and mid-latitude summer profile provided, although one may add more choices by adding a data file to \execout{ccpp-scm/scm/data/processed\_case\_input} and adding a parser section to the subroutine \execout{get\_reference\_profile} in \execout{ccpp-scm/scm/src/scm\_input.f90}. Surface fluxes can either be specified in the case input data file or calculated using a surface scheme using surface properties. If surface fluxes are specified from data, set \execout{sfc\_flux\_spec} to \exec{.true.} and specify \execout{sfc\_roughness\_length\_cm} for the surface over which the column resides. Otherwise, specify a \execout{sfc\_type}. In addition, one must specify a \execout{column\_area} for each column. -To control the forcing method, one must choose how the momentum and scalar variable forcing are applied. The three methods of Randall and Cripe (1999, JGR) have been implemented: ``revealed forcing'' where total (horizontal $+$ vertical) advective tendencies are applied (type 1), ``horizontal advective forcing'' where horizontal advective tendencies are applied and vertical advective tendencies are calculated from a prescribed vertical velocity and the calculated (modeled) profiles (type 2), and ``relaxation forcing'' where nudging to observed profiles replaces horizontal advective forcing combined with vertical advective forcing from prescribed vertical velocity (type 3). If relaxation forcing is chosen, a \execout{relaxation\_time} that represents the timescale over which the profile would return to the nudging profiles must be specified. +To control the forcing method, one must choose how the momentum and scalar variable forcing are applied. The three methods of Randall and Cripe (1999, JGR) have been implemented: ``revealed forcing'' where total (horizontal $+$ vertical) advective tendencies are applied (type 1), ``horizontal advective forcing'' where horizontal advective tendencies are applied and vertical advective tendencies are calculated from a prescribed vertical velocity and the calculated (modeled) profiles (type 2), and ``relaxation forcing'' where nudging to observed profiles replaces horizontal advective forcing combined with vertical advective forcing from prescribed vertical velocity (type 3). If relaxation forcing is chosen, a \execout{relax\_time} that represents the timescale over which the profile would return to the nudging profiles must be specified.
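To make the relationship among these parameters concrete, a hypothetical \execout{case\_config} fragment for a relaxation-forced case is sketched below. The group name, parameter subset, and values are illustrative only and do not constitute a complete working configuration; copying an existing file from \execout{ccpp-scm/scm/etc/case\_config} remains the recommended starting point.
\begin{lstlisting}
&case_config
  case_name = 'twpice',     ! data file in processed_case_input (no extension)
  runtime = 86400,          ! should not exceed the length of the forcing data
  thermo_forcing_type = 3,  ! relaxation forcing for temperature and moisture
  mom_forcing_type = 3,     ! relaxation forcing for momentum
  relax_time = 7200.0,      ! timescale for nudging toward observed profiles
  sfc_flux_spec = .false.,  ! use a surface scheme rather than specified fluxes
  sfc_type = 0,             ! surface type over which the column resides
  column_area = 2.0E9,      ! area represented by the column
/
\end{lstlisting}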
\section{Using other LASSO cases} \label{sec:lasso} @@ -208,16 +206,6 @@ \section{Using UFS Initial Conditions} A script exists in \execout{scm/etc/scripts/UFS\_IC\_generator.py} to read in UFS Atmosphere cold start initial conditions and generate a case input data file that the SCM can use. Note that the script requires a few python packages that may not be found by default in all python installations: \exec{argparse}, \exec{fnmatch}, \exec{logging}, \exec{NetCDF4}, \exec{numpy}, \exec{shapely}, \exec{f90nml}, and \exec{re}. -NOTE: If using NOAA's Hera HPC, the \execout{shapely} python package does not seem to be installed with the version of Anaconda used by the rest of this software package by default so it is installed when users execute \execout{scm/etc/Hera\_setup\_intel.[csh/sh]}. - -Users on other systems can test if \execout{shapely} is installed using this command in the shell: -\begin{lstlisting} -python -c "import shapely" -\end{lstlisting} -If \execout{shapely} is installed, this command will succeed silently, otherwise an \execout{ImportError: No module named shapely} will be printed to screen. To install the \execout{shapely} Python module, use the install method preferred for your Python environment (\execout{easy\_install}, \execout{pip}, \execout{conda}, \dots). - -The \execout{UFS\_IC\_generator.py} script usage is as follows: - \begin{lstlisting}[language=bash] ./UFS_IC_generator.py [-h] (-l LOCATION LOCATION | -ij INDEX INDEX) -d DATE -i IN_DIR -g GRID_DIR [-t {1,2,3,4,5,6}] @@ -255,5 +243,5 @@ \section{Using UFS Initial Conditions} Running the model is the same as for observational field campaign cases: \begin{lstlisting}[language=bash] -./run_scm.py -c fv3_model_point_noah -s SCM_GFS_v15p2 +./run_scm.py -c fv3_model_point_noah -s SCM_GFS_v16 \end{lstlisting} diff --git a/scm/doc/TechGuide/chap_ccpp.tex b/scm/doc/TechGuide/chap_ccpp.tex index c3aef9f2a..187e0bc3a 100644 --- a/scm/doc/TechGuide/chap_ccpp.tex +++ b/scm/doc/TechGuide/chap_ccpp.tex @@ -1,7 +1,7 @@ \chapter{CCPP Interface} \label{chapter: ccpp_interface} -Chapter 6 of the CCPP v5 Technical Documentation (\url{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}) provides a wealth of information on the overall process of connecting a host model to the CCPP framework for calling physics. This chapter describes the particular implementation within this SCM, including how to set up, initialize, call, and change a physics suite using the CCPP framework. +Chapter 6 of the CCPP v6 Technical Documentation (\url{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}) provides a wealth of information on the overall process of connecting a host model to the CCPP framework for calling physics. This chapter describes the particular implementation within this SCM, including how to set up, initialize, call, and change a physics suite using the CCPP framework. \section{Setting up a suite} @@ -9,11 +9,11 @@ \section{Setting up a suite} \subsection{Preparing data from the SCM} -As described in sections 6.1 and 6.2 of the \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation} a host model must allocate memory and provide metadata for variables that are passed into and out of the schemes within the physics suite. As of this release, in practice this means that a host model must do this for all variables needed by all physics schemes that are expected to be used with the host model. 
For this SCM, all variables needed by the physics schemes are allocated and documented in the file \execout{ccpp-scm/scm/src/scm\_type\_defs.f90} and are contained within the \execout{physics} derived data type. This derived data type initializes its component variables in a \execout{create} type-bound procedure. As mentioned in section 6.2 of the \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation}, a table containing all required metadata was constructed for describing all variables in the \execout{physics} derived data type. The standard names of all variables in this table must match with a corresponding variable within one or more of the physics schemes. A list of all standard names used can be found in \execout{ccpp/framework/doc/DevelopersGuide/CCPP\_VARIABLES\_SCM.pdf}. The \execout{local\_name} for each variable corresponds to how a variable is referenced from the point in the code where \execout{ccpp\_field\_add()} statements are made. For this SCM, then, all \execout{local\_name}s begin with the \execout{physics} derived data type. Nested within most of the \execout{local\_name}s is also the name of a derived data type used within the UFS Atmosphere cap (re-used here for expediency). +As described in sections 6.1 and 6.2 of the \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation} a host model must allocate memory and provide metadata for variables that are passed into and out of the schemes within the physics suite. As of this release, in practice this means that a host model must do this for all variables needed by all physics schemes that are expected to be used with the host model. For this SCM, all variables needed by the physics schemes are allocated and documented in the file \execout{ccpp-scm/scm/src/scm\_type\_defs.f90} and are contained within the \execout{physics} derived data type. This derived data type initializes its component variables in a \execout{create} type-bound procedure. As mentioned in section 6.2 of the \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation}, a table containing all required metadata was constructed for describing all variables in the \execout{physics} derived data type. The standard names of all variables in this table must match with a corresponding variable within one or more of the physics schemes. A list of all standard names used can be found in \execout{ccpp/framework/doc/DevelopersGuide/CCPP\_VARIABLES\_SCM.pdf}. The \execout{local\_name} for each variable corresponds to how a variable is referenced from the point in the code where \execout{ccpp\_field\_add()} statements are made. For this SCM, then, all \execout{local\_name}s begin with the \execout{physics} derived data type. Nested within most of the \execout{local\_name}s is also the name of a derived data type used within the UFS Atmosphere cap (re-used here for expediency). \subsection{Editing and running \exec{ccpp\_prebuild.py}} -General instructions for configuring and running the \execout{ccpp\_prebuild.py} script can be found in chapter 8 of the \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation}. The script expects to be run with a host-model-dependent configuration file, passed as argument \execout{--config=path\_to\_config\_file}. 
Within this configuration file are variables that hold paths to the variable definition files (where metadata tables can be found on the host model side), the scheme files (a list of paths to all source files containing scheme entry points), the auto-generated physics schemes makefile snippet, the auto-generated physics scheme caps makefile snippet, the file where \execout{ccpp\_modules.inc} and \execout{ccpp\_fields.inc} are included, and the directory where the auto-generated physics caps should be written out to. Other variables less likely to be modified by a user are included in this configuration file as well, such as code sections to be included in the auto-generated scheme caps. As mentioned in section \ref{section: compiling}, this script must be run to reconcile data provided by the SCM with data required by the physics schemes before compilation by following step 1 in that section. +General instructions for configuring and running the \execout{ccpp\_prebuild.py} script can be found in chapter 8 of the \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation}. The script expects to be run with a host-model-dependent configuration file, passed as argument \execout{--config=path\_to\_config\_file}. Within this configuration file are variables that hold paths to the variable definition files (where metadata tables can be found on the host model side), the scheme files (a list of paths to all source files containing scheme entry points), the auto-generated physics schemes makefile snippet, the auto-generated physics scheme caps makefile snippet, the file where \execout{ccpp\_modules.inc} and \execout{ccpp\_fields.inc} are included, and the directory where the auto-generated physics caps should be written out to. Other variables less likely to be modified by a user are included in this configuration file as well, such as code sections to be included in the auto-generated scheme caps. As mentioned in section \ref{section: compiling}, this script must be run to reconcile data provided by the SCM with data required by the physics schemes before compilation by following step 1 in that section. \subsection{Preparing a suite definition file} The suite definition file is a text file read by the model at compile time. It is used to specify the physical parameterization suite, and includes information about the number of parameterization groupings, which parameterizations that are part of each of the groups, the order in which the parameterizations should be run, and whether subcycling will be used to run any of the parameterizations with shorter timesteps. @@ -25,13 +25,13 @@ \subsection{Preparing a suite definition file} For this release, supported suite definition files used with this SCM are found in \execout{ccpp-scm/ccpp/suites}. For all of these suites, the physics schemes have been organized into 3 groupings following how the physics are called in the UFS Atmosphere model, although no code is executed in the SCM time loop between execution of the grouped schemes. Several ``interstitial'' schemes are included in the suite definition file to execute code that previously was part of a hard-coded physics driver. Some of these schemes may eventually be rolled into the schemes themselves, improving portability. \section{Initializing/running a suite} -The process for initializing and running a suite in this SCM is described in sections \ref{section: physics init} and \ref{section: time integration}, respectively. 
A more general description of the process for performing suite initialization and running can also be found in sections 6.4 and 6.5 of the \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation}. +The process for initializing and running a suite in this SCM is described in sections \ref{section: physics init} and \ref{section: time integration}, respectively. A more general description of the process for performing suite initialization and running can also be found in sections 6.4 and 6.5 of the \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation}. \section{Changing a suite} \subsection{Replacing a scheme with another} -When the CCPP has reached a state of maturity, the process for modifying the contents of an existing physics suite will be a very straightforward process, consisting of merely changing the name of the scheme in the suite definition file. As of this release, which consists of one scheme of each ``type'' in the pool of CCPP-compliant physics schemes with many short interstitial schemes, the process requires some consideration. Of course, prior to being able to swap a scheme within a suite, one must first add a CCPP-compliant scheme to the pool of available schemes in the CCPP physics repository. This process is described in chapter 2 of the \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation}. +When the CCPP has reached a state of maturity, the process for modifying the contents of an existing physics suite will be a very straightforward process, consisting of merely changing the name of the scheme in the suite definition file. As of this release, which consists of one scheme of each ``type'' in the pool of CCPP-compliant physics schemes with many short interstitial schemes, the process requires some consideration. Of course, prior to being able to swap a scheme within a suite, one must first add a CCPP-compliant scheme to the pool of available schemes in the CCPP physics repository. This process is described in chapter 2 of the \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation}. Once a CCPP-compliant scheme has been added to the CCPP physics repository, the process for modifying an existing suite should take the following steps into account: @@ -39,7 +39,7 @@ \subsection{Replacing a scheme with another} \item Examine and compare the arguments of the scheme being replaced and the replacement scheme. \begin{itemize} \item Are there any new variables that the replacement scheme needs from the host application? If so, these new variables must be added to the host model cap. For the SCM, this involves adding a component variable to the \execout{physics} derived data type and a corresponding entry in the metadata table. The new variables must also be allocated and initialized in the \execout{physics\%create} type-bound procedure. -\item Do any of the new variables need to be calculated in an interstitial scheme? If so, one must be written and made CCPP-compliant itself. The \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation} will help in this endeavor, and the process outlined in its chapter 2 should be followed. +\item Do any of the new variables need to be calculated in an interstitial scheme? If so, one must be written and made CCPP-compliant itself. The \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation} will help in this endeavor, and the process outlined in its chapter 2 should be followed. 
\item Do other schemes in the suite rely on output variables from the scheme being replaced that are no longer being supplied by the replacement scheme? Do these output variables need to be derived/calculated in an interstitial scheme? If so, see the previous bullet about adding one. \end{itemize} \item Examine existing interstitial schemes related to the scheme being replaced. \begin{itemize} \item There may be scheme-specific interstitial schemes (needed for one specific scheme) and/or type-generic interstitial schemes (those that are called for all schemes of a given type, i.e. all PBL schemes). Does one need to write analogous scheme-specific interstitial schemes for the replacement? \item Are the type-generic interstitial schemes relevant or do they need to be modified? \end{itemize} -\item Depending on the answers to the above considerations, edit the suite definition file as necessary. Typically, this would involve finding the \execout{<scheme>} elements associated with the scheme to be replaced and its associated interstitial \execout{<scheme>} elements and simply replacing the scheme names to reflect their replacements. See chapter 4 of the \href{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}{CCPP Technical Documentation} for further details. +\item Depending on the answers to the above considerations, edit the suite definition file as necessary. Typically, this would involve finding the \execout{<scheme>} elements associated with the scheme to be replaced and its associated interstitial \execout{<scheme>} elements and simply replacing the scheme names to reflect their replacements. See chapter 4 of the \href{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}{CCPP Technical Documentation} for further details. \end{itemize} \subsection{Modifying ``groups'' of parameterizations} @@ -56,7 +56,7 @@ \subsection{Subcycling parameterizations} -The suite definition file allows subcycling of schemes, or calling a subset of schemes at a smaller time step than others. The \execout{<subcycle>} element in the suite definition file controls this function. All schemes within such an element are called \exec{n} times during one \execout{ccpp\_physics\_run} call. An example of this is found in the \execout{suite\_SCM\_GFS\_v15p2.xml} suite definition file, where the surface schemes are executed twice for each timestep (implementing a predictor/corrector paradigm). Note that no time step information is included in the suite definition file. If subcycling is used for a set of parameterizations, the smaller time step must be an input argument for those schemes. +The suite definition file allows subcycling of schemes, or calling a subset of schemes at a smaller time step than others. The \execout{<subcycle>} element in the suite definition file controls this function. All schemes within such an element are called \exec{n} times during one \execout{ccpp\_physics\_run} call. An example of this is found in the \execout{suite\_SCM\_GFS\_v16.xml} suite definition file, where the surface schemes are executed twice for each timestep (implementing a predictor/corrector paradigm). Note that no time step information is included in the suite definition file. If subcycling is used for a set of parameterizations, the smaller time step must be an input argument for those schemes.
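For illustration, an abbreviated fragment of such a subcycled group in a suite definition file is sketched below; the scheme names are examples rather than a verbatim excerpt from the released suite:
\begin{lstlisting}
<!-- surface schemes executed twice per physics time step -->
<subcycle loop="2">
  <scheme>sfc_diff</scheme>
  <scheme>GFS_surface_loop_control_part1</scheme>
  <!-- ... remaining surface and interstitial schemes ... -->
</subcycle>
\end{lstlisting}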
\section{Adding variables} diff --git a/scm/doc/TechGuide/chap_function.tex b/scm/doc/TechGuide/chap_function.tex index 987cbf758..61905d4ee 100644 --- a/scm/doc/TechGuide/chap_function.tex +++ b/scm/doc/TechGuide/chap_function.tex @@ -22,7 +22,7 @@ \section{Reading input} \end{enumerate} \section{Setting up vertical grid and interpolating input data} -The CCPP SCM uses pressure for the vertical coordinate (lowest index is the surface). There are two choices for generating the vertical coordinate corresponding to a) the 2017 operational GFS v14 based on the Global Spectral Model (GSM) (set \execout{model\_name} $=$ \exec{`GFS'} in the \execout{case\_config} file), and b) the FV3-based GFS v15 (set \execout{model\_name} $=$ \exec{`FV3'} in the \execout{case\_config} file). For both methods, the pressure levels are calculated using the surface pressure and coefficients ($a_k$ and $b_k$). For the GSM-based vertical coordinate, the coefficient data is read from an external file. Only 28, 42, 60, 64, and 91 levels are supported. If using the FV3-based vertical coordinate, it is possible to use potentially any (integer) number of vertical levels. Depending on the vertical levels specified, however, the method of specification of the coefficients may change. Please see the subroutine \execout{get\_FV3\_vgrid} in the source file \execout{ccpp-scm/scm/src/scm\_vgrid.F90} for details. This subroutine was minimally adapted from the source file \execout{fv\_eta.F90} from the v0 release version of the FV3GFS model. +The CCPP SCM uses pressure for the vertical coordinate (lowest index is the surface). There are two choices for generating the vertical coordinate corresponding to a) the 2017 operational GFS v14 based on the Global Spectral Model (GSM) (set \execout{model\_name} $=$ \exec{`GFS'} in the \execout{case\_config} file), and b) the FV3-based GFS v15/16/17 (set \execout{model\_name} $=$ \exec{`FV3'} in the \execout{case\_config} file). For both methods, the pressure levels are calculated using the surface pressure and coefficients ($a_k$ and $b_k$). For the GSM-based vertical coordinate, the coefficient data is read from an external file. Only 28, 42, 60, 64, and 91 levels are supported. If using the FV3-based vertical coordinate, it is possible to use potentially any (integer) number of vertical levels. Depending on the vertical levels specified, however, the method of specification of the coefficients may change. Please see the subroutine \execout{get\_FV3\_vgrid} in the source file \execout{ccpp-scm/scm/src/scm\_vgrid.F90} for details. This subroutine was minimally adapted from the source file \execout{fv\_eta.F90} from the v0 release version of the FV3GFS model. After the vertical grid has been set up, the state variable profiles stored in the \execout{scm\_state} derived data type are interpolated from the input and reference profiles in the \execout{set\_state} subroutine of the \execout{scm\_setup} module. diff --git a/scm/doc/TechGuide/chap_intro.tex b/scm/doc/TechGuide/chap_intro.tex index 3afa29ba3..bd03d77d3 100644 --- a/scm/doc/TechGuide/chap_intro.tex +++ b/scm/doc/TechGuide/chap_intro.tex @@ -1,7 +1,7 @@ \chapter{Introduction} \label{chapter: introduction} -A single column model (SCM) can be a valuable tool for diagnosing the performance of a physics suite, from validating that schemes have been integrated into a suite correctly to deep dives into how physical processes are being represented by the approximating code. 
This SCM has the advantage of working with the Common Community Physics Package (CCPP), a library of physical parameterizations for atmospheric numerical models and the associated framework for connecting potentially any atmospheric model to physics suites constructed from its member parameterizations. In fact, this SCM serves as perhaps the simplest example for using the CCPP and its framework in an atmospheric model. This version contains all parameterizations of NOAA's evolved operational GFS v15.2 suite (implemented in 2019), plus additional developmental schemes. The schemes are grouped in five supported suites described in detail in the \href{https://dtcenter.ucar.edu/GMTB/v5.0.0/sci_doc/}{CCPP Scientific Documentation} (GFS\_v15p2, GFS\_v16beta, csawmg, GSD\_v1, and RRFS\_v1alpha). +A single column model (SCM) can be a valuable tool for diagnosing the performance of a physics suite, from validating that schemes have been integrated into a suite correctly to deep dives into how physical processes are being represented by the approximating code. This SCM has the advantage of working with the Common Community Physics Package (CCPP), a library of physical parameterizations for atmospheric numerical models and the associated framework for connecting potentially any atmospheric model to physics suites constructed from its member parameterizations. In fact, this SCM serves as perhaps the simplest example for using the CCPP and its framework in an atmospheric model. This version contains all parameterizations of NOAA's evolved operational GFS v16 suite (implemented in 2021), plus additional developmental schemes. The schemes are grouped in six supported suites described in detail in the \href{https://dtcenter.ucar.edu/GMTB/v6.0.0/sci_doc/}{CCPP Scientific Documentation} (GFS\_v16, GFS\_v17p8, RAP, HRRR, RRFS\_v1beta, and WoFS\_v0). This document serves as both the User and Technical Guides for this model. It contains a Quick Start Guide with instructions for obtaining the code, compiling, and running a sample test case, an explanation for what is included in the repository, a brief description of the operation of the model, a description of how cases are set up and run, and finally, an explanation for how the model interfaces with physics through the CCPP infrastructure. @@ -9,21 +9,21 @@ \chapter{Introduction} \section{Version Notes} -The CCPP SCM v5.0.0 contains the following major and minor changes since v4.1. +The CCPP SCM v6.0.0 contains the following major and minor changes since v5.0.
Major \begin{itemize} -\item None +\item Inclusion of regression testing functionality +\item Combination of single- and multi-run capabilities into one script \end{itemize} Minor \begin{itemize} -\item Tracers are configured externally via a file, to match the ``field\_table'' functionality in FV3 -\item Add the RRFS\_v1alpha suite to match the UFS SRW App version 1 public release: \url{https://ufs-srweather-app.readthedocs.io/en/latest/} -\item Added ability to run with HWRF physics -\item Fixed bug related to prescribed surface flux cases (bug was present in v4.1.0) -\item Updated UFS initial conditions case generation script to better handle LSM-related variables -\item Update SCM surface initialization code to better match FV3 +\item Added RUC LSM support +\item Added the GFS\_v17p8, HRRR, RRFS\_v1beta, and WoFS\_v0 suites +\item Updated the vertical coordinate code to better match the latest FV3 vertical coordinate code +\item Simplified the case configuration namelists +\item Added greater flexibility for output location (outside of the bin directory) \end{itemize} \subsection{Limitations} @@ -31,7 +31,7 @@ \subsection{Limitations} This release bundle has some known limitations: \begin{itemize} -\item Using the RRFS\_v1alpha suite for cases where deep convection is expected to be active will likely produce strange/unreliable results, unless the forcing has been modified to account for the deep convection. This is because forcing for existing cases assumes a horizontal scale for which deep convection is subgrid-scale and is expected to be parameterized. The RRFS\_v1alpha suite is intended for use with regional UFS simulations with horizontal scale small enough not to need a deep convection parameterization active, and it does not contain a deep convective scheme. Nevertheless, the RRFS\_v1alpha suite is included with the SCM as-is for research purposes. +\item Using the RRFS\_v1beta, HRRR, and WoFS\_v0 suites for cases where deep convection is expected to be active will likely produce strange/unreliable results, unless the forcing has been modified to account for the deep convection. This is because forcing for existing cases assumes a horizontal scale for which deep convection is subgrid-scale and is expected to be parameterized. The suites without convection are intended for use with regional UFS simulations with horizontal scale small enough not to need a deep convection parameterization active, and they do not contain a deep convective scheme. Nevertheless, these suites are included with the SCM as-is for research purposes. \item The provided cases over land points cannot use an LSM at this time due to the lack of initialization data for the LSMs.
Therefore, for the provided cases over land points (ARM\_SGP\_summer\_1997\_* and LASSO\_*, where sfc\_type = 1 is set in the case configuration file), prescribed surface fluxes must be used: \begin{itemize} \item surface sensible and latent heat fluxes must be provided in the case data file diff --git a/scm/doc/TechGuide/chap_quick.tex b/scm/doc/TechGuide/chap_quick.tex index 3dc4c7eca..e100f62b6 100644 --- a/scm/doc/TechGuide/chap_quick.tex +++ b/scm/doc/TechGuide/chap_quick.tex @@ -14,7 +14,7 @@ \subsection{Release Code} Clone the source using \begin{lstlisting}[language=bash] -git clone --recursive -b v5.0.0 https://github.com/NCAR/ccpp-scm +git clone --recursive -b v6.0.0 https://github.com/NCAR/ccpp-scm \end{lstlisting} Recall that the \execout{recursive} option in this command clones the main ccpp-scm repository and all subrepositories (ccpp-physics and ccpp-framework). Using this option, there is no need to execute \exec{git submodule init} and \exec{git submodule update}. @@ -37,7 +37,7 @@ \subsection{Development Code} \begin{lstlisting}[language=bash] git submodule update --init --recursive \end{lstlisting} -You can try to use the latest commits of the ccpp-physics and ccpp-framework submodules if you wish, but this may not have been tested. To do so: +You can try to use the latest commits of the ccpp-physics and ccpp-framework submodules if you wish, but this may not have been tested (i.e. SCM development may lag ccpp-physics and/or ccpp-framework development). To do so: \begin{enumerate} \item Navigate to the ccpp-physics directory. \begin{lstlisting}[language=bash] @@ -69,36 +69,25 @@ \section{System Requirements, Libraries, and Tools} The source code for the SCM and CCPP components is in the form of programs written in FORTRAN, FORTRAN 90, and C. In addition, the I/O relies on the NetCDF libraries. Beyond the standard scripts, the build system relies on use of the Python scripting language, along with cmake, GNU make and date. -The basic requirements for building and running the CCPP and SCM bundle are listed below. The versions listed reflect successful tests and there is no guarantee that the code will work with different versions. +The following software stacks have been tested with this code. Other versions of various components will likely still work, however. + \begin{itemize} - \item FORTRAN 90+ compiler - \begin{itemize} - \item ifort 18.0.5.274, 19.0.2 and 19.0.5 - \item gfortran 6.2, 8.3, 9.1, 9.2, and 10.1 - \end{itemize} - \item C compiler - \begin{itemize} - \item icc 18.0.5.274, 19.0.2 and 19.0.5 - \item gcc 6.2, 8.3, 9.1, 9.2 and 10.1 - \item Apple clang 11.0.0.11000033, LLVM clang 9.0.0 and 10.0.0 - \end{itemize} - \item cmake 3.14+ - \begin{itemize} - \item NOTE: Version 3.15+ is required if installing NCEPLIBS - \end{itemize} - \item NetCDF 4.3.0, 4.4.0, 4.4.1.1, 4.5.0, 4.6.1, 4.6.3, 4.7.0, 4.7.3. 
4.7.4 (not 3.x) with HDF5 and ZLIB - \item Python 2.7.5, 2.7.9, 2.7.13, 2.7.16, 3.6.1, 3.7.5, and 3.8.5 with f90nml module (and Shapely if using the \execout{UFS\_IC\_generator.py} script) + \item gfortran 12.1.0, gcc 12.1.0, cmake 3.23.2, NetCDF 4.7.4, Python 3.9.12 + \item GNU compilers 10.1.0, cmake 3.16.4, NetCDF 4.8.1, Python 3.7.12 + \item GNU compilers 11.1.0, cmake 3.18.2, NetCDF 4.8.1, Python 3.8.5 + \item Intel compilers 2022.0.2, cmake 3.20.1, NetCDF 4.7.4, Python 3.7.11 + \item Intel compilers 2022.1.0, cmake 3.22.0, NetCDF 4.8.1, Python 3.7.12 \end{itemize} + +Because these tools are typically the purview of system administrators to install and maintain, they are considered part of the basic system requirements. The Unified Forecast System (UFS) Short-Range Weather Application release v1.0.0 of March 2021 provides software packages and detailed instructions to install these prerequisites and the hpc-stack on supported platforms (see section~\ref{section: setup_supported_platforms}). -Because these tools and libraries are typically the purview of system administrators to install and maintain, they are considered part of the basic system requirements. The Unified Forecast System (UFS) Short-Range Weather Application release v1.0.0 of March 2021 provides software packages and detailed instructions to install these prerequisites and the NCEPLIBS on supported platforms (see section~\ref{section: setup_supported_platforms}). - -Further, there are several utility libraries as part of the NCEPLIBS package that must be installed with environment variables pointing to their locations prior to building the SCM. +Further, there are several utility libraries as part of the hpc-stack package that must be installed with environment variables pointing to their locations prior to building the SCM. \begin{itemize} \item bacio - Binary I/O Library \item sp - Spectral Transformation Library \item w3nco - GRIB decoder and encoder library \end{itemize} -The following environment variables are used by the build system to properly link these libraries: \execout{BACIO\_LIB4}, \execout{SP\_LIBd}, and \execout{W3NCO\_LIBd}. Computational platforms in which the NCEPLIBS are prebuilt and installed in a central location are referred to as preconfigured platforms. Examples of preconfigured platforms are most NOAA high-performance computing machines (using the Intel compiler) and the NCAR Cheyenne system (using the Intel and GNU compilers). The machine setup scripts mentioned in section \ref{section: compiling} load these libraries (which are identical to those used by the UFS Short and Medium Range Weather Applications on those machines) and set these environment variables for the user automatically. For installing the libraries and its prerequisites on supported platforms, existing UFS packages can be used (see section~\ref{section: setup_supported_platforms}). +The following environment variables are used by the build system to properly link these libraries: \execout{bacio\_ROOT}, \execout{sp\_ROOT}, and \execout{w3nco\_ROOT}. Computational platforms in which these libraries are prebuilt and installed in a central location are referred to as preconfigured platforms. Examples of preconfigured platforms are most NOAA high-performance computing machines (using the Intel compiler) and the NCAR Cheyenne system (using the Intel and GNU compilers). 
The machine setup scripts mentioned in section \ref{section: compiling} load these libraries (which are identical to those used by the UFS Short and Medium Range Weather Applications on those machines) and set these environment variables for the user automatically. For installing the libraries and their prerequisites on supported platforms, existing UFS packages can be used (see section~\ref{section: setup_supported_platforms}). \subsection{Compilers} The CCPP and SCM have been tested on a variety of @@ -110,7 +99,7 @@ \subsection{Compilers} release website (\url{https://dtcenter.org/community-code/common-community-physics-package-ccpp/download}). \subsection{Using Existing Libraries on Preconfigured Platforms}\label{section: use_preconfigured_platforms} -Platform-specific scripts are provided to load modules and set the user environment for preconfigured platforms. These scripts load compiler modules (Fortran 2008-compliant), the NetCDF module, Python environment, etc. and set compiler and NCEPLIBS environment variables. From the top-level code directory (\execout{ccpp-scm} by default), source the correct script for your platform and shell. For \textit{t/csh} shells, +Platform-specific scripts are provided to load modules and set the user environment for preconfigured platforms. These scripts load compiler modules (Fortran 2008-compliant), the NetCDF module, Python environment, etc., and set compiler and environment variables. From the top-level code directory (\execout{ccpp-scm} by default), source the correct script for your platform and shell. For \textit{t/csh} shells, \begin{lstlisting}[language=csh] source scm/etc/Hera_setup_intel.csh source scm/etc/Cheyenne_setup_gnu.csh @@ -125,31 +114,9 @@ \subsection{Using Existing Libraries on Preconfigured Platforms}\label{section: \subsection{Installing Libraries on Non-preconfigured Platforms}\label{section: setup_supported_platforms} -For users on supported platforms such as generic Linux or macOS systems that have not been preconfigured, the UFS Short-Range Weather Application v1.0.0 release provides software packages and detailed setup instructions at \url{https://github.com/NOAA-EMC/NCEPLIBS-external/releases/tag/ufs-v2.0.0} and \url{https://github.com/NOAA-EMC/NCEPLIBS/releases/tag/ufs-v2.0.0}. UFS users who already installed the \execout{NCEPLIBS} package only need to set the compiler environment variables as indicated in the relevant \execout{README\_*.txt} file in \url{https://github.com/NOAA-EMC/NCEPLIBS-external/releases/tag/ufs-v2.0.0/doc} and source the shell script that is created by the \execout{NCEPLIBS} install process to set the required environment variables for compiling the SCM. - -The SCM uses only a small part of the UFS \execout{NCEPLIBS} package and has fewer prerequisites (i.e. no \execout{ESMF} or \execout{wgrib2} needed). Users who are not planning to use the UFS can follow the machine setup instructions in the relevant \execout{README*.txt} files in \url{https://github.com/NOAA-EMC/NCEPLIBS-external/releases/tag/ufs-v2.0.0/doc} and, instead of installing \execout{NCEPLIBS-external} and \execout{NCEPLIBS}, install only NetCDF/NetCDF-Fortran manually or using the software package manager (\execout{apt}, \execout{yum}, \execout{brew}).
- -Users need to set the compiler enviroment variables \execout{CC}, \execout{CXX}, \execout{FC} and the environment variable \execout{NETCDF} for compiling the three NCEP libraries (instead of the \execout{NCEPLIBS} umbrella build referred to in the \execout{NCEPLIBS-external} instructions) and the SCM. - -Installing the NCEP libraries: The SCM repository contains a bash installation script in \execout{ccpp-scm/contrib/build\_nceplibs.sh} that will fetch the source code of the three required NCEP libraries from their authoritative repositories on GitHub and install them locally for the SCM to use. To execute this script, perform the following step from the top level directory (\execout{ccpp-scm}). -\begin{lstlisting} -./contrib/build_nceplibs.sh /path/to/nceplibs -\end{lstlisting} - -Following successful execution of this script, the commands to set the proper environment variables mentioned above will be written to the terminal as output. One must execute the correct set for the active shell to finish the installation, e.g., for bash -\begin{lstlisting} -export bacio_ROOT=/path/to/nceplibs/ -export sp_ROOT=/path/to/nceplibs/ -export w3nco_ROOT=/path/to/nceplibs/ -\end{lstlisting} -and for t/csh -\begin{lstlisting} -setenv bacio_ROOT /path/to/nceplibs/ -setenv sp_ROOT /path/to/nceplibs/ -setenv w3nco_ROOT /path/to/nceplibs/ -\end{lstlisting} +For users on supported platforms such as generic Linux or macOS systems that have not been preconfigured, the \execout{hpc-stack} project is suggested for installing prerequisite libraries. Visit \url{https://github.com/NOAA-EMC/hpc-stack} for instructions on installing prerequisite libraries via \execout{hpc-stack} in its docs directory. UFS users who already installed libraries via the \execout{hpc-stack} package only need to set the compiler (\execout{CC, CXX, FC}), NetCDF (\execout{NetCDF\_ROOT}), and \execout{bacio}, \execout{sp}, and \execout{w3nco} (\execout{bacio\_ROOT}, \execout{sp\_ROOT}, \execout{w3nco\_ROOT}) environment variables to point to their installation paths in order to compile the SCM. -The installation of NCEPLIBS requires \execout{cmake} v3.15+. There are many ways to obtain the required version, either by following instructions provided by \execout{cmake} (\url{https://cmake.org/install/}), or by following the instructions provided for the UFS Short-Range Weather Application release (\url{https://github.com/NOAA-EMC/NCEPLIBS-external/releases/tag/ufs-v2.0.0}). Prepend this installation directory of \execout{cmake} to your path environment variable to use it for building the NCEPLIBS. +The SCM uses only a small part of the UFS \execout{hpc-stack} package and has fewer prerequisites (i.e. no \execout{ESMF} or \execout{wgrib2} needed). Users who are not planning to use the UFS can install only NetCDF/NetCDF-Fortran manually or using the software package manager (\execout{apt}, \execout{yum}, \execout{brew}). The Python environment must provide the \execout{f90nml} module for the SCM scripts to function. Users can test if f90nml is installed using this command in the shell: \begin{lstlisting} @@ -186,15 +153,16 @@ \subsection{Installing Libraries on Non-preconfigured Platforms}\label{section: \section{Compiling SCM with CCPP} \label{section: compiling} -The first step in compiling the CCPP and SCM is to properly setup your user environment as described in sections~\ref{section: use_preconfigured_platforms} and~\ref{section: setup_supported_platforms}.
The second step is to download the lookup tables and other large datasets (large binaries, $~$324\,MB) needed by the physics schemes and place them in the correct directory: +The first step in compiling the CCPP and SCM is to properly set up your user environment as described in sections~\ref{section: use_preconfigured_platforms} and~\ref{section: setup_supported_platforms}. The second step is to download the lookup tables and other large datasets (large binaries, $<$1 GB) needed by the physics schemes and place them in the correct directory: From the top-level code directory (\execout{ccpp-scm} by default), execute the following scripts: \begin{lstlisting}[language=bash] +./contrib/get_all_static_data.sh ./contrib/get_thompson_tables.sh ./contrib/get_mg_inccn_data.sh \end{lstlisting} -If the download step fails, make sure that your system's firewall does not block access to GitHub. If it does, download the files \execout{thompson\_tables.tar} and \execout{MG\_INCCN\_data.tar} from the GitHub release website using your browser and manually extract its contents in the directory \execout{scm/data/physics\_input\_data/}. +If the download step fails, make sure that your system's firewall does not block access to GitHub. If it does, download the files \execout{comparison\_data.tar.gz}, \execout{physics\_input\_data.tar.gz}, \execout{processed\_case\_input.tar.gz}, and \execout{raw\_case\_input.tar.gz} from the GitHub release website using your browser and manually extract their contents in the directory \execout{scm/data}. Similarly, do the same for \execout{thompson\_tables.tar.gz} and \execout{MG\_INCCN\_data.tar.gz} and extract to \execout{scm/data/physics\_input\_data/}. -Following this step, the top level build system will use \execout{cmake} to query system parameters, execute the CCPP prebuild script to match the physics variables (between what the host model -- SCM -- can provide and what is needed by physics schemes in the CCPP), and build the physics caps needed to use them. Finally, \execout{make} is used to compile the components. +Following this step, the top level build system will use \execout{cmake} to query system parameters, execute the CCPP prebuild script to match the physics variables (between what the host model -- SCM -- can provide and what is needed by physics schemes in the CCPP for the chosen suites), and build the physics caps needed to use them. Finally, \execout{make} is used to compile the components. \begin{enumerate} \item From the top-level code directory (\execout{ccpp-scm} by default), change directory to the top-level SCM directory. \begin{lstlisting}[language=bash] cd scm \end{lstlisting} @@ -204,12 +172,25 @@ \section{Compiling SCM with CCPP} \item Make a build directory and change into it. \begin{lstlisting}[language=bash] mkdir bin && cd bin \end{lstlisting} -\item Invoke \exec{cmake} on the source code to build using one of the options below. +\item Invoke \exec{cmake} on the source code to build using one of the options below. This step is used to identify for which suites the ccpp-framework will build caps and which suites can be run in the SCM without recompiling. \begin{itemize} \item Default mode \begin{lstlisting}[language=bash] cmake ../src \end{lstlisting} +By default, this option uses all supported suites. The list of supported suites is controlled by \execout{scm/src/suite\_info.py}. +\item All suites mode +\begin{lstlisting}[language=bash] +cmake -DCCPP_SUITES=ALL ../src +\end{lstlisting} +All suites in \execout{scm/src/suite\_info.py}, regardless of whether they're supported, will be used.
This list is typically longer for the development version of the code than for releases. +\item Selected suites mode +\begin{lstlisting}[language=bash] +cmake -DCCPP_SUITES=SCM_GFS_v16,SCM_RAP ../src +\end{lstlisting} +This only compiles the listed subset of suites (which should still have a corresponding entry in \execout{scm/src/suite\_info.py}). + + \item The statements above can be modified with the following options (put before \execout{../src}): \begin{itemize} \item Use threading with openmp (not for macOS with clang+gfortran) @@ -227,12 +208,11 @@ \section{Compiling SCM with CCPP} \end{lstlisting} \end{itemize} - -CMake automatically runs the CCPP prebuild script to match required physics variables with those available from the dycore (SCM) and to generate physics caps and makefile segments. It generates software caps for each physics group defined in the supplied Suite Definition Files (SDFs) and generates a static library that becomes part of the SCM executable. Appropriate software caps \textbf{will be generated for all suites defined in the \execout{ccpp-scm/ccpp/suites} directory automatically.} +CMake automatically runs the CCPP prebuild script to match required physics variables with those available from the dycore (SCM) and to generate physics caps and makefile segments. It generates software caps for each physics group defined in the supplied Suite Definition Files (SDFs) and generates a static library that becomes part of the SCM executable. If necessary, the CCPP prebuild script can be executed manually from the top level directory (\execout{ccpp-scm}). The basic syntax is \begin{lstlisting}[language=bash] -./ccpp/framework/scripts/ccpp_prebuild.py --config=./ccpp/config/ccpp_prebuild_config.py --suites=SCM_GFS_v15p2,SCM_GFS_v16beta,SCM_GSD_v1[...] --builddir=./scm/bin [--debug] +./ccpp/framework/scripts/ccpp_prebuild.py --config=./ccpp/config/ccpp_prebuild_config.py --suites=SCM_GFS_v16,SCM_RAP[...] --builddir=./scm/bin [--debug] \end{lstlisting} where the argument supplied via the \execout{-{}-suites} variable is a comma-separated list of suite names that exist in the \execout{./ccpp/suites} directory. Note that suite names are the suite definition filenames minus the \exec{suite\_} prefix and \exec{.xml} suffix. @@ -250,7 +230,7 @@ \section{Compiling SCM with CCPP} The resulting executable may be found at \execout{./scm} (Full path of \execout{ccpp-scm/scm/bin/scm}). -Although \execout{make clean} is not currently implemented, an out-of-source build is used, so all that is required to clean the build/run directory is (from the \execout{bin} directory) +Although \execout{make clean} is not currently implemented, an out-of-source build is used, so all that is required to clean the build directory is (from the \execout{bin} directory) \begin{lstlisting}[language=bash] pwd #confirm that you are in the ccpp-scm/scm/bin directory before deleting files rm -rfd * \end{lstlisting} If you encounter errors, please capture a log file from all of the steps, and start a thread on the support forum at: \url{https://dtcenter.org/forum/ccpp-user-support/ccpp-single-column-model} \section{Run the SCM with a supplied case} -There are several test cases provided with this version of the SCM. For all cases, the SCM will go through the time steps, applying forcing and calling the physics defined in the chosen suite definition file using physics configuration options from an associated namelist.
The model is executed through one of two Python run scripts that are pre-staged into the \execout{bin} directory: \execout{run\_scm.py} or \execout{multi\_run\_scm.py}. The first sets up and runs one integration while the latter will set up and run several integrations serially. +There are several test cases provided with this version of the SCM. For all cases, the SCM will go through the time steps, applying forcing and calling the physics defined in the chosen suite definition file using physics configuration options from an associated namelist. The model is executed through a Python run script that is pre-staged into the \execout{bin} directory: \execout{run\_scm.py}. It can be used to run one integration or several integrations serially, depending on the command-line arguments supplied. -\subsection{Single Run Script Usage} \label{subsection: singlerunscript} -Running a case requires four pieces of information: the case to run (consisting of initial conditions, geolocation, forcing data, etc.), the physics suite to use (through a CCPP suite definition file), a physics namelist (that specifies configurable physics options to use), and a tracer configuration file. As discussed in chapter \ref{chapter: cases}, cases are set up via their own namelists in \execout{../etc/case\_config}. A default physics suite is provided as a user-editable variable in the script and default namelists and tracer configurations are associated with each physics suite (through \execout{../src/default\_namelists.py} and \execout{../src/default\_tracers.py}), so, technically, one must only specify a case to run with the SCM. The single run script's interface is described below. +\subsection{Run Script Usage} \label{subsection: singlerunscript} +Running a case requires four pieces of information: the case to run (consisting of initial conditions, geolocation, forcing data, etc.), the physics suite to use (through a CCPP suite definition file), a physics namelist (that specifies configurable physics options to use), and a tracer configuration file. As discussed in chapter \ref{chapter: cases}, cases are set up via their own namelists in \execout{../etc/case\_config}. A default physics suite is provided as a user-editable variable in the script, and default namelists and tracer configurations are associated with each physics suite (through \execout{../src/suite\_info.py}), so, technically, one must only specify a case to run with the SCM when running just one integration. For running multiple integrations at once, one need only specify one argument (\execout{-m}), which runs through all permutations of supported suites from \execout{../src/suite\_info.py} and cases from \execout{../src/supported\_cases.py}. The run script's options are described below, where option abbreviations are included in brackets. -\begin{lstlisting}[language=bash] -./run_scm.py -c CASE_NAME [-s SUITE_NAME] [-n PHYSICS_NAMELIST.nml] [-t TRACER_CONFIGURATION.txt] [-g] [-d] -\end{lstlisting} +\begin{itemize} +\item \execout{-{}-case [-c]} + \begin{itemize} + \item \textbf{At minimum, either this or the \execout{-{}-multirun} option is required.} The case should correspond to the name of a case in \execout{../etc/case\_config} (without the \execout{.nml} extension). + \end{itemize} +\item \execout{-{}-suite [-s]} + \begin{itemize} + \item The suite should correspond to the name of a suite in \execout{../ccpp/suites} (without the \execout{.xml} extension) that was supplied in the \execout{cmake} or \execout{ccpp\_prebuild} step.
+ \end{itemize} +\item \execout{-{}-namelist [-n]} + \begin{itemize} + \item The namelist should correspond to the name of a file in \execout{../ccpp/physics\_namelists} (WITH the \execout{.nml} extension). If this argument is omitted, the default namelist for the given suite in \execout{../src/suite\_info.py} will be used. + \end{itemize} +\item \execout{-{}-tracers [-t]} + \begin{itemize} + \item The tracers file should correspond to the name of a file in \execout{../etc/tracer\_config} (WITH the \execout{.txt} extension). If this argument is omitted, the default tracer configuration for the given suite in \execout{../src/suite\_info.py} will be used. + \end{itemize} +\item \execout{-{}-multirun [-m]} + \begin{itemize} + \item \textbf{At minimum, either this or the \execout{-{}-case} option is required.} When used alone, this option runs through all permutations of supported suites from \execout{../src/suite\_info.py} and cases from \execout{../src/supported\_cases.py}. When used in conjunction with the \execout{-{}-file} option, only the runs configured in the file will be executed. + \end{itemize} +\item \execout{-{}-file [-f]} + \begin{itemize} + \item This option may be used in conjunction with the \execout{-{}-multirun} argument. It specifies the path and filename of a Python file where multiple runs are configured. + \end{itemize} +\item \execout{-{}-gdb [-g]} + \begin{itemize} + \item Use this to run the executable through the \execout{gdb} debugger (if it is installed on the system). + \end{itemize} +\item \execout{-{}-docker [-d]} + \begin{itemize} + \item Use this argument when running in a Docker container in order to successfully mount a volume between the host machine and the Docker container instance and to share the output and plots with the host machine. + \end{itemize} +\item \execout{-{}-runtime} + \begin{itemize} + \item Use this to override the runtime provided in the case configuration namelist. + \end{itemize} +\item \execout{-{}-runtime\_mult} + \begin{itemize} + \item Use this to override the runtime provided in the case configuration namelist by multiplying the runtime by the given value. This is used, for example, in regression testing to reduce total runtimes. + \end{itemize} +\item \execout{-{}-levels [-l]} + \begin{itemize} + \item Use this to change the number of vertical levels. + \end{itemize} +\item \execout{-{}-npz\_type} + \begin{itemize} + \item Use this to change the type of FV3 vertical grid to produce (see \execout{src/scm\_vgrid.F90} for valid values). + \end{itemize} +\item \execout{-{}-vert\_coord\_file} + \begin{itemize} + \item Use this to specify the path/filename of a file containing the a\_k and b\_k coefficients for the vertical grid generation code to use. + \end{itemize} +\item \execout{-{}-bin\_dir} + \begin{itemize} + \item Use this to specify the path to the build directory. + \end{itemize} +\item \execout{-{}-run\_dir} + \begin{itemize} + \item Use this to specify the path to the run directory. + \end{itemize} +\item \execout{-{}-case\_data\_dir} + \begin{itemize} + \item Use this to specify the path to the directory containing the case data file (useful for using the DEPHY case repository). + \end{itemize} +\item \execout{-{}-n\_itt\_out} + \begin{itemize} + \item Use this to specify the period of writing instantaneous output in timesteps (if different from the default specified in the script).
+ \end{itemize} +\item \execout{-{}-n\_itt\_diagt} + \begin{itemize} + \item Use this to specify the period of writing instantaneous and time-averaged diagnostic output in timesteps (if different from the default specified in the script). + \end{itemize} +\item \execout{-{}-timestep [-dt]} + \begin{itemize} + \item Use this to specify the timestep to use (if different from the default specified in \execout{../src/suite\_info.py}). + \end{itemize} +\item \execout{-{}-verbose [-v]} + \begin{itemize} + \item Use this option to see additional debugging output from the run script and screen output from the executable. + \end{itemize} +\end{itemize} -When invoking the run script, the only required argument is the name of the case to run. The case name used must match one of the case configuration files located in \execout{../etc/case\_config} (\emph{without the .nml extension!}). If specifying a suite other than the default, the suite name used must match the value of the suite name in one of the suite definition files located in \execout{../../ccpp/suites} (Note: not the filename of the suite definition file). As part of the fifth CCPP release, the following suite names are valid: +When invoking the run script for a single integration, the only required argument is the name of the case to run. The case name used must match one of the case configuration files located in \execout{../etc/case\_config} (\emph{without the .nml extension!}). If specifying a suite other than the default, the suite name used must match the value of the suite name in one of the suite definition files located in \execout{../../ccpp/suites} (Note: not the filename of the suite definition file). As part of the sixth CCPP release, the following suite names are valid: \begin{enumerate} -\item SCM\_GFS\_v15p2 -\item SCM\_GFS\_v16beta -\item SCM\_csawmg -\item SCM\_GSD\_v1 -\item SCM\_RRFS\_v1alpha +\item SCM\_GFS\_v16 +\item SCM\_GFS\_v17p8 +\item SCM\_RAP +\item SCM\_HRRR +\item SCM\_RRFS\_v1beta +\item SCM\_WoFS\_v0 \end{enumerate} -Note that using the Thompson microphysics scheme (as in SCM\_GSD\_v1) requires the computation of look-up tables during its initialization phase. As of the release, this process has been prohibitively slow with this model, so it is HIGHLY suggested that these look-up tables are downloaded and staged to use this scheme (and the SCM\_GSD\_v1 suite) as described in section~\ref{section: compiling}. +Note that using the Thompson microphysics scheme requires the computation of look-up tables during its initialization phase. As of the release, this process has been prohibitively slow with this model, so it is HIGHLY suggested that these look-up tables are downloaded and staged to use this scheme as described in section~\ref{section: compiling}. The issue appears to be machine/compiler-specific, so you may be able to produce the tables with the SCM, especially when invoking \execout{cmake} with the \execout{-DOPENMP=ON} option. Also note that some cases require specified surface fluxes. Special suite definition files that correspond to the suites listed above have been created and use the \execout{*\_prescribed\_surface} decoration. It is not necessary to specify this filename decoration when specifying the suite name. If the \execout{spec\_sfc\_flux} variable in the configuration file of the case being run is set to \execout{.true.}, the run script will automatically use the special suite definition file that corresponds to the chosen suite from the list above.
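+As an illustration of the options above (a minimal sketch, assuming the static data have been downloaded and the build completed with \execout{SCM\_GFS\_v16} among the compiled suites), a single integration and a full multirun might be invoked from the \execout{bin} directory as follows:
+\begin{lstlisting}[language=bash]
+# Single integration: the twpice case with the SCM_GFS_v16 suite,
+# using that suite's default namelist and tracer configuration
+./run_scm.py -c twpice -s SCM_GFS_v16
+# All permutations of supported suites and supported cases
+./run_scm.py -m
+\end{lstlisting}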
@@ -292,36 +352,26 @@ \subsection{Single Run Script Usage} \label{subsection: singlerunscript} \item rainwat \item snowwat \item graupel +\item hailwat \item cld\_amt \item water\_nc \item ice\_nc \item rain\_nc \item snow\_nc \item graupel\_nc +\item hail\_nc +\item graupel\_vol +\item hail\_vol +\item ccn\_nc \item sgs\_tke \item liq\_aero \item ice\_aero \item q\_rimef \end{enumerate} -Lastly, the \execout{-g} flag can be used to run the executable through the \exec{gdb} debugger (assuming it is installed on the system), and the \execout{-d} flag is required when running this command in a Docker container in order to successfully mount a volume between the host machine and the Docker container instance and to share the output and plots with the host machine. - -A NetCDF output file is generated in the location specified in the case configuration file, if the \execout{output\_dir} variable exists in that file. Otherwise an output directory is constructed from the case, suite, and namelist used (if different from the default). All output directories are placed in the \execout{bin} directory. If using a Docker container, all output is copied to the \execout{/home} directory in container space for volume-mounting purposes. Any standard NetCDF file viewing or analysis tools may be used to +A NetCDF output file is generated in an output directory, named using the case and suite, within the run directory. If using a Docker container, all output is copied to the \execout{/home} directory in container space for volume-mounting purposes. Any standard NetCDF file viewing or analysis tools may be used to examine the output file (ncdump, ncview, NCL, etc.). -\subsection{Multiple Run Script Usage}\label{subsection: multirunscript} - -A second Python script is provided for automating the execution of multiple integrations through repeated calling of the single run script. From the run directory, one may use this script through the following interface. - -\begin{lstlisting}[language=bash] -./multi_run_scm.py {[-c CASE_NAME] [-s SUITE_NAME] [-f PATH_TO_FILE]} [-v{v}] [-t] [-d] -\end{lstlisting} - -No arguments are required for this script. The \execout{-c or --case}, \execout{-s or --suite}, or \execout{-f or --file} options form a mutually-exclusive group, so exactly one of these is allowed at one time. If \execout{--c} is specified with a case name, the script will run a set of integrations for all supported suites (defined in \execout{../src/supported\_suites.py}) for that case. If \execout{-s} is specified with a suite name, the script will run a set of integrations for all supported cases (defined in \execout{../src/supported\_cases.py}) for that that suite. If \execout{-f} is specified with the path to a filename, it will read in lists of cases, suites, and namelists to use from that file. An example for this file's syntax can be found in \execout{../src/example\_multi\_run.py}. If multiple namelists are specified in the file, there either must be one suite specified \emph{or} the number of suites must match the number of namelists. If none of the \execout{-c or --case}, \execout{-s or --suite}, or \execout{-f or --file} options group is specified, the script will run through all permutations of supported cases and suites (as defined in the files previously mentioned). For this script, all runs are assumed to use default tracer configurations for all suites. - -In addition to the main options, some helper options can also be used with any of those above.
The \execout{-v{v} or --verbose} option can be used to output more information from the script to the console and to a log file. If this option is not used, only completion progress messages are written out. If one \execout{-v} is used, the script will write out completion progress messages and all messages and output from the single run script. If two \execout{-vv} are used, the script will also write out all messages and single run script output to a log file (\execout{multi\_run\_scm.log}) in the \execout{bin} directory. The option, \execout{-t or --timer}, can be used to output the elapsed time for each integration executed by the script. Note that the execution time includes file operations performed by the single run script in addition to the execution of the underlying (Fortran) SCM executable. By default, this option will execute one integration of each subprocess. Since some variability is expected for each model run, if greater precision is required, the number of integrations for timing averaging can be set through the internal script variable \execout{timer\_iterations}. This option can be useful, for example, for getting a rough idea of relative computational expense of different physics suites. Finally, the \execout{-d} flag is required when running this command in a Docker container in order to successfully mount a volume between the host machine and the Docker container instance and to share the output and plots with the host machine. - \subsection{Batch Run Script} If using the model on HPC resources and significant amounts of processor time is anticipated for the experiments, it will likely be necessary to submit a job through the HPC's batch system. An example script has been included in the repository for running the model on Hera's batch system (SLURM). It is located in \execout{ccpp-scm/scm/etc/scm\_slurm\_example.py}. Edit the \execout{job\_name}, \execout{account}, etc. to suit your needs and copy to the \execout{bin} directory. The case name to be run is included in the \execout{command} variable. To use, invoke @@ -330,7 +380,7 @@ \subsection{Batch Run Script} \end{lstlisting} from the \execout{bin} directory. -Additional details regarding the SCM may be found in the remainder of this guide. More information on the CCPP can be found in the CCPP Technical Documentation available at \url{https://ccpp-techdoc.readthedocs.io/en/v5.0.0/}. +Additional details regarding the SCM may be found in the remainder of this guide. More information on the CCPP can be found in the CCPP Technical Documentation available at \url{https://ccpp-techdoc.readthedocs.io/en/v6.0.0/}. \section{Creating and Using a Docker Container with SCM and CCPP} \label{docker} @@ -350,7 +400,7 @@ \section{Creating and Using a Docker Container with SCM and CCPP} \subsection{Building the Docker image} -The Dockerfile builds CCPP SCM v5.0.0 from source using the GNU compiler. A number of required codes are built and installed via the DTC-supported common community container. For reference, the common community container repository can be accessed here: \url{https://github.com/NCAR/Common-Community-Container}. +The Dockerfile builds CCPP SCM v6.0.0 from source using the GNU compiler. A number of required codes are built and installed via the DTC-supported common community container. For reference, the common community container repository can be accessed here: \url{https://github.com/NCAR/Common-Community-Container}. The CCPP SCM has a number of system requirements and necessary libraries and tools. 
Below is a list, including versions, used to create the GNU-based Docker image: \begin{itemize} @@ -375,7 +425,7 @@ \subsection{Building the Docker image} \begin{lstlisting}[language=bash] docker build -t ccpp-scm . \end{lstlisting} -Inspect the Dockerfile if you would like to see details for how the image is built. The image will contain SCM prerequisite software from DTC, the SCM and CCPP code, and a pre-compiled executable for the SCM with the 5 supported suites for the SCM. A successful build will show two images: dtcenter/common-community-container, and ccpp-scm. To list images, type: +Inspect the Dockerfile if you would like to see details for how the image is built. The image will contain SCM prerequisite software from DTC, the SCM and CCPP code, and a pre-compiled SCM executable with the 6 supported suites. A successful build will show two images: dtcenter/common-community-container, and ccpp-scm. To list images, type: \begin{lstlisting}[language=bash] docker images \end{lstlisting} @@ -385,7 +435,7 @@ \subsection{Using a prebuilt Docker image from Dockerhub} A prebuilt Docker image for this release is available on Dockerhub if it is not desired to build from source. In order to use this, execute the following from the terminal where Docker is run: \begin{lstlisting}[language=bash] -docker pull dtcenter/ccpp-scm:v5.0.0 +docker pull dtcenter/ccpp-scm:v6.0.0 \end{lstlisting} To verify that it exists afterward, run \begin{lstlisting}[language=bash] @@ -411,13 +461,13 @@ export OUT_DIR=/path/to/output \end{lstlisting} For Windows, the format that worked for us followed this example: \execout{/c/Users/my username/path/to/directory/to/mount} -\item To run the SCM, you can run the Docker container that was just created and give it the same run commands as discussed in sections \ref{subsection: singlerunscript} and \ref{subsection: multirunscript}. \textbf{Be sure to remember to include the \execout{-d} option for all run commands}. For example, +\item To run the SCM, you can run the Docker container that was just created and give it the same run commands as discussed in section \ref{subsection: singlerunscript}. \textbf{Be sure to remember to include the \execout{-d} option for all run commands}. For example, \begin{lstlisting}[language=bash] docker run --rm -it -v ${OUT_DIR}:/home --name run-ccpp-scm ccpp-scm ./run_scm.py -c twpice -d \end{lstlisting} will run through the TWPICE case using the default suite and namelist and put the output in the shared directory. NOTE: Windows users may need to omit the curly braces around environment variables: use \execout{\$OUT\_DIR} instead of \execout{\$\{OUT\_DIR\}}. For running through all supported cases and suites, use \begin{lstlisting}[language=bash] -docker run --rm -it -v ${OUT_DIR}:/home --name run-ccpp-scm ccpp-scm ./multi_run_scm.py -d +docker run --rm -it -v ${OUT_DIR}:/home --name run-ccpp-scm ccpp-scm ./run_scm.py -m -d \end{lstlisting} The options included in the above \execout{run} commands are the following: \begin{itemize} @@ -426,7 +476,7 @@ \subsection{Running the Docker image} \item \execout{-v} specifies the volume mount from host directory (outside container) to inside the container. Using volumes allows you to share data between the host machine and container. For running the SCM, the output is being mounted from \execout{/home} inside the container to the \execout{OUT\_DIR} on the host machine.
Upon exiting the container, data mounted to the host machine will still be accessible. \item \execout{$--$name} names the container. If no name is provided, the daemon will autogenerate a random string name. \end{itemize} -NOTE: If you are using a prebuilt image from Dockerhub, substitute the name of the image that was pulled from Dockerhub in the commands above; i.e. instead of \execout{ccpp-scm} above, one would have \execout{dtcenter/ccpp-scm:v5.0.0}. +NOTE: If you are using a prebuilt image from Dockerhub, substitute the name of the image that was pulled from Dockerhub in the commands above; i.e. instead of \execout{ccpp-scm} above, one would have \execout{dtcenter/ccpp-scm:v6.0.0}. \item To use the SCM interactively, run non-default configurations, create plots, or even develop code, issue the following command: \begin{lstlisting}[language=bash] docker run --rm -it -v ${OUT_DIR}:/home --name run-ccpp-scm ccpp-scm /bin/bash diff --git a/scm/doc/TechGuide/chap_repo.tex b/scm/doc/TechGuide/chap_repo.tex index 0ebb45b5f..4c61361b8 100644 --- a/scm/doc/TechGuide/chap_repo.tex +++ b/scm/doc/TechGuide/chap_repo.tex @@ -2,7 +2,7 @@ \chapter{Repository} \label{chapter: repository} \section{What is included in the repository?} -The repository contains all code and data required to run the CCPP SCM (with the exception of large initialization tables for the Thompson and Morrison-Gettelman microphysics schemes discussed in subsection \ref{subsection: singlerunscript}). It is functionally separated into 3 subdirectories representing the SCM model infrastructure (\execout{scm} directory), the CCPP infrastructure (\execout{ccpp/framework} directory), and the CCPP physics schemes (\execout{ccpp/physics} directory). The entire \execout{ccpp-scm} repository resides on Github's NCAR space, and the \execout{ccpp/framework} and \execout{ccpp/physics} directories are git submodules that point to repositories \execout{ccpp-framework} and \execout{ccpp-physics} on the same space. The structure of the entire repository is represented below. Note that the \execout{ccpp-physics} repository also contains files needed for using the CCPP with the UFS Atmosphere host model that uses the Finite-Volume Cubed-Sphere (FV3) dynamical core. +The repository contains all code required to build the CCPP SCM and scripts that can be used to obtain data to run it (e.g., downloading the large initialization tables for the Thompson microphysics scheme discussed in subsection \ref{subsection: singlerunscript} and processed case data). It is functionally separated into 3 subdirectories representing the SCM model infrastructure (\execout{scm} directory), the CCPP infrastructure (\execout{ccpp/framework} directory), and the CCPP physics schemes (\execout{ccpp/physics} directory). The entire \execout{ccpp-scm} repository resides on Github's NCAR space, and the \execout{ccpp/framework} and \execout{ccpp/physics} directories are git submodules that point to repositories \execout{ccpp-framework} and \execout{ccpp-physics} on the same space. The structure of the entire repository is represented below. Note that the \execout{ccpp-physics} repository also contains files needed for using the CCPP with the UFS Atmosphere host model that uses the Finite-Volume Cubed-Sphere (FV3) dynamical core. {\small\justify \dirtree{% @@ -10,30 +10,16 @@ \section{What is included in the repository?} .2 ccpp/. .3 config/\DTcomment{contains the CCPP prebuild configuration file}. .3 framework/. - .4 cmake/\DTcomment{custom cmake code for building ccpp-framework}.
- .4 CMakeLists.txt\DTcomment{cmake configuration file for ccpp-framework}. - .4 CODEOWNERS\DTcomment{list of GitHub users with permission to merge}. - .4 doc/\DTcomment{doxygen configuration, output, and Technical Documentation (obsolete)}. - .4 LICENSE. - .4 README.md. - .4 schemes/\DTcomment{contains schemes used for testing}. - .4 scripts/\DTcomment{contains ccpp\_prebuild and other Python scripts for parsing metadata}. - .4 src/\DTcomment{contains CCPP framework code}. - .4 test/\DTcomment{contains scripts and configurations for testing}. - .4 tests\DTcomment{contains next-generation files for testing}. + .4 See \href{https://github.com/NCAR/ccpp-framework/tree/9de5b27890bd95433d14c092b5a34dd07c343634}{https://github.com/NCAR/ccpp-framework} for contents. .3 physics/\DTcomment{contains all physics schemes}. - .4 CMakeLists.txt\DTcomment{cmake configuration file for ccpp-physics}. - .4 CODEOWNERS\DTcomment{list of GitHub users with permission to merge}. - .4 LICENSE. - .4 physics/\DTcomment{contains all CCPP physics and interstitial schemes}. - .5 docs/\DTcomment{contains CCPP physics doxygen documentation}. - .4 README.md. - .4 tools/\DTcomment{tools for checking physics source code}. + .4 See \href{https://github.com/NCAR/ccpp-physics/tree/87d4b6c3ffd4cd4ab2ed7e5b5a2cb3c52f7f350b}{https://github.com/NCAR/ccpp-physics} for contents. .3 physics\_namelists\DTcomment{contains physics namelist files associated with suites}. .3 suites/\DTcomment{contains suite definition files}. + .2 CMakeModules/\DTcomment{contains code to help cmake find other software}. + .3 See \href{https://github.com/noaa-emc/CMakeModules/tree/0065b18c1ad586c17dc2a3c88bb2f52476c79eaf}{https://github.com/noaa-emc/CMakeModules} for contents. .2 CODEOWNERS\DTcomment{list of GitHub users with permission to merge}. .2 contrib/. - .3 build\_nceplibs.sh\DTcomment{script for installing prerequisite NCEPLIBS locally}. + .3 get\_all\_static\_data.sh\DTcomment{script for downloading/extracting the processed SCM case data}. .3 get\_thompson\_tables.sh\DTcomment{script for downloading/extracting the Thompson lookup tables}. .3 get\_mg\_inccn\_data.sh\DTcomment{script for downloading/extracting the Morrison-Gettelman data}. .2 docker/. @@ -41,11 +27,11 @@ \section{What is included in the repository?} .2 README.md. .2 scm/. .3 bin/\DTcomment{build directory (initially empty; populated by cmake)}. - .3 data/. - .4 comparison\_data/\DTcomment{contains data with which to compare SCM output}. - .4 physics\_input\_data/\DTcomment{contains data needed by the CCPP physics}. - .4 processed\_case\_input/\DTcomment{contains initialization and forcing data for cases}. - .4 raw\_case\_input/\DTcomment{contains case data to be processed by scripts}. + .3 data/\DTcomment{data directory (most subdirectories populated by contrib/get\_all\_static\_data.sh)}. + .4 comparison\_data/\DTcomment{initially empty; contains data with which to compare SCM output}. + .4 physics\_input\_data/\DTcomment{initially empty; contains data needed by the CCPP physics}. + .4 processed\_case\_input/\DTcomment{initially empty; contains initialization and forcing data for cases}. + .4 raw\_case\_input/\DTcomment{initially empty; contains case data to be processed by scripts}. .4 vert\_coord\_data/\DTcomment{contains data to calculate vertical coordinates (from GSM-based GFS only)}. .3 doc/\DTcomment{contains this User's/Technical Guide}. .4 TechGuide/\DTcomment{contains LaTeX for this User's Guide}.
@@ -56,12 +42,16 @@ \section{What is included in the repository?} .4 Cheyenne\_setup\_gnu.sh\DTcomment{setup script for Cheyenne HPC for sh, bash}. .4 Cheyenne\_setup\_intel.csh\DTcomment{setup script for Cheyenne HPC for csh, tcsh}. .4 Cheyenne\_setup\_intel.sh\DTcomment{setup script for Cheyenne HPC for sh, bash}. - .4 scm\_slurm\_example.py\DTcomment{example QSUB run script}. + .4 Desktop\_setup\_gfortran.csh\DTcomment{setup script for Mac Desktop for csh, tcsh}. + .4 Desktop\_setup\_gfortran.sh\DTcomment{setup script for Mac Desktop for sh, bash}. .4 Hera\_setup\_intel.csh\DTcomment{setup script for Theia HPC for csh, tcsh}. .4 Hera\_setup\_intel.sh\DTcomment{setup script for Theia HPC for sh, bash}. + .4 scm\_qsub\_example.py\DTcomment{example QSUB run script}. + .4 scm\_slurm\_example.py\DTcomment{example SLURM run script}. .4 scripts/\DTcomment{Python scripts for setting up cases and plotting}. .5 plot\_configs/\DTcomment{plot configuration files}. .4 tracer\_config\DTcomment{tracer configuration files}. .3 LICENSE.txt. - .3 src/\DTcomment{source code for SCM infrastructure and Python run scripts}. + .3 run/\DTcomment{initially empty; populated by run\_scm.py}. + .3 src/\DTcomment{source code for SCM infrastructure, Python run script, CMakeLists.txt for the SCM, example multirun setup files, suite\_info.py}. }} diff --git a/scm/doc/TechGuide/dephy_case_header.txt b/scm/doc/TechGuide/dephy_case_header.txt new file mode 100644 index 000000000..b21c46b0c --- /dev/null +++ b/scm/doc/TechGuide/dephy_case_header.txt @@ -0,0 +1,141 @@ +netcdf fv3_model_point_djs_SCM_driver { +dimensions: + t0 = 1 ; + lat = 1 ; + lon = 1 ; + lev = 127 ; + time = 4 ; + nsoil = 4 ; + nsnow = 3 ; + nsoil_plus_nsnow = 7 ; + nice = 2 ; +variables: + double t0(t0) ; + t0:units = "seconds since 2021-03-22 06:00:00" ; + t0:longname = "Initial time" ; + t0:calendar = "gregorian" ; + double lat(lat) ; + lat:units = "degrees_north" ; + lat:long_name = "Latitude" ; + double lon(lon) ; + lon:units = "degrees_east" ; + lon:long_name = "Longitude" ; + double lev(lev) ; + lev:units = "Pa" ; + lev:long_name = "pressure" ; + double time(time) ; + time:units = "seconds since 2021-03-22 06:00:00" ; + time:long_name = "Forcing time" ; + double thetal(t0, lev, lat, lon) ; + thetal:units = "K" ; + thetal:long_name = "Liquid potential temperature" ; + double qv(t0, lev, lat, lon) ; + qv:units = "kg kg-1" ; + qv:long_name = "Specific humidity" ; + double qt(t0, lev, lat, lon) ; + qt:units = "kg kg-1" ; + qt:long_name = "Total water content" ; + double rv(t0, lev, lat, lon) ; + rv:units = "kg kg-1" ; + rv:long_name = "Water vapor mixing ratio" ; + double rt(t0, lev, lat, lon) ; + rt:units = "kg kg-1" ; + rt:long_name = "Total water mixing ratio" ; + double u(t0, lev, lat, lon) ; + u:units = "m s-1" ; + u:long_name = "Zonal wind" ; + double v(t0, lev, lat, lon) ; + v:units = "m s-1" ; + v:long_name = "Meridional wind" ; + double pressure(t0, lev, lat, lon) ; + pressure:units = "Pa" ; + pressure:long_name = "Pressure" ; + double height(t0, lev, lat, lon) ; + height:units = "m" ; + height:long_name = "Height above ground" ; + double ps(t0, lat, lon) ; + ps:units = "Pa" ; + ps:long_name = "Surface pressure" ; + double ql(t0, lev, lat, lon) ; + ql:units = "kg kg-1" ; + ql:long_name = "Liquid water content" ; + double qi(t0, lev, lat, lon) ; + qi:units = "kg kg-1" ; + qi:long_name = "Ice water content" ; + double rl(t0, lev, lat, lon) ; + rl:units = "kg kg-1" ; + rl:long_name = "Liquid water mixing ratio" ; + double ri(t0, lev, lat, lon) 
; + ri:units = "kg kg-1" ; + ri:long_name = "Ice water mixing ratio" ; + double tke(t0, lev, lat, lon) ; + tke:units = "m2 s-2" ; + tke:long_name = "Turbulent kinetic energy" ; + double ozone(t0, lev, lat, lon) ; + ozone:units = "kg kg^-1" ; + ozone:description = "initial profile of ozone mass mixing ratio" ; + double ps_forc(time, lat, lon) ; + ps_forc:units = "Pa" ; + ps_forc:long_name = "Surface pressure for forcing" ; + double pressure_forc(time, lev, lat, lon) ; + pressure_forc:units = "Pa" ; + pressure_forc:long_name = "Pressure for forcing" ; + double height_forc(time, lev, lat, lon) ; + height_forc:units = "m" ; + height_forc:long_name = "Height above the ground for forcing" ; + double temp_adv(time, lev, lat, lon) ; + temp_adv:units = "K s-1" ; + temp_adv:long_name = "Temperature large-scale advection" ; + double qv_adv(time, lev, lat, lon) ; + qv_adv:units = "kg kg-1 s-1" ; + qv_adv:long_name = "Specific humidity large-scale advection" ; + double u_adv(time, lev, lat, lon) ; + u_adv:units = "m s-2" ; + u_adv:long_name = "Zonal wind large-scale advection" ; + double v_adv(time, lev, lat, lon) ; + v_adv:units = "m s-2" ; + v_adv:long_name = "Meridional wind large-scale advection" ; + +// global attributes: + :case = "UFS_20210322060000_261.56E38.1N" ; + :title = "Forcing and Initial Conditions for UFS_20210322060000_261.56E38.1N" ; + :reference = "" ; + :author = "Grant J. Firl" ; + :version = "Created on 2022-06-15-11:24:48" ; + :format_version = "1.0" ; + :modifications = "contains initial conditions for Noah LSM" ; + :script = "UFS_IC_generator.py" ; + :comment = "" ; + :startDate = "20210322060000" ; + :endDate = "20210322065959" ; + :adv_temp = 1 ; + :adv_theta = 0 ; + :adv_thetal = 0 ; + :rad_temp = 0 ; + :rad_theta = 0 ; + :rad_thetal = 0 ; + :adv_qv = 1 ; + :adv_qt = 0 ; + :adv_rv = 0 ; + :adv_rt = 0 ; + :adv_u = 1 ; + :adv_v = 1 ; + :forc_w = 0 ; + :forc_omega = 0 ; + :forc_geo = 0 ; + :nudging_u = 0 ; + :nudging_v = 0 ; + :nudging_temp = 0 ; + :nudging_theta = 0 ; + :nudging_thetal = 0 ; + :nudging_qv = 0 ; + :nudging_qt = 0 ; + :nudging_rv = 0 ; + :nudging_rt = 0 ; + :zorog = 538.3386f ; + :z0 = 15. ; + :surfaceType = "land" ; + :surfaceForcing = "lsm" ; + :surfaceForcingWind = "lsm" ; + :missing_value = -9999. 
; +} diff --git a/scm/doc/TechGuide/main.pdf b/scm/doc/TechGuide/main.pdf index f1e763df6..1715bd586 100644 Binary files a/scm/doc/TechGuide/main.pdf and b/scm/doc/TechGuide/main.pdf differ diff --git a/scm/doc/TechGuide/title.tex b/scm/doc/TechGuide/title.tex index 26f4f7306..bcb35a7fa 100644 --- a/scm/doc/TechGuide/title.tex +++ b/scm/doc/TechGuide/title.tex @@ -8,19 +8,22 @@ \textcolor{darkgray}{\bigsf Common Community Physics Package\\[0.5ex] Single Column Model (SCM)} \vspace*{1em}\par -\textcolor{darkgray}{\bigst User and Technical Guide\\[0.5ex] v5.0.0} +\textcolor{darkgray}{\bigst User and Technical Guide\\[0.5ex] v6.0.0} \vspace*{1em}\par -\large{March 2021}\\ +\large{June 2022}\\ -Grant Firl, Laurie Carson, Michelle Harrold\\ -\textit{\small{National Center for Atmospheric Research and Developmental Testbed Center}}\\[4em] +Grant Firl\\ +\textit{\small{CIRA/CSU, NOAA GSL, and DTC}}\\[4em] + +Dustin Swales, Laurie Carson, Michelle Harrold\\ +\textit{\small{NCAR and DTC}}\\[4em] Ligia Bernardet\\ -\textit{\small{NOAA Global Systems Laboratory and Developmental Testbed Center}}\\[4em] +\textit{\small{NOAA GSL and DTC}}\\[4em] Dom Heinzeller\\ -\textit{\small{NOAA Global Systems Laboratory, Developmental Testbed Center and CIRES/CU}}\\[4em] +\textit{\small{JCSDA and DTC}}\\[4em] \vspace{4em} diff --git a/scm/etc/scripts/plot_configs/twpice_all_suites.ini b/scm/etc/scripts/plot_configs/twpice_all_suites.ini index 4f91675ae..1cc5b1da0 100644 --- a/scm/etc/scripts/plot_configs/twpice_all_suites.ini +++ b/scm/etc/scripts/plot_configs/twpice_all_suites.ini @@ -15,7 +15,7 @@ time_series_resample = True [plots] [[profiles_mean]] - vars = qc, qv, T, dT_dt_PBL, dT_dt_conv, dT_dt_micro, dT_dt_lwrad, dT_dt_swrad + vars = qc, qv, T, dT_dt_pbl, dT_dt_conv, dT_dt_micro, dT_dt_lwrad, dT_dt_swrad vars_labels = 'cloud water mixing ratio ($g$ $kg^{-1}$)', 'specific humidity ($g$ $kg^{-1}$)', 'T (K)', 'PBL tendency (K/day)', 'conv. 
tendency (K/day)', 'microphysics tendency (K/day)', 'LW tendency (K/day)', 'SW tendency (K/day)' vert_axis = pres_l vert_axis_label = 'average pressure (Pa)' @@ -27,7 +27,7 @@ time_series_resample = True [[profiles_mean_multi]] [[[T_forcing]]] - vars = T_force_tend, dT_dt_PBL, dT_dt_conv, dT_dt_micro, dT_dt_lwrad, dT_dt_swrad + vars = T_force_tend, dT_dt_pbl, dT_dt_conv, dT_dt_micro, dT_dt_lwrad, dT_dt_swrad vars_labels = 'force', 'PBL', 'Conv', 'MP', 'LW', 'SW' x_label = 'K/day' [[[conv_tendencies]]] diff --git a/scm/etc/scripts/scm_plotting_routines.py b/scm/etc/scripts/scm_plotting_routines.py index 3c0a15cda..aa100bfd6 100755 --- a/scm/etc/scripts/scm_plotting_routines.py +++ b/scm/etc/scripts/scm_plotting_routines.py @@ -57,7 +57,7 @@ def plot_profile_multi(z, values, labels, x_label, y_label, filename, obs_z=None plt.rc('text', usetex=latex_labels) if np.count_nonzero(values) == 0: - print('The plot for {} will not be created due to all zero values'.format(x_label)) + print('The plot named {} will not be created due to all zero values'.format(filename)) return fig = plt.figure() @@ -296,7 +296,7 @@ def plot_time_series_multi(time, values, labels, x_label, y_label, filename, obs plt.rc('text', usetex=latex_labels) if np.count_nonzero(values) == 0: - print('The plot for {} will not be created due to all zero values'.format(y_label)) + print('The plot named {} will not be created due to all zero values'.format(filename)) return fig = plt.figure() @@ -410,11 +410,11 @@ def contour_plot_firl(x_dim, y_dim, values, min_val, max_val, title, x_label, y_ plt.rc('text', usetex=latex_labels) if np.count_nonzero(values) == 0: - print('The plot for {} will not be created due to all zero values'.format(title)) + print('The plot named {} will not be created due to all zero values'.format(filename)) return if np.amax(values) == np.amin(values): - print('The plot for {} will not be created due to all values being equal'.format(title)) + print('The plot named {} will not be created due to all values being equal'.format(filename)) return if(min_val != -999 and max_val != -999): diff --git a/scm/etc/scripts/scm_read_obs.py b/scm/etc/scripts/scm_read_obs.py index f82f9c52d..59ac43c34 100644 --- a/scm/etc/scripts/scm_read_obs.py +++ b/scm/etc/scripts/scm_read_obs.py @@ -3,7 +3,6 @@ from netCDF4 import Dataset import datetime import numpy as np -import sys import math import forcing_file_common as ffc diff --git a/scm/src/run_scm.py b/scm/src/run_scm.py index 344b9bf60..9987d2de0 100755 --- a/scm/src/run_scm.py +++ b/scm/src/run_scm.py @@ -258,32 +258,32 @@ def __init__(self, case, suite, runtime, runtime_mult, levels, npz_type, vert_co if runtime: self._runtime = runtime message = 'Namelist runtime adjustment {0} IS applied'.format(self._runtime) - logging.info(message) + logging.debug(message) else: self._runtime = None message = 'Namelist runtime adjustment {0} IS NOT applied'.format(self._runtime) - logging.info(message) + logging.debug(message) if runtime_mult: self._runtime_mult = runtime_mult message = 'Existing case namelist runtime multiplied by {0}'.format(self._runtime_mult) - logging.info(message) + logging.debug(message) else: self._runtime_mult = None if levels: self._levels = levels message = 'The number of vertical levels is set to {0}'.format(self._levels) - logging.info(message) + logging.debug(message) else: self._levels = None message = 'The number of vertical levels contained in the case configuration file is used if present, otherwise the default value in scm_input.F90 is used.' 
- logging.info(message) + logging.debug(message) if npz_type: self._npz_type = npz_type message = 'The npz_type of vertical levels is set to {0}'.format(self._npz_type) - logging.info(message) + logging.debug(message) if npz_type == 'input': if vert_coord_file: self._vert_coord_file = vert_coord_file @@ -302,7 +302,7 @@ def __init__(self, case, suite, runtime, runtime_mult, levels, npz_type, vert_co self._npz_type = None self._vert_coord_file = None message = 'The npz_type contained in the case configuration file is used if present, otherwise the default value in scm_input.F90 is used.' - logging.info(message) + logging.debug(message) if case_data_dir: self._case_data_dir = case_data_dir @@ -544,7 +544,7 @@ def setup_rundir(self): execute(cmd) # Link physics namelist to run directory with its original name - logging.info('Linking physics namelist {0} to run directory'.format(self._physics_namelist)) + logging.debug('Linking physics namelist {0} to run directory'.format(self._physics_namelist)) if os.path.isfile(os.path.join(SCM_RUN, self._physics_namelist)): os.remove(os.path.join(SCM_RUN,self._physics_namelist)) if not os.path.isfile(os.path.join(SCM_ROOT, PHYSICS_NAMELIST_DIR, self._physics_namelist)): @@ -561,7 +561,7 @@ def setup_rundir(self): execute(cmd) # Link tracer configuration to run directory with standard name - logging.info('Linking tracer configuration {0} to run directory'.format(self._tracers)) + logging.debug('Linking tracer configuration {0} to run directory'.format(self._tracers)) if os.path.isfile(os.path.join(SCM_RUN, self._tracers)): os.remove(os.path.join(SCM_RUN, self._tracers)) if not os.path.isfile(os.path.join(SCM_ROOT, TRACERS_DIR, self._tracers)): @@ -580,7 +580,7 @@ def setup_rundir(self): case_data_netcdf_file = self._case + '.nc' except KeyError: case_data_netcdf_file = self._case + '.nc' - logging.info('Linking case input data file {0} to run directory'.format(case_data_netcdf_file)) + logging.debug('Linking case input data file {0} to run directory'.format(case_data_netcdf_file)) if os.path.isfile(os.path.join(SCM_RUN, case_data_netcdf_file)): os.remove(os.path.join(SCM_RUN, case_data_netcdf_file)) if not os.path.isfile(os.path.join(SCM_ROOT, self._case_data_dir, case_data_netcdf_file)): @@ -592,7 +592,7 @@ def setup_rundir(self): # Link vertical coordinate file to run directory with its original name if (self._npz_type == 'input'): - logging.info('Linking vertical coordinate file {0} to run directory'.format(self._vert_coord_file)) + logging.debug('Linking vertical coordinate file {0} to run directory'.format(self._vert_coord_file)) if os.path.isfile(os.path.join(SCM_RUN, self._vert_coord_file)): os.remove(os.path.join(SCM_RUN, self._vert_coord_file)) if not os.path.isfile(os.path.join(SCM_ROOT, VERT_COORD_DATA_DIR, self._vert_coord_file)): @@ -604,7 +604,7 @@ def setup_rundir(self): # Link physics SDF to run directory physics_suite = 'suite_' + self._suite + '.xml' - logging.info('Linking physics suite {0} to run directory'.format(physics_suite)) + logging.debug('Linking physics suite {0} to run directory'.format(physics_suite)) if os.path.isfile(os.path.join(SCM_RUN, physics_suite)): os.remove(os.path.join(SCM_RUN, physics_suite)) if not os.path.isfile(os.path.join(SCM_ROOT, PHYSICS_SUITE_DIR, physics_suite)): @@ -615,7 +615,7 @@ def setup_rundir(self): execute(cmd) # Link physics data needed for schemes to run directory - logging.info('Linking physics input data from {0} into run directory'.format(os.path.join(SCM_ROOT, PHYSICS_DATA_DIR))) + 
logging.debug('Linking physics input data from {0} into run directory'.format(os.path.join(SCM_ROOT, PHYSICS_DATA_DIR))) for entry in os.listdir(os.path.join(SCM_ROOT, PHYSICS_DATA_DIR)): if os.path.isfile(os.path.join(SCM_ROOT, PHYSICS_DATA_DIR, entry)): if not os.path.exists(entry): @@ -624,7 +624,7 @@ def setup_rundir(self): execute(cmd) # Link reference profile data to run directory - logging.info('Linking reference profile data from {0} into run directory'.format(os.path.join(SCM_ROOT, REFERENCE_PROFILE_DIR))) + logging.debug('Linking reference profile data from {0} into run directory'.format(os.path.join(SCM_ROOT, REFERENCE_PROFILE_DIR))) for entry in REFERENCE_PROFILE_FILE_LIST: if os.path.isfile(os.path.join(SCM_ROOT, REFERENCE_PROFILE_DIR, entry)): if not os.path.exists(entry): @@ -635,7 +635,7 @@ def setup_rundir(self): # Parse physics namelist and extract # - oz_phys # - oz_phys_2015 - logging.info('Parsing physics namelist {0}'.format(os.path.join(SCM_RUN, self._physics_namelist))) + logging.debug('Parsing physics namelist {0}'.format(os.path.join(SCM_RUN, self._physics_namelist))) nml = f90nml.read(os.path.join(SCM_RUN, self._physics_namelist)) # oz_phys try: @@ -657,11 +657,11 @@ def setup_rundir(self): if os.path.exists(os.path.join(SCM_RUN, OZ_PHYS_LINK)): os.remove(os.path.join(SCM_RUN, OZ_PHYS_LINK)) if oz_phys: - logging.info('Linking input data for oz_phys') + logging.debug('Linking input data for oz_phys') cmd = 'ln -sf {0} {1}'.format(os.path.join(SCM_RUN, OZ_PHYS_TARGET), os.path.join(SCM_RUN, OZ_PHYS_LINK)) execute(cmd) elif oz_phys_2015: - logging.info('Linking input data for oz_phys_2015') + logging.debug('Linking input data for oz_phys_2015') cmd = 'ln -sf {0} {1}'.format(os.path.join(SCM_RUN, OZ_PHYS_2015_TARGET), os.path.join(SCM_RUN, OZ_PHYS_LINK)) execute(cmd) @@ -675,12 +675,12 @@ def setup_rundir(self): if do_ugwp_v1: if os.path.exists(os.path.join(SCM_RUN, TAU_LINK)): os.remove(os.path.join(SCM_RUN, TAU_LINK)) - logging.info('Linking input data for UGWP_v1') + logging.debug('Linking input data for UGWP_v1') cmd = 'ln -sf {0} {1}'.format(os.path.join(SCM_RUN, TAU_TARGET), os.path.join(SCM_RUN, TAU_LINK)) execute(cmd) # Link scripts needed to run SCM analysis - logging.info('Linking analysis scripts from {0} into run directory'.format(os.path.join(SCM_ROOT, SCM_ANALYSIS_SCRIPT_DIR))) + logging.debug('Linking analysis scripts from {0} into run directory'.format(os.path.join(SCM_ROOT, SCM_ANALYSIS_SCRIPT_DIR))) analysis_script_files = ['scm_analysis.py','configspec.ini'] for entry in analysis_script_files: if os.path.isfile(os.path.join(SCM_ROOT, SCM_ANALYSIS_SCRIPT_DIR, entry)): @@ -690,7 +690,7 @@ def setup_rundir(self): execute(cmd) # Link plot configuration files needed to run SCM analysis - logging.info('Linking plot configuration files from {0} into run directory'.format(os.path.join(SCM_ROOT, SCM_ANALYSIS_CONFIG_DIR))) + logging.debug('Linking plot configuration files from {0} into run directory'.format(os.path.join(SCM_ROOT, SCM_ANALYSIS_CONFIG_DIR))) for entry in os.listdir(os.path.join(SCM_ROOT, SCM_ANALYSIS_CONFIG_DIR)): if os.path.isfile(os.path.join(SCM_ROOT, SCM_ANALYSIS_CONFIG_DIR, entry)): if not os.path.exists(entry): @@ -699,23 +699,23 @@ def setup_rundir(self): execute(cmd) # Create output directory (delete existing directory) - logging.info('Creating output directory {0} in run directory'.format(output_dir)) + logging.debug('Creating output directory {0} in run directory'.format(output_dir)) if os.path.isdir(os.path.join(SCM_RUN, 
output_dir)): shutil.rmtree(os.path.join(SCM_RUN, output_dir)) os.makedirs(os.path.join(SCM_RUN, output_dir)) # Write experiment configuration file to output directory - logging.info('Writing experiment configuration {0}.nml to output directory'.format(self._name)) + logging.debug('Writing experiment configuration {0}.nml to output directory'.format(self._name)) cmd = 'cp {0} {1}'.format(os.path.join(SCM_RUN, STANDARD_EXPERIMENT_NAMELIST), os.path.join(SCM_RUN, output_dir,self._name + '.nml')) execute(cmd) # Move executable to run dir if COPY_EXECUTABLE: - logging.info('Copying executable to run directory') + logging.debug('Copying executable to run directory') cmd = 'cp {0} {1}'.format(os.path.join(SCM_ROOT, SCM_BIN, EXECUTABLE_NAME), os.path.join(SCM_RUN, EXECUTABLE_NAME)) execute(cmd) else: - logging.info('Linking executable to run directory') + logging.debug('Linking executable to run directory') cmd = 'ln -sf {0} {1}'.format(os.path.join(SCM_ROOT, SCM_BIN, EXECUTABLE_NAME), os.path.join(SCM_RUN, EXECUTABLE_NAME)) execute(cmd) diff --git a/tutorial_files/add_new_scheme/twpice_scm5_tutorial.ini b/tutorial_files/add_new_scheme/twpice_scm_tutorial.ini similarity index 100% rename from tutorial_files/add_new_scheme/twpice_scm5_tutorial.ini rename to tutorial_files/add_new_scheme/twpice_scm_tutorial.ini diff --git a/tutorial_files/add_new_variable/twpice_scm5_tutorial.ini b/tutorial_files/add_new_variable/twpice_scm_tutorial.ini similarity index 100% rename from tutorial_files/add_new_variable/twpice_scm5_tutorial.ini rename to tutorial_files/add_new_variable/twpice_scm_tutorial.ini diff --git a/tutorial_files/create_new_suite/twpice_scm5_tutorial.ini b/tutorial_files/create_new_suite/twpice_scm_tutorial.ini similarity index 100% rename from tutorial_files/create_new_suite/twpice_scm5_tutorial.ini rename to tutorial_files/create_new_suite/twpice_scm_tutorial.ini diff --git a/tutorial_files/scm_read_obs.py b/tutorial_files/scm_read_obs.py new file mode 100644 index 000000000..6e41d2b34 --- /dev/null +++ b/tutorial_files/scm_read_obs.py @@ -0,0 +1,340 @@ +#!/usr/bin/env python + +from netCDF4 import Dataset +import datetime +import numpy as np +import math +import forcing_file_common as ffc + +def read_twpice_obs(obs_file, time_slices, date): + obs_time_slice_indices = [] + + obs_fid = Dataset(obs_file, 'r') + + obs_year = obs_fid.variables['year'][:] + obs_month = obs_fid.variables['month'][:] + obs_day = obs_fid.variables['day'][:] + obs_hour = obs_fid.variables['hour'][:] + obs_time = obs_fid.variables['time_offset'][:] + + obs_date = [] + for i in range(obs_hour.size): + obs_date.append(datetime.datetime(obs_year[i], obs_month[i], obs_day[i], obs_hour[i], 0, 0, 0)) + obs_date = np.array(obs_date) + + for time_slice in time_slices: + start_date = datetime.datetime(time_slices[time_slice]['start'][0], time_slices[time_slice]['start'][1],time_slices[time_slice]['start'][2], time_slices[time_slice]['start'][3], time_slices[time_slice]['start'][4]) + end_date = datetime.datetime(time_slices[time_slice]['end'][0], time_slices[time_slice]['end'][1],time_slices[time_slice]['end'][2], time_slices[time_slice]['end'][3], time_slices[time_slice]['end'][4]) + start_date_index = np.where(obs_date == start_date)[0][0] + end_date_index = np.where(obs_date == end_date)[0][0] + obs_time_slice_indices.append([start_date_index, end_date_index]) + + #find the index corresponding to the start of the simulations + obs_start_index = np.where(obs_date == date[0][0])[0] + obs_time = obs_time - 
obs_time[obs_start_index] + + obs_pres_l = obs_fid.variables['lev'][:]*100.0 #pressure levels in mb + + obs_cld = obs_fid.variables['cld'][:]/100.0 + obs_T = obs_fid.variables['T'][:] + obs_q = obs_fid.variables['q'][:]/1000.0 + obs_u = obs_fid.variables['u'][:] + obs_v = obs_fid.variables['v'][:] + obs_precip = obs_fid.variables['prec_srf'][:]/3.6E7 #convert from mm/hr to m/s + obs_shf = obs_fid.variables['SH'][:] + obs_lhf = obs_fid.variables['LH'][:] + obs_pwat = obs_fid.variables['PW'][:]*10.0 #convert from cm to kg/m2 + obs_lw_net_toa = obs_fid.variables['lw_net_toa'][:] + obs_rad_net_srf = obs_fid.variables['rad_net_srf'][:] + obs_sw_dn_toa = obs_fid.variables['sw_dn_toa'][:] + obs_sw_dn_srf = obs_fid.variables['sw_dn_srf'][:] + obs_lw_dn_srf = obs_fid.variables['lw_dn_srf'][:] + obs_lwp = obs_fid.variables['LWP'][:]*10.0 #convert from cm to kg/m2 + #obs_T_forcing = obs_fid.variables['dTdt'][:]*24.0 #convert from K/hour to K/day + #obs_q_forcing = obs_fid.variables['dqdt'][:]*24.0 #convert from g/kg/hour to g/kg/day + obs_h_advec_T = obs_fid.variables['T_adv_h'][:]*24.0 + obs_h_advec_q = obs_fid.variables['q_adv_h'][:]*24.0 + obs_v_advec_T = obs_fid.variables['T_adv_v'][:]*24.0 + obs_v_advec_q = obs_fid.variables['q_adv_v'][:]*24.0 + + obs_T_forcing = obs_h_advec_T + obs_v_advec_T + obs_q_forcing = obs_h_advec_q + obs_v_advec_q + + obs_time_h = obs_time/3600.0 + + Rd = 287.0 + Rv = 461.0 + + e_s = 6.1078*np.exp(17.2693882*(obs_T - 273.16)/(obs_T - 35.86))*100.0 #Tetens formula produces e_s in mb (convert to Pa) + e = obs_q*obs_pres_l/(obs_q + (Rd/Rv)*(1.0 - obs_q)) #compute vapor pressure from specific humidity + obs_rh = np.clip(e/e_s, 0.0, 1.0) + + obs_rh_500 = np.zeros(obs_rh.shape[0]) + index_500 = np.where(obs_pres_l[:]*0.01 < 500.0)[0][0] + lifrac = (obs_pres_l[index_500-1] - 50000.0)/(obs_pres_l[index_500-1] - obs_pres_l[index_500]) + for j in range(obs_rh.shape[0]): #loop over times + obs_rh_500[j] = obs_rh[j,index_500-1] + lifrac*(obs_rh[j,index_500] - obs_rh[j,index_500-1]) + #print index_500, pres_l[-1][j,index_500,k], pres_l[-1][j,index_500-1,k], rh_500_kj, rh[-1][j,index_500,k], rh[-1][j,index_500-1,k] + + return_dict = {'year': obs_year, 'month': obs_month, 'day': obs_day, 'hour': obs_hour, + 'time': obs_time, 'date': obs_date, 'time_slice_indices': obs_time_slice_indices, + 'pres_l': obs_pres_l, 'cld': obs_cld, 'T': obs_T, 'q': obs_q, 'u': obs_u, 'v': obs_v, + 'pwat': obs_pwat, 'time_h': obs_time_h, + 'tprcp_rate_accum': obs_precip, 'qv': obs_q, 'rh': obs_rh, 'rh_500': obs_rh_500, + 'lw_up_TOA_tot': obs_lw_net_toa, 'rad_net_srf': obs_rad_net_srf, 'sw_dn_TOA_tot': obs_sw_dn_toa, + 'lw_dn_sfc_tot': obs_lw_dn_srf, 'sw_dn_sfc_tot': obs_sw_dn_srf, 'lwp': obs_lwp, + 'T_force_tend': obs_T_forcing, 'qv_force_tend': obs_q_forcing} + + obs_fid.close() + + return return_dict + +def read_arm_sgp_summer_1997_obs(obs_file, time_slices, date): + obs_time_slice_indices = [] + + obs_fid = Dataset(obs_file, 'r') + + obs_year = obs_fid.variables['Year'][:] + obs_month = obs_fid.variables['Month'][:] + obs_day = obs_fid.variables['Day'][:] + #obs_hour = obs_fid.variables['hour'][:] + obs_time = obs_fid.variables['time_offset'][:] + + #this file doesn't have the hour variable - calculate from the time offset (seconds from 00Z on 6/18/1997) + obs_hour = (((obs_time - 3)/3600.0)%24).astype(int) + + obs_date = [] + for i in range(obs_hour.size): + obs_date.append(datetime.datetime(obs_year[i], obs_month[i], obs_day[i], obs_hour[i], 0, 0, 0)) + obs_date = np.array(obs_date) + + for time_slice 
in time_slices: + start_date = datetime.datetime(time_slices[time_slice]['start'][0], time_slices[time_slice]['start'][1],time_slices[time_slice]['start'][2], time_slices[time_slice]['start'][3], time_slices[time_slice]['start'][4]) + end_date = datetime.datetime(time_slices[time_slice]['end'][0], time_slices[time_slice]['end'][1],time_slices[time_slice]['end'][2], time_slices[time_slice]['end'][3], time_slices[time_slice]['end'][4]) + start_date_index = np.where(obs_date == start_date)[0][0] + end_date_index = np.where(obs_date == end_date)[0][0] + obs_time_slice_indices.append([start_date_index, end_date_index]) + #print start_date, end_date, start_date_index, end_date_index, obs_date[start_date_index], obs_date[end_date_index] + + #find the index corresponding to the start of the simulations + obs_start_index = np.where(obs_date == date[0][0])[0] + obs_time = obs_time - obs_time[obs_start_index] + + + obs_pres_l = np.flipud(obs_fid.variables['lev'][:])*100.0 #pressure levels in mb + + obs_cld = np.fliplr(obs_fid.variables['ARSCL_Cld'][:,:,0,0])/100.0 + obs_T = np.fliplr(obs_fid.variables['Temp'][:,:,0,0]) + obs_q = np.fliplr(obs_fid.variables['H2O_Mixing_Ratio'][:,:,0,0]/1000.0) + obs_u = np.fliplr(obs_fid.variables['u_wind'][:,:,0,0]) + obs_v = np.fliplr(obs_fid.variables['v_wind'][:,:,0,0]) + obs_precip = obs_fid.variables['Prec'][:,0,0] + # obs_shf = obs_fid.variables['SH'][:] + # obs_lhf = obs_fid.variables['LH'][:] + # obs_pwat = obs_fid.variables['PW'][:] + # obs_lw_net_toa = obs_fid.variables['lw_net_toa'][:] + # obs_rad_net_srf = obs_fid.variables['rad_net_srf'][:] + # obs_sw_dn_toa = obs_fid.variables['sw_dn_toa'][:] + # obs_sw_dn_srf = obs_fid.variables['sw_dn_srf'][:] + # obs_lw_dn_srf = obs_fid.variables['lw_dn_srf'][:] + # obs_lwp = obs_fid.variables['LWP'][:]*10.0 #convert from cm to kg/m2 + # #obs_T_forcing = obs_fid.variables['dTdt'][:]*24.0 #convert from K/hour to K/day + # #obs_q_forcing = obs_fid.variables['dqdt'][:]*24.0 #convert from g/kg/hour to g/kg/day + # obs_h_advec_T = obs_fid.variables['T_adv_h'][:]*24.0 + # obs_h_advec_q = obs_fid.variables['q_adv_h'][:]*24.0 + # obs_v_advec_T = obs_fid.variables['T_adv_v'][:]*24.0 + # obs_v_advec_q = obs_fid.variables['q_adv_v'][:]*24.0 + # + # obs_T_forcing = obs_h_advec_T + obs_v_advec_T + # obs_q_forcing = obs_h_advec_q + obs_v_advec_q + # + # obs_time_h = obs_time/3600.0 + # + # Rd = 287.0 + # Rv = 461.0 + # + # e_s = 6.1078*np.exp(17.2693882*(obs_T - 273.16)/(obs_T - 35.86))*100.0 #Tetens formula produces e_s in mb (convert to Pa) + # e = obs_q*obs_pres_l/(obs_q + (Rd/Rv)*(1.0 - obs_q)) #compute vapor pressure from specific humidity + # obs_rh = np.clip(e/e_s, 0.0, 1.0) + # + # obs_rh_500 = np.zeros(obs_rh.shape[0]) + # index_500 = np.where(obs_pres_l[:]*0.01 < 500.0)[0][0] + # lifrac = (obs_pres_l[index_500-1] - 50000.0)/(obs_pres_l[index_500-1] - obs_pres_l[index_500]) + # for j in range(obs_rh.shape[0]): #loop over times + # obs_rh_500[j] = obs_rh[j,index_500-1] + lifrac*(obs_rh[j,index_500] - obs_rh[j,index_500-1]) + # #print index_500, pres_l[-1][j,index_500,k], pres_l[-1][j,index_500-1,k], rh_500_kj, rh[-1][j,index_500,k], rh[-1][j,index_500-1,k] + + return_dict = {'year': obs_year, 'month': obs_month, 'day': obs_day, 'hour': obs_hour, + 'time': obs_time, 'date': obs_date, 'time_slice_indices': obs_time_slice_indices, + 'pres_l': obs_pres_l, 'cld': obs_cld, 'T': obs_T, 'qv': obs_q, 'u': obs_u, 'v': obs_v, + 'precip': obs_precip}#, 'shf': obs_shf, 'lhf': obs_lhf, 'pwat': obs_pwat, 'time_h': obs_time_h, + # 'rain': 
+    return_dict = {'year': obs_year, 'month': obs_month, 'day': obs_day, 'hour': obs_hour,
+                   'time': obs_time, 'date': obs_date, 'time_slice_indices': obs_time_slice_indices,
+                   'pres_l': obs_pres_l, 'cld': obs_cld, 'T': obs_T, 'qv': obs_q, 'u': obs_u, 'v': obs_v,
+                   'precip': obs_precip}#, 'shf': obs_shf, 'lhf': obs_lhf, 'pwat': obs_pwat, 'time_h': obs_time_h,
+                   # 'rain': obs_precip, 'rainc': obs_precip, 'qv': obs_q, 'rh': obs_rh, 'rh_500': obs_rh_500,
+                   # 'lw_up_TOA_tot': obs_lw_net_toa, 'rad_net_srf': obs_rad_net_srf, 'sw_dn_TOA_tot': obs_sw_dn_toa,
+                   # 'lw_dn_sfc_tot': obs_lw_dn_srf, 'sw_dn_sfc_tot': obs_sw_dn_srf, 'lwp': obs_lwp,
+                   # 'T_force_tend': obs_T_forcing, 'qv_force_tend': obs_q_forcing}
+
+    # return_dict = {'year': obs_year, 'month': obs_month, 'day': obs_day, 'hour': obs_hour,
+    #                'time': obs_time, 'date': obs_date, 'time_slice_indices': obs_time_slice_indices,
+    #                'pres_l': obs_pres_l, 'cld': obs_cld, 'T': obs_T, 'q': obs_q, 'u': obs_u, 'v': obs_v,
+    #                'precip': obs_precip, 'shf': obs_shf, 'lhf': obs_lhf, 'pwat': obs_pwat, 'time_h': obs_time_h,
+    #                'rain': obs_precip, 'rainc': obs_precip, 'qv': obs_q, 'rh': obs_rh, 'rh_500': obs_rh_500,
+    #                'lw_up_TOA_tot': obs_lw_net_toa, 'rad_net_srf': obs_rad_net_srf, 'sw_dn_TOA_tot': obs_sw_dn_toa,
+    #                'lw_dn_sfc_tot': obs_lw_dn_srf, 'sw_dn_sfc_tot': obs_sw_dn_srf, 'lwp': obs_lwp,
+    #                'T_force_tend': obs_T_forcing, 'qv_force_tend': obs_q_forcing}
+
+    obs_fid.close()
+
+    return return_dict
+
+def read_LASSO_obs(obs_file, time_slices, date):
+    obs_time_slice_indices = []
+
+    obs_fid = Dataset(obs_file, 'r')
+    obs_fid.set_auto_mask(False)
+
+    obs_time = obs_fid.variables['time_offset'][:]
+    obs_datetime_string = obs_fid.getncattr('output_start_datetime')
+
+    #get the initial date from a global file attribute
+    obs_init_datetime = datetime.datetime.strptime(obs_datetime_string, '%Y%m%d.%H%M%S %Z')
+
+    obs_date = []
+    for i in range(obs_time.size):
+        obs_date.append(obs_init_datetime + datetime.timedelta(seconds = obs_time[i]))
+    obs_date = np.array(obs_date)
+
+    for time_slice in time_slices:
+        start_date = datetime.datetime(time_slices[time_slice]['start'][0], time_slices[time_slice]['start'][1], time_slices[time_slice]['start'][2], time_slices[time_slice]['start'][3], time_slices[time_slice]['start'][4])
+        end_date = datetime.datetime(time_slices[time_slice]['end'][0], time_slices[time_slice]['end'][1], time_slices[time_slice]['end'][2], time_slices[time_slice]['end'][3], time_slices[time_slice]['end'][4])
+        start_date_index = np.where(obs_date == start_date)[0][0]
+        end_date_index = np.where(obs_date == end_date)[0][0]
+        obs_time_slice_indices.append([start_date_index, end_date_index])
+        #print start_date, end_date, start_date_index, end_date_index, obs_date[start_date_index], obs_date[end_date_index]
+
+    #find the index corresponding to the start of the simulations
+    obs_start_index = np.where(obs_date == date[0][0])[0]
+    obs_time = obs_time - obs_time[obs_start_index]
+
+    #pressure stored in kPa (pressure is 0 for the initial time); convert to Pa
+    obs_pres = obs_fid.variables['bar_pres'][1:,:]*1.0E3
+
+    #average the pressure levels over time
+    obs_pres_l = np.mean(obs_pres[:,:], axis=0)
+
+    obs_theta = obs_fid.variables['potential_temp'][1:,:]
+    #convert potential temperature to temperature via the Exner function
+    obs_T = (obs_pres/ffc.p0)**(ffc.R_dry/ffc.c_p)*obs_theta
+
+    #convert water vapor mixing ratio from g/kg to kg/kg
+    obs_q = obs_fid.variables['water_vapor_mixing_ratio'][1:,:]*1.0E-3
+    obs_cld = obs_fid.variables['cloud_fraction'][1:,:]
+
+    return_dict = {'time': obs_time, 'date': obs_date, 'time_slice_indices': obs_time_slice_indices, 'pres_l': obs_pres_l,
+                   'T': obs_T, 'qv': obs_q, 'cld': obs_cld}
+
+    obs_fid.close()
+
+    return return_dict
+
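[Editor's note, not part of the patch: read_LASSO_obs above converts the stored potential temperature to temperature with the Exner function, T = theta*(p/p0)**(R_dry/c_p). A minimal standalone sketch of that step follows; the constant values are assumptions chosen to mirror what the ffc helper module appears to provide, since the patch itself only references ffc.p0, ffc.R_dry, and ffc.c_p.]

import numpy as np

# assumed values; the patch reads these from the ffc helper module instead
p0 = 1.0E5     # reference pressure (Pa)
R_dry = 287.0  # gas constant for dry air (J kg-1 K-1)
c_p = 1004.0   # specific heat of dry air at constant pressure (J kg-1 K-1)

def theta_to_temperature(theta, pres):
    # Exner-function conversion: T = theta*(p/p0)**(R/cp)
    return np.asarray(theta)*(np.asarray(pres)/p0)**(R_dry/c_p)

# example: theta = 300 K at 850 hPa yields roughly 286 K
print(theta_to_temperature(300.0, 8.5E4))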
+def read_gabls3_obs(obs_file, time_slices, date):
+
+    #physical constants used by this case
+    con_g = 9.81
+    con_rd = 287.0
+    con_cp = 1004.0
+    con_vir = 0.61
+    p0 = 100000.0
+
+    obs_time_slice_indices = []
+
+    obs_fid = Dataset(obs_file, 'r')
+    obs_fid.set_auto_mask(False)
+
+    obs_time_hours_elapsed = obs_fid.variables['time'][:]
+    obs_date_ints = obs_fid.variables['date'][:]
+    n_times = len(obs_time_hours_elapsed)
+
+    #hour-of-day for each elapsed-hours value
+    obs_time_hours = np.mod(obs_time_hours_elapsed, 24.0*np.ones(n_times))
+
+    obs_date = []
+    for i in range(obs_time_hours.size):
+        obs_year = int(str(obs_date_ints[i])[0:4])
+        obs_month = int(str(obs_date_ints[i])[4:6])
+        obs_day = int(str(obs_date_ints[i])[6:])
+        obs_hour = int(math.floor(obs_time_hours[i]))
+        obs_minutes = int((obs_time_hours[i] - obs_hour)*60.0)
+        obs_seconds = int(((obs_time_hours[i] - obs_hour) - obs_minutes/60.0)*3600.0)
+        #disregard milliseconds
+        obs_date.append(datetime.datetime(obs_year, obs_month, obs_day, obs_hour, obs_minutes, obs_seconds, 0))
+    obs_date = np.array(obs_date)
+
+    for time_slice in time_slices:
+        start_date = datetime.datetime(time_slices[time_slice]['start'][0], time_slices[time_slice]['start'][1], time_slices[time_slice]['start'][2], time_slices[time_slice]['start'][3], time_slices[time_slice]['start'][4])
+        end_date = datetime.datetime(time_slices[time_slice]['end'][0], time_slices[time_slice]['end'][1], time_slices[time_slice]['end'][2], time_slices[time_slice]['end'][3], time_slices[time_slice]['end'][4])
+        start_date_index = np.where(obs_date == start_date)[0][0]
+        try:
+            end_date_index = np.where(obs_date == end_date)[0][0]
+        except IndexError:
+            #requested end date is beyond the file; use the last available time
+            end_date_index = len(obs_date) - 1
+        obs_time_slice_indices.append([start_date_index, end_date_index])
+
+    obs_start_index = np.where(obs_date == date[0][0])[0]
+    obs_time = 3600.0*(obs_time_hours_elapsed - obs_time_hours_elapsed[obs_start_index])
+
+    obs_zt = np.fliplr(obs_fid.variables['zt'][:])
+    obs_zf = np.fliplr(obs_fid.variables['zf'][:])
+    obs_t = np.fliplr(obs_fid.variables['t'][:])
+    obs_t_fill_value = obs_fid.variables['t']._FillValue
+    obs_q = np.fliplr(obs_fid.variables['q'][:])
+    obs_q_fill_value = obs_fid.variables['q']._FillValue
+    obs_th = np.fliplr(obs_fid.variables['th'][:])
+
+    obs_hpbl = obs_fid.variables['hpbl'][:]
+    obs_tsk = obs_fid.variables['tsk'][:]
+    obs_shf = obs_fid.variables['shf'][:]
+    obs_lhf = obs_fid.variables['lhf'][:]
+    obs_lw_up = obs_fid.variables['lup'][:]
+    obs_lw_dn = obs_fid.variables['ldw'][:]
+    obs_sw_up = obs_fid.variables['qup'][:]
+    obs_sw_dn = obs_fid.variables['qdw'][:]
+    obs_gflux = obs_fid.variables['g'][:]
+    obs_t2m = obs_fid.variables['t2m'][:]
+    obs_q2m = obs_fid.variables['q2m'][:]
+    obs_ustar = obs_fid.variables['ustar'][:]
+    obs_u10m = obs_fid.variables['u10m'][:]
+    obs_v10m = obs_fid.variables['v10m'][:]
+
+    obs_fid.close()
+
+    p_surf = 102440.0 #from the case specification
+
+    #find where T has valid values
+    good_t_indices = np.where(obs_t[0,:] != obs_t_fill_value)[0]
+
+    #assume missing values always appear at the same vertical indices
+    good_t = obs_t[:,good_t_indices]
+    good_zf = obs_zf[:,good_t_indices]
+
+    #good_q_indices = np.where(obs_q[0,:] != obs_q_fill_value)[0]
+    good_q = obs_q[:,good_t_indices]
+    good_th = obs_th[:,good_t_indices]
+
+    #p_good_lev = np.zeros(len(good_t_indices))
+    #alternative: calculate p from the hydrostatic equation, surface pressure, T, and q
+    #p_good_lev[0] = p_surf*np.exp(-con_g/(con_rd*good_t[0]*(1.0 + con_vir*good_q[0]))*good_zf[0])
+    #for k in range(1,len(good_t_indices)):
+    #    p_good_lev[k] = p_good_lev[k-1]*np.exp((-con_g/(con_rd*0.5*(good_t[k-1]+good_t[k])*(1.0 + con_vir*0.5*(good_q[k-1]+good_q[k])))*(good_zf[k]-good_zf[k-1])))
+
+    #calculate p from temperature and potential temperature
+    p_good_lev_from_th = p0/(good_th[0,:]/good_t[0,:])**(con_cp/con_rd)
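[Editor's note, not part of the patch: the line above inverts the Poisson relation theta = T*(p0/p)**(con_rd/con_cp), giving p = p0*(T/theta)**(con_cp/con_rd), so pressure on the valid levels is recovered from the first time sample of temperature and potential temperature alone. A minimal sketch using the same constants:]

import numpy as np

con_rd = 287.0   # gas constant for dry air (J kg-1 K-1)
con_cp = 1004.0  # specific heat at constant pressure (J kg-1 K-1)
p0 = 1.0E5       # reference pressure (Pa)

def pressure_from_theta(T, theta):
    # invert theta = T*(p0/p)**(Rd/cp) for p
    return p0*(np.asarray(T)/np.asarray(theta))**(con_cp/con_rd)

# example: T = 286.4 K with theta = 300 K recovers roughly 850 hPa (~85000 Pa)
print(pressure_from_theta(286.4, 300.0))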
+
+    obs_sfc_rad_net = (obs_sw_dn - obs_sw_up) + (obs_lw_dn - obs_lw_up)
+
+    return_dict = {'time': obs_time, 'date': obs_date, 'time_slice_indices': obs_time_slice_indices, 'pres_l': p_good_lev_from_th,
+                   'T': good_t, 'qv': good_q, 'shf': obs_shf, 'lhf': obs_lhf, 'time_h': obs_time/3600.0, 'sfc_up_lw_land': obs_lw_up, 'sfc_dwn_lw': obs_lw_dn,
+                   'sfc_dwn_sw': obs_sw_dn, 'sfc_up_sw': obs_sw_up, 'sfc_rad_net_land': obs_sfc_rad_net, 'gflux': -1*obs_gflux, 't2m': obs_t2m, 'q2m': obs_q2m,
+                   'ustar': obs_ustar, 'u10m': obs_u10m, 'v10m': obs_v10m, 'hpbl': obs_hpbl, 'tsfc': obs_tsk}
+
+    return return_dict
\ No newline at end of file
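[Editor's note, not part of the patch: the commented-out relative-humidity diagnostic in the first reader of this patch uses the Tetens formula for saturation vapor pressure and derives vapor pressure from specific humidity. A minimal standalone sketch of that calculation, with constants copied from the commented code, is below; treat it as an illustration rather than as the patch's implementation.]

import numpy as np

Rd = 287.0  # gas constant for dry air (J kg-1 K-1)
Rv = 461.0  # gas constant for water vapor (J kg-1 K-1)

def relative_humidity(T, q, p):
    # saturation vapor pressure via the Tetens formula (mb, converted to Pa)
    e_s = 6.1078*np.exp(17.2693882*(T - 273.16)/(T - 35.86))*100.0
    # vapor pressure from specific humidity q (kg/kg) and pressure p (Pa)
    e = q*p/(q + (Rd/Rv)*(1.0 - q))
    return np.clip(e/e_s, 0.0, 1.0)

# example: T = 288 K, q = 8 g/kg, p = 1000 hPa gives RH of roughly 0.76
print(relative_humidity(288.0, 0.008, 1.0E5))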