Code used in Nugent and Bretherton (2023), Geophysical Research Letters (doi: 10.1029/2023GL105083), on the observed distribution of convection that overshoots the cold point.
The data used in this paper are not included here and must be downloaded and processed following the steps listed below. The code used for the analysis and figures is primarily in Jupyter notebooks. Note that the scripts are not designed to be downloaded and run as-is; they must first be edited as specified below.
The South Pacific Convergence Zone (SPC) region crosses the International Date Line. For much of the data processing and analysis, this region is split into two halves to avoid any issues with the +/-180° longitude. In the code, “SPC1” refers to the eastern half (165°-180° E) and “SPC2” refers to the western half (180°-145° W), while “SPC” refers to the entire region (i.e., SPC1 and SPC2 concatenated by longitude).
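For illustration, here is a minimal sketch of how the two halves can be rejoined with xarray; the file names and the `lon` coordinate name are assumptions, not the repo's actual conventions:

```python
# Minimal sketch: rejoin the two SPC halves along longitude.
# File names and the "lon" coordinate name are assumptions for illustration.
import xarray as xr

spc1 = xr.open_dataset("era5_DJF2009_SPC1.nc")  # 165E-180 half
spc2 = xr.open_dataset("era5_DJF2009_SPC2.nc")  # 180-145W half

# Put both halves on a 0-360 longitude convention so they line up
# monotonically across the date line (e.g., -145 becomes 215).
spc1 = spc1.assign_coords(lon=spc1.lon % 360)
spc2 = spc2.assign_coords(lon=spc2.lon % 360)

# Concatenate by longitude to recover the full SPC region.
spc = xr.concat([spc1, spc2], dim="lon").sortby("lon")
```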
GPM_MERGIR brightness temperature: downloads one season/year/region at a time. You must first register for a NASA Earthdata account and configure wget for GES DISC downloads (see here for instructions). A Python sketch of the download-and-concatenate loop follows this list.
- For each season/year/region, download the list of file links from NASA GES DISC. Rename these files with convention “subset_MMMYYYY_RRR.txt” (e.g., “subset_DJF2009_AMZ.txt”).
- Edit get_mergir.sh to change the file paths and the season/year.
- Run get_mergir.sh to download each file via wget. This loops through all regions in one year/season and concatenates the files into a single .nc4 file for each region.
- Repeat for the next season/year.
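A rough Python sketch of what get_mergir.sh automates; the paths, directory layout, and region list are assumptions for illustration (the actual script uses wget in a shell loop):

```python
# Sketch of the get_mergir.sh loop: download every granule listed in each
# region's subset file, then concatenate into one .nc4 per region.
# Paths, directory layout, and the region list are assumptions.
import subprocess
import xarray as xr

season, year = "DJF", 2009
regions = ["AMZ", "SPC1", "SPC2"]  # example subset of the paper's regions

for region in regions:
    link_list = f"subset_{season}{year}_{region}.txt"
    outdir = f"mergir_{season}{year}_{region}"
    # Earthdata credentials must already be configured for wget (e.g., ~/.netrc).
    subprocess.run(["wget", "-i", link_list, "-P", outdir], check=True)

    # Concatenate the downloaded granules along time into a single file.
    ds = xr.open_mfdataset(f"{outdir}/*.nc4", combine="by_coords")
    ds.to_netcdf(f"mergir_{season}{year}_{region}.nc4")
```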
ERA5 model-level data: downloads one season/year at a time and then subsets into regions. You must first install the CDS API key/client (see here for instructions). Note that this procedure can take over 24 hours to download three months of data because the files must be retrieved from tape. A sketch of the retrieval request follows this list.
- Edit get_era5_ml.sh to change the file paths and the season/year.
- Run get_era5_ml.sh to download monthly files for the global tropics and split by variable. This runs the python script get_era5_climo_ml.py.
- Edit process_era5_ml.sh to change the file paths and specify the season/year.
- Run process_era5_ml.sh to subset the files into regions, compute the geopotential at each model level (runs compute_geopotential_on_ml.py, written by ECMWF), convert files into netcdfs, and concatenate the monthly files into one file for that season. 50 GB memory is recommended for this step.
- Repeat for the next season/year.
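For reference, a hedged sketch of a CDS API request for ERA5 model-level data, similar in spirit to what get_era5_climo_ml.py issues; the dates, parameters, grid, area, and output name here are illustrative, not the script's exact request:

```python
# Illustrative ERA5 model-level (levtype "ml") request via the CDS API.
# Requires ~/.cdsapirc with your CDS credentials. All choices below
# (dates, params, grid, area, output name) are example values.
import cdsapi

c = cdsapi.Client()
c.retrieve(
    "reanalysis-era5-complete",  # MARS-style access, hence the tape delays
    {
        "class": "ea",
        "date": "2009-12-01/to/2009-12-31",
        "expver": "1",
        "levelist": "1/to/137",    # all 137 model levels
        "levtype": "ml",
        "param": "130/133",        # GRIB IDs: temperature, specific humidity
        "stream": "oper",
        "time": "00/06/12/18",
        "type": "an",
        "grid": "0.25/0.25",
        "area": "30/-180/-30/180", # N/W/S/E box over the global tropics
    },
    "era5_ml_200912.grib",
)
```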
DARDAR v3 ice water content data: downloads one season/year at a time and then subsets into regions. You must first register with the AERIS/ICARE Data and Services Center to access the data archive.
- Edit get_dardar_v3.sh with your username/password, change the file paths, and specify the season and year to download.
- Run get_dardar_v3.sh to download the global files for the season/year via lftp.
- Edit process_dardar_v3.sh to change the file paths and specify the year/season.
- Run process_dardar_v3.sh to subset the global files into one file per region (a Python sketch of the subsetting follows this list). This runs the python script process_dardar_v3.py. 25 GB memory is recommended for this step.
- Delete the global files and repeat for the next season/year.
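A minimal sketch of the regional subsetting done in process_dardar_v3.py, assuming xarray-readable granules and `latitude`/`longitude` variable names; the granule name and latitude bounds shown are placeholders:

```python
# Sketch of subsetting a DARDAR granule to one region's lat/lon box.
# The granule name, variable names, and latitude bounds are assumptions.
import xarray as xr

ds = xr.open_dataset("DARDAR-CLOUD_v3_granule.nc")  # hypothetical granule

# Example box for the 165E-180 half of the SPC region (SPC1); the
# latitude limits here are placeholders, not the paper's exact bounds.
lat_min, lat_max = -20.0, 0.0
lon_min, lon_max = 165.0, 180.0

mask = (
    (ds.latitude >= lat_min) & (ds.latitude <= lat_max)
    & (ds.longitude >= lon_min) & (ds.longitude <= lon_max)
)
subset = ds.where(mask, drop=True)
subset.to_netcdf("dardar_DJF2009_SPC1.nc")
```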
Regridding and binning: regrids the GPM_MERGIR brightness temperature and ERA5 temperature/height data onto the DARDAR grid and bins the DARDAR ice water content (IWC) by brightness temperature at cold point-relative levels. All regions in one season/year are processed at a time. A sketch of the binning step follows this list.
- Edit regrid_bin_plot.sh to change the file paths and the season/year.
- Run regrid_bin_plot.sh. This runs the python script regrid_data_cp.py to regrid and then bin_obs_overshoot.py to do the binning. 50 GB memory is recommended for this step. This also saves “standard” versions of the plots for each year/season; note that these are NOT the same plots used in the paper.
- Repeat for the next season/year.
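As a sketch of the binning step in bin_obs_overshoot.py (array names, shapes, and bin edges are assumptions), IWC profiles shifted to cold point-relative levels are averaged within brightness-temperature bins:

```python
# Sketch: mean IWC in brightness-temperature bins at cold point-relative
# levels. Array names, shapes, and bin edges are illustrative.
import numpy as np

# iwc: (profile, level), already shifted so index 0 is the cold point level.
# tb:  (profile,) collocated GPM_MERGIR brightness temperature [K].
iwc = np.random.rand(1000, 21)           # placeholder data
tb = np.random.uniform(180, 300, 1000)   # placeholder data

tb_bins = np.arange(180, 305, 5)         # 5 K bins
idx = np.digitize(tb, tb_bins)

binned = np.full((len(tb_bins) - 1, iwc.shape[1]), np.nan)
for i in range(1, len(tb_bins)):
    in_bin = idx == i
    if in_bin.any():
        binned[i - 1] = np.nanmean(iwc[in_bin], axis=0)
```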
Joint histograms: calculates the joint brightness temperature-cold point temperature histograms for each season/year and saves a python dictionary with the histogram counts, bins, etc. to a pickle file. A sketch of the histogram computation follows this list.
- Edit get_Tb-cpT_hist.sh to change the file paths and specify the season.
- Run get_Tb-cpT_hist.sh. This runs the python script cold_point_reindex.py to regrid the cold point temperature file for SPC only (needed because of the date line issue) and biv_hist.py to calculate and save the histogram dictionary. 50 GB memory is recommended for this step.
- Repeat for the next season.
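A sketch of the joint histogram and pickle output produced by biv_hist.py; the bin edges, sample arrays, dictionary keys, and output name are assumptions:

```python
# Sketch: joint Tb vs. cold point temperature histogram, saved as a
# dictionary in a pickle file. Bin edges and key names are illustrative.
import pickle
import numpy as np

tb = np.random.uniform(180, 300, 10000)   # placeholder Tb sample [K]
cpt = np.random.uniform(185, 205, 10000)  # placeholder cold point temps [K]

tb_bins = np.arange(180, 301, 1.0)
cpt_bins = np.arange(185, 206, 0.5)
counts, _, _ = np.histogram2d(tb, cpt, bins=[tb_bins, cpt_bins])

hist = {"counts": counts, "tb_bins": tb_bins, "cpt_bins": cpt_bins}
with open("Tb-cpT_hist_DJF2009_SPC.p", "wb") as f:
    pickle.dump(hist, f)
```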
- Get the counts for the $T_b - T_{cp}$ bins in obs_climo_Tb-cpT_hists.ipynb.
- Bin by $T_b - T_{cp}$ and make the figures in obs_climo_paper_diffs_binned_plots.ipynb.
- The joint histogram counts are saved as pickle files in biv_hist.py.
- Calculate the conditional probabilities of overshoots and make the figures in obs_climo_cond_prob_joint_hists_plot.ipynb (see the sketch after this list).
- Calculate the frequencies of cold point overshoots and save as netcdf files in obs_climo_calc_os_freqs.ipynb.
- Make the figures and calculate the mean frequencies (including with alternate thresholds) in obs_climo_overshooting_heatmaps.ipynb.
- Make the time-mean cold point temperature and height files in get_obs_mean_cpT.ipynb.
- Make the figure in obs_climo_time_mean_cp_maps.ipynb.
- Calculate the cirrus fractions in obs_climo_cirrus_fracs.ipynb.
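To illustrate the conditional-probability step referenced above, one hedged reading (the overshoot criterion, bin edges, and placeholder counts are assumptions) is the fraction of counts in each cold point bin with $T_b$ colder than that bin's cold point temperature:

```python
# Sketch: P(overshoot | cold point bin) from the joint histogram, taking
# "overshoot" to mean Tb colder than the cold point temperature. The
# criterion, bin edges, and placeholder counts are assumptions.
import numpy as np

tb_bins = np.arange(180, 301, 1.0)
cpt_bins = np.arange(185, 206, 0.5)
rng = np.random.default_rng(0)
counts = rng.integers(1, 50, size=(len(tb_bins) - 1, len(cpt_bins) - 1))

tb_centers = 0.5 * (tb_bins[:-1] + tb_bins[1:])
cpt_centers = 0.5 * (cpt_bins[:-1] + cpt_bins[1:])

# True where a Tb bin is colder than a given cold point bin.
overshoot = tb_centers[:, None] < cpt_centers[None, :]
p_overshoot = (counts * overshoot).sum(axis=0) / counts.sum(axis=0)
```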