From 9c0d01f6f8344be8b808859be124717e30bd6c46 Mon Sep 17 00:00:00 2001 From: Tasha Snow Date: Fri, 5 Jul 2024 20:35:06 -0600 Subject: [PATCH] Delete book/external/ICESAT2_ATL10-h5coro_large_scale_time_series.ipynb --- ...ATL10-h5coro_large_scale_time_series.ipynb | 922 ------------------ 1 file changed, 922 deletions(-) delete mode 100644 book/external/ICESAT2_ATL10-h5coro_large_scale_time_series.ipynb diff --git a/book/external/ICESAT2_ATL10-h5coro_large_scale_time_series.ipynb b/book/external/ICESAT2_ATL10-h5coro_large_scale_time_series.ipynb deleted file mode 100644 index 0b47081..0000000 --- a/book/external/ICESAT2_ATL10-h5coro_large_scale_time_series.ipynb +++ /dev/null @@ -1,922 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "6b5274a8", - "metadata": {}, - "source": [ - "# ICESat-2 ATL10-h5coro large-scale time series\n", - "\n", - "imported on: **2023-10-03**\n", - "\n", - "

This notebook is from NSIDC's *Processing Large-scale Time Series of ICESat-2 Sea Ice Height in the Cloud* tutorial.

\n", - "\n", - "> The original source for this document is [https://github.com/nsidc/NSIDC-Data-Tutorials](https://github.com/nsidc/NSIDC-Data-Tutorials)" - ] - }, - { - "cell_type": "markdown", - "id": "e86eaecf-a612-4dbb-8bdc-5b5dfddf65b9", - "metadata": { - "user_expressions": [] - }, - "source": [ - "
\n", - "\n", - "\n", - "# **Processing Large-scale Time Series of ICESat-2 Sea Ice Height in the Cloud**\n", - "\n", - "
\n", - "\n", - "---" - ] - }, - { - "cell_type": "markdown", - "id": "bc15319c-5110-4aaa-8932-db8b4055a167", - "metadata": { - "tags": [], - "user_expressions": [] - }, - "source": [ - "## **1. Tutorial Overview**\n", - "\n", - "This tutorial is designed for the \"DAAC data access in the cloud hands-on experience\" session at the 2023 NSIDC DAAC [User Working Group (UWG)](https://nsidc.org/data/data-programs/nsidc-daac/about-daac#anchor-2) Meeting. \n", - "\n", - "The NSIDC DAAC archives and distributes Daily and Monthly Gridded [Sea Ice Freeboard (ATL20)](https://nsidc.org/data/atl20) and [Polar Sea Surface Height Anomaly (ATL21)](https://nsidc.org/data/atl21) data sets from the ICESat-2 Mission, derived from the lower level [ATL10](https://nsidc.org/data/atl10) data set. However, we may want these lower level point data to be gridded and averaged at a weekly cadence, or using a different projection or other gridding parameters. \n", - "\n", - "This tutorial session is in two parts: \n", - "* We will first guide you through this Jupyter Notebook running in the AWS `us-west-2` region, where data are hosted in the NASA Earthdata Cloud. The notebook utilizes several libraries to performantly search, access, read, and grid the data including `earthaccess`, `h5coro`, and `geopandas`.\n", - "\n", - "* This notebook will focus on the Ross Sea, Antarctica. But let’s say we want to scale this analysis to the entire continent. In the second portion, we will present how to scale and run this same workflow from a script (see [workflow.py](./h5cloud/workflow.py) in the `h5cloud` folder within this notebook's directory) that can be run from your laptop, using [Coiled](https://www.coiled.io/). \n", - "\n", - "### **Credits**\n", - "\n", - "The notebook was created by Andy Barrett and Luis Lopez of NSIDC.\n", - "\n", - "For questions regarding the notebook, or to report problems, please create a new issue in the [NSIDC-Data-Tutorials repo](https://github.com/nsidc/NSIDC-Data-Tutorials/issues).\n", - "\n", - "### **Learning Objectives**\n", - "\n", - "By the end of this demonstration you will be able to: \n", - "1. Use `earthaccess` to authenticate with Earthdata Login, search for ICESat-2 data using spatial and temporal filters, and directly access files in the cloud.\n", - "2. Open data granules using `h5coro` to efficiently read HDF5 data from the NSIDC DAAC S3 bucket.\n", - "3. Load data into a geopandas.DataFrame containing geodetic coordinates, ancillary variables, and date/time converted from ATLAS Epoch.\n", - "4. Grid track data to EASE-Grid v2 6.25 km projected grid using drop-in-the-bucket resampling. \n", - "5. Calculate mean statistics and assign aggregated data to grid cells. \n", - "5. Visualize aggregated sea ice height data on a map.\n", - "\n", - "### **Prerequisites**\n", - "\n", - "1. We are running this notebook in the [CryoCloud](https://book.cryointhecloud.com/intro.html) JupyterHub. For more information, see the CryoCloud [Getting Started](https://book.cryointhecloud.com/content/Getting_Started.html) documentation.\n", - "**It is advised that you use at least a 16GB instance for this notebook.** \n", - "2. An Earthdata Login is required for data access. If you don't have one, you can register for one [here](https://urs.earthdata.nasa.gov/).\n", - "3. It is recommended that you create a .netrc file that contains your Earthdata Login credentials, stored in your home directory. 
If you do not have a .netrc file, `earthaccess` will prompt you to enter your Earthdata Login username and password.\n", - "\n", - "### **Example of end product** \n", - "At the end of this tutorial, the following figure will be generated, demonstrating a year's worth of ATL10 Sea Ice Freeboard height data gridded over the Ross Sea, Antarctica:\n", - "
\n", - "\n", - "
\n", - "\n", - "### **Time requirement**\n", - "\n", - "Allow approximately 40 minutes to complete this tutorial." - ] - }, - { - "cell_type": "markdown", - "id": "53b77eb5-d5ed-4ddd-8fb1-6c69618d7852", - "metadata": { - "tags": [], - "user_expressions": [] - }, - "source": [ - "## **2. Tutorial steps**\n", - "\n", - "### Installing the latest version of earthaccess\n", - "\n", - "The CryoCloud environment currently does not have the latest `earthaccess` version installed, along with new features in `h5coro` that are not yet released, so we will first manually install those below:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "9700d441-441a-41fb-9ad8-7ea5eabec52b", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%%capture\n", - "# suppress install outputs\n", - "\n", - "!pip uninstall -y earthaccess h5coro\n", - "!pip install earthaccess==0.6.1\n", - "\n", - "# h5coro has new features that we need that are not released\n", - "!pip install git+https://github.com/ICESat2-SlideRule/h5coro.git@main" - ] - }, - { - "cell_type": "markdown", - "id": "e7bdaa85-ac4e-4172-9154-1d0992414cc1", - "metadata": { - "user_expressions": [] - }, - "source": [ - "**NOTE**: Restart the kernel and clean output after running the cell above." - ] - }, - { - "cell_type": "markdown", - "id": "7820a737-33f0-4470-b9a4-03c5c4f0354c", - "metadata": { - "user_expressions": [] - }, - "source": [ - "### **Import Packages**" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "59e79729-1b02-4ef5-aee1-8923690243da", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "# To force use of shapely\n", - "import os\n", - "os.environ['USE_PYGEOS'] = '0'\n", - "\n", - "# For searching NASA data\n", - "import earthaccess\n", - "\n", - "# For reading data, analysis and plotting\n", - "import numpy as np\n", - "import pandas as pd\n", - "import datetime as dt\n", - "\n", - "# For resampling\n", - "from affine import Affine\n", - "\n", - "# For plotting\n", - "import matplotlib.pyplot as plt\n", - "import cartopy.crs as ccrs\n", - "import cartopy.feature as cfeature\n", - "\n", - "from h5cloud.read_atl10 import read_atl10, get_data_links, get_credentials\n", - "\n", - "print(f\"earthaccess: {earthaccess.__version__}\")" - ] - }, - { - "cell_type": "markdown", - "id": "1966ffa6-a5f2-4520-a8dc-f37678a2cf7a", - "metadata": { - "user_expressions": [] - }, - "source": [ - "### Authenticate\n", - "\n", - "We need to authenticate and get AWS token" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "d47aa955-3d91-4418-85f9-5772f400f712", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "auth = earthaccess.login()" - ] - }, - { - "cell_type": "markdown", - "id": "da19f604-0288-4358-ab88-d18c986f7cc8", - "metadata": { - "user_expressions": [] - }, - "source": [ - "### **Search for ICESat-2 ATL10 data**\n", - "\n", - "We use `earthaccess` to search CMR for granules in the region of interest for the time period of interest. \n", - "\n", - "The region is set by name below. Currently, we have two options: the Ross Sea, and the Southern Ocean and adjoining seas.\n", - "\n", - "The range of dates is set by assigning a start year and end year to `year_begin` and `year_end`. Setting `year_begin` and `year_end` to the same year retreives data for one year." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "d2069528-d382-45ab-a865-a41593bc47a8", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "# To avoid copying and pasting region tuples\n", - "region = \"Ross Sea\"  # Set to \"Ross Sea\" for just the Ross Sea, or \"Antarctica\" for the Southern Ocean\n", - "ross_sea = (-180, -78, -160, -74)\n", - "antarctic = (-180, -90, 180, -60)\n", - "this_region = antarctic if region == \"Antarctica\" else ross_sea\n", - "\n", - "year_begin = 2019\n", - "year_end = 2019\n", - "month = 9" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "61875e91-c8b4-4beb-9535-c3391d1fcc06", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "atl10 = {}\n", - "total_results = 0\n", - "approx_size = 0\n", - "\n", - "for year in range(year_begin, year_end+1):\n", - " \n", - " date_beg = dt.datetime(year, month, 1).strftime(\"%Y-%m-%d\")\n", - " date_end = dt.datetime(year, month, 30).strftime(\"%Y-%m-%d\")  # assumes a 30-day month (September)\n", - " \n", - " print(f\"Searching period {date_beg} to {date_end} ...\")\n", - " granules = earthaccess.search_data(\n", - " short_name = 'ATL10',\n", - " version = '006',\n", - " cloud_hosted = True,\n", - " bounding_box = this_region,\n", - " temporal = (date_beg, date_end)\n", - " )\n", - " total_results += len(granules)\n", - " approx_size += sum([g.size() for g in granules])\n", - " atl10[str(year)] = granules\n", - "print(f\"Total retrieved: {total_results}, approx size: {round(approx_size, 2)} MB\")" - ] - }, - { - "cell_type": "markdown", - "id": "b6887297-2b02-4e76-9bc5-5e804c54c5d7", - "metadata": { - "user_expressions": [] - }, - "source": [ - "### Access the Granules\n", - "\n", - "Because the CryoCloud hub is running on servers in AWS region `us-west-2`, which is the same region as the NASA Earthdata Cloud, granules can be accessed directly without having to download the files first. This is analogous to how you would work with files on your local filesystem. However, _under the hood_ there are differences.\n", - "\n", - "Initially, we load data for each year into a `geopandas.DataFrame`. `geopandas` is an extension of the `pandas` package. `pandas` is designed to work with tabular data - _think data you might put into a spreadsheet_. `geopandas` extends `pandas` to work with geospatial data by adding geometries (points, lines and polygons) and a coordinate reference system (CRS), so that the data in each row are associated with a geospatial feature located on Earth. ICESat-2 track data are well suited to the DataFrame data model because the data relate to points or segments. Once data are in a `geopandas.DataFrame`, they can be reprojected and queried using methods you may already know from working in a GIS." - ] - }, - { - "cell_type": "markdown", - "id": "80340da9-1b3b-458d-9d94-9492657f94bf", - "metadata": { - "tags": [], - "user_expressions": [] - }, - "source": [ - "#### Read data into `geopandas.DataFrame`\n", - "\n", - "The first step is to read the data and put it into a DataFrame. We use `h5coro`, a package developed by the SlideRule project to efficiently read HDF5 files in the cloud. Recall from the Cloud Optimized Format presentation that the HDF5 format, and the HDF5 library for reading and writing those files, are not well suited to accessing data in the cloud. `h5coro` was developed to solve some of the problems related to the HDF5 format and tools. 
Using `h5coro` with `dask`, a Python package for parallel processing on multicore local machines and distributed clusters in the cloud, reading data from ATL10 files is 5x faster than using the `h5py` package, an HDF5 reader that uses the HDF5 library.\n", - "\n", - "The code to read the data is long, so we have created the `read_atl10` function and put it in a module. The function is imported into this notebook. If you are interested, take a look at `read_atl10` in [`read_atl10.py`](./h5cloud/read_atl10.py). The main features of the function are briefly described here.\n", - "\n", - "We follow the processing steps for ATL20 to generate our freeboard grids. For each grid cell that contains one or more freeboard segments, a grid cell mean freeboard is calculated as a mean of `gtx/freeboard_segment/beam_fb_height` from ATL10, weighted by the segment length `gtx/freeboard_segment/heights/height_segment_length_seg`. To resample segments to grid cells, we also need the geodetic coordinates for each segment in `gtx/freeboard_segment/latitude` and `gtx/freeboard_segment/longitude`. As an additional locator, we also read `gtx/freeboard_segment/delta_time`. `gtx` is the beam identifier.\n", - "\n", - "In addition to the segment data, we also need some ancillary data from each file. In ATL20, gridded freeboards are calculated using only the _strong beams_ of each beam pair. Which of the six beams are strong and which are weak depends on the orientation of the ICESat-2 satellite. Satellite orientation is given in the `orbit_info/sc_orient` dataset. We also need to read the ATLAS Standard Data Product Epoch that is stored in `ancillary_data/atlas_sdp_gps_epoch` to convert `delta_time` from seconds since the ATLAS SDP epoch to date and time.\n", - "\n", - "```{note}\n", - "There are three beam pairs numbered 1, 2 and 3. Each of these beam pairs has a left and a right beam. Beams are named `gt1l` and `gt1r`, `gt2l` and `gt2r`, and `gt3l` and `gt3r`. Depending on the orientation of the ICESat-2 satellite, either the left beams or the right beams are the _strong beams_. The orientation can be _forward_, _backward_, or _transition_. We only use data in forward or backward orientations.\n", - "```\n", - "\n", - "The datasets containing segment data are stored in the `DATASETS` constant, which is a Python `list`, in `reader.py`. If you want additional or different datasets, you can modify this list. See [NSIDC DAAC's ATL10 User Guide](https://nsidc.org/sites/default/files/documents/user-guide/atl10-v006-userguide.pdf) and [ATL10 Data Dictionary](https://nsidc.org/sites/default/files/documents/technical-reference/icesat2_atl10_data_dict_v006.pdf) for detailed descriptions. \n", - "\n", - "An ATL10 file is read using the function `read_atl10`. This function encapsulates opening an HDF5 file and reading the datasets using `h5coro`, and then creating a `geopandas.DataFrame` containing the data. We parallelize the reading of all files in a year using `pqdm`, so files are read on different processors. Files for a given year are then concatenated into a single dataframe."
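To make the description above concrete, here is a minimal sketch of the strong-beam selection and time conversion, assuming the datasets have already been read into arrays (with `h5coro` or `h5py`); the names `STRONG_BEAMS`, `to_datetime`, and `segments_to_gdf` are illustrative, not part of the tutorial's `h5cloud` module:

```python
# A simplified sketch of the logic described above -- not the actual
# read_atl10 implementation (see h5cloud/read_atl10.py for that).
import geopandas as gpd
import pandas as pd

# Strong beams depend on spacecraft orientation (orbit_info/sc_orient):
# 0 = backward -> left beams are strong; 1 = forward -> right beams are strong.
STRONG_BEAMS = {0: ["gt1l", "gt2l", "gt3l"], 1: ["gt1r", "gt2r", "gt3r"]}

GPS_EPOCH = pd.Timestamp("1980-01-06")  # epoch of the GPS time scale

def to_datetime(delta_time, atlas_sdp_gps_epoch):
    """Convert delta_time (seconds since the ATLAS SDP epoch) to timestamps.

    atlas_sdp_gps_epoch is the offset in seconds between the GPS epoch and
    the ATLAS SDP epoch (read from ancillary_data/atlas_sdp_gps_epoch).
    GPS-UTC leap seconds are ignored here for simplicity.
    """
    return GPS_EPOCH + pd.to_timedelta(atlas_sdp_gps_epoch + delta_time, unit="s")

def segments_to_gdf(lat, lon, height, length, delta_time, atlas_sdp_gps_epoch):
    """Build a GeoDataFrame of freeboard segments for one strong beam."""
    return gpd.GeoDataFrame(
        {
            "beam_fb_height": height,
            "height_segment_length_seg": length,
            "datetime": to_datetime(delta_time, atlas_sdp_gps_epoch),
        },
        geometry=gpd.points_from_xy(lon, lat),  # geodetic coordinates
        crs="EPSG:4326",
    )
```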
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "5fc14401-66a6-44ba-b8f2-421f45e50c29", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%%time\n", - "\n", - "files = get_data_links(atl10[\"2019\"], environment=\"cloud\")\n", - "cred = get_credentials(environment=\"cloud\")\n", - "tracks = read_atl10(files, executors=4, environment=\"cloud\", credentials=cred)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "a4f577fc-7c75-456e-b7bb-2a4f45ab77c5", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "tracks" - ] - }, - { - "cell_type": "markdown", - "id": "e63674b7-c92a-4bc1-818c-9dae0cf9cc69", - "metadata": { - "tags": [], - "user_expressions": [] - }, - "source": [ - "## Grid the track data\n", - "\n", - "The resampling and calculation of statistics follow the processing steps described in the ATL20 - Gridded Sea Ice Freeboard - ATBD, but grid the data to an EASE-Grid v2.0 South grid. Any projected coordinate system or grid could be chosen, and the procedure could be extended with extra QC steps or other modifications. **The world is your oyster - or [Aplacophoran](https://antarcticsun.usap.gov/science/4447/)**.\n", - "\n", - "The processing steps are:\n", - "\n", - "- remove non-ice and low-quality segments \n", - "- resample freeboard segments to a grid\n", - "- calculate aggregate statistics\n", - " + mean segment length\n", - " + segment count\n", - " + length-weighted mean freeboard\n", - " + length-weighted variance of freeboard\n", - " \n", - "### Resample Freeboard Segments to a Grid\n", - "\n", - "Following the ATL20 ATBD, we will use a _drop-in-the-bucket_ resampling scheme. This is simple and relatively easy to implement. More complex resampling schemes could be substituted.\n", - "\n", - "To demonstrate resampling, we will resample freeboard segments to the WGS84 / NSIDC EASE-Grid v2.0 South projection. The EPSG code for this coordinate reference system is [6932](https://epsg.org/crs_6932/WGS-84-NSIDC-EASE-Grid-2-0-South.html).\n", - "\n", - "Here we use a 10 km grid covering the Ross Sea region; parameters for the standard 6.25 km grid are kept in the cell below for reference. To define the grid, we need the grid dimensions (nrow and ncol), the x and y projected coordinates of the upper-left corner of the upper-left grid cell, and the height and width of the grid cells in the same units as the projected coordinates. In this case, the units are meters." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "427548f4-9adb-4cee-adc1-b721bddcb7a7", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "easegrid2_epsg = 6932\n", - "\n", - "# Parameters for the standard NSIDC EASE-Grid v2.0 South 6.25 km grid\n", - "# nrow = 2880\n", - "# ncol = 2880\n", - "# upper_left_x = -9000000.0\n", - "# upper_left_y = 9000000.0\n", - "# width = 6250.0\n", - "# height = -6250.0\n", - "\n", - "# Parameters for a 10 km grid over the Ross Sea region\n", - "nrow = 151\n", - "ncol = 147\n", - "width = 10000.0\n", - "height = -10000.0\n", - "upper_left_x = -1040000.0\n", - "upper_left_y = -560000.0\n", - "\n", - "map_extent = [upper_left_x, (upper_left_x + (ncol*width)), (upper_left_y + (nrow*height)), upper_left_y]" - ] - }, - { - "cell_type": "markdown", - "id": "0ed0d70b-253b-4ced-a354-6c7a20637640", - "metadata": { - "user_expressions": [] - }, - "source": [ - "The first step is to reproject the points from geodetic coordinates (latitude and longitude) to projected coordinates (x, y). 
Because the data are in a `geopandas.DataFrame`, we can use the `to_crs` method. This takes an EPSG code either as a numeric value (`6932`) or as a string (`\"EPSG:6932\"`).\n", - "\n", - "You can see that the `POINT` objects in the `geometry` column have changed from having latitudes and longitudes as coordinates to x and y in meters." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "78293c52-ad08-45a1-bd66-99b2eaade3e7", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%%time\n", - "tracks = tracks.to_crs(easegrid2_epsg)\n", - "tracks.head()" - ] - }, - { - "cell_type": "markdown", - "id": "76ea27df-9bea-409f-a754-d945fb7ae01a", - "metadata": { - "user_expressions": [] - }, - "source": [ - "A _drop-in-the-bucket_ resampling scheme collects points into the grid cells that they intersect, and then calculates aggregate statistics for each grid cell using attributes associated with those points.\n", - "\n", - "We'll find the grid cell that contains each segment by calculating the row and column coordinates for each segment from the projected coordinates. This is done by creating an _Affine_ transformation matrix for the grid. The Affine matrix is just a matrix representation of the algebraic expressions that convert row and column indices of the grid to projected coordinates. The equations below give the forward transformation from `(row, col)` to `(x, y)`. \n", - "\n", - "$$\n", - "x = width * col + upper\\_left\\_x \\\\\n", - "y = height * row + upper\\_left\\_y\n", - "$$\n", - "\n", - "These are expressed in matrix form:\n", - "\n", - "$$\n", - "\\begin{bmatrix}\n", - "x \\\\\n", - "y \\\\\n", - "1\n", - "\\end{bmatrix} = \n", - "\\begin{bmatrix}\n", - "a & 0 & c \\\\\n", - "0 & d & e \\\\\n", - "0 & 0 & 1\n", - "\\end{bmatrix}\n", - "\\begin{bmatrix}\n", - "col \\\\\n", - "row \\\\\n", - "1\n", - "\\end{bmatrix}\n", - "$$\n", - "\n", - "where $a$ is $\\mathsf{width}$, $c$ is $\\mathsf{upper\\_left\\_x}$, $d$ is $\\mathsf{height}$, and $e$ is $\\mathsf{upper\\_left\\_y}$.\n", - "\n", - "```{note}\n", - "The projected coordinate system we are using is a Cartesian plane with the origin at the South Pole. The `x` coordinates increase to the right, and `y` coordinates increase up. Raster data, which includes grids and images, instead have the origin at the upper-left corner of the grid. Column indices increase from left to right, and row indices increase from top to bottom.\n", - "```\n", - "\n", - "We use the `affine` package to create a forward transformation matrix (`fwd`) using the grid parameters above. To transform `(x, y)` projected coordinates to `(row, col)`, we can calculate the inverse transformation matrix using `~fwd`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "d2d5a4e8-89c8-46eb-8561-44b4dea212ae", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "fwd = Affine(width, 0., upper_left_x, 0., height, upper_left_y)" - ] - }, - { - "cell_type": "markdown", - "id": "c42d6200-1043-4c6e-a87e-fd01df31f607", - "metadata": { - "user_expressions": [] - }, - "source": [ - "The `(row, col)` coordinates returned by the inverse transformation are still floating-point numbers, and we want integer row and column indices for grid cells, so we apply the `floor` function. `row` and `column` indices are zero-based.\n", - "\n", - "We want to be able to leverage the `geopandas.DataFrame.groupby` functionality to collect points into grid cells, so we need a unique identifier to group the data. 
We can calculate a unique cell index from `row` and `column` indices as follows:\n", - "\n", - "$$\n", - "cell\\_index = row * ncol + col\n", - "$$\n", - "\n", - "This is encapsulated in the function `get_grid_index`, which is then applied to the `geometry` of `tracks`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "00314673-7229-4be9-8674-0f39c9f29baf", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def ingrid(j, i, ncol, nrow):\n", - " \"\"\"Returns True if raster coordinates fall within grid\"\"\"\n", - " return (j >= 0) & (j < ncol) & (i >= 0) & (i < nrow)\n", - "\n", - "def get_grid_index(xy, shape, transform):\n", - " \"\"\"Returns array index for a map coordinate pair\n", - " \n", - " xy : tuple with x and y map coordinates\n", - " shape : list-like containing raster shape (nrow, ncol), where \n", - " nrow is number of rows in grid and ncol is number of \n", - " columns in grid\n", - " transform : Affine transformation matrix to transform map \n", - " coordinates to raster coordinates (e.g. ~fwd)\n", - " \"\"\"\n", - " nrow, ncol = shape\n", - " \n", - " j, i = transform * xy\n", - " j, i = np.floor(j).astype(int), np.floor(i).astype(int)\n", - " if ingrid(j, i, ncol, nrow):\n", - " return (i * ncol) + j\n", - " return -1" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "bde50514-c70a-499c-ae77-ffadf367c6df", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%%time\n", - "tracks[\"grid_index\"] = [get_grid_index((x, y), (nrow, ncol), ~fwd) for x, y in zip(tracks.geometry.x, tracks.geometry.y)]\n", - "tracks.head()" - ] - }, - { - "cell_type": "markdown", - "id": "633c253d-2eec-4222-a867-5cf23f08e088", - "metadata": { - "user_expressions": [] - }, - "source": [ - "`get_grid_index` returns `-1` if a point is outside the grid extent, so we need to filter out points that have a `grid_index` of `-1`. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "04e06217-d2ef-4380-9282-1679e71a127f", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "tracks = tracks[tracks.grid_index > -1]" - ] - }, - { - "cell_type": "markdown", - "id": "5e8ef611-f970-4615-9e18-df848a103dd6", - "metadata": { - "user_expressions": [] - }, - "source": [ - "### Calculate grid cell mean statistics\n", - "\n", - "We calculate four statistics for grid cells that contain segments: the mean segment length, the length-weighted mean freeboard, the length-weighted variance of freeboard, and the segment count $N$.\n", - "\n", - "#### Grid Cell Mean Segment Length $\\bar{L}$\n", - "\n", - "$$\n", - "\\bar{L}(x, y, D) = \\frac{\\sum L_i}{N}\n", - "$$\n", - "\n", - "where $L_i$ is `gtx/freeboard_segment/heights/height_segment_length_seg`, $x$ and $y$ are projected coordinates for grid cell centers, and $D$ is the day. \n", - "\n", - "#### Grid Cell Mean Freeboard $\\bar{h}$\n", - "\n", - "$$\n", - "\\bar{h}(x, y, D) = \\frac{\\sum L_i h_i}{\\sum L_i}\n", - "$$\n", - "\n", - "where $h_i$ is `gtx/freeboard_segment/beam_fb_height`.\n", - "\n", - "#### Grid Cell Variance of Freeboard $\\sigma^2 (x, y, D)$\n", - "\n", - "$$\n", - "\\sigma^2 (x, y, D) = \\frac{\\sum L_i (h_i)^2}{\\sum L_i} - \\bar{h}^2 (x, y, D)\n", - "$$\n", - "\n", - "The functions to calculate these statistics are given below and applied to the grouped data. The `apply` method accepts only a single function when operating on multiple columns of a dataframe. We could simply call each aggregating function separately. 
However, we can collect the individual aggregating functions into a single function and pass that to the `apply` method. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "36b0e9c9-b81e-4e71-b29a-7b04b6c42b15", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def mean_segment_length(df):\n", - " \"\"\"Returns mean segment length\"\"\"\n", - " return df[\"height_segment_length_seg\"].mean()\n", - "\n", - "\n", - "def mean_freeboard(df):\n", - " \"\"\"Returns length-weighted mean freeboard\"\"\"\n", - " return (df.beam_fb_height * df.height_segment_length_seg).sum() / df.height_segment_length_seg.sum()\n", - "\n", - "\n", - "def stdev_freeboard(df):\n", - " \"\"\"Returns length-weighted variance of freeboard (weighted mean of squared freeboards minus the squared weighted mean)\"\"\"\n", - " hmean = mean_freeboard(df)\n", - " stdev = (df.beam_fb_height**2 * df.height_segment_length_seg).sum() / df.height_segment_length_seg.sum()\n", - " return stdev - hmean**2\n", - "\n", - "\n", - "def count_segments(df):\n", - " \"\"\"Number of segments in grid cell\"\"\"\n", - " return df.beam_fb_height.count()\n", - "\n", - "\n", - "def all_funcs(x):\n", - " \"\"\"Wrapper that allows all the aggregation functions to be applied at once\"\"\"\n", - " funcs = {\n", - " mean_segment_length.__name__: mean_segment_length(x), # __name__ gets the name of a function\n", - " mean_freeboard.__name__: mean_freeboard(x),\n", - " stdev_freeboard.__name__: stdev_freeboard(x),\n", - " count_segments.__name__: count_segments(x),\n", - " }\n", - " # `apply` is expected to return a series or a scalar, so we collect the results\n", - " # into a series indexed by aggregating function name\n", - " return pd.Series(funcs, index=funcs.keys())" - ] - }, - { - "cell_type": "markdown", - "id": "25f5be24-a611-4f05-ad99-10d07d1fa7ef", - "metadata": { - "user_expressions": [] - }, - "source": [ - "#### Testing the functions\n", - "\n", - "It is always a good idea to test your code. Below are some test data and expected results. The functions are tested on `test_df`. We then use `pandas.testing.assert_frame_equal` to check that the result and expected dataframes are the same. In this case we are only interested in getting the same values, so we do not check the names or datatypes. 
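As a quick sanity check on the expected values defined in the next cell, the length-weighted mean freeboard for the first test grid cell (`grid_index` 1: lengths 1.2, 1.1, 0.7 and freeboards 0.0, 0.2, 0.5) works out by hand to

$$
\bar{h} = \frac{0.0 \cdot 1.2 + 0.2 \cdot 1.1 + 0.5 \cdot 0.7}{1.2 + 1.1 + 0.7} = \frac{0.57}{3.0} = 0.19
$$

and the variance to $\frac{0.0^2 \cdot 1.2 + 0.2^2 \cdot 1.1 + 0.5^2 \cdot 0.7}{3.0} - 0.19^2 = 0.073 - 0.0361 = 0.0369$, matching the `expected` dataframe.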
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "4888fdcb-f08a-4164-b740-f25d25a92aba", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "test_df = pd.DataFrame(\n", - " {\n", - " 'grid_index': [1, 1, 1, 2, 2, 2, 2],\n", - " \"height_segment_length_seg\": [1.2, 1.1, 0.7, 2.3, 1.5, .9, 1.],\n", - " \"beam_fb_height\": [0., 0.2, 0.5, 1.1, 2., .9, 1.5], \n", - " }\n", - ")\n", - "expected = pd.DataFrame(\n", - " {\n", - " \"mean_segment_length\": [1.0, 1.425],\n", - " \"mean_freeboard\": [0.19000000000000003, 1.375438596491228],\n", - " \"stdev_freeboard\": [0.03689999999999998, 0.17167743921206569],\n", - " \"count_segments\": [3, 4],\n", - " },\n", - " index = [1, 2]\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "6f9696fc-7d63-4d5b-9d41-a95e5c408875", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "result = test_df.groupby(\"grid_index\").apply(all_funcs)\n", - "result" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "5b7fe215-e265-4c6b-ae21-f2ca1dfbdee0", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "pd.testing.assert_frame_equal(expected, result, check_names=False, check_dtype=False)" - ] - }, - { - "cell_type": "markdown", - "id": "fa6e82a2-569d-46c8-b7ba-53bd42774ac7", - "metadata": { - "user_expressions": [] - }, - "source": [ - "Now that we have functions that work we can apply them to the real data." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "4430c3cf-a667-43df-8313-701fdfc1abf9", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%%time\n", - "aggregated_data = tracks.groupby(\"grid_index\").apply(all_funcs)\n", - "aggregated_data" - ] - }, - { - "cell_type": "markdown", - "id": "3d50d44e-3cb1-4d82-9711-619258d4308b", - "metadata": { - "user_expressions": [] - }, - "source": [ - "### Assign aggregated data to grid cells\n", - "\n", - "We now have a dataframe that contains grid cell statistics indexed by a unique array index. We can now create a grid for each of these statistics.\n", - "\n", - "The procedure is relatively straight forward.\n", - "\n", - " - Create an 1D array with the same number of elements as cells in our grid.\n", - " - Use the `grid_index` of the dataframe as an array index to assign values to grid cells, where we have data.\n", - " - Reshape the grid to the dimension of the grid.\n", - " \n", - "We can encapsulate this in a `series_to_grid` function." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "22f1a761-2fa1-4700-8dc2-481b419ee2a1", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "def series_to_grid(series, nrow, ncol):\n", - " \"\"\"Converts a pandas.Series to a grid using the index\"\"\"\n", - " # valid flat indices run from 0 to nrow*ncol - 1\n", - " these_points = (series.index >= 0) & (series.index < nrow*ncol)\n", - " \n", - " array_index = series[these_points].index.values.astype(int) # the array index must be an integer\n", - " \n", - " vector = np.full(nrow*ncol, np.nan)\n", - " vector[array_index] = series[these_points]\n", - " return vector.reshape(nrow, ncol)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "3acf83d5-b888-427d-b4f0-e98beae7845f", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%%time\n", - "grids = {name: series_to_grid(values, nrow, ncol) for name, values in aggregated_data.items()}" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "6994fca1-53aa-4b76-95d9-c24981bb4100", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "%matplotlib widget\n", - "plt.imshow(grids['count_segments'], interpolation='none')\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "id": "1c1b73ba-e759-43cb-ad11-4b24c79eb75b", - "metadata": { - "user_expressions": [] - }, - "source": [ - "## Plot data on a map" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "8454d0e8-fd29-45f6-b5aa-f15947ea95de", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "plt.close(\"all\")\n", - "proj = ccrs.LambertAzimuthalEqualArea(central_latitude=-90)\n", - "\n", - "if min(map_extent) < -3000000.0:\n", - " plot_extent = [-3000000.0, 3000000.0, -3000000.0, 3000000.0]\n", - "else:\n", - " plot_extent = map_extent\n", - "\n", - "fig = plt.figure(figsize=(10,10))\n", - "ax = fig.add_subplot(111, projection=proj)\n", - "ax.set_extent(plot_extent, proj)\n", - "ax.add_feature(cfeature.LAND)\n", - "ax.coastlines()\n", - "\n", - "plt.imshow(grids['count_segments'], interpolation='none', extent=map_extent)\n" - ] - }, - { - "cell_type": "markdown", - "id": "5b30ee90-e32c-4abe-9dc4-729fb4ab8b30", - "metadata": { - "tags": [], - "user_expressions": [] - }, - "source": [ - "## Appendix\n", - "\n", - "### Get grid parameters for Ross Sea region" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "42dc7abd-05bc-4c2b-ba17-e5f375707bfb", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "# Import GeoJSON of Ross Sea - this is very approximate\n", - "import geopandas as gpd\n", - "\n", - "ross_sea_gdf = gpd.read_file(\"ross_sea.json\")\n", - "bounds = ross_sea_gdf.to_crs(easegrid2_epsg).bounds.values" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "7238b9cb-d2d8-4157-90b0-7c8a429dfdbb", - "metadata": { - "tags": [] - }, - "outputs": [], - "source": [ - "# Calculate parameters for a grid with a resolution that covers the region\n", - "resolution = 10000.\n", - "# Round the bounds outward to whole grid cells: floor the mins, ceil the maxes\n", - "minx, miny, maxx, maxy = [func(bound/resolution) * resolution for bound, func in zip(bounds[0], [np.floor, np.floor, np.ceil, np.ceil])]\n", - "\n", - "grid_extent_x = maxx - minx\n", - "grid_extent_y = maxy - miny\n", - "\n", - "width = height = resolution\n", - "\n", - "ncol = grid_extent_x / width\n", - "nrow = grid_extent_y / height\n", - "\n", - "upper_left_x = minx\n", - "upper_left_y = maxy\n", - "\n", - "print(f\"nrow = {int(nrow)}\")\n", - "print(f\"ncol = {int(ncol)}\")\n", - 
"print(f\"width = {width}\")\n", - "print(f\"height = -{height}\")\n", - "print(f\"upper_left_x = {upper_left_x}\")\n", - "print(f\"upper_left_y = {upper_left_y}\")\n", - " " - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.12" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -}