Daniel Buscombe edited this page Feb 27, 2023 · 4 revisions

Overview

Seg2Map is an open-source, browser-accessible web map application for applying Deep Learning to imagery and image time-series. It makes (and re-makes) highly customizable maps of Land Use and Land Cover (LULC), broadly defined, or landforms, in order to study Earth’s changing surface for a range of scientific purposes.

This project was funded by the U.S. Geological Survey Community for Data Integration (CDI) FY22-funded proposal "Seg2Map: A Cloud-Based Geospatial Image-Segmentation and Mapping Platform."

The tool is accessed through a web browser, and can therefore be deployed in the cloud. Essentially, Seg2Map is a lightweight web GIS interface built on top of powerful models made available through Segmentation Zoo for fully automated image segmentation (Buscombe and Goldstein, 2022), enabling mapping of spatio-temporal changes from publicly available geospatial imagery, all in the browser.

It will consist of three main elements:

  1. a web interface for downloading user-defined Earth Observation imagery from cloud-native data repositories, within user-defined regions of interest;
  2. tools for applying advanced pixelwise classification (or ‘image segmentation’) based on state-of-the-art Machine Learning algorithms; and
  3. post-processing tools for quality assurance and uncertainty quantification.
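The three elements above amount to a download–segment–postprocess pipeline. The sketch below illustrates that flow in plain Python; every function and variable name here is hypothetical and invented for illustration (none come from the Seg2Map codebase), with stubs standing in for the real imagery download and trained models.

```python
# Hypothetical sketch of the three Seg2Map elements; all names are
# invented for illustration and do not reflect Seg2Map's actual API.

def download_imagery(roi, dates):
    """Element 1: fetch imagery covering a region of interest (stubbed:
    returns a tiny fake 'image' as a nested list of pixel values)."""
    return [[0.2, 0.8], [0.6, 0.4]]

def segment(image, threshold=0.5):
    """Element 2: pixelwise classification; a trivial threshold stands
    in here for a trained U-Net/SegFormer model."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def quality_metrics(label_map):
    """Element 3: post-processing; report the fraction of each class."""
    pixels = [px for row in label_map for px in row]
    return {c: pixels.count(c) / len(pixels) for c in set(pixels)}

# A region of interest expressed as a GeoJSON-style polygon.
roi = {"type": "Polygon",
       "coordinates": [[(-124.1, 40.8), (-124.0, 40.8), (-124.0, 40.9),
                        (-124.1, 40.9), (-124.1, 40.8)]]}
image = download_imagery(roi, dates=("2020-01-01", "2020-12-31"))
labels = segment(image)
print(quality_metrics(labels))  # → {0: 0.5, 1: 0.5}
```

The point of the sketch is the shape of the workflow, not the internals: each element consumes the previous element's output, which is what lets the whole pipeline run in one browser session.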

Why is this needed?

The free and open availability of global coverage Earth observation imagery collected by governmental scientific and space agencies such as USGS, NOAA, NASA, and ESA has enabled a large-scale public data commons that, combined with cloud computing and open-source software (Abernathey and others, 2021), has fueled the democratization of land cover mapping from a few select public agencies and well-financed research groups and private companies, to virtually any individual researcher or research group (Zhu and others, 2019). Since the 1970s, the U.S. Geological Survey and its partner federal organizations have been recognized leaders in geospatial imagery analysis and in the creation of LULC maps from geospatial imagery and image time-series (Wulder and others, 2019).

Proprietary web mapping platforms such as Google Earth Engine (Gorelick and others, 2017) and Microsoft Planetary Computer are generally controlled by a single private company, with limited modularity or customization (Abernathey and others, 2021). In contrast, Seg2Map will be scientific infrastructure that facilitates interoperability between numerous data repositories and open-source computational tools, exclusively within the Python programming language, leveraging modern web-based geospatial mapping technologies. Seg2Map will combine the major elements of a ‘Land Cover 2.0’ (Wulder and others, 2018) workflow, namely allowing users to ‘bring their own algorithms to the data’ and generate spatially explicit accuracy metrics, in a standardized approach.

Land Use and Land Cover (LULC) classes are critical descriptors of Earth’s changing biophysical landscapes and surface processes, and a core requirement for a wide range of natural resource assessments and resource management decisions regarding, for example, anthropogenic activity, biodiversity impacts, and natural hazards impacts, at all spatial scales (Wulder and others, 2018). LULC products, especially those derived from Landsat imagery, form the core of a myriad of national and international programs such as the Multi-Resolution Land Characteristics (MRLC) Consortium and the Global Climate Observing System (GCOS).

Existing LULC products are usually heavily post-processed mosaics of imagery collected at multiple times, and often take several years to develop. As a result, they are not always suitable for event-scale processes, observations at custom frequencies or specific times, or customized categories, all of which are crucial in research of natural and developed environments. Seg2Map is designed to meet this need, facilitating an end-to-end workflow built on the core open-source computing technologies of Python, TensorFlow, and Keras for Machine Learning, supported by other well-established open-source tools.

Deep-Learning-Based Image Segmentation

Image segmentation involves classifying each pixel (or voxel) in imagery into one of two or more pre-defined categories, and Deep Learning models (neural networks with many layers) show state-of-the-art performance at this task, producing per-pixel probabilistic estimates for each class in a legend of pre-determined categories. The Segmentation Module implements models (Buscombe and Goldstein, 2022) for segmentation of geospatial imagery, based on the flexible and popular U-Net and SegFormer architectures, and implemented in the software Segmentation Zoo, built on TensorFlow and Keras functionality.
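The "per-pixel probabilistic estimates" described above can be illustrated with plain NumPy, independent of any particular model: a segmentation network emits one probability per class per pixel, the final label map is the per-pixel argmax, and the winning probability doubles as a simple confidence map for downstream uncertainty quantification. This is a generic sketch, not Seg2Map code; the array values and the three-class legend are made up.

```python
import numpy as np

# Fake model output: (height, width, n_classes) softmax scores for a
# 2x2 image and a hypothetical 3-class legend
# (0 = water, 1 = sediment, 2 = vegetation).
probs = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    [[0.2, 0.3, 0.5], [0.1, 0.1, 0.8]],
])

# The label map is the per-pixel argmax over the class axis ...
label_map = probs.argmax(axis=-1)

# ... and the winning probability is a simple per-pixel confidence,
# one input to the quality-assurance / uncertainty step.
confidence = probs.max(axis=-1)

print(label_map)   # [[0 1]
                   #  [2 2]]
print(confidence)  # [[0.7 0.8]
                   #  [0.5 0.8]]
```

In practice the probability stack comes from a trained U-Net or SegFormer rather than a hand-written array, but the reduction from probabilities to a label map plus confidence map is the same.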

A number of models have been trained for generic application to ~1 m imagery. Those models are based on a number of datasets that are documented on other pages of this wiki.

Roadmap

V1: April 2023

  • NAIP imagery only
  • Models based on the following datasets (see wiki pages for further details)
    • Coast Train
    • Chesapeake Landcover
    • AAAI Buildings
    • XBD
    • Deep Globe
    • EnviroAtlas
    • Open Earth Net

V2: later in 2023

  • Extend functionality to PlanetScope 3 m Dove imagery via the PlanetScope API
  • Additional models based on the "Barrier Islands" data of Sturdivant and others (2019) (work already underway)
  • Additional set of models based on merged datasets, for the following set of merged classes (work already underway):
    • water
    • sediment
    • vegetation
    • herbaceous vegetation
    • woody vegetation
    • impervious
    • buildings
    • bare terrain
    • agricultural land
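Building models on merged datasets means remapping each source dataset's integer class codes onto the shared legend listed above. A minimal sketch of that remapping, assuming made-up source class codes (the actual codes used in the merged datasets may differ):

```python
import numpy as np

# The shared (merged) legend listed above, indexed 0..8.
MERGED = ["water", "sediment", "vegetation", "herbaceous vegetation",
          "woody vegetation", "impervious", "buildings", "bare terrain",
          "agricultural land"]

# Hypothetical mapping from one source dataset's class codes to the
# merged legend; invented for illustration.
source_to_merged = {0: "water", 1: "woody vegetation", 2: "buildings"}

# Build an integer lookup table so whole label rasters remap in one
# vectorized step, rather than pixel by pixel.
lut = np.array([MERGED.index(source_to_merged[code])
                for code in sorted(source_to_merged)])

source_labels = np.array([[0, 1], [2, 1]])  # a tiny fake label raster
merged_labels = lut[source_labels]
print(merged_labels)  # [[0 4]
                      #  [6 4]]
```

A lookup table like this has to be written once per source dataset, after which label rasters from all of the datasets live in a single, consistent class space.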

References

  1. Abernathey, R.P., Augspurger, T., Banihirwe, A., Blackmon-Luca, C.C., Crone, T.J., Gentemann, C.L., Hamman, J.J., Henderson, N., Lepore, C., McCaie, T.A. and Robinson, N.H., 2021. Cloud-native repositories for big scientific data. Computing in Science & Engineering, 23(2).
  2. Buscombe, D. and Goldstein, E.B., 2022. A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9(9), p.e2022EA002332.
  3. Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D. and Moore, R., 2017. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, pp.18-27.
  4. Sturdivant, E.J., Zeigler, S.L., Gutierrez, B.T., and Weber, K.M., 2019, Barrier island geomorphology and shorebird habitat metrics–Sixteen sites on the U.S. Atlantic Coast, 2013–2014: U.S. Geological Survey data release, https://doi.org/10.5066/P9V7F6UX.
  5. Wulder, M.A., Coops, N.C., Roy, D.P., White, J.C. and Hermosilla, T., 2018. Land cover 2.0. International Journal of Remote Sensing, 39(12), pp.4254-4284.
  6. Zhu, Z., Wulder, M.A., Roy, D.P., Woodcock, C.E., Hansen, M.C., Radeloff, V.C., Healey, S.P., Schaaf, C., Hostert, P., Strobl, P. and Pekel, J.F., 2019. Benefits of the free and open Landsat data policy. Remote Sensing of Environment, 224, pp.382-385.