Due to climate change, we expect a worsening of fire activity in Europe and around the world, with major wildfire events extending to northern latitudes and boreal regions [1]. In this context, it is important to improve our ability to anticipate fire danger and understand its driving mechanisms at a global scale. The Earth is an interconnected system in which large-scale processes can affect the global climate and fire seasons. For example, extreme fires in Siberia have been linked to previous-year surface moisture conditions and anomalies in the Arctic Oscillation [2].

In the context of the ESA-funded project SeasFire, the team at NOA has gathered Earth Observation data related to seasonal fire drivers and created a global analysis-ready datacube for seasonal fire forecasting for the years 2001-2021 at a spatiotemporal resolution of 0.25 deg x 0.25 deg x 8 days [3]. The datacube combines variables describing the seasonal fire drivers (climate, vegetation, oceanic indices, population density) with the burned areas. Initial studies show the potential of Deep Learning for i) short-term regional [4] and ii) long-term global wildfire forecasting [5]. The goal of this challenge is to develop models that capture global-scale spatiotemporal associations and forecast burned area sizes on a sub-seasonal to seasonal scale.
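To make the datacube layout concrete, the sketch below builds a miniature in-memory stand-in with the same structure (a 0.25° grid with 8-day time steps) using `xarray`. The variable names (`t2m`, `burned_area`) and the grid subset are illustrative assumptions, not the cube's actual schema; the real cube is distributed via Zenodo [3].

```python
import numpy as np
import pandas as pd
import xarray as xr

# Toy stand-in for the SeasFire datacube: 0.25 deg grid, 8-day time steps.
# Variable names below are assumptions for illustration only.
time = pd.date_range("2001-01-01", periods=46, freq="8D")  # ~1 year of 8-day steps
lat = np.arange(-89.875, 90, 0.25)[:40]                    # small spatial subset
lon = np.arange(-179.875, 180, 0.25)[:40]
rng = np.random.default_rng(0)

cube = xr.Dataset(
    {
        "t2m": (("time", "latitude", "longitude"),
                rng.normal(288.0, 10.0, (len(time), len(lat), len(lon)))),
        "burned_area": (("time", "latitude", "longitude"),
                        rng.gamma(1.0, 2.0, (len(time), len(lat), len(lon)))),
    },
    coords={"time": time, "latitude": lat, "longitude": lon},
)

# Typical analysis-ready operation: aggregate an 8-day driver to monthly means.
monthly_t2m = cube["t2m"].resample(time="1MS").mean()
print(monthly_t2m.sizes)
```

The same `resample`/`mean` pattern applies directly to the real cube once it is opened with `xarray` (e.g. from its Zarr store).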
Advance sub-seasonal to seasonal forecasting of global burned area by applying Explainable AI (xAI) techniques to deep learning models.
Which predictors are most important when forecasting at different lead times and in different ecoregions?

1. Different ecoregions: Is the importance of the variables consistent across different fire regimes? Which variables matter for prediction in different fire regimes (e.g. Mediterranean, Tropics, Savannahs)?
2. Spatial focus: If the input has spatial context, which parts of that context does the model attend to for each variable?
3. Physical plausibility: Are the identified explanations physically meaningful or meaningless? Do they reflect physical laws or data artifacts?
4. Local explainability: Identify good examples for local explainability. For instance, one can search for a major wildfire event with known causes and check whether the model's explanations agree.
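One simple probe for the first question above is permutation importance: shuffle one predictor at a time and measure how much the model's skill degrades. The sketch below applies it to a toy least-squares model with made-up predictor names; it is an illustration of the technique, not the challenge's actual models or variables.

```python
import numpy as np

# Toy data: three predictors, of which the first two actually drive the target.
rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))  # columns: illustrative "temperature", "moisture", "wind"
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# A fitted stand-in "model": ordinary least-squares coefficients.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X_):
    return float(np.mean((X_ @ coef - y) ** 2))

base = mse(X)
importance = {}
for j, name in enumerate(["temperature", "moisture", "wind"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target association
    importance[name] = mse(Xp) - base     # skill drop attributable to this feature

print(sorted(importance, key=importance.get, reverse=True))
```

The same idea carries over to the deep models in this repo: permute one input variable (or one spatial region of it) across samples and compare the forecast skill before and after, which answers "which predictor matters" without needing gradients.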
[1] Wu, Chao, et al. "Historical and future global burned area with changing climate and human demography." One Earth 4.4 (2021): 517-530.
[2] Kim, Jin-Soo, et al. "Extensive fires in southeastern Siberian permafrost linked to preceding Arctic Oscillation." Science advances 6.2 (2020): eaax3308.
[3] Alonso, Lazaro, et al. "SeasFire Cube: A Global Dataset for Seasonal Fire Modeling in the Earth System." Zenodo, 30 Sept. 2022. doi:10.5281/zenodo.7108392.
[4] Kondylatos, Spyros, et al. "Wildfire Danger Prediction and Understanding with Deep Learning." Geophysical Research Letters (2022). doi:10.1029/2022GL099368.
[5] Prapas, Ioannis, et al. "Deep Learning for Global Wildfire Forecasting." arXiv preprint arXiv:2211.00534 (2022).
├── data
│ ├── Biomes_and_GFED <- Data from third party sources
│ ├── images <- visual results
│ ├── processed <- average and std used for normalization
│ └── raw <- SeasFire datacube tiny examples
│
├── models <- Trained and serialized models
│
├── notebooks <- Jupyter notebooks.
│ ├── binary_segmantation <- containing GUI for xAI
│ └── fire_size_quantile_regression <- containing process to train a segmentation model
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures
│
├── requirements.txt <- Required packages (dependencies) generated
│ with `pip freeze > requirements.txt`
│
├── utils <- Scripts to train a model, make predictions and visualizations
│
└── setup.py <- makes project pip installable (pip install -e .) so
the code can be imported
Launch the GUI directly in Binder (an interactive Jupyter notebook/lab environment in the cloud) or in Google Colab. Colab is much faster for interaction, since it provides GPU resources that speed up the xAI models considerably.
git clone https://github.com/PiSchool/noa-xai-for-wildfire-forecasting.git
cd noa-xai-for-wildfire-forecasting
Once you have cloned and navigated into the repository, you can set up a development environment using venv and install all packages from requirements.txt:
python3 -m venv env
source env/bin/activate
python3 -m pip install -r requirements.txt
Two Jupyter notebooks are provided. They can be run either in the virtual environment created here, by calling jupyter notebook
and navigating to those files, or by opening them in Colab.
This challenge, sponsored by Pi School for the National Observatory of Athens (NOA), was carried out by Giovanni Paolini and Johanna Strebl as part of the 12th edition of Pi School's School of AI program.
| Giovanni Paolini | Johanna Strebl |
| --- | --- |
| Giovanni is an aerospace engineer in his 3rd year as an industrial Ph.D. candidate in the field of remote sensing for agriculture and hydrology. He is currently involved in different projects on the classification of irrigated areas and the estimation of water used for agricultural purposes in semi-arid regions, using very high-resolution satellite data and state-of-the-art ML algorithms. He joined the wildfire challenge at Pi School to deepen his knowledge of large deep learning models, and he is eager to contribute to the pressing challenge of wildfire prevention. | Johanna is currently in the final year of her Master's degree in Computer Science at the University of Munich, focusing on machine learning and quantum computing. Her main research interest is in using modern technologies to tackle modern problems. After researching hate speech, her most recent focus is on Earth observation and remote sensing for climate change mitigation, especially forest fire prediction and modeling with AI. At Pi School, Johanna is collaborating with the National Observatory of Athens to research explainable AI for wildfire forecasting. |
| Giov-P (Giovanni Paolini) | YokoHono (Johanna Strebl) |