
Deep-Gen-Project

Deep Generative Models for Wildfire Imagery
Course project for CS 274E: Deep Generative Models (UCI, 2025) — LS-Wireless Lab

Goal: Use generative models to (1) complete wildfire images from partial observations (inpainting) and (2) predict short-term fire spread/appearance.


Spatiotemporal Generative Models for Wildfire Spread Prediction


Project Overview

Wildfire spread is a complex, stochastic spatiotemporal phenomenon driven by topography, vegetation, and weather. Traditional physics-based simulators (e.g., FARSITE) are computationally expensive, while standard deep learning models (e.g., U-Net) often produce deterministic, blurry outputs that fail to capture uncertainty.

Our Goal: We explore a spectrum of generative modeling approaches to predict the next day's active fire mask given a history of $k$ days. We compare three distinct paradigms:

  1. Conditional VAE (CVAE): For fast, stochastic forecasting and aleatoric uncertainty quantification.
  2. Symbolic Regression: For discovering interpretable governing equations of fire spread.
  3. Diffusion Models: For high-fidelity, fine-grained spread synthesis.

Dataset: A processed subset of the WildfireSpreadTS benchmark, focusing on the 2020 US fire season.


Repository Structure

Deep-Gen-Project/
├── data/
│   ├── fire_23654679/      # Example raw fire event (GeoTIFFs)
│   └── VAE_dataset/        # Preprocessed .npz tensors for VAE training
│
├── notebooks/
│   ├── data_exploration.ipynb    # Raw data visualization & stats
│   └── model_exploration.ipynb   # Prototyping architectures
│
├── src/
│   ├── VAE_model/          # CONDITIONAL VAE MODULE
│   │   ├── vae_model.py    # Model definition
│   │   ├── vae_run.py      # Main entry point for training/testing
│   │   ├── vae_utils.py    # Utility & loss functions
│   │   ├── run_00/         # Prototype / Debugging
│   │   ├── run_01/         # 100 events, lookback=3
│   │   ├── run_02/         # 100 events, lookback=5
│   │   ├── run_03/         # Full 2020 dataset (201 events), lookback=3
│   │   ├── run_03a/        # Full dataset, Latent=128, Batch=64
│   │   └── run_04/         # Full dataset, lookback=5
│   │
│   └── symbolic_reg/       # Symbolic Regression scripts (Python/C++)
│
├── utils/                  # Data preprocessing & helper functions
├── LICENSE                 # License file
├── requirements.txt        # Python dependencies
└── README.md               # Repository overview

Methodology & Data

1. Data Preprocessing

We utilize a dynamic Fire-Centered Cropping strategy to handle the spatiotemporal data:

  • Anchor Frame: For a sequence of days $[t-k, \dots, t]$, we identify the center of mass of the fire at day $t$.
  • Cropping: We crop a $64 \times 64$ window around this center for all frames in the sequence (input and target $t+1$). This "locks the camera" to the fire front, allowing the model to learn local spread dynamics.
  • Normalization: Inputs are robustly normalized (percentile clipping) to $[0, 1]$.
  • Augmentation: We apply geometric augmentation (rotations/flips) to handle the data scarcity (~2000 samples).
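The cropping and normalization steps above can be sketched as follows. This is an illustrative implementation of the strategy described, not the repository's actual preprocessing code; function names and the fallback behavior for empty masks are assumptions.

```python
import numpy as np

def fire_centered_crop(frames, anchor_idx=-1, size=64):
    """Crop a size x size window around the fire's center of mass.

    `frames` is a (T, H, W) stack of fire masks; the anchor frame
    (default: the last day) determines the crop center, which is then
    applied to every frame so the "camera" stays locked on the fire.
    """
    anchor = frames[anchor_idx]
    ys, xs = np.nonzero(anchor)
    if len(ys) == 0:  # no active fire pixels: fall back to the image center
        cy, cx = anchor.shape[0] // 2, anchor.shape[1] // 2
    else:
        cy, cx = int(ys.mean()), int(xs.mean())
    half = size // 2
    # Clamp the center so the window stays inside the image bounds
    cy = min(max(cy, half), frames.shape[1] - half)
    cx = min(max(cx, half), frames.shape[2] - half)
    return frames[:, cy - half:cy + half, cx - half:cx + half]

def percentile_normalize(x, lo=1, hi=99):
    """Clip to the [lo, hi] percentiles, then rescale to [0, 1]."""
    pl, ph = np.percentile(x, [lo, hi])
    x = np.clip(x, pl, ph)
    return (x - pl) / (ph - pl + 1e-8)
```

Locking the crop to the anchor frame (rather than re-centering every frame independently) keeps the fire front's motion visible across the sequence, which is what lets the model learn local spread dynamics.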

2. Conditional VAE

We implement a CVAE where the Encoder conditions on both the future $Y_{t+1}$ and the past history $X_{t-k:t}$. The Decoder generates the prediction conditioned on the latent vector $z$ and context features from $X$.

  • Loss: Weighted Binary Cross Entropy (to handle extreme class imbalance) + KL Divergence.
  • Inference: We sample $z \sim \mathcal{N}(0, I)$ to generate probabilistic forecasts.
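The loss and sampling steps above can be sketched in PyTorch. The specific `pos_weight` and `beta` values and the `decoder(z, context)` signature are illustrative assumptions, not the repository's actual interface or hyperparameters.

```python
import torch
import torch.nn.functional as F

def cvae_loss(logits, target, mu, logvar, pos_weight=10.0, beta=1.0):
    """Weighted BCE reconstruction + KL divergence.

    `pos_weight` upweights the rare fire pixels to counter class
    imbalance; `beta` scales the KL term.
    """
    recon = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight))
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

@torch.no_grad()
def sample_forecasts(decoder, context, latent_dim, n_samples=8):
    """Draw z ~ N(0, I) and decode probabilistic next-day fire masks.

    `decoder(z, context)` is assumed to return per-pixel logits.
    """
    zs = torch.randn(n_samples, latent_dim)
    return torch.sigmoid(torch.stack([decoder(z, context) for z in zs]))
```

Averaging multiple sampled forecasts yields a per-pixel spread probability map; the sample-to-sample variance gives a direct read on aleatoric uncertainty.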

Usage

Installation

# Clone the repository
git clone https://github.com/LS-Wireless/Deep-Gen-Project.git
cd Deep-Gen-Project

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

Running the VAE

The VAE training is configuration-driven. Each experiment folder (e.g., src/VAE_model/run_03/) contains a config.json file specifying hyperparameters.

To train a model:

  1. Navigate to the VAE directory:
    cd src/VAE_model
  2. Run the training script pointing to your desired configuration:
    python vae_run.py --config run_03/config.json
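For reference, a config.json for a run might look like the following. The field names and values here are assumptions for illustration, not the repository's actual schema; consult an existing run directory for the real fields.

```json
{
  "lookback": 3,
  "latent_dim": 128,
  "batch_size": 64,
  "epochs": 100,
  "learning_rate": 1e-4
}
```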

Results & Metrics

We evaluate performance using:

  • IoU (Intersection over Union): To measure overlap with the ground truth fire mask.
  • F1-Score: Harmonic mean of Precision and Recall.
  • Visual Inspection: Comparing generated fire fronts against ground truth.

(Results figures and plots are stored in the results/ folder of each run directory).
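The two overlap metrics above can be computed from a predicted probability map and a binary ground-truth mask as follows; this is an illustrative implementation, and the 0.5 threshold is an assumption.

```python
import numpy as np

def iou_f1(pred, truth, thresh=0.5):
    """Compute IoU and F1 between a predicted probability map and a
    binary ground-truth fire mask, thresholding predictions at `thresh`."""
    p = pred >= thresh
    t = truth.astype(bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    iou = inter / union if union else 1.0
    tp = inter
    fp = np.logical_and(p, ~t).sum()
    fn = np.logical_and(~p, t).sum()
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return iou, f1
```

Because burned pixels are rare, pixel accuracy is uninformative here; IoU and F1 both focus on the positive (fire) class, which is why they are the headline metrics.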

(Figure: VAE Prediction)


Contributors

  • Mehdi Zafari - Variational Autoencoders (VAE)
  • Edward Finkelstein - Symbolic Regression
  • Chen Yang - Diffusion Models

Note: This repository and associated algorithms are part of ongoing research. Final results and additional updates will be provided upon publication submission.


© 2025 LS Wireless. All rights reserved.