DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection (WACV'25)

(Previously: DDPM-CD: Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models)

Wele Gedara Chaminda Bandara, Nithin Gopalakrishnan Nair, Vishal M. Patel

Official PyTorch implementation of DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection (formerly Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models)

Latest Version of the Paper

Updates:

  • 🎉 DDPM-CD has been accepted at IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025.
  • ❗ Paper-v3: We have completely revised the writing of our paper. Please refer to v3 on arXiv.

1. Motivation & Contribution


Images sampled from the DDPM model pre-trained on off-the-shelf remote sensing images.

  • Images generated from the pre-trained diffusion model trained on off-the-shelf remote sensing images.
  • The generated images exhibit common objects typically observed in real remote sensing imagery, including buildings, trees, roads, vegetation, water surfaces, etc.
  • This showcases the remarkable capability of diffusion models to grasp essential semantics from the training dataset.
  • Although our primary focus isn't image synthesis, we explore the effectiveness of DDPM as a feature extractor for change detection.

2. Method


We fine-tune a lightweight change classifier using the feature representations produced by the pre-trained DDPM together with the change labels.

3. Usage

3.1 Requirements

Before using this repository, make sure you have the following prerequisites installed:

You can install PyTorch with the following command (on Linux):

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
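
After installation, you can optionally run a quick sanity check to confirm that PyTorch is installed and CUDA is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"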

3.2 Installation

To get started, clone this repository:

git clone https://github.com/wgcban/ddpm-cd.git

Next, create the conda environment named ddpm-cd by executing the following command:

conda env create -f environment.yml

Then activate the environment:

conda activate ddpm-cd

Download the datasets and place them in the dataset folder. See Section 5.1 for download links.

If you only wish to test, download the pre-trained DDPM and fine-tuned DDPM-CD models and place them in the experiments folder. See Section 7 for links.

All train-val-test statistics will be automatically uploaded to wandb; please refer to the wandb quick-start documentation if you are not familiar with wandb.
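
If this is your first time using wandb on this machine, a one-time login is typically all that is required (assuming wandb is already installed, e.g. via environment.yml; otherwise install it with pip first):

pip install wandb   # only if it is not already in the environment
wandb login         # paste the API key from your wandb account when prompted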

4. Pre-training DDPM

4.1 Collect off-the-shelf remote sensing data to train diffusion model

Dump all the remote sensing images sampled from Google Earth Engine and any other publicly available remote sensing imagery into the dataset folder, or create a symlink (see the example below).
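
For example, a symlink named dataset can point at an existing image collection (the source path below is a placeholder):

ln -s /path/to/remote_sensing_images dataset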

4.2 Pre-train/resume (unconditional) DDPM

We use ddpm_train.json to set up the configuration. Update the dataset name and dataroot in the json file, then run the following command to start training the diffusion model. The results and log files will be saved to the experiments folder, and all metrics are uploaded to wandb.

python ddpm_train.py --config config/ddpm_train.json -enable_wandb -log_eval

If you want to resume training from a previously saved checkpoint, provide the path to the saved model in path/resume_state; otherwise keep it as null.
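
For orientation, the relevant fields in config/ddpm_train.json follow the nesting sketched below; the values are placeholders rather than the shipped defaults, and any field name not mentioned above (such as name) is illustrative:

{
  "datasets": {
    "train": {
      "name": "RS-Images",
      "dataroot": "dataset/rs_images"
    }
  },
  "path": {
    "resume_state": null
  }
}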

4.3 Sampling from the pre-trained DDPM

If you want to generate samples from the pre-trained DDPM, first update the path to the trained diffusion model in [path][resume_state] of ddpm_sampling.json, then run the following command.

python ddpm_train.py --config config/ddpm_sampling.json --phase val

The generated images will be saved in experiments.

5. Fine-tuning for change detection

5.1 Download the change detection datasets

Download the change detection datasets from the following links. Place them inside your datasets folder.

Then, update the paths to those folders under [datasets][train][dataroot], [datasets][val][dataroot], and [datasets][test][dataroot] in levir.json, whu.json, dsifn.json, and cdd.json.
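
For example, for LEVIR-CD the dataset block in levir.json would point at the train/val/test splits roughly as follows (the paths are placeholders for wherever you placed the data):

"datasets": {
  "train": { "dataroot": "datasets/LEVIR-CD/train" },
  "val":   { "dataroot": "datasets/LEVIR-CD/val" },
  "test":  { "dataroot": "datasets/LEVIR-CD/test" }
}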

5.2 Provide the path to pre-trained diffusion model

Update the path to the pre-trained diffusion model weights (*_gen.pth and *_opt.pth) under [path][resume_state] in levir.json, whu.json, dsifn.json, and cdd.json.

5.3 Indicate time-steps used for feature extraction

Specify the time-steps used to extract feature representations in [model_cd][t]. As shown in the ablation section of the paper, our best model is obtained with time-steps {50, 100, 400}; however, time-steps {50, 100} work well too.
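
Putting 5.2 and 5.3 together, the corresponding entries in levir.json (and the other configs) would look roughly like the sketch below; the checkpoint prefix is a placeholder from which the matching *_gen.pth and *_opt.pth files are expected to be resolved:

"path": {
  "resume_state": "experiments/<ddpm-run>/checkpoint/<iteration>"
},
"model_cd": {
  "t": [50, 100, 400]
}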

5.4 Start fine-tuning for change detection

Run the following commands to start training:

  • Training on LEVIR-CD:
    python ddpm_cd.py --config config/levir.json -enable_wandb -log_eval
  • Training on WHU-CD:
    python ddpm_cd.py --config config/whu.json -enable_wandb -log_eval
  • Training on DSIFN-CD:
    python ddpm_cd.py --config config/dsifn.json -enable_wandb -log_eval
  • Training on CDD:
    python ddpm_cd.py --config config/cdd.json -enable_wandb -log_eval

The results will be saved in the experiments folder and also uploaded to wandb.

6. Testing

To obtain the predictions and performance metrics (IoU, F1, and OA), first provide the path to the pre-trained diffusion model under [path][resume_state] and the path to the trained change detection model (the best model) under [path_cd][resume_state] in levir_test.json, whu_test.json, dsifn_test.json, and cdd_test.json. Also make sure you specify the time-steps used during fine-tuning in [model_cd][t].
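
A sketch of the corresponding fields in levir_test.json (the checkpoint paths are placeholders, and the time-steps must match the ones used during fine-tuning):

"path": {
  "resume_state": "experiments/<ddpm-run>/checkpoint/<iteration>"
},
"path_cd": {
  "resume_state": "experiments/<cd-run>/checkpoint/<best-cd-model>"
},
"model_cd": {
  "t": [50, 100, 400]
}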

Run the following commands to start testing:

  • Test on LEVIR-CD:
    python ddpm_cd.py --config config/levir_test.json --phase test -enable_wandb -log_eval
  • Test on WHU-CD:
    python ddpm_cd.py --config config/whu_test.json --phase test -enable_wandb -log_eval
  • Test on DSIFN-CD:
    python ddpm_cd.py --config config/dsifn_test.json --phase test -enable_wandb -log_eval
  • Test on CDD:
    python ddpm_cd.py --config config/cdd_test.json --phase test -enable_wandb -log_eval

Predictions will be saved in the experiments folder and performance metrics will be uploaded to wandb.

7. Links to download pre-trained models

7.1 Pre-trained DDPM

The pre-trained diffusion model can be downloaded from: Dropbox

7.2 Fine-tuned DDPM-CD models

The fine-tuned change detection networks can be downloaded from the following links:

7.3 Downloading from Google Drive/GitHub

If you face a problem downloading from Dropbox, try one of the following options:

7.4 Train/Val Reports on wandb

7.5 Test results on wandb

8. Results

8.1 Quantitative


The average quantitative change detection results on the LEVIR-CD, WHU-CD, DSIFN-CD, and CDD test sets. "-" indicates not reported or not available to us. (IN1k) indicates that pre-training is initialized with ImageNet pre-trained weights. IN1k, IBSD, and GE refer to ImageNet1k, the Inria Building Segmentation Dataset, and Google Earth, respectively.

8.2 Qualitative

  • LEVIR-CD


    (a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are shown in white, true negatives (no-change class) in black, and false positives plus false negatives in red.

  • WHU-CD


    (a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are shown in white, true negatives (no-change class) in black, and false positives plus false negatives in red.

  • DSIFN-CD


    (a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are shown in white, true negatives (no-change class) in black, and false positives plus false negatives in red.

  • CDD


    (a) Pre-change image, (b) Post-change image, (c) FC-EF, (d) FC-Siam-diff, (e) FC-Siam-conc, (f) DT-SCN, (g) BIT, (h) ChangeFormer, (i) DDPM-CD (ours), and (j) Ground-truth. Note: true positives (change class) are shown in white, true negatives (no-change class) in black, and false positives plus false negatives in red.

9. Citation

@misc{bandara2024ddpmcdv2,
    title={Remote Sensing Change Detection (Segmentation) using Denoising Diffusion Probabilistic Models},
    author={Bandara, Wele Gedara Chaminda and Nair, Nithin Gopalakrishnan and Patel, Vishal M.},
    year={2022},
    eprint={2206.11892},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    doi={10.48550/ARXIV.2206.11892},
}

@misc{bandara2024ddpmcdv3,
    title={DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors for Change Detection},
    author={Wele Gedara Chaminda Bandara and Nithin Gopalakrishnan Nair and Vishal M. Patel},
    year={2024},
    eprint={2206.11892},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    doi={10.48550/ARXIV.2206.11892},
}

10. References

  • The code of the diffusion model is from here.