DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation


Authors: Bowen Yin, Xuying Zhang, Zhongyu Li, Li Liu, Ming-Ming Cheng, Qibin Hou*

This official repository contains the source code, pre-trained and trained checkpoints, and the evaluation toolbox for the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation'. The technical report can be found on arXiv. The code for pre-training and RGB-D saliency will be released soon.

We invite everyone to contribute to making it more accessible and useful. If you have any questions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn). If you use our code and evaluation toolbox in your research, please cite this paper (BibTeX).


Figure 1: Comparisons between the existing methods and our DFormer (RGB-D Pre-training).


Figure 2: Overview of the DFormer.

1. 🌟 NEWS

  • [2023/09/05] Released the codebase of DFormer and all pre-trained checkpoints.

2. 🚀 Get Started

0. Install

conda create -n dformer python=3.10 -y
conda activate dformer
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
pip install tqdm opencv-python scipy tensorboardX tabulate easydict
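
To verify the environment before moving on, you can run a quick check in Python (a minimal sketch; the expected version strings correspond to the pip commands above):

import torch
import torchvision
import mmcv

# Versions should match the install commands above (1.11.0+cu113 / 0.12.0+cu113).
print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('mmcv-full:', mmcv.__version__)
print('CUDA available:', torch.cuda.is_available())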

1. Download Datasets and Checkpoints.

  • Datasets:

By default, put the datasets in the folder 'datasets', or link an existing copy with 'ln -s path_to_data datasets'.

Datasets GoogleDrive OneDrive BaiduNetdisk
  • Checkpoints:

ImageNet-1K Pre-trained DFormers T/S/B/L can be downloaded at

Pre-trained GoogleDrive OneDrive BaiduNetdisk

NYUDepth v2 trained DFormers T/S/B/L can be downloaded at

NYUDepth v2 GoogleDrive OneDrive BaiduNetdisk

SUNRGBD trained DFormers T/S/B/L can be downloaded at

SUNRGBD GoogleDrive OneDrive BaiduNetdisk
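
To confirm a downloaded checkpoint is intact, you can load it on the CPU and peek at its contents. This is only an illustration: the path follows the folder layout shown below, and the internal layout of the archive is not documented here.

import torch

# Illustrative path; adjust to where you placed the file.
ckpt = torch.load('checkpoints/pretrained/DFormer_Large.pth.tar', map_location='cpu')
# The checkpoint's internal structure is not specified here, so just list top-level keys.
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))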

Organize the checkpoints and dataset folders in the following structure:

<checkpoints>
|-- <pretrained>
    |-- <DFormer_Large.pth.tar>
    |-- <DFormer_Base.pth.tar>
    |-- <DFormer_Small.pth.tar>
    |-- <DFormer_Tiny.pth.tar>
|-- <trained>
    |-- <NYUDepthv2>
        |-- ...
    |-- <SUNRGBD>
        |-- ...
<datasets>
|-- <DatasetName1>
    |-- <RGB>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...
    |-- <Depth>
        |-- <name1>.<DepthFormat>
        |-- <name2>.<DepthFormat>
    |-- train.txt
    |-- test.txt
|-- <DatasetName2>
|-- ...
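
Before training, it can help to sanity-check that a dataset folder matches this layout. The sketch below assumes only what the structure above shows (an RGB folder, a Depth folder, and train/test lists); the dataset name is illustrative, and the exact list format is defined by this repo's dataloader, so treat the line count as a rough check:

import os

root = 'datasets/NYUDepthv2'  # illustrative dataset folder name

for sub in ('RGB', 'Depth'):
    status = 'ok' if os.path.isdir(os.path.join(root, sub)) else 'MISSING'
    print(sub + '/:', status)

for split in ('train.txt', 'test.txt'):
    path = os.path.join(root, split)
    if os.path.isfile(path):
        with open(path) as f:
            n = sum(1 for line in f if line.strip())
        print(split + ':', n, 'non-empty lines')
    else:
        print(split + ':', 'MISSING')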

2. Train.

You can change the `local_config` files in the script to choose the model for training.

bash train.sh

3. Eval.

You can change the `local_config` files and the checkpoint path in the script to choose the model for testing.

bash eval.sh

🚩 Performance



🕙 ToDo

  • Release the code of RGB-D pre-training.
  • Release the DFormer code for RGB-D salient object detection.

We invite everyone to contribute to making it more accessible and useful. If you have any questions or suggestions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn) or raise an issue.

Reference

You may want to cite:

@article{yin2023dformer,
  title={DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation},
  author={Yin, Bowen and Zhang, Xuying and Li, Zhongyu and Liu, Li and Cheng, Ming-Ming and Hou, Qibin},
  journal={arXiv preprint arXiv:2309.09668},
  year={2023}
}

Acknowledgment

Our implementation is mainly based on mmsegmentation, CMX, and CMNext. Thanks to their authors.

License

Code in this repo is for non-commercial use only.
