This repository contains the official implementation of the following papers:
DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation
Bowen Yin, Xuying Zhang, Zhongyu Li, Li Liu, Ming-Ming Cheng, Qibin Hou*
ICLR 2024. Paper Link | Homepage | WeChat interpretation (集智书童) | DFormer-SOD | Jittor-Version |
DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation
Bo-Wen Yin, Jiao-Long Cao, Ming-Ming Cheng, Qibin Hou*
CVPR 2025. Paper Link | Geometry prior demo |
🤖RGBD-Pretrain(You can train your own encoders)
⚓Application to new datasets (adding new datasets)
We provide the geometry prior generation code of DFormerv2, which you can build on to further advance depth-related research. We also provide the RGBD pretraining code in RGBD-Pretrain, so you can pretrain more powerful RGBD encoders and contribute to RGBD research.
We invite everyone to contribute to making this work more accessible and useful. If you have any questions about our work, feel free to contact us via e-mail (bowenyin@mail.nankai.edu.cn, caojiaolong@mail.nankai.edu.cn). If you use our code and evaluation toolbox in your research, please cite this paper (BibTeX).
Figure 1: Comparisons between the existing methods and our DFormer (RGB-D Pre-training).
Figure 2: Comparisons among the main RGBD segmentation pipelines and our approach. (a) Use dual encoders to encode RGB and depth separately and design fusion modules to fuse them, e.g., CMX and GeminiFusion; (b) adopt a unified RGBD encoder to extract and fuse RGBD features, e.g., DFormer; (c) DFormerv2 uses depth to form a geometry prior of the scene and then enhances the visual features.
Figure 3: The geometry attention map in our DFormerv2 compared with other attention mechanisms. Our geometry attention is endowed with 3D geometry perception and can focus on the related regions of the whole scene.
A simple visualization demo is provided at
https://huggingface.co/spaces/bbynku/DFormerv2.
- [2025/04/08] The code of DFormerv2 is available.
- [2025/03/09] Our DFormerv2 has been accepted by CVPR 2025.
- [2025/02/19] The Jittor implementation of DFormer is available at Jittor-Version.
- [2024/10/12] Based on our DFormer, Wu's method UBCRCL won the runner-up award in the SegSTRONG-C Subchallenge of the Endoscopic Vision Challenge at MICCAI 2024. Congratulations!
- [2024/04/21] We have upgraded and optimized the framework, greatly reducing training time, e.g., the training duration for DFormer-L is reduced from over 1 day to ~12 hours.
- [2024/01/16] Our DFormer has been accepted by the International Conference on Learning Representations (ICLR 2024).
0. Install
conda create -n dformer python=3.10 -y
conda activate dformer
# CUDA 11.8
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu118/torch2.1/index.html
pip install tqdm opencv-python scipy tensorboardX tabulate easydict ftfy regex
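After installation, a quick sanity check can confirm that PyTorch sees the GPU and that mmcv imports correctly. This is a minimal sketch; the expected versions simply follow the install commands above.

```python
# Quick environment sanity check after installation.
import torch
import mmcv

print("torch:", torch.__version__)            # expected 2.1.2
print("mmcv:", mmcv.__version__)              # expected 2.1.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```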
1. Download Datasets and Checkpoints.
- Datasets:
By default, you can put the datasets into the folder `datasets` or use `ln -s path_to_data datasets`.
Datasets | GoogleDrive | OneDrive | BaiduNetdisk |
---|---|---|---|
Compared to the original datasets, we map the depth maps (.npy) to .png via `plt.imsave(save_path, np.load(depth), cmap='Greys_r')`, reorganize the file paths into a clear format, and add the split files (.txt).
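For reference, a minimal conversion sketch based on the command above (matplotlib is required; the two paths are placeholders):

```python
# Convert a raw .npy depth map to the .png format used in this repo,
# following plt.imsave(save_path, np.load(depth), cmap='Greys_r').
import numpy as np
import matplotlib.pyplot as plt

depth_npy_path = "path/to/depth.npy"   # placeholder input path
save_path = "path/to/depth.png"        # placeholder output path

depth = np.load(depth_npy_path)        # HxW depth array
plt.imsave(save_path, depth, cmap="Greys_r")
```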
- Checkpoints:
ImageNet-1K pretrained and NYUDepthv2- or SUNRGBD-trained DFormer-T/S/B/L and DFormerv2-S/B/L checkpoints can be downloaded at:
Weights | DFormer | DFormerv2 |
---|---|---|
Pretrained | GoogleDrive, OneDrive, BaiduNetdisk | BaiduNetdisk, HuggingFace |
NYUDepthv2 | GoogleDrive, OneDrive, BaiduNetdisk | BaiduNetdisk, HuggingFace |
SUNRGBD | GoogleDrive, OneDrive, BaiduNetdisk | BaiduNetdisk, HuggingFace |
Organize the checkpoints and datasets folders in the following structure:
    <checkpoints>
    |-- <pretrained>
        |-- <DFormer_Large.pth.tar>
        |-- <DFormer_Base.pth.tar>
        |-- <DFormer_Small.pth.tar>
        |-- <DFormer_Tiny.pth.tar>
        |-- <DFormerv2_Large_pretrained.pth>
        |-- <DFormerv2_Base_pretrained.pth>
        |-- <DFormerv2_Small_pretrained.pth>
    |-- <trained>
        |-- <NYUDepthv2>
            |-- ...
        |-- <SUNRGBD>
            |-- ...
    <datasets>
    |-- <DatasetName1>
        |-- <RGB>
            |-- <name1>.<ImageFormat>
            |-- <name2>.<ImageFormat>
            ...
        |-- <Depth>
            |-- <name1>.<DepthFormat>
            |-- <name2>.<DepthFormat>
        |-- train.txt
        |-- test.txt
    |-- <DatasetName2>
        |-- ...
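If you are preparing your own dataset in this layout, the split files can be generated from the RGB folder. The sketch below is a hypothetical helper that assumes one sample name per line in train.txt/test.txt; check the split files shipped with the provided datasets for the exact format your config expects.

```python
# Hypothetical helper: write train.txt/test.txt for a dataset laid out as above.
# The one-name-per-line format and the 80/20 split are assumptions.
import os
import random

dataset_root = "datasets/DatasetName1"   # placeholder dataset folder
rgb_dir = os.path.join(dataset_root, "RGB")
names = sorted(os.path.splitext(f)[0] for f in os.listdir(rgb_dir))

random.seed(0)
random.shuffle(names)
split = int(0.8 * len(names))            # adjust the split ratio as needed

with open(os.path.join(dataset_root, "train.txt"), "w") as f:
    f.write("\n".join(names[:split]) + "\n")
with open(os.path.join(dataset_root, "test.txt"), "w") as f:
    f.write("\n".join(names[split:]) + "\n")
```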
2. Train.
You can change the config file under `local_configs` in the script to choose the model for training.
bash train.sh
After training, the checkpoints will be saved under `checkpoints/XXX`, where `XXX` depends on the training config.
3. Eval.
You can change the config file under `local_configs` and the checkpoint path in the script to choose the model for testing.
bash eval.sh
4. Visualize.
bash infer.sh
5. FLOPs & Parameters.
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH python benchmark.py --config local_configs.NYUDepthv2.DFormer_Large
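If you only need a rough check outside the provided script, parameters and FLOPs of any `torch.nn.Module` can be counted as below. This is a generic sketch, not the repo's benchmark.py: the ResNet-18 stand-in and the 480x640 input size are illustrative, and fvcore is an extra dependency (`pip install fvcore`).

```python
# Generic parameter/FLOP counting for a torch.nn.Module.
import torch
from torchvision.models import resnet18   # stand-in model, not DFormer
from fvcore.nn import FlopCountAnalysis

model = resnet18().eval()
dummy = torch.randn(1, 3, 480, 640)        # illustrative input resolution

params = sum(p.numel() for p in model.parameters())
flops = FlopCountAnalysis(model, dummy).total()
print(f"Params: {params / 1e6:.2f} M, FLOPs: {flops / 1e9:.2f} G")
```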
6. Latency.
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH python utils/latency.py --config local_configs.NYUDepthv2.DFormer_Large
P.S.: Latency depends heavily on the device, so it is recommended to compare latency measurements on the same device.
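For a rough standalone measurement, the sketch below times forward passes with warmup and CUDA synchronization. The stand-in model and input resolution are illustrative only; `utils/latency.py` remains the reference for the reported numbers, and a GPU is required.

```python
# Measure average GPU latency with warmup and synchronization.
import time
import torch
from torchvision.models import resnet18   # stand-in model, not DFormer

model = resnet18().cuda().eval()
x = torch.randn(1, 3, 480, 640).cuda()     # illustrative input resolution

with torch.no_grad():
    for _ in range(10):                     # warmup iterations
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(50):                     # timed iterations
        model(x)
    torch.cuda.synchronize()

print(f"Average latency: {(time.time() - start) / 50 * 1000:.2f} ms")
```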
Table 1: Comparisons between the existing methods and our DFormer.
Table 2: Comparisons between the existing methods and our DFormerv2.
- Tutorial on applying the DFormer encoder to the frameworks of other tasks
- Release the code of RGB-D pre-training.
- Tutorial on applying to a new dataset.
- Release the DFormer code for RGB-D salient object detection.
We invite everyone to contribute to making this work more accessible and useful. If you have any questions or suggestions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn) or raise an issue.
You may want to cite:
@inproceedings{yin2024dformer,
title={DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation},
author={Yin, Bowen and Zhang, Xuying and Li, Zhong-Yu and Liu, Li and Cheng, Ming-Ming and Hou, Qibin},
booktitle={ICLR},
year={2024}
}
@inproceedings{dformerv2,
title={DFormerv2: Geometry Self-Attention for RGBD Semantic Segmentation},
author={Bo-Wen Yin and Jiao-Long Cao and Ming-Ming Cheng and Qibin Hou},
booktitle={CVPR},
year={2025}
}
Our implementation is mainly based on mmsegmentation, CMX, and CMNext. Thanks to their authors.
Code in this repo is for non-commercial use only.