Displacement_Field

Official implementation of the CVPR 2020 paper "Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields": paper link

NYUv2-OC++ dataset (for test use only): download link

Visualization

1D example


2D example on a blurry depth image (prediction of a depth estimator)

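The visualizations above illustrate the core idea of the paper: a network predicts a 2D displacement field over the blurry depth map, and each pixel is replaced by the depth value resampled at its displaced location, which sharpens occlusion boundaries. Below is a minimal sketch of that resampling step using `grid_sample`, assuming a recent PyTorch; the function name `apply_displacement_field` and the `(B, 2, H, W)` pixel-offset convention are illustrative and not this repository's API.

```python
import torch
import torch.nn.functional as F

def apply_displacement_field(depth, disp):
    """depth: (B, 1, H, W) blurry depth map; disp: (B, 2, H, W) per-pixel (x, y) offsets in pixels."""
    b, _, h, w = depth.shape
    # Base sampling grid in the normalized [-1, 1] coordinates expected by grid_sample.
    xs = torch.linspace(-1.0, 1.0, w, device=depth.device).view(1, 1, w, 1).expand(b, h, w, 1)
    ys = torch.linspace(-1.0, 1.0, h, device=depth.device).view(1, h, 1, 1).expand(b, h, w, 1)
    base = torch.cat((xs, ys), dim=-1)  # (B, H, W, 2), last dim = (x, y)
    # Convert pixel offsets to normalized offsets (pixel spacing is 2/(size-1) with align_corners=True).
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1)], device=depth.device)
    grid = base + disp.permute(0, 2, 3, 1) * scale  # displaced sampling locations
    # Each output pixel takes the depth value sampled at its displaced location.
    return F.grid_sample(depth, grid, mode="bilinear", padding_mode="border", align_corners=True)
```

Intuitively, offsets of only a few pixels directed across each occlusion boundary are enough to replace the blurry transition with the crisp foreground or background depth value.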

Requirements:

  • PyTorch >= 0.4
  • OpenCV
  • CUDA >= 8.0 (versions below 8.0 are untested)
  • Easydict

Data Preparation

sh download.sh

Training

# Use depth only as input
cd model/nyu/df_nyu_depth_only
python train.py -d 0   # -d: GPU device id to use

# Use RGB image as guidance
cd model/nyu/df_nyu_rgb_guidance
python train.py -d 0

Citation

@InProceedings{Ramamonjisoa_2020_CVPR,
author = {Ramamonjisoa, Michael and Du, Yuming and Lepetit, Vincent},
title = {Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Miscellaneous

The model can be trained with only synthetic data (SceneNet, for example) and generalizes naturally to real data.

Acknowledgement

The code is based on TorchSeg.

The NYUv2-OC++ dataset was annotated manually by 4 PhD students majoring in computer vision. Special thanks to Yang Xiao and Xuchong Qiu for their help in annotating the NYUv2-OC++ dataset.
