Project Page | Paper | Video
Deep Hough Voting for Robust Global Registration
Junha Lee1,
Seungwook Kim1,
Minsu Cho1,
Jaesik Park1
1POSTECH CSE & GSAI
in ICCV 2021
Point cloud registration is the task of estimating the rigid transformation that aligns a pair of point cloud fragments. We present an efficient and robust framework for pairwise registration of real-world 3D scans, leveraging Hough voting in the 6D transformation parameter space. First, deep geometric features are extracted from a point cloud pair to compute putative correspondences. We then construct a set of triplets of correspondences to cast votes on the 6D Hough space, representing the transformation parameters in sparse tensors. Next, a fully convolutional refinement module is applied to refine the noisy votes. Finally, we identify the consensus among the correspondences from the Hough space, which we use to predict our final transformation parameters. Our method outperforms state-of-the-art methods on the 3DMatch and 3DLoMatch benchmarks while achieving comparable performance on the KITTI odometry dataset. We further demonstrate the generalizability of our approach by setting a new state-of-the-art on the ICL-NUIM dataset, where we integrate our module into a multi-way registration pipeline.
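For intuition, below is a minimal NumPy sketch of the triplet-based voting idea. It is illustrative only and is not the DHVR implementation, which represents the vote space with sparse tensors and refines it with a fully convolutional network; the bin sizes and the number of sampled triplets are arbitrary choices for the example.

# Minimal NumPy sketch of triplet-based 6D Hough voting (illustration only).
import numpy as np

def kabsch(src, dst):
    # Least-squares rigid transform (R, t) mapping src points onto dst points.
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def rotation_to_euler(R):
    # Rotation matrix to (rx, ry, rz) Euler angles, ZYX convention (ignores gimbal lock).
    ry = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return np.array([rx, ry, rz])

def hough_vote(corr_src, corr_dst, num_triplets=2000, angle_bin=0.05, trans_bin=0.1):
    # corr_src[i] and corr_dst[i] are the 3D points of the i-th putative correspondence.
    votes = {}
    rng = np.random.default_rng(0)
    for _ in range(num_triplets):
        idx = rng.choice(len(corr_src), 3, replace=False)
        R, t = kabsch(corr_src[idx], corr_dst[idx])
        param = np.concatenate([rotation_to_euler(R) / angle_bin, t / trans_bin])
        key = tuple(np.round(param).astype(int))  # discretize into a 6D bin
        votes[key] = votes.get(key, 0) + 1
    best = max(votes, key=votes.get)              # consensus bin
    bins = np.array([angle_bin] * 3 + [trans_bin] * 3)
    return np.array(best) * bins, votes[best]     # (rx, ry, rz, tx, ty, tz), vote count

Because each triplet of correspondences determines a full 6-DoF rigid transform, votes from inlier triplets concentrate in the correct bin while votes from outlier-contaminated triplets scatter across the space, making the consensus robust to a large fraction of wrong correspondences.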
@InProceedings{lee2021deephough,
title={Deep Hough Voting for Robust Global Registration},
author={Junha Lee and Seungwook Kim and Minsu Cho and Jaesik Park},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2021}
}
(Figures: speed vs. accuracy comparison and qualitative registration results.)
This repository is developed and tested on
- Ubuntu 18.04
- CUDA 11.1
- Python 3.8.11
- PyTorch 1.9.1
- MinkowskiEngine 0.5.4
Our pipeline is built on MinkowskiEngine. You can install MinkowskiEngine and the Python requirements with:
# setup requirements for MinkowskiEngine
conda create -n dhvr python=3.8
conda install pytorch=1.9.1 torchvision cudatoolkit=11.1 -c pytorch -c nvidia
conda install numpy
conda install openblas-devel -c anaconda
# install MinkowskiEngine
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"
# download and setup DHVR
git clone https://github.com/junha-l/DHVR.git
cd DHVR
pip install -r requirements.txt
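After installation, a quick sanity check is to confirm that PyTorch and MinkowskiEngine import correctly inside the dhvr environment (this snippet is not part of the DHVR scripts and assumes a CUDA-capable GPU is visible):

# Sanity check for the dhvr environment (illustrative, not part of the repository).
import torch
import MinkowskiEngine as ME

print(torch.__version__, torch.cuda.is_available())
print(ME.__version__)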
We also depend on torch-batch-svd, an open-source library for up to 100x faster batched SVD on the GPU.
You can follow the instructions below to install torch-batch-svd:
# If your CUDA installation is not at /usr/local/cuda, specify its root via CUDA_HOME.
(CUDA_HOME=PATH/TO/CUDA/ROOT) bash scripts/install_3rdparty.sh
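To verify the build, you can run a small batched SVD on the GPU. The snippet below assumes the svd entry point documented in the torch-batch-svd README:

# Quick check that torch-batch-svd works (illustrative; API assumed from its README).
import torch
from torch_batch_svd import svd

A = torch.rand(100, 3, 3).cuda()
U, S, V = svd(A)               # batched SVD on the GPU
print(U.shape, S.shape, V.shape)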
You can download the preprocessed training dataset, provided by the authors of FCGF, with the following commands:
# download 3dmatch train set
bash scripts/download_3dmatch.sh PATH/TO/3DMATCH
# create symlink
ln -s PATH/TO/3DMATCH ./dataset/3dmatch
The official 3DMatch test set is available on the official website. Download the fragment data of the Geometric Registration Benchmark and decompress it into a new folder.
Then create a symlink with the following command:
ln -s PATH/TO/3DMATCH_TEST ./dataset/3dmatch-test
The default feature extractor in our experiments is FCGF. You can download pretrained FCGF models with the following command:
bash scripts/download_weights.sh
Then, train with:
python train.py config/train_3dmatch.gin --run_name NAME_OF_EXPERIMENT
You can test DHVR with the following commands:
python test.py config/test_3dmatch.gin --run_name EXP_NAME --load_path PATH/TO/CHECKPOINT
python test.py config/test_3dlomatch.gin --run_name EXP_NAME --load_path PATH/TO/CHECKPOINT
We also provide pretrained weights on the 3DMatch dataset. You can download the checkpoint from the following link.
Our code is based on MinkowskiEngine. We also refer to FCGF, DGR, and torch-batch-svd.