This repository contains the code for the CVPR 2023 paper "NeuMap: Neural Coordinate Mapping by Auto-Transdecoder for Camera Localization".
If you find this project useful, please cite:
```bibtex
@article{Tang2022NeuMap,
  title={NeuMap: Neural Coordinate Mapping by Auto-Transdecoder for Camera Localization},
  author={Shitao Tang and Sicong Tang and Andrea Tagliasacchi and Ping Tan and Yasutaka Furukawa},
  journal={arXiv preprint arXiv:2211.11177},
  year={2022}
}
```
The code is based on LoFTR.

Note: we made a mistake when evaluating the GreatCourt scene of Cambridge; only part of the test images were evaluated, so the corresponding number in the paper is unreliable.
```shell
# For full pytorch-lightning trainer features (recommended)
conda create --name neumap python=3.8
conda activate neumap
pip install -r requirement.txt
```
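To confirm the environment is usable before running any script, here is a minimal sanity check (it only assumes that requirement.txt pulls in torch and pytorch-lightning):

```python
# sanity_check.py -- verify the core dependencies import and CUDA is visible.
import torch
import pytorch_lightning as pl

print("torch:", torch.__version__)
print("pytorch-lightning:", pl.__version__)
print("CUDA available:", torch.cuda.is_available())
```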
- We provide testing code and pretrained models for 7Scenes, Cambridge, Aachen, NAVER LABS, and ScanNet. Download the pretrained models from Google Drive/Dropbox.
- The file structure should look like the following:
```
data
├── kapture
│   ├── train_list
│   ├── aachen1.0
│   ├── HyundaiDepartmentStore
│   ├── GreatCourt
│   ├── chess
│   ...
├── scannet
│   ├── train_list
│   ├── train
```
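To verify the layout programmatically before running any script, a small check such as the following can help (a sketch; the directory list is illustrative, not exhaustive):

```python
# check_layout.py -- warn about missing directories in the expected structure.
from pathlib import Path

ROOT = Path("data")
# Illustrative subset of the expected directories; extend as needed.
EXPECTED = [
    "kapture/train_list",
    "kapture/aachen1.0",
    "kapture/HyundaiDepartmentStore",
    "kapture/GreatCourt",
    "kapture/chess",
    "scannet/train_list",
    "scannet/train",
]

for rel in EXPECTED:
    path = ROOT / rel
    status = "ok" if path.is_dir() else "MISSING"
    print(f"{status:8s} {path}")
```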
For Aachen, we provide the processed data in Google Drive/Dropbox and the label files in Google Drive/Dropbox. Please submit the corresponding pose.txt file in the result directory to this link for evaluation.

```shell
bash scripts/reproduce_test/aachen_v10.sh
```
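For reference, evaluation servers for this benchmark family typically expect one line per query image in the format `name qw qx qy qz tx ty tz` (rotation as a quaternion). A small validator, assuming the generated pose.txt follows that convention:

```python
# validate_poses.py -- sanity-check a pose.txt before submission.
# Assumes the visuallocalization.net format: name qw qx qy qz tx ty tz.
import math
import sys

with open(sys.argv[1]) as f:
    for lineno, line in enumerate(f, 1):
        fields = line.split()
        if len(fields) != 8:
            print(f"line {lineno}: expected 8 fields, got {len(fields)}")
            continue
        qw, qx, qy, qz = map(float, fields[1:5])
        norm = math.sqrt(qw * qw + qx * qx + qy * qy + qz * qz)
        if abs(norm - 1.0) > 1e-3:
            print(f"line {lineno}: quaternion norm {norm:.4f} is far from 1")
```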
For 7Scenes, we provide the processed data in Google Drive/Dropbox and the label files in Google Drive/Dropbox.

```shell
bash scripts/reproduce_test/7scenes.sh
```
For Cambridge, we provide the processed data in Google Drive/Dropbox and the label files in Google Drive/Dropbox.

```shell
bash scripts/reproduce_test/cambridge.sh
```
For NAVER LABS (HyundaiDepartmentStore), we provide the processed data in Google Drive/Dropbox and the label files in Google Drive/Dropbox.

```shell
bash scripts/reproduce_test/store.sh
```
For ScanNet, we provide the testing data in Google Drive/Dropbox and the label files in Google Drive/Dropbox.

```shell
bash scripts/reproduce_test/scannet.sh
```
First-stage training on Aachen:

```shell
bash scripts/reproduce_train/aachen_v10_stage1.sh
```

Second-stage training:

```shell
bash scripts/reproduce_train/aachen_v10_stage2.sh
```
First-stage training on 7Scenes:

```shell
bash scripts/reproduce_train/7scenes_stage1.sh
```

Second-stage training:

```shell
bash scripts/reproduce_train/7scenes_stage2.sh
```
First-stage training on Cambridge:

```shell
bash scripts/reproduce_train/cambridge_stage1.sh
```

Second-stage training:

```shell
bash scripts/reproduce_train/cambridge_stage2.sh
```
Training on NAVER LABS (HyundaiDepartmentStore):

```shell
bash scripts/reproduce_train/store.sh
```
Train the network on ScanNet:

```shell
bash scripts/reproduce_train/scannet.sh
```
Fine-tune the codes for new scenes:

```shell
bash scripts/reproduce_train/scannet_code_finetune.sh
```
To compute the data size, please refer to `tools/get_params.py` and `tools/get_params_cambridge.py`.
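The reported data size is essentially the storage taken by the learned scene codes. A minimal sketch of the idea, assuming a standard PyTorch checkpoint (the "code" key filter is a hypothetical example; adapt it to the actual parameter names in the checkpoint):

```python
# code_size.py -- rough size of the scene codes stored in a checkpoint.
import sys

import torch

state = torch.load(sys.argv[1], map_location="cpu")
state = state.get("state_dict", state)  # unwrap a Lightning checkpoint if present

# Hypothetical filter: keep only tensors whose name mentions "code".
n_params = sum(t.numel() for k, t in state.items() if "code" in k)
print(f"{n_params} parameters, ~{n_params * 4 / 1e6:.2f} MB at float32")
```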
We provide example code to divide scenes into voxels and generate label files in `tools/get_label_file.py` and `tools/get_query_file.py`.
```shell
python tools/get_label_file.py --root data/kapture -i aachen1.0/mapping -s 15 -o aachen_debug
```
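Conceptually, the voxel split just assigns each 3D point to a cube of side `-s` (15 here). A sketch of the idea (not the script's actual implementation):

```python
# voxelize.py -- group 3D points into cubic voxels of a given side length.
from collections import defaultdict

import numpy as np

def split_into_voxels(points: np.ndarray, size: float) -> dict:
    """points: (N, 3) world coordinates. Returns voxel index -> point indices."""
    ids = np.floor(points / size).astype(np.int64)  # integer voxel coordinates
    voxels = defaultdict(list)
    for i, idx in enumerate(ids):
        voxels[tuple(int(v) for v in idx)].append(i)
    return voxels

# Example: two points 20 units apart along x fall into different 15-unit voxels.
pts = np.array([[1.0, 2.0, 3.0], [21.0, 2.0, 3.0]])
print(dict(split_into_voxels(pts, 15.0)))  # {(0, 0, 0): [0], (1, 0, 0): [1]}
```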
Generate fake query images:

```shell
python tools/get_query_file.py \
    -o query_regular_15 \
    --retrieval_path data/kapture/aachen1.0/pairs-query-netvlad30.txt \
    --root data/kapture \
    --train_list_path data/kapture/train_list/aachen_debug.txt
```
To run on a custom dataset, first use R2D2 to extract keypoints and triangulate 3D points with COLMAP. Then, convert the COLMAP output files to the same format as `data/kapture/aachen1.0/mapping/points`.
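As a starting point for that conversion, COLMAP's text export `points3D.txt` can be parsed as below (a sketch; match the output against the files under `data/kapture/aachen1.0/mapping/points`, whose exact format this does not reproduce):

```python
# read_colmap_points.py -- parse COLMAP's points3D.txt text export.
# Each data line is: POINT3D_ID X Y Z R G B ERROR (IMAGE_ID POINT2D_IDX)...
def read_points3d(path):
    points = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            fields = line.split()
            point_id = int(fields[0])
            xyz = tuple(map(float, fields[1:4]))
            rgb = tuple(map(int, fields[4:7]))
            points[point_id] = (xyz, rgb)
    return points

points = read_points3d("points3D.txt")
print(f"{len(points)} points loaded")
```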
We thank Martin Humenberger and Philippe Weinzaepfel for their ESAC code on the NAVER LABS datasets. We thank Luwei Yang for evaluating Squeezer under different data sizes. We thank Jiahui Zhang and Jiacheng Chen for proofreading. This research is supported by NSERC Discovery Grants, NSERC Discovery Grants Accelerator Supplements, a DND/NSERC Discovery Grant Supplement, and the John R. Evans Leaders Fund (JELF).