Official code release for "NeRFuser: Large-Scale Scene Representation by NeRF Fusion" [paper].
- Create a conda environment:

  ```sh
  conda create -n nerfuser -y python=3.10 && conda activate nerfuser
  ```

- Install nerfstudio and its dependencies:

  ```sh
  pip install torch torchvision
  pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  pip install nerfstudio
  ```

- Install hloc:

  ```sh
  git clone --recurse-submodules git@github.com:cvg/Hierarchical-Localization.git && pip install -e Hierarchical-Localization
  ```

- Misc:

  ```sh
  # due to a bug in open3d 0.17.0, we use the previous version
  pip install imageio-ffmpeg open3d==0.16.0
  ```

- Install nerfuser:

  ```sh
  git clone git@github.com:ripl/nerfuser.git && cd nerfuser/
  pip install .
  ```
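A minimal, optional sanity check that the core packages import and that PyTorch can see a GPU:

```sh
python -c "import torch, open3d, nerfstudio; print('CUDA available:', torch.cuda.is_available())"
```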
The data preparation assumes that you have several videos focusing on different yet overlapping portions of the same scene. Let `ext` denote the video file extension (e.g. `mp4`, `mov`, etc.); one of the videos should be named `test.ext`, from which images will be extracted for blending evaluation. The others can be named whatever you like. Assume you have collected 3 more videos besides `test.ext`, whose names without the `ext` extension are stored in the shell variables `A`, `B` and `C`. First put all the video files (including `test.ext`) in the directory `DATASET_DIR`, then run the following command to prepare data for training NeRFs:
```sh
python -m nerfuser.prep_data \
    --dataset-dir $DATASET_DIR \
    --vid-ids test $A $B $C \
    --downsample 8 \
    --extract-images \
    --run-sfm \
    --write-json \
    --vis
```
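For concreteness, a hypothetical setup of the shell variables used above (the names and path are placeholders; substitute your own):

```sh
# hypothetical values: ext = mp4, with extra videos kitchen.mp4, hallway.mp4, office.mp4
DATASET_DIR=/path/to/my_scene   # contains test.mp4 plus the three videos above
A=kitchen
B=hallway
C=office
```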
Please run `python -m nerfuser.prep_data -h` for more details. A sample dataset containing both videos and prepared data is provided here.
Let `MODELS_DIR` be the directory where you want to save the trained NeRF models. Run the following command to train a NeRF model corresponding to each video other than `test`:
```sh
for VID in $A $B $C; do
    ns-train nerfacto \
        --output-dir $MODELS_DIR \
        --data $DATASET_DIR/$VID \
        --viewer.quit-on-train-completion True \
        --pipeline.datamanager.camera-optimizer.mode off
done
```
Please run `ns-train nerfacto -h` for more details. Trained NeRF models on the sample dataset are provided here. Note that the provided model checkpoints were trained with nerfstudio `0.3.2`. If you encounter issues loading them, consider either installing that exact version of `nerfstudio`, or training your own models as above.
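Each training run writes its checkpoint under `$MODELS_DIR/<video>/nerfacto/<timestamp>/nerfstudio_models`. A minimal sketch for picking up the run timestamps, assuming a single training run per video:

```sh
# take the most recent run directory for each model (assumes one run each)
TS_A=$(ls -t $MODELS_DIR/$A/nerfacto | head -n 1)
TS_B=$(ls -t $MODELS_DIR/$B/nerfacto | head -n 1)
TS_C=$(ls -t $MODELS_DIR/$C/nerfacto | head -n 1)
```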
Let `TS_A`, `TS_B` and `TS_C` be the timestamps of the trained NeRF models for videos `A`, `B` and `C` respectively. Run the following command to register the NeRF models:
```sh
python -m nerfuser.registration \
    --model-dirs $MODELS_DIR/$A/nerfacto/$TS_A/nerfstudio_models $MODELS_DIR/$B/nerfacto/$TS_B/nerfstudio_models $MODELS_DIR/$C/nerfacto/$TS_C/nerfstudio_models \
    --name my_scene \
    --model-names $A $B $C \
    --model-gt-trans I \
    --cam-info $DATASET_DIR/test/transforms.json \
    --render-views \
    --run-sfm \
    --compute-trans \
    --vis
```
Registration results are saved in `outputs/registration` by default. Please run `python -m nerfuser.registration -h` for more details.
Run the following command to query the NeRFs with the test poses from `test` and generate the blending results:
```sh
python -m nerfuser.blending \
    --model-dirs $MODELS_DIR/$A/nerfacto/$TS_A/nerfstudio_models $MODELS_DIR/$B/nerfacto/$TS_B/nerfstudio_models $MODELS_DIR/$C/nerfacto/$TS_C/nerfstudio_models \
    --name my_scene \
    --model-names $A $B $C \
    --cam-info $DATASET_DIR/test/transforms.json \
    --test-poses $DATASET_DIR/test/transforms.json \
    --test-frame world \
    --blend-views \
    --evaluate
```
Blending results are saved in `outputs/blending` by default. Please run `python -m nerfuser.blending -h` for more details.
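To get a quick visual preview of the blended views, one option is to stitch the rendered frames into a video with ffmpeg; the frame directory and pattern below are assumptions, so inspect `outputs/blending` for the actual layout first:

```sh
# hypothetical frame directory and pattern; adjust to the actual output layout
ffmpeg -framerate 10 -pattern_type glob -i 'outputs/blending/my_scene/*.png' -pix_fmt yuv420p blend_preview.mp4
```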
As an alternative to the above two steps, you can run the following command to perform NeRF registration and blending in one go:
```sh
python -m nerfuser.fuser \
    --model-dirs $MODELS_DIR/$A/nerfacto/$TS_A/nerfstudio_models $MODELS_DIR/$B/nerfacto/$TS_B/nerfstudio_models $MODELS_DIR/$C/nerfacto/$TS_C/nerfstudio_models \
    --name my_scene \
    --model-names $A $B $C \
    --model-gt-trans I \
    --cam-info $DATASET_DIR/test/transforms.json \
    --render-views \
    --run-sfm \
    --compute-trans \
    --test-poses $DATASET_DIR/test/transforms.json \
    --test-frame world \
    --blend-views \
    --eval-blend
```
Please run `python -m nerfuser.fuser -h` for more details.
If you find our work useful in your research, please consider citing the paper as follows:
```bibtex
@article{fang23,
  Author = {Jiading Fang and Shengjie Lin and Igor Vasiljevic and Vitor Guizilini and Rares Ambrus and Adrien Gaidon and Gregory Shakhnarovich and Matthew R. Walter},
  Title = {{NeRFuser}: {L}arge-Scale Scene Representation by {NeRF} Fusion},
  Journal = {arXiv:2305.13307},
  Year = {2023},
  Arxiv = {2305.13307}
}
```