IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis
Weicai Ye\*, Shuo Chen\* (\* co-first authors), Chong Bao, Hujun Bao, Marc Pollefeys, Zhaopeng Cui, Guofeng Zhang
ICCV 2023
For faithful reproduction of our results, Ubuntu 20.04 is recommended. The models have been tested with Python 3.7, PyTorch 1.6.0, and CUDA 10.1; higher versions should also work. The main Python dependencies are listed below:
- Python >=3.7
- torch>=1.6.0 (PyTorch 1.6 integrates the `searchsorted` API; older versions need the third-party SearchSorted implementation instead; see the quick check after this list)
- cudatoolkit>=10.1
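To verify that your PyTorch build already ships the native `searchsorted` API, you can run a quick sanity check (a minimal sketch; the message strings are our own wording):

```bash
python -c "import torch; assert hasattr(torch, 'searchsorted'), 'torch < 1.6: install the third-party SearchSorted package'; print('native searchsorted available')"
```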
The following packages are used for 3D mesh reconstruction (a pinned install command is sketched after the list):
- trimesh==3.9.9
- open3d==0.12.0
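If these two packages are not already pulled in by requirements.txt, they can be installed directly at the pinned versions listed above (a minimal sketch):

```bash
pip install trimesh==3.9.9 open3d==0.12.0
```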
With Anaconda, you can create a virtual environment and install the dependencies:
```bash
conda create -n intrinsicnerf python=3.7
conda activate intrinsicnerf
pip install -r requirements.txt
```
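A quick way to confirm the environment is set up correctly (a minimal sketch; it only checks that PyTorch imports and sees the GPU):

```bash
python -c "import torch; print(torch.__version__, 'CUDA available:', torch.cuda.is_available())"
```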
We mainly use the Replica and Blender object datasets for our experiments, training a new IntrinsicNeRF model on each 3D scene. Other similar indoor datasets with colour images, semantic labels, and poses can also be used.
We use the pre-rendered Replica data provided by Semantic-NeRF. Please download the dataset and set `data_dir` in configs/*.yaml accordingly.
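For example, the config field can be updated in place (a minimal sketch; the dataset path is a placeholder and the exact YAML layout may differ, so double-check the config file before relying on this):

```bash
# Point data_dir at your local Replica download (the path below is hypothetical).
sed -i 's|^data_dir:.*|data_dir: /path/to/Replica/room_0|' SSR/configs/SSR_room0_config.yaml
```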
After cloning the repository, you can run IntrinsicNeRF from its root directory.
For standard IntrinsicNeRF training with full dense semantic supervision, simply run the following command with a config file that specifies the data directory and hyper-parameters:
```bash
python3 train_SSR_main.py --config_file /SSR/configs/SSR_room0_config.yaml
```
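Training takes a while, so it can be convenient to run it in the background and keep a log, mirroring the object-level example below (the log path here is our own choice):

```bash
mkdir -p logs/room0
CUDA_VISIBLE_DEVICES=0 nohup python3 train_SSR_main.py \
    --config_file /SSR/configs/SSR_room0_config.yaml > logs/room0/out_message 2>&1 &
```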
```bash
cd object_level
```
See its README.md for how to download the dataset and set the data directory in configs/*.txt. Then choose an object and run the script, for example:
```bash
mkdir logs/chair
touch logs/chair/out_message
CUDA_VISIBLE_DEVICES=0 nohup python run_nerf.py --config configs/chair.txt --exp chair --datadir YourData_Dir/chair > logs/chair/out_message 2>&1 &
```
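Since the job runs in the background, you can follow its progress by tailing the log file created above:

```bash
tail -f logs/chair/out_message
```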
After training IntrinsicNeRF, the pretrained model, the clustering information, and the decomposition results are saved, so you can edit any Blender object or Replica scene you like. Please download and unzip Video_Data.tar.gz for editing. Take room0 as an example:
```bash
cd Video_Data
python gui.py --img_dir in/replica/room_0/step_200000/ --cluster_config in/replica/room_0/cluster/ --rate 20 --head_name room_0 --replica
```
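For a Blender object, the invocation should look similar but without the `--replica` flag (a hypothetical sketch; the `in/blender/chair/` paths are assumptions, so adjust them to wherever your unpacked data lives):

```bash
python gui.py --img_dir in/blender/chair/step_200000/ \
    --cluster_config in/blender/chair/cluster/ --rate 20 --head_name chair
```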
This launches the interactive editing GUI.
For more demos and qualitative results, please check our project page.
Thanks to Semantic-NeRF and nerf-pytorch for providing nice and inspiring implementations of NeRF.
If you find this code useful for your research, please cite our paper using the following BibTeX entry:
```bibtex
@inproceedings{Ye2023IntrinsicNeRF,
  title={{IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis}},
  author={Ye, Weicai and Chen, Shuo and Bao, Chong and Bao, Hujun and Pollefeys, Marc and Cui, Zhaopeng and Zhang, Guofeng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}
```