Template-free Articulated Neural Point Clouds for Reposable View Synthesis

NeurIPS 2023

*Reposable 3D reconstruction based on images and masks only, without making use of any priors in the form of object-specific skeletons or templates.*

Abstract:

Dynamic Neural Radiance Fields (NeRFs) achieve remarkable visual quality when synthesizing novel views of time-evolving 3D scenes. However, the common reliance on backward deformation fields makes reanimation of the captured object poses challenging. Moreover, state-of-the-art dynamic models are often limited by low visual fidelity, long reconstruction times, or specificity to narrow application domains. In this paper, we present a novel method utilizing a point-based representation and Linear Blend Skinning (LBS) to jointly learn a Dynamic NeRF and an associated skeletal model from even sparse multi-view video. Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses while significantly reducing the necessary learning time when compared to existing work. We demonstrate the versatility of our representation on a variety of articulated objects from common datasets and obtain reposable 3D reconstructions without the need for object-specific skeletal templates.

Lukas Uzolas, Elmar Eisemann, Petr Kellnhofer
Delft University of Technology

Run

Environment

Please first install PyTorch and torch_scatter manually, as their builds are machine-dependent. We used Python 3.9.12. CUDA has to be installed on your machine for the custom CUDA kernels to work; see TiNeuVox and DirectVoxGO. Then:

pip install -r requirements.txt
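After installation, a quick sanity check can confirm that the machine-dependent pieces fit together. This snippet is not part of the repository, just a minimal sketch:

import torch
import torch_scatter

# The custom CUDA kernels from TiNeuVox/DirectVoxGO need a working CUDA setup.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# torch_scatter must be built against the installed PyTorch/CUDA combination;
# a simple scatter-sum exercises the compiled extension.
src = torch.ones(4)
index = torch.tensor([0, 0, 1, 1])
print("torch_scatter OK:", torch_scatter.scatter_add(src, index))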

Dependencies:

  • PyTorch
  • numpy
  • torch_scatter
  • scipy
  • lpips
  • tqdm
  • mmcv
  • opencv-python
  • imageio
  • imageio-ffmpeg
  • Ninja
  • einops
  • torch_efficient_distloss
  • pykeops
  • tensorboardX
  • seaborn
  • scikit-image
  • torchvision
  • connected-components-3d
  • pandas
  • roma

If any unexpected problems arise while setting up the repository, you can further consult TiNeuVox and DirectVoxGO, as the backbone code is based on their repositories.

Data

Download the datasets (dnerf, wim, zju) and arrange them as follows.

├── data
│   ├── dnerf
│   │   ├── jumpingjacks
│   │   ├── ...
│   ├── wim
│   │   ├── ...
│   ├── zju
│   │   ├── ...

Note that, for the ZJU data, you have to follow the pre-processing steps defined in wim to obtain the pickle files.

Kinematic Model Initialization Only

If you are only interested in extracting the initial kinematic model (a skeleton including skinning weights based on point-to-bone distance), check the skeletonizer.py script. You can run it with the sample data provided here. The function generally expects a 3D density volume; for specifics, please check the script. A sketch of the distance-based weighting idea is shown below.
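For intuition only, here is a minimal sketch of how skinning weights can be derived from point-to-bone distances. All names are hypothetical, and this is not the repository's actual implementation, which operates on a 3D density volume:

import torch

def point_to_segment_distance(points, a, b):
    # points: (N, 3); a, b: (3,) endpoints of one bone segment.
    ab = b - a
    t = ((points - a) @ ab) / ab.dot(ab)
    t = t.clamp(0.0, 1.0)                       # project onto the finite segment
    closest = a + t[:, None] * ab
    return (points - closest).norm(dim=-1)      # (N,)

def skinning_weights(points, bones, temperature=0.05):
    # bones: list of (a, b) endpoint pairs; returns (N, B) weights per point.
    d = torch.stack([point_to_segment_distance(points, a, b) for a, b in bones], dim=-1)
    # Softmax over negative distances: closer bones receive larger weights.
    return torch.softmax(-d / temperature, dim=-1)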

Train and Render

Train backbone & PCD representation: python run.py --config configs/nerf/jumpingjacks.py --i_print 1000 --render_video --render_pcd

The point cloud and skeleton .pcd files are saved in the corresponding pcd folder of the experiment; one way to inspect them is shown below.
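The saved .pcd files can be opened with any point-cloud viewer, for example Open3D (not a repository dependency; the path below is a placeholder):

import open3d as o3d

# Replace the path with an actual .pcd file from your experiment's pcd folder.
pcd = o3d.io.read_point_cloud("path/to/experiment/pcd/file.pcd")
o3d.visualization.draw_geometries([pcd])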

Render backbone (TiNeuVox): python run.py --config configs/nerf/jumpingjacks.py --i_print 1000 --render_video --render_only

Render PCD representation: python run.py --config configs/nerf/jumpingjacks.py --i_print 1000 --render_video --render_only --render_pcd

Visualise canonical: python run.py --config configs/nerf/jumpingjacks.py --i_print 1000 --visualise_canonical --render_pcd --render_only

Prune bones, merge weights and visualise canonical: python run.py --config configs/nerf/jumpingjacks.py --i_print 1000 --visualise_canonical --render_pcd --render_only --degree_threshold 30

Repose trained kinematic point cloud: python run.py --config configs/nerf/jumpingjacks.py --i_print 1000 --repose_pcd --render_only --render_pcd --degree_threshold 30

This will generate a reposed video sequence with random bone rotations. Check the code to see how to set the bone rotations manually; a simplified sketch of the underlying LBS repose step follows below.
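For illustration, here is a minimal Linear Blend Skinning repose sketch using roma (from the dependency list) for rotation conversions. The names are hypothetical, bone rest transforms are omitted for brevity, and this is not the repository's API:

import torch
import roma

def lbs_repose(points, weights, rotvecs, translations):
    # points: (N, 3) canonical point cloud; weights: (N, B) skinning weights.
    # rotvecs: (B, 3) axis-angle bone rotations; translations: (B, 3).
    R = roma.rotvec_to_rotmat(rotvecs)                                # (B, 3, 3)
    per_bone = torch.einsum('bij,nj->nbi', R, points) + translations  # (N, B, 3)
    # Blend the per-bone rigid transforms with the skinning weights.
    return (weights[..., None] * per_bone).sum(dim=1)                 # (N, 3)

# Example: small random rotations for B bones, as in the generated video.
N, B = 1024, 12
points = torch.randn(N, 3)
weights = torch.softmax(torch.randn(N, B), dim=-1)
reposed = lbs_repose(points, weights, 0.3 * torch.randn(B, 3), torch.zeros(B, 3))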

ToDos:

  • Fix known errors (Will do in the next couple of days, 21.12.23)
  • Add stand-alone instructions for initial kinematic model extraction based on point cloud

Acknowledgements

This repository is partially based on TiNeuVox, DirectVoxGO, and D-NeRF. Thanks for their work.

Citation

@article{uzolas2024template,
  title={Template-free articulated neural point clouds for reposable view synthesis},
  author={Uzolas, Lukas and Eisemann, Elmar and Kellnhofer, Petr},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}