MetaNeuS uses meta-learning to learn a template shape of an object category from a database of multi-view images. This category template is encoded in the weights of the network as a signed distance function (SDF). Starting from this meta-learned template, we can quickly reconstruct a novel object at test time using a small number of views.
This project builds on the NeurIPS'21 paper "NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction". NeuS is a scene-specific method that requires a large number of multi-view inputs. This work extends NeuS with meta-learning so that it handles unseen objects at test time, and also enables sparse-view 3D reconstruction.
MetaNeuS.mp4
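At a high level, meta-training alternates two steps: adapt a copy of the current meta network to one training scene for a few gradient steps, then move the meta weights toward the adapted weights. Below is a minimal Reptile-style sketch of that loop. The network, loss, and data here are toy stand-ins, not this repo's actual code: the real objective is the NeuS volume-rendering loss, and the repo's meta-learning algorithm and hyperparameters may differ.

```python
import copy
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    """Toy MLP mapping 3D points to signed distance values."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def scene_loss(model, points, targets):
    # Stand-in for the NeuS volume-rendering photometric loss.
    return ((model(points) - targets) ** 2).mean()

def inner_loop(meta_model, points, targets, steps=32, lr=1e-4):
    """Adapt a copy of the meta network to a single training scene."""
    model = copy.deepcopy(meta_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        scene_loss(model, points, targets).backward()
        opt.step()
    return model

meta_model = SDFNetwork()
meta_lr = 0.1
for _ in range(100):  # outer meta-iterations
    # Synthetic stand-in for one sampled ShapeNet scene.
    points = torch.randn(1024, 3)
    targets = points.norm(dim=-1, keepdim=True) - 0.5  # exact SDF of a sphere
    adapted = inner_loop(meta_model, points, targets)
    # Reptile outer step: move meta weights toward the adapted weights.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), adapted.parameters()):
            p_meta.add_(meta_lr * (p_task.detach() - p_meta))
```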
Initializing with meta-learned weights also enables other applications:
NeuS already disentangles the geometry and appearance of an object into two separate networks. However, when the optimization starts from a standard initialization, weight-space interpolation doesn't produce meaningful results. When it starts from a meta-learned initialization, we can interpolate the object geometry by interpolating the weights of the SDF network (while keeping the appearance constant).
Geometry-Interpolation.mp4
Similarly, with meta-initialized networks, we can interpolate the object appearance by interpolating the weights of the color network while keeping the geometry fixed.
Appearance-Interpolation.mp4
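Concretely, both effects come from plain linear interpolation in weight space between two adapted networks. Here is a minimal sketch, assuming (hypothetically) that each test object's checkpoint stores separate `sdf` and `color` state dicts; the actual checkpoint layout in this repo may differ.

```python
import torch

def lerp_state_dict(sd_a, sd_b, alpha):
    """Linearly interpolate two state dicts with identical keys and shapes."""
    return {k: torch.lerp(sd_a[k], sd_b[k], alpha) for k in sd_a}

ckpt_a = torch.load("results/object_a.pth")  # hypothetical weight files
ckpt_b = torch.load("results/object_b.pth")

for alpha in torch.linspace(0.0, 1.0, steps=8).tolist():
    mixed_sdf = lerp_state_dict(ckpt_a["sdf"], ckpt_b["sdf"], alpha)
    # Load `mixed_sdf` into the SDF network, keep the color network fixed
    # (e.g. ckpt_a["color"]), and render one frame of the geometry
    # interpolation. Swap the roles of the two networks to interpolate
    # appearance instead.
```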
- Python 3.8
- PyTorch 1.9
- NumPy, PyMCubes, imageio, imageio-ffmpeg
Download the dataset from this drive link. This is a modified version of the learnit dataset; I have normalized the scenes so that the objects lie inside a unit sphere, since NeuS models the object within the unit sphere.
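The normalization amounts to translating each scene to the object's center and scaling it so the object fits in the unit sphere, applying the same transform to the camera poses. A sketch of the idea follows; the input format here is hypothetical, not the dataset's actual layout.

```python
import numpy as np

def normalize_scene(bbox_min, bbox_max, cam_to_world):
    """Translate/scale a scene so the object fits inside the unit sphere.

    bbox_min, bbox_max: (3,) corners of the object's bounding box.
    cam_to_world: (N, 4, 4) camera-to-world pose matrices.
    """
    center = 0.5 * (bbox_min + bbox_max)
    radius = 0.5 * np.linalg.norm(bbox_max - bbox_min)  # half the box diagonal
    scale = 1.0 / radius
    poses = cam_to_world.copy()
    # Move and scale the camera origins; rotations are unchanged.
    poses[:, :3, 3] = (poses[:, :3, 3] - center) * scale
    return poses, scale
```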
Train the NeuS model on a particular ShapeNet class with meta-learning:
```
python train.py --config ./configs/$class.json
```
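For example, assuming the configs are named after the learnit ShapeNet classes (the exact file names may differ):

```
python train.py --config ./configs/chairs.json
```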
Optimize the meta-trained model on sparse views of unseen objects and report test results on held-out views:
```
python test.py --config ./configs/$class.json --meta-weight META_WEIGHT_PATH
```
It also saves the scene weights, the extracted 3D mesh, and a 360-degree video for each test object in the `./results` directory.
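For example (config and weight paths hypothetical):

```
python test.py --config ./configs/chairs.json --meta-weight ./weights/chairs_meta.pth
```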
Interpolate the geometry or appearance of two test objects:
```
python interpolate.py --config ./configs/$class.json --first-weight FIRST_PATH --second-weight SECOND_PATH --property PROPERTY
```
It will generate an interpolation video in the `./results` directory. Here `FIRST_PATH` and `SECOND_PATH` are the paths to the weight files of any two test objects, and `PROPERTY` is the property to interpolate: either `geometry` or `appearance`.
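For example (weight file names hypothetical):

```
python interpolate.py --config ./configs/chairs.json --first-weight ./results/obj1.pth --second-weight ./results/obj2.pth --property geometry
```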
I have used the following repositories as a reference for this implementation:
Thanks to the authors for releasing their code!