End-to-end pipeline: RGB → depth → PyCuVSLAM → SFM → nvblox → 3D synthesis and visualization
Demo:
Demo-Image-to-Mesh.mp4
References:
- nvblox: https://nvidia-isaac.github.io/nvblox/index.html
- Neural reconstruction stereo (NuRec): https://docs.nvidia.com/nurec/robotics/neural_reconstruction_stereo.html
- Depth generation (Depth Anything TRT): https://github.com/ika-rwth-aachen/ros2-depth-anything-v3-trt
Use two conda environments:
- pycuvslam: depth + PyCuVSLAM + cuSFM + nvblox pipeline (this repo's current one-shot flow).
- 3dgrut: NuRec Step 5 neural reconstruction and USD/USDZ export.
Why two envs:
- cuvslam is pinned to Python 3.10.*.
- 3dgrut uses a different Python/Torch/CUDA stack (its installer creates a dedicated env).
- Mixing both stacks in one env is likely to cause dependency conflicts.
conda create -n pycuvslam python=3.10 -y
conda activate pycuvslam
python -m pip install --upgrade pip
python -m pip install ./third_party/PyCuVSLAM/bin/x86_64
python -m pip install ./third_party/pyCuSFM
python -m pip install "torch==<cu12x build>" -f https://download.pytorch.org/whl/torch_stable.html
python -m pip install <nvblox_torch_wheel> # matching CUDA build
python -m pip install -r requirements.txt rerun-sdk open3d

Replace <cu12x build> and <nvblox_torch_wheel> with CUDA-matched builds.
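A quick sanity check after the install. The torch line only verifies that the GPU stack is visible; the cuvslam module name is an assumption based on the pin mentioned above, so adjust it if the wheel exposes a different name:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import cuvslam"   # module name assumed from the 'cuvslam' pin above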
git clone --recursive https://github.com/nv-tlabs/3dgrut.git third_party/3dgrut
cd third_party/3dgrut
./install_env.sh 3dgrut
conda activate 3dgrut

install_env.sh supports CUDA 11.8.0 and 12.8.1 via CUDA_VERSION.
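For example, to pin the CUDA toolchain (assuming CUDA_VERSION is read as an environment variable, as the note above suggests):

CUDA_VERSION=12.8.1 ./install_env.sh 3dgrut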
- Run ReconS scripts (run_full_pipeline.py, pipelines/run_pycuvslam_*.py, pipelines/run_nvblox*.py) in pycuvslam.
- Run 3dgrut training/export in 3dgrut.
- NVIDIA driver: PyCuVSLAM requires a driver exposing CUDA >= 12.6 (R560+; CUDA 13 drivers are OK).
- On WSL, keep
  export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:/usr/lib/wsl/lib:$LD_LIBRARY_PATH
  so cuvslam resolves the correct CUDA/libpython libraries (a hook to persist this follows the list).
- If conda is missing after install, run ~/miniconda3/bin/conda init bash, then source ~/.bashrc (or reopen a shell).
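To make the WSL export stick across sessions, one option is a conda activation hook (a standard conda mechanism; the file name here is arbitrary):

# Run inside the activated pycuvslam env; executes on every `conda activate pycuvslam`.
mkdir -p "$CONDA_PREFIX/etc/conda/activate.d"
cat > "$CONDA_PREFIX/etc/conda/activate.d/wsl_ld_path.sh" <<'EOF'
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:/usr/lib/wsl/lib:$LD_LIBRARY_PATH
EOF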
data/sample_xxx/
iphone_mono/ # RGB frames (0000001.png ...)
iphone_mono_depth/ # Depth frames aligned to RGB (uint16, mm)
iphone_calibration.yaml # Pinhole K
timestamps.txt # frame,timestamp_ns
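A minimal sketch of reading these inputs back, assuming OpenCV is available in the env; depth PNGs store uint16 millimeters and timestamps.txt follows the frame,timestamp_ns column layout noted above:

import csv
import cv2  # assumption: OpenCV installed alongside the pipeline deps

# Depth frames are uint16 millimeters; convert to float32 meters.
depth_mm = cv2.imread("data/sample_xxx/iphone_mono_depth/0000001.png", cv2.IMREAD_UNCHANGED)
depth_m = depth_mm.astype("float32") / 1000.0

# timestamps.txt: one "frame,timestamp_ns" row per RGB frame.
with open("data/sample_xxx/timestamps.txt") as f:
    stamps = {row[0]: int(row[1]) for row in csv.reader(f) if row[1].isdigit()}  # skips a header row if present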
Outputs land alongside the sample:
- iphone_mono_depth/ - generated depth maps
- pycuvslam_poses.tum, pycuvslam_poses_slam.tum - trajectories
- nvblox_out/ - mesh from all frames
- cusfm_output/ - sparse reconstruction + keyframes/
- nvblox_sfm_out/ - refined mesh from SFM keyframes
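The .tum files presumably follow the standard TUM trajectory format (one "timestamp tx ty tz qx qy qz qw" line per pose); a small loader sketch:

import numpy as np

def load_tum(path):
    """Load a TUM trajectory: timestamps, positions (xyz), quaternions (xyzw)."""
    data = np.loadtxt(path, comments="#")
    return data[:, 0], data[:, 1:4], data[:, 4:8]

ts, xyz, quat = load_tum("data/sample_xxx/pycuvslam_poses_slam.tum")
print(f"{len(ts)} poses, path length ~{np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum():.2f} m")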
run_full_pipeline.py executes six steps end-to-end (a small driver sketch follows the list):
- Depth - Generate depth maps via Depth Anything TensorRT
- PyCuVSLAM - Visual odometry + SLAM poses
- Dataset prep - Build nvblox artifacts (associations, trajectory CSV, intrinsics)
- nvblox - Dense mesh from all frames
- cuSFM - Sparse reconstruction with keyframe selection
- nvblox-sfm - Refined mesh from SFM keyframes (cleaner for 3dgrut)
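For orientation, a minimal sketch of driving per-step scripts from Python. The script names come from this README and pipelines/README.md, but the --dataset flag on the per-step scripts is an assumption; check pipelines/README.md for the real options:

import subprocess

DATASET = "data/sample_xxx"
# Hypothetical two-step excerpt; run_full_pipeline.py already chains all six steps.
for script in ["pipelines/run_pycuvslam_rgbd.py", "pipelines/run_nvblox.py"]:
    subprocess.run(["python3", script, "--dataset", DATASET], check=True)  # abort on first failure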
# Using dataset folder (recommended)
python3 run_full_pipeline.py --dataset data/sample_20260208_i1
# Or explicit paths
python3 run_full_pipeline.py \
--rgb-dir data/sample_xxx/iphone_mono \
--calibration data/sample_xxx/iphone_calibration.yaml \
--timestamps data/sample_xxx/timestamps.txt

Outputs (in dataset folder):
- iphone_mono_depth/ - depth maps
- pycuvslam_poses.tum, pycuvslam_poses_slam.tum - trajectories
- nvblox_out/ - mesh from all frames
- cusfm_output/ - sparse reconstruction + keyframes
- nvblox_sfm_out/nvblox_mesh.ply - refined mesh for 3dgrut
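To eyeball the refined mesh before handing it to 3dgrut, Open3D (installed in the pycuvslam env above) can load and render the PLY:

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("data/sample_xxx/nvblox_sfm_out/nvblox_mesh.ply")
mesh.compute_vertex_normals()  # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])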
Options:
- --skip-cusfm / --skip-nvblox-sfm - skip SFM steps
- --disable-slam - use odometry only
- --nvblox-ui / --nvblox-sfm-ui - show visualization
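For example, a faster odometry-only run that skips the SFM refinement (flags combined from the list above):

python3 run_full_pipeline.py --dataset data/sample_xxx --disable-slam --skip-cusfm --skip-nvblox-sfm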
See pipelines/README.md for per-step commands (run_pycuvslam_rgbd.py, run_nvblox.py, stereo variant, etc.).