This document summarizes how to install, run, and containerize the DPVO pipeline, with special attention to automotive front-camera footage and reproducible outputs.
Tested on Ubuntu 20.04/22.04 with CUDA 11.x/12.x and Python 3.10.
- Clone the repository

  ```bash
  git clone https://github.com/princeton-vl/DPVO.git --recursive
  cd DPVO
  ```

- Create the Conda environment

  ```bash
  conda env create -f environment.yml
  conda activate dpvo
  ```

- Install DPVO and its dependencies

  ```bash
  wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
  unzip eigen-3.4.0.zip -d thirdparty
  pip install .
  ```

- Download the pretrained model

  ```bash
  ./download_model.sh
  ```
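After installation, a quick sanity check confirms that PyTorch can see the GPU and that the pretrained weights landed in the repository. This is a minimal sketch; the exact checkpoint filename depends on what `download_model.sh` fetches:

```bash
# Confirm PyTorch sees a CUDA device (DPVO requires a GPU at runtime)
python -c "import torch; print(torch.cuda.is_available())"

# List downloaded checkpoint files in the repository root
ls -lh *.pth
```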
The repository ships with reference intrinsics for the Tesla front camera in calib/tesla.txt. Run DPVO on a front-camera video with:
```bash
python run.py \
  --imagedir=/absolute/path/to/front_camera.mp4 \
  --calib=calib/tesla.txt \
  --stride=5 \
  --plot \
  --save_camera_poses \
  --save_motion \
  --save_trajectory
```

All exports are written to `outputs/<run_name>`. Omit `--name` to auto-generate a timestamped directory.
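For example, to keep results from different videos separate, pass an explicit run name (here `my_run` is just an illustrative label):

```bash
# Name the run explicitly so exports land in outputs/my_run
python run.py --imagedir=/absolute/path/to/front_camera.mp4 \
  --calib=calib/tesla.txt --stride=5 --name my_run

# Inspect the exported files on the host
ls outputs/my_run
```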
Each calibration file in calib/ contains four whitespace-separated values per line: fx fy cx cy. To support a new camera:
- Estimate intrinsic parameters (e.g., with an OpenCV calibration routine).
- Add a new text file under `calib/` containing the four intrinsics (an example appears after this list).
- Invoke `run.py` with `--calib=calib/<your_file>.txt` and adjust `--stride` to suit your video frame rate and motion profile.
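For illustration, a hypothetical calibration file for a new camera might look like this; the filename and numbers below are placeholders, not measured intrinsics:

```bash
# Hypothetical intrinsics (fx fy cx cy); replace with values from your calibration
cat > calib/my_camera.txt <<'EOF'
910.0 910.0 640.0 480.0
EOF

python run.py --imagedir=/absolute/path/to/my_camera.mp4 \
  --calib=calib/my_camera.txt --stride=5
```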
For multi-camera rigs, run separate sessions per camera or extend the loader to combine intrinsics appropriately.
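For the per-camera approach, one simple option is a loop; this is a sketch that assumes one video and one calibration file per camera, with `front`, `left`, and `right` used purely as illustrative names:

```bash
# Run an independent DPVO session per camera; each run gets its own output folder
for cam in front left right; do
  python run.py \
    --imagedir=/absolute/path/to/${cam}.mp4 \
    --calib=calib/${cam}.txt \
    --stride=5 \
    --name ${cam}_run
done
```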
The docker/ directory provides an Ubuntu 22.04 + CUDA 12.1 runtime that bundles DPVO, Python dependencies, pretrained weights, and calibration files.
From the repository root:

```bash
cd docker
docker build -t dpvo-runtime -f Dockerfile ..
```

The build context (`..`) ensures models, calibration files, and helper scripts are copied into the image.
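A quick way to confirm the image built and can reach the GPU (assuming the NVIDIA Container Toolkit is installed on the host) is:

```bash
# Verify the image exists locally
docker images dpvo-runtime

# Verify GPU passthrough works inside the container
docker run --rm --gpus all dpvo-runtime nvidia-smi
```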
For users less familiar with Docker, the repository root exposes run.sh:
```bash
./run.sh /absolute/path/to/front_camera.mp4 [additional run.py args]
```

The script:

- Mounts the input video read-only in the container (`/data/input.mp4`).
- Binds the host `outputs/` directory into `/app/outputs`, so results remain immediately accessible on the host filesystem.
- Executes `python run.py` with the Tesla calibration defaults (`--calib=calib/tesla.txt --stride=5 --plot --save_camera_poses --save_motion --save_trajectory`).
- Accepts optional `run.py` flags appended to the command (e.g., `--name my_run`).
- Supports overriding defaults via environment variables:
  - `DPVO_IMAGE_NAME` – Docker image tag to use (defaults to `dpvo-runtime`).
  - `DPVO_OUTPUT_DIR` – host folder mapped to `/app/outputs` inside the container.
Treat run.sh as a push-button option: place your video on disk, run the script, and inspect the generated subfolder under outputs/ when it completes.
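For example, to collect results in a custom host directory instead of the default `outputs/` (the path below is illustrative):

```bash
# Write run outputs to a custom results folder on the host
export DPVO_OUTPUT_DIR=/absolute/path/to/results
./run.sh /absolute/path/to/front_camera.mp4 --name my_run
```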
Pull the published image from Docker Hub:
```bash
docker pull pdxmusic/dpvo-runtime:latest
```

Tag aliases are available; for example, `docker pull pdxmusic/dpvo-runtime:v0.1.0`. After pulling, reuse `run.sh` by exporting `DPVO_IMAGE_NAME=pdxmusic/dpvo-runtime:latest`, or invoke `docker run` manually.
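For example, to drive the published image through `run.sh` (the video path is illustrative):

```bash
# Point run.sh at the Docker Hub image instead of a locally built one
export DPVO_IMAGE_NAME=pdxmusic/dpvo-runtime:latest
./run.sh /absolute/path/to/front_camera.mp4
```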
- Provision a vast.ai instance with NVIDIA GPUs and Docker runtime enabled.
- Pull the image onto the host:

  ```bash
  docker pull pdxmusic/dpvo-runtime:latest
  ```
- Upload or mount your video on the remote host.
- Launch DPVO inside the container:

  ```bash
  docker run --rm --gpus all --ipc=host \
    -v /abs/path/to/video.mp4:/data/input.mp4:ro \
    -v /abs/path/to/output_dir:/app/outputs \
    pdxmusic/dpvo-runtime:latest \
    python run.py --imagedir=/data/input.mp4 --calib=calib/tesla.txt --stride=5 \
      --plot --save_camera_poses --save_motion --save_trajectory
  ```
Replace the volume paths with directories available on your vast.ai instance. Saved outputs will appear directly under the mounted host directory.
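One way to stage the input and output paths referenced above is to copy the footage over SSH and pre-create the output directory; the hostname, port, and paths below are placeholders:

```bash
# From your workstation: copy the video to the remote instance (placeholder host/port)
scp -P 2222 front_camera.mp4 root@<instance-ip>:/workspace/front_camera.mp4

# On the instance: create the directory that will be mounted at /app/outputs
mkdir -p /workspace/dpvo_outputs
```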
- Host side: `outputs/` (or `DPVO_OUTPUT_DIR`) accumulates subfolders per run.
- Container side: DPVO writes into `/app/outputs/<run_name>`.
- Suitable for shared storage (NFS, cloud buckets mounted via FUSE) so downstream pipelines can ingest results immediately.
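As a sketch, pointing `DPVO_OUTPUT_DIR` at a shared mount lets downstream jobs pick up results as soon as a run finishes; the NFS path and run name below are hypothetical:

```bash
# Write results to an NFS-backed directory shared with downstream pipelines
export DPVO_OUTPUT_DIR=/mnt/nfs/dpvo_results
./run.sh /absolute/path/to/front_camera.mp4 --name fleet_clip_001
```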
- Visualization: Real-time visualization requires DPViewer. Install natively (`pip install ./DPViewer`) or extend the Docker image if you need viewer support in containers.
- Loop Closure: Enable SLAM back-ends with `--opts LOOP_CLOSURE True` (and optionally `--opts CLASSIC_LOOP_CLOSURE True` if classical dependencies are installed); see the example after this list.
- Data Refresh: The container removes downloaded archives after extraction. Rebuild the image to pick up new checkpoints or code updates.
- GPU Requirements: Ensure recent NVIDIA drivers and the NVIDIA Container Toolkit are installed before running GPU-enabled containers.
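For instance, a run with the loop-closure back-end enabled might look like this; it is a sketch in which every flag other than `--opts LOOP_CLOSURE True` matches the defaults used elsewhere in this document:

```bash
# Enable the loop-closure back-end on top of the usual front-camera defaults
python run.py \
  --imagedir=/absolute/path/to/front_camera.mp4 \
  --calib=calib/tesla.txt \
  --stride=5 \
  --save_trajectory \
  --opts LOOP_CLOSURE True
```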
DPVO remains an active research project. Contributions and calibration files for additional vehicle platforms are welcome.