This repository is a modified version of VINS-Fusion (see the original README below). To simplify installation, the system is wrapped up with Docker: we provide scripts to create a Docker image, build the system, and run it in a Docker container.
- Docker
- ROS
- pose_listener (if you use run_rosario_sequence.sh, see below)
Run:
./run.sh -b
This command creates a Docker image, installs all the dependencies and builds the system. The resulting image contains a version of the system ready to be run.
If you are not interested in making changes in the source code, you should run the system in VIS mode. Run:
./run.sh -v
The system is launched in a Docker container based on the previously built image. By default, this command executes a launch file configured to run the Rosario dataset. If you want to run your own dataset, write a launch file and place it in vins_estimator/launch/. Configuration files must be placed in config/. Then run the script with the option -l <LAUNCH_FILE_NAME>. For example, if you are testing EuRoC, write euroc_dataset.launch, move it into vins_estimator/launch/ and type:
./run.sh -v -l euroc_dataset.launch
Editing launch/configuration files on the host is possible because these folders are mounted into the Docker container. It is not necessary to access the container through a bash shell to modify these files.
See below for information about input data and visualization.
DEV mode allows developers to make changes in the source code, recompile the system and run it with the modifications. To do this, the whole repository is mounted in a container. Run:
./run.sh -d
This opens a bash shell in a Docker container. You can edit source files on the host and then use this shell to recompile the system. When compilation finishes, you can run the method using roslaunch.
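For example, a typical edit-build-run cycle inside the DEV shell could look like the sketch below; the build command, workspace layout and launch file name are assumptions, so check the repository and LAUNCH_FILE in run.sh for the actual ones.
# inside the shell opened by ./run.sh -d (workspace layout is an assumption)
catkin_make                                   # or catkin build, depending on how the workspace was set up
source devel/setup.bash
roslaunch vins_estimator rosario.launch       # hypothetical package/launch file names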
See below for information about input data and visualization.
At this point, the system is waiting for input data. You can either run rosbag play yourself or use run_rosario_sequence.sh.
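If you choose the former, simply play your bag in another terminal:
rosbag play <ROSBAG_FILE>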
If you choose the latter, open a second terminal and run:
./run_rosario_sequence.sh -o <OUTPUT_TRAJECTORY_FILE> <ROSBAG_FILE>
In contrast to what run.sh does, run_rosario_sequence.sh executes commands on the host (you can modify it to use a Docker container). ROSBAG_FILE is played using rosbag. Also, make sure you have cloned and built pose_listener in your catkin workspace. The default workspace path is ${HOME}/catkin_ws; set CATKIN_WS_DIR if the workspace is somewhere else (e.g. export CATKIN_WS_DIR=$HOME/foo_catkin_ws). pose_listener saves the estimated trajectory in <OUTPUT_TRAJECTORY_FILE> (use an absolute path). You can edit run_rosario_sequence.sh if you prefer to save the trajectory with your own method. Additionally, run_rosario_sequence.sh launches rviz to display visual information while the system runs.
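As a reference, here is a minimal sketch of preparing pose_listener in your workspace; the repository URL below is a placeholder and catkin_make is assumed as the build tool.
export CATKIN_WS_DIR=$HOME/catkin_ws            # adjust if your workspace is elsewhere
cd $CATKIN_WS_DIR/src
git clone <pose_listener_REPOSITORY_URL>        # placeholder, clone the actual pose_listener repository
cd $CATKIN_WS_DIR
catkin_make
source devel/setup.bash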
Alternatively, if you are not interested in development but in testing or visualization, instead of running run.sh and run_rosario_sequence.sh in two different terminals, you can just run:
./run_rosario_sequence.sh -r -o <OUTPUT_TRAJECTORY_FILE> <ROSBAG_FILE>
This launches a Docker container and executes the default launch file (see LAUNCH_FILE in run.sh). After that, the bag file is played and rviz and pose_listener are launched. Add -b if you want to turn off the visualization.
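For example, to run the same pipeline without visualization (assuming -b can be combined with the other flags):
./run_rosario_sequence.sh -r -b -o <OUTPUT_TRAJECTORY_FILE> <ROSBAG_FILE>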
VINS-Fusion is an optimization-based multi-sensor state estimator, which achieves accurate self-localization for autonomous applications (drones, cars, and AR/VR). VINS-Fusion is an extension of VINS-Mono, which supports multiple visual-inertial sensor types (mono camera + IMU, stereo cameras + IMU, even stereo cameras only). We also show a toy example of fusing VINS with GPS. Features:
- multiple sensors support (stereo cameras / mono camera+IMU / stereo cameras+IMU)
- online spatial calibration (transformation between camera and IMU)
- online temporal calibration (time offset between camera and IMU)
- visual loop closure
We are the top-ranked open-source stereo algorithm on the KITTI Odometry Benchmark (12 Jan 2019).
Authors: Tong Qin, Shaozu Cao, Jie Pan, Peiliang Li, and Shaojie Shen from the Aerial Robotics Group, HKUST
Videos:
Related Papers (the papers are not exactly the same as the code):
- Online Temporal Calibration for Monocular Visual-Inertial Systems, Tong Qin, Shaojie Shen, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), best student paper award. pdf
- VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator, Tong Qin, Peiliang Li, Shaojie Shen, IEEE Transactions on Robotics. pdf
If you use VINS-Fusion for your academic research, please cite our related papers. bib
Ubuntu 64-bit 16.04 or 18.04. ROS Kinetic or Melodic. ROS Installation
Follow Ceres Installation.
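For reference, here is a condensed sketch of a typical Ceres Solver installation on Ubuntu 16.04/18.04; the dependency list and version are assumptions, so follow the official Ceres installation guide if anything differs.
sudo apt-get install cmake libgoogle-glog-dev libgflags-dev libatlas-base-dev libeigen3-dev libsuitesparse-dev
git clone https://github.com/ceres-solver/ceres-solver.git
cd ceres-solver && mkdir build && cd build
cmake .. && make -j4
sudo make install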
Clone the repository and catkin_make:
cd ~/catkin_ws/src
git clone https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash
(If this step fails, try another computer with a clean system or reinstall Ubuntu and ROS.)
Download the EuRoC MAV Dataset to YOUR_DATASET_FOLDER. Taking MH_01 as an example, you can run VINS-Fusion with three sensor types (monocular camera + IMU, stereo cameras + IMU, and stereo cameras only). Open four terminals to run vins odometry, visual loop closure (optional), rviz, and the bag file, respectively. The green path is the VIO odometry; the red path is the odometry under visual loop closure. The three command sets below correspond to monocular camera + IMU, stereo cameras + IMU, and stereo cameras only, respectively.
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_config.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_config.yaml
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
Download the KITTI Odometry dataset to YOUR_DATASET_FOLDER. Taking sequence 00 as an example, open two terminals to run vins and rviz, respectively. (We evaluated odometry on the KITTI benchmark without the loop closure function.)
roslaunch vins vins_rviz.launch
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml
rosrun vins kitti_odom_test ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml YOUR_DATASET_FOLDER/sequences/00/
Download the KITTI raw dataset to YOUR_DATASET_FOLDER. Taking 2011_10_03_drive_0027_synced as an example, open three terminals to run vins, global fusion, and rviz, respectively. The green path is the VIO odometry; the blue path is the odometry under GPS global fusion.
roslaunch vins vins_rviz.launch
rosrun vins kitti_gps_test ~/catkin_ws/src/VINS-Fusion/config/kitti_raw/kitti_10_03_config.yaml YOUR_DATASET_FOLDER/2011_10_03_drive_0027_sync/
rosrun global_fusion global_fusion_node
Download the car bag to YOUR_DATASET_FOLDER. Open four terminals to run vins odometry, visual loop closure (optional), rviz, and the bag file, respectively. The green path is the VIO odometry; the red path is the odometry under visual loop closure.
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/vi_car/vi_car.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/vi_car/vi_car.yaml
rosbag play YOUR_DATASET_FOLDER/car.bag
VIO is not only a software algorithm; it also relies heavily on hardware quality. For beginners, we recommend running VIO with professional equipment that includes global-shutter cameras and hardware synchronization.
Write a config file for your device. You can take the EuRoC and KITTI config files as examples.
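For instance, you could copy an existing config and adapt the topics, intrinsics and extrinsics to your device; the target folder and file name below are just an illustration.
mkdir -p ~/catkin_ws/src/VINS-Fusion/config/my_device            # hypothetical folder
cp ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml ~/catkin_ws/src/VINS-Fusion/config/my_device/my_device_config.yaml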
VINS-Fusion supports several camera models (pinhole, mei, equidistant). You can use the camera_models package to calibrate your cameras. We put some example data under /camera_models/calibrationdata to show how to calibrate.
cd ~/catkin_ws/src/VINS-Fusion/camera_models/camera_calib_example/
rosrun camera_models Calibrations -w 12 -h 8 -s 80 -i calibrationdata --camera-model pinhole
To further facilitate the building process, we add Docker support to our code. A Docker environment is like a sandbox, which makes our code environment-independent. To run with Docker, first make sure ROS and Docker are installed on your machine. Then add your account to the docker group with sudo usermod -aG docker $YOUR_USER_NAME. Relaunch the terminal or log out and log back in if you get a Permission denied error. Then type:
cd ~/catkin_ws/src/VINS-Fusion/docker
make build
Note that the Docker build process may take a while depending on your network and machine. After VINS-Fusion is successfully built, you can run the vins estimator with the script run.sh.
The script run.sh can take several flags and arguments. The flag -k means KITTI, -l represents loop fusion, and -g stands for global fusion. You can get the usage details with ./run.sh -h. Here are some examples with this script:
# EuRoC Monocular camera + IMU
./run.sh ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
# EuRoC Stereo cameras + IMU with loop fusion
./run.sh -l ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
# KITTI Odometry (Stereo)
./run.sh -k ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml YOUR_DATASET_FOLDER/sequences/00/
# KITTI Odometry (Stereo) with loop fusion
./run.sh -kl ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml YOUR_DATASET_FOLDER/sequences/00/
# KITTI GPS Fusion (Stereo + GPS)
./run.sh -kg ~/catkin_ws/src/VINS-Fusion/config/kitti_raw/kitti_10_03_config.yaml YOUR_DATASET_FOLDER/2011_10_03_drive_0027_sync/
In the EuRoC cases, you need to open another terminal and play your bag file. If you need to modify the code, simply re-run ./run.sh with the proper arguments after your changes.
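For example, for the EuRoC runs you can play the same bag used in the demos above in the second terminal:
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag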
We use Ceres Solver for non-linear optimization, DBoW2 for loop detection, a generic camera model, and GeographicLib.
The source code is released under the GPLv3 license.
We are still working on improving the code reliability. For any technical issues, please contact Tong Qin <qintonguavATgmail.com>.
For commercial inquiries, please contact Shaojie Shen <eeshaojieATust.hk>.