This project explores the front-end method of estimating the motion of a calibrated camera mounted on a mobile platform. Estimating a platform's location relative to its surroundings, along with the path it has traversed, is a key module in a navigation stack. Many sensors can provide this information; cameras are among the most widely used. Visual odometry is the process of recovering the relative translation and rotation of a calibrated camera from the images it records.
1. Process the first image and extract features using the FAST detector.
2. Process the second image and track the features using the Kanade-Lucas-Tomasi (KLT) feature tracker.
3. Estimate the essential matrix from the tracked correspondences.
4. Decompose the essential matrix to recover the relative translation and rotation.
5. Using the tracked correspondences, the relative translation and rotation, and the calibration matrix, triangulate the matches into a 3D point cloud.
6. Process the third image and track features using the second image as the reference.
7. Perform steps 3-5.
8. Compute the relative scale as the mean ratio of distances between matched keypoints in the two point clouds obtained from consecutive image pairs.
9. Update the cumulative translation and rotation.
10. Repeat steps 6-9 for each subsequent image.
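The essential-matrix estimation, decomposition, and triangulation steps above ("Perform steps 3-5") can be sketched with plain NumPy. This is a minimal illustration under assumptions of my own, not the project's code: it assumes noise-free, normalized (calibration-applied) correspondences and uses the linear eight-point algorithm, whereas a practical pipeline would use a robust estimator (e.g. OpenCV's `findEssentialMat`/`recoverPose` with RANSAC). All function names are hypothetical.

```python
import numpy as np

def essential_from_points(x1, x2):
    """Eight-point estimate of the essential matrix E such that
    x2h^T E x1h = 0 for normalized correspondences (Nx2 arrays).
    Assumes noise-free, calibrated coordinates (hypothetical helper)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def decompose_essential(E):
    """Return the four (R, t) candidates from E = [t]x R; a cheirality
    (points-in-front-of-both-cameras) test selects the correct one."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # unit baseline direction (scale is unobservable)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def triangulate(x1, x2, R, t):
    """Linear (DLT) triangulation of normalized correspondences, given
    the pose (R, t) of camera 2 with respect to camera 1."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    X = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        A = np.array([u1 * P1[2] - P1[0],
                      v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0],
                      v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X.append(Vt[-1][:3] / Vt[-1][3])
    return np.array(X)
```

Note that `t` is recovered only as a unit direction: the translation magnitude is unobservable from two views, which is exactly why the relative-scale step below is needed.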
Absolute scale was not computed, since ground truth is not always available; the result below is therefore based on relative scale.
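The relative-scale step can be sketched as follows: given two triangulated clouds of the same matched landmarks from consecutive image pairs, the scale is the mean ratio of inter-point distances. This is a minimal sketch under the assumption that row `i` of both arrays is the same landmark; the function name and sampling scheme are my own.

```python
import numpy as np

def relative_scale(cloud_prev, cloud_curr, pairs=200, seed=0):
    """Mean ratio of distances between random point pairs in two
    triangulated clouds of the same matched landmarks (Nx3 each).
    Hypothetical sketch; real code would also reject outlier pairs."""
    rng = np.random.default_rng(seed)
    n = len(cloud_prev)
    i = rng.integers(0, n, pairs)
    j = rng.integers(0, n, pairs)
    keep = i != j  # avoid zero-length (degenerate) pairs
    d_prev = np.linalg.norm(cloud_prev[i[keep]] - cloud_prev[j[keep]], axis=1)
    d_curr = np.linalg.norm(cloud_curr[i[keep]] - cloud_curr[j[keep]], axis=1)
    return float(np.mean(d_prev / d_curr))
```

Using distance *ratios* makes the estimate invariant to the rotation and translation between the two reconstructions, so only the unknown scale change between the two triangulations remains.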