This is the final project in the course and a crucial task for driverless cars; visual odometry is a state-of-the-art technique used in many robotics projects today. Visual odometry estimates the vehicle's position from a camera's video stream, commonly using an RGB-D camera that also provides depth. Taking it one step further, in this project visual odometry is computed from a *monocular* camera, without any depth information from the camera itself. The position of the camera is tracked across frames by computing the *Essential matrix* between consecutive frames, decomposing it into candidate *rotation* and *translation* matrices, and selecting the correct pair by *triangulation*. The *trajectory* of the vehicle computed for the given video is shown below.
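As a rough illustration of this pipeline, here is a minimal sketch of estimating frame-to-frame motion with OpenCV. The intrinsic matrix `K`, the ORB feature matching, and the `relative_pose` helper are assumptions for the example and not necessarily the method used in this project; `cv2.recoverPose` performs the decomposition of the Essential matrix and the triangulation-based (cheirality) selection of the valid rotation and translation.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics; in practice these come from calibration.
K = np.array([[718.9, 0.0, 607.2],
              [0.0, 718.9, 185.2],
              [0.0, 0.0, 1.0]])

def relative_pose(prev_gray, curr_gray, K):
    """Estimate camera rotation and (unit-scale) translation between two frames."""
    # Detect and match ORB features between consecutive frames.
    orb = cv2.ORB_create(3000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the Essential matrix with RANSAC to reject outlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)

    # Decompose E into the four (R, t) candidates and pick the one for which
    # triangulated points lie in front of both cameras.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

The per-frame poses can then be chained to trace the trajectory, keeping in mind that monocular translation is only recovered up to an unknown scale; one common accumulation convention is `cur_t = cur_t + cur_R @ t` followed by `cur_R = R @ cur_R`.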