Visual odometry (VO) is the process of estimating the position and orientation of a robot by analyzing images from its camera. This project estimates the motion of a calibrated camera mounted on a mobile platform: feature points are computed in each image, and the relative translation and rotation between consecutive images is recovered from them.
```bash
git clone https://github.com/pareespathak/visual_odometry.git
pip3 install -r requirements.txt
cd codes
```

For the 2D-2D approach:

```bash
python 2d-2d_feature_tracking_homo.py
```

For the 3D-2D approach:

```bash
python 3d_2d_optical_flow.py
```
1. Image sequences
2. Feature Detection
3. Feature Matching (or Tracking)
4. Motion Estimation (2D-to-2D, 3D-to-3D, or 3D-to-2D)
- 3D-2D motion estimation using the optical flow method.
- 2D-2D motion estimation using the feature matching method.
- 2D-2D motion estimation using the optical flow method.
- The first image (I1) and second image (I2) are captured, and features are computed in both images using the SIFT feature detector.
- Corresponding features are matched using FlannBasedMatcher, and match quality is maintained with Lowe's ratio test.
- The essential matrix for the image pair (I1, I2) is computed from the matched features.
- The essential matrix is decomposed into a rotation matrix and a translation vector.
- A 3D point cloud is computed by triangulating the image pair.
- The process is repeated to compute the point cloud for the next image pair.
- The relative scale is computed as the mean ratio of distances between corresponding points in the consecutive point clouds obtained from keypoint matching of subsequent images, and the translation is rescaled accordingly.
- The transformation is concatenated with the trajectory so far, and the process repeats (a sketch of this pipeline follows the reference link below).
Reference code: 2D-2D Feature Matching
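A minimal sketch of one matching-and-pose step, assuming OpenCV with SIFT available, a calibrated camera whose intrinsic matrix `K` is known (the values below are example KITTI intrinsics), and two grayscale frames loaded from hypothetical file names; the repository's own script differs in details.

```python
import cv2
import numpy as np

# Example intrinsics (KITTI-style); replace with your camera's calibration.
K = np.array([[718.856, 0.0, 607.1928],
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # I1 (hypothetical path)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # I2 (hypothetical path)

# 1. SIFT features in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. FLANN matching with Lowe's ratio test to discard ambiguous matches.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
good = [m for m, n in flann.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Essential matrix with RANSAC, then relative pose R, t between I1 and I2.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate the correspondences into a 3D point cloud (N x 3).
P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
P2 = K @ np.hstack((R, t))
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T

# 5. Relative scale between two consecutive clouds that triangulate the same
#    tracked points: mean ratio of distances between consecutive points.
def relative_scale(cloud_prev, cloud_cur):
    d_prev = np.linalg.norm(np.diff(cloud_prev, axis=0), axis=1)
    d_cur = np.linalg.norm(np.diff(cloud_cur, axis=0), axis=1)
    return float(np.mean(d_prev / (d_cur + 1e-12)))
```

The translation `t` from `recoverPose` is only known up to scale, which is why the cloud-to-cloud ratio above is needed before concatenating transformations.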
- The first image (I1) and second image (I2) are captured, and features are computed in the first image using the Shi-Tomasi corner detector.
- The features of I1 are tracked into I2 using the Lucas-Kanade optical flow method.
- From the tracked features, the essential matrix, rotation matrix, translation vector, and relative scale between the images are computed as explained above.
- Features are tracked in the subsequent frames and the transformations are concatenated.
- The reference frame is updated whenever too few features survive tracking, and the process repeats (see the sketch below).
Reference code: 2D-2D Feature Tracking
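A minimal sketch of the tracking variant under the same assumptions as above; the corner count and re-detection threshold are illustrative, not the repository's exact values.

```python
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.1928],   # same example intrinsics as above
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# 1. Shi-Tomasi corners in the first frame.
p1 = cv2.goodFeaturesToTrack(img1, maxCorners=1500, qualityLevel=0.01,
                             minDistance=7)

# 2. Pyramidal Lucas-Kanade tracking of those corners into the second frame.
p2, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None,
                                           winSize=(21, 21), maxLevel=3)
pts1 = p1[status.ravel() == 1].reshape(-1, 2)
pts2 = p2[status.ravel() == 1].reshape(-1, 2)

# 3. Essential matrix and relative pose from the tracked pairs,
#    exactly as in the feature-matching variant.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Re-detect corners once too few survive tracking (illustrative threshold).
if len(pts2) < 500:
    p1 = cv2.goodFeaturesToTrack(img2, maxCorners=1500, qualityLevel=0.01,
                                 minDistance=7)
```

Tracking avoids descriptor computation and matching in every frame, which is why this variant is typically faster than the SIFT-based one.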
Do only once:
- Capture two frames I1 and I2 and extract features from the first image (I1).
- Track the features of I1 into I2 using the Lucas-Kanade optical flow method.
- Compute the essential matrix and triangulate the features from I1 and I2 to obtain a 3D point cloud.
Do at each iteration:
- Capture a new frame.
- Track the features of the previous frame into the new frame using the Lucas-Kanade optical flow method.
- Compute the camera pose with the Perspective-n-Point (PnP) algorithm from the 3D points (point cloud) and their corresponding tracked 2D features.
- Concatenate the transformation.
- Triangulate all new feature matches between the two frames to obtain an updated point cloud.
- Iterate the process.
- Update the reference frame whenever too few features survive tracking, and repeat the process (see the sketch below).
Reference code: 3D-2D implementation
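A minimal sketch of the per-iteration PnP step, assuming `cloud` (N x 3 points triangulated from the previous pair) and `pts_new` (the N corresponding pixels tracked into the new frame) are already available; `pnp_pose` and the RANSAC threshold are illustrative names and values, not the repository's exact code.

```python
import cv2
import numpy as np

def pnp_pose(cloud, pts_new, K):
    """Pose of the new frame from 3D-2D correspondences via RANSAC PnP."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        cloud.astype(np.float64), pts_new.astype(np.float64), K,
        distCoeffs=None, reprojectionError=3.0,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; re-initialise from a new keyframe")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers

# Usage (hypothetical arrays):
#   R, tvec, inliers = pnp_pose(cloud, pts_new, K)
#
# solvePnP returns the transform that maps points from the cloud's frame into
# the camera frame (x_cam = R @ X + tvec), so the camera's pose in that frame,
# used when concatenating the trajectory, is the inverse:
#   orientation: R.T    position: -R.T @ tvec
```

Triangulating the new feature matches between the two frames then refreshes the point cloud for the next iteration, as the steps above describe.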
Visual odometry pipeline: Scaramuzza paper
| KITTI sample dataset (Feature Matching) | KITTI 05 dataset (Matching) | Video trajectory (Matching) |
| --- | --- | --- |
| KITTI sample dataset (Feature Tracking) | KITTI 05 dataset (Tracking) | Video trajectory (Tracking) |
| KITTI sample dataset | KITTI 05 dataset |
| --- | --- |
| Reprojection error | |
| Video trajectory | |