Add Visual SLAM method in mapping and localization pipeline #3167
cc @KYabuuchi
UPDATE: I want to explain our purpose and roadmap a little more. With this work, we are trying to create an alternative to laser-scan mapping and localization (e.g. NDT). Our main goal is to create a geo-referenced feature map with ORB_SLAM3. We will calibrate the lidar, camera, and GNSS/INS sensors, and we will obtain camera poses from the GNSS/INS sensor. When we create the feature map with ORB_SLAM3, we use the GNSS/INS poses as the camera poses. In this way, we avoid drifting and deviating from the road. Later, when we want to localize on a map, we will use the geo-referenced feature map. We think this method could give us better odometry. To make it clearer, here is our roadmap step by step (a rough sketch of step 1 follows the list):
1) First, figure out how we can feed ORB_SLAM3 with GNSS/INS poses. For this step I will use the KITTI dataset and its ground truth, treating the ground-truth poses as if they were GNSS/INS poses. Once this step works correctly, we will switch to real GNSS/INS data.
2) Calculate the transforms between lidar and GNSS/INS and between lidar and camera.
3) Feed ORB_SLAM3 with GNSS/INS poses, as done with the KITTI dataset, and create the geo-referenced feature map.
4) Localize on both the feature map and the laser-scan map and evaluate the scores.
5) If everyone likes this work and it performs well, we will try to implement it in Autoware.
I'm waiting for your questions if any parts are not clear.
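As a starting point for step 1, here is a minimal Python sketch (my own illustration, not ORB_SLAM3 or Autoware code) of how KITTI ground-truth poses could be read and turned into camera pose priors. The file path and the `T_body_cam` extrinsic are placeholders; the real extrinsic would come from the calibration in step 2.

```python
import numpy as np

def load_kitti_poses(path):
    """Read a KITTI odometry poses file: each line is a flattened 3x4 [R|t] matrix."""
    poses = []
    with open(path) as f:
        for line in f:
            vals = np.array(line.split(), dtype=float)
            T = np.eye(4)
            T[:3, :4] = vals.reshape(3, 4)
            poses.append(T)
    return poses

def to_camera_pose(T_world_body, T_body_cam):
    """Chain a GNSS/INS (body) pose with a body-to-camera extrinsic to get a camera pose prior."""
    return T_world_body @ T_body_cam

# Placeholder extrinsic (identity here). In practice it comes from the
# lidar-camera and lidar-GNSS/INS calibration in roadmap step 2.
T_body_cam = np.eye(4)

# "poses/00.txt" is a hypothetical path to a KITTI odometry ground-truth file.
priors = [to_camera_pose(T, T_body_cam) for T in load_kitti_poses("poses/00.txt")]
```

The same chaining would later be applied to real GNSS/INS poses instead of the KITTI ground truth.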
Hi everyone,
We are working on visual SLAM methods in the mapping working group and we want to implement them in Autoware.
There is some nice work at this link:
This work combines ORB features with a lidar point cloud used as a depth map, so it works like an RGB-D camera. It gives us some advantages:
I would like to know if you have any ideas and comments about this method. Thanks.
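To make the idea concrete, here is a minimal Python sketch (my own illustration, not taken from the linked work) of how a lidar point cloud could be projected into the camera image to form a sparse depth map that an RGB-D style front end could consume together with ORB features. The intrinsic matrix `K` and the lidar-to-camera extrinsic `T_cam_lidar` are assumed to be known from calibration.

```python
import numpy as np

def lidar_to_depth_map(points_lidar, T_cam_lidar, K, width, height):
    """Project lidar points into the image plane and keep the nearest depth per pixel.

    points_lidar: (N, 3) array of x, y, z in the lidar frame.
    T_cam_lidar:  4x4 lidar-to-camera extrinsic.
    K:            3x3 camera intrinsic matrix.
    """
    # Transform lidar points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Discard points behind (or too close to) the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)

    # Rasterize into a sparse depth image; 0 marks "no depth", as in typical RGB-D input.
    depth = np.zeros((height, width), dtype=np.float32)
    for ui, vi, z in zip(u[valid], v[valid], pts_cam[valid, 2]):
        if depth[vi, ui] == 0.0 or z < depth[vi, ui]:
            depth[vi, ui] = z
    return depth
```

This is only the projection step; how the resulting depth is fused with the ORB feature pipeline is up to the SLAM front end.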