- lviorf is now available; it supports more lidars and is more robust;
- This repo is modified from LVI-SAM, which makes it easier to adapt to your own sensors;
- The original lio_sam hard-codes many sensor extrinsic parameters; this version moves the extrinsics into the yaml file, making them easier to configure;
- The original lio_sam does not account for the translation between the lidar and the camera; this version adds it;
- The following params need to be set according to your device's extrinsics (a config sketch follows this list):
  - cameraToROSStandard*: ROS standard frame (x: front, y: left, z: up) to camera frame (typically x: right, y: down, z: forward);
  - extrinsic*: IMU frame to camera frame;
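As a rough illustration, the block below shows one way these extrinsics could be laid out in a config yaml. The key names (the `*Rot`/`*Trans` suffixes) and all numeric values are assumptions for illustration only; use the actual keys in the `config` folder and the results of your own calibration.

```yaml
# Illustrative sketch only: key names and values are placeholders, not this
# repo's actual defaults. Replace them with your calibrated extrinsics.

# ROS standard frame (x: front, y: left, z: up) -> camera frame
cameraToROSStandardRot:   [ 0.0, -1.0,  0.0,
                            0.0,  0.0, -1.0,
                            1.0,  0.0,  0.0 ]
cameraToROSStandardTrans: [ 0.0,  0.0,  0.0 ]

# IMU frame -> camera frame (rotation as a row-major 3x3 matrix,
# translation in meters)
extrinsicRot:   [ 1.0,  0.0,  0.0,
                  0.0,  1.0,  0.0,
                  0.0,  0.0,  1.0 ]
extrinsicTrans: [ 0.05, 0.02, -0.01 ]
```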
------------------- Update Date: 2022-12-22 -------------------
- Fixed extrinsic bug;
- Support UrbanNav datasets;
- Support KITTI datasets;
For more details, please refer to my blog: LVI-SAM: environment setup, installation and testing, and adapting your own collected dataset
- Compile the package:
mkdir -p ~/lvi-sam-simple/src
cd ~/lvi-sam-simple/src
git clone https://github.com/YJZLuckyBoy/LVI-SAM-Simple.git
cd ..
catkin_make -j4
- Run the launch file:
roslaunch liorf run_lvi_sam.launch
- Play existing bag files:
rosbag play handheld.bag
- lvi-sam
- urban-nav
- kitti
- my data
This repository contains code for a lidar-visual-inertial odometry and mapping system, which combines the advantages of LIO-SAM and Vins-Mono at a system level.
- ROS (Tested with kinetic and melodic)
- gtsam (Georgia Tech Smoothing and Mapping library)
sudo add-apt-repository ppa:borglab/gtsam-release-4.0
sudo apt install libgtsam-dev libgtsam-unstable-dev
- Ceres (C++ library for modeling and solving large, complicated optimization problems)
sudo apt-get install -y libgoogle-glog-dev
sudo apt-get install -y libatlas-base-dev
wget -O ~/Downloads/ceres.zip https://github.com/ceres-solver/ceres-solver/archive/1.14.0.zip
cd ~/Downloads/ && unzip ceres.zip -d ~/Downloads/
cd ~/Downloads/ceres-solver-1.14.0
mkdir ceres-bin && cd ceres-bin
cmake ..
sudo make install -j4
If you use Docker, you can resolve all of these dependencies at once.
For more information, see docker_start.md.
You can use the following commands to download and compile the package.
cd ~/catkin_ws/src
git clone https://github.com/TixiaoShan/LVI-SAM.git
cd ..
catkin_make
The datasets used in the paper can be downloaded from Google Drive. The data-gathering sensor suite includes: Velodyne VLP-16 lidar, FLIR BFS-U3-04S2M-CS camera, MicroStrain 3DM-GX5-25 IMU, and Reach RS+ GPS.
https://drive.google.com/drive/folders/1q2NZnsgNmezFemoxhHnrDnp1JV_bqrgV?usp=sharing
Note that the images in the provided bag files are in compressed format, so a decompression command is added at the last line of launch/module_sam.launch. If your own bag records raw image data, please comment this line out.
- Configure parameters:
Configure sensor parameters in the .yaml files in the ```config``` folder (a hedged sketch follows these steps).
- Run the launch file:
roslaunch lvi_sam run.launch
- Play existing bag files:
rosbag play handheld.bag
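For the configuration step above, the sketch below shows the kind of per-sensor settings those yaml files typically hold. The key names are modeled on common LIO-SAM/LVI-SAM-style configs and may not match this repo's exact keys; the values are placeholders.

```yaml
# Hedged sketch of typical sensor settings; check the files in config/ for the
# real key names and fill in values for your own hardware.
pointCloudTopic: "points_raw"       # lidar point cloud topic
imuTopic: "imu_raw"                 # raw IMU measurement topic
imageTopic: "camera/image_raw"      # camera image topic

N_SCAN: 16                          # number of lidar scan rings (e.g. VLP-16)
imuAccNoise: 1.0e-2                 # placeholder; use your IMU's calibrated noise
imuGyrNoise: 1.0e-3                 # placeholder; use your IMU's calibrated noise
```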
- Updated graph optimization to use all three factors in imuPreintegration.cpp, simplified mapOptimization.cpp, and improved system stability
Thank you for citing our paper if you use any of this code or datasets.
@inproceedings{lvisam2021shan,
title={LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping},
  author={Shan, Tixiao and Englot, Brendan and Ratti, Carlo and Rus, Daniela},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
pages={5692-5698},
year={2021},
organization={IEEE}
}