This is a list of the new features provided by PLVS:
- Line segment detection, matching, triangulation and tracking with both pinhole and fisheye cameras (NEW).
  - This capability can be enabled via the option `Line.on` in the yaml settings.
  - Removed some bugs and optimized parts of the adopted line_descriptor OpenCV module.
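  As an illustrative sketch, the corresponding yaml entry might look like this (the option name comes from this README; the value convention is assumed):

  ```yaml
  # Enable line segment detection, matching, triangulation and tracking
  Line.on: 1
  ```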
- Dense reconstruction with different volumetric mapping methods: voxelgrid, octree_point, octomap, fastfusion, chisel, voxblox.
  - It can be enabled by using the option `PointCloudMapping.on` in the yaml settings and selecting your preferred method via `PointCloudMapping.type` (see the comments in the yaml files).
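  For instance, a hedged sketch of the relevant yaml entries (option names and type values come from this README; the quoting style is assumed):

  ```yaml
  # Enable volumetric mapping and select a back-end
  # (one of: voxelgrid, octree_point, octomap, fastfusion, chisel, voxblox)
  PointCloudMapping.on: 1
  PointCloudMapping.type: "octree_point"
  ```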
- Incremental segmentation with RGBD sensors and octree-based dense map.
  - It can be enabled by using the option `Segmentation.on` in the yaml settings of the RGBD cameras (only working when `octree_point` is selected as the volumetric mapping method).
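  A sketch of an RGBD yaml configuration with segmentation enabled (option names from this README; values illustrative):

  ```yaml
  # Incremental segmentation requires the octree-based dense map
  PointCloudMapping.on: 1
  PointCloudMapping.type: "octree_point"
  Segmentation.on: 1
  ```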
- Augmented reality with overlay of tracked features, built meshes and loaded 3D models.
  - This viz can be enabled by using the button `AR Camera` in the viewer GUI.
  - A shader allows points and lines to be visualized with fisheye cameras.
- Generated sparse and dense maps can be saved and reloaded.
  - You can save the generated sparse and dense maps at any time by using the GUI: first press the button `Pause` and then press the button `Save`. Maps will be saved in the `Scripts` folder. In particular, (1) a sparse map is always saved, and (2) a dense map is saved as a ply file (or another custom format) only if you have set `PointCloudMapping.on: 1`.
  - Use the `SparseMapping` options (as shown in this TUM configuration file) to reload the sparse map. In particular, be sure to properly specify `SparseMapping.filename` and then set `SparseMapping.reuseMap: 1`.
  - To reload the dense map, set `PointCloudMapping.loadMap: 1` and configure `PointCloudMapping.loadFilename`.
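  The reload options above can be sketched as follows (option names from this README; the file paths are purely illustrative):

  ```yaml
  # Reload a previously saved sparse map
  SparseMapping.filename: "my_sparse_map.map"
  SparseMapping.reuseMap: 1

  # Reload a previously saved dense map
  PointCloudMapping.loadMap: 1
  PointCloudMapping.loadFilename: "my_dense_map.ply"
  ```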
- Extraction of ORB keypoints via CUDA.
  - This capability can be optionally activated by using the option `USE_CUDA` in `config.sh`.
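  A hedged sketch of what this looks like in `config.sh` (the variable name comes from this README; the exact value convention is assumed, so check the comments in `config.sh`):

  ```shell
  # In config.sh: enable CUDA-accelerated ORB keypoint extraction
  export USE_CUDA=ON
  ```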
- Different methods can be used with calibrated stereo cameras for estimating depth maps: libelas, libsgm, opencv (these methods may need more fine tuning).
  - Use the option `StereoDense.type` in the yaml settings for stereo cameras to select your preferred method. This works with your stereo datasets when `PointCloudMapping.on` is set to 1.
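  For example, a hedged sketch of the stereo yaml entries (names and method values from this README; quoting style assumed):

  ```yaml
  # Depth-map estimation method for calibrated stereo pairs
  # (one of: libelas, libsgm, opencv)
  StereoDense.type: "libsgm"
  PointCloudMapping.on: 1   # needed for the dense reconstruction to run
  ```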
- Some parts of the original ORBSLAM code were improved or optimized.
  - A new version of g2o is supported (tags/20230223_git). This can be enabled by setting the option `WITH_G2O_NEW` to `ON` in the main `CMakeLists.txt` of PLVS. Note that the new version of g2o will be automatically installed for you by the main build script (`build.sh` → `build_thirdparty.sh` → `install_local_g2o_new.sh`).
  - Smart pointers to manage points and lines (WIP for keyframes). See the file `Pointers.h`.
  - MapObject: experimental representation for planar objects (WIP for PLVS II).
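  The `WITH_G2O_NEW` switch mentioned above might look like this in the main `CMakeLists.txt` (a sketch only; the actual declaration in PLVS may differ):

  ```cmake
  # Use the locally installed new g2o version (tags/20230223_git)
  set(WITH_G2O_NEW ON CACHE BOOL "Build against the new g2o version")
  ```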
  - C++17 support. This can be configured at global level in `config.sh` by setting the variable `CPP_STANDARD_VERSION`.
- Many convenient scripts are provided for launching apps, benchmarking and monitoring the system. See the `Scripts` and `Benchmarking` folders.
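The `CPP_STANDARD_VERSION` variable mentioned above might be set like this in `config.sh` (variable name from this README; the value convention is assumed):

```shell
# In config.sh: C++ standard used across the PLVS build
export CPP_STANDARD_VERSION=17
```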
Note: PLVS is an active project. The main README is under construction and will be updated soon with further information and details. Code improvements are coming soon.
You can find further details and videos on this page and in the following document:

*PLVS: A SLAM System with Points, Lines, Volumetric Mapping, and 3D Incremental Segmentation*, Luigi Freda.
At present, there are some limitations with specific sensor configurations.
- Monocular sensors
  - Line features and volumetric reconstruction are not supported with monocular sensors.
- Stereo sensors
  - Volumetric reconstruction: In general, with stereo cameras, volumetric reconstruction is available only if you rectify the stereo pairs. This is automatic with `Camera.type: "Pinhole"`. On the other hand, with `Camera.type: "KannalaBrandt8"`, you need to set `Camera.needRectification: 1` (to this aim, use the new examples in the folder `Examples`).
  - Incremental segmentation: only supported with RGBD sensors and the octree-based dense map (`PointCloudMapping.type: "octree_point"`).
  - With pinhole stereo cameras (`Camera.type: "Pinhole"`), rectification is automatically applied when using the new examples in the folder `Examples`.
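The stereo rectification settings discussed above can be sketched as follows (yaml keys from this README; values illustrative):

```yaml
# Fisheye stereo: rectification must be requested explicitly
Camera.type: "KannalaBrandt8"
Camera.needRectification: 1
PointCloudMapping.on: 1   # volumetric reconstruction needs rectified pairs
```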