Alejandro Fontan · Javier Civera · Michael Milford
VSLAM-LAB is designed to simplify the development, evaluation, and application of Visual SLAM (VSLAM) systems. This framework enables users to compile and configure VSLAM systems, download and process datasets, and design, run, and evaluate experiments — all from a single command line!
Why Use VSLAM-LAB?
- Unified Framework: Streamlines the management of VSLAM systems and datasets.
- Ease of Use: Run experiments with minimal configuration and single-command execution.
- Broad Compatibility: Supports a wide range of VSLAM systems and datasets.
- Reproducible Results: Standardized methods for evaluating and analyzing results.
To ensure all dependencies are installed in a reproducible manner, we use the package management tool pixi. If you haven't installed pixi yet, please run the following command in your terminal:
curl -fsSL https://pixi.sh/install.sh | bash
After installation, restart your terminal or re-source your shell configuration for the changes to take effect. For more details, refer to the pixi documentation.
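To confirm that pixi is available on your PATH, you can run:
pixi --version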
Clone the repository and navigate to the project directory:
git clone https://github.com/alejandrofontan/VSLAM-LAB.git && cd VSLAM-LAB
You can now execute any baseline on any sequence from any dataset within VSLAM-LAB using the following command:
pixi run demo <baseline> <dataset> <sequence>
For a full list of available systems and datasets, see the VSLAM-LAB Supported Baselines and Datasets. Example commands:
pixi run demo droidslam euroc MH_01_easy
pixi run demo monogs hamlyn rectified01
pixi run demo orbslam2 rgbdtum rgbd_dataset_freiburg1_xyz
pixi run demo dust3r 7scenes chess_seq-01
pixi run demo glomap eth table_3
To change the paths where VSLAM-LAB-Benchmark and/or VSLAM-LAB-Evaluation data are stored (for example, to /media/${USER}/data), use the following commands:
pixi run set-benchmark-path /media/${USER}/data
pixi run set-evaluation-path /media/${USER}/data
With VSLAM-LAB, you can easily design and configure experiments using a YAML file and run them with a single command. To run the experiment demo, execute the following command:
pixi run vslamlab --exp_yaml configs/exp_demo.yaml
Note: This demo will execute one run per sequence using each VSLAM system. There are 80 pre-executed runs saved in VSLAM-LAB-Evaluation to assist with visualization. The demo uses modified versions of ORB-SLAM2 and DSO. Please note that the comparison is between SLAM and odometry and is intended only as an example of how to use VSLAM-LAB.
Experiments in VSLAM-LAB are defined as sequences of entries in a YAML file (see example ~/VSLAM-LAB/configs/exp_demo_short.yaml):
exp_vslamlab:
  Config: config_demo.yaml      # YAML file containing the sequences to be run
  NumRuns: 1                    # Maximum number of executions per sequence
  Parameters: {verbose: 1}      # Parameters that will be passed to the baseline executable
  Module: droidslam             # droidslam/monogs/orbslam2/dust3r/glomap/...
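A single experiment file can contain several such entries, one per experiment; this is how the demo runs each VSLAM system on the same sequences. As a hypothetical illustration with two of the demo baselines:
exp_orbslam2:
  Config: config_demo.yaml
  NumRuns: 1
  Parameters: {verbose: 1}
  Module: orbslam2

exp_dso:
  Config: config_demo.yaml
  NumRuns: 1
  Parameters: {verbose: 1}
  Module: dso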
Config files are YAML files containing the list of sequences to be executed in the experiment (see example ~/VSLAM-LAB/configs/config_demo_short.yaml):
rgbdtum:
  - 'rgbd_dataset_freiburg1_xyz'
hamlyn:
  - 'rectified01'
7scenes:
  - 'chess_seq-01'
eth:
  - 'table_3'
euroc:
  - 'MH_01_easy'
monotum:
  - 'sequence_01'
For a full list of available VSLAM systems and datasets, refer to the section VSLAM-LAB Supported Baselines and Datasets.
Datasets in VSLAM-LAB are stored in a folder named VSLAM-LAB-Benchmark, which is created by default in the same parent directory as VSLAM-LAB. If you want to modify the location of your datasets, change the variable VSLAMLAB_BENCHMARK in ~/VSLAM-LAB/utilities.py.
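For example, a hypothetical edit in utilities.py (the exact value and type of the variable may differ, so check the file):
VSLAMLAB_BENCHMARK = '/media/user/data/VSLAM-LAB-Benchmark'  # hypothetical path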
- To add a new dataset, structure your dataset as follows:
~/VSLAM-LAB-Benchmark
└── YOUR_DATASET
    ├── sequence_01
    │   ├── rgb
    │   │   ├── img_01
    │   │   ├── img_02
    │   │   └── ...
    │   ├── calibration.yaml
    │   ├── rgb.txt
    │   └── groundtruth
    ├── sequence_02
    │   └── ...
    └── ...
- Derive a new class dataset_{your_dataset}.py for your dataset from ~/VSLAM-LAB/Datasets/Dataset_vslamlab.py, and create a corresponding YAML configuration file named dataset_{your_dataset}.yaml (see the sketch after this list).
- Include the call for your dataset in the function get_dataset(...) in ~/VSLAM-LAB/Datasets/Dataset_utilities.py:
from Datasets.dataset_{your_dataset} import {YOUR_DATASET}_dataset
...
def get_dataset(dataset_name, benchmark_path):
    ...
    switcher = {
        "rgbdtum": lambda: RGBDTUM_dataset(benchmark_path),
        ...
        "{your_dataset}": lambda: {YOUR_DATASET}_dataset(benchmark_path),
    }
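For orientation, a minimal sketch of such a dataset class is shown below. This is a hypothetical outline, not the actual VSLAM-LAB interface: the base-class name, constructor signature, and method names are assumptions, so check ~/VSLAM-LAB/Datasets/Dataset_vslamlab.py for the real ones.
# Hypothetical sketch: base-class name, constructor signature, and the
# overridable methods are assumptions; consult Dataset_vslamlab.py.
from Datasets.Dataset_vslamlab import Dataset_vslamlab  # assumed base-class name

class YOUR_DATASET_dataset(Dataset_vslamlab):
    def __init__(self, benchmark_path):
        # 'your_dataset' matches the key used in config YAML files; per-dataset
        # settings would come from dataset_{your_dataset}.yaml.
        super().__init__('your_dataset', benchmark_path)  # assumed signature

    def download_sequence(self, sequence_name):
        # Fetch and unpack the raw data into the layout shown above
        # (rgb/ images, calibration.yaml, rgb.txt, groundtruth).
        ...
Once registered in get_dataset(...), sequences from the new dataset can be referenced in config files under the your_dataset key.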
VSLAM-LAB is released under the license specified in LICENSE.txt. For a list of code dependencies that are not property of the authors of VSLAM-LAB, please check docs/Dependencies.md.
If you're using VSLAM-LAB in your research, please cite the work below. If you're specifically using VSLAM systems or datasets that have been included, please cite those as well. We provide a spreadsheet with citations for each dataset and VSLAM system for your convenience.
@misc{fontan2024vslamlab,
  author = {Fontan, Alejandro},
  title = {VSLAM-LAB: A Comprehensive Framework for Visual SLAM Baselines and Datasets},
  howpublished = "\url{https://github.com/alejandrofontan/VSLAM-LAB}",
  year = {2024}
}
We provide a spreadsheet with more detailed information for each baseline and dataset.
| Baselines | System | Sensors | License | Label |
|---|---|---|---|---|
| DROID-SLAM | VSLAM | mono | BSD-3 | droidslam |
| GLOMAP | SfM | mono | BSD-3 | glomap |
| MonoGS | VSLAM | mono/RGBD/Stereo | License | monogs |
| ORB-SLAM2 | VSLAM | mono/RGBD/Stereo | GPLv3 | orbslam2 |
| DUSt3R | SfM | mono | CC BY-NC-SA 4.0 | dust3r |
| COLMAP | SfM | mono | BSD | colmap |
| DSO | VO | mono | GPLv3 | dso |
| AnyFeature-VSLAM | VSLAM | mono | GPLv3 | anyfeature |
| evo | Eval | - | GPLv3 | evo |
| Datasets | Data | Mode | Label |
|---|---|---|---|
| ETH3D SLAM Benchmarks | real | handheld | eth |
| RGB-D SLAM Dataset and Benchmark | real | handheld | rgbdtum |
| ICL-NUIM RGB-D Benchmark Dataset | synthetic | handheld | nuim |
| Monocular Visual Odometry Dataset | real | handheld | monotum |
| The KITTI Vision Benchmark Suite | real | vehicle | kitti |
| RGB-D Dataset 7-Scenes | real | handheld | 7scenes |
| The EuRoC MAV Dataset | real | UAV | euroc |
| TartanAir: A Dataset to Push the Limits of Visual SLAM | synthetic | handheld | tartanair |
| The Drunkard's Dataset | synthetic | handheld | drunkards |
| The Replica Dataset - iMAP | synthetic | handheld | replica |
| Hamlyn Rectified Dataset | real | handheld | hamlyn |
| Underwater caves sonar and vision data set | real | underwater | caves |