This repo contains the code for our IROS 2020 paper: Learning an Overlap-based Observation Model for 3D LiDAR Localization.
It uses OverlapNet to train an observation model for Monte Carlo Localization (MCL) and achieves global localization with 3D LiDAR scans.
Developed by Xieyuanli Chen and Thomas Läbe.
*Localization results of overlap-based Monte Carlo Localization.*
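The key idea is to use the overlap that OverlapNet predicts between the current scan and virtual frames of the map as the observation model of MCL: particles whose poses lead to a high predicted overlap receive a high weight. Below is a minimal sketch of such a weighting step, assuming two hypothetical helpers, `nearest_map_feature` (look up the precomputed map feature volume closest to a particle pose) and `predict_overlap` (run the OverlapNet head on a pair of feature volumes); it only illustrates the idea and is not the implementation in `src/`.

```python
import numpy as np

def update_weights(particles, current_feature, map_features,
                   nearest_map_feature, predict_overlap, min_overlap=0.1):
    """Illustrative overlap-based MCL weighting step (hypothetical helpers).

    particles: (N, 4) float array with columns [x, y, yaw, weight].
    current_feature: OverlapNet feature volume of the current scan.
    map_features: precomputed feature volumes of virtual map frames.
    """
    for particle in particles:
        # Precomputed map feature volume closest to the particle position.
        map_feature = nearest_map_feature(particle[:2], map_features)

        # Predicted overlap in [0, 1] between the current scan and the map frame.
        overlap = predict_overlap(current_feature, map_feature)

        # Use the predicted overlap as the (unnormalized) observation likelihood;
        # the floor keeps low-overlap particles alive during global localization.
        particle[3] *= max(overlap, min_overlap)

    # Normalize so the weights again form a probability distribution.
    particles[:, 3] /= np.sum(particles[:, 3])
    return particles
```

Since the feature volumes of the map frames can be computed offline, the quick demo below downloads precomputed feature volumes instead of raw scans.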
If you use our implementation in your academic work, please cite the corresponding paper:
```
@inproceedings{chen2020iros,
  author = {X. Chen and T. L\"abe and L. Nardi and J. Behley and C. Stachniss},
  title = {{Learning an Overlap-based Observation Model for 3D LiDAR Localization}},
  booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
  year = {2020},
  url = {https://www.ipb.uni-bonn.de/pdfs/chen2020iros.pdf},
}
```
We use standalone Keras with a TensorFlow backend as the neural network library.
The code was tested on Ubuntu 18.04 with its default Python 3.6.
To do training and testing on a whole dataset, you need an Nvidia GPU.
To use a GPU, you first need to install the Nvidia driver and CUDA, so have fun!
- CUDA installation guide: link
- System dependencies:

  ```bash
  sudo apt-get update
  sudo apt-get install -y python3-pip python3-tk
  sudo -H pip3 install --upgrade pip
  ```
- Python dependencies (may also work with versions other than those in the requirements file):

  ```bash
  sudo -H pip3 install -r requirements.txt
  ```
- OverlapNet: to use this implementation, clone OverlapNet into the local folder:

  ```bash
  git clone https://github.com/PRBonn/OverlapNet
  ```
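Once the dependencies above are installed, a quick way to check that the TensorFlow backend actually sees your GPU is a short script like the one below (a minimal sketch; `tf.test.is_gpu_available()` is the TF 1.x-style check and is deprecated in newer TensorFlow releases in favor of `tf.config.list_physical_devices('GPU')`):

```python
import tensorflow as tf

# Report the TensorFlow version and whether a CUDA-enabled GPU is visible.
print('TensorFlow version:', tf.__version__)
print('GPU available:', tf.test.is_gpu_available())
```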
For a quick demo, download the feature volumes and the pre-trained sensor model, extract the feature volumes into the `/data`
folder following the recommended data structure, and then run:

```bash
cd src/
python3 main_overlap_mcl.py
```

You will then get an online visualization of the overlap-based MCL, as shown in the gif above.
For more details about the usage and the individual modules of this implementation, see the MCL README.md.
To train a new observation model, see the prepare_training README.md.
- KITTI Odometry Sequence 07: download.
- Pre-trained Sensor Model: download.
- Feature Volumes: download.
- Map Data: download.
- Query Data: download.
- Training Data: download.
Copyright 2020, Xieyuanli Chen, Thomas Läbe, Cyrill Stachniss, Photogrammetry and Robotics Lab, University of Bonn.
This project is free software made available under the MIT License. For details see the LICENSE file.