
Event-Based Visual Place Recognition With Ensembles of Temporal Windows


License + Attribution

This code is licensed under CC BY-NC-SA 4.0. Commercial usage is not permitted. If you use this dataset or the code in a scientific publication, please cite the following paper (preprint and additional material):

@article{fischer2020event,
  title={Event-Based Visual Place Recognition With Ensembles of Temporal Windows},
  author={Fischer, Tobias and Milford, Michael},
  journal={IEEE Robotics and Automation Letters},
  volume={5},
  number={4},
  pages={6924--6931},
  year={2020}
}

The Brisbane-Event-VPR dataset accompanies this code repository: https://zenodo.org/record/4302805

Dataset preview

Code overview

The following code is available:

  • The correspondence_event_camera_frame_camera.py file contains the mapping between the rosbag names and the consumer camera video names. The variable video_beginning indicates the ROS timestamp within the bag file that corresponds to the first frame of the consumer camera video file.
  • The read_gps.py file contains helper functions to read GPS data from the provided NMEA files and find matches between two traverses (a minimal sketch of this matching follows the note below).
  • The code_helpers_public.py file contains general helper functions.
  • The code_helpers_public_pr_curve.py file contains helper functions to compute precision, recall, and precision-recall curves from a distance matrix (see the sketch after this list).
  • The main code is contained in the Brisbane Event VPR.ipynb notebook.
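
The following is a minimal sketch of deriving precision and recall from a distance matrix, in the spirit of code_helpers_public_pr_curve.py; the actual function names and evaluation details in that file may differ. For each distance threshold, a query counts as a true positive if its best-matching reference is both accepted (distance below the threshold) and correct:

import numpy as np

def pr_curve(dist_matrix, gt_matches, num_thresholds=100):
    # dist_matrix: (num_queries, num_refs) distances between descriptors.
    # gt_matches: (num_queries,) index of the correct reference per query.
    best_refs = dist_matrix.argmin(axis=1)   # best reference per query
    best_dists = dist_matrix.min(axis=1)
    correct = best_refs == gt_matches
    precisions, recalls = [], []
    for thresh in np.linspace(best_dists.min(), best_dists.max(), num_thresholds):
        accepted = best_dists <= thresh      # queries whose match we accept
        tp = np.sum(accepted & correct)
        precisions.append(tp / max(accepted.sum(), 1))
        recalls.append(tp / len(gt_matches))
    return np.array(precisions), np.array(recalls)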

Please note that in our paper we used manually annotated and then interpolated correspondences; here we instead provide matches based on the GPS data. The results obtained with the code in this repository will therefore differ slightly from those reported in the paper.
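
Below is a minimal sketch of such GPS-based matching between two traverses, assuming the NMEA files provided with the dataset. The actual helpers live in read_gps.py; the file names here are placeholders:

import numpy as np
import pynmea2
from scipy.spatial import cKDTree

def read_latlon(nmea_file):
    # Parse latitude/longitude fixes from an NMEA file, skipping malformed
    # sentences and sentences without a position fix.
    fixes = []
    with open(nmea_file) as f:
        for line in f:
            try:
                msg = pynmea2.parse(line.strip())
            except pynmea2.ParseError:
                continue
            if hasattr(msg, 'latitude') and msg.latitude != 0.0:
                fixes.append((msg.latitude, msg.longitude))
    return np.array(fixes)

ref = read_latlon('reference_traverse.nmea')    # placeholder file names
query = read_latlon('query_traverse.nmea')

# Degrees are treated as planar coordinates here, which is adequate for
# nearby traverses; matches[i] is the reference fix closest to query fix i.
dists, matches = cKDTree(ref).query(query)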

Reconstruct videos from events

  1. Clone this repository: git clone https://github.com/Tobias-Fischer/ensemble-event-vpr.git

  2. Clone https://github.com/cedric-scheerlinck/rpg_e2vid and follow the instructions to create a conda environment and download the pretrained models.

  3. Download the Brisbane-Event-VPR dataset.

  4. Convert the bag files to txt/zip files that the event-to-video code can read: python convert_rosbags.py. Make sure to adjust the path to the extract_events_from_rosbag.py file from the rpg_e2vid repository (see the sketch after these steps).

  5. Run the event-to-video conversion: python reconstruct_videos.py. Make sure to adjust the path to the run_reconstruction.py file from the rpg_e2vid repository.
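
The two conversion scripts essentially shell out to the rpg_e2vid code. The sketch below shows the equivalent direct calls; the flag names follow the rpg_e2vid README, and the paths, bag name, and event topic are placeholders (check each script's --help if they differ in your clone):

import subprocess

E2VID = '/path/to/rpg_e2vid'  # adjust to your clone of the repository

# Step 4 equivalent: extract events from a bag into a zip file that the
# event-to-video code can read.
subprocess.run(['python', f'{E2VID}/scripts/extract_events_from_rosbag.py',
                'traverse.bag',
                '--output_folder', './extracted',
                '--event_topic', '/dvs/events'], check=True)

# Step 5 equivalent: reconstruct video frames from the extracted events.
subprocess.run(['python', f'{E2VID}/run_reconstruction.py',
                '-c', f'{E2VID}/pretrained/E2VID_lightweight.pth.tar',
                '-i', './extracted/traverse.zip',
                '--output_folder', './reconstructed',
                '--auto_hdr'], check=True)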

Create a suitable conda environment

  1. Create a new conda environment with the dependencies: conda create --name brisbaneeventvpr tensorflow-gpu pynmea2 scipy matplotlib numpy tqdm jupyterlab opencv pip ros-noetic-rosbag ros-noetic-cv-bridge python=3.8 -c conda-forge -c robostack

Export RGB frames from rosbags

  1. conda activate brisbaneeventvpr

  2. python export_frames_from_rosbag.py
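
A minimal sketch of what export_frames_from_rosbag.py does: reading RGB frames from a bag with rosbag and cv_bridge (both installed in the environment above) and writing them to disk. The bag path and image topic are assumptions to adapt to your data:

import os
import cv2
import rosbag
from cv_bridge import CvBridge

bag_path = 'traverse.bag'          # placeholder
image_topic = '/dvs/image_raw'     # adjust to the image topic in your bag
out_dir = 'frames'
os.makedirs(out_dir, exist_ok=True)

bridge = CvBridge()
with rosbag.Bag(bag_path) as bag:
    for topic, msg, t in bag.read_messages(topics=[image_topic]):
        # Convert the ROS image message to an OpenCV image and name the
        # file by its ROS timestamp in nanoseconds.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imwrite(os.path.join(out_dir, f'{t.to_nsec()}.png'), frame)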

Event-based VPR with ensembles

  1. Create a new conda environment with the dependencies: conda create --name brisbaneeventvpr tensorflow-gpu pynmea2 scipy matplotlib numpy tqdm jupyterlab opencv pip

  2. conda activate brisbaneeventvpr

  3. git clone https://github.com/QVPR/netvlad_tf_open.git

  4. cd netvlad_tf_open && pip install -e .

  5. Download the NetVLAD checkpoint here (1.1 GB). Extract the zip and move its contents to the checkpoints folder of the netvlad_tf_open repository.

  6. Open the Brisbane Event VPR.ipynb and adjust the path to the dataset_folder.

  7. You can now run the code in Brisbane Event VPR.ipynb.
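
Once everything is set up, the following minimal sketch shows the core of the ensemble idea: extract NetVLAD descriptors for frames reconstructed with several temporal window lengths, build one distance matrix per window, and fuse the matrices by averaging. The TF1-style session code follows the usage example from the netvlad_tf_open README (run here through the TensorFlow 2 compat layer); the window lengths and file names are illustrative, and the notebook implements the exact pipeline used in the paper:

import cv2
import numpy as np
import tensorflow.compat.v1 as tf
import netvlad_tf.nets as nets
from scipy.spatial.distance import cdist

tf.disable_eager_execution()
tf.reset_default_graph()

# Build the NetVLAD graph and restore the checkpoint downloaded in step 5.
image_batch = tf.placeholder(dtype=tf.float32, shape=[None, None, None, 3])
net_out = nets.vgg16NetvladPca(image_batch)  # 4096-D NetVLAD+PCA descriptor
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, nets.defaultCheckpoint())

def describe(image_paths):
    # Return one NetVLAD descriptor per image.
    descs = []
    for path in image_paths:
        image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
        batch = image[np.newaxis, ...].astype(np.float32)
        descs.append(sess.run(net_out, feed_dict={image_batch: batch})[0])
    return np.array(descs)

# One distance matrix per temporal window length, fused by averaging
# (a simplified stand-in for the fusion used in the paper).
window_lengths_ms = [44, 66, 88]  # illustrative values
dist_matrices = []
for window in window_lengths_ms:
    query_descs = describe([f'query_{window}ms_{i:04d}.png' for i in range(10)])
    ref_descs = describe([f'ref_{window}ms_{i:04d}.png' for i in range(10)])
    dist_matrices.append(cdist(query_descs, ref_descs, metric='cosine'))
ensemble_dist = np.mean(dist_matrices, axis=0)
best_match = ensemble_dist.argmin(axis=1)  # best reference index per query place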

Related works

Please check out this collection of related works on place recognition.
