This repository contains code for LENS - Locational Encoding with Neuromorphic Systems. LENS combines neuromorphic algorithms, sensors, and hardware to perform accurate, real-time robotic localization using visual place recognition (VPR). LENS can be used with the SynSense Speck2fDevKit board, which houses a SPECK™ dynamic vision sensor and neuromorphic processor for online VPR.
This repository is licensed under the MIT License. If you use our code, please cite our arXiv paper:
@misc{hines2024lens,
title={A compact neuromorphic system for ultra energy-efficient, on-device robot localization},
author={Adam D. Hines and Michael Milford and Tobias Fischer},
year={2024},
eprint={2408.16754},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2408.16754},
}
To run LENS, please download this repository and install the required dependencies.
Get the code by cloning the repository.
git clone git@github.com:AdamDHines/LENS.git
cd ~/LENS
All dependencies can be installed from our conda-forge package, our PyPI package, or the local requirements.txt. For the conda-forge package, we recommend using micromamba or miniforge. Please ensure your Python version is <= 3.11.
# Create a new environment and install packages
micromamba create -n lens-vpr -c conda-forge lens-vpr
# samna package is not available on conda-forge, so pip install it
micromamba activate lens-vpr
pip install samna
# Install from our PyPi package
pip install lens-vpr
# Install from local requirements.txt
pip install -r requirements.txt
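After installing, a quick sanity check can confirm the environment is usable. The snippet below is a minimal sketch, not part of the repository; it only checks the Python version constraint and the samna dependency mentioned above.
# sanity_check.py - minimal environment check (illustrative sketch)
import sys

# LENS expects Python <= 3.11
assert sys.version_info[:2] <= (3, 11), f"Python {sys.version.split()[0]} may be too new"

# samna is needed for Speck2fDevKit deployment; comment out if you skipped it
import samna  # noqa: F401

print("Environment check passed on Python", sys.version.split()[0])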
Get started using our pretrained models and datasets to evaluate the system. For a full guide on training and evaluating your own datasets, please visit our Wiki.
To run a simulated event stream, you can try our pre-trained model and datasets. Using the --sim_mat and --matching flags will display a similarity matrix and perform Recall@N matching based on a ground truth matrix.
python main.py --sim_mat --matching
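For context, Recall@N counts a query place as correctly localized if any of its top-N most similar reference places is a true match in the ground truth matrix. The sketch below illustrates the idea on toy arrays; it is not LENS's internal implementation, and the array names are hypothetical.
# Illustrative Recall@N from a similarity matrix and a ground-truth matrix
import numpy as np

def recall_at_n(similarity, ground_truth, n=1):
    """Fraction of queries whose top-N reference matches include a true match."""
    hits = 0
    for q in range(similarity.shape[0]):
        top_n = np.argsort(similarity[q])[::-1][:n]  # indices of the N best references
        if ground_truth[q, top_n].any():             # is any of them a true match?
            hits += 1
    return hits / similarity.shape[0]

# Toy example: 3 query places vs 4 reference places
sim = np.array([[0.9, 0.1, 0.2, 0.3],
                [0.2, 0.8, 0.1, 0.4],
                [0.3, 0.2, 0.1, 0.7]])
gt = np.eye(3, 4, dtype=bool)          # true matches on the diagonal
print(recall_at_n(sim, gt, n=1))       # -> 0.666..., the third query mismatches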
New models can be trained by passing the --train_model flag. Try training a new model with our provided reference dataset.
# Train a new model
python main.py --train_model
For new models on custom datasets, you can optimize your network hyperparameters using Weights & Biases through our convenient optimizer.py script.
# Optimize network hyperparameters
python optimizer.py
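Because the optimizer logs runs to Weights & Biases, you need to be authenticated with a W&B account before starting it. A minimal sketch, assuming a standard W&B setup:
# Authenticate with Weights & Biases once before running optimizer.py
# (assumes you already have a W&B account and an API key)
import wandb
wandb.login()  # prompts for an API key if you are not already logged in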
For more details, please visit the Wiki.
If you have a SynSense Speck2fDevKit, you can try out LENS using our pre-trained model and datasets by deploying simulated event streams on-chip.
# Generate a time-based simulation of event streams with pre-recorded data
python main.py --simulated_speck --sim_mat --matching
Additionally, models can be deployed onto the Speck2fDevKit for low-latency, energy-efficient VPR with sequence matching in real time. Use the --event_driven flag to start the online inferencing system.
# Run the online inferencing model
python main.py --event_driven
For more details on deployment to the Speck2fDevKit, please visit the Wiki.
If you encounter problems whilst running the code or if you have a suggestion for a feature or improvement, please report it as an issue.