This is the code for the 2019 BMVC paper Matching Features without Descriptors: Implicitly Matched Interest Points by Titus Cieslewski, Michael Bloesch and Davide Scaramuzza. When using this, please cite:
@InProceedings{Cieslewski19bmvc,
  author    = {Titus Cieslewski and Michael Bloesch and Davide Scaramuzza},
  title     = {Matching Features without Descriptors:
               Implicitly Matched Interest Points},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = 2019
}
If you are looking to minimize the amount of data necessary for feature matching, but still want to use descriptors, you might also be interested in our related work SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning.
The supplementary material mentioned in the paper can be found at http://rpg.ifi.uzh.ch/datasets/imips/supp_imips.zip.
We recommend working in a virtual environment (also when using ROS/catkin).
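For example, with virtualenv (the environment name imips_venv is just an illustration):

virtualenv imips_venv
. imips_venv/bin/activate

With the environment active, install the dependencies: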
pip install --upgrade opencv-contrib-python==3.4.2.16 opencv-python==3.4.2.16 ipython \
pyquaternion scipy absl-py hickle matplotlib sklearn tensorflow-gpu cachetools
With ROS/catkin:

sudo apt install python-catkin-tools
mkdir -p imips_ws/src
cd imips_ws
catkin config --init --mkdirs --extend /opt/ros/<YOUR VERSION> --merge-devel
cd src
git clone git@github.com:catkin/catkin_simple.git
git clone git@github.com:uzh-rpg/imips_open.git
git clone git@github.com:uzh-rpg/imips_open_deps.git
catkin build
. ../devel/setup.bash
Alternatively, without ROS/catkin:

mkdir imips_ws
cd imips_ws
git clone git@github.com:uzh-rpg/imips_open.git
git clone git@github.com:uzh-rpg/imips_open_deps.git
Make sure imips_open_deps/rpg_common_py/python, imips_open_deps/rpg_datasets_py/python and imips_open/python are in your PYTHONPATH.
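A minimal sketch, assuming the workspace lives at $HOME/imips_ws (adapt the paths to your setup):

export PYTHONPATH=$PYTHONPATH:$HOME/imips_ws/imips_open_deps/rpg_common_py/python
export PYTHONPATH=$PYTHONPATH:$HOME/imips_ws/imips_open_deps/rpg_datasets_py/python
export PYTHONPATH=$PYTHONPATH:$HOME/imips_ws/imips_open/python

Add these lines to your ~/.bashrc to make them persistent.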
Download the weights from http://rpg.ifi.uzh.ch/datasets/imips/tds=tm_ds=kt_d=14_ol=0.30.zip and extract them into python/imips/checkpoints.
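A sketch of this step, assuming wget and unzip are available and you are in the imips_open repository root (if the archive contains a top-level directory, move its contents into checkpoints accordingly):

wget http://rpg.ifi.uzh.ch/datasets/imips/tds=tm_ds=kt_d=14_ol=0.30.zip
unzip tds=tm_ds=kt_d=14_ol=0.30.zip -d python/imips/checkpoints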
python infer_folder.py --in_dir=INPUT_DIR [--out_dir=OUTPUT_DIR] [--ext=.EXTENSION]
If no output directory is provided, it will be $HOME/imips_out/INPUT_DIR. --ext can be used to specify an image extension other than .jpg or .png (include the dot).
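For example, to run inference on a hypothetical folder of .jpeg images (the folder name and extension are illustrative):

python infer_folder.py --in_dir=$HOME/my_images --ext=.jpeg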
Follow these instructions to link up KITTI. To speed things up, you can download http://rpg.ifi.uzh.ch/datasets/imips/tracked_indices.zip and extract the contained files to python/imips/tracked_indices (visual overlap precalculation).
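A sketch of that download, again assuming wget and unzip, run from the repository root (check the archive layout so the files end up directly in tracked_indices):

wget http://rpg.ifi.uzh.ch/datasets/imips/tracked_indices.zip
unzip tracked_indices.zip -d python/imips/tracked_indices

Then, run: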
python render_matching.py --val_best --testing
This will populate results/match_render/tds=tm_ds=kt_d=14_ol=0.30_kt_testing with rendered match images.
(Re)move the previously downloaded checkpoints. Follow these instructions to link up TUM mono. Then, run:
python train.py
To visualize training progress, you can run:
python plot_val_metrics.py
in parallel. (Example plot: validation metrics after around 60k iterations.)
Note that inlier counts drop initially; this is normal. With some initializations, training seems to fail, so you might need to give it a few attempts. You can use the --rr flag to change the seed.
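A hypothetical invocation (assuming --rr takes an integer seed; see the flag definition in the training code for the exact semantics):

python train.py --rr=1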
This work was supported by the National Centre of Competence in Research (NCCR) Robotics through the Swiss National Science Foundation and the SNSF-ERC Starting Grant. The Titan Xp used for this research was donated by the NVIDIA Corporation.