The paper with supplementary material is available on arXiv:
https://arxiv.org/abs/2401.14919
If you use this code, please cite our paper:
@inproceedings{kluger2024parsac,
  title={PARSAC: Accelerating Robust Multi-Model Fitting with Parallel Sample Consensus},
  author={Kluger, Florian and Rosenhahn, Bodo},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2024}
}
Related repositories:
- HOPE-F dataset
- SMH dataset
- NYU-VP dataset
- YUD+ dataset
- CONSAC
- Our J-/T-Linkage implementation for VP detection
Get the code:
git clone --recurse-submodules https://github.com/fkluger/parsac.git
cd parsac
git submodule update --init --recursive
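To confirm that the submodules were fetched (an optional, generic git check, not a step from the original instructions):
git submodule status
Each submodule, e.g. nyu_vp and yud_plus, should appear with a commit hash rather than a missing entry.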
Set up the Python environment using Anaconda:
conda env create -f environment.yml
source activate parsac
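As an optional sanity check, and assuming the environment provides PyTorch (an assumption on our part, since the network code relies on it), you can verify that imports resolve from the new environment:
python -c "import torch; print(torch.__version__)"
If this fails, re-create the environment from environment.yml and make sure the parsac environment is active.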
Download the HOPE-F dataset and extract it inside the datasets/hope directory. The small version of the dataset without images is sufficient for training and evaluation.
Download the SMH dataset and extract it inside the datasets/smh directory. The small version of the dataset without images is sufficient for training and evaluation.
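For both datasets above, a hedged extraction sketch; the archive names are placeholders, not the actual download names, so substitute whatever files you obtained:
mkdir -p datasets/hope datasets/smh
tar -xzf hope_f_small.tar.gz -C datasets/hope   # placeholder archive name; use unzip instead if the download is a .zip
tar -xzf smh_small.tar.gz -C datasets/smh       # placeholder archive name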
The vanishing point labels and pre-extracted line segments for the NYU-VP dataset are fetched automatically via the nyu_vp submodule.
Pre-extracted line segments and VP labels for the YUD+ dataset are fetched automatically via the yud_plus submodule. The RGB images and camera calibration parameters, however, are not included. Download the original York Urban Dataset from the Elder Laboratory's website and store it under the datasets/yud_plus/data subfolder.
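A hedged placement sketch; the archive name and its internal folder layout are assumptions, so adjust the paths to whatever the download actually contains:
mkdir -p datasets/yud_plus/data
unzip YorkUrbanDB.zip -d datasets/yud_plus/data   # assumed archive name from the Elder Laboratory's website
If the archive unpacks into its own subfolder, move its contents so the dataset files sit directly under datasets/yud_plus/data.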
We provide a mirror of the Adelaide dataset here: https://cloud.tnt.uni-hannover.de/index.php/s/egE6y9KRMxcLg6T. Download it and place the .mat files inside the datasets/adelaide directory.
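A hedged download sketch; appending /download to the share link is a common Nextcloud pattern but not guaranteed, and the archive name is an assumption:
mkdir -p datasets/adelaide
wget -O adelaide.zip "https://cloud.tnt.uni-hannover.de/index.php/s/egE6y9KRMxcLg6T/download"
unzip adelaide.zip -d datasets/adelaide
Afterwards, the .mat files should sit directly inside datasets/adelaide.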
In order to reproduce the results from the paper using our pre-trained network, first download the neural network weights and then follow the instructions on the EVAL page.
If you want to train PARSAC from scratch, please follow the instructions on the TRAIN page.