This is the PyTorch implementation of our ISPRS 2022 paper. We introduce FeatLoc, a neural network for end-to-end learning of camera localization from indoor 2D sparse features. Authors: Thuan Bui Bach, Tuan Tran Dinh, Joo-Ho Lee
If you find this project useful, please cite:

```bibtex
@article{BACH202250,
  title = {FeatLoc: Absolute pose regressor for indoor 2D sparse features with simplistic view synthesizing},
  author = {Thuan Bui Bach and Tuan Tran Dinh and Joo-Ho Lee},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  volume = {189},
  pages = {50-62},
  year = {2022},
  doi = {10.1016/j.isprsjprs.2022.04.021},
}
```
Dataset | PoseNet17 | MapNet | FeatLoc (ours) |
---|---|---|---|
7Scenes | 0.24m, 8.12° | 0.21m, 7.77° | 0.14m, 5.89° |
12Scenes | 0.74m, 6.48° | 0.63m, 5.85° | 0.38m, 5.04° |
- The code has been tested with
  - Python 3.7,
  - PyTorch 1.5.0,
  - the PointNet++ library,
  - other Python packages, including matplotlib, pandas, h5py, tqdm, and numpy.
- To install these packages directly, run:
```bash
sudo pip install -r requirements.txt
```
- If you are familiar with conda environments, please run:
```bash
conda env create -f environment.yml
conda activate FeatLoc
```
- Note that the PointNet++ library needs to be installed separately (see the sketch below).
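One possible route, as a hedged sketch rather than the authors' prescribed method, is to install the PointNet++ CUDA ops from the commonly used Pointnet2_PyTorch repository (an assumption; use whichever PointNet++ build FeatLoc's model code actually imports):

```bash
# Assumption: the pointnet2_ops package from erikwijmans/Pointnet2_PyTorch
# provides the PointNet++ ops; swap in your preferred PointNet++ build.
pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
```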
- Install the hierarchical localization toolbox (hloc) into the `dataset` folder, then rename it to `Hierarchical_Localization` so the layout matches the tree below; a minimal install sketch follows the tree.
```
FeatLoc
├── dataset
│   ├── Generated_Data
│   ├── Hierarchical_Localization
│   ├── gendata.py
│   └── gendata_lib.py
├── model
│   ├── ...
├── ...
└── README.md
```
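As a minimal sketch, assuming hloc's standard public repository (the URL and editable install follow hloc's own instructions):

```bash
# Clone hloc (with its submodules) into dataset/ under the expected name,
# then install it in editable mode.
cd dataset
git clone --recursive https://github.com/cvg/Hierarchical-Localization.git Hierarchical_Localization
cd Hierarchical_Localization
python -m pip install -e .
```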
- Generate the 3D models:
  - For the 7Scenes dataset, please follow this guideline.
  - For the 12Scenes dataset, please run the DSAC* setup script to download the dataset (see the sketch below), then use hloc to generate a 3D model for each scene. Note that you first need to create a 3D model of the entire training and test data per scene using COLMAP, and then run hloc on the training data only.
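For the download step, a hedged sketch assuming the DSAC* repository's dataset setup script (the repository URL and script path are assumptions; adjust them to your checkout):

```bash
# Assumption: DSAC* (vislearn/dsacstar) ships a 12Scenes download script
# under datasets/; adjust the path if the repository layout differs.
git clone https://github.com/vislearn/dsacstar.git
cd dsacstar/datasets
python setup_12scenes.py
```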
- Generate the training and testing data:
  - Please use the same environment as the hierarchical localization toolbox for this part. A loop over all scenes is sketched after the example below.
```bash
cd dataset
python gendata.py --dataset 7scenes --scene chess --augment 1
```
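To prepare every 7Scenes scene in one pass, a small loop works; the scene names follow the standard 7Scenes naming, and the flags simply mirror the example above:

```bash
# Assumption: gendata.py accepts the same flags for every 7Scenes scene.
cd dataset
for scene in chess fire heads office pumpkin redkitchen stairs; do
    python gendata.py --dataset 7scenes --scene "$scene" --augment 1
done
```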
- Please run the executable script `eval.py` to evaluate each scene independently. For example, we can evaluate FeatLoc++ on the `apt1_living` scene as follows (a batch-evaluation loop is sketched after the example output):
```bash
python eval.py --scene apt1_living --checkpoint results/apt1_living_featloc++au.pth.tar --version 2
```
```
Median error in translation = 0.2601 m
Median error in rotation = 3.8867 degrees
```
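To sweep all 12Scenes scenes, a loop like the one below can help; the scene list uses the standard 12Scenes naming, and the checkpoint naming pattern is an assumption extrapolated from the example above:

```bash
# Assumption: each checkpoint follows the <scene>_featloc++au.pth.tar pattern.
for scene in apt1_kitchen apt1_living apt2_bed apt2_kitchen apt2_living apt2_luke \
             office1_gates362 office1_gates381 office1_lounge office1_manolis \
             office2_5a office2_5b; do
    python eval.py --scene "$scene" --checkpoint "results/${scene}_featloc++au.pth.tar" --version 2
done
```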
- You can download the prepared testing data and trained models for 12Scenes from Google Drive (please move the data folders and model files to the `dataset/Generated_Data` and `results` folders, respectively).
- Please run the executable script `train.py` to train each scene independently. For example, we can train FeatLoc++ on the `apt1_living` scene as follows (a combined train-and-evaluate sketch follows the command):
```bash
python train.py --scene apt1_living --n_epochs 200 --version 2 --augment 1
```
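A typical workflow is to train a scene and then evaluate the resulting checkpoint; the checkpoint path here is an assumption based on the naming in the evaluation example, so adjust it to wherever `train.py` actually writes its output:

```bash
# Assumption: train.py saves a checkpoint into results/ using the naming
# pattern from the evaluation example above.
python train.py --scene apt1_living --n_epochs 200 --version 2 --augment 1
python eval.py --scene apt1_living --checkpoint results/apt1_living_featloc++au.pth.tar --version 2
```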
If you have any problems running the code, please feel free to contact me: thuan.aislab@gmail.com