AtLoc: Attention Guided Camera Localization - AAAI 2020 (Oral).
This is the PyTorch implementation of AtLoc, a simple and efficient neural architecture for robust visual localization.
AtLoc uses a Conda environment that makes it easy to install all dependencies.
- Install miniconda with Python 3.8.
- Create the AtLoc Conda environment: `conda env create -f environment.yml`
- Activate the environment: `conda activate py38pt21`
- Note that our code was originally tested with PyTorch v0.4.1; the environment.yml file takes care of installing the appropriate version for this environment.
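As an optional sanity check (this snippet is not part of the repository and only uses standard PyTorch calls), you can confirm the installation from inside the activated environment:

```python
# Optional check: print the installed PyTorch version and whether a GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```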
Pixel and pose statistics must be calculated before any training. Use data/dataset_mean.py, which also saves the statistics at the proper location:
python3 dataset_mean.py --scene {SCENE}
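For reference, computing per-channel pixel mean and standard deviation amounts to accumulating sums and squared sums over every image. The following is only a minimal sketch of that computation, assuming a flat directory of PNG images at a hypothetical path; it is not the repository's data/dataset_mean.py.

```python
# Minimal sketch: per-channel pixel mean/std over a directory of RGB images.
# The directory layout, file extension, and output handling are assumptions for illustration.
import glob
import numpy as np
from PIL import Image

def pixel_stats(image_dir):
    total, total_sq, n_pixels = np.zeros(3), np.zeros(3), 0
    for path in sorted(glob.glob(f"{image_dir}/*.png")):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
        pixels = img.reshape(-1, 3)
        total += pixels.sum(axis=0)
        total_sq += (pixels ** 2).sum(axis=0)
        n_pixels += pixels.shape[0]
    mean = total / n_pixels
    std = np.sqrt(total_sq / n_pixels - mean ** 2)
    return mean, std

mean, std = pixel_stats("data/received_images")  # hypothetical path
print("mean:", mean, "std:", std)
```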
The executable script is train.py. For example:
- AtLoc:
python3 train.py --dataset Robocup --scene received_images --model AtLoc --gpus 0
- AtLocLstm:
python3 train.py --scene received_images --model AtLoc --lstm True --gpus 0
- Load a pretrained checkpoint and resume training (see the resume sketch below):
python3 train.py --dataset Robocup --scene received_images --model AtLoc --gpus 0 --start_epochs {PRETRAINED_EPOCH} --epochs 200 --weights {CHECKPOINT_PATH.pth.tar}
The meanings of the various command-line parameters are documented in train.py. The values of the various hyperparameters are defined in tools/options.py.
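Resuming from a checkpoint boils down to restoring the model (and optionally the optimizer) state before re-entering the epoch loop. The sketch below assumes checkpoint keys named model_state_dict, optim_state_dict, and epoch; the keys actually written by train.py may differ, so inspect the saved .pth.tar first.

```python
# Minimal sketch of resuming training from a saved .pth.tar checkpoint.
# The key names used below are assumptions, not necessarily those written by train.py.
import torch

def resume(model, optimizer, checkpoint_path, device="cuda"):
    checkpoint = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(checkpoint["model_state_dict"])
    optimizer.load_state_dict(checkpoint["optim_state_dict"])
    start_epoch = checkpoint.get("epoch", 0) + 1  # continue after the last completed epoch
    return start_epoch
```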
The trained models for some of the experiments presented in the paper can be downloaded here. The inference script is eval.py. Here are some examples, assuming the models have been downloaded to logs:
- AtLoc on received_images:
python3 eval.py --scene received_images --model AtLoc --gpus 0 --weights {WEIGHTS_PATH}.pth.tar
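Camera localization results are usually reported as a translation error (Euclidean distance between predicted and ground-truth positions) and a rotation error (the angle between predicted and ground-truth orientations). The helper below is a generic sketch of these two metrics for quaternion-parameterized poses; it is not taken from eval.py, and the [x, y, z, qw, qx, qy, qz] pose layout is an assumption.

```python
# Generic pose-error metrics for camera localization (illustrative; not the repository's eval.py).
# Poses are assumed to be [x, y, z, qw, qx, qy, qz] with unit (or near-unit) quaternions.
import numpy as np

def pose_errors(pred, gt):
    pred, gt = np.asarray(pred, dtype=np.float64), np.asarray(gt, dtype=np.float64)
    t_err = np.linalg.norm(pred[:3] - gt[:3])            # translation error, in the units of the poses
    q_pred = pred[3:] / np.linalg.norm(pred[3:])
    q_gt = gt[3:] / np.linalg.norm(gt[3:])
    d = np.clip(np.abs(np.dot(q_pred, q_gt)), 0.0, 1.0)  # |<q1, q2>| handles the q / -q ambiguity
    r_err = np.degrees(2.0 * np.arccos(d))               # rotation error in degrees
    return t_err, r_err
```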
saliency_map.py calculates the network attention visualizations and saves them as a video.
- For the AtLoc model trained on received_images:
python3 saliency_map.py --scene received_images --model AtLoc --gpus 0 --weights {WEIGHTS_PATH}.pth.tar
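For context, a common way to obtain this kind of attention/saliency visualization is to backpropagate a scalar derived from the network output to the input image and take the per-pixel gradient magnitude. The sketch below shows that generic technique in PyTorch; saliency_map.py may compute its maps differently (for example, directly from the attention weights).

```python
# Generic input-gradient saliency map in PyTorch (illustrative; not necessarily what saliency_map.py does).
import torch

def saliency_map(model, image):
    """image: (1, 3, H, W) float tensor; returns an (H, W) saliency map."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    output = model(image)        # e.g. the predicted pose vector
    output.norm().backward()     # backpropagate a scalar derived from the output
    # Saliency = per-pixel maximum absolute gradient across colour channels.
    return image.grad.detach().abs().max(dim=1)[0].squeeze(0)
```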
Run:
python3 run.py --weights {WEIGHTS_PATH}.pth.tar
@article{wang2019atloc,
title={AtLoc: Attention Guided Camera Localization},
author={Wang, Bing and Chen, Changhao and Lu, Chris Xiaoxuan and Zhao, Peijun and Trigoni, Niki and Markham, Andrew},
journal={arXiv preprint arXiv:1909.03557},
year={2019}
}