This is the official code repository for the work described in our ICARA 2023 paper.
The code provides the implementation of the Miniworld Maze experiments, using the 100-Mazes dataset for evaluation.
Note that our implementation is a minimal fork of the pyRIL library, adapted for visual navigation. Please visit the original repository if you are looking for a comprehensive Reinforcement and Imitation Learning library.
This repository is released under the GNU General Public License v3.0 (refer to the LICENSE file for details).
If you make use of this dataset and software, please cite the following reference in any publication:
@inproceedings{Gutierrez-Alvarez2023,
  author    = {Guti\'errez-\'Alvarez, C. and Hernandez-Garc\'ia, S. and Nasri, N. and Cuesta-Infante, A. and L\'opez-Sastre, R.~J.},
  title     = {Towards Clear Evaluation of Robotic Visual Semantic Navigation},
  booktitle = {ICARA},
  year      = {2023}
}
This code has only been tested on Ubuntu machines with conda preinstalled. To create the environment, just run:
bash create_env.sh
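If the script fails on your system, the environment can usually be recreated by hand. A minimal sketch, assuming a standard conda workflow (the Python version and the presence of a requirements.txt file are assumptions, not taken from the repository):

conda create -n vsn python=3.8 -y   # env name 'vsn' matches the commands below; the Python version is an assumption
conda activate vsn
pip install -r requirements.txt     # assumes the repository ships a requirements file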
To run the training process, execute the following commands in your terminal:
conda activate vsn
python launch_maze.py maze_experiments/maze_config.yaml
The basic training parameters are already set in maze_config.yaml. Adjust them to meet your experiment's requirements; a sketch of what such a configuration might contain follows below.
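For orientation only, a training configuration of this kind typically contains entries such as the following. Every key and value here is an illustrative assumption; check the actual maze_config.yaml shipped with the repository for the real names:

# Hypothetical sketch of maze_config.yaml; all keys/values are assumptions
environment: MiniWorld-Maze-v0     # assumed Miniworld maze environment id
agent:
  learning_rate: 1.0e-4
  gamma: 0.99                      # discount factor
training:
  total_steps: 1000000
  batch_size: 64
  checkpoint_dir: checkpoints/     # where trained models are saved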
To run the evaluation process on the 100 test mazes, execute the following commands in your terminal:
conda activate vsn
python evaluator.py maze_experiments/maze_config.yaml
Be sure to select the same maze_config.yaml that you used during training. Set the checkpoint path in the config file to load the model you want to evaluate. To use our baseline, download it from here.
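As a sketch, the checkpoint entry in the config might look like the following; the key name and path are assumptions, so match whatever field maze_config.yaml actually defines:

# Hypothetical example; the real key name in maze_config.yaml may differ
checkpoint_path: checkpoints/baseline_maze.ckpt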