This library provides a simple but high-quality baseline for experimenting with model-free and model-based reinforcement learning approaches in both online and offline settings. It is mostly tested, type-hinted, and documented. Read the detailed documentation here.
The code in this repository is based on work from my master's thesis on uncertainty estimation in offline model-based reinforcement learning. Please cite it accordingly (see the BibTeX entry below).
Clone the repository, then install the package together with all development dependencies:

```sh
pip install -e ".[dev]"
```
After making changes to the code, make sure that static checks and unit tests pass by running `tox`. Tox only runs unit tests that are not marked as slow. For faster feedback from unit tests, run `pytest -m fast`. Please run the slow tests with `pytest -m slow` if you have a GPU available.
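The `fast` and `slow` selectors work through pytest markers. Below is a minimal sketch of how such markers are typically applied; the test names and bodies are hypothetical and not taken from this repository:

```python
import pytest


@pytest.mark.fast
def test_replay_buffer_roundtrip():
    # Cheap, CPU-only check; selected by `pytest -m fast`.
    buffer = []
    buffer.append((0, 1))
    assert buffer[-1] == (0, 1)


@pytest.mark.slow
def test_dynamics_model_training():
    # Expensive (e.g. GPU-bound) check; excluded from the default tox
    # run and selected explicitly with `pytest -m slow`.
    losses = [1.0 / (step + 1) for step in range(100)]
    assert losses[-1] < losses[0]
```

To silence pytest's unknown-marker warnings, custom markers like these are usually registered under `markers` in `pytest.ini` or `pyproject.toml`.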
Feel free to use the code, but please cite it as:
```bibtex
@misc{peter2021ombrl,
  title={Investigating Uncertainty Estimation Methods for Offline Reinforcement Learning},
  author={Felipe Peter and Elie Aljalbout},
  year={2021}
}
```