
This wiki serves both as a general guide to the project and as the final report for INF2102, the course in the MSc program of the PUC-Rio Department of Informatics through which this project was developed.

Description

This repository was created for the INF2102 course in the PUC-Rio Informatics MSc program. The goal of the project is to serve both as a basic introduction to the world of explainable reinforcement learning (XRL) and as a sandbox where other XRL students/researchers can try out XRL techniques with as little setup work as possible. This was done by implementing two different XRL techniques (Belief Maps and VIPER) in a single codebase, keeping the non-technique-specific functions and classes as generic as possible. It is recommended that users read the techniques' source papers, linked in the README and at the end of this page, in order to better understand the project.

These two techniques were chosen because of their relative simplicity and because both build on Q-Learning/DQN policies. In addition, the Belief Map/H-Values technique is a case of the "interpretable box" paradigm, in the sense that it is applied alongside model training. The VIPER technique, meanwhile, is a case of the "model distillation" paradigm, in which an RL agent is trained first and an XRL technique is then applied a posteriori to explain the result obtained through the usual RL training loop.

This contrast neatly exemplifies the difference between the two paradigms. "Interpretable box" techniques augment or alter RL techniques to make them more explainable; the Belief Map/H-Values technique, for example, produces a table similar to the Q-values table but aimed at explaining agent intention. "Model distillation" techniques, in turn, act as external accessories to existing RL techniques, generating new results from an already trained model and therefore being more broadly applicable; the VIPER technique, for example, produces a decision tree by applying imitation learning to a trained RL agent.
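
To make this contrast concrete, the sketch below shows where each paradigm attaches to the RL pipeline. It is purely illustrative Python, not the project's actual API: the names `agent`, `train_step` and `distill` are hypothetical placeholders, and the environment is assumed to follow the Gymnasium reset/step interface.

```python
def train_interpretable_box(env, agent, episodes):
    """'Interpretable box' paradigm: the explanation artifact (e.g. the Belief
    Map / H-value table) is built inside the normal training loop."""
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = agent.act(obs)
            next_obs, reward, terminated, truncated, _ = env.step(action)
            # hypothetical method that updates both the Q-table and the H-table
            agent.train_step(obs, action, reward, next_obs, terminated)
            obs, done = next_obs, terminated or truncated
    return agent  # the trained agent already carries its explanation


def explain_by_distillation(env, trained_agent, distill):
    """'Model distillation' paradigm: training is already finished; a separate
    procedure (e.g. VIPER) extracts an explanation from the fixed policy."""
    return distill(env, trained_agent)  # e.g. a decision tree mimicking the agent
```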

Features

Wiki Index

  • Instructions - detailed instructions on running and using the project.
  • Pre-development - project goals and requirements.
  • Program - how the project works, i.e. how classes relate to one another and the general flow of execution.
  • Structure - description of folders and files.
  • Use Cases - example use cases.

References

This codebase was inspired by multiple sources and other repositories.

For general RL code (such as the training loop), see the "Solving Blackjack with Q-Learning" tutorial from the Gymnasium documentation.
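
For orientation, a minimal tabular Q-learning loop in the spirit of that tutorial could look like the sketch below. The hyperparameters are illustrative, and details the tutorial does cover (such as epsilon decay and evaluation) are omitted here.

```python
from collections import defaultdict

import gymnasium as gym
import numpy as np

env = gym.make("Blackjack-v1")
q_values = defaultdict(lambda: np.zeros(env.action_space.n))
alpha, gamma, epsilon = 0.01, 0.95, 0.1

for episode in range(50_000):
    obs, info = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if np.random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_values[obs]))

        next_obs, reward, terminated, truncated, info = env.step(action)

        # standard Q-learning TD update (no bootstrapping past terminal states)
        target = reward + gamma * np.max(q_values[next_obs]) * (not terminated)
        q_values[obs][action] += alpha * (target - q_values[obs][action])

        obs = next_obs
        done = terminated or truncated
```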

For the Belief Map/H-values technique, see the What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes paper and the rl-intention repo for the paper source code.
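
As a rough sketch of the idea only (the exact indicator vector, bootstrapping target and terminal-state handling should be checked against the paper and the rl-intention code), the H-values can be pictured as a second table updated alongside the Q-table, with the scalar reward replaced by a one-hot state-visitation vector. States are assumed to be integer indices here.

```python
from collections import defaultdict

import numpy as np

n_states, n_actions = 100, 4   # illustrative sizes
alpha, gamma = 0.1, 0.95

q = defaultdict(lambda: np.zeros(n_actions))
# one H-vector per (state, action): entry i estimates the discounted expected
# future visitation of state i when taking that action and then acting greedily
h = defaultdict(lambda: np.zeros((n_actions, n_states)))


def one_hot(state):
    v = np.zeros(n_states)
    v[state] = 1.0
    return v


def td_update(s, a, r, s_next, terminated):
    a_greedy = int(np.argmax(q[s_next]))
    cont = 0.0 if terminated else 1.0

    # usual Q-learning update
    q_target = r + gamma * q[s_next][a_greedy] * cont
    q[s][a] += alpha * (q_target - q[s][a])

    # analogous H-value update: the "reward" is the state-visit indicator,
    # bootstrapped from the greedy next action's belief map
    h_target = one_hot(s) + gamma * h[s_next][a_greedy] * cont
    h[s][a] += alpha * (h_target - h[s][a])
```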

For the VIPER technique, see the Verifiable Reinforcement Learning via Policy Extraction paper and the viper repo for the paper source code.
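
A heavily simplified version of that distillation loop might look like the sketch below. It assumes a `teacher_q(obs)` callable returning the trained agent's Q-value vector for an observation, collapses VIPER's dataset resampling and best-tree selection into a single weighted tree fit, and uses scikit-learn's DecisionTreeClassifier as the student; the paper and the viper repo describe the full procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def distill_tree(env, teacher_q, n_iters=10, rollouts_per_iter=20, max_depth=6):
    """Simplified VIPER-style distillation of a Q-value policy into a tree."""
    dataset = []      # aggregated (observation, teacher action, weight) triples
    student = None

    for _ in range(n_iters):
        for _ in range(rollouts_per_iter):
            obs, _ = env.reset()
            done = False
            while not done:
                q = teacher_q(obs)
                features = np.asarray(obs, dtype=float)

                # teacher label, weighted by the Q-value gap: states where the
                # action choice matters more get a larger weight
                dataset.append(
                    (features, int(np.argmax(q)), float(np.max(q) - np.min(q)))
                )

                # roll out with the current student once it exists (DAgger-style),
                # otherwise with the teacher
                if student is None:
                    action = int(np.argmax(q))
                else:
                    action = int(student.predict(features.reshape(1, -1))[0])

                obs, _, terminated, truncated, _ = env.step(action)
                done = terminated or truncated

        X = np.array([f for f, _, _ in dataset])
        y = np.array([a for _, a, _ in dataset])
        w = np.array([wt for _, _, wt in dataset])
        student = DecisionTreeClassifier(max_depth=max_depth).fit(X, y, sample_weight=w)

    return student
```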
