PlaNet: A Deep Planning Network for Reinforcement Learning [1]. Supports both symbolic and visual observation spaces, and supports some Gym environments (including classic control/non-MuJoCo environments, so DeepMind Control Suite/MuJoCo are optional dependencies). Hyperparameters are taken from the original work and are tuned for the DeepMind Control Suite, so they would need tuning for other domains (such as the Gym environments).
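For illustration, here is a minimal sketch (not the repository's own preprocessing) of how a visual observation could be obtained from a Gym classic-control environment and downscaled with OpenCV; the environment name, 64x64 image size, and pixel scaling are assumptions for this example:

```python
# Minimal sketch (illustrative, not the repository's code): render a Gym
# classic-control environment and downscale the frame for use as a visual
# observation. Uses the classic Gym render API (mode='rgb_array').
import cv2
import gym
import numpy as np

env = gym.make('CartPole-v1')          # environment name is an assumption
env.reset()
frame = env.render(mode='rgb_array')   # H x W x 3 uint8 frame
frame = cv2.resize(frame, (64, 64), interpolation=cv2.INTER_LINEAR)
obs = frame.astype(np.float32) / 255.0 - 0.5  # illustrative scaling to [-0.5, 0.5]
env.close()
```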
Run with `python main.py`. For best performance with DeepMind Control Suite, try setting the environment variable `MUJOCO_GL=egl` (see the dm_control documentation for instructions and details).
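If setting the variable in the shell is inconvenient, it can also be set from Python before dm_control is first imported; this is a minimal sketch, not part of the repository, and the domain/task names are illustrative assumptions:

```python
# Minimal sketch: select EGL rendering for MuJoCo before dm_control is
# imported. Equivalent in effect to running `MUJOCO_GL=egl python main.py`.
import os
os.environ['MUJOCO_GL'] = 'egl'  # must be set before the first dm_control import

from dm_control import suite

env = suite.load(domain_name='cartpole', task_name='balance')  # illustrative task
time_step = env.reset()
pixels = env.physics.render(height=64, width=64, camera_id=0)  # rendered frame
print(pixels.shape)  # (64, 64, 3)
```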
Results and pretrained models can be found in the releases.
Requirements:

- Python 3
- DeepMind Control Suite (optional)
- Gym
- OpenCV Python
- Plotly
- PyTorch
To install all dependencies with Anaconda, run `conda env create -f environment.yml` and use `source activate planet` to activate the environment.
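As a quick sanity check after installation, a small script like the following (an illustrative sketch, not part of the repository) can confirm that the core dependencies import and that a GPU is visible to PyTorch:

```python
# Illustrative post-install check (not part of the repository): verify that
# the core dependencies import and report whether CUDA is available.
import cv2
import gym
import plotly
import torch

print('OpenCV:', cv2.__version__)
print('Gym:', gym.__version__)
print('Plotly:', plotly.__version__)
print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())

try:
    from dm_control import suite  # optional dependency
    print('DeepMind Control Suite benchmark tasks:', len(list(suite.BENCHMARKING)))
except ImportError:
    print('DeepMind Control Suite not installed (optional)')
```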