This is a collection of curiosity algorithms implemented in PyTorch on top of the rlpyt deep RL codebase.
- Add remaining curiosity models
- Update models directory with more environments
- Policy Gradient: A2C, PPO
- Replay Buffers (supporting both DQN + QPG): non-sequence and sequence (for recurrent) replay, n-step returns, uniform or prioritized replay, full-observation or frame-based buffer (e.g. for Atari, stores only unique frames to save memory, reconstructs multi-frame observations)
- Deep Q-Learning: DQN + variants: Double, Dueling, Categorical (up to Rainbow minus Noisy Nets), Recurrent (R2D2-style)
- Q-Function Policy Gradient: DDPG, TD3, SAC
- Prediction error: ICM, Disagreement (see the sketch after this list)
- Count-based: RND
- Learning progress: NDIGO
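For intuition on the prediction-error family above, here is a minimal, self-contained sketch of an ICM-style intrinsic reward: a forward model predicts the next state's features from the current features and the action, and the prediction error becomes the exploration bonus. The class and function names below are illustrative only and do not mirror this repository's actual curiosity modules.

```python
# Minimal ICM-style intrinsic reward sketch (illustrative only; names are
# hypothetical and do not match this repo's curiosity model classes).
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    def __init__(self, feat_dim, n_actions, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, phi, action_onehot):
        # Predict next-state features from current features and action.
        return self.net(torch.cat([phi, action_onehot], dim=-1))

def intrinsic_reward(forward_model, phi, phi_next, action_onehot, scale=1.0):
    # Curiosity bonus = scaled squared prediction error of the forward model.
    phi_next_pred = forward_model(phi, action_onehot)
    return scale * 0.5 * (phi_next_pred - phi_next.detach()).pow(2).sum(dim=-1)
```

Disagreement follows the same idea but trains an ensemble of forward models and uses the variance of their predictions as the bonus instead of the error against the true next state.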
- Standard gym environments (mujoco, etc.)
- Atari environments
- SuperMarioBros
- Deepmind PyColab
1. Clone this repo.
2. If you plan on using MuJoCo, place your license key "mjkey.txt" in the base directory. This file will be copied in when you start Docker using the Makefile command.
3. Make sure you have Docker installed to run the image. We recommend the GPU image, which works even if you are only using CPUs (labeled version_gpu), but a CPU-only image is provided as well.
4. Edit global.json to customize any volume mount points, port forwarding, and Docker image versions from the registry. Information from this file is read into the Makefile.
5. The Makefile contains some basic commands (we use node to read information from global.json at the top; it's not used for anything else):

   ```
   make start_docker      # start the docker container and drop you in a shell
   make start_docker_gpu  # start the docker container if running on a machine with GPUs
   make stop_docker       # stop the docker container and clean up
   make clean             # clean all subdirectories of pycache files etc.
   ```
6. Before running anything, make sure you create an empty directory titled "results" in the base directory.
7. Run the launch file from the command line, substituting in your preferences for the appropriate arguments (see rlpyt/utils/launching/arguments.py for a complete list):

   ```
   python3 launch.py -env breakout -alg ppo -curiosity_alg icm -lstm
   ```
8. This will launch your experiment in a tmux session titled "experiment". The session has three windows: a window where your code is running, an htop monitoring process, and a window that serves TensorBoard on port 12345 (or the port specified in global.json).
9. Results folders will be automatically generated in the results directory created in step 6.
10. Example runs can be found in the models directory, along with model weights and exact hyperparameters for tested environments.
For more information on the rlpyt core codebase, please see this white paper on arXiv. If you use this repository in your work or otherwise wish to cite it, please cite the white paper.
The class types perform the following roles:
- Runner - Connects the `sampler`, `agent`, and `algorithm`; manages the training loop and logging of diagnostics.
  - Sampler - Manages `agent` / `environment` interaction to collect training data, can initialize parallel workers.
    - Collector - Steps `environments` (and maybe operates `agent`) and records samples, attached to the `sampler`.
      - Environment - The task to be learned.
        - Observation Space/Action Space - Interface specifications from `environment` to `agent`.
      - TrajectoryInfo - Diagnostics logged on a per-trajectory basis.
  - Agent - Chooses control action to the `environment` in the `sampler`; trained by the `algorithm`. Interface to the `model`.
    - Model - Torch neural network module, attached to the `agent`.
    - Curiosity Model - Torch neural network module, attached to the `model`, which is attached to the `agent`.
    - Distribution - Samples actions for stochastic `agents` and defines related formulas for use in loss function, attached to the `agent`.
  - Algorithm - Uses gathered samples to train the `agent` (e.g. defines a loss function and performs gradient descent).
    - Optimizer - Training update rule (e.g. Adam), attached to the `algorithm`.
    - OptimizationInfo - Diagnostics logged on a per-training-batch basis.
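As a rough illustration of how these pieces fit together, the sketch below wires a runner up from upstream rlpyt's stock classes (SerialSampler, AtariEnv, PPO, AtariFfAgent, MinibatchRl). It is a minimal example under that assumption; in this repository the equivalent wiring, including the curiosity model, is handled for you by launch.py, so the exact classes and arguments may differ.

```python
# Minimal sketch of the runner/sampler/agent/algorithm wiring, assuming
# upstream rlpyt's standard classes; launch.py performs the equivalent
# setup (plus curiosity wiring) in this repo.
from rlpyt.samplers.serial.sampler import SerialSampler
from rlpyt.envs.atari.atari_env import AtariEnv
from rlpyt.algos.pg.ppo import PPO
from rlpyt.agents.pg.atari import AtariFfAgent
from rlpyt.runners.minibatch_rl import MinibatchRl
from rlpyt.utils.logging.context import logger_context

sampler = SerialSampler(          # Sampler: collects training data
    EnvCls=AtariEnv,              # Environment: the task to be learned
    env_kwargs=dict(game="breakout"),
    batch_T=64,                   # time steps per sampling iteration
    batch_B=8,                    # parallel environment instances
)
algo = PPO()                      # Algorithm: loss function + Optimizer updates
agent = AtariFfAgent()            # Agent: wraps the Model (and curiosity model here)
runner = MinibatchRl(             # Runner: training loop and diagnostics logging
    algo=algo,
    agent=agent,
    sampler=sampler,
    n_steps=1e6,
    log_interval_steps=1e4,
    affinity=dict(cuda_idx=None), # CPU; set a GPU index to use CUDA
)
with logger_context("./results", run_ID=0, name="example", log_params={}):
    runner.train()
```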
This codebase is currently funded by Amazon MLRA - we thank them for their support.
Parts of the following open source codebases were used to make this codebase possible. Thanks to all of them for their amazing work!
Thanks to Prof. Pulkit Agrawal and the members of the Improbable AI lab at MIT CSAIL for their continued guidance and support.