TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.


TF-Agents makes implementing, deploying, and testing new Bandits and RL algorithms easier. It provides well-tested, modular components that can be modified and extended. It enables fast code iteration, with good test integration and benchmarking.

To get started, we recommend checking out one of our Colab tutorials. If you need an intro to RL (or a quick recap), start here. Otherwise, check out our DQN tutorial to get an agent up and running in the Cartpole environment. API documentation for the current stable release is on tensorflow.org.

TF-Agents is under active development and interfaces may change at any time. Feedback and comments are welcome.

Table of contents

  • Agents
  • Tutorials
  • Multi-Armed Bandits
  • Examples
  • Installation
  • Contributing
  • Releases
  • Principles
  • Contributors
  • Citation
  • Disclaimer

Agents

In TF-Agents, the core elements of RL algorithms are implemented as Agents. An agent has two main responsibilities: defining a Policy to interact with the Environment, and learning/training that Policy from collected experience.

Algorithms currently available under TF-Agents include DQN, DDQN, DDPG, TD3, REINFORCE, PPO, and SAC; see the agent directories under tf_agents/agents/ for the complete list.
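
As a minimal sketch of the Agent API (using the Cartpole environment from the DQN tutorial; the network size, optimizer, and learning rate below are illustrative, not tuned recommendations):

```python
import tensorflow as tf

from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network

# Load a Gym environment and wrap it so it produces TensorFlow tensors.
env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

# A Q-network maps observations to one Q-value per action.
q_net = q_network.QNetwork(
    env.observation_spec(), env.action_spec(), fc_layer_params=(100,))

# The agent bundles the policy together with the logic for training it.
agent = dqn_agent.DqnAgent(
    env.time_step_spec(),
    env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3))
agent.initialize()

# agent.policy is the greedy policy used for evaluation/deployment;
# agent.collect_policy is the exploratory policy used to gather experience.
time_step = env.reset()
action_step = agent.policy.action(time_step)
next_time_step = env.step(action_step.action)
```

The DQN tutorial walks through the same pieces in detail, including data collection and training.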

Tutorials

See docs/tutorials/ for tutorials on the major components provided.

Multi-Armed Bandits

The TF-Agents library contains a comprehensive Multi-Armed Bandits suite, including Bandits environments and agents. RL agents can also be used on Bandit environments. There is a tutorial in bandits_tutorial.ipynb and ready-to-run examples in tf_agents/bandits/agents/examples/v2.
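
A rough sketch of putting those pieces together, loosely following the tutorial (the context sampler, per-arm reward functions, and agent parameters below are illustrative assumptions and may differ from the tutorial's exact signatures):

```python
import numpy as np
import tensorflow as tf

from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe
from tf_agents.environments import tf_py_environment

batch_size = 2

# At every step the environment samples a batch of contexts (observations).
def context_sampling_fn():
  return np.random.uniform(-1.0, 1.0, size=(batch_size, 2)).astype(np.float32)

# One reward function per arm; each maps a single context to a scalar reward.
arm_params = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
reward_fns = [lambda ctx, p=p: float(np.dot(ctx, p)) for p in arm_params]

env = tf_py_environment.TFPyEnvironment(
    sspe.StationaryStochasticPyEnvironment(
        context_sampling_fn, reward_fns, batch_size=batch_size))

# LinUCB selects the arm with the highest upper confidence bound on reward.
agent = lin_ucb_agent.LinearUCBAgent(
    time_step_spec=env.time_step_spec(),
    action_spec=env.action_spec(),
    dtype=tf.float32)
```

From here, training follows the usual TF-Agents pattern: step the environment with agent.collect_policy and feed the resulting trajectories to agent.train; the bandits tutorial shows a complete loop.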

Examples

End-to-end examples that train agents can be found under each agent's directory; a compressed sketch of what such a script does is shown below.
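
Under stated assumptions (the Cartpole DQN setup from the Agents section; buffer size, batch size, and step counts are placeholders), such a script boils down to a collect-and-train loop:

```python
import tensorflow as tf

from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network
from tf_agents.replay_buffers import tf_uniform_replay_buffer

# Environment and agent, as in the sketch in the Agents section.
env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))
q_net = q_network.QNetwork(env.observation_spec(), env.action_spec())
agent = dqn_agent.DqnAgent(
    env.time_step_spec(), env.action_spec(), q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3))
agent.initialize()

# Replay buffer that stores the trajectories collected from the environment.
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=env.batch_size,
    max_length=10000)

# Driver: steps the environment with the collect policy and writes the
# resulting trajectories into the replay buffer via its observer.
driver = dynamic_step_driver.DynamicStepDriver(
    env, agent.collect_policy,
    observers=[replay_buffer.add_batch], num_steps=1)

# Seed the buffer before training starts.
for _ in range(100):
  driver.run()

# DQN trains on sampled mini-batches of 2-step trajectories.
dataset = replay_buffer.as_dataset(sample_batch_size=64, num_steps=2).prefetch(3)
iterator = iter(dataset)

for _ in range(1000):
  driver.run()                          # collect one more environment step
  experience, _ = next(iterator)        # sample a training mini-batch
  loss_info = agent.train(experience)   # one gradient update
```

The real example scripts add evaluation, checkpointing, and logging on top of this skeleton.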

Installation

TF-Agents publishes nightly and stable builds. For a list of releases read the Releases section. The commands below cover installing TF-Agents stable and nightly from pypi.org as well as from a GitHub clone.

⚠️ TF-Agents only works on Linux when using Reverb (replay buffer), which is very common.

Note: Python 3.11 requires pygame 2.1.3+.

Stable

Run the commands below to install the most recent stable release. API documentation for the release is on tensorflow.org.

$ pip install --user tf-agents[reverb]

# Use this tag to get the matching examples and colabs.
$ git clone https://github.com/tensorflow/agents.git
$ cd agents
$ git checkout v0.18.0

If you want to install TF-Agents with versions of TensorFlow or Reverb that are flagged as incompatible by the pip dependency check, use the following pattern at your own risk.

$ pip install --user tensorflow
$ pip install --user dm-reverb
$ pip install --user tf-agents

If you want to use TF-Agents with TensorFlow 1.15 or 2.0, install version 0.3.0:

# Newer versions of tensorflow-probability require newer versions of TensorFlow.
$ pip install tensorflow-probability==0.8.0
$ pip install tf-agents==0.3.0

Nightly

Nightly builds include newer features, but may be less stable than the versioned releases. The nightly build is pushed as tf-agents-nightly. We suggest installing the nightly versions of TensorFlow (tf-nightly) and TensorFlow Probability (tfp-nightly), as those are the versions TF-Agents nightly is tested against.

To install the nightly build version, run the following:

# `--force-reinstall` helps guarantee the right versions.
$ pip install --user --force-reinstall tf-nightly
$ pip install --user --force-reinstall tfp-nightly
$ pip install --user --force-reinstall dm-reverb-nightly

# Installing with the `--upgrade` flag ensures you'll get the latest version.
$ pip install --user --upgrade tf-agents-nightly

From GitHub

After cloning the repository, the dependencies can be installed by running pip install -e .[tests]. TensorFlow needs to be installed independently: pip install --user tf-nightly.

Contributing

We're eager to collaborate with you! See CONTRIBUTING.md for a guide on how to contribute. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

Releases

TF-Agents has stable and nightly releases. The nightly releases are often fine but can have issues due to upstream libraries being in flux. The table below lists the version(s) of TensorFlow that align with each TF-Agents release. Release versions of interest:

  • 0.18.0 dropped Python 3.8 support.
  • 0.16.0 is the first version to support Python 3.11.
  • 0.15.0 is the last release compatible with Python 3.7.
  • If using numpy < 1.19, then use TF-Agents 0.15.0 or earlier.
  • 0.9.0 is the last release compatible with Python 3.6.
  • 0.3.0 is the last release compatible with Python 2.x.
| Release | Branch / Tag | TensorFlow Version | dm-reverb Version |
| ------- | ------------ | ------------------ | ----------------- |
| Nightly | master       | tf-nightly         | dm-reverb-nightly |
| 0.18.0  | v0.18.0      | 2.14.0             | 0.13.0            |
| 0.17.0  | v0.17.0      | 2.13.0             | 0.12.0            |
| 0.16.0  | v0.16.0      | 2.12.0             | 0.11.0            |
| 0.15.0  | v0.15.0      | 2.11.0             | 0.10.0            |
| 0.14.0  | v0.14.0      | 2.10.0             | 0.9.0             |
| 0.13.0  | v0.13.0      | 2.9.0              | 0.8.0             |
| 0.12.0  | v0.12.0      | 2.8.0              | 0.7.0             |
| 0.11.0  | v0.11.0      | 2.7.0              | 0.6.0             |
| 0.10.0  | v0.10.0      | 2.6.0              |                   |
| 0.9.0   | v0.9.0       | 2.6.0              |                   |
| 0.8.0   | v0.8.0       | 2.5.0              |                   |
| 0.7.1   | v0.7.1       | 2.4.0              |                   |
| 0.6.0   | v0.6.0       | 2.3.0              |                   |
| 0.5.0   | v0.5.0       | 2.2.0              |                   |
| 0.4.0   | v0.4.0       | 2.1.0              |                   |
| 0.3.0   | v0.3.0       | 1.15.0 and 2.0.0   |                   |

Principles

This project adheres to Google's AI principles. By participating in, using, or contributing to this project, you are expected to adhere to these principles.

Contributors

We would like to recognize the following individuals for their code contributions, discussions, and other work to make the TF-Agents library better.

  • James Davidson
  • Ethan Holly
  • Toby Boyd
  • Summer Yue
  • Robert Ormandi
  • Kuang-Huei Lee
  • Alexa Greenberg
  • Amir Yazdanbakhsh
  • Yao Lu
  • Gaurav Jain
  • Christof Angermueller
  • Mark Daoust
  • Adam Wood

Citation

If you use this code, please cite it as:

@misc{TFAgents,
  title = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow},
  author = {Sergio Guadarrama and Anoop Korattikara and Oscar Ramirez and
     Pablo Castro and Ethan Holly and Sam Fishman and Ke Wang and
     Ekaterina Gonina and Neal Wu and Efi Kokiopoulou and Luciano Sbaiz and
     Jamie Smith and Gábor Bartók and Jesse Berent and Chris Harris and
     Vincent Vanhoucke and Eugene Brevdo},
  howpublished = {\url{https://github.com/tensorflow/agents}},
  url = "https://github.com/tensorflow/agents",
  year = 2018,
  note = "[Online; accessed 25-June-2019]"
}

Disclaimer

This is not an official Google product.