This repository has been archived by the owner on Dec 11, 2022. It is now read-only.

Commit

SAC algorithm (#282)
* SAC algorithm

* SAC - updates to the agent (learn_from_batch), sac_head, and sac_q_head to fix a problem in the gradient calculation; the SAC agent is now able to train (a sketch of the policy objective appears after these notes).
gym_environment - fixed an error in accessing gym.spaces

* Soft Actor Critic - code cleanup

* code cleanup

* V-head initialization fix

* SAC benchmarks

* SAC Documentation

* typo fix

* documentation fixes

* documentation and version update

* README typo
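The gradient-calculation fix mentioned above concerns the SAC policy objective. As orientation only, here is a minimal NumPy sketch, with illustrative names rather than the repository's TensorFlow graph, of the quantity whose gradient the sac_head and sac_q_head combine to produce: the expectation of alpha * log_pi(a|s) - Q(s, a) under a tanh-squashed, reparameterized Gaussian policy (as in the SAC paper).

```python
# Minimal, self-contained NumPy sketch of the SAC policy objective
# J_pi = E[ alpha * log_pi(a|s) - Q(s, a) ] with a reparameterized,
# tanh-squashed Gaussian action a = tanh(mu + sigma * eps).
# Illustrative only -- the repository implements this as a TensorFlow graph.
import numpy as np

def sac_actor_objective(mu, log_std, q_func, alpha=0.2, n_samples=64, seed=0):
    rng = np.random.default_rng(seed)
    std = np.exp(log_std)
    eps = rng.standard_normal((n_samples, mu.shape[-1]))
    pre_tanh = mu + std * eps             # reparameterized Gaussian sample
    action = np.tanh(pre_tanh)            # squash to bounded actions
    # log-prob of the squashed Gaussian (change-of-variables correction)
    log_prob = -0.5 * (((pre_tanh - mu) / std) ** 2
                       + 2.0 * log_std + np.log(2.0 * np.pi)).sum(axis=-1)
    log_prob -= np.log(1.0 - action ** 2 + 1e-6).sum(axis=-1)
    q = q_func(action)                    # Q(s, a) from the Q-head
    return np.mean(alpha * log_prob - q)  # minimized w.r.t. the policy parameters

# Toy usage: a quadratic stand-in "Q" that prefers actions near 0.5.
loss = sac_actor_objective(mu=np.zeros(2), log_std=-np.ones(2),
                           q_func=lambda a: -np.sum((a - 0.5) ** 2, axis=-1))
print(loss)
```

In the TensorFlow implementation the gradient of this objective flows from the Q-head through the reparameterized action back into the policy parameters; the commit notes above report a fix to a problem in that gradient calculation.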
guyk1971 authored and shadiendrawis committed May 1, 2019
1 parent 33dc29e commit 74db141
Showing 92 changed files with 2,813 additions and 403 deletions.
5 changes: 3 additions & 2 deletions README.md
@@ -25,11 +25,11 @@ coach -p CartPole_DQN -r
<img src="img/doom_health.gif" alt="Doom Health Gathering"/> <img src="img/minitaur.gif" alt="PyBullet Minitaur" width = "249" height ="200"/> <img src="img/ant.gif" alt="Gym Extensions Ant"/>
<br><br>

Blog posts from the Intel® AI website:
* [Release 0.8.0](https://ai.intel.com/reinforcement-learning-coach-intel/) (initial release)
* [Release 0.9.0](https://ai.intel.com/reinforcement-learning-coach-carla-qr-dqn/)
* [Release 0.10.0](https://ai.intel.com/introducing-reinforcement-learning-coach-0-10-0/)
* [Release 0.11.0](https://ai.intel.com/rl-coach-data-science-at-scale) (current release)
* [Release 0.11.0](https://ai.intel.com/rl-coach-data-science-at-scale)
* Release 0.12.0 (current release)

Contacting the Coach development team is also possible through the email [coach@intel.com](coach@intel.com)

@@ -277,6 +277,7 @@ dashboard
* [Clipped Proximal Policy Optimization (CPPO)](https://arxiv.org/pdf/1707.06347.pdf) | **Multi Worker Single Node** ([code](rl_coach/agents/clipped_ppo_agent.py))
* [Generalized Advantage Estimation (GAE)](https://arxiv.org/abs/1506.02438) ([code](rl_coach/agents/actor_critic_agent.py#L86))
* [Sample Efficient Actor-Critic with Experience Replay (ACER)](https://arxiv.org/abs/1611.01224) | **Multi Worker Single Node** ([code](rl_coach/agents/acer_agent.py))
* [Soft Actor-Critic (SAC)](https://arxiv.org/abs/1801.01290) ([code](rl_coach/agents/soft_actor_critic_agent.py))
### General Agents
* [Direct Future Prediction (DFP)](https://arxiv.org/abs/1611.01779) | **Multi Worker Single Node** ([code](rl_coach/agents/dfp_agent.py))
1 change: 1 addition & 0 deletions benchmarks/README.md
Expand Up @@ -37,6 +37,7 @@ The environments that were used for testing include:
|**[ACER](acer)** | ![#2E8B57](https://placehold.it/15/2E8B57/000000?text=+) |Atari | |
|**[Clipped PPO](clipped_ppo)** | ![#2E8B57](https://placehold.it/15/2E8B57/000000?text=+) |Mujoco | |
|**[DDPG](ddpg)** | ![#2E8B57](https://placehold.it/15/2E8B57/000000?text=+) |Mujoco | |
|**[SAC](sac)** | ![#2E8B57](https://placehold.it/15/2E8B57/000000?text=+) |Mujoco | |
|**[NEC](nec)** | ![#2E8B57](https://placehold.it/15/2E8B57/000000?text=+) |Atari | |
|**[HER](ddpg_her)** | ![#2E8B57](https://placehold.it/15/2E8B57/000000?text=+) |Fetch | |
|**[DFP](dfp)** | ![#ceffad](https://placehold.it/15/ceffad/000000?text=+) |Doom | Doom Battle was not verified |
2 changes: 1 addition & 1 deletion benchmarks/clipped_ppo/README.md
@@ -1,6 +1,6 @@
# Clipped PPO

Each experiment uses 3 seeds and is trained for 10k environment steps.
Each experiment uses 3 seeds and is trained for 10M environment steps.
The parameters used for Clipped PPO are the same parameters as described in the [original paper](https://arxiv.org/abs/1707.06347).

### Inverted Pendulum Clipped PPO - single worker
2 changes: 1 addition & 1 deletion benchmarks/ddpg/README.md
@@ -1,6 +1,6 @@
# DDPG

Each experiment uses 3 seeds and is trained for 2k environment steps.
Each experiment uses 3 seeds and is trained for 2M environment steps.
The parameters used for DDPG are the same parameters as described in the [original paper](https://arxiv.org/abs/1509.02971).

### Inverted Pendulum DDPG - single worker
48 changes: 48 additions & 0 deletions benchmarks/sac/README.md
@@ -0,0 +1,48 @@
# Soft Actor-Critic

Each experiment uses 3 seeds and is trained for 3M environment steps.
The parameters used for SAC are the same parameters as described in the [original paper](https://arxiv.org/abs/1801.01290).

### Inverted Pendulum SAC - single worker

```bash
coach -p Mujoco_SAC -lvl inverted_pendulum
```

<img src="inverted_pendulum_sac.png" alt="Inverted Pendulum SAC" width="800"/>


### Hopper SAC - single worker

```bash
coach -p Mujoco_SAC -lvl hopper
```

<img src="hopper_sac.png" alt="Hopper SAC" width="800"/>


### Half Cheetah SAC - single worker

```bash
coach -p Mujoco_SAC -lvl half_cheetah
```

<img src="half_cheetah_sac.png" alt="Half Cheetah SAC" width="800"/>


### Walker 2D SAC - single worker

```bash
coach -p Mujoco_SAC -lvl walker2d
```

<img src="walker2d_sac.png" alt="Walker 2D SAC" width="800"/>


### Humanoid SAC - single worker

```bash
coach -p Mujoco_SAC -lvl humanoid
```

<img src="humanoid_sac.png" alt="Humanoid SAC" width="800"/>
Binary file added benchmarks/sac/half_cheetah_sac.png
Binary file added benchmarks/sac/hopper_sac.png
Binary file added benchmarks/sac/humanoid_sac.png
Binary file added benchmarks/sac/inverted_pendulum_sac.png
Binary file added benchmarks/sac/walker2d_sac.png
Binary file modified docs/_images/algorithms.png
Binary file added docs/_images/sac.png
2 changes: 2 additions & 0 deletions docs/_modules/index.html
@@ -179,6 +179,7 @@ <h1>All modules for which code is available</h1>
<ul><li><a href="rl_coach/agents/acer_agent.html">rl_coach.agents.acer_agent</a></li>
<li><a href="rl_coach/agents/actor_critic_agent.html">rl_coach.agents.actor_critic_agent</a></li>
<li><a href="rl_coach/agents/agent.html">rl_coach.agents.agent</a></li>
<li><a href="rl_coach/agents/agent_interface.html">rl_coach.agents.agent_interface</a></li>
<li><a href="rl_coach/agents/bc_agent.html">rl_coach.agents.bc_agent</a></li>
<li><a href="rl_coach/agents/categorical_dqn_agent.html">rl_coach.agents.categorical_dqn_agent</a></li>
<li><a href="rl_coach/agents/cil_agent.html">rl_coach.agents.cil_agent</a></li>
@@ -195,6 +196,7 @@ <h1>All modules for which code is available</h1>
<li><a href="rl_coach/agents/ppo_agent.html">rl_coach.agents.ppo_agent</a></li>
<li><a href="rl_coach/agents/qr_dqn_agent.html">rl_coach.agents.qr_dqn_agent</a></li>
<li><a href="rl_coach/agents/rainbow_dqn_agent.html">rl_coach.agents.rainbow_dqn_agent</a></li>
<li><a href="rl_coach/agents/soft_actor_critic_agent.html">rl_coach.agents.soft_actor_critic_agent</a></li>
<li><a href="rl_coach/agents/value_optimization_agent.html">rl_coach.agents.value_optimization_agent</a></li>
<li><a href="rl_coach/architectures/architecture.html">rl_coach.architectures.architecture</a></li>
<li><a href="rl_coach/architectures/network_wrapper.html">rl_coach.architectures.network_wrapper</a></li>
2 changes: 1 addition & 1 deletion docs/_modules/rl_coach/agents/acer_agent.html
@@ -248,7 +248,7 @@ Source code for rl_coach.agents.acer_agent
        self.num_steps_between_gradient_updates = 5000
        self.ratio_of_replay = 4
        self.num_transitions_to_start_replay = 10000
        self.rate_for_copying_weights_to_target = 0.99
        self.rate_for_copying_weights_to_target = 0.01
        self.importance_weight_truncation = 10.0
        self.use_trust_region_optimization = True
        self.max_KL_divergence = 1.0
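For context on the corrected value above: in Coach, rate_for_copying_weights_to_target presumably acts as the soft (Polyak) target-network update coefficient, i.e. the fraction of the online weights copied into the target network at each update, so 0.01 makes the target trail the online network slowly rather than being overwritten almost entirely as with 0.99. A minimal NumPy sketch of that assumed update rule (illustrative names, not Coach's actual implementation):

```python
# Soft ("Polyak") target update sketch -- assumed semantics of
# rate_for_copying_weights_to_target, not the repository's actual op.
import numpy as np

def soft_update(target_weights, online_weights, rate=0.01):
    """Blend a small fraction `rate` of the online weights into the target."""
    return [(1.0 - rate) * t + rate * o
            for t, o in zip(target_weights, online_weights)]

# Toy usage: the target drifts toward the online weights over many updates.
target = [np.zeros((2, 2))]
online = [np.ones((2, 2))]
for _ in range(200):
    target = soft_update(target, online, rate=0.01)
print(target[0])
```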

