[RLlib] Fix example never returning (ray-project#33140)
Signed-off-by: Artur Niederfahrenhorst <artur@anyscale.com>
Signed-off-by: Jack He <jackhe2345@gmail.com>
ArturNiederfahrenhorst authored and ProjectsByJackHe committed Mar 21, 2023
1 parent 33d414d commit 34c8d67
Showing 1 changed file with 2 additions and 2 deletions: doc/source/rllib/rllib-training.rst
@@ -29,7 +29,7 @@ You can train DQN with the following commands:

<div class="termynal" data-termynal>
<span data-ty="input">pip install "ray[rllib]" tensorflow</span>
-    <span data-ty="input">rllib train --algo DQN --env CartPole-v1</span>
+    <span data-ty="input">rllib train --algo DQN --env CartPole-v1 --stop '{"training_iteration": 30}'</span>
</div>

.. margin::
@@ -43,7 +43,7 @@ RLlib supports any Farama-Foundation Gymnasium environment, as well as a number
It also supports a large number of algorithms (see :ref:`rllib-algorithms-doc`) to
choose from.

-Running the above will return one of the `checkpoints` that get generated during training,
+Running the above will return, after 30 training iterations, one of the `checkpoints` generated during training,
as well as a command that you can use to evaluate the trained algorithm.
You can evaluate the trained algorithm with the following command (assuming the checkpoint path is called ``checkpoint``):
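The `--stop` flag added in this diff takes a JSON dict of stop criteria; conceptually, training loops until any criterion's threshold is reached. A minimal stdlib sketch of that stopping semantics (an illustration only, not RLlib's actual implementation; the loop body stands in for one training iteration):

```python
import json


def should_stop(criteria: dict, result: dict) -> bool:
    # Stop as soon as any criterion's metric meets or exceeds its threshold.
    return any(result.get(key, 0) >= threshold for key, threshold in criteria.items())


# The same JSON string passed on the command line via --stop.
criteria = json.loads('{"training_iteration": 30}')

iteration = 0
while not should_stop(criteria, {"training_iteration": iteration}):
    iteration += 1  # stand-in for one training iteration

print(iteration)  # 30
```

Without such a criterion the example command trains indefinitely, which is the "never returning" behavior this commit fixes.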

