
Fast eval option #391

Merged: 28 commits from fast-eval into master on Jul 28, 2019
Conversation

kengz (Owner) commented on Jul 27, 2019

Faster evaluation alternative

  • introduce a TrackReward env wrapper as a simpler way to track total reward; this also works naturally with vector envs (a minimal sketch of such a wrapper follows this list)
  • retire the obsolete custom total_reward tracking logic
  • refactor the body.ckpt logic and env logic
  • add a backward-compatible meta.rigorous_eval: int spec to either keep the rigorous slow eval or use fast eval by inferring total_reward directly from the env (see the spec fragment after this list)

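To make the first bullet concrete, here is a minimal sketch of what a reward-tracking wrapper along these lines could look like, assuming the classic gym.Wrapper interface with a 4-tuple step(). The attribute name total_reward follows the description above, but the body of the code is illustrative, not the exact SLM Lab implementation.

```python
import gym
import numpy as np


class TrackReward(gym.Wrapper):
    '''Sketch of a reward-tracking wrapper (illustrative, not the exact SLM Lab code):
    accumulate the episode return inside the env so total_reward can be read
    directly off the env instead of via custom agent-side tracking logic.'''

    def __init__(self, env):
        super().__init__(env)
        self.total_reward = np.nan  # no completed episode yet

    def reset(self, **kwargs):
        self._running_reward = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        self._running_reward += reward
        if done:
            # expose the finished episode's return on the env itself
            self.total_reward = self._running_reward
        return state, reward, done, info
```

Since each worker env in a vector env stack can carry its own copy of such a wrapper, per-env episode returns come along automatically, which is consistent with the note that this works naturally with vec envs.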
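For the last bullet, a hypothetical spec fragment showing where meta.rigorous_eval would sit. It is written as a Python dict for illustration (SLM Lab specs are JSON files), and the assumption that 0 selects the fast path is an inference from the description, not something stated on this page.

```python
# Hypothetical spec fragment, shown as a Python dict (SLM Lab specs are JSON files).
# Assumption: rigorous_eval = 0 selects the fast eval path that infers
# total_reward directly from the env; a positive int keeps the rigorous slow eval.
spec = {
    'meta': {
        'rigorous_eval': 0,
    },
}
```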
Review threads (all resolved):

  • slm_lab/agent/algorithm/sarsa.py
  • slm_lab/agent/algorithm/actor_critic.py
  • slm_lab/agent/algorithm/dqn.py
  • slm_lab/agent/algorithm/reinforce.py
  • slm_lab/agent/algorithm/sil.py
kengz merged commit ebfb639 into master on Jul 28, 2019
kengz deleted the fast-eval branch on Jul 28, 2019 at 02:07