A3C distributed modes #340

Merged: 39 commits into v4-dev on May 18, 2019
Conversation

@kengz (Owner) commented May 18, 2019

A3C distributed modes

  • rework global_nets init to support different Hogwild distributed modes (see the first sketch after this list):
    • synced (CPU-only): periodically push grads to the global network and pull params from it back into the local network
    • shared: the global networks are always shared, so the global net overrides the local net. This works with GPU, is extremely fast, and appears to be more sample-efficient.
  • allow a global optimizer for distributed modes; add GlobalAdam and GlobalRMSProp optims (see the second sketch after this list)
  • add global net methods in net_utils to init global nets, set them on the algorithm, and push grads
  • update the specs accordingly
  • divide max_tick when distributed so the total max_tick stays the same
  • add a spec compatibility check
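
As a rough illustration of the two modes, here is a minimal PyTorch-style sketch, assuming the global and local nets have matching parameter order; the helper names are hypothetical, not the exact SLM-Lab implementation. In synced mode a worker pushes its local gradients to the global net and pulls the updated parameters back; in shared mode the global net's parameters live in shared memory, so workers update them directly.

```python
def push_grads_to_global(local_net, global_net):
    # synced mode: copy local gradients onto the global net so that a step of
    # the global optimizer updates the shared parameters
    for local_p, global_p in zip(local_net.parameters(), global_net.parameters()):
        if local_p.grad is None:
            continue
        if global_p.grad is None:
            global_p.grad = local_p.grad.clone()
        else:
            global_p.grad.copy_(local_p.grad)


def pull_params_from_global(local_net, global_net):
    # synced mode: refresh the local net with the latest global parameters
    local_net.load_state_dict(global_net.state_dict())


def make_shared_global_net(net):
    # shared mode: put the parameters in shared memory so every worker process
    # reads and writes the same underlying tensors (the global net effectively
    # overrides the local net)
    net.share_memory()
    return net
```

The global optimizer keeps its moment buffers in shared memory so that all Hogwild workers update the same optimizer state. Below is a minimal sketch in the spirit of GlobalAdam, assuming PyTorch; names and details are illustrative and may differ from the actual GlobalAdam/GlobalRMSProp implementations.

```python
import math
import torch


class SharedAdam(torch.optim.Adam):
    """Adam whose step counter and moment buffers live in shared memory,
    so every Hogwild worker reads and updates the same optimizer state."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        super().__init__(params, lr=lr, betas=betas, eps=eps)
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'] = torch.zeros(1)
                state['exp_avg'] = torch.zeros_like(p.data)
                state['exp_avg_sq'] = torch.zeros_like(p.data)
                # place the buffers in shared memory before forking workers
                state['step'].share_memory_()
                state['exp_avg'].share_memory_()
                state['exp_avg_sq'].share_memory_()

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            beta1, beta2 = group['betas']
            for p in group['params']:
                if p.grad is None:
                    continue
                state = self.state[p]
                state['step'] += 1
                t = state['step'].item()
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                # standard Adam moment updates, applied to the shared buffers
                exp_avg.mul_(beta1).add_(p.grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(p.grad, p.grad, value=1 - beta2)
                step_size = group['lr'] * math.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                p.addcdiv_(exp_avg, denom, value=-step_size)
```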

Network interface

  • simplify global net init at the algorithm level
  • rename and extend the network interface method from training_step(self, loss, optim, lr_scheduler, lr_clock=None) to train_step(self, loss, optim, lr_scheduler, lr_clock=None, global_net=None) (see the sketch after this list)
  • standardize the network naming convention: net attribute names must end with net; the optimizer and lr_scheduler names are net_name.replace('net', 'optim') and net_name.replace('net', 'lr_scheduler') respectively
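
To make the interface change concrete, here is a hedged sketch of what a train_step body can look like in synced mode, followed by the naming-convention derivation; self stands for the local net and the helpers are the hypothetical ones from the sketch above, not the exact SLM-Lab code.

```python
def train_step(self, loss, optim, lr_scheduler, lr_clock=None, global_net=None):
    optim.zero_grad()
    loss.backward()
    if global_net is not None:
        # synced mode: optim is assumed to be the global optimizer (e.g. a
        # GlobalAdam) over global_net's parameters
        push_grads_to_global(self, global_net)
    optim.step()
    if global_net is not None:
        # pull the freshly updated global parameters back into the local net
        pull_params_from_global(self, global_net)
    if lr_scheduler is not None:
        lr_scheduler.step()  # lr_clock-based gating of scheduler steps omitted
    return loss


# naming convention: a net attribute name must end with 'net'; the optimizer
# and scheduler attribute names are derived from it by string replacement
net_name = 'net'
optim_name = net_name.replace('net', 'optim')                # 'optim'
lr_scheduler_name = net_name.replace('net', 'lr_scheduler')  # 'lr_scheduler'
```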

kengz merged commit 7feab40 into v4-dev on May 18, 2019
kengz deleted the globalopt branch on May 18, 2019 at 19:25