
# ScaleRL

ScaleRL is a simple, scalable distributed reinforcement learning framework built on Python and PyTorch.

## Distributed RL Libraries

## Distributed RL Blogs

## Distributed Framework

[1] Massively Parallel Methods for Deep Reinforcement Learning (SGD, first distributed architecture, Gorilla DQN).

[2] Asynchronous Methods for Deep Reinforcement Learning (SGD, A3C).

[3] Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU (A3C on GPU).

[4] Efficient Parallel Methods for Deep Reinforcement Learning (Batched A2C, GPU).

[5] Evolution Strategies as a Scalable Alternative to Reinforcement Learning (ES).

[6] Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning (ES).

[7] RLlib: Abstractions for Distributed Reinforcement Learning (Library).

[8] Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes (Batched A3C).

[9] Distributed Prioritized Experience Replay (Ape-X, distributed replay buffer).

[10] IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures (CPU+GPU).

[11] Accelerated Methods for Deep Reinforcement Learning (Simulation Acceleration).

[12] GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning (Simulation Acceleration).

[13] DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames (DD-PPO).

[14] Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Sample Factory).

[15] SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference (SEED RL).
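Several of the architectures above (Ape-X [9], IMPALA [10], SEED RL [15]) share a common actor-learner pattern: many actor processes generate experience in parallel, while a central learner consumes that experience and updates the policy parameters. The sketch below illustrates that pattern with Python's standard `multiprocessing` module; all names and the dummy "environment step" are illustrative assumptions, not ScaleRL's actual API.

```python
import multiprocessing as mp


def actor(actor_id, param_q, experience_q, num_steps):
    """Actor: receives parameters from the learner and generates experience."""
    params = param_q.get()  # initial parameter broadcast from the learner
    for step in range(num_steps):
        # Placeholder "environment step": a real actor would run the policy
        # in an environment; here we just emit a dummy transition.
        transition = (actor_id, step, params)
        experience_q.put(transition)


def learner(experience_q, total_transitions):
    """Learner: consumes experience and (conceptually) updates parameters."""
    num_updates = 0
    for _ in range(total_transitions):
        _actor_id, _step, _params = experience_q.get()
        num_updates += 1  # stand-in for a gradient update on the policy
    return num_updates


def run(num_actors=2, steps_per_actor=3):
    experience_q = mp.Queue()
    param_qs = [mp.Queue() for _ in range(num_actors)]
    actors = [
        mp.Process(target=actor, args=(i, param_qs[i], experience_q, steps_per_actor))
        for i in range(num_actors)
    ]
    for q in param_qs:
        q.put(0)  # broadcast initial parameters to every actor
    for p in actors:
        p.start()
    updates = learner(experience_q, num_actors * steps_per_actor)
    for p in actors:
        p.join()
    return updates


if __name__ == "__main__":
    print(run())  # 2 actors x 3 steps = 6 consumed transitions
```

In a real system the parameter queues would carry network weights (e.g. a PyTorch `state_dict`), the experience queue would be a replay buffer (Ape-X) or a rollout queue (IMPALA), and SEED RL would instead move inference onto the learner so actors send only observations.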