In machine learning, reinforcement learning (RL) is a learning paradigm built on interaction between an agent and an environment. RL has recently been studied and applied extensively in control theory, where a classic problem is trajectory optimization, e.g., for spacecraft or rockets. In RL terms, the rocket is the agent, and its environment is outer space, e.g., the surface of the Moon. The environment is modeled as a Markov Decision Process (MDP): after the agent sends an action to the environment, it observes a new state and receives a reward. Actions are sampled from a policy distribution, which is learned over the course of training. One way to learn the policy is the REINFORCE algorithm, a policy-gradient method that maximizes the expected return using a Monte Carlo approximation: sampled episodes are used to estimate the gradient of the expected return, and that gradient is the quantity we use to update the policy distribution.
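The update at the heart of REINFORCE can be sketched in a few lines. The snippet below is a minimal illustration, not the post's actual code: a toy two-armed bandit stands in for the Lunar Lander environment, the policy is a softmax over two logits, and each episode is a single step, so the return is just the immediate reward. The parameter update `theta += alpha * G * grad_log_pi` is the Monte Carlo estimate of the policy gradient described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the environment (assumption: the post itself uses a
# Lunar Lander simulator). Arm 1 pays +1, arm 0 pays 0, so a good policy
# should learn to pick arm 1.
def step(action):
    return 1.0 if action == 1 else 0.0

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros(2)          # policy parameters (logits), one per action
alpha = 0.1                  # learning rate

for episode in range(500):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)   # sample an action from the policy
    G = step(a)                  # Monte Carlo return (one-step episode)

    # grad of log pi(a | theta) for a softmax policy: e_a - probs,
    # where e_a is the one-hot vector for the sampled action.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0

    # REINFORCE update: theta <- theta + alpha * G * grad log pi(a)
    theta += alpha * G * grad_log_pi

print(softmax(theta))  # probability mass should concentrate on arm 1
```

With more actions, longer episodes, and a neural network producing the logits, the same update (return times grad-log-probability of the sampled actions) is what trains the lander's controller.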
To see the rocket in action, please go to the following link.
Figure: reward curve throughout 6561 training episodes.
A qualitative result of the learned controller is shown below. The rocket hovers under the control of the policy learned with the REINFORCE algorithm and then lands successfully on the surface of the Moon.