Sardhendu/DeepRL

Implemented Algorithms:

Project 1: Navigation Link

Train an agent to navigate (and collect bananas!) in a large, square world.

Trained Agent

  • Task: Episodic
  • Reward: +1 for collecting a yellow banana, -1 for collecting a blue banana.
  • State space:
    • Vector Environment: 37 dimensions that include the agent's velocity, along with ray-based perception of objects around the agent's forward direction.
    • Visual Environment: (84, 84, 3), where 84x84 is the image size and 3 is the number of color channels.
  • Action space (a minimal interaction sketch follows this list):
    • 0 - move forward.
    • 1 - move backward.
    • 2 - turn left.
    • 3 - turn right.
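
For reference, here is a minimal sketch of one episode of interaction with the Vector (Banana) environment using the unityagents API that the Udacity environments ship with. The file name Banana.app is a placeholder for your platform's build, and the random action merely stands in for a trained policy.

   from unityagents import UnityEnvironment
   import numpy as np

   env = UnityEnvironment(file_name="Banana.app")   # placeholder path; use your platform's build
   brain_name = env.brain_names[0]
   brain = env.brains[brain_name]

   env_info = env.reset(train_mode=False)[brain_name]
   state = env_info.vector_observations[0]          # 37-dimensional state vector
   score = 0
   while True:
       action = np.random.randint(brain.vector_action_space_size)  # 0-3, random stand-in for a policy
       env_info = env.step(action)[brain_name]
       state = env_info.vector_observations[0]
       score += env_info.rewards[0]
       if env_info.local_dones[0]:
           break
   env.close()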

Project 2: Continuous Control Link

Train an agent (a double-jointed arm) to maintain its position at the target location for as many time steps as possible.

Trained Agent

  • Task: Continuous
  • Reward: +0.1 for each step that the agent's hand is in the goal location
  • State space:
    • Single Agent Environment: (1, 33)
      • 33 dimensions consisting of position, rotation, velocity, and angular velocities of the arm.
      • 1 agent
    • Multi Agent Environment: (20, 33)
      • 33 dimensions consisting of position, rotation, velocity, and angular velocities of the arm.
      • 20 agents
  • Action space: Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector should be a number between -1 and 1 (see the clipping sketch below).
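
As a hedged illustration of that constraint, the sketch below builds a (20, 4) action matrix for the multi-agent build and clips every entry to [-1, 1]; the random sample only stands in for a policy's output.

   import numpy as np

   num_agents, action_size = 20, 4                      # multi-agent Reacher build
   actions = np.random.randn(num_agents, action_size)   # stand-in for a policy's output
   actions = np.clip(actions, -1, 1)                    # every torque entry must lie in [-1, 1]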

Trained Agent

Project 3: Collaboration and Competition Link

Train two agents to play ping pong, where the goal of each agent is to keep the ball in play.

Trained Agent

  • Task: Continuous
  • Reward: +0.1 for hitting the ball over the net, -0.01 if the ball hits the ground or goes out of bounds
  • State space:
    • Multi Agent Environment: (2, 24)
      • 24 dimensions consisting of position, rotation, velocity, etc.
  • Action space: Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
  • Target Score: The environment is considered solved when the average of these scores over 100 consecutive episodes is at least +0.5 (see the sketch after this list).
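
A small sketch of that solved check, assuming (as in the Udacity project spec) that an episode's score is the maximum of the two agents' undiscounted returns:

   import numpy as np
   from collections import deque

   scores_window = deque(maxlen=100)          # rolling window of the last 100 episode scores

   def record_episode(agent_returns):
       """agent_returns: the two agents' undiscounted returns for one episode."""
       scores_window.append(np.max(agent_returns))   # episode score = max over agents (assumed convention)
       return len(scores_window) == 100 and np.mean(scores_window) >= 0.5

   # usage inside a training loop; the returns here are placeholders
   solved = record_episode([0.10, 0.09])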

TODO:

  1. Modular code for each environment.
  2. Dueling Network Architectures with DQN
  3. Lambda return for REINFORCE (n-step bootstrap)
  4. Apply prioritized experience replay to all the environments and compare results while maintaining the modularity of the code.
  5. Actor-Critic
  6. Crawler for Continuous Control
  7. Add TensorFlow graphs instead of manual dictionary graphs for all environments.
  8. Continuous Control test phase.
  9. Parallel environments and how efficient the weight sharing is.

To install with Docker

Create a Docker image:

   docker build --tag deep_rl .

Run the image, exposing Jupyter Notebook on port 8888 and mounting the working directory:

   docker run -it -p 8888:8888 -v /path/to/your/local/workspace:/workspace/DeepRL --name deep_rl deep_rl

Start Jupyter Notebook:

   jupyter notebook --no-browser --allow-root --ip 0.0.0.0
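
When it starts, Jupyter prints a URL containing an access token; open http://localhost:8888 in a browser on the host and paste that token when prompted.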