Distributed UAV RL Protocol

Open In Colab

An implementation of a distributed protocol for cooperative sensing and sending operations of Unmanned Aerial Vehicles (UAVs) [1]. It is built on top of TensorFlow Agents and uses reinforcement learning techniques (e.g. Deep Q-Learning, Actor-Critic) to compute ideal trajectories.
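The project itself trains deep agents with TF-Agents, but the core idea behind Deep Q-Learning is the Q-learning temporal-difference update. As a hedged, standalone illustration (a toy tabular example in plain Python, not code from this repository), a "UAV" on a 1-D line of five cells learns to move toward a goal cell:

```python
import random

# Toy tabular Q-learning sketch (illustrative only; the project uses
# TF-Agents with neural networks, not this code).
# A "UAV" moves on a 1-D line of 5 cells and is rewarded at cell 4.

ACTIONS = [-1, +1]            # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment transition: clamp to [0, 4], reward at the goal."""
    next_state = min(max(state + action, 0), 4)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != 4:
            # Epsilon-greedy action selection
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward = step(state, action)
            # Q-learning temporal-difference update
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (
                reward + GAMMA * best_next - q[(state, action)]
            )
            state = next_state
    return q

q = train()
# The learned greedy policy moves right toward the goal from every cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)}
print(policy)
```

Deep Q-Learning replaces the table `q` with a neural network and the per-step update with gradient descent on the same temporal-difference target, which is what TF-Agents handles in the actual project.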

Install dependencies

  • Download and install Miniconda
  • Run conda env create -f environment.yml
  • Run conda activate durp
  • That's it!

Run the simulation

We developed a simple GUI using the PyQt5 library to test the effect of different scenarios on the simulation. To run it, launch the simulator from the project directory with default arguments using:

./simulation.py

or to see the list of available options run:

./simulation.py --help
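For reference, `--help` output of this kind usually comes from an `argparse`-based entry point. The sketch below shows the general pattern; the option names here are hypothetical assumptions, not the project's actual flags (run `./simulation.py --help` for those):

```python
import argparse

def build_parser():
    """Sketch of an argparse CLI; option names are illustrative only."""
    parser = argparse.ArgumentParser(
        description="UAV RL simulator (hypothetical sketch)"
    )
    # These flags are assumptions for illustration, not the real options.
    parser.add_argument("--uavs", type=int, default=4,
                        help="number of UAVs in the simulation")
    parser.add_argument("--episodes", type=int, default=100,
                        help="number of training episodes")
    parser.add_argument("--gui", action="store_true",
                        help="launch the PyQt5 GUI")
    return parser

# Example: parse an explicit argument list instead of sys.argv.
args = build_parser().parse_args(["--uavs", "6", "--gui"])
print(f"Simulating {args.uavs} UAVs for {args.episodes} episodes")
```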

Simulator preview

Image credits

Icons made by Freepik, photo3idea_studio, and Roundicons from www.flaticon.com

References

[1] Jingzhi Hu, Hongliang Zhang, Lingyang Song, Robert Schober, and H. Vincent Poor (2020). Cooperative Internet of UAVs: Distributed Trajectory Design by Multi-Agent Deep Reinforcement Learning.
