License: MIT
A Godot project for a self-driving car game using Reinforcement Learning (NeuroEvolution of Augmenting Topologies)
Trained agent after approx. 30 minutes of training:
This project is meant as a demo showcasing how the Godot Engine can be used as a simulation environment for reinforcement learning. The training algorithm itself is implemented in Python, which allows using most openly available machine learning implementations (e.g. PyTorch, TensorFlow, etc.). At the same time, the easy-to-use Godot Engine makes it quick to build 2D as well as 3D simulation environments for RL interaction.
The communication is implemented in a very rudimentary way using TCP sockets. In this example, the Godot Engine acts as the server while the Python training code connects as a client. This design was chosen to easily enable parallel multi-agent training within a single simulation instance (as inspired by Samuel Arzt).
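Since the wire format is not documented here, the following is only a rough sketch of what one request/response step of such a Python-side TCP client could look like. The port number, the newline-terminated message framing, and the semicolon-separated observation fields are all assumptions for illustration, not taken from the actual project:

```python
# Hypothetical sketch of one client-side simulation step over TCP.
# Port, framing, and separators are assumptions, not the project's protocol.
import socket

HOST, PORT = "127.0.0.1", 42424  # assumed address of the Godot server

def encode_action(action_id):
    """Serialize an action id as a newline-terminated ASCII message."""
    return f"{action_id}\n".encode("ascii")

def decode_observation(raw):
    """Parse a semicolon-separated observation string into floats."""
    return [float(x) for x in raw.strip().split(";")]

def step(sock, action_id):
    """Send one action and block until the next observation arrives."""
    sock.sendall(encode_action(action_id))
    return decode_observation(sock.recv(4096).decode("ascii"))

if __name__ == "__main__":
    # Connect to the running Godot simulation acting as TCP server.
    with socket.create_connection((HOST, PORT)) as sock:
        print(step(sock, 0))
```

With the server side in Godot, this kind of blocking request/response loop keeps the simulation and the learner in lockstep, which is what makes running several agents in one simulation instance straightforward.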
The algorithm used for training the agent is called NEAT and is based on the wonderful open-source package neat-python from CodeReclaimers. No changes were made to the core algorithm itself.
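For orientation, a minimal training loop with neat-python typically looks like the sketch below. The fitness shaping (`lap_fitness`) and the placeholder episode results are assumptions for illustration, not the actual code of this repository; the imports are done lazily so the sketch loads even without neat-python installed:

```python
# Minimal sketch of a neat-python training loop (illustrative only).

def lap_fitness(distance, crashed):
    """Hypothetical fitness shaping (an assumption, not from this repo):
    reward distance covered, penalize crashing heavily."""
    return distance - (100.0 if crashed else 0.0)

def eval_genomes(genomes, config):
    """Assign a fitness to every genome of the current generation by
    letting its network drive one car in the simulation."""
    import neat  # pip install neat-python
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        # One episode per genome: feed sensor readings into the network
        # and steer the car with its outputs (simulation loop omitted).
        distance, crashed = 0.0, False  # placeholders for episode results
        genome.fitness = lap_fitness(distance, crashed)

def train(config_path, generations=50):
    """Evolve a population for a number of generations."""
    import neat
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         config_path)
    population = neat.Population(config)
    population.add_reporter(neat.StdOutReporter(True))
    return population.run(eval_genomes, generations)
```

The actual evaluation loop in this project additionally talks to Godot over TCP to run each genome's episode, but the overall structure follows this pattern.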
Tested with
Get the newest sources from GitHub:
$ git clone https://github.com/JonathanSchmalhofer/SelfDrivingRLCarGodot
To install all dependencies with Anaconda, run:
$ cd SelfDrivingRLCarGodot/python/conda
$ conda env create -f environment.yml
To verify that the environment was created successfully, run:
$ conda env list
One of the rows should list an environment named `godot-sl-car`.
To activate the conda environment, run:
$ conda activate godot-sl-car
$ echo $CONDA_DEFAULT_ENV # this should output godot-sl-car
- Import and open the project in Godot from the subfolder `SelfDrivingRLCarGodot`. No changes should be needed; you can directly hit the "Play" button to start the simulation server.
- Open a new terminal and activate the conda environment:
$ conda activate godot-sl-car
- Change to the directory of the `gym_godot_car` Python package:
$ cd python/gym_godot_car
- Run the script for training a feedforward network using NEAT:
$ python train_neat_feedforward.py
- Watch and enjoy!
Last seconds of the training process before the first agent manages to finish the course:
- With the screensaver activated, the TCP connection between Python and Godot might time out. I did not find a workaround for this and did not have the time to implement a more stable connection (e.g. using a ping message when a response was expected but not received).
- The entire setup should also work on Windows; however, I only tested it on Linux / Ubuntu 20.04.
- The Godot project should also work with most Godot 3+ versions; however, I only tested the latest version at the time of committing (i.e. 3.3.2).
- I always wanted to draw the current topology as in MarI/O, but found my Godot skills too shabby to do this properly in a reasonable time. I am open to suggestions on how to approach this, though.
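The ping-message idea mentioned in the first note above could be sketched roughly as follows on the Python side. This is an assumption of how it might look, not code from this repository; the `ping`/`pong` message contents are made up:

```python
# Hypothetical keep-alive check for the TCP link to Godot (not implemented
# in this repository). Message contents are assumptions.
import socket

PING = b"ping\n"
PONG = b"pong\n"

def check_alive(sock, timeout_s=2.0):
    """Send a ping and wait briefly for the expected pong reply.

    Returns False if the reply does not arrive within the timeout,
    which the caller could use to reconnect instead of hanging."""
    sock.settimeout(timeout_s)
    try:
        sock.sendall(PING)
        return sock.recv(16) == PONG
    except (socket.timeout, OSError):
        return False
```

The Godot side would then have to answer every ping with a pong in its network-polling loop; a failed check on the Python side could trigger a reconnect rather than an indefinite blocking wait.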
- The idea for this project was inspired by Samuel Arzt. Thank you very much for your contributions on YouTube.
- The entire NEAT algorithm implementation was borrowed from neat-python by CodeReclaimers and stripped down to a minimum for better understanding.
- One of the most entertaining and easy to understand (short) explanations of NEAT was provided by SethBling and his MarI/O project.
- The tileset used in Godot was taken from Kenney's Racing Pack published on OpenGameArt.org. Go check out Kenney.nl for more incredible free game assets.
- The car asset used in Godot was taken from sujit1717's/Unlucky Studio's free top-down car sprites published on OpenGameArt.org. Go check out their website and please support them for publishing free game art.
Liked some of my work? Buy me a coffee (or more likely a beer).