This project is an end-to-end application of reinforcement learning for collision avoidance in autonomous vehicles using the CARLA simulator (version 0.9.12). It uses the Proximal Policy Optimization (PPO) algorithm to teach a virtual car to avoid collisions at several speeds. The approach is end-to-end: the input is an RGB camera image, and the outputs are control values for acceleration and steering.
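To illustrate the end-to-end idea, here is a minimal sketch of how a policy's continuous action vector might be mapped to CARLA-style vehicle controls. The function name and the throttle/brake split are assumptions for illustration, not code taken from this repository.

```python
# Hypothetical sketch: map a 2-D policy action in [-1, 1]^2 to
# (throttle, brake, steer), as CARLA's VehicleControl expects
# throttle and brake in [0, 1] and steer in [-1, 1].

def action_to_control(action):
    """Convert (acceleration, steering) in [-1, 1] to vehicle controls."""
    accel, steer = action
    accel = max(-1.0, min(1.0, accel))
    steer = max(-1.0, min(1.0, steer))
    # Positive acceleration drives the throttle, negative drives the brake.
    throttle = accel if accel > 0 else 0.0
    brake = -accel if accel < 0 else 0.0
    return throttle, brake, steer

# action_to_control((0.5, -0.2)) returns (0.5, 0.0, -0.2)
```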
The reinforcement learning model used in this project is the recurrent version of the Proximal Policy Optimization (PPO) algorithm, implemented via the Stable Baselines3 ecosystem (the RecurrentPPO class from sb3-contrib). The recurrent policy lets the model maintain a memory of past observations, improving decision-making in environments where optimal actions depend on sequences of events rather than a single frame.
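At the heart of PPO is the clipped surrogate objective, which limits how far each update can move the policy. Below is a minimal sketch of that objective for a single sample; the epsilon value and the sample ratios are illustrative.

```python
# PPO clipped surrogate objective for one (state, action) sample:
# L^CLIP = min(r * A, clip(r, 1 - eps, 1 + eps) * A),
# where r is the probability ratio pi_new / pi_old and A the advantage.

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Return the clipped surrogate objective for a single sample."""
    clipped_ratio = max(1.0 - eps, min(1.0 + eps, ratio))
    # Taking the min keeps the update conservative in both directions.
    return min(ratio * advantage, clipped_ratio * advantage)

# With a positive advantage, gains are capped at (1 + eps) * A:
# ppo_clip_objective(1.5, 1.0) returns 1.2
```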
For more information on Stable Baselines3, see its [official documentation](https://stable-baselines3.readthedocs.io/).
The model was trained for 80,000 steps at a speed of 90 km/h. This initial training was enough for the model to learn effective collision avoidance strategies, though further training could improve its performance under a wider variety of conditions.
Below are the results showcasing the model's ability to avoid collisions:
- Reward vs. Step Graph: The following graph displays the model's learning progress over 80,000 steps. It highlights the improvement in the model's ability to avoid collisions as training progresses.
- Model in Action: Here is a GIF demonstrating the model successfully avoiding collisions at 90 km/h in the CARLA simulator.
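The reward curve above reflects how the agent is scored each step. As a rough illustration of the kind of reward shaping typically used for this task, here is a hypothetical reward function; the penalty magnitude and speed-tracking term are assumptions, not this project's actual reward.

```python
# Hypothetical per-step reward for collision avoidance at a target speed:
# a large terminal penalty on collision, otherwise a bonus for staying
# close to the target speed.

def step_reward(collided, speed_kmh, target_kmh=90.0):
    """Return the reward for one environment step."""
    if collided:
        return -200.0  # terminal penalty: episode ends on collision
    # Reward in (-inf, 1.0], maximized when speed matches the target.
    return 1.0 - abs(speed_kmh - target_kmh) / target_kmh
```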
- CARLA Simulator (version 0.9.12)
- Stable Baselines3
- Other dependencies (requirements.txt)
To train the model, run train.py, which supports adjustable parameters via command-line arguments (see the argparser in the script for details). Some settings may also need direct adjustment in World.py; in particular, self.distance_parked should be set large enough for the ego car to reach the desired speed.
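As a sketch of what the adjustable parameters might look like, here is a hypothetical argparse setup; the flag names below are assumptions, so check the argparser in train.py for the actual options.

```python
# Hypothetical sketch of a training-script argument parser; the flag
# names and defaults are illustrative, not train.py's real options.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Train PPO in CARLA")
    parser.add_argument("--total-timesteps", type=int, default=80_000,
                        help="number of environment steps to train for")
    parser.add_argument("--target-speed", type=float, default=90.0,
                        help="ego vehicle target speed in km/h")
    return parser

# Example: override the target speed from the command line.
args = build_parser().parse_args(["--target-speed", "60"])
```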
The PPO model trained on CARLA 0.9.12 for collision avoidance at 90 km/h shows promising results. While the initial training of 80,000 steps demonstrates the model's capability to learn effective avoidance strategies, further training could improve its performance.
I welcome contributions and suggestions to improve this project. Feel free to fork the repository, submit pull requests, or open issues to discuss potential improvements. I plan to train this model in dynamic scenarios as well. I am also working on a hybrid approach that uses reinforcement learning as a path planner and an MPC controller to control the ego car; I will publish this approach soon.