Udacity Self-Driving Car Nanodegree - Capstone Project

The code in this repo is our submission for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Our group, the "Overtakers", consists of the following members:

Name Email Address
Martin Kan martinkan@gmail.com
Batururimi Ezistas s.batururimi@gmail.com
Shuang Gao rebecca.shuang@gmail.com
Yue Jiang maze1024@gmail.com
Yocheved Weill yocheved.ovits@gmail.com

Project Overview

The structure of the project code is shown below:

Project Structure (diagram)

The Udacity team provided helpful walkthroughs covering most of the core functions of the self-driving car, so the majority of our codebase aligns closely with the demo code shown in those walkthroughs. Our main contributions to the project code are as follows:

  • implementing a classifier, in tl_classifier.py and tl_detector.py, that uses frozen graphs pre-trained on camera images from the simulator and from the Carla test track (see how we derived the frozen graphs in our traffic light classifier repo here) to identify the state of traffic lights observed in the car's video stream
  • revising the waypoint_follower code in pure_pursuit_core.h to lower the threshold test for updating the twist commands, which reduces the car's side-to-side swerving when following the waypoints
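To make the classifier bullet concrete, here is a minimal sketch of the second stage of that pipeline: turning the detector's raw output into a traffic light state. The label mapping (1 = green, 2 = red, 3 = yellow), the score threshold, and the function name are illustrative assumptions; the state constants follow the styx_msgs/TrafficLight convention, and the real logic lives in tl_classifier.py.

```python
# TrafficLight state constants (styx_msgs/TrafficLight convention)
RED, YELLOW, GREEN, UNKNOWN = 0, 1, 2, 4

def classify_from_detections(classes, scores, score_threshold=0.5):
    """Pick a traffic light state from frozen-graph detector output.

    `classes` and `scores` are parallel lists, as produced by a
    TensorFlow Object Detection frozen graph. The class-id mapping
    below is a hypothetical example, not the repo's actual labels.
    """
    label_to_state = {1: GREEN, 2: RED, 3: YELLOW}
    best_state = UNKNOWN
    best_score = score_threshold
    for cls, score in zip(classes, scores):
        # keep only the highest-scoring detection above the threshold
        if score > best_score and cls in label_to_state:
            best_score = score
            best_state = label_to_state[cls]
    return best_state
```

The detector may report several boxes per frame; keeping only the highest-scoring detection above the threshold makes the published state robust to low-confidence spurious boxes.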
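The waypoint_follower change can be illustrated with a simplified sketch of the threshold test: pure pursuit reuses the previous twist command while the car stays close to the target path, and only recomputes it once the car strays past a displacement or heading-error threshold. Lowering those thresholds makes the follower correct course earlier, which is what reduces the swerving. The function name and threshold values below are hypothetical, not the actual code in pure_pursuit_core.h.

```python
def should_update_twist(displacement, relative_angle,
                        displacement_threshold=0.2,
                        angle_threshold=0.2):
    """Return True when the twist command should be recomputed.

    While both the cross-track displacement and the heading error
    stay under their thresholds, the previous command is reused;
    past either threshold a new command is issued. Smaller
    thresholds mean earlier, gentler corrections. (Values are
    illustrative, not those used in pure_pursuit_core.h.)
    """
    return (abs(displacement) > displacement_threshold
            or abs(relative_angle) > angle_threshold)
```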

Setup instructions

Please use one of the two installation options below: native or Docker.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a virtual machine to install Ubuntu, use the following configuration as a minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.

  • Follow these instructions to install ROS

  • Install Dataspeed DBW

  • Download the Udacity Simulator.

Docker Installation (CPU)

Install Docker

Build the docker image

docker build --rm . -t capstone

Run the docker container

docker run --rm -it -p 4567:4567  -v "/$(pwd)":/capstone -v /tmp/log:/root/.ros/ capstone

Docker Installation With Nvidia GPU

Install nvidia-docker

Build the docker image

docker build --rm . -f GPU.dockerfile -t capstone-gpu

Run the docker container

docker run --runtime=nvidia --rm -it -p 4567:4567  -v "/$(pwd)":/capstone -v /tmp/log:/root/.ros/ capstone-gpu

Run Sim

source "/opt/ros/$ROS_DISTRO/setup.bash"

For example

source /opt/ros/kinetic/setup.bash
catkin_make
source devel/setup.bash
roslaunch launch/styx.launch

NB You may need to update the cryptography packages:

apt-get --auto-remove --yes remove python-openssl
pip install pyopenssl
apt-get install ros-kinetic-cv-bridge
apt-get install ros-kinetic-pcl-ros

Port Forwarding

To set up port forwarding, please refer to the instructions from term 2

Usage

  1. Clone the project repository
git clone https://github.com/overtakers/programming_real_self_driving_car.git
  2. Install the Python dependencies
cd programming_real_self_driving_car
pip install -r requirements.txt
pip install Pillow --upgrade # necessary to fix the camera image problem
  3. Update the ROS environment
rosdep update # yes, even though this is already in the Dockerfile
  4. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  5. Run the simulator

Simulator dataset

You can extract simulator images by uncommenting the following code in the image_cb callback in ros/src/tl_detector/tl_detector.py:

if self.dbw_enabled:
    # create the directory to save to if it does not already exist
    if not os.path.exists(SIMULATOR_DIR):
        os.makedirs(SIMULATOR_DIR)

    cv_image = self.bridge.imgmsg_to_cv2(self.camera_image, "bgr8")
    filename = os.path.join(SIMULATOR_DIR, "{}.png".format(str(uuid.uuid4())))
    cv2.imwrite(filename, cv_image)

Some already extracted images can be found here

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  4. Launch your project in site mode
cd programming_real_self_driving_car/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real-life images
