Advanced Lane Detection project that includes advanced image processing to detect lanes irrespective of road texture, brightness, contrast, curves, etc. It uses image warping and a sliding-window approach to find and plot the lane lines.
The Flask application is deployed on Google Cloud Platform: https://lane-finding.appspot.com
├── /camera_cal
├── /static
├── /templates
├── /test_images
├── main.py
├── app.yaml
├── requirements.txt
└── README.md
- Compute the camera calibration matrix and distortion coefficients given a set of chessboard images (9x6).
- Apply a distortion correction to raw images.
- Use color transforms, gradients, etc., to create a thresholded binary image.
- Apply a perspective transform to rectify the binary image ("bird's-eye view") and get a warped image.
- Detect lane pixels and fit to find the lane boundary.
- Warp the detected lane boundaries back onto the original image.
- Flask==1.1.0
- gunicorn==19.6.0
- pandas==0.22.0
- numpy==1.11.2
- scipy==0.18.1
- scikit-learn>=0.18
- opencv-python==3.1.0.4
The first step in the pipeline is to undistort the camera images. Some images of a 9x6 chessboard are provided and are distorted; our task is to find the chessboard corners and plot them. For this, after loading the images we calibrate the camera. OpenCV functions like findChessboardCorners(), drawChessboardCorners() and calibrateCamera() help us do this.
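A minimal calibration sketch along these lines (the `camera_cal` folder comes from the repo layout above; the glob pattern and variable names are illustrative):

```python
import glob
import cv2
import numpy as np

# Prepare object points for a 9x6 chessboard: (0,0,0), (1,0,0), ..., (8,5,0)
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D corner points in the image plane

for fname in glob.glob('camera_cal/*.jpg'):  # calibration images (pattern assumed)
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners in the grayscale image
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if ret:
        objpoints.append(objp)
        imgpoints.append(corners)
        # Optionally visualise the detected corners
        cv2.drawChessboardCorners(img, (9, 6), corners, ret)

# Compute the camera matrix (mtx) and distortion coefficients (dist)
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```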
The uploaded images are initially undistorted using cv2.undistort(), which takes in an image along with the camera matrix and distortion coefficients and returns the undistorted one.
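For example, reusing the `mtx` and `dist` values computed in the calibration sketch above:

```python
# Undistort an uploaded frame using the calibration results
undistorted = cv2.undistort(img, mtx, dist, None, mtx)
```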
Detecting edges around trees or cars is okay, because those lines can mostly be filtered out by applying a mask to the image and essentially cropping out the area outside of the lane lines. What matters most is that we reliably detect different colors of lane lines under varying degrees of daylight and shadow, so that our self-driving car does not go blind in extreme daylight hours or under the shadow of a tree.
I performed gradient thresholding and color thresholding individually and then created a binary combination of these two images to map out where either the color or gradient thresholds were met, called combined_binary in the code.
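A sketch of how such a combination can be built, using a Sobel x-gradient threshold plus an HLS S-channel color threshold (the exact threshold ranges and channels used in main.py may differ):

```python
import cv2
import numpy as np

def combined_threshold(img, sx_thresh=(20, 100), s_thresh=(170, 255)):
    """Combine an x-gradient threshold with an HLS S-channel color threshold.
    Threshold ranges here are illustrative defaults, not the project's exact values."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Gradient threshold: Sobel in the x direction picks up near-vertical lane edges
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    scaled = np.uint8(255 * sobelx / np.max(sobelx))
    sx_binary = np.zeros_like(scaled)
    sx_binary[(scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])] = 1

    # Color threshold: the S channel of HLS is fairly robust to shadows and brightness
    s_channel = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]
    s_binary = np.zeros_like(s_channel)
    s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1

    # Keep pixels where either the gradient or the color threshold was met
    combined_binary = np.zeros_like(sx_binary)
    combined_binary[(sx_binary == 1) | (s_binary == 1)] = 1
    return combined_binary
```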
The perspective transform gives the bird's-eye view of the lane images: we want to look at the lanes from the top and get a clear picture of their curves. Implementing the perspective transform was the most interesting part for me. I used the values of src and dst shown below:
src = np.float32([[590,450],[687,450],[1100,720],[200,720]])
dst = np.float32([[300,0],[900,0],[900,720],[300,720]])
I also made a function warper(img, src, dst) which takes in the thresholded binary image and returns its perspective transform using cv2.getPerspectiveTransform(src, dst) and cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST), roughly as sketched below.
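A minimal version of that function, assuming img_size is taken from the input image's shape:

```python
def warper(img, src, dst):
    """Warp the thresholded binary image to a bird's-eye view."""
    img_size = (img.shape[1], img.shape[0])          # (width, height)
    M = cv2.getPerspectiveTransform(src, dst)        # perspective transform matrix
    warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST)
    return warped
```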