1 - Algorithm Details.
2 - YOLO Object Detection.
3 - YOLO5S Detection Setup.
4 - Dependencies and Compiling.
The code for the camera calibration step is contained in calibration.cpp (see Here).
Start by preparing "object points", which are the (x, y, z) coordinates of the chessboard corners in the world. Here we assume the chessboard is fixed on the (x, y) plane at z=0, so the object points are the same for every calibration image. Thus, objp is just a replicated array of coordinates, and a copy of it is appended to objpoints every time all chessboard corners are successfully detected in a test image. imgpoints is appended with the (x, y) pixel positions of the corners in the image plane for each successful chessboard detection. The resulting objpoints and imgpoints are then used to compute the camera calibration matrix and distortion coefficients with the calibrateCamera() function.
Example:
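A minimal C++ sketch of this calibration step, assuming a 9x6 chessboard and a set of already-loaded calibration images; the board size and the helper name calibrateFromChessboards are illustrative and not taken from calibration.cpp.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Build objpoints/imgpoints from chessboard images and run calibrateCamera().
void calibrateFromChessboards(const std::vector<cv::Mat>& calibImages,
                              cv::Mat& cameraMatrix, cv::Mat& distCoeffs) {
    const cv::Size boardSize(9, 6);                    // inner corners per row/column (assumed)
    std::vector<std::vector<cv::Point3f>> objpoints;   // 3D points in world space
    std::vector<std::vector<cv::Point2f>> imgpoints;   // 2D points in the image plane

    // Object points are the same for every image: (0,0,0), (1,0,0), ..., (8,5,0)
    std::vector<cv::Point3f> objp;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            objp.emplace_back(static_cast<float>(x), static_cast<float>(y), 0.0f);

    for (const cv::Mat& img : calibImages) {
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            objpoints.push_back(objp);       // replicated world coordinates
            imgpoints.push_back(corners);    // detected pixel coordinates
        }
    }

    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objpoints, imgpoints, calibImages.front().size(),
                        cameraMatrix, distCoeffs, rvecs, tvecs);
}
```

The resulting cameraMatrix and distCoeffs can then be passed to cv::undistort() for Step 1 below.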
First, resize the image, warp the frame to a bird's-eye view, and then use a combination of color and gradient thresholds to generate a binary image.
Step 1: Undistort Image.
Step 2: Binary Image.
Step 3: Take a histogram along all the columns in the lower half of the image and split the histogram into two sides, one for each lane.
Step 4: Use the two highest peaks from the histogram as a starting point for determining where the lane lines are, then use sliding windows moving upward in the image to determine where the lane lines go.
Example:
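A rough sketch of Steps 2-4 (bird's-eye warp, thresholding, and the histogram split), assuming the perspective matrix has already been computed with cv::getPerspectiveTransform(); the single saturation threshold is a simple stand-in for the repository's combined color and gradient thresholds, and the helper name is illustrative.

```cpp
#include <opencv2/opencv.hpp>

// Warp to a bird's-eye view, threshold to a binary image, and locate the
// starting x position of each lane from the column histogram of the lower half.
void findLaneBases(const cv::Mat& undistorted, const cv::Mat& perspectiveM,
                   int& leftBaseX, int& rightBaseX, cv::Mat& binaryWarped) {
    // Bird's-eye view.
    cv::Mat warped;
    cv::warpPerspective(undistorted, warped, perspectiveM, undistorted.size());

    // Stand-in threshold: saturation channel of HLS (the real pipeline
    // combines several color and gradient thresholds).
    cv::Mat hls, channels[3];
    cv::cvtColor(warped, hls, cv::COLOR_BGR2HLS);
    cv::split(hls, channels);
    cv::threshold(channels[2], binaryWarped, 120, 255, cv::THRESH_BINARY);

    // Histogram over the columns of the lower half of the binary image.
    cv::Mat lowerHalf = binaryWarped(cv::Range(binaryWarped.rows / 2, binaryWarped.rows),
                                     cv::Range::all());
    cv::Mat hist;
    cv::reduce(lowerHalf, hist, 0, cv::REDUCE_SUM, CV_32F);  // 1 x cols row vector

    // Split the histogram at the midpoint; the peak on each side is the
    // starting x position for that lane's sliding-window search.
    int mid = hist.cols / 2;
    cv::Point leftPeak, rightPeak;
    cv::minMaxLoc(hist.colRange(0, mid), nullptr, nullptr, nullptr, &leftPeak);
    cv::minMaxLoc(hist.colRange(mid, hist.cols), nullptr, nullptr, nullptr, &rightPeak);
    leftBaseX = leftPeak.x;
    rightBaseX = mid + rightPeak.x;
}
```

leftBaseX and rightBaseX give the starting columns for the left and right sliding-window searches described in Step 4.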
Step 5: Identify lane-line pixels.
Find all nonzero pixels in the binary image.
Example:
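A small sketch of Step 5, collecting the nonzero pixels that fall inside one sliding window; the helper name and the use of cv::findNonZero() are illustrative, not necessarily how the repository gathers the pixels.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Collect the nonzero (lane) pixels inside one sliding-window rectangle,
// returned in full-image coordinates.
std::vector<cv::Point> pixelsInWindow(const cv::Mat& binaryWarped, const cv::Rect& window) {
    std::vector<cv::Point> pts;
    cv::findNonZero(binaryWarped(window), pts);  // coordinates relative to the window ROI
    for (cv::Point& p : pts) {                   // shift back to full-image coordinates
        p.x += window.x;
        p.y += window.y;
    }
    return pts;
}
```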
Step 6: Fit their positions with a polynomial.
After fitting a 2nd-order polynomial to the nonzero pixels, draw the polyline and unwarp the image to get the final output.
Example:
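A hedged sketch of the 2nd-order fit x = a*y^2 + b*y + c, posed as a least-squares problem and solved with Eigen (one of the project's dependencies); whether the repository uses Eigen or another routine for this fit is an assumption.

```cpp
#include <Eigen/Dense>
#include <opencv2/core.hpp>
#include <vector>

// Least-squares fit of x = a*y^2 + b*y + c to the lane pixels found in Step 5.
// Returns the coefficients [a, b, c].
Eigen::Vector3d fitLanePolynomial(const std::vector<cv::Point>& pts) {
    Eigen::MatrixXd A(pts.size(), 3);
    Eigen::VectorXd x(pts.size());
    for (size_t i = 0; i < pts.size(); ++i) {
        double y = pts[i].y;
        A(i, 0) = y * y;   // a * y^2
        A(i, 1) = y;       // b * y
        A(i, 2) = 1.0;     // c
        x(i) = pts[i].x;
    }
    // Solve A * [a b c]^T ≈ x in the least-squares sense.
    return A.colPivHouseholderQr().solve(x);
}
```

The fitted coefficients can then be sampled over y to draw the lane polyline, and the result warped back with the inverse perspective matrix as described above.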
Get the left and right lane coordinates, calculate the midpoint of the lanes, and use the image center as the reference to calculate the distance away from center (a sketch of this arithmetic follows the list of functions below).
1- laneyolo::get_centerdistance() – Distance of Vehicle from center.
2- laneyolo::get_leftcurvature() – Left Lane Curvature.
3- laneyolo::get_rightcurvature() – Right Lane Curvature.
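An illustrative sketch of the arithmetic behind these three values, not the repository's exact implementation; the meters-per-pixel factor is a typical default, not a value taken from the code.

```cpp
#include <Eigen/Dense>
#include <cmath>

// Each lane is x = a*y^2 + b*y + c; both fits are evaluated at the bottom of
// the image and the lane midpoint is compared against the image center.
double centerDistanceMeters(const Eigen::Vector3d& left, const Eigen::Vector3d& right,
                            int imageWidth, int imageHeight,
                            double metersPerPixelX = 3.7 / 700.0) {  // assumed scale
    double y = imageHeight - 1;
    double leftX  = left(0) * y * y + left(1) * y + left(2);
    double rightX = right(0) * y * y + right(1) * y + right(2);
    double laneMid = 0.5 * (leftX + rightX);
    // Positive result means the camera (vehicle) sits to the right of the lane center.
    return (imageWidth / 2.0 - laneMid) * metersPerPixelX;
}

// Radius of curvature of x = a*y^2 + b*y + c at a given y.
double curvatureRadius(const Eigen::Vector3d& fit, double y) {
    double dxdy = 2.0 * fit(0) * y + fit(1);
    return std::pow(1.0 + dxdy * dxdy, 1.5) / std::fabs(2.0 * fit(0));
}
```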
The PyTorch model is retrained on a partial COCO dataset for 24 classes. More information Here.
The details of the trained classes can be found Here. The implementation is parallel: preprocessing runs on one thread, detection and post-processing on a second, and lane detection on a third.
Example:
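A minimal libtorch sketch of loading the included best.torchscript.pt and running one forward pass. The 640x640 resize, BGR-to-RGB conversion, [0, 1] normalization, and tuple handling follow the usual YOLOv5 export conventions and are assumptions here; the three-thread layout described above is omitted for brevity.

```cpp
#include <torch/script.h>
#include <opencv2/opencv.hpp>

// Run one detection pass on a BGR frame with a TorchScript YOLO model.
torch::Tensor detect(torch::jit::script::Module& model, const cv::Mat& frame) {
    cv::Mat rgb, resized;
    cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);
    cv::resize(rgb, resized, cv::Size(640, 640));

    // HWC uint8 image -> NCHW float tensor in [0, 1]
    torch::Tensor input = torch::from_blob(resized.data, {1, 640, 640, 3}, torch::kUInt8)
                              .permute({0, 3, 1, 2})
                              .to(torch::kFloat32)
                              .div(255.0);

    torch::NoGradGuard noGrad;
    torch::jit::IValue out = model.forward({input});
    // Depending on how the model was exported, the output is either a tuple
    // whose first element holds the raw detections, or the tensor itself.
    // Confidence and IOU thresholding happen in post-processing.
    return out.isTuple() ? out.toTuple()->elements()[0].toTensor()
                         : out.toTensor();
}

int main() {
    torch::jit::script::Module model = torch::jit::load("best.torchscript.pt");
    model.eval();
    cv::Mat frame = cv::imread("test.jpg");
    torch::Tensor dets = detect(model, frame);
}
```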
1- laneyolo::set_batchsize(1) sets the batch size for detection. The higher the batch size, the higher the FPS. Default is 1.
2- laneyolo::set_confthres(0.6) sets the confidence threshold for detection. Default is 0.6.
3- laneyolo::set_iouthres(0.4) sets the IOU threshold for detection. Default is 0.4. (A hedged usage sketch follows this list.)
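A hypothetical usage snippet for the setters above; how a laneyolo detector is constructed, and whether these are member functions of an object or free functions in a laneyolo namespace, is not specified in this section, so the construction line is only a placeholder.

```cpp
laneyolo detector;            // placeholder construction, assumed for illustration
detector.set_batchsize(1);    // batch size for detection (default 1)
detector.set_confthres(0.6);  // confidence threshold (default 0.6)
detector.set_iouthres(0.4);   // IOU threshold (default 0.4)
```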
1- Visual Studio 2019.
2- CUDA 11.1. The Windows installation guide and requirements are Here.
3- PyTorch 1.9.0. The PyTorch C++ library (libtorch) for Windows is Here.
4- OpenCV 4.5 or Greater. More information can be found Here.
5- The PyTorch trained model (best.torchscript.pt) is included in the repository.
6- Eigen Library.
7- CMake.
1- The Windows bat file for building is Here.
2- Change the EIGEN3_DIR variable in the bat file to match your Eigen install location.
3- Change the TORCH_DIR variable in the bat file to match your libtorch install location.
4- The OpenCV bin path needs to be added to the PATH environment variable.