This project aims to detect humans in a 3D LiDAR dataset using YOLOv5. The dataset is labeled using Roboflow to detect and classify human instances in the point cloud data.
The data are collected with a Livox Horizon lidar and saved as rosbags. By playing the rosbags back, we run our models in Rviz (a visualization tool), where the point clouds are visualized. The color of each point in Rviz represents its intensity value, which is determined by the object's surface material. You can find more details and the raw data at the following links:
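To make the intensity-to-color mapping concrete, here is a minimal sketch (not from the project code) of how a viewer can turn a per-point intensity reading into an RGB color. The colormap and value range below are hypothetical; Rviz has its own configurable color transformers.

```python
def intensity_to_rgb(intensity, vmin=0.0, vmax=255.0):
    """Map a lidar intensity reading to an RGB tuple (blue = low, red = high).

    The linear blue-to-red ramp is an illustrative choice, not Rviz's default.
    """
    # Normalize to [0, 1], clamping out-of-range readings.
    t = (intensity - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))
    return (int(255 * t), 0, int(255 * (1.0 - t)))

print(intensity_to_rgb(0))    # low intensity  -> (0, 0, 255), pure blue
print(intensity_to_rgb(255))  # high intensity -> (255, 0, 0), pure red
```

Points on retroreflective or metallic surfaces return higher intensities and would therefore shade toward red under this mapping.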
-
Roboflow can automatically label, prepare, and host custom data in YOLO format, and generate the corresponding data.yaml:
```yaml
train: ../train/images
val: ../valid/images
test: ../test/images

nc: 1
names: ['human']

roboflow:
  workspace: project
  project: yolo_dection
  version: 1
  license: MIT
  url: https://universe.roboflow.com/project/yolo_dection/dataset/1
```
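The train/val/test entries in data.yaml are relative paths. As a stdlib-only sketch, here is how they can be joined against an assumed location of the yaml file (YOLOv5's actual resolution logic depends on the version and its datasets-root setting, so treat this as illustrative):

```python
from pathlib import Path

# Hypothetical location of data.yaml, mirroring the training command below.
data_yaml = Path("/home/qing/Desktop/SummerProject/data.yaml")

# The split paths from the config above.
splits = {
    "train": "../train/images",
    "val": "../valid/images",
    "test": "../test/images",
}

# Resolve each relative path against the yaml file's directory.
resolved = {name: (data_yaml.parent / rel).resolve() for name, rel in splits.items()}
for name, path in resolved.items():
    print(name, path)
```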
- YOLOv5m, the medium-sized model, is used in this project. The sizes of the YOLOv5 series are shown below:
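For reference, the YOLOv5 family spans five sizes. The parameter counts below (in millions) are approximate figures from the Ultralytics README and vary slightly between releases:

```python
# Approximate parameter counts (millions) for the YOLOv5 family,
# per the Ultralytics README; check the repo for exact figures.
variants = {
    "yolov5n": 1.9,
    "yolov5s": 7.2,
    "yolov5m": 21.2,  # the medium model used in this project
    "yolov5l": 46.5,
    "yolov5x": 86.7,
}

# Larger models trade inference speed for accuracy; yolov5m sits in the middle.
for name, params in variants.items():
    print(f"{name}: ~{params}M parameters")
```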
- Use single-GPU training with train.py:
```shell
!export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
!python3 train.py --img 640 --batch 4 --epochs 300 --data /home/qing/Desktop/SummerProject/data.yaml --cfg /media/qing/KINGSTON/2023-01-28/yolov5/models/yolov5m.yaml --weights yolov5m.pt --name yolov5s_results
```
All training results are saved to runs/train/ with incrementing run directories, e.g. runs/train/exp2, runs/train/exp3, etc.
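The incrementing-directory convention makes it easy to locate the most recent run programmatically. A small stdlib sketch (a hypothetical helper, not part of YOLOv5):

```python
import re
import tempfile
from pathlib import Path

def latest_run(root):
    """Return the highest-numbered exp* run directory under root, or None.

    YOLOv5 names runs exp, exp2, exp3, ... so a bare 'exp' counts as run 1.
    """
    runs = []
    for d in Path(root).glob("exp*"):
        m = re.fullmatch(r"exp(\d*)", d.name)
        if m and d.is_dir():
            runs.append((int(m.group(1) or 1), d))
    return max(runs)[1] if runs else None

# Demo against a throwaway directory layout.
with tempfile.TemporaryDirectory() as tmp:
    for name in ("exp", "exp2", "exp3"):
        (Path(tmp) / name).mkdir()
    print(latest_run(tmp).name)  # -> exp3
```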
All trained results can be found here.