This sample shows how to integrate YOLO models with DeepStream SDK, using a customized output-layer parsing function for detected objects.
- `deepstream_app_config_yolo.txt`: DeepStream reference app configuration file for using YOLO models as the primary detector.
- `config_infer_primary_yoloV4.txt`: Configuration file for the GStreamer nvinfer plugin for the YOLOv4 detector model.
- `config_infer_primary_yoloV7.txt`: Configuration file for the GStreamer nvinfer plugin for the YOLOv7 detector model.
- `nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp`: Output-layer parsing function for detected objects for the YOLO models.
- Go to the PyTorch repository https://github.com/Tianxiaomo/pytorch-YOLOv4, which shows how to convert a YOLOv4 PyTorch model into ONNX.
- Other well-known YOLOv4 PyTorch repositories can be used as references:
- Or download the reference ONNX model directly from here (link).
Following the guide at https://github.com/WongKinYiu/yolov7#export, export a dynamic-batch, single-output ONNX model:
$ python export.py --weights ./yolov7.pt --grid --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --dynamic-batch
Or use the QAT model exported from yolov7_qat.
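The `--img-size 640 640` above is the network input resolution; frames of other sizes are typically letterboxed (scaled with the aspect ratio preserved, then padded) to fit it. As a hedged illustration of that arithmetic (the helper name and symmetric-padding assumption are ours, not part of the sample):

```python
def letterbox_params(src_w, src_h, dst_w=640, dst_h=640):
    """Scale factor and symmetric padding that fit a src_w x src_h frame
    into a dst_w x dst_h network input while preserving aspect ratio."""
    scale = min(dst_w / src_w, dst_h / src_h)   # shrink to the tighter dimension
    new_w = round(src_w * scale)                # resized content size
    new_h = round(src_h * scale)
    pad_x = (dst_w - new_w) / 2.0               # symmetric padding on each side
    pad_y = (dst_h - new_h) / 2.0
    return scale, pad_x, pad_y
```

For a 1920x1080 stream this gives a scale of 1/3 (content resized to 640x360) with 140 pixels of padding above and below; detected boxes are mapped back to frame coordinates by subtracting the padding and dividing by the scale.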
$ cd ~/
$ git clone https://github.com/NVIDIA-AI-IOT/yolo_deepstream.git
$ cd ~/yolo_deepstream/deepstream_yolo/nvdsinfer_custom_impl_Yolo
$ make
$ cd ..
Make sure the model exists under ~/yolo_deepstream/deepstream_yolo/. Change the "config-file" parameter in the "deepstream_app_config_yolo.txt" configuration file to the nvinfer configuration file for the model you want to run:
| Model | nvinfer Configuration File |
|---|---|
| YoloV4 | config_infer_primary_yoloV4.txt |
| YoloV7 | config_infer_primary_yoloV7.txt |
$ deepstream-app -c deepstream_app_config_yolo.txt
This sample provides two implementations of the YOLOv7 post-processing (decoding the YOLO output; NMS is not included): a CPU version and a GPU version.
- The CPU implementation is in: nvdsparsebbox_Yolo.cpp
- The CUDA implementation is in: nvdsparsebbox_Yolo_cuda.cu
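The decode step those parsers perform can be sketched in plain Python. This is a hedged illustration, not the library code: it assumes the common single-output layout of one row per candidate box, `[cx, cy, w, h, objectness, class scores...]`, and stops after thresholding, since NMS/clustering is handled downstream by DeepStream:

```python
def decode_yolo_output(rows, conf_thres=0.35):
    """Decode raw YOLO rows into (left, top, width, height, score, class_id)
    tuples, keeping only detections above conf_thres. No NMS is applied."""
    detections = []
    for row in rows:
        cx, cy, w, h, obj = row[:5]
        class_scores = row[5:]
        # best class for this candidate box
        class_id = max(range(len(class_scores)), key=lambda i: class_scores[i])
        score = obj * class_scores[class_id]    # combined confidence
        if score < conf_thres:
            continue
        left = cx - w / 2.0                     # center format -> top-left format
        top = cy - h / 2.0
        detections.append((left, top, w, h, score, class_id))
    return detections
```

For example, a row with objectness 0.9 and best class score 0.8 survives with combined confidence 0.72, while a row with objectness 0.2 is filtered out at the default 0.35 threshold.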
CUDA post-processing is used by default. To enable CPU post-processing, change the following in config_infer_primary_yoloV7.txt:
parse-bbox-func-name=NvDsInferParseCustomYoloV7_cuda
->parse-bbox-func-name=NvDsInferParseCustomYoloV7
disable-output-host-copy=1
->disable-output-host-copy=0
The performance of CPU post-processing and CUDA post-processing can be found in the Performance section.