
YoloV8 TensorRT Jetson (Orin) Nano

[Demo image: YoloV8 TensorRT detections in a parking lot]

YoloV8 with the TensorRT framework.


A lightweight C++ implementation of YoloV8 running on NVIDIA's TensorRT engine.
No additional libraries are required, just a few lines of code using software found on every JetPack.

A paper is still on the Ultralytics TODO list (https://github.com/ultralytics/ultralytics).
For now, see https://github.com/akashAD98/yolov8_in_depth for an in-depth look at YoloV8.

Specially made for the Jetson Nano; see the Q-engineering deep learning examples.


Model performance benchmark (FPS)

All models are quantized to FP16.
The int8 models give no increase in FPS, while their mAP is significantly worse.
The numbers reflect only the inference time; grabbing frames, post-processing and drawing are not taken into account.

demo     model_name   Orin Nano   Nano
yolov5   yolov5nu     100         20
yolov8   yolov8n      100         19
         yolov8s      100         9.25
         yolov8m      40          -
         yolov8l      20          -
         yolov8x      17          -

Dependencies.

To run the application you need:

  • OpenCV 64-bit installed.
  • Optional: Code::Blocks. ($ sudo apt-get install codeblocks)

Installing the dependencies.

Start with the usual:

$ sudo apt-get update 
$ sudo apt-get upgrade
$ sudo apt-get install cmake wget curl

OpenCV

Follow the Jetson Nano guide or the Jetson Orin Nano guide.
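
A quick way to verify the OpenCV installation afterwards (a hedged check; it assumes the guides above generated the pkg-config file, which they do by default):

# prints the installed OpenCV version, e.g. 4.8.0
$ pkg-config --modversion opencv4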


Installing the app.

To download the network and run it in Code::Blocks:

$ mkdir *MyDir*
$ cd *MyDir* 
$ git clone --depth=1 https://github.com/Qengineering/YoloV8-TensorRT-Jetson_Nano.git

Getting your onnx model.

You always start with an onnx model generated by ultralytics (YoloV8.2).
There are three ways to obtain a model:

  • Use an onnx model from the ./models folder.
    • YoloV5nu.onnx
    • YoloV8n.onnx
    • YoloV8s.onnx
  • Download an onnx model from our Sync drive.
  • Export your (custom-trained) model with ultralytics.
    • Install ultralytics with all its third-party software on your Jetson (see the note after this list).
    • $ echo 'export PATH=$PATH:~/.local/bin/' >> ~/.bashrc
    • $ source ~/.bashrc
    • $ yolo export model=yolov8s.pt format=onnx opset=11 simplify=True
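
Installing ultralytics itself is normally a single pip command. A minimal sketch, assuming pip3 is available on your JetPack:

# user-level install; puts the yolo CLI in ~/.local/bin/
$ pip3 install ultralytics

The yolo CLI ends up in ~/.local/bin/, which is why that folder is added to your PATH above.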

Getting your engine model.

TensorRT works with *.engine models.
An engine must be generated with the same TensorRT version as the one on your Jetson; otherwise, you run into errors.
[Screenshot: TensorRT engine version-mismatch error]
That's why we provide the underlying onnx models instead of the engine models.
You need the trtexec app on your Jetson to convert the model from onnx to the engine format.
Usually, trtexec is found in the /usr/src/tensorrt/bin/ folder on your Jetson.
You can include this folder in your PATH with the following commands.

$ echo 'export PATH=$PATH:/usr/src/tensorrt/bin/' >> ~/.bashrc
$ source ~/.bashrc
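
Since the engine must match the installed TensorRT version, it helps to check which version your JetPack ships before converting. One way, assuming the standard Debian packaging used on Jetson (package names can vary per JetPack release):

# lists the installed TensorRT packages and their versions
$ dpkg -l | grep -i tensorrt

trtexec also prints its TensorRT version in the banner of every run.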

To convert an onnx model to an engine, use the following command.

$ trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine --fp16

Please be patient; the conversion can take several minutes.

Instead of --fp16, you can use --int8. The weights are then quantized to 8-bit integers, giving you a smaller but less accurate model. Once you have your yolov8s.engine model, you can run the app.
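
For reference, the int8 variant is the same command with a different precision flag. Note that without a calibration dataset trtexec falls back to placeholder dynamic ranges, which contributes to the accuracy drop mentioned in the benchmark section:

$ trtexec --onnx=yolov8s.onnx --saveEngine=yolov8s.engine --int8

A finished engine can be sanity-checked by letting trtexec load and benchmark it:

$ trtexec --loadEngine=yolov8s.engine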


Running the app.

You can use Code::Blocks.

  • Load the project file *.cbp in Code::Blocks.
  • Select Release, not Debug.
  • Compile and run with F9.
  • You can alter command line arguments with Project -> Set programs' arguments...

Or use CMake.

$ cd *MyDir*
$ mkdir build
$ cd build
$ cmake ..
$ make -j4
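
Once the build succeeds, you can start the app from the command line with your engine model as argument. A hypothetical invocation (the actual binary name and expected arguments depend on the project files; check the build output and the argument settings mentioned in the Code::Blocks list above):

# hypothetical binary name and arguments, for illustration only
$ ./YoloV8_test yolov8s.engine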

Thanks.

A more than special thanks to triple-Mu.

[Demo image: YoloV8 TensorRT detections at a bus stop]

