English | 简体中文

🚀 TensorRT YOLO


TensorRT-YOLO is an inference acceleration project that uses NVIDIA TensorRT to optimize YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, PP-YOLOE, and PP-YOLOE+ models. The project integrates the EfficientNMS TensorRT plugin to speed up post-processing and uses CUDA kernels to accelerate preprocessing. TensorRT-YOLO supports inference from both C++ and Python, aiming to deliver a fast, well-optimized object detection solution.
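
As a rough sketch of what "TensorRT optimization" means in practice, the snippet below builds a TensorRT engine from a YOLO model exported to ONNX using the standard TensorRT 8.x Python API. This is not TensorRT-YOLO's own interface; the file names (`yolov8n.onnx`, `yolov8n.engine`) and the FP16 flag are illustrative assumptions.

```python
# Minimal sketch: build a TensorRT engine from a YOLO ONNX export.
# Uses the plain TensorRT 8.x Python API, not TensorRT-YOLO's own interface.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# "yolov8n.onnx" is an illustrative name for a model already exported to ONNX.
with open("yolov8n.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # optional: FP16 if the GPU supports it

# Serialize the optimized engine to disk for later inference.
serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise RuntimeError("Engine build failed")
with open("yolov8n.engine", "wb") as f:
    f.write(serialized)
```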

✨ Key Features

  • Support for YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, PP-YOLOE, and PP-YOLOE+
  • Support for ONNX static and dynamic export, as well as TensorRT inference
  • Integration of the EfficientNMS TensorRT plugin for accelerated post-processing
  • Use of CUDA kernels for accelerated preprocessing
  • Support for inference in both C++ and Python (see the sketch after this list)
  • Command-line interface for quick export and inference
  • One-click Docker deployment
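
For Python inference, the project ships its own wrapper; the generic sketch below only illustrates the underlying steps (deserialize an engine, allocate bindings, execute) using the plain TensorRT 8.x and PyCUDA APIs. The engine file name, the assumption that binding 0 is the image input, and the static-shape assumption are illustrative, not the project's actual API.

```python
# Minimal sketch: run inference with a serialized TensorRT engine.
# Generic TensorRT 8.x + PyCUDA usage; TensorRT-YOLO wraps these steps
# (plus CUDA preprocessing and EfficientNMS post-processing) for you.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates/attaches a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("yolov8n.engine", "rb") as f:  # illustrative engine file name
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (assumes a static-shape engine).
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(shape, dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Fill binding 0 (assumed to be the image input) with dummy data; real code
# would run the CUDA-accelerated preprocessing (resize, normalize, HWC->CHW).
host_bufs[0][:] = np.random.rand(*host_bufs[0].shape)

cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
    print(engine.get_binding_name(i), host_bufs[i].shape)
```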

🛠️ Requirements

  • Recommended CUDA version >= 11.6
  • Recommended TensorRT version >= 8.6 (a quick environment check is sketched below)
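
A quick way to confirm the installed versions meet these recommendations, assuming the `tensorrt` Python package and the CUDA toolkit (`nvcc`) are installed:

```python
# Print the installed TensorRT and CUDA toolkit versions.
import subprocess
import tensorrt as trt

print("TensorRT:", trt.__version__)    # recommended >= 8.6
subprocess.run(["nvcc", "--version"])  # recommended CUDA >= 11.6
```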

📦 Usage Guide

📺 BiliBili

📄 License

TensorRT-YOLO is licensed under the GPL-3.0 License, an OSI-approved open-source license that is ideal for students and enthusiasts, fostering open collaboration and knowledge sharing. Please refer to the LICENSE file for more details.

Thank you for choosing TensorRT-YOLO. We encourage open collaboration and knowledge sharing, and we ask that you comply with the terms of the open-source license.

📞 Contact

For bug reports and feature requests regarding TensorRT-YOLO, please visit GitHub Issues!