Yolo with Tensor RT #7567
@bruchpilot123 the commands look correct; I just ran them successfully in Colab. Unfortunately we don't have hardware to reproduce on, but Jetson and RPi support is coming in the future.
Jetson does not support installing TensorRT this way; see https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-pip for details. Please use the official Jetson documentation https://developer.nvidia.com/embedded/jetpack to install TensorRT. You can also see PyTorch's blog https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/ for more details about using YOLOv5 with Jetson.
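Since on JetPack the TensorRT Python bindings are installed system-wide rather than via pip, a quick sanity check before running `export.py` is to confirm the `tensorrt` module is importable. This is a minimal sketch (the `tensorrt_available` helper is my own, not part of YOLOv5):

```python
# Sketch: verify whether TensorRT's Python bindings are importable before
# attempting a TensorRT export. On a Jetson with JetPack installed, the
# "tensorrt" module ships as a system package; elsewhere it may be absent.
import importlib.util


def tensorrt_available() -> bool:
    """Return True if the `tensorrt` module can be imported."""
    return importlib.util.find_spec("tensorrt") is not None


if __name__ == "__main__":
    if tensorrt_available():
        import tensorrt
        print(f"TensorRT {tensorrt.__version__} found")
    else:
        print("TensorRT not importable - on Jetson, install it via JetPack, not pip")
```

If this reports TensorRT as missing on a Jetson, the export will fall back to the pip auto-install, which fails there as shown in the log below.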
Sorry, this might be a really stupid question, but I am a beginner. At this time I don… Have a great day :)
@bruchpilot123 export official YOLOv5s model to TRT like this:
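The command in this reply was not captured in this snapshot; based on the Export tutorial it points to, a typical invocation would look like the following (the weights file and `--device 0` mirror the question above and are assumptions, not the reply's exact text):

```shell
# Export the official YOLOv5s weights to a TensorRT engine on GPU 0
# (--half optionally enables FP16; requires TensorRT to be importable)
python export.py --weights yolov5s.pt --include engine --device 0
```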
See Export tutorial for details: YOLOv5 Tutorials
Good luck 🍀 and let us know if you have any other questions!
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs. Access additional YOLOv5 🚀 resources:
Access additional Ultralytics ⚡ resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
Question
Hello,
I tried to use YOLOv5 on an NVIDIA Jetson with JetPack 5 together with TensorRT. I used the following commands:
python export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0
Since TensorRT should come preinstalled with JetPack 5, I did not use the first command from the notebook. That command also does not work for me; it reports that no matching version is found.
When running the python export command I get the following error:
export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=0, half=False, inplace=False, train=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['engine']
YOLOv5 🚀 v6.1-161-ge54e758 torch 1.12.0a0+2c916ef.nv22.3 CUDA:0 (Xavier, 31011MiB)
Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)
/home/collins/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.1.36ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
/home/collins/.local/lib/python3.8/site-packages/pkg_resources/__init__.py:123: PkgResourcesDeprecationWarning: 0.23ubuntu1 is an invalid version and will not be supported in a future release
warnings.warn(
requirements: nvidia-tensorrt not found and is required by YOLOv5, attempting auto-update...
ERROR: Could not find a version that satisfies the requirement nvidia-tensorrt (from versions: none)
ERROR: No matching distribution found for nvidia-tensorrt
requirements: Command 'pip install 'nvidia-tensorrt' -U --index-url https://pypi.ngc.nvidia.com' returned non-zero exit status 1.
ONNX: starting export with onnx 1.11.0...
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (repeated 6×)
ONNX: export failure: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument other in method wrapper__equal)
TensorRT: starting export with TensorRT 8.4.0.9...
TensorRT: export failure: failed to export ONNX file: yolov5s.onnx
Is there something I can do to fix this?
Kind regards,
Robert
Additional
No response