Include NMS in model export? #3778
Comments
Hi @mattpopovich, I've done some experiments about exporting the
@zhiqwang I did come across your repo! It is on my list of things to investigate in the future. Thanks for putting it together and giving me the heads up!
Question:
I currently have a custom YOLOv5 model running in my C++ pipeline with TorchScript. The TorchScript model was obtained by running export.py. Getting the TorchScript model to run on the GPU in C++ is easy enough via:

model_gpu = torch::jit::load(model_path, torch::kCUDA);

I would now like to run all of pre-processing, inference, and post-processing on the GPU to speed up my C++ pipeline. Is there any way to export pre-processing and post-processing (non-maximum suppression, aka NMS) IN the model? Or would you recommend something else for running the whole pipeline on the GPU in C++?

What I've tried:
Running the TorchScript model
When you run the TorchScript model (with size = 640 and after pre-processing), it outputs:
These are all of the detections. After being put through NMS (post-processing), we get our actual (non-duplicate) detections.
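The suppression step itself is easy to state. Here is a minimal pure-Python sketch of greedy NMS, with made-up boxes and scores for illustration (YOLOv5's real implementation is the torch-based non_max_suppression() in its utils, which also handles per-class offsets and batching):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def greedy_nms(boxes, scores, iou_thres=0.45):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thres]
    return keep

# Two near-duplicate detections of one object, plus one distinct detection:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(greedy_nms(boxes, scores))  # [0, 2] -- the duplicate (index 1) is suppressed
```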
Using jit.script instead of jit.trace
In export.py, I can add:
Which gives me an exception: "Could not get name of python class object"
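For reference, jit.script fails at compile time as soon as it meets Python values it cannot compile, numpy included, while torch-only code scripts fine. A minimal illustration with toy functions (not YOLOv5 code):

```python
import numpy as np
import torch

def relu_torch(x: torch.Tensor) -> torch.Tensor:
    return torch.clamp(x, min=0.0)  # pure torch ops: scriptable

def relu_numpy(x: torch.Tensor) -> torch.Tensor:
    # Round-trips through numpy: TorchScript cannot compile this
    return torch.from_numpy(np.clip(x.numpy(), 0.0, None))

scripted = torch.jit.script(relu_torch)      # compiles fine
print(scripted(torch.tensor([-1.0, 2.0])))   # tensor([0., 2.])

try:
    torch.jit.script(relu_numpy)             # raises at compile time
except Exception as e:
    print("jit.script rejected the numpy version:", type(e).__name__)
```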
This isn't a big deal because jit.trace works, but it's worth noting this error here, because I get it below as well. This might be because TorchScript (jit.script) only supports PyTorch and the math module. There are a few numpy calls in AutoShape.forward(), so maybe that is one of the issues preventing this?

Exporting the PyTorch Hub model
It appears that the PyTorch Hub model includes both pre-processing and post-processing (NMS) via the AutoShape class. My thought was to import my custom model using torch.hub.load(model='custom', [...]) (which is successful), then jit.trace or jit.script it in order to take advantage of the built-in pre- and post-processing.

jit.trace of torch.hub model
Comparison exception: With rtol=1e-05 and atol=1e-08, found 1 element(s) (out of 1) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 1.5 (2.0 vs. 0.5), which occurred at index 0.
jit.script of torch.hub model
RuntimeError: Could not get name of python class object
So... it looks like this isn't a viable alternative either.
Appreciate any advice!