
Include NMS in model export? #3778

Closed
mattpopovich opened this issue Jun 26, 2021 · 4 comments
Labels: question (Further information is requested), Stale (Stale and scheduled for closing soon)

Comments

@mattpopovich

Question:

I currently have a custom yolov5 model running in my C++ pipeline with TorchScript. The TorchScript model was obtained by running export.py. Getting the TorchScript model to run on the GPU in C++ is easy enough via model_gpu = torch::jit::load(model_path, torch::kCUDA);. I would now like to run all of pre-processing, inference, and post-processing on the GPU to speed up my C++ pipeline. Is there any way to export pre-processing and post-processing (non-maximum suppression, aka NMS) IN the model? Or would you recommend something else for running the whole pipeline on the GPU in C++?

What I've tried:

Running the TorchScript model

When you run the TorchScript model (with size = 640 and after pre-processing), it outputs:

  • Tuple[Tensor, List[Tensor]]
    • Tuple[0] = [1, 25200, 13] Tensor
    • Tuple[1] = List[Tensor] of size 3
      • Tuple[1][0] = [1, 3, 80, 80, 13] Tensor
      • Tuple[1][1] = [1, 3, 40, 40, 13] Tensor
      • Tuple[1][2] = [1, 3, 20, 20, 13] Tensor

These are all of the candidate detections. After they are put through NMS (post-processing), we get the actual (non-duplicate) detections.
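For reference, a minimal Python sketch of this step outside of C++ (the model path, input, and thresholds are placeholders; non_max_suppression is assumed to be the helper in this repo's utils/general.py, with the repo root on PYTHONPATH):

```python
import torch
from utils.general import non_max_suppression  # assumption: yolov5 repo utility

# Placeholders: model path, dummy input, and thresholds are illustrative only.
model = torch.jit.load('yolov5s.torchscript.pt', map_location='cuda')
img = torch.zeros(1, 3, 640, 640, device='cuda')   # letterboxed, normalized input
pred = model(img)[0]                                # Tuple[0]: [1, 25200, 13] candidates
det = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]
print(det.shape)                                    # [n, 6]: xyxy, confidence, class
```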

Using jit.script instead of jit.trace

In export.py, I can add:

sm = torch.jit.script(model(img))    # ScriptModule
sm.save(f)

Which gives me an exception: "Could not get name of python class object"

This isn't a big deal because jit.trace works, but it's worth noting this error here because I get it below as well.
This might be because TorchScript (jit.script) only supports a subset of Python (essentially PyTorch operations and the math module). There are a few numpy calls in AutoShape.forward(), so maybe that is one of the issues preventing this?
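For reference, my understanding is that torch.jit.script is normally applied to the nn.Module itself rather than to its output, i.e. something closer to the sketch below (model and f are the same objects export.py already has in scope; whether the yolov5 model is fully scriptable this way is a separate question):

```python
# Sketch: script the module rather than the result of calling it.
sm = torch.jit.script(model)   # compile the nn.Module into a ScriptModule
sm.save(f)                     # loadable from C++ via torch::jit::load
```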

Exporting the PyTorch Hub model

It appears that the PyTorch Hub model includes both pre-processing and post-processing (NMS) via the AutoShape class. My thought was to import my custom model using torch.hub.load(model='custom', [...]) (which is successful), then jit.trace or jit.script it in order to take advantage of the built-in pre- and post-processing.

  • jit.trace of torch.hub model
    • Comparison exception: With rtol=1e-05 and atol=1e-08, found 1 element(s) (out of 1) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 1.5 (2.0 vs. 0.5), which occurred at index 0.
  • jit.script of torch.hub model
    • RuntimeError: Could not get name of python class object

So... it looks like this isn't a viable alternative either.
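For completeness, the calls I'm describing look roughly like this (the repo string, weights path, and argument order are placeholders rather than my exact invocation):

```python
import torch

# Placeholders: repo string and weights path are illustrative only.
model = torch.hub.load('ultralytics/yolov5', 'custom', 'best.pt')  # AutoShape-wrapped model
img = torch.zeros(1, 3, 640, 640)

traced = torch.jit.trace(model, img)    # this call produced the comparison error quoted above
# scripted = torch.jit.script(model)    # RuntimeError: Could not get name of python class object
```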

Appreciate any advice!

mattpopovich added the question label on Jun 26, 2021
zhiqwang (Contributor) commented Jun 26, 2021

Hi @mattpopovich, I've done some experiments with exporting the letterbox (pre-processing) and NMS (post-processing) in AutoShape to TorchScript using torch.jit.script here.
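The general idea is roughly the following (a heavily simplified sketch, not the actual code from my repo; it uses torchvision.ops.nms for the NMS step, assumes the raw [1, 25200, 13] output you described, and omits class indices for brevity):

```python
import torch
from torch import nn
from torchvision.ops import nms


class YOLOWithNMS(nn.Module):
    """Sketch: wrap a detection model so that NMS runs inside forward().

    Because confidence filtering and NMS are data dependent, this kind of
    wrapper is exported with torch.jit.script rather than torch.jit.trace.
    """

    def __init__(self, model: nn.Module, conf_thres: float = 0.25, iou_thres: float = 0.45):
        super().__init__()
        self.model = model
        self.conf_thres = conf_thres
        self.iou_thres = iou_thres

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pred = self.model(x)[0][0]                  # [25200, 13]: cx, cy, w, h, obj, cls...
        obj = pred[:, 4]
        cls_conf, _ = pred[:, 5:].max(1)
        scores = obj * cls_conf
        keep = scores > self.conf_thres             # confidence filtering (data dependent)
        boxes_cxcywh = pred[:, :4][keep]
        scores = scores[keep]
        xy = boxes_cxcywh[:, :2]
        wh = boxes_cxcywh[:, 2:4]
        boxes = torch.cat((xy - wh / 2.0, xy + wh / 2.0), 1)   # cxcywh -> xyxy
        idx = nms(boxes, scores, self.iou_thres)
        return torch.cat((boxes[idx], scores[idx].unsqueeze(1)), 1)  # [n, 5]: xyxy, score
```

Once a wrapper like this compiles with torch.jit.script, the saved module can be loaded from C++ with torch::jit::load exactly like a traced one.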

mattpopovich (Author) commented:
@zhiqwang I did come across your repo! It is on my list of things to investigate in the future. Thanks for putting it together and giving me the heads up!

zhiqwang (Contributor) commented Jun 26, 2021

Hi @mattpopovich

torch.jit.trace has more practical applications; torch.jit.script is more experimental than torch.jit.trace.

But script is more flexible: if you want to export the letterbox and NMS to TorchScript, it seems we must use torch.jit.script, and that's what I've done in my repo.
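A toy illustration of the difference (nothing yolov5 specific): trace only records the branch taken by the example input, while script compiles the data-dependent control flow that confidence filtering and NMS need.

```python
import torch
from torch import nn


class Gate(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Data-dependent branch: trace bakes in whichever path the example input
        # happened to take; script compiles both branches.
        if x.sum() > 0:
            return x * 2.0
        return x - 1.0


m = Gate()
traced = torch.jit.trace(m, torch.rand(4))   # warns that the trace may not generalize
scripted = torch.jit.script(m)               # preserves the if/else
```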

github-actions bot commented Jul 27, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

github-actions bot added the Stale label on Jul 27, 2021
github-actions bot closed this as completed on Aug 1, 2021