
[Fix] Fix reppoints TensorRT support. #1060

Merged (4 commits, Oct 27, 2022)

Conversation

grimoire (Member)

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it receive feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

Motivation

RepPoints gives wrong results on the latest TensorRT.

Modification

Reshape twice.
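The change can be sketched as the following minimal runnable example. The wrapper function name `decode_bbox_pred` is mine, not from the PR; the reshape sequence mirrors the PR's diff, and the comment on `+ 0` reflects the apparent intent (the reviewers themselves question it later in this thread):

```python
import torch


def decode_bbox_pred(bbox_pred: torch.Tensor) -> torch.Tensor:
    """Flatten per-location bbox predictions (B, 4*N, H, W) -> (B, H*W*N, 4).

    Reshaping directly after permute() produced wrong results on recent
    TensorRT, so the tensor is first flattened, passed through a no-op
    `+ 0`, then reshaped again.
    """
    batch_size = bbox_pred.size(0)
    bbox_pred = bbox_pred.permute(0, 2, 3, 1)
    bbox_pred = bbox_pred.reshape(batch_size, -1)
    # The `+ 0` is an identity op that appears intended to keep the
    # Transpose and the final Reshape as separate nodes in the
    # exported ONNX graph rather than a single fused pattern.
    bbox_pred = (bbox_pred + 0).reshape(batch_size, -1, 4)
    return bbox_pred
```

In eager PyTorch the `+ 0` changes nothing numerically; the two-step reshape only matters for how the operation sequence is laid out in the exported graph.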

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

@hanrui1sensetime (Collaborator) left a comment

In my env:
torch 1.10.0 + cu113
tensorrt 8.4.2.4
this conversion fails at onnx2tensorrt:

[TRT] [W] Skipping tactic 0x0000000000000000 due to Myelin error: autotuning: CUDA error 3 allocating 0-byte buffer: 
[09/27/2022-11:10:30] [TRT] [E] 4: [optimizer.cpp::computeCosts::3624] Error Code 4: Internal Error (Could not find any implementation for node {ForeignNode[Transpose_600 + Reshape_603...Gather_1063]} due to insufficient workspace. See verbose log for requested sizes.)

@hanrui1sensetime (Collaborator) left a comment

And in another env:
tensorrt 8.0.3.4
pytorch 1.8.0+cuda102:
It fails at:

RuntimeError: Failed to parse onnx, In node 1061 (importClip): UNSUPPORTED_NODE: Assertion failed: inputs.at(2).is_weights() && "Clip max value must be an initializer!"

@hanrui1sensetime (Collaborator)

LGTM.
P.S. Same as CenterNet: with TensorRT 8.4.x, dynamic shape gives wrong answers. NVIDIA/TensorRT#2299

@tpoisonooo (Collaborator) left a comment

LGTM.

# TODO: figure out why we can't reshape after permute deirectly
bbox_pred = bbox_pred.permute(0, 2, 3, 1)
bbox_pred = bbox_pred.reshape(batch_size, -1)
bbox_pred = (bbox_pred + 0).reshape(batch_size, -1, 4)
Collaborator

Why add +0 ?

@@ -115,7 +118,11 @@ def reppoints_head__get_bboxes(ctx,
scores = scores.sigmoid()
else:
scores = scores.softmax(-1)
bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(batch_size, -1, 4)

# TODO: figure out why we can't reshape after permute deirectly
Collaborator

deirectly --> directly

@lvhan028 lvhan028 merged commit 09add48 into open-mmlab:master Oct 27, 2022
codecov bot commented Aug 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 47.72%. Comparing base (b6f8c1c) to head (2fa0a49).
Report is 148 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1060   +/-   ##
=======================================
  Coverage   47.71%   47.72%           
=======================================
  Files         323      323           
  Lines       11312    11314    +2     
  Branches     1619     1619           
=======================================
+ Hits         5398     5400    +2     
  Misses       5526     5526           
  Partials      388      388           
Flag        Coverage Δ
unittests   47.72% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


4 participants