
[CustomOps] TensorRT Gather Topk Ops #1033

Merged: 6 commits, Sep 19, 2022

Conversation

@grimoire (Member) commented on Sep 13, 2022

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and more likely to receive feedback. If you do not understand some items, don't worry: just make the pull request and seek help from the maintainers.

This op might be disabled once NVIDIA provides a fix.

Motivation

The dynamic-batch detector failed because of NVIDIA/TensorRT#2299.

Modification

Add a custom op for gather-topk (a reference sketch of the op's semantics follows the profiling comparison below).

old:

-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls  
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  
                         Memcpy DtoD (Device -> Device)         0.00%       0.000us         0.00%       0.000us       0.000us     614.000us        32.32%     614.000us       1.535us           400  
(anonymous namespace)::kernelShapeCopyH2D(int*, std:...         0.00%       0.000us         0.00%       0.000us       0.000us     374.000us        19.68%     374.000us       1.247us           300  
void cuTopKLayer::independentTopK<(nvinfer1::TopKOpe...         0.00%       0.000us         0.00%       0.000us       0.000us     301.000us        15.84%     301.000us       3.010us           100  
                                  __myl_bb0_1_MulAddGat         0.00%       0.000us         0.00%       0.000us       0.000us     204.000us        10.74%     204.000us       2.040us           100  
void cuReduceLayer::tailReduceFast<32u, (nvinfer1::R...         0.00%       0.000us         0.00%       0.000us       0.000us     200.000us        10.53%     200.000us       2.000us           100  
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  
Self CPU time total: 481.878ms
Self CUDA time total: 1.900ms

new:

-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  
                                                   Name    Self CPU %      Self CPU   CPU total %     CPU total  CPU time avg     Self CUDA   Self CUDA %    CUDA total  CUDA time avg    # of Calls  
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  
void cuTopKLayer::independentTopK<(nvinfer1::TopKOpe...         0.00%       0.000us         0.00%       0.000us       0.000us     300.000us        42.80%     300.000us       3.000us           100  
void cuReduceLayer::tailReduceFast<32u, (nvinfer1::R...         0.00%       0.000us         0.00%       0.000us       0.000us     201.000us        28.67%     201.000us       2.010us           100  
void gather_topk_kernel<float>(float const*, int con...         0.00%       0.000us         0.00%       0.000us       0.000us     200.000us        28.53%     200.000us       2.000us           100  
                                        cudaEventRecord         7.96%     104.000us         7.96%     104.000us       1.040us       0.000us         0.00%       0.000us       0.000us           100  
                                       cudaLaunchKernel        91.35%       1.194ms        91.35%       1.194ms       3.980us       0.000us         0.00%       0.000us       0.000us           300  
-------------------------------------------------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  ------------  
Self CPU time total: 1.307ms
Self CUDA time total: 701.000us 

Also add shape-inference info, unit tests, docs, etc.
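
For readers less familiar with the op, the semantics the custom plugin implements can be sketched in plain PyTorch. This is only an illustrative reference for the gather-by-top-k-indices pattern; the tensor shapes and the `gather_topk_reference` helper below are assumptions for the example, not the plugin's actual interface.

```python
import torch


def gather_topk_reference(scores: torch.Tensor, boxes: torch.Tensor, k: int):
    """Illustrative reference: pick the top-k scores per batch item and
    gather the matching boxes with batched indexing. Roughly the pattern
    the custom TensorRT op covers for dynamic batch sizes."""
    # scores: (batch, num_boxes), boxes: (batch, num_boxes, 4)
    topk_scores, topk_inds = scores.topk(k, dim=1)
    # Expand the indices over the last dim so every box coordinate is gathered.
    inds = topk_inds.unsqueeze(-1).expand(-1, -1, boxes.size(-1))
    topk_boxes = boxes.gather(1, inds)
    return topk_scores, topk_boxes


# Example usage with a batch of 2 and 1000 candidate boxes.
scores = torch.rand(2, 1000)
boxes = torch.rand(2, 1000, 4)
top_scores, top_boxes = gather_topk_reference(scores, boxes, k=100)
assert top_scores.shape == (2, 100) and top_boxes.shape == (2, 100, 4)
```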

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools have been used to fix potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  3. If the modification depends on a newer version of downstream projects, this PR should be tested with all supported versions of those projects.
  4. The documentation has been modified accordingly, e.g. docstrings or example tutorials.

@irexyc (Collaborator) commented on Sep 16, 2022

Convert: faster_rcnn_r50_fpn_1x_coco.py with a min/opt/max batch shape of 1/2/4.

Test:
with --batch-size 1, the bbox_mAP is 0.373;
with --batch-size 2, the bbox_mAP is 0.207.
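
For context, the dynamic batch above comes from giving the TensorRT backend min/opt/max input shapes that differ only in the batch dimension. Below is a rough sketch of what such a backend config can look like in an mmdeploy deploy config; the 800x1344 resolution and the workspace size are placeholder assumptions, not values taken from this PR.

```python
# Rough sketch of a TensorRT backend config with a dynamic batch axis
# (min/opt/max batch of 1/2/4). Resolution and workspace size below are
# placeholder assumptions, not values from this PR.
backend_config = dict(
    type='tensorrt',
    common_config=dict(fp16_mode=False, max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 800, 1344],
                    opt_shape=[2, 3, 800, 1344],
                    max_shape=[4, 3, 800, 1344])))
    ])
```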

@grimoire (Member, Author) commented:

@irexyc fixed.

@irexyc (Collaborator) left a review comment:

Tested with faster-rcnn and retinanet.

@lvhan028 merged commit 0caeaf2 into open-mmlab:master on Sep 19, 2022.

codecov bot commented Aug 28, 2024

Codecov Report

Attention: Patch coverage is 34.09091% with 29 lines in your changes missing coverage. Please review.

Project coverage is 49.51%. Comparing base (a1a19f0) to head (267570d).
Report is 206 commits behind head on master.

Files with missing lines                                  Patch %   Lines
mmdeploy/codebase/mmdet/deploy/utils.py                    23.07%   20 Missing ⚠️
...oy/codebase/mmdet/core/post_processing/bbox_nms.py       0.00%   5 Missing ⚠️
...odebase/mmdet/models/dense_heads/reppoints_head.py      60.00%   2 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1033      +/-   ##
==========================================
- Coverage   49.59%   49.51%   -0.09%     
==========================================
  Files         305      305              
  Lines       10732    10758      +26     
  Branches     1602     1609       +7     
==========================================
+ Hits         5323     5327       +4     
- Misses       5030     5051      +21     
- Partials      379      380       +1     
Flag        Coverage Δ
unittests   49.51% <34.09%> (-0.09%) ⬇️

Flags with carried forward coverage won't be shown.
