This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MKLDNN]Add quantized concat #13297

Merged
merged 12 commits into from
Nov 28, 2018

Conversation

Contributor

@ZhennanQin ZhennanQin commented Nov 16, 2018

Description

This PR is to add quantized concat op and its MKLDNN implementation.

@pengzhao-intel @TaoLv

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

Contributor

kalyc commented Nov 16, 2018

@mxnet-label-bot add [pr-awaiting-review]
Thanks for your contribution @ZhennanQin

@marcoabreu marcoabreu added the pr-awaiting-review PR is waiting for code review label Nov 16, 2018
@pengzhao-intel
Contributor

@larroy Please help review this again. This is an important op for the quantization flow, so we hope it can be merged before the r1.4 code freeze.

@pengzhao-intel
Contributor

@zheng-da @reminisce @szha @eric-haibin-lin @apeforest
please help take a review :)

```cpp
NNVM_REGISTER_OP(Concat)
.set_attr<FQuantizedOp>("FQuantizedOp", [](const NodeAttrs& attrs) {
  nnvm::NodePtr node = nnvm::Node::Create();
  node->attrs.op = Op::Get("_contrib_quantized_concat");
```
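The registration excerpt above maps a float Concat node onto its quantized counterpart `_contrib_quantized_concat`. A rough, self-contained sketch of that mapping pattern (toy `Node` type and lookup table, not MXNet's real nnvm API):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Toy stand-in for nnvm::Node; illustrative only.
struct Node {
  std::string op;
};

// Map a float op name to its quantized counterpart, falling back to the
// original name when no quantized version is known.
std::shared_ptr<Node> QuantizedOpFor(const std::string& float_op) {
  static const std::map<std::string, std::string> table = {
      {"Concat", "_contrib_quantized_concat"},
      {"Convolution", "_contrib_quantized_conv"},
      {"Pooling", "_contrib_quantized_pooling"},
  };
  auto node = std::make_shared<Node>();
  auto it = table.find(float_op);
  node->op = (it != table.end()) ? it->second : float_op;
  return node;
}
```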
Contributor

@apeforest apeforest Nov 21, 2018

It seems the Concat operator will call the MKLDNN version of the operator. Is this intended?

Contributor Author

Yes, none of the quantized ops has a default CPU implementation; MKLDNN is the only CPU implementation. I guess that's why they all have the _contrib_ prefix.

Contributor

@apeforest apeforest Nov 21, 2018

What if MKLDNN is not ON and the user invokes this operator? Will any error message be given?

Contributor Author

I'm not sure. It should follow the framework's default behavior. Quantized ops are all supported via FComputeEx, so basically they should only be used when MKLDNN is on. One way to avoid this is to define quantized_concat as an MKLDNN-specific op by declaring it inside the MXNET_USE_MKLDNN macro, but we haven't had such backend-specific ops before. Do you have any suggestions?
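For illustration, the "declare it inside the macro" idea could look roughly like the sketch below: registration happens only when the backend is compiled in, and dispatch raises an explicit error otherwise. The registry, op behavior, and `USE_MKLDNN` default here are toy assumptions, not MXNet's actual registration machinery.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Toy operator registry standing in for MXNet's; illustrative only.
std::map<std::string, std::function<int(int)>>& Registry() {
  static std::map<std::string, std::function<int(int)>> r;
  return r;
}

#ifndef USE_MKLDNN
#define USE_MKLDNN 1  // assume an MKLDNN build for this sketch
#endif

#if USE_MKLDNN == 1
// Register the quantized op only when the MKLDNN backend is compiled in,
// mirroring the macro-guard idea discussed above.
const bool kRegistered = [] {
  Registry()["_contrib_quantized_concat"] = [](int x) { return x; };
  return true;
}();
#endif

// Dispatch with an explicit error instead of failing silently.
int Invoke(const std::string& name, int x) {
  auto it = Registry().find(name);
  if (it == Registry().end()) {
    throw std::runtime_error(name + " is not supported in a non-MKLDNN build");
  }
  return it->second(x);
}
```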

Contributor

I think we should issue an error message that the quantized op is not supported in a non-MKLDNN build.

Contributor Author

Yes, I agree with that, but that's beyond the scope of this PR. Currently all quantized ops have this issue. We need another PR to add a framework-level error message for each op that doesn't have a default implementation.

Contributor

Can we have that PR first before we merge this? Knowing there is some limitation without issuing any clear message may create a bad user experience.

Contributor Author

@apeforest Build MXNet with `make USE_OPENCV=1 USE_BLAS=openblas` and run a quantized model; the messages below are reported:

```
[10:54:08] src/executor/attach_op_execs_pass.cc:351: Neither FCompute nor FComputeEx registered _contrib_quantized_concat
[10:54:08] src/executor/attach_op_execs_pass.cc:351: Neither FCompute nor FComputeEx registered _contrib_quantized_pooling
[10:54:08] src/executor/attach_op_execs_pass.cc:351: Neither FCompute nor FComputeEx registered _contrib_quantized_conv
```

I think the framework can handle this case properly.
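Those warnings come from a framework-level pass that inspects each op's registered compute functions. A minimal sketch of that kind of check, using assumed toy registries rather than MXNet's actual attach_op_execs_pass code:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// For each op, collect a warning when neither an FCompute nor an FComputeEx
// function is registered, mirroring the style of the log lines above. The two
// sets stand in for MXNet's real attribute registries; names are illustrative.
std::vector<std::string> CheckOps(const std::set<std::string>& fcompute,
                                  const std::set<std::string>& fcompute_ex,
                                  const std::vector<std::string>& ops) {
  std::vector<std::string> warnings;
  for (const auto& op : ops) {
    if (fcompute.count(op) == 0 && fcompute_ex.count(op) == 0) {
      warnings.push_back("Neither FCompute nor FComputeEx registered " + op);
    }
  }
  return warnings;
}
```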

@codecov-io

Codecov Report

Merging #13297 into master will decrease coverage by 11.06%.
The diff coverage is 16.04%.

Impacted file tree graph

```diff
@@             Coverage Diff             @@
##           master   #13297       +/-   ##
===========================================
- Coverage   79.72%   68.66%   -11.07%
===========================================
  Files         749      652       -97
  Lines       81176    70894    -10282
  Branches     3164     3164
===========================================
- Hits        64714    48676    -16038
- Misses      15606    21923     +6317
+ Partials      856      295      -561
```
| Impacted Files | Coverage Δ |
|---|---|
| src/operator/quantization/quantization_utils.h | 50% <ø> (-50%) ⬇️ |
| src/operator/quantization/quantized_concat.cc | 16.04% <16.04%> (ø) |
| src/operator/nn/depthwise_convolution_tf.cuh | 0% <0%> (-85.89%) ⬇️ |
| python/mxnet/symbol/contrib.py | 8.54% <0%> (-81.65%) ⬇️ |
| src/operator/contrib/multibox_prior.cu | 2.7% <0%> (-81.09%) ⬇️ |
| src/operator/contrib/multibox_detection.cu | 4.76% <0%> (-80.96%) ⬇️ |
| src/engine/threaded_engine_pooled.cc | 1.58% <0%> (-80.96%) ⬇️ |
| python/mxnet/gluon/block.py | 13.83% <0%> (-79.53%) ⬇️ |
| python/mxnet/gluon/model_zoo/vision/squeezenet.py | 21.62% <0%> (-78.38%) ⬇️ |
| python/mxnet/gluon/model_zoo/vision/resnet.py | 21.71% <0%> (-76.27%) ⬇️ |

... and 393 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c78f89f...152ad06.

Contributor

@apeforest apeforest left a comment

Some changes are still needed.

@ZhennanQin
Contributor Author

@apeforest All comments are addressed. Could you review again? Thanks a lot for reviewing round after round.

Contributor

@apeforest apeforest left a comment

LGTM. Thanks for the detailed explanation.

Member

TaoLv commented Nov 28, 2018

Thanks for the contribution. Now merging.

@TaoLv TaoLv merged commit 5111b18 into apache:master Nov 28, 2018
@ZhennanQin ZhennanQin deleted the quantized_concat branch November 29, 2018 01:43
Labels: pr-awaiting-review (PR is waiting for code review)
10 participants