[Quant][PT2E] Enable X86InductorQuantizer single quantizable op (maxpool2d) #105639
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/105639
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit 2cd113c with merge base 97a291f: UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
[Quant][PT2E] Enable X86InductorQuantizer single quantizable op (maxpool2d)

**Summary**
In this PR, we mainly enable 2 things.
- Enable the skeleton of the quantization recipe for single quantizable operators in `X86InductorQuantizer`.
- Add the quantization recipe for `maxpool2d` and annotate it so that its input and output share the same observer.

**Test Plan**
```
python -m pytest test_x86inductor_quantizer.py -k test_maxpool2d_recipe
```
(An earlier push ran `-k test_non_quantizable_op_after_force_int8_int8_op` instead.)

Pull Request resolved: #105639
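To make "input and output share the same observer" concrete, here is a minimal sketch (not the actual PyTorch source; the helper name `annotate_maxpool2d` and the `act_qspec` parameter are illustrative) of how a PT2E quantizer can tie a maxpool output to its input observer via `SharedQuantizationSpec`:

```python
from torch.ao.quantization.quantizer import (
    QuantizationAnnotation,
    SharedQuantizationSpec,
)

def annotate_maxpool2d(maxpool_node, act_qspec):
    # Observe the maxpool input with the regular activation qspec.
    input_act = maxpool_node.args[0]
    # Tie the output observer to the input edge: maxpool2d does not change
    # the value range, so input and output can share scale/zero_point.
    shared = SharedQuantizationSpec((input_act, maxpool_node))
    maxpool_node.meta["quantization_annotation"] = QuantizationAnnotation(
        input_qspec_map={input_act: act_qspec},
        output_qspec=shared,
        _annotated=True,
    )
```

Sharing the observer keeps the quantization parameters identical on both sides of the pool, which is what lets the lowered pattern stay in int8 end to end.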
```diff
@@ -28,14 +29,83 @@
     get_source_partitions,
     SourcePartition,
 )
-from .quantizer import QuantizationAnnotation, QuantizationSpec, Quantizer
+from .quantizer import (
```
nit: probably better to import from "torch.ao.quantization.quantizer"
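The suggested absolute import would look like this (a sketch; the three names come from the line being replaced in the diff above, and the actual PR may import more):

```python
from torch.ao.quantization.quantizer import (
    QuantizationAnnotation,
    QuantizationSpec,
    Quantizer,
)
```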
Thanks for the comment; changed.
```python
    # Excerpt from the maxpool2d recipe: the maxpool op returns a tuple
    # (values, indices), so its user is an operator.getitem node. Skip
    # unless both nodes were already annotated by the earlier
    # _annotation_propagation_quantizable_pattern pass.
    list(maxpool_node.users)[0].target == operator.getitem
)
getitem_node = list(node.users)[0]
if not _is_all_annotated([getitem_node, maxpool_node]):
```
You probably want to skip if any of the nodes is already annotated? Otherwise you will be breaking some existing annotations?
Oh, I think we add this check because we expect `getitem_node` and `maxpool_node` to have already been annotated in the previous step, `_annotation_propagation_quantizable_pattern`, which only annotates the inputs of these 2 nodes. Then here we annotate the output of `getitem_node`.
I see. I feel this is error-prone, since these nodes might be annotated for other reasons. I think the more robust way to annotate things would be to have a self-contained, meaningful pattern and just annotate that pattern by itself.
Makes sense. We can discuss it further.
accepting to unblock
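For context, a minimal sketch of what an `_is_all_annotated`-style check does (an assumed helper shape, not the actual PyTorch source): a node counts as annotated once a `QuantizationAnnotation` with `_annotated=True` sits in its `meta` dict.

```python
from torch.ao.quantization.quantizer import QuantizationAnnotation

def _is_all_annotated(nodes) -> bool:
    # True only when every node already carries an annotation from an
    # earlier pass, e.g. _annotation_propagation_quantizable_pattern.
    return all(
        "quantization_annotation" in node.meta
        and node.meta["quantization_annotation"]._annotated
        for node in nodes
    )
```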
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
**Summary**
Enable the `dq-maxpool2d-q` pattern match and lower it into `torch.ops.quantized.max_pool2d`.

**Test Plan**
```
python -m pytest test_mkldnn_pattern_matcher.py -k test_qmaxpool2d
python -m pytest test_quantized_op.py -k test_max_pool2d_pt2e
```

Pull Request resolved: #105906
Approved by: https://github.com/jgong5, https://github.com/eellison
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456, #105639
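As a usage illustration (a sketch, not code from the PR; the argument layout is assumed from the quantized op registry), the lowered target can be exercised directly on a quantized tensor:

```python
import torch

# Quantize a float tensor, then run the quantized maxpool kernel that the
# dq-maxpool2d-q pattern is lowered to.
x = torch.randn(1, 3, 8, 8)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
qy = torch.ops.quantized.max_pool2d(
    qx,
    kernel_size=[2, 2],
    stride=[2, 2],
    padding=[0, 0],
    dilation=[1, 1],
    ceil_mode=False,
)
# Maxpool does not change the value range, so the output keeps the
# input's quantization parameters.
assert qy.q_scale() == qx.q_scale()
```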
**Summary**
After the oneDNN 3.1 upgrade, we don't need to do the weight scale reciprocal calculation. So, remove the redundant reciprocal calculation to optimize QConv performance, and use the IDeep version API to implement it in this PR:
- This QConv implementation is expected to work functionally both with the current IDeep version and with the following IDeep upgrade in PR #107565.
- With the following IDeep upgrade in PR #107565, QConv has better performance since the redundant reciprocal calculation is removed.

Pull Request resolved: #105996
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456, #105639, #105906
…ht scale reciprocal calculation (#107565)

**Summary**
Upgrade IDeep, which includes one IDeep change, intel/ideep#226, which does 2 things:
- Remove the redundant QConv weight scale reciprocal calculation.
- Bump IDEEP_VERSION_REVISION from 0 to 1.

So only QConv-related calculation will be impacted, and we already use the IDeep version API in #105996 to make the corresponding change in PyTorch.

Pull Request resolved: #107565
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456, #105639, #105906, #105996
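To illustrate the arithmetic being removed (hypothetical values; not code from the PR): older IDeep expected the reciprocal of each weight scale, so PyTorch computed `1/s` per scale before handing them to oneDNN; after intel/ideep#226 the scales are passed through directly.

```python
# Hypothetical per-output-channel weight scales for a quantized conv.
weight_scales = [0.020, 0.045, 0.031]

# Old behavior: an extra elementwise reciprocal before calling into IDeep.
scales_for_ideep_old = [1.0 / s for s in weight_scales]

# New behavior (IDEEP_VERSION_REVISION >= 1): pass scales through as-is.
scales_for_ideep_new = weight_scales
```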
…ol2d) (#105639)

**Summary**
In this PR, we mainly enable 2 things.
- Enable the skeleton of the quantization recipe for single quantizable operators in `X86InductorQuantizer`.
- Add the quantization recipe for `maxpool2d` and annotate it so that its input and output share the same observer.

**Test Plan**
```
python -m pytest test_x86inductor_quantizer.py -k test_maxpool2d_recipe
```

Pull Request resolved: #105639
Approved by: https://github.com/jgong5, https://github.com/jerryzh168
ghstack dependencies: #104580, #104581, #104588, #104590, #105455, #105456
Stack from ghstack (oldest at bottom):

**Summary**
In this PR, we mainly enable 2 things.
- Enable the skeleton of the quantization recipe for single quantizable operators in `X86InductorQuantizer`.
- Add the quantization recipe for `maxpool2d` and annotate it so that its input and output share the same observer.

**Test Plan**
```
python -m pytest test_x86inductor_quantizer.py -k test_maxpool2d_recipe
```
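For readers who want to try the quantizer end to end, a minimal sketch of the PT2E flow with `X86InductorQuantizer` (assuming the PyTorch 2.1-era entry points; `capture_pre_autograd_graph` and the exact APIs may differ across versions):

```python
import torch
import torch.nn as nn
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        # maxpool2d after conv: with this PR, the pool is annotated so its
        # output shares the input's observer.
        return self.pool(self.conv(x))

example_inputs = (torch.randn(1, 3, 16, 16),)
m = capture_pre_autograd_graph(M().eval(), example_inputs)

quantizer = xiq.X86InductorQuantizer()
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())

m = prepare_pt2e(m, quantizer)   # insert observers per the recipes
m(*example_inputs)               # calibration pass
m = convert_pt2e(m)              # reference quantized model
```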