
Refactor rest of tinygemm quant primitive ops #321

Merged 1 commit into pytorch:main on Jun 5, 2024

Conversation

jerryzh168 (Contributor)

Summary:
This PR replaces the remaining tinygemm-specific quant primitive ops with the general quant primitive ops that we want to use for everything; the tinygemm-specific ops can be deleted in a separate PR if needed.

Test Plan:
python test/quantization/test_quant_primitives.py -k test_get_groupwise_affine_qparams
python test/quantization/test_quant_primitives.py -k test_groupwise_affine_quantize_tensor_from_qparams
python test/quantization/test_quant_primitives.py -k test_groupwise_affine_dequantize_tensor_from_qparams

accuracy:

perf:
no diff in generated code with `TORCH_LOGS='output_code' python tutorials/quantize_vit/run_vit_b_quant.py`
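For context on what these ops compute: groupwise affine quantization maps each contiguous group of `group_size` elements to low-bit integers through a shared per-group `scale` and `zero_point`. The sketch below illustrates the quantize/dequantize-from-qparams pattern that the tests above exercise; the helper names, shapes, and integer-domain zero point are assumptions for this example, not the actual torchao primitives (see torchao/quantization/quant_primitives.py for the real ops).

```python
import torch

# NOTE: illustrative sketch only, not torchao's actual quant_primitives API.
def groupwise_affine_quantize_from_qparams(x, scale, zero_point, group_size=128, n_bit=4):
    """Quantize x groupwise given precomputed qparams.

    scale and zero_point have shape (num_groups, 1); each row applies to
    one contiguous group of `group_size` elements.
    """
    orig_shape = x.shape
    x = x.reshape(-1, group_size)
    qmin, qmax = 0, (1 << n_bit) - 1
    # Affine map: q = clamp(round(x / scale + zero_point), qmin, qmax)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return q.to(torch.uint8).reshape(orig_shape)

def groupwise_affine_dequantize_from_qparams(q, scale, zero_point, group_size=128):
    orig_shape = q.shape
    q = q.reshape(-1, group_size).to(scale.dtype)
    # Inverse affine map: x_hat = (q - zero_point) * scale
    return ((q - zero_point) * scale).reshape(orig_shape)

# Round trip: x_hat approximates x up to roughly half a scale step per group.
x = torch.randn(2, 256)
scale = x.reshape(-1, 128).abs().amax(dim=1, keepdim=True) / 7.5
zero_point = torch.full_like(scale, 8.0)
q = groupwise_affine_quantize_from_qparams(x, scale, zero_point)
x_hat = groupwise_affine_dequantize_from_qparams(q, scale, zero_point)
```

The general ops take scale and zero_point as explicit inputs rather than computing them internally, which is what lets one pair of quantize/dequantize primitives replace the various tinygemm-specific helpers.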

pytorch-bot bot commented Jun 4, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/321

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f2db23d with merge base 08fb8bf:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jun 4, 2024
@jerryzh168 jerryzh168 merged commit 03e2c9b into pytorch:main Jun 5, 2024
13 checks passed
@jerryzh168 jerryzh168 deleted the refactor-quant-primitives branch June 5, 2024 15:58
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024
Labels
CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.