[bc-breaking] enable direct configuration in quantize_ #1595
base: main
Conversation
Stack from ghstack (oldest at bottom):
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1595
Note: links to docs will display an error until the docs builds have been completed.
❗ There is 1 currently active SEV. If your PR is affected, please view it below.
✅ No failures as of commit 26850da with merge base 8afd10e.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Summary:

POC for:
* decoupling configuration from transformation
* stop passing obscure stateful callables around
* enable printing of configuration
* reduce amount of context switching to navigate the logic from `quantize_` to quantizing a single module

TODO: more polish before wider discussion.

Test Plan:
```
pytest test/quantization/test_quant_api.py -s -x -k test_int4_weight_only_numerics
pytest test/quantization/test_qat.py -s -x -k test_quantize_api_standalone
pytest test/quantization/test_qat.py -s -x -k test_quantize_api_convert_path
```

ghstack-source-id: fb0703f88413bc06962dacde24ff6bb7cf0f3b19
ghstack-comment-id: 2607756510
Pull Request resolved: #1595
Looks great! Mostly just minor doc nits.
```
@@ -180,8 +187,13 @@ def apply_uint6_weight_only_quant(linear):
    )
    @unittest.skipIf(not torch.cuda.is_available(), "Need CUDA available")
    def test_print_quantized_module(self, apply_quant):
        print(apply_quant)
```
remove?
```
    quantize_(linear, apply_quant)
    ql = linear
else:
    ql = apply_quant(linear)
```
once we migrate all functions to configs we won't need this check anymore right? Should we add a TODO to remove it?
```
@@ -0,0 +1,10 @@
import abc
```
I feel we can just add this to `torchao/config.py` without making a new core directory. No strong preference though.
slightly stronger preference: I feel "core" shouldn't appear in the import, so users should be able to do this:

```
from torchao.config import AOBaseConfig
```

but we can do that by adding this to `__init__.py`
```
@@ -1185,7 +1185,7 @@ def test_qat_prototype_bc(self):
    @unittest.skipIf(
        not TORCH_VERSION_AT_LEAST_2_4, "skipping when torch version is 2.4 or lower"
    )
-   def test_quantize_api(self):
+   def test_quantize_api_standalone(self):
```
do we need this change?
```
@@ -315,22 +328,31 @@ def from_intx_quantization_aware_training() -> Callable:
    )
```
need to update the docstring here in the previous line
```
@@ -269,37 +287,32 @@ def intx_quantization_aware_training(
    `torch.nn.Embedding` with an activation config, then we will raise
```
I can't comment up there but need to update the docstring in L282
""" | ||
If a workflow config inherits from this then `quantize_` knows | ||
how to a apply it to a model. | ||
""" |
should we add a paragraph here or under `quantize_` about how this is related to `register_quantize_module_handler`, so users who wish to add their own configs know how to do it?
```
    handler,
    _is_linear if filter_fn is None else filter_fn,
    device=device,
    extra_args=(config,),
```
alternatively we can pass in a lambda, then we don't need to add `extra_args` or pass in `config`:

```
replace_fn = lambda mod: handler(mod, config)
```

seems simpler
I'm really not a fan of passing callables around, it's easy when the callable is simple but easy for future people to tack ugly stuff on and increase complexity. Non-callable args make it harder to make the code ugly in the future.
oh sorry, I meant pass in `replace_fn` instead of `handler`, like:

```
replace_fn = lambda mod: handler(mod, config)
_replace_with_custom_fn_if_matches_filter(
    model,
    replace_fn,
    _is_linear if filter_fn is None else filter_fn,
    device=device,
)
```

either way you're passing a callable
```
] = {}


def register_quantize_module_handler(config_type):
```
nit: add some docstrings here to explain how this is related to `quantize_` and `AOBaseConfig`?
summary
This PR enables passing per-workflow arguments to `quantize_` directly, without wrapping them in a `Callable`.

Motivation: passing direct configuration is intuitive and widely used in similar contexts across various projects. Passing configuration wrapped in a callable is IMO not intuitive, hard to understand and debug, and we have evidence that it pushes a portion of users away from building on top of torchao.
We will keep the old callable syntax supported by `quantize_` for one release cycle, and delete it afterwards. We will keep the old names as aliases for the new names going forward (example: `int4_weight_only` as an alias of `Int4WeightOnlyConfig`) to keep existing callsites working without changes.

user facing API changes
signature of quantize_
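A rough sketch of the shape of the new signature, assuming the `AOBaseConfig` import path shown below and keeping the existing `filter_fn` and `device` parameters of `quantize_` (the final parameter list may differ):

```
from typing import Callable, Optional, Union

import torch

# assumed import path; this PR adds AOBaseConfig under a new torchao/core/ directory
from torchao.core.config import AOBaseConfig


def quantize_(
    model: torch.nn.Module,
    # new: a config instance is accepted directly; the old Callable form
    # stays supported for one release cycle
    config: Union[AOBaseConfig, Callable[[torch.nn.Module], torch.nn.Module]],
    filter_fn: Optional[Callable[[torch.nn.Module, str], bool]] = None,
    device: Optional[torch.device] = None,
) -> None:
    ...
```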
usage example
An example for `int4_weight_only`:
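A sketch of the before/after usage, assuming `Int4WeightOnlyConfig` is importable from `torchao.quantization`; the `group_size` value is illustrative:

```
import copy

import torch
from torchao.quantization import Int4WeightOnlyConfig, int4_weight_only, quantize_

m_old = torch.nn.Sequential(torch.nn.Linear(128, 128)).cuda().to(torch.bfloat16)
m_new = copy.deepcopy(m_old)

# old syntax (kept for one release cycle): configuration wrapped in a callable
quantize_(m_old, int4_weight_only(group_size=32))

# new syntax: the configuration object is passed directly,
# and can be printed and inspected before use
config = Int4WeightOnlyConfig(group_size=32)
print(config)
quantize_(m_new, config)
```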
developer facing changes
See the PR details for examples; in short, each migrated workflow moves from a transform callable to a config class plus a registered module handler, as sketched below.
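As a rough illustration of that pattern: all workflow-specific names below are hypothetical, and the import paths for `AOBaseConfig` and `register_quantize_module_handler` are assumptions based on the files touched in this PR.

```
from dataclasses import dataclass

import torch

# assumed import paths based on the files touched in this PR
from torchao.core.config import AOBaseConfig
from torchao.quantization.quant_api import register_quantize_module_handler


@dataclass
class MyWorkflowConfig(AOBaseConfig):
    """Hypothetical workflow config: plain, printable data with no callables."""

    group_size: int = 128


# quantize_ looks up the handler by config type instead of calling an
# opaque user-provided callable on each matching module
@register_quantize_module_handler(MyWorkflowConfig)
def _my_workflow_transform(
    module: torch.nn.Module, config: MyWorkflowConfig
) -> torch.nn.Module:
    # hypothetical transform: swap the weight for a quantized tensor subclass here
    return module
```

With this in place, `quantize_(model, MyWorkflowConfig(group_size=64))` would route each matching module through `_my_workflow_transform`.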
current status
The current PR migrates three user facing workflows:
* `int4_weight_only`
* `intx_quantization_aware_training`
* `from_intx_quantization_aware_training`

I've chosen to migrate one PTQ and two QAT workflows to prove the generality of the new flow while keeping this PR's LOC low to make it easier to review. We will migrate the rest of the workflows in future PRs, detailed below:
After a release cycle, we will delete the old callable syntax.
Test Plan: see the pytest commands in the commit message above.