Add static quantization as an example for calibration flow #487

Merged · 1 commit merged into pytorch:main on Jul 17, 2024

Conversation

jerryzh168 (Contributor)

Summary:
So far the quantization flow API we provide (`quantize_`) does not require calibration (calibrating a model with sample data). This PR adds a static quantization example that demonstrates a calibration flow:

    1. first prepare the model for calibration
    2. calibrate the prepared model with sample data
    3. convert the calibrated model to a quantized model
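For illustration, here is a minimal, self-contained sketch of this three-step pattern in plain PyTorch; the `CalibratingLinear` wrapper and the scale derivation are toy stand-ins written for this description, not the API added in this PR:

    import torch

    class CalibratingLinear(torch.nn.Module):
        # step 1 helper: wraps a float linear and records the activation range
        def __init__(self, linear: torch.nn.Linear):
            super().__init__()
            self.linear = linear
            self.register_buffer("act_min", torch.tensor(float("inf")))
            self.register_buffer("act_max", torch.tensor(float("-inf")))

        def forward(self, x):
            self.act_min = torch.minimum(self.act_min, x.detach().min())
            self.act_max = torch.maximum(self.act_max, x.detach().max())
            return self.linear(x)

    m = torch.nn.Sequential(torch.nn.Linear(16, 8))
    m[0] = CalibratingLinear(m[0])      # 1. prepare the model for calibration

    for _ in range(4):                  # 2. calibrate with sample data
        m(torch.randn(2, 16))

    # 3. convert: use the observed range to derive a static activation scale
    act_scale = (m[0].act_max - m[0].act_min) / 255.0
    print("activation scale:", act_scale.item())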

Test Plan:
python torchao/prototype/calibration_flow/static_quant.py

Reviewers:

Subscribers:

Tasks:

Tags:


pytorch-bot bot commented Jul 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/487

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 780c1f9 with merge base aef7e09:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jul 8, 2024 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
jerryzh168 (Contributor, Author) commented Jul 8, 2024:

cc @drisspg @vkuzo this flow can be used for smoothquant as well, probably also float8

also QAT for static quant, cc @andrewor14

replacement_fn = lambda m: QuantizedLinear.from_calibrating(m)
_replace_with_custom_fn_if_matches_filter(model, replacement_fn, _is_calibrating_linear)

act_obs = MinMaxObserver(dtype=torch.uint8, qscheme=torch.per_tensor_affine).to("cuda")
Contributor:

I'd vote for rewriting this without having to use old concepts like qscheme, instead of trying to reuse the code.

jerryzh168 (Contributor, Author) replied Jul 9, 2024:

Agree, this is temporary; the end goal is to implement a generic observer for blockwise quantization.

Member:

What's the blocker for getting rid of the torch.ao dependency now? Are there toy observers we can include for now?

jerryzh168 (Contributor, Author):

I think in the end we want to define a general observer that works with the new quant primitives, so that we can replace this.

We can define a toy observer, I think, although I'm not sure why we want to get rid of the torch.ao dependency since we already depend on PyTorch.
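For reference, a toy per-tensor min/max observer along the lines discussed here could look like the sketch below; it is an illustrative stand-in written for this discussion, not the generic blockwise observer planned above:

    import torch

    class ToyMinMaxObserver(torch.nn.Module):
        # toy per-tensor affine observer: tracks min/max during calibration,
        # then derives a scale and zero_point for a uint8 range
        def __init__(self, quant_min: int = 0, quant_max: int = 255):
            super().__init__()
            self.quant_min, self.quant_max = quant_min, quant_max
            self.register_buffer("min_val", torch.tensor(float("inf")))
            self.register_buffer("max_val", torch.tensor(float("-inf")))

        def forward(self, x):
            self.min_val = torch.minimum(self.min_val, x.detach().min())
            self.max_val = torch.maximum(self.max_val, x.detach().max())
            return x

        def calculate_qparams(self):
            # stretch the range to include zero so it is exactly representable
            min_v = torch.clamp(self.min_val, max=0.0)
            max_v = torch.clamp(self.max_val, min=0.0)
            scale = torch.clamp(
                (max_v - min_v) / (self.quant_max - self.quant_min), min=1e-8
            )
            zero_point = self.quant_min - torch.round(min_v / scale)
            zero_point = zero_point.clamp(self.quant_min, self.quant_max)
            return scale, zero_point.to(torch.int32)

    obs = ToyMinMaxObserver()
    obs(torch.randn(4, 16))
    print(obs.calculate_qparams())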

Member:

It's more that torchao is supposed to be a full replacement. I'd make an exception if some piece of code were very large and complex, but that doesn't seem to be the case here.

jerryzh168 (Contributor, Author):

@vkuzo @msaroufim I added a comment to update observers later, please take a look again

torchao/dtypes/affine_quantized_tensor.py (resolved)
quant_min: Optional[int] = None,
quant_max: Optional[int] = None,
zero_point_domain: ZeroPointDomain = ZeroPointDomain.INT,
extended_layout: str = "plain",
Member:

this seems like it's using the old API? So I'm guessing you're landing this PR first?

jerryzh168 (Contributor, Author):

yeah, we can land the other PR first and then update this

original_shape = input_float.shape
if extended_layout == "tensor_core_tiled":
orig_out_features, orig_in_features = input_float.shape
in_features = find_multiple(orig_in_features, 1024)
Member:

Re the 1024 and 8 padding heuristics: this is what the NVIDIA docs say: https://x.com/marksaroufim/status/1621580671776092160

This heuristic tends to be dtype- and device-dependent; it's possible 1024 and 8 are fine, but that would be right mostly out of luck.

jerryzh168 (Contributor, Author):

this should only be used by the tinygemm use case I think, so we should probably add some verifications
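For context, the padding helper in the excerpt above behaves like this minimal reimplementation (written here for illustration): it rounds a dimension up to the next multiple of k so the packed weight meets the tinygemm layout's alignment requirement.

    def find_multiple(n: int, k: int) -> int:
        # round n up to the nearest multiple of k (returns n if already aligned)
        if n % k == 0:
            return n
        return n + k - (n % k)

    # in_features is padded to a multiple of 1024 for the tensor_core_tiled
    # layout, e.g. 4096 stays as-is while 4097 pads up to 5120
    assert find_multiple(4096, 1024) == 4096
    assert find_multiple(4097, 1024) == 5120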

torchao/prototype/calibration_flow/static_quant.py (outdated, resolved)

m(*example_inputs)

after_obs = m(*example_inputs)
to_quantized_(m)
Member:

Not sure I love the to_calibrating and to_quantized names; why not calibrate() and quantize()?

jerryzh168 (Contributor, Author):

I think calibrate is the name for the calibration process rather than the model transformation step, and quantize is already used by the weight-only and dynamic quant flows.

msaroufim (Member) left a review comment:

Approving, assuming we move the static_quant.py file to either docs/ or tutorials/ and do a fast follow to remove the torch.ao dependency.

As for renaming:

  • to_calibrating -> insert_observers()
  • to_quantized -> quantize_()

@@ -259,12 +259,12 @@ def insert_subclass(lin):

     return insert_subclass

-def quantize_(model: torch.nn.Module, apply_tensor_subclass: Callable[[torch.Tensor], torch.Tensor], filter_fn: Optional[Callable[[torch.nn.Module, str], bool]]=None, set_inductor_config: bool=True):
+def quantize_(model: torch.nn.Module, apply_tensor_subclass: Callable[[torch.nn.Module], torch.nn.Module], filter_fn: Optional[Callable[[torch.nn.Module, str], bool]]=None, set_inductor_config: bool=True):
jerryzh168 (Contributor, Author):

@msaroufim I made some changes to apply_tensor_subclass to accommodate static quant use cases, please take a look again
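To illustrate what the widened signature enables: apply_tensor_subclass now receives the whole module rather than just a weight tensor, so a static quant transform can read calibration state stored on the module. A minimal sketch, assuming a hypothetical act_scale attribute left behind by a calibration pass (the body is illustrative, not this PR's implementation):

    import torch

    def apply_static_quant(lin: torch.nn.Module) -> torch.nn.Module:
        # module-level transform: consults calibration state on the module
        # (the `act_scale` attribute is a hypothetical example), which a
        # Tensor -> Tensor transform could not see
        act_scale = getattr(lin, "act_scale", torch.tensor(1.0))
        w = lin.weight.detach()
        w_scale = w.abs().max() / 127.0
        q_w = (w / w_scale).round().clamp(-128, 127) * w_scale  # fake-quantize
        lin.weight = torch.nn.Parameter(q_w, requires_grad=False)
        lin.register_buffer("combined_scale", act_scale * w_scale)
        return lin

    lin = torch.nn.Linear(8, 4)
    lin.act_scale = torch.tensor(0.05)  # would come from a calibration pass
    apply_static_quant(lin)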

jerryzh168 merged commit 6dd82d8 into pytorch:main on Jul 17, 2024
13 checks passed
jerryzh168 deleted the static branch on July 17, 2024 at 20:02
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024