Add a developer guide for exporting to executorch #1219

Merged: 3 commits into pytorch:main on Nov 5, 2024

Conversation

@jerryzh168 (Contributor) commented Nov 4, 2024

Summary:
The main requirement for exporting a quantized model to ExecuTorch is that we preserve some high-level ops so they can be lowered to ExecuTorch ops. Examples of ops that are already preserved are quantize_affine/dequantize_affine/choose_qparams_affine, which ExecuTorch can pattern-match against. This PR adds an example of how to define and preserve a quantized embedding_byte op; the main utility function we use is `torchao.utils._register_custom_op`.
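
For illustration, here is a minimal hand-rolled sketch of what "define and preserve" means, written against `torch.library` directly rather than the `torchao.utils._register_custom_op` helper the tutorial uses. The "quant" namespace matches the `torch.ops.quant.embedding_byte` call in the output below, but the op signature and dequantization logic are assumptions, not the tutorial's exact code:

import torch
import torch.nn.functional as F
from torch.library import Library

# Register a high-level quantized embedding op under a custom "quant"
# namespace. A CompositeExplicitAutograd kernel is recorded as a single
# node during export tracing instead of being decomposed into aten ops.
quant_lib = Library("quant", "FRAGMENT")
quant_lib.define(
    "embedding_byte(Tensor weight, int[] scale_shape, Tensor scales, Tensor indices) -> Tensor"
)

def embedding_byte_impl(weight, scale_shape, scales, indices):
    # Assumed reference semantics: dequantize the int8 weight with its
    # scales, then fall back to a regular embedding lookup.
    dequant = weight.to(torch.float32) * scales.reshape(scale_shape)
    return F.embedding(indices, dequant)

quant_lib.impl("embedding_byte", embedding_byte_impl, "CompositeExplicitAutograd")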

The expected output is that after export, some high-level ops (such as the quantized embedding op) are preserved:

GraphModule(
  (0): Module(
    (parametrizations): Module(
      (weight): Module()
    )
  )
)



def forward(self, input):
    input, = fx_pytree.tree_flatten_spec(([input], {}), self._in_spec)
    _0_parametrizations_weight_original0 = getattr(self, "0").parametrizations.weight.original0
    _0_parametrizations_weight_original1 = getattr(self, "0").parametrizations.weight.original1
    input_1 = input
    embedding_byte = torch.ops.quant.embedding_byte.default(_0_parametrizations_weight_original0, [1, 128], _0_parametrizations_weight_original1, input_1);  _0_parametrizations_weight_original0 = _0_parametrizations_weight_original1 = input_1 = None
    return pytree.tree_unflatten((embedding_byte,), self._out_spec)
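
For completeness, a rough sketch of how an exported graph like the one above could be produced and inspected, continuing from the registration sketch in the summary. The toy module below calls the custom op directly; in the tutorial the op is wired in through the quantized weight representation instead, so treat this only as an assumed usage example:

import torch

class QuantizedEmbedding(torch.nn.Module):
    # Toy module whose forward calls the custom op directly.
    def __init__(self, vocab_size=10, dim=128):
        super().__init__()
        self.register_buffer(
            "qweight",
            torch.randint(-128, 127, (vocab_size, dim), dtype=torch.int8),
        )
        self.register_buffer("scales", torch.rand(1, dim))
        self.scale_shape = [1, dim]

    def forward(self, indices):
        return torch.ops.quant.embedding_byte(
            self.qweight, self.scale_shape, self.scales, indices
        )

m = QuantizedEmbedding()
example_inputs = (torch.randint(0, 10, (1, 8)),)
exported = torch.export.export(m, example_inputs)
# The custom op should appear as torch.ops.quant.embedding_byte.default
# in the exported graph instead of a pile of decomposed aten ops.
print(exported.graph_module.code)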

Test Plan:
python tutorials/developer_api_guide/export_to_executorch.py



@pytorch-bot (bot) commented Nov 4, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1219

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 34a4ad3 with merge base 3475aed:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Nov 4, 2024
@andrewor14 (Contributor) left a comment

Looks great! Can you also share the expected output of the tutorial in the PR description?

@jerryzh168 merged commit 000a490 into pytorch:main on Nov 5, 2024 (16 of 17 checks passed)
@jerryzh168 deleted the export-guide branch on November 5, 2024 04:02