
Bitpacking #291

Merged: 32 commits, May 30, 2024
Conversation

@vayuda (Collaborator) commented May 29, 2024

Based on this issue: #284

Adding this first iteration of packing/unpacking algorithms to support lower-bit dtypes into prototype/.

pytorch-bot bot commented May 29, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/291

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b01550f with merge base 5485929:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 29, 2024

Inputs:
data: torch.Tensor - a tensor of unpacked elements of a small dtype.
container_size: int - the size of the large dtype in bits.
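The inputs above can be sketched in code. This is a minimal illustration of the pack/unpack idea, assuming 2-bit values held in a uint8 tensor and an 8-bit container; the function names and exact layout are assumptions for illustration, not necessarily what the prototype implements:

```python
import torch

def pack_uint2(data: torch.Tensor) -> torch.Tensor:
    """Pack 2-bit values (held in a uint8 tensor) four-per-byte into uint8."""
    d = data.reshape(-1, 4)
    # Element i of each group occupies bits [2i, 2i+1] of the container byte.
    return (d[:, 0] | (d[:, 1] << 2) | (d[:, 2] << 4) | (d[:, 3] << 6)).to(torch.uint8)

def unpack_uint2(packed: torch.Tensor) -> torch.Tensor:
    """Inverse of pack_uint2: recover the 2-bit values, one uint8 per element."""
    return torch.stack([(packed >> s) & 3 for s in (0, 2, 4, 6)], dim=1).reshape(-1)
```

For example, `pack_uint2(torch.tensor([1, 2, 3, 0], dtype=torch.uint8))` yields a single byte, and unpacking it recovers the original four values. A general implementation would parameterize the element width and `container_size` rather than hard-coding 2 and 8.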
A collaborator commented:
Just curious: container_size can be determined from data.dtype, right? e.g. uint8 -> 8, uint16 -> 16 (there is also torch.iinfo: https://pytorch.org/docs/stable/type_info.html#torch.torch.iinfo).
Also, is it assumed that data.dtype has container_size bits? What if data uses a larger or smaller bit width than container_size? e.g. int4 values stored in int32, then a request to pack into int8. Depending on your assumptions about the inputs, perhaps some kind of type checking and/or type casting would be good.
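The reviewer's suggestion can be illustrated with torch.iinfo, which reports the bit width of an integer dtype. The helper name below is hypothetical, not from the PR:

```python
import torch

def infer_container_size(data: torch.Tensor) -> int:
    # Hypothetical helper: derive the container width from the tensor's dtype
    # instead of taking it as an argument, e.g. uint8 -> 8, int32 -> 32.
    return torch.iinfo(data.dtype).bits
```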

@msaroufim (Member) left a comment:


Nice first step. Let's keep iterating on cuda mode to figure out how to promote this to a stable feature.

@msaroufim msaroufim merged commit 38dad9b into pytorch:main May 30, 2024
13 checks passed
@vayuda vayuda deleted the bitpacking branch May 30, 2024 19:04
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024