
Add a CPU nbit to float dequantization op that supports torch.quintMxN type and QuantizedCPU backend #2995

Closed

Conversation

excelle08 (Contributor)

Summary: Add a CPU n-bit-to-float dequantization operator, `torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfFrontToFloat`, to support dequantization of int4/int2 tensors that use the `torch.quintMxN` dtype and the `QuantizedCPU` backend. This is needed to support D61305982.

Differential Revision: D61305979
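For context, the operator name encodes the row layout used by this operator family: each quantized row carries a half-precision (fp16) scale and bias alongside its packed n-bit values, with "Front" suggesting they sit at the front of the row. The sketch below illustrates that idea in plain Python; the function names and exact byte layout are illustrative assumptions, not the actual FBGEMM kernel.

```python
import struct

def quantize_row_nbit(row, bit_rate=4):
    """Rowwise n-bit quantization with an fp16 scale and bias stored at the
    front of the row. The layout is an assumption for illustration only."""
    num_elems_per_byte = 8 // bit_rate
    qmax = (1 << bit_rate) - 1
    lo, hi = min(row), max(row)
    scale = (hi - lo) / qmax if hi > lo else 1.0
    bias = lo
    # "Front" layout: fp16 scale, then fp16 bias, then the packed payload.
    out = bytearray(struct.pack("<ee", scale, bias))
    for i in range(0, len(row), num_elems_per_byte):
        byte = 0
        for j in range(num_elems_per_byte):
            if i + j < len(row):
                q = round((row[i + j] - bias) / scale)
                q = max(0, min(qmax, q))     # clamp to the n-bit range
                byte |= q << (j * bit_rate)  # pack low-to-high within the byte
        out.append(byte)
    return bytes(out)

def dequantize_row_nbit(data, num_elems, bit_rate=4):
    """Inverse of the sketch above: read the fp16 scale/bias from the front,
    then unpack each n-bit value and apply q * scale + bias."""
    num_elems_per_byte = 8 // bit_rate
    mask = (1 << bit_rate) - 1
    scale, bias = struct.unpack("<ee", data[:4])
    payload = data[4:]
    out = []
    for i in range(num_elems):
        byte = payload[i // num_elems_per_byte]
        q = (byte >> ((i % num_elems_per_byte) * bit_rate)) & mask
        out.append(q * scale + bias)
    return out
```

With `bit_rate=4`, each byte of payload holds two values, so a row of N floats becomes 4 header bytes plus ceil(N/2) payload bytes; the real operator additionally handles whole tensors rowwise.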

netlify bot commented Aug 15, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

Name Link
🔨 Latest commit ec37950
🔍 Latest deploy log https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/66c7a78f874f9d000899f66c
😎 Deploy Preview https://deploy-preview-2995--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D61305979

excelle08 added a commit to excelle08/FBGEMM that referenced this pull request Aug 15, 2024
…N type and QuantizedCPU backend (pytorch#2995)

Summary:
Pull Request resolved: pytorch#2995

X-link: facebookresearch/FBGEMM#87

Add a CPU nbit to float dequantization operator `torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfFrontToFloat` to support dequantization of int4 / int2 tensors which are of `torch.quintMxN` dtype and `QuantizedCPU` backend. This is to support D61305982

Differential Revision: D61305979
excelle08 added a commit to excelle08/FBGEMM that referenced this pull request Aug 17, 2024
…N type and QuantizedCPU backend (pytorch#2995)

Summary:
Pull Request resolved: pytorch#2995

X-link: facebookresearch/FBGEMM#87

Add a CPU nbit to float dequantization operator `torch.ops.fbgemm.FusedNBitRowwiseQuantizedSBHalfFrontToFloat` to support dequantization of int4 / int2 tensors which are of `torch.quintMxN` dtype and `QuantizedCPU` backend. This is to support D61305982

Reviewed By: sryap

Differential Revision: D61305979
@facebook-github-bot (Contributor)

This pull request has been merged in 986e80c.
