Improve FP6-LLM 2+4bit weight splitting + user API #279

Merged: 24 commits into pytorch:main on May 26, 2024

Conversation

@gau-nernst (Collaborator) commented May 25, 2024

Addresses #208.

### 2+4bit weight splitting

Port https://github.com/pytorch/ao/blob/4ca3985be603e6496da7ec57adf1942c8b32a78e/torchao/csrc/fp6_llm/weight_prepacking.cpp to pure PyTorch.
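For intuition, below is a minimal sketch of the bit-level splitting idea in pure PyTorch. This is not the actual torchao implementation, which additionally permutes elements into the tensor-core-friendly layout the CUDA kernel expects; only the 2-bit/4-bit separation is shown.

```python
# Minimal sketch of 2+4bit splitting, assuming FP6 values are stored one per
# byte in the lower 6 bits of a uint8 tensor whose length is a multiple of 4.
# The real torchao code also rearranges elements for the tensor-core layout.
import torch

def split_2_4bit_sketch(fp6_vals: torch.Tensor):
    msb = (fp6_vals >> 4) & 0b11   # top 2 bits of each 6-bit value
    lsb = fp6_vals & 0b1111        # bottom 4 bits of each 6-bit value

    # Pack four 2-bit fragments into one byte.
    m = msb.reshape(-1, 4)
    packed_2bit = (m[:, 0] << 6) | (m[:, 1] << 4) | (m[:, 2] << 2) | m[:, 3]

    # Pack two 4-bit fragments into one byte.
    l = lsb.reshape(-1, 2)
    packed_4bit = (l[:, 0] << 4) | l[:, 1]

    return packed_2bit, packed_4bit
```

Because this uses only vectorized tensor ops, the same code runs on CPU and CUDA, which is what enables the large CUDA speedup in the table below.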

Benchmark: FP16 weight of shape (8192, 8192), measured on a Ryzen 5600 (CPU) and a 4070Ti SUPER (GPU).

| device | op | time (ms) |
|---|---|---|
| CPU (num_threads=1) | original FP16->FP6 + original 2+4bit splitting | 1380.96 |
| CPU (num_threads=1) | new FP16->FP6 + original 2+4bit splitting | 616.285 |
| CPU (num_threads=1) | new FP16->FP6 + new 2+4bit splitting | 587.965 |
| CPU (num_threads=4) | original FP16->FP6 + original 2+4bit splitting | 1213.08 |
| CPU (num_threads=4) | new FP16->FP6 + original 2+4bit splitting | 334.502 |
| CPU (num_threads=4) | new FP16->FP6 + new 2+4bit splitting | 246.911 |
| CUDA | new FP16->FP6 + original 2+4bit splitting | 257.539 |
| CUDA | new FP16->FP6 + new 2+4bit splitting | 1.05908 |

Note:

- The original 2+4bit splitting only works on CPU. Thus, for the second-to-last row, FP16->FP6 is done on GPU, but the 2+4bit splitting is done on CPU.
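The PR does not include the benchmark script itself; a plausible way to measure numbers like these is `torch.utils.benchmark`. The function name `to_tc_float6_e3m2` appears in the commit history, but the import path used here is an assumption.

```python
import torch
from torch.utils.benchmark import Timer
# to_tc_float6_e3m2 comes from the commit history; this import path is assumed.
from torchao.quantization.fp6_llm import to_tc_float6_e3m2

weight = torch.randn(8192, 8192, dtype=torch.float16, device="cuda")
timer = Timer(
    stmt="to_tc_float6_e3m2(weight)",
    globals={"to_tc_float6_e3m2": to_tc_float6_e3m2, "weight": weight},
)
print(timer.blocked_autorange())  # reports per-call timing statistics
```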

### User API

```python
from torchao.quantization.fp6_llm import convert_fp6_llm

model = ...
convert_fp6_llm(model)  # nn.Linear modules will be replaced with Fp6LlmLinear in-place
```

I opted for a custom linear module instead of a tensor subclass, mainly because it is easier to implement. A rough sketch of the module-swap pattern is shown below.
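The internals of `convert_fp6_llm` are not shown above; this is only the usual recursive in-place replacement pattern, and `Fp6LlmLinear.from_float` is assumed here as the conversion entry point.

```python
import torch.nn as nn
from torchao.quantization.fp6_llm import Fp6LlmLinear  # class named in the PR

def convert_fp6_llm_sketch(model: nn.Module) -> None:
    # Walk the module tree and replace each nn.Linear child in-place.
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            # from_float is an assumed constructor; the real API may
            # build the module differently.
            setattr(model, name, Fp6LlmLinear.from_float(child))
        else:
            convert_fp6_llm_sketch(child)
```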

Note:

- Fp6LlmLinear will cast the input to FP16 and cast the output back to the original dtype (a sketch of this dtype handling follows below).
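A minimal sketch of the dtype handling that note describes; this is not the actual Fp6LlmLinear code, and the FP6 kernel call is reduced to a placeholder.

```python
import torch
import torch.nn as nn

class Fp6LinearDtypeSketch(nn.Module):
    """Illustrates only the input/output dtype casting described above."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out_dtype = x.dtype
        x = x.half()  # the FP6-LLM kernel consumes FP16 activations
        y = self._fp6_linear(x)
        return y.to(out_dtype)  # restore the caller's original dtype

    def _fp6_linear(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder for the call into the FP6-LLM CUDA kernel with the
        # pre-split 2-bit and 4-bit weight buffers.
        raise NotImplementedError
```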

pytorch-bot bot commented May 25, 2024

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/279

✅ No failures as of commit d798eaf with merge base 4ca3985.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 25, 2024
@gau-nernst gau-nernst mentioned this pull request May 26, 2024
@gau-nernst gau-nernst marked this pull request as ready for review May 26, 2024 08:41
@gau-nernst gau-nernst marked this pull request as draft May 26, 2024 09:38
@gau-nernst gau-nernst changed the title Improve FP6-LLM 2+4bit weight splitting Improve FP6-LLM 2+4bit weight splitting + user API May 26, 2024
@gau-nernst gau-nernst marked this pull request as ready for review May 26, 2024 15:16
@msaroufim msaroufim self-requested a review May 26, 2024 17:35
@msaroufim msaroufim merged commit 7511b1d into pytorch:main May 26, 2024
13 checks passed
@gau-nernst gau-nernst deleted the fp6_weight_split branch May 26, 2024 20:18
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024
* add annotation

* add weight splitting logic

* update from fp6_quant

* merge to_tc_float6_e3m2

* add more optimized version

* add some notes

* add from_tc_float6_e3m2

* add some docs

* make fp6_llm.py

* add test for linear

* fix fp6 llm

* switch to v2 since it's faster

* fix type hint for old python

* simplify further

* fix typing for old python

* add test

* eliminate indexing; faster on CUDA

* skip fp6_llm on cpu

* improve error message

* add support for extra batch dims

* cast output to original dtype

* fix precision error due to dtype