
eval script fixes #414

Merged 4 commits into main on Jun 21, 2024
Conversation

Contributor

@HDCharles HDCharles commented Jun 21, 2024

Additional script fixes

Summary:

int4wo had an issue with device swap after the quantization API (the device needs to be set before calling quantize; see the sketch below)
int4wo-gptq had an issue with the kv_cache model var not being set correctly (now set in the GPTQ code)
eval in general had an issue with lm_eval 0.4.2 (updates to the tokenizer and eval harness)
   https://github.com/pytorch/ao/issues/404
[not eval] autoquant docs were not showing up (added __all__ to autoquant); made autoquant low-level APIs private
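
A minimal sketch of the device-ordering fix described in the first item above, assuming a `model` and target `device` are already defined and using the `quantize`/`int4_weight_only` API that this PR's diffs call; it is illustrative, not the exact PR change:

```python
from torchao.quantization.quant_api import quantize, int4_weight_only

# Move the model to the target device *before* applying the quantization API;
# int4 weight-only quantization packs weights for whatever device the model is
# on at quantize() time, so swapping devices afterwards was the reported issue.
model = model.to(device)
quantize(model, int4_weight_only(groupsize=64))  # 64 matches the int4wo-64 test plan below
```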

Test Plan:

python eval.py -q int4wo-64 --compile
wikitext: {'word_perplexity,none': 12.842987954345306, 'word_perplexity_stderr,none': 'N/A', 'byte_perplexity,none': 1.611855472207904, 'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none': 0.6887223897240059, 'bits_per_byte_stderr,none': 'N/A', 'alias': 'wikitext'}

Reviewers:

Subscribers:

Tasks:

Tags:


pytorch-bot bot commented Jun 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/414

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 79b0c1d with merge base ef1e745 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jun 21, 2024
Summary:

int4wo had an issue with device swap after quantization
int4wo-gptq had an issue with....

Test Plan:

python eval.py -q int4wo-64 --compile
wikitext: {'word_perplexity,none': 12.842987954345306, 'word_perplexity_stderr,none': 'N/A', 'byte_perplexity,none': 1.611855472207904, 'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none': 0.6887223897240059, 'bits_per_byte_stderr,none': 'N/A', 'alias': 'wikitext'}

python eval.py -q int4wo-64-gptq --compile

Reviewers:

Subscribers:

Tasks:

Tags:
Summary:

Test Plan:

two python generate.py --checkpoint_path $CHECKPOINT_PATH/$MODEL_REPO/model.pth --compile --quantization autoquant --write_result benchmark_results.txt

two python eval.py -q int4wo

Reviewers:

Subscribers:

Tasks:

Tags:
@@ -60,17 +60,18 @@ def run_evaluation(

    if quantization:
        if "int8wo" in quantization:
-           quantize(model, int8wo())
+           quantize(model, int8_weight_only())
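
For reference, a sketch of the new-style call from this hunk together with the import it relies on (the import path is taken from the generate.py hunk later in this PR; the snippet is illustrative, not the file's full code):

```python
from torchao.quantization.quant_api import quantize, int8_weight_only

# int8_weight_only() is the public name this hunk switches to; quantize()
# applies it to the model's eligible linear layers in place.
quantize(model, int8_weight_only())
```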
Contributor

Does this need to be compatible with torch 2.3 and below? If so, we could define similar helpers:

def _int8wo_api(mod):
    if TORCH_VERSION_AFTER_2_4:
        quantize(mod, int8_weight_only())
        unwrap_tensor_subclass(mod)
    else:
        change_linear_weights_to_int8_woqtensors(mod)

def _int8da_int8w_api(mod):
    if TORCH_VERSION_AFTER_2_4:
        quantize(mod, int8_dynamic_activation_int8_weight())
        unwrap_tensor_subclass(mod)
    else:
        change_linear_weights_to_int8_dqtensors(mod)

def _int4wo_api(mod):
    if TORCH_VERSION_AFTER_2_4:
        quantize(mod, int4_weight_only())
        unwrap_tensor_subclass(mod)
    else:
        change_linear_weights_to_int4_woqtensors(mod)
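
A brief usage sketch of how such version-gated helpers could be dispatched from the eval script; the helper names come from the suggestion above, while the quantization strings and the dispatch itself are assumptions for illustration, not the script's exact code:

```python
# Illustrative dispatch only; the real eval.py selects its quantization path differently.
if "int8wo" in quantization:
    _int8wo_api(model)
elif "int8dq" in quantization:
    _int8da_int8w_api(model)
elif "int4wo" in quantization:
    _int4wo_api(model)
```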

Contributor Author

@HDCharles HDCharles Jun 21, 2024

I think it's mostly for our own testing; not sure if that's needed.

@@ -189,21 +189,21 @@ def main(
if quantization:
from torchao.quantization.quant_api import (
Contributor

Can we dedup the quant code in eval.py and generate.py?

Contributor Author

@HDCharles HDCharles Jun 21, 2024

Only a bit; it's probably more trouble than it's worth given the differences and the need to handle autoquant vs. GPTQ, etc.

        if "int4wo" in quantization:
            groupsize=int(quantization.split("-")[-1])
            assert groupsize in [32,64,128,256], f"int4wo groupsize needs to be one of [32,64,128,256] but got {groupsize}"
-           quantize(model, int4wo(groupsize=groupsize))
+           quantize(model, int4_weight_only(groupsize=groupsize))
Contributor

This is group_size since the last update, I think.
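
If that rename has indeed landed, the added line above would read as follows; this is a hedged sketch based on the reviewer's note, not a change made in this PR:

```python
# Keyword assumed from the comment above that the parameter is now group_size.
quantize(model, int4_weight_only(group_size=groupsize))
```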

Contributor Author

I'll fix it in another PR.

@HDCharles HDCharles merged commit 9dc2c11 into main Jun 21, 2024
13 checks passed
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024