LoftQ: edit README.md and example files #1276

Merged
merged 5 commits into from
Dec 17, 2023
Conversation

yxli2123
Contributor

Hi,

I would like to update the README.md file and the example scripts for LoftQ.


@younesbelkada younesbelkada left a comment


Thank you! Left a single comment

```diff
@@ -184,54 +183,8 @@ def quantize_and_save():
     return base_model_dir, lora_model_dir


def load_loftq(base_model_path, lora_adapter_path):
```

Why has this been removed?


I think this function was supposed to confirm that everything works fine after the LoftQ weight initialization step. Hence, it is not a required step.
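For context, a verification step like the removed `load_loftq` function could look roughly like the sketch below. This is an illustrative reconstruction, not the exact code from the PR: it assumes that `quantize_and_save()` returns a base-model directory and a LoRA adapter directory, and it uses the public `transformers` and `peft` loading APIs (it requires those packages and the saved model artifacts, so it is not runnable standalone).

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

def load_loftq(base_model_path, lora_adapter_path):
    # Load the base model weights produced by the LoftQ initialization step.
    base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
    # Attach the LoRA adapter whose weights LoftQ initialized; if this
    # succeeds, the saved checkpoint round-trips correctly.
    model = PeftModel.from_pretrained(base_model, lora_adapter_path)
    return model
```

As the discussion notes, this is only a sanity check that the saved artifacts load back correctly; it is not required for the LoftQ workflow itself.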


Perfect!

@pacman100 pacman100 left a comment


Thank you @yxli2123 for updating README and examples for the LoftQ method, LGTM!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@younesbelkada younesbelkada merged commit 46a84bd into huggingface:main Dec 17, 2023
14 checks passed
BenjaminBossan added a commit to BenjaminBossan/peft that referenced this pull request Dec 18, 2023
Due to PR huggingface#1276, the bug that prevented use of LoftQ with 8bit
quantization has now been fixed. Therefore, the tests no longer need to
be skipped.

Note: I tested locally with a GPU and the tests passed.
BenjaminBossan added a commit that referenced this pull request Dec 18, 2023
Due to PR #1276, the bug that prevented use of LoftQ with 8bit
quantization has now been fixed. Therefore, the tests no longer need to
be skipped.
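For reference, LoftQ initialization is requested through PEFT's `LoraConfig`; the 8-bit case mentioned above corresponds to setting `loftq_bits=8`. A minimal configuration sketch (the rank, alpha, and target modules are illustrative values, not taken from this PR):

```python
from peft import LoftQConfig, LoraConfig

# Request 8-bit LoftQ initialization -- the case this fix enables.
loftq_config = LoftQConfig(loftq_bits=8)

lora_config = LoraConfig(
    r=16,                                  # illustrative LoRA rank
    lora_alpha=16,                         # illustrative scaling factor
    target_modules=["q_proj", "v_proj"],   # illustrative target modules
    init_lora_weights="loftq",             # initialize LoRA weights via LoftQ
    loftq_config=loftq_config,
)
```

This config fragment is then passed to `get_peft_model` together with the base model, as in the LoftQ examples updated by this PR.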