LoftQ: edit README.md and example files #1276
Conversation
Thank you! Left a single comment
@@ -184,54 +183,8 @@ def quantize_and_save():
     return base_model_dir, lora_model_dir


-def load_loftq(base_model_path, lora_adapter_path):
Why has this been removed?
I think this function was meant to confirm that everything works correctly after the LoftQ weight initialization step, so it is not a required part of the example.
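For context, a sanity-check loader of this kind would load the quantized base model and attach the LoftQ-initialized LoRA adapter to confirm the two saved artifacts compose. The sketch below is a hypothetical reconstruction (the removed body is not shown in this diff); it assumes the standard `transformers` and `peft` loading APIs, and the paths are placeholders produced by `quantize_and_save()`:

```python
# Hypothetical sketch of the removed load_loftq helper: verify that the
# quantized base model and the LoftQ-initialized LoRA adapter load together.
def load_loftq(base_model_path, lora_adapter_path):
    # Imports are kept inside the function so the sketch can be defined
    # without requiring transformers/peft at import time.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    # Load the quantized base model saved by quantize_and_save()
    base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
    # Attach the LoRA adapter produced by the LoftQ initialization step
    peft_model = PeftModel.from_pretrained(base_model, lora_adapter_path)
    return peft_model
```

Since the check only confirms that initialization succeeded, dropping it keeps the example focused on the required quantize-and-save step.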
Perfect!
Thank you @yxli2123 for updating README and examples for the LoftQ method, LGTM!
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Due to PR huggingface#1276, the bug that prevented using LoftQ with 8-bit quantization has been fixed, so the tests no longer need to be skipped. Note: I tested locally with a GPU and the tests passed.
Hi,
I would like to update the README.md file and the example scripts for LoftQ.