
[PEFT] Fix scale unscale with LoRA adapters #5417

Merged 13 commits into huggingface:main on Oct 21, 2023

Conversation

younesbelkada (Contributor) commented on Oct 16, 2023

What does this PR do?

Fixes: #5414

Needs: huggingface/peft#1029

Before this PR, weighted multi-adapter inference was broken because module.scale_layer re-initialized the module's scale from its original value instead of multiplying the existing value by the new factor.
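For context, a minimal reproduction of the affected call path (a hedged sketch: the LoRA repositories, adapter names, and weights below are placeholders, not taken from the linked issue):

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder checkpoint and LoRA repos, for illustration only.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/lora-style", adapter_name="style")
pipe.load_lora_weights("user/lora-subject", adapter_name="subject")

# Weighted multi-adapter inference: before this PR, the per-adapter weights
# were applied incorrectly because scale_layer reset the scale instead of
# multiplying it.
pipe.set_adapters(["style", "subject"], adapter_weights=[0.7, 0.8])
image = pipe("a photo of a dog", num_inference_steps=25).images[0]
```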

The fix is to:

1. Multiply the existing scale by the new scale factor.
2. Unscale by dividing the scale by the same factor to retrieve the previous value.
3. Use set_scale to compute the correct scale based on (alpha / r); this method is used inside set_weights_and_activate_adapters.

A minimal sketch of this behaviour is included below.
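The sketch below is illustrative only, assuming a single-adapter toy layer; the real logic lives in PEFT's LoRA layer, tracks scales per adapter, and its method signatures differ:

```python
class ToyLoraLayer:
    """Toy stand-in for a PEFT LoRA layer, illustrating the scale/unscale fix."""

    def __init__(self, lora_alpha: float, r: int):
        self.lora_alpha = lora_alpha
        self.r = r
        self.scaling = lora_alpha / r  # base scale

    def scale_layer(self, weight: float):
        # 1. Multiply the existing scale by the new factor
        #    (previously the scale was re-initialized, which broke weighted inference).
        self.scaling *= weight

    def unscale_layer(self, weight: float = None):
        # 2. Divide by the same factor to retrieve the previous value;
        #    with no weight, fall back to the base alpha / r scale.
        if weight is None:
            self.scaling = self.lora_alpha / self.r
        else:
            self.scaling /= weight

    def set_scale(self, weight: float):
        # 3. Set the scale directly to weight * (alpha / r), as used from
        #    set_weights_and_activate_adapters.
        self.scaling = weight * self.lora_alpha / self.r
```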

cc @pacman100 @BenjaminBossan @patrickvonplaten

younesbelkada changed the title from "[PEFT] Fix scale unscale v1" to "[PEFT] Fix scale unscale with LoRA adapters" on Oct 16, 2023
younesbelkada marked this pull request as ready for review on October 16, 2023, 22:53
sayakpaul (Member) left a comment

Thanks much!

I think we need to rerun the code in the docs to make sure the results are correct, right?

Good to merge for me after the conflicts have been resolved.

pacman100 (Contributor) left a comment

Thank you @younesbelkada for working on this 🤗, left a few comments.

src/diffusers/utils/peft_utils.py (review thread):
"""
Removes the previously passed weight given to the LoRA layers of the model.

Args:
model (`torch.nn.Module`):
The model to scale.
weight (`float`):
The weight to be given to the LoRA layers.
weight (`float`, *optional*):
Contributor:

This weight needs to be passed to module.unscale_layer() in line 110

younesbelkada (Contributor, Author):

ah yes, nice catch!
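For reference, a hedged sketch of how the helper could forward the weight (the actual body of unscale_lora_layers in src/diffusers/utils/peft_utils.py may differ; the import path and the unscale_layer signature are assumptions):

```python
from peft.tuners.tuners_utils import BaseTunerLayer  # assumed import path

def unscale_lora_layers(model, weight=None):
    """Removes the previously passed weight given to the LoRA layers of the model."""
    for module in model.modules():
        if isinstance(module, BaseTunerLayer):
            # Forward the weight so the layer divides its scale by the same
            # factor that scale_lora_layers multiplied it by.
            module.unscale_layer(weight)
```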

```diff
@@ -184,7 +186,7 @@ def set_weights_and_activate_adapters(model, adapter_names, weights):
                     module.set_adapter(adapter_name)
                 else:
                     module.active_adapter = adapter_name
-                module.scale_layer(weight)
+                module.set_scale(adapter_name, weight)
```
Contributor:

What happens if adapter_name is not available for this lora layer? Is this handled in PEFT to do nothing if adapter is unavailable?

Member:

I think PEFT errors out in case that is hit. @younesbelkada can confirm.

From what I recall, users are required to provide adapter_name when dealing with multiple LoRAs; otherwise, for the single-adapter case, the default name is used.

younesbelkada (Contributor, Author):

I think we should raise an error, but on the PEFT side; let me open a PR there. That should be a blocker for merging this PR.

younesbelkada (Contributor, Author):

huggingface/peft#1034 should take care of that

Member:

huggingface/peft#1034 is merged. So, no blockers.
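For illustration, one way such a guard could look inside a LoRA layer's set_scale (purely a sketch of the idea discussed above; the class, attribute layout, and error message are hypothetical, and the actual behaviour implemented in huggingface/peft#1034 may differ):

```python
class ToyMultiAdapterLoraLayer:
    """Hypothetical multi-adapter layer; only meant to illustrate the guard."""

    def __init__(self):
        self.scaling = {}     # adapter_name -> current scale
        self.lora_alpha = {}  # adapter_name -> alpha
        self.r = {}           # adapter_name -> rank

    def set_scale(self, adapter_name: str, scale: float):
        # Fail loudly if the adapter is unknown to this layer instead of
        # silently doing nothing (the concern raised in this thread).
        if adapter_name not in self.scaling:
            raise ValueError(f"Adapter {adapter_name!r} is not known to this layer.")
        self.scaling[adapter_name] = scale * self.lora_alpha[adapter_name] / self.r[adapter_name]
```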

pacman100 (Contributor) left a comment

Thank you @younesbelkada for the fixes, LGTM! 🚀
As Sayak mentioned, could you please rerun the docs and notebook example and check that things are working as expected?

```diff
@@ -1106,7 +1178,7 @@ def test_integration_logits_multi_adapter(self):
             output_type="np",
         ).images
         predicted_slice = images[0, -3:, -3:, -1].flatten()
-        expected_slice_scale = np.array([0.5977, 0.5985, 0.6039, 0.5976, 0.6025, 0.6036, 0.5946, 0.5979, 0.5998])
+        expected_slice_scale = np.array([0.5888, 0.5897, 0.5946, 0.5888, 0.5935, 0.5946, 0.5857, 0.5891, 0.5909])
```
Member:

@younesbelkada why are these values being changed?

younesbelkada (Contributor, Author):

Because the previous API was broken, the values were wrong :/ so this PR updates the slow tests with the correct values.

```diff
@@ -1120,7 +1192,7 @@ def test_integration_logits_multi_adapter(self):
             output_type="np",
         ).images
         predicted_slice = images[0, -3:, -3:, -1].flatten()
-        expected_slice_scale = np.array([0.54625, 0.5473, 0.5495, 0.5465, 0.5476, 0.5461, 0.5452, 0.5485, 0.5493])
+        expected_slice_scale = np.array([0.5456, 0.5466, 0.5487, 0.5458, 0.5469, 0.5454, 0.5446, 0.5479, 0.5487])
```
Member:

@younesbelkada why are these values being changed?

sayakpaul (Member) left a comment

LGTM! Once the PR is merged, I will take care of the docs.

younesbelkada (Contributor, Author):

Thanks @sayakpaul !

sayakpaul (Member):

Regarding the failing "inconsistency test": indeed, it seems weird to me, since running make fix-copies doesn't yield any changes for me locally.

younesbelkada (Contributor, Author):

The test is also failing on main (https://github.com/huggingface/diffusers/actions/runs/6566415588/job/17837040326), so perhaps we can merge this PR as-is.

sayakpaul (Member):

@patrickvonplaten could you give this a review too?

patrickvonplaten (Contributor) left a comment

Thanks!

sayakpaul merged commit bc7a4d4 into huggingface:main on Oct 21, 2023 (11 checks passed).
mhetrerajat pushed a commit to mhetrerajat/diffusers that referenced this pull request Oct 23, 2023
* fix scale unscale v1

* final fixes + CI

* fix slow trst

* oops

* fix copies

* oops

* oops

* fix

* style

* fix copies

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
kashif pushed a commit to kashif/diffusers that referenced this pull request Nov 11, 2023
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024

Successfully merging this pull request may close these issues:

- [PEFT] set_adapters with weights argument currently broken

4 participants