
[PEFT] Adapt example scripts to use PEFT #5388

Merged
merged 38 commits into huggingface:main on Dec 7, 2023

Conversation

younesbelkada
Contributor

@younesbelkada younesbelkada commented Oct 13, 2023

What does this PR do?

Fixes: #5341

Wandb logs: https://wandb.ai/younesbelkada/text2image-fine-tune/runs/tj5k8xtm?workspace=user-younesbelkada

I have fine-tuned it on Pokemon blip captions and the model trained after 1200 steps can be found here: https://huggingface.co/ybelkada/sd-1.5-pokemon-lora-peft

cc @sayakpaul @patrickvonplaten

TODO:

  • Update the SDXL LoRA script + add peft dependency on example tests workflow

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@sayakpaul
Member

Left some comments. Thanks for initiating this!

@patrickvonplaten
Contributor

Before doing this change it would be good to have both a PEFT and a Transformers release to not force the user to install these packages from main

@younesbelkada
Contributor Author

Before doing this change it would be good to have both a PEFT and a Transformers release to not force the user to install these packages from main

OK I will push the changes for SDXL and wait until the next release of PEFT before merging this PR

@younesbelkada
Contributor Author

I can confirm the scripts work fine under peft + diffusers main! I'll mark the PR as ready for review once we do a release of PEFT.

@younesbelkada
Contributor Author

This PR is ready for review! Let me know if you want me to work on saving adapter configs here; otherwise I can do it as a follow-up PR. Please also see the attached wandb logs in the PR description above.

@sayakpaul
Member

@patrickvonplaten this should be more or less ready to be merged, especially with huggingface/peft#1189.

@BenjaminBossan if you would like to review the PEFT-related changes introduced in this PR, that would be awesome.

Here's my Colab Notebook: https://colab.research.google.com/gist/sayakpaul/e112cc40288c5f0563efa4034dd50b72/scratchpad.ipynb.

unet_lora_config = LoraConfig(
    r=args.rank,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0", "add_k_proj", "add_v_proj"],
)
@younesbelkada in your previous iterations, we were missing "to_out.0" in target_modules for some scripts. I have added that.
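For context on why init_lora_weights="gaussian" still leaves the model unchanged at step 0, here is a rough sketch of the LoRA update math (dimensions and the exact init scale are illustrative; PEFT draws lora_A from a normal distribution and zero-initializes lora_B):

```python
import numpy as np

# Hypothetical small dimensions; the real UNet attention projections are much larger.
d_out, d_in, rank, lora_alpha = 8, 8, 4, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))         # frozen base weight of a targeted layer
A = rng.normal(0.0, 1.0 / rank, (rank, d_in))  # "gaussian"-initialized lora_A
B = np.zeros((d_out, rank))                    # lora_B starts at zero

# Effective weight seen by the forward pass: W + (alpha / r) * B @ A
W_eff = W + (lora_alpha / rank) * (B @ A)

# Because lora_B is zero-initialized, the adapter is a no-op before any training.
assert np.allclose(W_eff, W)
```

Only A and B are trained, so per targeted layer the adapter adds rank * (d_in + d_out) parameters instead of d_in * d_out.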

@BenjaminBossan
Member

From the PEFT-side of things, this looks pretty good, thanks Sayak.

Just a consideration: Right now, the requirements include a source install of PEFT. This works for the moment but could be dangerous in the future, in case we add something to PEFT main which breaks the examples (of course, that wouldn't be intentional, but these things can happen). So I wonder if it would be more cautious to fix the hash of the PEFT install and add a comment that we should change the dependency to a proper PEFT version once the next PEFT release is out (most likely, that would be >=0.7.0).

@sayakpaul
Member

So I wonder if it would be more cautious to fix the hash of the PEFT install and add a comment that we should change the dependency to a proper PEFT version once the next PEFT release is out (most likely, that would be >=0.7.0).

@BenjaminBossan good point. Could you get me the required hash here? I will resolve ASAP.

@patrickvonplaten
Contributor

I'd like to wait a bit until PEFT has done a release to make it easier for users (think it doesn't hurt us to wait a tad more here no?)

Does the training now match more or less (e.g. is the new init used?).

@BenjaminBossan
Member

Could you get me the required hash here? I will resolve ASAP.

Latest hash is 8298f1a3668604ac9bc3f6e28b24e8eb554891a1, so adding @ + the hash to the requirement should do the trick. This is of course not required if we do:
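Concretely, a commit-pinned requirement would look something like the following (a sketch using the hash above; pip supports such direct Git references):

```
# requirements.txt
peft @ git+https://github.com/huggingface/peft.git@8298f1a3668604ac9bc3f6e28b24e8eb554891a1
```

Once the next release is out, this can be relaxed to a plain version specifier such as peft>=0.7.0.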

I'd like to wait a bit until PEFT has done a release to make it easier for users

@sayakpaul
Member

Does the training now match more or less (e.g. is the new init used?).

@patrickvonplaten yes it does. Have verified it. This PR uses the new init method from peft too. I don't mind waiting either.

@younesbelkada
Contributor Author

PEFT 0.7.0 has been released! https://github.com/huggingface/peft/releases/tag/v0.7.0
I have updated the requirements file to reflect the correct PEFT version - requesting a final review cc @patrickvonplaten @sayakpaul

@patrickvonplaten
Contributor

Ok for me to add once @sayakpaul gives the green light

@sayakpaul
Member

Simply amazing here!

@sayakpaul sayakpaul merged commit c271731 into huggingface:main Dec 7, 2023
14 checks passed
@younesbelkada younesbelkada deleted the adapt-example-peft branch December 7, 2023 07:20
donhardman pushed a commit to donhardman/diffusers that referenced this pull request Dec 18, 2023
* adapt example scripts to use PEFT

* Update examples/text_to_image/train_text_to_image_lora.py

* fix

* add for SDXL

* oops

* make sure to install peft

* fix

* fix

* fix dreambooth and lora

* more fixes

* add peft to requirements.txt

* fix

* final fix

* add peft version in requirements

* remove comment

* change variable names

* add few lines in readme

* add to reqs

* style

* fix issues

* fix lora dreambooth xl tests

* init_lora_weights to gaussian and add out proj where missing

* amend requirements.

* amend requirements.txt

* add correct peft versions

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
Development

Successfully merging this pull request may close these issues.

[PEFT] Update the example scripts to reflect PEFT API
5 participants