
Do not save unet.unet keys when training LoRAs #5977

Closed
apolinario opened this issue Nov 29, 2023 · 3 comments · Fixed by #5991
Labels: bug (Something isn't working)

Comments

@apolinario
Collaborator

Describe the bug

When training LoRAs, the saved weights sometimes end up with doubled unet.unet keys, as in this case: https://huggingface.co/sayakpaul/new-lora-check-v15/blob/main/pytorch_lora_weights.safetensors

unet.unet.up_blocks.1.attentions.2.transformer_blocks.1.attn2.processor.to_v_lora.up.weight

While loading this format is supported in diffusers, it is probably undesirable to have such keys; they would be better off deprecated, making sure all keys have a single unet prefix.

(Internal Slack thread: https://huggingface.slack.com/archives/C03UQJENJTV/p1701261840645189)
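
For reference (not part of the original report), a minimal sketch of how one could confirm the doubled prefix in the checkpoint linked above, assuming huggingface_hub and safetensors are installed:

```python
# Hedged sketch: download the LoRA file referenced above and list the keys
# that carry the doubled "unet.unet." prefix.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download("sayakpaul/new-lora-check-v15", "pytorch_lora_weights.safetensors")
state_dict = load_file(path)

doubled = [k for k in state_dict if k.startswith("unet.unet.")]
print(f"{len(doubled)} of {len(state_dict)} keys use the doubled prefix")
print(doubled[:3])
```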

Reproduction

Training a LoRA with our example notebook/script yields the unet.unet output https://huggingface.co/LinoyTsaban/corgy_dog_LoRA
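
Until the training scripts stop writing the doubled prefix, a hedged, illustrative workaround (the helper below is not diffusers API) is to collapse unet.unet. back to a single unet. prefix before saving or re-uploading:

```python
# Illustrative only: remap doubled "unet.unet." keys to a single "unet." prefix.
from safetensors.torch import load_file, save_file

def normalize_lora_keys(state_dict):
    return {
        (k.replace("unet.unet.", "unet.", 1) if k.startswith("unet.unet.") else k): v
        for k, v in state_dict.items()
    }

state_dict = load_file("pytorch_lora_weights.safetensors")
save_file(normalize_lora_keys(state_dict), "pytorch_lora_weights_fixed.safetensors")
```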

Logs

No response

System Info

diffusers==0.23.1

Who can help?

@sayakpaul

@apolinario added the bug (Something isn't working) label on Nov 29, 2023
@sayakpaul
Member

This should all go away with #5388.

@patrickvonplaten
Contributor

Definitely agree that this will go away with PEFT, but I think it'll still take a couple of releases until we actually serialize in a new format, to lessen the friction here. IMO we should merge #5388 while still keeping the old serialization format (but training with PEFT).

=> So it'd be great to try to fix it before / independently of PEFT.

@sayakpaul
Member

#5388 is spitting out weights in the current diffusers format only. So, should be good.

> => So it'd be great to try to fix it before / independently of PEFT.

💯 assigning myself :)

@sayakpaul self-assigned this on Nov 29, 2023
apolinario pushed a commit to huggingface/notebooks that referenced this issue Dec 30, 2023
* SDXL LoRA Dreambooth example using the advanced script at
diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py

* added --adam_beta2 value that seems to be better for prodigy

* changed version of datasets in canonical notebook as issue was fixed huggingface/datasets#6442

* update to textual inversion loading

* install diffusers from commit to work around this issue: huggingface/diffusers#5977

* new textual inversion loading

* added slugify and markdown to pick model name

* remove install diffusers from commit as unet prefix fix pr was merged

* added validation prompt

* install datasets from main

* comment

* added instance prompt and validation prompt as separate params + explanation

* add rank as a separate param with explanation

* ref the blog in the hyperparameter section for best defaults

---------

Co-authored-by: Linoy <linoy@huggingface.co>