Do not save `unet.unet` keys when training LoRAs #5977
Comments
This should all go away with #5388.
Definitely agree that this will go away with PEFT, but I think it'll still take a couple of releases until we actually serialize in a new format to lessen the friction here. IMO we should merge #5388 while keeping the old serialization format (but training with PEFT). So it'd be great to try to fix it before / independently of PEFT.
#5388 is outputting in the current diffusers format only. So, it should be fine.
💯 assigning myself :)
* SDXL LoRA Dreambooth example using the advanced script at diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py
* added --adam_beta2 value that seems to be better for prodigy
* changed version of datasets in canonical notebook, as the issue was fixed in huggingface/datasets#6442
* update to textual inversion loading
* install diffusers from a commit to work around this issue: huggingface/diffusers#5977
* new textual inversion loading
* added slugify and markdown to pick model name
* remove installing diffusers from a commit, as the unet prefix fix PR was merged
* added validation prompt
* install datasets from main
* comment
* added instance prompt and validation prompt as separate params + explanation
* add rank as a separate param with explanation
* ref the blog in the hyperparameter section for best defaults

---------

Co-authored-by: Linoy <linoy@huggingface.co>
Describe the bug
When training LoRAs, sometimes they end up with a `unet.unet` key, as in this case: https://huggingface.co/sayakpaul/new-lora-check-v15/blob/main/pytorch_lora_weights.safetensors (repo: https://huggingface.co/sayakpaul/new-lora-check-v15/tree/main). While loading this format is supported in diffusers, such keys are probably undesirable; it would be better to deprecate them and make sure all keys carry a single `unet` prefix.

(Internal Slack thread: https://huggingface.slack.com/archives/C03UQJENJTV/p1701261840645189)
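For context, here is a minimal sketch (not part of the original report; the local file path is hypothetical) of how one might inspect a saved checkpoint for the doubled prefix:

```python
# Sketch: inspect key prefixes of a saved LoRA checkpoint to spot a
# doubled "unet.unet." prefix where a single "unet." is expected.
from collections import Counter

from safetensors.torch import load_file

# Hypothetical local path to a downloaded LoRA checkpoint.
state_dict = load_file("pytorch_lora_weights.safetensors")

# Count the first component of every key, e.g. "unet", "text_encoder".
prefixes = Counter(key.split(".")[0] for key in state_dict)
print(prefixes)

doubled = [key for key in state_dict if key.startswith("unet.unet.")]
print(f"{len(doubled)} keys carry a doubled 'unet.unet' prefix")
```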
Reproduction
Training a LoRA with our example notebook/script yields the `unet.unet` output: https://huggingface.co/LinoyTsaban/corgy_dog_LoRA
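As an illustration only (not an official fix; file paths are hypothetical), a checkpoint affected this way could be repaired by collapsing the doubled prefix and re-saving:

```python
# Hypothetical workaround sketch: collapse a doubled "unet.unet." prefix
# back to a single "unet." prefix and re-save the checkpoint.
from safetensors.torch import load_file, save_file

src = "pytorch_lora_weights.safetensors"        # hypothetical input path
dst = "pytorch_lora_weights_fixed.safetensors"  # hypothetical output path

state_dict = load_file(src)
fixed = {
    # Replace only the first occurrence, and only on affected keys.
    (key.replace("unet.unet.", "unet.", 1) if key.startswith("unet.unet.") else key): tensor
    for key, tensor in state_dict.items()
}
save_file(fixed, dst)
```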
Logs

No response
System Info
diffusers==0.23.1
Who can help?
@sayakpaul