Support of the LyCORIS (LoCon/LoHA) models #3087
Comments
I would love to see this feature as well. However, most implementations bake it into their LoRA implementations, so extending LoRAs to use these and not checking metadata names too rigorously would be a start. |
There is a PR here to support it: #3294. |
I encountered an issue while using version 0.17.1 of the library, specifically when calling the load_lora_weights method. I received the following error:

```
ValueError("Network alpha is not consistent")
```

This error originates from the _convert_kohya_lora_to_diffusers(state_dict) method. To bypass it, I tried commenting out the following lines of code:

```python
for key, value in state_dict.items():
    if "lora_down" in key:
        lora_name = key.split(".")[0]
        lora_name_up = lora_name + ".lora_up.weight"
        lora_name_alpha = lora_name + ".alpha"
        if lora_name_alpha in state_dict:
            alpha = state_dict[lora_name_alpha].item()
            if network_alpha is None:
                network_alpha = alpha
            # elif network_alpha != alpha:
            #     raise ValueError("Network alpha is not consistent")
```

The code now runs, but this is not the appropriate solution. I would appreciate any suggestions or advice on how to properly handle this. |
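Before patching the library, it can help to confirm that the checkpoint really does carry more than one alpha value. A minimal inspection sketch, assuming a safetensors checkpoint (the file name is a placeholder):

```python
from safetensors.torch import load_file

state_dict = load_file("my_lycoris.safetensors")  # placeholder path

# Collect every per-module alpha stored in the checkpoint.
alphas = {k: v.item() for k, v in state_dict.items() if k.endswith(".alpha")}

# More than one distinct value here is exactly what trips the
# "Network alpha is not consistent" check in _convert_kohya_lora_to_diffusers.
print(sorted(set(alphas.values())))
```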
cc @sayakpaul |
Did you get the expected outputs for this? We added that check for robustness in the module. Cc: @takuma104. There can be different configurations for LyCORIS and, from the get-go, it's not possible to support all of them. So, we started supporting them minimally: https://huggingface.co/docs/diffusers/main/en/training/lora#supporting-a1111-themed-lora-checkpoints-from-diffusers. Question for @takuma104: do we want to relax this constraint? (See diffusers/src/diffusers/loaders.py, line 1242 at 0bab447.)
I think that might break things as that would mean different alphas for different LoRA layers, no? |
@sayakpaul The constraint you mention indeed assumes that all alpha values are the same. I believe the LoCon support essentially extends the methodology of #3756. Therefore, I think it would be best to first finalize #3756 as the mechanism, and then create a separate PR for LoCon support. |
I concur with your thoughts, @takuma104. #3778 is about to be merged. So, we can start on #3756 and what you have gathered pretty soon. |
@takuma104 I went through https://gist.github.com/takuma104/dcf4626fe2b0564d02c6edd4e9fcb616 and saw either 4 or 32 for the LoRAs you have listed there. But within the same LoRA, the alpha value didn't change. Perhaps I missed something? |
@sayakpaul The first comment is mostly with
|
@carl10086 we will first have support for the rest of the blocks as mentioned in #3087 (comment). Then we will revisit this, as it concerns a change in how we deal with alphas. Feel free to bug us in the coming weeks :) |
Dear @sayakpaul, @patrickvonplaten, let's say I have trained a LoRA safetensors file via Kohya. How can I use it with the pipeline below?
|
You can follow this method: https://huggingface.co/docs/diffusers/main/en/training/lora#supporting-a1111-themed-lora-checkpoints-from-diffusers but note that there might be incompatibilities as discussed in #3725. |
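For reference, a minimal sketch of that documented flow (the base model id, directory, and file name below are placeholders, not values from this thread):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Point load_lora_weights at the directory containing the Kohya-trained file
# and pass the file name via weight_name.
pipe.load_lora_weights("/workspace", weight_name="my_kohya_lora.safetensors")

image = pipe("masterpiece, best quality, a portrait photo").images[0]
image.save("lora_test.png")
```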
Does this support SDXL? And what is the first parameter "." for?
|
I am currently doing a LyCORIS SDXL training. Would this work, @sayakpaul?

```python
pipeline.load_lora_weights(".", weight_name="/workspace/light_and_shadow.safetensors")
```
|
If the underlying LoRA was trained against SDXL, it should work but note the following as well: #3725 (comment) |
Let's try to support SDXL LoRAs from the get-go :-) |
It seems the SDXL structure is entirely different, and on top of that, the number of structures is large (which is already known). |
I think with #4147 we will have better support. |
Hey everyone, I know PR #4147 is in progress and will support LyCORIS/LoCon models in the future. For now, is there any other way to integrate a LoCon model into a diffusers pipeline? Specifically, I want to use the https://civitai.com/models/47085/envybetterhands-locon model for good hands. |
You can use scripts like the one shown in: #3725 (comment) |
Hi all! Could you please give #4287 a try? |
@sayakpaul I don't know what you're referring to exactly. Do you use the regular lora loader? |
I meant to install |
What about Lycoris? |
LyCORIS LoCon is supported. LoHA is currently not. Will be soon. |
With the regular Lora loader, right? |
@sayakpaul will regular lora loader work with lycoris? |
Hi @sayakpaul, I'm afraid the current diffusers doesn't work with LoCon for now. I've tested with my LoCon and an error like the one below was thrown.
Seems like there's something wrong with the layer naming. Could you check this one out, please? |
Then the LoCon modules have something we don't currently support :-) IIUC, LoCon is when you apply LoRA to the conv layers as well, right? In our testing, we did consider some LoRAs that have this setup and they worked well. Check https://huggingface.co/docs/diffusers/main/en/training/lora#supporting-a1111-themed-lora-checkpoints-from-diffusers. What is the LoRA file you're using? Could you provide a fully reproducible snippet? |
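To make the "LoRA on conv layers" idea concrete, here is a rough, self-contained sketch of a LoCon-style conv adapter (illustrative only; the class name and defaults are assumptions, not the kohya/LyCORIS implementation):

```python
import torch
import torch.nn as nn

class LoConConv2d(nn.Module):
    """Wraps a frozen Conv2d and adds a low-rank convolutional update."""

    def __init__(self, base: nn.Conv2d, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        # lora_down keeps the base kernel size / stride / padding;
        # lora_up is a 1x1 conv projecting back to the output channels.
        self.lora_down = nn.Conv2d(
            base.in_channels, rank, base.kernel_size,
            stride=base.stride, padding=base.padding, bias=False,
        )
        self.lora_up = nn.Conv2d(rank, base.out_channels, kernel_size=1, bias=False)
        self.scale = alpha / rank
        nn.init.zeros_(self.lora_up.weight)  # the adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.lora_up(self.lora_down(x)) * self.scale
```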
@sayakpaul does locon work now? |
@ORANZINO give the locon |
It seems that we cannot train LoCon in diffusers; any plan for supporting it? |
When I load a LyCORIS from Civitai https://civitai.com/models/76404/lycoris-couture-an-edg-collection, it fails. |
Exactly the same issue. I trained a LoCon from kohya using the sd15-EDG_LoConOptiSettings preset, with the only modification to the parameters being the number of epochs. |
Looks like this is supported now. This probably should be closed as complete: |
Happily closing then :) |
Seems like LyCORIS LoCon models are supported, but the LoHA variant is not working with the latest version of diffusers (v0.26). Should this issue be reopened? |
If they were working with a previous version and are not working with the latest version, then yes. Please also supply a fully reproducible snippet. |
@sayakpaul it seems LoHA support is still not implemented, so I feel this should be re-opened.

diffusers: 0.30.3
gives
I just trained this a few hours ago on kohya_ss using some default settings. Ironically, I wanted to check it was supported before training, so I found this thread, read the bottom few messages, and it seemed to be supported, which is why I went ahead with it. Hope this helps! (EDIT: just tested with a random LoCon LoRA as well; can confirm it seems to be working) |
Thanks for checking in! Since LoHA checkpoints haven’t been that popular compared to others, we didn’t prioritize it. Would you maybe like to work with us on this through a PR? cc: @BenjaminBossan for peft. |
LoHa and LoKr are implemented in PEFT, though there is currently some work to re-implement them based on LyCORIS (huggingface/peft#2133). Regarding the integration into diffusers, even though it relies on PEFT, I think this would be quite a big refactor. If there is any way to gauge the demand for this, that should be done first. An idea that I wondered about: in the end, LoRA, LoKr, and LoHa can all be condensed to a delta weight added onto the base weight: |
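A minimal sketch of that unifying view (the decompositions follow the commonly used LoRA/LoHa/LoKr definitions; the shapes and variable names are illustrative assumptions):

```python
import torch

d_out, d_in, rank = 320, 320, 4
W0 = torch.randn(d_out, d_in)  # frozen base weight

# LoRA: delta_W = B @ A (a single rank-r factorization)
A, B = torch.randn(rank, d_in), torch.randn(d_out, rank)
delta_lora = B @ A

# LoHa: delta_W = (B1 @ A1) * (B2 @ A2) (Hadamard product of two low-rank terms)
A1, B1 = torch.randn(rank, d_in), torch.randn(d_out, rank)
A2, B2 = torch.randn(rank, d_in), torch.randn(d_out, rank)
delta_loha = (B1 @ A1) * (B2 @ A2)

# LoKr: delta_W = C ⊗ (B @ A) (Kronecker product of a small factor and a low-rank term)
C = torch.randn(4, 4)
Bk, Ak = torch.randn(d_out // 4, rank), torch.randn(rank, d_in // 4)
delta_lokr = torch.kron(C, Bk @ Ak)

# In every case the merged weight is simply the base plus the delta:
W_merged = W0 + delta_lora  # likewise W0 + delta_loha or W0 + delta_lokr
```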
If anyone needs a temporary workaround to load these LoRA weights before the formal solution is out, it's possible to merge LoRA weights (of any type) into a .ckpt diffusion model with kohya_ss, then convert the merged .ckpt weight back into diffusers format and load it the way normal models are loaded. See here: https://github.com/Darkbblue/diffusion-content-shift/blob/main/lora_guide.md#output-processing. |
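A rough sketch of that workaround (the kohya_ss/sd-scripts command and all paths below are assumptions; check your sd-scripts version for the exact merge script and flags):

```python
# Step 1 (outside diffusers, using kohya's sd-scripts; flag names may differ):
#
#   python networks/merge_lora.py \
#       --sd_model base_model.safetensors \
#       --save_to merged_model.safetensors \
#       --models my_loha.safetensors --ratios 1.0
#
# Step 2: once the LoRA/LoHA is baked into a single checkpoint, load it like
# any other single-file model.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file("merged_model.safetensors").to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
```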
@Darkbblue thanks so much for your note! Would you maybe like to try this out on a LoHA checkpoint for the users (including us, the maintainers) to refer to? |
I've tried this on LoCon. I haven't tried it on LoHA, but I'm pretty confident it can work. I might not be able to try this right now, but in a few days I may have the time. |
Hey, I just tried a LoHA checkpoint and it worked. I had to update the conversion scripts in the link, but everything's done now. |
Wow, thank you! Would you be interested in contributing this to diffusers? |
Since this approach involves functions provided by an external platform, kohya_ss, I don't think it'll be very suitable to integrate the entire process into diffusers... I think it should be considered only as a temporary workaround. |
Model/Pipeline/Scheduler description
Hi everyone! Thanks for your amazing work!
Some specific (optimized) variants of LoRA have been developed (https://github.com/KohakuBlueleaf/LyCORIS) and are available (https://civitai.com/models/37053/michael-jordan) with pretty cool features (the ability to make distinctions between trained concepts, etc.). Since support for loading .safetensors has been added, it would be nice to also be able to load LyCORIS models.
What do you think?
Thanks a lot!
Open source status
Provide useful links for the implementation
No response