
[bug] fix using wrong model caused by alias #14655

Merged · 1 commit · Jan 20, 2024

Conversation

chi2nagisa
Contributor

Description

  • Tries to solve using the wrong lora model caused by an alias. forbidden_network_aliases seems to be used only when generating the model card, not during inference.
  • Change in code: when name.lower() is in forbidden_network_aliases, get network_on_disk from available_networks rather than available_network_aliases.
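A minimal sketch of the fixed lookup, using the dict names from this PR description. The data here is hypothetical: in the webui these dicts map names and aliases to NetworkOnDisk objects rather than strings, and the real code lives in the Lora extension.

```python
# Hypothetical example data: both files declare ss_output_name 'A', and
# B.safetensors was indexed last, so the alias 'A' points at the wrong file.
available_networks = {"A": "A.safetensors", "B": "B.safetensors"}
available_network_aliases = {"A": "B.safetensors", "B": "B.safetensors"}
forbidden_network_aliases = {"a": 2}  # lowercased ambiguous aliases

def lookup_network_on_disk(name):
    """Resolve a <lora:name:weight> reference to a model on disk."""
    if name.lower() in forbidden_network_aliases:
        # Ambiguous alias: trust the filename-based dict instead (the fix).
        return available_networks.get(name)
    return available_network_aliases.get(name)
```

With this change, `<lora:A:1>` resolves through `available_networks` and loads A.safetensors even though the alias map points at B.safetensors.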

I'm training lora models and I have two models with the same 'ss_output_name' but different filenames (since I changed the filenames after training). When I run inference in the webui, the result doesn't match the model name I write in the prompt. After adding code to display the model path for each name in the prompt, I found it was using the wrong model.

This bug may be caused by the following:
For example, there is a model A.safetensors whose 'ss_output_name' in its metadata is 'A', and a model B.safetensors whose 'ss_output_name' is also 'A'. While constructing the available_network_aliases dict, A.safetensors is processed before B.safetensors, so after B.safetensors is processed, the key 'A' in available_network_aliases maps to B.safetensors. Even though this alias is stored in forbidden_network_aliases, the network_on_disk object is still queried from available_network_aliases. So when I use '<lora:A:1>' in the prompt, it loads B.safetensors instead of A.safetensors.
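The overwrite described above can be sketched like this. The loop is hypothetical (the real indexing code lives in the webui's Lora extension), but it shows why the last file processed wins the alias:

```python
# (filename stem, ss_output_name from metadata) for each model on disk;
# both models carry the alias 'A'.
models_on_disk = [("A", "A"), ("B", "A")]

available_network_aliases = {}
forbidden_network_aliases = {}

for filename, alias in models_on_disk:
    if alias in available_network_aliases:
        # Collision detected: the alias is recorded as forbidden...
        forbidden_network_aliases[alias.lower()] = 1
    # ...but the alias entry is still overwritten, so 'A' now maps to B.
    available_network_aliases[alias] = filename
```

After the loop, the alias 'A' is marked forbidden, yet `available_network_aliases["A"]` points at B; inference that consults only the alias dict therefore loads the wrong model.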

Screenshots/videos:

To reproduce the bug, I use two lora models from civitai:
l0ngma1d.safetensors
Yatogami Tohka(dal).safetensors
For each, the original filename and 'ss_output_name' are the same. To reproduce this bug, I changed the 'ss_output_name' in Yatogami Tohka(dal).safetensors to 'l0ngma1d'. Here are the models I used:
https://drive.google.com/drive/folders/1vRg9h2P3H28zTaz-9QZOS9Pw_3cI-bl9?usp=drive_link

Generation parameters

l0ngma1d

<lora:l0ngma1d:1>, 1girl
Negative prompt: blur
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 233, Size: 512x768, Model hash: fd0db59b48, Model: threeDelicacyWonton_v2, Clip skip: 2, Lora hashes: "l0ngma1d: 42ba22fee2d5", Version: v1.7.0-331-gcb5b335a

Yatogami Tohka(dal)

<lora:Yatogami Tohka(dal):1>, 1girl
Negative prompt: blur
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 233, Size: 512x768, Model hash: fd0db59b48, Model: threeDelicacyWonton_v2, Clip skip: 2, Lora hashes: "Yatogami Tohka(dal): 06acbeef6ed5", Version: v1.7.0-331-gcb5b335a

Inference with origin model

[Images: l0ngma1d, Yatogami Tohka(dal)]

Inference with modified model

[Images: l0ngma1d_bug, Yatogami Tohka(dal)_bug]

Inference with modified model (My code)

[Images: l0ngma1d_correct, Yatogami Tohka(dal)_correct]


@AUTOMATIC1111 AUTOMATIC1111 merged commit 0f2de4c into AUTOMATIC1111:dev Jan 20, 2024
3 checks passed
@w-e-w w-e-w mentioned this pull request Feb 17, 2024
@pawel665j pawel665j mentioned this pull request Apr 16, 2024