[Major Update] sd-webui-controlnet 1.1.400 #2039
Replies: 38 comments 61 replies
-
Amazing work again! Works like a charm. Looking forward to seeing more models added later on.
-
Problems should go directly to the issue tracker; this thread is mainly for feature discussion.
-
I've been waiting for this for so long! ControlNet makes SDXL 200% more powerful. Thanks a lot for all your hard work!
-
Exciting!
-
So glad to have ControlNet back, and some exciting new models to play with.
-
Update 1.1.402: Revision will now automatically enter image-blending mode (revision_ignore_prompts) when both the prompt and the negative prompt are empty.
-
@lllyasviel I think there is a bug with the selection of the options. Each time they give different results: there are three different outputs, but they seem to be mixed up somehow, even with "Balanced" and the same seed. Sometimes "ControlNet is more important" gives the result of "My prompt is more important", or vice versa. https://twitter.com/GozukaraFurkan/status/1699106187667472861 I think it is a problem with the selected checkbox index.
-
1.1.405
I have no idea how to use it; if anyone knows, please let me know. (From the ComfyUI SAI official json.)
-
Should multiple ControlNet inputs work with SDXL?
-
@lllyasviel will tile and inpaint be implemented for SDXL in the future?
-
Great stuff, I like the new reference-like models (IP-Adapters). I noticed that with IP-Adapter 1.5 plus you have to use a CFG of around 2 to get the best result; not sure if that's how it's supposed to be, but hey, it works.
-
Hi everyone.
-
I could not get a good generation; the image is bad, as below. Can someone help me?
-
Anyone know why my generations aren't being influenced by Revision? My A1111 is 1.6.
-
An updated IP-Adapter model for SDXL was released a few days ago.
@lllyasviel
-
It would be nice if canny had all the preprocessors of softedge and vice versa; they're pretty similar, and I found that sometimes their preprocessors work better when used interchangeably: for example, preprocess with canny and use that image with softedge, or preprocess with softedge and use that image with canny (especially the second option). IMO canny does not have enough preprocessors and should have all of softedge's.
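For illustration, one way to do the "preprocess with canny, then feed the map to the softedge model" workflow today is to run the canny detector yourself and load the resulting edge map in ControlNet with the preprocessor set to "none". A minimal sketch, assuming opencv-python is installed; the file names and thresholds are placeholders:

```python
# Minimal sketch: build a canny edge map outside the UI, then load it in ControlNet
# as the input image with preprocessor "none" and either the canny or softedge model.
import cv2

img = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)    # placeholder input image
edges = cv2.Canny(img, threshold1=100, threshold2=200)  # tune thresholds per image
cv2.imwrite("canny_map.png", edges)                      # feed this map to ControlNet
```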
-
Installed this today and it seems to be working. Did I miss a setting? Non-SDXL stuff is fast for me. EDIT: Using webui
-
Which depth model is working best at the moment? I am trying to use depth to draw a trained mug :)
-
It's tough that SDXL doesn't have a MediaPipe face model that works.
-
There's a new ip-adapter-plus_SDXL_vit-h model uploaded to IP-Adapter's Hugging Face page 2 days ago. This can't be used with ControlNet + A1111 without being in safetensors format, correct? There's also a v1.5 ip-adapter-plus-face_sd15.bin which I hadn't seen before.
-
Yeah, who has this face model converted to safetensors? Where's a converter for ControlNet .bin files? They're not like normal .bin files that can be turned into safetensors; I tried and the converter fails. OK jokers, looks like all it takes is to rename .bin to .safetensors, so why is this not enabled by default? Come on, just let us use .bin and .safetensors without having to rename; I already lost about an hour trying to find the right converter and writing GUIs for it.
-
There are now IP-Adapter safetensors: https://huggingface.co/h94/IP-Adapter/tree/main/sdxl_models and https://huggingface.co/InvokeAI/ip_adapter_sdxl_image_encoder/tree/main. How can we use them with the ControlNet extension? @Mikubill @lllyasviel
-
What about CN Tile for SDXL? Can we hope to see it someday?
-
I had a problem using the "ip-adapter-plus-face_sdxl_vit-h" model in my Automatic1111 with SDXL models; I was getting errors: Error running process: ....\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
It turned out this was due to some settings in Automatic1111. When I followed the steps from the section "Achieving Same Outputs with StabilityAI Official Results" and applied those changes, it started working, with no more errors in the console. Thank you very much 👍 😄
-
Hi. Have you made sure to use the ControlNet model for SDXL? ControlNet models like OpenPose, Canny, etc. all have different models to select from the dropdown depending on whether you are using 1.5, SDXL, etc.
Best regards,
Mischa Gushiken (Yuri)
…On Wed, Jun 19, 2024 at 10:47 AM, krisenvid wrote:
Could somebody help me? My SDXL model can't work with the ControlNet openpose, but when I use an SD 1.5 model, openpose can generate the correct picture. I have also tried dwpose; unfortunately it doesn't work. The SDXL model is "ponydiffusionv6xl". Thanks!
-
The extension sd-webui-controlnet has added support for several control models from the community. Many of the new models are related to SDXL, and several are for Stable Diffusion 1.5.
sd-webui-controlnet 1.1.400 is developed for webui 1.6.0 and later.
The newly supported model list:
The methods below are also supported, but they do not require control models:
All files are mirrored at https://huggingface.co/lllyasviel/sd_control_collection/tree/main (Use this link for downloading models)
You can download all files from the above link; the list of all original sources is here (do NOT use this link for downloading models). Feel free to let us know if you are the original author of some files and want to add or remove files from the list.
You can put models in stable-diffusion-webui\extensions\sd-webui-controlnet\models or stable-diffusion-webui\models\ControlNet.
Note that this update may influence other extensions (especially Deforum, but we have tested Tiled VAE/Diffusion).
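If you prefer scripting the download, here is a minimal sketch using the huggingface_hub library to pull one of the mirrored files straight into the extension's model folder. The specific file name and local path are only examples (any file from the mirror works the same way), and this is just one option for fetching the models:

```python
# Minimal sketch: download one mirrored control model into the extension's model folder.
# Assumes `pip install huggingface_hub` and that webui lives in the current directory.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="lllyasviel/sd_control_collection",
    filename="sai_xl_depth_256lora.safetensors",  # example file from the mirror
    local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
)
```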
About VRAM
All methods have been tested with 8GB and 6GB of VRAM. The 6GB VRAM tests were conducted on GPUs with float16 support.
For 8GB~16GB VRAM (including 8GB), the recommended cmd flag is "--medvram-sdxl"; you may add it to the COMMANDLINE_ARGS line of your "webui-user.bat".
For less than 8GB VRAM (like 6GB or 4GB, excluding 8GB), the recommended cmd flag is "--lowvram"; you may add it to the COMMANDLINE_ARGS line of your "webui-user.bat".
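As a quick self-check of which recommendation applies to your GPU, here is a small illustrative sketch (not part of the extension) that reads the total VRAM via PyTorch and prints the flag suggested above; it assumes PyTorch with CUDA is installed:

```python
# Illustrative helper: print the cmd flag recommended above for the detected GPU.
# Thresholds mirror the guidance in this post; not part of sd-webui-controlnet.
import torch

props = torch.cuda.get_device_properties(0)
vram_gb = props.total_memory / (1024 ** 3)

if vram_gb < 8:
    flag = "--lowvram"          # less than 8GB VRAM
elif vram_gb <= 16:
    flag = "--medvram-sdxl"     # 8GB~16GB VRAM
else:
    flag = "(no low-VRAM flag needed)"
print(f"{props.name}: {vram_gb:.1f} GB VRAM -> suggested flag: {flag}")
```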
About Speed
See the speed collection here.
Control-LoRA (from StabilityAI)
Update Sep 06: StabilityAI just confirmed that some ControlLoRAs can NOT process manually created sketches, hand-drawn canny boundaries, manually composed depth/canny maps, or any new content created from scratch without source images. Some of the ControlLoRAs expect source images as inputs and are limited to img2img mode.
Control-LoRAs are control models from StabilityAI to control SDXL. When those models were released, StabilityAI provided json workflows in the official user interface ComfyUI. In this section, we will provide steps to test and use these models.
Achieving Same Outputs with StabilityAI Official Results
This guide will configure your webui to perfectly reproduce StabilityAI's official results in ComfyUI, so that we can make sure our later ControlNet steps also perfectly reproduce SAI's results.
Note that you can skip this section if you just want to use ControlNet and do not need to get the same results as StabilityAI's workflows.
Besides, these steps will influence the behavior of your webui, and we recommend changing these options back if you mainly use Stable Diffusion 1.5.
Then set these parameters (the generation meta below records the relevant settings, including the RNG source set to "CPU" and "SGM noise multiplier" enabled):
The meta is here for you to copy:
cinematic still photography of a perfect modern sports car. best quality, high resolution, raw photo, designed by a master, flawless. emotional, harmonious, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy
Negative prompt: drawing, painting, illustration. worst quality, low quality, low resolution. imperfect, bad design. anime, cartoon, graphic, rendered, text, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured
Steps: 60, Sampler: DPM++ 2M SDE, CFG scale: 7, Seed: 0, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
Then click "Generate" and you will see this:
The image is
If you see exactly the same image, then congratulations! Your webui can now reproduce StabilityAI's official results. In fact, if you use SAI's official ComfyUI you will get the same image
(minor difference may be caused by computational numerical imprecision)
Beyond basic behaviors, several remaining differences between Automatic1111’s WebUI and SAI's ComfyUI are:
For example, this is a 512x512 canny edge map, which may be created by canny or manually:
We can see that each line is one pixel wide:
Now if you feed the map to sd-webui-controlnet and want to control SDXL at a resolution of 1024x1024, the algorithm will automatically recognize that the map is a canny map and then use a special resampling method to give you this:
We can see that the resolution has changed to 1024x1024, but the edges are still one pixel wide, without any blurring, eroding, or dilating.
This processing is completely automatic and users do not need to take care of this. This also applies to segmentation maps and openpose maps.
In this way, the canny edge map is always pixel-perfect. This is useful when you have already carefully tuned the canny parameters at a certain resolution (making re-detection of the canny edges unacceptable), when you want to test consistent canny edges for models with different resolutions (like comparing SDXL's 1024x1024 with SD 1.5's 512x512), or when the edge map is manually drawn by the user (no source image is available for the canny detector).
You may notice this if you cannot get similar results in other software. Note that all input control images in this post are already 1024x1024 and pixel-perfect, so this difference is eliminated.
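For intuition only, here is a rough sketch of one way to resize a binary edge map while keeping the lines one pixel wide (a nearest-neighbor upscale followed by skeletonization). This illustrates the idea rather than the extension's actual resampling code, and it assumes opencv-python and scikit-image are installed:

```python
# Rough illustration of edge-preserving resampling (NOT the extension's actual code):
# upscale a binary canny map with nearest-neighbor, then thin the lines back to 1 px.
import cv2
import numpy as np
from skimage.morphology import skeletonize

edge_512 = cv2.imread("canny_512.png", cv2.IMREAD_GRAYSCALE)  # 512x512 edge map
edge_1024 = cv2.resize(edge_512, (1024, 1024), interpolation=cv2.INTER_NEAREST)
thin = skeletonize(edge_1024 > 127)                            # boolean, 1-px-wide lines
cv2.imwrite("canny_1024.png", thin.astype(np.uint8) * 255)
```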
Using Control-LoRAs
In this section you will need "sai_xl_depth_256lora.safetensors". (Find this file in the link at the beginning of this post.)
Download this dog depth image
Use this meta:
a dog on grass, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 114514, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: none, Model: sai_xl_depth_256lora [73ad23d1], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
If you use exactly the same parameters, you will get this image
StabilityAI official results (ComfyUI):
(minor difference may be caused by computational numerical imprecision)
Tricks
Some of the XL control models (not only the ControlLoRAs) are highly experimental and sometimes give "flat" results, even after adding many negative prompts.
For example:
photo of a woman in the street, fashine
Negative prompt: anime, drawing, bad, ugly, low quality
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 5, Seed: 12345, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: none, Model: sai_xl_canny_256lora [566f20af], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
StabilityAI official results (ComfyUI):
This extension has a Control Mode option to tune the balance between prompts and control images. If you encounter problems like this, you may want to change the Control Mode and see if the problem is fixed:
Note that if you use "ControlNet is more important", you need to reduce the CFG a bit, for example to 3.
("My prompt is more important" does not require you to change the CFG.)
Exactly the same parameters and prompts, but with "ControlNet is more important" and a reduced CFG:
Note that the Control Mode should only be used as a last resort to fix problems when you have absolutely no other methods. Its effect is strong and can completely change the behavior of control models. Usually, users are still encouraged to use prompt engineering (or simply try many different random seeds) to fix problems first.
Diffusers Control Models
Now we move on to the diffusers large model. You will need "diffusers_xl_depth_full.safetensors" from the link at the beginning of this post.
This model is very large, and you need to check ControlNet's "Low VRAM" option if using 8GB/6GB of VRAM:
meta:
a dog on grass, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 123456, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: none, Model: diffusers_xl_depth_full [2f51180b], Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
ComfyUI test:
Revision
Now we move on to Revision. The CLIP vision model will be downloaded automatically, so you do not need to download it yourself.
The input image is:
meta:
(no prompt)
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 123456, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: revision_ignore_prompt, Model: None, Weight: 1, Resize Mode: Crop and Resize, Low Vram: True, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
StabilityAI official results (ComfyUI):
T2I-Adapter
Now we move on to the T2I-Adapter. You need "t2i-adapter_xl_canny.safetensors" from the link at the beginning of this post.
The input image is:
meta:
a dog on grass, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 123456, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: none, Model: t2i-adapter_xl_canny [ff8b24b1], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
ComfyUI test:
IP-Adapter
Now we move on to the IP-Adapter. You need "ip-adapter_xl.pth" from the link at the beginning of this post.
The input image is:
meta:
Female Warrior, Digital Art, High Quality, Armor
Negative prompt: anime, cartoon, bad, low quality
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 123, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: ip-adapter_clip_sdxl, Model: ip-adapter_xl [4209e9f7], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
ComfyUI test:
IP-Adapter (Stable Diffusion 1.5)
Female Warrior, Digital Art, High Quality, Armor
Negative prompt: anime, cartoon, bad, low quality
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 123, Size: 512x512, Model hash: 6ce0161689, Model: v1-5-pruned-emaonly, RNG: CPU, ControlNet 0: "Module: ip-adapter_clip_sd15, Model: ip-adapter_sd15 [6a3f6166], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
Reference-Only
The test image is
woman in street, fashion
Negative prompt: anime, drawing, cartoon, bad, low quality
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 1234, Size: 1152x832, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: reference_adain+attn, Model: None, Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Threshold A: 0.5, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Refiner: sd_xl_refiner_1.0_0.9vae [8d0ce6c016], Refiner switch at: 0.6, Version: v1.6.0-RC-54-g84d41e49
Below are results from IP-Adapter XL with the same parameters, for a quick comparison.
Control LLLite (from Kohya)
Now we move on to Kohya's Control-LLLite. You need "kohya_controllllite_xl_canny_anime.safetensors" from the link at the beginning of this post.
The input image is:
meta:
a dog on grass, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 20, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 123456, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: none, Model: kohya_controllllite_xl_canny_anime [7158f7e0], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Guidance Start: 0, Guidance End: 1, Pixel Perfect: False, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
ComfyUI test:
Recolor (SDXL)
The sai_xl_recolor_256lora.safetensors file is in the link mentioned above.
a man on grass selfie, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 114514, Size: 1024x1024, Model hash: e6bb9ea85b, Model: sd_xl_base_1.0_0.9vae, RNG: CPU, ControlNet 0: "Module: recolor_luminance, Model: sai_xl_recolor_256lora [43f2f36a], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Threshold A: 1, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
Note that we do not compare to official results in SAI's ComfyUI for recolor, since the official code seems to have a mistake in computing luminance.
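For reference, a recolor-style preprocessor essentially reduces the image to a luminance map, which the model is then asked to colorize. Below is a generic sketch using the standard Rec. 709 relative-luminance weights; this is only an illustration of that idea, not the extension's exact recolor_luminance code:

```python
# Generic sketch of a luminance map as used by recolor-style preprocessors
# (illustration only; not the extension's exact recolor_luminance implementation).
import cv2
import numpy as np

bgr = cv2.imread("photo.png").astype(np.float32) / 255.0
rgb = bgr[..., ::-1]  # OpenCV loads BGR; flip to RGB
luminance = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
cv2.imwrite("luminance.png", (luminance * 255).astype(np.uint8))
```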
Recolor (Stable Diffusion 1.5)
The ioclab_sd15_recolor.safetensors file is in the link mentioned above.
a man on grass selfie, photo, high quality
Negative prompt: drawing, anime, low quality, distortion
Steps: 30, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 114514, Size: 512x512, Model hash: c0d1994c73, Model: realisticVisionV20_v20, RNG: CPU, ControlNet 0: "Module: recolor_luminance, Model: ioclab_sd15_recolor [6641f3c6], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Threshold A: 1, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced", SGM noise multiplier: True, Version: v1.6.0-RC-54-g84d41e49
Other Models
For the sake of length, the guidelines for the other models (like OpenPose) are omitted. You can use the other models in the same way as before, or use similar methods to reproduce StabilityAI's official ComfyUI results.
But when you use openpose, note that some XL control models do not support "openpose_full"; you will need to use plain "openpose" if things are not going well.