
[Bug]: "Use separate checkpoint" not working in ForgeUI #716

Closed
driqeks opened this issue Sep 15, 2024 · 4 comments
Labels: bug (Something isn't working), Stale

Comments


driqeks commented Sep 15, 2024

Describe the bug

The checkpoint selected under "Use separate checkpoint" is ignored; ADetailer always uses the regular checkpoint that generated the image instead.

Steps to reproduce

No response

Screenshots

No response

Console logs, from start to end.

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on user user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.11.10 (main, Sep  7 2024, 18:35:41) [GCC 13.2.0]
Version: f2.0.1v1.10.1-previous-531-g210af4f8
Commit hash: 210af4f80406f78a67e1c35a64a6febdf1200a82
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments: 
Total VRAM 24188 MB, total RAM 128726 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: /home/user/StableDiffusion/Forge/New/stable-diffusion-webui-forge/models/ControlNetPreprocessor
[-] ADetailer initialized. version: 24.9.0, num models: 10
2024-09-19 01:07:11,487 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': '/home/user/StableDiffusion/Forge/New/stable-diffusion-webui-forge/models/Stable-diffusion/everclearPNYByZovya_v2VAE.safetensors', 'hash': 'b894904e'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 19.4s (prepare environment: 1.5s, import torch: 4.1s, other imports: 0.1s, load scripts: 10.7s, create ui: 1.8s, gradio launch: 1.2s).
Environment vars changed: {'stream': False, 'inference_memory': 3203.0, 'pin_shared_memory': False}
[GPU Setting] You will use 86.76% GPU memory (20985.00 MB) to load weights, and use 13.24% GPU memory (3203.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': '/home/user/StableDiffusion/Forge/New/stable-diffusion-webui-forge/models/Stable-diffusion/everclearPNYByZovya_v2VAE.safetensors', 'hash': 'b894904e'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 1680, 'vae': 248, 'text_encoder': 197, 'text_encoder_2': 518, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 0.6s (unload existing model: 0.2s, forge model load: 0.4s).
[Unload] Trying to free 5230.58 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 22158.15 MB, Model Require: 1559.68 MB, Previously Loaded: 0.00 MB, Inference Require: 3203.00 MB, Remaining: 17395.47 MB, All loaded to GPU.
Moving model(s) has taken 0.23 seconds
[Unload] Trying to free 3203.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 20320.32 MB ... Done.
[Unload] Trying to free 9569.16 MB for cuda:0 with 0 models keep loaded ... Current free memory is 20319.46 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 20319.46 MB, Model Require: 4897.05 MB, Previously Loaded: 0.00 MB, Inference Require: 3203.00 MB, Remaining: 12219.41 MB, All loaded to GPU.
Moving model(s) has taken 0.96 seconds
100%|███████████████████████████████████████████| 20/20 [00:02<00:00,  7.41it/s]
[Unload] Trying to free 4495.36 MB for cuda:0 with 0 models keep loaded ... Current free memory is 15280.50 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 15280.50 MB, Model Require: 159.56 MB, Previously Loaded: 0.00 MB, Inference Require: 3203.00 MB, Remaining: 11917.94 MB, All loaded to GPU.
Moving model(s) has taken 0.02 seconds

0: 640x512 1 face, 65.1ms
Speed: 2.0ms preprocess, 65.1ms inference, 16.2ms postprocess per image at shape (1, 3, 640, 512)
[Unload] Trying to free 3478.78 MB for cuda:0 with 1 models keep loaded ... Current free memory is 15074.62 MB ... Done.
[Unload] Trying to free 3203.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 15098.00 MB ... Done.
[Unload] Trying to free 3203.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 15097.39 MB ... Done.
[Unload] Trying to free 3203.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 15096.53 MB ... Done.
100%|█████████████████████████████████████████████| 9/9 [00:01<00:00,  7.93it/s]
[Unload] Trying to free 4287.94 MB for cuda:0 with 1 models keep loaded ... Current free memory is 15084.22 MB ... Done.
Total progress: 100%|███████████████████████████| 20/20 [00:05<00:00,  3.82it/s]
Total progress: 100%|███████████████████████████| 20/20 [00:05<00:00,  8.13it/s]

List of installed extensions

No response

driqeks added the bug label Sep 15, 2024

dosubot bot commented Sep 15, 2024

Hey @driqeks! I'm here to help with any bugs or questions you have: solving issues, answering questions, and helping you become a contributor.

The "Use separate checkpoint" option in ForgeUI is not working because there is no change event handler defined for the w.ad_use_checkpoint checkbox. To fix this, you need to add a change event handler for w.ad_use_checkpoint similar to the other options. Here is an example of how you can add it:

w.ad_use_checkpoint.change(
    gr_interactive,              # toggle the dropdown's interactivity
    inputs=w.ad_use_checkpoint,  # checkbox state drives the update
    outputs=w.ad_checkpoint,     # the separate-checkpoint dropdown
    queue=False,
)

This code should be added after the definition of w.ad_use_checkpoint and w.ad_checkpoint to ensure that the dropdown updates correctly when the checkbox is toggled [1].
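
For context, gr_interactive is not shown above; presumably it is a small helper that flips a component's interactivity based on the checkbox value. A minimal sketch, assuming Gradio's standard update API (the actual ADetailer implementation may differ):

import gradio as gr

def gr_interactive(value: bool) -> dict:
    # When the checkbox is ticked, enable the checkpoint dropdown;
    # when unticked, grey it out.
    return gr.update(interactive=value)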



kalle07 commented Sep 16, 2024

#703

driqeks (Author) commented Sep 18, 2024

> #703

I am not using Flux models. I am using SDXL.

Bing-su (Owner) commented Sep 19, 2024

lllyasviel/stable-diffusion-webui-forge#964

Forge has changed its import policy for checkpoints, so changing checkpoints during generation no longer works.
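
For anyone tracing the failure: extensions such as ADetailer typically request the separate checkpoint via A1111-style override settings on the processing object. A minimal sketch of that mechanism, assuming the stock webui attribute names; it illustrates what the new policy ignores, not Forge's actual code path:

def apply_separate_checkpoint(p, checkpoint_name: str) -> None:
    # In stock A1111, override_settings is read before processing and
    # the requested model is swapped in; under Forge's changed checkpoint
    # import policy, an override applied mid-generation is not honored.
    p.override_settings["sd_model_checkpoint"] = checkpoint_name
    # Restore the user's original checkpoint once this job finishes.
    p.override_settings_restore_afterwards = True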

github-actions bot added the Stale label Oct 7, 2024
github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 11, 2024