
Update readme.md with macOS installation instructions #129

Closed
wants to merge 1 commit into from

Conversation

jorge-campo

Adds specific Mac M1/M2 installation instructions for Fooocus.

The same instructions for Linux work on Mac. The only prerequisite is the PyTorch installation, as described in the procedure.

On my M1 Mac, the installation and first run completed without errors, and I could start generating images without any additional configuration.

[Screenshot from 2023-08-15]
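For context, the PyTorch prerequisite boils down to having an MPS-enabled build in whichever environment Fooocus runs from. A minimal sketch of setting that up and checking it (not the readme's exact wording; Apple's guide is the authoritative source and the install channel there may differ):

```sh
conda activate fooocus                     # or whichever env you launch Fooocus from
pip install torch torchvision torchaudio   # recent official wheels include MPS support on Apple Silicon
python -c "import torch; print(torch.backends.mps.is_available())"   # should print True
```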

@ponymushama commented Aug 16, 2023

macOS does not include the conda command by default.
You should install conda first.

@dreamscapeai

How many it/s do you get on the M1 Pro?

@jorge-campo (Author)

macOS does not include the conda command by default.
You should install conda first.

If you follow the Apple technical document in the procedure, it will install Miniconda3 from the Anaconda repo.
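For anyone who wants the short version of that Apple document, a minimal sketch of the Miniconda3 install on Apple Silicon (assuming the standard Anaconda download URL and the default install location):

```sh
# Download and run the arm64 Miniconda3 installer
curl -LO https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
sh Miniconda3-latest-MacOSX-arm64.sh
# Restart the shell (or `source ~/.zshrc`) so conda is on PATH, then verify:
conda --version
```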

@jorge-campo (Author)

How many it/s do you get on the M1 Pro?

In my case, these 👇 are the statistics for the Base model + Refiner for two images using the same prompt. My computer has many other processes running in the background competing for RAM, so your mileage may vary.

[Screenshot from 2023-08-16: generation statistics]

@lllyasviel (Owner)

wow, 8 s/it - that is quite a bit of waiting

@jorge-campo (Author)

wow, 8 s/it - that is quite a bit of waiting

Yes, it is, but it's similar to what I get with ComfyUI or Automatic1111 using SDXL; SD1.5 is faster though. I don't think you can do better with M1 + SDXL (?)

I don't know what optimizations you included in Fooocus, but the image quality is vastly superior to ComfyUI or Automatic1111. Thanks for giving us the chance to play with this project! 😄

@thinkyhead commented Aug 19, 2023

Note that once Miniconda3 is installed and activated in the shell, the Linux instructions work perfectly on macOS.
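For reference, those Linux-style steps look roughly like this on macOS (a sketch assuming the repo's environment.yaml and requirements_versions.txt; check the readme for the current file names):

```sh
git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
conda env create -f environment.yaml
conda activate fooocus
pip install -r requirements_versions.txt
python entry_with_update.py
```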

@guo2048 commented Aug 29, 2023

After installing it locally with your steps, I got the same issue as described in #286:
the two resulting pictures are just empty images.

@yaroslav-ads

After installing it locally with your steps, I got the same issue as described in #286: the two resulting pictures are just empty images.

You just need to restart your computer

@eyaeya commented Sep 2, 2023

It looks like I'm running into a problem with the environment. How can I get past this? I didn't see instructions for this part in your Readme.

File "/Users/xiaoxiao/Pictures/Fooocus/modules/anisotropic.py", line 132, in adaptive_anisotropic_filter s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True) NotImplementedError: The operator 'aten::std_mean.correction' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variablePYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

@jorge-campo

@kjslag commented Sep 6, 2023

I had to launch with PYTORCH_ENABLE_MPS_FALLBACK=1 python launch.py instead of python launch.py to deal with the same issue that @eyaeya had. After that, it ran at about 2.7 s/it on an M1 Max MacBook Pro, but the output was just a blank color. I used the latest (as of now) Fooocus commit 09e0d1c from Sept 2, 2023.

[two screenshots]
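If you don't want to prefix every launch, a small sketch of making that fallback permanent (assuming zsh, the macOS default shell):

```sh
# one-off launch with the CPU fallback enabled
PYTORCH_ENABLE_MPS_FALLBACK=1 python entry_with_update.py

# or export it for all future shells
echo 'export PYTORCH_ENABLE_MPS_FALLBACK=1' >> ~/.zshrc
source ~/.zshrc
```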

@iwnfubb commented Sep 15, 2023

Works like a charm! Is it normal that the program uses so much RAM? About 20 GB is used.

@jorge-campo (Author)

Please refer to the macOS installation guide in the README.md file.

@huameiwei-vc

Last login: Sat Oct 14 11:02:20 on ttys000
(base) songchao@SongChaodeMacBook-Pro ~ % cd Fooocus

conda activate fooocus

python entry_with_update.py
Fast-forward merge
Update succeeded.
Python 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ]
Fooocus version: 2.1.60
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Total VRAM 16384 MB, total RAM 16384 MB
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Fooocus Smart Memory] Disabling smart memory, vram_inadequate = True, is_old_gpu_arch = True.
model_type EPS
adm 2560
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: /Users/songchao/Fooocus/models/checkpoints/sd_xl_refiner_1.0_0.9vae.safetensors
model_type EPS
adm 2816
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: /Users/songchao/Fooocus/models/checkpoints/sd_xl_base_1.0_0.9vae.safetensors
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/Users/songchao/Fooocus/modules/async_worker.py", line 18, in worker
import modules.default_pipeline as pipeline
File "/Users/songchao/Fooocus/modules/default_pipeline.py", line 252, in
refresh_everything(
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/songchao/Fooocus/modules/default_pipeline.py", line 226, in refresh_everything
refresh_loras(loras)
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/songchao/Fooocus/modules/default_pipeline.py", line 153, in refresh_loras
model = core.load_sd_lora(model, filename, strength_model=weight, strength_clip=weight)
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/songchao/Fooocus/modules/core.py", line 82, in load_sd_lora
lora = fcbh.utils.load_torch_file(lora_filename, safe_load=False)
File "/Users/songchao/Fooocus/backend/headless/fcbh/utils.py", line 13, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "/Users/songchao/miniconda3/envs/fooocus/lib/python3.10/site-packages/safetensors/torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

No matter how many times I download the model again, it doesn't help.

@lllyasviel (Owner)

MetadataIncompleteBuffer means the file is corrupted.

@huameiwei-vc

[screenshot] Why does this take so long? Do I need additional settings?

@huameiwei-vc

[screenshot] Why does this take so long? Do I need other settings?

Mac M1 Pro, 16 GB

@LeapLu commented Oct 24, 2023

[screenshot]

When starting with the command I got this TypeError; it then worked, but it keeps showing "Initializing...". Can anyone help?
MacBook M1 Pro

Update succeeded.
Python 3.9.16 (main, Oct 18 2023, 16:24:00) [Clang 15.0.0 (clang-1500.0.40.1)]
Fooocus version: 2.1.728
Running on local URL: http://127.0.0.1:7860

[screenshot]

@Southmelon-Allen

How many it/s do you get on the M1 Pro?

122 s/it on a MacBook Pro M2... so slow.

@tiancool commented Nov 3, 2023

"[Fooocus Model Management] Moving model(s) has taken 63.73 seconds" - it moves the model once before each generation, which is too slow.

@omioki23 commented Nov 4, 2023

[screenshot] Hi! I'm a complete noob at this. I'm using a MacBook Pro M1 Pro with 16 GB, and it takes around an hour to generate one image. Am I doing something wrong, or is it just because I'm using a Mac? It also shows an error that has been discussed here previously (The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU.) - does it affect the waiting time?

@mashres15

@omioki23 I also have the same issue. Seems like a Mac thing

@xplfly commented Nov 16, 2023

Machine information: M1, 16 GB
[screenshot]

I'm getting an error:
[screenshot]

@jorge-campo

@colingoodman

Setup was a breeze, but as others have mentioned generation is extremely slow. Unfortunate.

@badaramoni

I did everything and got the URL. When I entered a prompt and tapped Generate, it completed, but I can't see any of the images.
Any help?

@Shuffls commented Nov 29, 2023

When trying to generate an image on my MacBook Air M1, it gave the following error:

RuntimeError: MPS backend out of memory (MPS allocated: 8.83 GB, other allocations: 231.34 MB, max allowed: 9.07 GB). Tried to allocate 25.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)

Clearly it is implying I do not have enough memory, but has anyone figured out how to work around this? Thanks.

@ezzio-salas

@Shuffls I had a similar issue. I updated launch.py and added os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0" to disable the upper limit for memory allocations, and that fixed my problem.
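A sketch of that launch.py tweak, based on the comment above rather than on any official documentation; placing it near the top of the file, before anything imports torch, is the safest:

```python
# launch.py (near the top, before torch is imported anywhere)
import os

# Disable the MPS high-watermark limit so large allocations don't raise
# "MPS backend out of memory". Note: the PyTorch error message warns this
# may cause system-wide memory pressure or failure.
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
```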

@badaramoni

RuntimeError: MPS backend out of memory (MPS allocated: 6.34 GB, other allocations: 430.54 MB, max allowed: 6.77 GB). Tried to allocate 10.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).

@Dkray commented Jan 3, 2024

So, after a bit more digging, I managed to trim generation time to around 90 sec per image (10 s/it) in Speed mode (base 14-inch M1 Pro) and close to 40 sec in Extreme Speed mode, following this turbo-focus tutorial and python entry_with_update.py --unet-in-fp16 --attention-split

I followed this guide. I again got a message about using only the CPU and a speed of 130 seconds per iteration.

@tomekand1

@Dkray, for the CPU message, I implemented this fix

@nickdevvv commented Jan 3, 2024

Running this: PYTORCH_ENABLE_MPS_FALLBACK=1 python3 entry_with_update.py --attention-quad --always-high-vram

on an M3 Pro with 18 GB results in 50-60 s/it

But seems to not run on GPU, because I am getting: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications.

Anyone with same system and better starting command?

Edit:
This command: python entry_with_update.py --always-cpu --disable-offload-from-vram --unet-in-fp8-e5m2 reduces the iteration time to about 20s.

If GPU can be used, it should be much quicker, still looking for a better solution.

@wrzgenowa left a comment

[Screenshot from 2024-01-10]

The download always finishes, but then nothing happens. I tried it with Terminal and with PyCharm on an M1 MacBook. I have no clue about coding... please help.

@wperrin commented Jan 24, 2024

Running this: PYTORCH_ENABLE_MPS_FALLBACK=1 python3 entry_with_update.py --attention-quad --always-high-vram

on an M3 Pro with 18 GB results in 50-60 s/it

But seems to not run on GPU, because I am getting: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications.

Anyone with same system and better starting command?

Edit: This command: python entry_with_update.py --always-cpu --disable-offload-from-vram --unet-in-fp8-e5m2 reduces the iteration time to about 20s.

If GPU can be used, it should be much quicker, still looking for a better solution.

Thanks for this, it reduced my M1 Mac mini from 190 s/it to 100 s/it. However, MPS is not being used. I'm not sure why, but with the standard launch it gives the same "mean" error as you have.

The RAM is shared between the processor and GPU on Apple M1 chips, but I know I meet the minimum run spec. Let us know if you ever figure this out.
Also, I am running the latest PyTorch nightly and MPS shows as available.

@brunoleitemilk

With python entry_with_update.py --always-cpu --disable-offload-from-vram --unet-in-fp8-e5m2
I'm running on a 2017 15" MacBook Pro at ~72.60 s/it.

Is there something I can do to get faster results?
I mean, on Google Colab it can run with 1 CPU and 1 GB of RAM. Why can't I run it better with my i7?

@Zeeshan-2k1

python entry_with_update.py

Update failed.
No module named 'pygit2'
Update succeeded.
[System ARGV] ['entry_with_update.py']
Traceback (most recent call last):
File "/Users/zeeshan/Fooocus/entry_with_update.py", line 46, in
from launch import *
File "/Users/zeeshan/Fooocus/launch.py", line 22, in
from modules.launch_util import is_installed, run, python, run_pip, requirements_met
File "/Users/zeeshan/Fooocus/modules/launch_util.py", line 9, in
import packaging.version
ModuleNotFoundError: No module named 'packaging'
(fooocus)

Apple M1 Pro
Sonoma 14.1.12

@cybernet commented Feb 5, 2024

Has anyone managed to get it running on Metal?

@JaimeBulaGooten

With python entry_with_update.py --always-cpu --disable-offload-from-vram --unet-in-fp8-e5m2 I'm running on a 2017 15" MacBook Pro at ~72.60 s/it.

Is there something I can do to get faster results? I mean, on Google Colab it can run with 1 CPU and 1 GB of RAM. Why can't I run it better with my i7?

Sorry bro, been there. Running on Intel is a waste of time. It runs, but will remain slow.

@leecolarelli

I get around 10-15 s/it with python entry_with_update.py --unet-in-fp16 --attention-split. M1 Pro 16" 2021, 16 GB, macOS 14.2.1.

@mariitz commented Feb 19, 2024

python entry_with_update.py

Update failed.
No module named 'pygit2'
Update succeeded.
[System ARGV] ['entry_with_update.py']
Traceback (most recent call last):
File "/Users/zeeshan/Fooocus/entry_with_update.py", line 46, in
from launch import *
File "/Users/zeeshan/Fooocus/launch.py", line 22, in
from modules.launch_util import is_installed, run, python, run_pip, requirements_met
File "/Users/zeeshan/Fooocus/modules/launch_util.py", line 9, in
import packaging.version
ModuleNotFoundError: No module named 'packaging'
(fooocus)

Apple M1 Pro, Sonoma 14.1.12

@Zeeshan-2k1 did you solve your issue? I had the exact same error. In my case, I already had a newer version of Python (3.12) installed and linked, so whenever I ran "python" it pointed to that newer version. However, when you follow the steps in the readme, it installs Python 3.11 for you, and you have to use that Python as well, since libraries such as pygit2 are installed into that environment (within conda). Hope this helps!
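A quick way to check that the conda env's interpreter is the one being used, not a system or Homebrew Python (assuming the env is named fooocus, as in the earlier logs in this thread):

```sh
conda activate fooocus
which python       # should point inside the env, e.g. .../miniconda3/envs/fooocus/bin/python
python --version   # should match the version the readme's environment installs, not your system Python
python -c "import pygit2, packaging; print('env looks OK')"
```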

@mariitz commented Feb 19, 2024

I was testing some combinations of all the parameters; long story short, the best for me (MacBook Pro, M3 Pro Apple silicon) was: python entry_with_update.py --attention-pytorch

The --attention-pytorch flag gave the biggest performance boost, going from 25-30 sec per iteration down to almost 5 s/it. I also still included --always-high-vram --disable-offload-from-vram, but I am not 100% sure there is any noticeable difference in my case; the timing is almost exactly the same.

Also, the newest version of Fooocus (web UI) lets you choose the Extreme Speed setting (under "Advanced"), where only 8 iterations per image are needed. By selecting it, you might create images even faster, of course with a slight quality decrease.

@nicksaintjohn

Just to clarify for anyone else reading this thread: I can confirm there is a speed/memory issue at 16 GB on the Mac M-series. You need to force 16-bit floating point to reduce memory just enough to avoid a bottleneck. This more than doubles the speed... I went from over 120 s/it down to under 60 s/it. Still not fast, but it becomes usable.

I usually run with python entry_with_update.py --all-in-fp16 --attention-split but, tbh, I don't think --attention-split helps (still testing).

@Deniffler commented Mar 15, 2024

python entry_with_update.py --all-in-fp16 --attention-split,

Thanks bro!
I've tried all the variations of the startup commands. At first I had 100-110 sec/it, but your command sped up the process to 5-7 sec/it!
I have a MacBook Pro, M1 Pro, 16 GB, from 2021.

@Deniffler

To optimize the execution of your command on the command line and potentially speed up the process, we can focus on the parameters that most affect performance. However, given that your command already contains many parameters for optimizing memory and computational resource usage, the main directions for improvement involve more efficient GPU usage and reducing the amount of data processed in each iteration.

Here's a modified version of your command considering possible optimizations:

python entry_with_update.py --all-in-fp16 --attention-pytorch --disable-offload-from-vram --always-high-vram --gpu-device-id 0 --async-cuda-allocation --unet-in-fp16 --vae-in-fp16 --clip-in-fp16

Explanation of changes:

Removed unsupported parameters: Parameters that caused an error due to their absence in the list of supported script parameters (--num-workers, --batch-size, --optimizer, --learning-rate, --precision-backend, --gradient-accumulation-steps) have been removed.

Clarification of FP16 usage: Explicit indications for using FP16 for different parts of the model (--unet-in-fp16, --vae-in-fp16, --clip-in-fp16) have been added. This suggests that your model may include components like U-Net, VAE (Variational Autoencoder), and CLIP. Using FP16 can speed up computations and reduce memory consumption, although it may also slightly affect the accuracy of the results.

Using asynchronous CUDA memory allocation: The --async-cuda-allocation parameter implies that the script will use asynchronous memory allocation, which can speed up data loading and the start of computations.

Additional tips:

Performance analysis: Use profiling tools to analyze CPU and GPU usage to identify bottlenecks.
Data loading optimization: If possible, optimize data loading to reduce IO wait times. This can include using faster data formats or buffering data in memory.
Library version checks: Ensure you are using the latest versions of all dependencies. Sometimes updates contain significant performance improvements.
Experiments with batch size: Although the --batch-size parameter is not supported by your current command, if there's an opportunity to adjust the batch size in the code, it can significantly impact performance. Larger batch sizes can increase performance at the expense of increased memory usage.
Remember, performance optimization often requires experimentation and fine-tuning, as the impact of changes can greatly depend on the specific details of your task and hardware.

@Deniffler

[Screenshot from 2024-03-17]

@Dkray commented Mar 17, 2024

python entry_with_update.py --all-in-fp16 --attention-pytorch --disable-offload-from-vram --always-high-vram --gpu-device-id 0 --async-cuda-allocation --unet-in-fp16 --vae-in-fp16 --clip-in-fp16

I tried to use this and got this message:

/anisotropic.py:132: UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
s, m = torch.std_mean(g, dim=(1, 2, 3), keepdim=True)

@Ibarton5317

wow, 8 s/it - that is quite a bit of waiting

I have 161.37 s/it. Can someone help me figure out why? How can I make my Mac faster? It's a 2022 model, so it has the M1 chip. But why is it this slow?

@cootshk commented May 15, 2024

Is it just quitting when trying to generate an image for anyone else? (M2 Mac Air)

@Infiexe commented May 16, 2024

python entry_with_update.py
Update failed.
No module named 'pygit2'
Update succeeded.
[System ARGV] ['entry_with_update.py']
Traceback (most recent call last):
File "/Users/zeeshan/Fooocus/entry_with_update.py", line 46, in
from launch import *
File "/Users/zeeshan/Fooocus/launch.py", line 22, in
from modules.launch_util import is_installed, run, python, run_pip, requirements_met
File "/Users/zeeshan/Fooocus/modules/launch_util.py", line 9, in
import packaging.version
ModuleNotFoundError: No module named 'packaging'
(fooocus)
Apple M1 Pro Sonoma 14.1.12

@Zeeshan-2k1 did you solve your issue? I had the exact same error. In my case, I already had a newer version of Python (3.12) installed and linked, so whenever I ran "python" it pointed to that newer version. However, when you follow the steps in the readme, it installs Python 3.11 for you, and you have to use that Python as well, since libraries such as pygit2 are installed into that environment (within conda). Hope this helps!

Did you guys run cd Fooocus and conda activate fooocus before python entry_with_update.py? See the sketch below.
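For anyone unsure about that sequence, the minimal launch steps look like this (a sketch assuming the env is named fooocus, matching the earlier logs in this thread):

```sh
cd Fooocus
conda activate fooocus
python entry_with_update.py
```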

@Infiexe commented May 16, 2024

wow, 8 s/it - that is quite a bit of waiting

I have 161.37 s/it. Can someone help me figure out why? How can I make my Mac faster? It's a 2022 model, so it has the M1 chip. But why is it this slow?

You're probably not using the optimization parameters mentioned right above your post.

@Infiexe commented May 16, 2024

Here's a modified version of your command considering possible optimizations:

python entry_with_update.py --all-in-fp16 --attention-pytorch --disable-offload-from-vram --always-high-vram --gpu-device-id 0 --async-cuda-allocation --unet-in-fp16 --vae-in-fp16 --clip-in-fp16

Thank you, got it down to around 13-14 s/it on a 2020 M1 MacBook Air with 16 GB. It starts at 10.5, though, and slows down after a couple of steps. Fooocus still runs a bit slower than A1111 (7-8 s/it), but IMO it's still usable. I think it could be faster if it used both CPU and GPU cores; for now, the GPU sits at about 96% with frequent dips to 80%, and the CPU at only 10-17%. Any way to change that? I want my whole machine to generate.

@nicksaintjohn

Great work @Deniffler, you clearly spent more time and effort than I have; I was just glad to get it running fully off the GPU. Very glad that I helped set you on the right path, as you've now got us all running as fast as possible. I'm much more productive now, many thanks.

@tjex commented May 23, 2024

Posting here, since that is how it's described this should be done. I can convert it to an issue later if necessary.

The image input checkbox does not reveal the image input tabs / parameters for me.
And the prompt text box is huge...

On my fork I found the rows for the prompt box were set to 1024 and image_input_panel was hidden. The hidden panel could make sense, considering the checkbox is probably what opens the panel, but the prompt box being set to 1024 rows is odd.

It would also seem that image prompting is not working at all for me. I check "image prompt", place in two images (a landscape and an animal) and click generate. Fooocus then generates a random portrait image of a man/woman.

However, if I put an image into the describe tab, and click describe, it will indeed create a prompt from the image. So the tab/image handling seems to be working at least?

Anyone else having a similar problem?

@JaimeBulaGooten

Here's a modified version of your command considering possible optimizations:
python entry_with_update.py --all-in-fp16 --attention-pytorch --disable-offload-from-vram --always-high-vram --gpu-device-id 0 --async-cuda-allocation --unet-in-fp16 --vae-in-fp16 --clip-in-fp16

Thank you, got it down to around 13-14 s/it on a 2020 M1 MacBook Air with 16 GB. It starts at 10.5, though, and slows down after a couple of steps. Fooocus still runs a bit slower than A1111 (7-8 s/it), but IMO it's still usable. I think it could be faster if it used both CPU and GPU cores; for now, the GPU sits at about 96% with frequent dips to 80%, and the CPU at only 10-17%. Any way to change that? I want my whole machine to generate.

3.51 s/it on a MacBook Pro M3 with 36 GB. Also, memory consumption was halved after using this command.

[screenshot]

@ElPatr0n75

I get the following when I try to install the requirements:
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

I got two errors like that. I don't know how to solve them, because then it doesn't run at all.
