
Speed up the installation process? #118

Open
Expello opened this issue Nov 10, 2024 · 7 comments

Comments

@Expello commented Nov 10, 2024

Hi,

Is there a way to speed up the installation process?

Unfortunately the environment uses only one CPU core for the pip install process, which can take a long time (up to 2 hours) depending on the vast.ai instance.

Vast.ai has given me a few tips, but none of them work. Something like:

ENV MAKEFLAGS="-j$(nproc)"
ENV CMAKE_BUILD_PARALLEL_LEVEL=$(nproc)
ENV MAX_JOBS=$(nproc)
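(A plausible reason those tips fail: in a Dockerfile, ENV values are stored literally, so $(nproc) is never expanded at build time and the build tools see the raw string instead of a core count. A minimal sketch of the usual workaround; JOBS and its default of 8 are placeholders, not part of the original tips, and this only helps when you control the image build:)

# pass the core count in as a build argument instead of relying on $(nproc)
ARG JOBS=8
ENV MAKEFLAGS="-j${JOBS}" CMAKE_BUILD_PARALLEL_LEVEL=${JOBS} MAX_JOBS=${JOBS}
# build with: docker build --build-arg JOBS="$(nproc)" .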

For context: I delete the instance after use and reinstall it the next time I need it.

Does anyone have an idea?

Thanks a lot!

@robballantyne (Member)

@Expello what are you installing? The default script (PROVISIONING_SCRIPT variable) downloads a few nodes and models. The downloads are all that should take time and it should be nowhere near 2 hours.

Use a better instance, or delete the variable and install the nodes/models manually.

@Expello (Author) commented Nov 10, 2024

I use the standard PROVISIONING_SCRIPT and have only added my custom nodes, 15 of them, plus the corresponding tokens via the variables.

That does not even include the time for downloading the models, LoRAs, etc.

I don't understand why pip install (make) only uses one CPU core. Is that really the state of things in 2024?
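(For what it's worth: pip installs packages one at a time, and a source build is only parallel if that package's build system honors flags like MAX_JOBS or MAKEFLAGS, which many don't. A hedged workaround sketch, assuming the slow packages actually publish prebuilt wheels for the instance's Python/CUDA combination:)

# refuse source builds entirely; fails loudly if no matching wheel exists
pip install --only-binary=:all: -r requirements.txt
# softer variant: prefer wheels, fall back to source only where unavoidable
pip install --prefer-binary -r requirements.txt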

In your experience, how long does the install process normally take?

Maybe I've just had bad luck with the instances so far...

@robballantyne (Member)

Can you pre-build the wheels and download on start?
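As a sketch of how that could look (the paths and requirements file are placeholders): compile everything once on a throwaway instance, host the resulting wheels yourself, then install from that wheelhouse at start so nothing compiles:

# one-off, on a build instance: compile all dependencies into a wheelhouse
pip wheel -r requirements.txt -w /workspace/wheelhouse
# upload /workspace/wheelhouse to your own server, then on every fresh instance:
pip install --no-index --find-links=/workspace/wheelhouse -r requirements.txt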

Alternatively, build the nodes/models into a derivative image and use that - I am going to create better documentation for doing it, but it's fairly straightforward.
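A rough sketch of that derivative-image approach, not a confirmed recipe - the base tag, node URL, and ComfyUI path are assumptions that depend on the base image:

# hypothetical Dockerfile: bake the custom nodes into a derivative image
FROM ghcr.io/ai-dock/comfyui:latest
# example custom node; repeat for each one, adjusting the path to where the base image keeps ComfyUI
RUN git clone https://github.com/your-org/your-custom-node.git /opt/ComfyUI/custom_nodes/your-custom-node && \
    pip install -r /opt/ComfyUI/custom_nodes/your-custom-node/requirements.txt
# build once, push to Docker Hub, then point the vast.ai template at it:
#   docker build -t yourname/comfyui-custom:latest . && docker push yourname/comfyui-custom:latest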

@Expello (Author) commented Nov 10, 2024

Hey, that's a brilliant idea... but how do I do it? 😁

I could just upload the finished builds to my server and download them from there every time... that's really good.

@Expello (Author) commented Nov 10, 2024

On one vast.ai instance I have now been waiting 2 hours 14 minutes for completion...
Compiling the inference CLIs takes forever.

[screenshots of the install progress]

@Expello (Author) commented Nov 11, 2024

Even if I remove the init.sh script and install everything manually... it just takes too long.

And it takes especially long with the inference-cli packages.

I just can't understand why the install/make process only uses one CPU core; it feels like the 90s!

My dream would be a ready-made Docker image: build it once, upload it to Docker Hub, link it, done.

But unfortunately I don't think it will work as expected, and I have no idea how to do it.

I will probably just have to accept the situation; I simply had different expectations.

A deployment time of 10 minutes would be OK... but anything longer than an hour is just annoying.

If anyone else has an idea how to reduce the deploy time, I would be grateful for any tips.


After one hour and 16 minutes I canceled... frustrating.

[screenshots of the aborted install]

@Expello (Author) commented Nov 12, 2024

OK, I have found my way and I am now very satisfied.

Total deploy time is 17 minutes:
ComfyUI (with 17 custom nodes) is already available after 8 minutes;
another 9 minutes until all models are loaded (69 GB).
(All of this depends on location and internet connection, of course.)

Here are my steps (quick & dirty):

1. Perform a "normal" installation: install all required custom nodes (17 in my case), WITHOUT models, and configure all the settings the way you like them.
Then create a comfy_start.sh script under /workspace and make it executable (chmod +x comfy_start.sh):

#!/bin/bash
# activate the ComfyUI virtualenv and start the server on port 3000
source /workspace/ComfyUI/venv/bin/activate
cd /workspace/ComfyUI
python main.py --listen 0.0.0.0 --port 3000

2. Create a 7z archive of the /workspace folder (run this from /, so the archive contains the whole workspace directory with ComfyUI inside; -mmt=on enables multithreaded compression):

7z a -t7z -mmt=on cu.7z workspace
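(A quick sanity check before uploading - 7z l just lists the archive contents, so you can confirm the top-level workspace layout is what the on-start script in step 5 expects:)

7z l cu.7z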

3. Download the created archive and upload it to a server of your choice. My archive is 4.6 GB and I use my private server at home for storage.

4. Create a file with one wget command per model, LoRA, etc., e.g. downloadmodels.txt, and upload this file to a web server of your choice:

wget --header="Authorization: Bearer YOUR HF TOKEN" -c "https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors" -O "/workspace/ComfyUI/models/unet/flux1-dev.safetensors" --progress=bar:force:noscroll
wget "https://civitai.com/api/download/models/800173?token=YOUR CIVITAI TOKEN" -O "/workspace/ComfyUI/models/loras/Emma_Stone_Flux_v1.safetensors" --progress=bar:force:noscroll
# and so on
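Each line in this file is a complete shell command; that is what makes the parallel call in step 5 work, since parallel -j 8 :::: downloadmodels.txt reads the file and runs up to 8 of those lines at once. A quick way to verify the file before relying on it (--dry-run prints the commands without executing them):

parallel --dry-run -j 8 :::: downloadmodels.txt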

5. Create a new template using the "ComfyUI FLUX.1" Docker image and modify the "On-start Script" as follows:

env >> /etc/environment
mkdir -p /workspace
apt install -y p7zip-full parallel
# fetch and unpack the prepared ComfyUI archive at /
cd /
wget -c "https://yourserver.net/cu.7z"
7z x -mmt=on cu.7z
rm cu.7z
# fetch the model list and run up to 8 downloads in parallel
wget "https://yourserver.net/downloadmodels.txt"
parallel -j 8 :::: downloadmodels.txt
rm downloadmodels.txt
cd /workspace
./comfy_start.sh

Environment:
{
  "HF_TOKEN": "",
  "CIVITAI_TOKEN": "",
  "WEB_ENABLE_HTTPS": "true",
  "WEB_ENABLE_AUTH": "true",
  "COMFYUI_ARGS": "",
  "AUTO_UPDATE": "false",
  "PROVISIONING_SCRIPT": "https://raw.githubusercontent.com/ai-dock/comfyui/main/config/provisioning/default.sh",
  "DATA_DIRECTORY": "/workspace/",
  "WORKSPACE": "/workspace/",
  "WORKSPACE_MOUNTED": "force",
  "SYNCTHING_TRANSPORT_PORT_HOST": "72299",
  "-p 8384:8384": "1",
  "-p 72299:72299": "1",
  "JUPYTER_DIR": "/",
  "-p 22:22": "1",
  "-p 1111:1111": "1",
  "-p 8888:8888": "1",
  "-p 8188:8188": "1",
  "-p 3000:3000": "1",
  "OPEN_BUTTON_PORT": "3000",
  "OPEN_BUTTON_TOKEN": "1",
  "JUPYTER_TYPE": "lab"
}

Maybe this will help someone!

Have a nice day, everyone!
