This is just a simple Docker container with a few helpful additions to get you up and running with ComfyUI quickly and conveniently.
It already has Python, CUDA, and cuDNN installed, so you only need to run it with `--gpus all`
(already configured in the Compose file), and you won't need any CUDA-related dependencies on your host besides the NVIDIA driver.
I've tested this on Ubuntu 24.04, but I see no reason it wouldn't work on other systems including WSL.
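For reference, GPU access in a Compose file is typically granted with a device reservation like the sketch below. The service name and layout here are placeholders, not necessarily what this repository's `docker-compose.yml` uses:

```yaml
# Sketch only: how GPU access is commonly requested in docker-compose.yml.
# The service name and surrounding keys are illustrative placeholders.
services:
  comfyui:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```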
- Create directories to hold your models, input images, and output images.
- Modify `docker-compose.yml` (see the sketch after this list):
  - Change `/path/to/models`, `/path/to/input`, and `/path/to/output` to the paths of the directories you created.
  - Change `groupid` and `userid` to the group and user that should read/write the files in those directories (you can get your current user and group IDs with `id -u` and `id -g`, respectively).
- Download the models you want to use (good resources include HuggingFace and Civitai):
  - Checkpoints, e.g. SD1.5 or SDXL
  - LoRAs
  - Embeddings
  - Upscalers
  - etc.
- Run `docker compose up` in the folder containing `docker-compose.yml`.
- Navigate to http://127.0.0.1:8188 in your browser.
- You can use Ctrl+C to terminate ComfyUI.
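For illustration, here is a rough sketch of what the volume and user settings might look like once the placeholders are filled in. The container-side paths, service name, and exact keys are assumptions; keep whatever the repository's file already uses and only change the host-side values:

```yaml
# Hypothetical example of filled-in placeholders; the real file's keys and
# container-side paths may differ.
services:
  comfyui:
    user: "1000:1000"                                # output of `id -u` and `id -g` on the host
    volumes:
      - /home/me/comfyui/models:/comfyui/models      # was /path/to/models
      - /home/me/comfyui/input:/comfyui/input        # was /path/to/input
      - /home/me/comfyui/output:/comfyui/output      # was /path/to/output
```

Inside the models directory, ComfyUI generally expects subdirectories such as `checkpoints/`, `loras/`, `embeddings/`, and `upscale_models/`, so placing downloaded files in the matching subfolder keeps them discoverable; check this repository's configuration if it maps them differently.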
When you build the image, quite a number of git repos are pulled in, and each repo's dependencies are installed. During container startup, a number of models are downloaded to get you up and running quickly. Each of these packages, dependencies, and models has its own licence. I haven't gone through them one by one (yet), but I believe a lot of them are GPLv3, which is a copyleft licence. Some of the models, such as Stable Diffusion 3, also have non-commercial clauses attached.
- I will add the list of imported repos and models here, along with their licences.
- I will release a sample workflow that demonstrates how to do a number of common things, e.g. generation, face detailing, segmentation, inpainting, upscaling, etc.