ComfyUI to Gradio/Spaces blog #2553
base: main
Conversation
Also cc @abidlabs, @asomoza and @Vaibhavs10 for visibility
For that, a minimal Gradio app would be:

```py
if __name__ == "__main__":
    # Comment out the main() call
```
It took me a bit of time to understand this comment line, because we do not see the ComfyUI-exported Python code that contains this `main()` function definition.
```py
)
app.launch(share=True)
```
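For context (not part of the diff): a self-contained sketch of what such a minimal app might look like, assuming a Blocks layout like the one quoted further down in this review, and stubbing the `generate_image` function that the modified ComfyUI export is expected to define, so the example runs on its own:

```py
import gradio as gr

# Stub standing in for the function defined by the ComfyUI export;
# in the real app this runs the workflow and returns an image path.
def generate_image(prompt, structure_image, style_image,
                   depth_strength, style_strength):
    ...

with gr.Blocks() as app:
    gr.Markdown("# FLUX Style Shaping")
    with gr.Row():
        prompt = gr.Textbox(label="Prompt")
        structure_image = gr.Image(label="Structure Image", type="filepath")
        style_image = gr.Image(label="Style Image", type="filepath")
    # Slider ranges and defaults are illustrative, not the blog's values
    depth_strength = gr.Slider(0, 1, value=0.5, label="Depth Strength")
    style_strength = gr.Slider(0, 1, value=0.5, label="Style Strength")
    output_image = gr.Image(label="Output")
    generate_btn = gr.Button("Generate")
    generate_btn.click(
        generate_image,
        inputs=[prompt, structure_image, style_image,
                depth_strength, style_strength],
        outputs=output_image,
    )

if __name__ == "__main__":
    # main()  # the export's original entry point, commented out
    app.launch(share=True)
```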
I would have added a screenshot of the Gradio UI we just built (I know it's not a Gradio tutorial though)
```py
def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
```
And inside the function, we need to find the hard-coded values of the nodes we want, and replace them with the variables we would like to control, such as:
Maybe presenting the inputs + output(s) (as a Markdown list, for instance) before the Gradio code could help (just so we're prepared to make the association between the next code block and the previous one)
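To make the "such as:" above concrete (an illustration with hypothetical node names, not the blog's actual export), the substitution inside `generate_image` is of this kind:

```py
def generate_image(prompt, structure_image, style_image,
                   depth_strength, style_strength):
    # Before: the export hard-codes every node input, e.g.
    #   positive = cliptextencode.encode(text="a still life photo", clip=clip)
    # After: the Gradio inputs are passed through instead, e.g.
    #   positive = cliptextencode.encode(text=prompt, clip=clip)
    ...
```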
```py
# Keep only real model objects (skipping config dicts), unwrapping to the
# underlying patcher when one is present
valid_models = [
    getattr(loader[0], 'patcher', loader[0])
    for loader in model_loaders
    if not isinstance(loader[0], dict) and not isinstance(getattr(loader[0], 'patcher', None), dict)
]
```
Nice that you found a way to automate this
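If I read the approach right (an assumption about the surrounding code, which is not shown in the diff), this list then feeds ComfyUI's own model management so everything is loaded to the GPU once, outside the inference function; roughly:

```py
# `comfy` is ComfyUI's own package, importable because the app lives
# inside the ComfyUI folder; load the collected models once at startup
# so each ZeroGPU call only pays for inference.
from comfy import model_management

model_management.load_models_gpu(valid_models)
```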
Looks good to me regarding ZeroGPU
One solution is to take a much simpler (meaning only one or two model-loading Nodes) exported workflow as an example (either for the whole blog, or only for this part)
@@ -5227,3 +5227,15 @@
- community
- research
- open-source-collab
- local: run-comfyui-workflows-on-spaces
  title: "Run ComfyUI workflows for free with Gradio on Spaces"
consider shorter title
Suggested change:
- title: "Run ComfyUI workflows for free with Gradio on Spaces"
+ title: "Run ComfyUI workflows for free on Spaces"
## Intro

In this tutorial I will present a step-by-step guide on how I have converted a complex ComfyUI workflow to a simple Gradio application, and how I have deployed this application on Hugging Face Spaces ZeroGPU serverless structure, which allows for it to be deployed and ran for free on a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.
Suggested change:
- In this tutorial I will present a step-by-step guide on how I have converted a complex ComfyUI workflow to a simple Gradio application, and how I have deployed this application on Hugging Face Spaces ZeroGPU serverless structure, which allows for it to be deployed and ran for free on a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.
+ In this tutorial I will present a step-by-step guide on how to convert a complex ComfyUI workflow to a simple Gradio application, and how to deploy this application on Hugging Face Spaces' ZeroGPU serverless infrastructure, which allows it to be deployed and run for free in a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.
```py
# Add a title
gr.Markdown("# FLUX Style Shaping")

with gr.Row():
```
Wouldn't `gr.Interface` be easier to use here?
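For comparison, a sketch of what the `gr.Interface` variant could look like (same assumed `generate_image` signature, stubbed again so it runs standalone; labels and ranges are illustrative):

```py
import gradio as gr

# Stub for the exported workflow function
def generate_image(prompt, structure_image, style_image,
                   depth_strength, style_strength):
    ...

app = gr.Interface(
    fn=generate_image,
    inputs=[
        gr.Textbox(label="Prompt"),
        gr.Image(label="Structure Image", type="filepath"),
        gr.Image(label="Style Image", type="filepath"),
        gr.Slider(0, 1, value=0.5, label="Depth Strength"),
        gr.Slider(0, 1, value=0.5, label="Style Strength"),
    ],
    outputs=gr.Image(label="Output"),
    title="FLUX Style Shaping",
)

if __name__ == "__main__":
    app.launch(share=True)
```

The trade-off is less control over the layout than with `gr.Blocks`.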
## 3. Preparing it to run on Hugging Face Spaces

Now with our Gradio demo working, we may feel tempted to just hit an export button and get it working on Hugging Face Spaces, however, as we have all models loaded locally, if we just exported all our folder to Spaces, we would upload dozens of GB of models on Hugging Face, which is not supported, specially as all this models should have a mirror on Hugging Face.
Suggested change:
- Now with our Gradio demo working, we may feel tempted to just hit an export button and get it working on Hugging Face Spaces, however, as we have all models loaded locally, if we just exported all our folder to Spaces, we would upload dozens of GB of models on Hugging Face, which is not supported, specially as all this models should have a mirror on Hugging Face.
+ Now with our Gradio demo working, we may feel tempted to just upload everything to Hugging Face Spaces. However, this would require uploading dozens of GB of models to Hugging Face, which is not only slow but also unnecessary, as all of these models already exist on Hugging Face!
So, we need to first install `pip install huggingface_hub` if we don't have it already, and then we need to do the following on the top of our `app.py` file:
Suggested change:
- So, we need to first install `pip install huggingface_hub` if we don't have it already, and then we need to do the following on the top of our `app.py` file:
+ Instead, we will first run `pip install huggingface_hub` if we don't have it already, and then do the following at the top of our `app.py` file:
```py
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="comfyanonymous/flux_text_encoders", filename="t5xxl_fp16.safetensors", local_dir="models/text_encoders/t5")
```
This will map all local models on ComfyUI to a Hugging Face version of them. Unfortunately, currently there is no way to automate this process, you gotta find the models of your workflow on Hugging Face and map it to the same ComfyUI folders that.
Suggested change:
- This will map all local models on ComfyUI to a Hugging Face version of them. Unfortunately, currently there is no way to automate this process, you gotta find the models of your workflow on Hugging Face and map it to the same ComfyUI folders that.
+ This will map all local models on ComfyUI to their Hugging Face versions. Unfortunately, there is currently no way to automate this process; you need to find the models of your workflow on Hugging Face and map them to the same ComfyUI folders.
If you are running models that are not on Hugging Face, you need to find a way to programmatically download them to the correct folder via Python code. This will run only once when the Hugging Face Space starts.
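A hedged sketch of that one-time download (the URL and path here are hypothetical placeholders):

```py
import os
import urllib.request

MODEL_URL = "https://example.com/some_model.safetensors"   # hypothetical
MODEL_PATH = "models/checkpoints/some_model.safetensors"   # hypothetical

# Runs once at Space startup; skipped if the file is already present
if not os.path.exists(MODEL_PATH):
    os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)
    urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
```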
Now, we will do one last modification to the `app.py` file, which is to include the function decoration for ZeroGPU
Suggested change:
- Now, we will do one last modification to the `app.py` file, which is to include the function decoration for ZeroGPU
+ Now, we will do one last modification to the `app.py` file, which is to include the function decoration for ZeroGPU; this will let us do inference for free!
Very nice tutorial @apolinario! It's a bit of work, but nice to see there's a repeatable path that users can follow
Preparing the Article

The article goes in a md file. You can also specify `guest` or `org` for the authors.

Getting a Review
A review would be nice: @cbensimon, @pcuenca and @linoytsaban. @cbensimon, I'm unsure about the last part with regard to moving the models outside of the function, do you think there is a more elegant way to convey this? Maybe just the link to the diff without hardcoding it?