
ComfyUI to Gradio/Spaces blog #2553

Open · wants to merge 7 commits into base: main

Conversation

apolinario (Contributor)

Preparing the Article

  • Add an entry to _blog.yml.
  • Add a thumbnail. There are no requirements here, but there is a template if it's helpful.
  • Check that you use a short title and blog path.
  • Upload any additional assets (such as images) to the Documentation Images repo. This is to reduce bloat in the GitHub base repo when cloning and pulling. Try to have small images to avoid a slow or expensive user experience.
  • Add metadata (such as authors) to your md file. You can also specify guest or org for the authors.
  • Ensure the publication date is correct.
  • Preview the content. A quick way is to paste the markdown content in https://huggingface.co/new-blog. Do not click publish, this is just a way to do an early check.

Getting a Review

A review would be nice: @cbensimon, @pcuenca and @linoytsaban. @cbensimon, I'm unsure about the last part regarding moving the models outside of the function; do you think there is a more elegant way to convey this? Maybe just the link to the diff without hardcoding it?

@apolinario apolinario changed the title Comfyui to spaces ComfyUI to Gradio/Spaces blog Dec 19, 2024
apolinario (Contributor, Author)

Also cc @abidlabs, @asomoza and @Vaibhavs10 for viz

For that, a minimal Gradio app would be:
```py
if __name__ == "__main__":
# Comment out the main() call
)
app.launch(share=True)
```

Review comment: It took me a bit of time to understand this comment line because we do not see the ComfyUI-exported Python code that contains this `main()` function definition.
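For context on that review comment: a script exported from ComfyUI typically wraps the whole workflow run in a `main()` function called under the `__main__` guard. A simplified, hypothetical sketch of the edit being described (the real export contains the node-execution code inside `main()`; the names here are illustrative):

```python
workflow_runs = []

def main():
    # In the real exported script, this executes the whole ComfyUI
    # workflow once (loading models, running every node) and exits.
    workflow_runs.append("ran")

if __name__ == "__main__":
    # Comment out the main() call so the script no longer runs the
    # workflow once and exits; the Gradio app's event handlers will
    # drive the pipeline instead.
    # main()
    pass
```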


I would have added a screenshot of the generated Gradio UI that we just wrote (I know it's not a Gradio tutorial though)

```diff
+ def generate_image(prompt, structure_image, style_image, depth_strength, style_strength)
```

And inside the function, we need to find the hard-coded values of the nodes we want and replace them with the variables we would like to control, such as:


Maybe that presenting the inputs + output(s) (Markdown list for instance) before the Gradio code could help (just so we're prepared to do the association between the next code block and the previous one)
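As a hypothetical illustration of that replacement (the function bodies and values here are made up, not the PR's actual exported code), a hard-coded node input becomes a reference to the function's parameter:

```python
# Before: the exported script hard-codes the value that was saved
# in the workflow JSON when the workflow was exported.
def generate_image_hardcoded():
    text = "a cat in a spacesuit"  # hard-coded node value
    return text

# After: the same value is driven by a function parameter, so the
# Gradio UI can control it.
def generate_image(prompt):
    text = prompt  # replaced with the variable we want to control
    return text
```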

Comment on lines +475 to +479
```py
valid_models = [
    getattr(loader[0], 'patcher', loader[0])
    for loader in model_loaders
    if not isinstance(loader[0], dict) and not isinstance(getattr(loader[0], 'patcher', None), dict)
]
```


Nice that you found a way to automate this
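To see what that comprehension does, here is a small self-contained mock (the loader classes are stand-ins for ComfyUI loader nodes, not real ComfyUI types; real loaders return tuples whose first element may expose a `.patcher` attribute):

```python
class Patched:
    """Stand-in for a loader whose model lives behind a .patcher attribute."""
    def __init__(self):
        self.patcher = "patched-model"

class Plain:
    """Stand-in for a loader that is the model object itself."""

plain = Plain()
model_loaders = [
    (Patched(),),      # kept: contributes its .patcher
    (plain,),          # kept: no .patcher, contributes the object itself
    ({"config": 1},),  # skipped: plain dicts are not models
]

valid_models = [
    getattr(loader[0], 'patcher', loader[0])
    for loader in model_loaders
    if not isinstance(loader[0], dict)
    and not isinstance(getattr(loader[0], 'patcher', None), dict)
]
```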

@cbensimon left a comment:

Looks good to me regarding ZeroGPU

@cbensimon commented Dec 20, 2024:

> @cbensimon, I'm unsure about the last part regarding moving the models outside of the function, do you think there is a more elegant way to convey this? Maybe just the link to the diff without hardcoding it?

One solution is to take a much simpler exported workflow (one with only one or two model-loading Nodes) as the example, either for the whole blog or only for this part.

@@ -5227,3 +5227,15 @@
- community
- research
- open-source-collab

- local: run-comfyui-workflows-on-spaces
title: "Run ComfyUI workflows for free with Gradio on Spaces"

consider shorter title

Suggested change
title: "Run ComfyUI workflows for free with Gradio on Spaces"
title: "Run ComfyUI workflows for free on Spaces"


## Intro

In this tutorial I will present a step-by-step guide on how I have converted a complex ComfyUI workflow to a simple Gradio application, and how I have deployed this application on Hugging Face Spaces ZeroGPU serverless structure, which allows for it to be deployed and ran for free on a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.

Suggested change
In this tutorial I will present a step-by-step guide on how I have converted a complex ComfyUI workflow to a simple Gradio application, and how I have deployed this application on Hugging Face Spaces ZeroGPU serverless structure, which allows for it to be deployed and ran for free on a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.
In this tutorial I will present a step-by-step guide on how to convert a complex ComfyUI workflow to a simple Gradio application, and how to deploy this application on Hugging Face Spaces ZeroGPU serverless structure, which allows for it to be deployed and run for free in a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.

```py
# Add a title
gr.Markdown("# FLUX Style Shaping")

with gr.Row():
```

Wouldn't gr.Interface be easier to use here?


## 3. Preparing it to run on Hugging Face Spaces

Now with our Gradio demo working, we may feel tempted to just hit an export button and get it working on Hugging Face Spaces, however, as we have all models loaded locally, if we just exported all our folder to Spaces, we would upload dozens of GB of models on Hugging Face, which is not supported, specially as all this models should have a mirror on Hugging Face.

Suggested change
Now with our Gradio demo working, we may feel tempted to just hit an export button and get it working on Hugging Face Spaces, however, as we have all models loaded locally, if we just exported all our folder to Spaces, we would upload dozens of GB of models on Hugging Face, which is not supported, specially as all this models should have a mirror on Hugging Face.
Now with our Gradio demo working, we may feel tempted to just upload everything to Hugging Face Spaces. However, this would require uploading dozens of GB of models to Hugging Face, which is not only slow but also unnecessary, as all of these models already exist on Hugging Face!



So, we need to first install `pip install huggingface_hub` if we don't have it already, and then we need to do the following on the top of our `app.py` file:

Suggested change
So, we need to first install `pip install huggingface_hub` if we don't have it already, and then we need to do the following on the top of our `app.py` file:
Instead, we will first run `pip install huggingface_hub` if we don't have it already, and then add the following at the top of our `app.py` file:

```py
hf_hub_download(repo_id="comfyanonymous/flux_text_encoders", filename="t5xxl_fp16.safetensors", local_dir="models/text_encoders/t5")
```

This will map all local models on ComfyUI to a Hugging Face version of them. Unfortunately, currently there is no way to automate this process, you gotta find the models of your workflow on Hugging Face and map it to the same ComfyUI folders that.

Suggested change
This will map all local models on ComfyUI to a Hugging Face version of them. Unfortunately, currently there is no way to automate this process, you gotta find the models of your workflow on Hugging Face and map it to the same ComfyUI folders that.
This will map all local models on ComfyUI to their Hugging Face versions. Unfortunately, there is currently no way to automate this process; you need to find the models of your workflow on Hugging Face and map them to the same ComfyUI folders.


If you are running models that are not on Hugging Face, you need to find a way to programmatically download them to the correct folder via Python code. This will run only once when the Hugging Face Space starts.
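A minimal sketch of such a one-time programmatic download using only the standard library (the URL and destination below are placeholders, not models from the PR; on Spaces this runs once at startup):

```python
import os
import urllib.request

def download_if_missing(url: str, dest_path: str) -> str:
    """Download url to dest_path unless the file already exists."""
    if not os.path.exists(dest_path):
        os.makedirs(os.path.dirname(dest_path) or ".", exist_ok=True)
        urllib.request.urlretrieve(url, dest_path)
    return dest_path

# Example (placeholder URL and folder):
# download_if_missing(
#     "https://example.com/some_model.safetensors",
#     "models/checkpoints/some_model.safetensors",
# )
```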

Now, we will do one last modification to the `app.py` file, which is to include the function decoration for ZeroGPU

Suggested change
Now, we will do one last modification to the `app.py` file, which is to include the function decoration for ZeroGPU
Now, we will do one last modification to the `app.py` file, which is to include the function decorator for ZeroGPU, which will let us run inference for free!
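That decorator usage might look like the sketch below. The try/except shim is a common pattern (not from the PR) so the same file also runs outside Spaces, where the `spaces` package is unavailable; the function body is a stub:

```python
try:
    import spaces  # available on Hugging Face Spaces
except ImportError:
    # Local fallback: a no-op decorator with the same call shape,
    # so the code also runs outside Spaces.
    class spaces:
        @staticmethod
        def GPU(duration=60):
            def wrapper(fn):
                return fn
            return wrapper

@spaces.GPU(duration=60)
def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
    # Stub for the real workflow function; on ZeroGPU, a GPU is
    # attached only while this function is executing.
    return f"generated: {prompt}"
```

On ZeroGPU, the `duration` argument is the maximum number of seconds the function may hold the GPU.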

@abidlabs left a comment:

Very nice tutorial @apolinario! It's a bit of work, but nice to see there's a repeatable path that users can follow
