Merge pull request #11 from Visionatrix/optional-gemini-in-flows

added ability to use Gemini as alternative to Ollama

Showing 8 changed files with 151 additions and 6 deletions.
@@ -0,0 +1,36 @@
Hardware FAQ
============

First, you can take a look at the information in the `ComfyUI repository <https://github.com/comfyanonymous/ComfyUI/wiki/Which-GPU-should-I-buy-for-ComfyUI>`_.

.. note:: If you are using Windows and want to avoid hassles, there are currently no alternatives to Nvidia. PyTorch is expected to release a native AMD build for Windows soon, but until then, Nvidia is the only option.

List of GPUs by usefulness:

1. Nvidia 4090 ``24 GB``
2. AMD 7900 XTX ``24 GB``
3. Nvidia 3090 ``24 GB``
4. Nvidia 4080 Super ``16 GB``
5. Nvidia 4070 Ti Super ``16 GB``
6. Nvidia 4060 Ti ``16 GB``
7. Nvidia 3060 ``12 GB``

.. note:: You can also look at any performance tests of hardware for ComfyUI as a reference.

---

Q: Why are there no AMD cards other than *AMD 7900 XTX* on the list?

A: *ROCm (Radeon Open Compute) "officially" supports only the AMD 7900 XTX among consumer cards.*

---

Q: How much RAM is needed in the system?

A: *For normal operation, 32 GB is sufficient, but if you want to handle large resolutions with the Supir Scaler workflow, 64 GB is recommended.*

---

Q: How can I use 2 GPUs?

A: *The simplest way is to run 2 workers, each assigned to its own GPU, so they can process tasks in parallel.*
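
One common way to do this (a minimal sketch, not the project's official method) is to pin each worker process to a single card via the ``CUDA_VISIBLE_DEVICES`` environment variable. The worker start command in the sketch below is only an assumption; replace it with whatever command your installation uses to launch a worker:

.. code-block:: python

    import os
    import subprocess

    # Assumed worker command; swap in the command you normally use
    # to start a single Visionatrix worker.
    WORKER_CMD = ["python", "-m", "visionatrix", "run", "--mode", "WORKER"]

    workers = []
    for gpu_id in ("0", "1"):
        env = os.environ.copy()
        # CUDA_VISIBLE_DEVICES restricts the process to one GPU, so each
        # worker sees only "its" card and the two run in parallel.
        env["CUDA_VISIBLE_DEVICES"] = gpu_id
        workers.append(subprocess.Popen(WORKER_CMD, env=env))

    for worker in workers:
        worker.wait()

The same idea works with plain shell scripts or service units: start one worker per GPU, each with its own ``CUDA_VISIBLE_DEVICES`` value.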