Why is it not possible to generate images sequentially in a for loop, or to trigger the next "batch" from a latent image input, which would effectively allow infinite image generation?
Currently the only way to do something similar is to set the latent batch to the maximum size your GPU VRAM can handle.
Why can't I have a workflow that runs a set number of times sequentially, or that keeps generating images until it is killed?
This prevents running generation unattended overnight, and in my opinion it would be a really useful feature.
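As a scripted workaround, the exported workflow can be queued repeatedly against ComfyUI's HTTP API. A minimal sketch, assuming the server runs at 127.0.0.1:8188 and that `workflow_api.json` was exported with "Save (API Format)"; the KSampler node id `"3"` is a placeholder you would look up in your own JSON:

```python
# Minimal sketch: queue the same workflow a set number of times via ComfyUI's HTTP API.
# Assumes the server is at 127.0.0.1:8188 and workflow_api.json is an API-format export.
import json
import random
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_prompt(workflow: dict) -> None:
    # POST the workflow to the /prompt endpoint, which adds it to the queue
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    urllib.request.urlopen(urllib.request.Request(f"{SERVER}/prompt", data=data))

with open("workflow_api.json") as f:
    workflow = json.load(f)

for _ in range(10):  # run a fixed number of times, e.g. overnight
    # randomize the sampler seed so each queued run produces a different image;
    # "3" is a hypothetical KSampler node id -- check your exported JSON
    workflow["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    queue_prompt(workflow)
```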
Another case for conditionals is img2img vs. prompt input.
Why can't I have branching logic in a single workflow that behaves differently depending on whether I supply a text prompt or a source image?
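Until conditional nodes exist in the graph itself, the branch can live in a small driver script that picks a workflow based on whether a source image was supplied. A sketch under the same API assumptions as above; the two JSON file names and the LoadImage node id `"10"` are placeholders:

```python
# Sketch: choose a txt2img or img2img workflow depending on whether a source
# image path was passed on the command line. File names and node ids are placeholders.
import json
import sys
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_prompt(workflow: dict) -> None:
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    urllib.request.urlopen(urllib.request.Request(f"{SERVER}/prompt", data=data))

image_name = sys.argv[1] if len(sys.argv) > 1 else None

if image_name:
    # img2img branch: point the LoadImage node at the supplied image
    with open("img2img_api.json") as f:
        workflow = json.load(f)
    workflow["10"]["inputs"]["image"] = image_name  # "10" is a hypothetical LoadImage node id
else:
    # txt2img branch: plain prompt-driven workflow
    with open("txt2img_api.json") as f:
        workflow = json.load(f)

queue_prompt(workflow)
```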
This is just a proof of concept of the loop capability, but you can use 'auto queue' to implement such loop applications.
(Link: https://www.youtube.com/watch?v=iLO0fg0SF9w)
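The "generate until killed" case can also be scripted outside the UI by re-queuing whenever the queue runs low, which mimics 'auto queue'. A sketch under the same server and workflow-export assumptions as above; the node id `"3"` is again hypothetical:

```python
# Sketch: keep the queue topped up until the script is killed, approximating 'auto queue'.
import json
import random
import time
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_prompt(workflow: dict) -> None:
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    urllib.request.urlopen(urllib.request.Request(f"{SERVER}/prompt", data=data))

def queue_size() -> int:
    # /queue reports the currently running and pending prompts
    with urllib.request.urlopen(f"{SERVER}/queue") as resp:
        q = json.load(resp)
    return len(q["queue_running"]) + len(q["queue_pending"])

with open("workflow_api.json") as f:
    workflow = json.load(f)

while True:  # runs until the process is killed
    if queue_size() < 2:  # keep a small backlog so the GPU never idles
        workflow["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)  # hypothetical KSampler node id
        queue_prompt(workflow)
    time.sleep(1)
```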