-
Very good idea. I will implement it in SD WebUI ReActor and then here: I'm going to add an age detection option, and after that a txt output with the detection values.
The source image was renamed to "source" to match the same logic that sd-webui-reactor uses.
Yeah, you're right, I will point it out in the Readme with future updates.
-
Thanks, looking forward to the detection enhancement. To fight a bit for the naming: in a node-based system all input connections are just that, inputs. So calling it "Input image" adds no information about what it is used for. This particular input image is the target image to be changed by ReActor, just as the source image is an input image used as the source of the transformation. You help your users' understanding much more that way. The naming-for-function principle is also common in ComfyUI, e.g. the two prompt inputs in KSampler are called positive and negative to make it clear what they are used for.
-
In any case, we can rename inputs and outputs the way we like inside ComfyUI
-
I just saw that you closed the original issue while I was writing this, so I'm repeating it in this new issue to be sure you see it.
It's very nice that you have added the gender handling, thank you so much. I have updated the node and tested it, and it worked as expected.
I initially had a problem: after updating with git pull, the node (in an existing workflow) showed the new gender fields, but the two image inputs kept their old labels while their positions changed, which caused my wiring to fail. It was very confusing and I didn't understand it until I deleted the node and replaced it with a new one.
The new nomenclature - source for the source faces and input for the image to be changed - is very confusing, at least to me. Source and input are two concepts rather close to each other. Much better to call them what you do in A1111: Source for the source faces, target for the image that is to be changed. You actually do that in the ComfyUI log.
If you don't know ReActor from A1111, it's not clear that you can specify a list of indexes, not just one index.
A question: Do you count the detected faces from left to right, or is top to bottom also involved?
A wish: do you have (or know of) a way of showing the analysis of an image: the number of faces found and the sequence of genders (M/F)? If the information is readily available, you could output it as text or, ideally, as the original target image with the face number and gender drawn on top of each detected face. This would be nice for A1111 too. Not everyone has access to logs, so some output would be preferable, although log output could be a first step.
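For reference, here is a minimal sketch of the kind of overlay I mean, based on the insightface FaceAnalysis API that ReActor builds on. The model name, file names and the left-to-right sorting are my assumptions for illustration, not ReActor's actual code:

```python
# Sketch only: annotate detected faces with index and gender using insightface.
# Assumes insightface and opencv-python are installed; "target.png" is a
# hypothetical input file.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("target.png")
faces = app.get(img)

# Assumption: count faces left to right by the x coordinate of the bounding box.
faces = sorted(faces, key=lambda f: f.bbox[0])

print(f"{len(faces)} face(s) found")
for i, face in enumerate(faces):
    x1, y1, x2, y2 = map(int, face.bbox)
    label = f"{i}:{face.sex}"  # e.g. "0:F", "1:M"
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(img, label, (x1, max(y1 - 8, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    print(f"face {i}: gender={face.sex}, age={face.age}")

cv2.imwrite("target_annotated.png", img)
```

Something along these lines, exposed either as a text output or as an extra image output on the node, would cover what I'm asking for.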