Automatization of node creation #3437
I am in favour of an abstraction for the node boilerplate (this is what drove me to write ScriptNodes also). This implementation can be abstracted a lot more, I think; even though you mention the autocomplete, it's still pretty intense/dense.
Well, what does it abstract? This doesn't seem to be much shorter than our usual nodes... Maybe it is doing something behind the scenes? :) In my experience, most of the boilerplate comes from data nesting / vectorization support, something like:

```python
out = []
for a, b, c in zip_long_repeat(...):
    new = []
    for a, b, c in zip_long_repeat(...):
        new.append(...)
    out.append(new)
```

The problems I see here are
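For reference, the repeat-last matching behaviour this boilerplate relies on can be sketched with a self-contained stand-in for Sverchok's `zip_long_repeat` (the real helper lives in `sverchok.data_structure`; this reimplementation and the vertex-scaling example are illustrative only, not the node's actual code):

```python
def zip_long_repeat(*lists):
    """Zip lists together, repeating the last element of shorter lists
    so that every list reaches the length of the longest one
    (a toy stand-in for Sverchok's zip_long_repeat)."""
    n = max(len(lst) for lst in lists)
    padded = [list(lst) + [lst[-1]] * (n - len(lst)) for lst in lists]
    return zip(*padded)

# Typical per-node boilerplate: two nested levels of matching.
verts_in = [[(0, 0, 0), (1, 0, 0), (2, 0, 0)]]   # one object, three vertices
params_in = [[0.5]]                               # one value, repeated per vertex

out = []
for verts, params in zip_long_repeat(verts_in, params_in):
    new = []
    for v, p in zip_long_repeat(verts, params):
        new.append(tuple(c * p for c in v))       # scale each vertex by the parameter
    out.append(new)

print(out)  # [[(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]]
```

The single parameter 0.5 is repeated to match all three vertices, which is exactly the hand-written nesting every node currently has to carry.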
In general, it seems to me that requirements for "good data matching" are relatively simple:
It would be good to have something like:

```python
# This says that the node wants to handle one list of vertices
# together with one value of Parameter.
matching = Matching(
    Verts = Socket(level = 2),      # A single vertex has level 1; a list of vertices has level 2; any deeper level requires iteration
    Parameter = Socket(level = 0),  # A numeric parameter has level 0; any deeper level requires iteration
    ...)
matching.mode = Matching.REPEAT_LAST

# This would:
# 1) match data and nesting levels in the inputs: for example, if the node
#    receives data with different nesting levels in different inputs, this
#    will add as many [] as required to match each list of vertices with
#    one value of Parameter;
# 2) with data.next(), iterate through the resulting data tree, doing as
#    many nested "for" loops as needed;
# 3) with data.output[...].add(), manage the output data lists so that in
#    the end the data in the output sockets has the correct nesting levels,
#    in general the same nesting level as in the inputs.
with data_iterator(self.inputs, self.outputs, matching) as data:
    while data.next():
        new_verts, new_faces = do_something(data.input['Verts'], data.input['Parameter'])
        data.output['Verts'].add(new_verts)
        data.output['Faces'].add(new_faces)
```
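As a rough illustration of step (1) of this proposal, the level-matching part could look something like the toy functions below. `nesting_level`, `wrap_to_level`, and the level conventions are my own sketch of the idea, not an existing Sverchok API:

```python
def nesting_level(data):
    """Depth of list nesting: 0.5 -> 0, [0.5] -> 1, [[0.5]] -> 2.
    Only lists count as nesting here; a vertex could be a tuple."""
    level = 0
    while isinstance(data, list):
        level += 1
        data = data[0] if data else 0
    return level

def wrap_to_level(data, target_level):
    """Add as many [] wrappers as required so that the data
    reaches the wanted nesting level (the "matching" step)."""
    while nesting_level(data) < target_level:
        data = [data]
    return data

# The node wants vertices at level 2 (one list of vertices) and the
# parameter at level 0 (one number); inputs arriving one level deeper
# would then be iterated in lockstep, repeating the last element.
verts_in = wrap_to_level([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], 3)
param_in = wrap_to_level(0.5, 1)

print(nesting_level(verts_in))  # 3
print(param_in)                 # [0.5]
```

A real implementation would also have to walk the matched trees in parallel and rebuild the output nesting, which is what `data.next()` and `data.output[...].add()` stand for above.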
@zeffii Yes. I see the next step in deleting
@portnov It does. You can have a look at the dissolve node code. I automated standard vectorization, so the process method is called separately for each loop. I think other vectorization systems can be considered, but it would be good to get evidence that the approach is good and worth developing further.
That's what I wanted to have, but my knowledge was not enough to write it as it should be written initially.
I've added code for automatization of node creation along with ...
The experiment continues. I'm thinking about avoiding setting ... Also, sockets got an extra ...
I have also added auto-registration of node classes. No need to write ...
How to update a new node's text on the fly then? I used ...
What do you mean by "on the fly"? If I'm adding new code to a node, I just reload Blender (I bound the command to the F8 key) and the changes appear.
There is ...
You can try rerunning this file; it is where the classes are actually registered: sverchok/utils/handling_nodes.py, line 407 in 3e31747.
|
@nortikin I have tested ...
I have continued experiments with optimizing the API for node creation and have reached much further than in my last attempt.
The following code works:
It should look magical.
The main advantage of the current approach is that it impacts nothing else. It does not change or add any node base classes, but new nodes can benefit from it.
I have also tried to use autocomplete as much as possible. At present, most names will be hinted by the IDE. Access to sockets can be got via dot notation, like this:
node.inputs.verts
It is possible to set which sockets should be vectorized, and all other work will be done automatically. The `process` function should be coded as if Sverchok had no vectorization. In the end, I expect to get fewer bugs and more new nodes coded in the same time interval.
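To make the description above concrete, here is a toy sketch of what such a declarative node might look like. All names here (`SocketGroup`, `ScaleNode`, `run`) are hypothetical illustrations, not the actual API from this pull request:

```python
class SocketGroup:
    """Holds named sockets as attributes, so an IDE can autocomplete
    node.inputs.verts via plain dot notation."""
    def __init__(self, **sockets):
        for name, default in sockets.items():
            setattr(self, name, default)

class ScaleNode:
    def __init__(self):
        self.inputs = SocketGroup(verts=[(1.0, 2.0, 3.0)], scale=[2.0])
        self.outputs = SocketGroup(verts=[])

    def process(self, vert, scale):
        # Written as if Sverchok had no vectorization:
        # one vertex in, one number in, one vertex out.
        return tuple(c * scale for c in vert)

    def run(self):
        # The framework would perform this loop for every vectorized
        # socket; it is sketched here explicitly for clarity.
        self.outputs.verts = [self.process(v, s)
                              for v, s in zip(self.inputs.verts, self.inputs.scale)]

node = ScaleNode()
node.run()
print(node.inputs.verts[0])   # dot-notation access: (1.0, 2.0, 3.0)
print(node.outputs.verts)     # [(2.0, 4.0, 6.0)]
```

The point of the design is that `process` stays scalar and testable, while the looping and data matching live in one place outside the node.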