Thoughts on the future of sockets #184
I think numpy arrays are the logical choice. Let's go with x y z w, even if w might be hidden, to allow for simple 4x4 matrix math. A polyline is edge data; is that allowed in Blender? And to rewrite all nodes to support this... |
I've seen data structures that store unrolled info by inserting a token counter between each face,
to be read like: |
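One hypothetical flat encoding of that idea (illustrative only, not Sverchok's actual format): prefix every face with its vertex count, so mixed triangles and quads fit in a single flat list.

```python
def unroll_faces(flat):
    """Decode a count-prefixed flat index list back into a list of faces."""
    faces, i = [], 0
    while i < len(flat):
        n = flat[i]                          # token: verts in the next face
        faces.append(flat[i + 1:i + 1 + n])  # the face's vertex indices
        i += 1 + n
    return faces

# one triangle followed by one quad, stored flat
print(unroll_faces([3, 0, 1, 2, 4, 2, 3, 4, 5]))
# [[0, 1, 2], [2, 3, 4, 5]]
```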
OK, I appear to be mistaken (gladly) about edges being permitted to contain more than 2 verts.
How does @nortikin feel about this? Separate branch, or implement the new sockets in tandem and gradually migrate? |
mesh = {
    'verts3': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, ...],
    # 'verts4': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, ...],
    'verts_type': 'verts3',
    'edges': [0, 1, 1, 2, 2, 3, 3, 4, ...],
    'poly3': [0, 1, 2, 1, 2, 3, ...],
    # 'poly4': [0, 1, 2, 3, 1, 2, 3, 5, ...],
    # 'polyM': [[0, 1, 2, 3, 4, 5, 6], [1, 2, 3, 8, 9], ...],
    'polys_type': 'poly3',  # or None
    'vertex_colors_1': [col1, col2, col3, ...],  # easy to convert to n colors per face
    # 'vertex_colors_m': [[col1, col2, col3], [col4, col5, col6, col7], ...],
    'vertex_color_type': 'vertex_colors_1',  # or None
    'material_slot': (name? or index..?)
}

Flattened numpy arrays are easy to reshape: http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html
these are efficient operations. |
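A sketch of that reshape step (variable names here are illustrative, not Sverchok's actual API):

```python
import numpy as np

# 9 floats = 3 verts in unrolled (flat) form
verts = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0])

# view as n x 3 (x, y, z); reshape returns a view, no data is copied
v3 = verts.reshape(-1, 3)

# promote to homogeneous x y z w coordinates for 4x4 matrix math
v4 = np.hstack([v3, np.ones((len(v3), 1))])

print(v3.shape, v4.shape)  # (3, 3) (3, 4)
```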
IMHO it is needed: to separate nestedness, as done in the generators (Cylinder and Plane), there is grouping of vertices that helps me in further manipulation; I made an example of this. So numpy with nested arrays and matrices is needed. Additionally, polygons must have the same grouping and the same UV direction of the mesh at all levels as with the generators. Ideally every node should be able to manipulate nestedness for every case: the usual case [[[1,2,3]]], the grouped case [[[[1,2],[3,4]]]], and the flat case [[1,2,3]], as it is for floats and integers. If we are talking about polygons, there is no way to flatten the array at all. Ideally all data structures should have the same nestedness, so as not to confuse. And maybe we should bring floats and integers to the same order, [[[1,2,3,4]]], three levels plus the digit:

mesh = {
    'vers': { 1: [[[ ... ]]], 2: [[[ ... ]]] },
    'curv': { 1: [[[ ... ]]], 2: [[[ ... ]]] },
    'edgs': { 1: [[[ ... ]]], 2: [[[ ... ]]] },
    'pols': { 1: [[[ ... ]]], 2: [[[ ... ]]] },
    'vcol': { 1: [[[ ... ]]], 2: [[[ ... ]]] },
    # one three-component color corresponding to each vert
    'mats': { 1: {'name1': [1, 2, 5, 6, 7]}, 2: {'name1': [1, 2, 5, 6, 7]} },
    # where 'name1' is the material name or index and [1, 2, 5, 6, 7]
    # are the numbers of the polygons
    'mtrx': { 1: [array matrix], 2: [array matrix] },
}

Or, instead of the numbers 1:, 2:, we could hold the name of the initial object in the scene, if there was one? |
The main question about numpy arrays is whether they can manage nested lists? |
I was thinking that for nested stuff we might do it explicitly. |
I understand if you think there is no place for flat arrays, but they are used as a convenience for optimized data processing in machine learning. Images inside Blender are stored as flat arrays. |
Then we should consider: if we group data, and then group those groups of data again, how would we do that? |
I haven't thought about the grouped data you mention, like cylinder -- I never use the |
>>> import numpy as np
>>> sh1 = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
>>> sh2 = [2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
>>> sh3 = [3.0, 4.0, 5.0]
>>> f = np.array([sh1, sh2, sh3])
>>> f
array([[1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0],
[2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0],
[3.0, 4.0, 5.0]], dtype=object)
>>> f.size
3

and

>>> mesh3 = np.array([3.0, 4.0, 5.0])
>>> mesh2 = np.array([2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0])
>>> mesh1 = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
>>> f = np.array([mesh1, mesh2, mesh3])
>>> f
array([array([ 1., 2., 3., 1., 2., 3., 1., 2., 3.]),
array([ 2., 3., 4., 2., 3., 4., 2., 3., 4., 2., 3., 4.]),
array([ 3., 4., 5.])], dtype=object)
>>> f[0].dtype
dtype('float64') |
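Worth noting: recent numpy releases refuse to guess with ragged input, so the dtype=object behaviour shown in the sessions above must be requested explicitly. A sketch of the explicit form:

```python
import numpy as np

sh1 = [1.0, 2.0, 3.0] * 3   # 9 floats
sh2 = [2.0, 3.0, 4.0] * 4   # 12 floats
sh3 = [3.0, 4.0, 5.0]       # 3 floats

# ragged input: newer numpy raises an error unless dtype=object is explicit
f = np.array([sh1, sh2, sh3], dtype=object)

print(f.size)     # 3 -- one element per sub-list, not per float
print(len(f[1]))  # 12
```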
The whole concept of Sverchok, and Grasshopper too, lets you operate on a data tree. Every level is accessible then. |
The levels of data are really important to be able to control the flow of data. But they can be ordinary Python lists, and the actual objects can be Also, Separate shouldn't be on by default, I think. Matrices should also allow nesting so you can apply them per object in a node group, or even per vertex. |
OK, so how will the data occur? In the abstract:

data[node[0]] = {1, 2, 3}
data[node[1]] = {1, 2, 3}

then

node[1].in_socket[1] = node[0].out_socket[1] = data[node[0]][2].name
node[1].out_socket[3] = data[node[1]][2].name
data[node[1]][2] = data[node[0]][2].definition_of node[1].with_parameters_of_node[1]

then in nodes we will get something like the name and multisocket/adaptive socket stuff. |
Sockets should be adaptable: colorized by the data bound to them, so we could make adaptive sockets more easily. In the socket class there would be a check against a dictionary of the node's data, and the socket would be colored and even labeled with text corresponding to it. Maybe we should make a standard in naming, like data/vers/pols/edgs/mtrl/mtrx/vcol/etc, or shouldn't we? 'vers': { 1: [[[ ... ]]], 2: [[[ ... ]]] }, where 1, 2 are sockets, not objects as I proposed before. We are coming to the current data as an array, but in a numpy way and with only sockets. Then Sverchok should know by itself that 'verts' is divided by 4 and edges by two, with polygons being another case. But I wish to avoid this difference in listing somehow... how? Leave it as it is now? Could it also be sped up with numpy? |
{curves_knots: [], curves_handlesL: [], curves_handlesR: []}, where the curve is defined by the first list; then, if there is no automatic handle type, it defines its coordinates as a tangent. |
Should a line (curve) or polygon share not only indices, but both a list of vertices and a list of indices?

socket_polygons:
    { 'verts': [ [ [0.0, 0.0, 0.0], [1.0, 3.0, 1.0], [0.0, 2.0, 0.0] ] ],
      'handlerL': [[False, False, False]],
      'handlerR': [[False, False, False]],
      'polygon': [[[0, 1, 2]]] } |
Also, we shouldn't confuse the actual sockets with our data formats. Curves (line, polyline, spline, NURBS, etc.) and mesh (generic mesh data) are very different concepts. I see no reason to mix them. |
Do you mean sockets for every type of curve? |
No, I think one generic curve socket is a good idea. |
@enzyme69 asked the right question. It is a conceptual decision; here is my part from the dialog with @enzyme69:
|
Yes, Nikitron and I were discussing "a good way to output variations". There are actually 2 things:
In the case of the umbrella, Nikita made a setup that allows the user to change the look of that umbrella to get different variations. However, to:
Is always a hassle. As a demo it is totally fine. Of course, we can always put some effort into implementing "Vectorized Input" to work with the Umbrella setup, but that can usually take quite some time and is not always guaranteed to work. I do not mind just scripting and randomizing. We don't really need to be able to Clone + Variations. ==> can be heavy. @zeffii also had this conversation with me at some point. Or it is me that keeps mentioning it. We can observe how Houdini does it with the "Copy SOP"; it is a pretty good example. That means in Houdini, every node is pretty much vectorized-ready or something. Anyhow, for now, the best solution is to use the SN node that does randomization of a "yellow node" and then bake the changes for each frame. I do not expect Sverchok to work like Houdini, able to clone and make copies with variations all at once. That would be too heavy anyway. And 100 variations is usually enough that our brain cannot tell the difference. If we need over 100 complex objects, we need to bake them and use them as instances with Blender particles. However, what Sverchok can do is probably something like a Bake or Cache node. At the moment, I just use Blender OBJ and MDD for baking. |
When I was thinking about a whole city made procedurally in Sverchok, I realized that was a silly idea. The layout of the city is okay; I think Sverchok can do it. But having one node-tree that does everything will not be efficient. A better way to think about making a procedural city: one needs to think more at the component level. A house is possible; a building with a different shape is possible. Just like a rough mockup. => However, I think this is something that Sverchok is very good at: medium-level object generation. Just like: Umbrella, Spider, Octopus, Tower, Fence so far. Of course, from that rough mockup, the simple house or building shape can be made more detailed, but only when needed and with a different node setup. So, just like a language: letters => words => sentences => story, poem. |
Yes, this is obviously what we all want. The kind of one-level vectorization we have now in some nodes isn't enough. But to get this working properly we would have to rewrite almost every node (I think every node). The problem right now is telling data and structure apart; if we could do that, it would make this much easier... The only nodes that work like this right now, supporting varying levels of data, are the scalar math node, float 2 int, and most of the list nodes (I think; please correct me if I am wrong). However, while doing this preparation of the nodes, I have moved the creation of topology and geometry, for example in Sphere, into functions that can easily be extended in the future. |
I don't know where this conversation is going; I find it difficult to follow. I thought we could discuss the data format of sockets here, and the problem of
In this description the |
I think Galapagos solves the problem nicely, with the bonus side effect that it can be used to back-propagate and change the input depending on the output. Fundamentally it also solves the vectorization issue in a simple way.
|
For simple cases of this we can study the ugly mess that is needed to match two nested lists in the scalar math node. (I am not saying this cannot be simplified and made prettier, or that it is a good approach; it is simply the most general I know of in Sverchok.)

def recurse_fxy(self, l1, l2, f):
    if (isinstance(l1, (int, float)) and
            isinstance(l2, (int, float))):  # atomic matched data, we can process
        return f(l1, l2)
    if (isinstance(l2, (list, tuple)) and
            isinstance(l1, (list, tuple))):  # list of data, match
        data = zip(*match_long_repeat([l1, l2]))
        return [self.recurse_fxy(ll1, ll2, f) for ll1, ll2 in data]
    # different levels of data, build lower level to higher level
    if isinstance(l1, (list, tuple)) and isinstance(l2, (int, float)):
        return self.recurse_fxy(l1, [l2], f)
    if isinstance(l1, (int, float)) and isinstance(l2, (list, tuple)):
        return self.recurse_fxy([l1], l2, f)

Notes,
Of course some nodes don't fall into the paradigm of matching like this, for example Line Connect, but most nodes actually do. For certain cases like mesh matching I think shortest-list match is a safer default, but otherwise longest-list matching is almost always the right answer. |
https://www.youtube.com/watch?v=4QqkxT1XeNw maybe watch this so we have common ground. Galapagos suggests to me a logistic regression: figuring out, with a variety of inputs, which combination produces the most desirable output (vs cost). The purpose of the node is to run a genetic algorithm to optimize input params to produce the best output. But it doesn't need to be a genetic algorithm; we could simply specify a range or list of values for each prime parameter, then let the node adjust the input params for each combo and store the output. I think this vastly simplifies our problem conceptually. It doesn't solve the fine-grained multilevel inputs for nodes... but do we really need that? |
I do think we need that. I don't have time to watch that one-hour video right now. |
http://youtu.be/4QqkxT1XeNw?t=20m45s |
I think we need both, they solve different problems, for the umbrella example, Galapagos is probably better. For structuring data I think that doesn't solve the issue at all. |
def recurse_fxy(self, l1, l2, f):
# atomic matched data, we can process
if isinstance(l1, (int, float)) and isinstance(l2, (int, float)):
return f(l1, l2)
# list of data, match
if isinstance(l2, (list, tuple)) and isinstance(l1, (list, tuple)):
data = zip(*match_long_repeat([l1, l2]))
return [self.recurse_fxy(ll1, ll2, f) for ll1, ll2 in data]
# different levels of data, build lower level to higher level
if isinstance(l1, (list, tuple)) and isinstance(l2, (int, float)):
return self.recurse_fxy(l1, [l2], f)
if isinstance(l1, (int, float)) and isinstance(l2, (list, tuple)):
        return self.recurse_fxy([l1], l2, f)

I just reformatted it slightly. Conceptually this isn't as scary as it looks. |
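To see what this matching does end to end, here is a standalone sketch: the method above rewritten as a plain function, with a minimal match_long_repeat supplied inline (the real helper lives in Sverchok's data_structure module; this stand-in only handles plain lists).

```python
def match_long_repeat(lists):
    """Extend shorter lists by repeating their last item (minimal stand-in)."""
    n = max(len(l) for l in lists)
    return [l + [l[-1]] * (n - len(l)) for l in lists]

def recurse_fxy(l1, l2, f):
    # atomic matched data, process directly
    if isinstance(l1, (int, float)) and isinstance(l2, (int, float)):
        return f(l1, l2)
    # both are lists: match lengths, then recurse pairwise
    if isinstance(l1, (list, tuple)) and isinstance(l2, (list, tuple)):
        data = zip(*match_long_repeat([list(l1), list(l2)]))
        return [recurse_fxy(a, b, f) for a, b in data]
    # different nesting depths: lift the shallower side one level
    if isinstance(l1, (list, tuple)):
        return recurse_fxy(l1, [l2], f)
    return recurse_fxy([l1], l2, f)

# a scalar matched against a nested list: the 5 is broadcast everywhere
print(recurse_fxy([[1, 2, 3], [10]], 5, lambda x, y: x + y))
# [[6, 7, 8], [15]]
```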
No, it isn't. The problem is knowing what the atomic/molecular data is; one solution that exists right now with It isn't hard to write it in a generic form either. |
I wish my input here was helpful, but perhaps it only impedes progress. I'll bow out of the discussion for now. |
I think your input is very helpful |
Alex says he is doing something like a neural network (synthetic intelligence or something). I will ask him what stage he is at. |
https://www.floodeditor.com/app.html |
@nortikin Can you explain a bit more, since the site needs sign-in? I propose the following, which I think would make it easy to parse the tree. Socket types:
All wrapped in a list/tuple that gives structure. |
This would be welcome and amazing :) but a major, major overhaul of all nodes. |
Yes, but I propose doing this together with adopting a structure like the prototypes/templates in SN2 #439 and leaving the |
big decisions.. |
Yes. But doing this would make it simpler to write nodes. Of course compatibility probably couldn't be kept at 100%, since some things work in non-standard ways today. |
1. numpy will be interrupted by mathutils.Vector(()) somewhere. |
|
3 Continued.
|
SN2: got it. This approach needs to use the update, old labeling, and color node functions. It will be correct, not as I started a branch (locally) with variables and types. Classes: it is really object oriented. Then next we change to numpy there... it is more work to do, and we can do it second. |
For many things we use Vector for now, we can use existing numpy functions or write appropriate replacements. |
Okay, I see, nestedness. |
levelsoflist wouldn't be used any more; some conversion functions would have to be written, yes, but it wouldn't be that hard in general. @zeffii |
https://plus.google.com/112318317857695241382/posts/4oHccTx7x1r |
IFC is a complicated but open BIM format; to support it I would look for a Blender exporter first. Realistically this seems quite complicated... |
First time I have seen Python definitions written in one line... |
Let's throw open some (crazy) ideas about socket types.
The natural choice of container suggests a dict. It might not be optimal when used for our purposes as a non-general container, but I cannot profess deep knowledge or expertise from experience.
Verts could be passed as one long numpy array of floats (called an "unrolled" array), which can be reshaped to 3*n or 4*n (xyzw) when needed. This appears to be efficient numpy / numerical-computing practice.
Edges and Polys can be passed as unrolled arrays of ints. Polys may however contain non-homogeneous information: a poly can hold any number of vertices from 3 upwards.
Some of these member variables would be optional, but if verts is specified then so must num_elements_per_vert be.
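Pulling these ideas together, a minimal sketch of such a dict-based socket payload (num_elements_per_vert and the key names are hypothetical, taken from the idea above, not an agreed format):

```python
import numpy as np

# hypothetical socket payload: unrolled arrays plus the metadata that
# must accompany 'verts' as suggested above
mesh = {
    'verts': np.array([0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0.]),
    'num_elements_per_vert': 3,
    'edges': np.array([0, 1, 1, 2, 2, 3, 3, 0]),
}

# a consuming node reshapes the unrolled data on demand
verts = mesh['verts'].reshape(-1, mesh['num_elements_per_vert'])
edges = mesh['edges'].reshape(-1, 2)  # edges are always vert pairs

print(verts.shape, edges.shape)  # (4, 3) (4, 2)
```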