
Thoughts on the future of sockets #184

Closed · zeffii opened this issue May 24, 2014 · 57 comments

Labels: Proposal 💡 Would be nice to have

Comments

@zeffii (Collaborator) commented May 24, 2014

Let's throw open some (crazy) ideas about socket types.

The natural choice of container suggests a dict. It might not be optimal for our purposes as a non-general container, but I can't profess deep knowledge or expertise from experience.

Verts could be passed as one long numpy array of floats (an "unrolled" array), which can be reshaped to 3×n or 4×n (xyzw) when needed. This appears to be efficient numpy / numerical-computing practice.
Edges and polys can be passed as unrolled arrays of ints. Polys may however contain non-homogeneous information: a poly can hold any number of vertices from 3 upwards.

mesh = {
    'verts': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, ...],
    'num_elements_per_vert': 3,
    'edges': [0, 1, 1, 2, 2, 3, 3, 4, ...],
    'polygons': [[0, 1, 2], [1, 2, 3], ...],
    'material_slot': (name? or index..?)
}

Some of these member variables would be optional, but if 'verts' is specified then 'num_elements_per_vert' must be too.
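A quick sketch of how the unrolled-verts idea could work with numpy (field names follow the dict above and are a proposal, not an agreed API; verts_as_rows is a hypothetical helper):

```python
import numpy as np

# Hypothetical instance of the proposed mesh container.
mesh = {
    'verts': np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0]),
    'num_elements_per_vert': 3,
    'edges': np.array([0, 1, 1, 2, 2, 0], dtype=np.uint32),
}

def verts_as_rows(mesh):
    """Reshape the unrolled vert array to (n, 3) or (n, 4) on demand."""
    k = mesh['num_elements_per_vert']
    return mesh['verts'].reshape(-1, k)

rows = verts_as_rows(mesh)   # a view with shape (3, 3); no data is copied
```

The reshape returns a view on the same buffer, which is what makes the unrolled storage cheap.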

@ly29 (Collaborator) commented May 24, 2014

I think numpy arrays are the logical choice. Let's go with x y z w, even if w might be hidden, to allow for simple 4x4 matrix math.
For mesh a dict makes perfect sense. For other socket types, numpy arrays as the atoms, maybe wrapped in a dict with some info. The structure of the data could be Python lists.

A polyline is edge data; is that allowed in Blender?
The tricky part is of course the face data.

And to rewrite all nodes to support this...

@zeffii (Author) commented May 24, 2014

I've seen data structures that store unrolled face info by inserting a count token before each face:

faces = [3,0,1,2,4,0,3,4,5,3,0,2,5,4,3,4,6,7]

to be read as: a 3-gon (0,1,2), a 4-gon (0,3,4,5), a 3-gon (0,2,5), a 4-gon (3,4,6,7).
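A decoder for that count-prefixed layout can be sketched in a few lines (unpack_faces is a hypothetical helper name):

```python
def unpack_faces(flat):
    """Decode a count-prefixed flat face list into a list of index tuples.

    Each face is stored as a vertex count followed by that many indices,
    as in the scheme described above.
    """
    faces, i = [], 0
    while i < len(flat):
        n = flat[i]                             # count token
        faces.append(tuple(flat[i + 1:i + 1 + n]))
        i += 1 + n
    return faces

faces = [3, 0, 1, 2, 4, 0, 3, 4, 5, 3, 0, 2, 5, 4, 3, 4, 6, 7]
unpack_faces(faces)
# → [(0, 1, 2), (0, 3, 4, 5), (0, 2, 5), (3, 4, 6, 7)]
```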

@zeffii (Author) commented May 24, 2014

OK, I appear to be mistaken (gladly) about edges being permitted to contain more than 2 verts.

And to rewrite all nodes to support this...

How does @nortikin feel about this? A separate branch, or implement the new sockets in tandem and gradually migrate?

@zeffii (Author) commented May 24, 2014

mesh = {
    'verts3': [ 1.0, 1.0, 1.0,   1.0, 1.0, 1.0, ... ],
    # 'verts4': [ 1.0, 1.0, 1.0, 1.0,   1.0, 1.0, 1.0, 1.0, ... ],
    'verts_type': 'verts3',
    'edges': [0, 1,   1, 2,   2, 3,   3, 4, ...],
    'poly3': [0, 1, 2,   1, 2, 3, ...],
    # 'poly4': [0, 1, 2, 3,   1, 2, 3, 5, ...],
    # 'polyM': [[0,1,2,3,4,5,6], [1,2,3,8,9], ...],
    'polys_type': 'poly3',  # or None
    'vertex_colors_1': [col1, col2, col3, ...],  # easy to convert to n colors per face
    # 'vertex_colors_m': [[col1, col2, col3], [col4, col5, col6, col7], ...],
    'vertex_color_type': 'vertex_colors_1',  # or None
    'material_slot': (name? or index..?)
}

Flattened numpy arrays are easy to reshape: http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html

>>> a = np.arange(6).reshape((3, 2))
>>> a
array([[0, 1],
       [2, 3],
       [4, 5]])

these are efficient operations.

@nortikin (Owner)

IMHO what's needed is to handle nesting, as done in the generators (cylinder and plane): there is a grouping of vertices that helps in further manipulation; I made an example of this. So numpy with nested arrays and matrices is needed. Additionally, polygons must have the same grouping, and generators must keep the UV direction of the mesh at all levels.

Ideally every node should be able to handle nesting in every case: the usual case [[[1,2,3]]], the grouped case [[[[1,2],[3,4]]]], and the flat case [[1,2,3]], as it already is for floats and integers.

If we are talking about polygons, there is no way to flatten the array at all. Ideally all data structures should have the same nesting depth, to avoid confusion. And maybe we should bring floats and integers to the same order: [[[1,2,3,4]]], three levels plus the number.

mesh = {
    'vers': { 1:[[[ ... ]]], 2:[[[ ... ]]] },
    'curv': { 1:[[[ ... ]]], 2:[[[ ... ]]] },
    'edgs': { 1:[[[ ... ]]], 2:[[[ ... ]]] },
    'pols': { 1:[[[ ... ]]], 2:[[[ ... ]]] },
    'vcol': { 1:[[[ ... ]]], 2:[[[ ... ]]] },
         # one three-component color per vertex
    'mats': { 1:{'name1':[1,2,5,6,7]}, 2:{'name1':[1,2,5,6,7]} },
         # where 'name1' is the material name or index and [1,2,5,6,7]
         # are the polygon numbers
    'mtrx': { 1:[array matrix], 2:[array matrix] },
}

Or instead of the numbers 1:, 2: we could hold the name of the original object in the scene, if there was one?
No, better to have a count: 0,1,2,3,4...

@nortikin (Owner)

The main question about numpy arrays: can they manage nested lists?
[[[ ]]] : hold them, recognize the levels of a list, recognize the max/min levels, and make the nesting equal for every part of a damaged list such as [[[ 1, [2] ]]], e.g. if we join two lists.
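Numpy itself does not track ragged nesting levels, so level recognition has to happen in plain Python. A sketch of the two operations asked about above (depth_range and equalize are hypothetical helper names):

```python
def depth_range(data):
    """Return (min, max) nesting depth of a possibly ragged list.

    A scalar has depth 0; unequal min/max means the list is 'damaged'.
    """
    if not isinstance(data, (list, tuple)):
        return (0, 0)
    if not data:
        return (1, 1)
    ranges = [depth_range(item) for item in data]
    return (1 + min(r[0] for r in ranges), 1 + max(r[1] for r in ranges))

def equalize(data, target):
    """Wrap scalars in lists until every branch reaches the same depth."""
    if not isinstance(data, (list, tuple)):
        return data if target == 0 else equalize([data], target)
    return [equalize(item, target - 1) for item in data]

lo, hi = depth_range([[[1, [2]]]])   # (3, 4): the nesting is uneven
fixed = equalize([[[1, [2]]]], hi)   # [[[[1], [2]]]]: now uniform depth 4
```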

@zeffii (Author) commented May 24, 2014

I was thinking that for nested stuff we might be explicit: stream = [mesh1, mesh2, mesh3], instead of that way. Keep a mesh conceptually as one object (verts, polys, edges, material...), and a stream as a collection of meshes.

@zeffii (Author) commented May 24, 2014

I understand if you think there is no place for flat arrays, but they are used as a convenience for optimized data processing in machine learning. Images inside Blender are stored as flat arrays:
R, G, B, A, R, G, B, A, R, G, B, A
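The flat RGBA layout can be handled the same way as the verts: one reshape gives per-pixel rows. A minimal sketch with a made-up 2×2 image (plain numpy; Blender's actual Image.pixels access is not shown here):

```python
import numpy as np

# A made-up 2x2 image stored the way Blender keeps image pixels:
# one flat float sequence, R, G, B, A repeated per pixel.
flat = np.array([1, 0, 0, 1,   0, 1, 0, 1,
                 0, 0, 1, 1,   1, 1, 1, 1], dtype=np.float32)

pixels = flat.reshape(-1, 4)      # one row per pixel, a view, not a copy
reds = pixels[:, 0]               # the red channel of every pixel
```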

@nortikin (Owner)

Then we should consider: if we group data, and then group the groups of data again, how would we do that?
For materials a flat array is OK, for matrices OK, maybe for vertex colors it is OK, but I have to group verts and edges/pols.

@zeffii (Author) commented May 24, 2014

I haven't thought about the grouped data you mention, like cylinder; I never use Separate. But it is important to you, so it must have a good reason that I don't understand yet.

@zeffii (Author) commented May 24, 2014

>>> import numpy as np
>>> sh1 = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
>>> sh2 = [2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
>>> sh3 = [3.0, 4.0, 5.0]
>>> f = np.array([sh1, sh2, sh3])
>>> f
array([[1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0],
       [2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0],
       [3.0, 4.0, 5.0]], dtype=object)
>>> f.size
3

and

>>> mesh3 = np.array([3.0, 4.0, 5.0])
>>> mesh2 = np.array([2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0])
>>> mesh1 = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
>>> f = np.array([mesh1, mesh2, mesh3])
>>> f
array([array([ 1.,  2.,  3.,  1.,  2.,  3.,  1.,  2.,  3.]),
       array([ 2.,  3.,  4.,  2.,  3.,  4.,  2.,  3.,  4.,  2.,  3.,  4.]),
       array([ 3.,  4.,  5.])], dtype=object)
>>> f[0].dtype
dtype('float64')

@nortikin (Owner)

Grouping makes some cases possible, i.e. I can easily link vertices within grouped verts rather than across the whole object:
separated_verts
It enables item picking, item removing, and masking by row/column (the Flip List node inverts the grouping from UV to VU direction!).

@nortikin (Owner)

The whole concept of Sverchok (and Grasshopper) also lets you operate on the data tree; every level is accessible then.
Your array approach could be used for vertex color and in some other cases.
But then how do we bind one piece of data to another? Removed vertices have to keep their link to vertex color and to polygon indices. Maybe we should automate these relations between vertices, indices, and so on? How do we operate then? Even more: how do we operate if we have two different approaches, one with flat numpy and one with nested numpy (we can nest lists in numpy if we wish)?

@nortikin (Owner)

Flip node usage:
separated_verts1

@ly29 (Collaborator) commented May 25, 2014

The levels of data are really important for controlling the flow of data. But they can be ordinary Python lists, and the actual objects can be np.array, which allows easy separation of the different kinds of content from the structure...
For a mesh socket (and curves, and later surfaces) a dict containing all the info seems appropriate.
For vertices np.array [N,4] feels like the correct answer; same for the matrix socket. Some rewriting will be needed, however not so much.

Also, Separate shouldn't be on by default, I think.

Matrix should also allow nesting so you can apply them per object in a node group. Or even per vertex.

@nortikin (Owner)

OK, so how would the data flow? Abstractly:

data[node[0]] = {1,2,3}
data[node[1]] = {1,2,3}

then

node[1].in_socket[1] = node[0].out_socket[1] = data[node[0]][2].name
node[1].out_socket[3] = data[node[1]][2].name
data[node[1]][2] = data[node[0]][2].definition_of node[1].with_parameters_of_node[1]

Then in the nodes we get something like a name plus the multi-socket / adaptive socket machinery.
Socket types would stay the same, with new ones added at the end.

@nortikin nortikin mentioned this issue May 25, 2014
@nortikin (Owner)

Sockets should be adaptable: colorized by the data bound to them, so we could make adaptive sockets more easily. In the socket class there would be a check of the node's data in a dictionary, and the socket colored, and even labeled with text, accordingly. Maybe we should standardize the naming, as data/vers/pols/edgs/mtrl/mtrx/vcol/etc, or shouldn't we?
Data is named, so sockets are named; but the case of multiple data, like vertices socket 1 and vertices socket 2, has to be defined somehow. Maybe then:

'vers': { 1:[[[ ... ]]], 2:[[[ ... ]]] }

where 1, 2 are sockets, not objects as I proposed before. We arrive at the current data, but as arrays in the numpy way, and sockets only. Then Sverchok should know by itself that 'verts' divides by 4, edges by two, polygons another case. But I wish to avoid this difference in listing somehow... how? Leave it as it is now? Could it also be sped up with numpy?

@nortikin (Owner) commented Jun 1, 2014

{curves_knots: [], curves_handlesL: [], curves_handlesR: []}

where the curve is defined by the first list; then, if the handle type is not automatic, its coordinates define the tangent.

@nortikin (Owner) commented Jun 1, 2014

Should a line (curve) or polygon share not only indices, but both: a list of vertices and a list of indices?
I.e.

socket_polygons:
{ 'verts':[ [ [0.0, 0.0, 0.0], [1.0, 3.0, 1.0], [0.0, 2.0, 0.0] ] ],
  'handlerL':[[False,False,False]],
  'handlerR':[[False,False,False]],
  'polygon':[[[0,1,2]]] }

@ly29 (Collaborator) commented Jun 2, 2014

Also, we shouldn't confuse the actual sockets with our data formats. Curves (line, polyline, spline, NURBS etc.) and mesh (generic mesh data) are very different concepts; I see no reason to mix them.
Sockets should have a type.
Then we have the data tree.
Then we have the actual data as the leaves of the tree. The data can itself sometimes be viewed as trees, etc.

@nortikin (Owner) commented Jun 2, 2014

Do you mean separate sockets for every type of curve?

@ly29 (Collaborator) commented Jun 3, 2014

No, I think one generic curve socket is a good idea.

@nortikin (Owner) commented Jul 1, 2014

@enzyme69 asked the right question:
How to make many objects with different parameters at once?
I.e. I have an umbrella layout and I need many different umbrellas. So I feed a random series into another input's random seed value (or into some number or generator) and it should produce many items...
So, first of all, it must add a level of data that handles these many objects without mixing it into each item's nested list, as in:
[diversity_of: [many_objects: [data]], [many_objects: [data]]]
That means all nodes have to support nesting, even float and integer. If we do that, then in the end we have a flexible mechanism for multiplying diversity.
Or, if we need many umbrellas with a step-by-step opening of the canopy, then by inserting a series instead of a float we get many cases of one layout... What do you think?

It is a conceptual decision.

My part of the dialog with @enzyme69:

About vectorized input and many umbrellas: I think at the start of Sverchok I missed something. I'm not a programmer, but as I see it now, some iterations must have deeper nesting. I mean, there is a different data structure in GH: not three levels (container-object-data) plus extras. No: it is categorized, and every operation creates the possibility of multiplying the vectorization. How, I don't know. Maybe it is impossible and this GH nesting is an exaggeration (look at their data trees and the incompatibility between different data; there are lists with [[[[[[[[]]]]]]]] nesting).
So how do we make many umbrellas? If we could group them somehow... with every group creating an additional vectorization level...

@enzyme69 (Collaborator) commented Jul 1, 2014

Yes, Nikitron and I were discussing "a good way to output variations".

There are actually two things:

  1. An easier way to vectorize the data, if needed
  2. A way to output and bake them

In the case of the umbrella, Nikita made a setup that allows the user to change the look of that umbrella to get different variations. However, to:

  • Make changes
  • Bake
  • Make changes
  • Bake
  • (Repeat)

is always a hassle. As a demo, it is totally fine.

Of course, we can always put some effort into implementing "vectorized input" to work with the umbrella setup, but that can usually take quite some time and is not always guaranteed to work.

I do not mind just scripting and randomizing.

We don't really need to be able to do Clone + Variations ==> it can be heavy.

@zeffii also had this conversation with me at some point. Or it is me that keeps mentioning it.

We can observe how Houdini does it with the "Copy SOP"; it is a pretty good example.
https://www.youtube.com/watch?v=UfRH42lHgEI

That means that in Houdini every node is pretty much vectorization-ready, or something.

Anyhow, for now, the best solution is to use the SN node that randomizes the "yellow node" and then bake the changes for each frame.

I do not expect Sverchok to do what Houdini does, cloning and making copies with variations all at once. That would be too heavy anyway. And 100 variations is usually more than our brain can tell apart. If we need over 100 complex objects, we should bake them and use them as instances with Blender particles.

However, what Sverchok could do is something like a Bake or Cache node. At the moment I just use Blender OBJ and MDD for baking.

@enzyme69 (Collaborator) commented Jul 1, 2014

When I was thinking about a whole city made procedurally in Sverchok, I realized that was a silly idea.

The layout of the city is okay; I think Sverchok can do it.

But having one node-tree that does everything will not be efficient.

A better way to think about making a procedural city is to think more at the component level. A house, possibly; a building with a different shape, possibly. Just like a rough mockup. => However, I think this is something Sverchok is very good at: medium-level object generation.

Just like: Umbrella, Spider, Octopus, Tower, Fence so far.

Of course, starting from that rough mockup, the simple house or building shape can be given more detail, but only when needed and with a different node setup.

So, just like a language: letters => words => sentences => story, poem.

@ly29 (Collaborator) commented Jul 1, 2014

Yes, this is obviously what we all want.

The kind of one-level vectorization we have now in some nodes isn't enough. But to get this working properly we would have to rewrite almost every node, I think. The problem right now is telling data and structure apart. If we could do that, it would make this much easier...

The only nodes that work like this right now, supporting varying levels of data, are the scalar math node, float 2 int, and most of the list nodes. (I think; please correct me if I am wrong.)

However, while doing this preparation of the nodes I have moved the creation of topology and geometry (for example in sphere) into functions that can easily be extended in the future.

@zeffii (Author) commented Jul 1, 2014

I don't know where this conversation is going; I find it difficult to follow. I thought we could discuss the data format of sockets here and the problem of @vectorization-for-all separately, but the two are very entwined concepts.

  • Code we write for a node should be getting simpler, not harder
  • Nodes can all be thought of as having a shell and a core function
  • Each core is only aware of one argument list: its own function arg list.
    • e.g.: make_arc(radius, angle, num_verts, has_edges, closed=True)
  • The shell compiles the incoming cacophony of socket data into discrete argument lists
  • Depending on the combinations / variations possible for the shell to make, each combination is sent to the core and all the results are combined into the output list of the node.

In this description the shell is the trickiest part to conceptualize without melting down.
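The shell/core split could be sketched roughly like this; core_make_line and the repeat-last-value matching policy are illustrative stand-ins, not Sverchok API:

```python
def core_make_line(length, num_verts):
    """Core: one discrete argument list in, one mesh fragment out."""
    step = length / (num_verts - 1)
    verts = [(i * step, 0.0, 0.0) for i in range(num_verts)]
    edges = [(i, i + 1) for i in range(num_verts - 1)]
    return verts, edges

def shell(socket_lengths, socket_counts, core):
    """Shell: compile incoming socket data into argument lists, call the
    core per combination, and collect the results into one output list.
    Shorter inputs repeat their last value (one possible matching policy).
    """
    n = max(len(socket_lengths), len(socket_counts))
    pad = lambda l: l + [l[-1]] * (n - len(l))
    return [core(a, b) for a, b in zip(pad(socket_lengths), pad(socket_counts))]

# Two lengths, one vert count: the count is repeated, two lines come out.
results = shell([1.0, 2.0], [2], core_make_line)
```

The point is that the core never sees socket structure at all; only the shell knows about lists of lists.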

@zeffii (Author) commented Jul 1, 2014

I think Galapagos solves the problem nicely, with the bonus side effect that it can be used to back-propagate and change the input depending on the output. Fundamentally it also solves the vectorization issue in a simple way:

  • each layout has prime parameters, the params you really want to change
  • give a list of different values for each of these parameters
  • generate the mesh-packet for each of these variations
  • keep track of which param lists generated which mesh-packet
  • bake all, or bake selected
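The bullet workflow above amounts to a parameter sweep. A minimal sketch (sweep is a hypothetical helper, and a real layout evaluation would replace the lambda):

```python
from itertools import product

def sweep(prime_params, generate):
    """Enumerate every combination of the prime parameters, generate a
    mesh-packet for each, and record which values produced which packet.
    `generate` stands in for evaluating the whole node layout."""
    names = list(prime_params)
    packets = {}
    for combo in product(*(prime_params[n] for n in names)):
        packets[combo] = generate(dict(zip(names, combo)))
    return packets

variations = sweep({'radius': [1.0, 2.0], 'spokes': [6, 8]},
                   lambda p: ('umbrella', p['radius'], p['spokes']))
# four packets, keyed by the (radius, spokes) pair that made each one
```

Bake-all then iterates over the dict; bake-selected picks individual keys.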

@ly29 (Collaborator) commented Jul 1, 2014

For simple cases of the shell we can study the ugly mess that is needed to match two nested lists in the scalar math node. (I am not saying this cannot be simplified and made prettier, or that it is a good approach; it simply is the most general I know of in Sverchok.)

    def recurse_fxy(self, l1, l2, f):
        if (isinstance(l1, (int, float)) and
           isinstance(l2, (int, float))):  # atomic matched data, we can process
                return f(l1, l2)
        if (isinstance(l2, (list, tuple)) and
           isinstance(l1, (list, tuple))): # list of data, match
            data = zip(*match_long_repeat([l1, l2]))
            return [self.recurse_fxy(ll1, ll2, f) for ll1, ll2 in data]
        # different levels of data, build lower level to higher level
        if isinstance(l1, (list, tuple)) and isinstance(l2, (int, float)):
            return self.recurse_fxy(l1, [l2], f)
        if isinstance(l1, (int, float)) and isinstance(l2, (list, tuple)):
            return self.recurse_fxy([l1], l2, f)

Notes: match_long_repeat matches N lists so that the last value of each shorter list is repeated until the longest list runs out. It could use yield for the result.

# longest list matching [[1,2,3,4,5], [10,11]] -> [[1,2,3,4,5], [10,11,11,11,11]]

fullList is another, more explicit method for dealing with the same issue.

Of course some nodes don't fall into this matching paradigm, for example Line Connect, but most nodes actually do. For certain cases like mesh matching I think shortest-list match is a safer default, but otherwise longest-list matching is almost always the right answer.
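For reference, the behavior described for match_long_repeat can be sketched in a few lines (a reimplementation from the description above, not the actual sverchok helper):

```python
def match_long_repeat(lists):
    """Match N lists to the longest one: each shorter list repeats its
    last value until the longest list runs out."""
    longest = max(len(l) for l in lists)
    return [list(l) + [l[-1]] * (longest - len(l)) for l in lists]

match_long_repeat([[1, 2, 3, 4, 5], [10, 11]])
# → [[1, 2, 3, 4, 5], [10, 11, 11, 11, 11]]
```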

@zeffii (Author) commented Jul 1, 2014

https://www.youtube.com/watch?v=4QqkxT1XeNw maybe watch this so we have common ground.

Galapagos suggests to me a logistic regression: figuring out, from a variety of inputs, which combination produces the most desirable output (vs. cost). The purpose of the node is to run a genetic algorithm that optimizes the input params to produce the best output.

But it doesn't need to be a genetic algorithm; we could simply specify a range or list of values for each prime parameter, then let the node adjust the input params for each combination and store the output. I think this vastly simplifies our problem conceptually.

It doesn't solve fine-grained multilevel inputs for nodes... but do we really need that?

@ly29 (Collaborator) commented Jul 1, 2014

I do think we need that. I don't have time to watch that one-hour video right now.

@zeffii (Author) commented Jul 1, 2014

http://youtu.be/4QqkxT1XeNw?t=20m45s
skip to the good bit then

@ly29 (Collaborator) commented Jul 1, 2014

I think we need both; they solve different problems. For the umbrella example Galapagos is probably better; for structuring data I think it doesn't solve the issue at all.

@zeffii (Author) commented Jul 1, 2014

def recurse_fxy(self, l1, l2, f):

    # atomic matched data, we can process
    if isinstance(l1, (int, float)) and isinstance(l2, (int, float)):  
        return f(l1, l2)

    # list of data, match
    if isinstance(l2, (list, tuple)) and isinstance(l1, (list, tuple)): 
        data = zip(*match_long_repeat([l1, l2]))
        return [self.recurse_fxy(ll1, ll2, f) for ll1, ll2 in data]

    # different levels of data, build lower level to higher level
    if isinstance(l1, (list, tuple)) and isinstance(l2, (int, float)):
        return self.recurse_fxy(l1, [l2], f)

    if isinstance(l1, (int, float)) and isinstance(l2, (list, tuple)):
        return self.recurse_fxy([l1], l2, f)

I just reformatted it slightly. Conceptually this isn't as scary as it looks.

@ly29 (Collaborator) commented Jul 1, 2014

No, it isn't. The problem is knowing what the atomic/molecular data is. One solution that exists right now, levelsOfList, is to count the depth and apply knowledge of the data type to a matching depth; it doesn't, however, account for varying data depth. Whether that is needed is a different matter; perhaps it isn't, but we should be open to the possibility, I think.

It isn't hard to write it in generic form either.

@zeffii (Author) commented Jul 1, 2014

I wish my input here was helpful, but perhaps it only impedes progress. I'll bow out of the discussion for now.

@ly29 (Collaborator) commented Jul 1, 2014

I think your input is very helpful

@nortikin (Owner) commented Jul 1, 2014

Alex says he is doing something like a neural network (synthetic intellect or something). I will ask him what stage he is at.
When the data is separated from the UI, we can combine layout definitions on the fly... then we can simply make a vectorization node that feeds multiple values to every definition in the layout, or even to definitions inside the layout, but that requires grouping or a calc-nodes tree.
Or every node should be rewritten, yes. But in the end we must have vectorized output; then the data will have one more level, or even two. I think that is not as hard as rewriting all nodes to implement strong vectorization of everything.

@nortikin (Owner)

https://www.floodeditor.com/app.html
Here is an example of a dictionary; use the show node to see the data.

@nortikin nortikin mentioned this issue Oct 4, 2014
@ly29 (Collaborator) commented Oct 7, 2014

@nortikin Can you explain a bit more, since the site needs sign-in?

I propose the following, which I think would make it easy to parse the tree.

Socket types:

  • Vertices: numpy.array 4×n, type float64
  • Mesh face data: numpy array of type object containing numpy arrays of type uint32
  • Mesh edge data: as face data, or numpy array n×2 uint32
  • Number data: numpy array of any number type, mainly float64 and int64, wrapped in list/tuple
  • Text: list/tuple of string objects
  • Matrix: numpy.array 4×4 float64
  • Generic input

All wrapped in list/tuple that gives the structure.
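A hedged sketch of what these proposed payloads might look like in code (nothing here is implemented API; the 4×n orientation is read literally from the list above and is still open in the thread):

```python
import numpy as np

# Vertices: 4 x n float64, here n = 3 verts as x,y,z,w columns.
vertices = np.zeros((4, 3), dtype=np.float64)

# Edges: n x 2 uint32 index pairs.
edges = np.array([[0, 1], [1, 2]], dtype=np.uint32)

# Ragged face data: an object array whose atoms are uint32 index arrays.
faces = np.empty(2, dtype=object)
faces[:] = [np.array([0, 1, 2, 3], dtype=np.uint32),
            np.array([0, 1, 2], dtype=np.uint32)]

# Matrix: 4 x 4 float64 transform.
matrix = np.eye(4, dtype=np.float64)

# Structure comes from ordinary list nesting around the arrays:
vertex_socket_data = [vertices]       # one object on the socket
```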

@zeffii (Author) commented Oct 7, 2014

This will be welcome and amazing :) but a major, major overhaul of all nodes.

@ly29 (Collaborator) commented Oct 7, 2014

Yes, but I propose doing this together with adopting a structure like the prototypes/templates in SN2 #439 and leaving the update function behind.

@zeffii (Author) commented Oct 7, 2014

big decisions..

@ly29 (Collaborator) commented Oct 7, 2014

Yes. But doing this would make it simpler to write nodes. Of course compatibility probably couldn't be kept at 100%, since some things work in non-standard ways today.

@nortikin (Owner) commented Oct 7, 2014

1. numpy will be interrupted by mathutils.Vector(()) somewhere.
2. The link to the online node editor shows there is a simple dictionary { 'vertices': [[x,y,z],[x,y,z]], 'edges': [[0,1],[1,2]] }; in the numpy case we would change to arrays, and arrays within arrays within arrays for some cases.
3. Not sure we need arrays for polygons, and how do we store lists of different lengths in numpy?
4. What do you mean by templates in SN2?

@ly29 (Collaborator) commented Oct 7, 2014

  1. Yes, that is unavoidable since we are inside Blender. The alternative to numpy array is array; we should investigate the overhead etc.
  2. That is a mesh data type, something we should also add. Either as a dict, a named tuple, or a class.
  3. Mostly suggested because I want a strong separation between structure and data.
  4. Templates (defined as classes) that define the behavior, self.process(), of the node; the actual node code just contains the function to apply to the socket data. Have a look at Script Node 2 #439.

@ly29 (Collaborator) commented Oct 7, 2014

3 Continued.

>>> face0
array([3, 4, 5, 6], dtype=uint32)

>>> face1
array([0, 1, 2], dtype=uint32)

>>> poly_data = numpy.array([face0,face1])
>>> poly_data
array([array([3, 4, 5, 6], dtype=uint32), array([0, 1, 2], dtype=uint32)], dtype=object)
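One caveat for anyone trying this session today: since NumPy 1.24, building a ragged array without an explicit object dtype raises an error instead of guessing, so the construction needs to be spelled out, e.g.:

```python
import numpy as np

face0 = np.array([3, 4, 5, 6], dtype=np.uint32)
face1 = np.array([0, 1, 2], dtype=np.uint32)

# Recent NumPy refuses to infer object dtype for ragged input;
# allocating the object array first always works.
poly_data = np.empty(2, dtype=object)
poly_data[:] = [face0, face1]
```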

@nortikin (Owner) commented Oct 7, 2014

SN2: got it. This approach is needed to use the update, old labeling, and color node functions. It will be correct, not like the branch I started (locally) with variables and types. Classes: that is really object-oriented.
So in the nodetree file we can define types with a default update definition, a color mechanism, and a labeling definition. That will solve almost all my issues.
It requires renaming the update definitions in every node.

Then next we change to numpy there... it is more work to do, and we can do it second.
It will need rewriting the svget/svset functions, as I see it, and the matrix/vertices generators/degenerators, the levelsoflist function, and some others; it is a lot of work.

@zeffii (Author) commented Oct 7, 2014

For many of the things we use Vector for now, we can use existing numpy functions or write appropriate replacements.

@nortikin (Owner) commented Oct 7, 2014

Okay, but consider nesting.
Calculating vector lengths from vertices is not hard, for example. But bmesh will not be replaced, and neither will ops in Blender. Ideally there would be pure math, but we cannot have that now.

@ly29 (Collaborator) commented Oct 7, 2014

levelsoflist wouldn't be used any more; some conversion functions would have to be written, yes, but it wouldn't be that hard in general.
I am not saying it isn't a lot of work. It is. But continuing as we do now isn't that good either.
Nested lists would still be allowed, but the objects, or atoms, of the individual data tree would be numpy arrays instead of numbers.

@zeffii
And for the things we can't, we can still apply a Vector function to numpy arrays with a little bit of glue code.

@nortikin (Owner)

https://plus.google.com/112318317857695241382/posts/4oHccTx7x1r
Here is an example of how people use it. The comments mention IFC exporting. Whether we should support some socket for this, or implement only an IFC export node... not sure.

@ly29 (Collaborator) commented Oct 16, 2014

IFC is a complicated but open BIM format; to support it I would look for a Blender exporter first. Realistically this seems quite complicated...

http://ifcopenshell.org/ifcblender.html

@nortikin (Owner)

First time I see Python definitions written in one line...
And IFC has material information!! Incredible. There will be a lot of work, as I see... I don't want to deal with it this year at least.

@zeffii zeffii closed this as completed Dec 14, 2014
@Durman Durman added the Proposal 💡 Would be nice to have label Dec 25, 2019