subLevel support in several nodes #477
To deal with many levels we need recursion in every node. In UVconnect it is complicated; I should rethink it, because we deal with combinations of data there. So, if we input nested data and work with the last level, preserving the hierarchy is almost impossible, as I see it. And how would we combine different levels of data if there are more than two? What connects to what? It is hardly usable for the average user to think every time about how many levels they have, and so on. So @ly29 was right: we don't need many levels. Maybe one additional level, but not more than two additional levels anyway.
I still think it's interesting to consider a flat array (nparray) instead of explicit levels. The flat array data should include a reshape scheme indicating the format of the most atomic data, and upwards. Most nested lists are probably uniform in length. If you have an array of vectors of xyz (or xyzw, more general), with 10 vectors per nested level and three such lists per datastream, you would reshape accordingly.

Then the datatype could be `dict(flatarray=[120 floats], reshape_scheme=(3, 10, 4))`. Of course, this breaks down if not every sublist has 10 vectors...
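The flat-array-plus-reshape idea above can be sketched with numpy. This is a minimal illustration: `flatarray` and `reshape_scheme` are the names proposed in the comment, everything else (the data values, the consumer code) is assumed.

```python
import numpy as np

# Hypothetical datastream as proposed above: a flat array of 120 floats
# plus a reshape scheme (3 sublists x 10 vectors x 4 components).
flat = np.arange(120, dtype=float)
stream = dict(flatarray=flat, reshape_scheme=(3, 10, 4))

# A receiving node restores the nesting on demand:
nested = stream['flatarray'].reshape(stream['reshape_scheme'])
vec = nested[1][2]  # one xyzw vector of the second sublist
```

The point of the scheme is that the expensive data stays flat and contiguous; the nesting is metadata, applied only when a node actually needs it.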
I think we do need levels; that is what I have been saying for a long time. What we do need is a structure that is easy to parse.
One more
Information about the structure of the data is known when it is sent to an output, so it's only logical that it can be included in that datastream, allowing the receiving node to parse it with higher confidence.
@zeffii I have to investigate numpy, because what I already know stops me from using it for Sverchok. Once I'm familiar with it, I can draw conclusions about how to use it. Maybe you are right to use it.
Okay, the right answer to this is to make it happen; the sooner the better.
I made a scheme in order to describe the idea. In this case I'm processing the data using the "Longest List" strategy: when two lists don't have the same length, the last element of the shorter list is repeated until all pairs are matched. I'm also considering a node that combines single elements. For example, a component Addition in this case is simply coded as: a = input socket float. No lists, no nested levels; all the complicated stuff is managed by the Data Matcher.
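The "Longest List" strategy described above can be sketched like this. This is a hypothetical helper written for illustration, not Sverchok's actual list-matching code, and it assumes non-empty input lists.

```python
def match_longest(a, b):
    """Longest List matching: repeat the last element of the shorter
    list until both lists have the same length (assumes non-empty lists)."""
    n = max(len(a), len(b))
    a = a + [a[-1]] * (n - len(a))
    b = b + [b[-1]] * (n - len(b))
    return a, b

xs, ys = match_longest([1, 2, 3, 4], [10, 20])
# pairs become (1,10), (2,20), (3,20), (4,20)
sums = [x + y for x, y in zip(xs, ys)]
```

With the lists matched, a node's actual work reduces to a plain element-wise loop, which is what the Data Matcher idea above would hide from the node author.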
Hi @alessandro-zomparelli, right now the data matching is done by calling functions from data_structure or sv_itertools. So while it's true that the matching happens inside the node, the matching functions are not written in the node's code. I don't know if it could fit all the nodes, but it could be a starting point for a resolve(data, function) paradigm. I'm not sure I understood your thought/proposal and its benefits; for me as a coder the data matching is usually solved with a couple of calls to list_match functions, but I guess it could be done more simply or more consistently.
You're right, it has to be smarter. It happened due to my weak programming skills. If we can automate more, that is always great.
Hi @vicdoval, yes, it actually solves a lot of problems. I didn't know that, thanks! If I'm right, this function still has some limitations (in order of importance); tell me if I'm wrong, I've only checked it quickly. @nortikin don't be so hard on yourself! :-D
@alessandro-zomparelli There is
@Durman I see. This is quite useful, but what I'm thinking of is something similar to that, running in the background in every node in order to manage all irregular situations in one place.
Things are a bit more complicated if one node has to compute lists on one side and a single element on the other. Please tell me if you see some advantages in that process... I don't want to overcomplicate things, but I've experienced some frustration with nodes that don't process the given data correctly because the number of levels is different than expected.
The recurse_fxy(x, y, do_something) helper is not implemented across all the nodes, and even in the same node there is another call, recurse_fx, used when there is just one parameter. The limitations you mention are all true, and I guess that function was written to be used in the Math node for all the math functions :) I imagine there could be a function inspired by these, like recurse_f([lists], [final_levels], do_something). The problem I see is that sometimes it is much more effective to process part of the data at a higher level; in those cases using this handy function as the default behaviour would lead to an excess of computation. At the moment Sverchok nodes usually work with two levels of nesting, [list] and [list of lists] (excluding the vector level, which would be another level); for higher hierarchies I've been using Monad vectorization. Some nodes may not take the [list of lists] level, but they can be encapsulated in a vectorized Monad for that case. It is true that I experienced some frustration with the nesting at the beginning, but I got used to it 😄 Anyway, the open source nature is meant to evolve and improve, and although I have learned a lot studying the Sv nodes and, in my view (architectural background), they are usually pretty amazing and something to mimic, I'm sure there is always room for improvement. So if you create a cool implementation it will probably be used by every future developer 😃
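For illustration, a recursive matcher in the spirit of recurse_fxy might look like the sketch below. This is not the real implementation from Sverchok's utils; it is an assumed version that applies Longest List matching at every level of nesting and assumes non-empty lists.

```python
def recurse_fxy(x, y, f):
    """Apply f to matching atoms of two nested structures, descending
    both sides together; Longest List matching at every level
    (a sketch, assumes non-empty lists)."""
    if isinstance(x, (list, tuple)) and isinstance(y, (list, tuple)):
        n = max(len(x), len(y))
        xs = list(x) + [x[-1]] * (n - len(x))
        ys = list(y) + [y[-1]] * (n - len(y))
        return [recurse_fxy(a, b, f) for a, b in zip(xs, ys)]
    if isinstance(x, (list, tuple)):
        # only x is nested: pair every element of x with the scalar y
        return [recurse_fxy(a, y, f) for a in x]
    if isinstance(y, (list, tuple)):
        return [recurse_fxy(x, b, f) for b in y]
    return f(x, y)

result = recurse_fxy([[1, 2], [3]], [10, [20, 30]], lambda a, b: a + b)
```

The downside mentioned above is visible here: the function always descends to the atoms, so an operation that could run on a whole sublist at once (e.g. a numpy call) pays the per-element recursion cost instead.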
Thank you very much @vicdoval for your answer!
I see. Of course we can get used to that, but as a teacher I would like to use Sverchok as a tool in some workshops, and for new users some nodes can generate frustration... :-/
Oh, I hadn't seen those functions. I really struggled to make the Gcode Exporter able to process any kind of structure. I added some functions here: https://github.com/alessandro-zomparelli/sverchok/blob/gcode/utils/sv_itertools.py
I don't mind the NURBS stuff too much. In GH too, when you want to do complex things you need to use meshes. I like how SV is able to collaborate with modifiers and other Blender functions.
I see what you mean. Differently from GH users, SV users are more likely graphic artists who don't always have a strong background in math. Maybe an approach with packed "effects" is easier to understand.
Let's see if I can make something that is not insanely heavy :P So far I have learned a lot from the existing code of other SV developers. Python can be really awesome; it's really a shame that when things get complicated everything starts to slow down, more than in other languages.
@alessandro-zomparelli My implementation of your idea in Sverchok. I also had similar ideas before, when I experimented with big structures like a town. ))
We have something close to the description of your ideas, named mask. Usually a mask represents the structure of the data. It is used to apply the effects of a node to part of the input data, but it can also be used to rebuild the data to its initial view.
Thanks @Durman, the result seems good! I'll take some time to study the code. I was thinking about something that made me feel really stupid. I never really understood why interacting with Data Trees in Grasshopper was more complicated than just working with sublists (which to me seemed more intuitive). I think that I've just realized why :-D : `dict = {}`
If you need to find different ways to combine the lists, you just need to edit the keys, instead of digging into several lists to understand how deep they are! EDIT: This also simplifies a lot the process of reassembling the lists after the node has processed them.
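The key-based combination described above can be sketched as follows. The path-keyed dictionaries here are hypothetical illustrations, not an existing Sverchok datatype.

```python
# Two "data trees" stored as {path: list-of-atoms}; combining them
# becomes a key-wise operation instead of a walk through nested lists.
tree_a = {(0,): [1, 2, 3], (1,): [4, 5]}
tree_b = {(0,): [10, 20, 30], (1,): [40, 50]}

# Element-wise addition on matching paths: no recursion, no level
# counting; intersecting the key sets handles missing branches.
tree_sum = {path: [x + y for x, y in zip(tree_a[path], tree_b[path])]
            for path in tree_a.keys() & tree_b.keys()}
```

Changing *how* the trees combine (e.g. grafting one tree onto another) then means rewriting keys with ordinary tuple operations, which is exactly the "edit the keys" point made above.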
Well, the flattish nature of Sverchok has been up for discussion many times before. Fixing this would require some rewriting, but it could be done in an incremental way by wrapping older nodes and converting them at a suitable time.
@alessandro-zomparelli you're right. For some data we need non-orthogonal trees.
Yes, I was thinking of making some nodes for testing the idea. Just the basic ones.
Yes, this is a very good point. Initially I started looking at numpy, and it must be considered in some way, because it's the most powerful tool in Python for numerical operations. The difficulty that I see is keeping nodes that manage that kind of data easy to script without giving errors. The prototype that I would like to test will use flat numpy arrays for managing the individual lists and a dictionary as a general container. A kind of hybrid dictionary-numpy.

```python
# Closest Point
out = {}
for point, pointCloud, key in match_data_trees((tree1, tree2), ('ELEMENT', 'LIST')):
    out[key] = closestPoint(point, pointCloud)
return out
```

I haven't thought much about that yet, and maybe that way doesn't make sense, but imagine that you are creating a node that finds the closest point inside a point cloud. You don't have to figure out which kind of data is arriving; you just declare which data you want. In this particular case you want to process the points one by one, while you need a whole list of points representing your cloud.
I have tried to implement an algorithm that flattens the data into a list with a link for each value, representing the structure of the initial data.

```python
def get_flatten(data):
    def flatten(data, link=[]):
        try:
            for i, value in enumerate(data):
                yield from flatten(value, link=link + [i])
        except TypeError:
            yield data, link
    gen = flatten(data)
    return {tuple(link): value for value, link in gen}
```

```
>>> get_flatten([[1,2],3])
{(0, 0): 1, (0, 1): 2, (1,): 3}
>>> get_flatten([1,[[2,3],4,5],[6,[[7],[8]]],9])
{(0,): 1, (1, 0, 0): 2, (1, 0, 1): 3, (1, 1): 4, (1, 2): 5, (2, 0): 6, (2, 1, 0, 0): 7, (2, 1, 1, 0): 8, (3,): 9}
```
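For completeness, the inverse operation, rebuilding the nested list from the {path: value} dictionary that get_flatten produces, could be sketched like this. `rebuild` is an assumed helper written for this comment, not part of the snippet above; it assumes the integer index paths that get_flatten emits.

```python
def rebuild(flat):
    """Rebuild a nested list from {path: value} pairs, where each path
    is a tuple of integer indices (a sketch; inverse of get_flatten)."""
    root = []
    for path, value in sorted(flat.items()):
        node = root
        # walk/create the intermediate lists along the path
        for i in path[:-1]:
            while len(node) <= i:
                node.append([])
            node = node[i]
        # place the atom at its final index
        while len(node) <= path[-1]:
            node.append(None)
        node[path[-1]] = value
    return root

data = rebuild({(0, 0): 1, (0, 1): 2, (1,): 3})
```

Having both directions makes the flat dictionary a lossless intermediate form: a node can flatten, operate on atoms, and restore the original hierarchy.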
Hi @Durman, thank you very much. Your code is really neat, I have to study it better...

```python
a = [[[0,1,2],[3,4],5,[[6,7]],8],[9]]

def list_pathes(data, result, path):
    _list = []
    count = 0
    for l in data:
        if isinstance(l, list):
            list_pathes(l, result, path + ";" + str(count))
            count += 1
        else:
            _list.append(l)
    if _list != []:
        result[path] = _list

def list_to_dict(data):
    result = {}
    list_pathes(data, result, "0")
    return result

print(list_to_dict(a))
```

The result is something like this:
As you can see, in this case it stores lists instead of single elements. For sure with your Python skills you can make it simpler and more optimized. I'm writing some other code that combines those lists as numpy arrays, in order to make mathematical operations work with every kind of data structure (like in that example). I was also trying to figure out how to fit this approach into the existing Sverchok ecosystem. Maybe a child node class with some specific methods that process the dictionaries automatically.
@Durman I studied your code, and I learned a lot. I'm not really familiar yet with some Pythonic strategies... :P

```python
def list_to_dict(data):
    def list_pathes(data, path=()):
        elements = []
        count = 0
        for branch in data:
            if isinstance(branch, list):
                yield from list_pathes(branch, path + (count,))
                count += 1
            else:
                elements.append(branch)
        if len(elements) > 0:
            yield elements, path
    gen = list_pathes(data)
    return {path: value for value, path in gen}
```
For sure we need a general solution for all nodes, and I am ready to spend my winter holidays making all the adaptations. But right now we need to develop a working core subclass that overrides all node classes. Let's do it in Sverchok under Blender 2.8. Sverchok 0.6
@alessandro-zomparelli Let's be clear: what is a real-life example of using such trees? Using GH I never used non-equal trees, because there was no reason for me to.
Hi @nortikin, this is great news! Thanks for all your effort! Regarding your question, in my experience I have used trees with different sizes like this: While it's really RARE to have something like this: So far I've thought about 3 possible strategies:

- Lists of Lists (the current solution)
- Numpy:

```
>>> import numpy
>>> a = numpy.array(((0,1,2,3),(4,5,6)))
>>> a
array([(0, 1, 2, 3), (4, 5, 6)], dtype=object)
>>> a+a
array([(0, 1, 2, 3, 0, 1, 2, 3), (4, 5, 6, 4, 5, 6)], dtype=object)
>>> a = numpy.array(((0,1,2),(4,5,6)))
>>> a+a
array([[ 0,  2,  4],
       [ 8, 10, 12]])
```

- Dictionary + Numpy

Maybe at the moment I'm too focused on the Dictionary approach and I don't see other ways to combine the current Lists efficiently. In this team there are great Pythonists who may have much more elegant solutions than this. Maybe just a list of numpy arrays is enough, but that doesn't allow removing levels like you do with paths, because it is destructive...
Got it. As for me, there should be some mathematician who knows a solution (graph theory / matrix-fortran-numpy or any other). Let's find someone somehow.
I use this layout to show how the paradigm works.
masking
https://yadi.sk/d/KS0t3boMcjMcs
scaling
https://yadi.sk/d/XvHbRIO1cjMcy
In the masking layout, the transformation nodes grouped in the frame do not support sublevels of data. I must confess none of these are used, and Vector In and Out do not support it either. What I want to do is make the transformation nodes output subleveled data, which is complicated in two directions. If we take the scale node: I made a separator flag to create a sublevel for UVconnect, but if we input separated, grouped vertices, what then? What should we output, and how should we group it? Input and output have to support at least two levels.
Sincerely, the UVconnect node supports only two cases: usual levels and grouped levels. So I will make the scale node experimental, dealing with grouped and ungrouped inputs and outputting both cases: 2×2 = 4 cases.