Can we use Numba?... #2646
Comments
I think it doesn't really handle nested lists, and that makes it hard to use here :( (more specifically, it does support nested lists, it just switches back to interpreting mode, so it becomes useless).
If it doesn't work on nested lists, would it accept "unrolled lists", with information about nesting, and then use that representation in the decorated function?
Maybe the preprocessing step of "flattening" would take up more time than is saved by the numba stuff. hehe..
Yes, it'll work with "unrolled" lists. Also it will work with numpy arrays.
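For illustration, a minimal sketch of what such "unrolled" lists could look like (the helpers flatten and sum_per_sublist are hypothetical, not existing Sverchok code): the nested list becomes one flat numpy array plus a per-sublist lengths array, and only those arrays cross into the jit-ed function.

import numpy as np
from numba import njit

def flatten(nested):
    # hypothetical helper: flat data + per-sublist lengths instead of a nested python list
    flat = np.array([x for sub in nested for x in sub], dtype=np.float64)
    lengths = np.array([len(sub) for sub in nested], dtype=np.int64)
    return flat, lengths

@njit
def sum_per_sublist(flat, lengths):
    # stays in nopython mode because it only ever touches numpy arrays
    out = np.empty(lengths.shape[0])
    start = 0
    for i in range(lengths.shape[0]):
        out[i] = flat[start:start + lengths[i]].sum()
        start += lengths[i]
    return out

# usage: flat, lengths = flatten([[1, 2], [3, 4, 5]]); sum_per_sublist(flat, lengths) -> array([3., 12.])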
Hi guys, I'm using Numba in the Tissue Reaction-Diffusion code, and for me it gave a boost of 500 times @_@
Yeah...
@portnov, I would love to help rewrite Sverchok, at least for a comparative test. It would be great to use numpy + numba, for example for the math node and other numerical operations. This would give a huge boost and much more stability to the data structure.
It's a weakness in Sverchok itself. There are some efforts to avoid recalculating what obviously should not be recalculated, but these algorithms can be enhanced. There are even issues in progress about that - see #2380, #2439, #2393... See also https://github.com/Sverchok/Sverchok — I believe this is the latest attempt to rewrite sverchok...
Animation Nodes uses Cython.
That requires developers (or, even worse, users) to build something for each supported operating system. We do not want to do that: sverchok is installed by downloading a zip file directly from github and pressing a button in the preferences. We do not want to make this process any more complicated.
The Tissue plugin (dev branch) is using the Numba module and it is not complicated at all. For new users some simple tutorial could be written, I think. Just my $0.02
Yes, Numba has the advantage of not requiring a separate build step. But, as I mentioned before, it actually requires writing code in a specific way. So one can't just "add numba and be happy".
This is beyond my domain :) Thanks for the explanation!
@portnov these are interesting things. This numba...
This was an interesting video: https://www.youtube.com/watch?v=x58W9A2lnQc
I hope I have time to rewrite 'list zip', 'list flip', 'waffle', and 'uvconnect' with numpy/numba support. Then I will just brutally cut or extend python lists into equal-length numpy arrays, as sketched below.
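A rough sketch of that "brutally cut or extend" idea (the function name to_rect is hypothetical): pad or truncate every sublist to a common length so the result is a plain rectangular numpy array.

import numpy as np

def to_rect(lists, width=None, fill=0.0):
    # hypothetical sketch: extend short sublists with `fill`, cut long ones,
    # so every row has the same length and fits into one numpy array
    if width is None:
        width = max(len(sub) for sub in lists)
    out = np.full((len(lists), width), fill, dtype=np.float64)
    for i, sub in enumerate(lists):
        out[i] = (list(sub) + [fill] * width)[:width]
    return out

# to_rect([[1, 2, 3], [4]], width=2) -> array([[1., 2.], [4., 0.]])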
okidoki :)

python\bin> ./python.exe -m pip install numba
I will need to find a way to make this work in the SNLite node.. I don't dare assume it will work just-like-that :)
oh wow. just the first call is slow.. as expected from the docs..

"""
in num_items s d=100 n=2
out dummy s
"""
from numba import jit
import numpy as np
import time
x = np.arange(num_items).reshape(10, 10)
@jit(nopython=True) # Set "nopython" mode for best performance, equivalent to @njit
def go_fast(a): # Function is compiled to machine code when called the first time
    trace = 0.0
    for i in range(a.shape[0]):  # Numba likes loops
        trace += np.tanh(a[i, i])  # Numba likes NumPy functions
    return a + trace  # Numba likes NumPy broadcasting
start = time.time()
print(go_fast(x))
end = time.time()
print("Elapsed (with compilation) = %s" % (end - start)) # Elapsed (with compilation) = 2.9859113693237305
start = time.time()
print(go_fast(x))
end = time.time()
print("Elapsed 2 (with compilation) = %s" % (end - start)) # Elapsed 2 (with compilation) = 0.007998228073120117 |
So it has an ahead-of-time compilation mode, which I think would need to be run on the individual user's computer once before use: https://numba.pydata.org/numba-doc/latest/reference/aot-compilation.html#aot-compilation
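For reference, the AOT interface linked above looks roughly like this (a minimal sketch based on the numba.pycc docs; the module and function names here are made up). It produces a regular compiled extension module, but note that building it needs a C compiler, so it shifts the build burden rather than removing it:

from numba.pycc import CC

cc = CC('rd_compiled')  # hypothetical module name

@cc.export('mult', 'f8(f8, f8)')  # exported name and type signature
def mult(a, b):
    return a * b

if __name__ == '__main__':
    cc.compile()  # writes the extension next to this file; afterwards: from rd_compiled import mult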
We have many functions that are heavy, not because the logic is heavy but because regular python has prohibitive overhead, or because numpy doesn't lend itself well when the algorithm isn't easily or sanely expressed as a matrix multiplication etc. :)
So @portnov, I'm a total convert.
Slightly modified package:

reaction.py:

# Turing Pattern Formation
# intro to modelling and analysis of complex systems
# https://math.libretexts.org/Bookshelves/Applied_Mathematics/Book%3A_Introduction_to_the_Modeling_and_Analysis_of_Complex_Systems_(Sayama)/13%3A_Continuous_Field_Models_I_-_Modeling/13.06%3A_Reaction-Diffusion_Systems
from numba import njit
import numpy as np
@njit
def reaction_test(steps, n=40, a=1.0, b=-1., c=2., d=-1.5, h=1., k=1.):
    # pylint: disable=c0103
    # n = 40  # size of grid n*n
    Dh = 1. / n   # spatial res, assuming space is [0, 1] * [0, 1]
    Dt = 0.02     # temporal res
    # a, b, c, d, h, k = 1., -1, 2., -1.5, 1., 1.  # param values
    Du = 0.0001   # diffusion constant of U
    Dv = 0.0006   # dif V
    u = np.zeros((n, n))
    v = np.zeros((n, n))
    for x in range(n):
        for y in range(n):
            u[x, y] = 1. + np.random.uniform(-0.03, 0.03)  # small noise is added
            v[x, y] = 1. + np.random.uniform(-0.03, 0.03)  # small noise is added
    nextu = np.zeros((n, n))
    nextv = np.zeros((n, n))
    for iteration in range(steps):
        for x in range(n):
            for y in range(n):
                # state-transition function
                uC, uR, uL, uU, uD = u[x, y], u[(x+1) % n, y], u[(x-1) % n, y], u[x, (y+1) % n], u[x, (y-1) % n]
                vC, vR, vL, vU, vD = v[x, y], v[(x+1) % n, y], v[(x-1) % n, y], v[x, (y+1) % n], v[x, (y-1) % n]
                uLap = (uR + uL + uU + uD - 4 * uC) / (Dh**2)
                vLap = (vR + vL + vU + vD - 4 * vC) / (Dh**2)
                nextu[x, y] = uC + (a*(uC-h) + b*(vC-k) + Du * uLap) * Dt
                nextv[x, y] = vC + (c*(uC-h) + d*(vC-k) + Dv * vLap) * Dt
        u, nextu = nextu, u
        v, nextv = nextv, v
    return u.ravel() > v.ravel()

snlite script:

"""
in steps s d=100 n=2
in size s n=40 d=2
out nparray s
"""
from rd_test.reaction import reaction_test
uravel = reaction_test(steps, n=size)
nparray.append(uravel)
Well, RD stuff looks nice, but I'm not sure where to apply it in Sverchok. Generate textures? About geometry: I'll try to...
I'm rewriting... I read, however, that nested functions are now supported in numba.jit, as long as those functions are not recursive and don't themselves return a function.
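A minimal sketch of what that means (illustrative names; the inner clamp function is a non-recursive closure that is never returned, so nopython mode can handle it):

from numba import njit
import numpy as np

@njit
def clamp_all(values):
    def clamp(x, lo, hi):  # nested function: fine, as long as it isn't recursive and isn't returned
        return min(max(x, lo), hi)

    out = np.empty_like(values)
    for i in range(values.shape[0]):
        out[i] = clamp(values[i], 0.0, 1.0)
    return out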
There are 2 major functions that I'm failing to implement a foolproof solution to; the one builds on the other.
...where the criteria are... expect the following input...
In particular, @portnov, any tips before I get entrenched down this rabbit hole?
You can look at blender's source to see how it calculates normals.
One possible option is to find the "best fit" approximating plane for all vertices with the least squares method and take its normal (see ...).
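A minimal numpy sketch of that least-squares idea (an illustration, not Blender's code): center the vertices and take the singular vector of least variance as the plane normal.

import numpy as np

def best_fit_normal(verts):
    # verts: (n, 3) array of vertex coordinates
    pts = np.asarray(verts, dtype=np.float64)
    centered = pts - pts.mean(axis=0)
    # the right singular vector belonging to the smallest singular value
    # is the direction of least variance, i.e. the normal of the best-fit plane
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]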
Yeah, I'll distill the blender method (if it's the last thing I do). My pride takes comfort in the fact that tessellation is sometimes the topic of somebody's BSc thesis; it's non-trivial stuff, and it is always hard to find clean examples of.
Cool numba talk: https://www.youtube.com/watch?v=-4tD8kNHdXs (30+ minutes, really no-nonsense)
This is yet another topic about performance. Link: http://numba.pydata.org/
Numba is a JIT compiler for python. I managed to install it with pip into blender's python, and on simple examples it gives about a 10x boost.
While this looks very promising, it has one problem: it can't effectively handle the switch from JIT-ed code into non-JIT-ed code. That is, if you make a call from a function marked with @jit to a function which is not jit-ed, Numba will switch back to plain interpretation mode and there will be no boost anymore. For example, you can't call functions from python standard packages... One exception is numpy: Numba is especially aware of numpy and works effectively with numpy calls.

So: do we have complex places that require a speedup and do all the work by themselves, without calling external functions/libraries (except for numpy)?
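A toy illustration of that fallback problem (the function names are made up): the jit-ed function below calls a plain Python helper, so with nopython=True it fails to compile, and with a bare @jit older Numba would silently drop back to object mode and lose the speedup.

from numba import jit

def python_helper(x):      # ordinary Python, not jit-ed
    return x * 2

@jit(nopython=True)
def fast(x):
    # this call cannot be typed by Numba, so nopython compilation fails here;
    # without nopython=True the whole function would run in slow object mode instead
    return python_helper(x) + 1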
I tried with the "bend along surface" node — no luck, there are a lot of calls from it to outside and then again into the node...