Using MKL vectorized functions for things like tanh, exp, log, ... #415
What's the Rust crate that provides bindings to this, and, as a first step, what general-level interface or functionality in ndarray is needed to implement it? We'd need access to ndarray's data in a form usable for this, but still independent of, for example, MKL.
The situation around this is a little bit messy. Right now, to get a working solution one has to re-export the MKL functions:
And then, instead of the code mentioned above, one has to do:
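The code blocks from this comment were not preserved in the thread. A minimal sketch of what re-exporting an MKL function by hand might look like follows; `vsTanh` is a real MKL symbol, but the feature name, wrapper function, and the assumption that `MKL_INT` is `i32` (LP64) are all illustrative:

```rust
// Hypothetical re-export of MKL's vsTanh, gated behind an assumed
// "intel-mkl" feature. void vsTanh(const MKL_INT n, const float a[], float r[])
#[cfg(feature = "intel-mkl")]
extern "C" {
    fn vsTanh(n: i32, a: *const f32, r: *mut f32); // assumes MKL_INT == i32
}

/// Element-wise tanh: MKL when the feature is on, scalar fallback otherwise.
pub fn tanh_slice(a: &[f32]) -> Vec<f32> {
    let mut r = vec![0.0f32; a.len()];
    #[cfg(feature = "intel-mkl")]
    unsafe {
        vsTanh(a.len() as i32, a.as_ptr(), r.as_mut_ptr());
    }
    #[cfg(not(feature = "intel-mkl"))]
    for (ri, ai) in r.iter_mut().zip(a) {
        *ri = ai.tanh();
    }
    r
}
```

Without the feature, the extern block is compiled out entirely, so the crate still builds and links when MKL is not present.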
What I was thinking about was adding .exp(), .log(), .tanh(), ... methods to ndarray's arrays, which would have a faster implementation when the MKL feature is enabled. But if you say this is a very low-priority enhancement and not worth the effort, I would understand.
It sounds good.

```rust
pub trait VectorizedMath {
    fn tanh(self) -> Self; // or vtanh?
    fn exp(self) -> Self;
    // ...
}

impl VectorizedMath for ArrayBase<S, D> { /* like @usamec has done */ }
```

This is basically independent of MKL. We will be able to implement these methods using SIMD extensions, e.g. vsinf in Accelerate (macOS). But the implementation must depend on the platform the application runs on, so we must be able to switch the backend, e.g. intel-mkl-src for Intel CPUs, accelerate-src for macOS, and so on. One way to implement it is an "intel-mkl-vectorized" feature flag on the ndarray crate which enables the MKL-backed implementations. Another way is to create an MKL binding, "mkl-sys", which includes the functions not covered by blas-sys, lapack-sys, and fftw-sys. That should be discussed at https://github.com/termoshtt/rust-intel-mkl
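A compilable sketch of the trait-with-swappable-backend idea discussed above, using `Vec<f64>` instead of `ArrayBase<S, D>` to stay self-contained; the `v`-prefixed method names and the scalar fallback bodies are assumptions, not ndarray's actual API:

```rust
// Sketch of the proposed VectorizedMath trait with a scalar fallback.
// A real implementation would be generic over ArrayBase<S, D> and, behind
// a feature flag, dispatch to a platform backend (MKL's vdTanh/vdExp,
// Accelerate, ...) instead of mapping the scalar std functions.
pub trait VectorizedMath {
    fn vtanh(self) -> Self;
    fn vexp(self) -> Self;
}

impl VectorizedMath for Vec<f64> {
    fn vtanh(self) -> Self {
        self.into_iter().map(f64::tanh).collect()
    }
    fn vexp(self) -> Self {
        self.into_iter().map(f64::exp).collect()
    }
}
```

The point of the trait is that callers write `arr.vtanh()` once and the backend selection happens at build time via Cargo features, exactly as the comment proposes.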
Tested:
Works fine.
I found that using MKL vectorized functions like vsTanh (https://software.intel.com/en-us/mkl-developer-reference-fortran-v-tanh) is quite a bit faster than doing

```rust
vector.mapv(|x| x.tanh())
```

Is it worth including this in the ndarray crate behind a feature gate?
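The speedup claim above could be checked with a micro-benchmark. A hedged sketch of the scalar side (what `mapv(|x| x.tanh())` does under the hood); an MKL build would time vsTanh/vdTanh over the same buffer for comparison, and the function name and problem size here are illustrative:

```rust
use std::time::{Duration, Instant};

// Time element-wise scalar tanh over n doubles. This is the baseline an
// MKL-backed vdTanh call would be measured against.
pub fn bench_scalar_tanh(n: usize) -> Duration {
    let input: Vec<f64> = (0..n).map(|i| i as f64 / n as f64).collect();
    let t0 = Instant::now();
    let out: Vec<f64> = input.iter().map(|x| x.tanh()).collect();
    let elapsed = t0.elapsed();
    assert_eq!(out.len(), n); // keep the result live so it isn't optimized away
    elapsed
}
```

For a fair comparison, both sides should run over the same pre-allocated buffer, with a warm-up pass, on a release build.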