implement Ac_mul_B(Vector, Matrix) to avoid temporary #360
Comments
What about either doing `v'mx` or `v = vv[row:row,:]`?

Yes, these are all working tricks. I think Julia is still struggling with some Matlab-like behaviour.
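The two tricks mentioned above can be sketched as follows (the names `vv`, `mx`, and `row` match the issue text; the values are made up for illustration):

```julia
# Two ways to multiply row `row` of vv by mx while Vector*Matrix has no
# method; the numbers here are made up for illustration.
vv  = [1.0 2.0 3.0;
       4.0 5.0 6.0]                  # n×3 matrix (n = 2 here)
mx  = [1.0 2.0 0.0;
       0.0 1.0 0.0;
       0.0 0.0 1.0]                  # 3×3 matrix
row = 1

y1 = vv[row, :]' * mx                # trick 1: multiply via the adjoint of the 1-d slice
y2 = vv[row:row, :] * mx             # trick 2: a range slice stays a 1×3 matrix

vec(collect(y1)) ≈ vec(y2)           # both tricks give the same three numbers
```

Trick 1 goes through an adjoint (a row-vector-like object), trick 2 keeps everything two-dimensional in the Matlab style; either way no `Vector*Matrix` method is needed.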
How does Fortran handle complex conjugation in

Personally I don't see

Actually, what we had prior to APL-style slicing was more similar to Matlab, which makes a distinction between column and row vectors.
As far as I see, there is no implicit complex conjugation involved. This is what the Metcalf book says:

i) matrix_a has shape (n,m) and matrix_b has shape (m,k).
ii) matrix_a has shape (m) and matrix_b has shape (m,k).
iii) matrix_a has shape (n,m) and matrix_b has shape (m).
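For concreteness, case ii) above contracts the single index of the rank-1 argument with the first index of the matrix, with no conjugation. A hypothetical Julia helper (`fortran_matmul` is not a real API, just a sketch of that rule):

```julia
# Hypothetical helper mimicking Fortran's matmul for case ii) above:
# matrix_a has shape (m), matrix_b has shape (m,k); the result has shape (k),
# and the entries of v are NOT conjugated.
function fortran_matmul(v::AbstractVector, B::AbstractMatrix)
    length(v) == size(B, 1) || throw(DimensionMismatch("shapes (m) and (m,k) required"))
    return [sum(v[i] * B[i, j] for i in eachindex(v)) for j in 1:size(B, 2)]
end

v = [1.0 + 2.0im, 3.0 - 1.0im]
B = [1.0 + 0.0im  0.0 + 0.0im;
     0.0 + 0.0im  1.0 + 0.0im]

fortran_matmul(v, B) == v            # identity matrix: v comes back unconjugated
```

With an implicit conjugation (as in `v' * B`) the first entry would come back as `1.0 - 2.0im`; here it stays `1.0 + 2.0im`.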
Hm. Thanks. That was a surprising choice. In any case, I think this is very much #42 territory.

Also note that in Fortran
@ararslan

Performance penalty compared to what?

Compared to storing predefined

I wouldn't think so offhand (though I'm certainly no expert), but you could check with

No performance penalty, no.
I have just tested.

Just checked and

I have made a mistake. It brings a speedup, but the performance penalty becomes even worse:
Dunno if it'll make a difference but you should try moving

@ararslan

@andreasnoack

If you by

Yes.
Hm. Maybe I spoke too soon. It seems like OpenBLAS' GEMM is slower than GEMV:

```julia
julia> A = randn(3,3);

julia> x = randn(3);

julia> xt = x';

julia> let A = A, x = x
           @time for i in 1:10^6; BLAS.gemv('T', 1.0, A, x); end
       end
  0.113836 seconds (2.00 M allocations: 122.070 MB, 8.78% gc time)

julia> let A = A, xt = xt
           @time for i in 1:10^6; BLAS.gemm('N', 'N', 1.0, xt, A); end
       end
  0.188009 seconds (2.00 M allocations: 137.329 MB, 7.56% gc time)
```

which I don't think is reasonable.
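For reference, the two BLAS calls being timed compute the same quantity, so the gap is pure GEMM overhead rather than different work; a quick check (a sketch, reusing the names from the benchmark above):

```julia
using LinearAlgebra  # provides the BLAS wrappers

# gemv('T', 1.0, A, x) computes transpose(A)*x, a 3-element vector, while
# gemm('N', 'N', 1.0, xt, A) computes the 1×3 product xt*A.
A  = randn(3, 3)
x  = randn(3)
xt = Matrix(x')          # materialise the 1×3 row as a plain matrix for gemm

y_gemv = BLAS.gemv('T', 1.0, A, x)
y_gemm = BLAS.gemm('N', 'N', 1.0, xt, A)

y_gemv ≈ vec(y_gemm)     # same numbers from both routines
```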
Even so, your test code is instructive for me.

I've updated the title. I think the remaining aspects of this discussion are fully covered by #42.

@andreasnoack @StefanKarpinski Is this your present target? Or is it just making
It seems that this simple code does what I miss.

Update:

Yes. The idea was that this issue tracks the problem with the temporary. For your other suggestion please refer to #42.

I don't understand this issue.
I respectfully disagree.

Otherwise we should also leave

I don't think there is a single standard mathematical definition for

Can I assume that your suggestion generalises to using it in a kind of multilinear algebra context, as some default tensor contraction operator: contract the last index of the first object with the first index of the second, irrespective of the rank of the two tensors involved?

I did not dare to go as far as multilinear algebra.
What are the three types of product in linear algebra? I still don't see where vector times matrix is some standard mathematical product in any given context. Either one works in MATLAB style, where everything is matrices, and

In an abstract setting, matrices correspond to linear mappings from vectors to vectors. So in

There is also the concept of linear forms / covectors, which map vectors to numbers. These can be represented as one-dimensional objects and are isomorphic to vectors, namely using the inner product. To any linear form

I am not defending that this choice is ideal or optimal, but I also don't see how
My apologies, wrong button.

I find all of what you wrote instructive.

Yes indeed. My apologies for leading the original discussion astray. I'll make a similar post there and agree that, for now,

Fixed by JuliaLang/julia#19670
Let `mx` be a 3x3 matrix, and `vv` be an nx3 matrix. Up to now, `v = vv[row,:]` was a row vector, i.e. a 1x3 matrix, so the matrix multiplication `v*mx` worked and provided a 1x3 row vector.

In Julia 0.5.0 this has changed: `v = vv[row,:]` rows are now real 1-dimensional vectors. This is a welcome change. But as a side effect, it became more obvious and painful that `v*mx` does not work for real 1-dimensional vectors.

I know that `transpose(mx)*v` is a workaround, but I think this code rewrite is an unnecessary requirement. Considering the original definition of matrix multiplication, summing inner indices should naturally work in this case. A good example is Fortran's `matmul(v,mx)`: it just works fine.

I would greatly appreciate your opinion, and any pointers to your future plans.
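A minimal sketch of the situation described above, in current syntax (random values for illustration only):

```julia
# A scalar row index now yields a true 1-d Vector, for which v * mx has no
# method; transposing the matrix instead contracts the same inner index.
vv = rand(4, 3)                      # n×3 matrix
mx = rand(3, 3)                      # 3×3 matrix
v  = vv[2, :]                        # a 3-element Vector, not a 1×3 matrix

# v * mx                             # MethodError: no Vector*Matrix method
y = transpose(mx) * v                # the workaround mentioned above

y ≈ [sum(v[i] * mx[i, j] for i in 1:3) for j in 1:3]   # matmul(v, mx) semantics
```

`transpose(mx) * v` sums over the same inner index as Fortran's `matmul(v, mx)`, so the two agree entry by entry; the complaint in the issue is only that the rewrite should not be necessary.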