
Different behavior of .* in sparse and non-sparse matrices and problem in sparse lufact() #142

Closed
jasax opened this issue Sep 19, 2014 · 8 comments

@jasax

jasax commented Sep 19, 2014

I'm fiddling with sparse matrices and have found a few quirks :-)

What I intend to do is use LU factorization on sparse matrices (for now on real matrices, but I hope to get it working also for sparse complex matrices).

First, in the manual, in the Linear Algebra section (http://docs.julialang.org/en/release-0.3/stdlib/linalg/), there is a relation between the L and U factors (in the case of the F = lufact(A) operation they are extracted as F[:L] and F[:U]) and a row and column permutation of the factorized sparse matrix A, which reads as

F[:L] * F[:U] == Rs .* A[F[:p], F[:q]]

It seems to me (according to the explanation in that section) that it should read as

F[:L] * F[:U] == F[:Rs] .* A[F[:p], F[:q]]

because Rs does not exist on its own: it has to be extracted from the object F, and therefore should be written F[:Rs].

Aside from that typo, the RHS of that expression gives an error when evaluated, due to a mismatch in dimensions. Indeed the .* operator does not behave equivalently for dense and sparse matrices (this does not happen with the .+ operator, for example), as the session below clearly shows. I end that session with an example of the failure, by applying lufact() and reproducing the error.

I checked the code around sparse/sparsematrix.jl:531, where the error is triggered, and there is a check of conformant dimensions for the elementwise product .* (and for other operators):

       .............
       if size(A,1) != size(B,1) || size(A,2) != size(B,2)
           throw(DimensionMismatch(""))            # line 531 of the source code
       end
       .............

(where A and B are the operands), which means that the .* operator only accepts operands with the same dimensions. I think that is what triggers the error.

But this means that .* behaves differently for sparse and non-sparse matrix representations.

Below is a session with Julia 0.3.0 running on Windows 7 on an AMD 8-core Piledriver processor, where the above cases are visible, with added comments. I experimented with Julia 0.2.1 on Linux (Ubuntu, under VirtualBox) and the behavior of .* is the same as on Windows 7; however, in 0.2.1 lufact() is not implemented for sparse matrices, so I could not check it there.

# H and v are a matrix and a vector, Hs and vs the corresponding sparse objects.

julia> H=[1 0; 2 -1.0]
2x2 Array{Float64,2}:
 1.0   0.0
 2.0  -1.0

julia> v=[2,3.0]
2-element Array{Float64,1}:
 2.0
 3.0

julia> Hs=sparse(H)
2x2 sparse matrix with 3 Float64 entries:
        [1, 1]  =  1.0
        [2, 1]  =  2.0
        [2, 2]  =  -1.0

julia> vs=sparse(v)
2x1 sparse matrix with 2 Float64 entries:
        [1, 1]  =  2.0
        [2, 1]  =  3.0

julia> v .* H
2x2 Array{Float64,2}:
 2.0   0.0
 6.0  -3.0

julia> v .* Hs
ERROR: DimensionMismatch("")
 in .* at sparse/sparsematrix.jl:531

julia> vs .* H
ERROR: DimensionMismatch("")
 in .* at sparse/sparsematrix.jl:531

julia> vs .* Hs
ERROR: DimensionMismatch("")
 in .* at sparse/sparsematrix.jl:531

# product * is OK

julia> H * Hs
2x2 Array{Float64,2}:
 1.0  0.0
 0.0  1.0

julia> Hs * H
2x2 Array{Float64,2}:
 1.0  0.0
 0.0  1.0

# does the difference in sizes of v and vs explain something?

julia> size(H)
(2,2)

julia> size(Hs)
(2,2)

julia> size(v)
(2,)

julia> size(vs)
(2,1)

# the .+ operator is OK with mismatched operand dimensions (sparse or not)

julia> v .+ H
2x2 Array{Float64,2}:
 3.0  2.0
 5.0  2.0

julia> v .+ Hs
2x2 Array{Float64,2}:
 3.0  2.0
 5.0  2.0

julia> vs .+ H
2x2 Array{Float64,2}:
 3.0  2.0
 5.0  2.0

julia> vs .+ Hs
2x2 Array{Float64,2}:
 3.0  2.0
 5.0  2.0

# .* is OK for conformant sparse operands

julia> Hs .* Hs
2x2 sparse matrix with 3 Float64 entries:
        [1, 1]  =  1.0
        [2, 1]  =  4.0
        [2, 2]  =  1.0

# now we lufact(Hs) and show the problem with F[:Rs] .* A[F[:p], F[:q]]

julia> S=lufact(Hs)
UMFPACK LU Factorization of a 2-by-2 sparse matrix
Ptr{Void} @0x0000000020c65560


julia> S[:L]
2x2 sparse matrix with 2 Float64 entries:
        [1, 1]  =  1.0
        [2, 2]  =  1.0

julia> S[:U]
2x2 sparse matrix with 3 Float64 entries:
        [1, 1]  =  -0.333333
        [1, 2]  =  0.666667
        [2, 2]  =  1.0

julia> S[:p]
2-element Array{Int64,1}:
 2
 1

julia> S[:q]
2-element Array{Int64,1}:
 2
 1

julia> S[:Rs]
2-element Array{Float64,1}:
 1.0
 0.333333

julia> Hsperm=Hs[S[:p],S[:q]]
2x2 sparse matrix with 3 Float64 entries:
        [1, 1]  =  -1.0
        [1, 2]  =  2.0
        [2, 2]  =  1.0

julia> S[:Rs] .* Hsperm
ERROR: DimensionMismatch("")
 in .* at sparse/sparsematrix.jl:531

@tkelman

tkelman commented Sep 19, 2014

Not positive, but I think you actually want to be using F[:Rs] as the diagonal elements of a scaling matrix, not as a vector. Some of the other issues are due to not currently having true sparse vectors, the same way we have 1-d dense vectors. Sparse vectors are inconsistently implemented as n-by-1 CSC matrices, which leads to some problems.
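
For example, something along these lines (a rough, untested sketch in Julia 0.3 syntax, reusing Hs and S from the session above) is what I have in mind; note that the scaling is applied to A before the permutation:

S = lufact(Hs)
scaled = diagm(S[:Rs]) * full(Hs)   # Rs as the diagonal of a row-scaling matrix
lhs = full(S[:L] * S[:U])
rhs = scaled[S[:p], S[:q]]          # permute the scaled matrix
norm(lhs - rhs)                     # expected to be ~0 (round-off only)

Using full() here just sidesteps the sparse .* problem for the purpose of checking the identity.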

@jiahao added the sparse, linear algebra, and docs labels Sep 19, 2014
@jasax
Author

jasax commented Sep 19, 2014

This is a bit awkward, as is the existence of a 1-D Array type which, after double transposition (which should return the original object), becomes a different type. The code below shows that the "black sheep" is the original vector g; if g were created as a 5x1 Array{Float64,2} (which g becomes after double transposition), instead of as a 5-element Array{Float64,1}, the inconsistency wouldn't exist. But that change (vector becomes matrix) would probably have some impact (in execution time) on many uses of vectors... :-)

julia> g=[1;2;3;4;5.0]
5-element Array{Float64,1}:
 1.0
 2.0
 3.0
 4.0
 5.0
julia> g==g''
false
julia> sparse(g)==sparse(g'')
true
julia> g'
1x5 Array{Float64,2}:
 1.0  2.0  3.0  4.0  5.0
julia> g''
5x1 Array{Float64,2}:
 1.0
 2.0
 3.0
 4.0
 5.0
julia> sparse(g)
5x1 sparse matrix with 5 Float64 entries:
        [1, 1]  =  1.0
        [2, 1]  =  2.0
        [3, 1]  =  3.0
        [4, 1]  =  4.0
        [5, 1]  =  5.0
julia> sparse(g'')
5x1 sparse matrix with 5 Float64 entries:
        [1, 1]  =  1.0
        [2, 1]  =  2.0
        [3, 1]  =  3.0
        [4, 1]  =  4.0
        [5, 1]  =  5.0
julia> (sparse(g))'==sparse(g')
true
julia> full(sparse(g)')==g'
true

I apologize if these questions have already been discussed and settled previously (as probably they have...), but I couldn't find anything when searching.

@jiahao
Member

jiahao commented Sep 19, 2014

#42

@jasax
Author

jasax commented Sep 20, 2014

Thanks @jiahao. The funny thing is that the sparse() and full() functions have eliminated the inconsistency; that is, sparse objects are "mathematically" consistent with transposition :-)

More funny stuff :-) (we can always use multiple dispatch to tackle both types...)

julia> function f(v::Array{Float64,1})
println(v)
end
f (generic function with 1 method)
julia> g
5-element Array{Float64,1}:
 1.0
 2.0
 3.0
 4.0
 5.0
julia> f(g)
[1.0,2.0,3.0,4.0,5.0]
julia> f(g'')
ERROR: `f` has no method matching f(::Array{Float64,2})
julia> f(full(sparse(g)))
ERROR: `f` has no method matching f(::Array{Float64,2})
julia> function f(v::Array{Float64,2})
println(v)
end
f (generic function with 2 methods)
julia> f(g'')
[1.0
 2.0
 3.0
 4.0
 5.0]

But I'm digressing... the original issue was that the v .* Msparse operation doesn't work while v .* M does... and I don't know whether that is seen as a "bug" or as a "feature" in Julia :-).

@tkelman

tkelman commented Sep 20, 2014

sparse objects are "mathematically" consistent with transposition :-)

Only because sparse vectors are currently implemented in a more Matlab-ish manner than the way Julia's dense vectors work.

v .* Msparse operation doesn't work, while v .* M does... and I don't know if it is seen as a "bug" or as a "feature" in Julia :-)

Bug. Sparse vectors should probably be the 1-D case of general N-D COO sparse, rather than shoehorned in as 1-column CSC matrices like they are now.

The inconsistency of dense g'', as Jiahao pointed out, is a separate bug in Julia's handling of transpose vectors. There's another PR open for a Transpose type which I think would be a nicer solution here. And if we had true 1-D sparse vectors, then the same behaviors would apply to both sparse and dense 1-D vectors w.r.t. transposition and linear algebraic operations.
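
Roughly, I'd picture something shaped like the following for the 1-D case (purely illustrative; the name SparseVec and its fields are hypothetical, not anything that exists in Base):

immutable SparseVec{Tv,Ti<:Integer}
    n::Int              # logical length
    nzind::Vector{Ti}   # indices of the stored (structurally nonzero) entries
    nzval::Vector{Tv}   # the stored values
end

Base.size(v::SparseVec) = (v.n,)   # 1-D, like a dense Vector, so broadcasting and
                                   # transposition could follow the same rules

That is just a sketch of the shape of the type, not a proposal for the actual design.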

@tkelman

tkelman commented Sep 20, 2014

Oh, and the v .* Msparse failure actually has another cause unrelated to sparse vectors: the broadcasting behavior of v .* Mdense just isn't implemented in the sparse version of .*.

julia> [5,6] .* [1 2; 3 4]
2x2 Array{Int64,2}:
  5  10
 18  24

There are cases where this is what you want, e.g. if :Rs stands for row scaling. But evidently the broadcast machinery isn't generalized to sparse matrices. Actually, not quite; rather, the sparse methods are taking priority:

julia> @which [5,6] .* [1 2; 3 4]
.*(As::AbstractArray{T,N}...) at broadcast.jl:278

julia> @which [5,6] .* sparse([1 2; 3 4])
.*(A::Array{T,N},B::SparseMatrixCSC{Tv,Ti<:Integer}) at sparse/sparsematrix.jl:634

Line 634 of sparse/sparsematrix.jl is (.*)(A::Array, B::SparseMatrixCSC) = (.*)(sparse(A), B), which doesn't do broadcasting correctly because sparse vectors are not 1-D.

@jasax
Author

jasax commented Sep 20, 2014

OK, thanks @tkelman. Unfortunately I don't know much (i.e. near zero...) about Julia's innards. I browsed umfpack.jl in the sources and saw that C functions from the package are being called (with ccall()). I cannot yet juggle Julia's types and type system, so I could not go to a specific spot and propose a patch :-(

I'm in the (spare-time) process of building simulation tools for handling (hopefully) very large electric circuits, which is only viable in a sparse-matrix landscape. Once upon a time I did those things in C, but now I was hoping to benefit from the sparse implementation in Julia to avoid the malloc()/free() tango of C :-), use PCRE to parse input files, and escape from the turtle velocity offered by Perl, Python, Ruby, etc... :-)

@tkelman

tkelman commented Sep 20, 2014

@jasax I think the best place to look is around line 634 of sparse/sparsematrix.jl. That file should be mostly pure-Julia code. I think unconditionally converting A::Array to sparse(A) might be the wrong thing to do for .* here. You may be able to make headway experimenting with some alternatives.
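
For instance, something along these lines (a hypothetical, untested sketch to try inside sparsematrix.jl, where the .* methods already live): special-case 1-D dense vectors so they broadcast against the sparse operand the way they do against a dense one, instead of being converted to an n-by-1 sparse matrix:

(.*)(A::Vector, B::SparseMatrixCSC) = broadcast(*, A, full(B))   # broadcasts like the dense case
(.*)(A::SparseMatrixCSC, B::Vector) = broadcast(*, full(A), B)

This densifies the sparse operand, so it's not the real fix (a real fix would want sparse-aware broadcasting), but it shows the semantics that line 634 currently misses.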

This issue was closed.