Interoperability and completeness of special matrix types #136
Some thoughts:
True. @andreasnoackjensen just merged JuliaLang/julia#7992, which makes the symmetric matrix types parametric on other matrix types. I didn't mean to imply that sparse matrix support is second-class or unimportant. It's just a messier problem, and I don't know a good strategy going forward.
I've been thinking about this recently in terms of matrix multiplication (particularly for JuliaLang/julia#6837). The key problem here is the quadratic explosion of methods: every additional type has to implement multiplication with every other type. Is there a generic way we could implement matrix multiplication so that only O(1) additional functions need to be defined per type? At their core, all the multiplication methods basically differ in how they iterate over pairs of elements from each matrix (column-wise, row-wise, block-wise, etc.). Could we define these various iteration schemes for each matrix type, along with an associated "cost" (e.g. CSC is cheap by column, very expensive by row), and then have multiplication choose the most efficient scheme?
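A minimal sketch of this idea, assuming hypothetical names (`IterScheme`, `itercost`, `choose_scheme` are invented for illustration and are not part of Base): each matrix type advertises the relative cost of iterating its stored entries in each order, and a generic routine picks the cheapest.

```julia
using SparseArrays

abstract type IterScheme end
struct ByRow    <: IterScheme end
struct ByColumn <: IterScheme end

# Relative iteration costs; the fallback assumes either order is fine.
itercost(::AbstractMatrix, ::IterScheme) = 1
itercost(::SparseMatrixCSC, ::ByColumn)  = 1    # CSC is cheap by column
itercost(::SparseMatrixCSC, ::ByRow)     = 100  # ...and very expensive by row

# A generic multiply could dispatch on the scheme returned here instead of
# needing a hand-written method for every pair of matrix types.
choose_scheme(A::AbstractMatrix) =
    itercost(A, ByColumn()) <= itercost(A, ByRow()) ? ByColumn() : ByRow()
```

With this, adding a new matrix type would only require defining its `itercost` methods, not a multiplication method against every existing type.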
@tkelman Thanks for opening this issue. Much of the development of the linear algebra functionality has been organic and hasn't followed a master plan. It is a problem that the potential number of methods grows so fast in the number of types; I don't think it is feasible to support all possible combinations. Right now some of the types try to cover more cases by promoting explicitly to dense matrices. It is possible that this could be done in a more automatic way, but I haven't figured out how, and actually I'm not sure we really want it, because a no-method error might be better than a huge sparse matrix getting promoted to a dense matrix implicitly. At least temporarily, we might want to force the user to be explicit about the order of evaluation of expressions involving special matrices. You are more experienced with sparse matrices than I am, so please continue to share your thoughts on these issues. There is still plenty of work to do.
When solving sparse systems, knowing that the system matrix is triangular is a big win, and knowing that it is symmetric/Hermitian helps you choose the type of decomposition and representation. Some of the recent code in the Watson Sparse Matrix Package (WSMP) and HSL (what used to be called the Harwell Subroutine Library) allows for a Bunch-Kaufman decomposition in addition to a Cholesky decomposition. That is, it is not necessary to have a positive definite sparse symmetric matrix in order to take advantage of symmetry when solving sparse systems.

It may be necessary to enforce the underlying structure in a symmetric/Hermitian or triangular sparse type so that unnecessary copying can be avoided. For example, all the BLAS/LAPACK routines that apply to (non-packed) triangular matrices ignore the contents of the strict triangle not indicated by the uplo flag. For sparse matrices it may be necessary to ensure that the stored entries actually respect the indicated triangle.
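One way to enforce that structure, sketched below with a hypothetical helper (`enforce_upper` is not a Base function): before treating a sparse matrix as upper triangular, drop any stored entries below the diagonal, since a dense-style uplo convention would silently ignore them while sparse algorithms would otherwise have to re-check every entry.

```julia
using SparseArrays, LinearAlgebra

# Hypothetical helper: normalize a sparse matrix so that only entries in the
# upper triangle are stored. If it is already upper triangular, return it
# unchanged to avoid an unnecessary copy.
enforce_upper(S::SparseMatrixCSC) = istriu(S) ? S : triu(S)
```

A structured wrapper type could call something like this in its constructor, so every downstream algorithm can assume the invariant holds.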
No, you didn't do that; the current state of Julia's standard library does that by itself :/ @simonbyrne I was thinking of something similar this morning.
Right, I think the generics in base are not sufficiently generic, for this reason. I shouldn't have to wait 2 minutes for a generic fallback.
In fact, there are a number of applications in optimization where it is an absolute requirement to preserve symmetry with an indefinite matrix, in order to obtain the inertia of the matrix (the number of positive, negative, and zero eigenvalues).
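To make the inertia requirement concrete, here is a small sketch (the `inertia` function and its tolerance are illustrative, not a Base API). Computing it from eigenvalues, as below, works for small dense matrices; the point of the comment above is that for sparse indefinite systems one wants to recover the same triple from a symmetry-preserving LDLᵀ (Bunch-Kaufman) factorization instead.

```julia
using LinearAlgebra

# Illustrative only: inertia of a symmetric matrix as the counts of
# positive, negative, and (numerically) zero eigenvalues.
function inertia(A::Symmetric; tol = 1e-12)
    λ = eigvals(A)
    (count(>(tol), λ), count(<(-tol), λ), count(x -> abs(x) <= tol, λ))
end

A = Symmetric([2.0 0.0 0.0;
               0.0 -3.0 0.0;
               0.0 0.0 0.0])
inertia(A)  # (1, 1, 1): one positive, one negative, one zero eigenvalue
```

If symmetry were lost (say, by an implicit promotion that perturbs the matrix), the inertia would no longer be well defined, which is why these applications cannot tolerate silent structural changes.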
I just ran into one of these limitations today. At the very least, we should support these combinations directly. It would also be good to have them inherit from a common abstract type.
@ivanslapnicar just pointed out that Tridiagonal*Tridiagonal returns a dense matrix.
We don't have a type for pentadiagonal matrices, so Tridiagonal*Tridiagonal returning dense doesn't surprise me. But it would probably surprise someone who expected Julia to do that operation quickly (without specific knowledge of the internals).
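The bandwidth-growth problem above can be seen directly: bandwidths add under multiplication, so the product of two tridiagonal (bandwidth-1) matrices is pentadiagonal (bandwidth 2), and with no pentadiagonal type available the result falls back to unstructured storage.

```julia
using LinearAlgebra

# Two tridiagonal matrices: subdiagonal, diagonal, superdiagonal.
A = Tridiagonal(ones(4), fill(2.0, 5), ones(4))
B = A * A   # pentadiagonal in structure, but stored without a banded type

# Every entry more than two off the diagonal is zero: bandwidth 1 + 1 = 2.
is_pentadiagonal = all(iszero, (B[i, j] for i in 1:5, j in 1:5 if abs(i - j) > 2))
```

A general banded type (as proposed in JuliaLang/julia#8240) would let this product keep an O(n) representation instead.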
I did propose support for arbitrary banded matrices in JuliaLang/julia#8240, so the specific issue I just mentioned can be addressed as part of the broader cleanup. However, it does speak to the combinatorial growth problem.
So, now that we have a parameterized Symmetric type, it very quickly becomes clear how many method combinations are still missing.
Leaving a breadcrumb to a bit of discussion here: JuliaLang/julia#10902 (comment). I'm thinking that some of the same questions apply there.
These discussions have generally moved to Discourse, and I am closing this issue since there isn't anything directly actionable here.
At present there are a variety of special structured matrix types - diagonal, bidiagonal, tridiagonal, symtridiagonal, banded, triangular, hessenberg, symmetric, hermitian, sparse CSC, and so on - in base Julia. The completeness and performance of these implementations varies widely, since they've been implemented piecemeal and as needed by @ViralBShah, @andreasnoack, @jiahao, and many others.
Managing this diversity in an effective way is a hard problem; there are only a handful of well-engineered systems in this area. I hope Julia can improve here.
There's a lot to this, so it's probably beyond the scope of 0.4. Many of these specialized array types do not need to be in base long-term, but they should be default, and ideally precompiled, packages. Moving things out to packages too early, especially before the right frameworks are in place to support good interoperability, could in the worst case lead to fragmentation and lower visibility. I also very badly want to narrow the divide between dense linear algebra and sparse linear algebra here, as the latter is a bit of a bastard stepchild in the current system.
I don't have any concrete proposals here yet, but I think we should open a discussion. The challenges are very different, but look at how the JuliaOpt team has designed MathProgBase, for example: its hourglass architecture has done an amazing job of tying together a diverse set of solvers and multiple modeling environments. Can we similarly identify the narrow waists and common protocols for structured linear algebra?
The near-future implementation of CSR sparse (for which there's already an initial PR, JuliaLang/julia#7029) and arbitrary-dimensional COO sparse might make a good proving ground to start coming up with ideas here.
TL;DR: we have lots of array types, how do we make them work together better?