
Rename CHOLMOD.update! -> CHOLMOD.cholfact!/CHOLMOD.ldltfact! #14419

Merged · 1 commit · Dec 18, 2015

Conversation

andreasnoack
Member

This can be considered a reversion of a renaming of cholmod! that I made in the spring. At that time, I split cholfact(SparseMatrixCSC) into cholfact and ldltfact and thought it would be a good idea to have a generic function for reusing the symbolic factorization.

In practice, I don't think it is important to have a generic function for this, and we are now using LinAlg.update! for rank updates of matrices/factorizations, so a renaming was necessary.

This also moves the cholfact documentation to the source.
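The reuse pattern this rename targets can be sketched as follows (a hypothetical usage sketch against the 2015-era API; the matrices and values here are made up):

```julia
A = sparse([2.0 1.0; 1.0 2.0])
F = cholfact(A)            # symbolic analysis + numerical factorization

# Refactorize a matrix with the same sparsity pattern, reusing the
# symbolic factorization stored in F (previously CHOLMOD.update!(F, B)):
B = sparse([4.0 1.0; 1.0 4.0])
cholfact!(F, B)
x = F \ [1.0, 1.0]
```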

@kshyatt added the linear algebra (Linear algebra) and sparse (Sparse arrays) labels on Dec 15, 2015
```julia
    Hermitian{Complex{Float64},SparseMatrixCSC{Complex{Float64},SuiteSparse_long}}};
    shift = 0.0) -> CHOLMOD.Factor
```

Compute the LDLt factorization of `A` and reusing the symbolic factorization `F`.
Contributor


comma instead of "and"

@tkelman
Contributor

tkelman commented Dec 16, 2015

A better generic name for these would be something like symfact and numfact!.

update! is just too broad - I don't think the current LinAlg.update! is all that great a name or unambiguous in what it does, but that's a different issue (edit: #14243).

@ViralBShah
Member

I agree that update! is too broad.

@andreasnoack andreasnoack force-pushed the anj/update branch 2 times, most recently from 75673a0 to 96c9998 Compare December 16, 2015 16:23
@andreasnoack
Member Author

Even though the sparse factorizations decompose into a symbolic and a numerical factorization step, I'm not sure it is the right separation in applications. I'd expect that, most often, you want to do both the first time you factorize a matrix, and then, afterwards, you might want to reuse the symbolic factorization. I.e. I'm not sure having a sym(b)fact function is all that useful. It is also ambiguous which symbolic factorization you are asking for: LU, Cholesky, LDLt?

Therefore, I think the main question is whether we want a generic function for the numerical factorization step, or whether it is fine to overload cholfact! and ldltfact! (and eventually lufact!) for this. Is it likely that you want to handle the numerical factorizations generically for e.g. positive definite matrices and general square matrices? I'd expect that you would specialize this for each category, but I might be wrong.

I'm fine with renaming the update! function to rankupdate!. I'll open a PR for that.

@tkelman
Contributor

tkelman commented Dec 16, 2015

In nonlinear optimization (e.g. Ipopt), splitting symbolic and numeric is absolutely the right application separation. Same in any other application where you repeatedly do factorizations on a set of matrices with the same structure but different values. This is very common. The nonzero structure is often known well in advance of any of the values, or provided separately. We need a good API for this for doing serious sparse work.

symfact could dispatch on array types, or have an input flag for the desired factorization type. cholfact, ldltfact, bkfact, lufact etc could similarly be consolidated into a single factorize with dispatching on an algorithm flag.
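The split proposed here could look roughly like this (a sketch only; symfact and numfact! are the hypothetical names from this comment, and kkt_matrix/residual stand in for an application's own assembly routines):

```julia
# Offline: only the nonzero pattern is known.
F = symfact(pattern)              # symbolic analysis; preallocates the factor

# Online: values arrive at each iteration of the solver.
for k in 1:maxiter
    A = kkt_matrix(xk)            # same structure, new values
    numfact!(F, A)                # numerical factorization into preallocated F
    xk -= F \ residual(xk)
end
```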

@andreasnoack
Member Author

Same in any other application where you repeatedly do factorizations on a set of matrices with the same structure but different values.

I think you missed my point here. I'm aware of this application, but I'm asking if it is really necessary/desirable to do the symbolic factorization in isolation from the first numerical factorization? That might be the case, but you didn't explain why.

symfact could dispatch on array types, or have an input flag for the desired factorization type. cholfact, ldltfact, bkfact, lufact etc could similarly be consolidated into a single factorize with dispatching on an algorithm flag.

We have had variations of this discussion before and I think you underestimate the challenges with your approach. The "algorithm flag" would either give type instability (for symbols), be non-generic (like factorize(Cholesky, A) working for dense, but not sparse matrices) or introduce new abstract factorization types (factorize(AbstractCholesky, A)). The last solution might be interesting, but it's not obvious that it's better than cholfact(A) or that it's sufficient.

@dmbates
Member

dmbates commented Dec 16, 2015

@andreasnoack I have had cases where I know the structure of the sparse symmetric matrix when constructing an instance of a Julia type that should include both the sparse matrix and its Cholesky factor, but I don't know the actual values until I begin an optimization using that object.

However, it is usually not that big a deal to insert some values that work, as long as transient zeros (i.e. values that are not systematic zeros but happen to take on the value of zero) are not dropped. I think that CHOLMOD symbolic factorization does not check the values of the nonzeros so that should not be a problem.

In other words, one can construct situations where the symbolic factorization should be done by itself but it is not difficult to write around them.
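The workaround described above can be sketched like this (hypothetical values; the point is that the placeholder entries only need to preserve the nonzero pattern, since the symbolic step ignores the values):

```julia
# Structure is known; insert placeholder values that keep every
# structural nonzero (here a small diagonally dominant choice).
I = [1, 2, 1, 2]; J = [1, 1, 2, 2]
A = sparse(I, J, [2.0, 1.0, 1.0, 2.0])
F = ldltfact(A)        # symbolic step (plus a throwaway numerical one)

# Later, when the real values are available (same pattern):
ldltfact!(F, Areal)    # Areal is hypothetical; numerical step only
```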

@tkelman
Contributor

tkelman commented Dec 16, 2015

I'm aware of this application, but I'm asking if it is really necessary/desirable to do the symbolic factorization in isolation from the first numerical factorization? That might be the case, but you didn't explain why.

Yes it is, and yes I did:

The nonzero structure is often known well in advance of any of the values, or provided separately.

The symbolic factorization and anything dependent only on structure can often be moved to effectively "offline" in a lot of these applications, done once before any values are known, to preallocate the data structures that will be needed for the numerical factorization. Doing an extra throwaway first-time numerical factorization is not an acceptable overhead for me when I may only need a handful of iterations in some cases.

(like factorize(Cholesky, A) working for dense, but not sparse matrices)

I'm not so much thinking of the type as an indicator of what the return type should be, but rather an algorithm variant flag. So I wouldn't be using the dense Cholesky return type vs CHOLMOD.Factor, but rather something like Val{:cholesky} - though hopefully a cleaner alternative to that will be available soon.
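Dispatching on such a flag might look like this (a sketch only; these methods are not proposed in the PR, and myfactorize is a hypothetical name chosen to avoid clashing with Base.factorize):

```julia
# Consolidated front end dispatching on an algorithm flag:
myfactorize(::Type{Val{:cholesky}}, A) = cholfact(A)
myfactorize(::Type{Val{:ldlt}},     A) = ldltfact(A)
myfactorize(::Type{Val{:lu}},       A) = lufact(A)

F = myfactorize(Val{:cholesky}, A)
```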

@andreasnoack
Member Author

The nonzero structure is often known well in advance of any of the values

How do you store the "nonzero structure"? Would you create a SparseMatrixCSC with dummy elements as mentioned by @dmbates or do you use some kind of graph structure?

I guess a solution could be to provide a keyword argument, numerical=true, to cholfact/ldltfact. I think I added checks in the methods that require a numerical factorization, so maybe the change is not that big. Would that work, or would it still be useless?

Isn't Val{:cholesky} just a funny version of AbstractCholesky? Can you reveal any details about the cleaner alternative?

@tkelman
Contributor

tkelman commented Dec 16, 2015

Just the indices part of the CSC structure. I probably wouldn't create the SparseMatrixCSC, or if I did I'd use the constructor directly and make a slightly invalid object first with an empty nzval. A numerical=true keyword argument sounds like a reasonably small way to try something out here.

Defining and exporting an AbstractCholesky type just to use it as a flag seems like needlessly occupying namespace, other than that it might be more or less equivalent. The cleaner alternative is that I'm hoping #13412 might make Val{} hacks no longer necessary? Though I think I'm confusing that with something else.
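The structure-only construction mentioned here can be sketched as follows (the resulting object is, as noted above, slightly invalid until nzval is filled in):

```julia
# Hold only the index part of the CSC structure: call the inner
# constructor directly with an empty nzval.
m, n = 2, 2
colptr = [1, 3, 5]          # column pointers
rowval = [1, 2, 1, 2]       # row indices of the structural nonzeros
S = SparseMatrixCSC(m, n, colptr, rowval, Float64[])
```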

andreasnoack added a commit that referenced this pull request Dec 18, 2015
Rename CHOLMOD.update! -> CHOLMOD.cholfact!/CHOLMOD.ldltfact!
@andreasnoack andreasnoack merged commit e83b755 into master Dec 18, 2015
@andreasnoack andreasnoack deleted the anj/update branch December 18, 2015 15:23