Add LinearOperator constructor for CUDA #321
Conversation
Another option could be to write an extension for AbstractGPUArrays; however, this would require some workarounds to strip the array type of its parameters. One way might be the workaround shown here, or maybe with a
Hi @nHackel, thank you for this PR! I was wondering if something like

```julia
A = CUDA.rand(10, 10)
tA = typeof(A)
S = tA.name.wrapper{eltype(A), 1, tA.parameters[3:end]...}
test_vec = S(undef, 10)
```

would work for any matrix type (including other GPUArrays, though I can only test with CUDA) to avoid having CUDA in the extensions and solve your issue?
I think that `S = tA.name.wrapper{eltype(A), 1, tA.parameters[3]}` might work at least for CUDA, but probably not for CPU arrays. Let me know if you don't see an obvious answer; we can probably merge this PR and open an issue to improve genericity in the future.
Hello @geoffroyleconte, thanks for looking into the PR! I thought about the workarounds a bit more, and I fear one issue we would run into if we try to make this very generic (either just for GPU arrays or for all arrays) is all the various matrix types where the correct storage/result type is not simply a 1-dimensional version of the matrix type. For example, a sparse matrix ultimately yields a dense result vector. While one could handle that with a special case for AbstractSparseMatrix, the same thing could happen for any number of array types we don't know about.
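To illustrate the point above (a hypothetical example, not part of the PR): deriving the storage type as a 1-dimensional version of the matrix type fails for sparse matrices, because a sparse matrix-vector product produces a dense vector.

```julia
using SparseArrays

# A sparse matrix...
A = sprand(10, 10, 0.3)  # SparseMatrixCSC{Float64, Int64}
v = rand(10)

# ...but the product is dense: the natural storage type here is
# Vector{Float64}, not the "1-D version" SparseVector{Float64, Int64}.
y = A * v
typeof(y)  # Vector{Float64}
```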
Yes, that's a good point. I mainly used dense vectors even with sparse arrays, but there could be applications where sparse vectors are needed.
@dpo does this look good to you?
I thought about adding the method

```julia
function LinearOperator(
  M::AbstractMatrix{T};
  symmetric = false,
  hermitian = false,
  S = storage_type(M), # previously Vector{T}
)
```

for "all" arrays here. That change would be a bit more invasive, since it affects the fallback for all matrix types. Let me know what you think, I can either just add the
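A `storage_type` default along these lines could look roughly as follows (a sketch; the function name is taken from the snippet above, but its actual definition and method table in the PR may differ):

```julia
# Generic fallback: keep the previous behaviour and return a dense
# Vector{T} for any AbstractMatrix{T}.
storage_type(::AbstractMatrix{T}) where {T} = Vector{T}

# A CUDA package extension could then overload it, e.g.
# (not runnable here without a GPU, shown only as an illustration):
# storage_type(::CUDA.CuMatrix{T}) where {T} = CUDA.CuVector{T}
```

Adding support for a new array type would then indeed come down to one such overload per type.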
Yes, that's a great idea! Adding new array types would then require few lines of code.
Thank you!
This PR adds a method for the `LinearOperator` function with a correct default storage type for CUDA's `CuArray`. This should fix #307 (partially, as it does not fix the upstream error in NLPModels.jl); in particular, this error is resolved:
The method is defined in a package extension for CUDA. I've also tried to make it available for older Julia versions with Requires, similar to how it was done for the ChainRulesExt.
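With the extension loaded, usage could look like the following sketch (an assumption about the intended behaviour, and it requires a CUDA-capable GPU, so it is not runnable everywhere):

```julia
using CUDA, LinearOperators

# With the CUDA extension active, the constructor should default to a
# GPU storage type instead of Vector{T}.
A = CUDA.rand(Float32, 10, 10)
op = LinearOperator(A)

v = CUDA.rand(Float32, 10)
op * v  # expected to stay on the GPU as a CuVector{Float32}
```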