[BlockSparseArrays] BlockSparseArray functionality #2
Thanks. This currently works:

julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks
julia> using LinearAlgebra: I
julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
0.0 0.0 │ 0.0 0.0
0.0 0.0 │ 0.0 0.0
──────────┼──────────
0.0 0.0 │ 0.0 0.0
0.0 0.0 │ 0.0 0.0
julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
1 ⋅ ⋅
⋅ 1 ⋅
⋅ ⋅ 1
julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
0.0 0.0 │ 0.0 0.0
0.0 0.0 │ 0.0 0.0
──────────┼──────────
0.0 0.0 │ 1.0 0.0
0.0 0.0 │ 0.0 1.0
julia> using NDTensors.SparseArrayInterface: stored_indices
julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
CartesianIndex(2, 2) │ CartesianIndex(2, 2)

though using this alternative syntax is currently broken:

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
0.0 0.0 │ 0.0 0.0
0.0 0.0 │ 0.0 0.0
──────────┼──────────
0.0 0.0 │ 0.0 0.0
0.0 0.0 │ 0.0 0.0
julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
[1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
@ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
[2] top-level scope
@ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

I would have to think about whether it makes sense to support that.
In terms of matrix factorizations, I have a prototype of a QR decomposition of a block sparse matrix.
Also note that slicing like this should work right now: `a[Block(1, 1)[1:2, 1:2]]`, i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.
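For reference, here is a minimal sketch of that within-block slicing notation, assuming the same imports and constructors used earlier in this thread (`Block(2, 2)[1:2, 1:2]` is a `BlockIndexRange` from BlockArrays.jl):

```julia
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 2], [2, 2])
a[Block(2, 2)] = ones(2, 2)

# Block(2, 2)[1:2, 1:2] indexes *within* block (2, 2),
# not within the overall 4×4 array.
a[Block(2, 2)[1:2, 1:2]]  # the full 2×2 contents of that block
a[Block(2, 2)[1:1, 1:2]]  # just the first row of that block
```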
New feature request: I updated the first comment. Edit: FIXED
New issue: … Edit: FIXED
New issue: … Edit: FIXED
@ogauthe a number of these issues were fixed by ITensor/ITensors.jl#1332; I've updated the list in the first post accordingly. I added regression tests in ITensor/ITensors.jl#1360 for the ones that still need to be fixed, and additionally added placeholder tests that I've marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with the broken behavior marked as broken.
Feature request: … Edit: FIXED
I think ideally … Alternatively, … Good question about whether or not the axes should get dualed if …
The solution was to accept any `Axes<:Tuple{Vararg{<:AbstractUnitRange,N}}`. Then one can construct:

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1)

which currently outputs an error.

Edit: FIXED
Thanks for investigating. That seems like the right move, to generalize the axes in that way. Hopefully that error is easy enough to circumvent.
I am continuing to explore the effects of dual axes.

Edit: FIXED
Issue: I cannot write a slice of a block.

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2)) # does not write into a

EDIT: consistent with the Julia slicing convention, nothing to fix.
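A sketch of why this is consistent, and of the view-based alternative; this follows base Julia semantics (non-scalar `getindex` returns a copy, while a view writes through to the parent), so the exact block-view behavior here is an assumption given the issues discussed elsewhere in this thread:

```julia
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 2], [2, 2])

b = a[Block(1, 1)]        # getindex returns a copy of the block,
b[1:2, 1:2] = ones(2, 2)  # so this write never reaches `a`

v = @view a[Block(1, 1)]  # a view references the parent array,
v[1:2, 1:2] = ones(2, 2)  # so this write lands in `a`
```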
Issue:

a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a)) # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)])) # NaN
println(LinearAlgebra.norm(a)) # AssertionError
I just checked that replacing … Edit: FIXED
Issue: a block can be written with an invalid shape. An error should be raised.

a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a)) # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b])) # (2,2)
a[b] = ones((3,3))
println(size(a)) # (5,5)
println(size(a[b])) # (3,3)

Edit: FIXED
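A minimal sketch of the kind of check that could raise that error; `setblock_checked!` is a hypothetical helper for illustration, not the package's actual implementation:

```julia
using BlockArrays: Block

# Hypothetical guard: compare the incoming array against the block size
# implied by the axes of `a` before storing it.
function setblock_checked!(a, value, b::Block)
    bsize = size(view(a, b))
    size(value) == bsize || throw(DimensionMismatch(
        "tried to assign $(size(value)) array to $bsize block"))
    a[b] = value
    return a
end
```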
Thanks to ITensor/ITensors.jl#1467, I can now initialize a BlockSparseArray with graded axes.

using NDTensors.GradedAxes: GradedAxes, dual, gradedrange
using NDTensors.Sectors: U1
using NDTensors.BlockSparseArrays: BlockSparseArray
g1 = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
g2 = gradedrange([U1(0) => 2, U1(1) => 2, U1(3) => 1])
m1 = BlockSparseArray{Float64}(g1, GradedAxes.dual(g2)); # display crash
m2 = BlockSparseArray{Float64}(g2, GradedAxes.dual(g1)); # display crash
m12 = m1 * m2; # MethodError
m21 = m2 * m1; # MethodError
Edit: FIXED
When no dual axis is involved, these operations work as expected.

Edit: FIXED
Issue: display error when writing a block.

using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.Sectors: U1
g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m[BlockArrays.Block(1,1)] .= 1

1×1 view(::NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, BlockSlice(Block(1),1:1), BlockSlice(Block(1),1:1)) with eltype Float64 with indices NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0])×NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):
Error showing value of type SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)
Closest candidates are:
(::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
(::Type{T})(::T) where T<:Number
@ Core boot.jl:792
(::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
...
Stacktrace:
[1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
@ Base ./number.jl:7
[2] cvt1
@ ./essentials.jl:468 [inlined]
[3] ntuple
@ ./ntuple.jl:49 [inlined]
[4] convert(::Type{Tuple{…}}, x::Tuple{Int64, Int64})
@ Base ./essentials.jl:470
[5] push!(a::Vector{Tuple{…}}, item::Tuple{Int64, Int64})
@ Base ./array.jl:1118
[6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
@ Base ./arrayshow.jl:76
[7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
@ Base ./arrayshow.jl:207
[8] print_matrix(io::IOContext{…}, X::SubArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
@ Base ./arrayshow.jl:171
[9] print_matrix
@ ./arrayshow.jl:171 [inlined]
[10] print_array
@ ./arrayshow.jl:358 [inlined]
[11] show(io::IOContext{…}, ::MIME{…}, X::SubArray{…})
@ Base ./arrayshow.jl:399
[12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
[13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
[15] display
@ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
[16] display(x::Any)
@ Base.Multimedia ./multimedia.jl:340
[17] #invokelatest#2
@ ./essentials.jl:892 [inlined]
[18] invokelatest
@ ./essentials.jl:889 [inlined]
[19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
[20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
[21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
[23] (::REPL.var"#do_respond#80"{…})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
[24] #invokelatest#2
@ ./essentials.jl:892 [inlined]
[25] invokelatest
@ ./essentials.jl:889 [inlined]
[26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
@ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
[27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
[28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
@ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

This looks like the same error as the one previously triggered by dual axes. Edit: FIXED
Thanks for the report. It looks like it is more generally a problem with printing views of blocks of a BlockSparseArray with GradedUnitRange axes:

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1
r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
@view a[Block(1, 1)]
As with other related issues, this kind of thing will get fixed when I rewrite GradedAxes based on BlockArrays.jl v1. As a workaround for now, I could just strip the sector labels from the GradedUnitRange when printing. Also, if you need to print the block you can copy it first, e.g. `copy(@view a[Block(1, 1)])`.
There is no real need for a quick fix just for display. It can wait for a rewrite.
Issue: display error when slicing a BlockSparseArray that has dual axes.

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g1, g1)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(g1, dual(g1))
display(m1[:,:]) # Ok
display(m2) # Ok
display(m2[:,:]) # MethodError

ERROR: MethodError: no method matching LabelledInteger{Int64, U1}(::Int64)
Closest candidates are:
(::Type{LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
@ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
(::Type{T})(::T) where T<:Number
@ Core boot.jl:792
(::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
@ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
...
Stacktrace:
[1] convert(::Type{LabelledInteger{Int64, U1}}, x::Int64)
@ Base ./number.jl:7
[2] cvt1
@ ./essentials.jl:468 [inlined]
[3] ntuple
@ ./ntuple.jl:49 [inlined]
[4] convert(::Type{Tuple{Int64, LabelledInteger{Int64, U1}}}, x::Tuple{Int64, Int64})
@ Base ./essentials.jl:470
[5] push!(a::Vector{Tuple{Int64, LabelledInteger{Int64, U1}}}, item::Tuple{Int64, Int64})
@ Base ./array.jl:1118
[6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
@ Base ./arrayshow.jl:76
[7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
@ Base ./arrayshow.jl:207
[8] print_matrix(io::IOContext{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
@ Base ./arrayshow.jl:171
[9] print_matrix
@ ./arrayshow.jl:171 [inlined]
[10] print_array
@ ./arrayshow.jl:358 [inlined]
[11] show(io::IOContext{…}, ::MIME{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…})
@ Base ./arrayshow.jl:399
[12] #blocksparse_show#11
@ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:120 [inlined]
[13] blocksparse_show
@ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:112 [inlined]
[14] #show#12
@ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:130 [inlined]
[15] show(io::IOContext{…}, mime::MIME{…}, a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
@ NDTensors.BlockSparseArrays.BlockSparseArraysGradedAxesExt ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:127
[16] (::OhMyREPL.var"#15#16"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::IOContext{Base.TTY})
@ OhMyREPL ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:23
[17] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[18] display
@ ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:6 [inlined]
[19] display
@ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
[20] display(x::Any)
@ Base.Multimedia ./multimedia.jl:340
[21] top-level scope
@ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

This is the same error as in #2, in a different context. That previous case was fixed and does not error any more. This is another case that should be fixed by the GradedAxes refactoring.

EDIT: fixed by ITensor/ITensors.jl#1531 (was due to a missing `axes(::Base.Slice{<:UnitRangeDual})` method).
I realize there are other issues with … Should we change the behavior of …?

EDIT: fixed by ITensor/ITensors.jl#1531 (was due to a missing `axes(::Base.Slice{<:UnitRangeDual})` method).
I think …
Issue: it is still possible to create a BlockSparseArray with GradedUnitRange axes (as opposed to GradedOneTo), and slicing it then fails.

r = gradedrange([U1(1) => 2, U1(2) => 2])[1:3]
a = BlockSparseArray{Float64}(r,r)
a[1:2,1:2] # MethodError

ERROR: MethodError: no method matching to_blockindices(::BlockArrays.BlockedUnitRange{…}, ::UnitRange{…})
Closest candidates are:
to_blockindices(::UnitRangeDual, ::UnitRange{<:Integer})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:54
to_blockindices(::Base.OneTo, ::UnitRange{<:Integer})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:186
to_blockindices(::BlockedOneTo, ::UnitRange{<:Integer})
@ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:170
Stacktrace:
[1] blocksparse_to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:32
[2] to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:26
[3] to_indices
@ ./indices.jl:344 [inlined]
[4] view
@ ./subarray.jl:183 [inlined]
[5] layout_getindex
@ ~/.julia/packages/ArrayLayouts/31idh/src/ArrayLayouts.jl:138 [inlined]
[6] getindex(::BlockSparseArray{…}, ::UnitRange{…}, ::UnitRange{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:92
[7] top-level scope
@ REPL[57]:1
Some type information was truncated. Use `show(err)` to see complete types.

(main at …) EDIT: fixed by ITensor/ITensors.jl#1531
Issue:

r = gradedrange([U1(0) => 2, U1(1) => 2])
a = BlockSparseArray{Float64}(r, r)
@test isdual.(axes(a)) == (false, false)
@test isdual.(axes(adjoint(a))) == (true, true)
@test_broken isdual.(axes(copy(adjoint(a)))) == (true, true)

(main at …) EDIT: I got confused with … Edit: FIXED
Issue:

g1 = blockedrange([1, 1, 1])
g2 = blockedrange([1, 2, 3])
g3 = blockedrange([2, 2, 1])
g4 = blockedrange([1, 2, 1])
bsa = BlockSparseArray{Float64}(g1, g2, g3, g4);
bsa[Block(3, 2, 2, 3)] .= 1.0
bsat = permutedims(bsa, (2, 3, 4, 1))

ERROR: BoundsError: attempt to access 3×1×2×1 PermutedDimsArray(::Array{Float64, 4}, (2, 3, 4, 1)) with eltype Float64 at index [1:2, 1:2, 1:1, 1:1]
Stacktrace:
[1] throw_boundserror(A::PermutedDimsArray{Float64, 4, (2, 3, 4, 1), (4, 1, 2, 3), Array{…}}, I::NTuple{4, UnitRange{…}})
@ Base ./abstractarray.jl:737
[2] checkbounds
@ ./abstractarray.jl:702 [inlined]
[3] view
@ ./subarray.jl:184 [inlined]
[4] (::NDTensors.BlockSparseArrays.var"#71#74"{Tuple{…}, Tuple{…}})(i::Int64)
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
[5] ntuple
@ ./ntuple.jl:19 [inlined]
[6] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::Function, a_dest::BlockSparseArray{…}, a_srcs::PermutedDimsArray{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
[7] sparse_map!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
[8] sparse_copyto!(dest::BlockSparseArray{…}, src::PermutedDimsArray{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
[9] sparse_permutedims!(dest::BlockSparseArray{…}, src::BlockSparseArray{…}, perm::NTuple{…})
@ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
[10] permutedims!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:135 [inlined]
[11] permutedims(A::BlockSparseArray{…}, perm::NTuple{…})
@ Base.PermutedDimsArrays ./permuteddimsarray.jl:145
[12] top-level scope
@ REPL[227]:1
Some type information was truncated. Use `show(err)` to see complete types.

I guess there is a mismatch between permuting the array structure and permuting the inner blocks. Edit: FIXED
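A conceptual sketch of the two-level permutation involved, written with plain nested `Array`s rather than the package types (my illustration, not the package code):

```julia
# permutedims of a block sparse array has to do two things consistently:
# permute the *positions* of the blocks, and permute the dimensions
# *inside* each block.
function blockwise_permutedims(blocks::AbstractArray{<:AbstractArray}, perm)
    outer = permutedims(blocks, perm)             # move blocks to permuted positions
    return map(b -> permutedims(b, perm), outer)  # permute within each block
end
```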
Issue: surprising display error in a very specific context. The error is different from the previous display errors mentioned here.

g0 = gradedrange([U1(0) => 1])
g1 = dual(gradedrange([U1(-2) => 1, U1(-1) => 2, U1(0) => 1]))
g2 = dual(gradedrange([U1(-2) => 2, U1(-1) => 2, U1(0) => 1]))
bsa1 = BlockSparseArray{Float64}(g0, g1)
@show bsa1 # Ok
@show bsa1, 1 # Ok
bsa2 = BlockSparseArray{Float64}(g0, g2)
@show bsa2 # Ok
@show bsa2, 1 # BoundsError

(bsa2, 1) = ([0.0 0.0 0.0 0.0 0.0], 1)
([0.0 0.0 … 0.0

Error showing value of type Tuple{BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, NDTensors.GradedAxes.UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}}, Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, NDTensors.GradedAxes.UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Int64}:
ERROR: BoundsError: attempt to access 2-blocked 2-element BlockArrays.BlockedUnitRange{Int64, Vector{Int64}} at index [5]
Stacktrace:
[1] throw_boundserror(A::BlockArrays.BlockedUnitRange{Int64, Vector{Int64}}, I::Int64)
@ Base ./abstractarray.jl:737
[2] getindex
@ ./range.jl:948 [inlined]
[3] getindex(a::BlockArrays.BlockedUnitRange{NDTensors.LabelledNumbers.LabelledInteger{…}, Vector{…}}, index::Int64)
@ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:269
[4] iterate
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:221 [inlined]
[5] iterate
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:95 [inlined]
[6] _show_nonempty(io::IOContext{…}, X::AbstractMatrix, prefix::String, drop_brackets::Bool, axs::Tuple{…})
@ Base ./arrayshow.jl:447
[7] _show_nonempty(io::IOContext{…}, X::BlockSparseArray{…}, prefix::String)
@ Base ./arrayshow.jl:413
[8] show
@ ./arrayshow.jl:491 [inlined]
[9] show_delim_array(io::IOContext{…}, itr::Tuple{…}, op::Char, delim::Char, cl::Char, delim_one::Bool, i1::Int64, n::Int64)
@ Base ./show.jl:1378
[10] show_delim_array
@ ./show.jl:1363 [inlined]
[11] show
@ ./show.jl:1396 [inlined]
[12] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, x::Tuple{BlockSparseArray{…}, Int64})
@ Base.Multimedia ./multimedia.jl:47
[13] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
[14] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[15] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
[16] display
@ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
[17] display(x::Any)
@ Base.Multimedia ./multimedia.jl:340
[18] #invokelatest#2
@ ./essentials.jl:892 [inlined]
[19] invokelatest
@ ./essentials.jl:889 [inlined]
[20] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
[21] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
[22] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
[23] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
[24] (::REPL.var"#do_respond#80"{…})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
[25] (::REPL.var"#98#108"{…})(::REPL.LineEdit.MIState, ::Any, ::Vararg{…})
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1248
[26] #invokelatest#2
@ ./essentials.jl:892 [inlined]
[27] invokelatest
@ ./essentials.jl:889 [inlined]
[28] (::REPL.LineEdit.var"#27#28"{REPL.var"#98#108"{…}, String})(s::Any, p::Any)
@ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:1612
[29] prompt!(term::REPL.Terminals.TextTerminal, prompt::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
@ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2749
[30] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
@ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2651
[31] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
[32] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
@ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

EDIT: fixed by ITensor/ITensors.jl#1531
Issue: cannot create a zero-dimensional BlockSparseArray.

zerodim = BlockSparseArray{Float64}(())

ERROR: MethodError: (BlockSparseArray{Float64, N, A, Blocks} where {N, A<:AbstractArray{Float64, N}, Blocks<:AbstractArray{A, N}})(::Tuple{}) is ambiguous.
Candidates:
(BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(dims::Tuple{Vararg{Vector{Int64}}}) where T
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearray/blocksparsearray.jl:61
(BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(axes::Tuple{Vararg{AbstractUnitRange}}) where T
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearray/blocksparsearray.jl:65
Possible fix, define
(BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(::Tuple{}) where T
Stacktrace:
[1] top-level scope
@ REPL[75]:1

Edit: FIXED
@ogauthe are all of the open issues you've listed in recent comments also listed in the first post?
A few were missing; I updated the first post and added links.
I don't think that this should write to `a`, since that is not how it works for regular Julia arrays:

julia> a = randn(4, 4)
4×4 Matrix{Float64}:
0.461072 0.8415 -0.25594 -0.0362716
1.64976 0.325521 -0.174059 -1.27251
0.676818 0.705131 0.909353 -0.295874
-0.159376 0.27667 0.949735 0.135925
julia> a[2:4, 2:4][1:2, 1:2] = zeros(2, 2)
2×2 Matrix{Float64}:
0.0 0.0
0.0 0.0
julia> a
4×4 Matrix{Float64}:
0.461072 0.8415 -0.25594 -0.0362716
1.64976 0.325521 -0.174059 -1.27251
0.676818 0.705131 0.909353 -0.295874
-0.159376 0.27667 0.949735 0.135925

You could use a view, or use …
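For instance, a sketch of the view-based options in plain base Julia, where the writes do propagate to `a`:

```julia
a = randn(4, 4)

# Option 1: an explicit view writes through to the parent array.
v = view(a, 2:4, 2:4)
v[1:2, 1:2] = zeros(2, 2)

# Option 2: @views turns the chained indexing into views, so the
# assignment also lands in `a`.
@views a[2:4, 2:4][1:2, 1:2] = zeros(2, 2)
```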
Many axes-related errors were fixed by ITensor/ITensors.jl#1531. I updated the first comment.
Feature request: block sizes are not checked in the constructor from a Dictionary of blocks.

rg = blockedrange([2, 3])
cg = blockedrange([1])
m1 = ones((2, 1))
m2 = ones((1, 1)) # too small
mdic = Dictionary{Block{2,Int64},Matrix{Float64}}()
set!(mdic, Block(1, 1), m1)
set!(mdic, Block(2, 1), m2)
m = BlockSparseArray(mdic, (rg, cg))
copy(m) # example; many operations will fail

ERROR: BoundsError: attempt to access 1×1 Matrix{Float64} at index [1:3, 1:1]
Stacktrace:
[1] throw_boundserror(A::Matrix{Float64}, I::Tuple{UnitRange{Int64}, UnitRange{Int64}})
@ Base ./abstractarray.jl:737
[2] checkbounds
@ ./abstractarray.jl:702 [inlined]
[3] view
@ ./subarray.jl:184 [inlined]
[4] #73
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
[5] ntuple
@ ./ntuple.jl:19 [inlined]
[6] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::Function, a_dest::BlockSparseArray{…}, a_srcs::BlockSparseArray{…})
@ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
[7] sparse_map!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
[8] sparse_copyto!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8 [inlined]
[9] copyto!
@ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:116 [inlined]
[10] copymutable
@ ./abstractarray.jl:1201 [inlined]
[11] copy(a::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
@ Base ./abstractarray.jl:1144
[12] top-level scope
@ REPL[293]:1
Some type information was truncated. Use `show(err)` to see complete types.
Related to that, another thing that would be nice would be to automatically determine the blocked axes from the blocks that are passed. However, if not enough blocks are passed there may not be enough information to determine all of the sizes.
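A rough sketch of what that inference could look like, assuming every block row and block column contains at least one stored block; `infer_axes` is a hypothetical helper, not part of the package:

```julia
using BlockArrays: Block, blockedrange
using Dictionaries: Dictionary

# Hypothetical: infer row/column block lengths from a dictionary of blocks.
# Sizes are assumed consistent; conflicting blocks are not detected here.
function infer_axes(blocks::Dictionary{Block{2,Int},Matrix{Float64}})
    nrows = maximum(b -> b.n[1], keys(blocks))
    ncols = maximum(b -> b.n[2], keys(blocks))
    rowlens = zeros(Int, nrows)
    collens = zeros(Int, ncols)
    for (b, m) in pairs(blocks)
        rowlens[b.n[1]] = size(m, 1)
        collens[b.n[2]] = size(m, 2)
    end
    any(iszero, rowlens) && error("not enough blocks to infer row sizes")
    any(iszero, collens) && error("not enough blocks to infer column sizes")
    return blockedrange(rowlens), blockedrange(collens)
end
```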
Issue:

g0 = blockedrange([2])
bsa0 = BlockSparseArray{Float64}(g0, g0)
bsa0[Block(1,1)[1:2,1:2]] = ones((2,2)) # Ok
g = gradedrange([TrivialSector()=>2])
bsa = BlockSparseArray{Float64}(g, g)
bsa[Block(1,1)] = ones((2,2)) # Ok
@show bsa[Block(1,1)[1:2,1:2]] # Ok
bsa[Block(1,1)[1:2,1:2]] = zeros((2,2)) # MethodError

ERROR: MethodError: no method matching LabelledNumbers.LabelledInteger{Int64, TrivialSector}(::Int64)
The type `LabelledNumbers.LabelledInteger{Int64, TrivialSector}` exists, but no method is defined for this combination of argument types when trying to construct it.
Closest candidates are:
(::Type{LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
@ LabelledNumbers ~/.julia/packages/LabelledNumbers/Pn1xf/src/labelledinteger.jl:2
(::Type{T})(::T) where T<:Number
@ Core boot.jl:900
(::Type{T})(::BigFloat) where T<:Integer
@ Base mpfr.jl:403
...
Stacktrace:
[1] convert(::Type{LabelledNumbers.LabelledInteger{Int64, TrivialSector}}, x::Int64)
@ Base ./number.jl:7
[2] iterate
@ ./range.jl:909 [inlined]
[3] macro expansion
@ ./cartesian.jl:66 [inlined]
[4] _unsafe_setindex!
@ ./multidimensional.jl:979 [inlined]
[5] _setindex!
@ ./multidimensional.jl:967 [inlined]
[6] setindex!(A::BlockSparseArray{…}, v::Matrix{…}, I::BlockArrays.BlockIndexRange{…})
@ Base ./abstractarray.jl:1413
[7] top-level scope
@ REPL[105]:1
Some type information was truncated. Use `show(err)` to see complete types.

The error is detected in … Edit: moved to #9
@ogauthe can you start a new issue? This issue list should get split into separate issues now that it is a separate repository.
This issue lists functionalities and feature requests for `BlockSparseArray`.

Bugs

- `LinearAlgebra.Adjoint{T,<:BlockSparseArray{T}}` returns a `BlockedArray` in certain slicing operations ([BlockSparseArrays] BlockSparseArray functionality #2).
- `cat(::BlockSparseArray, ::BlockSparseArray)` for dual axes (followup to [BlockSparseArrays] Direct sum/`cat` ITensors.jl#1579).

Feature requests

- A constructor like `BlockSparseArray{Float64}([U1(0) => 2, U1(1) => 3], [U1(0) => 2, U1(1) => 3])`, which could implicitly create axes with `GradedUnitRange` internally.
- Use `NestedPermutedDimsArray` instead of `SparsePermutedDimsArrayBlocks` (similar to how we are removing `SparseAdjointBlocks`/`SparseTransposeBlocks` in [BlockSparseArrays] Simplifications of `blocks` for block sparse `Adjoint` and `Transpose` ITensors.jl#1580). Started in [NDTensors] Introduce `NestedPermutedDimsArrays` submodule ITensors.jl#1589 and [SparseArrayInterface] `NestedPermutedDimsArray` support ITensors.jl#1590.
- An alternative to the `NestedPermutedDimsArrays` submodule of ITensors.jl#1589 would be to redefine `NestedPermutedDimsArray` as a `PermutedDimsArray` wrapping a `MappedArray`, where the map and inverse map convert to `PermutedDimsArray`. That would be good to explore so we don't have to support all of the `NestedPermutedDimsArrays` code, which is mostly just a copy of `Base.PermutedDimsArrays` anyway.
- For `b = @view a[Block.(1:2), [Block(2), Block(1)]]`, define `blocks(b)` as `@view blocks(a)[1:2, [2, 1]]`, as opposed to using the more general `SparseSubArrayBlocks` in those cases. Like the new `NestedPermutedDimsArray`, in principle `SparseSubArrayBlocks` could be replaced by a `NestedSubArray` type that defines the slicing behavior of the array storing the blocks and also the slicing of the blocks themselves, but that might be overkill and the concept is very particular to block arrays. But maybe a `SubArray` of the `blocks` could still be used to simplify the code logic in `SparseSubArrayBlocks`.
- The constructor from a `Dictionary` should check block sizes ([BlockSparseArrays] BlockSparseArray functionality #2).
- Blockwise matrix factorizations: `svd`, `qr`, etc. See [BlockSparseArrays] Blockwise matrix factorizations #3. These are well defined if the block sparse matrix has a block structure (i.e. the sparsity pattern of the sparse array of arrays `blocks(a)`) corresponding to a generalized permutation matrix. Probably they should be called something like `block_svd`, `block_eigen`, `block_qr`, etc. to distinguish that they are meant to be used on block sparse matrices with those structures (and error if they don't have that structure). See [1] for a prototype of a blockwise QR. See also BlockDiagonals.jl for an example in Julia of blockwise factorizations; they use a naming scheme `svd_blockwise`, `eigen_blockwise`, etc. The slicing operation introduced in [BlockSparseArrays] Sub-slices of multiple blocks ITensors.jl#1489 will be useful for performing block-wise truncated factorizations.
- `Strided.@strided` for `view(::BlockSparseArray, ::Block)`. As a workaround it will work if you use `view!`/`@view!` introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks ITensors.jl#1498. (EDIT: We will have to decide what this should do; maybe it takes a strided view of the block data if the block exists, but otherwise errors if the block doesn't exist.)
- Change non-blocked slicing (such as `a[1:2, 1:2]`) to output non-blocked arrays, and define `@blocked a[1:2, 1:2]` to explicitly preserve blocking. See the discussion in Functionality for slicing with unit ranges that preserves block information JuliaArrays/BlockArrays.jl#347.
- Represent dual axes with a `SectorDual` type and/or as a boolean flag.

Fixed

- Rename `SparseArrayInterface` to `SparseArraysBase` and move `SparseArrayDOKs` into the new `SparseArraysBase`. (Fixed in [SparseArraysBase] Rename `SparseArrayInterface` to `SparseArraysBase` ITensors.jl#1591 and [SparseArraysBase] Absorb `SparseArrayDOKs` ITensors.jl#1592.)
- `BlockSparseArray{Float64,2,Matrix{Float64}}([2, 3], [2, 3])`, `BlockSparseArray{Float64,2}([2, 3], [2, 3])`, and `BlockSparseMatrix{Float64}([2, 3], [2, 3])` are not defined. (Fixed by [BlockSparseArrays] Define more constructors ITensors.jl#1586.)
- Rename `block_nstored` to `block_stored_length` and `nstored` to `stored_length`. (Fixed by [SparseArrayInterface] [BlockSparseArrays] Rename nstored to stored_length ITensors.jl#1585.)
- Rename `BlockSparseArrayLike` to `AnyAbstractBlockSparseArray`, which is the naming convention used in other Julia packages for a similar concept [2].
- `Base.cat` and related functions. (Implemented in [BlockSparseArrays] Direct sum/`cat` ITensors.jl#1579.)
- `@views a[[Block(2), Block(1)], [Block(2), Block(1)]][2:4, 2:4]` in Julia 1.11 (see tests marked as broken in [NDTensors] [ITensors] Update tests to use Julia version 1.10 and 1.11 ITensors.jl#1539). (Fixed in [BlockSparseArrays] Fix nested slicing in Julia 1.11 ITensors.jl#1575.)
- Blocks with types besides `Array`, for example `DiagonalArrays.DiagonalArray`, `SparseArrayDOKs.SparseArrayDOK`, `LinearAlgebra.Diagonal`, etc. A `BlockSparseArray` can have blocks that are `AbstractArray` subtypes, however some operations don't preserve those types properly (i.e. implicitly convert to `Array` blocks) or don't work. (Partially addressed in [BlockSparseArrays] Initial support for more general blocks, such as GPU blocks ITensors.jl#1560, but more work is needed; we can track issues individually from now on.)
- Zero-dimensional construction (`BlockSparseArray{Float64}()` fails). (Fixed in [BlockSparseArrays] Zero dimensional block sparse array and some fixes for Adjoint and PermutedDimsArray ITensors.jl#1574.)
- `permutedims` crashes for some block sparse arrays ([BlockSparseArrays] BlockSparseArray functionality #2). (Fixed in [BlockSparseArrays] Zero dimensional block sparse array and some fixes for Adjoint and PermutedDimsArray ITensors.jl#1574.)
- `copy(adjoint)` does not preserve dual axes. (Fixed in [BlockSparseArrays] Zero dimensional block sparse array and some fixes for Adjoint and PermutedDimsArray ITensors.jl#1574.)
- `block_stored_indices(::LinearAlgebra.Adjoint{T,BlockSparseArray})` does not transpose its indices. (Matt: I don't see an issue here; the values of `block_stored_indices(a')` are the nonzero/stored block locations, and the `keys` of `block_stored_indices(a')` are an implementation detail and should not be used.)
- `LinearAlgebra.norm(a)` crashes when `a` contains `NaN`.
- Display of a `BlockSparseArray` that has `GradedUnitRange` axes (as opposed to `GradedOneTo`) fails.
- `a[:, :]` creates an array with ill-behaved axes (it should just be equivalent to `copy(a)`). Also triggers a display error.
- Display of a `BlockSparseArray` triggers a `BoundsError` in some rare contexts ([BlockSparseArrays] BlockSparseArray functionality #2).
- `a = BlockSparseArray{Float64}([2, 3], [2, 3]); @view a[Block(1, 1)]` returns a `SubArray` where the last type parameter, which marks whether or not the slice supports faster linear indexing, is `false`, while it should be `true` if that is the case for that block of `a`. (This is addressed by [BlockSparseArrays] Redesign block views again ITensors.jl#1513; `@view a[Block(1, 1)]` no longer outputs a `SubArray`, but rather either the block data directly or a `BlockView` object if the block doesn't exist yet.)
- `TensorAlgebra.contract` fails when called with `view(::BlockSparseArray, ::Block)` or `reshape(view(::BlockSparseArray, ::Block), ...)`. As a workaround it will work if you use `view!`/`@view!` introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks ITensors.jl#1498.
- `a = BlockSparseArray{Float64}([2, 3], [2, 3]); b = @view a[Block.(1:2), Block.(1:2)]; b[Block(1, 1)] = randn(2, 2)` doesn't set the block `Block(1, 1)` (it remains uninitialized, i.e. structurally zero). I think the issue is that `@view b[Block(1, 1)]` makes two layers of `SubArray` wrappers instead of flattening down to a single layer, and those two layers are not being dispatched on properly (in general we only catch if something is a `BlockSparseArray` or a `BlockSparseArray` wrapped in a single wrapper layer).
- Update to `BlockArrays` v1.1; see CI for [SymmetrySectors] Non-abelian fusion ITensors.jl#1363. (Fixed by [BlockSparseArrays] Update to BlockArrays v1.1, fix some issues with nested views ITensors.jl#1503.)
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); size(view(a, Block(1,1))[1:1,1:1])` returns a tuple of `LabelledInteger` instead of `Int` (see discussion; keep it that way at least for now).
- `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(dual(r), r); @view(a[Block(1, 1)])[1:1, 1:1]` and other combinations of `dual` lead to method ambiguity errors.
- Slicing with `Vector{<:BlockIndexRange{1}}` (JuliaArrays/BlockArrays.jl#358).
- `dual` is not preserved when adding/subtracting `BlockSparseArray`s: for `g = gradedrange([U1(0) => 1]); m = BlockSparseArray{Float64}(dual(g), g)`, `isdual(axes(m + m, 1))` should be `true` but is `false`.
- Printing `r = gradedrange([U1(0) => 1]); a = BlockSparseArray{Float64}(r, r); @view a[Block(1, 1)]`.
- Slicing like `a[2:4, 2:4]`, by using `BlockArrays.BlockSlice`.
- Assigning a block with `a[Block(2), Block(2)] = randn(3, 3)`.
- Broadcasting into a block with `a[Block(2, 2)] .= 1`.
- Assigning into a block view with `@view(a[Block(1, 1)])[1:1, 1:1] = 1`.
- Raising an error for `a[Block(1, 1)] = b` if `size(a[Block(1, 1)]) != size(b)`.
- Matrix multiplication of `BlockSparseMatrix` involving dual axes.
- Multiplying a `BlockSparseMatrix` with its adjoint, i.e. `a' * a` and `a * a'`, with and without dual axes.
- `adjoint(::BlockSparseMatrix)`. Can be implemented by overloading `axes(::Adjoint{<:Any,<:AbstractBlockSparseMatrix})`.
- `show(::Adjoint{<:Any,<:BlockSparseMatrix})` and `show(::Transpose{<:Any,<:BlockSparseMatrix})` are broken.
- `eachindex(::BlockSparseArray)` involving dual axes.
- Adjoint of a `BlockSparseMatrix`, i.e. `a'` (in progress in [4]).
- `Base.similar(a::BlockSparseArray, eltype::Type)` and `Base.similar(a::BlockSparseArray, eltype::Type, size::NTuple{N,AbstractUnitRange})` do not set the `eltype`.
- Make `copy(::BlockSparseArray)` copy the blocks.
- Slicing like `a[1:2, 1:2]` is not implemented yet and needs to be implemented (in progress in [5]).
- Use `stored_indices(blocks(a))` to get a list of the `Block`s corresponding to initialized/stored blocks. Ideally there would be shorthands for this like `block_stored_indices(a)` (in progress in [5]).
- Use `nstored(blocks(a))` to get the number of initialized/stored blocks. Ideally there would be a shorthand for this like `block_nstored(a)` (in progress in [5]).
- In-place operations `.*=` and `./=`, such as `a .*= 2`, are broken (in progress in [1]).
- `Base.:*(::BlockSparseArray, x::Number)` and `Base.:/(::BlockSparseArray, x::Number)` are not defined.
- `Base.:*(::ComplexF64, ::BlockSparseArray{Float64})` does not change the data type for an empty array and crashes if `a` contains data.

Footnotes

1. https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl
2. https://github.com/JuliaGPU/GPUArrays.jl/blob/v11.1.0/lib/GPUArraysCore/src/GPUArraysCore.jl#L27, https://github.com/JuliaGPU/CUDA.jl/blob/v5.4.2/src/array.jl#L396
3. https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255
4. [BlockSparseArrays] Fix adjoint and transpose ITensors.jl#1470
5. [BlockSparseArrays] More general broadcasting and slicing ITensors.jl#1332