
[BlockSparseArrays] BlockSparseArray functionality #2

Open
49 of 61 tasks
ogauthe opened this issue Feb 16, 2024 · 72 comments
Labels
enhancement New feature or request

Comments

@ogauthe
Contributor

ogauthe commented Feb 16, 2024

This issue lists functionalities and feature requests for BlockSparseArray.

using LinearAlgebra
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3]);

Bugs

Feature requests

Fixed

Footnotes

  1. https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl

  2. https://github.com/JuliaGPU/GPUArrays.jl/blob/v11.1.0/lib/GPUArraysCore/src/GPUArraysCore.jl#L27, https://github.com/JuliaGPU/CUDA.jl/blob/v5.4.2/src/array.jl#L396

  3. https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255

  4. [BlockSparseArrays] Fix adjoint and transpose ITensors.jl#1470

  5. [BlockSparseArrays] More general broadcasting and slicing ITensors.jl#1332

@ogauthe ogauthe added the enhancement New feature or request label Feb 16, 2024
@mtfishman
Member

mtfishman commented Feb 16, 2024

Thanks. This currently works:

julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks

julia> using LinearAlgebra: I

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  1.0  0.0
 0.0  0.0  │  0.0  1.0

julia> using NDTensors.SparseArrayInterface: stored_indices

julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
 CartesianIndex(2, 2) │ CartesianIndex(2, 2)

though using this alternative syntax is currently broken:

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
 [1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
   @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
 [2] top-level scope
   @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

I would have to think about whether it makes sense to support a[Block(2, 2)] .= 1.0 or a[Block(2), Block(2)] .= 1.0 when Block(2, 2) isn't initialized yet (we probably should support that).
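Until that design question is settled, one workaround is to materialize the block with an explicit assignment before mutating it. This is only a sketch based on the constructor and Block syntax shown in the example above, not a confirmed recommendation:

```julia
using NDTensors.BlockSparseArrays: Block, BlockSparseArray

a = BlockSparseArray{Float64}([2, 2], [2, 2])

# Assigning a correctly sized dense array initializes the block storage;
# after that, broadcasting into the block has something to write into.
a[Block(2, 2)] = zeros(2, 2)
a[Block(2, 2)] .= 1.0
```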

@mtfishman
Member

mtfishman commented Feb 16, 2024

In terms of svd and qr of a BlockSparseMatrix, performing them block-wise (i.e. mapping the problem to factorizing the individual non-zero blocks) is well defined as long as there is only one block per row or column, i.e. the matrix is diagonal up to block permutations of the rows and columns. So I was thinking we would have specialized block_svd and block_qr functions that check whether the matrix has that structure and perform svd or qr block-wise, and otherwise raise an error (they can check whether blocks(a) is a generalized permutation matrix).

I have a prototype of a QR decomposition of a BlockSparseMatrix here: https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl but it may be outdated since it was written against an earlier version of BlockSparseArrays.
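The structure check described above could look something like the following sketch. It assumes stored_indices(blocks(a)) returns the CartesianIndex of each stored block, as in the earlier example in this thread; the helper name is hypothetical:

```julia
using NDTensors.BlockSparseArrays: blocks
using NDTensors.SparseArrayInterface: stored_indices

# blocks(a) is a generalized permutation matrix iff each block row and
# each block column holds at most one stored (non-zero) block; only then
# can block_svd / block_qr factorize each stored block independently.
function is_block_permutation_matrix(a)
  inds = collect(stored_indices(blocks(a)))
  allunique(I[1] for I in inds) && allunique(I[2] for I in inds)
end
```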

@mtfishman mtfishman changed the title [NDTensors] [ENHANCEMENT] BlockSparseArray functionalities [BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities Feb 16, 2024
@mtfishman
Member

mtfishman commented Feb 18, 2024

Also note that slicing like this should work right now:

a[Block(1, 1)[1:2, 1:2]]

i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.

@ogauthe
Contributor Author

ogauthe commented Feb 26, 2024

new feature request: Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number)

I updated the first comment.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Feb 26, 2024

new issue: 1im * a does not change the element type of a when a is empty, and it crashes when a contains data.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Mar 1, 2024

new issue: similar(a::BlockSparseArray, eltype::Type) does not set the new eltype.
The behavior of similar(a::BlockSparseArray, eltype::Type, size) depends on the type of size.

  • For size::NTuple{N,Int}, it returns a dense Julia Array with the correct eltype.
  • For size::NTuple{N,AbstractUnitRange}, it returns a BlockSparseArray{Float64} and the eltype is not correct.

edit: FIXED

@mtfishman
Member

@ogauthe a number of these issues were fixed by ITensor/ITensors.jl#1332, and I've updated the list in the first post accordingly. I added regression tests in ITensor/ITensors.jl#1360 for the ones that still need to be fixed, and additionally added placeholder tests marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with broken behavior marked with @test_broken or with bug fixes and regression tests. I think the reorganization I did in ITensor/ITensors.jl#1332 will make these kinds of issues easier to fix going forward, though I have to admit it may be a bit tough to jump into that code right now: there are many abstraction layers involved, because the goal is for that code to be as minimal and general as possible.

@ogauthe
Contributor Author

ogauthe commented Apr 24, 2024

Feature request: display(::Adjoint). The function Base.adjoint is well defined and returns an Adjoint type, but display then throws an error.
Closely related, it is not clear to me how GradedAxes.dual and BlockSparseArray should work together. The current behavior is to accept only a GradedUnitRange as an axis and to throw for a UnitRangeDual input. Base.adjoint does not dual the axes. I do not have an opinion yet on whether this should be modified.

Edit: FIXED

@mtfishman
Member

mtfishman commented Apr 24, 2024

I think ideally BlockSparseArray would accept any AbstractUnitRange.

Alternatively, BlockArrays.jl introduced an AbstractBlockedUnitRange in JuliaArrays/BlockArrays.jl#348, we could define a BlockedUnitRangeDual <: AbstractBlockedUnitRange type which wraps another AbstractBlockedUnitRange. I think it will require some experimentation to see which one leads to simpler code.

Good question about whether or not the axes should get dualed if Base.adjoint(::BlockSparseMatrix) is called. Probably we should dual the axes. For example, if we want operations like a' * a to work for a::BlockSparseMatrix and GradedUnitRange axes, I think we need to dual the axes.

@ogauthe
Contributor Author

ogauthe commented May 2, 2024

The solution to accept any AbstractUnitRange is to replace the definition of Axes with

  Axes<:Tuple{Vararg{<:AbstractUnitRange,N}},

Then one can construct a BlockSparseArray with a dual axis. For a reason I do not really understand, the resulting array can be printed but not displayed. The error is not the same as the one raised when displaying a transpose or adjoint.

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1,)

outputs

1×1-blocked 1×1 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
Error showing value of type BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{T})(::BigFloat) where T<:Integer
   @ Base mpfr.jl:378
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{Base.TTY}, X::AbstractVecOrMat, rows::Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, cols::Vector{Int64}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, X::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{Nothing, AbstractDisplay})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{Bool, Bool, REPL.var"#93#103"{REPL.LineEditREPL, REPL.REPLHistoryProvider}, REPL.LineEditREPL, REPL.LineEdit.Prompt})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@mtfishman
Member

Thanks for investigating. That seems like the right move to generalize the axes in that way. Hopefully that error is easy enough to circumvent.

@ogauthe
Contributor Author

ogauthe commented May 2, 2024

I am continuing to explore the effect of dual. eachindex is broken on such an array, which also affects ==.

julia> eachindex(m1)
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
 [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:1313
 [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:265
 [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [4] map
   @ ./tuple.jl:292 [inlined]
 [5] CartesianIndices(inds::Tuple{NDTensors.GradedAxes.UnitRangeDual{…}, BlockArrays.BlockedUnitRange{…}})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
   @ Base.IteratorsMD ./multidimensional.jl:392
 [7] eachindex(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
   @ Base ./abstractarray.jl:378
 [8] top-level scope
   @ REPL[37]:1
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@mtfishman mtfishman changed the title [BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities [BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities May 10, 2024
@ogauthe
Contributor Author

ogauthe commented May 10, 2024

issue: I cannot write a slice of a block

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))

does not write to a. This is probably related to "Fix initializing a block with broadcasting syntax".

EDIT: this is consistent with the Julia slicing convention; nothing to fix.
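For reference, two forms that do mutate a, sketched from the block-slice notation mentioned earlier in this thread (behavior not re-verified here):

```julia
using BlockArrays: Block

# a[Block(1, 1)][1:2, 1:2] = ... mutates a copy of the block, exactly as
# b = a[1:2, 1:2]; b .= 1 would for a plain Array. To write through:
a[Block(1, 1)[1:2, 1:2]] = ones(2, 2)       # index with a block slice
@view(a[Block(1, 1)])[1:2, 1:2] .= 1.0      # or broadcast into a view
```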

@mtfishman mtfishman changed the title [BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities [BlockSparseArrays] BlockSparseArray functionality May 10, 2024
@ogauthe
Contributor Author

ogauthe commented May 11, 2024

issue: LinearAlgebra.norm(a) triggers AssertionError when a contains NaN

a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a))  # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)]))  # NaN
println(LinearAlgebra.norm(a))  # AssertionError

outputs

ERROR: AssertionError: op(output, (eltype(output))(f_notstored)) == output
Stacktrace:
  [1] #sparse_mapreduce#16
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:118 [inlined]
  [2] sparse_mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:115 [inlined]
  [3] #mapreduce#29
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:82 [inlined]
  [4] mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] generic_normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:453 [inlined]
  [6] normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:527 [inlined]
  [7] generic_norm2(x::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:463
  [8] norm2
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:529 [inlined]
  [9] norm(itr::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, p::Int64)
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:598
 [10] _norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:297 [inlined]
 [11] norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298 [inlined]
 [12] norm(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298
 [13] top-level scope
    @ REPL[1220]:1
Some type information was truncated. Use `show(err)` to see complete types.

I checked that the same code with Inf in place of NaN gives the correct result.
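For context (not from the stacktrace itself, just IEEE semantics in base Julia): the assertion compares with ==, and NaN compares unequal to everything, including itself, so the check can never hold once the reduction output is NaN:

```julia
# The assertion op(output, (eltype(output))(f_notstored)) == output can
# never hold once output is NaN, since NaN == NaN is false:
@show NaN == NaN          # false
@show 2.0 + 0.0 == 2.0    # true: the check is fine for ordinary values

# isequal treats NaN as equal to itself, so an isequal-based assertion
# would not reject NaN-containing reductions:
@show isequal(NaN, NaN)   # true
```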

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented May 17, 2024

issue: a block can be written with an invalid shape. An error should be raised.

a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a))  # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b]))  # (2,2)
a[b] = ones((3,3))
println(size(a))  # (5,5)
println(size(a[b]))  # (3,3)

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented May 28, 2024

Thanks to ITensor/ITensors.jl#1467, I can now initialize a BlockSparseArray with a dual axis. However, contracting such arrays is broken:

using NDTensors.GradedAxes: GradedAxes, dual, gradedrange
using NDTensors.Sectors: U1
using NDTensors.BlockSparseArrays: BlockSparseArray

g1 = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
g2 = gradedrange([U1(0) => 2, U1(1) => 2, U1(3) => 1])

m1 = BlockSparseArray{Float64}(g1, GradedAxes.dual(g2));  # display crash
m2 = BlockSparseArray{Float64}(g2, GradedAxes.dual(g1));  # display crash

m12 = m1 * m2;  # MethodError
m21 = m2 * m1;  # MethodError
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
  [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:1313
  [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:265
  [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [4] map
    @ ./tuple.jl:292 [inlined]
  [5] CartesianIndices(inds::Tuple{BlockArrays.BlockedUnitRange{…}, NDTensors.GradedAxes.UnitRangeDual{…}})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
    @ Base.IteratorsMD ./multidimensional.jl:392
  [7] eachindex
    @ ./abstractarray.jl:378 [inlined]
  [8] fill!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, x::Float64)
    @ Base ./multidimensional.jl:1113
  [9] zero!(::BlockArrays.BlockLayout{…}, A::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:289
 [10] zero!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:288
 [11] _fill_copyto!(dest::BlockSparseArray{…}, C::FillArrays.Zeros{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:80
 [12] copyto!
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [13] copy(M::ArrayLayouts.MulAdd{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [14] copy
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [15] materialize
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [16] mul
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [17] *(A::BlockSparseArray{…}, B::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:213
 [18] top-level scope
    @ REPL[13]:1
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented May 28, 2024

When no dual axis is involved, m' * m and m * m' trigger StackOverflowError

Stacktrace:
     [1] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
     [2] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:134
     [3] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:271
     [4] materialize!(m::ArrayLayouts.MulAdd{…})
       @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/arraylayouts.jl:14
--- the last 4 lines are repeated 10900 more times ---
 [43605] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
 [43606] copyto!
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [43607] copy(M::ArrayLayouts.MulAdd{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [43608] copy
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [43609] materialize
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [43610] mul
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [43611] *(A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:290
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Jun 4, 2024

issue: display error when writing a block

using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.Sectors: U1

g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m[BlockArrays.Block(1,1)] .= 1
1×1 view(::NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, BlockSlice(Block(1),1:1), BlockSlice(Block(1),1:1)) with eltype Float64 with indices NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0])×NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):
Error showing value of type SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{…}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{…}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::SubArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::SubArray{…})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

This looks like the same error as previously triggered by dual axes.

Edit: FIXED

@mtfishman
Member

Thanks for the report, looks like it is more generally a problem printing views of blocks of BlockSparseArray with GradedUnitRange axes:

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1
r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
@view a[Block(1, 1)]

@mtfishman
Member

mtfishman commented Jun 4, 2024

As with other related issues, this kind of thing will get fixed when I rewrite GradedAxes based on BlockArrays.jl v1. As a workaround for now, I could just strip the sector labels from the GradedUnitRange when printing. Also, if you need to print the block, you can copy it: copy(@view(a[Block(1, 1)])) or a[Block(1, 1)] both seem to print fine.
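Concretely, the printing workaround could look like this (a sketch using the same setup as the snippet above):

```julia
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1

r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
# Materialize the block instead of displaying the view directly;
# the copy has plain dense storage and prints without error.
display(copy(@view(a[Block(1, 1)])))
```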

@ogauthe
Contributor Author

ogauthe commented Jun 4, 2024

There is no real need for a quick fix just for display. It can wait for a rewrite.

@ogauthe
Contributor Author

ogauthe commented Sep 11, 2024

issue: display error when dual is involved
main at 5d4111e3f0d720b17b58de137f633225094fb379

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g1, g1)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(g1, dual(g1))
display(m1[:,:])  # Ok
display(m2)  # Ok
display(m2[:,:])  # MethodError
ERROR: MethodError: no method matching LabelledInteger{Int64, U1}(::Int64)

Closest candidates are:
  (::Type{LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{LabelledInteger{Int64, U1}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{Int64, LabelledInteger{Int64, U1}}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{Int64, LabelledInteger{Int64, U1}}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ Base ./arrayshow.jl:399
 [12] #blocksparse_show#11
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:120 [inlined]
 [13] blocksparse_show
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:112 [inlined]
 [14] #show#12
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:130 [inlined]
 [15] show(io::IOContext{…}, mime::MIME{…}, a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.BlockSparseArrays.BlockSparseArraysGradedAxesExt ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:127
 [16] (::OhMyREPL.var"#15#16"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::IOContext{Base.TTY})
    @ OhMyREPL ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:23
 [17] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [18] display
    @ ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:6 [inlined]
 [19] display
    @ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [20] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [21] top-level scope
    @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

This is the same error as in #2, in a different context. That previous case was fixed and does not error anymore.

This is another case that should be fixed by refactoring UnitRangeDual (I am currently working on it)

EDIT: fixed by ITensor/ITensors.jl#1531 (was due to missing axes(Base.Slice(<:UnitRangeDual)})

@ogauthe
Contributor Author

ogauthe commented Sep 12, 2024

I realize there are other issues with a[:,:]: the axes are cast to Base.IdentityUnitRange wrapping axes(a). This causes the display error above, but it also creates issues with blocklabels and blocklengths. So when calling a[:,:], the array data itself is correct, but the axes have yet another type that does not behave as expected.

Should we change the behavior of a[:,:] to preserve the axes or should we define blocklabels and blocklengths for Base.IdentityUnitRange{GradedUnitRange}? I currently favor fixing the axes types to avoid adding more code to support exotic axes types.

EDIT: fixed by ITensor/ITensors.jl#1531 (was due to missing axes(Base.Slice(<:UnitRangeDual)})

@mtfishman
Member

I think a[:, :] should always be exactly the same as copy(a), if I understand correctly, so if it doesn't act like that then we should change it to act like that.
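The intended invariant could be written as a test along these lines (a sketch; in the buggy state described above, it is the axes comparison that fails):

```julia
using Test
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1

r = gradedrange([U1(0) => 2, U1(1) => 2])
a = BlockSparseArray{Float64}(r, r)
b = a[:, :]
# a[:, :] should behave exactly like copy(a): same entries and same axes types.
@test b == copy(a)
@test typeof(axes(b)) == typeof(axes(a))
```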

@ogauthe
Contributor Author

ogauthe commented Sep 12, 2024

issue: it is still possible to create a GradedAxes.GradedUnitRange, a.k.a. BlockArrays.BlockedUnitRange{<:LabelledInteger}, by slicing a BlockedOneTo. Using such an axis for a BlockSparseArray creates many issues, e.g.:

r = gradedrange([U1(1) => 2, U1(2) => 2])[1:3]
a = BlockSparseArray{Float64}(r,r)
a[1:2,1:2]  # MethodError
ERROR: MethodError: no method matching to_blockindices(::BlockArrays.BlockedUnitRange{…}, ::UnitRange{…})

Closest candidates are:
  to_blockindices(::UnitRangeDual, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:54
  to_blockindices(::Base.OneTo, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:186
  to_blockindices(::BlockedOneTo, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:170

Stacktrace:
 [1] blocksparse_to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:32
 [2] to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:26
 [3] to_indices
   @ ./indices.jl:344 [inlined]
 [4] view
   @ ./subarray.jl:183 [inlined]
 [5] layout_getindex
   @ ~/.julia/packages/ArrayLayouts/31idh/src/ArrayLayouts.jl:138 [inlined]
 [6] getindex(::BlockSparseArray{…}, ::UnitRange{…}, ::UnitRange{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:92
 [7] top-level scope
   @ REPL[57]:1
Some type information was truncated. Use `show(err)` to see complete types.

main at 5d4111e3f0d720b17b58de137f633225094fb379

EDIT: fixed by ITensor/ITensors.jl#1531

@ogauthe
Contributor Author

ogauthe commented Sep 16, 2024

issue: copy(adjoint) does not preserve dual axes

r = gradedrange([U1(0) => 2, U1(1) => 2])
a = BlockSparseArray{Float64}(r, r)

@test isdual.(axes(a)) == (false, false)
@test isdual.(axes(adjoint(a))) == (true, true)
@test_broken isdual.(axes(copy(adjoint(a)))) == (true, true)

main at 5d4111e3f0d720b17b58de137f633225094fb379

EDIT: I got confused with adjoint, there is only an issue with copy here.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Oct 22, 2024

issue: permutedims crashes for some arrays

g1 = blockedrange([1, 1, 1])
g2 = blockedrange([1, 2, 3])
g3 = blockedrange([2, 2, 1])
g4 = blockedrange([1, 2, 1])
bsa = BlockSparseArray{Float64}(g1, g2, g3, g4);
bsa[Block(3, 2, 2, 3)] .= 1.0

bsat = permutedims(bsa, (2, 3, 4, 1))
ERROR: BoundsError: attempt to access 3×1×2×1 PermutedDimsArray(::Array{Float64, 4}, (2, 3, 4, 1)) with eltype Float64 at index [1:2, 1:2, 1:1, 1:1]
Stacktrace:
  [1] throw_boundserror(A::PermutedDimsArray{Float64, 4, (2, 3, 4, 1), (4, 1, 2, 3), Array{…}}, I::NTuple{4, UnitRange{…}})
    @ Base ./abstractarray.jl:737
  [2] checkbounds
    @ ./abstractarray.jl:702 [inlined]
  [3] view
    @ ./subarray.jl:184 [inlined]
  [4] (::NDTensors.BlockSparseArrays.var"#71#74"{Tuple{}, Tuple{}})(i::Int64)
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
  [5] ntuple
    @ ./ntuple.jl:19 [inlined]
  [6] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::Function, a_dest::BlockSparseArray{…}, a_srcs::PermutedDimsArray{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
  [7] sparse_map!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
  [8] sparse_copyto!(dest::BlockSparseArray{…}, src::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
  [9] sparse_permutedims!(dest::BlockSparseArray{…}, src::BlockSparseArray{…}, perm::NTuple{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
 [10] permutedims!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:135 [inlined]
 [11] permutedims(A::BlockSparseArray{…}, perm::NTuple{…})
    @ Base.PermutedDimsArrays ./permuteddimsarray.jl:145
 [12] top-level scope
    @ REPL[227]:1
Some type information was truncated. Use `show(err)` to see complete types.

I guess there is a mismatch between permuting the array structure and permuting the inner blocks.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Oct 22, 2024

issue: surprising display error in a very specific context. The error is different from previous display errors mentioned here.

g0 = gradedrange([U1(0) => 1])
g1 = dual(gradedrange([U1(-2) => 1, U1(-1) => 2, U1(0) => 1]))
g2 = dual(gradedrange([U1(-2) => 2, U1(-1) => 2, U1(0) => 1]))
bsa1 = BlockSparseArray{Float64}(g0, g1)
@show bsa1  # Ok
@show bsa1, 1   # Ok
bsa2 = BlockSparseArray{Float64}(g0, g2)
@show bsa2  # Ok
@show bsa2, 1   # BoundsError
(bsa2, 1) = ([0.0 0.0 0.0 0.0 0.0], 1)
([0.0 0.0 … 0.0Error showing value of type Tuple{BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, NDTensors.GradedAxes.UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}}, Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, NDTensors.GradedAxes.UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Int64}:
ERROR: BoundsError: attempt to access 2-blocked 2-element BlockArrays.BlockedUnitRange{Int64, Vector{Int64}} at index [5]
Stacktrace:
  [1] throw_boundserror(A::BlockArrays.BlockedUnitRange{Int64, Vector{Int64}}, I::Int64)
    @ Base ./abstractarray.jl:737
  [2] getindex
    @ ./range.jl:948 [inlined]
  [3] getindex(a::BlockArrays.BlockedUnitRange{NDTensors.LabelledNumbers.LabelledInteger{…}, Vector{…}}, index::Int64)
    @ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:269
  [4] iterate
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:221 [inlined]
  [5] iterate
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:95 [inlined]
  [6] _show_nonempty(io::IOContext{…}, X::AbstractMatrix, prefix::String, drop_brackets::Bool, axs::Tuple{…})
    @ Base ./arrayshow.jl:447
  [7] _show_nonempty(io::IOContext{…}, X::BlockSparseArray{…}, prefix::String)
    @ Base ./arrayshow.jl:413
  [8] show
    @ ./arrayshow.jl:491 [inlined]
  [9] show_delim_array(io::IOContext{…}, itr::Tuple{…}, op::Char, delim::Char, cl::Char, delim_one::Bool, i1::Int64, n::Int64)
    @ Base ./show.jl:1378
 [10] show_delim_array
    @ ./show.jl:1363 [inlined]
 [11] show
    @ ./show.jl:1396 [inlined]
 [12] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, x::Tuple{BlockSparseArray{…}, Int64})
    @ Base.Multimedia ./multimedia.jl:47
 [13] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [14] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [15] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [16] display
    @ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [17] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [18] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [19] invokelatest
    @ ./essentials.jl:889 [inlined]
 [20] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [21] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [22] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [23] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [24] (::REPL.var"#do_respond#80"{})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [25] (::REPL.var"#98#108"{})(::REPL.LineEdit.MIState, ::Any, ::Vararg{…})
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1248
 [26] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [27] invokelatest
    @ ./essentials.jl:889 [inlined]
 [28] (::REPL.LineEdit.var"#27#28"{REPL.var"#98#108"{}, String})(s::Any, p::Any)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:1612
 [29] prompt!(term::REPL.Terminals.TextTerminal, prompt::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2749
 [30] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2651
 [31] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [32] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

EDIT: fixed by ITensor/ITensors.jl#1531

@ogauthe
Contributor Author

ogauthe commented Oct 22, 2024

issue: cannot create a zero-dimensional BlockSparseArray

zerodim = BlockSparseArray{Float64}(())
ERROR: MethodError: (BlockSparseArray{Float64, N, A, Blocks} where {N, A<:AbstractArray{Float64, N}, Blocks<:AbstractArray{A, N}})(::Tuple{}) is ambiguous.

Candidates:
  (BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(dims::Tuple{Vararg{Vector{Int64}}}) where T
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearray/blocksparsearray.jl:61
  (BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(axes::Tuple{Vararg{AbstractUnitRange}}) where T
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearray/blocksparsearray.jl:65

Possible fix, define
  (BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(::Tuple{}) where T

Stacktrace:
 [1] top-level scope
   @ REPL[75]:1

Edit: FIXED

@mtfishman
Member

@ogauthe are all of the open issues you've listed in recent comments also listed in the first post?

@ogauthe
Contributor Author

ogauthe commented Oct 30, 2024

A few were missing, I updated the first post and added links.

@mtfishman
Member

issue: I cannot write to a slice of a block

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))

does not write to a. This is probably associated with "Fix initializing a block with broadcasting syntax".

I don't think this should write to a, since a[Block(1,1)] returns a copy of the block, following Julia's slicing conventions:

julia> a = randn(4, 4)
4×4 Matrix{Float64}:
  0.461072  0.8415    -0.25594   -0.0362716
  1.64976   0.325521  -0.174059  -1.27251
  0.676818  0.705131   0.909353  -0.295874
 -0.159376  0.27667    0.949735   0.135925

julia> a[2:4, 2:4][1:2, 1:2] = zeros(2, 2)
2×2 Matrix{Float64}:
 0.0  0.0
 0.0  0.0

julia> a
4×4 Matrix{Float64}:
  0.461072  0.8415    -0.25594   -0.0362716
  1.64976   0.325521  -0.174059  -1.27251
  0.676818  0.705131   0.909353  -0.295874
 -0.159376  0.27667    0.949735   0.135925

You could use a view, or use a[Block(1,1)[1:2,1:2]] = ones(2, 2).
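Spelled out, the two working alternatives would be (a sketch based on the setup from the first post):

```julia
using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3])

# Option 1: write through a view, which aliases the underlying block storage.
@view(a[Block(1, 1)])[1:2, 1:2] = ones(2, 2)

# Option 2: index with a BlockIndexRange, which writes in place.
a[Block(1, 1)[1:2, 1:2]] = ones(2, 2)
```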

@ogauthe
Contributor Author

ogauthe commented Nov 5, 2024

Many axes-related errors were fixed by ITensor/ITensors.jl#1531. I updated the first comment.

@ogauthe
Contributor Author

ogauthe commented Nov 13, 2024

Feature request: block sizes are not checked in the constructor from a Dictionary. When they do not match the axes, the resulting object is inconsistent, but the error is hard to detect. The constructor should raise a clear error.

rg = blockedrange([2, 3])
cg = blockedrange([1])
m1 = ones((2, 1))
m2 = ones((1, 1))  # too small
mdic = Dictionary{Block{2,Int64},Matrix{Float64}}()
set!(mdic, Block(1, 1), m1)
set!(mdic, Block(2, 1), m2)
m = BlockSparseArray(mdic, (rg, cg))
copy(m)  # example; many operations will fail
ERROR: BoundsError: attempt to access 1×1 Matrix{Float64} at index [1:3, 1:1]
Stacktrace:
  [1] throw_boundserror(A::Matrix{Float64}, I::Tuple{UnitRange{Int64}, UnitRange{Int64}})
    @ Base ./abstractarray.jl:737
  [2] checkbounds
    @ ./abstractarray.jl:702 [inlined]
  [3] view
    @ ./subarray.jl:184 [inlined]
  [4] ITensor/ITensors.jl#73
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] ntuple
    @ ./ntuple.jl:19 [inlined]
  [6] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::Function, a_dest::BlockSparseArray{…}, a_srcs::BlockSparseArray{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
  [7] sparse_map!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
  [8] sparse_copyto!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8 [inlined]
  [9] copyto!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:116 [inlined]
 [10] copymutable
    @ ./abstractarray.jl:1201 [inlined]
 [11] copy(a::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./abstractarray.jl:1144
 [12] top-level scope
    @ REPL[293]:1
Some type information was truncated. Use `show(err)` to see complete types.
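A minimal sketch of the requested check (check_block_sizes is a hypothetical helper, not part of the package; it reads the internal field b.n for the block's Cartesian index):

```julia
using BlockArrays: blocklengths

# Hypothetical validation: compare each stored block's size against the
# block lengths implied by the axes before constructing the array.
function check_block_sizes(blocks, axs)
  for (b, m) in pairs(blocks)
    expected = ntuple(d -> blocklengths(axs[d])[b.n[d]], length(axs))
    size(m) == expected || throw(
      DimensionMismatch("block $b has size $(size(m)), expected $expected"),
    )
  end
end
```

With the example above, check_block_sizes(mdic, (rg, cg)) would flag Block(2, 1) as having size (1, 1) instead of the expected (3, 1).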

@mtfishman
Member

Related to that, it would also be nice to automatically determine the blocked axes from the blocks that are passed. However, if not enough blocks are passed, there may not be enough information to determine all of the sizes.
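A rough sketch of such axis inference (infer_axes is hypothetical; it errors when some block row or column has no stored block, since that size is then undetermined, and it reads the internal field b.n):

```julia
using BlockArrays: blockedrange

# Infer row and column block lengths from the stored blocks; every block
# row and column must contain at least one stored block to be determined.
function infer_axes(blocks)
  nrow = maximum(b.n[1] for b in keys(blocks))
  ncol = maximum(b.n[2] for b in keys(blocks))
  rows = Vector{Union{Nothing,Int}}(nothing, nrow)
  cols = Vector{Union{Nothing,Int}}(nothing, ncol)
  for (b, m) in pairs(blocks)
    rows[b.n[1]] = size(m, 1)
    cols[b.n[2]] = size(m, 2)
  end
  (any(isnothing, rows) || any(isnothing, cols)) &&
    error("not enough blocks to determine all block sizes")
  return blockedrange(Vector{Int}(rows)), blockedrange(Vector{Int}(cols))
end
```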

@ogauthe
Contributor Author

ogauthe commented Dec 10, 2024

Issue: setindex!(::BlockSparseArray, ::BlockIndexRange) fails for any AbstractGradedUnitRange

g0 = blockedrange([2])
bsa0 = BlockSparseArray{Float64}(g0, g0)
bsa0[Block(1,1)[1:2,1:2]] = ones((2,2))  # Ok

g = gradedrange([TrivialSector()=>2])
bsa = BlockSparseArray{Float64}(g, g)
bsa[Block(1,1)] = ones((2,2))  # Ok
@show bsa[Block(1,1)[1:2,1:2]]  # Ok
bsa[Block(1,1)[1:2,1:2]] = zeros((2,2))  # MethodError
ERROR: MethodError: no method matching LabelledNumbers.LabelledInteger{Int64, TrivialSector}(::Int64)
The type `LabelledNumbers.LabelledInteger{Int64, TrivialSector}` exists, but no method is defined for this combination of argument types when trying to construct it.

Closest candidates are:
  (::Type{LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ LabelledNumbers ~/.julia/packages/LabelledNumbers/Pn1xf/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:900
  (::Type{T})(::BigFloat) where T<:Integer
   @ Base mpfr.jl:403
  ...

Stacktrace:
 [1] convert(::Type{LabelledNumbers.LabelledInteger{Int64, TrivialSector}}, x::Int64)
   @ Base ./number.jl:7
 [2] iterate
   @ ./range.jl:909 [inlined]
 [3] macro expansion
   @ ./cartesian.jl:66 [inlined]
 [4] _unsafe_setindex!
   @ ./multidimensional.jl:979 [inlined]
 [5] _setindex!
   @ ./multidimensional.jl:967 [inlined]
 [6] setindex!(A::BlockSparseArray{…}, v::Matrix{…}, I::BlockArrays.BlockIndexRange{…})
   @ Base ./abstractarray.jl:1413
 [7] top-level scope
   @ REPL[105]:1
Some type information was truncated. Use `show(err)` to see complete types.

The error is raised inside BlockSparseArrays; however, it looks quite similar to previous issues with dual axes, so the problem may actually be in GradedUnitRanges.

Edit: moved to #9

@mtfishman
Member

@ogauthe can you start a new issue? This issue list should get split into separate issues now that it is a separate repository.
