
[BlockSparseArrays] Blockwise matrix factorizations #3

Open
mtfishman opened this issue Jul 1, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@mtfishman (Member)

mtfishman commented Jul 1, 2024

A key missing feature in BlockSparseArrays, as mentioned in #2, is support for blockwise matrix factorizations. I'm starting an issue here to sketch out an implementation plan for that.

Here is an initial implementation of a blockwise SVD, using the new submodules of NDTensors.jl:

using BlockArrays: Block, blockedrange, blocks
using Dictionaries: Dictionary, set!
using LinearAlgebra: Diagonal, svd
using NDTensors.BlockSparseArrays: BlockSparseArray, block_stored_indices
using NDTensors.SparseArrayInterface: stored_indices
using NDTensors.SparseArrayDOKs: SparseMatrixDOK

# Convert a sparse array into a Boolean array that is true at the stored indices
# and false elsewhere.
function boolean_array(a::AbstractArray)
  b = similar(a, Bool)
  fill!(b, zero(eltype(b)))
  for I in stored_indices(a)
    b[I] = one(eltype(b))
  end
  return b
end

# Check if the matrix has 1 or fewer entries
# per row/column.
function is_permutation_matrix(a::SparseMatrixDOK)
  keys = collect(Iterators.map(Tuple, stored_indices(a)))
  I = first.(keys)
  J = last.(keys)
  return allunique(I) && allunique(J)
end

function is_block_permutation_matrix(a::AbstractMatrix)
  return is_permutation_matrix(boolean_array(blocks(a)))
end

function block_svd(a::AbstractMatrix)
  @assert is_block_permutation_matrix(a)
  bs = block_stored_indices(a)
  s_blocks = map(i -> Block(i, i), 1:length(bs))
  bs_to_s_blocks = Dictionary(bs, s_blocks)
  us = Dictionary{eltype(bs),Matrix{eltype(a)}}()
  ss = Dictionary{eltype(bs),Matrix{eltype(a)}}()
  vts = Dictionary{eltype(bs),Matrix{eltype(a)}}()
  for b in bs
    # SVD of this dense block of `a`.
    usvb = svd(a[b])
    ub = usvb.U
    sb = Matrix(Diagonal(usvb.S))
    vbt = usvb.Vt
    # Diagonal block of `s` associated with this stored block of `a`.
    b_s = bs_to_s_blocks[b]
    bu = Block(Int.((Tuple(b)[1], Tuple(b_s)[1])))
    bvt = Block(Int.((Tuple(b_s)[2], Tuple(b)[2])))
    set!(us, bu, ub)
    set!(ss, b_s, sb)
    set!(vts, bvt, vbt)
  end
  r_s = blockedrange(map(s_block -> size(ss[s_block], 1), s_blocks))
  u = BlockSparseArray(us, (axes(a, 1), r_s))
  s = BlockSparseArray(ss, (r_s, r_s))
  vt = BlockSparseArray(vts, (r_s, axes(a, 2)))
  return u, s, vt
end

Here's a demonstration that it works:

julia> a = BlockSparseArray{Float64}([2, 3], [2, 3]);

julia> a[Block(2, 1)] = randn(3, 2);

julia> a[Block(1, 2)] = randn(2, 3);

julia> a
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

2×2-blocked 5×5 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
  0.0        0.0       │  -1.98464  0.0925584  -0.307224
  0.0        0.0       │  -1.8799   1.26757    -0.576902
 ──────────────────────┼────────────────────────────────
  0.558952   0.118232  │   0.0      0.0         0.0
  1.24029    0.503875  │   0.0      0.0         0.0
 -0.566776  -0.585867  │   0.0      0.0         0.0

julia> u, s, vt = block_svd(a);

julia> u
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

2×2-blocked 5×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
  0.0       0.0       │  -0.642224  -0.766517
  0.0       0.0       │  -0.766517   0.642224
 ─────────────────────┼──────────────────────
 -0.338149  0.438603  │   0.0        0.0
 -0.815799  0.304476  │   0.0        0.0
  0.469178  0.845531  │   0.0        0.0

julia> s
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
 1.63656  0.0       │  0.0      0.0
 0.0      0.323677  │  0.0      0.0
 ───────────────────┼───────────────────
 0.0      0.0       │  2.97427  0.0
 0.0      0.0       │  0.0      0.817929

julia> vt
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

2×2-blocked 4×5 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
 -0.896243  -0.443562  │   0.0        0.0        0.0
  0.443562  -0.896243  │   0.0        0.0        0.0
 ──────────────────────┼────────────────────────────────
  0.0        0.0       │   0.913015  -0.346659   0.215015
  0.0        0.0       │   0.383828   0.908532  -0.16506

julia> u * s * vt
typeof(axes) = Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}

Warning: To temporarily circumvent a bug in printing BlockSparseArrays with mixtures of dual and non-dual axes, the types of the dual axes printed below might not be accurate. The types printed above this message are the correct ones.

2×2-blocked 5×5 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockArrays.BlockedOneTo{Int64, Vector{Int64}}, BlockArrays.BlockedOneTo{Int64, Vector{Int64}}}}:
  0.0        0.0       │  -1.98464  0.0925584  -0.307224
  0.0        0.0       │  -1.8799   1.26757    -0.576902
 ──────────────────────┼────────────────────────────────
  0.558952   0.118232  │   0.0      0.0         0.0
  1.24029    0.503875  │   0.0      0.0         0.0
 -0.566776  -0.585867  │   0.0      0.0         0.0

There are a number of improvements to make to the code above:

  1. Store S as a BlockSparseMatrix whose blocks are Diagonal or DiagonalMatrix instead of Matrix. That would be relatively easy to support; I just haven't tested it, and there are probably a few issues to work out in BlockSparseArrays to support it.
  2. Make it generic over GPU backends, i.e. infer the output block type from the input blocks instead of hardcoding Matrix, which lives on CPU.
  3. Add support for truncating the blocks. I'm picturing that in the simplest case we just call the dense/full-rank version above, then analyze the singular values to determine the rank to keep in each block, and finally slice the blocks using the slicing operation introduced in [BlockSparseArrays] Sub-slices of multiple blocks ITensors.jl#1489 (see the truncation sketch after this list).
  4. Make it a little more automated to construct the BlockSparseMatrix representations of $U$, $S$, and $V^{\dagger}$. In the code above I first store the results of SVDing each block in dictionaries holding the blocks of $U$, $S$, and $V^{\dagger}$, and then build the BlockSparseMatrix representations from those blocks, along with axes determined from the input matrix and the results of the SVD. I think that logic is pretty simple already, but maybe it can be streamlined a bit. For example, we could store the blocks in a SparseMatrixDOK and then convert it to a BlockSparseMatrix, which could determine the block sizes automatically.
  5. Handle BlockSparseMatrix inputs that are invariant under group symmetries. Really the only thing that needs to change above is attaching symmetry/irrep labels to the blocks of the axes of the internal dimensions (those connecting $S$ to $U$ and $V^{\dagger}$, called r_s in the code above). The logic for that would be pretty simple, and is uniquely determined by the convention that $U$ and $V^{\dagger}$ are in trivial symmetry sectors (have zero flux).
  6. Handle inputs whose block sparsity pattern is not a block permutation by merging block rows and columns that can't be factorized independently (see the grouping sketch after this list).
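
For item 3, here is a minimal sketch of how the per-block truncated ranks could be determined from the dense s returned by block_svd above, assuming a global relative cutoff. The block_truncation_ranks helper is hypothetical (not part of BlockSparseArrays); the actual slicing of u, s, and vt would then use the sub-block slicing from ITensors.jl#1489:

using Dictionaries: Dictionary
using LinearAlgebra: diag
using NDTensors.BlockSparseArrays: block_stored_indices

# Hypothetical helper: count how many singular values to keep in each diagonal
# block of `s` for a global relative cutoff.
function block_truncation_ranks(s::AbstractMatrix; cutoff::Real)
  s_blocks = collect(block_stored_indices(s))
  # Singular values of each (dense) diagonal block of `s`.
  svals = Dictionary(s_blocks, map(b -> diag(Matrix(s[b])), s_blocks))
  smax = maximum(maximum, svals)
  # Blockwise count of singular values above the relative cutoff.
  return map(sv -> count(>(cutoff * smax), sv), svals)
end

For example, block_truncation_ranks(s; cutoff=1e-12) would return a Dictionary mapping each diagonal block of s to the number of singular values it keeps.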
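
For item 6, here is a sketch of the grouping logic in plain Julia (the block_groups helper and its interface are hypothetical): block rows and block columns coupled through stored blocks are collected into groups with a small union-find, and each group would then be merged into a single dense block before factorizing.

# Hypothetical helper: group block rows and block columns that are coupled through
# stored blocks. Rows/columns in the same group can't be factorized independently.
function block_groups(block_indices, nrows::Int, ncols::Int)
  # Union-find over block rows 1:nrows and block columns nrows+1:nrows+ncols.
  parent = collect(1:(nrows + ncols))
  find(i) = parent[i] == i ? i : (parent[i] = find(parent[i]))
  unite!(i, j) = (parent[find(i)] = find(j))
  for b in block_indices
    i, j = Int.(Tuple(b))
    unite!(i, nrows + j)
  end
  # Group representative of each block row and block column.
  row_groups = map(find, 1:nrows)
  col_groups = map(i -> find(nrows + i), 1:ncols)
  return row_groups, col_groups
end

For example, block_groups(block_stored_indices(a), blocksize(a, 1), blocksize(a, 2)) (with blocksize from BlockArrays) returns group labels for the block rows and block columns; a block permutation matrix is the special case where no group contains more than one block row or more than one block column.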

For some references on other implementations of blockwise matrix factorizations, see https://github.com/ITensor/ITensors.jl/blob/v0.6.16/NDTensors/src/lib/BlockSparseArrays/src/backup/qr.jl and https://github.com/JuliaArrays/BlockDiagonals.jl/blob/master/src/linalg.jl.

@ogauthe @emstoudenmire

mtfishman added the enhancement (New feature or request) label on Jul 1, 2024
@ogauthe (Contributor)

ogauthe commented Jul 2, 2024

Looks nice, I'm eager to try.

Regarding truncation, indeed the simplest approach is to compute the dense blockwise SVD and truncate afterwards. In the case of non-abelian symmetries, there are choices to be made about how to define the rank of each block.

When non-abelian symmetries are involved, one does not control $D$ directly but $D^*$, the number of kept multiplets. One can either directly set $D^*$ as a target, or else set a target $D$ and keep adding multiplets until $D$ is reached. This choice matters because, when simulating a given model, the multiplet decomposition of a singular value spectrum may change dramatically depending on parameters such as the ratio $r$ of two coupling constants.

If one sets $D^*$ as a target, any truncation is straightforward. The tradeoff is that $D$ is not controlled and may reach very different values when tuning $r$. Moreover, it makes it hard to benchmark simulations done with different symmetries (for instance making use of an extended symmetry at an exceptional point), since the map from $D^*$ to $D$ is symmetry-dependent.

If one sets $D$ as a target, it may not be possible to reach it exactly. One then has to decide how to set $D^*$: possible choices are the largest value before $D$ is reached, the smallest value above $D$, or the value closest to $D$.
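
To make the last point concrete, here is a minimal sketch in plain Julia (with hypothetical inputs) of the "largest $D^*$ before $D$ is reached" convention: multiplets are kept in order of decreasing singular value until adding one more would exceed the target $D$.

# Hypothetical sketch: choose the number of kept multiplets D* for a target total
# dimension D. `svals[i]` is the singular value of multiplet i and `dims[i]` is its
# total dimension (irrep dimension times degeneracy).
function kept_multiplets(svals::Vector{<:Real}, dims::Vector{Int}, D::Int)
  order = sortperm(svals; rev=true)  # largest singular values first
  total = 0
  nkept = 0
  for i in order
    total + dims[i] > D && break
    total += dims[i]
    nkept += 1
  end
  return nkept, total  # D* and the actual kept dimension (≤ D)
end

The "smallest $D^*$ above $D$" and "closest to $D$" conventions would only change the stopping rule.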
