
get_vector failing for product manifold #204

Closed
Gertian opened this issue Sep 27, 2024 · 21 comments


@Gertian

Gertian commented Sep 27, 2024

Hi,

Firstly, thanks to the devs for this wonderful package! However, when using it today I ran into some issues which, as far as I understand, come from the get_vector function failing in the context of product manifolds. Unfortunately, the error message is very vague and not really helpful for an uneducated end-user such as myself.

The full code in question is below. Obviously I have tested that calc_energy(p, Γ_ref) works well. My apologies in advance if this is just me being stupid, and/or if this should have been posted as an issue in one of the other packages such as Manifolds.jl.

using Revise
#optimize on a manifold
using Manopt
using Manifolds
using RecursiveArrayTools
#finite differences
using FiniteDifferences
using ManifoldDiff
#LinearAlgebra packages
using LinearAlgebra
using SkewLinearAlgebra #for efficient pfaffian !


#helper function that takes Q ∈ SO(2n) to a covariance matrix
function Q_to_Γ(Q)
    Σ = zero(Q)
    for i in 0:n-1
        Σ[2*i+1:2*i+2, 2*i+1:2*i+2] = [0 1;-1 0]
    end
    return Q*Σ*transpose(Q)
end
#helper function that takes a matrix and returns its skew-symmetric part.
function skewsymmetric(A) #TODO include some checks
    return 0.5*(A - transpose(A))
end

#set some parameters
L = 3
ph = 1
n = L*ph 
r = 3 
#make a generic covariance matrix
Γ_ref = Q_to_Γ(rand(Rotations(2*n)))

#define the manifold
M = ProductManifold(Euclidean(r, field = ℂ ), PowerManifold(Rotations(2*n), NestedPowerRepresentation(), r))



#Helper function that takes in a set of prefactors, covariance matrices (in the form of a point ∈ manifold) and reference covariance matrix and spits out the energy : 
t=1. 
function calc_energy(p, Γ_ref; λs = p.x[1] , Γs = map(Qi -> Q_to_Γ(Qi), p.x[2]))
    #first we need to calculate all overlaps with the reference state. Due to gauge freedom these are all real and positive. Hence we can safely take sqrt(det) !
    gas = map(Γa -> 2^(-n/2) * det(Γa + Γ_ref)^(1/4), Γs)
    #now, we can make the matrices Δab cf. Bravyi : 
    Δab = [(-2*one(Γ_a) + im*Γ_a - im*Γ_b )*inv(Γ_a + Γ_b)   for Γ_a in Γs, Γ_b in Γs ]
    #and from this Gab = <φ_a | φ_b> is :  
    Gab = [2^n*gas[a]*gas[b]/pfaffian(skewsymmetric(Δab[b,a] + Γ_ref )) for a in 1:r, b in 1:r]
    
    #as a warmup, before we calculate the energy, let us first calculate the norm of the state :
    norm = 0.
    for a in 1:r, b in 1:r
        norm += λs[a]'*Gab[a,b]*λs[b]
    end

    #now let us do the real lifting and calculate the energy : 
    E = 0.
    for a in 1:r, b in 1:r
        for bond in 0:L-1
            #first, let us get the sub-covariance matrix corresponding to c^1_i c^2_i+1
            inds = [2*ph*bond + 1, mod1(2*ph*bond + 4, 2*ph*L)]
            E += -t*(-im/2)*λs[a]'*Gab[a,b]*λs[b]*im^n*pfaffian(skewsymmetric(Δab[b,a][inds, inds]))
            #now, let us do the same for c^2_i c^1_i+1
            inds = [2*ph*bond + 2, mod1(2*ph*bond + 3, 2*ph*L)]
            E += -t*(-im/2)*λs[a]'*Gab[a,b]*λs[b]*(-1)*im^n*pfaffian(skewsymmetric(Δab[b,a][inds, inds]))
        end
    end

    #then we can return the final energy as : 
    return real(E/norm)
end

#function that makes the gradient of this thing : 
r_backend = ManifoldDiff.TangentDiffBackend(
    ManifoldDiff.FiniteDifferencesBackend()
)
grad_energy(p) = ManifoldDiff.gradient(M, p->calc_energy(p, Γ_ref), p, r_backend)


#now, all that's left is to optimize E(Q) with Q ∈ M : 
best_Q = gradient_descent(M, (M, p)->calc_energy(p, Γ_ref), (M, p) -> grad_energy(p), rand(M)  )

I don't see how this could be relevant, but just in case I should mention that I'm running a locally patched version of SkewLinearAlgebra which contains a custom rrule for pfaffians.

The stacktrace is pretty wild; here it is:

1-element ExceptionStack:
LoadError: AssertionError: length(Xⁱ) == sum(dims)
Stacktrace:
  [1] get_vector(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, Xⁱ::Vector{ComplexF64}, B::DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType})
    @ ManifoldsBaseRecursiveArrayToolsExt ~/.julia/packages/ManifoldsBase/E9U7a/ext/ManifoldsBaseRecursiveArrayToolsExt/ProductManifoldRecursiveArrayToolsExt.jl:102
  [2] JuliaManifolds/Manopt.jl#21
    @ ~/.julia/packages/ManifoldDiff/CZuUn/src/riemannian_diff.jl:163 [inlined]
  [3] (::FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}})(x::Vector{Float64})
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:77
  [4] (::FiniteDifferences.var"#75#77"{Int64, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}})(ε::Float64)
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:20
  [5] newf
    @ ~/.julia/packages/StaticArrays/MSJcA/src/broadcast.jl:186 [inlined]
  [6] macro expansion
    @ ~/.julia/packages/StaticArrays/MSJcA/src/broadcast.jl:135 [inlined]
  [7] __broadcast
    @ ~/.julia/packages/StaticArrays/MSJcA/src/broadcast.jl:123 [inlined]
  [8] _broadcast
    @ ~/.julia/packages/StaticArrays/MSJcA/src/broadcast.jl:119 [inlined]
  [9] copy
    @ ~/.julia/packages/StaticArrays/MSJcA/src/broadcast.jl:60 [inlined]
 [10] materialize
    @ ./broadcast.jl:903 [inlined]
 [11] _eval_function(m::FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}, f::FiniteDifferences.var"#75#77"{Int64, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}}, x::Float64, step::Float64)
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/methods.jl:249
 [12] _estimate_magnitudes(m::FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}, f::FiniteDifferences.var"#75#77"{Int64, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}}, x::Float64)
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/methods.jl:378
 [13] estimate_step(m::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::FiniteDifferences.var"#75#77"{Int64, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}}, x::Float64)
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/methods.jl:365
 [14] (::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}})(f::FiniteDifferences.var"#75#77"{Int64, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}}, x::Float64)
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/methods.jl:193
 [15] JuliaManifolds/Manopt.jl#74
    @ ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:16 [inlined]
 [16] iterate
    @ ./generator.jl:47 [inlined]
 [17] _collect(c::Base.OneTo{Int64}, itr::Base.Generator{Base.OneTo{Int64}, FiniteDifferences.var"#74#76"{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}}}, ::Base.EltypeUnknown, isz::Base.HasShape{1})
    @ Base ./array.jl:854
 [18] collect_similar(cont::Base.OneTo{Int64}, itr::Base.Generator{Base.OneTo{Int64}, FiniteDifferences.var"#74#76"{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, Vector{Float64}}})
    @ Base ./array.jl:763
 [19] map
    @ ./abstractarray.jl:3285 [inlined]
 [20] jacobian(fdm::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::FiniteDifferences.var"#92#93"{ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, FiniteDifferences.var"#Vector_from_vec#32"{Vector{ComplexF64}, Vector{FiniteDifferences.var"#Complex_from_vec#21"}, Vector{Vector{Float64}}}}, x::Vector{Float64}; len::Nothing)
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:15
 [21] jacobian(fdm::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::Function, x::Vector{Float64})
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:9
 [22] _j′vp(fdm::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::Function, ȳ::Vector{Int64}, x::Vector{Float64})
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:84
 [23] j′vp(fdm::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::ManifoldDiff.var"#21#22"{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, var"#22#23", ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}}}, ȳ::Int64, x::Vector{ComplexF64})
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:77
 [24] grad(fdm::FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}, f::Function, xs::Vector{ComplexF64})
    @ FiniteDifferences ~/.julia/packages/FiniteDifferences/IPGFN/src/grad.jl:94
 [25] _gradient(f::Function, p::Vector{ComplexF64}, backend::ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}})
    @ ManifoldDiffFiniteDifferencesExt ~/.julia/packages/ManifoldDiff/CZuUn/ext/ManifoldDiffFiniteDifferencesExt.jl:24
 [26] gradient(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, f::var"#22#23", p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, backend::ManifoldDiff.TangentDiffBackend{ManifoldDiff.FiniteDifferencesBackend{FiniteDifferences.AdaptedFiniteDifferenceMethod{5, 1, FiniteDifferences.UnadaptedFiniteDifferenceMethod{7, 5}}}, ExponentialRetraction, LogarithmicInverseRetraction, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}, DefaultOrthonormalBasis{ℝ, ManifoldsBase.TangentSpaceType}})
    @ ManifoldDiff ~/.julia/packages/ManifoldDiff/CZuUn/src/riemannian_diff.jl:162
 [27] grad_energy(p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}})
    @ Main ~/Documents/Projects/gaussian_MPS/superposed_dense/superposed_dense.jl:98
 [28] JuliaManifolds/Manopt.jl#25
    @ ~/Documents/Projects/gaussian_MPS/superposed_dense/superposed_dense.jl:102 [inlined]
 [29] get_gradient!(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, X::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, mgo::ManifoldGradientObjective{AllocatingEvaluation, var"#24#26", var"#25#27"}, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/plans/gradient_plan.jl:192
 [30] get_gradient!
    @ ~/.julia/packages/Manopt/YbDcl/src/plans/gradient_plan.jl:145 [inlined]
 [31] initialize_solver!
    @ ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:244 [inlined]
 [32] solve!(p::DefaultManoptProblem{ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, ManifoldGradientObjective{AllocatingEvaluation, var"#24#26", var"#25#27"}}, s::GradientDescentState{ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, StopWhenAny{Tuple{StopAfterIteration, StopWhenGradientNormLess{typeof(norm), Float64}}}, Manopt.ArmijoLinesearchStepsize{ProductRetraction{Tuple{ExponentialRetraction, ExponentialRetraction}}, ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{Float64}}}}, Int64, Float64, typeof(Manopt.armijo_initial_guess), Manopt.var"#322#327", Manopt.var"#323#328"}, Manopt.IdentityUpdateRule, ProductRetraction{Tuple{ExponentialRetraction, ExponentialRetraction}}})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/solver.jl:134
 [33] gradient_descent!(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, mgo::ManifoldGradientObjective{AllocatingEvaluation, var"#24#26", var"#25#27"}, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}; retraction_method::ProductRetraction{Tuple{ExponentialRetraction, ExponentialRetraction}}, stepsize::Manopt.ArmijoLinesearchStepsize{ProductRetraction{Tuple{ExponentialRetraction, ExponentialRetraction}}, ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{Float64}}}}, Int64, Float64, typeof(Manopt.armijo_initial_guess), Manopt.var"#322#327", Manopt.var"#323#328"}, stopping_criterion::StopWhenAny{Tuple{StopAfterIteration, StopWhenGradientNormLess{typeof(norm), Float64}}}, debug::Vector{Any}, direction::Manopt.ManifoldDefaultsFactory{Manopt.IdentityUpdateRule, Nothing, Tuple{}, @Kwargs{}}, X::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}, kwargs::@Kwargs{})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:237
 [34] gradient_descent!(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, mgo::ManifoldGradientObjective{AllocatingEvaluation, var"#24#26", var"#25#27"}, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:219
 [35] gradient_descent(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, mgo::ManifoldGradientObjective{AllocatingEvaluation, var"#24#26", var"#25#27"}, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{Float64}}}}; kwargs::@Kwargs{})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:185
 [36] gradient_descent(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, mgo::ManifoldGradientObjective{AllocatingEvaluation, var"#24#26", var"#25#27"}, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{Float64}}}})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:181
 [37] gradient_descent(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, f::Function, grad_f::Function, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{Float64}}}}; evaluation::AllocatingEvaluation, kwargs::@Kwargs{})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:178
 [38] gradient_descent(M::ProductManifold{ℂ, Tuple{Euclidean{ManifoldsBase.TypeParameter{Tuple{3}}, ℂ}, PowerManifold{ℝ, Rotations{ManifoldsBase.TypeParameter{Tuple{6}}}, Tuple{Int64}, NestedPowerRepresentation}}}, f::Function, grad_f::Function, p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{Float64}}}})
    @ Manopt ~/.julia/packages/Manopt/YbDcl/src/solvers/gradient_descent.jl:166
 [39] top-level scope
    @ ~/Documents/Projects/gaussian_MPS/superposed_dense/superposed_dense.jl:102
 [40] eval
    @ ./boot.jl:385 [inlined]
 [41] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
    @ Base ./loading.jl:2076
 [42] invokelatest(::Any, ::Any, ::Vararg{Any}; kwargs::@Kwargs{})
    @ Base ./essentials.jl:892
 [43] invokelatest(::Any, ::Any, ::Vararg{Any})
    @ Base ./essentials.jl:889
 [44] inlineeval(m::Module, code::String, code_line::Int64, code_column::Int64, file::String; softscope::Bool)
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/eval.jl:271
 [45] (::VSCodeServer.var"#69#74"{Bool, Bool, Bool, Module, String, Int64, Int64, String, VSCodeServer.ReplRunCodeRequestParams})()
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/eval.jl:181
 [46] withpath(f::VSCodeServer.var"#69#74"{Bool, Bool, Bool, Module, String, Int64, Int64, String, VSCodeServer.ReplRunCodeRequestParams}, path::String)
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/repl.jl:276
 [47] (::VSCodeServer.var"#68#73"{Bool, Bool, Bool, Module, String, Int64, Int64, String, VSCodeServer.ReplRunCodeRequestParams})()
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/eval.jl:179
 [48] hideprompt(f::VSCodeServer.var"#68#73"{Bool, Bool, Bool, Module, String, Int64, Int64, String, VSCodeServer.ReplRunCodeRequestParams})
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/repl.jl:38
 [49] (::VSCodeServer.var"#67#72"{Bool, Bool, Bool, Module, String, Int64, Int64, String, VSCodeServer.ReplRunCodeRequestParams})()
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/eval.jl:150
 [50] with_logstate(f::Function, logstate::Any)
    @ Base.CoreLogging ./logging.jl:515
 [51] with_logger
    @ ./logging.jl:627 [inlined]
 [52] (::VSCodeServer.var"#66#71"{VSCodeServer.ReplRunCodeRequestParams})()
    @ VSCodeServer ~/.vscode/extensions/julialang.language-julia-1.124.2/scripts/packages/VSCodeServer/src/eval.jl:263
 [53] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [54] invokelatest(::Any)
    @ Base ./essentials.jl:889
in expression starting at /home/groose/Documents/Projects/gaussian_MPS/superposed_dense/superposed_dense.jl:102
@kellertuer
Member

kellertuer commented Sep 28, 2024

Hi,

Cool that you are using so many of our packages! Nice to see them in action in such an involved setting – and thanks for your kind words.

I am not yet 100% sure where the error comes from, but I am quite sure it's not Manopt but rather Manifolds.jl (or one of its dependencies).

  1. If we look at

[27] grad_energy(p::ArrayPartition{ComplexF64, Tuple{Vector{ComplexF64}, Vector{Matrix{ComplexF64}}}}) @ Main ~/Documents/Projects/gaussian_MPS/superposed_dense/superposed_dense.jl:98

There we “leave” the realm of what Manopt does (from [31] and [30] you can see this is the very first call to the gradient evaluation). From there it seems that it is maybe not ManifoldDiff, because we get through that nicely I think,
but we are stuck in the RecursiveArrays extension of Manifolds.jl.

That said, you should also get the error when you just evaluate your gradient as grad_energy(p) for your start point rand(M).
Note that your gradient has the manifold hardcoded into it, since the manifold is not a parameter of the gradient; we usually use grad_f(M, p). That should not be the cause of the error, though – just a matter of style.

But then I am a bit lost, since the file your error refers to is not in the current version of Manifolds.jl. Which version are you running? I would recommend upgrading to 0.10.x, because this might be a bug we already fixed.

  2. Another thing I am a bit confused about: in grad_energy(p) = ManifoldDiff.gradient(M, p->calc_energy(p, Γ_ref), p, r_backend), your function f (the second argument) does not have the manifold as its first argument. Are you sure this is right?

And sure, the error message is a bit long, but that is usual in Julia; one has to learn a bit to read those. Here it seems the interaction between the product manifold and “generating the tangent vector from coordinates in a basis” does not work – but maybe first check which version you are on.

PS: The SkewLinearAlgebra (cool package by Simon!) version should not have much of an effect; this happened somewhere inside building the finite differences (in a Riemannian sense).

@kellertuer
Member

Ah! Do not code before first coffee! I misread the error message: in [1] the error stems from ManifoldsBase even, see the code line


For RecursiveArrays, we may also need to call in the expert for the rescue: @mateuszbaran !

One thing I did check: your error really already appears if your last lines read

p = rand(M)
grad_energy(p)

so this is indeed not related to Manopt.jl, and I'll transfer this to where the error appears – ManifoldsBase! I am sure we can narrow that down, find the culprit, and fix it :)

@kellertuer kellertuer transferred this issue from JuliaManifolds/Manopt.jl Sep 28, 2024
@kellertuer
Member

kellertuer commented Sep 28, 2024

I had a few minutes. So here is at least an M(n)WE reproducing your error, which should become part of the tests once this is fixed:

using Revise
using Manifolds, ManifoldDiff, RecursiveArrayTools, FiniteDifferences

r_backend = ManifoldDiff.TangentDiffBackend(
    ManifoldDiff.FiniteDifferencesBackend()
)

f(p) = 1
grad_f(M,p) = ManifoldDiff.gradient(M, f, p, r_backend)
M = ℂ^3 × PowerManifold(Rotations(6), NestedPowerRepresentation(), 3)
p = rand(M)
# causes the same error
grad_f(M,p)

@kellertuer
Member

kellertuer commented Sep 28, 2024

And I think I know the culprit (though not yet the fix)! It's the complex manifold.
(Sorry for so many posts – I had a bit more time than I initially thought)

If one looks at the failing line, it checks that we have as many coefficients (the Xⁱ) as the manifold has dimensions.
In your case we have 48 coefficients but a dimension of 51; moreover, the coefficients are complex (actually all of them), which sounds fair for the first 3, but then we also need only 3 (complex) coefficients for the 6-dimensional manifold ℂ^3, not 6.
So we have to check how best to resolve this. Some areas of Manifolds.jl / ManifoldsBase.jl were maybe not yet fully tested with complex manifolds, it seems – so great that you are using that now and we can fix this.
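These counts can be checked with plain arithmetic (my own sketch, not code from the issue; it only uses the standard formulas dim SO(k) = k(k−1)/2 and dim_ℝ ℂ^r = 2r):

```julia
# Reproducing the 48-vs-51 mismatch by hand (plain Julia, no packages needed).
r = 3                        # number of product copies / size of ℂ^r
dim_C3   = 2 * r             # ℂ^3 has real dimension 2·3 = 6
dim_SO6  = 6 * (6 - 1) ÷ 2   # Rotations(6) = SO(6) has dimension 15
dim_M    = dim_C3 + r * dim_SO6   # manifold dimension: 6 + 45 = 51
n_coeffs = r + r * dim_SO6        # 3 complex + 45 real coefficients = 48
println((dim_M, n_coeffs))   # → (51, 48)
```

So the assertion `length(Xⁱ) == sum(dims)` fails exactly because the Euclidean factor contributes 3 complex coefficients where 6 real dimensions are expected.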

In short the problem is the following:
For the complex numbers you can provide two kinds of basis:

  1. A complex basis ($1$ and $\mathrm{i}$, or these times the unit vectors for $\mathbb C^n$), but then you have to use real coefficients and need as many as the (real) dimension.
  2. A real basis (just $1$, or the unit vectors), and then you need complex coefficients, but only half as many as the (real) dimension.

...and those two cases are mixed up in that place. If you are in a hurry, a quick fix is to phrase your problem on $\mathbb R^{2n}$ instead, since you seem to have the corresponding rotation matrices already. Otherwise, wait for us to find a good fix here.

Thanks for pointing this out.
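For the example in this thread, that workaround could look roughly as follows (a sketch under my own assumptions: the complex prefactors λ are packed as real and imaginary parts into a real Euclidean factor, and the hypothetical helper `to_complex` reassembles them inside the cost function – neither is code from the issue):

```julia
using Manifolds  # assumes Manifolds.jl is loaded, as in the original script

r, n = 3, 3
# Instead of Euclidean(r, field = ℂ), use a real 2r-dimensional factor:
M_real = ProductManifold(
    Euclidean(2r),  # stores [Re(λ₁)…Re(λ_r), Im(λ₁)…Im(λ_r)]
    PowerManifold(Rotations(2n), NestedPowerRepresentation(), r),
)
# Hypothetical helper: rebuild the complex prefactors inside calc_energy
to_complex(v, r) = v[1:r] .+ im .* v[r+1:2r]
```

The optimization then runs entirely over real coefficients, sidestepping the complex-basis counting issue.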

@Gertian
Author

Gertian commented Sep 28, 2024

Hi @kellertuer ,

Thank you very much for this swift reply, explanation, and suggested quick fix for my use-case.
I'm super happy that I'll be able to continue with this project so quickly!

For now I'll work with $\mathbb R^{2n}$ and report back if that's working. Of course, if you later want me to try the full code again with $\mathbb C^{n}$, I'll be happy to do so.

Many thanks !

@Gertian
Author

Gertian commented Sep 28, 2024

Update : that works like a charm.

@kellertuer
Member

I already talked with Mateusz; he will check what we can do to make the other case work as well. It is mainly this tricky “counting of coefficients” that sometimes haunts us ;)

@Gertian
Author

Gertian commented Sep 28, 2024

Purely out of interest, have you had any prior struggles with this?

I can imagine that it's indeed very hard to keep precise track of these types of small details in such an extensive codebase.

@kellertuer

Well, we had a bit of trouble with this before we noticed there are exactly the two cases I mentioned above (real basis, complex coefficients vs. complex basis, real coefficients), but most of the functions can be solved on a very generic level, because by now both the manifold and the basis have a field type attached.
I hope we got it right for most manifolds by now – maybe besides the product manifold – or maybe in a few places for Euclidean we were not careful enough. Sometimes the simple manifolds are the meanest, because by thinking they are super simple one might miss exactly a case like the one you stumbled into now.

The codebase is okay-ishly extensive, I think it is still fairly structured.

@mateuszbaran

Hi!

I've taken a look at this issue and it looks like it is caused by our lack of a concept of a real-coefficient basis. DefaultOrthonormalBasis() can either be real-coefficient or complex-coefficient, depending on the manifold. @kellertuer I guess the easiest solution would be to introduce a function for getting the (default-ish) real-coefficient basis of a manifold? And we would also need a ProductBasis for M.

@kellertuer

Both sound like a good idea. The product basis is then mainly to care for the “mix of coefficient types”?

@mateuszbaran

Well, the product basis is actually for the opposite: the same coefficient type. I think it was a mistake to use the number system argument of AbstractBasis to represent something other than the coefficient type.

@kellertuer

I know we discussed this at length quite a while ago, but I usually forget our default and have to look it up. The field in the AbstractBasis refers to the type used in the basis elements, not the coefficients in front of them.

Hence

  • Real means we have a real basis; so on complex manifolds we have fewer basis vectors than the (real) dimension, and that means we need complex coefficients in the sum.
  • Complex means we have complex values in the basis; then we have a (complex) basis with the same number of elements as the (real) dimension and hence only need real coefficients.

So I think both indicators are fine (basis or coefficients); for now it is the basis, following the last longer discussion we had on this. I feel the current advantage is that the parameter tells something about the basis itself (and not about the implicit coefficients that need to be used in front, which might also depend on the manifold/tangent space this basis belongs to).
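For a single tangent direction in $\mathbb C$ (viewed as a 2-dimensional real manifold), the two conventions look like this (a plain sketch, not Manifolds.jl API):

```julia
X = 2.0 + 3.0im                    # a tangent vector at some point of ℂ

# Real basis {1}: one basis vector, one *complex* coefficient.
c_real_basis = [X]                 # length 1, complex entries

# Complex basis {1, im}: two basis vectors, two *real* coefficients.
c_cplx_basis = [real(X), imag(X)]  # length 2, real entries

# Both expansions reconstruct the same vector:
c_real_basis[1] * 1 ≈ c_cplx_basis[1] * 1 + c_cplx_basis[2] * im
```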

@mateuszbaran

I remember those discussions, and I remember I have never been entirely convinced by this choice. It is fine, as in: we can always find a workaround for every problem, but it seems to me we would have fewer workarounds to do if the field were related to the coefficients.

And Real doesn't really mean we have a real basis. You can have complex basis vectors in a complex coefficient basis. The only fundamental part is the field over which the vector space (such as tangent space) is defined. And that field corresponds to coefficients.
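An example of that point (again just plain Julia, with a hypothetical basis vector): a unit-length complex number is a valid single basis vector for $\mathbb C$ as a one-dimensional complex vector space, so the basis vector itself is complex while the (single) coefficient is complex too:

```julia
b = (1 + im) / sqrt(2)   # a complex basis vector for ℂ over ℂ
X = 2.0 + 3.0im          # a tangent vector to expand in that basis
c = X / b                # the single complex coefficient
c * b ≈ X                # the expansion reconstructs X
```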

@kellertuer

I remember that as well, and I am not sure why we ended up with that choice – it might also have been me with some arguments for it. If we document it thoroughly, both are fine with me.

If I remember correctly, currently

  • Real would indeed mean the arrays of the basis are real-valued (and on a complex manifold that means the coefficients in front of the basis elements are complex). For real manifolds, this is the only good choice for a basis.
  • Complex means the basis consists of complex-type tangent vectors; hence the coefficients are real. For real-valued manifolds, this would not exist.

If I remember correctly, my argument was, that this way the parameter refers to a property of the basis, which seems more reasonable (to me) than the (implicit) coefficients-upfront-the-basis variant of the parameter.

The other way around, Real would indicate for a complex manifold that we have a complex basis – isn't it a bit misleading that a Basis{Real} would then refer to a basis with complex tangent vectors?

Also, if we change it, I fear that would be a bit breaking for any complex manifold.
I can check the next few days what we have to fix for the bug reported here.

  • Maybe something in Euclidean, where the case complex-manifold/complex-basis seems to not have its own dispatch.
  • Maybe a ProductBasis to care a bit for the mix of these? We could maybe even check how to best do a product basis where parts of the coefficients are real and parts are complex – or should that always lead to complex coefficients, where we only pass down the real part to the real manifolds?
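A sketch of that last variant (purely hypothetical, just to illustrate the design question: the product promotes everything to one complex coefficient vector, and a real factor only ever sees the real part):

```julia
# Coefficients from a real factor and a complex factor of a product.
c_real_factor = [0.5, -1.25]
c_cplx_factor = [2.0 + 3.0im]

# One joint coefficient vector: promote the real part to complex.
c = vcat(complex.(c_real_factor), c_cplx_factor)

# Splitting back: the real factor only receives the real part.
back_real = real.(c[1:2])
back_cplx = c[3:3]
back_real == c_real_factor && back_cplx == c_cplx_factor
```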

@mateuszbaran

Hm, maybe let's discuss it when we meet next time?

@kellertuer

We discussed this, as Mateusz proposed, and have an idea for a nice fix that will also make some of your interim values nicer, since they will be real.

@kellertuer

This should be fixed in the newest version of ManifoldsBase.jl (0.15.17) and with JuliaManifolds/Manifolds.jl#754 also throughout the rest of the ecosystem.

Thanks again for spotting this and reporting it :)

@Gertian commented Oct 6, 2024

Great !
Thanks for looking into this so quickly.

I'll rewrite my code soon to give this a test "in the field"

Thanks again for the AMAZING package !

@Gertian commented Oct 7, 2024

And as expected the update works perfectly :) !

@kellertuer

Perfect! Thanks again, both for reporting this so we could fix it and for checking back – great to hear that it now works nicely. The old way we had before was not only wrong in a few places but also misleading; the improved version now seems to work much better!
