
Using 'expand' with GPU backend #24

Closed
YotamKa opened this issue Jun 20, 2024 · 4 comments · Fixed by ITensor/ITensorTDVP.jl#85

Comments

@YotamKa

YotamKa commented Jun 20, 2024

Dear ITensorMPS team,

I found that the new 'expand' function (thanks for fixing that, btw) does not work with a GPU backend.

The error comes from line 90 of 'expand.jl', i.e.,

projectorⱼ = idⱼ - prime(basisⱼ, rinds) * dag(basisⱼ)

I tried transforming idⱼ into a CuArray, but then the trace function errors as well.
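For concreteness, here is roughly what I tried (just a sketch, assuming Adapt.jl can adapt an ITensor's storage; idⱼ, basisⱼ, and rinds are the variables from that line of expand.jl, and this is not a tested fix):

using Adapt: adapt
using CUDA: CuArray

# Move the identity tensor's storage to the GPU before forming the projector:
idⱼ = adapt(CuArray, idⱼ)
projectorⱼ = idⱼ - prime(basisⱼ, rinds) * dag(basisⱼ)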

The fix is probably straightforward.
Thanks, Yotam

Here's a minimal code to reproduce the error:

using ITensors
using ITensorMPS 
using CUDA: cu

device = cu
numtype = Float32

N = 2
sites = [siteind("S=1/2") for _ in 1:N]

# Product state |↑↑⟩ moved to the GPU:
mps = device(MPS(sites, ["↑" for _ in 1:N]))

# Single-site Sz observable as an MPO, also on the GPU:
mpo_os = OpSum()
mpo_os += "Sz", 1
mpo = device(MPO(mpo_os, sites))

# Global Krylov subspace expansion; this call triggers the error below:
psi_expanded = expand(
  mps,
  mpo;
  alg="global_krylov",
  krylovdim=2,
  cutoff=sqrt(eps(numtype))
)

And here is the error message:

ERROR: Type parameter position not defined for type `DenseVector{Float32}` and position name `eltype`.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] position(type::Type, pos::Function)
    @ NDTensors.TypeParameterAccessors ~/.julia/packages/NDTensors/vDdB4/src/lib/TypeParameterAccessors/src/position.jl:8
  [3] set_type_parameter(type::Type, pos::Function, param::Type)
    @ NDTensors.TypeParameterAccessors ~/.julia/packages/NDTensors/vDdB4/src/lib/TypeParameterAccessors/src/set_parameters.jl:14
  [4] set_eltype(::Type{SimpleTraits.Not{…}}, type::Type{DenseVector{…}}, param::Type)
    @ NDTensors.TypeParameterAccessors ~/.julia/packages/NDTensors/vDdB4/src/lib/TypeParameterAccessors/src/base/abstractarray.jl:68
  [5] set_eltype
    @ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
  [6] similartype(::Type{SimpleTraits.Not{…}}, arraytype::Type{DenseVector{…}}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/vDdB4/src/abstractarray/similar.jl:109
  [7] similartype
    @ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
  [8] promote_rule(::Type{NDTensors.Dense{Float32, Vector{…}}}, ::Type{NDTensors.Dense{Float32, CuArray{…}}})
    @ NDTensors ~/.julia/packages/NDTensors/vDdB4/src/dense/dense.jl:128
  [9] promote_type
    @ ./promotion.jl:313 [inlined]
 [10] promote_rule(::Type{NDTensors.DenseTensor{…}}, ::Type{NDTensors.DenseTensor{…}})
    @ NDTensors ~/.julia/packages/NDTensors/vDdB4/src/tensor/tensor.jl:266
 [11] promote_type
    @ ./promotion.jl:313 [inlined]
 [12] permutedims!!(R::NDTensors.DenseTensor{…}, T::NDTensors.DenseTensor{…}, perm::Tuple{…}, f::Function)
    @ NDTensors ~/.julia/packages/NDTensors/vDdB4/src/dense/densetensor.jl:198
 [13] _map!!(f::Function, R::NDTensors.DenseTensor{…}, T1::NDTensors.DenseTensor{…}, T2::NDTensors.DenseTensor{…})
    @ ITensors ~/.julia/packages/ITensors/4M2ep/src/itensor.jl:1961
 [14] map!(f::Function, R::ITensor, T1::ITensor, T2::ITensor)
    @ ITensors ~/.julia/packages/ITensors/4M2ep/src/itensor.jl:1966
 [15] copyto!
    @ ~/.julia/packages/ITensors/4M2ep/src/broadcast.jl:330 [inlined]
 [16] materialize!
    @ ./broadcast.jl:914 [inlined]
 [17] materialize!
    @ ./broadcast.jl:911 [inlined]
 [18] -(A::ITensor, B::ITensor)
    @ ITensors ~/.julia/packages/ITensors/4M2ep/src/itensor.jl:1883
 [19] expand(::Algorithm{:orthogonalize, @NamedTuple{}}, state::MPS, references::Vector{MPS}; cutoff::Float32)
    @ ITensorTDVP ~/.julia/packages/ITensorTDVP/YqF4o/src/expand.jl:90
 [20] expand
    @ ~/.julia/packages/ITensorTDVP/YqF4o/src/expand.jl:73 [inlined]
 [21] expand(state::MPS, reference::Vector{MPS}; alg::String, kwargs::@Kwargs{cutoff::Float32})
    @ ITensorTDVP ~/.julia/packages/ITensorTDVP/YqF4o/src/expand.jl:29
 [22] expand
    @ ~/.julia/packages/ITensorTDVP/YqF4o/src/expand.jl:28 [inlined]
 [23] expand(::Algorithm{…}, state::MPS, operator::MPO; krylovdim::Int64, cutoff::Float32, apply_kwargs::@NamedTuple{…})
    @ ITensorTDVP ~/.julia/packages/ITensorTDVP/YqF4o/src/expand.jl:158
 [24] expand(state::MPS, reference::MPO; alg::String, kwargs::@Kwargs{krylovdim::Int64, cutoff::Float32})
    @ ITensorTDVP ~/.julia/packages/ITensorTDVP/YqF4o/src/expand.jl:29
@mtfishman
Member

Hi @YotamKa, thanks for the report. You're on the right track that we need to convert that δ tensor to GPU, though it looks like even after you tried that it hits some other bug. We'll look into it; I'm sure it is a simple fix.
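For reference, the kind of conversion I mean is something like this (just a sketch, assuming Adapt.jl; storage_arraytype is a hypothetical helper standing in for however one queries the underlying array type of an ITensor, and the actual fix may look different):

using Adapt: adapt

# Build the identity tensor on the same device as the basis tensor,
# instead of defaulting to CPU storage. `storage_arraytype` is hypothetical.
idⱼ = adapt(storage_arraytype(basisⱼ), idⱼ)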

Also thanks for reporting this to ITensorMPS.jl! The situation around where MPS code is defined is a bit complicated, glad to hear someone is following along haha.

@mtfishman
Member

I think the issue with tr on GPU should have been fixed by ITensor/ITensors.jl#1453; are you sure your packages are up to date?
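For example, from the Julia REPL (standard Pkg commands):

using Pkg
Pkg.update()            # bring packages to their latest compatible versions
Pkg.status("NDTensors") # confirm which version is installed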

@YotamKa
Author

YotamKa commented Jun 20, 2024

Hi, that's a good point. I saw the GitHub discussion about that (tr() on GPU).
I'll update my packages and report back.

@mtfishman
Member

Should be fixed by ITensor/ITensorTDVP.jl#85. I don't see any issues with tr when I use the latest version of NDTensors.jl.
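For example, a minimal check along these lines runs cleanly for me now (a sketch using the CUDA extension of ITensors.jl):

using ITensors
using LinearAlgebra: tr
using CUDA: cu

# Trace of a GPU-backed ITensor over a matching primed/unprimed index pair:
i = Index(2)
A = cu(randomITensor(i', i))
@show tr(A)  # should run without error on the GPU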
