
ERROR: LoadError: MethodError: no method matching Float32(::ReverseDiff.TrackedReal{Float64, Float32, Nothing}) #615

Closed
prbzrg opened this issue Sep 2, 2021 · 2 comments

prbzrg (Member) commented on Sep 2, 2021

When I run this code:

using DiffEqFlux, DifferentialEquations, GalacticOptim, Distributions

# Small Float32 network used as the FFJORD dynamics
nn = Chain(
    Dense(1, 3, tanh),
    Dense(3, 1, tanh),
) |> f32
tspan = (0.0f0, 10.0f0)
ffjord_mdl = FFJORD(nn, tspan, Tsit5())

# 100 samples from a univariate normal as training data
data_dist = Normal(6.0f0, 0.7f0)
train_data = rand(data_dist, 1, 100)

# Negative log-likelihood of the data under the model
function loss(θ)
    logpx, λ₁, λ₂ = ffjord_mdl(train_data, θ)
    -mean(logpx)
end

# Train with ADAM, taking gradients through ReverseDiff
adtype = GalacticOptim.AutoReverseDiff()
res1 = DiffEqFlux.sciml_train(loss, ffjord_mdl.p, ADAM(0.1), adtype; maxiters=100)

I get this error:

ERROR: LoadError: MethodError: no method matching Float32(::ReverseDiff.TrackedReal{Float64, Float32, Nothing})
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  (::Type{T})(::T) where T<:Number at boot.jl:760
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
  ...
Stacktrace:
  [1] _broadcast_getindex_evalf
    @ .\broadcast.jl:648 [inlined]
  [2] _broadcast_getindex
    @ .\broadcast.jl:621 [inlined]
  [3] getindex
    @ .\broadcast.jl:575 [inlined]
  [4] copy
    @ .\broadcast.jl:922 [inlined]
  [5] materialize(bc::Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{2}, Nothing, Type{Float32}, Tuple{Matrix{ReverseDiff.TrackedReal{Float64, Float32, Nothing}}}})
    @ Base.Broadcast .\broadcast.jl:883
WARNING: both Flux and Iterators export "flatten"; uses of it in module DiffEqFlux must be qualified
WARNING: both Flux and Distributions export "params"; uses of it in module DiffEqFlux must be qualified
  [6] (::FFJORD{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}, Dense{typeof(tanh), Matrix{Float32}, Vector{Float32}}}}}, FullNormal, Tuple{Float32, Float32}, Tuple{Tsit5}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}})(x::Matrix{Float32}, p::ReverseDiff.TrackedArray{Float32, Float32, 1, Vector{Float32}, Vector{Float32}}, e::Matrix{Float32}; regularize::Bool, monte_carlo::Bool)
    @ DiffEqFlux C:\Users\Hossein Pourbozorg\.julia\packages\DiffEqFlux\N7blG\src\ffjord.jl:215
  [7] FFJORD (repeats 2 times)
    @ C:\Users\Hossein Pourbozorg\.julia\packages\DiffEqFlux\N7blG\src\ffjord.jl:192 [inlined]
  [8] loss(::ReverseDiff.TrackedArray{Float32, Float32, 1, Vector{Float32}, Vector{Float32}})
    @ Main C:\Users\Hossein Pourbozorg\Code Projects\Mine\ffjord-report-issues\iss-4.jl:14
  [9] #74
    @ C:\Users\Hossein Pourbozorg\.julia\packages\DiffEqFlux\N7blG\src\train.jl:84 [inlined]
 [10] (::GalacticOptim.var"#229#238"{OptimizationFunction{true, GalacticOptim.AutoReverseDiff, DiffEqFlux.var"#74#79"{typeof(loss)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing})(::ReverseDiff.TrackedArray{Float32, Float32, 1, Vector{Float32}, Vector{Float32}})
    @ GalacticOptim C:\Users\Hossein Pourbozorg\.julia\packages\GalacticOptim\bEh06\src\function\reversediff.jl:6
 [11] #231
    @ C:\Users\Hossein Pourbozorg\.julia\packages\GalacticOptim\bEh06\src\function\reversediff.jl:9 [inlined]
 [12] ReverseDiff.GradientTape(f::GalacticOptim.var"#231#240"{Tuple{}, GalacticOptim.var"#229#238"{OptimizationFunction{true, GalacticOptim.AutoReverseDiff, DiffEqFlux.var"#74#79"{typeof(loss)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, input::Vector{Float32}, cfg::ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float32, Float32, 1, Vector{Float32}, Vector{Float32}}})
    @ ReverseDiff C:\Users\Hossein Pourbozorg\.julia\packages\ReverseDiff\E4Tzn\src\api\tape.jl:199
 [13] gradient!(result::Vector{Float32}, f::Function, input::Vector{Float32}, cfg::ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float32, Float32, 1, Vector{Float32}, Vector{Float32}}})
    @ ReverseDiff C:\Users\Hossein Pourbozorg\.julia\packages\ReverseDiff\E4Tzn\src\api\gradients.jl:41
 [14] (::GalacticOptim.var"#230#239"{GalacticOptim.var"#229#238"{OptimizationFunction{true, GalacticOptim.AutoReverseDiff, DiffEqFlux.var"#74#79"{typeof(loss)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}})(::Vector{Float32}, ::Vector{Float32})
    @ GalacticOptim C:\Users\Hossein Pourbozorg\.julia\packages\GalacticOptim\bEh06\src\function\reversediff.jl:9
 [15] macro expansion
    @ C:\Users\Hossein Pourbozorg\.julia\packages\GalacticOptim\bEh06\src\solve\flux.jl:43 [inlined]
 [16] macro expansion
    @ C:\Users\Hossein Pourbozorg\.julia\packages\GalacticOptim\bEh06\src\solve\solve.jl:35 [inlined]
 [17] __solve(prob::OptimizationProblem{false, OptimizationFunction{false, GalacticOptim.AutoReverseDiff, OptimizationFunction{true, GalacticOptim.AutoReverseDiff, DiffEqFlux.var"#74#79"{typeof(loss)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#230#239"{GalacticOptim.var"#229#238"{OptimizationFunction{true, GalacticOptim.AutoReverseDiff, DiffEqFlux.var"#74#79"{typeof(loss)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#232#241"{GalacticOptim.var"#229#238"{OptimizationFunction{true, GalacticOptim.AutoReverseDiff, DiffEqFlux.var"#74#79"{typeof(loss)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#237#246", Nothing, Nothing, Nothing}, Vector{Float32}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::ADAM, data::Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}; maxiters::Int64, cb::Function, progress::Bool, save_best::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ GalacticOptim C:\Users\Hossein Pourbozorg\.julia\packages\GalacticOptim\bEh06\src\solve\flux.jl:41
 [18] #solve#474
    @ C:\Users\Hossein Pourbozorg\.julia\packages\SciMLBase\UIp7W\src\solve.jl:3 [inlined]
 [19] sciml_train(::typeof(loss), ::Vector{Float32}, ::ADAM, ::GalacticOptim.AutoReverseDiff; lower_bounds::Nothing, upper_bounds::Nothing, maxiters::Int64, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ DiffEqFlux C:\Users\Hossein Pourbozorg\.julia\packages\DiffEqFlux\N7blG\src\train.jl:89
 [20] top-level scope
    @ C:\Users\Hossein Pourbozorg\Code Projects\Mine\ffjord-report-issues\iss-4.jl:19
in expression starting at C:\Users\Hossein Pourbozorg\Code Projects\Mine\ffjord-report-issues\iss-4.jl:19

In this environment:

(ffjord-report-issues) pkg> status
      Status `C:\Users\Hossein Pourbozorg\Code Projects\Mine\ffjord-report-issues\Project.toml`
  [aae7a2af] DiffEqFlux v1.43.0
  [0c46a032] DifferentialEquations v6.19.0
  [31c24e10] Distributions v0.25.14
  [a75be94c] GalacticOptim v2.0.3
  [c3e4b0f8] Pluto v0.15.1

julia> versioninfo()
Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)
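
As a possible workaround, switching GalacticOptim to another AD backend might sidestep the tracked-type conversion; this is an untested assumption on my side, since the error comes from ReverseDiff's tracked values specifically:

# Untested workaround sketch: same training call, but with Zygote
# instead of ReverseDiff as the AD backend.
adtype = GalacticOptim.AutoZygote()
res1 = DiffEqFlux.sciml_train(loss, ffjord_mdl.p, ADAM(0.1), adtype; maxiters=100)
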
prbzrg (Member, Author) commented on Sep 2, 2021

I think it's related to the type conversion in FFJORD at this line: eltype(x) is Float32, so the broadcast ends up calling the Float32 constructor on ReverseDiff's tracked values, for which no method exists:

logpz = eltype(x).(reshape(logpdf(pz, z), 1, size(x, 2)))
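
To illustrate (my own minimal sketch, not FFJORD code): the values coming out of the tracked computation are plain arrays of ReverseDiff.TrackedReal{Float64, Float32, Nothing}, so broadcasting the concrete Float32 constructor over them has no matching method. Assuming that's the mechanism, the same error should reproduce outside FFJORD like this:

using ReverseDiff

# Mixing tracked Float32 parameters with Float64 scalars yields
# TrackedReal{Float64, Float32, Nothing} values, the element type
# seen in the stack trace above.
ReverseDiff.gradient(ones(Float32, 3)) do p
    vals = [p[i] * 2.0 for i in eachindex(p)]
    # MethodError: no method matching Float32(::TrackedReal{Float64, Float32, Nothing})
    sum(Float32.(vals))
end

A fix would presumably have to skip the eager eltype(x) conversion when the values are tracked, though I'm only guessing at the shape of it.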

ChrisRackauckas (Member) commented:
This got fixed.
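
For anyone hitting this later, updating DiffEqFlux past the version pinned above should pick up the fix; for example, from the Pkg REPL:

pkg> up DiffEqFlux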
