Reproducible TypeError in sciml_train #278
For a small jump diffusion problem adapted from here, I get a very odd TypeError, where the "expected" type is identical to the "got" type.

To reproduce:

Error message:

Versions:
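A minimal sketch of the kind of setup being reproduced (the dynamics, parameter values, and loss below are hypothetical stand-ins using the DiffEqFlux/DiffEqJump APIs of the time, not the actual reproduction script):

```julia
using DiffEqFlux, DifferentialEquations

# Geometric Brownian motion with a state-dependent, variable-rate jump.
f(u, p, t) = p[1] .* u                        # drift
g(u, p, t) = p[2] .* u                        # diffusion
rate(u, p, t) = p[3] * u[1]                   # jump intensity
affect!(integrator) = (integrator.u[1] = integrator.u[1] / 2)

θ = [0.1, 0.05, 2.0]
sde_prob = SDEProblem(f, g, [1.0], (0.0, 1.0), θ)

function predict(θ)
    _prob = remake(sde_prob, p = θ)
    jump_prob = JumpProblem(_prob, Direct(), VariableRateJump(rate, affect!))
    solve(jump_prob, SRIW1(), saveat = 0.1)
end

function loss(θ)
    sol = predict(θ)
    sum(abs2, sol[1, :] .- 1.0), sol
end

# Differentiating through the jump machinery is where the TypeError surfaces:
DiffEqFlux.sciml_train(loss, θ, ADAM(0.05), maxiters = 100)
```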
I think Zygote doesn't know how to handle ExtendedJumpArray constructors.
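The failure can be isolated from sciml_train by calling the AD layer directly; a quick check along these lines (using the hypothetical `loss` and `θ` from the sketch above):

```julia
using Zygote

# sciml_train used Zygote for gradients by default, so this call hits the
# same code path without any of the optimizer machinery:
Zygote.gradient(p -> first(loss(p)), θ)
```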
Are there any workarounds currently that would allow a jump diffusion to be trained via sciml_train?
Quickest is to use a derivative-free optimizer, like NLopt's `LN_BOBYQA`.
I've updated the code to test this solution, but there is something wrong: NLopt only ever evaluates the loss function once. I just updated the last few lines from above:

```julia
line_counter = 0

function loss_test(θ)
    # count how many times the optimizer actually calls the loss
    global line_counter += 1
    println("Loss function evaluations: $(line_counter)")
    sol, vals = predict(θ)    # `predict` and `θ` are defined earlier in the script
    loss = 0.0
    for t in 1:length(vals)
        loss += abs2(vals[t] - 1.0)
    end
    return loss, vals
end

loss_test(θ)

using NLopt
opt = Opt(:LN_BOBYQA, length(θ))
result = DiffEqFlux.sciml_train(loss_test, θ, opt, maxeval = 500)
```

This demonstrates that the loss function is only called once, not the up to 500 evaluations requested via `maxeval`.
That could be a bug in the NLopt wrapper. I wouldn't worry about getting it fixed here, though; I'd just worry about reproducing and fixing it in GalacticOptim.jl.
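One way to narrow it down (a sketch with a hypothetical quadratic standing in for the real loss): drive the same algorithm through NLopt.jl directly. If NLopt performs many evaluations here but only one through sciml_train, the early stop is in the wrapper:

```julia
using NLopt

counter = Ref(0)

# NLopt's native objective signature is (x, grad); grad is unused by the
# derivative-free LN_BOBYQA.
function nlopt_loss(x, grad)
    counter[] += 1
    return sum(abs2, x .- 1.0)
end

opt = Opt(:LN_BOBYQA, 3)
opt.min_objective = nlopt_loss
opt.maxeval = 500
minf, minx, ret = optimize(opt, zeros(3))
@show counter[]    # should be well above 1 when NLopt itself is driving
```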
sciml_train has been deprecated and removed, and this is fixed in Optimization.jl.
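For reference, a sketch of the equivalent call through the Optimization.jl interface (again with a hypothetical stand-in loss; the exact solve signature may differ slightly across versions):

```julia
using Optimization, OptimizationNLopt
import NLopt

# Optimization.jl objectives take (θ, p); this quadratic is a hypothetical
# stand-in for the real jump-diffusion loss.
loss(θ, p) = sum(abs2, θ .- 1.0)

optf = OptimizationFunction(loss)      # no AD backend needed for a derivative-free method
prob = OptimizationProblem(optf, zeros(3))

# `maxiters` is mapped onto NLopt's maxeval by the wrapper.
sol = solve(prob, NLopt.Opt(:LN_BOBYQA, 3), maxiters = 500)
@show sol.objective
```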