Expose Ipopt's new_x argument #34

Closed

rgiordan opened this issue Apr 28, 2015 · 6 comments

@rgiordan

The C++ interface to Ipopt has a new_x argument to the evaluation functions (eval_f, eval_grad_f, etc.), which lets the user avoid re-computing costly derivatives (e.g. when an expensive objective function calculates the value and the derivative simultaneously). Would it be possible to expose this feature in the Julia interface?

@mlubin
Member

mlubin commented Apr 28, 2015

This is easy and cheap to emulate by storing the previous input vector and checking whether it's equal to the new one. We'd accept a PR for this, but in any case I'd recommend coding to the MathProgBase interface, which would let you easily switch between solvers (Ipopt, KNITRO, NLopt).

@rgiordan
Author

OK, thank you, I'll look into MathProgBase. For my own understanding, though: how would you store the previous value? I can't seem to access variables defined outside the eval functions from inside them. (An example is below.)

using Ipopt

prev_x = zeros(2)
prev_v = 0.0
# Try to reuse the previous objective value when x is unchanged.
function eval_f(x::Vector{Float64})
  if (x == prev_x)
    return prev_v
  else
    prev_x = x
    prev_v = x[1]^2 + x[2]^2
  end
  prev_v
end

function eval_grad_f(x::Vector{Float64}, grad_f::Vector{Float64})
  grad_f[1] = 2 * x[1]
  grad_f[2] = 2 * x[2]
end

function intermediate(alg_mod::Int, iter_count::Int, 
  obj_value::Float64, inf_pr::Float64, inf_du::Float64, mu::Float64,
  d_norm::Float64, regularization_size::Float64, alpha_du::Float64, alpha_pr::Float64, 
  ls_trials::Int)
  println("Iteration $iter_count, objective value is $obj_value.")
  return true
end

n = 2
x_L = [1.0, 1.0]
x_U = [2.0, 2.0]

# The problem has no constraints, so the constraint and Jacobian callbacks are empty.
function eval_no_jac_g(x, mode, rows, cols, values)
end

function eval_no_g(x, g)
end

prob = createProblem(n, x_L, x_U, 0, Array(Float64, 0), Array(Float64, 0), 0, 2,
                     eval_f, eval_no_g, eval_grad_f, eval_no_jac_g)
addOption(prob, "hessian_approximation", "limited-memory")

prob.x = [1.5, 2.5]
stat = solveProblem(prob)
# Gives the error:
# ERROR: prev_x not defined
#  in eval_f at none:2
#  in eval_f_wrapper at /home/rgiordan/.julia/v0.3/Ipopt/src/Ipopt.jl:86

@IainNZ
Contributor

IainNZ commented Apr 28, 2015

Try prev_x[:] = x[:]; that way it overwrites the contents of the "global" prev_x in place instead of rebinding the name.
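
For reference, a minimal sketch of that pattern, assuming the same prev_x/prev_v globals as in the example above (the NaN initialization and the global declaration for prev_v are additions, so the first call never gets a false cache hit and the scalar can be reassigned):

prev_x = fill(NaN, 2)   # NaN never compares equal, so the first call always recomputes
prev_v = 0.0

function eval_f(x::Vector{Float64})
  global prev_v              # so the assignment below updates the global scalar
  if x == prev_x
    return prev_v            # cache hit: reuse the stored objective value
  end
  prev_x[:] = x              # overwrite the existing global array; no rebinding
  prev_v = x[1]^2 + x[2]^2
  prev_v
end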

@IainNZ
Contributor

IainNZ commented Apr 28, 2015

If you look in the source you'll see that I did wrap the C interface, including new_x, already. However, I punted on exposing it to user-facing code because I didn't know how I wanted it to look in Julia and I didn't have a need for it.

@rgiordan
Author

Thanks, using the brackets to update the global prev_x in place works fine. Unless I'm mistaken, some sort of similar global-variable trick will also be necessary with MathProgBase.

Yes, I noticed that you had new_x in the C interface but hadn't exposed it to Julia.

@mlubin
Member

mlubin commented Apr 29, 2015

It's not very well documented at this point, but you wouldn't use globals with MathProgBase; you would store the vector inside your NLPEvaluator object, so there are no type-inference or performance issues to worry about. See the discussion here: JuliaNLSolvers/Optim.jl#107
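
A rough sketch of that idea against the MathProgBase nonlinear interface; the CachedEvaluator type, its field names, and the NaN initialization here are just illustrative choices:

using MathProgBase

type CachedEvaluator <: MathProgBase.AbstractNLPEvaluator
  prev_x::Vector{Float64}   # cached input vector, stored in the evaluator itself
  prev_v::Float64           # cached objective value
end
CachedEvaluator(n) = CachedEvaluator(fill(NaN, n), NaN)

function MathProgBase.eval_f(d::CachedEvaluator, x)
  if x == d.prev_x
    return d.prev_v          # cache hit: reuse the stored objective value
  end
  copy!(d.prev_x, x)         # the cache lives in the evaluator, not in a global
  d.prev_v = x[1]^2 + x[2]^2
  d.prev_v
end

function MathProgBase.eval_grad_f(d::CachedEvaluator, grad_f, x)
  grad_f[1] = 2 * x[1]
  grad_f[2] = 2 * x[2]
end

A complete evaluator would also implement the rest of the required methods (initialize, features_available, the constraint callbacks, and so on); this only shows where the cached vector lives.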
