MLSL not working #187

Closed
philippeller opened this issue May 29, 2018 · 8 comments

@philippeller

Hi

For my problem, MLSL never converges; it just keeps going. I also tested with a minimal example and saw the same thing. The following does not converge:

import nlopt
import numpy as np

def myfunc(x, grad):
    return x[0] + x[1]

local_opt = nlopt.opt(nlopt.LN_NELDERMEAD, 2)
local_opt.set_lower_bounds([0, 0])
local_opt.set_upper_bounds([100, 100])
local_opt.set_min_objective(myfunc)
local_opt.set_xtol_rel(1e-4)

opt = nlopt.opt(nlopt.G_MLSL_LDS, 2)
opt.set_lower_bounds([0, 0])
opt.set_upper_bounds([1000, 1000])
opt.set_min_objective(myfunc)
opt.set_local_optimizer(local_opt) 
opt.set_xtol_rel(1e-1)

# this alone converges no problem
local_opt.optimize([1, 1])  # returns array([0., 0.])

# this does not
opt.optimize([1, 1])
@aphirst

aphirst commented Jun 2, 2018

Hmm. I can't help all that much, but I have a few questions and remarks that might bring us closer to understanding.

  1. Do you have any example code using G_MLSL_LDS which does work? I've tried to find some and struggled, but if we had any it would likely be very instructive.
  2. How come you're setting bounds of [0,1000] for your global problem, but only [0,100] for your local problem? On the other hand, if I set both to [0,100], I still get the same behaviour you do.
  3. I added some print() statements to your function. The initial "local" solve converges pretty much instantly. The "global" solve seems to do nothing other than constantly spin random values. I'm afraid I don't understand "MLSL" well enough to suggest why.
  4. I get similar behaviour if I change LN_NELDERMEAD to LN_COBYLA, or LN_PRAXIS. Something does seem to be preventing the global algorithm from properly using the local pass.
  5. I tried playing around with, and even removing, your tolerance values. This didn't have any effect.

The only way I got the global algorithm to converge was to set a "maxtime" (via opt.set_maxtime(1.0)), where funnily enough I get [0., 0.], which suggests that something is working right.
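
For reference, that run looked roughly like this (a minimal sketch reusing myfunc and local_opt from the example above; the 1.0-second limit is just the value I happened to try, nothing principled):

# Same global setup as before, plus a wall-clock stopping criterion.
opt = nlopt.opt(nlopt.G_MLSL_LDS, 2)
opt.set_lower_bounds([0, 0])
opt.set_upper_bounds([100, 100])
opt.set_min_objective(myfunc)
opt.set_local_optimizer(local_opt)
opt.set_maxtime(1.0)  # terminate the global search after ~1 second

x = opt.optimize([1., 1.])  # gives [0., 0.] here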

Perhaps someone else has some better ideas. Good luck.

@aphirst

aphirst commented Jun 2, 2018

I did some further investigation:

#!/usr/bin/python

import nlopt
import numpy as np

def myfunc(x, grad):
    print("")
    print(x[0], x[1])
    result = np.linalg.norm(x)
    print(result)
    return result

#local_opt = nlopt.opt(nlopt.LN_NELDERMEAD, 2)
#local_opt = nlopt.opt(nlopt.LN_COBYLA, 2)
local_opt = nlopt.opt(nlopt.LN_PRAXIS, 2)
#local_opt.set_lower_bounds([0., 0.])
#local_opt.set_upper_bounds([1., 1.])
local_opt.set_min_objective(myfunc)
#local_opt.set_xtol_rel(1e-4)

opt = nlopt.opt(nlopt.G_MLSL_LDS, 2)
opt.set_lower_bounds([0., 0.])
opt.set_upper_bounds([1., 1.])
opt.set_min_objective(myfunc)
opt.set_local_optimizer(local_opt) 
#opt.set_xtol_rel(1e-1)
opt.set_stopval(1e-20)
#opt.set_maxtime(10.0)

# this alone converges no problem
#result1 = local_opt.optimize([1,1])
#print("Optimizer message: ", local_opt.last_optimize_result())
#print("Optimizer result:  ", result1)

# this does not
result2 = opt.optimize([1.,1.])
print("Optimizer message: ", opt.last_optimize_result())
print("Optimizer result:  ", result2)

It seems I can set a very, very small stopval on the global object and get convergence almost instantly (much faster with NELDERMEAD or COBYLA than with PRAXIS, in this case). So there must be a bug either in nlopt or in how you (we) are setting up the problem.

I made the objective function return the vector norm instead of the sum, since I was also playing around with setting the bounds to something like [-a,a], in case the minimum occurring at [0,0] was having some detrimental effect. That does not seem to have been the case.

@pckroon

pckroon commented Jun 2, 2018

As far as I understand MLSL (which is not very far), I think it will never converge. IIRC, MLSL picks a random point and starts a local optimisation there, then picks a new point, and so on. In between, it does some clustering on those starting points and the minima they converge to, and tries to pick the next point such that it is likely to find a different minimum. So I think it won't converge until it has found all local minima.
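
In pseudocode, my mental model is something like this (a toy sketch of my reading of the docs, NOT NLopt's actual implementation; mlsl_sketch, local_solve and same_tol are made up for illustration, and the clustering step is hand-waved):

import math
import random

def mlsl_sketch(f, lower, upper, local_solve, max_starts=100, same_tol=1e-6):
    """Toy sketch of the MLSL idea. `local_solve(f, x0)` stands in for
    whatever local optimizer you plug in."""
    minima = []
    for _ in range(max_starts):
        # Sample a new starting point. The *_LDS variant would use a
        # low-discrepancy sequence here instead of uniform random numbers.
        x0 = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        xmin = local_solve(f, x0)
        # Record it only if it looks like a *new* local minimum. The real
        # algorithm clusters starting points up front, to avoid even
        # launching local searches that would rediscover a known minimum.
        if all(math.dist(xmin, m) > same_tol for m in minima):
            minima.append(xmin)
    return min(minima, key=f)

Under that reading, the outer loop has no intrinsic notion of "done" beyond its budget, which would explain why the global pass spins forever without maxtime/maxeval/stopval.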

@aphirst

aphirst commented Jun 2, 2018

@pckroon Hmm. The NLopt documentation doesn't make it especially clear to me that this sort of behaviour is expected. If what you say is true, perhaps some additional comments are in order?

https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#mlsl-multi-level-single-linkage

For example, what exactly is its criterion for believing that "all local minima" have been found? (Short of reading the cited papers, which I will eventually do, though this tidbit of info should probably be in the documentation regardless.)

@pckroon

pckroon commented Jun 2, 2018

At least, that's how I interpret the documentation. And yes, more documentation seems valuable.
I'd also need to read the paper to get to the bottom of this.

@philippeller
Author

Hmm, interesting. So one is just supposed to set a maximum number of evaluations then, I assume?

@pckroon

pckroon commented Jun 4, 2018

@philippeller That's one way to make sure it won't just eat your CPU for eternity (another would be setting a maximum runtime). There is, AFAIK, no way to guarantee that you've actually found the global minimum, though.
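
For the example at the top of the thread, that would be something like this (both are standard NLopt stopping criteria; the values here are arbitrary):

opt.set_maxeval(1000)  # stop after at most 1000 objective evaluations
# or, less reproducibly across machines:
opt.set_maxtime(10.0)  # stop after roughly 10 seconds of wall-clock time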

@philippeller
Author

@pckroon thanks, I'll try that!
PS: we've had very bad experiences in the past with time-based termination criteria, which make your results hardware- and state-dependent, so for my application a cap on the number of evaluations should be the best bet.
