MLSL not working #187
Hmm. I can't help all that much, but I have a few questions and remarks which will hopefully bring us closer to understanding.
The only way I got the global algorithm to converge was to set a "maxtime" (via `set_maxtime`). Perhaps someone else has some better ideas. Good luck.
I did some further investigation:

```python
#!/usr/bin/python
import nlopt
import numpy as np

def myfunc(x, grad):
    print("")
    print(x[0], x[1])
    result = np.linalg.norm(x)
    print(result)
    return result

#local_opt = nlopt.opt(nlopt.LN_NELDERMEAD, 2)
#local_opt = nlopt.opt(nlopt.LN_COBYLA, 2)
local_opt = nlopt.opt(nlopt.LN_PRAXIS, 2)
#local_opt.set_lower_bounds([0., 0.])
#local_opt.set_upper_bounds([1., 1.])
local_opt.set_min_objective(myfunc)
#local_opt.set_xtol_rel(1e-4)

opt = nlopt.opt(nlopt.G_MLSL_LDS, 2)
opt.set_lower_bounds([0., 0.])
opt.set_upper_bounds([1., 1.])
opt.set_min_objective(myfunc)
opt.set_local_optimizer(local_opt)
#opt.set_xtol_rel(1e-1)
opt.set_stopval(1e-20)
#opt.set_maxtime(10.0)

# this alone converges no problem
#result1 = local_opt.optimize([1., 1.])
#print("Optimizer message: ", local_opt.last_optimize_result())
#print("Optimizer result: ", result1)

# this does not
result2 = opt.optimize([1., 1.])
print("Optimizer message: ", opt.last_optimize_result())
print("Optimizer result: ", result2)
```

It seems I can set a very, very small `stopval` (e.g. 1e-20) and it still keeps going. I made the objective function return the vector norm instead of the sum, since I was also playing around with setting the bounds to something like …
As far as I understand MLSL (which is limited), I think it'll never converge. IIRC MLSL picks a random point, starts a local optimisation there, then picks a new point, and so on. In between it does some clustering on those starting points and the final minima they converge to, and tries to pick a point such that it's likely to find a different minimum. So I think it won't converge until it believes it has found all local minima.
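The restart-and-cluster idea described above can be sketched in plain Python. This is a toy multistart, not NLopt's actual MLSL implementation; the critical distance `r_crit`, the fixed gradient-descent local search, and the parameter values are all simplifying assumptions made for illustration:

```python
import numpy as np

def local_minimize(f, x0, lr=0.1, steps=200, eps=1e-6):
    """Toy local optimizer: gradient descent with a numerical gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = x - lr * grad
    return x

def multistart(f, lo, hi, n_starts=20, r_crit=0.2, seed=0):
    """MLSL-flavoured multistart: skip starting points near known minima."""
    rng = np.random.default_rng(seed)
    minima = []
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        # single-linkage-style filter: don't restart a local search
        # from a point that would likely rediscover a known minimum
        if any(np.linalg.norm(x0 - m) < r_crit for m in minima):
            continue
        xm = local_minimize(f, x0)
        if not any(np.linalg.norm(xm - m) < r_crit for m in minima):
            minima.append(xm)
    return minima

# A 1-D double well with local minima at x = -1 and x = +1
f = lambda x: (x[0] ** 2 - 1.0) ** 2
found = multistart(f, np.array([-2.0]), np.array([2.0]))
print(sorted(round(float(m[0]), 2) for m in found))
```

Note that the loop only ever stops because `n_starts` is finite; nothing in the scheme itself tells it when the last minimum has been found, which matches the non-terminating behaviour seen in the issue.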
@pckroon Hmm. The NLopt documentation doesn't make it especially clear to me that this sort of behaviour is expected. If what you say is true, perhaps some additional comments are in order? https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#mlsl-multi-level-single-linkage For example, what exactly is its criterion for believing that "all local minima" have been found? (Short of reading the cited papers, which I will eventually do, though this tidbit of info probably should be in the documentation regardless.)
At least, that's how I interpret the documentation. And yes, more documentation seems valuable.
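For what it's worth, the classical multistart literature (Boender and Rinnooy Kan) gives a Bayesian estimate of the total number of local minima: after `n` local searches that found `w` distinct minima, the expected total is `w * (n - 1) / (n - w - 2)`, and one stopping rule is to halt once this estimate is close to `w`. Whether NLopt's MLSL uses exactly this rule is an assumption here, not something confirmed by its documentation:

```python
def estimated_total_minima(w, n):
    """Bayesian estimate (Boender & Rinnooy Kan) of the total number of
    local minima, given w distinct minima found in n local searches."""
    if n <= w + 2:
        raise ValueError("estimate only defined for n > w + 2")
    return w * (n - 1) / (n - w - 2)

# After 10 local searches finding 3 distinct minima:
print(estimated_total_minima(3, 10))   # 5.4 -> likely more minima, keep going
# After 100 searches still finding only 3:
print(estimated_total_minima(3, 100))  # ~3.13 -> probably found them all
```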
Hmm, interesting. So one is just supposed to set a maximum number of evaluations then, I assume?
@philippeller That's one way to make sure it won't just eat your CPU for eternity (another would be setting a maximum runtime). There is AFAIK no way to guarantee you've actually found the global minimum though.
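In NLopt itself the two caps mentioned above are `opt.set_maxeval(...)` and `opt.set_maxtime(...)` on the MLSL optimizer object. The same idea can also be sketched independently of NLopt by counting evaluations in a wrapper around the objective and remembering the best point seen, so the search still returns something useful when the budget runs out (a generic illustration; the class and exception names are made up for this sketch):

```python
import numpy as np

class BudgetExhausted(Exception):
    """Raised when the evaluation budget is spent."""

class CountedObjective:
    """Wrap an objective, count calls, and abort after max_evals evaluations."""
    def __init__(self, f, max_evals):
        self.f = f
        self.max_evals = max_evals
        self.n_evals = 0
        self.best_val = np.inf
        self.best_x = None

    def __call__(self, x):
        if self.n_evals >= self.max_evals:
            raise BudgetExhausted
        self.n_evals += 1
        val = self.f(x)
        if val < self.best_val:  # remember the best point seen so far
            self.best_val = val
            self.best_x = np.array(x, copy=True)
        return val

# Toy driver: random search over [0, 1]^2 that terminates on the budget
rng = np.random.default_rng(1)
obj = CountedObjective(lambda x: float(np.linalg.norm(x)), max_evals=50)
try:
    while True:
        obj(rng.uniform(0.0, 1.0, size=2))
except BudgetExhausted:
    pass
print("evaluations:", obj.n_evals, "best value:", obj.best_val)
```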
@pckroon thanks, I'll try that!
Hi,
For my problem MLSL never converges; it just keeps going. I also tested with a minimal example (the script shown above), and it does not converge either.