Difficult to get low gradients #7
Comments
I do not fully grasp the problem. Mu and eta do not dictate the norm of the gradient; they are assumptions on the convexity of the function to minimize. The norm of the gradient is determined solely by the function you are trying to minimize, and it is therefore computed inside the callback you pass to the L-BFGS.
Thanks for your quick answer. I realise the gradient is provided by my function, but I find that either the function value converges to 0 before the gradients are sufficiently small, or the line search fails. Maybe my x (atomic coordinates) do not have sufficient precision? Alternatively, I wonder if I can tweak the parameters of the algorithm to make it search harder?
What precision do you have in your variables? Do you still feed double precision to stlbfgs, or have you modified it?
I'm using the standard double precision; the code is unmodified. What happens when the gradients get small is that the change in function value is even smaller. As stated in issue #8, I will try integer x, which should be feasible since I know the range of x values is below 10 in absolute value.
I have integrated your L-BFGS implementation into my chemistry code and am trying to use it to minimize the energy of molecules using a classical potential function. For some compounds it is very difficult to get the gradients small enough; for my application I need them to be under 1e-6. I tried playing with mu and eta, but with little effect.
One more piece of information: in some difficult cases, some of the gradients are always zero due to symmetry. I find that breaking the symmetry sometimes makes it possible to get a minimization that converges on function value, but still with gradients that are > 1e-4.
Suggestions welcome.