Cannot control the number of max_evals or max_iters #376
Comments
Hello, you are correct. The reason your first example only logs 5 evaluations is that it only logs a fitness value when there has been an improvement to the current best global solution. In this case it ran 10 evaluations, and there was no improvement after the 5th one. I'd say in your case you're better off going with max_iters as your stopping condition.
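A minimal sketch of that setup, assuming the NiaPy 2.x Task/FireflyAlgorithm API and using the built-in Sphere problem as a stand-in for the custom CatBoost feature-selection problem from the report:

```python
from niapy.task import Task, OptimizationType
from niapy.problems import Sphere  # stand-in for the custom feature-selection problem
from niapy.algorithms.basic import FireflyAlgorithm

# Stop after 10 full iterations (generations) rather than after 10 fitness evaluations.
task = Task(problem=Sphere(dimension=10), max_iters=10,
            optimization_type=OptimizationType.MINIMIZATION, enable_logging=True)
algorithm = FireflyAlgorithm(population_size=30)
best_x, best_fitness = algorithm.run(task)
print(best_fitness)
```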
Oh, I got it. Could you help me explain why the number of evaluations was 2423 when I set max_iters = 10? Many thanks,
You could do
The output of the logger is the number of evaluations at which the fitness improved on the current global best fitness. It actually performed more evaluations, but there was no improvement after the 2423rd evaluation. For example, if we ran DE with population size 100 and max_iters = 10, it would perform 100 evaluations for the initial population plus 100 per iteration, 1100 in total, but only the evaluations that improved the best fitness would show up in the log.
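If you want the true evaluation count rather than the logged improvements, one option (assuming the Task object exposes the evals and iters counters, as in NiaPy 2.x) is to read them off the task after the run:

```python
from niapy.task import Task
from niapy.problems import Sphere  # stand-in problem
from niapy.algorithms.basic import DifferentialEvolution

task = Task(problem=Sphere(dimension=10), max_iters=10)
algorithm = DifferentialEvolution(population_size=100)
algorithm.run(task)

# The task keeps its own counters, independent of what the logger printed.
print(task.evals)  # total number of fitness evaluations actually performed
print(task.iters)  # number of iterations (generations) performed
```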
Thank you very much for the quick and detailed explanation. I would like to force the algorithm to run with max_iters so I can compare it with metaheuristic algorithms in other packages (a whale optimization algorithm in another library, for instance) on the same axes of iterations and MSE. It seems we cannot get the desired number of iterations in this case. I just wonder: in my example using the Firefly Algorithm with population size 30 and max_iters = 10, the number of evaluations should be (30 + 10 * 30) = 330, but the algorithm returned ~2500 evaluations. So I missed something here, right?
The Firefly Algorithm is a bit different, because in each iteration there are nested for loops which calculate the distances between each pair of fireflies, update them and evaluate them again, so it performs a lot more function evaluations.
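A toy sketch of that inner loop (not NiaPy's actual implementation) shows why a single Firefly Algorithm iteration can cost far more than population_size evaluations:

```python
import numpy as np

rng = np.random.default_rng(0)
population_size, dimension = 30, 5
population = rng.uniform(-5, 5, (population_size, dimension))

def evaluate(x):
    # Stand-in objective (sphere function); the real problem would be the regressor's MSE.
    return float(np.sum(x ** 2))

fitness = np.array([evaluate(x) for x in population])
evals = population_size  # evaluations spent on the initial population

# One simplified iteration: every firefly i is compared with every firefly j,
# and whenever j is brighter, i is moved toward j and re-evaluated.
for i in range(population_size):
    for j in range(population_size):
        if fitness[j] < fitness[i]:  # j is brighter (minimization)
            population[i] += 0.5 * (population[j] - population[i])  # simplified move, no attractiveness/randomness terms
            fitness[i] = evaluate(population[i])
            evals += 1

print(evals)  # typically several hundred evaluations for just one iteration
```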
So, if I want to compare the optimization performance of various algorithms (say, Firefly, PSO, ...) using the convergence speed (for MSE) over a number of iterations, could you advise me on the approach in this case?
Hello, my sincerest apologies for the late response. We currently only support convergence by number of evaluations, so unfortunately the best you can do right now is to use max_evals, set it to, say, 100 000 for each algorithm, and compare the results that way. If you just want the data for the graph, I'll try to add convergence by iterations by the end of this week; it shouldn't be complicated. Then we can hopefully solve this issue.
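A minimal sketch of that workaround, assuming the NiaPy 2.x API and a stand-in problem: give every algorithm the same evaluation budget and compare the final fitness values.

```python
from niapy.task import Task, OptimizationType
from niapy.problems import Sphere  # stand-in for the custom feature-selection problem
from niapy.algorithms.basic import FireflyAlgorithm, ParticleSwarmOptimization, DifferentialEvolution

algorithms = [FireflyAlgorithm(), ParticleSwarmOptimization(), DifferentialEvolution()]
results = {}
for algorithm in algorithms:
    # A fresh task per algorithm so each one gets the same budget of 100 000 evaluations.
    task = Task(problem=Sphere(dimension=10), max_evals=100_000,
                optimization_type=OptimizationType.MINIMIZATION)
    best_x, best_fitness = algorithm.run(task)
    results[type(algorithm).__name__] = best_fitness

print(results)
```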
Understood, and I will wait for your next update. Many thanks,
@hanamthang I've submitted a PR that will hopefully provide the features you are looking for. You'll be able to get the convergence data from the task and plot a convergence graph from it (see the example in the next comment).
Now in version 2.0.1 you could do something like:

```python
algorithms = [FireflyAlgorithm(), ParticleSwarmOptimization(), DifferentialEvolution()]
convergence = []
for algorithm in algorithms:
    task = Task(problem, max_iters=50)
    best_x, best_mse = algorithm.run(task)
    _, fitness = task.convergence_data()  # fitness = array of best fitness at each iteration
    convergence.append(fitness)

ff_fit, pso_fit, de_fit = convergence  # unpack
# plot ...
```
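Continuing from the snippet above, the final plotting step could look something like this matplotlib sketch (the labels are just placeholders):

```python
import matplotlib.pyplot as plt

# Plot the best fitness (MSE) per iteration for each algorithm on the same axes.
for label, fitness in zip(["Firefly", "PSO", "DE"], convergence):
    plt.plot(fitness, label=label)
plt.xlabel("Iteration")
plt.ylabel("Best MSE")
plt.legend()
plt.show()
```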
Thank you so much for your work and the example code. I will test it with the new version of NiaPy. You can close the issue now. Many thanks,
Hi,
Thank you very much for your hard work creating NiaPy, which has great potential to improve my research :-))))
Could you please explain the difference between the parameters max_evals and max_iters in the task, and how I can control them? I tried to minimize the mean square error during feature selection using the Firefly Algorithm and a CatBoost regressor. However, when I set max_evals = 10 as follows:
task = Task(problem, max_evals=10, optimization_type=OptimizationType.MINIMIZATION, enable_logging=True)
it only ran 5 evaluations. And when I set max_iters = 10 as follows:
task = Task(problem, max_iters=10, optimization_type=OptimizationType.MINIMIZATION, enable_logging=True)
it ran 2423 evaluations. It seems a stopping criterion was set and the algorithm only stopped when it reached this criterion. Is that correct? And should I use max_evals or max_iters in the case of my problem?
Many thanks,
Thang