Benchmark refactor #321
Conversation
@firefly-cpp if you agree, I would recommend merging this PR.

@zStupan it seems like this scrutinizer configuration works: https://github.com/NiaOrg/NiaPy/blob/Attempt-to-fix-scrutinizer/.scrutinizer.yml Could you update the scrutinizer configuration to the working one, so we do not need to merge a separate PR to master and then update this PR again?

@GregaVrbancic, please merge this PR. Sorry for the late response - holidays for one day.
Summary
- Renamed `Benchmark` to `Problem` and `niapy/benchmarks` to `niapy/problems`. I feel like that makes more sense and is more general. "Benchmark" sounds like it's only for testing and might confuse a new user trying to implement their own optimization problem.
- Made `Problem` an abstract class.
- `Problem` now has a `dimension` attribute, and the lower and upper bounds are numpy arrays of size `dimension`, as opposed to floats. Because of this I also had to rewrite the local and global seeding methods of the Forest Optimization Algorithm and change a dtype parameter to float in the Firefly Algorithm, which should've been done anyway.
- If the `problem` argument of the task is a string, a new problem will be constructed from the `dimension`, `lower` and `upper` arguments, if they're provided (not None); otherwise the problem's defaults will be used. If `problem` is an instance of the `Problem` class, then that problem will be used and the dimension and bounds arguments will be ignored. The dimension, lower and upper attributes of the task are then set to those of `self.problem`.
- The evaluate method checks that the dimension of `x` matches the problem dimension and raises a `ValueError` if it doesn't. If the dimensions match, it returns the function value of `x`, which is calculated by the `_evaluate` method. I was thinking for the future that this method could also take a whole population and evaluate it either sequentially or in parallel, in batches, but for now it only does the dimension check.
- Removed the `Name` variables from problems. Instead I've implemented a `name` method in the problem base class which just returns the instance's class name. I'm planning to do this for the `Algorithm` class as well in a future pull request.
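To make the refactor concrete, here is a minimal sketch of what the abstract `Problem` base class described above could look like. This is an illustration under assumptions, not the actual PR code: the exact constructor signature, the bounds-broadcasting logic, and the `Sphere` subclass are hypothetical, while `dimension`, the numpy-array bounds, `name`, `_evaluate`, and the dimension check come from the summary.

```python
from abc import ABC, abstractmethod

import numpy as np


class Problem(ABC):
    """Sketch of an abstract optimization problem with per-dimension bounds."""

    def __init__(self, dimension, lower=-10.0, upper=10.0):
        self.dimension = dimension
        # Bounds are stored as numpy arrays of size `dimension`,
        # even when a single float is passed in.
        self.lower = np.full(dimension, lower) if np.isscalar(lower) else np.asarray(lower)
        self.upper = np.full(dimension, upper) if np.isscalar(upper) else np.asarray(upper)

    def name(self):
        # Replaces the old `Name` class variables: just the instance's class name.
        return type(self).__name__

    @abstractmethod
    def _evaluate(self, x):
        """Actual objective function, implemented by subclasses."""

    def evaluate(self, x):
        # Public entry point: dimension check first, then delegate to _evaluate.
        if x.shape[0] != self.dimension:
            raise ValueError('Dimensions do not match.')
        return self._evaluate(x)


class Sphere(Problem):
    """Hypothetical concrete subclass, for illustration only."""

    def _evaluate(self, x):
        return float(np.sum(x ** 2))
```

A new user would then subclass `Problem` and implement only `_evaluate`, while the base class handles bounds and the dimension check.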
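The `Task` construction rules (string name vs. `Problem` instance) can be sketched as below. The `_PROBLEMS` registry, the stand-in `Problem`/`Sphere` classes, and the constructor signature are assumptions for illustration; the dispatch logic itself mirrors the summary: a string builds a new problem from the provided non-None arguments, an instance is used as-is with the dimension and bounds arguments ignored.

```python
import numpy as np


class Problem:
    """Minimal stand-in for the problem base class (illustrative only)."""

    def __init__(self, dimension=10, lower=-10.0, upper=10.0):
        self.dimension = dimension
        self.lower = np.full(dimension, lower)
        self.upper = np.full(dimension, upper)


class Sphere(Problem):
    pass


# Hypothetical registry mapping problem names to problem classes.
_PROBLEMS = {'sphere': Sphere}


class Task:
    def __init__(self, problem, dimension=None, lower=None, upper=None):
        if isinstance(problem, str):
            # Build a new problem from the string name; pass only the
            # arguments that were provided (not None), so the problem's
            # defaults are used otherwise.
            kwargs = {k: v for k, v in (('dimension', dimension),
                                        ('lower', lower),
                                        ('upper', upper)) if v is not None}
            self.problem = _PROBLEMS[problem](**kwargs)
        elif isinstance(problem, Problem):
            # Use the instance as-is; dimension and bounds arguments
            # are ignored in this branch.
            self.problem = problem
        else:
            raise TypeError('problem must be a string or a Problem instance')
        # The task's attributes are set to those of self.problem.
        self.dimension = self.problem.dimension
        self.lower = self.problem.lower
        self.upper = self.problem.upper
```

Keeping the task attributes as mirrors of `self.problem` means algorithms only ever read dimension and bounds from the task, regardless of how the problem was supplied.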