PyPop7 is a Pure-PYthon library of POPulation-based OPtimization for single-objective, real-parameter, black-box problems. Its goal is to provide a unified interface and a set of elegant algorithmic implementations (e.g., evolutionary algorithms, swarm-based optimizers, and pattern search) for Black-Box Optimization (BBO), particularly population-based optimizers, in order to facilitate research repeatability, wide benchmarking of BBO, and especially real-world applications.

More specifically, to alleviate the curse of dimensionality, one focus of PyPop7 is to cover the state of the art of these optimizers for Large-Scale Optimization (LSO), though many of their small- and medium-scale versions and variants are also included (mainly for theoretical, benchmarking, or educational purposes). For a growing list of public use cases of PyPop7, please refer to this online website for details. Although we have chosen the GPL-3.0 license, anyone may use, modify, and improve this open-source library entirely freely for any purpose (whether open-source or closed-source).
The following three steps are enough to utilize the black-box optimization power of this open-source pure-Python library PyPop7:
```bash
$ pip install pypop7
```
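A quick way to check that the installation succeeded is to import the package in a Python interpreter (this snippet is only a sanity check, not one of the three steps):

```python
# verify that pypop7 can be imported after installation
import pypop7
print(pypop7.__file__)  # prints where the package was installed
```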
For PyPop7, the number 7 was added simply because the name pypop had already been registered by others on PyPI. The butterfly icon of PyPop7 alludes to the book (with butterflies on its cover) by Fisher ("the greatest of Darwin's successors"), The Genetical Theory of Natural Selection, which directly inspired Prof. Holland's proposal of Genetic Algorithms (GA).
- Define the objective/cost/loss/error/fitness function to be minimized for the optimization problem at hand (in this library, the term fitness function is used, following the established terminology of evolutionary computation):

```python
import numpy as np  # for numerical computation, also the computing engine of pypop7

# Rosenbrock, a notorious test function from the optimization community
def rosenbrock(x):
    return 100.0 * np.sum(np.square(x[1:] - np.square(x[:-1]))) + np.sum(np.square(x[:-1] - 1.0))

# define the fitness function to be *minimized* along with its problem settings
ndim_problem = 1000
problem = {'fitness_function': rosenbrock,
           'ndim_problem': ndim_problem,  # dimension
           'lower_boundary': -5.0 * np.ones((ndim_problem,)),  # lower search boundary
           'upper_boundary': 5.0 * np.ones((ndim_problem,))}  # upper search boundary
```
Without loss of generality, only minimization is considered here, since any maximization problem can be converted into a minimization problem simply by negating its objective function, as sketched below.
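As an illustration, here is a minimal sketch of such a negation wrapper (the function names are purely illustrative and not part of the PyPop7 API):

```python
import numpy as np

def score(x):  # an illustrative objective to be *maximized*
    return -np.sum(np.square(x - 2.0))

def negated_score(x):  # pass this wrapper to PyPop7, which *minimizes* it
    return -score(x)
```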
- Run one or more black-box optimizers on the above optimization problem (rosenbrock):

```python
# choose, e.g., LM-MA-ES owing to its low complexity and metric-learning ability for LSO:
# -> please refer to https://pypop.readthedocs.io/en/latest/es/lmmaes.html for more details
from pypop7.optimizers.es.lmmaes import LMMAES  # Limited-Memory Matrix Adaptation ES

# define algorithm options (which may differ in details among different optimizers)
options = {'fitness_threshold': 1e-10,  # terminate when the best-so-far fitness is <= this threshold
           'max_runtime': 3600.0,  # terminate when the actual runtime >= 1 hour (3600 seconds)
           'seed_rng': 0,  # seed of random number generation (should be set for repeatability)
           'x': 4.0 * np.ones((ndim_problem,)),  # initial mean of the search/mutation distribution
           'sigma': 3.0,  # initial global step-size of the distribution (not necessarily optimal)
           'verbose': 500}
lmmaes = LMMAES(problem, options)  # initialize the black-box optimizer under a unified API
results = lmmaes.optimize()  # run its (often time-consuming) randomized evolution process
print(results)
```
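The returned results is a Python dict; a minimal sketch of inspecting it is shown below (the key names follow the online documentation and should be double-checked against your installed version):

```python
# inspect the dict of results returned by optimize()
print(results['best_so_far_y'])  # best-so-far fitness found during evolution
print(results['best_so_far_x'])  # best-so-far solution (a NumPy array of length ndim_problem)
print(results['n_function_evaluations'])  # number of fitness evaluations actually used
```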
Please refer to https://pypop.rtfd.io/ for the online documentation of this (we dare say well-designed) Python library for black-box optimization, which has also drawn praise from others.
"Optimization algorithms are often designed for a specific type of search space, exploiting its specific structure." ... "The main lesson of the development of our field in the last few decades is that efficient optimization methods can be developed only by intelligently employing the structure of particular instances of problems."
In the following list of optimizers, three labels are used to classify each algorithm:
- the first label indicates the specific BBO version for LSO (e.g., dimension > 100; this is not an absolute threshold, since it depends on the problem and the available time),
- the second label indicates the competitive or de facto BBO version for small- or medium-dimensional problems (though it may also work well under certain LSO circumstances),
- the third label indicates the baseline BBO version, mainly of theoretical and/or educational interest owing to its algorithmic simplicity (usually relatively easy to analyze mathematically).
Note that the above classification, based only on the dimension of the objective function, is just a very rough guideline for algorithm selection. In practice, perhaps the simplest approach to algorithm selection is trial-and-error, as sketched below. Beyond that, you can try more advanced Automated Algorithm Selection techniques.
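For instance, here is a minimal sketch of such a trial-and-error comparison of several ES variants under the unified API (the module paths and option names follow the library's documented naming conventions, but please verify them against the online documentation of each optimizer):

```python
import numpy as np
from pypop7.optimizers.es.lmmaes import LMMAES
from pypop7.optimizers.es.maes import MAES
from pypop7.optimizers.es.sepcmaes import SEPCMAES

ndim_problem = 100
problem = {'fitness_function': lambda x: np.sum(np.square(x)),  # simple sphere function
           'ndim_problem': ndim_problem,
           'lower_boundary': -5.0 * np.ones((ndim_problem,)),
           'upper_boundary': 5.0 * np.ones((ndim_problem,))}
options = {'max_function_evaluations': 100000,  # shared termination condition
           'seed_rng': 2022,  # for repeatability
           'sigma': 3.0}  # initial global step-size, required by ES-family optimizers
for optimizer_class in [LMMAES, MAES, SEPCMAES]:  # try each optimizer on the same problem
    results = optimizer_class(problem, dict(options)).optimize()  # pass a fresh copy of the shared options
    print(optimizer_class.__name__, results['best_so_far_y'])
```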
Clearly, this is an algorithm-centric rather than a benchmarking-centric Python library devoted solely to black-box optimization, though proper benchmarking is also crucial for BBO, as shown below.
- Evolution Strategies (ES) [e.g., Ollivier et al., 2017, JMLR; Hansen et al., 2015; Bäck et al., 2013; Rudolph, 2012; Beyer&Schwefel, 2002; Rechenberg, 1989; Schwefel, 1984, etc.]
- Mixture Model-based ES (MMES) [He et al., 2021, TEVC]
- Limited-Memory Matrix Adaptation ES (LMMAES) [Loshchilov et al., 2019, TEVC]
- Limited Memory Covariance Matrix Adaptation (LMCMA) [Loshchilov, 2017, ECJ]
- Limited Memory Covariance Matrix Adaptation ES (LMCMAES) [Loshchilov, 2014, GECCO]
- Rank-M ES (RMES) [Li&Zhang, 2018, TEVC; Li&Zhang, 2016, PPSN]
- Rank-1 ES (R1ES) [Li&Zhang, 2018, TEVC; Li&Zhang, 2016, PPSN]
- Cholesky-CMA-ES-2016 (CCMAES2016) [Krause et al., 2016, NeurIPS]
- (1+1)-Active-Cholesky-CMA-ES-2015 (OPOA2015) [Krause&Igel, 2015, FOGA]
- (1+1)-Active-Cholesky-CMA-ES-2010 (OPOA2010) [Arnold&Hansen, 2010, GECCO; Jastrebski&Arnold, 2006, CEC]
- Cholesky-CMA-ES-2009 (CCMAES2009) [Suttorp et al., 2009, MLJ]
- (1+1)-Cholesky-CMA-ES-2009 (OPOC2009) [Suttorp et al., 2009, MLJ]
- (1+1)-Cholesky-CMA-ES-2006 (OPOC2006) [Igel et al., 2006, GECCO]
- Separable Covariance Matrix Adaptation ES (SEPCMAES) [Bäck et al., 2013; Ros&Hansen, 2008, PPSN]
- Diagonal Decoding Covariance Matrix Adaptation (DDCMA) [Akimoto&Hansen, 2020, ECJ]
- Matrix Adaptation ES (MAES) [Beyer&Sendhoff, 2017, TEVC]
- Fast Matrix Adaptation ES (FMAES) [Beyer, 2020, GECCO; Loshchilov et al., 2019, TEVC]
- Covariance Matrix Adaptation ES (CMAES) [Hansen, 2023/2016; Hansen et al., 2003, ECJ; Hansen et al., 2001, ECJ; Hansen&Ostermeier, 1996, CEC]
- Self-Adaptation Matrix Adaptation ES (SAMAES) [Beyer, 2020, GECCO]
- Self-Adaptation ES (SAES) [e.g., Beyer, 2020, GECCO; Beyer, 2007, Scholarpedia]
- Cumulative Step-size Adaptation ES (CSAES) [e.g., Hansen et al., 2015; Ostermeier et al., 1994, PPSN]
- Derandomized Self-Adaptation ES (DSAES) [e.g., Hansen et al., 2015; Ostermeier et al., 1994, ECJ]
- Schwefel's Self-Adaptation ES (SSAES) [e.g., Hansen et al., 2015; Beyer&Schwefel, 2002; Schwefel, 1988; Schwefel, 1984, AOR]
- Rechenberg's (1+1)-ES with 1/5th success rule (RES) [e.g., Hansen et al., 2015; Kern et al., 2004, NACO; Rechenberg, 1989; Rechenberg, 1984; Schumer&Steiglitz, 1968, TAC]
- Natural ES (NES) [e.g., Hüttenrauch&Neumann, 2024, JMLR; Wierstra et al., 2014, JMLR; Yi et al., 2009, ICML; Wierstra et al., 2008, CEC]
- Projection-based Covariance Matrix Adaptation (VKDCMA) [Akimoto&Hansen, 2016, PPSN; Akimoto&Hansen, 2016, GECCO]
- Linear Covariance Matrix Adaptation (VDCMA) [Akimoto et al., 2014, GECCO]
- Rank-1 NES (R1NES) [Sun et al., 2013, GECCO]
- Separable NES (SNES) [Schaul et al., 2011, GECCO]
- Exponential NES (XNES) [Glasmachers et al., 2010, GECCO]
- Exact NES (ENES) [Sun et al., 2009, ICML]
- Original NES (ONES) [Wierstra et al., 2008, CEC]
- Search Gradient-based ES (SGES) [Wierstra et al., 2008, CEC]
- Estimation of Distribution Algorithms (EDA) [e.g., Brookes et al., 2020, GECCO; Larrañaga&Lozano, 2002; Pelikan et al., 2002; Mühlenbein&Paaß, 1996, PPSN; Baluja&Caruana, 1995, ICML]
- Random-Projection EDA (RPEDA) [Kabán et al., 2016, ECJ]
- Univariate Marginal Distribution Algorithm (UMDA) [Larrañaga&Lozano, 2002; Mühlenbein, 1997, ECJ]
- Adaptive Estimation of Multivariate Normal Algorithm (AEMNA) [Larrañaga&Lozano, 2002]
- Estimation of Multivariate Normal Algorithm (EMNA) [Larrañaga&Lozano, 2002]
- Cross-Entropy Method (CEM) [e.g., Rubinstein&Kroese, 2016; Hu et al., 2007, OR; Kroese et al., 2006, MCAP; De Boer et al., 2005, AOR; Rubinstein&Kroese, 2004]
- Differentiable CEM (DCEM) [Amos&Yarats, 2020, ICML]
- Dynamic-Smoothing CEM (DSCEM) [Kroese et al., 2006, MCAP]
- Model Reference Adaptive Search (MRAS) [Hu et al., 2007, OR]
- Standard CEM (SCEM) [e.g. Kroese et al., 2006, MCAP]
- Differential Evolution (DE) [e.g., Price, 2013; Price et al., 2005; Storn&Price, 1997, JGO]
- Success-History based Adaptive DE (SHADE) [Tanabe&Fukunaga, 2013, CEC]
- Adaptive DE (JADE) [Zhang&Sanderson, 2009, TEVC]
- Composite DE (CODE) [Wang et al., 2011, TEVC]
- Trigonometric-mutation DE (TDE) [Fan&Lampinen, 2003, JGO]
- Classic DE (CDE) [e.g. Storn&Price, 1997, JGO]
- Particle Swarm Optimizer (PSO) [e.g., Fornasier et al., 2021, JMLR; Bonyadi&Michalewicz, 2017, ECJ; Rahmat-Samii et al., 2012, PIEEE; Escalante et al., 2009, JMLR; Dorigo et al., 2008; Poli et al., 2007, SI; Shi&Eberhart, 1998, CEC; Kennedy&Eberhart, 1995, ICNN]
- Cooperative Coevolving PSO (CCPSO2) [Li&Yao, 2012, TEVC]
- Incremental PSO (IPSO) [de Oca et al., 2011, TSMCB]
- Cooperative PSO (CPSO) [Van den Bergh&Engelbrecht, 2004, TEVC]
- Comprehensive Learning PSO (CLPSO) [Liang et al., 2006, TEVC]
- Standard PSO with a Local (ring) topology (SPSOL) [e.g. Shi&Eberhart, 1998, CEC; Kennedy&Eberhart, 1995, ICNN]
- Standard PSO with a global topology (SPSO) [e.g. Shi&Eberhart, 1998, CEC; Kennedy&Eberhart, 1995, ICNN]
- Cooperative Co-evolution (CC) [e.g., Gomez et al., 2008, JMLR; Panait et al., 2008, JMLR; Moriarty&Miikkulainen, 1995, ICML; Potter&De Jong, 1994, PPSN]
- Hierarchical CC (HCC) [Mei et al., 2016, ACM-TOMS; Gomez&Schmidhuber, 2005, ACM-GECCO]
- CoOperative CMA (COCMA) [Mei et al., 2016, ACM-TOMS; Potter&De Jong, 1994, PPSN]
- CoOperative co-Evolutionary Algorithm (COEA) [e.g. Panait et al., 2008, JMLR; Potter&De Jong, 1994, PPSN]
- CoOperative SYnapse NeuroEvolution (COSYNE) [Gomez et al., 2008, JMLR; Moriarty&Miikkulainen, 1995, ICML]
- Simulated Annealing (SA) [e.g., Bertsimas&Tsitsiklis, 1993, Statistical Science; Kirkpatrick et al., 1983, Science; Hastings, 1970, Biometrika; Metropolis et al., 1953, JCP]
- Enhanced SA (ESA) [Siarry et al., 1997, TOMS]
- Corana et al.'s SA (CSA) [Corana et al., 1987, TOMS]
- Noisy SA (NSA) [Bouttier&Gavra, 2019, JMLR]
- Genetic Algorithms (GA) [e.g., Forrest, 1993, Science; Holland, 1973, SICOMP; Holland, 1962, JACM]
- Global and Local genetic algorithm (GL25) [García-Martínez et al., 2008, EJOR]
- Generalized Generation Gap with Parent-Centric Recombination (G3PCX) [Deb et al., 2002, ECJ]
- GENetic ImplemenTOR (GENITOR) [e.g. Whitley et al., 1993, MLJ]
- Evolutionary Programming (EP) [e.g., Yao et al., 1999, TEVC; Fogel, 1994, Statistics and Computing]
- Lévy distribution based EP (LEP) [Lee&Yao, 2004, TEVC]
- Fast EP (FEP) [Yao et al., 1999, TEVC]
- Classical EP (CEP) [e.g. Yao et al., 1999, TEVC; Bäck&Schwefel, 1993, ECJ]
- Direct Search (DS) [e.g. Powell, 1998, Acta-Numerica; Wright, 1996; Hooke&Jeeves, 1961, JACM]
- Powell's search method (POWELL) [SciPy; Powell, 1964, Computer]
- Generalized Pattern Search (GPS) [Kochenderfer&Wheeler, 2019; Torczon, 1997, SIAM-JO]
- Nelder-Mead simplex method (NM) [Dean et al., 1975, Science; Nelder&Mead, 1965, Computer]
- Hooke-Jeeves direct search method (HJ) [Kochenderfer&Wheeler, 2019; Kaupe, 1963, CACM; Hooke&Jeeves, 1961, JACM]
- Coordinate/Compass Search (CS) [Torczon, 1997, SIAM-JO; Fermi&Metropolis, 1952]
- Random (stochastic) Search (RS) [e.g., Murphy, 2023; Gao&Sener, 2022, ICML; Russell&Norvig, 2021; Nesterov&Spokoiny, 2017, FoCM; Bergstra&Bengio, 2012, JMLR; Schmidhuber et al., 2001; Cvijović&Klinowski, 1995, Science; Rastrigin, 1986; Solis&Wets, 1981, MOOR; Brooks, 1958, OR; Ashby, 1952]
- BErnoulli Smoothing (BES) [Gao&Sener, 2022, ICML]
- Gaussian Smoothing (GS) [Nesterov&Spokoiny, 2017, FoCM]
- Simple Random Search (SRS) [Rosenstein&Barto, 2001, IJCAI]
- Annealed Random Hill Climber (ARHC) [e.g. Russell&Norvig, 2021; Schaul et al., 2010, JMLR]
- Random Hill Climber (RHC) [e.g. Russell&Norvig, 2021; Schaul et al., 2010, JMLR]
- Pure Random Search (PRS) [e.g. Bergstra&Bengio, 2012, JMLR; Schmidhuber et al., 2001; Brooks, 1958, OR; Ashby, 1952]
For new or missing BBO algorithms, we provide a unified API through which they can be freely added, provided they satisfy the design philosophy widely recognized in the scientific research community. Note that Ant Colony Optimization (ACO) and Tabu Search (TS) are currently not covered in this library, since in many cases they work well mainly in discrete or combinatorial search spaces. Furthermore, brute-force (exhaustive) search and grid search are also excluded, since they work only in very low dimensions (typically < 10). In a near-future version, we will consider adding others (e.g., Simultaneous Perturbation Stochastic Approximation (SPSA)) to this open-source library. Please refer to the development guide for more details.
For large-scale optimization (LSO), computational efficiency is an indispensable performance criterion of BBO/DFO/ZOO in the post-Moore era. To achieve the highest possible computational performance, NumPy is heavily used in this library as the basis of numerical computation, along with SciPy and scikit-learn. Numba is also sometimes utilized to further reduce wall-clock time, and a user-defined fitness function can likewise be compiled with Numba before being passed to an optimizer, as sketched below.
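For example, here is a minimal sketch of compiling the above rosenbrock fitness function with Numba on the user side (assuming the optional numba package is installed; whether this actually pays off depends on the problem and its dimension):

```python
import numpy as np
from numba import jit  # optional dependency, installed separately from pypop7

@jit(nopython=True)  # compile the fitness function to machine code on its first call
def rosenbrock_numba(x):
    return 100.0 * np.sum(np.square(x[1:] - np.square(x[:-1]))) + np.sum(np.square(x[:-1] - 1.0))

# the compiled function can then be used as 'fitness_function' in the problem dict
problem = {'fitness_function': rosenbrock_numba,
           'ndim_problem': 1000,
           'lower_boundary': -5.0 * np.ones((1000,)),
           'upper_boundary': 5.0 * np.ones((1000,))}
```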
The first-level folder structure of this library PyPop7 is presented below:
- `.circleci`: for automatic testing based on pytest.
  - `config.yml`: configuration file for CircleCI.
- `.github`: all configuration files for GitHub.
  - `workflows`: for https://github.com/Evolutionary-Intelligence/pypop/actions.
- `docs`: for online documentation.
- `pypop7`: all Python source code of BBO.
- `tutorials`: a set of tutorials.
- `.gitignore`: for GitHub.
- `.readthedocs.yaml`: for readthedocs.
- `CODE_OF_CONDUCT.md`: code of conduct.
- `LICENSE`: open-source license.
- `README.md`: basic information of this library.
- `pyproject.toml`: for PyPI (used by `setup.cfg` as build-system).
- `requirements.txt`: only for development.
- `setup.cfg`: for PyPI (used via `pyproject.toml`).
For each population-based algorithm family, we provide several representative applications published in some (rather than all) top-tier journals and conferences (such as Nature, Science, PNAS, PRL, JACS, JACM, PIEEE, JMLR, ICML, NeurIPS, ICLR, CVPR, ICCV, and RSS, just to name a few), reported in the (still actively updated) paper list called DistributedEvolutionaryComputation.
- Derivative-Free Optimization (DFO) / Zeroth-Order Optimization (ZOO)
- Berahas, A.S., et al., 2022. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. FoCM, 22(2), pp.507-560.
- Larson, J., et al., 2019. Derivative-free optimization methods. Acta Numerica, 28, pp.287-404.
- Nesterov, Y., 2018. Lectures on convex optimization. Cham, Switzerland: Springer. "most of the achievements in Structural Optimization are firmly supported by the fundamental methods of Black-Box Convex Optimization."
- Nesterov, Y. and Spokoiny, V., 2017. Random gradient-free minimization of convex functions. FoCM, 17(2), pp.527-566.
- Rios, L.M. and Sahinidis, N.V., 2013. Derivative-free optimization: A review of algorithms and comparison of software implementations. JGO, 56, pp.1247-1293.
- Conn, A.R., et al., 2009. Introduction to derivative-free optimization. SIAM.
- Kolda, T.G., et al., 2003. Optimization by direct search: New perspectives on some classical and modern methods. SIAM Review, 45(3), pp.385-482.
- Evolutionary Computation (EC) and Swarm Intelligence (SI)
- Eiben, A.E. and Smith, J., 2015. From evolutionary computation to the evolution of things. Nature, 521(7553), pp.476-482. [ http://www.evolutionarycomputation.org/ ]
- Miikkulainen, R. and Forrest, S., 2021. A biological perspective on evolutionary computation. Nature Machine Intelligence, 3(1), pp.9-15.
- Hansen, N. and Auger, A., 2014. Principled design of continuous stochastic search: From theory to practice. Theory and Principled Methods for the Design of Metaheuristics, pp.145-180.
- De Jong, K.A., 2006. Evolutionary computation: A unified approach. MIT Press.
- Beyer, H.G. and Deb, K., 2001. On self-adaptive features in real-parameter evolutionary algorithms. TEVC, 5(3), pp.250-270.
- Salomon, R., 1998. Evolutionary algorithms and gradient search: Similarities and differences. TEVC, 2(2), pp.45-55.
- Fogel, D.B., 1998. Evolutionary computation: The fossil record. IEEE Press.
- Back, T., Fogel, D.B. and Michalewicz, Z. eds., 1997. Handbook of Evolutionary Computation. CRC Press.
- Wolpert, D.H. and Macready, W.G., 1997. No free lunch theorems for optimization. TEVC, 1(1), pp.67-82.
- Bäck, T. and Schwefel, H.P., 1993. An overview of evolutionary algorithms for parameter optimization. ECJ, 1(1), pp.1-23.
- Forrest, S., 1993. Genetic algorithms: Principles of natural selection applied to computation. Science, 261(5123), pp.872-878.
- Taxonomy
- Benchmarking [ benchmarking-network + iohprofiler ]
- Andrés-Thió, N., Audet, C., et al., 2024. Solar: A solar thermal power plant simulator for blackbox optimization benchmarking. arXiv preprint arXiv:2406.00140.
- Kudela, J., 2022. A critical problem in benchmarking and analysis of evolutionary computation methods. Nature Machine Intelligence, 4(12), pp.1238-1245.
- Meunier, L., Rakotoarison, H., Wong, P.K., Roziere, B., Rapin, J., Teytaud, O., Moreau, A. and Doerr, C., 2022. Black-box optimization revisited: Improving algorithm selection wizards through massive benchmarking. TEVC, 26(3), pp.490-500.
- Hansen, N., Auger, A., Ros, R., Mersmann, O., Tušar, T. and Brockhoff, D., 2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software, 36(1), pp.114-144.
- Auger, A. and Hansen, N., 2021, July. Benchmarking: State-of-the-art and beyond. GECCO Companion (pp. 339-340). ACM.
- Varelas, K., El Hara, O.A., Brockhoff, D., Hansen, N., Nguyen, D.M., Tušar, T. and Auger, A., 2020. Benchmarking large-scale continuous optimizers: The bbob-largescale testbed, a COCO software guide and beyond. ASC, 97, p.106737.
- Hansen, N., Ros, R., Mauny, N., Schoenauer, M. and Auger, A., 2011. Impacts of invariance in search: When CMA-ES and PSO face ill-conditioned and non-separable problems. ASC, 11(8), pp.5755-5769.
- Moré, J.J. and Wild, S.M., 2009. Benchmarking derivative-free optimization algorithms. SIOPT, 20(1), pp.172-191.
- Whitley, D., Rana, S., Dzubera, J. and Mathias, K.E., 1996. Evaluating evolutionary algorithms. AIJ, 85(1-2), pp.245-276.
- Salomon, R., 1996. Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms. BioSystems, 39(3), pp.263-278.
- Fogel, D.B. and Beyer, H.G., 1995. A note on the empirical evaluation of intermediate recombination. ECJ, 3(4), pp.491-495.
- Moré, J.J., Garbow, B.S. and Hillstrom, K.E., 1981. Testing unconstrained optimization software. TOMS, 7(1), pp.17-41.
- Evolution Strategy (ES) [ A visual guide to ES + [Li et al., 2020] + [Akimoto&Hansen, 2022, GECCO-Companion] ]
- Akimoto, Y., Auger, A., Glasmachers, T. and Morinaga, D., 2022. Global linear convergence of evolution strategies on more than smooth strongly convex functions. SIOPT, 32(2), pp.1402-1429.
- Glasmachers, T. and Krause, O., 2022. Convergence analysis of the Hessian estimation evolution strategy. ECJ, 30(1), pp.27-50.
- He, X., Zheng, Z. and Zhou, Y., 2021. MMES: Mixture model-based evolution strategy for large-scale optimization. TEVC, 25(2), pp.320-333.
- Akimoto, Y. and Hansen, N., 2020. Diagonal acceleration for covariance matrix adaptation evolution strategies. ECJ, 28(3), pp.405-435.
- Beyer, H.G., 2020, July. Design principles for matrix adaptation evolution strategies. GECCO Companion (pp. 682-700). ACM.
- Choromanski, K., Pacchiano, A., Parker-Holder, J. and Tang, Y., 2019. From complexity to simplicity: Adaptive es-active subspaces for blackbox optimization. In NeurIPS.
- Varelas, K., Auger, A., Brockhoff, D., Hansen, N., ElHara, O.A., Semet, Y., Kassab, R. and Barbaresco, F., 2018, September. A comparative study of large-scale variants of CMA-ES. In PPSN (pp. 3-15). Springer, Cham.
- Li, Z. and Zhang, Q., 2018. A simple yet efficient evolution strategy for large-scale black-box optimization. TEVC, 22(5), pp.637-646.
- Lehman, J., Chen, J., Clune, J. and Stanley, K.O., 2018, July. ES is more than just a traditional finite-difference approximator. GECCO (pp. 450-457). ACM.
- Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. JMLR, 18(18), pp.1-65.
- Loshchilov, I., 2017. LM-CMA: An alternative to L-BFGS for large-scale black box optimization. ECJ, 25(1), pp.143-171. [ Loshchilov, I., 2014, July. A computationally efficient limited memory CMA-ES for large scale optimization. GECCO (pp. 397-404). ACM. ] + [ Loshchilov, I., Glasmachers, T. and Beyer, H.G., 2019. Large scale black-box optimization by limited-memory matrix adaptation. TEVC, 23(2), pp.353-358. ]
- Beyer, H.G. and Sendhoff, B., 2017. Simplify your covariance matrix adaptation evolution strategy. TEVC, 21(5), pp.746-759.
- Krause, O., Arbonès, D.R. and Igel, C., 2016. CMA-ES with optimal covariance update and storage complexity. In NeurIPS, 29, pp.370-378.
- Akimoto, Y. and Hansen, N., 2016, July. Projection-based restricted covariance matrix adaptation for high dimension. GECCO (pp. 197-204). ACM.
- Auger, A. and Hansen, N., 2016. Linear convergence of comparison-based step-size adaptive randomized search via stability of Markov chains. SIOPT, 26(3), pp.1589-1624.
- Hansen, N., Arnold, D.V. and Auger, A., 2015. Evolution strategies. In Springer Handbook of Computational Intelligence (pp. 871-898). Springer, Berlin, Heidelberg.
- Diouane, Y., Gratton, S. and Vicente, L.N., 2015. Globally convergent evolution strategies. MP, 152(1), pp.467-490.
- Akimoto, Y., Auger, A. and Hansen, N., 2014, July. Comparison-based natural gradient optimization in high dimension. GECCO (pp. 373-380). ACM.
- Bäck, T., Foussette, C. and Krause, P., 2013. Contemporary evolution strategies. Berlin: Springer. [ Bäck, T., 2014, July. Introduction to evolution strategies. GECCO Companion (pp. 251-280). ] + [ Wang, H., Emmerich, M. and Bäck, T., 2014, March. Mirrored orthogonal sampling with pairwise selection in evolution strategies. In Proceedings of Annual ACM Symposium on Applied Computing (pp. 154-156). ]
- Rudolph, G., 2012. Evolutionary strategies. In Handbook of Natural Computing (pp. 673-698). Springer Berlin, Heidelberg.
- Akimoto, Y., Nagata, Y., Ono, I. and Kobayashi, S., 2012. Theoretical foundation for CMA-ES from information geometry perspective. Algorithmica, 64(4), pp.698-716. [ Akimoto, Y., Nagata, Y., Ono, I. and Kobayashi, S., 2010, September. Bidirectional relation between CMA evolution strategies and natural evolution strategies. In PPSN (pp. 154-163). Springer, Berlin, Heidelberg. ] + [ Akimoto, Y., 2011. Design of evolutionary computation for continuous optimization. Doctoral Dissertation, Tokyo Institute of Technology. ]
- Arnold, D.V. and Hansen, N., 2010, July. Active covariance matrix adaptation for the (1+1)-CMA-ES. GECCO (pp. 385-392). ACM.
- Arnold, D.V. and MacLeod, A., 2006, July. Hierarchically organised evolution strategies on the parabolic ridge. GECCO (pp. 437-444). ACM.
- Igel, C., Suttorp, T. and Hansen, N., 2006, July. A computational efficient covariance matrix update and a (1+1)-CMA for evolution strategies. GECCO (pp. 453-460). ACM. [ Suttorp, T., Hansen, N. and Igel, C., 2009. Efficient covariance matrix update for variable metric evolution strategies. MLJ, 75(2), pp.167-197. ] + [ Krause, O. and Igel, C., 2015, January. A more efficient rank-one covariance matrix update for evolution strategies. In FOGA (pp. 129-136). ACM. ]
- Beyer, H.G. and Schwefel, H.P., 2002. Evolution strategies–A comprehensive introduction. Natural Computing, 1(1), pp.3-52.
- Hansen, N. and Ostermeier, A., 1996, May. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In CEC (pp. 312-317). IEEE. [ Hansen, N. and Ostermeier, A., 2001. Completely derandomized self-adaptation in evolution strategies. ECJ, 9(2), pp.159-195. ] + [ Hansen, N., Müller, S.D. and Koumoutsakos, P., 2003. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). ECJ, 11(1), pp.1-18. ] + [ Auger, A. and Hansen, N., 2005, September. A restart CMA evolution strategy with increasing population size. In CEC (pp. 1769-1776). IEEE. ] + [ Hansen, N. and Auger, A., 2014. Principled design of continuous stochastic search: From theory to practice. In Theory and Principled Methods for the Design of Metaheuristics (pp. 145-180). Springer, Berlin, Heidelberg. ]
- Rudolph, G., 1992. On correlated mutations in evolution strategies. In PPSN (pp. 105-114).
- Schwefel, H.P., 1984. Evolution strategies: A family of non-linear optimization techniques based on imitating some principles of organic evolution. Annals of Operations Research, 1(2), pp.165-167. [ Schwefel, H.P., 1988. Collective intelligence in evolving systems. In Ecodynamics (pp. 95-100). Springer, Berlin, Heidelberg. ]
- Rechenberg, I., 1984. The evolution strategy. A mathematical model of darwinian evolution. In Synergetics—from Microscopic to Macroscopic Order (pp. 122-132). Springer, Berlin, Heidelberg. [ Rechenberg, I., 1989. Evolution strategy: Nature’s way of optimization. In Optimization: Methods and Applications, Possibilities and Limitations (pp. 106-126). Springer, Berlin, Heidelberg. ]
- Applications: e.g., Deng et al., 2023; Zhang et al., 2023, NeurIPS; Tjanaka et al., 2023, IEEE-LRA; Yu et al., 2023, IJCAI; Zhu et al., 2023, IEEE/ASME-TMECH; Fadini et al., 2023; Ma et al., 2023; Kim et al., 2023, Science Robotics; Slade et al., 2022, Nature; Sun et al., 2022, ICML; Tjanaka et al., 2022, GECCO; Wang&Ponce, 2022, GECCO, Tian&Ha, 2022, EvoStar; Hansel et al., 2021; Anand et al., 2021, MLST; Nomura et al., 2021, AAAI; Zheng et al., 2021, IEEE-ASRU; Liu et al., 2019, AAAI; Dong et al., 2019, CVPR; Ha&Schmidhuber, 2018, NeurIPS; Müller&Glasmachers, 2018, PPSN; Chrabąszcz et al., 2018, IJCAI; OpenAI, 2017; Zhang et al., 2017, Science.
- Natural Evolution Strategies (NES)
- Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J. and Schmidhuber, J., 2014. Natural evolution strategies. JMLR, 15(1), pp.949-980.
- Beyer, H.G., 2014. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry. ECJ, 22(4), pp.679-709.
- Schaul, T., 2011. Studies in continuous black-box optimization. Doctoral Dissertation, Technische Universität München.
- Yi, S., Wierstra, D., Schaul, T. and Schmidhuber, J., 2009, June. Stochastic search using the natural gradient. ICML (pp. 1161-1168).
- Wierstra, D., Schaul, T., Peters, J. and Schmidhuber, J., 2008, June. Natural evolution strategies. CEC (pp. 3381-3387). IEEE.
- Applications: e.g., Yu et al., USENIX Security; Flageat et al., 2023; Yan et al., 2023; Feng et al., 2023; Wei et al., 2022, IJCV; Agarwal et al., 2022, ICRA; Farid et al., 2022, CoRL; Feng et al., 2022, CVPR; Berliner et al., 2022, ICLR; Kirsch et al., 2022, AAAI; Jain et al., 2022, USENIX Security; Ilyas et al., 2018, ICML.
- Estimation of Distribution Algorithm (EDA) [ MIMIC [NeurIPS-1996] + BOA [GECCO-1999] + [ECJ-2005] ]
- Brookes, D., Busia, A., Fannjiang, C., Murphy, K. and Listgarten, J., 2020, July. A view of estimation of distribution algorithms through the lens of expectation-maximization. GECCO Companion (pp. 189-190). ACM.
- Kabán, A., Bootkrajang, J. and Durrant, R.J., 2016. Toward large-scale continuous EDA: A random matrix theory perspective. ECJ, 24(2), pp.255-291.
- Pelikan, M., Hauschild, M.W. and Lobo, F.G., 2015. Estimation of distribution algorithms. In Springer Handbook of Computational Intelligence (pp. 899-928). Springer, Berlin, Heidelberg.
- Dong, W., Chen, T., Tiňo, P. and Yao, X., 2013. Scaling up estimation of distribution algorithms for continuous optimization. TEVC, 17(6), pp.797-822.
- Hauschild, M. and Pelikan, M., 2011. An introduction and survey of estimation of distribution algorithms. Swarm and Evolutionary Computation, 1(3), pp.111-128.
- Teytaud, F. and Teytaud, O., 2009, July. Why one must use reweighting in estimation of distribution algorithms. GECCO (pp. 453-460).
- Larrañaga, P. and Lozano, J.A. eds., 2001. Estimation of distribution algorithms: A new tool for evolutionary computation. Springer Science & Business Media.
- Mühlenbein, H., 1997. The equation for response to selection and its use for prediction. ECJ, 5(3), pp.303-346.
- Baluja, S. and Caruana, R., 1995. Removing the genetics from the standard genetic algorithm. ICML (pp. 38-46). Morgan Kaufmann.
- Cross-Entropy Method (CEM)
- Pinneri, C., Sawant, S., Blaes, S., Achterhold, J., Stueckler, J., Rolinek, M. and Martius, G., 2021, October. Sample-efficient cross-entropy method for real-time planning. In Conference on Robot Learning (pp. 1049-1065). PMLR.
- Amos, B. and Yarats, D., 2020, November. The differentiable cross-entropy method. ICML (pp. 291-302). PMLR.
- Rubinstein, R.Y. and Kroese, D.P., 2016. Simulation and the Monte Carlo method (Third Edition). John Wiley & Sons.
- Hu, J., et al., 2007. A model reference adaptive search method for global optimization. OR, 55(3), pp.549-568.
- De Boer, P.T., Kroese, D.P., Mannor, S. and Rubinstein, R.Y., 2005. A tutorial on the cross-entropy method. Annals of Operations Research, 134(1), pp.19-67.
- Rubinstein, R.Y. and Kroese, D.P., 2004. The cross-entropy method: A unified approach to combinatorial optimization, Monte-Carlo simulation, and machine learning. New York: Springer.
- Mannor, S., Rubinstein, R.Y. and Gat, Y., 2003. The cross entropy method for fast policy search. ICML (pp. 512-519).
- Applications: e.g., Wang&Ba,2020, ICLR; Hafner et al., 2019, ICML; Pourchot&Sigaud, 2019, ICLR; Simmons-Edler et al., 2019, ICML-RL4RealLife; Chua et al., 2018, NeurIPS; Duan et al., 2016, ICML; Kobilarov, 2012, IJRR.
- Differential Evolution (DE)
- Price, K.V., 2013. Differential evolution. In Handbook of Optimization (pp. 187-214). Springer, Berlin, Heidelberg.
- Tanabe, R. and Fukunaga, A., 2013, June. Success-history based parameter adaptation for differential evolution. CEC (pp. 71-78). IEEE.
- Wang, Y., Cai, Z., and Zhang, Q. 2011. Differential evolution with composite trial vector generation strategies and control parameters. TEVC, 15(1), pp.55–66.
- Zhang, J., and Sanderson, A. C. 2009. JADE: Adaptive differential evolution with optional external archive. TEVC, 13(5), pp.945–958.
- Price, K.V., Storn, R.M. and Lampinen, J.A., 2005. Differential evolution: A practical approach to global optimization. Springer Science & Business Media.
- Fan, H.Y. and Lampinen, J., 2003. A trigonometric mutation operation to differential evolution. JGO, 27(1), pp.105-129.
- Storn, R.M. and Price, K.V. 1997. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. JGO, 11(4), pp.341–359.
- Applications: e.g., McNulty et al., 2023, PRL; Colombo et al., 2023, Sci. Adv.; Lichtinger&Biggin, 2023, JCTC; Liang et al., 2023, NSDI; Schad et al., 2023, ApJ; Hoyer et al., 2023, MNRAS; Hoyer et al., 2023, ApJL; Abdelnabi&Fritz, 2023,USENIX Security; Kotov et al., 2023, Cell Reports; Sidhartha et al., 2023, CVPR; Hardy et al., 2023, MNRAS; Boucher et al., 2023; Michel et al., 2023, PRA; Woo et al., 2023, iScience; Bozkurt et al., 2023; Ma et al., 2023, KDD; Zhou et al., 2023; Czarnik et al., 2023; Katic et al., 2023, iScience; Khajehnejad et al., 2023, RSIF; Digman&Cornish, 2023, PRD; Rommel et al., 2023; Li et al., 2022, Science; Schlegelmilch et al., 2022, Psychological Review; Mackin et al., 2022, Nature Communications; Liu&Wang, 2022, JSAC; Zhou et al., 2022, Nature Computational Science; Fischer et al., 2022, TOCHI; Ido et al., 2022, npj Quantum Materials; Clark et al., 2022, NECO; Powell et al., 2022, ApJ; Vo et al., 2022, ICLR; Andersson et al., 2022, ApJ; Naudin et al., 2022, NECO; Perini et al., 2022, AAAI; Sterbentz et al., 2022, Physics of Fluids; Mishra et al., 2021, Science; Tiwari et al., 2021, PRB; Mok et al., 2021, Communications Physics; Vinker et al., 2021, CVPR; Mehta et al., 2021, JCAP; Trueblood et al., 2021, Psychological Review; Verdonck et al., 2021, Psychological Review; Robert et al., 2021, npj Quantum Information; Canton et al., 2021, ApJ; Leslie et al., 2021, PRD; Fengler et al., 2021, eLife; Li et al., 2021, TQE; Chen et al., 2021, ACS Photonics; Menczel et al., 2021, J. Phys. A: Math. Theor.; Feng et al., 2021, JSAC; DES Collaboration, 2021, A&A; An et al., 2020, PNAS.
- Particle Swarm Optimizer (PSO) / Consensus-Based Optimization (CBO) [ swarm intelligence | scholarpedia ]
- Fornasier, M., et al., 2024. Consensus-based optimization methods converge globally. SIOPT, 34(3), pp.2973-3004.
- Bailo, R., et al., 2024. CBX: Python and Julia packages for consensus-based interacting particle methods. JOSS, 9(98), p.6611.
- Sünnen, P., 2023. Analysis of a consensus-based optimization method on hypersurfaces and applications. Doctoral Dissertation, Technische Universität München.
- Fornasier, M., et al., 2021. Consensus-based optimization on the sphere: Convergence to global minimizers and machine learning. JMLR, 22(1), pp.10722-10776.
- Carrillo, J.A., et al., 2018. An analytical framework for consensus-based global optimization method. Mathematical Models and Methods in Applied Sciences, 28(06), pp.1037-1066.
- Pinnau, R., et al., 2017. A consensus-based model for global optimization and its mean-field limit. Mathematical Models and Methods in Applied Sciences, 27(01), pp.183-204.
- Bonyadi, M.R. and Michalewicz, Z., 2017. Particle swarm optimization for single objective continuous space problems: A review. ECJ, 25(1), pp.1-54.
- Escalante, H.J., Montes, M. and Sucar, L.E., 2009. Particle swarm model selection. JMLR, 10(15), pp.405−440.
- Floreano, D. and Mattiussi, C., 2008. Bio-inspired artificial intelligence: Theories, methods, and technologies. MIT Press.
- Poli, R., Kennedy, J. and Blackwell, T., 2007. Particle swarm optimization. Swarm Intelligence, 1(1), pp.33-57.
- Venter, G. and Sobieszczanski-Sobieski, J., 2003. Particle swarm optimization. AIAA Journal, 41(8), pp.1583-1589.
- Parsopoulos, K.E. and Vrahatis, M.N., 2002. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2), pp.235-306.
- Clerc, M. and Kennedy, J., 2002. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. TEVC, 6(1), pp.58-73.
- Eberhart, R.C., Shi, Y. and Kennedy, J., 2001. Swarm intelligence. Elsevier.
- Shi, Y. and Eberhart, R., 1998, May. A modified particle swarm optimizer. CEC (pp. 69-73). IEEE.
- Kennedy, J. and Eberhart, R., 1995, November. Particle swarm optimization. In Proceedings of International Conference on Neural Networks (pp. 1942-1948). IEEE.
- Eberhart, R. and Kennedy, J., 1995, October. A new optimizer using particle swarm theory. In Proceedings of International Symposium on Micro Machine and Human Science (pp. 39-43). IEEE.
- Interesting Applications: e.g., Grabner et al., 2023, Nature Communications; Morselli et al., 2023, IEEE-TWC; Reddy et al., 2023, IEEE-TC; Zhang et al., 2022, CVPR; Yang et al., PRL, 2022; Guan et al., 2022, PRL; Zhong et al., 2022, CVPR; Singh&Hecke, 2021, PRL; Weiel, et al., 2021, Nature Mach. Intell; Wintermantel et al., 2020, PRL; Tang et al., 2019, TPAMI; Sheng et al., 2019, TPAMI; CMS Collaboration, 2019, JHEP; Wang et al., 2019, TVCG; Zhang et al., 2018, PRL; Leditzky et al., 2018, PRL; Pham et al., 2018, TPAMI; Villeneuve et al., 2017, Science; Choi et al., 2017, PRL; González-Echevarría, et al., 2017, TCAD; Zhu et al., 2017, PRL; Choi et al., 2017, ICCV; Pickup et al., 2016, IJCV; Li et al., 2015, PRL; Sharp et al., 2015, CHI; Taneja et al., 2015, TPAMI; Zhang et al., 2015, IJCV; Meyer et al., 2015, ICCV; Tompson et al., 2014, TOG; Baca et al., 2013, Cell; Li et al., PRL, 2013; Kawakami et al., 2013, IJCV; Kim et al., 2012, Nature; Rahmat-Samii et al., 2012, PIEEE; Oikonomidis et al., 2012, CVPR; Li et al., 2011, TPAMI; Zhao et al., 2011, PRL; Zhu et al., 2011, PRL; Lv et al., 2011, PRL; Hentschel&Sanders, 2010, PRL; Pontani&Conway, 2010, JGCD; Zhang et al., 2008, CVPR; Liebelt&Schertler, 2007, CVPR; Hassan et al., 2005, AIAA.
- Cooperative Coevolution (CC)
- Gomez, F., Schmidhuber, J. and Miikkulainen, R., 2008. Accelerated neural evolution through cooperatively coevolved synapses. JMLR, 9(31), pp.937-965.
- Panait, L., Tuyls, K. and Luke, S., 2008. Theoretical advantages of lenient learners: An evolutionary game theoretic perspective. JMLR, 9, pp.423-457.
- Schmidhuber, J., Wierstra, D., Gagliolo, M. and Gomez, F., 2007. Training recurrent networks by evolino. Neural Computation, 19(3), pp.757-779.
- Gomez, F.J. and Schmidhuber, J., 2005, June. Co-evolving recurrent neurons learn deep memory POMDPs. GECCO (pp. 491-498).
- Fan, J., Lau, R. and Miikkulainen, R., 2003. Utilizing domain knowledge in neuroevolution. ICML (pp. 170-177).
- Potter, M.A. and De Jong, K.A., 2000. Cooperative coevolution: An architecture for evolving coadapted subcomponents. ECJ, 8(1), pp.1-29.
- Gomez, F.J. and Miikkulainen, R., 1999, July. Solving non-Markovian control tasks with neuroevolution. IJCAI. (pp. 1356-1361).
- Moriarty, D.E. and Mikkulainen, R., 1996. Efficient reinforcement learning through symbiotic evolution. Machine Learning, 22(1), pp.11-32.
- Moriarty, D.E. and Miikkulainen, R., 1995. Efficient learning from delayed rewards through symbiotic evolution. ICML (pp. 396-404). Morgan Kaufmann.
- Potter, M.A. and De Jong, K.A., 1994, October. A cooperative coevolutionary approach to function optimization. PPSN (pp. 249-257). Springer, Berlin, Heidelberg.
- Simultaneous Perturbation Stochastic Approximation (SPSA) [ https://www.jhuapl.edu/SPSA/ ]
- Spall, J.C., 2005. Introduction to stochastic search and optimization: Estimation, simulation, and control. John Wiley & Sons.
- Simulated Annealing (SA)
- Bouttier, C. and Gavra, I., 2019. Convergence rate of a simulated annealing algorithm with noisy observations. JMLR, 20(1), pp.127-171.
- Gerber, M. and Bornn, L., 2017. Improving simulated annealing through derandomization. JGO, 68(1), pp.189-217.
- Siarry, P., Berthiau, G., Durdin, F. and Haussy, J., 1997. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. TOMS, 23(2), pp.209-228.
- Bertsimas, D. and Tsitsiklis, J., 1993. Simulated annealing. Statistical Science, 8(1), pp.10-15.
- Corana, A., Marchesi, M., Martini, C. and Ridella, S., 1987. Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm. TOMS, 13(3), pp.262-280. [ Corrigenda ]
- Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P., 1983. Optimization by simulated annealing. Science, 220(4598), pp.671-680.
- Hastings, W.K., 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1), pp.97-109.
- Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E., 1953. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6), pp.1087-1092.
- Applications: e.g., Young et al., 2023, Nature; Kim et al., 2023, Nature; Passalacqua et al., 2023, Nature; Pronker et al., 2023, Nature; Sullivan&Seljak, 2023; Holm et al., 2023, Nature; Snyder et al., 2023, Nature; Samyak&Palacios, 2023, Biometrika; Bouchet et al., 2023, PNAS; Li&Yu, 2023, ACM-TOG; Zhao et al., 2023, VLDBJ; Zhong et al., 2023, IEEE/ACM-TASLP; Wang et al., 2023, IEEE-TMC; Filippo et al., 2023, IJCAI; Barnes et al., 2023, ApJ; Melo et al., 2023; Bruna et al., 2023; Holm et al., 2023; Jenson et al., 2023, Nature; Kolesov et al., 2016, IEEE-TPAMI
- Genetic Algorithm (GA)
- Whitley, D., 2019. Next generation genetic algorithms: A user’s guide and tutorial. In Handbook of Metaheuristics (pp. 245-274). Springer.
- Levine, D., 1997. Commentary—Genetic algorithms: A practitioner's view. IJOC, 9(3), pp.256-259.
- Goldberg, D.E., 1994. Genetic and evolutionary algorithms come of age. CACM, 37(3), pp.113-120.
- Mitchell, M. and Forrest, S., 1994. Genetic algorithms and artificial life. ALJ, 1(3), pp.267-289.
- Forrest, S., 1993. Genetic algorithms: Principles of natural selection applied to computation. Science, 261(5123), pp.872-878.
- De Jong, K.A., 1993. Are genetic algorithms function optimizers? FOGA, pp.5-17.
- Mitchell, M., Holland, J. and Forrest, S., 1993. When will a genetic algorithm outperform hill climbing. NeurIPS (pp. 51-58).
- Whitley, D., Dominic, S., Das, R. and Anderson, C.W., 1993. Genetic reinforcement learning for neurocontrol problems. MLJ, 13, pp.259-284.
- Holland, J.H., 1992. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. MIT Press.
- Holland, J.H., 1992. Genetic algorithms. Scientific American, 267(1), pp.66-73.
- Whitley, D., 1989, December. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. FOGA (pp. 116-121).
- Goldberg, D.E. and Holland, J.H., 1988. Genetic algorithms and machine learning. MLJ, 3(2), pp.95-99.
- Holland, J.H., 1973. Genetic algorithms and the optimal allocation of trials. SICOMP, 2(2), pp.88-105.
- Holland, J.H., 1962. Outline for a logical theory of adaptive systems. JACM, 9(3), pp.297-314.
- Applications: e.g., Wang, 2023, Harvard Ph.D. Dissertation; Lee et al., 2022, Science Robotics; Whitelam&Tamblyn, 2021, PRL; Walker et al., 2021, Nature Chemistry; Chen et al., 2020, Nature.
- Evolutionary Programming (EP)
- Yao, X., Liu, Y. and Lin, G., 1999. Evolutionary programming made faster. TEVC, 3(2), pp.82-102.
- Fogel, D.B., 1999. An overview of evolutionary programming. In Evolutionary Algorithms (pp. 89-109). Springer, New York, NY.
- Fogel, D.B. and Fogel, L.J., 1995, September. An introduction to evolutionary programming. In European Conference on Artificial Evolution (pp. 21-33). Springer, Berlin, Heidelberg.
- Fogel, D.B., 1994. Evolutionary programming: An introduction and some current directions. Statistics and Computing, 4(2), pp.113-129.
- Bäck, T. and Schwefel, H.P., 1993. An overview of evolutionary algorithms for parameter optimization. ECJ, 1(1), pp.1-23.
- Fogel, L.J., Owens, A.J. and Walsh, M.J., 1965. Intelligent decision making through a simulation of evolution. IEEE Transactions on Human Factors in Electronics, 6(1), pp.13-23.
- Applications: e.g., Hoorfar, 2007, IEEE-TAP; Cui et al., 2006, MS; Damavandi&Safavi-Naeini, 2005, IEEE-TCSI.
- Pattern Search
- Porcelli, M. and Toint, P.L., 2022. Exploiting problem structure in derivative free optimization. ACM TOMS, 48(1), pp.1-25.
- Audet, C., Le Digabel, S., Rochon Montplaisir, V. and Tribes, C., 2022. Algorithm XXXX: NOMAD version 4: Nonlinear optimization with the MADS algorithm. TOMS.
- Brent, R.P., 2013. Algorithms for minimization without derivatives. Courier Corporation.
- Singer, S. and Nelder, J., 2009. Nelder-mead algorithm. Scholarpedia, 4(7), p.2928.
- Lagarias, J.C., Reeds, J.A., Wright, M.H. and Wright, P.E., 1998. Convergence properties of the Nelder--Mead simplex method in low dimensions. SIOPT, 9(1), pp.112-147.
- Powell, M.J., 1998. Direct search algorithms for optimization calculations. Acta Numerica, 7, pp.287-336.
- Torczon, V., 1997. On the convergence of pattern search algorithms. SIOPT, 7(1), pp.1-25.
- Barton, R.R. and Ivey Jr, J.S., 1996. Nelder-Mead simplex modifications for simulation optimization. MS, 42(7), pp.954-973.
- Wright, M.H., 1996. Direct search methods: Once scorned, now respectable. Pitman Research Notes in Mathematics Series, pp.191-208.
- Jones, D.R., Perttunen, C.D. and Stuckman, B.E., 1993. Lipschitzian optimization without the Lipschitz constant. JOTA, 79(1), pp.157-181.
- Nelder, J.A. and Mead, R., 1965. A simplex method for function minimization. The Computer Journal, 7(4), pp.308-313.
- Powell, M.J., 1964. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7(2), pp.155-162.
- Kaupe Jr, A.F., 1963. Algorithm 178: Direct search. CACM, 6(6), pp.313-314.
- Spang, III, H.A., 1962. A review of minimization techniques for nonlinear functions. SIAM Review, 4(4), pp.343-365.
- Hooke, R. and Jeeves, T.A., 1961. “Direct search” solution of numerical and statistical problems. JACM, 8(2), pp.212-229. [ Python - pymoo | This Week's Citation Classic ]
- Box, G.E., 1957. Evolutionary operation: A method for increasing industrial productivity. Journal of the Royal Statistical Society: Series C (Applied Statistics), 6(2), pp.81-101.
- Fermi, E. and Metropolis N., 1952. Numerical solution of a minimum problem. Technical Report, Los Alamos Scientific Lab.
- Applications: e.g., [NM: Gokhale et al., 2023, PNAS; Hayashi, 2022, Bernoulli; Vanunu et al., 2021, PNAS; Williams et al., 2021, PNAS; Fleishman et al., Science, 2020; Nanni et al., 2020, PNAS; Steinrücken et al., 2019, PNAS; Omran et al., 2019, Science; Sparrow et al., 2018, Nature; Prochazka&Vogl, 2017, PNAS; Murphy&Brincke, 2017, MS; Gillon et al., 2017, Nature; Aghaeepour et al., 2017, Science Immunol.; Sayegh et al., 2017, TS; Landis&Schraiber, 2017, PNAS; Kim et al., 2016, PNAS; Chan et al., 2014, MS; Chan et al., 2014, MKSC; Bajikar et al., 2014, PNAS; Lee et al., 2014, PNAS; Wang et al., 2012, PRL; Lau&Rubinstein, Nature, 2012; Brown et al., 2012, MS; Contreras et al., 2012, PNAS; Morlon et al., 2011, PNAS; Forstmann et al., 2010, PNAS; Balachander et al., 2009, MS; Jayanthi et al., 2009, MS; Farrell et al., 2009, PNAS; Forstmann et al., 2008, PNAS; Rouder et al., 2008, PNAS; Sapir et al., 2005, PNAS; Amonlirdviman et al., 2005, Science; Cowan et al., 2004, PS; Zhou et al., 2004, PS; Draganska&Jain, 2004, MS; Fain&Levitt, 2003, PNAS; Dennis et al., 2002, PNAS; Sudhir, 2001, MKSC; Rouder, 2001, PS; Wolszczan, 1994, Science; Polvani et al., 1990, Science; Lee et al., 1987, PNAS; Sabath et al., 1986, PNAS; Burch et al., 1985, PNAS; Regoeczi et al., 1982, PNAS; Brasseur et al., 1982, PNAS; Korn et al., 1981, Science; Dean et al., 1975, Science]; [HJ: Kalita et al., 2023, Nature; Washington et al, 2023; Huynh et al., 2023; Kucher et al., 2022, SPLC; Pepe et al., 2021, IEEE-TASLP; Khaledian et al., 2018, IEEE-TMTT; Luhar et al., 2015, JFM; Paxton et al., 2013, ApJS; Schneider&Excoffier, 1999, Genetics; Ditchfield et al., 1971, JCP].
- Random Search (RS)
- Gao, K. and Sener, O., 2022, June. Generalizing Gaussian smoothing for random search. ICML (pp. 7077-7101). PMLR.
- Stich, S.U., 2014. On low complexity acceleration techniques for randomized optimization. In PPSN (pp. 130-140). Springer.
- Bergstra, J. and Bengio, Y., 2012. Random search for hyper-parameter optimization. JMLR, 13(2).
- Nemirovski, A., Juditsky, A., Lan, G. and Shapiro, A., 2009. Robust stochastic approximation approach to stochastic programming. SIOPT, 19(4), pp.1574-1609.
- Schmidhuber, J., Hochreiter, S. and Bengio, Y., 2001. Evaluating benchmark problems by random guessing. A Field Guide to Dynamical Recurrent Networks, pp.231-235.
- Rosenstein, M.T. and Barto, A.G., 2001, August. Robot weightlifting by direct policy search. IJCAI. (pp. 839-846).
- Cvijović, D. and Klinowski, J., 1995. Taboo search: An approach to the multiple minima problem. Science, 267(5198), pp.664-666.
- Polyak, B.T., 1987. Introduction to optimization. Optimization Software. New York.
- Dorea, C.C.Y., 1983. Expected number of steps of a random optimization method. JOTA, 39(2), pp.165-171. [ Sarma, M.S., 1990. On the convergence of the Baba and Dorea random optimization methods. JOTA, 66, pp.337-343. ] + [ Appel, M.J., Labarre, R. and Radulovic, D., 2004. On accelerated random search. SIOPT, 14(3), pp.708-731. ]
- Solis, F.J. and Wets, R.J.B., 1981. Minimization by random search techniques. MOR, 6(1), pp.19-30.
- Schumer, M.A. and Steiglitz, K., 1968. Adaptive step size random search. TAC, 13(3), pp.270-276.
- Rastrigin, L.A., 1963. The convergence of the random search method in the extremal control of a many parameter system. Automation and Remote Control, 24, pp.1337-1342.
- Rastrigin, L.A., 1986. Random search as a method for optimization and adaptation. In Stochastic Optimization. Springer.
- Brooks, S.H., 1958. A discussion of random methods for seeking maxima. OR, 6(2), pp.244-251.
- Ashby, W.R., 1952. Design for a brain: The origin of adaptive behaviour. Springer.
- Some interesting applications of RS: e.g., [Moon et al., 2023, Nature Medicine]; [Wang et al., 2023, Nature Mental Health]; [Xie et al., 2023, Nature Communications]; [Mathis et al., 2023, Nature Biotechnology]; [Tian et al., 2023, KDD]; [Schuch et al., 2023, JAMA]; [Flam-Shepherd et al., 2022, Nature Communications]; [Beucler et al., 2021, PRL]; [Roman et al., 2021, Nature Machine Intelligence]; [Shen et al., 2021, Nature Communications]; [Gonatopoulos-Pournatzis et al., 2020, Nature Biotechnology]; [Valeri et al., 2020, Nature Communications]; [Chen et al., 2020, Science Robotics]; [Pickard&Needs, 2006, PRL].
- Bayesian Optimization (BO)
- https://bayesoptbook.com/ + https://bayesopt-tutorial.github.io/
- Wang, L., et al., 2020. Learning search space partition for black-box optimization using monte carlo tree search. NeurIPS, 33, pp.19511-19522. [ Python ]
- Jones, D.R., et al., 1998. Efficient global optimization of expensive black-box functions. JGO, 13(4), pp.455-492.
- Automated Machine Learning (AutoML)
- Hutter, F., et al., 2019. Automated machine learning: Methods, systems, challenges. Springer.
- Hoos, H.H., 2011. Automated algorithm configuration and parameter tuning. In Autonomous Search (pp. 37-71). Springer.
This library is now supported by the National Natural Science Foundation of China under Grant No. 72401122 and the Guangdong Basic and Applied Basic Research Foundation under Grant Nos. 2024A1515012241 and 2021A1515110024. From 2021 to 2023, this open-source pure-Python library PyPop7 was supported by the Shenzhen Fundamental Research Program under Grant No. JCYJ20200109141235597 (a total of 2,000,000 Yuan).
If this open-source pure-Python library PyPop7 is used in your paper or project, it is highly welcomed but NOT mandatory to cite the following arXiv preprint paper: Duan, Q., Zhou, G., Shao, C., Wang, Z., Feng, M., Huang, Y., Tan, Y., Yang, Y., Zhao, Q. and Shi, Y., 2024. PyPop7: A pure-Python library for population-based black-box optimization. arXiv preprint arXiv:2212.05652. (It was submitted to JMLR and, after three rounds of review from Tue, 28 Mar 2023 to Wed, 01 Nov 2023 to Fri, 05 Jul 2024, was accepted on Fri, 11 Oct 2024.)