Abstract
Evolution strategies (ES) are evolutionary algorithms that date back to the 1960s and that are most commonly applied to black-box optimization problems in continuous search spaces. Inspired by biological evolution, their original formulation is based on the application of mutation, recombination, and selection in populations of candidate solutions. From the algorithmic viewpoint, ES are optimization methods that sample new candidate solutions stochastically, most commonly from a multivariate normal probability distribution. Their two most prominent design principles are unbiasedness and adaptive control of parameters of the sample distribution. In this overview, the important concepts of success-based step-size control, self-adaptation, and derandomization are covered, as well as more recent developments such as covariance matrix adaptation and natural ES. The latter give new insights into the fundamental mathematical rationale behind ES. A broad discussion of theoretical results includes progress rate results on various function classes and convergence proofs for evolution strategies.
Abbreviations
- CMA: covariance matrix adaptation
- CSA: cumulative step-size adaptation
- ES: evolution strategy
- i.i.d.: independent, identically distributed
- IPOP: increasing population size
- NES: natural evolution strategy
- xNES: exponential natural evolution strategy
Copyright information
© 2015 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Hansen, N., Arnold, D.V., Auger, A. (2015). Evolution Strategies. In: Kacprzyk, J., Pedrycz, W. (eds) Springer Handbook of Computational Intelligence. Springer Handbooks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-43505-2_44
DOI: https://doi.org/10.1007/978-3-662-43505-2_44
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-43504-5
Online ISBN: 978-3-662-43505-2
eBook Packages: Engineering, Engineering (R0)