Abstract
We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms in discrete search spaces. Roughly speaking, half of the offspring are created with a mutation rate that is twice the current rate, and the other half with half the current rate. The mutation rate is then updated to the rate used in the subpopulation containing the best of these offspring. We analyze how the \((1+\lambda )\) evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the \((1+\lambda )\) EA finds the optimum in an expected optimization time (number of fitness evaluations) of \(O(n\lambda /\log \lambda +n\log n)\). This time is asymptotically smaller than the optimization time of the classic \((1+\lambda )\) EA. Previous work shows that this performance is best-possible among all \(\lambda \)-parallel mutation-based unbiased black-box algorithms. This result shows that the new way of adjusting the mutation rate can find optimal dynamic parameter values on the fly. Since our adjustment mechanism is simpler than the ones previously used for adjusting the mutation rate and does not have parameters itself, we are optimistic that it will find other applications.
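The two-rate mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' exact algorithm: the initial rate \(2/n\), the clamping of the rate to \([2/n, 1/4]\), and the "\(\ge\)"-elitist replacement are assumptions going beyond the abstract.

```python
import random

def onemax(x):
    # OneMax fitness: the number of one-bits in the bit string
    return sum(x)

def mutate(x, rate, rng):
    # standard bit mutation: flip each bit independently with probability `rate`
    return [b ^ (rng.random() < rate) for b in x]

def one_plus_lam_ea(n, lam, rng=None):
    # (1+lambda) EA with the two-rate self-adjusting mutation rate.
    # Initial rate, rate bounds, and tie-breaking are assumptions.
    rng = rng or random.Random()
    parent = [rng.randrange(2) for _ in range(n)]
    rate = 2 / n                # assumed initial mutation rate
    evals = 0
    while onemax(parent) < n:
        offspring = []
        for i in range(lam):
            # half the offspring use twice the current rate, half use half of it
            r = 2 * rate if i < lam // 2 else rate / 2
            offspring.append((mutate(parent, r, rng), r))
            evals += 1
        best, best_rate = max(offspring, key=lambda o: onemax(o[0]))
        if onemax(best) >= onemax(parent):  # elitist replacement
            parent = best
        # adopt the rate used by the subpopulation that produced the best
        # offspring, clamped to an assumed sensible range [2/n, 1/4]
        rate = min(max(best_rate, 2 / n), 1 / 4)
    return evals
```

Running `one_plus_lam_ea(n, lam)` returns the number of fitness evaluations until the optimum is reached; the theorem above predicts this grows as \(O(n\lambda /\log \lambda + n\log n)\) in expectation.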
Notes
As usual in the analysis of algorithms, we write \(\log \) when there is no need to specify the base of the logarithm, e.g., in asymptotic terms like \(\Theta (\log n)\). To avoid a possible ambiguity for small arguments, as usual, we follow the convention that such logarithms are implicitly capped below at 1, that is, \(\log x\) denotes \(\max \{1, \log x\}\). All other logarithms are used in their classic meaning, that is, for all \(x > 0\), we denote by \(\ln x\) the natural logarithm of x and by \(\log _2 x\) its binary logarithm.
Note added in proof: very recently, it was shown in [29] that the \((1,\lambda )\) EA with self-adaptive mutation rate has a runtime asymptotically equivalent to that of the algorithm considered in this work.
References
Alanazi, F., Lehre, P.K.: Runtime analysis of selection hyper-heuristics with classical learning mechanisms. In: Proceedings of CEC ’14, pp. 2515–2523. IEEE (2014)
Antipov, D., Doerr, B., Fang, J., Hetet, T.: Runtime analysis for the \((\mu +\lambda )\) EA optimizing OneMax. In: Proceedings of GECCO ’18, pp. 1459–1466. ACM (2018)
Auger, A., Doerr, B. (eds.): Theory of Randomized Search Heuristics. World Scientific Publishing, Singapore (2011)
Badkobeh, G., Lehre, P.K., Sudholt, D.: Unbiased black-box complexity of parallel search. In: Proceedings of PPSN ’14, pp. 892–901. Springer (2014)
Böttcher, S., Doerr, B., Neumann, F.: Optimal fixed and adaptive mutation rates for the LeadingOnes problem. In: Proceedings of PPSN ’10, pp. 1–10. Springer (2010)
Buzdalov, M., Doerr, B.: Runtime analysis of the \((1+(\lambda ,\lambda ))\) genetic algorithm on random satisfiable 3-CNF formulas. In: Proceedings of GECCO ’17, pp. 1343–1350. ACM (2017)
Cathabard, S., Lehre, P.K., Yao, X.: Non-uniform mutation rates for problems with unknown solution lengths. In: Proceedings of FOGA ’11, pp. 173–180. ACM (2011)
Cervantes, J., Stephens, C.R.: Rank based variation operators for genetic algorithms. In: Proceedings of GECCO ’08, pp. 905–912. ACM (2008)
Dang, D.-C., Lehre, P.K.: Self-adaptation of mutation rates in non-elitist populations. In: Proceedings of PPSN ’16, pp. 803–813. Springer (2016)
Dietzfelbinger, M., Rowe, J.E., Wegener, I., Woelfel, P.: Tight bounds for blind search on the integers and the reals. Comb. Probab. Comput. 19, 711–728 (2010)
Doerr, B.: Analyzing randomized search heuristics: tools from probability theory. In: Auger, A., Doerr, B. (eds.) Theory of Randomized Search Heuristics, pp. 1–20. World Scientific Publishing, Singapore (2011)
Doerr, B.: Optimal parameter settings for the \((1+(\lambda , \lambda ))\) genetic algorithm. In: Proceedings of GECCO ’16, pp. 1107–1114. ACM (2016)
Doerr, B.: An elementary analysis of the probability that a binomial random variable exceeds its expectation. Stat. Probab. Lett. 139, 67–74 (2018)
Doerr, B., Doerr, C.: Optimal parameter choices through self-adjustment: applying the 1/5-th rule in discrete settings. In: Proceedings of GECCO ’15, pp. 1335–1342. ACM (2015)
Doerr, B., Künnemann, M.: Optimizing linear functions with the (1+\(\lambda \)) evolutionary algorithm—different asymptotic runtimes for different instances. Theor. Comput. Sci. 561, 3–23 (2015)
Doerr, B., Fouz, M., Witt, C.: Quasirandom evolutionary algorithms. In: Proceedings of GECCO ’10, pp. 1457–1464. ACM (2010)
Doerr, B., Fouz, M., Witt, C.: Sharp bounds by probability-generating functions and variable drift. In: Proceedings of GECCO ’11, pp. 2083–2090. ACM (2011)
Doerr, B., Johannsen, D., Winzen, C.: Multiplicative drift analysis. Algorithmica 64, 673–697 (2012)
Doerr, B., Doerr, C., Ebel, F.: From black-box complexity to designing new genetic algorithms. Theor. Comput. Sci. 567, 87–104 (2015a)
Doerr, B., Doerr, C., Kötzing, T.: Solving problems with unknown solution length at (almost) no extra cost. In: Proceedings of GECCO ’15, pp. 831–838. ACM (2015b)
Doerr, B., Doerr, C., Kötzing, T.: The right mutation strength for multi-valued decision variables. In: Proceedings of GECCO ’16, pp. 1115–1122. ACM (2016a)
Doerr, B., Doerr, C., Kötzing, T.: Provably optimal self-adjusting step sizes for multi-valued decision variables. In: Proceedings of PPSN ’16, pp. 782–791. Springer (2016b)
Doerr, B., Doerr, C., Yang, J.: Optimal parameter choices via precise black-box analysis. In: Proceedings of GECCO ’16, pp. 1123–1130. ACM (2016c)
Doerr, B., Doerr, C., Yang, J.: \(k\)-bit mutation with self-adjusting \(k\) outperforms standard bit mutation. In: Proceedings of PPSN ’16, pp. 824–834. Springer (2016d)
Doerr, B., Doerr, C., Kötzing, T.: Unknown solution length problems with no asymptotically optimal run time. In: Proceedings of GECCO ’17, pp. 1367–1374. ACM (2017a)
Doerr, B., Gießen, C., Witt, C., Yang, J.: The (1+\(\lambda \)) evolutionary algorithm with self-adjusting mutation rate. In: Proceedings of GECCO ’17, pp. 1351–1358. ACM (2017b)
Doerr, B., Le, H.P., Makhmara, R., Nguyen, T.D.: Fast genetic algorithms. In: Proceedings of GECCO ’17, pp. 777–784. ACM (2017c)
Doerr, B., Lissovoi, A., Oliveto, P.S., Warwicker, J.A.: On the runtime analysis of selection hyper-heuristics with adaptive learning periods. In: Proceedings of GECCO ’18, pp. 1015–1022. ACM (2018a)
Doerr, B., Witt, C., Yang, J.: Runtime analysis for self-adaptive mutation rates. In: Proceedings of GECCO ’18, pp. 1475–1482. ACM (2018b)
Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theor. Comput. Sci. 276, 51–81 (2002)
Eiben, A.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Trans. Evolut. Comput. 3, 124–141 (1999)
Garnier, J., Kallel, L., Schoenauer, M.: Rigorous hitting times for binary mutations. Evolut. Comput. 7, 173–203 (1999)
Gießen, C., Witt, C.: The interplay of population size and mutation probability in the (1+\(\lambda \)) EA on OneMax. Algorithmica 78, 587–609 (2017)
Giel, O., Wegener, I.: Evolutionary algorithms and the maximum matching problem. In: Proceedings of STACS ’03, pp. 415–426. Springer (2003)
Hwang, H.-K., Panholzer, A., Rolin, N., Tsai, T.-H., Chen, W.-M.: Probabilistic analysis of the (1+1)-evolutionary algorithm. Evolut. Comput. 26, 299–345 (2018)
Jansen, T.: Analyzing Evolutionary Algorithms—The Computer Science Perspective. Springer, Berlin (2013)
Jansen, T., Wegener, I.: On the analysis of a dynamic evolutionary algorithm. J. Discrete Algorithms 4, 181–199 (2006)
Jansen, T., De Jong, K.A., Wegener, I.: On the choice of the offspring population size in evolutionary algorithms. Evolut. Comput. 13, 413–440 (2005)
Johannsen, D.: Random combinatorial structures and randomized search heuristics. Ph.D. thesis, Saarland University (2010)
Kaas, R., Buhrman, J.M.: Mean, median and mode in binomial distributions. Stat. Neerl. 34, 13–18 (1980)
Kötzing, T., Lissovoi, A., Witt, C.: (1+1) EA on generalized dynamic OneMax. In: Proceedings of FOGA ’15, pp. 40–51. ACM (2015)
Lässig, J., Sudholt, D.: Adaptive population models for offspring populations and parallel evolutionary algorithms. In: Proceedings of FOGA ’11, pp. 181–192. ACM (2011)
Lehre, P.K., Özcan, E.: A runtime analysis of simple hyper-heuristics: to mix or not to mix operators. In: Proceedings of FOGA ’13, pp. 97–104. ACM (2013)
Lehre, P.K., Witt, C.: Concentrated hitting times of randomized search heuristics with variable drift. In: Proceedings of ISAAC ’14, pp. 686–697. Springer (2014)
Lissovoi, A., Oliveto, P.S., Warwicker, J.A.: On the runtime analysis of generalised selection hyper-heuristics for pseudo-Boolean optimisation. In: Proceedings of GECCO ’17, pp. 849–856. ACM (2017)
Mitavskiy, B., Rowe, J.E., Cannings, C.: Theoretical analysis of local search strategies to optimize network communication subject to preserving the total number of links. Int. J. Intell. Comput. Cybern. 2, 243–284 (2009)
Mühlenbein, H.: How genetic algorithms really work: mutation and hillclimbing. In: Proceedings of PPSN ’92, pp. 15–26. Elsevier (1992)
Neumann, F., Wegener, I.: Randomized local search, evolutionary algorithms, and the minimum spanning tree problem. Theor. Comput. Sci. 378, 32–40 (2007)
Neumann, F., Witt, C.: Bioinspired Computation in Combinatorial Optimization—Algorithms and Their Computational Complexity. Springer, Berlin (2010)
Oliveto, P.S., Lehre, P.K., Neumann, F.: Theoretical analysis of rank-based mutation-combining exploration and exploitation. In: Proceedings of CEC ’09, pp. 1455–1462. IEEE (2009)
Qian, C., Tang, K., Zhou, Z.-H.: Selection hyper-heuristics can provably be helpful in evolutionary multi-objective optimization. In: Proceedings of PPSN ’16, pp. 835–846. Springer (2016)
Robbins, H.: A remark on Stirling’s formula. Am. Math. Mon. 62, 26–29 (1955)
Sudholt, D.: A new method for lower bounds on the running time of evolutionary algorithms. IEEE Trans. Evolut. Comput. 17, 418–435 (2013)
Wegener, I.: Simulated annealing beats Metropolis in combinatorial optimization. In: Proceedings of ICALP ’05, pp. 589–601. Springer (2005)
Zarges, C.: Rigorous runtime analysis of inversely fitness proportional mutation rates. In: Proceedings of PPSN ’08, pp. 112–122. Springer (2008)
Zarges, C.: On the utility of the population size for inversely fitness proportional mutation rates. In: Proceedings of FOGA ’09, pp. 39–46. ACM (2009)
Acknowledgements
This work was supported by a public grant as part of the Investissement d’avenir project (reference ANR-11-LABX-0056-LMH, LabEx LMH) and by a grant from the Danish Council for Independent Research (DFF-FNU 4002–00542).
This work started when the second author was still affiliated with the Technical University of Denmark.
An extended abstract of this report appeared in the proceedings of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017) [26].
Doerr, B., Gießen, C., Witt, C. et al. The (\(1+\lambda \)) Evolutionary Algorithm with Self-Adjusting Mutation Rate. Algorithmica 81, 593–631 (2019). https://doi.org/10.1007/s00453-018-0502-x