
Optimal Static and Self-Adjusting Parameter Choices for the \((1+(\lambda ,\lambda ))\) Genetic Algorithm


Abstract

The \((1+(\lambda ,\lambda ))\) genetic algorithm proposed in Doerr et al. (Theor Comput Sci 567:87–104, 2015) is one of the few examples for which a super-constant speed-up of the expected optimization time through the use of crossover could be rigorously demonstrated. It was proven that the expected optimization time of this algorithm on OneMax is \(O(\max \{n \log (n) / \lambda , \lambda n\})\) for any offspring population size \(\lambda \in \{1,\ldots ,n\}\) (and the other parameters suitably dependent on \(\lambda \)) and it was shown experimentally that a self-adjusting choice of \(\lambda \) leads to a better, most likely linear, runtime. In this work, we study more precisely how the optimization time depends on the parameter choices, leading to the following results on how to optimally choose the population size, the mutation probability, and the crossover bias both in a static and a dynamic fashion. For the mutation probability and the crossover bias depending on \(\lambda \) as in Doerr et al. (2015), we improve the previous runtime bound to \(O(\max \{n \log (n) / \lambda , n \lambda \log \log (\lambda )/\log (\lambda )\})\). This expression is minimized by a value of \(\lambda \) slightly larger than what the previous result suggested and gives an expected optimization time of \(O\left( n \sqrt{\log (n) \log \log \log (n) / \log \log (n)}\right) \). We show that no static choice in the three-dimensional parameter space of offspring population, mutation probability, and crossover bias gives an asymptotically better runtime. We also prove that the self-adjusting parameter choice suggested in Doerr et al. (2015) outperforms all static choices and yields the conjectured linear expected runtime. This is asymptotically optimal among all possible parameter choices.
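To see where the stated optimal population size comes from, one can balance the two terms of the improved bound; the following is a sketch of this standard calculation, not a verbatim reproduction of the paper's proof. Setting \(n \log (n)/\lambda = n \lambda \log \log (\lambda )/\log (\lambda )\) is equivalent to \(\lambda ^2 = \log (n) \log (\lambda )/\log \log (\lambda )\), which is satisfied up to constant factors by \(\lambda = \Theta \left( \sqrt{\log (n) \log \log (n)/\log \log \log (n)}\right) \), because then \(\log (\lambda ) = \Theta (\log \log (n))\) and \(\log \log (\lambda ) = \Theta (\log \log \log (n))\). Substituting this value into \(n \log (n)/\lambda \) yields the stated \(O\left( n \sqrt{\log (n) \log \log \log (n)/\log \log (n)}\right) \) bound.

The interplay of the three parameters and of the self-adjusting rule can also be made concrete in code. The following minimal Python sketch runs the \((1+(\lambda ,\lambda ))\) GA on OneMax with the coupling \(p = \lambda /n\) and \(c = 1/\lambda \) from Doerr et al. (2015) and a one-fifth success rule for \(\lambda \); the update factor F = 1.5, the evaluation budget, and all identifiers are illustrative assumptions of this sketch, not values or names prescribed by the paper.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)

def one_plus_lambda_lambda_ga(n, F=1.5, max_evals=10**6, seed=None):
    """Sketch of the (1+(lambda,lambda)) GA on OneMax with self-adjusting
    lambda via a one-fifth success rule (update factor F is an assumption)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, lam, evals = onemax(x), 1.0, 1
    while fx < n and evals < max_evals:
        lam_int = max(1, round(lam))
        p, c = lam / n, 1.0 / lam  # mutation probability and crossover bias

        # Mutation phase: sample ell ~ Bin(n, p) once, then create lam_int
        # offspring by flipping ell uniformly chosen bits each; keep the best.
        ell = sum(rng.random() < p for _ in range(n))
        best_mut, best_mut_fit = None, -1
        for _ in range(lam_int):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] = 1 - y[i]
            fy = onemax(y)
            evals += 1
            if fy > best_mut_fit:
                best_mut, best_mut_fit = y, fy

        # Crossover phase: lam_int biased uniform crossovers of x and the
        # mutation winner; each bit is taken from the winner with prob. c.
        best_cross, best_cross_fit = None, -1
        for _ in range(lam_int):
            z = [yi if rng.random() < c else xi for xi, yi in zip(x, best_mut)]
            fz = onemax(z)
            evals += 1
            if fz > best_cross_fit:
                best_cross, best_cross_fit = z, fz

        # Elitist selection plus one-fifth success rule: shrink lambda after
        # an improvement, grow it slowly otherwise.
        if best_cross_fit > fx:
            x, fx = best_cross, best_cross_fit
            lam = max(1.0, lam / F)
        else:
            if best_cross_fit == fx:
                x = best_cross
            lam = min(float(n), lam * F ** 0.25)
    return evals

if __name__ == "__main__":
    print(one_plus_lambda_lambda_ga(n=200, seed=42))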


Notes

  1. In [15, Section 4.4] and [16], a slightly different selection rule was suggested for the crossover phase, namely excluding from the crossover selection all individuals identical to x. This can speed up the traversal of large plateaus of equal fitness. For the OneMax function there is naturally no difference, so we present the algorithm in the simpler fashion given above; the variant is sketched in the code fragment after these notes.

  2. To be precise, we use here the fact that the bound of Theorem 1(b) also remains valid if both occurrences of E[X] are replaced by an upper bound on E[X]. This is a well-known fact, but a reference is seemingly not easy to find. The easiest solution is perhaps to derive it directly from Theorem 1(b) by extending the sequence \(X_1, \ldots , X_n\) of random variables with additional random variables that each take a fixed value with probability one. In this way we can artificially increase E[X] without changing the random variable \(X - E[X]\). Hence the bound obtained from applying the theorem to the extended sequence also applies to the original one; this computation is made explicit after these notes.
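The selection variant described in Note 1 amounts to a small change in such an implementation. The following hypothetical helper, reusing onemax from the sketch after the abstract, skips crossover offspring that are identical to the parent; it is an illustration of the rule in [15, Section 4.4] and [16], not pseudocode from the paper.

```python
def select_crossover_winner(x, cross_offspring, fitness=onemax):
    """Note 1 variant: crossover offspring identical to the parent x are
    excluded from the selection (hypothetical helper; if every offspring
    equals x, the parent is returned unchanged)."""
    candidates = [z for z in cross_offspring if z != x]
    return max(candidates, key=fitness) if candidates else x
```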
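To make the extension argument of Note 2 explicit, suppose for the sake of illustration that Theorem 1(b) is a multiplicative Chernoff bound of the form \(\Pr [X \ge (1+\delta )E[X]] \le e^{-\delta ^2 E[X]/3}\) for a sum \(X = X_1 + \cdots + X_n\) of independent random variables taking values in [0, 1]; the exact form in the paper may differ. Let \(\mu ^+ \ge E[X]\) be the upper bound (if \(\mu ^+ = E[X]\) there is nothing to show), choose \(k = \lceil \mu ^+ - E[X] \rceil \), and extend the sequence by the deterministic variables \(X_{n+1} = \cdots = X_{n+k} = (\mu ^+ - E[X])/k \in [0,1]\). The extended sum \(X'\) satisfies \(E[X'] = \mu ^+\) and \(X' - E[X'] = X - E[X]\), so
\[\Pr [X \ge (1+\delta )\mu ^+] \le \Pr [X' \ge (1+\delta )E[X']] \le e^{-\delta ^2 E[X']/3} = e^{-\delta ^2 \mu ^+/3},\]
which is the claimed bound with both occurrences of E[X] replaced by \(\mu ^+\).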

References

  1. Anil, G., Wiegand, R.P.: Black-box search by elimination of fitness functions. In: Proceedings of Foundations of Genetic Algorithms (FOGA’09), pp. 67–78. ACM (2009)

  2. Auger, A.: Benchmarking the (1+1) evolution strategy with one-fifth success rule on the BBOB-2009 function testbed. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’09), (Companion), pp. 2447–2452. ACM (2009)

  3. Auger, A., Hansen, N.: Linear convergence on positively homogeneous functions of a comparison based step-size adaptive randomized search: the (1+1) ES with generalized one-fifth success rule (2013). http://arxiv.org/abs/1310.8397

  4. Badkobeh, G., Lehre, P.K., Sudholt, D.: Unbiased black-box complexity of parallel search. In: Proceedings of Parallel Problem Solving from Nature (PPSN’14), Lecture Notes in Computer Science, vol. 8672, pp. 892–901. Springer (2014)

  5. Böttcher, S., Doerr, B., Neumann, F.: Optimal fixed and adaptive mutation rates for the LeadingOnes problem. In: Proceedings of Parallel Problem Solving from Nature (PPSN’10), Lecture Notes in Computer Science, vol. 6238, pp. 1–10. Springer (2010)

  6. Dang, D., Friedrich, T., Kötzing, T., Krejca, M.S., Lehre, P.K., Oliveto, P.S., Sudholt, D., Sutton, A.M.: Emergence of diversity and its benefits for crossover in genetic algorithms. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 890–900. Springer (2016)

  7. Dang, D., Friedrich, T., Kötzing, T., Krejca, M.S., Lehre, P.K., Oliveto, P.S., Sudholt, D., Sutton, A.M.: Escaping local optima with diversity mechanisms and crossover. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’16), pp. 645–652. ACM (2016)

  8. Dang, D.C., Lehre, P.K.: Self-adaptation of mutation rates in non-elitist populations. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 803–813. Springer (2016)

  9. De Jong, K.A.: An analysis of the behavior of a class of genetic adaptive systems. Ph.D. thesis, University of Michigan, Ann Arbor, MI, USA (1975)

  10. Devroye, L.: The compound random search. Ph.D. dissertation, Purdue University, West Lafayette, IN (1972)

  11. Doerr, B.: Analyzing randomized search heuristics: tools from probability theory. In: Auger, A., Doerr, B. (eds.) Theory of Randomized Search Heuristics, pp. 1–20. World Scientific Publishing (2011). http://www.worldscientific.com/doi/suppl/10.1142/7438/suppl_file/7438_chap01.pdf

  12. Doerr, B.: Optimal parameter settings for the \((1+(\lambda , \lambda ))\) genetic algorithm. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’16), pp. 1107–1114. ACM (2016)

  13. Doerr, B., Doerr, C.: Optimal parameter choices through self-adjustment: Applying the 1/5-th rule in discrete settings. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’15), pp. 1335–1342. ACM (2015)

  14. Doerr, B., Doerr, C.: A tight runtime analysis of the \((1+(\lambda ,\lambda ))\) genetic algorithm on OneMax. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’15), pp. 1423–1430. ACM (2015)

  15. Doerr, B., Doerr, C., Ebel, F.: Lessons from the black-box: Fast crossover-based genetic algorithms. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’13), pp. 781–788. ACM (2013)

  16. Doerr, B., Doerr, C., Ebel, F.: From black-box complexity to designing new genetic algorithms. Theor. Comput. Sci. 567, 87–104 (2015)


  17. Doerr, B., Doerr, C., Kötzing, T.: Provably optimal self-adjusting step sizes for multi-valued decision variables. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 782–791. Springer (2016)

  18. Doerr, B., Doerr, C., Kötzing, T.: The right mutation strength for multi-valued decision variables. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’16), pp. 1115–1122. ACM (2016). http://arxiv.org/abs/1604.03277

  19. Doerr, B., Doerr, C., Yang, J.: \(k\)-bit mutation with self-adjusting \(k\) outperforms standard bit mutation. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 824–834. Springer (2016)

  20. Doerr, B., Goldberg, L.A.: Drift analysis with tail bounds. In: Proceedings of Parallel Problem Solving from Nature (PPSN’10), Lecture Notes in Computer Science, vol. 6238, pp. 174–183. Springer (2010)

  21. Doerr, B., Goldberg, L.A.: Adaptive drift analysis. Algorithmica 65, 224–250 (2013)


  22. Doerr, B., Happ, E., Klein, C.: Crossover can provably be useful in evolutionary computation. Theor. Comput. Sci. 425, 17–33 (2012)


  23. Doerr, B., Jansen, T., Sudholt, D., Winzen, C., Zarges, C.: Mutation rate matters even when optimizing monotonic functions. Evol. Comput. 21, 1–27 (2013)


  24. Doerr, B., Johannsen, D., Kötzing, T., Lehre, P.K., Wagner, M., Winzen, C.: Faster black-box algorithms through higher arity operators. In: Proceedings of Foundations of Genetic Algorithms (FOGA’11), pp. 163–172. ACM (2011)

  25. Doerr, B., Johannsen, D., Kötzing, T., Neumann, F., Theile, M.: More effective crossover operators for the all-pairs shortest path problem. Theor. Comput. Sci. 471, 12–26 (2013)


  26. Doerr, B., Johannsen, D., Winzen, C.: Multiplicative drift analysis. Algorithmica 64, 673–697 (2012)


  27. Doerr, B., Künnemann, M.: Optimizing linear functions with the (1+\(\lambda \)) evolutionary algorithm–different asymptotic runtimes for different instances. Theor. Comput. Sci. 561, 3–23 (2015)


  28. Doerr, B., Theile, M.: Improved analysis methods for crossover-based algorithms. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’09), pp. 247–254. ACM (2009)

  29. Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theor. Comput. Sci. 276, 51–81 (2002)


  30. Droste, S., Jansen, T., Wegener, I.: Upper and lower bounds for randomized search heuristics in black-box optimization. Theory Comput. Syst. 39, 525–544 (2006)


  31. Eiben, A.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 3, 124–141 (1999)


  32. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing. Springer, Berlin (2003)


  33. Erdős, P., Rényi, A.: On two problems of information theory. Magyar Tudományos Akadémia Matematikai Kutató Intézet Közleményei 8, 229–243 (1963)


  34. Fischer, S., Wegener, I.: The Ising model on the ring: mutation versus recombination. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’04), Lecture Notes in Computer Science, vol. 3102, pp. 1113–1124. Springer (2004)

  35. Gießen, C., Witt, C.: Population size vs. mutation strength for the (1+\(\lambda \)) EA on OneMax. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’15), pp. 1439–1446. ACM (2015)

  36. Goldman, B.W., Punch, W.F.: Parameter-less population pyramid. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’14), pp. 785–792. ACM (2014)

  37. Hansen, N., Gawelczyk, A., Ostermeier, A.: Sizing the population with respect to the local progress in (1,\( \lambda \))-evolution strategies—a theoretical analysis. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC’95), pp. 80–85. IEEE (1995)

  38. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127, 57–85 (2001)


  39. Jägersküpper, J.: Rigorous runtime analysis of the (1+1) ES: 1/5-rule and ellipsoidal fitness landscapes. In: Proceedings of Foundations of Genetic Algorithms (FOGA’05), Lecture Notes in Computer Science, vol. 3469, pp. 260–281. Springer (2005)

  40. Jansen, T.: Analyzing Evolutionary Algorithms: The Computer Science Perspective. Springer, Berlin (2013)


  41. Jansen, T., De Jong, K.A., Wegener, I.: On the choice of the offspring population size in evolutionary algorithms. Evol. Comput. 13, 413–440 (2005)


  42. Jansen, T., Wegener, I.: The analysis of evolutionary algorithms–a proof that crossover really can help. Algorithmica 34, 47–66 (2002)


  43. Jansen, T., Wegener, I.: On the analysis of a dynamic evolutionary algorithm. J. Discrete Algorithms 4, 181–199 (2006)


  44. Karafotias, G., Hoogendoorn, M., Eiben, A.: Parameter control in evolutionary algorithms: trends and challenges. IEEE Trans. Evol. Comput. 19, 167–187 (2015)


  45. Kern, S., Müller, S.D., Hansen, N., Büche, D., Ocenasek, J., Koumoutsakos, P.: Learning probability distributions in continuous evolutionary algorithms–a comparative review. Nat. Comput. 3, 77–112 (2004)


  46. Kötzing, T.: Concentration of first hitting times under additive drift. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’14), pp. 1391–1398. ACM (2014)

  47. Lässig, J., Sudholt, D.: Adaptive population models for offspring populations and parallel evolutionary algorithms. In: Proceedings of Foundations of Genetic Algorithms (FOGA’11), pp. 181–192. ACM (2011)

  48. Lehre, P.K., Witt, C.: Black-box search by unbiased variation. Algorithmica 64, 623–642 (2012)


  49. Mironovich, V., Buzdalov, M.: Hard test generation for maximum flow algorithms with the fast crossover-based evolutionary algorithm. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’15) (Companion Material), pp. 1229–1232. ACM (2015)

  50. Mitchell, M., Holland, J.H., Forrest, S.: When will a genetic algorithm outperform hill climbing? In: Proceedings of the 7th Neural Information Processing Systems Conference (NIPS’93), Advances in Neural Information Processing Systems, vol. 6, pp. 51–58. Morgan Kaufmann (1993)

  51. Oliveto, P.S., Lehre, P.K., Neumann, F.: Theoretical analysis of rank-based mutation—combining exploration and exploitation. In: Proceedings of Congress on Evolutionary Computation (CEC’09), pp. 1455–1462. IEEE (2009)

  52. Oliveto, P.S., Yao, X.: Runtime analysis of evolutionary algorithms for discrete optimization. In: Auger, A., Doerr, B. (eds.) Theory of Randomized Search Heuristics, pp. 21–52. World Scientific Publishing, Singapore (2011)


  53. Raab, M., Steger, A.: “Balls into bins”—a simple and tight analysis. In: Proceedings of Randomization and Approximation Techniques in Computer Science (RANDOM’98), Lecture Notes in Computer Science, vol. 1518, pp. 159–170. Springer (1998)

  54. Rechenberg, I.: Evolutionsstrategie. Friedrich Frommann Verlag (Günther Holzboog KG), Stuttgart (1973)


  55. Schumer, M.A., Steiglitz, K.: Adaptive step size random search. IEEE Trans. Autom. Control 13, 270–276 (1968)


  56. Sudholt, D.: Crossover is provably essential for the Ising model on trees. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’05), pp. 1161–1167. ACM (2005)

  57. Sudholt, D.: Crossover speeds up building-block assembly. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’12), pp. 689–702. ACM (2012)

  58. Wegener, I.: Theoretical aspects of evolutionary algorithms. In: Orejas, F., Spirakis, P.G., van Leeuwen, J. (eds.) Proceedings of the 28th International Colloquium on Automata, Languages and Programming (ICALP’01), Lecture Notes in Computer Science, vol. 2076, pp. 64–78. Springer (2001)

  59. Wegener, I.: Methods for the analysis of evolutionary algorithms on pseudo-Boolean functions. In: Sarker, R., Mohammadian, M., Yao, X. (eds.) Evolutionary Optimization, pp. 349–369. Kluwer, Berlin (2002)


  60. Wegener, I.: Simulated annealing beats Metropolis in combinatorial optimization. In: Proceedings of International Colloquium on Automata, Languages and Programming (ICALP’05), Lecture Notes in Computer Science, vol. 3580, pp. 589–601. Springer (2005)

  61. Witt, C.: Tight bounds on the optimization time of a randomized search heuristic on linear functions. Comb. Probab. Comput. 22, 294–318 (2013)


  62. Witt, C.: Fitness levels with tail bounds for the analysis of randomized search heuristics. Inf. Process. Lett. 114, 38–41 (2014)


  63. Zarges, C.: Rigorous runtime analysis of inversely fitness proportional mutation rates. In: Proceedings of Parallel Problem Solving from Nature (PPSN’08), Lecture Notes in Computer Science, vol. 5199, pp. 112–122. Springer (2008)

  64. Zarges, C.: On the utility of the population size for inversely fitness proportional mutation rates. In: Proceedings of Foundations of Genetic Algorithms (FOGA’09), pp. 39–46. ACM (2009)


Acknowledgements

This work was supported by a public grant as part of the Investissement d’avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with the Programme Gaspard Monge en Optimisation et Recherche Opérationnelle. It was also supported by a second project within the “FMJH Program Gaspard Monge in optimization and operations research”, with support from EDF (Électricité de France) to this program.

Author information

Correspondence to Carola Doerr.

Additional information

Results presented in this work are based on [12,13,14].


About this article


Cite this article

Doerr, B., Doerr, C. Optimal Static and Self-Adjusting Parameter Choices for the \((1+(\lambda ,\lambda ))\) Genetic Algorithm. Algorithmica 80, 1658–1709 (2018). https://doi.org/10.1007/s00453-017-0354-9
