What Can We Learn from Multi-Objective Meta-Optimization of Evolutionary Algorithms in Continuous Domains?
Figure 1. The three PSO topologies used in this work: global, ring, and star.
Figure 2. Scheme of SEPaT/EMOPaT. The lower part represents a classical EA. In the meta-optimization process, each individual of the Tuner-EA represents a set of parameters. For each set, the corresponding instance of the lower-level EA (LL-EA) is run N times to optimize the objective function(s). Quality indices (one for SEPaT, more than one for EMOPaT) are values that provide a global evaluation of the results obtained by the LL-EA in these runs.
Figure 3. Encoding of DE (left) and PSO (right) configurations in a tuner EA.
Figure 4. DE parameters of the top solutions (best 10%) for the 30-dimensional Rastrigin (first row) and Sphere (second row) functions, with an available budget of 1000, 10,000, and 100,000 fitness evaluations. Bar plots indicate the normalized selection frequency. Descending/ascending trends for all parameter values are clearly visible.
Figure 5. PSO parameters of the top solutions (best 10% of the population) for the 30-dimensional Rastrigin (first row) and Sphere (second row) functions, with an available budget of 1000, 10,000, and 100,000 fitness evaluations. Descending/ascending trends for all parameter values are clearly visible.
Figure 6. Fitness values of all the solutions found in ten independent runs of EMOPaT for the three criteria, plotted pairwise for adjacent values of the budget. The green and red stars represent the Top Solutions for each objective; yellow circles are candidate solutions for intermediate evaluation budgets.
Figure 7. Average fitness versus number of fitness evaluations for configurations generated for PSO (first and second rows) and DE (third and fourth) for the 30-dimensional Rastrigin (above) and Sphere (below) functions. The plots on the right magnify the first 1000 evaluations to better compare the performance of the “fast” versions.
Figure A1. Encoding of PSO configurations in the four cases presented in Appendix A.2. From top left, clockwise: useless parameter, harmful numerical parameter, equivalent, and harmful topology.
Figure A2. Values of fitness (Sphere function) versus PSO parameters at the end of the tuning procedure. The last graph refers to the useless parameter γ which, unlike the others, spans across all possible values with no correlation with fitness.
Figure A3. Evolution of the “bad parameter” β, averaged over all individuals in ten independent runs of EMOPaT, versus generation number.
Figure A4. Average values and selection percentages of the genes representing the four topologies versus the number of EMOPaT generations. Results averaged over 64 individuals in 10 runs.
Figure A5. Average values of the genes representing the four topologies (including the replicated one) and selection percentages. The x axis reports the number of EMOPaT generations.
Abstract
1. Introduction
2. Background
2.1. Differential Evolution
2.2. Particle Swarm Optimization
2.3. NSGA-II
3. Related Work
- Parameter tuning: the parameters are chosen offline and their values do not change during evolution, which is the case of interest for this paper;
- Parameter control [14]: the parameters may vary during evolution, according to a strategy that depends on the results being achieved. These changes are usually driven either by fitness improvement (or the lack thereof) or by properties of the evolving population, such as diversity or entropy. A toy contrast of the two settings is sketched below.
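To make the distinction concrete, the sketch below runs the same one-dimensional DE loop twice: once with parameters fixed offline, as in the tuning scenario studied in this paper, and once with the scale factor F adapted online from a population-diversity measure. The adaptation rule and the `diversity` helper are illustrative assumptions, not a specific parameter-control scheme from the literature.

```python
import random


def diversity(population):
    """Toy diversity measure: mean absolute deviation from the population mean."""
    mean = sum(population) / len(population)
    return sum(abs(x - mean) for x in population) / len(population)


def run_de(fitness, pop_size=30, budget=3000, f=0.5, cr=0.9, control=False):
    """1-D DE/rand/1 loop; with control=True, F is adapted online from diversity."""
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    evals = pop_size
    while evals < budget:
        if control:
            # Parameter control: raise F when the population is collapsing.
            f = 0.9 if diversity(pop) < 0.1 else 0.5
        for i in range(pop_size):
            a, b, c = random.sample(range(pop_size), 3)
            mutant = pop[a] + f * (pop[b] - pop[c])             # DE/rand/1 mutation
            trial = mutant if random.random() < cr else pop[i]  # crossover (1-D case)
            t_fit = fitness(trial)
            evals += 1
            if t_fit < fit[i]:                                  # greedy selection
                pop[i], fit[i] = trial, t_fit
    return min(fit)


# Parameter tuning: F and CR are chosen offline (e.g., by a tuner such as EMOPaT) and kept fixed.
print(run_de(lambda x: x * x, f=0.7, cr=0.2))
# Parameter control: F changes during the run according to the population state.
print(run_de(lambda x: x * x, control=True))
```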
4. EMOPaT, a General Framework for Multi-Objective Meta-Optimization
Parameter Representation
5. Experimental Evaluation
5.1. Multi-Objective Single-Function Optimization Under Different Constraints
- infer them as the mean of two top solutions found for two different objectives between which the new objective lies. This approach presents some limitations: (i) the parameter must have a clear trend, (ii) some policy for nominal parameters must be defined if the two reference configurations have different settings, (iii) there is no guarantee that all intermediate values of a parameter correspond to valid configurations of the algorithm;
- select them from the Pareto front by plotting the set of all the individuals obtained at the end of ten independent runs on the two objectives of interest, estimate the Pareto front based on those solutions, and randomly pick one configuration that lies on it, in an intermediate position between the extremes (see an example related to the Rastrigin function in Figure 6). A minimal sketch of the first option follows this list.
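The sketch below illustrates the first option, assuming configurations are stored as plain dictionaries; the example values are the Rastrigin Q1K and Q10K Top Solutions reported in the tables of this paper, and the helper name `infer_intermediate` is ours.

```python
def infer_intermediate(cfg_fast, cfg_slow):
    """Average numerical parameters of two Top Solutions; nominal ones need a policy."""
    inferred = {}
    for name, a in cfg_fast.items():
        b = cfg_slow[name]
        if isinstance(a, (int, float)):
            value = (a + b) / 2          # limitation (i): assumes a clear, monotone trend
            inferred[name] = round(value) if isinstance(a, int) and isinstance(b, int) else value
        elif a == b:
            inferred[name] = a           # nominal parameter: both configurations agree
        else:
            inferred[name] = (a, b)      # limitation (ii): an explicit policy is needed
    return inferred


# Top Solutions for Rastrigin (DE) with budgets of 1000 and 10,000 evaluations.
q1k = {"PopSize": 9, "CR": 0.214, "F": 0.736, "Mutation": "target-to-best", "Crossover": "binomial"}
q10k = {"PopSize": 7, "CR": 0.053, "F": 0.754, "Mutation": "target-to-best", "Crossover": "exponential"}
print(infer_intermediate(q1k, q10k))
# PopSize 8, CR ≈ 0.13, F ≈ 0.75, Mutation target-to-best, Crossover undecided (binomial, exponential)
```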
5.2. Multi-Function Optimization
6. Summary and Future Work
- Select a proper set of problem-related fitness cases.
- Select the optimization method(s) whose parameters one wants to tune.
- Select the objectives to optimize (convergence speed, solution quality, robustness, etc.).
- Run EMOPaT and save the resulting parameter set (a toy rendition of this loop is sketched below).
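The four steps above form the skeleton of the meta-optimization loop. The toy below is a runnable simplification under explicit assumptions: a (1+1) Gaussian perturbation search stands in for the lower-level EA, the Sphere function is the only fitness case, the objectives are the mean best fitness after two budgets, and a one-shot non-dominated filter stands in for NSGA-II.

```python
import random
import statistics


def sphere(x):
    return sum(v * v for v in x)


def run_llea(params, budget, dim=10):
    """Stand-in for the lower-level EA: (1+1) Gaussian perturbation search."""
    best = [random.uniform(-5, 5) for _ in range(dim)]
    best_fit = sphere(best)
    for _ in range(budget - 1):
        cand = [v + random.gauss(0, params["step"]) for v in best]
        cand_fit = sphere(cand)
        if cand_fit < best_fit:
            best, best_fit = cand, cand_fit
    return best_fit


def quality_indices(params, budgets=(1000, 10000), n_runs=5):
    """One quality index per objective: mean best fitness over n_runs at each budget."""
    return [statistics.mean(run_llea(params, b) for _ in range(n_runs)) for b in budgets]


def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))


# Steps 1-3: fitness case, tunable parameter, and objectives are fixed above.
candidates = [{"step": random.uniform(0.01, 2.0)} for _ in range(8)]
scored = [(p, quality_indices(p)) for p in candidates]
# Step 4 (greatly simplified): keep the non-dominated parameter sets; EMOPaT instead
# evolves them with NSGA-II over many generations before saving the result.
front = [(p, q) for p, q in scored if not any(dominates(q2, q) for _, q2 in scored)]
print(front)
```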
- At present, the analysis of the results is essentially a “manual” process. What is the best way to automatically extract, generalize, and infer parameters?
- EMOPaT belongs to the class of offline parameter tuning algorithms, in which parameter values are set before the optimization process starts and do not change during its execution. Could it also be used to tune the parameters of a population of EAs online, adapting parameter values as optimization proceeds?
- In our work, we proved that EMOPaT can also be used to generalize results on a single function (see Section 5.1). Can this idea be extended to different functions? If we obtain a Pareto Front by optimizing two functions, can we extract parameters for a function that lies “between” these two, according to some metric that takes into consideration some of their properties?
- Is it possible to group or cluster functions based on the best-performing parameters found by EMOPaT?
Author Contributions
Funding
Ethical Approval
Conflicts of Interest
Appendix A
Appendix A.1. Comparison with SEPaT
| Setting | Value |
|---|---|
| EMOPaT | Tuner EA = NSGA-II, Population Size = 200, 80 Generations, Mutation Rate = 0.125, Crossover Rate = 0.9 |
| SEPaT | Tuner EA = DE, Population Size = 200, 80 Generations, CR = 0.91, F = 0.52, Mutation = target-to-best, Crossover = exponential |
| Functions | 10-dimensional Sphere, Rotated Cigar, Rotated Rosenbrock, Rotated Ackley, Rastrigin, Composition Function 1 (CF1), Composition Function 3 (CF3) |
| Evaluation criterion | Best fitness in 20,000 evaluations, averaged over repetitions |
Differential Evolution

| Function | Method | PopSize | CR | F | Mutation | Crossover |
|---|---|---|---|---|---|---|
| Sphere | SEPaT | 20 | 0.506 | 0.520 | target-to-best | exponential |
| | EMOPaT | 12 | 0.181 | 0.718 | target-to-best | exponential |
| R. Cigar | SEPaT | 60 | 0.955 | 0.660 | target-to-best | binomial |
| | EMOPaT | 38 | 0.916 | 0.699 | target-to-best | binomial |
| R. Rosenbrock | SEPaT | 39 | 0.993 | 0.745 | random | exponential |
| | EMOPaT | 47 | 0.989 | 0.761 | random | exponential |
| R. Ackley | SEPaT | 85 | 0.327 | 0.0 | random | exponential |
| | EMOPaT | 248 | 0.960 | 0.0 | random | exponential |
| Rastrigin | SEPaT | 36 | 0.014 | 0.359 | random | exponential |
| | EMOPaT | 25 | 0.049 | 1.065 | random | exponential |
| CF 1 | SEPaT | 18 | 0.0 | 1.777 | best | exponential |
| | EMOPaT | 33 | 0.045 | 1.070 | best | exponential |
| CF 3 | SEPaT | 89 | 0.794 | 0.070 | random | binomial |
| | EMOPaT | 98 | 0.868 | 0.088 | random | binomial |
| – | Standard | 30 | 0.9 | 0.5 | random | exponential |

Particle Swarm Optimization

| Function | Method | PopSize | w | c1 | c2 | Topology |
|---|---|---|---|---|---|---|
| Sphere | SEPaT | 88 | 0.529 | 1.574 | 1.057 | global |
| | EMOPaT | 25 | 0.774 | 1.989 | 0.591 | global |
| R. Cigar | SEPaT | 67 | 0.713 | 0.531 | 1.130 | ring |
| | EMOPaT | 41 | 0.757 | 1.159 | 1.097 | ring |
| R. Rosenbrock | SEPaT | 104 | 0.597 | 1.032 | 1.064 | ring |
| | EMOPaT | 87 | −0.451 | −0.092 | 1.987 | ring |
| R. Ackley | SEPaT | 113 | 0.381 | 0.210 | 1.722 | ring |
| | EMOPaT | 115 | 0.303 | −0.006 | 2.467 | ring |
| Rastrigin | SEPaT | 13 | −0.236 | 0.090 | 3.291 | global |
| | EMOPaT | 7 | −0.260 | 0.021 | 3.314 | global |
| CF 1 | SEPaT | 92 | −0.147 | −0.462 | 2.892 | ring |
| | EMOPaT | 61 | −0.163 | −0.376 | 3.104 | ring |
| CF 3 | SEPaT | 61 | 0.852 | 0.347 | 0.989 | ring |
| | EMOPaT | 217 | 0.728 | 1.217 | 0.565 | global |
| – | Standard | 30 | 0.721 | 1.193 | 1.193 | ring |
| EA | Function | EMOPaT (Fitness) | SEPaT (Fitness) | Standard (Fitness) | vs. SEPaT (p-Value) | vs. Standard (p-Value) |
|---|---|---|---|---|---|---|
| DE | Sphere | | | | | |
| | R. Cigar | | | | | |
| | R. Rosenbrock | | | | | |
| | R. Ackley | | | | | |
| | Rastrigin | | | | | |
| | CF 1 | | | | | |
| | CF 3 | | | | | |
| PSO | Sphere | | | | | |
| | R. Cigar | | | | | |
| | R. Rosenbrock | | | | | |
| | R. Ackley | | | | | |
| | Rastrigin | | | | | |
| | CF 1 | | | | | |
| | CF 3 | | | | | |
Appendix A.2. Empirical Validation
- A useless numerical parameter, i.e., with no effects at all on the algorithm;
- A harmful numerical parameter, i.e., the higher its value, the worse the fitness;
- A harmful nominal parameter choice, i.e., one that consistently produces bad fitness values whenever it is selected;
- Two totally equivalent choices of a nominal parameter.
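As an illustration of how the first two cases can be injected into the algorithm, the sketch below extends a standard one-dimensional PSO velocity update with an unused gene gamma and a noise-only gene beta. This is an assumption about the mechanism, made for illustration, not the authors' exact modification.

```python
import random


def velocity_update(v, x, pbest, gbest, w, c1, c2, gamma, beta):
    """Standard 1-D PSO velocity update, extended with two artificial parameters."""
    r1, r2 = random.random(), random.random()
    new_v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    _ = gamma                            # useless parameter: decoded and tuned, but never used
    new_v += beta * random.gauss(0, 1)   # harmful parameter: larger values only add noise
    return new_v
```

A well-behaved tuner should leave gamma spread over its whole range (Figure A2) and push beta towards zero (Figure A3).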
| Setting | Value |
|---|---|
| EMOPaT | Population Size = 64, 100 Generations, Mutation Rate = 0.125, Crossover Rate = 0.9 |
| Functions | 30-dimensional Sphere and Rastrigin |
| Evaluation criterion | Best fitness in 20,000 evaluations, averaged over repetitions |
Appendix A.2.1. Useless Parameter
| Parameter | Population Size | w | c1 | c2 | γ |
|---|---|---|---|---|---|
| Mean | 0.159 | 0.555 | 0.586 | 0.332 | 0.441 |
| Variance | 0.0313 | 0.0109 | 0.0120 | 0.0103 | 0.0780 |
| Correlation with … | −0.0777 | −0.179 | −0.159 | 0.149 | – |
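The kind of summary shown above can be computed directly from the final tuner population. A minimal sketch, assuming each tuned configuration is stored as a dictionary of normalized parameter values plus the fitness it achieved; the helper name `parameter_report` is ours, and `statistics.correlation` requires Python 3.10+.

```python
import statistics


def parameter_report(final_population, names):
    """Mean, variance, and correlation with fitness of each tuned parameter."""
    fitness = [ind["fitness"] for ind in final_population]
    report = {}
    for name in names:
        values = [ind[name] for ind in final_population]
        report[name] = {
            "mean": statistics.mean(values),
            "variance": statistics.pvariance(values),
            # A useless parameter stays spread over its range (variance close to
            # 1/12 ≈ 0.083 for a uniform gene) and shows almost no correlation.
            "correlation": statistics.correlation(values, fitness),
        }
    return report
```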
Appendix A.2.2. Harmful Numerical Parameter
Appendix A.2.3. Harmful Nominal Parameter Setting
Appendix A.2.4. Equivalent Settings
References
- Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing; Springer: Berlin, Germany, 2003. [Google Scholar]
- Pitzer, E.; Affenzeller, M. A Comprehensive Survey on Fitness Landscape Analysis. In Recent Advances in Intelligent Engineering Systems; Studies in Computational Intelligence; Fodor, J., Klempous, R., Suárez Araujo, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 378, pp. 161–191. [Google Scholar]
- Smith-Miles, K.; Tan, T. Measuring algorithm footprints in instance space. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
- Mercer, R.; Sampson, J. Adaptive search using a reproductive metaplan. Kybernetes 1978, 7, 215–228. [Google Scholar] [CrossRef]
- Ugolotti, R.; Nashed, Y.S.G.; Mesejo, P.; Cagnoni, S. Algorithm Configuration using GPU-based Metaheuristics. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, The Netherlands, 6–10 July 2013; pp. 221–222. [Google Scholar]
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces; Technical Report; International Computer Science Institute: Berkeley, CA, USA, 1995. [Google Scholar]
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
- Ugolotti, R.; Cagnoni, S. Analysis of Evolutionary Algorithms using Multi-Objective Parameter Tuning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Vancouver, BC, Canada, 12–16 July 2014; pp. 1343–1350. [Google Scholar]
- Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolut. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
- Deb, K.; Srinivasan, A. Innovization: Innovating design principles through optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Seattle, WA, USA, 8–12 July 2006; pp. 1629–1636. [Google Scholar]
- Das, S.; Suganthan, P. Differential Evolution: A Survey of the State-of-the-Art. IEEE Trans. Evolut. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
- Montero, E.; Riff, M.C.; Rojas-Morales, N. Tuners review: How crucial are set-up values to find effective parameter values? Eng. Appl. Artif. Intell. 2018, 76, 108–118. [Google Scholar] [CrossRef]
- Sipper, M.; Fu, W.; Ahuja, K.; Moore, J.H. Investigating the parameter space of evolutionary algorithms. BioData Min. 2018, 11, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Karafotias, G.; Hoogendoorn, M.; Eiben, Á.E. Parameter control in evolutionary algorithms: Trends and challenges. IEEE Trans. Evolut. Comput. 2015, 19, 167–187. [Google Scholar] [CrossRef]
- Nannen, V.; Eiben, A.E. Relevance Estimation and Value Calibration of Evolutionary Algorithm Parameters. In Proceedings of the International Joint Conference on Artifical Intelligence (IJCAI), Hyderabad, India, 6–12 January 2007; pp. 975–980. [Google Scholar]
- Larrañaga, P.; Lozano, J.A. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation; Kluwer Academic Publishers: Norwell, MA, USA, 2001. [Google Scholar]
- Smit, S.K.; Eiben, A.E. Beating the ‘world champion’ evolutionary algorithm via REVAC tuning. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar]
- Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization; Technical Report, KanGAL Report 2005005; Nanyang Technological University: Singapore, 2005. [Google Scholar]
- Meissner, M.; Schmuker, M.; Schneider, G. Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training. BMC Bioinform. 2006, 7, 125. [Google Scholar] [CrossRef] [PubMed]
- Pedersen, M.E.H. Tuning and Simplifying Heuristical Optimization. Master’s Thesis, University of Southampton, Southampton, UK, 2010. [Google Scholar]
- Hutter, F.; Hoos, H.H.; Leyton-Brown, K.; Stützle, T. ParamILS: An Automatic Algorithm Configuration Framework. J. Artif. Intell. Res. 2009, 36, 267–306. [Google Scholar] [CrossRef]
- Luke, S.; Talukder, A.K.A. Is the meta-EA a Viable Optimization Method? In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, The Netherlands, 6–10 July 2013; pp. 1533–1540. [Google Scholar]
- Fister, D.; Fister, I.; Jagrič, T.; Brest, J. A novel self-adaptive differential evolution for feature selection using threshold mechanism. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bengaluru, India, 18–21 November 2018; pp. 17–24. [Google Scholar]
- Bartz-Beielstein, T.; Lasarczyk, C.; Preuss, M. Sequential parameter optimization. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Edinburgh, UK, 2–4 September 2005; pp. 773–780. [Google Scholar]
- López-Ibáñez, M.; Dubois-Lacoste, J.; Cáceres, L.P.; Birattari, M.; Stützle, T. The irace package: Iterated racing for automatic algorithm configuration. Oper. Res. Perspect. 2016, 3, 43–58. [Google Scholar] [CrossRef] [Green Version]
- Pereira, I.; Madureira, A. Racing based approach for Metaheuristics parameter tuning. In Proceedings of the 10th Iberian Conference on Information Systems and Technologies (CISTI), Aveiro, Portugal, 17–20 June 2015; pp. 1–6. [Google Scholar]
- Sinha, A.; Malo, P.; Xu, P.; Deb, K. A Bilevel Optimization Approach to Automated Parameter Tuning. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation (GECCO), Vancouver, BC, Canada, 12–16 July 2014; pp. 847–854. [Google Scholar]
- Andersson, M.; Bandaru, S.; Ng, A.; Syberfeldt, A. Parameter Tuning of MOEAs Using a Bilevel Optimization Approach. Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2015; pp. 233–247. [Google Scholar]
- Dréo, J. Using Performance Fronts for Parameter Setting of Stochastic Metaheuristics. In Proceedings of the Conference on Genetic and Evolutionary Computation Conference (GECCO): Late Breaking Papers, Montreal, QC, Canada, 8–12 July 2009; pp. 2197–2200. [Google Scholar]
- Smit, S.K.; Eiben, A.E.; Szlávik, Z. An MOEA-based Method to Tune EA Parameters on Multiple Objective Functions. In Proceedings of the International Conference on Evolutionary Computation, (part of the International Joint Conference on Computational Intelligence IJCCI (ICEC)), Valencia, Spain, 24–26 October 2010; pp. 261–268. [Google Scholar]
- Smit, S.K.; Eiben, A.E. Parameter Tuning of Evolutionary Algorithms: Generalist vs. Specialist. In Applications of Evolutionary Computation; Di Chio, C., Brabazon, A., Ebner, M., Farooq, M., Fink, A., Grahl, J., Greenfield, G., Machado, P., O’Neill, M., Tarantino, E., et al., Eds.; LNCS Springer: Berlin/Heidelberg, Germany, 2010; Volume 6024, pp. 542–551. [Google Scholar]
- Branke, J.; Elomari, J.A. Meta-optimization for Parameter Tuning with a Flexible Computing Budget. In Proceedings of the Conference on Genetic and Evolutionary Computation Conference (GECCO), Philadelphia, PA, USA, 7–11 July 2012; pp. 1245–1252. [Google Scholar]
- Eiben, A.E.; Smit, S.K. Parameter tuning for configuring and analyzing evolutionary algorithms. Swarm Evol. Comput. 2011, 1, 19–31. [Google Scholar] [CrossRef] [Green Version]
- Blot, A.; Hoos, H.H.; Jourdan, L.; Kessaci-Marmion, M.É.; Trautmann, H. MO-ParamILS: A multi-objective automatic algorithm configuration framework. In Proceedings of the International Conference on Learning and Intelligent Optimization, Ischia, Italy, 29 May–1 June 2016; pp. 32–47. [Google Scholar]
- Blot, A.; Pernet, A.; Jourdan, L.; Kessaci-Marmion, M.É.; Hoos, H.H. Automatically configuring multi-objective local search using multi-objective optimisation. In Evolutionary Multi-Criterion Optimization; Springer: Cham, Switzerland, 2017; pp. 61–76. [Google Scholar]
- López-Ibáñez, M.; Dubois-Lacoste, J.; Stützle, T.; Birattari, M. The irace Package: Iterated Race for Automatic Algorithm Configuration; Technical Report TR/IRIDIA/2011-004; IRIDIA, Université Libre de Bruxelles: Bruxelles, Belgium, 2011. [Google Scholar]
- Smit, S.K.; Eiben, A.E. Comparing Parameter Tuning Methods for Evolutionary Algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Trondheim, Norway, 18–21 May 2009; pp. 399–406. [Google Scholar]
- Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
- Zaharie, D. Critical values for the control parameters of Differential Evolution algorithms. In Proceedings of the 8th International Conference on Soft Computing, Brno, Czech Republic, 5–7 June 2002; pp. 62–67. [Google Scholar]
- Ugolotti, R.; Cagnoni, S. Multi-objective Parameter Tuning for PSO-based Point Cloud Localization. In Advances in Artificial Life and Evolutionary Computation. Proceedings of WIVACE 2014, Vietri sul Mare, Italy, 14–15 May 2014; Pizzuti, C., Spezzano, G., Eds.; Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 445, pp. 75–85. [Google Scholar]
- Ugolotti, R.; Mesejo, P.; Nashed, Y.S.G.; Cagnoni, S. GPU-Based Automatic Configuration of Differential Evolution: A Case Study. In Progress in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2013; pp. 114–125. [Google Scholar]
- Kennedy, J.; Clerc, M. Standard PSO 2006. Available online: http://www.particleswarm.info/Standard_PSO_2006.c (accessed on 2 March 2019).
- Montero, E.; Riff, M.C.; Pérez-Caceres, L.; Coello Coello, C. Are State-of-the-Art Fine-Tuning Algorithms Able to Detect a Dummy Parameter? In Parallel Problem Solving from Nature Conference—PPSN XII, LNCS, Proceedings of the 12th International Conference, Taormina, Italy, 1–5 September 2012; Coello, C.A., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7491, pp. 306–315. [Google Scholar]
| Differential Evolution | Values | Particle Swarm Optimization | Values |
|---|---|---|---|
| Population Size | | Population Size | |
| Crossover Rate (CR) | | Inertia Factor (w) | |
| Scale Factor (F) | | Acceleration Coefficient (c1) | |
| Crossover | {binomial, exponential} | Acceleration Coefficient (c2) | |
| Mutation | {random, best, target-to-best} | Topology | {ring, star, global} |
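One plausible way to turn the encoding summarized above into a genotype-to-configuration decoder for the tuner EA is sketched below. The numeric ranges and the one-gene-per-nominal-option scheme (the option whose gene is largest is selected, in line with the “selection percentages” of Figures A4 and A5) are illustrative assumptions, not the exact encoding of Figure 3.

```python
import random

DE_NOMINAL = {"Mutation": ["random", "best", "target-to-best"],
              "Crossover": ["binomial", "exponential"]}


def decode_de(genome):
    """3 numeric genes in [0, 1] followed by one gene per nominal option."""
    pop, cr, f = genome[:3]
    cfg = {"PopSize": int(round(2 + pop * 298)),   # assumed range [2, 300]
           "CR": cr,                               # assumed range [0, 1]
           "F": f * 2.0}                           # assumed range [0, 2]
    i = 3
    for name, options in DE_NOMINAL.items():
        genes = genome[i:i + len(options)]
        cfg[name] = options[genes.index(max(genes))]  # the option with the largest gene wins
        i += len(options)
    return cfg


print(decode_de([random.random() for _ in range(8)]))
```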
| Setting | Value |
|---|---|
| EMOPaT | Population Size = 64, 60 Generations, Mutation Rate = 0.125, Crossover Rate = 0.9 |
| Functions | 30-dimensional Sphere, Rastrigin, Rosenbrock, Griewank (one for each experiment) |
| Evaluation criterion | Best fitness after 1000, 10,000, and 100,000 evaluations, averaged over repetitions |
Differential Evolution

| Function | PopSize | CR | F | Mutation | Crossover |
|---|---|---|---|---|---|
| Sphere | ↘ | ↘ | ↗ | target-to-best | binomial → exponential |
| Rastrigin | ↗ | ↗ | ↗ | − | binomial → exponential |
| Griewank | ↗ | ↘ | ↘ | target-to-best → random | binomial |
| Rosenbrock | ↗ | ↗ | ↗ | target-to-best → best, random | binomial → exponential |

Particle Swarm Optimization

| Function | PopSize | w | c1 | c2 | Topology |
|---|---|---|---|---|---|
| Sphere | ↗ | ↘ | − | ↗ | ring |
| Rastrigin | ↗ | ↘ | ↗ | ↘ | global |
| Griewank | ↗ | ↘ | ↗ | ↗ | ring |
| Rosenbrock | ↗ | ↗ | ↘ | ↗ | global → ring |
Differential Evolution

| Function | Method | Configuration | PopSize | CR | F | Mutation | Crossover |
|---|---|---|---|---|---|---|---|
| Rastrigin | Top Solutions | Q1K | 9 | 0.214 | 0.736 | target-to-best | binomial |
| | | Q10K | 7 | 0.053 | 0.754 | target-to-best | exponential |
| | | Q100K | 7 | 0.039 | 0.784 | target-to-best | exponential |
| | Inferred | A | 8 | 0.134 | 0.745 | target-to-best | bin., exp. |
| | | B | 7 | 0.046 | 0.769 | target-to-best | exponential |
| | From Pareto | A | 9 | 0.217 | 0.763 | target-to-best | binomial |
| | | B | 7 | 0.006 | 0.769 | target-to-best | binomial |
| Sphere | Top Solutions | Q1K | 5 | 0.022 | 0.229 | random | binomial |
| | | Q10K | 14 | 0.363 | 0.508 | random | exponential |
| | | Q100K | 30 | 0.043 | 0.521 | random | exponential |
| | Inferred | A | 9 | 0.192 | 0.368 | random | bin., exp. |
| | | B | 22 | 0.203 | 0.514 | random | exponential |
| | From Pareto | A | 8 | 0.023 | 0.520 | target-to-best | exponential |
| | | B | 15 | | 0.498 | target-to-best | binomial |

Particle Swarm Optimization

| Function | Method | Configuration | PopSize | w | c1 | c2 | Topology |
|---|---|---|---|---|---|---|---|
| Rastrigin | Top Solutions | Q1K | 18 | 0.560 | 1.195 | 0.789 | global |
| | | Q10K | 26 | 0.579 | 2.492 | 0.671 | global |
| | | Q100K | 76 | | 2.533 | 0.487 | global |
| | Inferred | A | 22 | 0.569 | 1.844 | 0.730 | global |
| | | B | 51 | 0.164 | 2.513 | 0.579 | global |
| | From Pareto | A | 27 | 0.678 | 0.949 | 0.587 | global |
| | | B | 60 | 0.297 | 3.132 | 0.481 | global |
| Sphere | Top Solutions | Q1K | 11 | 0.603 | 1.882 | 1.105 | ring |
| | | Q10K | 14 | 0.510 | 1.998 | 1.483 | ring |
| | | Q100K | 15 | 0.449 | 1.725 | 1.667 | ring |
| | Inferred | A | 12 | 0.557 | 1.940 | 1.294 | ring |
| | | B | 15 | 0.480 | 1.861 | 1.575 | ring |
| | From Pareto | A | 14 | 0.649 | 1.764 | 1.082 | ring |
| | | B | 15 | 0.480 | 1.681 | 1.635 | ring |
Differential Evolution

| Function | Budget | PopSize | CR | F | Mutation | Crossover |
|---|---|---|---|---|---|---|
| Rastrigin | 1 K | 5 | 0.375 | 0.375 | random | exponential |
| | 5.5 K | 9 | 0.182 | 0.182 | random | exponential |
| | 10 K | 16 | 0.145 | 0.145 | random | exponential |
| | 55 K | 24 | 0.146 | 0.146 | random | exponential |
| | 100 K | 75 | 0.746 | 0.746 | random | exponential |
| Sphere | 1 K | 16 | 0.299 | 0.554 | target-to-best | binomial |
| | 5.5 K | 7 | 0.069 | 0.751 | target-to-best | exponential |
| | 10 K | 7 | 0.056 | 0.749 | target-to-best | exponential |
| | 55 K | 7 | 0.057 | 0.749 | target-to-best | exponential |
| | 100 K | 7 | 0.057 | 0.749 | target-to-best | exponential |

Particle Swarm Optimization

| Function | Budget | PopSize | w | c1 | c2 | Topology |
|---|---|---|---|---|---|---|
| Rastrigin | 1 K | 11 | 0.664 | 1.518 | 0.638 | global |
| | 5.5 K | 88 | 0.662 | 1.062 | 0.624 | global |
| | 10 K | 119 | 0.654 | 1.672 | 0.607 | global |
| | 55 K | 230 | 0.650 | 2.412 | 0.643 | global |
| | 100 K | 230 | 0.650 | 2.412 | 0.643 | global |
| Sphere | 1 K | 17 | 0.837 | 0.952 | 0.488 | global |
| | 5.5 K | 32 | 0.897 | 0.539 | 0.545 | global |
| | 10 K | 32 | 0.897 | 0.539 | 0.545 | global |
| | 55 K | 34 | 0.380 | 1.135 | 2.673 | global |
| | 100 K | 34 | 0.380 | 1.135 | 2.673 | global |
Differential Evolution

| Function | Budget | EMOPaT | FBM | p-Value | Best Method |
|---|---|---|---|---|---|
| Rastrigin | 1 K | | | | |
| | 5.5 K | | | | EMOPaT |
| | 10 K | | | | |
| | 55 K | | | | FBM |
| | 100 K | | | | |
| Sphere | 1 K | | | | EMOPaT |
| | 5.5 K | | | | FBM |
| | 10 K | | | | |
| | 55 K | | | | |
| | 100 K | | | | |

Particle Swarm Optimization

| Function | Budget | EMOPaT | FBM | p-Value | Best Method |
|---|---|---|---|---|---|
| Rastrigin | 1 K | | | | |
| | 5.5 K | | | | FBM |
| | 10 K | | | | |
| | 55 K | | | | EMOPaT |
| | 100 K | | | | |
| Sphere | 1 K | | | | EMOPaT |
| | 5.5 K | | | | EMOPaT |
| | 10 K | | | | EMOPaT |
| | 55 K | | | | EMOPaT |
| | 100 K | | | | EMOPaT |
| Setting | Value |
|---|---|
| EMOPaT | Population Size = 200, 80 Generations, Mutation Rate = 0.125, Crossover Rate = 0.9 |
| Functions | 10-dimensional Sphere, Rotated Cigar, Rosenbrock, Rotated Ackley, Rastrigin, Composition Function 1 (CF1), Composition Function 3 (CF3) |
| Evaluation criterion | Best fitness in 20,000 evaluations, averaged over repetitions |
Differential Evolution

| Configuration Name | PopSize | CR | F | Mutation | Crossover |
|---|---|---|---|---|---|
| | 12 | 0.181 | 0.718 | target-to-best | exponential |
| | 57 | 0.906 | 0.703 | target-to-best | exponential |
| | 47 | 0.989 | 0.761 | random | exponential |
| | 271 | 0.170 | 0.216 | target-to-best | exponential |
| | 24 | 0.024 | 1.158 | random | exponential |
| | 24 | 0.057 | 1.789 | best | exponential |
| | 98 | 0.868 | 0.087 | random | binomial |
| | 10 | 0.607 | 0.886 | target-to-best | exponential |
| | 70 | 0.612 | 0.480 | best | exponential |
| | 13 | 0.235 | 0.444 | target-to-best | exponential |
| | 23 | 0.413 | 0.860 | target-to-best | exponential |
| | 32 | 0.147 | 0.491 | target-to-best | exponential |
| | 24 | 0.776 | 0.716 | target-to-best | exponential |
| | 19 | 0.058 | 0.837 | best | binomial |
| irace | 53 | 0.796 | 0.508 | best | exponential |
| SEPaT | 17 | 0.160 | 0.499 | target-to-best | exponential |
| average | 40 | 0.563 | 0.988 | target-to-best | binomial |

Particle Swarm Optimization

| Configuration Name | PopSize | w | c1 | c2 | Topology |
|---|---|---|---|---|---|
| | 34 | 0.768 | 1.756 | 0.474 | global |
| | 41 | 0.585 | 1.338 | 1.646 | ring |
| | 55 | −0.465 | −0.060 | 1.930 | ring |
| | 251 | 0.714 | 1.082 | 0.271 | global |
| | 20 | −0.131 | −0.050 | 3.787 | global |
| | 161 | −0.158 | −0.112 | 2.467 | ring |
| | 269 | −0.172 | 1.235 | 1.945 | global |
| | 43 | 0.648 | 1.241 | 1.633 | ring |
| | 44 | 0.639 | 2.114 | 1.478 | ring |
| | 68 | 0.402 | 1.109 | 2.184 | ring |
| | 37 | 0.883 | 0.548 | 0.649 | ring |
| | 26 | −0.073 | 0.295 | 3.032 | ring |
| | 33 | 0.583 | 2.040 | 1.677 | ring |
| | 46 | 0.593 | 1.944 | 1.637 | ring |
| irace | 19 | 0.805 | 0.962 | 0.914 | ring |
| SEPaT | 22 | 0.732 | 1.358 | 1.153 | ring |
| average | 251 | 0.714 | 1.082 | 0.271 | global |
Function | DE | PSO |
---|---|---|
Elliptic | irace, SEPaT | |
Discus | average, | |
Weierstrass | , irace | , |
Griewank | , SEPaT, | , , , , , |
Katsuura | , SEPaT, | , |
CF5 | , SEPaT | , average |
CF7 | , SEPaT, | , , average |
Differential Evolution

| | | | | | | | | | | | | | | | irace | SEPaT | average | sum |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | 0 | 4 | 4 | 4 | 3 | 0 | 4 | 4 | 4 | 5 | 3 | 6 | 4 | 2 | 4 | 6 | 2 | 59 |
| | 2 | 0 | 0 | 1 | 1 | 1 | 1 | 3 | 1 | 4 | 2 | 3 | 2 | 2 | 2 | 4 | 1 | 30 |
| | 3 | 6 | 0 | 2 | 4 | 2 | 2 | 3 | 4 | 4 | 3 | 4 | 5 | 3 | 4 | 4 | 2 | 55 |
| | 3 | 4 | 4 | 0 | 3 | 2 | 2 | 4 | 4 | 5 | 3 | 5 | 4 | 3 | 4 | 5 | 4 | 59 |
| | 1 | 3 | 2 | 2 | 0 | 0 | 2 | 3 | 3 | 5 | 3 | 6 | 3 | 3 | 3 | 5 | 1 | 45 |
| | 4 | 5 | 3 | 4 | 7 | 0 | 4 | 4 | 7 | 7 | 5 | 7 | 7 | 5 | 6 | 7 | 3 | 85 |
| | 2 | 5 | 3 | 3 | 3 | 2 | 0 | 4 | 3 | 4 | 3 | 4 | 3 | 3 | 4 | 4 | 2 | 52 |
| | 2 | 4 | 4 | 1 | 2 | 0 | 3 | 0 | 4 | 3 | 1 | 4 | 6 | 2 | 5 | 4 | 2 | 47 |
| | 2 | 4 | 2 | 0 | 1 | 0 | 2 | 1 | 0 | 4 | 2 | 4 | 4 | 2 | 3 | 4 | 2 | 37 |
| | 2 | 3 | 2 | 1 | 1 | 0 | 3 | 2 | 3 | 0 | 0 | 3 | 4 | 1 | 3 | 2 | 1 | 31 |
| | 2 | 4 | 3 | 2 | 2 | 0 | 4 | 3 | 4 | 3 | 0 | 6 | 5 | 1 | 4 | 6 | 3 | 52 |
| | 1 | 3 | 3 | 1 | 0 | 0 | 2 | 3 | 1 | 1 | 1 | 0 | 3 | 1 | 3 | 1 | 1 | 25 |
| | 1 | 3 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 | 1 | 3 | 0 | 1 | 2 | 3 | 0 | 20 |
| | 1 | 4 | 4 | 2 | 2 | 0 | 4 | 2 | 4 | 2 | 0 | 5 | 4 | 0 | 4 | 4 | 2 | 44 |
| irace | 1 | 3 | 1 | 1 | 2 | 1 | 0 | 1 | 2 | 3 | 1 | 3 | 3 | 1 | 0 | 3 | 1 | 27 |
| SEPaT | 1 | 2 | 2 | 1 | 0 | 0 | 2 | 2 | 1 | 0 | 0 | 1 | 3 | 1 | 3 | 0 | 1 | 20 |
| average | 2 | 5 | 3 | 3 | 3 | 1 | 4 | 3 | 4 | 4 | 3 | 6 | 5 | 2 | 5 | 5 | 0 | 58 |

Particle Swarm Optimization

| | | | | | | | | | | | | | | | irace | SEPaT | average | sum |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | 0 | 3 | 4 | 3 | 2 | 2 | 1 | 3 | 3 | 3 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 43 |
| | 4 | 0 | 4 | 4 | 0 | 4 | 3 | 0 | 2 | 0 | 1 | 0 | 1 | 2 | 1 | 1 | 4 | 31 |
| | 1 | 2 | 0 | 2 | 0 | 0 | 2 | 2 | 2 | 2 | 3 | 1 | 2 | 2 | 2 | 2 | 2 | 27 |
| | 3 | 3 | 3 | 0 | 3 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 0 | 43 |
| | 5 | 6 | 7 | 4 | 0 | 4 | 5 | 6 | 6 | 6 | 6 | 5 | 6 | 6 | 6 | 6 | 4 | 88 |
| | 2 | 2 | 3 | 5 | 1 | 0 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 5 | 39 |
| | 4 | 3 | 4 | 4 | 2 | 2 | 0 | 3 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 48 |
| | 4 | 1 | 4 | 4 | 0 | 3 | 2 | 0 | 2 | 0 | 1 | 0 | 1 | 2 | 1 | 1 | 4 | 30 |
| | 1 | 2 | 3 | 4 | 1 | 3 | 1 | 2 | 0 | 1 | 3 | 2 | 1 | 1 | 2 | 1 | 3 | 31 |
| | 4 | 1 | 4 | 4 | 0 | 4 | 3 | 1 | 1 | 0 | 2 | 1 | 2 | 2 | 1 | 1 | 4 | 35 |
| | 2 | 2 | 3 | 4 | 0 | 3 | 2 | 2 | 1 | 1 | 0 | 1 | 2 | 2 | 1 | 1 | 4 | 31 |
| | 4 | 4 | 4 | 4 | 0 | 5 | 4 | 4 | 3 | 3 | 4 | 0 | 4 | 3 | 4 | 4 | 4 | 58 |
| | 3 | 4 | 3 | 4 | 1 | 4 | 2 | 1 | 1 | 2 | 3 | 1 | 0 | 1 | 3 | 3 | 4 | 40 |
| | 2 | 2 | 3 | 4 | 1 | 3 | 1 | 2 | 0 | 1 | 2 | 2 | 1 | 0 | 2 | 2 | 4 | 32 |
| irace | 4 | 2 | 4 | 4 | 1 | 4 | 3 | 2 | 3 | 3 | 3 | 2 | 3 | 3 | 0 | 1 | 4 | 46 |
| SEPaT | 4 | 2 | 4 | 4 | 1 | 4 | 3 | 2 | 2 | 2 | 2 | 1 | 2 | 3 | 0 | 0 | 4 | 40 |
| average | 3 | 3 | 3 | 0 | 3 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 0 | 43 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).