DOI: 10.1145/3594805.3607137

First Complexity Results for Evolutionary Knowledge Transfer

Published: 30 August 2023

Abstract

The field of evolutionary knowledge transfer (EKT) has recently begun to systematically develop algorithms that exploit a number of related problem instances to accelerate problem solving on difficult optimization tasks. EKT has the potential to have a major impact on evolutionary computation practice, comparable to the role that neural network pretraining has played in machine learning. But the community has only scratched the surface of the theoretical hurdles that the knowledge-transfer workflow raises. We introduce a three-part collect-select-exploit framework for understanding EKT, which we use to highlight the need for better evaluation and benchmarking approaches for transfer. We then present some of the first analytical results for EKT, proving no-free-lunch theorems for transfer, and proving what is (to our knowledge) the first asymptotic runtime result for transfer optimization. These results can serve as the basis for future research into the strengths and limitations of knowledge transfer as an optimization paradigm, and they emphasize the need for more comprehensive benchmarks to guide progress in the field.
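To make the collect-select-exploit framework named in the abstract concrete, the following is a minimal, hypothetical sketch of a sequential transfer loop on toy shifted-sphere tasks. All function names, parameters, and the truncation-selection EA are illustrative assumptions for exposition, not the paper's actual algorithms or analysis.

```python
import random

random.seed(0)  # reproducible toy run

def collect(source_tasks, samples=50, elites=5):
    """Collect: briefly sample each related source task and archive its best solutions."""
    archive = []
    for f in source_tasks:
        pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(samples)]
        archive.extend(sorted(pop, key=f)[:elites])
    return archive

def select(archive, target_f, k=10):
    """Select: keep the archived solutions that look most promising on the target task."""
    return sorted(archive, key=target_f)[:k]

def exploit(seeds, target_f, pop_size=20, generations=30):
    """Exploit: seed a simple truncation-selection EA on the target with transferred solutions."""
    pop = seeds + [[random.uniform(-5, 5) for _ in range(2)]
                   for _ in range(pop_size - len(seeds))]
    for _ in range(generations):
        parents = sorted(pop, key=target_f)[:pop_size // 2]
        children = [[x + random.gauss(0, 0.1) for x in random.choice(parents)]
                    for _ in range(pop_size)]
        pop = parents + children
    return min(pop, key=target_f)

# Toy related instances: spheres whose optima sit near the target's optimum at (1, 1).
sources = [lambda x, s=s: sum((xi - s) ** 2 for xi in x) for s in (0.9, 1.1)]
target = lambda x: sum((xi - 1.0) ** 2 for xi in x)

best = exploit(select(collect(sources), target), target)
print(target(best))  # converges to a small value when sources resemble the target
```

Because the source optima lie close to the target's, the transferred seeds start the target search near the basin of attraction; when sources are unrelated, the same workflow can mislead the search, which is the intuition behind the no-free-lunch results for transfer.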


Cited By

  • (2024) Bayesian Inverse Transfer in Evolutionary Multiobjective Optimization. ACM Transactions on Evolutionary Learning and Optimization. DOI: 10.1145/3674152. Online publication date: 28-Jun-2024.
  • (2024) Multiobjective Sequential Transfer Optimization: Benchmark Problems and Preliminary Results. 2024 IEEE Congress on Evolutionary Computation (CEC), 1-8. DOI: 10.1109/CEC60901.2024.10612043. Online publication date: 30-Jun-2024.
  • (2024) Similar Locality Based Transfer Evolutionary Optimization for Minimalistic Attacks. 2024 IEEE Congress on Evolutionary Computation (CEC), 1-8. DOI: 10.1109/CEC60901.2024.10611980. Online publication date: 30-Jun-2024.

Published In

FOGA '23: Proceedings of the 17th ACM/SIGEVO Conference on Foundations of Genetic Algorithms
August 2023
169 pages
ISBN:9798400702020
DOI:10.1145/3594805

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. evolutionary algorithms theory
  2. multi-task evolution
  3. no free lunch theorems
  4. transfer optimization

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FOGA '23
FOGA '23: Foundations of Genetic Algorithms XVII
August 30 - September 1, 2023
Potsdam, Germany

Acceptance Rates

Overall Acceptance Rate 72 of 131 submissions, 55%
