Abstract
A new class of evolutionary computation processes is presented, called the Learnable Evolution Model (LEM). In contrast to Darwinian-type evolution, which relies on mutation, recombination, and selection operators, LEM employs machine learning to generate new populations. Specifically, in Machine Learning mode, a learning system seeks reasons why certain individuals in a population (or a collection of past populations) are superior to others in performing a designated class of tasks. These reasons, expressed as inductive hypotheses, are used to generate new populations. A remarkable property of LEM is that it is capable of quantum leaps (“insight jumps”) in the fitness function, unlike Darwinian-type evolution, which typically proceeds through numerous slight improvements. In our early experimental studies, LEM significantly outperformed the evolutionary computation methods used for comparison, sometimes achieving speed-ups of two or more orders of magnitude in terms of the number of evolutionary steps. LEM has potential for a wide range of applications, particularly in domains such as complex optimization or search problems, engineering design, drug design, evolvable hardware, software engineering, economics, data mining, and automatic programming.
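To make the hypothesize-and-generate cycle described in the abstract concrete, the following is a minimal sketch in Python. It is not the paper's implementation: individuals here are real-valued vectors, and the inductive rule learner (AQ-style symbolic induction in the actual LEM systems) is replaced by a crude stand-in that learns per-dimension value intervals covering the high-performing group. All names (`lem_step`, `learn_intervals`, `elite_frac`) and the toy fitness function are illustrative assumptions, not taken from the paper.

```python
import random

def fitness(x):
    # Hypothetical objective: maximize the negative sphere function
    # (optimum at the zero vector).
    return -sum(v * v for v in x)

def learn_intervals(high_group):
    # Stand-in "inductive hypothesis": per-dimension intervals that
    # describe where the superior individuals lie. Real LEM induces
    # symbolic rules distinguishing high from low performers.
    dims = len(high_group[0])
    return [(min(ind[d] for ind in high_group),
             max(ind[d] for ind in high_group)) for d in range(dims)]

def lem_step(population, elite_frac=0.3):
    # 1. Split the population into high- and low-performing groups.
    ranked = sorted(population, key=fitness, reverse=True)
    high = ranked[:max(2, int(elite_frac * len(ranked)))]
    # 2. Learn a hypothesis characterizing the high group.
    intervals = learn_intervals(high)
    # 3. Instantiate the hypothesis to generate the new population.
    return [[random.uniform(lo, hi) for (lo, hi) in intervals]
            for _ in range(len(population))]

population = [[random.uniform(-10, 10) for _ in range(5)] for _ in range(50)]
for generation in range(30):
    population = lem_step(population)

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

Because every new individual is sampled from the learned description of the superior group rather than produced by small mutations, the sketch exhibits, in miniature, the large fitness jumps the abstract attributes to LEM.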
Cite this article
Michalski, R.S. LEARNABLE EVOLUTION MODEL: Evolutionary Processes Guided by Machine Learning. Machine Learning 38, 9–40 (2000). https://doi.org/10.1023/A:1007677805582