Abstract
Training neural networks is a complex task because many algorithms must be combined to find the best solution to a classification problem. In this work, we apply evolutionary computation to minimize a neural network configuration. For this purpose, a distribution-estimation framework is used to select relevant features, leading to high classification accuracy at a lower computational cost. First, a pruning strategy based on a score function decides the relevance of each network in the genetic population. Since the complexity of the network (connections, weights, and biases) is most important, the cooling state of the system is strongly related to entropy, which serves as the minimization function for reaching the desired solution. The framework also proposes coevolutionary learning (with discrete and continuous representations) to improve the behavior of evolutionary neural learning. Simulation results show that the proposed approach is promising and could be extended to other classes of neural networks.
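The distribution-estimation idea behind the framework can be illustrated with a minimal univariate EDA (UMDA-style) sketch for selecting a binary feature mask. This is not the paper's algorithm; the function names and the toy fitness function are hypothetical, and in the paper's setting the fitness would combine network accuracy with a complexity penalty.

```python
import random

def umda_feature_selection(n_features, fitness, pop_size=30, n_keep=10,
                           generations=50, seed=0):
    """UMDA-style estimation-of-distribution sketch: evolve a binary
    feature mask by repeatedly sampling masks from per-feature marginal
    probabilities and re-estimating those marginals from the elite."""
    rng = random.Random(seed)
    probs = [0.5] * n_features                      # marginal P(feature selected)
    best_mask, best_fit = None, float("-inf")
    for _ in range(generations):
        # Sample a population of masks from the current marginals.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[:n_keep]
        if fitness(scored[0]) > best_fit:
            best_mask, best_fit = scored[0], fitness(scored[0])
        # Re-estimate marginals from the selected (elite) masks.
        probs = [sum(m[i] for m in elite) / n_keep for i in range(n_features)]
        # Keep probabilities away from 0/1 so sampling stays exploratory.
        probs = [min(max(p, 0.05), 0.95) for p in probs]
    return best_mask, best_fit

# Toy fitness: reward selecting the first three features, penalize mask size.
toy = lambda m: sum(m[:3]) - 0.1 * sum(m)
mask, fit = umda_feature_selection(8, toy)
```

The complexity penalty in the toy fitness plays the role of the pruning pressure described above: masks that select fewer features score higher at equal accuracy, so the estimated distribution drifts toward sparse solutions.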
Mohamed Ben Ali, Y. Advances in evolutionary feature selection neural networks with co-evolution learning. Neural Comput & Applic 17, 217–226 (2008). https://doi.org/10.1007/s00521-007-0114-x