Abstract
A new weight decaying technique for neural network training is introduced. The proposed technique uses a genetic algorithm in conjunction with a local optimization method to restrict the weights of the neural network to a range that yields the desired generalization capability. Because the method is a global optimization procedure, it overcomes most of the problems associated with purely local optimization; furthermore, it can be combined with any global optimization procedure from the relevant literature. The proposed technique has been evaluated on several well-known benchmark datasets spanning a series of classification and regression problems. The evaluation results are very promising, indicating a reduction in classification error of 25–80% for the genetic algorithm.
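The full method is not reproduced on this page, but the scheme the abstract describes (a genetic algorithm that evolves the network weights under a fitness combining training error with a weight-decay penalty, followed by local refinement of the best individual) can be sketched as follows. This is a minimal, hypothetical Python illustration: the toy network, the quadratic penalty lam * np.sum(w ** 2), and all genetic-algorithm parameters are assumptions, not the authors' exact formulation.

# Hypothetical sketch: GA-based weight decay for a tiny neural network.
# All sizes, parameters and the penalty form are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy regression problem: approximate y = sin(x).
X = np.linspace(-3.0, 3.0, 40).reshape(-1, 1)
y = np.sin(X).ravel()

H = 5            # hidden units (assumed)
DIM = 3 * H      # parameters: input weights, hidden biases, output weights

def forward(w, X):
    """One-hidden-layer sigmoid network with a single linear output."""
    w_in, b, w_out = w[:H], w[H:2 * H], w[2 * H:]
    hidden = 1.0 / (1.0 + np.exp(-(X @ w_in[None, :] + b)))
    return hidden @ w_out

def fitness(w, lam=1e-3):
    """Training MSE plus a decay penalty that discourages large weights
    (the 'restriction of the weights to a range' from the abstract)."""
    err = np.mean((forward(w, X) - y) ** 2)
    return err + lam * np.sum(w ** 2)

# A plain generational GA with tournament selection, uniform crossover
# and sparse Gaussian mutation.
POP, GENS = 50, 100
pop = rng.uniform(-1.0, 1.0, size=(POP, DIM))

for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    children = []
    for _ in range(POP):
        a = rng.integers(POP, size=2)          # tournament of two
        b = rng.integers(POP, size=2)
        p1 = pop[a[np.argmin(scores[a])]]
        p2 = pop[b[np.argmin(scores[b])]]
        mask = rng.random(DIM) < 0.5           # uniform crossover
        child = np.where(mask, p1, p2)
        mutate = rng.random(DIM) < 0.1         # mutate ~10% of genes
        child = child + mutate * rng.normal(0.0, 0.1, DIM)
        children.append(child)
    pop = np.array(children)

# Refine the best GA individual with a conventional local optimizer.
best = pop[np.argmin([fitness(ind) for ind in pop])]
result = minimize(fitness, best, method="BFGS")
print("penalized training error after refinement:", result.fun)

In the spirit of the abstract, the decay term is what keeps the evolved weights within a narrow range, while the local step (here, BFGS) polishes the best candidate found by the global search.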
Cite this article
Tsoulos, I.G., Tzallas, A. & Tsalikakis, D. Evolutionary Based Weight Decaying Method for Neural Network Training. Neural Process Lett 47, 463–473 (2018). https://doi.org/10.1007/s11063-017-9660-0