Abstract
Gradient-based methods are among the most widely used error-minimization techniques for training back-propagation neural networks (BPNN). Some second-order learning methods work with a quadratic approximation of the error function, obtained by computing the Hessian matrix, and achieve improved convergence rates in many cases. This paper introduces an improved second-order back-propagation algorithm that efficiently computes the Hessian matrix by adaptively modifying the search direction. It shows that a simple modification to the initial search direction, i.e. the gradient of the error with respect to the weights, can substantially improve training efficiency. The efficiency of the proposed SOBPNN is verified by simulations on five medical data classification problems. The results show that the SOBPNN significantly improves the learning performance of BPNN.
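The abstract does not give the exact SOBPNN update rule, so the following is only a minimal Python sketch of a generic second-order weight update of the kind described, using a damped Gauss-Newton approximation of the Hessian for a single sigmoid layer. The function name second_order_step, the damping parameter lam, and the toy data are assumptions introduced for illustration; they are not the authors' algorithm.

import numpy as np

def second_order_step(w, X, y, lam=1e-3):
    # One illustrative second-order update for a single sigmoid unit.
    # The Hessian is approximated by J^T J (Gauss-Newton) plus a small
    # damping term lam*I, and the Newton-like direction replaces the
    # raw gradient as the search direction.
    z = X @ w                                 # pre-activation
    a = 1.0 / (1.0 + np.exp(-z))              # sigmoid output
    r = a - y                                 # residuals (output minus target)

    # Jacobian of the residuals with respect to the weights
    J = X * (a * (1.0 - a))[:, None]

    g = J.T @ r                               # gradient of 0.5 * ||r||^2
    H = J.T @ J + lam * np.eye(w.size)        # damped Gauss-Newton Hessian
    d = np.linalg.solve(H, g)                 # second-order search direction
    return w - d                              # Newton-like step

# Toy usage on random data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
for _ in range(10):
    w = second_order_step(w, X, y)

The damping term keeps the approximate Hessian positive definite, which is the usual way such quadratic-approximation methods avoid ill-conditioned steps; the actual SOBPNN modification of the initial search direction is described in the full paper.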
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Nawi, N.M., Hamid, N.A., Harsad, N., Ramli, A.A. (2015). Second Order Back Propagation Neural Network (SOBPNN) Algorithm for Medical Data Classification. In: Phon-Amnuaisuk, S., Au, T. (eds) Computational Intelligence in Information Systems. Advances in Intelligent Systems and Computing, vol 331. Springer, Cham. https://doi.org/10.1007/978-3-319-13153-5_8
DOI: https://doi.org/10.1007/978-3-319-13153-5_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-13152-8
Online ISBN: 978-3-319-13153-5
eBook Packages: Engineering (R0)