Abstract
Recently, a batch mode learning algorithm, namely optimal open weight fault regularization (OOWFR), was developed to handle the open weight fault situation. This batch mode algorithm is optimal in terms of the Kullback–Leibler divergence. However, its main disadvantage is that it must store the entire input–output history, so memory consumption becomes a problem when the number of training samples is large. In this paper, we present an online version of the OOWFR algorithm. We consider two learning rate cases, a fixed learning rate and an adaptive learning rate, and we present the convergence conditions for both cases. Simulation results show that the proposed online mode learning algorithm outperforms other online mode learning algorithms, and that its performance is close to that of the batch mode OOWFR algorithm.
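To illustrate the online training style the abstract refers to, the following is a minimal sketch of a sample-by-sample, regularized RBF weight update with a fixed learning rate. It is only an assumption-laden illustration: the regularizer shown is a generic weight-decay term scaled by an assumed open-fault rate, not the paper's exact OOWFR objective, and the names `rbf_features`, `online_update`, `p`, and `lam` are hypothetical.

```python
import numpy as np

# Hypothetical sketch of online (sample-by-sample) training of an RBF
# network with a fault-inspired weight-decay regularizer and a fixed
# learning rate. This is NOT the paper's exact OOWFR update rule;
# the fault rate `p` and regularization weight `lam` are illustrative.

def rbf_features(x, centers, width):
    """Gaussian RBF feature vector phi(x)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / width)

def online_update(w, x, y, centers, width, lr, p, lam):
    """One stochastic-gradient step on a regularized squared error.

    w      : current RBF output weights
    x, y   : one streaming training sample (input, target)
    lr     : learning rate (fixed here; an adaptive rate would decay with t)
    p      : assumed open-fault (weight stuck at zero) rate
    lam    : regularization strength
    """
    phi = rbf_features(x, centers, width)
    err = y - phi @ w
    # gradient of 0.5*err^2 + 0.5*lam*p*||w||^2 with respect to w
    grad = -err * phi + lam * p * w
    return w - lr * grad

# Toy usage: learn y = sin(x) from streaming samples without storing them.
rng = np.random.default_rng(0)
centers = np.linspace(-3, 3, 10)[:, None]
w = np.zeros(10)
for t in range(2000):
    x = rng.uniform(-3, 3, size=(1,))
    y = np.sin(x[0]) + 0.05 * rng.standard_normal()
    w = online_update(w, x, y, centers, width=0.5, lr=0.05, p=0.1, lam=1.0)
```

The key contrast with the batch mode OOWFR algorithm is visible in the loop: each sample is used once to update the weights and then discarded, so the memory cost does not grow with the number of training samples.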
Acknowledgments
This work was supported by a research grant from the Research Grants Council of the Hong Kong Special Administrative Region (CityU 115612).
Cite this article
Xiao, Y., Feng, R., Leung, C.S. et al. Online Training for Open Faulty RBF Networks. Neural Process Lett 42, 397–416 (2015). https://doi.org/10.1007/s11063-014-9363-8