
Online Training for Open Faulty RBF Networks


Abstract

Recently, a batch mode learning algorithm, namely optimal open weight fault regularization (OOWFR), was developed for handling the open fault situation. In terms of the Kullback–Leibler divergence, this batch mode algorithm is optimal. Its main disadvantage, however, is that it must store the entire input–output history, so memory consumption becomes a problem when the number of training samples is large. In this paper, we present an online version of the OOWFR algorithm. We consider two learning rate schemes, a fixed learning rate and an adaptive learning rate, and present convergence conditions for both cases. Simulation results show that the proposed online mode learning algorithm outperforms other online mode learning algorithms, and that its performance is close to that of the batch mode OOWFR algorithm.





Acknowledgments

This work was supported by a research grant from the Research Grants Council of the Hong Kong Special Administrative Region (CityU 115612).

Author information


Corresponding author

Correspondence to Chi Sing Leung.


About this article


Cite this article

Xiao, Y., Feng, R., Leung, C.S. et al. Online Training for Open Faulty RBF Networks. Neural Process Lett 42, 397–416 (2015). https://doi.org/10.1007/s11063-014-9363-8


