Abstract
Radial basis function (RBF) neural networks are an established machine learning tool with a variety of applications in nonlinear regression modeling. However, their performance may be substantially influenced by outlying measurements (outliers). Promising modifications of RBF network training are available for the classification of data contaminated by outliers, but robust training of RBF networks in the regression context remains an open problem. This paper proposes a novel robust approach based on backward subsample selection (i.e., instance selection), which sequentially searches for the most reliable subset of observations and finally performs outlier deletion. The approach is investigated in numerical experiments and is also applied to robustify a multilayer perceptron. The results on data containing outliers reveal improved performance compared to conventional approaches.
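To make the idea concrete, the following is a minimal Python sketch of backward instance selection wrapped around a simple RBF network (k-means centers with Gaussian units and a ridge-regularized output layer). It is an illustration only, not the authors' exact procedure: the network architecture, the residual-based reliability criterion, the fixed trimming fraction, and all function names are assumptions of this sketch.

```python
# Minimal sketch of robust RBF-network training via backward instance selection.
# NOT the paper's exact algorithm: the reliability criterion (largest absolute
# residual) and the fixed trimming fraction are assumptions of this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def fit_rbf(X, y, n_centers=10, width=1.0, alpha=1e-3, seed=0):
    """Fit a simple Gaussian RBF network: k-means centers + ridge output weights."""
    centers = KMeans(n_clusters=n_centers, n_init=10,
                     random_state=seed).fit(X).cluster_centers_

    def features(Z):
        # Gaussian RBF features for each observation and each center.
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    model = Ridge(alpha=alpha).fit(features(X), y)
    return lambda Z: model.predict(features(Z))

def backward_instance_selection(X, y, drop_fraction=0.2, **rbf_kwargs):
    """Sequentially delete the observation with the largest residual, refit,
    and return the predictor trained on the retained (most reliable) subset."""
    keep = np.arange(len(y))
    for _ in range(int(drop_fraction * len(y))):
        predict = fit_rbf(X[keep], y[keep], **rbf_kwargs)
        residuals = np.abs(y[keep] - predict(X[keep]))
        keep = np.delete(keep, np.argmax(residuals))   # outlier deletion step
    return fit_rbf(X[keep], y[keep], **rbf_kwargs), keep
```

In this sketch, the subset retained after the deletion loop plays the role of the "most reliable subset of observations"; a practical implementation would also require a data-driven stopping rule rather than a fixed trimming fraction.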
Acknowledgements
We thank Barbora Peštová for technical assistance and six anonymous referees for valuable suggestions leading to improvements in the paper. The work was supported by projects 19-05704S (J. Kalina) and 18-23827S (P. Vidnerová) of the Czech Science Foundation.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Kalina, J., Vidnerová, P. (2019). Robust Training of Radial Basis Function Neural Networks. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J. (eds.) Artificial Intelligence and Soft Computing. ICAISC 2019. Lecture Notes in Computer Science, vol. 11508. Springer, Cham. https://doi.org/10.1007/978-3-030-20912-4_11
DOI: https://doi.org/10.1007/978-3-030-20912-4_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-20911-7
Online ISBN: 978-3-030-20912-4
eBook Packages: Computer Science, Computer Science (R0)