Abstract
A family of norm-based least squares support vector machines (LSSVMs) has proven to be an effective statistical learning tool and has attracted considerable attention. However, two critical problems remain: (1) LSSVM becomes ill-conditioned or singular when the sample size is much smaller than the number of features, causing over-fitting in the small sample size (SSS) setting, while other norm-based LSSVMs train more slowly and cannot handle large-scale data. (2) Norm-based LSSVMs pay little attention to the edge points that are important for learning the final classifier. To overcome these drawbacks, we replace the \( L_{2}\text{-norm} \) with the \( L_{\infty}\text{-norm} \) in the empirical loss and construct a robust classifier, named \( L_{\infty}\text{-LSSVM} \) for short. With this adjustment, \( L_{\infty}\text{-LSSVM} \) detects edge points effectively and enhances both robustness and generalization. In addition, inspired by the idea of sequential minimal optimization (SMO), we design a novel SMO-type iterative algorithm; it is guaranteed in theory to converge to the optimal solution and requires less computational time and storage when the data size is large. Finally, extensive numerical experiments validate these claims on four groups of artificial data, non-i.i.d. data (fault detection of railway turnouts), four SSS datasets, two types of massive datasets, and six benchmark datasets.
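The core substitution in the abstract can be illustrated with a minimal sketch (not the paper's implementation): the classical LSSVM empirical loss sums squared slack errors, whereas an \( L_{\infty}\text{-norm} \) loss is governed entirely by the single largest residual, which is why the worst-fit "edge point" dominates the resulting classifier. The residual values below are hypothetical.

```python
# Contrast the L2 empirical loss of standard LSSVM with an
# L-infinity loss, which only sees the largest residual.

def l2_loss(residuals):
    # Sum of squared slack errors, as in classical LSSVM.
    return sum(e * e for e in residuals)

def linf_loss(residuals):
    # Maximum absolute slack error; only the worst-fit point matters.
    return max(abs(e) for e in residuals)

residuals = [0.1, -0.2, 0.05, 1.5]  # hypothetical training errors
print(l2_loss(residuals))    # accumulates every point's error
print(linf_loss(residuals))  # 1.5: determined by the edge point alone
```

Shrinking any of the small residuals changes the L2 loss but leaves the L∞ loss untouched, which is the mechanism behind the edge-point sensitivity claimed above.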
Acknowledgements
This research is supported by the Science Research Program of Tianjin Municipal Education Commission (No. 2018KJ115); Project of Humanities and Social Science Fund of Ministry of Education of China (No. 19YJCZH251).
Ke, T., Zhang, L., Ge, X. et al. A Robust Least Squares Support Vector Machine Based on L∞-norm. Neural Process Lett 52, 2371–2397 (2020). https://doi.org/10.1007/s11063-020-10353-1