
A Robust Least Squares Support Vector Machine Based on L∞-norm

Published in Neural Processing Letters

Abstract

The family of norm-based least squares support vector machines (LSSVMs) is an effective statistical learning tool and has attracted considerable attention. However, two critical problems remain: (1) the standard LSSVM is ill-conditioned or singular when the sample size is much smaller than the number of features, which causes over-fitting on small sample size (SSS) problems, while other norm-based LSSVMs train more slowly and cannot handle large-scale data; (2) norm-based LSSVMs pay little attention to the edge points that are important for learning the final classifier. To overcome these drawbacks, we replace the \( L_{2}\text{-}norm \) with the \( L_{\infty }\text{-}norm \) in the empirical loss and construct a robust classifier, named \( L_{\infty }\text{-}LSSVM \) for short. After adjustment, \( L_{\infty }\text{-}LSSVM \) detects edge points effectively and enhances both robustness and generalization. In addition, inspired by the idea of sequential minimal optimization (SMO), we design a novel SMO-type iterative algorithm. The new algorithm not only guarantees convergence to the optimal solution in theory, but also requires less computation time and storage space when the data size is large. Finally, extensive numerical experiments support these claims on four groups of artificial data, non-i.i.d. data (fault detection of railway turnouts), four SSS datasets, two types of massive datasets, and six benchmark datasets.
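For background, the abstract contrasts the proposed \( L_{\infty } \) empirical loss with the classical \( L_{2} \) one. The sketch below shows the classical L2 LSSVM that the paper modifies, which trains by solving a single (N+1)-dimensional KKT linear system (Suykens and Vandewalle, 1999); it is not the paper's L∞-LSSVM or its SMO-type solver, and the kernel width `sigma` and regularization `gamma` parameter names are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices X1 and X2.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Classical L2 LSSVM: solve the (N+1) x (N+1) KKT system
    #   [[0,   y^T          ], [b    ]   [0]
    #    [y,   Omega + I/gamma]] [alpha] = [1]
    # where Omega_ij = y_i * y_j * K(x_i, x_j).
    N = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], np.ones(N)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    # Decision function: sign( sum_i alpha_i * y_i * K(x, x_i) + b ).
    K = rbf_kernel(Xnew, X, sigma)
    return np.sign(K @ (alpha * y) + b)
```

Note that the squared-error loss enters only through the ridge term `I/gamma`; the paper's L∞ variant instead penalizes the maximum training error, which changes the optimization problem and motivates its SMO-type iterative solver.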




Notes

  1. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html.

  2. http://www.daviddlewis.com/resources/testcollections/reuter21578/.


Acknowledgements

This research is supported by the Science Research Program of the Tianjin Municipal Education Commission (No. 2018KJ115) and the Humanities and Social Science Fund of the Ministry of Education of China (No. 19YJCZH251).

Author information

Correspondence to Ting Ke.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Ke, T., Zhang, L., Ge, X. et al. A Robust Least Squares Support Vector Machine Based on L∞-norm. Neural Process Lett 52, 2371–2397 (2020). https://doi.org/10.1007/s11063-020-10353-1
