
State Preserving Extreme Learning Machine: A Monotonically Increasing Learning Approach

Neural Processing Letters

Abstract

The Extreme Learning Machine (ELM) was introduced as an algorithm for training single-hidden-layer feedforward neural networks as an alternative to classical gradient-based approaches. Building on the consistency property of data, which requires similar samples to share similar properties, ELM is a biologically inspired learning algorithm that trains much faster than gradient-based methods, generalizes well, and performs well on classification tasks. However, because current ELMs generate the hidden-layer weight matrix at random, the hidden-layer outputs are stochastic, which can make results unstable in both the learning and testing phases; this is detrimental to overall performance when many repeated trials are conducted. To address this issue, we present a new ELM approach, named the State Preserving Extreme Learning Machine (SPELM). SPELM retains the overall training and testing performance of the classical ELM while monotonically increasing its accuracy by preserving state variables. For evaluation, experiments are performed on several benchmark datasets, including applications in face recognition, pedestrian detection, and network intrusion detection for cyber security. Several popular feature extraction techniques, namely Gabor filters, the pyramid histogram of oriented gradients (PHOG), and local binary patterns (LBP), are also incorporated with SPELM. Experimental results show that SPELM yields the best performance on the tested data, outperforming both ELM and the regularized ELM (RELM).
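The abstract's core ideas (a single hidden layer with randomly generated weights, output weights solved in closed form, and preservation of the best-performing random state so that reported accuracy never decreases over repeated trials) can be illustrated with a short sketch. The sketch below is only a rough, assumed reading of those ideas in NumPy; the function names and the validation-based selection criterion are illustrative and are not taken from the authors' implementation, which is described in the full text.

import numpy as np

def train_elm(X, T, n_hidden, rng):
    # Basic ELM: random input weights and biases, sigmoid hidden layer,
    # output weights solved via the Moore-Penrose pseudo-inverse.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = np.linalg.pinv(H) @ T          # T holds one-hot target rows
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def state_preserving_elm(X_tr, T_tr, X_val, y_val, n_hidden, n_trials=20, seed=0):
    # Repeat ELM training with fresh random states and keep only the state
    # (W, b, beta) whose validation accuracy improves on the best seen so far,
    # so the retained accuracy is monotonically non-decreasing across trials.
    rng = np.random.default_rng(seed)
    best_state, best_acc = None, -1.0
    for _ in range(n_trials):
        state = train_elm(X_tr, T_tr, n_hidden, rng)
        acc = (predict_elm(X_val, *state).argmax(axis=1) == y_val).mean()
        if acc > best_acc:
            best_state, best_acc = state, acc
    return best_state, best_acc

In this sketch the preserved "state" is simply the random hidden-layer parameters together with the solved output weights; the paper itself specifies exactly which state variables SPELM preserves and how they are reused.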




Author information

Correspondence to Md. Zahangir Alom.

About this article

Cite this article

Alom, M.Z., Sidike, P., Taha, T.M. et al. State Preserving Extreme Learning Machine: A Monotonically Increasing Learning Approach. Neural Process Lett 45, 703–725 (2017). https://doi.org/10.1007/s11063-016-9552-8

