Abstract
Extreme learning machines (ELMs) are an interesting alternative to multilayer perceptrons because, in practice, the only hyperparameter that must be optimized is the number of hidden neurons, typically by grid search with cross-validation. Nevertheless, the large number of hidden neurons that often results from training is a significant drawback for the effective use of such classifiers. To overcome this drawback, we propose a new approach that prunes hidden-layer neurons and achieves a fixed-size hidden layer without degrading the classifier's accuracy, and in some cases even improving it. The main idea is to leave out neurons that are irrelevant, or so similar to other neurons that they are unnecessary, in order to obtain a simplified yet accurate model. Unlike previous works based on genetic algorithms and simulated annealing, our proposal selects only a subset of the randomly generated neurons to form the hidden layer, without adjusting weights or growing the number of hidden neurons. In this work, we compare our proposal, called simulated annealing for pruned ELM (SAP-ELM), with the pruning methods known as optimally pruned ELM, genetic algorithms for pruning ELM, and sparse Bayesian ELM. On the basis of our experiments, we can state that SAP-ELM is a promising alternative for classification tasks. We highlight that, because our proposal yields fixed-size models, it can help to solve problems where memory consumption is critical, such as embedded systems, and it allows control over the maximum size the models may reach after training.
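To make the selection scheme concrete, the following is a minimal sketch in Python/NumPy of an annealing-based search over fixed-size subsets of a pool of randomly generated hidden neurons, in the spirit of the abstract. The pool size, cooling schedule, neighbor move (a single neuron swap), and the use of validation error as the annealing cost are illustrative assumptions rather than the paper's exact design; labels are assumed one-hot encoded, and all function names are hypothetical.

```python
import numpy as np

def hidden_output(X, W, b):
    """Sigmoid activations of the random hidden layer."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_output_weights(H, Y, reg=1e-6):
    """Standard ELM step: output weights by regularized least squares."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)

def sap_elm(X_tr, Y_tr, X_val, Y_val, pool_size=200, q=30,
            T0=1.0, cooling=0.95, n_iter=500, seed=0):
    """Pick a fixed-size subset of q neurons from a random pool via
    simulated annealing; no weight adjustment, no hidden-layer growth."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X_tr.shape[1], pool_size))  # random input weights, never retrained
    b = rng.standard_normal(pool_size)                   # random biases
    H_tr = hidden_output(X_tr, W, b)
    H_val = hidden_output(X_val, W, b)

    def val_error(idx):
        B = elm_output_weights(H_tr[:, idx], Y_tr)
        pred = (H_val[:, idx] @ B).argmax(axis=1)
        return np.mean(pred != Y_val.argmax(axis=1))

    idx = rng.choice(pool_size, size=q, replace=False)   # initial fixed-size subset
    cur_cost = val_error(idx)
    best_idx, best_cost = idx.copy(), cur_cost
    T = T0
    for _ in range(n_iter):
        # Neighbor: swap one selected neuron for one currently left out.
        cand = idx.copy()
        outside = np.setdiff1d(np.arange(pool_size), idx)
        cand[rng.integers(q)] = rng.choice(outside)
        c = val_error(cand)
        # Metropolis rule: accept improvements; accept worse moves with probability exp(-delta/T).
        if c < cur_cost or rng.random() < np.exp((cur_cost - c) / T):
            idx, cur_cost = cand, c
            if c < best_cost:
                best_idx, best_cost = cand.copy(), c
        T *= cooling                                     # geometric cooling schedule
    B = elm_output_weights(H_tr[:, best_idx], Y_tr)
    return W[:, best_idx], b[best_idx], B
```

Because the subset size q is fixed before the search begins, the memory footprint of the final model is known in advance, which is the property the abstract emphasizes for memory-constrained settings such as embedded systems.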
Notes
As an example, this output can be computed by a sigmoidal activation function.
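For concreteness, the standard logistic sigmoid gives, for an input $\mathbf{x}$ and a hidden neuron with input weights $\mathbf{w}$ and bias $b$,

$$\phi(\mathbf{w}^{\top}\mathbf{x} + b) = \frac{1}{1 + e^{-(\mathbf{w}^{\top}\mathbf{x} + b)}}.$$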
Acknowledgements
Funding was provided by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP), and Instituto Federal do Ceará (IFCE).
Additional information
The authors would like to thank the IFCE and CAPES for supporting their research.
Cite this article
Dias, M.L.D., de Sousa, L.S., Rocha Neto, A.R. et al. Fixed-Size Extreme Learning Machines Through Simulated Annealing. Neural Process Lett 48, 135–151 (2018). https://doi.org/10.1007/s11063-017-9700-9