
Selected and Stacking ELMs for Time Series Prediction

Published: 01 December 2016

Abstract

Extreme learning machine (ELM) offers several attractive properties, most notably fast training and good generalization. This paper proposes a novel pruned Stacking ELMs (PS-ELMs) algorithm for time series prediction (TSP). PS-ELMs employs ELM as the level-0 algorithm to train multiple base models for Stacking, and applies our previously proposed ReTSP-Trend pruning technique to mitigate the problem that the level-0 learners may make many correlated prediction errors. ReTSP-Trend is an evaluation measure for reduce-error pruning for TSP (ReTSP) that takes into account both the time series trend and the direction of the forecasting error. ELM and simple averaging are then used to generate the level-1 model. PS-ELMs has several benefits. First, it naturally inherits the essential advantages of ELM. Second, the ensemble pruning paradigm ameliorates some of ELM's specific defects to a certain extent. Third, ensemble pruning raises the robustness and accuracy of time series forecasting, addressing a gap in the existing research. Fourth, the ReTSP-Trend pruning measure guarantees that the remaining predictor which supplements the subensemble the most is selected at each step. Finally, the development of PS-ELMs advances our investigation of the popular ensemble technique of Stacked Generalization. Experimental results on four benchmark financial time series datasets verify the validity of the proposed PS-ELMs algorithm.
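The pipeline the abstract describes — a pool of level-0 ELMs, greedy reduce-error pruning on a validation set, and simple averaging as the level-1 combiner — can be sketched as below. This is an illustrative skeleton under stated assumptions, not the authors' implementation: the `ELM` class and `reduce_error_prune` function are hypothetical names introduced here, the toy sine series stands in for the financial datasets, and plain validation RMSE is used as the pruning criterion in place of the paper's trend- and error-direction-aware ReTSP-Trend measure.

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer ELM regressor: random input weights and
    biases, sigmoid activation, output weights solved by least squares."""
    def __init__(self, n_hidden=30, seed=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        # Moore-Penrose pseudoinverse yields the least-squares output weights
        self.beta = np.linalg.pinv(self._hidden(X)) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

def reduce_error_prune(preds, y_val, k):
    """Greedy reduce-error pruning: start from the single best model, then
    repeatedly add the model that most lowers the subensemble's validation
    RMSE, until k models are selected."""
    rmse = lambda p: np.sqrt(np.mean((p - y_val) ** 2))
    selected = [min(range(len(preds)), key=lambda i: rmse(preds[i]))]
    while len(selected) < k:
        best = min((i for i in range(len(preds)) if i not in selected),
                   key=lambda i: rmse(np.mean([preds[j] for j in selected + [i]], axis=0)))
        selected.append(best)
    return selected

# Toy autoregressive setup: predict x[t] from the previous 4 values.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
lag = 4
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]
X_tr, y_tr = X[:250], y[:250]
X_val, y_val = X[250:320], y[250:320]
X_te, y_te = X[320:], y[320:]

# Level-0: a pool of ELMs differing only in their random hidden layers.
pool = [ELM(seed=s).fit(X_tr, y_tr) for s in range(15)]
val_preds = [m.predict(X_val) for m in pool]
keep = reduce_error_prune(val_preds, y_val, k=5)

# Level-1: simple averaging of the pruned subensemble on the test set.
pred = np.mean([pool[i].predict(X_te) for i in keep], axis=0)
rmse_ens = np.sqrt(np.mean((pred - y_te) ** 2))
```

The greedy forward selection used here is the standard ordered-aggregation scheme studied in [24] and [49]; the paper's contribution is the ReTSP-Trend criterion that replaces the plain RMSE used in this sketch.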

References

[1]
Neto A, Calvalcanti GD, Ren TI (2009) Financial time series prediction using exogenous series and combined neural networks. In: International joint conference on neural networks, 2009. IJCNN 2009, pp 149–156
[2]
Abu-Mostafa YS, Atiya AF (1996) Introduction to financial forecasting. Appl Intell 6:205–213
[3]
Jiang H, He W (2012) Grey relational grade in local support vector regression for financial time series prediction. Expert Syst Appl 39:2256–2262
[4]
Crone SF, Hibon M, Nikolopoulos K (2011) Advances in forecasting with neural networks? Empirical evidence from the NN3 competition on time series prediction. Int J Forecast 27:635–660
[5]
White H (1988) Economic prediction using neural networks: the case of IBM daily stock returns. In: IEEE international conference on neural networks, 1988, pp 451–458
[6]
Zhiqiang G, Huaiqing W, Quan L (2013) Financial time series forecasting using LPP and SVM optimized by PSO. Soft Comput 17:805–818
[7]
Huang G-B, Zhu Q-Y, Siew C-K (2006) Real-time learning capability of neural networks. IEEE Trans Neural Netw 17:863–878
[8]
Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489–501
[9]
He Y-L, Geng Z-Q, Xu Y, Zhu Q-X (2015) A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement. ISA Trans 58:533–542
[10]
Peng Y, Wang S, Long X, Lu B-L (2015) Discriminative graph regularized extreme learning machine and its application to face recognition. Neurocomputing 149:340–353
[11]
Na W, Zhu Q, Su Z, Jiang Q (2015) Research on well production prediction based on improved extreme learning machine. Int J Model Identif Control 23:238–247
[12]
Wong KI, Vong CM, Wong PK, Luo J (2015) Sparse Bayesian extreme learning machine and its application to biofuel engine performance prediction. Neurocomputing 149:397–404
[13]
Shamshirband S, Mohammadi K, Tong CW, Petković D, Porcu E, Mostafaeipour A et al (2015) Application of extreme learning machine for estimation of wind speed distribution. Clim Dyn 1–15
[14]
Wang W, Yu L, Liu H, Sun F (2015) Extreme learning machine for linear dynamical systems classification: application to human activity recognition. In: Proceedings of ELM-2014, vol 2. Springer, Berlin, pp 11–20
[15]
Daliri MR (2012) A hybrid automatic system for the diagnosis of lung cancer based on genetic algorithm and fuzzy extreme learning machines. J Med Syst 36:1001–1005
[16]
Daliri MR (2015) Combining extreme learning machines using support vector machines for breast tissue classification. Comput Methods Biomech Biomed Eng 18:185–191
[17]
Miche Y, Sorjamaa A, Bas P, Simula O, Jutten C, Lendasse A (2010) OP-ELM: optimally pruned extreme learning machine. IEEE Trans Neural Netw 21:158–162
[18]
Gonzalez-Carrasco I, Garcia-Crespo A, Ruiz-Mezcua B, Lopez-Cuadrado JL (2012) An optimization methodology for machine learning strategies and regression problems in ballistic impact scenarios. Appl Intell 36:424–441
[19]
Gonzalez-Carrasco I, Garcia-Crespo A, Ruiz-Mezcua B, Lopez-Cuadrado JL, Colomo-Palacios R (2014) Towards a framework for multiple artificial neural network topologies validation by means of statistics. Expert Syst 31:20–36
[20]
Lai KK, Yu L, Wang S, Wei H (2006) A novel nonlinear neural network ensemble model for financial time series forecasting. In: Computational science – ICCS 2006. Springer, Berlin, pp 790–793
[21]
Kim D, Kim C (1997) Forecasting time series with genetic fuzzy predictor ensemble. IEEE Trans Fuzzy Syst 5:523–535
[22]
Qian B, Rasheed K (2010) Foreign exchange market prediction with multiple classifiers. J Forecast 29:271–284
[23]
Khashei M, Bijari M (2012) A new class of hybrid models for time series forecasting. Expert Syst Appl 39:4344–4357
[24]
Hernández-Lobato D, Martínez-Muñoz G, Suárez A (2011) Empirical analysis and evaluation of approximate techniques for pruning regression bagging ensembles. Neurocomputing 74:2250–2264
[25]
Margineantu DD, Dietterich TG (1997) Pruning adaptive boosting. In: ICML, 1997, pp 211–218
[26]
Prodromidis AL, Stolfo SJ (2001) Cost complexity-based pruning of ensemble classifiers. Knowl Inf Syst 3:449–469
[27]
Martínez-Muñoz G, Suárez A (2004) Aggregation ordering in bagging. In: Proceedings of the IASTED international conference on artificial intelligence and applications, 2004, pp 258–263
[28]
Zhou Z-H, Tang W (2003) Selective ensemble of decision trees. In: Rough sets, fuzzy sets, data mining, and granular computing. Springer, Berlin, pp 476–483
[29]
Caruana R, Niculescu-Mizil A, Crew G, Ksikes A (2004) Ensemble selection from libraries of models. In: Proceedings of the twenty-first international conference on machine learning, 2004, p 18
[30]
Banfield RE, Hall LO, Bowyer KW, Kegelmeyer WP (2005) Ensemble diversity measures and their application to thinning. Inf Fusion 6:49–62
[31]
Martínez-Muñoz G, Suárez A (2006) Pruning in ordered bagging ensembles. In: Proceedings of the 23rd international conference on machine learning, 2006, pp 609–616
[32]
Martínez-Muñoz G, Suárez A (2007) Using boosting to prune bagging ensembles. Pattern Recognit Lett 28:156–165
[33]
Zhou Z-H, Wu J, Tang W (2002) Ensembling neural networks: many could be better than all. Artif Intell 137:239–263
[34]
Ma ZC, Dai Q, Liu NZ (2015) Several novel evaluation measures for rank-based ensemble pruning with applications to time series prediction. Expert Syst Appl 42:280–292
[35]
Hansen LK, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 12:993–1001
[36]
Grigorievskiy A, Miche Y, Ventelä A-M, Séverin E, Lendasse A (2014) Long-term time series prediction using OP-ELM. Neural Netw 51:50–56
[37]
Zhao G, Shen Z, Miao C, Gay RK (2008) Enhanced extreme learning machine with stacked generalization. In: IJCNN, 2008, pp 1191–1198
[38]
Huang G-B (2003) Learning capability and storage capacity of two-hidden-layer feedforward networks. IEEE Trans Neural Netw 14:274–281
[39]
Huang G-B, Zhu Q-Y, Siew C-K (2004) Extreme learning machine: a new learning scheme of feedforward neural networks. In: 2004 IEEE international joint conference on neural networks, 2004. Proceedings, pp 985–990
[40]
Ledezma A, Aler R, Sanchis A, Borrajo D (2010) GA-stacking: evolutionary stacked generalization. Intell Data Anal 14:89–119
[41]
Dietterich TG (2000) Ensemble methods in machine learning. In: Multiple classifier systems. Springer, Heidelberg, pp 1–15
[42]
Ting KM, Witten IH (1999) Issues in stacked generalization. J Artif Intell Res 10:271–289
[43]
Dzeroski S, Zenko B (2002) Is combining classifiers better than selecting the best one? In: ICML, 2002, pp 123–130
[44]
Merz CJ (1999) Using correspondence analysis to combine classifiers. Mach Learn 36:33–58
[45]
Wolpert DH (1992) Stacked generalization. Neural Netw 5:241–259
[46]
Tamon C, Xiang J (2000) On the boosting pruning problem. In: Machine learning: ECML 2000. Springer, Berlin, pp 404–412
[47]
Partalas I, Tsoumakas G, Vlahavas I (2009) Pruning an ensemble of classifiers via reinforcement learning. Neurocomputing 72:1900–1909
[48]
Kuncheva LI, Whitaker CJ (2003) Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn 51:181–207
[49]
Martínez-Muñoz G, Hernández-Lobato D, Suárez A (2009) An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Trans Pattern Anal Mach Intell 31:245–259
[50]
Assaad M, Boné R, Cardot H (2008) A new boosting algorithm for improved time-series forecasting with recurrent neural networks. Inf Fusion 9:41–55
[51]
Yahoo Finance. http://finance.yahoo.com/. Accessed 7 July 2015
[52]
CrossValidated. http://stats.stackexchange.com/questions/14099/using-k-fold-cross-validation-for-time-series-model-selection. Accessed 7 July 2015
[53]
Neto A, Calvalcanti G, Ren TI (2009) Financial time series prediction using exogenous series and combined neural networks. In: International joint conference on neural networks, 2009. IJCNN 2009, pp 149–156
[54]
Root-mean-square deviation. http://en.wikipedia.org/wiki/Root-mean-square_deviation. Accessed 7 July 2015
[55]
Mean absolute percentage error. http://en.wikipedia.org/wiki/Mean_absolute_percentage_error. Accessed 7 July 2015



    Published In

    Neural Processing Letters  Volume 44, Issue 3
    December 2016
    309 pages

    Publisher

    Kluwer Academic Publishers

    United States


    Author Tags

    1. Extreme learning machine (ELM)
    2. Financial time series forecasting
    3. Pruned Stacking extreme learning machines (PS-ELMs)
    4. ReTSP-Trend
    5. Reduce-error pruning for time series prediction (ReTSP)
    6. Stacked Generalization (Stacking)

    Qualifiers

    • Article


    Cited By

    • (2025) Ensemble deep learning techniques for time series analysis: a comprehensive review, applications, open issues, challenges, and future directions. Cluster Computing 28(1). DOI: 10.1007/s10586-024-04684-0. Online publication date: 1-Feb-2025
    • (2024) Energy load forecasting: one-step ahead hybrid model utilizing ensembling. Computing 106(1):241–273. DOI: 10.1007/s00607-023-01217-2. Online publication date: 1-Jan-2024
    • (2023) A Novel Regularization Paradigm for the Extreme Learning Machine. Neural Processing Letters 55(6):7009–7033. DOI: 10.1007/s11063-023-11248-7. Online publication date: 1-Dec-2023
    • (2018) Supervised ranking framework for relationship prediction in heterogeneous information networks. Applied Intelligence 48(5):1111–1127. DOI: 10.1007/s10489-017-1044-7. Online publication date: 1-May-2018
    • (2017) An automated approach to estimate human interest. Applied Intelligence 47(4):1186–1207. DOI: 10.1007/s10489-017-0947-7. Online publication date: 1-Dec-2017
