Abstract
Time-series prediction can be cast as a machine learning problem. Two effective learning methods, Artificial Neural Networks and Support Vector Machines, are used to build accurate non-linear models of the problem. Despite the effectiveness of these methods, two difficulties must be addressed. First, a time series may be noisy and have a high-dimensional embedding space. Second, learning depends on several hyper-parameters that must be set properly. To handle these problems, we apply noise- and dimension-reduction techniques together with model selection to obtain suitable hyper-parameters. We then introduce a meta-heuristic to refine the predictions of the selected models. Our experiments on a real-life problem show improved prediction quality compared to two 'benchmark' algorithms.
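To illustrate the pipeline sketched in the abstract (lag embedding of the series, dimension reduction, and hyper-parameter selection for a non-linear regressor), a minimal sketch follows. It is not the authors' implementation: the choice of PCA, an RBF-kernel SVR, the grid of hyper-parameters, and the scikit-learn API are all assumptions for illustration, and the meta-heuristic refinement step described in the paper is not reproduced here.

```python
# Minimal sketch (assumed setup, not the authors' code): lag-embed a noisy
# univariate series, reduce its dimension with PCA, and select SVR
# hyper-parameters by cross-validated grid search.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

def lag_embed(series, dim):
    """Build an embedding matrix of `dim` past values and next-step targets."""
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = series[dim:]
    return X, y

# Synthetic noisy series standing in for the real-life data used in the paper.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)

X, y = lag_embed(series, dim=12)           # high-dimensional, noisy embedding
pipe = Pipeline([
    ("pca", PCA()),                        # noise / dimension reduction
    ("svr", SVR(kernel="rbf")),            # non-linear regression model
])
param_grid = {                             # hyper-parameters set by model selection
    "pca__n_components": [3, 6, 9],
    "svr__C": [1, 10, 100],
    "svr__gamma": ["scale", 0.1, 1.0],
}
search = GridSearchCV(pipe, param_grid, cv=TimeSeriesSplit(n_splits=5),
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```

The time-series-aware cross-validation split stands in for the model-selection step; in the paper the selected models are subsequently refined by a meta-heuristic, which would replace or follow the grid search shown here.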
Copyright information
© 2006 Springer
Cite this paper
Juhos, I., Szarvas, G. (2006). Intelligent Forecast with Dimension Reduction. In: Abraham, A., de Baets, B., Köppen, M., Nickolay, B. (eds) Applied Soft Computing Technologies: The Challenge of Complexity. Advances in Soft Computing, vol 34. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-31662-0_22
DOI: https://doi.org/10.1007/3-540-31662-0_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-31649-7
Online ISBN: 978-3-540-31662-6