Abstract
The extreme learning machine (ELM) combines excellent generalization performance with a simple structure and low computational cost. In this paper, these merits are exploited for reinforcement learning: approximating the Q function with an ELM speeds up learning. However, because the number of hidden-layer nodes equals the number of samples, a large sample set severely slows training. To address this, a rolling time-window mechanism is introduced into the algorithm, bounding the size of the sample space. Finally, the proposed algorithm is compared, on a boat problem, with reinforcement learning based on a traditional BP neural network. Simulation results show that the proposed algorithm is faster and more effective.
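The full text is not reproduced here, so the following Python sketch only illustrates the idea summarized in the abstract: an ELM with a fixed random hidden layer and output weights solved in closed form approximates the Q function, while a rolling time window bounds the number of stored samples (and hence the hidden-layer size). The class name `ELMQApproximator`, the sigmoid activation, and the default window size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class ELMQApproximator:
    """Sketch of an ELM regressor for Q(s, a) trained on a rolling window of samples."""

    def __init__(self, input_dim, window_size=100, n_hidden=None, seed=0):
        rng = np.random.default_rng(seed)
        # The abstract ties the hidden-layer size to the sample count, which the
        # rolling window bounds; this sketch simply fixes it to window_size.
        self.n_hidden = n_hidden or window_size
        # Random input weights and biases: drawn once and never trained (core ELM idea).
        self.W_in = rng.uniform(-1.0, 1.0, size=(input_dim, self.n_hidden))
        self.bias = rng.uniform(-1.0, 1.0, size=self.n_hidden)
        self.beta = np.zeros(self.n_hidden)   # output weights, solved in closed form
        self.window_size = window_size
        self.samples, self.targets = [], []   # rolling time window of (x, target) pairs

    def _hidden(self, X):
        # Sigmoid activation of the random hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W_in + self.bias)))

    def add_sample(self, x, target):
        # Rolling time window: once the buffer is full, the oldest sample is discarded.
        self.samples.append(np.asarray(x, dtype=float))
        self.targets.append(float(target))
        if len(self.samples) > self.window_size:
            self.samples.pop(0)
            self.targets.pop(0)

    def fit(self):
        # ELM training step: output weights via the Moore-Penrose pseudoinverse.
        H = self._hidden(np.vstack(self.samples))
        self.beta = np.linalg.pinv(H) @ np.asarray(self.targets)

    def predict(self, x):
        return float(self._hidden(np.asarray(x, dtype=float)[None, :]) @ self.beta)
```

In a Q-learning loop, `x` would encode a state-action pair and `target` the bootstrapped value r + γ·max Q(s′, a′); `fit()` could be called after each new sample or in small batches, which stays cheap because ELM training reduces to a single pseudoinverse.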
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Pan, J., Wang, X., Cheng, Y., Cao, G. (2012). Reinforcement Learning Based on Extreme Learning Machine. In: Huang, DS., Gupta, P., Zhang, X., Premaratne, P. (eds) Emerging Intelligent Computing Technology and Applications. ICIC 2012. Communications in Computer and Information Science, vol 304. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31837-5_12
DOI: https://doi.org/10.1007/978-3-642-31837-5_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-31836-8
Online ISBN: 978-3-642-31837-5