Abstract
Exploration, a process of trial and error, plays an important role in reinforcement learning. The conventional choice for driving exploration is a uniform pseudorandom number generator. However, a deterministic chaotic source is also known to produce a random-like sequence, much as a stochastic source does. Applying this random-like property of deterministic chaos to the exploration generator, we previously found that a deterministic chaotic exploration generator based on the logistic map outperforms a stochastic random exploration generator in a nonstationary shortcut-maze problem. In this work, to confirm that difference in performance, we examine target capturing as another nonstationary task. The simulation results for this task corroborate the findings of our previous work.
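The logistic map mentioned in the abstract is the iteration x_{n+1} = μ·x_n·(1 − x_n), which for μ = 4 produces a chaotic, random-looking orbit on (0, 1). As a rough illustration of how such a source might drive exploration, the sketch below generates a chaotic sequence and uses it in a hypothetical ε-greedy action selector; the exact coupling between the chaotic generator and the learning algorithm used in the paper is not specified here, so `chaotic_epsilon_greedy` and its parameters are assumptions for illustration only.

```python
def logistic_map(x, mu=4.0):
    """One iterate of the logistic map x_{n+1} = mu * x_n * (1 - x_n)."""
    return mu * x * (1.0 - x)


def chaotic_sequence(x0=0.3, n=1000, mu=4.0):
    """Generate n iterates of the logistic map starting from x0.

    For mu = 4 the orbit is chaotic and wanders over (0, 1),
    giving a deterministic but random-like sequence.
    """
    xs = []
    x = x0
    for _ in range(n):
        x = logistic_map(x, mu)
        xs.append(x)
    return xs


def chaotic_epsilon_greedy(q_values, x, epsilon=0.1):
    """Hypothetical epsilon-greedy selector driven by the chaotic source.

    Advances the chaotic state x once per decision; explores when the
    chaotic value falls below epsilon, otherwise exploits the greedy
    action. Returns (action, new chaotic state).
    """
    x = logistic_map(x)
    if x < epsilon:
        # Explore: reuse the chaotic value to pick an action index.
        action = min(int(x / epsilon * len(q_values)), len(q_values) - 1)
    else:
        # Exploit: pick the action with the highest Q-value.
        action = max(range(len(q_values)), key=lambda a: q_values[a])
    return action, x
```

Replacing `logistic_map` with a call to a stochastic uniform generator recovers ordinary ε-greedy exploration, which is the comparison the paper investigates.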
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Morihiro, K., Isokawa, T., Matsui, N., Nishimura, H. (2005). Reinforcement Learning by Chaotic Exploration Generator in Target Capturing Task. In: Khosla, R., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2005. Lecture Notes in Computer Science(), vol 3681. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11552413_178
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28894-7
Online ISBN: 978-3-540-31983-2