
Reinforcement Learning by Chaotic Exploration Generator in Target Capturing Task

  • Conference paper
Knowledge-Based Intelligent Information and Engineering Systems (KES 2005)

Abstract

Exploration, a process of trial and error, plays a very important role in reinforcement learning. A uniform pseudorandom number generator is commonly used as the source of exploration. However, a chaotic source is known to provide a random-like sequence similar to that of a stochastic source. Applying this random-like feature of deterministic chaos to exploration, we previously found that a deterministic chaotic exploration generator based on the logistic map outperforms a stochastic random exploration generator in a nonstationary shortcut maze problem. In this work, to verify this performance difference, we examine target capturing as another nonstationary task. The simulation results in this task confirm the results of our previous work.
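The idea sketched in the abstract can be illustrated with a minimal example. The sketch below is an assumption about the general scheme, not the authors' exact implementation: a logistic-map iterator (with the chaotic parameter r = 4, as the abstract's mention of the logistic map suggests) replaces a uniform pseudorandom source in epsilon-greedy action selection. The function and parameter names (`logistic_map`, `choose_action`, `epsilon`, the seed `x0`) are hypothetical.

```python
import random

def logistic_map(x0=0.3141, r=4.0):
    """Yield a chaotic sequence via the logistic map x_{n+1} = r*x*(1-x).

    For r = 4 the map is fully chaotic and its iterates stay in (0, 1),
    so they can stand in for uniform pseudorandom draws.
    """
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def choose_action(q_values, source, epsilon=0.1):
    """Epsilon-greedy selection driven by an arbitrary [0, 1) source.

    `source` is any iterator yielding values in [0, 1); swapping a
    chaotic generator in place of a stochastic one (e.g. iter(random.random, None))
    is the comparison the paper studies.
    """
    if next(source) < epsilon:
        # Explore: draw the action index from the same source.
        return int(next(source) * len(q_values)) % len(q_values)
    # Exploit: pick the greedy action.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

The only change between the two exploration schemes compared in the paper is which iterator is passed as `source`; the learning rule (Q-learning or Sarsa, per the references) is unchanged.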


References

  1. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research 4, 237–285 (1996)

  2. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. The MIT Press, Cambridge (1998)

  3. Thrun, S.B.: Efficient Exploration in Reinforcement Learning. Technical Report CMU-CS-92-102, Carnegie Mellon University, Pittsburgh, PA (1992)

  4. Parker, T.S., Chua, L.O.: Practical Numerical Algorithms for Chaotic Systems. Springer, Heidelberg (1989)

  5. Ott, E., Sauer, T., Yorke, J.A.: Coping with Chaos: Analysis of Chaotic Data and the Exploitation of Chaotic Systems. John Wiley & Sons, Inc., New York (1994)

  6. Potapov, B., Ali, M.K.: Learning, Exploration and Chaotic Policies. International Journal of Modern Physics C 11(7), 1455–1464 (2000)

  7. Morihiro, K., Matsui, N., Nishimura, H.: Effects of Chaotic Exploration on Reinforcement Maze Learning. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds.) KES 2004. LNCS (LNAI), vol. 3213, pp. 833–839. Springer, Heidelberg (2004)

  8. Watkins, C., Dayan, P.: Q-learning. Machine Learning 8, 279–292 (1992)

  9. Rummery, G.A., Niranjan, M.: On-line Q-learning Using Connectionist Systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University Engineering Department (1994)


Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Morihiro, K., Isokawa, T., Matsui, N., Nishimura, H. (2005). Reinforcement Learning by Chaotic Exploration Generator in Target Capturing Task. In: Khosla, R., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2005. Lecture Notes in Computer Science, vol 3681. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11552413_178

Download citation

  • DOI: https://doi.org/10.1007/11552413_178

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28894-7

  • Online ISBN: 978-3-540-31983-2

  • eBook Packages: Computer Science, Computer Science (R0)
