Abstract
A fuzzy controller requires a control engineer to tune its fuzzy rules for each problem to be solved. To reduce this burden, we develop a gradient-based tuning method for a fuzzy controller. The method is closely related to reinforcement learning but takes advantage of a practical assumption to learn faster. In reinforcement learning, the values of problem states must be learned through many trial-and-error interactions between the controller and the plant, and the plant dynamics must also be learned by the controller. In this research, we assume that an approximated value function of the problem states can be represented as a function of the Euclidean distance from a goal state and of the action executed at that state. Using this function as an evaluation function, the fuzzy controller is tuned to acquire an optimal policy for reaching the goal state despite unknown plant dynamics. Our experimental results on a pole-balancing problem show that the proposed method is efficient and effective in solving not only a set-point problem but also a tracking problem.
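The tuning idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian rule base, the toy first-order plant, and the exact evaluation form E(x, u) = (x − x_goal)² + λu² are assumptions introduced here to make the distance-plus-action evaluation and the gradient update concrete.

```python
import numpy as np

# Illustrative sketch only: a zero-order Sugeno fuzzy controller whose
# consequent parameters are tuned by gradient descent on an approximated
# evaluation function. The rule base, the toy plant, and the exact form
# E(x, u) = (x - x_goal)^2 + lam * u^2 are assumptions, not the paper's.

def memberships(x, centers, width=1.0):
    """Gaussian membership degree of scalar state x for each rule."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def fuzzy_action(x, centers, consequents):
    """Sugeno-style weighted-average defuzzification."""
    w = memberships(x, centers)
    return float(w @ consequents / w.sum())

def plant_step(x, u, dt=0.1):
    """Toy first-order plant, used here only to make the sketch runnable;
    the paper assumes the real plant dynamics are unknown."""
    return x + dt * u

def evaluation(x, u, x_goal=0.0, lam=0.01):
    """Approximated evaluation: squared distance to goal plus action penalty."""
    return (x - x_goal) ** 2 + lam * u ** 2

def tune_step(x, centers, consequents, lr=0.05, eps=1e-4):
    """One numerical-gradient descent update of the consequent parameters,
    evaluating the one-step lookahead state under the toy plant."""
    def lookahead(c):
        u = fuzzy_action(x, centers, c)
        return evaluation(plant_step(x, u), u)

    grad = np.zeros_like(consequents)
    for i in range(consequents.size):
        c_hi, c_lo = consequents.copy(), consequents.copy()
        c_hi[i] += eps
        c_lo[i] -= eps
        grad[i] = (lookahead(c_hi) - lookahead(c_lo)) / (2 * eps)
    return consequents - lr * grad
```

Repeated calls to `tune_step` while the controller acts on the plant drive the evaluation down and the state toward the goal; in the paper the plant dynamics are unknown, so this explicit lookahead merely stands in for the evaluation signal the actual method uses.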
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Naba, A., Miyashita, K. (2005). Tuning Fuzzy Controller Using Approximated Evaluation Function. In: Abraham, A., Dote, Y., Furuhashi, T., Köppen, M., Ohuchi, A., Ohsawa, Y. (eds) Soft Computing as Transdisciplinary Science and Technology. Advances in Soft Computing, vol 29. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32391-0_19
DOI: https://doi.org/10.1007/3-540-32391-0_19
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-25055-5
Online ISBN: 978-3-540-32391-4
eBook Packages: Engineering (R0)