Abstract
Task-oriented robot learning has shown significant potential with the development of Reinforcement Learning (RL) algorithms. However, learning long-horizon tasks remains a formidable challenge for robots because such tasks are inherently complex, typically comprising multiple diverse stages. General-purpose RL algorithms commonly suffer from slow convergence, or fail to converge altogether, when applied to such tasks. These difficulties stem from local optimum traps and redundant exploration that arise in newly entered stages or at the junction of two consecutive stages. To address these challenges, we propose a novel state-dependent maximum entropy (SDME) reinforcement learning algorithm. It balances the trade-off between exploration and exploitation around three kinds of critical states that arise from the structure of long-horizon tasks. We conducted experiments in an open-source simulation environment on two representative long-horizon tasks. The proposed SDME algorithm learns faster and more stably, requiring only one-third of the learning samples needed by baseline approaches. Furthermore, we assess the generalization ability of our method under randomly initialized conditions, and the results show that the success rate of the SDME algorithm is nearly twice that of the baselines. Our code will be available at https://github.com/Peter-zds/SDME.
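The abstract only sketches the core idea. As a rough illustration of what a state-dependent entropy term can look like, the snippet below is a minimal sketch, not the authors' released implementation: it assumes a SAC-style soft value target and a hypothetical state_dependent_alpha helper that boosts the entropy temperature near assumed "critical" states such as stage junctions. The function names, parameter values, and the distance-based detection of critical states are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch (not the paper's code): a SAC-style soft value target in which
# the entropy temperature alpha depends on the state, so that exploration is
# boosted near hypothetical "critical" states (e.g. junctions between stages
# of a long-horizon task) and kept small elsewhere.
import numpy as np

def state_dependent_alpha(state, critical_states, base_alpha=0.2,
                          boost=1.0, radius=0.1):
    """Return a larger entropy temperature when `state` lies close to any
    critical state. The threshold `radius` and the values of `base_alpha`
    and `boost` are illustrative assumptions, not values from the paper."""
    dists = np.linalg.norm(critical_states - state, axis=1)
    return base_alpha + boost if np.any(dists < radius) else base_alpha

def soft_value_target(q_values, log_probs, state, critical_states):
    """Soft state value V(s) = E_a[Q(s,a) - alpha(s) * log pi(a|s)],
    estimated from sampled actions' Q-values and log-probabilities."""
    alpha = state_dependent_alpha(state, critical_states)
    return np.mean(q_values - alpha * log_probs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    critical = rng.uniform(-1, 1, size=(3, 4))   # assumed critical states
    s = critical[0] + 0.01                       # a state near a stage junction
    q = rng.normal(size=8)                       # sampled Q(s, a_i)
    logp = rng.normal(-1.0, 0.3, size=8)         # sampled log pi(a_i | s)
    print("V(s) with state-dependent alpha:",
          soft_value_target(q, logp, s, critical))
```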
Acknowledgements
This work was funded by the National Natural Science Foundation of China (Grant No. 61473155), the Primary Research & Development Plan of Jiangsu Province (Grant No. BE2017301), and the Six Talent Peaks Project in Jiangsu Province (Grant No. GDZB-039).
Author information
Contributions
All authors contributed to the study conception and design. Material preparation was performed by Yong Liu. Deshuai Zheng proposed the method and verified its practicality through experiments. The first draft of the manuscript was written by Tao Xue and Jin Yan. All authors commented on previous versions of the manuscript, and all authors read and approved the final manuscript.
Ethics declarations
Consent to participate
Informed consent was obtained from all individual participants included in the study.
Consent to publish
The participant has consented to the submission of the case report to the journal.
Competing interests
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zheng, D., Yan, J., Xue, T. et al. State-Dependent Maximum Entropy Reinforcement Learning for Robot Long-Horizon Task Learning. J Intell Robot Syst 110, 19 (2024). https://doi.org/10.1007/s10846-024-02049-8
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s10846-024-02049-8