Abstract
As an important approach to decision-making problems, imitation learning learns expert behavior from demonstrations provided by experts, without requiring a predefined reward function as in reinforcement learning. Traditionally, imitation learning assumes that demonstrations are generated from a single latent expert intention. One promising method in this line is generative adversarial imitation learning (GAIL), which is designed to scale to large environments and can be thought of as model-free imitation learning built on top of generative adversarial networks (GANs). However, GAIL fails to learn well from expert demonstrations generated under multiple intentions, each of which can be marked by a latent label. In this paper, we propose to add an auxiliary classifier to GAIL, from which we derive a novel variant of GAIL, named ACGAIL, that allows label conditioning in imitation learning about multiple intentions. Experimental results on several MuJoCo tasks indicate that ACGAIL achieves significant performance improvements over existing methods, e.g., GAIL and InfoGAIL, on label-conditional imitation learning about multiple intentions.
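For intuition, the following minimal PyTorch sketch shows how an auxiliary classifier can be attached to a GAIL-style discriminator, in the spirit of the ACGAN construction of Odena et al. cited in the references below. It is an illustration only, not the authors' implementation; the network sizes, names, and the unweighted sum of the two losses are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Discriminator over state-action pairs with two heads (illustrative sketch,
# not the paper's code): an adversarial head that scores expert vs. policy
# samples, and an auxiliary head that classifies the latent intention label.
class ACDiscriminator(nn.Module):
    def __init__(self, obs_dim, act_dim, n_intentions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.adv_head = nn.Linear(hidden, 1)             # expert-vs-policy logit
        self.cls_head = nn.Linear(hidden, n_intentions)  # intention-label logits

    def forward(self, obs, act):
        h = self.trunk(torch.cat([obs, act], dim=-1))
        return self.adv_head(h), self.cls_head(h)

def discriminator_loss(disc, expert_obs, expert_act, expert_labels,
                       policy_obs, policy_act):
    # Standard GAN discrimination loss, plus an auxiliary classification
    # loss tying expert state-action pairs to their intention labels.
    exp_logit, exp_cls = disc(expert_obs, expert_act)
    pol_logit, _ = disc(policy_obs, policy_act)
    adv_loss = (F.binary_cross_entropy_with_logits(exp_logit, torch.ones_like(exp_logit))
                + F.binary_cross_entropy_with_logits(pol_logit, torch.zeros_like(pol_logit)))
    cls_loss = F.cross_entropy(exp_cls, expert_labels)
    return adv_loss + cls_loss

In a full ACGAIL-style training loop, the label-conditioned policy would then be updated with a policy-gradient method such as TRPO, using a reward derived from both heads, analogously to GAIL.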
This work was supported in part by the National Natural Science Foundation of China (61502323) and the High School Natural Foundation of Jiangsu (16KJB520041).
References
Abbeel, P., Ng, A.Y.: Apprenticeship learning via inverse reinforcement learning. In: ICML, pp. 1–8 (2004)
Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein generative adversarial networks. In: ICML, pp. 214–223 (2017)
Babes, M., Marivate, V., Subramanian, K., Littman, M.L.: Apprenticeship learning about multiple intentions. In: ICML, pp. 897–904 (2011)
Bojarski, M., et al.: End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016)
Brockman, G., et al.: OpenAI Gym. arXiv preprint arXiv:1606.01540 (2016)
Choi, J., Kim, K.E.: Nonparametric Bayesian inverse reinforcement learning for multiple reward functions. In: NIPS, pp. 305–313 (2012)
Dimitrakakis, C., Rothkopf, C.A.: Bayesian multitask inverse reinforcement learning. In: Sanner, S., Hutter, M. (eds.) EWRL 2011. LNCS (LNAI), vol. 7188, pp. 273–284. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29946-9_27
Finn, C., Levine, S., Abbeel, P.: Guided cost learning: deep inverse optimal control via policy optimization. In: ICML, pp. 49–58 (2016)
Goodfellow, I., et al.: Generative adversarial nets. In: NIPS, pp. 2672–2680 (2014)
Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: NIPS, pp. 5769–5779 (2017)
Hausman, K., Chebotar, Y., Schaal, S., Sukhatme, G., Lim, J.J.: Multi-modal imitation learning from unstructured demonstrations using generative adversarial nets. In: NIPS, pp. 1235–1245 (2017)
Ho, J., Ermon, S.: Generative adversarial imitation learning. In: NIPS, pp. 4565–4573 (2016)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)
Li, Y., Song, J., Ermon, S.: InfoGAIL: interpretable imitation learning from visual demonstrations. In: NIPS, pp. 3815–3825 (2017)
Ng, A.Y., Russell, S.J.: Algorithms for inverse reinforcement learning. In: ICML, pp. 663–670 (2000)
Odena, A., Olah, C., Shlens, J.: Conditional image synthesis with auxiliary classifier GANs. In: ICML, pp. 2642–2651 (2017)
Pomerleau, D.A.: Efficient training of artificial neural networks for autonomous navigation. Neural Comput. 3(1), 88–97 (1991)
Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: ICML, pp. 1889–1897 (2015)
Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., de Freitas, N.: Dueling network architectures for deep reinforcement learning. In: ICML, pp. 1995–2003 (2016)
Ziebart, B.D., Bagnell, J.A., Dey, A.K.: Maximum causal entropy correlated equilibria for Markov games. In: AAMAS, pp. 207–214 (2011)
Ziebart, B.D., Maas, A.L., Bagnell, J.A., Dey, A.K.: Maximum entropy inverse reinforcement learning. In: AAAI, pp. 1433–1438 (2008)
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Lin, J., Zhang, Z. (2018). ACGAIL: Imitation Learning About Multiple Intentions with Auxiliary Classifier GANs. In: Geng, X., Kang, B.H. (eds.) PRICAI 2018: Trends in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11012. Springer, Cham. https://doi.org/10.1007/978-3-319-97304-3_25
DOI: https://doi.org/10.1007/978-3-319-97304-3_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-97303-6
Online ISBN: 978-3-319-97304-3
eBook Packages: Computer Science (R0)