Abstract
This paper focuses on generating collective behavior of a robotic swarm using an attention agent. The selective attention mechanism enables an agent to cope with environmental variations that are irrelevant to the task. This paper applies the attention mechanism to a robotic swarm to enhance system-level properties such as flexibility and scalability. Evolutionary computation is a promising method for training an attention agent, because the controller structure is not restricted to architectures trainable by gradient-based methods. This paper therefore employs a deep neuroevolution approach to generate collective behavior in a robotic swarm. Experiments are conducted in computer simulations built on the Unity 3D game engine, and the performance of the attention agent is compared with a convolutional neural network approach. The experimental results show that the attention agent acquires generalization abilities in the robotic swarm similar to those reported for single-agent problems.
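The published text does not include source code, but the attention agent described above follows the self-interpretable agent design of Tang et al. (2020): the visual observation is split into patches, a single self-attention layer scores the patches, and only the positions of the most important patches are passed to a small controller whose parameters are optimized by an evolution strategy. The Python sketch below is a minimal, assumed illustration of that pipeline; all dimensions, parameter names, and the mapping to wheel velocities are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed, not the authors' code) of a selective-attention agent.
    import numpy as np

    PATCH, TOP_K, D_K = 8, 10, 4          # patch size, patches kept, key/query width

    def extract_patches(img):
        """Split an HxWxC image into flattened non-overlapping patches and their centers."""
        h, w, c = img.shape
        patches, centers = [], []
        for i in range(0, h - PATCH + 1, PATCH):
            for j in range(0, w - PATCH + 1, PATCH):
                patches.append(img[i:i + PATCH, j:j + PATCH].reshape(-1))
                centers.append(((i + PATCH / 2) / h, (j + PATCH / 2) / w))
        return np.array(patches), np.array(centers)

    def attend(patches, centers, Wq, Wk):
        """Score patches with one self-attention layer and keep the TOP_K patch centers."""
        q, k = patches @ Wq, patches @ Wk
        scores = q @ k.T / np.sqrt(D_K)               # pairwise attention logits
        scores = np.exp(scores - scores.max())
        scores /= scores.sum(axis=-1, keepdims=True)  # row-wise softmax
        importance = scores.sum(axis=0)               # total attention each patch receives
        top = np.argsort(importance)[-TOP_K:]
        return centers[top].reshape(-1)               # controller sees positions only

    def act(obs, params):
        """Map an observation to an action (e.g., two wheel velocities)."""
        Wq, Wk, Wc = params
        patches, centers = extract_patches(obs)
        features = attend(patches, centers, Wq, Wk)
        return np.tanh(features @ Wc)

    # Example with random parameters for a 64x64 RGB observation and a 2-wheel robot.
    rng = np.random.default_rng(0)
    dim = PATCH * PATCH * 3
    params = (rng.normal(0, 0.1, (dim, D_K)),
              rng.normal(0, 0.1, (dim, D_K)),
              rng.normal(0, 0.1, (2 * TOP_K, 2)))
    action = act(rng.random((64, 64, 3)), params)

In the deep neuroevolution setting referred to in the abstract, the flattened parameters (Wq, Wk, Wc) would form the genotype, and an evolution strategy would update them from episode-level fitness of the swarm rather than from gradients.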
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This work was presented in part at the joint symposium of the 28th International Symposium on Artificial Life and Robotics, the 8th International Symposium on BioComplexity, and the 6th International Symposium on Swarm Behavior and Bio-Inspired Robotics (Beppu, Oita and Online, January 25–27, 2023).
About this article
Cite this article
Iwami, A., Morimoto, D., Shiozaki, N. et al. Generating collective behavior of a robotic swarm using an attention agent with deep neuroevolution. Artif Life Robotics 28, 669–679 (2023). https://doi.org/10.1007/s10015-023-00902-x