Abstract
This article presents a gesture-based user interface for a robotic shopping trolley. The trolley is designed as a mobile robotic platform that assists customers in shops and supermarkets. Its main functions include navigating through the store, providing information on product availability and location, and transporting purchased items. One of the key features of the developed interface is the gestural modality, or, more precisely, a recognition system for elements of Russian sign language. The interface design and the interaction strategy are presented in flowcharts, with the aim of demonstrating the gestural modality as a natural part of an assistive information robot. In addition, the paper gives a short overview of mobile robots and describes a CNN-based gesture recognition technique. Support for Russian sign language recognition is particularly important because of the relatively large number of its native speakers (signers).
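To give a concrete sense of the CNN-based gesture recognition mentioned above, the sketch below shows a minimal frame-level gesture classifier in Keras. It is an illustrative assumption rather than the authors' actual architecture: the input size, number of classes, and layer choices are placeholders for a model trained on cropped hand-region images of Russian sign language gestures.

```python
# Illustrative sketch only: a minimal CNN classifier for single-frame gesture
# recognition. The architecture, input shape, and class count are assumptions,
# not the model used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10             # assumed number of gesture classes
INPUT_SHAPE = (128, 128, 3)  # assumed size of cropped hand-region images


def build_gesture_cnn():
    """Small CNN mapping a hand-region image to gesture class probabilities."""
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    # Print the layer structure; training data and preprocessing are omitted.
    build_gesture_cnn().summary()
```

In practice, a sequence model (e.g., over per-frame features or skeletal keypoints) would be layered on top of such a frame-level classifier to handle dynamic gestures, but that is outside the scope of this sketch.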
Acknowledgements
This research is financially supported by the Ministry of Science and Higher Education of the Russian Federation (agreement No. 14.616.21.0095, reference RFMEFI61618X0095) and by the Ministry of Education of the Czech Republic (project No. LTARF18017).