Open access

A gesture-centric Android system for multi-party human-robot interaction

Published: 27 February 2013

Abstract

Natural body gesturing and speech dialogue are crucial for human-robot interaction (HRI) and human-robot symbiosis. Real interaction involves not only one-to-one communication but also communication among multiple people. We have therefore developed a system that adjusts gestures and facial expressions according to a speaker's location or situation for multi-party communication. Extending our previously developed real-time gesture planning method, we propose a gesture adjustment suited to human demand through motion parameterization, together with gaze motion planning that enables communication through eye-to-eye contact. We implemented the proposed motion planning method on the android Actroid-SIT, and we propose using a Key-Value Store, a high-speed, lightweight dictionary database with parallelism and scalability, to connect the components of our system. We conducted multi-party HRI experiments with 1,662 subjects in total. In our HRI system, over 60% of subjects started speaking to the Actroid, and the duration of their communication also became longer. In addition, we confirmed that our system gave humans a more sophisticated impression of the Actroid.
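The role of the Key-Value Store described above, connecting perception and motion-planning components through a shared dictionary, can be sketched as a simple blackboard pattern. The sketch below is a minimal in-process stand-in, not the paper's actual implementation (which would use a networked, scalable KVS); the key names and the speaker-position payload are purely illustrative.

```python
import threading

class KeyValueStore:
    """Minimal thread-safe key-value store standing in for the
    dictionary database that connects the system's components."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        # Writers (e.g., a perception component) publish under a key.
        with self._lock:
            self._data[key] = value

    def get(self, key, default=None):
        # Readers (e.g., the gesture/gaze planner) poll the latest value.
        with self._lock:
            return self._data.get(key, default)

# Hypothetical usage: perception writes the detected speaker's position;
# the gaze planner reads it to direct eye contact toward that speaker.
store = KeyValueStore()
store.put("speaker/position", (1.2, -0.4))
print(store.get("speaker/position"))  # → (1.2, -0.4)
```

Because every component talks only to the store, components can be added or swapped without point-to-point wiring, which is the parallelism and scalability benefit the abstract attributes to the KVS design.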

Supplementary Material

MP4 File (p133-kondo.mp4)






Published In

Journal of Human-Robot Interaction, Volume 2, Issue 1: Special Issue on HRI System Studies
February 2013, 170 pages

Publisher

Journal of Human-Robot Interaction Steering Committee


Author Tags

  1. Android
  2. body gesture
  3. facial expression
  4. human-robot interaction
  5. multi-party

Qualifiers

  • Research-article

Funding Sources

  • Japan Grant-in-Aid for Scientific Research on Innovative Areas


Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 120
  • Downloads (last 6 weeks): 24

Reflects downloads up to 12 Dec 2024
Citations

Cited By

  • (2024) Effects of Gestures and Negative Attitudes on Impressions of Lecturing Robots. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 628-631. doi:10.1145/3610978.3640612
  • (2023) Ice-Breaking Technology: Robots and Computers Can Foster Meaningful Connections between Strangers through In-Person Conversations. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-14. doi:10.1145/3544548.3581135
  • (2022) S2M-Net: Speech Driven Three-party Conversational Motion Synthesis Networks. Proceedings of the 15th ACM SIGGRAPH Conference on Motion, Interaction and Games, 1-10. doi:10.1145/3561975.3562954
  • (2021) Automatic Speech Recognition for Indoor HRI Scenarios. ACM Transactions on Human-Robot Interaction, 10(2), 1-30. doi:10.1145/3442629
  • (2020) Autonomously Learning One-To-Many Social Interaction Logic from Human-Human Interaction Data. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 419-427. doi:10.1145/3319502.3374798
  • (2020) Evaluating Imitation of Human Eye Contact and Blinking Behavior Using an Android for Human-like Communication. 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1-6. doi:10.1109/RO-MAN46459.2019.8956387
  • (2019) A Deep Learning-Based Model for Head and Eye Motion Generation in Three-party Conversations. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2(2), 1-19. doi:10.1145/3340250
  • (2018) DNN-HMM based Automatic Speech Recognition for HRI Scenarios. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 150-159. doi:10.1145/3171221.3171280
  • (2018) Performance Analysis of "Drive Me" - a Human Robot Interaction System. 2018 International Conference on Communications (COMM), 125-130. doi:10.1109/ICComm.2018.8430159
  • (2016) PRIveT – a portable ubiquitous robotics testbed for adaptive human-robot interaction. Journal of Ambient Intelligence and Smart Environments, 8(1), 5-19. doi:10.3233/AIS-150356
