
Embodiment of an agent by anthropomorphization of a common object

Published: 01 July 2012

Abstract

This paper proposes a direct anthropomorphization method to improve interaction between humans and agents. In this method, an artifact is converted into an agent by attaching humanoid parts to it. Many studies have provided valuable information on conveying spoken directions and gestures via anthropomorphic agents such as computer graphics agents and communication robots. In the direct anthropomorphization method, an artifact is anthropomorphized directly by fitting it with robotic parts shaped like human body parts. An artifact anthropomorphized in this way can provide information to people through spoken directions and body language, persuading them to pay more attention to the artifact than an anthropomorphic virtual agent or robot agent would. The authors conducted experiments to investigate how users' responses to an explanation of an artifact's functions change under the direct anthropomorphization method. The results of a pre-experiment indicated that participants paid more attention to the target artifact and memorized its functions more quickly and easily with the direct anthropomorphization method than with a humanoid agent. In the two experiments that followed, the authors compared the human-like aspects separately and evaluated which elements are key to anthropomorphization. They found that the “voice” was the key factor in rendering an object as an anthropomorphic agent. Furthermore, the “eyes” were found to be more effective in interactions than the “mouth”.
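To make the setup concrete, the sketch below (a minimal illustration, not from the paper) models a common object fitted with attachable humanoid parts (voice, eyes, and an arm) and a controller that coordinates them when explaining one of the object's functions. All class and method names here, such as HumanoidPart, AnthropomorphizedArtifact, and explain_function, are hypothetical assumptions.

```python
# A minimal sketch (not from the paper) of the direct anthropomorphization
# idea: a common object is fitted with attachable humanoid parts, and a
# controller coordinates voice, eyes, and a pointing arm when explaining
# one of the object's functions. All names are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class HumanoidPart:
    """One attachable robotic part, e.g. eyes, a mouth/speaker, or an arm."""
    name: str

    def actuate(self, command: str) -> None:
        # A real part would drive servos or a speaker; here we just log.
        print(f"[{self.name}] {command}")


@dataclass
class AnthropomorphizedArtifact:
    """A common object (e.g. a refrigerator) converted into an agent."""
    label: str
    parts: dict = field(default_factory=dict)

    def attach(self, part: HumanoidPart) -> None:
        self.parts[part.name] = part

    def explain_function(self, feature: str, utterance: str) -> None:
        # Voice carries the explanation while gaze and pointing direct
        # the user's attention to the feature being described.
        if "eyes" in self.parts:
            self.parts["eyes"].actuate(f"gaze toward the {feature}")
        if "arm" in self.parts:
            self.parts["arm"].actuate(f"point at the {feature}")
        if "voice" in self.parts:
            self.parts["voice"].actuate(f"say: {utterance}")


if __name__ == "__main__":
    fridge = AnthropomorphizedArtifact(label="refrigerator")
    for name in ("voice", "eyes", "arm"):
        fridge.attach(HumanoidPart(name))
    fridge.explain_function("ice dispenser",
                            "Press this lever and I will crush ice for you.")
```

In this sketch the voice carries the explanation and the eyes direct attention, mirroring the paper's finding that voice is the key factor for anthropomorphization and that eyes are more effective in interaction than the mouth.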


Cited By

  • (2018) Experimental Observation of Nodding Motion in Remote Communication Using ARM-COMS, in: Human Interface and the Management of Information. Interaction, Visualization, and Analytics, pp. 194-203, DOI: 10.1007/978-3-319-92043-6_17. Online publication date: 15-Jul-2018.
  • (2013) ARM-COMS, in: Proceedings of the 15th International Conference on Human Interface and the Management of Information: Information and Interaction for Learning, Culture, Collaboration and Business, Volume Part III, pp. 307-316, DOI: 10.1007/978-3-642-39226-9_34. Online publication date: 21-Jul-2013.



Published In

Web Intelligence and Agent Systems, Volume 10, Issue 3
July 2012
79 pages

Publisher

IOS Press

Netherlands

Author Tags

  1. Human Agent Interaction
  2. Human Interface
  3. Human Robot Interaction

