3rd HRI 2008: Amsterdam, The Netherlands
- Terry Fong, Kerstin Dautenhahn, Matthias Scheutz, Yiannis Demiris:
Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction, HRI 2008, Amsterdam, The Netherlands, March 12-15, 2008. ACM 2008, ISBN 978-1-60558-017-3
Technical papers
- Guy Hoffman, Cynthia Breazeal:
Achieving fluency through perceptual-symbol practice in human-robot collaboration. 1-8
- Jijun Wang, Michael Lewis:
Assessing cooperation in human control of heterogeneous robots. 9-16
- Ben Robins, Kerstin Dautenhahn, Rene te Boekhorst, Chrystopher L. Nehaniv:
Behaviour delay and robot expressiveness in child-robot interactions: a user study on interaction kinesics. 17-24
- Leila Takayama, Wendy Ju, Clifford Nass:
Beyond dirty, dangerous and dull: what everyday people think robots should do. 25-32
- Elena Gribovskaya, Aude Billard:
Combining dynamical systems control and programming by demonstration for teaching discrete bimanual coordination tasks to a humanoid robot. 33-40
- Xavier Perrin, Ricardo Chavarriaga, Céline Ray, Roland Siegwart, José del R. Millán:
A comparative psychophysical and EEG study of different feedback modalities for HRI. 41-48
- Curtis M. Humphrey, Julie A. Adams:
Compass visualizations for human-robotic interaction. 49-56
- Daniel T. Levin, Stephen S. Killingsworth, Megan M. Saylor:
Concepts about the capabilities of computers and robots: a test of the scope of adults' theory of mind. 57-64
- Takashi Minato, Hiroshi Ishiguro:
Construction and evaluation of a model of natural human motion based on motion diversity. 65-72
- Robin R. Murphy, Kevin S. Pratt, Jennifer L. Burke:
Crew roles and operational protocols for rotary-wing micro-UAVs in close urban environments. 73-80
- Henrik Jacobsson, Nick Hawes, Geert-Jan M. Kruijff, Jeremy L. Wyatt:
Crossmodal content binding in information-processing architectures. 81-88
- Tobias Kaupp, Alexei Makarenko:
Decision-theoretic human-robot communication. 89-96
- Peter H. Kahn Jr., Nathan G. Freier, Takayuki Kanda, Hiroshi Ishiguro, Jolina H. Ruckert, Rachel L. Severson, Shaun K. Kane:
Design patterns for sociality in human-robot interaction. 97-104
- Katherine M. Tsui, Holly A. Yanco, David Kontak, Linda Beliveau:
Development and evaluation of a flexible interface for a wheelchair mounted robotic arm. 105-112
- Marcel Heerink, Ben J. A. Kröse, Bob J. Wielinga, Vanessa Evers:
Enjoyment, intention to use and actual use of a conversational robot by elderly people. 113-120
- Ronald C. Arkin:
Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture. 121-128
- Ja-Young Sung, Rebecca E. Grinter, Henrik I. Christensen, Lan Guo:
Housewives or technophiles?: understanding domestic robot owners. 129-136
- Fumitaka Yamaoka, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita:
How close?: model of proximity control for information-presenting robots. 137-144
- Susan R. Fussell, Sara B. Kiesler, Leslie D. Setlock, Victoria Yew:
How people anthropomorphize robots. 145-152
- Toshiyuki Shiwa, Takayuki Kanda, Michita Imai, Hiroshi Ishiguro, Norihiro Hagita:
How quickly should communication robots respond? 153-160
- David J. Bruemmer, Curtis W. Nielsen, David I. Gertman:
How training and experience affect the benefits of autonomy in a dirty-bomb experiment. 161-168
- Chin-Chang Ho, Karl F. MacDorman, Z. A. D. Dwi Pramono:
Human emotion and the uncanny valley: a GLM, MDS, and Isomap analysis of robot video ratings. 169-176
- Nuno Otero, Aris Alissandrakis, Kerstin Dautenhahn, Chrystopher L. Nehaniv, Dag Sverre Syrdal, Kheng Lee Koay:
Human to robot demonstrations of routine home tasks: exploring the role of the robot's feedback. 177-184
- Martijn Liem, Arnoud Visser, Frans C. A. Groen:
A hybrid algorithm for tracking and following people using a robotic dog. 185-192
- Juan Antonio Corrales, Francisco A. Candelas Herías, Fernando Torres Medina:
Hybrid tracking of human operators using IMU/UWB data fusion by a Kalman filter. 193-200
- J. Gregory Trafton, Magdalena D. Bugajska, Benjamin R. Fransen, Raj M. Ratwani:
Integrating vision and audition within a cognitive architecture to track conversations. 201-208
- Rémi Barraquand, James L. Crowley:
Learning polite behavior with situation models. 209-216
- Satoshi Kagami, Yoko Sasaki, Simon Thompson, Tomoaki Fujihara, Tadashi Enomoto, Hiroshi Mizoguchi:
Loudness measurement of human utterance to a robot in noisy environment. 217-224
- Sonia Chernova, Manuela M. Veloso:
Multi-thresholded approach to demonstration selection for interactive robot learning. 225-232
- Kai-yuh Hsiao, Soroush Vosoughi, Stefanie Tellex, Rony Kubat, Deb Roy:
Object schemas for responsive robotic language use. 233-240
- Charles C. Kemp, Cressel D. Anderson, Hai Nguyen, Alexander J. Trevor, Zhe Xu:
A point-and-click interface for the real world: laser designation of objects for mobile manipulation. 241-248
- Sven R. Schmidt-Rohr, Steffen Knoop, Martin Lösch, Rüdiger Dillmann:
Reasoning for a multi-modal service robot considering uncertainty in human-robot interaction. 249-254
- Vanessa Evers, Heidy C. Maldonado, Talia L. Brodecki, Pamela J. Hinds:
Relational vs. group self-construal: untangling the role of national culture in HRI. 255-262
- Paul W. Schermerhorn, Matthias Scheutz, Charles R. Crowell:
Robot social presence and gender: do females view robots differently than males? 263-270
- Cady M. Stanton, Peter H. Kahn Jr., Rachel L. Severson, Jolina H. Ruckert, Brian T. Gill:
Robotic animals might aid in the social development of children with autism. 271-278
- Jessie Y. C. Chen, Michael J. Barnes:
Robotics operator performance in a military multi-tasking environment. 279-286
- Bilge Mutlu, Jodi Forlizzi:
Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction. 287-294
- Mary Ellen Foster, Ellen Gurman Bard, Markus Guhe, Robin L. Hill, Jon Oberlander, Alois C. Knoll:
The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue. 295-302
- Masahiro Shiomi, Daisuke Sakamoto, Takayuki Kanda, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita:
A semi-autonomous communication robot: a field trial at a train station. 303-310
- Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita:
Simultaneous teleoperation of multiple social robots. 311-318
- Yuichiro Yoshikawa, Shunsuke Yamamoto, Hidenobu Sumioka, Hiroshi Ishiguro, Minoru Asada:
Spiral response-cascade hypothesis: intrapersonal responding-cascade in gaze interaction. 319-326
- Emrah Akin Sisbot, Aurélie Clodic, Rachid Alami, Maxime Ransan:
Supervision and motion planning for a mobile manipulator interacting with humans. 327-334
- Frank Hegel, Soeren Krach, Tilo Kircher, Britta Wrede, Gerhard Sagerer:
Theory of mind (ToM) on robots: a functional neuroimaging study. 335-342
- Paul Lapides, Ehud Sharlin, Mario Costa Sousa:
Three dimensional tangible user interface for controlling a robotic team. 343-350
- Joseph L. Cooper, Michael A. Goodrich:
Towards combining UAV and sensor operator roles in UAV-enabled visual search. 351-358
- Pierre Boudoin, Christophe Domingues, Samir Otmane, Nassima Ouramdane-Djerrah, Malik Mallem:
Towards multimodal human-robot interaction in large scale virtual environment. 359-366
- Richard Kelley, Alireza Tavakkoli, Christopher King, Monica N. Nicolescu, Mircea Nicolescu, George Bebis:
Understanding human intentions via hidden Markov models in autonomous mobile robots. 367-374
- Kristen Stubbs, David Wettergreen, Illah R. Nourbakhsh:
Using a robot proxy to create common ground in exploration tasks. 375-382
Videos
- Christoph Bartneck:
HRI caught on film 2. 383-388
Invited keynote talks
- Harold Bekkering, Estela Bicho, Ruud G. J. Meulenbroek, Wolfram Erlhagen:
Joint action in man and autonomous systems. 389-390
- Raja Chatila:
Toward cognitive robot companions. 391-392
- Herbert H. Clark:
Talking as if. 393-394