DOI: 10.1109/IROS45743.2020.9340781
Research article

Human-Robot Interaction in a Shared Augmented Reality Workspace

Published: 24 October 2020

Abstract

We design and develop a new shared Augmented Reality (AR) workspace for Human-Robot Interaction (HRI), which establishes bi-directional communication between human agents and robots. In a prototype system, the shared AR workspace enables shared perception: a physical robot not only perceives the virtual elements in its own view but also infers the utility of the human agent (the cost needed to perceive and interact in AR) by sensing the human agent’s gaze and pose. Such a design also affords shared manipulation, wherein the physical robot can control and alter virtual objects in AR as an active agent; crucially, the robot can proactively interact with human agents instead of purely passively executing received commands. In experiments, we design a resource collection game that qualitatively demonstrates how a robot perceives, processes, and manipulates in AR, and we quantitatively evaluate the efficacy of HRI using the shared AR workspace. We further discuss how the system can benefit future HRI studies that would otherwise be challenging.
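
To make the shared-perception and proactive-manipulation ideas above concrete, the sketch below shows how a robot-side agent might turn the human's head pose and gaze (streamed from the AR headset) into a perception/interaction cost for a virtual object, and then decide whether to act on that object itself. Every name, the additive cost, and the decision margin are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the shared-perception / shared-manipulation loop
# described in the abstract. HumanState, VirtualObject, perception_cost,
# and robot_should_collect are hypothetical names, not the authors' code.
from dataclasses import dataclass
import math


@dataclass
class HumanState:
    head_position: tuple   # (x, y, z) of the AR headset in the shared frame
    gaze_direction: tuple  # unit vector of the wearer's gaze


@dataclass
class VirtualObject:
    name: str
    position: tuple        # (x, y, z) of the virtual element in the shared frame


def perception_cost(human: HumanState, obj: VirtualObject) -> float:
    """Estimate how costly it is for the human to perceive and reach the object:
    farther away and farther from the current gaze direction -> higher cost."""
    to_obj = tuple(o - h for o, h in zip(obj.position, human.head_position))
    dist = math.sqrt(sum(c * c for c in to_obj)) or 1e-6
    unit = tuple(c / dist for c in to_obj)
    alignment = sum(g * u for g, u in zip(human.gaze_direction, unit))  # cos(angle)
    return dist + (1.0 - alignment)


def robot_should_collect(human: HumanState, obj: VirtualObject,
                         robot_cost: float, margin: float = 0.5) -> bool:
    """Proactive shared manipulation: the robot acts on the virtual object
    only when its own cost is clearly lower than the inferred human cost."""
    return robot_cost + margin < perception_cost(human, obj)


# Example: the human is looking along +x, the resource sits off to the side,
# so a nearby robot (low robot_cost) would claim it proactively.
human = HumanState(head_position=(0.0, 0.0, 1.6), gaze_direction=(1.0, 0.0, 0.0))
resource = VirtualObject(name="virtual_battery", position=(0.0, 3.0, 1.0))
print(robot_should_collect(human, resource, robot_cost=1.0))  # True
```

Any monotone combination of distance and gaze misalignment would serve the same illustrative purpose; the point is only that the robot reasons about the human's cost before manipulating a shared virtual object.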

Published In

2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Oct 2020
5642 pages

Publisher

IEEE Press

Cited By

  • (2024) EMiRAs-Empathic Mixed Reality Agents. Proceedings of the 3rd Empathy-Centric Design Workshop: Scrutinizing Empathy Beyond the Individual, pp. 1-7. DOI: 10.1145/3661790.3661791. Online publication date: 11-May-2024.
  • (2023) Augmented Reality Visualization of Autonomous Mobile Robot Change Detection in Uninstrumented Environments. ACM Transactions on Human-Robot Interaction, vol. 13, no. 3, pp. 1-30. DOI: 10.1145/3611654. Online publication date: 21-Aug-2023.
  • (2022) A Taxonomy of Functional Augmented Reality for Human-Robot Interaction. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, pp. 294-303. DOI: 10.5555/3523760.3523801. Online publication date: 7-Mar-2022.
  • (2022) Interactive augmented reality storytelling guided by scene semantics. ACM Transactions on Graphics, vol. 41, no. 4, pp. 1-15. DOI: 10.1145/3528223.3530061. Online publication date: 22-Jul-2022.
  • (2022) Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1-33. DOI: 10.1145/3491102.3517719. Online publication date: 29-Apr-2022.
