DOI: 10.1145/2909824.3020246

Movers, Shakers, and Those Who Stand Still: Visual Attention-grabbing Techniques in Robot Teleoperation

Published: 06 March 2017

Abstract

We designed and evaluated a series of teleoperation interface techniques that aim to draw operator attention while mitigating the negative effects of interruption. Monitoring live teleoperation video feeds, for example to search for survivors in search and rescue, can be cognitively taxing, particularly for operators driving multiple robots or monitoring multiple cameras. To reduce workload, emerging computer vision techniques can automatically identify and indicate (cue) salient points of potential interest to the operator. However, it is not clear how to cue such points to a preoccupied operator: whether cues would distract and hinder operators, and how the design of a cue may impact operator cognitive load, attention drawn, and primary-task performance. In this paper, we detail our iterative design process for creating a range of visual attention-grabbing cues grounded in the psychological literature on human attention, along with two formal evaluations that measure attention-grabbing capability and impact on operator performance. Our results show that visually cueing on-screen points of interest does not distract operators and that operators perform poorly without the cues; they also detail how particular cue design parameters impact operator cognitive load and task performance. Specifically, full-screen cues can lower cognitive load but can increase response time, while animated cues may improve accuracy but increase cognitive load. Finally, from this design process we provide tested, theoretically grounded cues for drawing attention in teleoperation.

Supplementary Material

suppl.mov (hrifp2558.wmv)
Supplemental video




    Information

    Published In

    HRI '17: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
    March 2017, 510 pages
    ISBN: 9781450343367
    DOI: 10.1145/2909824

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 06 March 2017


    Author Tags

    1. attention
    2. human-robot interaction
    3. multi-robot teleoperation

    Qualifiers

    • Research-article

    Conference

    HRI '17

    Acceptance Rates

    HRI '17 Paper Acceptance Rate: 51 of 211 submissions, 24%
    Overall Acceptance Rate: 268 of 1,124 submissions, 24%

    Bibliometrics & Citations

    Article Metrics

    • Downloads (last 12 months): 45
    • Downloads (last 6 weeks): 2
    Reflects downloads up to 13 Dec 2024

    Cited By

    • (2023) "It's not what you think: shaping beliefs about a robot to influence a teleoperator's expectations and behavior." Frontiers in Robotics and AI 10. DOI: 10.3389/frobt.2023.1271337. Online publication date: 21-Dec-2023.
    • (2023) "Hector UI: A Flexible Human-Robot User Interface for (Semi-)Autonomous Rescue and Inspection Robots." 2023 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 91-98. DOI: 10.1109/SSRR59696.2023.10499954. Online publication date: 13-Nov-2023.
    • (2022) "Configuring Humans: What Roles Humans Play in HRI Research." 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 478-492. DOI: 10.1109/HRI53351.2022.9889496. Online publication date: 7-Mar-2022.
    • (2021) "AdaSpring." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-22. DOI: 10.1145/3448125. Online publication date: 30-Mar-2021.
    • (2021) "To See or Not to See." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-25. DOI: 10.1145/3448123. Online publication date: 30-Mar-2021.
    • (2021) "One More Bite? Inferring Food Consumption Level of College Students Using Smartphone Sensing and Self-Reports." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-28. DOI: 10.1145/3448120. Online publication date: 30-Mar-2021.
    • (2021) "SoundLip." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-28. DOI: 10.1145/3448087. Online publication date: 30-Mar-2021.
    • (2021) "Ray Tracing-based Light Energy Prediction for Indoor Batteryless Sensors." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-27. DOI: 10.1145/3448086. Online publication date: 30-Mar-2021.
    • (2021) "WiFiTrace." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-26. DOI: 10.1145/3448084. Online publication date: 30-Mar-2021.
    • (2021) "Passive Health Monitoring Using Large Scale Mobility Data." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5(1), 1-23. DOI: 10.1145/3448078. Online publication date: 30-Mar-2021.
