DOI: 10.1145/3472749.3474773
Research article

Situated Live Programming for Human-Robot Collaboration

Published: 12 October 2021

Abstract

We present situated live programming for human-robot collaboration, an approach that enables users with limited programming experience to program collaborative applications for human-robot interaction. Allowing end users, such as shop-floor workers, to program collaborative robots themselves would make it easy to “retask” robots from one process to another, facilitating their adoption by small and medium enterprises. Our approach builds on the paradigm of trigger-action programming (TAP), allowing end users to create rich interactions through simple trigger-action pairings. It enables end users to iteratively create, edit, and refine a reactive robot program while executing partial programs. This live programming approach lets the user draw on the task space and its objects by incrementally specifying situated trigger-action pairs, substantially lowering the barrier to entry for programming or reprogramming robots for collaboration. We instantiate situated live programming in an authoring system in which users create trigger-action programs by annotating an augmented video feed from the robot’s perspective and assigning robot actions to trigger conditions. We evaluated this system in a study in which participants (n = 10) developed robot programs for solving collaborative light-manufacturing tasks. Results showed that users with little programming experience were able to program HRC tasks interactively, and that our situated live programming approach further supported individualized strategies and workflows. We conclude by discussing the opportunities and limitations of the proposed approach, our system implementation, and our study, and we outline a roadmap for expanding this approach to a broader range of tasks and applications.
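The trigger-action model described above can be sketched in a few lines of code. This is a minimal illustration only: the paper's system authors programs by annotating a video feed, not by writing code, and every name below (`Rule`, `step`, the scene keys) is hypothetical rather than the system's actual API.

```python
# Minimal sketch of the trigger-action programming (TAP) model: a program is
# a set of rules, each pairing a trigger (a condition over the observed
# workspace) with a robot action to run when the trigger holds.
from dataclasses import dataclass
from typing import Callable

Scene = dict  # observed workspace state, e.g. {"red_block_in_zone_A": True}

@dataclass
class Rule:
    trigger: Callable[[Scene], bool]  # condition over the current scene
    action: Callable[[], str]         # robot behavior fired by the trigger

def step(rules: list[Rule], scene: Scene) -> list[str]:
    """Evaluate every rule against the current scene; return fired actions."""
    return [r.action() for r in rules if r.trigger(scene)]

# Example of the kind of situated pairing an end user might author:
# "when a red block appears in zone A and no human hand is nearby, pick it up."
rules = [
    Rule(
        trigger=lambda s: bool(s.get("red_block_in_zone_A"))
        and not s.get("human_hand_near"),
        action=lambda: "pick(red_block); place(bin)",
    )
]

print(step(rules, {"red_block_in_zone_A": True, "human_hand_near": False}))
# -> ['pick(red_block); place(bin)']
```

Because partial rule sets are still executable, a user can add one rule, watch the robot react, and refine, which is the "live" part of the approach.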

Supplementary Material

  • Talk video and captions: p613-talk.mp4, p613-talk.vtt
  • Video figure and captions: p613-video_figure.mp4, p613-video_figure.vtt
  • Video preview and captions: p613-video_preview.mp4, p613-video_preview.vtt





Published In

UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology
October 2021
1357 pages
ISBN:9781450386357
DOI:10.1145/3472749
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. end-user programming
  2. human-robot collaboration
  3. human-robot interaction
  4. trigger-action programming

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • NASA University Leadership Initiative (ULI)

Conference

UIST '21

Acceptance Rates

Overall Acceptance Rate 561 of 2,567 submissions, 22%



Article Metrics

  • Downloads (last 12 months): 130
  • Downloads (last 6 weeks): 4

Reflects downloads up to 20 Jan 2025

Cited By

  • (2024) ChatIoT: Zero-code Generation of Trigger-action Based IoT Programs. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1–29. DOI: 10.1145/3678585
  • (2024) End-User Development for Human-Robot Interaction: Results and Trends in an Emerging Field. Proceedings of the ACM on Human-Computer Interaction 8(EICS), 1–40. DOI: 10.1145/3661146
  • (2024) PRogramAR: Augmented Reality End-User Robot Programming. ACM Transactions on Human-Robot Interaction 13(1), 1–20. DOI: 10.1145/3640008
  • (2024) A System for Human-Robot Teaming through End-User Programming and Shared Autonomy. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 231–239. DOI: 10.1145/3610977.3634965
  • (2024) UNFOLD: Enabling Live Programming for Debugging GUI Applications. 2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 306–316. DOI: 10.1109/VL/HCC60511.2024.00041
  • (2024) Enriching Process Models with Relevant Process Details for Flexible Human-Robot Teaming. Collaborative Computing: Networking, Applications and Worksharing, 249–269. DOI: 10.1007/978-3-031-54531-3_14
  • (2023) Understanding In-Situ Programming for Smart Home Automation. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(2), 1–31. DOI: 10.1145/3596254
  • (2023) Guidelines for a Human-Robot Interaction Specification Language. 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1–8. DOI: 10.1109/RO-MAN57019.2023.10309563
  • (2023) Visual Programming of Robot Tasks with Product and Process Variety. Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2022, 241–252. DOI: 10.1007/978-3-031-10071-0_20
  • (2022) Mimic: In-Situ Recording and Re-Use of Demonstrations to Support Robot Teleoperation. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1–13. DOI: 10.1145/3526113.3545639
