DOI: 10.1145/3586183.3606736 · UIST Conference Proceedings · Research article

GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR

Published: 29 October 2023

Editorial Notes

The authors have requested minor, non-substantive changes to the Version of Record and, in accordance with ACM policies, a Corrected Version of Record was published on November 2, 2023. For reference purposes, the VoR may still be accessed via the Supplemental Material section on this page.

Abstract

As the use of hand gestures becomes increasingly prevalent in virtual reality (VR) applications, prototyping Compound Freehand Interactions (CFIs) effectively and efficiently has become a critical need in the design process. A Compound Freehand Interaction (CFI) is a sequence of freehand interactions in which each sub-interaction conditions the next. Despite the need for interactive CFI prototypes in the early design stage, creating them remains effortful and challenging for designers, since it requires a highly technical workflow that involves programming the recognizers, system responses, and conditionals for each sub-interaction. To bridge this gap, we present GestureCanvas, a freehand-interaction-based immersive prototyping system that enables a rapid, end-to-end, and code-free workflow for designing, testing, refining, and subsequently deploying CFIs by leveraging three pillars of interaction models: an event-driven state machine, trigger-action authoring, and programming by demonstration. The design of GestureCanvas includes three novel design elements: (i) appropriating the multimodal recording of freehand interaction into a CFI authoring workspace called the Design Canvas, (ii) semi-automatic identification of input trigger logic from demonstration to reduce the manual effort of setting up triggers for each sub-interaction, and (iii) on-the-fly testing for independently validating input conditionals in situ. We validate the workflow enabled by GestureCanvas through an interview study with professional designers and evaluate its usability through a user study with non-experts. Our work lays the foundation for advancing research on immersive prototyping systems, allowing even highly complex gestures to be easily prototyped and tested within VR environments.
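The three pillars named in the abstract can be combined in a compact way. The sketch below is purely illustrative, not GestureCanvas's actual implementation: all names (`Frame`, `State`, `CFIMachine`, `threshold_from_demo`) and the gesture labels are hypothetical. It models a CFI as an event-driven state machine whose trigger-action rules only fire when their conditional holds, and derives one trigger conditional from demonstrated samples in a programming-by-demonstration style.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Frame:
    """One hand-tracking sample: a recognized gesture label plus a confidence score."""
    gesture: str
    confidence: float


@dataclass
class State:
    """A sub-interaction; rules map gesture -> (min_confidence, action, next_state)."""
    name: str
    rules: Dict[str, Tuple[float, str, str]] = field(default_factory=dict)


class CFIMachine:
    """Event-driven state machine: each sub-interaction conditions the next one."""

    def __init__(self, states: List[State], start: str):
        self.states = {s.name: s for s in states}
        self.current = start
        self.actions: List[str] = []

    def feed(self, frame: Frame) -> str:
        """Trigger-action step: fire the rule only if its input conditional holds."""
        rule = self.states[self.current].rules.get(frame.gesture)
        if rule is not None and frame.confidence >= rule[0]:
            _, action, next_state = rule
            self.actions.append(action)
            self.current = next_state
        return self.current


def threshold_from_demo(demo_confidences: List[float], margin: float = 0.1) -> float:
    """PbD flavor: derive a trigger conditional from recorded demonstrations
    (the weakest demonstrated sample, minus a safety margin) instead of
    hand-coding it."""
    return max(0.0, min(demo_confidences) - margin)


# A two-step "grab then release" CFI: the release trigger is only live
# after the grab has fired, because it belongs to the "holding" state.
grab_threshold = threshold_from_demo([0.82, 0.90, 0.77])  # ~0.67
machine = CFIMachine(
    [
        State("idle", {"pinch": (grab_threshold, "attach_object", "holding")}),
        State("holding", {"open_palm": (grab_threshold, "release_object", "idle")}),
    ],
    start="idle",
)
machine.feed(Frame("pinch", 0.50))      # below threshold: stays in "idle"
machine.feed(Frame("pinch", 0.80))      # conditional holds: enters "holding"
machine.feed(Frame("open_palm", 0.90))  # release fires: back to "idle"
```

The key property of a CFI shows up directly in the structure: an `open_palm` frame received while in `idle` matches no rule and is ignored, so each sub-interaction's triggers are only active when its predecessor has completed.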

Supplemental Material

PDF File
Version of Record for "GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR" by Sayara et al., Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23).
ZIP File
Supplemental File


Cited By

  • (2024) ConnectVR: A Trigger-Action Interface for Creating Agent-based Interactive VR Stories. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 286–297. DOI: 10.1109/VR58804.2024.00051. Online publication date: 16 March 2024.

Published In

UIST '23: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology
October 2023, 1825 pages
ISBN: 9798400701320
DOI: 10.1145/3586183
Publisher

Association for Computing Machinery, New York, NY, United States

            Author Tags

            1. compound freehand interaction
            2. programming by demonstration
            3. prototyping
            4. virtual reality

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

            Conference

            UIST '23

            Acceptance Rates

Overall acceptance rate: 561 of 2,567 submissions (22%)


Article Metrics

  • Downloads (last 12 months): 419
  • Downloads (last 6 weeks): 22

Reflects downloads up to 13 January 2025.
