DOI: 10.1145/3660515.3661331
Short Paper

Engineering Touchscreen Input for 3-Way Displays: Taxonomy, Datasets, and Classification

Published: 24 June 2024

Abstract

In the family of personal multi-display devices and environments, 3-way displays conveniently integrate into the conventional form factors of laptops and tablets, featuring both a central display area and two symmetrically expandable lateral sides. However, despite a large body of knowledge on touch input for single-display devices, little is known about users’ gesture preferences for 3-way displays. We propose a cross-display gesture taxonomy for future explorations of gesture input for multi-display devices, in which we position 3-way displays. Following a requirements-elicitation approach, we report results from two gesture elicitation studies with a total of 48 participants, where a 3-way display was used as a remote control panel for a smart home environment (study #1) and as a touchscreen interface for content manipulation performed both within and across displays (study #2). Based on these findings, we offer two consensus datasets of 3-way-display gestures that are consolidated into a larger classification of stroke-gesture input for 3-way displays.
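For readers unfamiliar with how consensus gesture sets are typically derived in elicitation studies such as the two summarized above, the sketch below (not taken from the paper) illustrates the agreement-rate computation commonly used in this line of work, following Vatavu and Wobbrock's formulation; the referent and gesture labels are hypothetical.

    # Illustrative sketch only, not from the paper: the agreement rate AR(r)
    # commonly used in gesture elicitation studies to identify consensus
    # gestures among participants' proposals for a referent.
    from collections import Counter

    def agreement_rate(proposals):
        # proposals: gesture labels proposed by all participants for one referent
        n = len(proposals)  # |P|: total number of proposals
        if n < 2:
            return 0.0
        groups = Counter(proposals)  # groups of identical proposals, P_i
        # AR(r) = sum_i |P_i|*(|P_i|-1) / (|P|*(|P|-1))
        return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

    # Hypothetical referent "move item to the left display" with 6 proposals:
    print(agreement_rate(["swipe left"] * 4 + ["drag left", "tap left edge"]))  # 0.4

Under this formula, a referent for which all participants propose the same gesture yields an agreement rate of 1, and complete disagreement yields 0.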

Supplemental Material

PDF file: Appendix containing the classification



Published In

EICS '24 Companion: Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems
June 2024, 129 pages
ISBN: 9798400706516
DOI: 10.1145/3660515
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Gesture-based User Interfaces
  2. Multi-display devices
  3. New datasets
  4. Requirements elicitation
  5. Requirements engineering
  6. Stroke gestures
  7. Three-way displays
  8. Touch input

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

EICS '24

Acceptance Rates

Overall Acceptance Rate 73 of 299 submissions, 24%

