PalmSpace: Leveraging the palm for touchless interaction on public touch screen devices

Published: 17 April 2024

Abstract

Touchscreens are the primary means of interacting with public devices such as Automated Teller Machines (ATMs). However, the touch modality raises health concerns: users must touch shared screens, which risks spreading contagious diseases. We design PalmSpace, an alternative input technique that leverages users’ palms to interact with public devices. With PalmSpace, UI elements are mapped onto the user’s palm and can be accessed by touching the corresponding locations directly on the palm. We conduct a series of user studies to evaluate several design options, such as interface layout, item size, preferred item location, and suitable feedback for items. Based on the results, we design PalmSpace and compare its performance with mid-air input. We show that PalmSpace is a viable way to interact with public devices without touching their screens. We conclude with design guidelines for using the palm as an alternative input space for touchscreen devices.
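The palm-to-UI mapping described in the abstract can be sketched in code. The sketch below is an illustrative assumption, not the authors’ implementation: it divides the tracked palm’s bounding box (e.g. from a hand tracker such as MediaPipe Hands, which reports landmarks in normalized image coordinates) into a grid of UI items and resolves a fingertip position to the item it touches. All names, the grid size, and the coordinate convention are hypothetical.

```python
# Minimal sketch of resolving a touch on the palm to a UI item.
# PalmRegion, item_at, and the 2x2 grid are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PalmRegion:
    """Axis-aligned bounding box of the palm in normalized image coordinates."""
    x: float       # left edge
    y: float       # top edge
    width: float
    height: float


def item_at(palm: PalmRegion, touch_x: float, touch_y: float,
            rows: int = 2, cols: int = 2) -> Optional[int]:
    """Return the row-major index of the UI item under the touch point,
    or None if the touch falls outside the palm region."""
    u = (touch_x - palm.x) / palm.width    # 0..1 across the palm
    v = (touch_y - palm.y) / palm.height   # 0..1 down the palm
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return None
    col = int(u * cols)
    row = int(v * rows)
    return row * cols + col


if __name__ == "__main__":
    palm = PalmRegion(x=0.4, y=0.3, width=0.2, height=0.3)
    print(item_at(palm, 0.45, 0.35))   # upper-left quadrant -> 0
    print(item_at(palm, 0.55, 0.55))   # lower-right quadrant -> 3
    print(item_at(palm, 0.1, 0.1))     # off the palm -> None
```

In a full system, the palm bounding box and the touching fingertip would both come from per-frame hand-tracking landmarks, and a dwell or contact event would trigger the selected item.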


Cited By

  • (2024) Defining a mid-air gesture dictionary for web-based interaction. In Proceedings of the 2024 International Conference on Advanced Visual Interfaces, pp. 1–5. https://doi.org/10.1145/3656650.3656661. Online publication date: 3 June 2024.


Published In

International Journal of Human-Computer Studies, Volume 184, Issue C
Apr 2024
184 pages

Publisher

Academic Press, Inc.

United States

Author Tags

  1. Hand palm
  2. Mid-air
  3. On-body input
  4. Public touch display
  5. Touchless interaction

Qualifiers

  • Research-article
