
Two-dimensional Stroke Gesture Recognition: A Survey

Published: 18 July 2021

Abstract

The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, has triggered wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end-users while remaining simple enough for practitioners to implement. Since the pioneering work on two-dimensional (2D) stroke gesture recognition based on feature extraction and classification, numerous approaches and techniques have been introduced to classify uni- and multi-stroke gestures, satisfying various properties of articulation-, rotation-, scale-, and translation-invariance. Because the domain abounds in different recognizers, it is difficult for the practitioner to choose the right recognizer for a given application, and for the researcher to understand the state of the art. To address these needs, a targeted literature review identified 16 significant 2D stroke gesture recognizers, which are subjected to a descriptive analysis of their algorithm, performance, and properties, and to a comparative analysis of their similarities and differences. Finally, opportunities for expanding 2D stroke gesture recognition are drawn from these analyses.
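To make the template-matching family of recognizers concrete, the following is a minimal, illustrative sketch of the idea behind $-family recognizers: resample a stroke to a fixed number of equidistant points, normalize it for scale- and translation-invariance, and classify by the smallest mean point-to-point Euclidean distance to stored templates. This is not the algorithm of any specific published recognizer (rotation-invariance, for instance, is omitted); all function names and the 64-point resolution are assumptions for illustration.

```python
import math

def resample(points, n=64):
    """Resample a stroke to n equidistant points, a common first step
    in template-based matchers."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    path = sum(dist(points[i - 1], points[i]) for i in range(1, len(points)))
    step = path / (n - 1)
    pts = list(points)
    new_pts, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_pts.append(q)
            pts.insert(i, q)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(new_pts) < n:    # guard against accumulated float error
        new_pts.append(points[-1])
    return new_pts[:n]

def normalize(points):
    """Scale to a unit bounding box and move the centroid to the origin,
    giving scale- and translation-invariance."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    scaled = [(x / w, y / h) for x, y in points]
    cx = sum(p[0] for p in scaled) / len(scaled)
    cy = sum(p[1] for p in scaled) / len(scaled)
    return [(x - cx, y - cy) for x, y in scaled]

def recognize(candidate, templates):
    """Return the label of the template with the smallest mean
    point-to-point Euclidean distance to the candidate stroke."""
    c = normalize(resample(candidate))
    best, best_d = None, float("inf")
    for label, tmpl in templates.items():
        t = normalize(resample(tmpl))
        d = sum(math.hypot(a[0] - b[0], a[1] - b[1])
                for a, b in zip(c, t)) / len(c)
        if d < best_d:
            best, best_d = label, d
    return best
```

Because normalization removes scale and position, a small horizontal swipe and a large one match the same "line" template; this simplicity and the absence of a training phase are what made this family popular with user-interface prototypers.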




Published In

ACM Computing Surveys, Volume 54, Issue 7
September 2022, 778 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3476825

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 18 July 2021
Accepted: 01 May 2021
Revised: 01 March 2021
Received: 01 July 2020
Published in CSUR Volume 54, Issue 7


Author Tags

  1. gesture-based interfaces
  2. gesture recognition
  3. stroke gestures
  4. touch gestures

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2024) Research on the application of virtual reality technology in the teaching of art sketching. Applied Mathematics and Nonlinear Sciences 9(1). DOI: 10.2478/amns-2024-0140. Online publication date: 31-Jan-2024.
  • (2024) Engineering Touchscreen Input for 3-Way Displays: Taxonomy, Datasets, and Classification. Companion Proceedings of the 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 57–65. DOI: 10.1145/3660515.3661331. Online publication date: 24-Jun-2024.
  • (2024) Take a Seat, Make a Gesture: Charting User Preferences for On-Chair and From-Chair Gesture Input. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. DOI: 10.1145/3613904.3642028. Online publication date: 11-May-2024.
  • (2024) Hands-On Robotics: Enabling Communication Through Direct Gesture Control. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 822–827. DOI: 10.1145/3610978.3640635. Online publication date: 11-Mar-2024.
  • (2024) Online Handwritten Chinese Character Recognition Based on 1-D Convolution and Two-Streams Transformers. IEEE Transactions on Multimedia 26, 5769–5781. DOI: 10.1109/TMM.2023.3339589. Online publication date: 2024.
  • (2024) Wear + Touch: An Exploration of Wearables for Vibrotactile Feedback During Touchscreen Input. International Journal of Human–Computer Interaction, 1–19. DOI: 10.1080/10447318.2024.2372145. Online publication date: 4-Jul-2024.
  • (2024) A geometric algebra-based approach for myoelectric pattern recognition control and faster prosthesis recalibration. Expert Systems with Applications 254(C). DOI: 10.1016/j.eswa.2024.124373. Online publication date: 18-Oct-2024.
  • (2024) Dataset of inertial measurements for writing Punjabi characters using IMU sensors. Data in Brief 57, 111083. DOI: 10.1016/j.dib.2024.111083. Online publication date: Dec-2024.
  • (2024) A motion conditioned diffusion model for real-time hand trajectory semantic prediction. Journal of Real-Time Image Processing 22(1). DOI: 10.1007/s11554-024-01591-w. Online publication date: 28-Nov-2024.
  • (2023) FORTE: Few Samples for Recognizing Hand Gestures with a Smartphone-attached Radar. Proceedings of the ACM on Human-Computer Interaction 7(EICS), 1–25. DOI: 10.1145/3593231. Online publication date: 19-Jun-2023.
