
SonicASL: An Acoustic-based Sign Language Gesture Recognizer Using Earphones

Published: 24 June 2021

Abstract

We propose SonicASL, a real-time gesture recognition system that can recognize sign language gestures on the fly, leveraging front-facing microphones and speakers added to commodity earphones worn by someone facing the person making the gestures. In a user study (N=8), we evaluate the recognition performance of various sign language gestures at both the word and sentence levels. Given 42 frequently used individual words and 30 meaningful sentences, SonicASL achieves an accuracy of 93.8% for word-level and 90.6% for sentence-level recognition, respectively. The proposed system is tested in two real-world scenarios: indoor (apartment, office, and corridor) and outdoor (sidewalk) environments with pedestrians walking nearby. The results show that our system can provide users with an effective gesture recognition tool with high reliability against environmental factors such as ambient noise and nearby pedestrians.
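The abstract does not spell out SonicASL's actual signal-processing chain. As a generic illustration only, active acoustic sensing systems of this kind typically emit a near-inaudible probe tone from the speaker and featurize the microphone capture before classification. The sketch below follows that common pattern; every constant (sampling rate, chirp band, window sizes) and every function name is an assumption for illustration, not a parameter of SonicASL:

```python
import numpy as np

FS = 48_000               # assumed earphone audio sampling rate
F0, F1 = 18_000, 22_000   # near-inaudible band often used in active acoustic sensing
DUR = 0.01                # 10 ms probe chirp (illustrative)

def make_chirp(fs=FS, f0=F0, f1=F1, dur=DUR):
    """Linear chirp played by the earphone speaker as the probe signal."""
    t = np.arange(int(fs * dur)) / fs
    k = (f1 - f0) / dur
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def spectrogram_features(x, win=256, hop=128):
    """Magnitude spectrogram of the microphone capture, a common
    front-end feature for a downstream gesture classifier."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

probe = make_chirp()
# Pretend the mic captured the probe plus a weak, delayed reflection
# off a moving hand (a stand-in for a real recording).
echo = 0.2 * np.roll(probe, 40)
feat = spectrogram_features(probe + echo)
print(feat.shape)  # → (2, 129): time frames × frequency bins
```

In a full pipeline, these per-frame features would feed a sequence model that maps acoustic frames to word- or sentence-level labels; the classifier architecture used by the paper is not described in this abstract.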


Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 2
June 2021
932 pages
EISSN: 2474-9567
DOI: 10.1145/3472726
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 June 2021
Published in IMWUT Volume 5, Issue 2

Author Tags

  1. Acoustic sensing
  2. earphones
  3. sign language gesture recognition

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (Last 12 months)178
  • Downloads (Last 6 weeks)28
Reflects downloads up to 10 Dec 2024


Cited By

  • (2024) American Sign Language Recognition and Translation Using Perception Neuron Wearable Inertial Motion Capture System. Sensors 24, 2 (453). DOI: 10.3390/s24020453. Online publication date: 11-Jan-2024
  • (2024) ActSonic: Recognizing Everyday Activities from Inaudible Acoustic Wave Around the Body. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 4 (1-32). DOI: 10.1145/3699752. Online publication date: 21-Nov-2024
  • (2024) Towards Smartphone-based 3D Hand Pose Reconstruction Using Acoustic Signals. ACM Transactions on Sensor Networks 20, 5 (1-32). DOI: 10.1145/3677122. Online publication date: 26-Aug-2024
  • (2024) MunchSonic: Tracking Fine-grained Dietary Actions through Active Acoustic Sensing on Eyeglasses. Proceedings of the 2024 ACM International Symposium on Wearable Computers (96-103). DOI: 10.1145/3675095.3676619. Online publication date: 5-Oct-2024
  • (2024) EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (1-13). DOI: 10.1145/3654777.3676367. Online publication date: 13-Oct-2024
  • (2024) Exploring Uni-manual Around Ear Off-Device Gestures for Earables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 1 (1-29). DOI: 10.1145/3643513. Online publication date: 6-Mar-2024
  • (2024) MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (1-20). DOI: 10.1145/3613904.3642437. Online publication date: 11-May-2024
  • (2024) A Sign Language Recognition Framework Based on Cross-Modal Complementary Information Fusion. IEEE Transactions on Multimedia 26 (8131-8144). DOI: 10.1109/TMM.2024.3377095. Online publication date: 2024
  • (2024) Ultra Write: A Lightweight Continuous Gesture Input System with Ultrasonic Signals on COTS Devices. 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom) (174-183). DOI: 10.1109/PerCom59722.2024.10494485. Online publication date: 11-Mar-2024
  • (2024) Ultragios: Turning Mobile Devices Into Acoustic Sensors With Sensing Gesture Information. IEEE Sensors Journal 24, 19 (30584-30599). DOI: 10.1109/JSEN.2024.3443968. Online publication date: 1-Oct-2024
