Abstract
The continued evolution of voice recognition technology has led to its integration into many smart devices as the primary mode of user interaction. Smart speakers are among the most popular of these devices, using voice recognition to offer interactive features that let them serve as a personal assistant and a control hub for smart homes. However, because smart speakers rely primarily on voice recognition, they are often inaccessible to Deaf and hard of hearing (DHH) individuals. While smart speakers such as the Amazon Echo Show have a built-in screen that provides visual interaction for DHH users through features such as “Tap to Alexa,” these devices still require users to be positioned next to them. Thus, although features such as “Tap to Alexa” improve the accessibility of smart speakers for DHH users, they are not functionally comparable solutions: they deny DHH users the same freedom hearing users have to interact with a smart speaker from across the room or while performing another hands-on activity. To bridge this gap, we explore alternative approaches such as augmented reality (AR) wearables and various projection systems. We conducted a mixed-methods study involving surveys and Wizard of Oz evaluations to investigate these approaches. The study’s findings provide deeper insight into the potential of AR and projection interfaces as novel interaction methods for improving the accessibility of smart speakers for DHH people.
Declarations
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Mathew, R., Tigwell, G.W., Peiris, R.L. (2024). Deaf and Hard of Hearing People’s Perspectives on Augmented Reality Interfaces for Improving the Accessibility of Smart Speakers. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. HCII 2024. Lecture Notes in Computer Science, vol 14697. Springer, Cham. https://doi.org/10.1007/978-3-031-60881-0_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-60880-3
Online ISBN: 978-3-031-60881-0
eBook Packages: Computer Science (R0)