Deaf and Hard of Hearing People’s Perspectives on Augmented Reality Interfaces for Improving the Accessibility of Smart Speakers

  • Conference paper
Universal Access in Human-Computer Interaction (HCII 2024)

Abstract

The continued evolution of voice recognition technology has led to its integration into many smart devices as the primary mode of user interaction. Smart speakers are among the most popular smart devices that use voice recognition to provide interactive features, serving as personal assistants and control hubs for smart homes. However, because smart speakers rely primarily on voice recognition, they are often inaccessible to Deaf and hard of hearing (DHH) individuals. While smart speakers such as the Amazon Echo Show have a built-in screen that provides visual interaction for DHH users through features such as “Tap to Alexa,” these devices still require users to be positioned next to them. Such features therefore improve the accessibility of smart speakers for DHH users, but they are not functionally comparable solutions, as they deny DHH users the same freedom hearing users have to interact with the device from across the room or while performing another hands-on activity. To bridge this gap, we explore alternative approaches such as augmented reality (AR) wearables and various projection systems. We conducted a mixed-methods study involving surveys and Wizard of Oz evaluations to investigate the proposed research objectives. The study’s findings provide deeper insight into the potential of AR projection interfaces as novel interaction methods for improving the accessibility of smart speakers for DHH people.
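
The abstract describes the Wizard of Oz evaluations only at a high level. As a purely illustrative aid, the sketch below shows one way such a prototype might be wired up: a human “wizard” types the smart speaker’s simulated responses, which are mirrored to visual output endpoints standing in for an AR wearable or a projection surface. This is a minimal sketch under stated assumptions, not the authors’ implementation; the names VisualEndpoint, ConsoleProjection, and wizard_of_oz_loop are hypothetical and do not come from the paper.

```python
"""Minimal Wizard-of-Oz sketch (illustrative only, not the study software):
a human wizard types the smart speaker's reply, and the prototype mirrors it
to visual endpoints that stand in for an AR wearable or a projection surface,
so a DHH participant could read the response from across the room."""

from dataclasses import dataclass
from datetime import datetime
from typing import Protocol


class VisualEndpoint(Protocol):
    """Anything that can render assistant output visually (hypothetical interface)."""
    def show(self, text: str) -> None: ...


@dataclass
class ConsoleProjection:
    """Stand-in display: prints where an AR headset or wearable projector
    would render the caption in a real prototype."""
    label: str

    def show(self, text: str) -> None:
        stamp = datetime.now().strftime("%H:%M:%S")
        print(f"[{stamp}] ({self.label}) {text}")


def wizard_of_oz_loop(endpoints: list[VisualEndpoint]) -> None:
    """The wizard types each simulated assistant response; it is mirrored
    to every visual endpoint under evaluation."""
    print("Type assistant responses; empty line to stop.")
    while True:
        reply = input("wizard> ").strip()
        if not reply:
            break
        for endpoint in endpoints:
            endpoint.show(reply)


if __name__ == "__main__":
    wizard_of_oz_loop([
        ConsoleProjection("AR glasses caption"),
        ConsoleProjection("wrist-worn projection"),
    ])
```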

Acknowledgments

The authors have no competing interests to declare that are relevant to the content of this article.

Author information


Corresponding author

Correspondence to Roshan Mathew.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mathew, R., Tigwell, G.W., Peiris, R.L. (2024). Deaf and Hard of Hearing People’s Perspectives on Augmented Reality Interfaces for Improving the Accessibility of Smart Speakers. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. HCII 2024. Lecture Notes in Computer Science, vol 14697. Springer, Cham. https://doi.org/10.1007/978-3-031-60881-0_21

  • DOI: https://doi.org/10.1007/978-3-031-60881-0_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60880-3

  • Online ISBN: 978-3-031-60881-0

  • eBook Packages: Computer Science, Computer Science (R0)
