DOI: 10.1145/3234695.3236339
Research Article · Open Access

Investigating Cursor-based Interactions to Support Non-Visual Exploration in the Real World

Published: 08 October 2018

Abstract

The human visual system processes complex scenes to focus attention on relevant items. However, blind people cannot visually skim for an area of interest. Instead, they use a combination of contextual information, knowledge of the spatial layout of their environment, and interactive scanning to find and attend to specific items. In this paper, we define and compare three cursor-based interactions to help blind people attend to items in a complex visual scene: window cursor (move their phone to scan), finger cursor (point their finger to read), and touch cursor (drag their finger on the touchscreen to explore). We conducted a user study with 12 participants to evaluate the three techniques on four tasks, and found that: window cursor worked well for locating objects on large surfaces, finger cursor worked well for accessing control panels, and touch cursor worked well for helping users understand spatial layouts. A combination of multiple techniques will likely be best for supporting a variety of everyday tasks for blind users.
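
The three cursors differ only in where the point of interest comes from; once a point is chosen, the same detect-and-speak pipeline can serve all of them. The sketch below illustrates that shared structure in Python. It is a minimal illustration, not the authors' implementation: the Item type, the string mode names, and the speak callback are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's system) of how the three cursor
# modes can share one "detect, select, speak" pipeline.
# All names here (Item, cursor_point, speak, ...) are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Item:
    label: str                      # e.g. "power button", from an object/OCR detector
    box: Tuple[int, int, int, int]  # (x0, y0, x1, y1) in camera-frame pixels

def item_at(items: List[Item], x: int, y: int) -> Optional[Item]:
    """Return the detected item whose bounding box contains (x, y), if any."""
    for item in items:
        x0, y0, x1, y1 = item.box
        if x0 <= x <= x1 and y0 <= y <= y1:
            return item
    return None

def cursor_point(mode: str,
                 frame_size: Tuple[int, int],
                 fingertip: Optional[Tuple[int, int]] = None,
                 touch: Optional[Tuple[int, int]] = None) -> Optional[Tuple[int, int]]:
    """Each mode answers the same question differently: which pixel is the cursor?"""
    w, h = frame_size
    if mode == "window":  # aim the phone; the frame centre acts as the cursor
        return (w // 2, h // 2)
    if mode == "finger":  # point in the scene; the tracked fingertip is the cursor
        return fingertip
    if mode == "touch":   # drag on screen; the touch point, mapped to a frozen frame, is the cursor
        return touch
    return None

def announce(items: List[Item], point: Optional[Tuple[int, int]], speak) -> None:
    """Speak the label under the cursor; stay silent when nothing is hit."""
    if point is not None:
        hit = item_at(items, *point)
        if hit is not None:
            speak(hit.label)

if __name__ == "__main__":
    items = [Item("power button", (40, 40, 80, 80))]
    # Window cursor: the centre (60, 60) of a 120x120 frame lands on the button.
    announce(items, cursor_point("window", (120, 120)), speak=print)
```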

Supplementary Material

Supplemental video: suppl.mov (fp013.mp4)




      Published In

      ASSETS '18: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility
      October 2018
      508 pages
      ISBN: 9781450356503
      DOI: 10.1145/3234695
      This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. accessibility
      2. blind
      3. computer vision
      4. cursor
      5. interaction
      6. mobile devices
      7. non-visual exploration
      8. visually impaired


      Conference

      ASSETS '18

      Acceptance Rates

      ASSETS '18 paper acceptance rate: 28 of 108 submissions, 26%
      Overall acceptance rate: 436 of 1,556 submissions, 28%


      Cited By

      • (2024) SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-19. DOI: 10.1145/3654777.3676384
      • (2023) WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions. Proceedings of the ACM on Human-Computer Interaction 7 (ISS), 357-375. DOI: 10.1145/3626478
      • (2023) Leveraging Hand-Object Interactions in Assistive Egocentric Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(6), 6820-6831. DOI: 10.1109/TPAMI.2021.3123303
      • (2022) OmniScribe: Authoring Immersive Audio Descriptions for 360° Videos. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1-14. DOI: 10.1145/3526113.3545613
      • (2022) AIGuide: Augmented Reality Hand Guidance in a Visual Prosthetic. ACM Transactions on Accessible Computing 15(2), 1-32. DOI: 10.1145/3508501
      • (2022) ImageExplorer: Multi-Layered Touch Exploration to Encourage Skepticism Towards Imperfect AI-Generated Image Captions. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3491102.3501966
      • (2022) Accessibility-Related Publication Distribution in HCI Based on a Meta-Analysis. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1-28. DOI: 10.1145/3491101.3519701
      • (2020) Making Mobile Augmented Reality Applications Accessible. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 1-14. DOI: 10.1145/3373625.3417006
      • (2020) PneuFetch: Supporting Blind and Visually Impaired People to Fetch Nearby Objects via Light Haptic Cues. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1-9. DOI: 10.1145/3334480.3383095
      • (2020) Hand-Priming in Object Localization for Assistive Egocentric Vision. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 3411-3421. DOI: 10.1109/WACV45572.2020.9093353
