Authors:
Jin Chen 1; Satesh Ramnath 2; Tyron Samaroo 2; Fani Maksakuli 1; Arber Ruci 1; E’edresha Sturdivant 1 and Zhigang Zhu 2,3
Affiliations:
1 Nearabl Inc., New York, NY 10023, U.S.A.
2 Computer Science Department, The City College of New York - CUNY, New York, NY 10031, U.S.A.
3 PhD Program in Computer Science, The Graduate Center - CUNY, New York, NY 10016, U.S.A.
Keyword(s):
Mobile Application, Voice Interaction, 3D Finger Recognition, 3D Object Detection, Assistive Computer Vision.
Abstract:
This paper presents a mobile-based solution that integrates 3D vision and voice interaction to assist people who are blind or have low vision in exploring and interacting with their surroundings. The key components of the system are its two 3D vision modules: a 3D object detection module, which integrates a deep-learning-based 2D object detector with ARKit-based point cloud generation, and an interest direction recognition module, which integrates hand/finger recognition with ARKit-based 3D direction estimation. The integrated system consists of a voice interface, a task scheduler, and an instruction generator. The voice interface contains a customized user request mapping module that maps the user’s spoken input to one of the four primary system operation modes (exploration, search, navigation, and settings adjustment). The task scheduler coordinates with two web services that host the two vision modules, allocating computation resources based on the user request and network connectivity strength. Finally, the instruction generator computes the corresponding instructions from the user request and the results of the two vision modules. The system is capable of running in real time on mobile devices. We present preliminary experimental results on the performance of the voice-to-user-request mapping module and the two vision modules.
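The abstract does not detail how the user request mapping module works, so the following is only a minimal sketch, assuming a simple keyword-matching policy over a recognized transcript (e.g. the output of SFSpeechRecognizer). The enum cases mirror the four operation modes named in the abstract; the keyword table and function names are hypothetical.

```swift
import Foundation

// The four primary operation modes named in the paper.
enum OperationMode: String, CaseIterable {
    case exploration, search, navigation, settings
}

// Hypothetical keyword table; the paper's customized mapping
// module is not specified, so this stand-in just matches phrases.
let modeKeywords: [OperationMode: [String]] = [
    .exploration: ["explore", "around me", "describe"],
    .search:      ["find", "search", "where is"],
    .navigation:  ["navigate", "take me", "go to"],
    .settings:    ["settings", "volume", "speed"]
]

// Map a recognized utterance to an operation mode, if any.
func mapRequest(_ transcript: String) -> OperationMode? {
    let text = transcript.lowercased()
    for (mode, keywords) in modeKeywords {
        if keywords.contains(where: { text.contains($0) }) {
            return mode
        }
    }
    return nil  // unrecognized; the system would re-prompt the user
}

// Example: "Find the nearest door" maps to the search mode.
print(mapRequest("Find the nearest door")?.rawValue ?? "unknown")
```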
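The task scheduler's policy is likewise unspecified beyond "based on the user request and network connectivity strength". As a sketch under that assumption, Apple's Network framework can report path quality flags that a scheduler might use to choose between a remote web service and on-device fallback; note NWPath exposes only coarse signals (satisfied/constrained/expensive), not a continuous strength measure, and the ComputeTarget type here is illustrative.

```swift
import Network

// Hypothetical dispatch targets for a vision request.
enum ComputeTarget { case remoteService, onDevice }

// Monitor the network path and pick a target per update.
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    // Prefer the remote service only on an unconstrained, usable path.
    let target: ComputeTarget =
        (path.status == .satisfied && !path.isConstrained)
        ? .remoteService : .onDevice
    print("Dispatching vision request to \(target)")
}
monitor.start(queue: DispatchQueue(label: "net.monitor"))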
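```

For the 3D object detection module, the abstract says a 2D detector is combined with ARKit-based point cloud generation. The sketch below shows one standard ARKit pattern for lifting a 2D detection into 3D, raycasting from the bounding-box center against estimated planes; the paper's actual point-cloud pipeline may differ, and the function name and parameters are assumptions.

```swift
import ARKit

// Minimal sketch: lift the 2D center of a detected bounding box
// (in view coordinates) to a 3D world position via ARKit raycasting.
func worldPosition(of boxCenter: CGPoint,
                   in sceneView: ARSCNView) -> simd_float3? {
    // Build a raycast query against estimated planes
    // (floors, walls, tabletops).
    guard let query = sceneView.raycastQuery(from: boxCenter,
                                             allowing: .estimatedPlane,
                                             alignment: .any) else {
        return nil
    }
    // The first result is the nearest surface hit along the ray.
    guard let hit = sceneView.session.raycast(query).first else {
        return nil
    }
    // The translation column of the world transform is the 3D position.
    let t = hit.worldTransform.columns.3
    return simd_float3(t.x, t.y, t.z)
}
```

Such a position, paired with the device pose, is enough to generate spoken instructions like distance and direction to the detected object.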