
Machine-Learning-Based Accessibility System

  • Original Research
  • Published in SN Computer Science

Abstract

Accessing the internet presents considerable difficulties for people with disabilities. Physical constraints often prevent them from using conventional input devices such as a mouse or keyboard, and without specialised technologies such as screen readers or braille displays, people with visual impairments cannot access digital material at all. These barriers limit their ability to communicate clearly, obtain information, and engage in online activities, so technologies that make the web more accessible and inclusive are urgently needed; demand for systems that improve the quality of life of people with disabilities has grown steadily in recent years. This work contributes two such systems: a sign language recognition system and a software-based virtual keyboard, addressing problems faced by people with hearing and vision impairments respectively. The sign language recognition system offers an alternative to conventional communication techniques, allowing users to communicate more efficiently and naturally, whilst the virtual keyboard addresses the difficulties people with visual impairments encounter when interacting with digital platforms. Together they could significantly improve the accessibility of websites and other digital platforms by lowering the obstacles people with disabilities currently face when accessing information and services online. The system is designed to be user-friendly and effective for users with widely varying levels of expertise and ability, and its machine learning algorithms make it highly configurable, adapting to different sign languages, dialects, and signing styles. These technologies could considerably improve the quality of life of people with hearing and vision impairments, allowing them to participate more fully in society and access information more easily. Further research and development is needed to refine them and to address the remaining challenges in accessibility for people with disabilities.
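The article body is subscription-only, so the pipeline itself is not reproduced on this page. As a rough, non-authoritative sketch of the two contributions the abstract describes, the following Python fragment pairs a small convolutional gesture classifier with a system-wide keystroke-injection step standing in for the software virtual keyboard. The Keras architecture, the 64x64 grayscale input, the 26 letter classes, and the use of pynput are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): a small CNN that classifies
# fixed-size grayscale hand-gesture frames into letter classes, plus a
# pynput-based output step that "types" the predicted letter.
# The 64x64 input size, the 26 classes, and the label-to-character mapping
# are assumptions made for illustration only.
import string

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from pynput.keyboard import Controller

NUM_CLASSES = 26                       # assumption: one class per letter A-Z
LABELS = list(string.ascii_uppercase)

def build_model():
    """A small CNN classifier for 64x64 grayscale gesture frames."""
    return tf.keras.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def type_prediction(model, frame):
    """Classify one preprocessed 64x64 frame and type the predicted letter."""
    probs = model.predict(frame[np.newaxis, ..., np.newaxis], verbose=0)
    Controller().type(LABELS[int(np.argmax(probs))])  # virtual-keyboard step

if __name__ == "__main__":
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_frames, train_labels, epochs=10)  # training data not shown
    type_prediction(model, np.zeros((64, 64), dtype="float32"))  # dummy frame

In a real deployment the classifier would be trained on a labelled sign-language dataset and fed camera frames (for example, segmented hand regions) rather than a dummy array, and the keystroke-injection layer would be replaced by whatever text-entry mechanism the target platform exposes.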




Author information

Corresponding author

Correspondence to Kakoli Banerjee.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Advanced Computing and Data Sciences” guest edited by Mayank Singh, Vipin Tyagi and P.K. Gupta.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Banerjee, K., Singh, A., Akhtar, N. et al. Machine-Learning-Based Accessibility System. SN COMPUT. SCI. 5, 294 (2024). https://doi.org/10.1007/s42979-024-02615-9
