Abstract
People with physical or visual impairments often depend on family members or caregivers for everyday activities. For the physically impaired, electric-powered wheelchairs mitigate the effects of lost mobility and give users greater independence, which motivates research on autonomous wheelchairs. Detecting where and when it is safe to cross a street is a problem shared by visually impaired pedestrians and the autonomous navigation systems of such wheelchairs. This work therefore develops a method for real-time identification of pedestrian traffic lights (PTLs) and zebra crosswalks. To this end, we built a new and challenging dataset of 5180 images from five countries and used it to train, validate, and test five state-of-the-art convolutional neural network (CNN) architectures that we modified for this task. The results show that our approach identifies crosswalks and PTLs simultaneously with up to 95% accuracy, making it suitable for challenging scenarios.
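The core idea described above — a CNN that simultaneously classifies the PTL state and the presence of a crosswalk in one forward pass — can be sketched as two classification heads sharing a single feature vector produced by a backbone network. The sketch below is a minimal, hypothetical illustration in NumPy: the class labels, feature dimension, and `DualHead` structure are assumptions for demonstration, not the paper's actual modified architectures.

```python
import numpy as np

# Hypothetical label sets; the paper's exact class definitions are not given here.
PTL_CLASSES = ["red", "green", "none"]            # pedestrian traffic light state
CROSSWALK_CLASSES = ["crosswalk", "no_crosswalk"]  # zebra crosswalk presence

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class DualHead:
    """Two linear classification heads on top of a shared feature vector:
    one head predicts the PTL state, the other crosswalk presence.
    In practice the features would come from a CNN backbone (e.g. a
    pretrained network with its original classifier removed)."""

    def __init__(self, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_ptl = rng.normal(0.0, 0.01, (feat_dim, len(PTL_CLASSES)))
        self.w_cw = rng.normal(0.0, 0.01, (feat_dim, len(CROSSWALK_CLASSES)))

    def predict(self, features):
        """Return (ptl_label, crosswalk_label) for one feature vector."""
        p_ptl = softmax(features @ self.w_ptl)
        p_cw = softmax(features @ self.w_cw)
        return (PTL_CLASSES[int(p_ptl.argmax())],
                CROSSWALK_CLASSES[int(p_cw.argmax())])
```

Sharing one backbone between both heads is what makes the simultaneous identification cheap enough for real-time use: the expensive convolutional feature extraction runs once per frame, and only the two small heads differ.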
Funding
The authors declare that no funding was received for this work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Silvio R. R. Sanches, Pedro H. Bugatti and Priscila T. M. Saito contributed equally to this work.
Cite this article
Moura, R.S., Sanches, S.R.R., Bugatti, P.H. et al. Pedestrian traffic lights and crosswalk identification. Multimed Tools Appl 81, 16497–16513 (2022). https://doi.org/10.1007/s11042-022-12222-6