Pedestrian traffic lights and crosswalk identification

Published in Multimedia Tools and Applications

Abstract

People with physical or visual impairments often depend on family members or caregivers to carry out their daily activities. For the physically impaired, electric-powered wheelchairs mitigate the effects of lost mobility and give users more independence, which makes research on autonomous wheelchairs relevant. Detecting where and when it is safe to cross the street is a problem that concerns both visually impaired pedestrians and the autonomous navigation systems of such wheelchairs. This work therefore develops a method for real-time identification of pedestrian traffic lights (PTLs) and zebra crosswalks. To that end, we built a new and challenging dataset of 5180 images from five countries and used it to train, validate, and test five state-of-the-art convolutional neural network (CNN) architectures that we modified for this task. The results show that our approach identifies crosswalks and PTLs simultaneously with up to 95% accuracy, making it suitable for challenging scenarios.
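To make the setup concrete, the sketch below shows one way such a modification could look: a shared classification backbone with two output heads, one for the PTL state and one for crosswalk presence, trained jointly with a summed cross-entropy loss. This is a minimal, hypothetical PyTorch illustration, not the paper's implementation; the ResNet-18 backbone, the three-class PTL label set (red/green/no PTL), and the training hyperparameters are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

class PTLCrosswalkNet(nn.Module):
    """Illustrative multi-head classifier: one backbone, two tasks.
    Not the paper's architecture; heads and class sets are assumed."""

    def __init__(self, num_ptl_classes: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = backbone.fc.in_features  # 512 for ResNet-18
        # Keep everything up to (and including) global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.ptl_head = nn.Linear(feat_dim, num_ptl_classes)  # e.g. red / green / no PTL
        self.crosswalk_head = nn.Linear(feat_dim, 2)          # crosswalk present / absent

    def forward(self, x):
        z = self.features(x).flatten(1)  # (N, 512)
        return self.ptl_head(z), self.crosswalk_head(z)

# One joint training step on a dummy batch: the two cross-entropy
# losses are summed so both tasks update the shared backbone.
model = PTLCrosswalkNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)    # placeholder for dataset images
ptl_labels = torch.randint(0, 3, (8,))  # placeholder PTL states
cw_labels = torch.randint(0, 2, (8,))   # placeholder crosswalk labels

ptl_logits, cw_logits = model(images)
loss = criterion(ptl_logits, ptl_labels) + criterion(cw_logits, cw_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Swapping in any of the other backbones evaluated in the paper would only change how `self.features` and `feat_dim` are obtained; the two-head structure and joint loss stay the same.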

Funding

The authors declare that they have not received funding for this work.

Author information

Corresponding author

Correspondence to Ronaldo S. Moura.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Silvio R. R. Sanches, Pedro H. Bugatti and Priscila T. M. Saito contributed equally to this work.

About this article

Cite this article

Moura, R.S., Sanches, S.R.R., Bugatti, P.H. et al. Pedestrian traffic lights and crosswalk identification. Multimed Tools Appl 81, 16497–16513 (2022). https://doi.org/10.1007/s11042-022-12222-6

Keywords

Navigation