Real-Time Traffic Sign Detection and Recognition Method Based on Simplified Gabor Wavelets and CNNs
<p>Figure 1. Examples of the difficult situations faced in traffic sign detection and recognition: undesirable light, disorientation, motion blur, color fade, occlusion, rain, and snow.</p>
<p>Figure 2. The pipeline of our method.</p>
<p>Figure 3. The eight SGW filters shown in 3D histograms with the following parameters: (<b>a</b>) <span class="html-italic">ω</span> = 0.3π, <span class="html-italic">θ</span> = 0; (<b>b</b>) <span class="html-italic">ω</span> = 0.3π, <span class="html-italic">θ</span> = π/4; (<b>c</b>) <span class="html-italic">ω</span> = 0.3π, <span class="html-italic">θ</span> = π/2; (<b>d</b>) <span class="html-italic">ω</span> = 0.3π, <span class="html-italic">θ</span> = 3π/4; (<b>e</b>) <span class="html-italic">ω</span> = 0.5π, <span class="html-italic">θ</span> = 0; (<b>f</b>) <span class="html-italic">ω</span> = 0.5π, <span class="html-italic">θ</span> = π/4; (<b>g</b>) <span class="html-italic">ω</span> = 0.5π, <span class="html-italic">θ</span> = π/2; (<b>h</b>) <span class="html-italic">ω</span> = 0.5π, <span class="html-italic">θ</span> = 3π/4.</p>
<p>Figure 4. Sample processing by the eight SGW filters and the per-pixel maximum over the eight feature maps: (<b>a</b>–<b>d</b>) <span class="html-italic">ω</span> = 0.3π with <span class="html-italic">θ</span> = 0, π/4, π/2, 3π/4; (<b>e</b>–<b>h</b>) <span class="html-italic">ω</span> = 0.5π with <span class="html-italic">θ</span> = 0, π/4, π/2, 3π/4; (<b>i</b>) the input image; (<b>o</b>) the output synthesized map.</p>
<p>Figure 5. (<b>a</b>) RGB traffic sign; (<b>b</b>) grayscale image; (<b>c</b>) synthetic Gabor filtered map; (<b>d</b>) two vector contrast graphs.</p>
<p>Figure 6. (<b>a</b>) RGB traffic sign; (<b>b</b>) grayscale image; (<b>c</b>) synthetic Gabor filtered map; (<b>d</b>) two vector contrast graphs.</p>
<p>Figure 7. (<b>a</b>) Original scene RGB image; (<b>b</b>) grayscale image; (<b>c</b>) synthetic SGW filtered map; (<b>d</b>) maximally stable extremal regions (MSERs) on the grayscale map; (<b>e</b>) MSERs on the synthetic SGW filtered map; (<b>f</b>) segmentation results after applying the filter rules.</p>
<p>Figure 8. The processing pipeline for superclass classification.</p>
<p>Figure 9. The structure of the three convolutional neural networks (CNNs).</p>
<p>Figure 10. Examples of the subclasses of the German Traffic Sign Detection Benchmark (GTSDB): (<b>a</b>) prohibitory traffic signs; (<b>b</b>) danger traffic signs; (<b>c</b>) mandatory traffic signs.</p>
<p>Figure 11. Examples of the subclasses of the Chinese Traffic Sign Dataset (CTSD): (<b>a</b>) prohibitory traffic signs; (<b>b</b>) danger traffic signs; (<b>c</b>) mandatory traffic signs.</p>
<p>Figure 12. Detection precision–recall curves on the GTSDB and CTSD. The y-axis was adjusted for clarity.</p>
<p>Figure 13. Visualization of the traffic sign transformation and the first convolution layers of the CNN.</p>
<p>Figure 14. Visualization of the transformation and the first convolution layers of Reference [<a href="#B41-sensors-18-03192" class="html-bibr">41</a>].</p>
<p>Figure 15. Training loss versus training epochs for the triangle, circle, and overall classifiers.</p>
<p>Figure 16. Classification accuracy versus training epochs for the triangle, circle, and overall classifiers.</p>
<p>Figure 17. Comparison of the classification rate of our method and state-of-the-art methods.</p>
<p>Figure 18. Some misclassified test samples.</p>
<p>Figure 19. Examples of the detection and recognition results.</p>
Abstract
1. Introduction
- Although traffic signs of the same kind are consistent in color, in outdoor environments their color is strongly affected by illumination and light direction. Color information is therefore not fully reliable.
- Because vehicle-mounted cameras are not always perpendicular to the signs, the shapes of traffic signs are often distorted in road scenes, so shape information is not fully reliable either.
- Traffic signs in road scenes are often partially obscured by buildings, trees, and other vehicles; therefore, they must be recognized from incomplete information.
- Traffic sign discoloration, physical damage, rain, snow, fog, and similar problems further complicate traffic sign detection and classification.
- Some challenging examples are shown in Figure 1.
- We propose simplified Gabor filters to preprocess the grayscale images of traffic scenes, enhancing edges and strengthening shape information. This also makes the non-edge areas of painted artificial objects, such as traffic signs, more stable and reduces noise in those areas.
- We use the maximally stable extremal regions (MSERs) algorithm on the simplified Gabor filtered map to find regions of interest more effectively, and we apply our defined rules to filter out clearly negative samples.
- We are the first to use an eight-channel simplified Gabor feature as the input of the convolutional neural networks (CNNs) for traffic sign classification, treating it as a pre-convolution layer.
- Our method performs only one feature extraction across the detection and classification stages, so the extracted features are shared between the two stages. Compared with algorithms that use different feature extraction methods in the detection and classification stages, this saves considerable processing time and makes the method feasible for real-time applications.
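The eight-filter bank and per-pixel maximization described above (two frequencies ω ∈ {0.3π, 0.5π} and four orientations θ ∈ {0, π/4, π/2, 3π/4}, as in Figures 3 and 4) can be sketched as follows. This is a minimal illustration that uses full (non-simplified) Gabor kernels as a stand-in; the paper's simplified Gabor wavelets additionally quantize the kernel values to a small set of levels, and the kernel size and σ below are our assumptions, not values from the paper.

```python
import numpy as np

def gabor_kernel(omega, theta, size=9, sigma=2.5):
    """Real part of a Gabor kernel at radial frequency omega and orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    return envelope * np.cos(omega * xr)

def filter2_same(img, kernel):
    """Naive 'same'-size 2D filtering (the real Gabor kernel is even-symmetric,
    so correlation and convolution coincide)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sgw_feature_maps(gray):
    """Eight filtered maps (2 frequencies x 4 orientations) and their per-pixel max."""
    omegas = [0.3 * np.pi, 0.5 * np.pi]
    thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    maps = np.stack([np.abs(filter2_same(gray, gabor_kernel(w, t)))
                     for w in omegas for t in thetas])
    return maps, maps.max(axis=0)   # synthesized map = maximum at each pixel

gray = np.random.rand(32, 32)       # stand-in for a grayscale traffic scene
maps, synth = sgw_feature_maps(gray)
```

The per-pixel maximum keeps the strongest edge response regardless of orientation, which is why the synthesized map enhances sign contours under varying viewing angles.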
2. Related Works
3. Overview of Our Method
4. Traffic Sign Detection
4.1. Simplified Gabor Wavelet Model
4.2. Traffic Sign Proposal Extraction
4.3. Traffic Sign Detection
- The normalized sample image is taken as input, and the gradients in the horizontal and vertical orientations are computed with a gradient operator.
- Local gradient statistics are collected: the sample image is divided into cells of several pixels each, and the gradient direction is divided into nine intervals (bins). Within each cell, the gradient directions of all pixels are accumulated into these bins, yielding a nine-dimensional feature vector per cell.
- Four adjacent cells form a block, and the feature vectors of all cells in a block are concatenated to obtain the descriptor of the block region (36 dimensions).
- The sample image is scanned block by block with a step of one cell, and the descriptors of all blocks are concatenated to form the feature of the sample.
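As a sanity check, the descriptor length implied by these steps can be computed directly: with 6 × 6 cells, 12 × 12 blocks (2 × 2 cells), a one-cell (6-pixel) scanning step, and 9 bins, the number of blocks per dimension is (size − block)/stride + 1. This small illustrative calculation is ours, not code from the paper; it reproduces the dimensions listed later for the HOG1–HOG3 configurations (324, 900, and 1296).

```python
def hog_dimension(img_size, cell=6, block=12, stride=6, bins=9):
    """Total HOG descriptor length for a square image scanned block by block."""
    blocks_per_dim = (img_size - block) // stride + 1
    cells_per_block = (block // cell) ** 2      # 2 x 2 cells -> 4 cells per block
    return blocks_per_dim ** 2 * cells_per_block * bins

for size in (24, 36, 42):
    print(size, hog_dimension(size))
```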
5. Traffic Sign Classification
6. Experimental Results
6.1. Experimental Dataset and Computer Environment
6.2. Traffic Sign Detection
6.3. Traffic Sign Classification
7. Conclusions and Future Works
Author Contributions
Funding
Conflicts of Interest
References
- Soendoro, W.D.; Supriana, I. Traffic sign recognition with Color-based Method, shape-arc estimation and SVM. In Proceedings of the 2011 International Conference on Electrical Engineering and Informatics, Bandung, Indonesia, 17–19 July 2011. [Google Scholar]
- Li, H.; Sun, F.; Liu, L.; Wang, L. A novel traffic sign detection method via color segmentation and robust shape matching. Neurocomputing 2015, 169, 77–88. [Google Scholar] [CrossRef]
- Bahlmann, C.; Zhu, Y.; Ramesh, V.; Pellkofer, M.; Koehler, T. A system for traffic sign detection, tracking, and recognition using color, shape, and motion information. In Proceedings of the 2005 IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005. [Google Scholar]
- Ardianto, S.; Chen, C.; Hang, H. Real-time traffic sign recognition using color segmentation and SVM. In Proceedings of the 2017 International Conference on Systems, Signals and Image Processing (IWSSIP), Poznan, Poland, 22–24 May 2017. [Google Scholar]
- Shadeed, W.G.; Abu-Al-Nadi, D.I.; Mismar, M.J. Road traffic sign detection in color images. In Proceedings of the 10th IEEE International Conference on Electronics, Circuits and Systems, Sharjah, UAE, 14–17 December 2003. [Google Scholar]
- Malik, R.; Khurshid, J.; Ahmad, S.N. Road sign detection and recognition using colour segmentation, shape analysis and template matching. In Proceedings of the 2007 International Conference on Machine Learning and Cybernetics, Hong Kong, China, 19–22 August 2007. [Google Scholar]
- Paclík, P.; Novovičová, J. Road sign classification without color information. In Proceedings of the 6th Conference of Advanced School of Imaging and Computing, July 2000; Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.3982 (accessed on 7 July 2000).
- Loy, G.; Barnes, N. Fast shape-based road sign detection for a driver assistance system. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004. [Google Scholar]
- Riveiro, B.; Díaz-Vilariño, L.; Conde-Carnero, B.; Soilán, M.; Arias, P. Automatic segmentation and shape-based classification of retro-reflective traffic signs from mobile LiDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 295–303. [Google Scholar] [CrossRef]
- Yang, Y.; Luo, H.; Xu, H.; Wu, F. Towards real-time traffic sign detection and classification. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2022–2031. [Google Scholar] [CrossRef]
- Yin, S.; Ouyang, P.; Liu, L.; Guo, Y.; Wei, S. Fast traffic sign recognition with a rotation invariant binary pattern based feature. Sensors 2015, 15, 2161–2180. [Google Scholar] [CrossRef] [PubMed]
- Jin, J.; Fu, K.; Zhang, C. Traffic sign recognition with hinge loss trained convolutional neural networks. IEEE Trans. Intell. Transp. Syst. 2014, 15, 1991–2000. [Google Scholar] [CrossRef]
- Cireşan, D.; Meier, U.; Masci, J.; Schmidhuber, J. Multi-column deep neural network for traffic sign classification. Neural Netw. 2012, 32, 333–338. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Qian, R.; Yue, Y.; Coenen, F.; Zhang, B. Traffic sign recognition with convolutional neural network based on max pooling positions. In Proceedings of the 12th International Conference on Natural Computation; Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016. [Google Scholar]
- Youssef, A.; Albani, D.; Nardi, D.; Bloisi, D.D. Fast traffic sign recognition using color segmentation and deep convolutional networks. In Proceedings of the ACIVS 2016: Advanced Concepts for Intelligent Vision Systems, Lecce, Italy, 24–27 October 2016. [Google Scholar]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Qian, R.; Zhang, B.; Yue, Y.; Wang, Z.; Coenen, F. Robust Chinese traffic sign detection and recognition with deep convolutional neural network. In Proceedings of the 11th International Conference on Natural Computation (ICNC), Zhangjiajie, China, 15–17 August 2015. [Google Scholar]
- Zhang, J.; Huang, M.; Jin, X.; Li, X. A Real-Time Chinese traffic sign detection algorithm based on modified YOLOv2. Algorithms 2017, 10, 127. [Google Scholar] [CrossRef]
- Xu, Q.; Su, J.; Liu, T. A detection and recognition method for prohibition traffic signs. In Proceedings of the 2010 International Conference on Image Analysis and Signal Processing, Zhejiang, China, 12–14 April 2010. [Google Scholar]
- Zhu, S.; Liu, L.; Lu, X. Color-geometric model for traffic sign recognition. In Proceedings of the Multiconference on Computational Engineering in Systems Applications, Beijing, China, 4–6 October 2006. [Google Scholar]
- Sheikh, D.M.A.A.; Kole, A.; Maity, T. Traffic sign detection and classification using colour feature and neural network. In Proceedings of the 2016 International Conference on Intelligent Control Power and Instrumentation (ICICPI), Kolkata, India, 21–23 October 2016. [Google Scholar]
- Manjunath, B.S.; Ma, W.Y. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 837–842. [Google Scholar] [CrossRef]
- Geisler, W.; Clark, M.; Bovik, A. Multichannel texture analysis using localized spatial filters. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 55–73. [Google Scholar] [CrossRef]
- Tadic, V.; Popovic, M.; Odry, P. Fuzzified Gabor filter for license plate detection. Eng. Appl. Artif. Intell. 2016, 48, 40–58. [Google Scholar] [CrossRef]
- Zhang, L.; Tjondronegoro, D.; Chandran, V. Random Gabor based templates for facial expression recognition in images with facial occlusion. Neurocomputing 2014, 145, 451–464. [Google Scholar] [CrossRef]
- Jia, L.; Chen, C.; Liang, J.; Hou, Z. Fabric defect inspection based on lattice segmentation and Gabor filtering. Neurocomputing 2017, 238, 84–102. [Google Scholar] [CrossRef]
- Pellegrino, F.A.; Vanzella, W.; Torre, V. Edge detection revisited. IEEE Trans. Syst. Man Cybern. 2004, 34, 1500–1518. [Google Scholar] [CrossRef]
- Xing, Y.; Yang, Q.; Guo, C. Face Recognition based on gabor enhanced marginal fisher model and error correction SVM. In Proceedings of the Advances in Neural Networks—ISNN 2011, Guilin, China, 29 May–1 June 2011. [Google Scholar]
- Jiang, W.; Lam, K.; Shen, T. Efficient edge detection using simplified gabor wavelets. IEEE Trans. Syst. Man Cybern. 2009, 39, 1036–1047. [Google Scholar] [CrossRef] [PubMed]
- Choi, W.-P.; Tse, S.-H.; Wong, K.-W.; Lam, K.-M. Simplified Gabor wavelets for human face recognition. Pattern Recognit. 2008, 41, 1186–1199. [Google Scholar] [CrossRef]
- Matas, J.; Chum, O.; Urban, M.; Pajdla, T. Robust wide-baseline stereo from maximally stable extremal regions. Image Vis. Comput. 2004, 22, 761–767. [Google Scholar] [CrossRef]
- Greenhalgh, J.; Mirmehdi, M. Real-time detection and recognition of road traffic signs. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1498–1506. [Google Scholar] [CrossRef]
- Cireşan, D.; Meier, U.; Masci, J.; Schmidhuber, J. A committee of neural networks for traffic sign classification. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011. [Google Scholar]
- Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011. [Google Scholar]
- Zhang, Z.; Li, Y.; He, X.; Yuan, W. CNN Optimization and its application in traffic signs recognition based on GRA. J. Residuals Sci. Technol. 2016, 13, 6. [Google Scholar] [CrossRef]
- Li, X.; Fei, S.; Zhang, T. Face recognition based on histogram of modular gabor feature and support vector machines. In Proceedings of the Advances in Neural Networks—ISNN 2009, Wuhan, China, 26–29 May 2009. [Google Scholar]
- Li, M.; Yu, X.; Ryu, K.H.; Lee, S.; Theera-Umpon, N. Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition. Clust. Comput. 2017. [Google Scholar] [CrossRef]
- Saatci, E.; Tavsanoglu, V. Fingerprint image enhancement using CNN Gabor-type filters. In Proceedings of the Cellular Neural Networks and Their Applications, Frankfurt, Germany, 24 July 2002; pp. 377–382. [Google Scholar]
- Chang, S.-Y.; Morgan, N. Robust CNN-based speech recognition with Gabor filter kernels. In Proceedings of the 15th Annual Conference of the International Speech Communication Association, Singapore, 14–18 September 2014; pp. 905–909. [Google Scholar]
- Aghdam, H.H.; Heravi, E.J.; Puig, D. Recognizing traffic signs using a practical deep neural network. In Proceedings of the Robot 2015: Second Iberian Robotics Conference, Lisbon, Portugal, 19–21 November 2015. [Google Scholar]
- Sermanet, P.; LeCun, Y. Traffic sign recognition with multi-scale Convolutional Networks. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011. [Google Scholar] [CrossRef]
- Xie, K.; Ge, S.; Ye, Q.; Luo, Z. Traffic sign recognition based on attribute-refinement cascaded convolutional neural networks. In Proceedings of the Pacific Rim Conference on Multimedia, Xi’an, China, 15–16 September 2016; pp. 201–210. [Google Scholar]
| Algorithm | No. of Additions | No. of Multiplications |
|---|---|---|
| Canny | | |
| TGW | | |
| SGW | | |
| Dataset | Parameter | Minimum | Maximum |
|---|---|---|---|
| GTSDB | Height (H) | 16 | 128 |
| GTSDB | Width (W) | 16 | 128 |
| GTSDB | MSERs area/bounding box area | 0.4 | 0.8 |
| GTSDB | Aspect ratio | 0.5 | 2.1 |
| CTSD | Height (H) | 26 | 560 |
| CTSD | Width (W) | 26 | 580 |
| CTSD | MSERs area/bounding box area | 0.4 | 0.8 |
| CTSD | Aspect ratio | 0.4 | 2.2 |
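The filter rules in the table above can be applied directly to each MSER proposal. A minimal sketch follows; the region representation (height, width, region area) and the function names are our assumptions for illustration, not the paper's implementation.

```python
# Per-dataset filter rules from the table above: (min, max) for each quantity.
RULES = {
    "GTSDB": {"height": (16, 128), "width": (16, 128),
              "area_ratio": (0.4, 0.8), "aspect": (0.5, 2.1)},
    "CTSD":  {"height": (26, 560), "width": (26, 580),
              "area_ratio": (0.4, 0.8), "aspect": (0.4, 2.2)},
}

def keep_proposal(height, width, region_area, dataset="GTSDB"):
    """True if an MSER proposal passes the height/width/area-ratio/aspect rules."""
    rules = RULES[dataset]
    measures = {
        "height": height,
        "width": width,
        "area_ratio": region_area / float(height * width),  # MSER area / bbox area
        "aspect": width / float(height),
    }
    return all(lo <= measures[k] <= hi for k, (lo, hi) in rules.items())

print(keep_proposal(40, 40, 900))   # area ratio 0.5625, aspect 1.0 -> kept
print(keep_proposal(40, 40, 200))   # area ratio 0.125 -> rejected
```

Proposals failing any one rule are discarded before the HOG + SVM stage, which is what shrinks the candidate count in the detection comparison table below.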
| Method | Metric | GTSDB | CTSD |
|---|---|---|---|
| Paper [10] | Average number of proposals | 325 | 200 |
| | Recall, FNs | 99.63%, 1 | 99.44%, 3 |
| | Time (ms) | 67 | 90 |
| Grayscale + MSERs | Average number of proposals | 388 | 321 |
| | Proposals after rule filtering | 118 | 99 |
| | Recall, FNs | 97.1%, 8 | 98.12%, 10 |
| | Time (ms) | 40 | 38 |
| SGW map + MSERs | Average number of proposals | 276 | 178 |
| | Proposals after rule filtering | 83 | 56 |
| | Recall, FNs | 99.63%, 1 | 99.62%, 2 |
| | Time (ms) | 46 | 41 |
| No. | Size | Cell | Block | Stride | Gradient Direction Bins | Dimension |
|---|---|---|---|---|---|---|
| HOG1 | 24 × 24 | 6 × 6 | 12 × 12 | 6 × 6 | 9 | 324 |
| HOG2 | 36 × 36 | 6 × 6 | 12 × 12 | 6 × 6 | 9 | 900 |
| HOG3 | 42 × 42 | 6 × 6 | 12 × 12 | 6 × 6 | 9 | 1296 |
| HOG4 | 56 × 56 | 6 × 6 | 12 × 12 | 6 × 6 | 9 | 1296 |
| HOG5 | 64 × 64 | 6 × 6 | 12 × 12 | 6 × 6 | 9 | 1764 |
| Dataset | Method | Detection Rate (%) | Average Detection Time (ms) |
|---|---|---|---|
| GTSDB | HOG1 + SVM | 95.88 | 69 |
| GTSDB | HOG2 + SVM | 99.33 | 93 |
| GTSDB | HOG3 + SVM | 95.49 | 101 |
| GTSDB | HOG4 + SVM | 83.25 | 71 |
| GTSDB | HOG5 + SVM | 82.26 | 89 |
| CTSD | HOG1 + SVM | 94.63 | 62 |
| CTSD | HOG2 + SVM | 97.96 | 79 |
| CTSD | HOG3 + SVM | 94.08 | 95 |
| CTSD | HOG4 + SVM | 81.86 | 65 |
| CTSD | HOG5 + SVM | 81.03 | 78 |
| Layer | CNN for Circular Traffic Signs | CNN for Triangular Traffic Signs | CNN for Overall Traffic Signs |
|---|---|---|---|
| CONV_Lay_1 | Kernels: 5 × 5 × 6 × 8; stride: 1 | Kernels: 5 × 5 × 6 × 8; stride: 1 | Kernels: 5 × 5 × 6 × 8; stride: 1 |
| ReLU_Lay_1 | Rectified linear unit | Rectified linear unit | Rectified linear unit |
| POOL_Lay_1 | Max pooling; size: 2 × 2; stride: 2 | Max pooling; size: 2 × 2; stride: 2 | Max pooling; size: 2 × 2; stride: 2 |
| CONV_Lay_2 | Filters: 5 × 5 × 12 × 6; stride: 1 | Filters: 5 × 5 × 12 × 6; stride: 1 | Filters: 5 × 5 × 12 × 6; stride: 1 |
| ReLU_Lay_2 | Rectified linear unit | Rectified linear unit | Rectified linear unit |
| POOL_Lay_2 | Max pooling; size: 2 × 2; stride: 2 | Max pooling; size: 2 × 2; stride: 2 | Max pooling; size: 2 × 2; stride: 2 |
| CONV_Lay_4 (FC) | Filters: 432 × 20; stride: 1 | Filters: 432 × 15; stride: 1 | Filters: 432 × 43; stride: 1 |
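The fully connected layer's 432 inputs are consistent with the two conv/pool stages if the kernel quadruples are read as height × width × output channels × input channels and the input patch is 36 × 36 with 8 SGW channels; both of these readings are our assumptions, since the table does not spell them out. The shape arithmetic:

```python
def conv_out(size, kernel, stride=1):
    """Output side length of a 'valid' convolution on a square input."""
    return (size - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    """Output side length after max pooling."""
    return (size - window) // stride + 1

size = 36                            # assumed input patch side (8 SGW channels)
size = pool_out(conv_out(size, 5))   # CONV_Lay_1 (5 x 5) + POOL_Lay_1 -> 16
size = pool_out(conv_out(size, 5))   # CONV_Lay_2 (5 x 5) + POOL_Lay_2 -> 6
channels = 12                        # output channels of CONV_Lay_2
print(size * size * channels)        # 6 * 6 * 12 = 432, the FC layer's input width
```

The three networks differ only in the FC output width (20, 15, and 43), matching the class counts of the circle, triangle, and overall classifiers.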
| | Triangle | Circle | Overall |
|---|---|---|---|
| Training samples | 8970 | 22,949 | 39,209 |
| Test samples | 2790 | 7440 | 12,630 |
| Number of classes | 15 | 20 | 43 |
| Misclassifications | 25 | 61 | 163 |
| Accuracy (%) | 99.28 | 99.49 | 98.71 |

Proposed method accuracy: 99.43%.
| Step | GTSDB Average Processing Time (ms/frame) | CTSD Average Processing Time (ms/frame) |
|---|---|---|
| Simplified Gabor filter | 17 | 15 |
| MSERs | 29 | 26 |
| HOG | 93 | 79 |
| SVM | 15 | 13 |
| Classification | 5 | – |
| Total processing time | 159 | – |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Shao, F.; Wang, X.; Meng, F.; Rui, T.; Wang, D.; Tang, J. Real-Time Traffic Sign Detection and Recognition Method Based on Simplified Gabor Wavelets and CNNs. Sensors 2018, 18, 3192. https://doi.org/10.3390/s18103192