Pantograph Detection Algorithm with Complex Background and External Disturbances
Figure 1: Schematic of PCS.
Figure 2: HSC footage of pantographs.
Figure 3: Comparison of YOLO V4 with other mainstream neural networks [20–32]. (a) Test results on VOC2007 + VOC2012. (b) Test results on the COCO dataset.
Figure 4: YOLO V4 overall algorithm process.
Figure 5: Blurred HSC imaging caused by rainwater. (a) HSR-A. (b) HSR-B.
Figure 6: Heavy dirt attached to the HSC lens. (a) HSR-A. (b) HSR-B.
Figure 7: Changes in the four bounding-box parameters when YOLO V4 locates the pantograph normally, without external interference.
Figure 8: HSC screen dirt detection results. (a) HSR-A. (b) HSR-B.
Figure 9: HSC blur and dirt detection algorithm flow chart.
Figure 10: Catenary support device affecting pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 11: Sun affecting pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 12: Bridge affecting pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 13: Tunnel affecting pantograph detection. (a) Before the HSR enters the tunnel. (b) The moment the HSR enters the tunnel. (c) The HSR running stably in the tunnel after the fill light is turned on. (d) The moment the HSR exits the tunnel.
Figure 14: Platform affecting pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 15: Average grayscale variation of images from HSR-A (top) and HSR-B (bottom) when driving into different tunnels.
Figure 16: Cases in which the sun did not affect YOLO detection of pantographs on HSR-A and HSR-B. (a)–(f) Cases I–VI.
Figure 17: The corresponding HSC in Figure 16 capturing the scene without the sun in the frame. (a)–(f) Cases I–VI.
Figure 18: Average grayscale comparison.
Figure 19: Average grayscale variation in the corresponding areas of HSR-A (top) and HSR-B (bottom) while the sun affects pantograph detection.
Figure 20: Image binarization and opening operations. (a) L-ROI, ROI, and R-ROI. (b) Binary image. (c) Binary image after the opening operation.
Figure 21: Binary images of different regions and the corresponding vertical projections after the opening operation. (a) L-ROI. (b) ROI. (c) R-ROI.
Figure 22: Change in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) when the HSR operates without external disturbances.
Figure 23: Changes in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) during HSR operation when affected by the catenary support devices.
Figure 24: Changes in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) during HSR operation when influenced by the bridge.
Figure 25: Changes in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) during HSR operation when influenced by the platform.
Figure 26: HSR complex background detection algorithm flow chart.
Figure 27: Pantograph detection algorithm flow chart.
Figure 28: EOR-Brenner evaluation results of images captured by HSR-A and HSR-B under different conditions.
Figure 29: Scenes captured at different moments by the same HSR in rainy weather. (a)–(f) Cases I–VI.
Abstract
1. Introduction
2. YOLO V4 Locates the Pantograph Region
3. HSC Blur and Dirt Detection Algorithm
3.1. Blurry HSC Screen and Dirty HSC Screen
3.1.1. Rainwater
3.1.2. Dirt
3.2. External Factors Cause YOLO V4 to Fail to Locate the Pantograph
3.3. Improved Image Sharpness Evaluation Algorithm
3.4. Blob Detection Algorithm Detects Screen Dirt
3.5. Overall Process of HSC Blur and Dirt Detection Algorithm
4. HSR Complex Background Detection Algorithm
4.1. The Complex Background That HSR Needs to Face
4.1.1. Catenary Support Devices
4.1.2. Sun
4.1.3. Bridge
4.1.4. Tunnel
4.1.5. Platform
4.2. Tunnel Detection Algorithm Based on the Overall Average Grayscale of the Image
4.3. Sun Detection Algorithm Based on Local Average Grayscale of Image Pantograph Region
4.4. Background Detection Algorithm for Catenary Support Devices, Bridges, and Platforms Based on Vertical Projection
4.5. Overall Process of HSR Complex Background Detection Algorithm
5. Experiments and Conclusions
5.1. The Overall Process of Pantograph Detection Algorithm
5.2. Performance Evaluation of Algorithms under Complex Background Interference
5.3. EOR-Brenner Evaluates the Sharpness of Pantograph Images Captured by HSC
5.4. Evaluation of the Overall Performance of the Algorithm in This Study
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Tan, P.; Ma, J.E.; Zhou, J.; Fang, Y.T. Sustainability development strategy of China’s high speed rail. J. Zhejiang Univ. Sci. A 2016, 17, 923–932. [Google Scholar] [CrossRef] [Green Version]
- Tan, P.; Li, X.; Wu, Z.; Ding, J.; Ma, J.; Chen, Y.; Fang, Y.; Ning, Y. Multialgorithm fusion image processing for high speed railway dropper failure–defect detection. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 4466–4478. [Google Scholar] [CrossRef]
- Tan, P.; Li, X.F.; Xu, J.M.; Ma, J.E.; Wang, F.J.; Ding, J.; Fang, Y.T.; Ning, Y. Catenary insulator defect detection based on contour features and gray similarity matching. J. Zhejiang Univ. Sci. A 2020, 21, 64–73. [Google Scholar] [CrossRef]
- Gao, S.; Liu, Z.; Yu, L. Detection and monitoring system of the pantograph-catenary in high-speed railway (6C). In Proceedings of the 2017 7th International Conference on Power Electronics Systems and Applications-Smart Mobility, Power Transfer & Security (PESA), Hong Kong, China, 12–14 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7. [Google Scholar]
- Gao, S. Automatic detection and monitoring system of pantograph–catenary in China’s high-speed railways. IEEE Trans. Instrum. Meas. 2020, 70, 1–12. [Google Scholar] [CrossRef]
- He, D.; Chen, J.; Liu, W.; Zou, Z.; Yao, X.; He, G. Online Images Detection for Pantograph Slide Abrasion. In Proceedings of the 2020 IEEE 20th International Conference on Communication Technology (ICCT), Nanning, China, 28–31 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1365–1371. [Google Scholar]
- Ma, L.; Wang, Z.Y.; Gao, X.R.; Wang, L.; Yang, K. Edge detection on pantograph slide image. In Proceedings of the 2009 2nd International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1–3. [Google Scholar]
- Li, H. Research on fault detection algorithm of pantograph based on edge computing image processing. IEEE Access 2020, 8, 84652–84659. [Google Scholar] [CrossRef]
- Huang, S.; Zhai, Y.; Zhang, M.; Hou, X. Arc detection and recognition in pantograph–catenary system based on convolutional neural network. Inf. Sci. 2019, 501, 363–376. [Google Scholar] [CrossRef]
- Jiang, S.; Wei, X.; Yang, Z. Defect detection of pantograph slider based on improved Faster R-CNN. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 5278–5283. [Google Scholar]
- Jiao, Z.; Ma, C.; Lin, C.; Nie, X.; Qing, A. Real-time detection of pantograph using improved CenterNet. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 85–89. [Google Scholar]
- Wei, X.; Jiang, S.; Li, Y.; Li, C.; Jia, L.; Li, Y. Defect detection of pantograph slide based on deep learning and image processing technology. IEEE Trans. Intell. Transp. Syst. 2019, 21, 947–958. [Google Scholar] [CrossRef]
- Li, D.; Pan, X.; Fu, Z.; Chang, L.; Zhang, G. Real-time accurate deep learning-based edge detection for 3-D pantograph pose status inspection. IEEE Trans. Instrum. Meas. 2022, 71, 1–12. [Google Scholar] [CrossRef]
- Sun, R.; Li, L.; Chen, X.; Wang, J.; Chai, X.; Zheng, S. Unsupervised learning based target localization method for pantograph video. In Proceedings of the 2020 16th International Conference on Computational Intelligence and Security (CIS), Nanning, China, 27–30 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 318–323. [Google Scholar]
- Na, K.M.; Lee, K.; Shin, S.K.; Kim, H. Detecting deformation on pantograph contact strip of railway vehicle on image processing and deep learning. Appl. Sci. 2020, 10, 8509. [Google Scholar] [CrossRef]
- Huang, Z.; Chen, L.; Zhang, Y.; Yu, Z.; Fang, H.; Zhang, T. Robust contact-point detection from pantograph-catenary infrared images by employing horizontal-vertical enhancement operator. Infrared Phys. Technol. 2019, 101, 146–155. [Google Scholar] [CrossRef]
- Lu, S.; Liu, Z.; Chen, Y.; Gao, Y. A novel subpixel edge detection method of pantograph slide in complicated surroundings. IEEE Trans. Ind. Electron. 2021, 69, 3172–3182. [Google Scholar] [CrossRef]
- Luo, Y.; Yang, Q.; Liu, S. Novel vision-based abnormal behavior localization of pantograph-catenary for high-speed trains. IEEE Access 2019, 7, 180935–180946. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef] [Green Version]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Ju, M.; Luo, H.; Wang, Z.; Hui, B.; Chang, Z. The application of improved YOLO V3 in multi-scale target detection. Appl. Sci. 2019, 9, 3775. [Google Scholar] [CrossRef] [Green Version]
- Kim, S.W.; Kook, H.K.; Sun, J.Y.; Kang, M.C.; Ko, S.J. Parallel feature pyramid network for object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 234–250. [Google Scholar]
- Wang, T.; Anwer, R.M.; Cholakkal, H.; Khan, F.S.; Pang, Y.; Shao, L. Learning rich features at high-speed for single-shot object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1971–1980. [Google Scholar]
- Chao, P.; Kao, C.Y.; Ruan, Y.S.; Huang, C.H.; Lin, Y.L. Hardnet: A low memory traffic network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 3552–3561. [Google Scholar]
- Zhang, S.; Wen, L.; Bian, X.; Lei, Z.; Li, S.Z. Single-shot refinement neural network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4203–4212. [Google Scholar]
- Zhao, Q.; Sheng, T.; Wang, Y.; Tang, Z.; Chen, Y.; Cai, L.; Ling, H. M2det: A single-shot object detector based on multi-level feature pyramid network. Proc. AAAI Conf. Artif. Intell. 2019, 33, 9259–9266. [Google Scholar] [CrossRef] [Green Version]
- Liu, H.; Zhang, L.; Xin, S. An Improved Target Detection General Framework Based on Yolov4. In Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 27–31 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1532–1536. [Google Scholar]
- Maier, A.; Niederbrucker, G.; Uhl, A. Measuring image sharpness for a computer vision-based Vickers hardness measurement system. In Proceedings of the Tenth International Conference on Quality Control by Artificial Vision, Saint-Etienne, France, 28–30 June 2011; SPIE: Bellingham, WA, USA, 2011; Volume 8000, pp. 199–208. [Google Scholar]
- Kaspers, A. Blob Detection. Master’s Thesis, Utrecht University, Utrecht, The Netherlands, 2011. [Google Scholar]
- Zhang, M.; Wu, T.; Beeman, S.C.; Cullen-McEwen, L.; Bertram, J.F.; Charlton, J.R.; Baldelomar, E.; Bennett, K.M. Efficient small blob detection based on local convexity, intensity and shape information. IEEE Trans. Med. Imaging 2015, 35, 1127–1137. [Google Scholar] [CrossRef]
- Bochem, A.; Herpers, R.; Kent, K.B. Hardware acceleration of blob detection for image processing. In Proceedings of the 2010 Third International Conference on Advances in Circuits, Electronics and Micro-Electronics, Venice, Italy, 18–25 July 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 28–33. [Google Scholar]
- Xiong, X.; Choi, B.J. Comparative analysis of detection algorithms for corner and blob features in image processing. Int. J. Fuzzy Log. Intell. Syst. 2013, 13, 284–290. [Google Scholar] [CrossRef] [Green Version]
- Thanh, N.D.; Li, W.; Ogunbona, P. An improved template matching method for object detection. In Proceedings of the Asian Conference on Computer Vision, Xi’an, China, 23–27 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 193–202. [Google Scholar]
- Zhou, H.; Yuan, Y.; Shi, C. Object tracking using SIFT features and mean shift. Comput. Vis. Image Underst. 2009, 113, 345–352. [Google Scholar] [CrossRef]
- Li, X.; Zhang, T.; Shen, X.; Sun, J. Object tracking using an adaptive Kalman filter combined with mean shift. Opt. Eng. 2010, 49, 020503. [Google Scholar] [CrossRef] [Green Version]
- Krotkov, E.P. Active Computer Vision by Cooperative Focus and Stereo; Springer Science & Business Media: New York, NY, USA, 2012. [Google Scholar]
- Riaz, M.; Park, S.; Ahmad, M.B.; Rasheed, W.; Park, J. Generalized laplacian as focus measure. In Proceedings of the International Conference on Computational Science, Krakow, Poland, 23–25 June 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1013–1021. [Google Scholar]
- Chern, N.N.K.; Neow, P.A.; Ang, M.H. Practical issues in pixel-based autofocusing for machine vision. In Proceedings of the 2001 ICRA, IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), Seoul, Korea, 21–26 May 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 3, pp. 2791–2796. [Google Scholar]
- Huang, H.; Ge, P. Depth extraction in computational integral imaging based on bilinear interpolation. Opt. Appl. 2020, 50, 497–509. [Google Scholar] [CrossRef]
- Feichtenhofer, C.; Fassold, H.; Schallauer, P. A perceptual image sharpness metric based on local edge gradient analysis. IEEE Signal Process. Lett. 2013, 20, 379–382. [Google Scholar] [CrossRef]
- Zhang, K.; Huang, D.; Zhang, B.; Zhang, D. Improving texture analysis performance in biometrics by adjusting image sharpness. Pattern Recognit. 2017, 66, 16–25. [Google Scholar] [CrossRef]
- Xie, X.P.; Zhou, J.; Wu, Q.Z. No-reference quality index for image blur. J. Comput. Appl. 2010, 30, 921. [Google Scholar] [CrossRef]
| Method | TM [38] | MS + SIFT [39] | MS + KF [40] | PDDNet [12] | SED [17] | Improved Faster R-CNN [18] | The Method of This Study |
|---|---|---|---|---|---|---|---|
| Whether the pantograph can be detected correctly under a complex background | × | × | × | × | × | × | ✓ |
| Image Serial Number | Tenengrad [41] | Laplacian [42] | SMD [43] | SMD2 [44] | EG [45] | EAV [46] | NRSS [47] | Brenner [33] | EOR-Brenner |
|---|---|---|---|---|---|---|---|---|---|
| Figure 2 left | 22.5 | 4.24 | 1.81 | 2.01 | 9.34 | 38.18 | 0.79 | 252 | 704 |
| Figure 2 right | 31.1 | 8.25 | 3.23 | 5.18 | 17.26 | 48.25 | 0.91 | 400 | 876 |
| Figure 5a | 9.4 | 2.18 | 0.76 | 0.57 | 2.31 | 23.44 | 0.75 | 95 | 55 |
| Figure 5b | 10.57 | 2.49 | 0.86 | 0.64 | 2.46 | 27.89 | 0.75 | 117 | 64 |
| Figure 6a | 31.64 | 4.45 | 2.72 | 2.35 | 13.92 | 39.01 | 0.82 | 158 | 228 |
| Figure 6b | 32.81 | 5.52 | 2.77 | 2.75 | 16.32 | 50.55 | 0.84 | 286 | 476 |
| Figure 10a | 26.27 | 4.55 | 2.13 | 2.48 | 11.98 | 44.48 | 0.77 | 269 | 686 |
| Figure 10b | 39.79 | 6.76 | 3.54 | 5.13 | 21.42 | 66.29 | 0.81 | 363 | 767 |
| Figure 11a | 24.00 | 4.56 | 2.20 | 2.71 | 13.62 | 51.25 | 0.81 | 143 | 310 |
| Figure 11b | 14.00 | 2.54 | 1.22 | 1.42 | 6.77 | 42.21 | 0.78 | 75 | 285 |
| Figure 12a | 42.92 | 6.78 | 3.47 | 3.96 | 21.19 | 56.17 | 0.79 | 358 | 613 |
| Figure 12b | 31.82 | 4.84 | 2.67 | 3.61 | 17.03 | 55.23 | 0.78 | 221 | 346 |
| Figure 13a | 27.18 | 4.12 | 2.30 | 2.75 | 13.49 | 46.28 | 0.76 | 162 | 356 |
| Figure 13b | 10.44 | 2.21 | 0.86 | 0.85 | 2.43 | 9.76 | 0.74 | 229 | 230 |
| Figure 13c | 20.96 | 3.70 | 1.80 | 1.54 | 7.97 | 32.38 | 0.75 | 209 | 342 |
| Figure 13d | 10.65 | 2.34 | 0.88 | 0.74 | 2.38 | 10.11 | 0.75 | 245 | 246 |
| Figure 14a | 46.62 | 7.53 | 4.05 | 6.12 | 26.28 | 80.26 | 0.78 | 305 | 924 |
| Figure 14b | 39.25 | 6.14 | 3.38 | 3.21 | 22.02 | 86.59 | 0.78 | 310 | 551 |
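Among the sharpness metrics compared above, the classic Brenner measure [33] is the simplest: it sums squared intensity differences between pixels two columns apart, so sharp edges score higher than smooth gradients. A minimal pure-Python sketch of the classic measure follows; note that EOR-Brenner is this paper's improved variant, whose exact formulation is not reproduced here.

```python
def brenner(image):
    """Classic Brenner focus measure: sum of squared differences
    between pixels two columns apart. Higher means sharper."""
    height, width = len(image), len(image[0])
    total = 0
    for y in range(height):
        for x in range(width - 2):
            d = image[y][x + 2] - image[y][x]
            total += d * d
    return total

# A hard edge scores much higher than a smooth ramp over the same range.
sharp = [[0, 0, 255, 255, 255]] * 4
smooth = [[0, 64, 128, 192, 255]] * 4
print(brenner(sharp), brenner(smooth))  # 520200 195588
```

This matches the trend in the table: blurred rain-affected frames (e.g., Figure 5a/b) yield markedly lower Brenner values than clean frames.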
| Image Serial Number | Vertical Projection, L-ROI (%) | Vertical Projection, R-ROI (%) | Average Grayscale (Whole) | Average Grayscale (ROI) | Number of Blobs |
|---|---|---|---|---|---|
| Figure 2 left | 0.5 | 0.5 | 135 | 146 | 57 |
| Figure 2 right | 0.3 | 0.4 | 148 | 154 | 62 |
| Figure 5a | 0.4 | 0.4 | 159 | 175 | 30 |
| Figure 5b | 0.5 | 0.3 | 158 | 179 | 29 |
| Figure 6a | 3.3 | 1.1 | 179 | 190 | 481 |
| Figure 6b | 6.1 | 0.7 | 143 | 149 | 445 |
| Figure 10a | 1.9 | 38.6 | 120 | 114 | 61 |
| Figure 10b | 14.1 | 72.0 | 117 | 116 | 73 |
| Figure 11a | 3.4 | 0.5 | 178 | 212 | 69 |
| Figure 11b | 0.2 | 0.5 | 189 | 221 | 44 |
| Figure 12a | 46.0 | 44.7 | 118 | 122 | 140 |
| Figure 12b | 83.2 | 67.7 | 106 | 100 | 91 |
| Figure 13a | 47.8 | 69.0 | 149 | 154 | 117 |
| Figure 13b | 0 | 0 | 2 | 0 | 26 |
| Figure 13c | 0.5 | 0.5 | 52 | 55 | 61 |
| Figure 13d | 0.5 | 0.5 | 250 | 252 | 45 |
| Figure 14a | 94.3 | 99.6 | 112 | 118 | 130 |
| Figure 14b | 100 | 7.9 | 127 | 141 | 106 |
| Image Serial Number | Actual Time of the Scene | Tenengrad [41] | Laplacian [42] | SMD [43] | SMD2 [44] | EG [45] | EAV [46] | NRSS [47] | Brenner [33] | EOR-Brenner |
|---|---|---|---|---|---|---|---|---|---|---|
| Figure 29a | 16:49:36 | 16.30 | 3.15 | 1.31 | 1.09 | 5.69 | 32.18 | 0.77 | 124 | 149 |
| Figure 29b | 16:51:45 | 9.16 | 2.45 | 0.74 | 0.54 | 2.20 | 28.28 | 0.74 | 125 | 63 |
| Figure 29c | 18:59:35 | 22.53 | 4.72 | 1.79 | 1.73 | 7.98 | 46.70 | 0.78 | 256 | 756 |
| Figure 29d | 19:22:54 | 23.29 | 4.82 | 1.90 | 1.93 | 9.12 | 40.97 | 0.79 | 235 | 764 |
| Figure 29e | 20:57:08 | 9.46 | 1.76 | 0.82 | 0.69 | 3.45 | 29.17 | 0.76 | 50 | 81 |
| Figure 29f | 22:41:23 | 9.94 | 2.37 | 0.85 | 0.62 | 2.54 | 32.92 | 0.74 | 112 | 59 |
| Serial Number | Type of Sample | Number of Samples | Total Algorithm Run Time | FPS | Precision |
|---|---|---|---|---|---|
| I | Complex backgrounds only | 14,985 | 304 s | 49 | 99.92% |
| II | Complex backgrounds + Blur | 14,999 | 346 s | 43 | 99.90% |
| III | Complex backgrounds + Dirt | 14,974 | 349 s | 43 | 99.98% |
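The FPS column follows directly from the sample counts and total run times; a quick sanity check reproduces the reported values.

```python
# (number of samples, total run time in seconds) per sample type, from the table.
samples = {"I": (14985, 304), "II": (14999, 346), "III": (14974, 349)}

fps = {name: round(n / t) for name, (n, t) in samples.items()}
print(fps)  # {'I': 49, 'II': 43, 'III': 43}
```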
| Configuration | Precision-I | Precision-II | Precision-III |
|---|---|---|---|
| Complete algorithm proposed in this study | 99.92% | 99.90% | 99.98% |
| Without the HSR complex background detection algorithm | 73.97% | 84.76% | 85.32% |
| Without the HSC blur and dirt detection algorithm | 96.24% | 73.16% | 77.13% |
| Without both the complex background and the blur and dirt detection algorithms | 70.36% | 57.42% | 63.10% |
Share and Cite
Tan, P.; Cui, Z.; Lv, W.; Li, X.; Ding, J.; Huang, C.; Ma, J.; Fang, Y. Pantograph Detection Algorithm with Complex Background and External Disturbances. Sensors 2022, 22, 8425. https://doi.org/10.3390/s22218425