Article

Pantograph Detection Algorithm with Complex Background and External Disturbances

1 School of Automation and Electrical Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
2 College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China
3 Chinese-German Institute for Applied Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8425; https://doi.org/10.3390/s22218425
Submission received: 21 August 2022 / Revised: 8 October 2022 / Accepted: 28 October 2022 / Published: 2 November 2022
(This article belongs to the Special Issue Efficient Intelligence with Applications in Embedded Sensing)

Abstract

As the key equipment through which a high-speed railway (HSR) train obtains electric power, the pantograph's state directly affects HSR operating safety. Current pantograph detection methods are easily affected by the environment, cannot effectively handle interference from external scenes, and have an accuracy too low to meet the actual operating requirements of HSR. To solve these problems, this study proposes a pantograph detection algorithm with three main parts: first, you only look once (YOLO) V4 detects and locates the pantograph region in real time; second, a blur and dirt detection algorithm handles external interference that acts directly on the high-speed camera (HSC) and prevents the pantograph from being detected; third, a complex background detection algorithm handles external scenes that "overlap" with the pantograph during imaging and prevent it from being recognized effectively. The dirt and blur detection algorithm, combining blob detection with an improved Brenner method, can accurately evaluate dirt or blur on the HSC, and the complex background detection algorithm, based on grayscale statistics and vertical projection, greatly reduces interference from external scenes during HSR operation. The proposed algorithm was analyzed on a large number of video samples of HSR operation, and its precision on three different test samples reached 99.92%, 99.90% and 99.98%, respectively. Experimental results show that the proposed algorithm has strong environmental adaptability, effectively overcomes the effects of complex backgrounds and external interference on pantograph detection, and has high practical application value.

1. Introduction

As an important part of the pantograph-catenary system (PCS), the pantograph is a special current-receiving device installed on the roof of a high-speed railway (HSR) train. When the pantograph is raised, it transmits power from the traction substation to the train through sliding contact between the pantograph and the catenary, providing the power required for HSR operation. Once a pantograph failure occurs, it directly affects the operational safety of the HSR [1,2,3]. Therefore, the current pantograph status must be accurately assessed through real-time detection to ensure the safety and stability of HSR operation. The PCS is shown in Figure 1.
Two main HSR models are in actual operation. Both usually run at 150–300 km/h when travelling stably, but the images captured by the high-speed cameras (HSC) mounted on the two models differ slightly: the left image in Figure 2 was captured on HSR-A, and the right image on HSR-B. It is worth mentioning that the images captured by the HSC in Figure 2 contain some Chinese text giving basic vehicle and time information; it does not affect the reader's understanding of this paper, and the same holds for the HSC images that appear later in the paper.
Although the HSCs on the two HSR models are mounted at different angles, both record at a frame rate of 25 FPS. Regardless of the HSR's operating speed, the HSC therefore captures only 25 pantograph images per second, so the algorithm must process at least 25 images per second to meet the real-time requirement. The region inside the red rectangle in Figure 2 is the region of interest (ROI), and the pantograph in the ROI is the main research object of this study.
Among current pantograph detection methods, Refs. [4,5] proposed the Catenary and Pantograph Video Monitor (CPVM-5C) system, but its cameras are generally installed at the HSR exit and therefore cannot detect and monitor a running HSR in real time. Refs. [6,7,8] proposed extracting pantograph edges by improved edge detection, wavelet transform, Hough transform, etc., but these are essentially traditional image processing methods, applicable only when the overall image is clear and the background is simple; they are limited and can hardly handle the complex situations of actual HSR operation. Refs. [9,10,11] proposed real-time pantograph detection using a single improved neural network whose detection results are given entirely by the network. This approach relies heavily on large datasets and is prone to many false alarms when the training samples are not rich enough; datasets of certain complex scenes in HSR operation are difficult to obtain, so it is hard to train a model covering a large number of rich scene samples, and the detection results are strongly disturbed under interference. Refs. [12,13,14,15] combine deep learning and image processing, which greatly improves stability over sole reliance on neural networks, but major limitations remain in complex scenes. The methods proposed in [16,17,18] are not very practical for complex scenes and external interference, and the complex scenes they can overcome are very limited.
In actual operation, HSR faces various complex environments and changing scenarios; even trains running on the same line may encounter hugely different scenes in different time periods. These differences arise from multiple factors and are irregular and difficult to predict. Because such scenes occur randomly, no training set for a neural network can cover all situations in all complex scenes and environments. With limited samples, improving a particular neural network does not fundamentally address the large number of pantograph-state false alarms in such scenarios, and cannot truly handle the impact of complex scenarios in actual HSR operation. Therefore, this paper focuses on filtering and detecting these complex scenes and external interference through dedicated algorithms, yielding a method better matched to actual HSR operation and more widely applicable, and reducing or even eliminating their impact on real-time neural network detection of the pantograph.

2. YOLO V4 Locates the Pantograph Region

You Only Look Once (YOLO) V4, proposed by Alexey Bochkovskiy et al., is a major upgrade of one-stage detectors in the field of object detection [19]. Compared with the previous version, YOLO V4 builds on YOLO V3 and replaces the darknet53 backbone with CSPdarknet53, which effectively reduces computation and improves learning ability. Meanwhile, YOLO V4 adds spatial pyramid pooling (SPP) over the backbone and replaces YOLO V3's feature pyramid network (FPN) with a path-aggregation neck, splicing feature maps at different scales and increasing the model's receptive field, which enables YOLO V4 to extract more details.
Average Precision (AP) and mean Average Precision (mAP) are important metrics for measuring the performance of an object detection algorithm; AP-50 and AP-75 are the AP values at Intersection over Union (IoU) thresholds of 0.5 and 0.75. The performance of YOLO V4 and current mainstream object detection algorithms on the Visual Object Classes (VOC) and Common Objects in Context (COCO) datasets is shown in Figure 3.
Figure 3 shows that YOLO V4 has clear advantages in all aspects. Its authors noted that YOLO V4 was the most advanced detector at the time [19], and it still offers strong performance. Therefore, this study uses YOLO V4 to locate the pantograph region, and the located region is passed to the subsequent algorithms. The overall flow of locating the pantograph region with YOLO V4 is shown in Figure 4.
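For concreteness, the following is a minimal sketch of how a trained YOLO V4 model might be run per frame with OpenCV's DNN module to return the pantograph bounding box; the weight/config file names and the 416 × 416 input size are assumptions, not the configuration actually used in this study.

```python
import cv2
import numpy as np

# Assumed file names for a YOLO V4 model trained on pantograph images.
net = cv2.dnn.readNetFromDarknet("yolov4-pantograph.cfg", "yolov4-pantograph.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255, swapRB=True)

def locate_pantograph(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Return the highest-confidence box (x_left, y_top, width, height), or None."""
    class_ids, scores, boxes = model.detect(frame, conf_thresh, nms_thresh)
    if len(boxes) == 0:
        return None
    best = int(np.argmax(np.asarray(scores).flatten()))
    return tuple(int(v) for v in boxes[best])
```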

3. HSC Blur and Dirt Detection Algorithm

3.1. Blurry HSC Screen and Dirty HSC Screen

During HSR operation, the HSC is always exposed outside the car, which makes it extremely vulnerable to external interference. This interference is mainly of two kinds: the effect of rain on pantograph imaging in rainy weather, and the effect of dirt attached to the HSC lens.

3.1.1. Rainwater

HSR operation must cope with very complicated weather conditions; on rainy days in particular, rainwater directly affects HSC imaging. Figure 5 illustrates the differing degrees of impact of rain on HSR-A and HSR-B. When the HSR runs at high speed, rainwater tends to blur the HSC imaging, making the captured pantograph unclear and causing YOLO V4 to assess it incorrectly.

3.1.2. Dirt

Dirt attached to the HSC lens can generally only be removed by manual cleaning. As shown in Figure 6, from the time the lens becomes dirty until the dirt is cleaned off, the dirty lens continuously affects YOLO V4's overall evaluation of the pantograph.

3.2. External Factors Cause YOLO V4 to Fail to Locate the Pantograph

When YOLO V4 cannot locate the pantograph due to external interference, the approximate position of the pantograph in the current frame can be inferred from the position determined in the last normal frame. When YOLO V4 locates the pantograph region, only the four parameters of the bounding box in Figure 2 are needed for accurate positioning: the horizontal coordinate ($x_{left}$) and vertical coordinate ($y_{top}$) of the upper-left corner point ($P_{topleft}$) of the bounding box, and the width and height of the pantograph. Figure 7 shows the variation of these four parameters, as located by YOLO, during normal operation of the two HSR models.
As can be seen from Figure 7, for both HSR-A and HSR-B, when normal operation is undisturbed by external scenes the pantograph region located by YOLO V4 remains essentially fixed, apart from small-range jitter. This jitter is caused by a combination of factors such as bumps during HSR operation and force changes between the pantograph and the catenary. It does not affect the approximate position of the pantograph in the image, so when YOLO V4 cannot locate the pantograph region due to external interference, the approximate position in the current frame can be inferred from the coordinates obtained in previous frames, and subsequent analysis can proceed.
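A minimal sketch of this fallback, assuming the locate_pantograph helper from the earlier sketch is available:

```python
# Cache the last successful YOLO V4 box and fall back to it when detection fails.
last_box = None  # (x_left, y_top, width, height) from the last successful frame

def pantograph_roi(frame):
    """Return (box, located_now): the current box, or the cached one on failure."""
    global last_box
    box = locate_pantograph(frame)  # sketch from Section 2
    if box is not None:
        last_box = box              # normal case: refresh the cache
        return box, True
    return last_box, False          # fallback: position inferred from earlier frames
```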

3.3. Improved Image Sharpness Evaluation Algorithm

The Brenner algorithm is a classical blur detection algorithm [33] that evaluates image sharpness by accumulating the squared grayscale differences between pixel pairs. Since the gray values of an in-focus image change much more sharply than those of a defocused image, and an in-focus image contains more edge information, this method can judge image sharpness quite accurately. However, the traditional Brenner algorithm cannot cope with the complex scene changes and variable external disturbances faced during HSR operation, so this paper proposes the emphasize object region-Brenner (EOR-Brenner) algorithm, which incorporates the pantograph region located by YOLO V4. The principle of EOR-Brenner is shown in Equation (1):
$$F = k_1 F_{IMG} + k_2 F_{ROI} = k_1 \sum_{x=0}^{img.cols-3} \sum_{y=0}^{img.rows-1} \left[ f(x+2,y) - f(x,y) \right]^2 + k_2 \sum_{x=x_{left}}^{x_{left}+width} \sum_{y=y_{top}}^{y_{top}+height} \left[ f(x+2,y) - f(x,y) \right]^2 \quad (1)$$
where $x$ and $y$ are the horizontal and vertical coordinates of a pixel, $f(x,y)$ is the pixel's gray value, $F_{IMG}$ and $F_{ROI}$ are the sharpness results of the corresponding regions, $k_1$ and $k_2$ are the weights of the corresponding regions, and $F$ is the final result of the improved Brenner algorithm.
Although the ROI occupies a relatively small area of the whole image, the pantograph is the key research object, so its region should receive a higher weight; note that ROI pixels appear in both sums of Equation (1), so their effective weight is $k_1 + k_2$. In this study, we recommend that $k_1$ be 2 or 4 times $k_2$, with the specific choice made flexibly according to the actual HSR operating line. Once $k_1$ and $k_2$ are determined, an appropriate threshold ($\lambda$) is selected for the computed EOR-Brenner score to distinguish clear from blurred images.
As shown in Equation (2), when the final EOR-Brenner result $F$ is above the set threshold $\lambda$, the image captured by the current HSC is considered clear. If the pantograph cannot be detected, or is detected as abnormal, in such a frame, the detection result can be assumed not to be caused by blurring of the HSC screen. Two situations then remain: (1) the pantograph is actually normal but disturbed by some other external factor, such as a complex background, causing it to go undetected or to be wrongly detected as abnormal; (2) the pantograph is genuinely abnormal. In either case, the subsequent algorithms must further evaluate the pantograph to finally determine its real state accurately.
$$\begin{cases} \text{Clear image}, & F > \lambda \\ \text{Blurred image}, & F < \lambda \end{cases} \quad (2)$$
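The following is a minimal sketch of Equations (1) and (2), assuming an 8-bit grayscale frame, the YOLO V4 box in (x_left, y_top, width, height) form, and the weight choice $k_1 = 2k_2$; the default threshold is purely illustrative, since $\lambda$ must be tuned for the actual line.

```python
import numpy as np

def brenner(gray):
    """Classical Brenner measure: sum of squared gray differences two pixels apart."""
    g = gray.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]              # f(x+2, y) - f(x, y)
    return float(np.sum(diff * diff))

def eor_brenner(gray, box, k1=2.0, k2=1.0):
    """Equation (1): weighted sum of the whole-image and ROI Brenner scores."""
    x, y, w, h = box
    return k1 * brenner(gray) + k2 * brenner(gray[y:y + h, x:x + w])

def is_blurred(gray, box, lam=1e7):          # lam: illustrative placeholder for lambda
    """Equation (2): the frame counts as blurred when F falls below lambda."""
    return eor_brenner(gray, box) < lam
```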

3.4. Blob Detection Algorithm Detects Screen Dirt

When dirt attaches to an HSC, it readily forms blobs. Blobs caused by dirt have characteristic areas, convexity, circularity and inertia ratios, so these attributes can be used to detect and filter them [34,35,36,37], and the number of blobs ultimately determines whether the HSC is dirty.
The blob area ($S$) reflects the size of the detected blob, while the circularity, derived from the area ($S$) and the corresponding perimeter ($C$), reflects how close the detected blob is to a circle; it is calculated as shown in Equation (3):
$$Value_{circularity} = \frac{4 \pi S}{C^2} \quad (3)$$
The convexity reflects the degree of concavity of the blob and can be obtained from the blob area ($S$) and the area of the blob's convex hull ($H$), as shown in Equation (4):
$$Value_{convexity} = \frac{S}{H} \quad (4)$$
The inertia ratio also reflects the shape of the blob. If an image is represented by $f(x,y)$, its moments can be expressed by Equation (5):
$$M_{ij} = \sum_{x} \sum_{y} x^i y^j f(x,y) \quad (5)$$
For a binary image, the zero-order moment $M_{00}$ equals its area, so its center of mass is given by Equation (6):
$$\{ \bar{x}, \bar{y} \} = \left\{ \frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}} \right\} \quad (6)$$
The central moment of the image is defined as shown in Equation (7):
$$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^p (y - \bar{y})^q f(x,y) \quad (7)$$
If only the second-order central moments are considered, the image is exactly equivalent to an ellipse of constant intensity with a defined size, orientation and eccentricity, centered at the image's center of mass. The covariance matrix of the image is shown in Equation (8):
$$\operatorname{cov}[f(x,y)] = \begin{bmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{bmatrix} = \begin{bmatrix} \mu_{20}/\mu_{00} & \mu_{11}/\mu_{00} \\ \mu_{11}/\mu_{00} & \mu_{02}/\mu_{00} \end{bmatrix} \quad (8)$$
The two eigenvalues $\lambda_1$ and $\lambda_2$ of this matrix correspond to the long and short axes of the image intensity ellipse and can be expressed by Equation (9):
$$\lambda_1 = \frac{\mu_{20} + \mu_{02}}{2} + \frac{\sqrt{4\mu_{11}^2 + (\mu_{20} - \mu_{02})^2}}{2}, \qquad \lambda_2 = \frac{\mu_{20} + \mu_{02}}{2} - \frac{\sqrt{4\mu_{11}^2 + (\mu_{20} - \mu_{02})^2}}{2} \quad (9)$$
The inertia ratio is finally obtained as shown in Equation (10):
$$Value_{inertia} = \frac{\lambda_2}{\lambda_1} = \frac{\mu_{20} + \mu_{02} - \sqrt{4\mu_{11}^2 + (\mu_{20} - \mu_{02})^2}}{\mu_{20} + \mu_{02} + \sqrt{4\mu_{11}^2 + (\mu_{20} - \mu_{02})^2}} \quad (10)$$
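As a quick numeric check of Equations (9) and (10), the closed-form eigenvalues can be compared against a numerical eigensolver; the moment values below are arbitrary examples, and because the inertia ratio is scale-invariant, normalizing by $\mu_{00}$ as in Equation (8) leaves it unchanged.

```python
import numpy as np

mu20, mu11, mu02 = 8.0, 1.5, 3.0                 # example central moments
root = np.sqrt(4 * mu11**2 + (mu20 - mu02)**2)
lam1 = (mu20 + mu02 + root) / 2                  # Equation (9), long axis
lam2 = (mu20 + mu02 - root) / 2                  # Equation (9), short axis
inertia = lam2 / lam1                            # Equation (10)

eigs = np.linalg.eigvalsh([[mu20, mu11], [mu11, mu02]])
assert np.allclose(sorted([lam1, lam2]), eigs)   # closed form matches the eigensolver
print(f"lambda1={lam1:.3f}, lambda2={lam2:.3f}, inertia={inertia:.3f}")
```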
The final count of blobs is obtained by filtering on area, convexity, circularity and inertia ratio; when the number of detected blobs exceeds the set threshold, it can be inferred that dirt is attached to the HSC surface, thereby achieving dirty-HSC detection. For the cases shown in Figure 6, the final detection results are shown in Figure 8.
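In practice, filtering blobs by exactly these four attributes is what OpenCV's SimpleBlobDetector provides; the sketch below uses it with illustrative filter values, which are assumptions rather than the tuned settings of this study.

```python
import cv2

def count_blobs(gray):
    """Count blobs that pass the area/circularity/convexity/inertia filters."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30                    # illustrative filter values
    params.filterByCircularity = True
    params.minCircularity = 0.3
    params.filterByConvexity = True
    params.minConvexity = 0.5
    params.filterByInertia = True
    params.minInertiaRatio = 0.2
    detector = cv2.SimpleBlobDetector_create(params)
    return len(detector.detect(gray))

def is_lens_dirty(gray, blob_thresh=20):   # blob_thresh: assumed threshold
    """The HSC is considered dirty when the blob count exceeds the threshold."""
    return count_blobs(gray) > blob_thresh
```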

3.5. Overall Process of HSC Blur and Dirt Detection Algorithm

As shown in Figure 9, the number of blobs in the current frame is first measured by the blob detection algorithm. When that number exceeds the set threshold, YOLO V4's failure to locate the pantograph in the current frame is attributed to dirt; if the number of detected blobs is below the threshold, EOR-Brenner is used to evaluate whether the current frame is blurred. In this way, the algorithm correctly determines whether a pantograph detection anomaly, or a failure to detect the pantograph, in the current frame is caused by a dirty or blurred HSC.
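The flow of Figure 9 then reduces to two checks in sequence; a sketch reusing the helpers above, with placeholder thresholds:

```python
def hsc_interference(gray, box, blob_thresh=20, lam=1e7):
    """Figure 9: test dirt first via the blob count, then blur via EOR-Brenner."""
    if count_blobs(gray) > blob_thresh:
        return "dirty"
    if eor_brenner(gray, box) < lam:
        return "blurred"
    return "clear"
```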

4. HSR Complex Background Detection Algorithm

4.1. The Complex Background That HSR Needs to Face

During actual operation, HSR must cope with a large number of external scene changes and variable terrain and environments. These external scenes, terrain, and environments can directly prevent the algorithm from correctly assessing the real state of the pantograph, producing a large number of false alarms. Unlike blur and dirt, which act directly on the HSC, these disturbances leave the captured images clear and free of blobs; they affect pantograph detection because the external scene and the pantograph "overlap" in the HSC imaging, causing a large number of false alarms on the pantograph state. In this study, we refer to this type of interference as a "complex background"; common complex backgrounds are catenary support devices, the sun, bridges, tunnels, and HSR platforms.
In this study, we propose an HSR complex background detection algorithm that accurately detects these complex scenes during HSR operation and thereby excludes their influence on pantograph state evaluation.

4.1.1. Catenary Support Devices

As an extremely important part of the huge HSR system, the catenary support device not only provides electrical insulation but also bears a certain mechanical load. It is the most frequently appearing background and, as shown in Figure 10, often affects normal pantograph detection.

4.1.2. Sun

As shown in Figure 11, when the sun appears in the pantograph imaging region, the strong light causes a “partial absence”-like phenomenon in the pantograph.

4.1.3. Bridge

Due to the complex geographical environment, when two areas are separated by a river, dedicated or mixed-use bridges must be built across it to carry the HSR; in more and more cities, numerous viaducts are also being built for HSR access. When the HSR crosses a bridge, the bridge directly affects the detection and positioning of the pantograph, as shown in Figure 12.

4.1.4. Tunnel

Tunnels greatly reduce travel time and shorten the mileage between two areas. Figure 13 shows the images captured by the HSC before and after the HSR enters a tunnel. Once the HSR runs stably inside the tunnel, as shown in Figure 13c, normal monitoring of the pantograph is still possible because the fill light on the HSR is turned on. However, as shown in Figure 13b,d, the dramatic light changes during the short periods when the HSR enters and leaves the tunnel prevent the neural network from accurately locating and detecting the pantograph.

4.1.5. Platform

As shown in Figure 14, when the HSR drives into a platform, the platform partially overlaps the pantograph region, affecting YOLO's positioning and detection of the pantograph and causing a large number of false pantograph-status alarms at the platform.

4.2. Tunnel Detection Algorithm Based on the Overall Average Grayscale of the Image

False alarms of this kind, caused by drastic light changes that briefly prevent YOLO from detecting and locating the pantograph, can be excluded using the grayscale variation of the image. The average grayscale of an image is computed as shown in Equation (11):
$$\bar{g} = \frac{\sum_{i=0}^{img.cols-1} \sum_{j=0}^{img.rows-1} P(i,j)}{img.cols \times img.rows} \quad (11)$$
where $P(i,j)$ is the grayscale of the corresponding pixel, $img.rows$ is the image height and $img.cols$ is the image width.
When the pantograph operates against a relatively clear and clean background, the average grayscale of each frame fluctuates within a small range as the HSR runs and the scene continuously changes, but no large change occurs. Figure 15 shows the variation in average grayscale of HSC images before and after different trains enter and exit tunnels.
As can be seen from Figure 15, when the HSR runs normally outside the tunnel, the average grayscale of the image fluctuates only within a very small range and remains basically stable. When the HSR enters a tunnel, the average gray value of the captured image drops to about 5 (as shown in Figure 13b, the image is essentially black) because the fill light is not yet on and the light inside and outside the tunnel differs drastically. After the fill light is turned on and a short adaptation period passes, the HSR travels stably in the tunnel and the average grayscale again remains relatively stable; the time spent in the tunnel is determined by the train's speed and the tunnel's length. When the HSR exits the tunnel, running from a relatively dark environment into a bright one, the HSC is briefly overexposed (as shown in Figure 13d, the image is essentially white), and the average grayscale of the captured image jumps to roughly 250.
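A minimal sketch of this tunnel test, assuming the entry/exit gray levels of about 5 and 250 reported above; the jump threshold is an illustrative assumption.

```python
import numpy as np

def avg_gray(gray):
    """Equation (11): mean gray level over all pixels."""
    return float(np.mean(gray))

def tunnel_transition(prev_mean, cur_mean, jump=80):
    """Flag the frame-to-frame grayscale jumps observed at tunnel portals."""
    if cur_mean - prev_mean < -jump and cur_mean < 30:
        return "entering"   # fill light not yet on: image nearly black
    if cur_mean - prev_mean > jump and cur_mean > 225:
        return "exiting"    # brief overexposure: image nearly white
    return None
```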

4.3. Sun Detection Algorithm Based on Local Average Grayscale of Image Pantograph Region

The influence of the sun on the HSR is highly uncertain. We cannot accurately predict that a train will pass a certain point on a certain line at exactly the moment the sun appears in its pantograph imaging region and affects YOLO's assessment of the pantograph state. Moreover, not every appearance of the sun disturbs pantograph detection as in Figure 11: Figure 16 shows cases where the sun appears in images captured by the HSC but does not affect YOLO's detection of the pantograph region.
Figure 17 shows the scenes corresponding to Figure 16 after the HSR leaves the area affected by the sun, and Figure 18 compares the average grayscale of the corresponding scenes in Figure 16 and Figure 17.
It can be found that the appearance of the sun in an HSC image does not necessarily increase the overall average grayscale. However, when the sun affects pantograph detection, it invariably raises the average grayscale of the ROI. When the sun is absent, the difference between the average grayscale of the whole image and that of the ROI is small; once the sun affects the pantograph, the difference becomes large. Using this distinctive difference, it is possible to determine whether the pantograph is detected as anomalous in the current image because of the sun. Figure 19 shows the average grayscale of the whole image and of the ROI, and the difference between them, while the sun affects pantograph detection.
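A minimal sketch of the sun test built on this difference; the gap threshold is an illustrative assumption.

```python
import numpy as np

def sun_in_roi(gray, box, gap=60):
    """Flag the sun when the ROI's mean gray far exceeds the whole image's."""
    x, y, w, h = box
    roi_mean = float(np.mean(gray[y:y + h, x:x + w]))
    return roi_mean - float(np.mean(gray)) > gap
```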

4.4. Background Detection Algorithm for Catenary Support Devices, Bridges, and Platforms Based on Vertical Projection

Catenary support devices, bridges, and platforms do not excessively affect the average grayscale of the images captured by the HSC, so for these three common external disturbances we eliminate the interference using vertical projection. As shown in Figure 20a, the left region of interest (L-ROI) and right region of interest (R-ROI) are positioned on the basis of the ROI located by YOLO V4. First, the image captured by the HSC is binarized to highlight the object under study (Figure 20b). Then an opening operation is applied to the binary image to reduce interference (Figure 20c). Finally, the vertical projections of the L-ROI, ROI, and R-ROI regions are computed from the opened image, as shown in Figure 21, where the height of the white region in the vertical projection reflects the number of white pixels in the corresponding column of the binary image.
As shown in Figure 22, when the HSR operates normally without external disturbance, the percentage of white areas in the vertical projections of the L-ROI and R-ROI is low, while the projection of the ROI contains a large percentage of white areas.
The impact of the catenary support device on pantograph detection is much smaller than that of the other complex backgrounds, but the percentage of white areas in the vertical projection still reflects the changes brought by this scenario very accurately. Figure 23 shows the changes in the white-area percentages of the L-ROI, ROI and R-ROI vertical projections when the catenary support devices affect these regions during HSR operation.
The effect of bridges on the white-area percentage of the vertical projections of the different regions during HSR operation is shown in Figure 24. Since the HSC angles of HSR-A and HSR-B differ, bridges do not affect the L-ROI and R-ROI projections identically, but in both cases they cause a huge change in the white-area percentage of at least one of the two.
The effect of the platform on the white-area percentage of the vertical projections of the different regions is shown in Figure 25. Again due to the HSC angle, the platform's impact on HSR-A and HSR-B differs, but in both cases it affects at least one of the R-ROI or L-ROI.
From Figure 22, Figure 23, Figure 24 and Figure 25, it can be seen that the white-area percentage of the ROI projection changes little under complex background interference, while the changes in the L-ROI and R-ROI are very obvious; this paper therefore detects complex background interference mainly through the projections of the L-ROI and R-ROI regions.
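A minimal sketch of this projection test, assuming (x, y, w, h) boxes for the side regions and the >35% white-area criterion reported in Section 5.4; the binarization threshold and opening kernel are assumptions.

```python
import cv2
import numpy as np

def white_ratio(region, bin_thresh=127):
    """Fraction of white pixels in the vertical projection of a binarized region."""
    _, binary = cv2.threshold(region, bin_thresh, 255, cv2.THRESH_BINARY)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    column_white = np.sum(opened == 255, axis=0)   # vertical projection per column
    return float(np.sum(column_white)) / opened.size

def crop(gray, box):
    x, y, w, h = box
    return gray[y:y + h, x:x + w]

def complex_background(gray, l_roi, r_roi, ratio_thresh=0.35):
    """Catenary supports, bridges and platforms disturb at least one side region."""
    return (white_ratio(crop(gray, l_roi)) > ratio_thresh or
            white_ratio(crop(gray, r_roi)) > ratio_thresh)
```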

4.5. Overall Process of HSR Complex Background Detection Algorithm

The overall process of the complex background detection algorithm is shown in Figure 26. For a pantograph image captured by an HSC, when the pantograph cannot be detected or is detected as abnormal, the complex background detection algorithm assesses whether the current detection result may have been affected by a complex background.
The specific process is as follows. First, the change between the overall average grayscale of the current frame and that of the previous frame is used to evaluate whether the detection result may be affected by the drastic light changes as the HSR enters or leaves a tunnel. If not, the relationship between the overall average grayscale of the image and the average grayscale of the ROI is used to assess whether the sun may have intruded into the pantograph region and influenced detection. If the sun's influence can also be excluded, catenary support devices, platforms, and bridges are detected by vertical projection to finally determine whether the pantograph detection result is influenced by a complex background.
If the HSR complex background detection algorithm excludes the influence of a complex background, two possibilities remain for a pantograph that is undetected or detected as abnormal: (1) although the current image is not disturbed by a complex background, other interference may have led to a misjudgment of the pantograph; (2) the pantograph is genuinely abnormal. In this case, the overall algorithm proposed in Section 5.1 is applied to accurately determine the real state of the pantograph.

5. Experiments and Conclusions

5.1. The Overall Process of Pantograph Detection Algorithm

The overall process of the algorithm is shown in Figure 27. When YOLO V4 cannot detect the pantograph in a frame, or detects it as abnormal, the algorithm first applies the HSC blur and dirt detection algorithm; once a dirty or blurred screen is ruled out as the cause, the HSR complex background detection algorithm determines whether the anomaly is caused by a complex background. In this way, the real state of the pantograph can be judged accurately.
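The decision chain of Figure 27 might then look as follows; this sketch reuses the earlier helper sketches, derives the side regions from the YOLO V4 box, and uses illustrative thresholds throughout.

```python
def diagnose_frame(gray, box, prev_mean, lam=1e7):
    """Figure 27: blur/dirt checks first, then the complex background checks."""
    x, y, w, h = box
    l_roi = (max(x - w, 0), y, w, h)                 # assumed ROI-sized side boxes
    r_roi = (min(x + w, gray.shape[1] - w), y, w, h)
    cur_mean = avg_gray(gray)
    if count_blobs(gray) > 20:                       # assumed blob threshold
        return "HSC lens dirty"
    if eor_brenner(gray, box) < lam:
        return "HSC image blurred"
    if tunnel_transition(prev_mean, cur_mean):
        return "tunnel transition"
    if sun_in_roi(gray, box):
        return "sun in pantograph region"
    if complex_background(gray, l_roi, r_roi):
        return "complex background"
    return "pantograph state genuinely abnormal"
```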

5.2. Performance Evaluation of Algorithms under Complex Background Interference

During operation, HSR must frequently cope with the interference that scenarios such as catenary support devices, the sun, bridges, platforms, and tunnels bring to pantograph detection. The performance of different methods for detecting pantographs against complex backgrounds is shown in Table 1.
Refs. [12,17,18,38,39,40] all proposed good methods and ideas for improving their algorithms' performance against complex backgrounds. However, faced with the more complex background disturbances of actual HSR operation, these algorithms still cannot detect pantographs correctly in such scenes. In contrast, the HSR complex background detection algorithm proposed in this study correctly detects and evaluates the pantograph state in the relevant scenes. The results in Table 1 show that the proposed method better matches the real situation and practical needs of HSR and performs better under the influence of complex backgrounds.

5.3. EOR-Brenner Evaluates the Sharpness of Pantograph Images Captured by HSC

Figure 28 shows the EOR-Brenner sharpness scores of images captured by the two HSR models under different conditions: Frames 1–100 correspond to images captured during normal, disturbance-free operation, Frames 101–200 to images blurred by rain on the HSC, and Frames 201–300 to images from a dirty HSC lens.
Comparing the curves in Figure 28, EOR-Brenner gives higher scores than Brenner for clear pantograph images and lower scores for blurred ones, while the two scores are very close for dirty images. At the same time, EOR-Brenner separates clear, blurred and dirty images more distinctly, whereas the original Brenner scores for dirty and clear images are very similar. The improved EOR-Brenner algorithm thus better matches the real HSR operating environment and better meets the actual needs of HSR operation.

5.4. Evaluation of the Overall Performance of the Algorithm in This Study

The combined test results for complex scenes and for blurred and dirty cases are shown in Table 2 and Table 3. The red entries correspond to clear images without interference, the gray entries to blurred images, the purple entries to images affected by dirt, and the pink entries to images disturbed by a complex environment.
Figure 29 shows scenes from the same HSR running on the same line at different times. Due to intermittent heavy rainfall, the degree of image blurring caused by rain on the HSC differs between moments. Table 4 shows the sharpness algorithm's results for the same train on the same line under these differing influences.
As can be seen from Tables 2–4, whether different complex backgrounds or external disturbances affect pantograph detection on different trains, or the external environment changes for the same train at different moments, the proposed EOR-Brenner algorithm accurately evaluates the sharpness of the affected pantograph images: the clearer the image, the higher the score. Blurred pantograph images score much lower than normal ones, enabling accurate judgment of blurring. Note, however, that for the dirty-lens images corresponding to Figure 6, the many blobs produced by the dirt add edge detail to the image, so EOR-Brenner does not give dirty images a low score. But the number of blobs in a dirty image is far higher than in other cases, so the blob count still detects dirty images accurately.
For complex backgrounds affecting pantograph detection, comparing Table 2 and Table 3 shows that the average grayscale of the whole image suddenly jumps to around 0 or 255 before and after entering and leaving a tunnel (Figure 13), while no other disturbance causes such a drastic grayscale change; this jump provides a strong basis for deciding whether the HSR is entering or leaving a tunnel, excluding the tunnel's effect on pantograph detection. When the sun affects pantograph detection (Figure 11), it causes a large difference between the average grayscale of the ROI and that of the whole image, whereas in other cases this difference is small. Compared with other disturbances, catenary support devices, bridges, and platforms (Figure 10, Figure 12 and Figure 14) drive the white percentage of the vertical projection of at least one of the L-ROI and R-ROI above 35%, while in other scenes these percentages remain around 1% and never exceed 10%. These features enable accurate detection of the corresponding scenes.
The results of a comprehensive test covering a variety of scenes are shown in Table 5, and the ablation experiments in Table 6 demonstrate the effectiveness of each module. It is easy to see that the HSR complex background detection algorithm and the HSC blur and dirt detection algorithm proposed in this study greatly improve the accuracy of pantograph evaluation in the presence of complex backgrounds and external disturbances. In general, the proposed algorithm matches the real situation of HSR operation and meets its actual needs, giving it considerable practical application value.

6. Conclusions

The pantograph detection algorithm proposed in this study fully considers the actual needs of HSR operation and comprehensively analyzes the complex scenarios and external disturbances faced during HSR operation. The proposed algorithm achieves a precision of 99.92%, 99.90% and 99.98% on three different test samples. At the same time, it processes 49 FPS, 43 FPS and 43 FPS on the three samples, meeting the requirement of processing at least 25 images per second in actual HSR operation. The method solves two major difficulties in neural-network-based pantograph detection: first, current methods are easily affected by external interference and cannot detect and eliminate it; second, because pantograph samples in complex situations are scarce and hard to collect, the training set cannot cover all situations, so detection accuracy in complex situations is low.

Author Contributions

Methodology, P.T. and Z.C.; Supervision, P.T., X.L., J.D., J.M. and Y.F.; Visualization, Z.C., W.L. and C.H.; Writing—original draft, Z.C.; Writing—review & editing, P.T. and Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 51577166, 51637009, 51677171 and 51827810), the National Key R&D Program (No. 2018YFB0606000), the China Scholarship Council (No. 201708330502), Shuohuang Railway Development Limited Liability Company (SHTL-2020-13), and the State Key Laboratory of Industrial Control Technology (ICT2022B29).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tan, P.; Ma, J.e.; Zhou, J.; Fang, Y.t. Sustainability development strategy of China's high speed rail. J. Zhejiang Univ. Sci. A 2016, 17, 923–932.
2. Tan, P.; Li, X.; Wu, Z.; Ding, J.; Ma, J.; Chen, Y.; Fang, Y.; Ning, Y. Multialgorithm fusion image processing for high speed railway dropper failure–defect detection. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 4466–4478.
3. Tan, P.; Li, X.F.; Xu, J.M.; Ma, J.E.; Wang, F.J.; Ding, J.; Fang, Y.T.; Ning, Y. Catenary insulator defect detection based on contour features and gray similarity matching. J. Zhejiang Univ. Sci. A 2020, 21, 64–73.
4. Gao, S.; Liu, Z.; Yu, L. Detection and monitoring system of the pantograph-catenary in high-speed railway (6C). In Proceedings of the 2017 7th International Conference on Power Electronics Systems and Applications-Smart Mobility, Power Transfer & Security (PESA), Hong Kong, China, 12–14 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7.
5. Gao, S. Automatic detection and monitoring system of pantograph–catenary in China's high-speed railways. IEEE Trans. Instrum. Meas. 2020, 70, 1–12.
6. He, D.; Chen, J.; Liu, W.; Zou, Z.; Yao, X.; He, G. Online Images Detection for Pantograph Slide Abrasion. In Proceedings of the 2020 IEEE 20th International Conference on Communication Technology (ICCT), Nanning, China, 28–31 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1365–1371.
7. Ma, L.; Wang, Z.y.; Gao, X.r.; Wang, L.; Yang, K. Edge detection on pantograph slide image. In Proceedings of the 2009 2nd International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1–3.
8. Li, H. Research on fault detection algorithm of pantograph based on edge computing image processing. IEEE Access 2020, 8, 84652–84659.
9. Huang, S.; Zhai, Y.; Zhang, M.; Hou, X. Arc detection and recognition in pantograph–catenary system based on convolutional neural network. Inf. Sci. 2019, 501, 363–376.
10. Jiang, S.; Wei, X.; Yang, Z. Defect detection of pantograph slider based on improved Faster R-CNN. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 5278–5283.
11. Jiao, Z.; Ma, C.; Lin, C.; Nie, X.; Qing, A. Real-time detection of pantograph using improved CenterNet. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 85–89.
12. Wei, X.; Jiang, S.; Li, Y.; Li, C.; Jia, L.; Li, Y. Defect detection of pantograph slide based on deep learning and image processing technology. IEEE Trans. Intell. Transp. Syst. 2019, 21, 947–958.
13. Li, D.; Pan, X.; Fu, Z.; Chang, L.; Zhang, G. Real-time accurate deep learning-based edge detection for 3-D pantograph pose status inspection. IEEE Trans. Instrum. Meas. 2022, 71, 1–12.
14. Sun, R.; Li, L.; Chen, X.; Wang, J.; Chai, X.; Zheng, S. Unsupervised learning based target localization method for pantograph video. In Proceedings of the 2020 16th International Conference on Computational Intelligence and Security (CIS), Nanning, China, 27–30 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 318–323.
15. Na, K.M.; Lee, K.; Shin, S.K.; Kim, H. Detecting deformation on pantograph contact strip of railway vehicle on image processing and deep learning. Appl. Sci. 2020, 10, 8509.
16. Huang, Z.; Chen, L.; Zhang, Y.; Yu, Z.; Fang, H.; Zhang, T. Robust contact-point detection from pantograph-catenary infrared images by employing horizontal-vertical enhancement operator. Infrared Phys. Technol. 2019, 101, 146–155.
17. Lu, S.; Liu, Z.; Chen, Y.; Gao, Y. A novel subpixel edge detection method of pantograph slide in complicated surroundings. IEEE Trans. Ind. Electron. 2021, 69, 3172–3182.
18. Luo, Y.; Yang, Q.; Liu, S. Novel vision-based abnormal behavior localization of pantograph-catenary for high-speed trains. IEEE Access 2019, 7, 180935–180946.
19. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
20. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
21. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28.
22. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37.
23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
24. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
25. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
26. Ju, M.; Luo, H.; Wang, Z.; Hui, B.; Chang, Z. The application of improved YOLO V3 in multi-scale target detection. Appl. Sci. 2019, 9, 3775.
27. Kim, S.W.; Kook, H.K.; Sun, J.Y.; Kang, M.C.; Ko, S.J. Parallel feature pyramid network for object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 234–250.
28. Wang, T.; Anwer, R.M.; Cholakkal, H.; Khan, F.S.; Pang, Y.; Shao, L. Learning rich features at high-speed for single-shot object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1971–1980.
29. Chao, P.; Kao, C.Y.; Ruan, Y.S.; Huang, C.H.; Lin, Y.L. HarDNet: A low memory traffic network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 3552–3561.
30. Zhang, S.; Wen, L.; Bian, X.; Lei, Z.; Li, S.Z. Single-shot refinement neural network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4203–4212.
31. Zhao, Q.; Sheng, T.; Wang, Y.; Tang, Z.; Chen, Y.; Cai, L.; Ling, H. M2Det: A single-shot object detector based on multi-level feature pyramid network. Proc. AAAI Conf. Artif. Intell. 2019, 33, 9259–9266.
32. Liu, H.; Zhang, L.; Xin, S. An Improved Target Detection General Framework Based on Yolov4. In Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 27–31 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1532–1536.
33. Maier, A.; Niederbrucker, G.; Uhl, A. Measuring image sharpness for a computer vision-based Vickers hardness measurement system. In Proceedings of the Tenth International Conference on Quality Control by Artificial Vision, Saint-Etienne, France, 28–30 June 2011; SPIE: Bellingham, WA, USA, 2011; Volume 8000, pp. 199–208.
34. Kaspers, A. Blob Detection. Master's Thesis, Utrecht University, Utrecht, The Netherlands, 2011.
35. Zhang, M.; Wu, T.; Beeman, S.C.; Cullen-McEwen, L.; Bertram, J.F.; Charlton, J.R.; Baldelomar, E.; Bennett, K.M. Efficient small blob detection based on local convexity, intensity and shape information. IEEE Trans. Med. Imaging 2015, 35, 1127–1137.
36. Bochem, A.; Herpers, R.; Kent, K.B. Hardware acceleration of blob detection for image processing. In Proceedings of the 2010 Third International Conference on Advances in Circuits, Electronics and Micro-Electronics, Venice, Italy, 18–25 July 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 28–33.
37. Xiong, X.; Choi, B.J. Comparative analysis of detection algorithms for corner and blob features in image processing. Int. J. Fuzzy Log. Intell. Syst. 2013, 13, 284–290.
38. Thanh, N.D.; Li, W.; Ogunbona, P. An improved template matching method for object detection. In Proceedings of the Asian Conference on Computer Vision, Xi'an, China, 23–27 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 193–202.
39. Zhou, H.; Yuan, Y.; Shi, C. Object tracking using SIFT features and mean shift. Comput. Vis. Image Underst. 2009, 113, 345–352.
40. Li, X.; Zhang, T.; Shen, X.; Sun, J. Object tracking using an adaptive Kalman filter combined with mean shift. Opt. Eng. 2010, 49, 020503.
41. Krotkov, E.P. Active Computer Vision by Cooperative Focus and Stereo; Springer Science & Business Media: New York, NY, USA, 2012.
42. Riaz, M.; Park, S.; Ahmad, M.B.; Rasheed, W.; Park, J. Generalized Laplacian as focus measure. In Proceedings of the International Conference on Computational Science, Krakow, Poland, 23–25 June 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1013–1021.
43. Chern, N.N.K.; Neow, P.A.; Ang, M.H. Practical issues in pixel-based autofocusing for machine vision. In Proceedings of the 2001 ICRA, IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164), Seoul, Korea, 21–26 May 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 3, pp. 2791–2796.
44. Huang, H.; Ge, P. Depth extraction in computational integral imaging based on bilinear interpolation. Opt. Appl. 2020, 50, 497–509.
45. Feichtenhofer, C.; Fassold, H.; Schallauer, P. A perceptual image sharpness metric based on local edge gradient analysis. IEEE Signal Process. Lett. 2013, 20, 379–382.
46. Zhang, K.; Huang, D.; Zhang, B.; Zhang, D. Improving texture analysis performance in biometrics by adjusting image sharpness. Pattern Recognit. 2017, 66, 16–25.
47. Xie, X.P.; Zhou, J.; Wu, Q.Z. No-reference quality index for image blur. J. Comput. Appl. 2010, 30, 921.
Figure 1. Schematic of PCS.
Figure 2. HSC footage of pantographs.
Figure 3. Comparison of YOLO V4 with other mainstream neural networks [20,21,22,23,24,25,26,27,28,29,30,31,32]. (a) Test results on VOC2007 + VOC2012. (b) Test results on the COCO dataset.
Figure 4. YOLO V4 overall algorithm process.
Figure 5. Blurred HSC imaging caused by rainwater. (a) HSR-A. (b) HSR-B.
Figure 6. The HSC lens has a lot of dirt attached to it. (a) HSR-A. (b) HSR-B.
Figure 7. Changes in the four parameters of the bounding box when YOLO V4 is positioned normally without external interference.
Figure 8. HSC screen dirt detection results. (a) HSR-A. (b) HSR-B.
Figure 9. HSC blur and dirt detection algorithm flow chart.
Figure 10. Catenary support device affects pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 11. Sun affects pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 12. Bridge affects pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 13. Tunnels affect pantograph detection. (a) Before the HSR enters the tunnel. (b) The moment the HSR enters the tunnel. (c) After the fill light is turned on, the HSR runs stably in the tunnel. (d) The moment the HSR exits the tunnel.
Figure 14. Platform affects pantograph detection. (a) HSR-A. (b) HSR-B.
Figure 15. Average grayscale variation of images of HSR-A (top) and HSR-B (bottom) when driving into different tunnels.
Figure 16. Sun did not affect YOLO detection of pantographs in HSR-A and HSR-B. (a) Case I. (b) Case II. (c) Case III. (d) Case IV. (e) Case V. (f) Case VI.
Figure 17. Scenes captured by the HSCs corresponding to Figure 16 when the sun is not in the frame. (a) Case I. (b) Case II. (c) Case III. (d) Case IV. (e) Case V. (f) Case VI.
Figure 18. Average grayscale comparison.
Figure 19. Average grayscale variation in the corresponding areas of HSR-A (top) and HSR-B (bottom) while the sun affects pantograph detection.
Figure 20. Image binarization and opening operations. (a) L-ROI, ROI and R-ROI. (b) Binary image. (c) Binary image after opening operation.
Figure 21. Binary images of different regions and the corresponding vertical projections after the opening operation. (a) L-ROI. (b) ROI. (c) R-ROI.
Figure 22. Change in the percentage of white areas in the vertical projection of different areas of HSR-A (top) and HSR-B (bottom) when the HSR is operated without external disturbances.
Figure 23. Changes in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) during HSR operation after being affected by the catenary support devices.
Figure 24. Changes in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) during HSR operation after being influenced by the bridge.
Figure 25. Changes in the percentage of white areas in the vertical projections of different areas of HSR-A (top) and HSR-B (bottom) during HSR operation after being influenced by the platform.
Figure 26. HSR complex background detection algorithm flow chart.
Figure 27. Pantograph detection algorithm flow chart.
Figure 28. EOR-Brenner evaluation results of images captured by HSR-A and HSR-B under different conditions.
Figure 29. Scenes taken by the same HSR at different moments in rainy weather. (a) Case I. (b) Case II. (c) Case III. (d) Case IV. (e) Case V. (f) Case VI.
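The vertical-projection figures above (Figures 20–25) all rest on the same three-step chain: binarize a region of interest, clean it with a morphological opening, and measure how much of the vertical projection is white. The following is a minimal sketch of that chain in Python with OpenCV; the threshold, kernel size, ROI coordinates, and the exact "percentage of white" definition (here, the share of projection columns containing any white pixel) are illustrative assumptions, not values or definitions taken from this paper.

```python
import cv2
import numpy as np

def white_ratio_of_projection(gray_roi: np.ndarray, thresh: int = 128) -> float:
    """Binarize an ROI, remove small specks with an opening, and return the
    percentage of vertical-projection columns that contain white pixels."""
    _, binary = cv2.threshold(gray_roi, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    column_has_white = opened.max(axis=0) > 0  # vertical projection per column
    return 100.0 * float(column_has_white.mean())

# Hypothetical usage on illustrative side regions of a frame:
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
l_roi, r_roi = frame[:, :200], frame[:, -200:]
print(white_ratio_of_projection(l_roi), white_ratio_of_projection(r_roi))
```

A catenary mast, bridge, or platform sweeping through L-ROI or R-ROI drives this percentage up sharply, which is exactly the behavior Table 3 and Figures 22–25 document.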
Table 1. Performance of different algorithms when dealing with complex backgrounds.

| Method | Whether the Pantograph Can Be Detected Correctly under the Complex Background |
|---|---|
| TM [38] | × |
| MS + SIFT [39] | × |
| MS + KF [40] | × |
| PDDNet [12] | × |
| SED [17] | × |
| Improved Faster R-CNN [18] | × |
| The method of this study | ✓ |
Table 2. Comprehensive evaluation of the images presented in this article I. All columns after the first are scores from different sharpness evaluation algorithms.

| Image Serial Number | Tenengrad [41] | Laplacian [42] | SMD [43] | SMD2 [44] | EG [45] | EAV [46] | NRSS [47] | Brenner [33] | EOR-Brenner |
|---|---|---|---|---|---|---|---|---|---|
| Figure 2 left | 22.5 | 4.24 | 1.81 | 2.01 | 9.34 | 38.18 | 0.79 | 252 | 704 |
| Figure 2 right | 31.1 | 8.25 | 3.23 | 5.18 | 17.26 | 48.25 | 0.91 | 400 | 876 |
| Figure 5a | 9.4 | 2.18 | 0.76 | 0.57 | 2.31 | 23.44 | 0.75 | 95 | 55 |
| Figure 5b | 10.57 | 2.49 | 0.86 | 0.64 | 2.46 | 27.89 | 0.75 | 117 | 64 |
| Figure 6a | 31.64 | 4.45 | 2.72 | 2.35 | 13.92 | 39.01 | 0.82 | 158 | 228 |
| Figure 6b | 32.81 | 5.52 | 2.77 | 2.75 | 16.32 | 50.55 | 0.84 | 286 | 476 |
| Figure 10a | 26.27 | 4.55 | 2.13 | 2.48 | 11.98 | 44.48 | 0.77 | 269 | 686 |
| Figure 10b | 39.79 | 6.76 | 3.54 | 5.13 | 21.42 | 66.29 | 0.81 | 363 | 767 |
| Figure 11a | 24.00 | 4.56 | 2.20 | 2.71 | 13.62 | 51.25 | 0.81 | 143 | 310 |
| Figure 11b | 14.00 | 2.54 | 1.22 | 1.42 | 6.77 | 42.21 | 0.78 | 75 | 285 |
| Figure 12a | 42.92 | 6.78 | 3.47 | 3.96 | 21.19 | 56.17 | 0.79 | 358 | 613 |
| Figure 12b | 31.82 | 4.84 | 2.67 | 3.61 | 17.03 | 55.23 | 0.78 | 221 | 346 |
| Figure 13a | 27.18 | 4.12 | 2.30 | 2.75 | 13.49 | 46.28 | 0.76 | 162 | 356 |
| Figure 13b | 10.44 | 2.21 | 0.86 | 0.85 | 2.43 | 9.76 | 0.74 | 229 | 230 |
| Figure 13c | 20.96 | 3.70 | 1.80 | 1.54 | 7.97 | 32.38 | 0.75 | 209 | 342 |
| Figure 13d | 10.65 | 2.34 | 0.88 | 0.74 | 2.38 | 10.11 | 0.75 | 245 | 246 |
| Figure 14a | 46.62 | 7.53 | 4.05 | 6.12 | 26.28 | 80.26 | 0.78 | 305 | 924 |
| Figure 14b | 39.25 | 6.14 | 3.38 | 3.21 | 22.02 | 86.59 | 0.78 | 310 | 551 |
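Two of the measures scored above are simple enough to state exactly. What follows is a minimal sketch (Python with NumPy and OpenCV assumed) of the classic Brenner gradient [33] and the Tenengrad measure [41]; EOR-Brenner is this paper's own variant of Brenner and is not reproduced here, and the averaging used below is an illustrative normalization choice.

```python
import cv2
import numpy as np

def brenner(gray: np.ndarray) -> float:
    """Brenner gradient: squared difference between pixels two columns apart."""
    g = gray.astype(np.float64)
    return float(np.mean((g[:, 2:] - g[:, :-2]) ** 2))

def tenengrad(gray: np.ndarray) -> float:
    """Tenengrad: mean squared Sobel gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))
```

Both scores fall as the image blurs, which matches the pattern in Table 2: the rain-blurred frames of Figure 5 score far below the sharp frames of Figure 2 on every measure.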
Table 3. Comprehensive evaluation of the images presented in this article II.

| Image Serial Number | Vertical Projection, L-ROI (%) | Vertical Projection, R-ROI (%) | Average Grayscale (Whole) | Average Grayscale (ROI) | Number of Blobs |
|---|---|---|---|---|---|
| Figure 2 left | 0.5 | 0.5 | 135 | 146 | 57 |
| Figure 2 right | 0.3 | 0.4 | 148 | 154 | 62 |
| Figure 5a | 0.4 | 0.4 | 159 | 175 | 30 |
| Figure 5b | 0.5 | 0.3 | 158 | 179 | 29 |
| Figure 6a | 3.3 | 1.1 | 179 | 190 | 481 |
| Figure 6b | 6.1 | 0.7 | 143 | 149 | 445 |
| Figure 10a | 1.9 | 38.6 | 120 | 114 | 61 |
| Figure 10b | 14.1 | 72.0 | 117 | 116 | 73 |
| Figure 11a | 3.4 | 0.5 | 178 | 212 | 69 |
| Figure 11b | 0.2 | 0.5 | 189 | 221 | 44 |
| Figure 12a | 46.0 | 44.7 | 118 | 122 | 140 |
| Figure 12b | 83.2 | 67.7 | 106 | 100 | 91 |
| Figure 13a | 47.8 | 69.0 | 149 | 154 | 117 |
| Figure 13b | 0 | 0 | 20 | 2 | 6 |
| Figure 13c | 0.5 | 0.5 | 52 | 55 | 61 |
| Figure 13d | 0.5 | 0.5 | 250 | 252 | 45 |
| Figure 14a | 94.3 | 99.6 | 112 | 118 | 130 |
| Figure 14b | 100 | 7.9 | 127 | 141 | 106 |
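The remaining two cues in Table 3 are also straightforward to compute. Below is a minimal sketch (Python with OpenCV assumed) of the average-grayscale comparison used for the tunnel transitions (Figures 13 and 15) and a blob count of the kind used to expose lens dirt (Figure 6); the detector parameters and ROI coordinates are illustrative assumptions.

```python
import cv2
import numpy as np

def mean_gray(gray: np.ndarray) -> float:
    """Average grayscale of a frame or region."""
    return float(gray.mean())

def count_blobs(gray: np.ndarray) -> int:
    """Count blobs with OpenCV's simple blob detector."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 5.0  # illustrative: ignore single-pixel noise
    detector = cv2.SimpleBlobDetector_create(params)
    return len(detector.detect(gray))

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
roi = frame[0:300, 400:900]                            # illustrative ROI
print(mean_gray(frame), mean_gray(roi), count_blobs(frame))
```

The contrast in Table 3 is stark on both cues: average grayscale collapses at tunnel entry (Figure 13b) and saturates at tunnel exit (Figure 13d), while the dirty-lens frames of Figure 6 carry hundreds of blobs against a few dozen in clean frames.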
Table 4. Performance of the same HSR at different times with different levels of disturbance. Columns after the second are scores from different sharpness evaluation algorithms.

| Image Serial Number | Actual Time Corresponding to the Scene | Tenengrad [41] | Laplacian [42] | SMD [43] | SMD2 [44] | EG [45] | EAV [46] | NRSS [47] | Brenner [33] | EOR-Brenner |
|---|---|---|---|---|---|---|---|---|---|---|
| Figure 29a | 16:49:36 | 16.30 | 3.15 | 1.31 | 1.09 | 5.69 | 32.18 | 0.77 | 124 | 149 |
| Figure 29b | 16:51:45 | 9.16 | 2.45 | 0.74 | 0.54 | 2.20 | 28.28 | 0.74 | 125 | 63 |
| Figure 29c | 18:59:35 | 22.53 | 4.72 | 1.79 | 1.73 | 7.98 | 46.70 | 0.78 | 256 | 756 |
| Figure 29d | 19:22:54 | 23.29 | 4.82 | 1.90 | 1.93 | 9.12 | 40.97 | 0.79 | 235 | 764 |
| Figure 29e | 20:57:08 | 9.46 | 1.76 | 0.82 | 0.69 | 3.45 | 29.17 | 0.76 | 50 | 81 |
| Figure 29f | 22:41:23 | 9.94 | 2.37 | 0.85 | 0.62 | 2.54 | 32.92 | 0.74 | 112 | 59 |
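Tables 2 and 4 suggest how such scores could be turned into a warning, assuming (as the flow chart in Figure 9 implies) that the decision is made over a run of consecutive frames rather than a single image. The sketch below is one plausible rule; the thresholds and window length are illustrative assumptions, not values from this paper.

```python
from collections import deque

BLUR_THRESHOLD = 100.0  # assumed: Brenner-style score below this reads as blurry
DIRT_THRESHOLD = 300    # assumed: blob counts above this read as a dirty lens
WINDOW = 25             # assumed: frames that must agree before warning

recent_blurry = deque(maxlen=WINDOW)

def update(sharpness: float, blob_count: int) -> str:
    """Fold one frame's sharpness score and blob count into a warning state."""
    recent_blurry.append(sharpness < BLUR_THRESHOLD)
    if blob_count > DIRT_THRESHOLD:
        return "dirt warning"
    if len(recent_blurry) == WINDOW and all(recent_blurry):
        return "blur warning"
    return "ok"
```

Requiring agreement across a window keeps a single dark or noisy frame (such as the tunnel-entry moment in Figure 13b) from triggering a false blur alarm.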
Table 5. Overall algorithm testing.

| Serial Number | Type of Sample | Number of Samples | Total Algorithm Run Time | FPS | Precision |
|---|---|---|---|---|---|
| I | Complex backgrounds only | 14,985 | 304 s | 49 | 99.92% |
| II | Complex backgrounds + Blur | 14,999 | 346 s | 43 | 99.90% |
| III | Complex backgrounds + Dirt | 14,974 | 349 s | 43 | 99.98% |
Table 6. Impact of different modules on the overall algorithm.

| | Precision-I | Precision-II | Precision-III |
|---|---|---|---|
| The complete algorithm proposed in this study | 99.92% | 99.90% | 99.98% |
| − HSR complex background detection algorithm | 73.97% | 84.76% | 85.32% |
| − HSC blur and dirt detection algorithm | 96.24% | 73.16% | 77.13% |
| − HSR complex background detection algorithm and HSC blur and dirt detection algorithm | 70.36% | 57.42% | 63.10% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
