
CN112180285A - Method and device for identifying faults of traffic signal lamp, navigation system and road side equipment - Google Patents

Method and device for identifying faults of traffic signal lamp, navigation system and road side equipment Download PDF

Info

Publication number
CN112180285A
CN112180285A
Authority
CN
China
Prior art keywords
image
sum
difference
lamp
absolute values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011012323.4A
Other languages
Chinese (zh)
Other versions
CN112180285B (en)
Inventor
Liu Bo (刘博)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011012323.4A priority Critical patent/CN112180285B/en
Publication of CN112180285A publication Critical patent/CN112180285A/en
Application granted granted Critical
Publication of CN112180285B publication Critical patent/CN112180285B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/44Testing lamps
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02Testing optical properties
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/097Supervising of traffic control systems, e.g. by giving an alarm if two crossing streets have green light simultaneously

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a method for identifying faults of traffic signal lamps, relating to the fields of cloud computing, computer vision, and intelligent transportation. The implementation scheme is as follows: acquire a first video stream captured for a target traffic signal lamp group; for each frame of image in the first video stream, determine the position of the target traffic signal lamp group in the image and crop the corresponding region based on that position to obtain a second video stream; and perform at least one of the following operations to identify whether the target traffic signal lamp group is faulty: perform RGB three-channel consistency detection on the image positions of all lamp heads in the target traffic signal lamp group based on each frame of image in the second video stream; and obtain difference images from each pair of adjacent frames in the second video stream and perform lit-lamp detection based on each obtained difference image.

Description

Method and device for identifying faults of traffic signal lamp, navigation system and road side equipment
Technical Field
The application relates to the fields of cloud computing, computer vision, intelligent transportation and the like, in particular to a method and a device for identifying faults of traffic signal lamps, a navigation system, road side equipment, electronic equipment and a storage medium.
Background
When an unmanned vehicle is driving on the road, a malfunctioning traffic light ahead of it can affect the planning of its navigation route.
Disclosure of Invention
The application provides a method and a device for identifying a traffic signal lamp fault, a navigation system, an electronic device and a storage medium.
According to a first aspect, there is provided a method of identifying a traffic signal lamp fault, comprising: acquiring a first video stream captured for a target traffic signal lamp group; for each frame of image in the first video stream, determining the position of the target traffic signal lamp group in the image and cropping the corresponding region based on that position to obtain a second video stream; and performing at least one of the following operations to identify whether the target traffic signal lamp group is faulty: performing RGB three-channel consistency detection on the image positions of all lamp heads in the target traffic signal lamp group based on each frame of image in the second video stream; and obtaining difference images from each pair of adjacent frames in the second video stream and performing lit-lamp detection based on each obtained difference image.
According to a second aspect, there is provided an apparatus for identifying a traffic signal lamp fault, comprising: an acquisition module for acquiring a first video stream captured for a target traffic signal lamp group; a determining module for determining, for each frame of image in the first video stream, the position of the target traffic signal lamp group in the image and cropping the corresponding region based on that position to obtain a second video stream; and an identification module for performing at least one of the following operations to identify whether the target traffic signal lamp group is faulty: performing RGB three-channel consistency detection on the image positions of all lamp heads in the target traffic signal lamp group based on each frame of image in the second video stream; and obtaining difference images from each pair of adjacent frames in the second video stream and performing lit-lamp detection based on each obtained difference image.
According to a third aspect, there is provided a navigation system comprising: an unmanned vehicle; and the above apparatus for identifying a traffic signal lamp fault, which identifies the fault and sends the traffic signal lamp fault information to the unmanned vehicle, so that the unmanned vehicle adjusts its navigation route based on the received fault information.
According to a fourth aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method of the embodiment of the present application.
According to a fifth aspect, there is provided a non-transitory computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions are used to cause a computer to execute the method of the embodiments of the present application.
According to a sixth aspect, there is provided a roadside apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method of the embodiment of the present application.
According to the technical scheme provided by the embodiments of the present application, lit-lamp detection based on the timing characteristics of the traffic signal lamp and/or RGB three-channel lamp color consistency detection are used to identify whether the traffic signal lamp has failed. Compared with the related-art approach of identifying faults with a neural network model, this scheme does not require supervised machine learning, so data need not be labeled and the cost of labeling data is saved. In addition, because the scheme does not depend on a neural network model, it avoids the related-art drawback that the lamps of various colors are lit for different amounts of time, so the training data are naturally unevenly distributed and the neural network model is difficult to train. Finally, because no neural network model is involved, large amounts of computing and storage resources are not required, and the poor real-time performance caused by long neural network inference time is avoided.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 schematically illustrates a timing diagram of a traffic signal according to an embodiment of the present application;
FIG. 2 schematically illustrates an exemplary system architecture to which the method and apparatus for identifying traffic signal faults of embodiments of the present application may be applied;
FIGS. 3A-3C schematically illustrate flow diagrams of a method of identifying a traffic signal fault according to an embodiment of the present application;
FIG. 4 schematically illustrates an RGB three-channel lamp color consistency detection scheme according to an embodiment of the present application;
FIG. 5 schematically illustrates changes in signal intensity when the states of the respective color signal lamps are switched, according to an embodiment of the present application;
FIG. 6 schematically illustrates a block diagram of an apparatus for identifying a traffic signal fault according to an embodiment of the present application;
FIG. 7 schematically illustrates a block diagram of a navigation system according to an embodiment of the present application;
FIG. 8 schematically illustrates a block diagram of an electronic device that may implement the method and apparatus for identifying traffic signal lamp faults according to the embodiments of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that several schemes exist for identifying traffic signal lamp faults. In scheme 1, a level detector may be connected to the signal controller of a traffic signal lamp; when the level detector detects that the level in the controller remains in an abnormal state (for example, a level state corresponding to all lamp heads being dark or all lamp heads being lit), the event is reported so that the unmanned vehicle can adjust its navigation route in time. In scheme 2, the signal controller is left unmodified; instead, lit-lamp identification and tracking are performed on the traffic signal lamp through a neural network model, and when a lamp head of any color is found to be dark or lit for an abnormally long time (exceeding the normal dark or lit duration), the event is reported so that the unmanned vehicle can adjust its navigation route in time.
In the process of implementing the embodiments of the present application, the inventors found that scheme 1 detects traffic signal lamp faults with high accuracy, but it requires modifying the signal controller, and when a city administrator does not support such modification, other non-visual alternatives are difficult to find.
Meanwhile, the inventors also found that although scheme 2 does not require modifying the signal controller, it usually needs supervised learning, so data must be labeled and the labeling cost must be paid. In addition, because the lamps of various colors in a group of traffic signal lamps are not lit for the same amount of time, the training data are naturally unevenly distributed, which makes the neural network model harder to train. Moreover, training a neural network model usually requires a large amount of computing and storage resources, so scheme 2 places high demands on both. Finally, because the inference time of a neural network model is usually long, the real-time performance of the results obtained with scheme 2 is poor.
In addition, the inventors also observed the timing characteristics of traffic signal lamps: the lamp heads of the various colors in any group of traffic signal lamps are turned on and off in a preset sequence. For example, as shown in fig. 1, the top row is the pulse timing chart of the red lamp, the middle row is that of the green lamp, and the bottom row is that of the yellow lamp. It can be seen that, within any time period, when the red lamp is on, the green and yellow lamps are both off; when the green lamp is on, the red and yellow lamps are both off; and when the yellow lamp is on or flashing, the red and green lamps are both off. That is, the lamp heads in a group of traffic lights are never all lit or all extinguished (apart from the brief all-off state that may occur while a yellow or green lamp flashes), and in theory only one lamp head is lit during any time period.
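The timing property described above can be expressed as a small check: in a healthy signal group, at most one lamp head is lit in any frame. The following is a minimal sketch (not from the patent text); the lamp-head names and the per-frame dictionary representation are illustrative assumptions.

```python
# Sketch of the timing property: at most one lamp head lit per frame.
# A brief all-off frame (e.g. during a yellow/green flash) is still allowed.

def timing_ok(frames):
    """Each frame maps lamp-head color -> True if lit."""
    for frame in frames:
        lit = [color for color, on in frame.items() if on]
        # More than one head lit at once violates the normal timing sequence.
        if len(lit) > 1:
            return False
    return True

normal = [
    {"red": True, "yellow": False, "green": False},
    {"red": False, "yellow": False, "green": True},
    {"red": False, "yellow": False, "green": False},  # brief all-off while flashing
]
faulty = [{"red": True, "yellow": True, "green": True}]  # all lit at once

print(timing_ok(normal))  # True
print(timing_ok(faulty))  # False
```

Note that a persistent all-off state is not caught by this per-frame check alone; the document handles that case with the channel consistency check described below.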
Based on this, the inventive concept of the present application is: perform all-dark or all-lit fault detection using the timing characteristics of the traffic signal lamp together with RGB three-channel lamp color consistency; and/or perform lit-lamp detection using the timing characteristics and the property that in theory only one lamp head is lit during any time period. If some lamp head is lit, the traffic signal lamp is deemed fault-free; if no lamp head is lit, the traffic signal lamp is deemed faulty. In this way, the signal controller need not be modified as in scheme 1, and the drawbacks of scheme 2 described above are avoided.
It should be understood that, in the embodiments of the present application, the principle of the RGB three-channel lamp color consistency check is as follows: by the timing characteristics of traffic signal lamps, the lamp heads in one group can never all be lit or all be dark at the same time; they can only be lit in turn. Therefore, once the RGB three-channel lamp colors of all the lamp heads are consistent, the group is considered to be entirely lit or entirely dark, i.e. a fault has occurred.
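The consistency principle can be sketched as follows: sample the mean RGB color at each lamp-head position and flag a fault when all heads share (nearly) the same color. This is a hedged illustration, not the patent's implementation; the tolerance value and the pixel-list region representation are assumptions.

```python
# Sketch of RGB lamp color consistency: if every lamp head's mean color is
# (nearly) identical, the group is presumed all-dark or all-lit, i.e. faulty.

def mean_rgb(region):
    """region: list of (r, g, b) pixels from one lamp head's image area."""
    n = len(region)
    return tuple(sum(p[c] for p in region) / n for c in range(3))

def rgb_consistency_fault(head_regions, tol=10.0):
    """Return True (fault suspected) when all heads share one mean color."""
    means = [mean_rgb(r) for r in head_regions]
    ref = means[0]
    return all(
        max(abs(m[c] - ref[c]) for c in range(3)) <= tol for m in means
    )

dark = [[(12, 12, 12)] * 4, [(11, 13, 12)] * 4, [(12, 11, 10)] * 4]
red_lit = [[(250, 40, 30)] * 4, [(12, 12, 12)] * 4, [(12, 12, 12)] * 4]

print(rgb_consistency_fault(dark))     # True  -> all heads dark, fault suspected
print(rgb_consistency_fault(red_lit))  # False -> one head clearly lit, normal
```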
It should also be understood that, in the embodiments of the present application, the principle of the lit-lamp check is as follows: by the timing characteristics of traffic signal lamps, under normal conditions exactly one lamp head is lit during any time period. Therefore, once a lit lamp head is found, the group of traffic signal lamps is considered normal; conversely, once no lit lamp head can be found, the group is considered faulty.
Fig. 2 schematically illustrates an exemplary system architecture to which the method and apparatus for identifying traffic signal lamp faults according to the embodiments of the present application may be applied. It should be noted that fig. 2 is only an example of a system architecture to which the embodiments of the present application may be applied to help those skilled in the art understand the technical content of the present application, and does not mean that the embodiments of the present application may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, the system architecture 200 may include a traffic light 210, an image capture device 220, an image processing device 230, a data storage device 240, and an unmanned vehicle 250 (e.g., an autonomous or assisted driving vehicle).
Specifically, the traffic signal lamp 210 is erected at a roadside with a fixed position and is used for directing traffic at the intersection.
The image capture device 220 may include a roadside sensing sensor (e.g., a roadside camera) erected near the traffic signal 210 such that the traffic signal 210 is within its sensing area. The image capture device 220 may be used to continuously capture a video stream of the traffic signal 210 in real time over a long period, in order to monitor whether the traffic signal 210 is malfunctioning.
The image processing device 230 (e.g., a roadside computing device) receives the video stream from the image capture device 220 in real time and processes it to identify and track whether the traffic signal 210 is malfunctioning. Once the image processing device 230 finds that the traffic signal 210 is malfunctioning, it can send the corresponding identification and tracking results to the unmanned vehicle 250 (where 250 stands for every unmanned vehicle that cannot yet see the traffic signal 210 and is about to reach, or has reached, the intersection where it is located), enabling the unmanned vehicle 250 to adjust its navigation route accordingly. In addition, after finding the fault, the image processing device 230 can send the corresponding results to the data storage device 240 for storage, to be used later for analyzing the cause of the fault and repairing the traffic signal 210.
It should be noted that, in the embodiments of the present application, the image capture device 220 installed near the intersection is not blocked by vehicles passing through, and can continuously capture a video stream of the traffic signal 210 in real time over a long period to monitor whether it has failed. Therefore, even when the unmanned vehicle 250 cannot observe the traffic signal 210 at the intersection ahead, or observes it poorly, because it is blocked by a vehicle in front, because a yellow or green lamp is flashing, or because it has not yet reached the intersection where the traffic signal 210 is located, the vehicle can still learn in time whether the traffic signal 210 has failed and decide whether its current navigation route needs to be adjusted.
In addition, it should be noted that, in the embodiments of the present application, the image capture device 220, the image processing device 230, and the data storage device 240 may be one device combining the functions of image capture, image processing, and data storage, or they may be three different devices. The image processing device 230 may perform image processing through cloud computing, and the data storage device 240 may store data through cloud storage.
It should be understood that the number of traffic lights, image capture devices, image processing devices, data storage devices, and unmanned vehicles in FIG. 2 are merely illustrative. There may be any number of traffic lights, image capture devices, image processing devices, data storage devices, and unmanned vehicles, as desired for implementation.
It should be understood that the system architecture described above may include an intelligent transportation vehicle-road coordinated system architecture in which the roadside device includes a roadside sensing device (e.g., a roadside camera) connected to a roadside computing device (e.g., a roadside computing unit RSCU) connected to a server device that may communicate with the autonomous or assisted driving vehicle in various ways. In another system architecture, the roadside sensing device itself includes a computing function, and the roadside sensing device is directly connected to the server device. In another system architecture, the roadside apparatus may be directly connected to the autonomous vehicle or the assisted driving vehicle. The above connections may be wired or wireless; the server device in the application is, for example, a cloud control platform, a vehicle-road cooperative management platform, a central subsystem, an edge computing platform, a cloud computing platform, and the like.
The present application will be described in detail with reference to specific examples.
According to an embodiment of the present application, a method of identifying a traffic signal lamp fault is provided.
It should be noted that the execution subject of the method provided in the embodiments of the present application may be any of various roadside devices, such as a roadside sensing device with computing capability, a roadside computing device connected to the roadside sensing device, a server device connected to the roadside computing device, or a server device directly connected to the roadside sensing device. In the embodiments of the present application, the server device is, for example, a cloud control platform, a vehicle-road cooperative management platform, a central subsystem, an edge computing platform, or a cloud computing platform.
Fig. 3A to 3C schematically show a flow chart of a method of identifying a traffic signal fault according to an embodiment of the present application.
As shown in FIG. 3A, the method 300A may include operations S310-S320 and S33A.
In operation S310, a first video stream captured for a target traffic signal group is acquired.
In operation S320, for each frame of image in the first video stream, a position of the target traffic light group in the image is determined and a corresponding image is intercepted based on the position, resulting in a second video stream.
In operation S33A, RGB three-channel consistency detection is performed on the image positions of the lightheads in the target traffic signal group based on each frame image in the second video stream, so as to identify whether the target traffic signal group is faulty.
Specifically, in the embodiment of the present application, the image capturing device erected on the roadside can capture a video stream for a traffic signal light erected at a nearby intersection in real time for a long time without interruption. Therefore, in operation S310, a video stream uploaded by the target image capturing device in real time may be acquired. The target image acquisition equipment is equipment for acquiring video streams aiming at the target traffic signal lamp group.
It should be noted that, in the embodiment of the present application, the number of the lamp caps in the target traffic signal lamp group is not limited. For example, three lamp heads of red, yellow and green can be included; or may comprise two lamp heads of red and green; and so on.
Since each original frame in the first video stream may include image regions of scenes other than the lamp heads of the target traffic signal lamp group, in operation S320 the image region containing the target traffic signal lamp group may be cut out of each original frame by image processing such as cropping or matting, yielding a corresponding sequence of new frames. The new video stream composed of these new frames is the second video stream.
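A minimal sketch of this cropping step: cut the region containing the signal lamp group out of each original frame to form the second video stream. The bounding box is an assumed input here; in the method it would come from the position-determination part of operation S320.

```python
# Sketch of operation S320's cropping: frames are 2D lists of pixel values,
# and a (x, y, w, h) box selects the signal-lamp-group region in each frame.

def crop(frame, box):
    """Return the sub-image of `frame` covered by box = (x, y, w, h)."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def second_stream(first_stream, box):
    """Apply the same crop to every frame of the first video stream."""
    return [crop(frame, box) for frame in first_stream]

frame = [[c + 10 * r for c in range(6)] for r in range(4)]  # 6x4 dummy image
stream2 = second_stream([frame], (2, 1, 3, 2))

print(stream2[0])  # [[12, 13, 14], [22, 23, 24]]
```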
Further, in operation S33A, RGB three-channel lamp color consistency detection may be performed on the image regions at the positions of the lamp heads of the target traffic signal lamp group, based on each frame of image in the second video stream. If, based on the current frame, all the lamp heads of the target traffic signal lamp group pass the RGB three-channel lamp color consistency check, this indicates that the group has, or is suspected to have, a fault; otherwise, the current state of the group is normal. If the current state is determined to be normal based on the current frame, the consistency detection continues with the next frame; if a fault is found or suspected based on the current frame, the consistency detection process ends and the event is reported.
It should be understood that, in the embodiments of the present application, the lit-lamp check may be omitted and only the RGB three-channel lamp color consistency check performed, which is still sufficient to discover whether the traffic signal lamp group has a fault. Note, however, that performing only the consistency check without the lit-lamp check leaves a single detection means, which may reduce the robustness of identifying abnormal conditions.
As shown in FIG. 3B, the method 300B may include operations S310-S320 and S33B.
In operation S310, a first video stream captured for a target traffic signal group is acquired.
In operation S320, for each frame of image in the first video stream, a position of the target traffic light group in the image is determined and a corresponding image is intercepted based on the position, resulting in a second video stream.
In operation S33B, difference images are obtained from each pair of adjacent frames in the second video stream, and lit-lamp detection is performed based on each obtained difference image, in order to identify whether the target traffic signal lamp group is faulty.
It should be noted that operations S310 and S320 in the embodiment shown in fig. 3B of the present application are the same as or similar to operations S310 and S320 in the embodiment shown in fig. 3A of the present application, and no further description is given in this embodiment of the present application.
Specifically, in operation S33B, the current frame image img_list[n] may be appended to the tail of the image queue img_list, and the inter-frame difference between img_list[n] and the previous frame img_list[n-1] in the queue computed to obtain the corresponding difference image. Then, the sums of the absolute values of the difference pixels at the different lamp head positions in the difference image are computed and compared, and whether a lit lamp head exists is determined from the comparison result. If a lit lamp head exists, the next frame img_list[n+1] is appended to the tail of the queue, the inter-frame difference between img_list[n+1] and the previous frame img_list[n] is computed to obtain the corresponding difference image, and the per-lamp-head sums of absolute difference values are again computed and compared to determine whether a lit lamp head exists. These operations are executed in a loop to identify, through lit-lamp detection, whether the target traffic signal lamp group is faulty. If no lit lamp head is found during the lit-lamp detection process, the event is reported and the process ends.
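The per-lamp-head difference computation above can be sketched as follows: difference two adjacent frames, then sum the absolute pixel differences inside each lamp head's region; a large per-head sum indicates that head switched state (lit or extinguished). The threshold value here is an illustrative assumption, not from the patent.

```python
# Sketch of the S33B difference step: grayscale frames as 2D lists, lamp-head
# regions as (x, y, w, h) boxes. A head that switches state between frames
# produces a large sum of absolute differences in its region.

def head_diff_sums(prev, curr, head_boxes):
    """Sum of absolute inter-frame pixel differences per lamp-head box."""
    sums = []
    for x, y, w, h in head_boxes:
        s = sum(
            abs(curr[r][c] - prev[r][c])
            for r in range(y, y + h)
            for c in range(x, x + w)
        )
        sums.append(s)
    return sums

def lit_head_found(prev, curr, head_boxes, threshold=100):
    """A head is deemed to have lit (or gone out) if its sum is large."""
    return any(s > threshold for s in head_diff_sums(prev, curr, head_boxes))

# Two 4x4 frames: the region of head 0 (top-left 2x2) jumps in brightness.
prev = [[10] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
for r in range(2):
    for c in range(2):
        curr[r][c] = 200

boxes = [(0, 0, 2, 2), (2, 2, 2, 2)]
print(head_diff_sums(prev, curr, boxes))  # [760, 0]
print(lit_head_found(prev, curr, boxes))  # True
```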
Similarly, it should also be understood that, in the embodiments of the present application, the RGB three-channel lamp color consistency check may be omitted and only the lit-lamp check performed, which is still sufficient to discover whether the traffic signal lamp group has a fault. Note, however, that performing only the lit-lamp check without the consistency check likewise leaves a single detection means, which may reduce the robustness of identifying abnormal conditions.
As shown in FIG. 3C, the method 300C may include operations S310-S320 and S33C1 and S33C2.
In operation S310, a first video stream captured for a target traffic signal group is acquired.
In operation S320, for each frame of image in the first video stream, a position of the target traffic light group in the image is determined and a corresponding image is intercepted based on the position, resulting in a second video stream.
After operation S320, the following operations S33C1 and S33C2 are performed in order to identify whether the target traffic signal group malfunctions.
In operation S33C1, RGB three-channel light color consistency detection is performed on the image positions of the light heads in the target traffic signal light group based on each frame image in the second video stream.
In operation S33C2, difference images are respectively obtained based on each two adjacent frames of images in the second video stream, and lighting detection is performed based on each obtained difference image.
It should be noted that operations S310 and S320 in the embodiment shown in fig. 3C are the same as or similar to operations S310 and S320 in the embodiments shown in fig. 3A and fig. 3B, operation S33C1 in fig. 3C is the same as or similar to operation S33A in fig. 3A, and operation S33C2 in fig. 3C is the same as or similar to operation S33B in fig. 3B; details are not repeated herein.
Furthermore, it should be noted that, in the embodiment of the present application, the execution sequence of the operation S33C1 and the operation S33C2 is not limited. For example, operation S33C1 may be performed first and operation S33C2 may be performed second, operation S33C2 may be performed first and operation S33C1 may be performed second, or operation S33C1 and operation S33C2 may be performed simultaneously.
In addition, it should be noted that, in the embodiment of the present application, if either the RGB three-channel lamp color consistency detection passes or the lighting detection fails, the traffic signal lamp has, or is suspected to have, a fault; conversely, if the RGB three-channel lamp color consistency detection fails and the lighting detection passes, the current state of the traffic signal lamp is normal.
For example, in the embodiment of the present application, the lighting detection may be performed on the difference image of each pair of adjacent frames. If the lighting inspection passes within a certain time period, no black-lamp fault of the signal lamp has occurred, and the next frame image is processed. Otherwise, the RGB three-channel lamp color consistency inspection is performed. If all the lamp heads satisfy the consistency check on all three RGB channels, a black-lamp fault is determined to exist at present and is reported. Otherwise, no black-lamp fault of the signal lamp has occurred; the next frame image is processed and the lighting inspection or RGB three-channel lamp color consistency inspection continues.
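The decision flow just described can be summarized in a small sketch; the function name and the two boolean inputs are hypothetical abstractions of the two checks.

```python
def check_traffic_light(lighting_ok, all_channels_consistent):
    """Decision flow sketched above (illustrative names).

    lighting_ok: the lighting inspection passed within the time period.
    all_channels_consistent: every lamp head matches pairwise on all
    three RGB channels (i.e., all heads look identical, e.g. all dark).
    """
    if lighting_ok:
        return "normal"            # a lamp lit recently; keep processing frames
    if all_channels_consistent:    # no lamp lit AND every head looks the same
        return "black_lamp_fault"  # report the fault
    return "normal"                # e.g. flashing yellow; keep checking
```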
According to the embodiment of the application, a neural network is not needed, supervised machine learning is not needed, and data labeling is not needed, so that the requirements on resource configuration of the computing nodes, the storage nodes and the like are not high, the computing speed is high, the timeliness is high, and the cost can be saved.
As an optional embodiment, performing RGB three-channel lamp color consistency detection on the image position of each lamp head in the target traffic signal lamp group based on each frame image in the second video stream may include: performing a first operation based on each frame of image in the second video stream. The first operation may include: extracting an RGB image of each lamp head based on its position in the current frame image; extracting all single-color channel images of each lamp head from its RGB image; and performing consistency detection on all the single-color channel images of all the lamp heads.
For example, taking the target traffic signal light group including three lamp heads of red, yellow and green as an example, as shown in fig. 4, in the current frame image 40, the RGB images red_RGB, yellow_RGB and green_RGB at the lamp head positions of the red light 41, the yellow light 42 and the green light 43 are respectively obtained. From the RGB image red_RGB of the red light 41, the monochrome channel images red_R (red channel), red_G (green channel) and red_B (blue channel) of the three primary colors are extracted. From the RGB image yellow_RGB of the yellow light 42, the monochrome channel images yellow_R, yellow_G and yellow_B are extracted. From the RGB image green_RGB of the green light 43, the monochrome channel images green_R, green_G and green_B are extracted. It is then detected whether the red channel images red_R, yellow_R and green_R satisfy red consistency pairwise, and whether the green channel images red_G, yellow_G and green_G satisfy green consistency pairwise.
It is likewise detected whether the blue channel images red_B, yellow_B and green_B of the red lamp 41, the yellow lamp 42 and the green lamp 43 satisfy blue consistency pairwise. If every pair of monochrome channel images of all the lamp heads satisfies lamp color consistency, the RGB three-channel lamp color consistency detection is considered to pass. Otherwise, the RGB three-channel lamp color consistency detection does not pass.
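A minimal sketch of the per-channel pairwise consistency check, assuming each lamp head has been cropped to an H×W×3 RGB array; the helper names and threshold parameter are illustrative.

```python
import numpy as np
from itertools import combinations

def split_channels(rgb_patch):
    """Split an HxWx3 RGB lamp-head crop into its single-color channel images."""
    return {"R": rgb_patch[..., 0], "G": rgb_patch[..., 1], "B": rgb_patch[..., 2]}

def channel_consistent(patches, channel, thres):
    """True if every pair of lamp-head crops agrees on the given channel.

    patches: dict of lamp name -> RGB crop; thres: pixel threshold below
    which the pairwise absolute-difference sum counts as consistent.
    """
    chans = {name: split_channels(p)[channel].astype(np.int32)
             for name, p in patches.items()}
    return all(int(np.abs(chans[a] - chans[b]).sum()) < thres
               for a, b in combinations(chans, 2))
```

Applying `channel_consistent` for `"R"`, `"G"` and `"B"` in turn reproduces the three pairwise checks described above.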
By the embodiment of the application, the lamp color consistency detection is respectively carried out on each single-color channel of all the lamp holders, and whether the traffic signal lamp fails or not can be accurately identified.
Further, as an alternative embodiment, the consistency detection of all the monochrome channel images of all the lamp heads may include the following operations.
And carrying out difference processing on the red channel images of all lamp holders pairwise to obtain at least one first difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each first difference image in the at least one first difference image is smaller than a first pixel threshold value.
And carrying out difference processing on the green channel images of all the lamp holders pairwise to obtain at least one second difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each second difference image in the at least one second difference image is smaller than a second pixel threshold value.
And carrying out difference processing on the blue channel images of all the lamp holders pairwise to obtain at least one third difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each third difference image in the at least one third difference image is smaller than a third pixel threshold value.
If the sum of all differential pixel absolute values obtained based on each first differential image is less than the first pixel threshold, the sum of all differential pixel absolute values obtained based on each second differential image is less than the second pixel threshold, and the sum of all differential pixel absolute values obtained based on each third differential image is less than the third pixel threshold, then the RGB three-channel consistency detection passes.
Illustratively, with continued reference to fig. 4, taking detection of whether the red light 41, the yellow light 42 and the green light 43 satisfy red-channel lamp color consistency as an example: (red_R - yellow_R) is calculated to obtain the difference image between the red channel images of the red light 41 and the yellow light 42, denoted diff_img1. (red_R - green_R) is calculated to obtain the difference image between the red channel images of the red light 41 and the green light 43, denoted diff_img2. (yellow_R - green_R) is calculated to obtain the difference image between the red channel images of the yellow light 42 and the green light 43, denoted diff_img3. The sums of the absolute values of all the differential pixels in diff_img1, diff_img2 and diff_img3 are calculated and denoted sum1, sum2 and sum3 in turn. Then it is judged whether the following three conditions are all satisfied: "sum1 < thres_1?", "sum2 < thres_1?", "sum3 < thres_1?". If all three conditions are satisfied, the red lamp 41, the yellow lamp 42 and the green lamp 43 are considered to satisfy red-channel lamp color consistency. Here thres_1 denotes the first pixel threshold, which may be derived from empirical values.
It should be noted that, because there is a difference between different differential pixels in the same difference image, and because the size of the image region captured for each lamp head may not be consistent with the size of the lamp head image region used for estimating the corresponding pixel threshold (including the first, second, and third pixel thresholds), the judgment result of the above conditions may be affected, and with it the accuracy of the fault recognition result. Therefore, in the embodiment of the present application, after the difference images diff_img1, diff_img2 and diff_img3 are obtained, thresholding may be performed on them, then summing, then normalization, and finally the magnitude relationship between the calculation result and the corresponding pixel threshold is compared.
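The thresholding-then-normalization step can be sketched as follows; the function name is illustrative, and `thres` plays the role of the preset pixel threshold mentioned above.

```python
import numpy as np

def normalized_diff_sum(diff_patch, thres):
    """Threshold a difference patch, then sum and normalize by pixel count.

    Pixels whose absolute difference exceeds thres count as 255, others as 0,
    mirroring the thresholding step described above; dividing by the pixel
    count makes crops of different sizes comparable.
    """
    mag = np.abs(diff_patch.astype(np.int32))
    binary = np.where(mag > thres, 255, 0)
    return binary.sum() / binary.size
```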
Similarly, with continuing reference to fig. 4, the method for checking whether the lamp color consistency of the yellow channel and the lamp color consistency of the green channel between the three lamp bases of the red lamp 41, the yellow lamp 42, and the green lamp 43 are respectively satisfied is the same as the method for checking whether the lamp color consistency of the red channel is satisfied, and the embodiments of the present application are not described herein again.
It should be understood that, in the embodiment of the present application, the first pixel threshold, the second pixel threshold, and the third pixel threshold may be the same or different, and the embodiment of the present application is not limited herein.
Through the embodiment of the application, for example, if the three conditions are all met, all the lamp holders are represented to be in the off state, so that whether the traffic signal lamp has a black lamp fault or not can be identified through the operation.
It should be noted that a short fully black state does not necessarily mean that the traffic signal lamp has a black-lamp fault. For example, in special cases such as yellow-lamp or green-lamp flashing, a short fully black state may occur. Based on this, in the embodiment of the present application, it may further be detected whether RGB three-channel lamp color consistency is satisfied over a period of time. If the RGB three-channel lamp color consistency is satisfied throughout that period, the traffic signal lamp is considered to have a black-lamp fault. If the RGB three-channel lamp color consistency is no longer satisfied after a period of time, the traffic signal lamp is considered not to have a black-lamp fault, and only a situation such as yellow-lamp or green-lamp flashing has occurred.
Or, further, as an optional embodiment, the consistency detection performed on all the monochrome channel images of all the lamp heads includes:
carrying out difference processing on the red channel images of all lamp holders in pairs to obtain at least one first difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each first difference image in the at least one first difference image is larger than a fourth pixel threshold value or not;
carrying out difference processing on the green channel images of all lamp holders in pairs to obtain at least one second difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each second difference image in the at least one second difference image is greater than a fifth pixel threshold value;
carrying out difference processing on the blue channel images of all lamp holders in pairs to obtain at least one third difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each third difference image in the at least one third difference image is greater than a sixth pixel threshold value; and
and if the sum of the absolute values of all the differential pixels obtained based on each first differential image is greater than the fourth pixel threshold, the sum of the absolute values of all the differential pixels obtained based on each second differential image is greater than the fifth pixel threshold, and the sum of the absolute values of all the differential pixels obtained based on each third differential image is greater than the sixth pixel threshold, the RGB three-color light consistency detection is passed.
It should be noted that, similar to the previous embodiment, if the above three conditions all hold, the RGB three-color lamp color consistency detection also passes. The difference is that the previous embodiment judges whether the sum of the absolute values of all the differential pixels is smaller than the corresponding pixel threshold, whereas the present embodiment judges whether it is larger than the other corresponding pixel thresholds. Therefore, if the RGB three-color lamp color consistency detection passes in the present embodiment, all the lamp heads are in the lit state, and the above operations can identify whether the traffic signal lamp has an abnormal lighting fault.
It should be understood that operations that need to be executed in the embodiment of the present application are similar to those in the previous embodiment of the present application, and implementation methods are also similar to those in the previous embodiment of the present application, and are not described herein again.
As an alternative embodiment, the obtaining a difference image based on each two adjacent frames of images in the second video stream and performing the lighting detection based on the obtained difference image may include: differential images are respectively obtained based on every two adjacent frames of images in the second video stream, and second operation is sequentially executed based on each obtained differential image.
The second operation may include: calculating, from the current difference image, the sum of the absolute values of the differential pixels at each lamp head position; selecting, among all lamp heads, the target lamp head with the largest such sum; and, if the obtained maximum sum is consistent with the predetermined maximum differential pixel absolute value sum corresponding to the target lamp head, determining that the target lamp head is lit, where a lit lamp head among all the lamp heads represents that the current lighting detection passes.
Illustratively, taking the target traffic signal lamp group including three lamp heads of red, yellow and green as an example, in the current difference image, the sums of the absolute values of the differential pixels at the red, yellow and green lamp head positions are calculated respectively and denoted diff_sum_r, diff_sum_y and diff_sum_g. The magnitudes of diff_sum_r, diff_sum_y and diff_sum_g are then compared. As shown in fig. 5, if diff_sum_r is the largest and is consistent with the corresponding predetermined maximum differential pixel absolute value sum, the red light is in the lit state at this time. Similarly, if diff_sum_y is the largest and is consistent with its corresponding predetermined sum, the yellow light is lit; if diff_sum_g is the largest and is consistent with its corresponding predetermined sum, the green light is lit.
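A sketch of the lit-lamp decision: the lamp head with the largest differential-pixel-absolute-value sum is selected, and consistency with the predetermined maximum is modeled, as an assumption for illustration, as exceeding m% of it. All names are illustrative.

```python
import numpy as np

def detect_lit_lamp(diff_image, regions, predetermined_max_sums, m=80):
    """Return the lamp head judged lit by the current difference image.

    regions: lamp name -> (x0, y0, x1, y1) box (assumed format).
    predetermined_max_sums: per-lamp sums recorded during initialization.
    Consistency is modeled here as exceeding m% of the predetermined sum.
    """
    sums = {name: int(np.abs(diff_image[y0:y1, x0:x1]).sum())
            for name, (x0, y0, x1, y1) in regions.items()}
    best = max(sums, key=sums.get)
    if sums[best] > predetermined_max_sums[best] * m / 100:
        return best
    return None  # no lamp head judged lit by this difference image
```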
It should be understood that, in the embodiment of the present application, if it is determined from the current difference image that a lamp head has just been lit or is in the lit state, the current state of the target traffic signal lamp is normal, and the next difference image is obtained for continued lighting detection. Otherwise, if according to the current difference image no lamp head has been lit and no lamp head is in the lit state, the current target traffic signal lamp is faulty; the event is reported and the lighting detection process ends. Alternatively, whether the target traffic signal lamp is really faulty may be further judged through the RGB three-channel lamp color consistency check.
Through the embodiment of the application, the lighting detection means is adopted, if the lighted lamp cap exists in a period of time, the traffic signal lamp is considered to have no fault, and therefore whether the traffic signal lamp has the fault or not can be accurately identified.
It should be noted that, because there is a difference between different differential pixels in the same differential image, and the sizes of the image regions captured for each lighthead may not be consistent with the size of the lighthead image region used for estimating the sum of the absolute values of the corresponding predetermined maximum differential pixels (including red, yellow, and green), the result of determining the consistency between the sum of the absolute values of the maximum differential pixels and the sum of the absolute values of the predetermined maximum differential pixels corresponding to the target lighthead may be affected, and the accuracy of the result of identifying whether a signal lamp is faulty or not may be affected. Therefore, in the embodiment of the present application, after each difference image is obtained, thresholding may be performed on the difference images, then the sum of the difference pixels is obtained, then normalization is performed, and finally the magnitude relationship between the calculation result and the corresponding predetermined maximum difference pixel value is compared.
Specifically, the current frame image img_list[n] may be appended to the tail of the image queue img_list, and the inter-frame difference between img_list[n] and the previous frame image img_list[n-1], i.e., img_list[n] - img_list[n-1], may be calculated to obtain a corresponding difference image. A thresholding operation is then performed on the difference image: for example, pixels with a value greater than thres (a preset pixel threshold) may be set to 255 and pixels with a value less than thres set to 0. After thresholding, the sum of the absolute values of all differential pixels at each lamp head position in the difference image is calculated and divided by n_pixel (the number of pixel points contained in the corresponding lamp head region, which may differ or coincide between lamp heads) for normalization. Finally, the lamp head position with the largest normalized sum is found and denoted max_diff_sum_current, and the magnitude relationship between max_diff_sum_current and the corresponding maximum differential pixel absolute value sum (here the normalized sum) is compared.
Further, as an optional embodiment, the method may further include: and if the obtained sum of the maximum difference pixel absolute values is consistent with the sum of the predetermined maximum difference pixel absolute values (a first condition), and the sum of the difference pixel absolute values at the target lighthead position is respectively larger than N times (for example, N is 2) of the sum of the difference pixel absolute values at other lighthead positions in all lightheads (a second condition), determining that the target lighthead in all lightheads is lighted.
It should be noted that, since only the first condition is determined to be satisfied, but not the second condition, some special cases may not be excluded. For example, it is impossible to exclude the situation that all the bases or a plurality of bases are in the lighting state together, and thus the traffic signal lamp is mistakenly regarded as not having a fault.
In the embodiment of the present application, it is determined whether the first condition and the second condition are satisfied at the same time, and the above special cases may be excluded. Namely, once all the lamp holders or a plurality of the lamp holders are in a lighting state, the lighting detection is considered to be failed, and the fault event is reported. Therefore, whether the traffic signal lamp has a fault or not can be identified more accurately through the embodiment of the application.
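The combined first and second conditions can be sketched as a small predicate over the per-lamp sums; `n` and `m` are the N-times and M-percent parameters, and all names are illustrative.

```python
def dominant_lamp(sums, predetermined_max, n=2, m=80):
    """Return the lamp satisfying both conditions, else None.

    First condition: its sum exceeds m% of its predetermined maximum.
    Second condition: its sum is more than n times every other lamp's sum,
    which excludes the case of several lamp heads lit together.
    """
    best = max(sums, key=sums.get)
    if sums[best] <= predetermined_max[best] * m / 100:
        return None  # first condition fails
    if any(sums[best] <= n * v for k, v in sums.items() if k != best):
        return None  # second condition fails: another lamp is too close
    return best
```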
Additionally or alternatively, as an optional embodiment, the obtained sum of the maximum differential pixel absolute values (which may be a normalized value) being consistent with the sum of the predetermined maximum differential pixel absolute values (which may be a normalized value) includes: the obtained sum of the maximum differential pixel absolute values is greater than M% of the sum of the predetermined maximum differential pixel absolute values (a third condition), where M is less than 100, e.g., M is 80.
It should be noted that, in the embodiment of the present application, each lamp head in the traffic signal lamp group may be abstracted as a movable obstacle, and the instant of switching the lamp head state may be equivalent to switching the motion state of the obstacle, that is, switching from the stationary state to the motion state.
As shown in fig. 5, since the electrical signal of the base changes significantly at the moment of switching the base state, and thus the pixel value at the base position in the image also changes significantly, the lighting detection can be realized by comparing the sum of the differential pixels at different base positions in the differential image according to the characteristic. Also, since the sum of the above-described predetermined maximum differential pixel absolute values may be determined from the image frame acquired at the moment of switching the lighthead state, it is apparent that the sum of the predetermined maximum differential pixel absolute values thus determined may be larger than the sum of the maximum differential pixel absolute values when the lighthead is in a steady lighting state.
Therefore, in the embodiment of the present application, it is possible to determine whether the lighting detection passes by judging the third condition, and thus the obtained lighting detection can be made more accurate.
As an alternative embodiment, for the current differential image, summing absolute values of differential pixels at each base position in each base may include: firstly, thresholding operation is carried out on the current differential image, then the sum of the absolute values of the differential pixels at the position of each lamp holder in each lamp holder is obtained, and normalization processing is carried out on the obtained sum of the absolute values of the differential pixels.
It should be noted that the operations for implementing thresholding and normalization in the embodiment of the present application are the same as or similar to the operations for implementing thresholding and normalization in the foregoing embodiment of the present application, and are not described herein again.
By the embodiment of the application, the influence on the calculation result of the sum of the absolute values of the differential pixels due to different lamp cap regions selected aiming at different lamp caps can be eliminated. Moreover, the influence on the calculation result of the sum of the absolute values of the differential pixels due to the difference of different differential pixels in the differential image can be eliminated.
As an alternative embodiment, the method may further comprise: acquiring continuous multi-frame images in at least one signal lamp period; and carrying out initialization processing on the continuous multi-frame images to obtain the sum of the absolute values of the preset maximum difference pixels corresponding to the target lamp holder.
Further, as an alternative embodiment, performing initialization processing on consecutive multi-frame images to obtain the sum of absolute values of the predetermined maximum difference pixels corresponding to the target lighthead may include: setting an initial value of the sum of the absolute values of the maximum difference pixels corresponding to the target lamp holder to 0; aiming at every two continuous images in the continuous multi-frame images, solving corresponding differential images to obtain a plurality of differential images; and assigning the sum of the absolute values of the maximum difference pixels corresponding to the target lamp holder based on each difference image in the plurality of difference images to obtain the sum of the absolute values of the preset maximum difference pixels.
Specifically, in the embodiment of the present application, the sum of the absolute values of the predetermined maximum differential pixels described above may be obtained according to the following operation flow:
and (1) acquiring the specific position (which can be determined by the position point of the upper left corner and the position point of the lower right corner of the lamp cap) of each lamp cap in the current signal lamp in each frame of image in the video stream, and the number n _ pixel of pixel points contained in the image position.
Operation (2) of performing initialization processing to set the lighted base in an initial state as unknown; setting the image frame number required for initialization as init _ waiting _ frame _ number (at least including all image frame numbers in one signal lamp period); the initial value of the sum of the absolute values of the maximum differential pixels when the red lamp is lit is set to 0, that is, red _ max _ diff _ sum1 is set to 0, the initial value of the sum of the absolute values of the maximum differential pixels when the green lamp is lit is set to 0, that is, green _ max _ diff _ sum1 is set to 0, and the initial value of the sum of the absolute values of the maximum differential pixels when the yellow lamp is lit is set to 0, that is, yellow _ max _ diff _ sum1 is set to 0.
And (3) adding the current image to the tail of the image queue img _ list, if the current length of the image queue img _ list is 2, turning to the operation (4), otherwise, continuously adding the next frame of image to the tail of the image queue img _ list, and turning to the operation (4) after the current length of the image queue img _ list is 2.
And operation (4) calculating the difference between frames, namely img _ list [1] -img _ list [0], and performing thresholding operation on the difference image, wherein img _ list [1] represents the latest frame image, and img _ list [0] represents the previous frame image of the latest frame image.
And (5) calculating the sum of the absolute values of the differential pixels in each lighthead position in the differential image, and dividing the sum by n _ pixel of the corresponding lighthead to perform normalization processing. The one of all the positions of the lamp base where the sum of the absolute values of the differential pixels is the largest (denoted as max _ diff _ sum _ current1) is found and it is obtained which lamp color it is. If it is greater than max _ diff _ sum1 for the corresponding lamp color, then the value of max _ diff _ sum _ currentl is assigned to max _ diff _ sum1 for the corresponding lamp color.
And operation (6), deleting the image img _ list [0] in the image queue, and recording the current image img _ list [1] as the image img _ list [0 ]. And (4) judging whether the number of the currently processed images is greater than the init _ waiting _ frame _ number, if so, ending the initialization flow, otherwise, continuing the initialization, namely, repeatedly executing the operations (3) - (6).
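Operations (1)-(6) can be condensed into a short initialization sketch; the region format and function name are assumptions, and the thresholding and normalization follow the steps described above.

```python
import numpy as np

def initialize_max_sums(frames, regions, thres):
    """Scan consecutive frames covering at least one signal cycle and record,
    per lamp head, the largest normalized thresholded difference sum seen.

    A sketch of operations (1)-(6) above with illustrative names;
    regions maps a lamp name to an assumed (x0, y0, x1, y1) box.
    """
    max_sums = {name: 0.0 for name in regions}           # operation (2)
    for prev, curr in zip(frames, frames[1:]):           # operations (3)-(4)
        diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
        binary = np.where(diff > thres, 255, 0)
        sums = {}
        for name, (x0, y0, x1, y1) in regions.items():   # operation (5)
            patch = binary[y0:y1, x0:x1]
            sums[name] = patch.sum() / patch.size        # normalize by n_pixel
        best = max(sums, key=sums.get)
        if sums[best] > max_sums[best]:
            max_sums[best] = sums[best]                  # keep the largest seen
    return max_sums                                      # operation (6) loop done
```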
According to the embodiment of the application, the application also provides a device for identifying the fault of the traffic signal lamp.
As shown in fig. 6, the apparatus 600 for identifying a traffic signal lamp fault includes: an acquisition module 610, a determination module 620, and an identification module 630.
Specifically, the acquiring module 610 is configured to acquire a first video stream acquired for a target traffic signal lamp group;
a determining module 620, configured to determine, for each frame of image in the first video stream, a position of the target traffic signal light group in the image, and intercept a corresponding image based on the position to obtain a second video stream; and
an identification module 630 for performing at least one of the following operations to identify whether the target traffic signal set is malfunctioning: performing RGB three-color consistency detection on the image position of each lamp holder in the target traffic signal lamp group based on each frame image in the second video stream; differential images are respectively obtained based on every two adjacent frames of images in the second video stream, and lighting detection is carried out based on each obtained differential image.
It should be noted that the embodiments of the apparatus portion of the present application are the same as or similar to the embodiments of the method portion, and the apparatus for identifying a traffic signal lamp fault in the embodiments of the present application may be used to implement the method for identifying a traffic signal lamp fault in any of the embodiments of the present application; details are not repeated herein.
According to an embodiment of the present application, there is also provided a navigation system.
As shown in fig. 7, the navigation system 700 includes: an unmanned vehicle 710 and a device for identifying traffic light failure 720.
Specifically, the device 720 for identifying a traffic signal lamp fault is configured to identify a traffic signal lamp fault and send traffic signal lamp fault information to the unmanned vehicle 710, so that the unmanned vehicle 710 adjusts its own navigation route based on the received traffic signal lamp fault information.
It should be noted that, in the embodiment of the present application, the device 720 for identifying a traffic signal lamp fault may be used to implement the method for identifying a traffic signal lamp fault in any of the embodiments of the present application, which is not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 8 is a block diagram of an electronic device for implementing the method of identifying a traffic signal lamp fault according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 8, the electronic device (which may be a roadside device) includes: one or more processors 801, a memory 802, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 8, one processor 801 is taken as an example.
The memory 802 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the method of identifying traffic signal faults provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of identifying traffic signal faults provided herein.
The memory 802, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method of identifying traffic signal faults in the embodiments of the present application (e.g., the obtaining module 610, the determining module 620, and the identifying module 630 shown in fig. 6). The processor 801 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 802, that is, implements the method of identifying traffic signal light failure in the above method embodiments.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of an electronic device that recognizes a traffic signal failure, and the like. Further, the memory 802 may include high speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 802 optionally includes memory located remotely from the processor 801, which may be connected over a network to electronics that identify traffic signal failure. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the method for identifying a traffic signal lamp fault of the present application may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, and are exemplified by a bus in fig. 8.
The input device 803, such as a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, track ball, or joystick, may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device that identifies traffic signal lamp faults. The output device 804 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, whether a traffic signal lamp fails is identified by lighting detection based on the time-sequence characteristics of the traffic signal lamp and/or by RGB three-channel consistency detection. Compared with identifying traffic signal lamp faults with a neural network model as in the related art, these solutions require no supervised machine learning and therefore no data labeling, which saves the cost of labeling data. In addition, because the technical solutions provided by the embodiments of the present application do not depend on a neural network model, they overcome the defect in the related art that lamps of different colors are on for different lengths of time, so that the training data is naturally unevenly distributed and the neural network model is difficult to train. Furthermore, since no neural network model is relied on, no large amount of computing resources, storage resources, and the like is required, which overcomes the defect of poor real-time performance caused by the long inference time of a neural network model.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A method of identifying a traffic signal fault, comprising:
acquiring a first video stream collected aiming at a target traffic signal lamp group;
determining the position of the target traffic signal lamp group in the image aiming at each frame of image in the first video stream, and intercepting the corresponding image based on the position to obtain a second video stream; and
performing at least one of the following operations to identify whether the target traffic signal lamp group is malfunctioning:
performing RGB three-channel consistency detection on the image positions of all lamp holders in the target traffic signal lamp group based on each frame of image in the second video stream;
obtaining difference images respectively based on every two adjacent frames of images in the second video stream, and performing lighting detection based on each obtained difference image.
2. The method of claim 1, wherein the performing of RGB three-channel consistency detection on the image positions of the lamp holders in the target traffic signal lamp group based on each frame of image in the second video stream comprises: performing a first operation based on each frame of image in the second video stream, wherein,
the first operation includes:
extracting an RGB image of each of the lamp holders based on the position of each lamp holder in the current frame image; and
extracting all the single-color channel images of each lamp holder based on the RGB image of each lamp holder, and performing consistency detection on all the single-color channel images of all the lamp holders.
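For illustration only, the extraction step of this first operation can be sketched as follows. This is a minimal sketch assuming the frame is an H×W×3 numpy array in RGB channel order; the function name and the lamp-holder box format are illustrative assumptions:

```python
import numpy as np

def extract_channel_images(frame, lamp_boxes):
    """Crop the RGB image of each lamp holder from the current frame
    and split it into its three single-color channel images."""
    channels = {}
    for name, (x0, y0, x1, y1) in lamp_boxes.items():
        rgb = frame[y0:y1, x0:x1, :]  # RGB image of this lamp holder
        channels[name] = {
            "r": rgb[:, :, 0],  # red channel image
            "g": rgb[:, :, 1],  # green channel image
            "b": rgb[:, :, 2],  # blue channel image
        }
    return channels
```

The returned single-color channel images are what the consistency detection of claims 3 and 4 compares pairwise.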
3. The method of claim 2, wherein the performing of consistency detection on all the single-color channel images of all the lamp holders comprises:
carrying out difference processing on the red channel images of all lamp holders in pairs to obtain at least one first difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each first difference image in the at least one first difference image is smaller than a first pixel threshold value;
carrying out difference processing on the green channel images of all lamp holders in pairs to obtain at least one second difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each second difference image in the at least one second difference image is smaller than a second pixel threshold value;
carrying out difference processing on the blue channel images of all lamp holders in pairs to obtain at least one third difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each third difference image in the at least one third difference image is smaller than a third pixel threshold value; and
if the sum of the absolute values of all difference pixels obtained based on each first difference image is smaller than the first pixel threshold, the sum of the absolute values of all difference pixels obtained based on each second difference image is smaller than the second pixel threshold, and the sum of the absolute values of all difference pixels obtained based on each third difference image is smaller than the third pixel threshold, determining that the RGB three-channel consistency detection is passed.
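For illustration only, the pairwise-difference test of claim 3 can be sketched as follows. This is a minimal sketch assuming each lamp holder is given as an H×W×3 RGB numpy array; the function name and the concrete threshold values in the usage are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

def channel_consistency_below(lamp_rgbs, thresholds):
    """Difference-process each color channel of all lamp holders in pairs
    and check every sum of absolute difference pixels against that
    channel's threshold (claim-3 form: detection passes when all sums
    are smaller than the thresholds)."""
    for ch, thr in zip(range(3), thresholds):  # 0 = R, 1 = G, 2 = B
        channels = [img[:, :, ch].astype(np.int32) for img in lamp_rgbs]
        for a, b in combinations(channels, 2):
            if np.abs(a - b).sum() >= thr:
                return False  # one pair differs too much in this channel
    return True  # RGB three-channel consistency detection passed
```

The claim-4 variant is the mirror image: the detection passes when every sum is greater than its threshold rather than smaller.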
4. The method of claim 2, wherein the performing of consistency detection on all the single-color channel images of all the lamp holders comprises:
carrying out difference processing on the red channel images of all lamp holders in pairs to obtain at least one first difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each first difference image in the at least one first difference image is greater than a fourth pixel threshold value;
carrying out difference processing on the green channel images of all lamp holders in pairs to obtain at least one second difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each second difference image in the at least one second difference image is greater than a fifth pixel threshold value;
carrying out difference processing on the blue channel images of all lamp holders in pairs to obtain at least one third difference image, and judging whether the sum of absolute values of all difference pixels obtained based on each third difference image in the at least one third difference image is greater than a sixth pixel threshold value; and
if the sum of the absolute values of all difference pixels obtained based on each first difference image is greater than the fourth pixel threshold, the sum of the absolute values of all difference pixels obtained based on each second difference image is greater than the fifth pixel threshold, and the sum of the absolute values of all difference pixels obtained based on each third difference image is greater than the sixth pixel threshold, determining that the RGB three-channel consistency detection is passed.
5. The method according to any one of claims 1 to 4, wherein the obtaining of difference images respectively based on every two adjacent frames of images in the second video stream and performing lighting detection based on each obtained difference image comprises: deriving difference images respectively based on every two adjacent frames of images in the second video stream, and sequentially performing a second operation based on each derived difference image, wherein,
the second operation includes:
solving, according to the current difference image, the sum of the absolute values of the difference pixels at the position of each of the lamp holders;
acquiring, from all the lamp holders, a target lamp holder with the largest sum of absolute values of difference pixels; and
if the obtained maximum sum of difference pixel absolute values is consistent with the sum of the preset maximum difference pixel absolute values corresponding to the target lamp holder, determining that the target lamp holder among all the lamp holders is lit, wherein a lamp holder among all the lamp holders being lit represents that the current lighting detection is passed.
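For illustration only, the second operation of claim 5 can be sketched as follows. This is a minimal sketch assuming the difference image is a numpy array and the predetermined maxima come from initialization; the M% rule of claim 7 is used as the "consistent with" test, and the function name, the box format, and the default value of M are illustrative assumptions:

```python
import numpy as np

def lighting_detection(diff_img, lamp_boxes, predetermined_max, m_percent=60):
    """Return the lamp holder judged lit for this difference image, or None.
    A holder passes when its region has the largest sum of absolute
    difference pixels and that sum exceeds M% of its predetermined maximum."""
    sums = {}
    for name, (x0, y0, x1, y1) in lamp_boxes.items():
        sums[name] = float(np.abs(diff_img[y0:y1, x0:x1]).sum())
    target = max(sums, key=sums.get)  # lamp holder with the largest sum
    if sums[target] > predetermined_max[target] * m_percent / 100.0:
        return target  # the current lighting detection is passed
    return None
```

The additional N-times condition of claim 6 could be added by also requiring sums[target] to exceed N times each of the other holders' sums before returning.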
6. The method of claim 5, further comprising:
if the obtained maximum sum of difference pixel absolute values is consistent with the sum of the preset maximum difference pixel absolute values, and the sum of the absolute values of the difference pixels at the position of the target lamp holder is greater than N times the sum of the absolute values of the difference pixels at the position of each of the other lamp holders, determining that the target lamp holder among all the lamp holders is lit.
7. The method of claim 5, wherein the obtained maximum sum of difference pixel absolute values being consistent with the sum of the preset maximum difference pixel absolute values comprises: the obtained maximum sum of difference pixel absolute values being greater than M% of the sum of the preset maximum difference pixel absolute values, where M is less than 100.
8. The method of claim 5, wherein the solving of the sum of the absolute values of the difference pixels at the position of each lamp holder according to the current difference image comprises:
performing a thresholding operation on the current difference image, then solving the sum of the absolute values of the difference pixels at the position of each of the lamp holders, and normalizing the solved sums of difference pixel absolute values.
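For illustration only, the thresholding-then-normalization step of claim 8 can be sketched as follows. This is a minimal sketch in which a simple binary threshold and division by the region area are used; both concrete choices, the function name, and the default pixel threshold are assumptions, since the claim does not fix them:

```python
import numpy as np

def region_diff_score(diff_img, box, pixel_threshold=15):
    """Threshold the difference image, sum the absolute values inside
    the lamp-holder box, and normalize by the box area so scores from
    holders of different sizes are comparable."""
    x0, y0, x1, y1 = box
    region = np.abs(diff_img[y0:y1, x0:x1])
    # Thresholding operation: suppress small differences as noise.
    region = np.where(region >= pixel_threshold, region, 0)
    area = max((x1 - x0) * (y1 - y0), 1)
    return float(region.sum()) / area  # normalized sum of absolute values
```

Normalizing by area keeps the comparison of claim 5 fair when the lamp-holder regions differ in size.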
9. The method of claim 5, further comprising:
acquiring continuous multi-frame images in at least one signal lamp period;
and initializing the continuous multi-frame images to obtain the sum of the absolute values of the preset maximum difference pixels corresponding to the target lamp holder.
10. The method of claim 9, wherein the initializing of the consecutive multi-frame images to obtain the sum of the preset maximum difference pixel absolute values corresponding to the target lamp holder comprises:
setting an initial value of the sum of the absolute values of the maximum difference pixels corresponding to the target lamp holder to 0;
for every two consecutive images in the consecutive multi-frame images, solving the corresponding difference image to obtain a plurality of difference images; and
assigning the sum of the maximum difference pixel absolute values corresponding to the target lamp holder based on each difference image in the plurality of difference images, to obtain the sum of the preset maximum difference pixel absolute values.
11. An apparatus for identifying a fault in a traffic signal, comprising:
the acquisition module is used for acquiring a first video stream acquired by aiming at the target traffic signal lamp group;
the determining module is used for determining the position of the target traffic signal lamp group in the image aiming at each frame of image in the first video stream and intercepting the corresponding image based on the position to obtain a second video stream; and
an identification module, configured to perform at least one of the following operations to identify whether the target traffic signal lamp group is malfunctioning:
performing RGB three-channel consistency detection on the image positions of all lamp holders in the target traffic signal lamp group based on each frame of image in the second video stream;
obtaining difference images respectively based on every two adjacent frames of images in the second video stream, and performing lighting detection based on each obtained difference image.
12. A navigation system, comprising:
an unmanned vehicle;
the apparatus for identifying a traffic signal fault of claim 11, being adapted to identify a traffic signal fault and send traffic signal fault information to the unmanned vehicle to cause the unmanned vehicle to adjust its own navigation route based on the received traffic signal fault information.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
15. A roadside apparatus characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
CN202011012323.4A 2020-09-23 2020-09-23 Method and device for identifying traffic signal lamp faults, navigation system and road side equipment Active CN112180285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011012323.4A CN112180285B (en) 2020-09-23 2020-09-23 Method and device for identifying traffic signal lamp faults, navigation system and road side equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011012323.4A CN112180285B (en) 2020-09-23 2020-09-23 Method and device for identifying traffic signal lamp faults, navigation system and road side equipment

Publications (2)

Publication Number Publication Date
CN112180285A true CN112180285A (en) 2021-01-05
CN112180285B CN112180285B (en) 2024-05-31

Family

ID=73956060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011012323.4A Active CN112180285B (en) 2020-09-23 2020-09-23 Method and device for identifying traffic signal lamp faults, navigation system and road side equipment

Country Status (1)

Country Link
CN (1) CN112180285B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033464A (en) * 2021-04-10 2021-06-25 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium
CN115080163A (en) * 2022-06-08 2022-09-20 深圳传音控股股份有限公司 Display processing method, intelligent terminal and storage medium
CN115932642A (en) * 2022-11-02 2023-04-07 广东左向照明有限公司 Lamp screening method based on operation monitoring

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496282A (en) * 2011-12-16 2012-06-13 湖南工业大学 Traffic intersection signal light state identification method based on RGB color transformation
WO2014115239A1 (en) * 2013-01-22 2014-07-31 パイオニア株式会社 Traffic light recognition device, control method, program, and memory medium
CN104574960A (en) * 2014-12-25 2015-04-29 宁波中国科学院信息技术应用研究院 Traffic light recognition method
JP2019053619A (en) * 2017-09-15 2019-04-04 株式会社東芝 Signal identification device, signal identification method, and driving support system
CN110782692A (en) * 2019-10-31 2020-02-11 青岛海信网络科技股份有限公司 Signal lamp fault detection method and system
CN111428663A (en) * 2020-03-30 2020-07-17 北京百度网讯科技有限公司 Traffic light state identification method and device, electronic equipment and storage medium
CN111428647A (en) * 2020-03-25 2020-07-17 浙江浙大中控信息技术有限公司 Traffic signal lamp fault detection method
CN111507210A (en) * 2020-03-31 2020-08-07 华为技术有限公司 Traffic signal lamp identification method and system, computing device and intelligent vehicle
CN111652940A (en) * 2020-04-30 2020-09-11 平安国际智慧城市科技股份有限公司 Target abnormity identification method and device, electronic equipment and storage medium
CN111667499A (en) * 2020-06-05 2020-09-15 济南博观智能科技有限公司 Image segmentation method, device and equipment for traffic signal lamp and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496282A (en) * 2011-12-16 2012-06-13 湖南工业大学 Traffic intersection signal light state identification method based on RGB color transformation
WO2014115239A1 (en) * 2013-01-22 2014-07-31 パイオニア株式会社 Traffic light recognition device, control method, program, and memory medium
CN104574960A (en) * 2014-12-25 2015-04-29 宁波中国科学院信息技术应用研究院 Traffic light recognition method
JP2019053619A (en) * 2017-09-15 2019-04-04 株式会社東芝 Signal identification device, signal identification method, and driving support system
CN110782692A (en) * 2019-10-31 2020-02-11 青岛海信网络科技股份有限公司 Signal lamp fault detection method and system
CN111428647A (en) * 2020-03-25 2020-07-17 浙江浙大中控信息技术有限公司 Traffic signal lamp fault detection method
CN111428663A (en) * 2020-03-30 2020-07-17 北京百度网讯科技有限公司 Traffic light state identification method and device, electronic equipment and storage medium
CN111507210A (en) * 2020-03-31 2020-08-07 华为技术有限公司 Traffic signal lamp identification method and system, computing device and intelligent vehicle
CN111652940A (en) * 2020-04-30 2020-09-11 平安国际智慧城市科技股份有限公司 Target abnormity identification method and device, electronic equipment and storage medium
CN111667499A (en) * 2020-06-05 2020-09-15 济南博观智能科技有限公司 Image segmentation method, device and equipment for traffic signal lamp and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033464A (en) * 2021-04-10 2021-06-25 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium
CN113033464B (en) * 2021-04-10 2023-11-21 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium
CN115080163A (en) * 2022-06-08 2022-09-20 深圳传音控股股份有限公司 Display processing method, intelligent terminal and storage medium
CN115932642A (en) * 2022-11-02 2023-04-07 广东左向照明有限公司 Lamp screening method based on operation monitoring

Also Published As

Publication number Publication date
CN112180285B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
EP3859708B1 (en) Traffic light image processing method and device, and roadside device
US11356599B2 (en) Human-automation collaborative tracker of fused object
EP3848853A2 (en) Image detection method, apparatus, electronic device and storage medium
CN112528926B (en) Method, device, equipment and storage medium for detecting signal lamp image abnormality
CN111931724B (en) Signal lamp abnormality identification method and device, electronic equipment and road side equipment
CN111931726B (en) Traffic light detection method, device, computer storage medium and road side equipment
CN111428663A (en) Traffic light state identification method and device, electronic equipment and storage medium
JP7241127B2 (en) Signal light color identification method, device and roadside equipment
CN105160924A (en) Video processing-based intelligent signal lamp state detection method and detection system
CN111931715B (en) Method and device for recognizing state of vehicle lamp, computer equipment and storage medium
CN113469109B (en) Traffic light identification result processing method and device, road side equipment and cloud control platform
US20210110168A1 (en) Object tracking method and apparatus
CN112180285B (en) Method and device for identifying traffic signal lamp faults, navigation system and road side equipment
CN112419722A (en) Traffic abnormal event detection method, traffic control method, device and medium
CN113378769A (en) Image classification method and device
CN112131414A (en) Signal lamp image labeling method and device, electronic equipment and road side equipment
CN110689747A (en) Control method and device of automatic driving vehicle and automatic driving vehicle
CN113221878A (en) Detection frame adjusting method and device applied to signal lamp detection and road side equipment
CN112699754A (en) Signal lamp identification method, device, equipment and storage medium
CN115891868A (en) Fault detection method, device, electronic apparatus, and medium for autonomous vehicle
JP2022120116A (en) Traffic light identification method, apparatus, electronic device, storage medium, computer program, roadside device, cloud control platform, and vehicle road cooperative system
JP2013171319A (en) Vehicle state detection device, vehicle behavior detection device and vehicle state detection method
CN110817674B (en) Method, device and equipment for detecting step defect of escalator and storage medium
CN112396668B (en) Method and device for identifying abnormal lamp color in signal lamp and road side equipment
KR102161212B1 (en) System and method for motion detecting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20211009

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd.

Address before: 2 / F, baidu building, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant