CN113129375B - Data processing method, device, equipment and storage medium - Google Patents
- Publication number: CN113129375B (application CN202110432270.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- halation
- lamp
- traffic indicator
- traffic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06T7/60 — Analysis of geometric attributes
- G06T7/90 — Determination of colour characteristics
- G06T2207/20081 — Training; Learning (indexing scheme for image analysis or image enhancement)
Abstract
The disclosure provides a data processing method, apparatus, device, and storage medium, relating to the field of computer technology and further to artificial intelligence technologies such as intelligent transportation, roadside perception, and deep learning. The specific implementation scheme is as follows: according to the position information of the halation presented by a traffic indicator lamp in an image to be processed and a reference image of the image to be processed, the lamp frame size information of the target group of traffic indicator lamps associated with the halation in the image to be processed is determined; the position information of the target group of traffic indicator lamps in the image to be processed is determined according to the lamp frame size information of the target group and the color information and position information of the halation; and the image to be processed is labeled according to the position information of the target group of traffic indicator lamps in the image to be processed, and the labeled image is used as a training sample. This embodiment provides a processing method for halation image data, enriches the sample data, and thereby improves the accuracy of the model.
Description
Technical Field
The present disclosure relates to the field of computer technology, in particular to the field of artificial intelligence, and more particularly to the fields of intelligent transportation, roadside perception, and deep learning.
Background
With the development of artificial intelligence technology, neural network models are used ever more widely. For example, where traffic light detection and light color recognition are required, it is currently common to collect images of traffic lights on roads using roadside sensing devices (such as roadside cameras) and to train neural network models on the collected image data, obtaining a traffic light detection model and a light color recognition model.
However, current training samples consider only image data in which the traffic lights present no halation. The training samples therefore lack diversity, the accuracy of the trained models is low, and improvement is needed.
Disclosure of Invention
The present disclosure provides a data processing method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a data processing method, the method including:
determining the lamp frame size information of a target group of traffic indicator lamps associated with the halation in an image to be processed according to the position information of the halation presented by a traffic indicator lamp in the image to be processed and a reference image of the image to be processed; wherein the reference image and the image to be processed are acquired by the same roadside sensing device at the same position and the same angle, and the traffic indicator lamps in the reference image present no halation;
determining the position information of the target group of traffic indicator lamps in the image to be processed according to the lamp frame size information of the target group of traffic indicator lamps and the color information and position information of the halation; and
labeling the image to be processed according to the position information of the target group of traffic indicator lamps in the image to be processed, and using the labeled image as a training sample.
According to another aspect of the present disclosure, there is provided a data processing apparatus comprising:
a lamp frame size information determining module, configured to determine the lamp frame size information of a target group of traffic indicator lamps associated with the halation in an image to be processed according to the position information of the halation presented by a traffic indicator lamp in the image to be processed and a reference image of the image to be processed; wherein the reference image and the image to be processed are acquired by the same roadside sensing device at the same position and the same angle, and the traffic indicator lamps in the reference image present no halation;
a position information determining module, configured to determine the position information of the target group of traffic indicator lamps in the image to be processed according to the lamp frame size information of the target group of traffic indicator lamps and the color information and position information of the halation; and
a processing module, configured to label the image to be processed according to the position information of the target group of traffic indicator lamps in the image to be processed, and to use the labeled image as a training sample.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data processing method of any one embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the data processing method according to any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a data processing method as described in any of the embodiments of the present disclosure.
According to the disclosed technology, a processing method for halation image data is provided: image data containing halation is used as samples for model training, which enriches the sample data and thereby improves the accuracy of the model.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a data processing method provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow chart of another data processing method provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow chart of yet another data processing method provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a data processing apparatus provided according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing a data processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, including various details of the embodiments to facilitate understanding; these should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a flowchart of a data processing method provided according to an embodiment of the present disclosure. This embodiment is applicable to processing image data, in particular to processing images that include traffic indicator lamps presenting halation. The embodiment may be performed by a data processing apparatus configured in an electronic device, and may be implemented in software and/or hardware. As shown in Fig. 1, the data processing method includes:
s101, determining the lamp frame size information of the traffic indicator lamp of the target group associated with the lamp halo in the image to be processed according to the position information of the lamp halo presented by the traffic indicator lamp in the image to be processed and the reference image of the image to be processed.
In this embodiment, a traffic indicator lamp, also called a traffic signal lamp or traffic light, is a basic element of road traffic control. Optionally, a group of traffic indicator lamps includes at least three lamps, for example a group consisting of red, yellow, and green lamps.
The image to be processed is specifically an image that includes traffic indicator lamps presenting halation (also called a halo). Optionally, in this embodiment, the image to be processed is acquired by a roadside sensing device (such as a roadside camera) fixedly installed at a road junction under an environment with poor imaging quality; such environments include, but are not limited to, night and haze.
Optionally, images acquired by a plurality of roadside sensing devices can be collected in advance, and the traffic indicator lamps of interest labeled in the collected images. For a group of traffic indicator lamps whose lamp frame is visible, the group can be labeled in the image with a labeling frame, together with attribute information of the lamp frame (such as the identifier of the roadside sensing device that collected the image, the lamp frame position information, and the color information of the group). For a group of which only the halation is visible, the halation can be labeled with a labeling frame, together with attribute information of the halation (such as the device identifier, and the position and color information of the halation). Any labeled image in which a halation is identified is taken as an image to be processed. The number of images to be processed may be one or more; in a scenario for training a traffic indicator lamp detection model and a lamp color recognition model, a plurality of images to be processed is preferred.
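The pre-labeling scheme described above can be sketched as a small record type. This is an illustrative Python sketch; the field names (`device_id`, `box`, `kind`, `color`) are assumptions, not the patent's actual data format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LightAnnotation:
    """One labeled box in a collected image (illustrative schema)."""
    device_id: str                  # roadside sensing device that captured the image
    box: Tuple[int, int, int, int]  # pixel coords (x_min, y_min, x_max, y_max)
    kind: str                       # "frame" if the lamp frame is visible, "halo" otherwise
    color: str                      # lit-lamp color, or "unknown" for an unidentifiable halo

def needs_processing(annotations: List[LightAnnotation]) -> bool:
    """An image is treated as an image to be processed if any labeled box is a halo."""
    return any(a.kind == "halo" for a in annotations)
```

An image whose annotations contain at least one `"halo"` entry would then be routed into the halation-processing pipeline described below.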
It should be noted that, in general, only one lamp in a group of traffic indicator lamps (such as the lamps at a road junction) is lit at a time; that is, if a roadside sensing device captures only one group of traffic indicator lamps, the image to be processed acquired by that device contains one halation associated with the group. If, in a particular scene, two or more lamps of a group are lit, the image to be processed may contain two or more halations associated with the group. This embodiment can handle any number of halations associated with a group of traffic indicator lamps; this and the subsequent embodiments are described for the case in which only one lamp of a group is lit at a time.
For example, after the images to be processed are acquired, a reference image associated with each image to be processed may be acquired. The reference image and the image to be processed are acquired by the same roadside sensing device at the same position and the same angle, and the traffic indicator lamps in the reference image present no halation. Specifically, for each image to be processed, the roadside sensing device identifier can be obtained from the attribute information of the halation; the other labeled images collected by the device associated with that identifier are then obtained, and an image that presents no halation and in which the lamp frames are visible is selected from them as the reference image. That is, of two images acquired by the same roadside sensing device, at the same position and angle, of a scene comprising at least one group of traffic indicator lamps at different moments, the image in which the traffic indicator lamps present halation serves as the image to be processed, and the other, presenting no halation, serves as its reference image.
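The reference-image selection just described can be sketched as follows. This is a minimal illustration assuming a flat dictionary of labeled images keyed by image id; the data layout is an assumption for the example only.

```python
def pick_reference(to_process_device_id, labeled_images):
    """Select a reference image: same device, no halation, lamp frames visible.

    `labeled_images` maps image id -> (device_id, annotations), where each
    annotation is a dict with "kind" in {"frame", "halo"} (illustrative format).
    Returns the id of a usable reference image, or None if there is none.
    """
    for image_id, (device_id, anns) in labeled_images.items():
        if device_id != to_process_device_id:
            continue  # must come from the same roadside sensing device
        if any(a["kind"] == "halo" for a in anns):
            continue  # a reference image must present no halation
        if any(a["kind"] == "frame" for a in anns):
            return image_id  # lamp frames are visible: usable as reference
    return None
```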
It can be understood that, since the lamp frame sizes of a group of traffic indicator lamps are fixed, the lamp frame size information of the same group is identical across different images photographed by the same roadside sensing device at the same position and angle. For each image to be processed, after its reference image is acquired, the lamp frame size information of the target group of traffic indicator lamps associated with the halation can be determined from the reference image according to the position information of the halation. The position information of the halation may be its pixel coordinates in the image to be processed, specifically the pixel coordinates of the labeling frame that labels the halation: for example, the coordinates of the labeling frame's four vertices, or of any two diagonally opposite vertices. In this embodiment, the group of traffic indicator lamps containing the lamp(s) presenting halation in the image to be processed is taken as the target group, whose lamp frame is not visible in the image to be processed. The lamp frame size information may include the length and width of the lamp frame.
For example, if the reference image contains only one group of traffic indicator lamps, the distance between that group and the halation is determined from the group's lamp frame position information and the position information of the halation. If the distance is smaller than a set threshold, the target group of traffic indicator lamps associated with the halation and the group in the reference image are the same group in the actual scene, and the lamp frame size information of the target group can then be determined from the lamp frame position information of the group in the reference image. The lamp frame position information may be the pixel coordinates of the lamp frame in the reference image, specifically the pixel coordinates of the labeling frame that labels the lamp frame: for example, the coordinates of its four vertices, or of any two diagonally opposite vertices.
Optionally, the length and width of the group of traffic indicator lamps can be determined from the lamp frame position information of the group in the reference image, thereby determining the length and width, i.e. the lamp frame size information, of the target group of traffic indicator lamps associated with the halation in the image to be processed.
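Deriving the size information from the labeled position information amounts to differencing the diagonal vertices of the labeling frame, for example:

```python
def frame_size(box):
    """Width and height of a lamp frame from its labeled pixel box.

    `box` is (x_min, y_min, x_max, y_max): two diagonally opposite vertices
    of the labeling frame, as described above.
    """
    x_min, y_min, x_max, y_max = box
    return x_max - x_min, y_max - y_min
```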
Further, if the reference image contains two or more groups of traffic indicator lamps, the group corresponding to the target group associated with the halation can be selected from among them, and the lamp frame size information of the target group then determined from that group's lamp frame position information.
S102, determining the position information of the traffic indicator lights of the target group in the image to be processed according to the size information of the lamp frame of the traffic indicator lights of the target group, and the color information and the position information of the halation.
It should be noted that, in the actual scene, the relative positional relationship between the lamps within a group of traffic indicator lamps is fixed; for example, the relative positions of the red, yellow, and green lamps are fixed.
Further, after the lamp frame size information of the target group is determined, if the color of the halation is known, i.e. one of the lamp colors of the group (such as red, yellow, or green), the color and position information of the halation and the lamp frame size information of the target group may be input into a pre-trained position determination model to obtain the position information of the target group in the image to be processed. In this embodiment, that position information is the pixel coordinates of the target group's lamp frame in the image to be processed, for example the coordinates of the frame's four vertices.
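The patent specifies a pre-trained position determination model for this step. Purely as an illustrative stand-in, the same inference can be written geometrically under the assumption of a vertical red/yellow/green stack in which each lamp occupies a third of the frame height and the halation center coincides with the lit lamp's center; the layout assumption and function are hypothetical, not the patent's model.

```python
def frame_box_from_halo(halo_center, halo_color, frame_w, frame_h):
    """Geometric sketch: recover the full lamp-frame box around a halation.

    Assumes a vertical red/yellow/green stack, each lamp one third of the
    frame height, with the halation centered on the lit lamp. Returns
    (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    cx, cy = halo_center
    lamp_index = {"red": 0, "yellow": 1, "green": 2}[halo_color]
    lamp_h = frame_h / 3.0
    y_min = cy - (lamp_index + 0.5) * lamp_h  # frame top relative to the lit lamp
    x_min = cx - frame_w / 2.0
    return (x_min, y_min, x_min + frame_w, y_min + frame_h)
```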
For example, if the color of the halation is unknown and the group of traffic indicator lamps has multiple colors, each lamp color may in turn be assumed to be the halation color, and the position information of the target traffic indicator lamps in the image to be processed determined for each assumed color from the position information of the halation and the lamp frame size information of the target group. Then, an image that was acquired by the same roadside sensing device under the same environment as the image to be processed (such as at night), and that presents no halation, is obtained; the candidate position information determined for each color is compared with the lamp frame position information of each group of traffic indicator lamps in that image, and from the comparison result both the position information of the target group in the image to be processed and the color of the halation can be determined.
For example, if the candidate position information determined for a certain color is close to the lamp frame position information of a group of traffic indicator lamps in the acquired image, the target traffic indicator lamps and that group are the same group in the actual scene; the candidate position information determined under the assumption that the halation has that color is then used as the final position information of the target group in the image to be processed.
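The try-each-color comparison can be sketched as below. The vertical red/yellow/green stack geometry and the tolerance value are illustrative assumptions; `known_frames` stands for the labeled frame boxes from a no-halation image taken by the same device under the same environment.

```python
def resolve_color_and_box(halo_center, frame_w, frame_h, known_frames, tol=15.0):
    """For an unknown-color halation, try each lamp color and keep the
    candidate frame box that (nearly) coincides with a known frame box.

    Boxes are (x_min, y_min, x_max, y_max). Returns (color, box) for the
    best match, or None if no candidate falls within `tol` of a known frame.
    """
    cx, cy = halo_center
    lamp_h = frame_h / 3.0
    best = None
    for idx, color in enumerate(("red", "yellow", "green")):
        x_min = cx - frame_w / 2.0
        y_min = cy - (idx + 0.5) * lamp_h  # assumed vertical stack geometry
        cand = (x_min, y_min, x_min + frame_w, y_min + frame_h)
        for kf in known_frames:
            diff = max(abs(a - b) for a, b in zip(cand, kf))
            if diff < tol and (best is None or diff < best[0]):
                best = (diff, color, cand)
    if best is None:
        return None
    return best[1], best[2]
```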
And S103, marking the image to be processed according to the position information of the traffic indicator lamp of the target group in the image to be processed, and taking the marked image as a training sample.
Specifically, after the position information of the target group in the image to be processed is determined, the lamp frame of the target group can be labeled in the image with a labeling frame according to that position information. At the same time, attribute information of the lamp frame can be labeled, such as its position information, its color information, and the identifier of the roadside sensing device that collected the image. The color information of the lamp frame is the color of the lit lamp in the target group, i.e. the color of the halation.
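The labeling step can be sketched as attaching the recovered box and its attributes to the image record; the dictionary keys here are illustrative assumptions, not the patent's label format.

```python
def label_image(image_record, frame_box, halo_color, device_id):
    """Return a copy of `image_record` with the recovered lamp frame
    appended as a label, yielding a training sample (illustrative format)."""
    sample = dict(image_record)  # shallow copy; original record left untouched
    sample["labels"] = list(image_record.get("labels", [])) + [{
        "box": frame_box,        # position of the target group's lamp frame
        "color": halo_color,     # color of the lit lamp, i.e. the halation color
        "device_id": device_id,  # roadside sensing device that captured the image
    }]
    return sample
```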
Optionally, after the image to be processed is labeled, the labeled image can be used as a training sample for training the traffic indicator lamp detection model and the lamp color recognition model.
It should be noted that existing traffic indicator lamp detection models and lamp color recognition models are trained on samples that consider only image data in which the traffic indicator lamps present no halation, so the accuracy of the models is low. On that basis, this embodiment processes image data in which the traffic indicator lamps present halation and uses the processed images as additional training samples.
In addition, it should be noted that, in a roadside sensing scenario, a roadside sensing device (such as a roadside camera) is installed at the roadside, and its position relative to the traffic indicator lamps is nominally fixed; hence the current practice of cropping the image at a designated position, namely the position of the traffic indicator lamps labeled in the image in advance, and performing lamp color recognition on the crop. However, in the actual scene the position of the roadside sensing device or of the traffic indicator lamps may change (for example, the base or the support rod may deform slightly), so the existing lamp color recognition method cannot reliably extract a complete group of traffic indicator lamps from the designated position.
In this embodiment, the influence on the positions of the roadside sensing device and the traffic indicator lamps of factors such as thermal expansion and contraction and support rod deformation is fully considered: taking the reference image of the image to be processed as the standard and combining it with the position information of the halation, the lamp frame size information of the target group associated with the halation is determined from the reference image. Meanwhile, in determining the position information of the target group in the image to be processed, this embodiment fully exploits the fixed relative positional relationship within a group of traffic indicator lamps, i.e. the position information is determined from the lamp frame size information of the target group and the position and color information of the halation. This improves the accuracy of the determined position information and lays a foundation for subsequently obtaining a high-accuracy model.
According to the technical scheme of this embodiment, the reference image of the image to be processed is taken as the standard; combined with the position information of the halation, the lamp frame size information of the target group associated with the halation is determined from the reference image; the position information of the target group in the image to be processed is determined from that size information and the position and color information of the halation; the image to be processed is labeled accordingly; and the labeled image serves as a sample for training the traffic indicator lamp detection model and the lamp color recognition model. Compared with the prior art, this scheme processes image data in which traffic indicator lamps present halation and uses the processed images as training samples, thereby enriching the training samples and improving the accuracy of the models.
Fig. 2 is a flow chart of another data processing method provided in accordance with an embodiment of the present disclosure. The present embodiment further explains how to determine the frame size information of the traffic lights of the target group based on the above embodiments. As shown in fig. 2, the data processing method includes:
S201, determining, from at least two groups of traffic indicator lamps, the same-group traffic indicator lamps of the target group associated with the halation, according to the position information of the halation presented by the traffic indicator lamps in the image to be processed and the lamp frame position information of the at least two groups in the reference image of the image to be processed.
In this embodiment, the same group traffic indicator lamp and the target group traffic indicator lamp are the same group traffic indicator lamp in the actual scene.
Alternatively, where the reference image of the image to be processed includes two or more groups of traffic indicator lamps, the same-group traffic indicator lamps of the target group may be selected from among them. For example, for each group in the reference image, the distance between the halation and the group may be determined from the group's position information and the position information of the halation, and the same-group traffic indicator lamps of the target group then selected according to the distances between the halation and each group.
Further, as an alternative manner of the embodiment of the disclosure, determining the same-group traffic indicator lamps of the target group may comprise: determining the center point coordinates of the halation according to its position information; determining the lamp frame center point coordinates of the at least two groups of traffic indicator lamps according to their lamp frame position information in the reference image; calculating the distances between the center point of the halation and the lamp frame center points of the at least two groups; and determining, according to the distances, the same-group traffic indicator lamps of the target group associated with the halation from the at least two groups.
Specifically, the center point coordinates of the halation are determined according to the pixel coordinates of the labeling frame that labels the halation; meanwhile, for each group of traffic indicator lamps in the reference image, the lamp frame center point coordinates of the group can be determined according to the pixel coordinates of the labeling frame that labels the lamp frame of the group, and the distance between the lamp frame center point coordinates of the group and the center point coordinates of the halation, namely the distance between the group and the halation, is calculated; then, the same group of traffic indicator lamps of the target group can be selected from the multiple groups of traffic indicator lamps in the reference image according to the distance between each group and the halation. For example, the group of traffic indicator lamps in the reference image corresponding to the minimum distance may be selected as the same group of traffic indicator lamps of the target group. Further, in order to ensure accuracy, the minimum of the distances between the groups of traffic indicator lamps in the reference image and the halation may be compared with a set threshold; if the minimum distance is smaller than the set threshold, it indicates that the target group of traffic indicator lamps associated with the halation and the group in the reference image corresponding to the minimum distance are the same group of traffic indicator lamps in the actual scene, so that the group in the reference image corresponding to the minimum distance can be taken as the same group of traffic indicator lamps of the target group.
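The nearest-group selection with a distance threshold described above can be sketched as follows; the function name, the `(x1, y1, x2, y2)` box representation, and the threshold parameter are illustrative assumptions rather than notation from the patent.

```python
from math import hypot

def match_same_group(halo_box, group_frame_boxes, dist_threshold):
    """Return the index of the reference-image group whose lamp frame
    center is nearest to the halo center, or None if even the nearest
    group is farther than the set threshold (no associated group)."""
    def center(box):
        # box: labeling-frame pixel coordinates (x1, y1, x2, y2)
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    hx, hy = center(halo_box)
    dists = [hypot(cx - hx, cy - hy)
             for cx, cy in map(center, group_frame_boxes)]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best if dists[best] < dist_threshold else None
```

The `None` branch corresponds to the accuracy check in the text: when the minimum distance is not below the set threshold, no group in the reference image is treated as the same group.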
Illustratively, as a further alternative manner of the embodiment of the present disclosure, determining the same group of traffic indicator lamps of the target group may also proceed as follows: the center point coordinates of the halation are determined according to the position information of the halation; meanwhile, for each group of traffic indicator lamps in the reference image, the position information of each traffic indicator lamp inside the group, that is, the position information of the lighted lamp (such as a red lamp) in the group, can be determined from the lamp frame position information of the group and the relative positional relationship inside the group; further, the center point coordinates of the lighted lamp in the group (namely, the lamp cap center point coordinates) can be determined, and the distance between the center point coordinates of the halation and the center point coordinates of the lighted lamp in the group is calculated; then, the same group of traffic indicator lamps of the target group can be selected from the multiple groups of traffic indicator lamps in the reference image according to the distance between the center point coordinates of the lighted lamp in each group and the center point coordinates of the halation.
S202, according to the lamp frame position information of the same group of traffic indicator lamps, the lamp frame size information of the target group of traffic indicator lamps is determined.
Specifically, after the same group of traffic indicator lamps of the target group is determined, the length information and the width information of the same group of traffic indicator lamps can be determined according to the lamp frame position information of the same group; because the target group of traffic indicator lamps and the same group of traffic indicator lamps are one group of traffic indicator lamps in the actual scene, the length information and the width information of the target group of traffic indicator lamps, namely the lamp frame size information, can thus be determined.
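Carrying the matched frame's size over to the target group reduces to reading the extent of its labeling frame; a minimal sketch, again assuming the `(x1, y1, x2, y2)` pixel-box representation:

```python
def frame_size(frame_box):
    """Length and width (in pixels) of a lamp frame, taken from the
    labeling-frame coordinates (x1, y1, x2, y2) of the matched same-group
    lights. Since both groups are one physical group, this size is reused
    for the target group in the image to be processed."""
    x1, y1, x2, y2 = frame_box
    return abs(x2 - x1), abs(y2 - y1)
```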
S203, determining the position information of the traffic indicator lights of the target group in the image to be processed according to the lamp frame size information of the traffic indicator lights of the target group, and the color information and the position information of the halation.
S204, marking the image to be processed according to the position information of the traffic indicator lamp of the target group in the image to be processed, and taking the marked image as a training sample.
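Step S204 then packages the inferred position into a labeled sample; as an illustrative sketch only (the field names and record layout are assumptions, not a format defined by the patent):

```python
def make_training_sample(image_path, group_box, lamp_color):
    """Assemble one labeled training record for the detection / lamp-color
    models: the image plus the inferred target-group frame box and the
    lit-lamp color recovered from the halo."""
    return {
        "image": image_path,
        "lamp_frame": list(group_box),  # (x1, y1, x2, y2) in pixels
        "lit_color": lamp_color,
    }
```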
According to the technical scheme of this embodiment, in the case where the reference image of the image to be processed contains two or more groups of traffic indicator lamps, the same group of traffic indicator lamps of the halation-associated target group is determined from the reference image; according to the lamp frame position information of that same group, the lamp frame size information of the target group of traffic indicator lamps can be accurately determined; the lamp frame size information of the target group is then combined with the position information and the color information of the halation and the like, so that the position information of the target group of traffic indicator lamps in the image to be processed can be determined; the target group of traffic indicator lamps can then be labeled in the image to be processed based on that position information, and the labeled image is used as a training sample for the traffic indicator lamp detection model and the lamp color recognition model to be trained subsequently. In this way, even when the reference image contains two or more groups of traffic indicator lamps, the same group of traffic indicator lamps of the target group can be accurately determined, and the lamp frame size information of the target group can be determined based on the lamp frame position information of that same group, thereby providing an optional manner for determining the lamp frame size information of the target group of traffic indicator lamps.
Fig. 3 is a flow chart of yet another data processing method provided in accordance with an embodiment of the present disclosure. The embodiment further explains how to determine the position information of the traffic indicator lights of the target group in the image to be processed on the basis of the above embodiment. As shown in fig. 3, the data processing method includes:
S301, determining the lamp frame size information of the traffic indicator lamp of the target group associated with the lamp halo in the image to be processed according to the position information of the lamp halo presented by the traffic indicator lamp in the image to be processed and the reference image of the image to be processed.
In this embodiment, the reference image and the image to be processed are collected at the same position and at the same angle by the same roadside sensing device, and the traffic indicator light in the reference image is presented without a halo.
S302, identifying whether the color information of the halation is an unknown color; if yes, then execute S303; if not, S306 is performed.
S303, acquiring the same-scene image of the image to be processed.
In this embodiment, the influence of factors such as thermal expansion and contraction and support rod deformation in the actual scene on the positions of the roadside sensing equipment and the traffic indicator lamp is fully considered, and the same scene image of the image to be processed is therefore introduced. The same scene image and the image to be processed are collected by the same roadside sensing equipment under the same environmental scene, and the traffic indicator lamp in the same scene image presents no halation. For example, the image to be processed and the same scene image are collected at different times at night by a roadside sensing camera fixedly arranged at a certain road intersection; the traffic indicator lamp in the image to be processed shows halation, while the same scene image shows no halation phenomenon and the lamp frame is visible.
It can be understood that, in the case that the reference image of the image to be processed is the same as the acquisition environment of the image to be processed, the reference image is the same scene image of the image to be processed, and the processes of S304 and S305 can be directly performed without executing S303.
S304, updating the color information of the halation according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image.
Specifically, all the single traffic indicator lamps in the same scene image can be traversed; if the position information of a certain traffic indicator lamp in the same scene image is identified as being the same as, or differing only slightly from, the position information of the halation, it indicates that the traffic indicator lamp presenting the halation in the image to be processed and that traffic indicator lamp in the same scene image are the same traffic indicator lamp in the actual scene; at this time, the color information of the traffic indicator lamp in the same scene image whose position information matches that of the halation can be used as the color information of the halation. For example, if the color information of the traffic indicator lamp in the same scene image whose position information matches that of the halation is red, red is taken as the color information of the halation.
Optionally, as an optional manner of an embodiment of the disclosure, updating the color information of the halo may be determining, according to the position information of the halo and the position information of the single traffic indicator in the same scene image, a height difference between the halo and the single traffic indicator in the same scene image; determining a target lamp associated with the halation from the single traffic light of the scene image according to the height difference; and updating the color information of the halation according to the color information of the target lamp.
Specifically, the center point coordinates of the halation are determined according to the position information of the halation; for each traffic indicator lamp in the same scene image, the center point coordinates of that lamp (namely, the lamp cap center point coordinates) can be determined according to its position information, and the height difference between the center point coordinates of the lamp and the center point coordinates of the halation (namely, the difference in vertical height between the lamp and the halation) can be calculated; then, the traffic indicator lamp in the same scene image corresponding to the minimum height difference can be selected as the target lamp. Further, in order to ensure accuracy, the minimum of the height differences between all traffic indicator lamps in the same scene image and the halation may be compared with a set height threshold; if the minimum height difference is smaller than the set height threshold, it indicates that the traffic indicator lamp presenting the halation in the image to be processed and the corresponding lamp in the same scene image are the same traffic indicator lamp in the actual scene, so that the lamp in the same scene image corresponding to the minimum height difference can be taken as the target lamp associated with the halation, and the color information of the target lamp can be used as the color information of the halation.
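The height-difference matching with a threshold can be sketched as follows; the function name, the `(box, color)` pair representation, and the threshold parameter are illustrative assumptions.

```python
def update_halo_color(halo_box, scene_lamps, height_threshold):
    """scene_lamps: (box, color) pairs for each single traffic light in
    the same scene image, boxes as (x1, y1, x2, y2). Pick the lamp whose
    center height is closest to the halo center; accept its color only if
    the height difference is under the set threshold, otherwise the color
    stays unknown (None)."""
    halo_cy = (halo_box[1] + halo_box[3]) / 2.0
    best_color, best_diff = None, float("inf")
    for box, color in scene_lamps:
        diff = abs((box[1] + box[3]) / 2.0 - halo_cy)
        if diff < best_diff:
            best_diff, best_color = diff, color
    return best_color if best_diff < height_threshold else None
```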
It is noted that in this embodiment, by performing fine-grained division of the group traffic indicator lamps, the concept of a single traffic indicator lamp is introduced, and the target lamp associated with the halation is determined from the same scene image, so that the color information of the halation is updated according to the color information of the target lamp; the accuracy of updating the color information of the halation is thereby further improved, laying a foundation for obtaining a more accurate model.
S305, determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the central point coordinates of the halation, the updated color information of the halation, the lamp frame size information of the traffic indicator lamp of the target group and the relative position relation inside the traffic indicator lamp of the target group.
In this embodiment, the relative positional relationship inside a group of traffic lights is the relative positional relationship between the respective lights inside a group of traffic lights, that is, the relative positional relationship between the different color lights inside a group of traffic lights; for example, the relative positional relationship among the three different color indicator lights of red, yellow and green.
Specifically, the center point coordinates of the halation, updated color information, lamp frame size information of the traffic indicator lights of the target group, relative position relations inside the traffic indicator lights of the target group and the like are input into a pre-trained position determination model, and the position information of the traffic indicator lights of the target group in the image to be processed can be obtained.
Further, according to the updated color information of the halation and the relative positional relationship inside the target group of traffic indicator lamps, the other traffic indicator lamps in the target group except the lamp presenting the halation can be determined, together with the vertical distribution (or left-right distribution) of the lamp presenting the halation; the position information of the target group of traffic indicator lamps in the image to be processed is then determined according to the center point coordinates of the halation, the lamp frame size information of the target group of traffic indicator lamps, and that vertical distribution (or left-right distribution).
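For a vertically arranged group, this geometric reconstruction can be sketched as follows; the function name, the top-to-bottom `layout` tuple, and the assumption that the halo center coincides with the lit lamp's center are illustrative, not details fixed by the patent.

```python
def target_group_box(halo_center, halo_color, frame_w, frame_h,
                     layout=("red", "yellow", "green")):
    """Take the halo center as the center of the lit lamp; the lamp's
    slot in the top-to-bottom layout then fixes where the whole lamp
    frame sits around it. Returns (x1, y1, x2, y2) in pixels."""
    cx, cy = halo_center
    slot = layout.index(halo_color)    # which slot is lit, top to bottom
    slot_h = frame_h / len(layout)     # height of one lamp slot
    top = cy - (slot + 0.5) * slot_h   # frame top edge
    left = cx - frame_w / 2.0
    return (left, top, left + frame_w, top + frame_h)
```

A lit red lamp anchors the frame below the halo, a lit green lamp anchors it above; both cases should recover the same frame for the same physical group.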
S306, determining the center point coordinates of the halation according to the position information of the halation.
S307, determining the position information of the traffic indicator lights of the target group in the image to be processed according to the center point coordinates and the color information of the halation, the lamp frame size information of the traffic indicator lights of the target group and the relative position relation inside the traffic indicator lights of the target group.
In this embodiment, the method of determining the position information of the target group traffic indicator in the image to be processed in step S307 is the same as that in step S305; S307 can be implemented simply by replacing the updated color information of the halo in step S305 with the color information of the halo, which is not described herein again.
According to the technical scheme of this embodiment, the reference image of the image to be processed is taken as a reference, and the lamp frame size information of the target group of traffic indicator lamps associated with the halation is determined from the reference image in combination with the position information of the halation. Then, when the color information of the halation is identified as any one of the colors of the traffic indicator lamps, the position information of the target group of traffic indicator lamps in the image to be processed is determined based on the lamp frame size information of the target group and the position information and color information of the halation. Meanwhile, when the color information of the halation is identified as an unknown color, the same scene image of the image to be processed is introduced to determine the position information of the target group of traffic indicator lamps in the image to be processed; this fully considers the influence of factors such as thermal expansion and contraction and support rod deformation in the actual scene on the positions of the roadside sensing equipment and the traffic indicator lamps, further improves the accuracy of the determined position information, and lays a foundation for obtaining a more accurate model. In addition, this embodiment labels the image to be processed based on the position information of the target group of traffic indicator lamps in the image to be processed, and the labeled image is used as a sample for training the traffic indicator lamp detection model and the lamp color recognition model, so that the training samples are enriched and the accuracy of the models is further improved.
Fig. 4 is a schematic structural view of a data processing apparatus according to an embodiment of the present disclosure. The embodiment of the disclosure is suitable for the situation of processing image data, in particular for the situation of processing images comprising traffic lights and presenting halations by the traffic lights. The apparatus may be implemented in software and/or hardware, and the apparatus may implement the data processing method according to any embodiment of the disclosure. As shown in fig. 4, the data processing apparatus includes:
the lamp frame size information determining module 401 is configured to determine lamp frame size information of a target group traffic indicator lamp associated with a lamp halo in an image to be processed according to position information of the lamp halo presented by the traffic indicator lamp in the image to be processed and a reference image of the image to be processed; the reference image and the image to be processed are acquired at the same position and at the same angle by the same road side sensing equipment, and a traffic indicator light in the reference image is presented without a halation;
the position information determining module 402 is configured to determine position information of the traffic indicator lamp of the target group in the image to be processed according to the frame size information of the traffic indicator lamp of the target group, and color information and position information of the halation;
The processing module 403 is configured to label the image to be processed according to the position information of the traffic indicator of the target group in the image to be processed, and take the labeled image as a training sample.
According to the technical scheme of this embodiment, the reference image of the image to be processed is taken as a reference; the lamp frame size information of the target group of traffic indicator lamps associated with the halation is determined from the reference image in combination with the position information of the halation; the position information of the target group of traffic indicator lamps in the image to be processed is determined based on the lamp frame size information of the target group and the position information and color information of the halation; the image to be processed is labeled based on that position information; and the labeled image is used as a sample for training the traffic indicator lamp detection model and the lamp color recognition model. Compared with the prior art, the data processing apparatus of this embodiment processes the image data in which the traffic indicator lamp presents halation and takes the processed image as a training sample, so that the training samples are enriched and the accuracy of the models is improved.
Illustratively, the bezel size information determination module 401 includes:
The target group lamp determining unit is used for determining the same group of traffic indicator lamps of the target group traffic indicator lamps related to the halation from the at least two groups of traffic indicator lamps according to the position information of the halation and the lamp frame position information of the at least two groups of traffic indicator lamps in the reference image;
and the lamp frame size information determining unit is used for determining the lamp frame size information of the traffic indicator lamps of the target group according to the lamp frame position information of the traffic indicator lamps of the same group.
The target group lamp determination unit is specifically configured to:
determining the center point coordinates of the halation according to the position information of the halation;
according to the lamp frame position information of at least two groups of traffic indicator lamps in the reference image, determining the lamp frame center point coordinates of the at least two groups of traffic indicator lamps;
respectively calculating the distance between the center point coordinates of the halation and the center point coordinates of the lamp frames of at least two groups of traffic indicator lamps;
and determining the same group of traffic indicator lamps of the target group of traffic indicator lamps associated with the halation from at least two groups of traffic indicator lamps according to the distance.
Illustratively, the location information determination module 402 is specifically configured to:
if the color information of the halation is any one of the colors of the traffic indicator lights, determining the center point coordinates of the halation according to the position information of the halation;
And determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the center point coordinates and the color information of the halation, the lamp frame size information of the traffic indicator lamp of the target group and the relative position relation inside the traffic indicator lamp of the target group.
Illustratively, the location information determination module 402 includes:
the image acquisition unit is used for acquiring the same scene image of the image to be processed if the color information of the halation is unknown; the same-scene image and the image to be processed are acquired by the same-road-side sensing equipment under the same environment scene, and traffic indicator lamps in the same-scene image are presented without halation;
the color information updating unit is used for updating the color information of the halation according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image;
the position information determining unit is used for determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the central point coordinates of the halation, the updated color information of the halation, the lamp frame size information of the traffic indicator lamp of the target group and the relative position relation inside the traffic indicator lamp of the target group;
wherein, at least three traffic lights are included in a group of traffic lights.
The color information updating unit is specifically configured to:
determining the height difference between the halation and the single traffic indicator lamp in the same scene image according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image;
determining a target lamp associated with the halation from the single traffic light of the scene image according to the height difference;
and updating the color information of the halation according to the color information of the target lamp.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the electronic device 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 may also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in electronic device 500 are connected to I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the electronic device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as a data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. When a computer program is loaded into RAM 503 and executed by computing unit 501, one or more steps of the data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the data processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in the cloud computing service system and overcomes the defects of difficult management and weak service expansibility in traditional physical hosts and VPS services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (14)
1. A data processing method, comprising:
determining the lamp frame size information of a target group traffic indicator lamp related to the lamp halo in the image to be processed according to the position information of the lamp halo presented by the traffic indicator lamp in the image to be processed and the reference image of the image to be processed; wherein the reference image and the image to be processed are acquired at the same position and at the same angle by the same road side sensing equipment, and the traffic indicator lamps in the reference image are presented without halation;
determining the position information of the traffic indicator lights of the target group in the image to be processed according to the lamp frame size information of the traffic indicator lights of the target group, the color information and the position information of the halation;
and marking the image to be processed according to the position information of the traffic indicator lamp of the target group in the image to be processed, and taking the marked image as a training sample.
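Read together, the three steps of claim 1 can be sketched in code. The following Python sketch is purely illustrative: the `Halo` dataclass, the `label_image` function, the `(x, y, w, h)` rectangle convention and the nearest-center heuristic are assumptions made for the example, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class Halo:
    cx: float          # halo center x in the image to be processed
    cy: float          # halo center y
    color: str         # "red" / "yellow" / "green" or "unknown"

def label_image(halo, reference_frames):
    """Return one (bbox, color) annotation for the image to be processed.

    reference_frames: list of (x, y, w, h) lamp-frame boxes taken from the
    halation-free reference image captured by the same road side sensing
    equipment at the same position and angle.
    """
    # Step 1: pick the lamp frame whose center is nearest the halo center,
    # giving the target group's lamp frame size information.
    x, y, w, h = min(
        reference_frames,
        key=lambda f: (f[0] + f[2] / 2 - halo.cx) ** 2
                    + (f[1] + f[3] / 2 - halo.cy) ** 2,
    )
    # Step 2: derive the group's position from the frame size and the halo
    # position (here simply a box of the frame's size centered on the halo).
    bbox = (halo.cx - w / 2, halo.cy - h / 2, w, h)
    # Step 3: the (bbox, color) pair is the label attached to the image,
    # which then serves as a training sample.
    return bbox, halo.color
```

Here the training sample is the image plus the returned annotation; how the annotation is serialized is left open by the claim.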
2. The method of claim 1, wherein the determining the frame size information of the traffic lights of the target group associated with the halation in the image to be processed according to the position information of the halation presented by the traffic lights in the image to be processed and the reference image of the image to be processed comprises:
determining the same group of traffic indicator lamps of the target group of traffic indicator lamps related to the halation from at least two groups of traffic indicator lamps according to the position information of the halation and the lamp frame position information of the at least two groups of traffic indicator lamps in the reference image;
and determining the lamp frame size information of the traffic indicator lamps of the target group according to the lamp frame position information of the traffic indicator lamps of the same group.
3. The method of claim 2, wherein the determining the same group of traffic lights as the target group of traffic lights associated with the halation from the at least two groups of traffic lights based on the location information of the halation and the frame location information of the at least two groups of traffic lights in the reference image, comprises:
determining the center point coordinates of the halation according to the position information of the halation;
determining the coordinates of the center points of the lamp frames of at least two groups of traffic indication lamps according to the lamp frame position information of the at least two groups of traffic indication lamps in the reference image;
respectively calculating the distance between the center point coordinates of the halation and the lamp frame center point coordinates of the at least two groups of traffic indication lamps;
and determining the same group of traffic indicator lamps of the target group of traffic indicator lamps related to the halation from the at least two groups of traffic indicator lamps according to the distance.
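The distance-based matching of claim 3 can be sketched as a nearest-center search. The function name, the `(x, y, w, h)` rectangle convention and the return value (an index into the groups) are illustrative assumptions, not the claimed implementation:

```python
import math

def match_halo_to_group(halo_box, group_frames):
    """Pick the group of traffic indicator lamps associated with a halation.

    halo_box and each frame are (x, y, w, h) rectangles; the halo box comes
    from the image to be processed, the lamp frames from the halation-free
    reference image.
    """
    # Center point coordinates of the halation.
    hx, hy = halo_box[0] + halo_box[2] / 2, halo_box[1] + halo_box[3] / 2
    # Distance from the halo center to each group's lamp frame center.
    distances = [
        math.hypot(fx + fw / 2 - hx, fy + fh / 2 - hy)
        for fx, fy, fw, fh in group_frames
    ]
    # The nearest group is taken as the target group.
    return distances.index(min(distances))
```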
4. The method of claim 1, wherein the determining the location information of the target group traffic indicator in the image to be processed according to the frame size information of the target group traffic indicator, and the color information and the location information of the halo, comprises:
if the color information of the halation is any one of the colors of the traffic indicator lights, determining the center point coordinates of the halation according to the position information of the halation;
and determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the center point coordinates and the color information of the halation, the lamp frame size information of the traffic indicator lamp of the target group and the relative position relation inside the traffic indicator lamp of the target group.
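Claim 4's use of the halo's center point, its color, the lamp frame size and the relative position relation inside the group can be illustrated as follows. The vertical three-lamp housing with equal-height cells, the `LIGHT_ORDER` constant and the function name are assumptions of this sketch:

```python
LIGHT_ORDER = ["red", "yellow", "green"]  # assumed top-to-bottom layout

def group_positions(halo_cx, halo_cy, frame_w, frame_h, halo_color):
    """Locate every lamp of the target group from one lit lamp's halo.

    Assumes the group's internal relative position relation divides the
    lamp frame height into three equal cells, one per lamp.
    """
    cell_h = frame_h / 3
    lit = LIGHT_ORDER.index(halo_color)
    # The halo center sits in the lit lamp's cell; shift up to the frame top.
    top = halo_cy - (lit + 0.5) * cell_h
    left = halo_cx - frame_w / 2
    return {
        color: (left, top + i * cell_h, frame_w, cell_h)
        for i, color in enumerate(LIGHT_ORDER)
    }
```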
5. The method of claim 1, wherein the determining the location information of the target group traffic indicator in the image to be processed according to the frame size information of the target group traffic indicator, and the color information and the location information of the halo, comprises:
if the color information of the halation is unknown, acquiring the same scene image of the image to be processed; the same-scene image and the image to be processed are acquired by the same road side sensing equipment under the same environment scene, and traffic indicator lamps in the same-scene image are presented without halation;
updating the color information of the halation according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image;
determining the position information of the target group traffic indicator in the image to be processed according to the central point coordinates of the halation, the updated color information of the halation, the lamp frame size information of the target group traffic indicator and the relative position relation inside the target group traffic indicator;
wherein a group of traffic indicator lamps includes at least three traffic indicator lamps.
6. The method of claim 5, wherein the updating the color information of the halo based on the location information of the halo and the location information of the individual traffic lights in the co-scene image comprises:
determining the height difference between the halation and the single traffic indicator lamp in the same scene image according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image;
determining a target lamp associated with the halation from the single traffic indicator lamp of the same scene image according to the height difference;
and updating the color information of the halation according to the color information of the target lamp.
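The height-difference matching of claim 6 (for the unknown-color case of claim 5) can be sketched as follows; the tuple format of `scene_lights` and the tolerance `max_dy` are illustrative assumptions, not part of the claim:

```python
def update_halo_color(halo_cy, scene_lights, max_dy=5.0):
    """Recover an unknown halo color from a same-scene image.

    scene_lights: list of (center_y, color) pairs for single lit traffic
    indicator lamps in an image captured by the same road side sensing
    equipment in the same environment scene, without halation.
    """
    # The lamp with the smallest height difference is the target lamp.
    dy, color = min((abs(cy - halo_cy), c) for cy, c in scene_lights)
    # Only accept the match within an assumed tolerance.
    return color if dy <= max_dy else "unknown"
```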
7. A data processing apparatus comprising:
the lamp frame size information determining module is used for determining the lamp frame size information of the target group traffic indicator lamp related to the lamp halo in the image to be processed according to the position information of the lamp halo presented by the traffic indicator lamp in the image to be processed and the reference image of the image to be processed; wherein the reference image and the image to be processed are acquired at the same position and at the same angle by the same road side sensing equipment, and the traffic indicator lamps in the reference image are presented without halation;
the position information determining module is used for determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the lamp frame size information of the traffic indicator lamp of the target group, the color information and the position information of the halation;
the processing module is used for marking the image to be processed according to the position information of the traffic indicator lamp of the target group in the image to be processed, and taking the marked image as a training sample.
8. The apparatus of claim 7, wherein the bezel size information determination module comprises:
the target group lamp determining unit is used for determining the same group of traffic indicator lamps of the target group traffic indicator lamps related to the halation from the at least two groups of traffic indicator lamps according to the position information of the halation and the lamp frame position information of the at least two groups of traffic indicator lamps in the reference image;
and the lamp frame size information determining unit is used for determining the lamp frame size information of the traffic indicator lamps of the target group according to the lamp frame position information of the traffic indicator lamps of the same group.
9. The apparatus of claim 8, wherein the target group lamp determination unit is specifically configured to:
determining the center point coordinates of the halation according to the position information of the halation;
determining the coordinates of the center points of the lamp frames of at least two groups of traffic indication lamps according to the lamp frame position information of the at least two groups of traffic indication lamps in the reference image;
respectively calculating the distance between the center point coordinates of the halation and the lamp frame center point coordinates of the at least two groups of traffic indication lamps;
and determining the same group of traffic indicator lamps of the target group of traffic indicator lamps related to the halation from the at least two groups of traffic indicator lamps according to the distance.
10. The apparatus of claim 7, wherein the location information determination module is specifically configured to:
if the color information of the halation is any one of the colors of the traffic indicator lights, determining the center point coordinates of the halation according to the position information of the halation;
and determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the center point coordinates and the color information of the halation, the lamp frame size information of the traffic indicator lamp of the target group and the relative position relation inside the traffic indicator lamp of the target group.
11. The apparatus of claim 7, wherein the location information determination module comprises:
the image acquisition unit is used for acquiring the same scene image of the image to be processed if the color information of the halation is unknown; the same-scene image and the image to be processed are acquired by the same road side sensing equipment under the same environment scene, and traffic indicator lamps in the same-scene image are presented without halation;
the color information updating unit is used for updating the color information of the halation according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image;
the position information determining unit is used for determining the position information of the traffic indicator lamp of the target group in the image to be processed according to the central point coordinates of the halation, the updated color information of the halation, the lamp frame size information of the traffic indicator lamp of the target group and the relative position relation inside the traffic indicator lamp of the target group;
wherein a group of traffic indicator lamps includes at least three traffic indicator lamps.
12. The apparatus of claim 11, wherein the color information updating unit is specifically configured to:
determining the height difference between the halation and the single traffic indicator lamp in the same scene image according to the position information of the halation and the position information of the single traffic indicator lamp in the same scene image;
determining a target lamp associated with the halation from the single traffic indicator lamp of the same scene image according to the height difference;
and updating the color information of the halation according to the color information of the target lamp.
13. An electronic device, comprising:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data processing method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the data processing method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110432270.XA CN113129375B (en) | 2021-04-21 | 2021-04-21 | Data processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113129375A CN113129375A (en) | 2021-07-16 |
CN113129375B true CN113129375B (en) | 2023-12-01 |
Family
ID=76778837
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113129375B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113807310A (en) * | 2021-09-29 | 2021-12-17 | 中国第一汽车股份有限公司 | Signal lamp target detection method and device, electronic equipment and storage medium |
CN113947762A (en) * | 2021-09-30 | 2022-01-18 | 阿波罗智联(北京)科技有限公司 | Traffic light color identification method, device and equipment and road side computing equipment |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2119614A2 (en) * | 2008-05-15 | 2009-11-18 | Siemens Schweiz AG | Signalling unit with LED redundancy |
CN102568242A (en) * | 2012-01-17 | 2012-07-11 | 杭州海康威视系统技术有限公司 | Signal lamp state detection method and system based on video processing |
CN104574960A (en) * | 2014-12-25 | 2015-04-29 | 宁波中国科学院信息技术应用研究院 | Traffic light recognition method |
JP2017004295A (en) * | 2015-06-11 | 2017-01-05 | 株式会社ミツバ | Traffic light recognition apparatus and traffic light recognition method |
CN107273838A (en) * | 2017-06-08 | 2017-10-20 | 浙江大华技术股份有限公司 | Traffic lights capture the processing method and processing device of picture |
KR20180031421A (en) * | 2016-09-20 | 2018-03-28 | 강정열 | A traffic light |
CN108876858A (en) * | 2018-07-06 | 2018-11-23 | 北京字节跳动网络技术有限公司 | Method and apparatus for handling image |
CN110084111A (en) * | 2019-03-19 | 2019-08-02 | 江苏大学 | A kind of quick vehicle detection at night method applied to adaptive high beam |
JP2019139801A (en) * | 2019-04-25 | 2019-08-22 | 株式会社ミツバ | Traffic light machine recognition device, signal recognition system, and traffic light machine recognition method |
CN110992725A (en) * | 2019-10-24 | 2020-04-10 | 合肥讯图信息科技有限公司 | Method, system and storage medium for detecting traffic signal lamp fault |
CN111127358A (en) * | 2019-12-19 | 2020-05-08 | 苏州科达科技股份有限公司 | Image processing method, device and storage medium |
WO2020133983A1 (en) * | 2018-12-29 | 2020-07-02 | 中国银联股份有限公司 | Signal light identification method, device, and electronic apparatus |
CN111598006A (en) * | 2020-05-18 | 2020-08-28 | 北京百度网讯科技有限公司 | Method and device for labeling objects |
CN111931726A (en) * | 2020-09-23 | 2020-11-13 | 北京百度网讯科技有限公司 | Traffic light detection method and device, computer storage medium and road side equipment |
CN112307970A (en) * | 2020-10-30 | 2021-02-02 | 北京百度网讯科技有限公司 | Training data acquisition method and device, electronic equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180211120A1 (en) * | 2017-01-25 | 2018-07-26 | Ford Global Technologies, Llc | Training An Automatic Traffic Light Detection Model Using Simulated Images |
CN108681994B (en) * | 2018-05-11 | 2023-01-10 | 京东方科技集团股份有限公司 | Image processing method and device, electronic equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
Image-recognition-based assisted driving method for traffic-light intersections; Wei Hailin; Journal of Zhejiang University (Engineering Science); Vol. 51, No. 6; pp. 1090-1096 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||