CN116503821A - Road identification recognition method and system based on point cloud data and image recognition - Google Patents
- Publication number
- CN116503821A (application number CN202310724151.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- road
- depth
- identification
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/273—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/09—Recognition of logos
Abstract
The invention discloses a road identifier recognition method and system based on point cloud data and image recognition. The method comprises: acquiring point cloud data and a target image; extracting edge data and determining depth data; forming a three-dimensional pixel matrix; performing cluster analysis on the point cloud data to obtain data of suspected road identifiers as suspected data; obtaining suspected elements; and judging, from the R-channel, G-channel and B-channel data of each suspected element, whether it is a road identifier and, if so, which category it belongs to. By reverse-mapping the suspected data recognised in the point cloud back onto the target image, the method and system effectively improve the recognition accuracy of road identifiers and filter out noise from other suspended objects; at the same time, the category of each road identifier can be identified accurately, providing a reliable basis for subsequent traffic big-data adjustment.
Description
Technical Field
The invention relates to automatic road identifier recognition, and in particular to a road identifier recognition method and system based on point cloud data and image recognition.
Background
Automatic recognition of road identifiers has very wide application in traffic control, route planning and similar tasks, where it improves traffic efficiency and reduces congestion. Unlike the recognition used in automatic driving, road identifier recognition for traffic control and urban road data applications must achieve higher precision, but does not require strong real-time performance.
At present, road identifier recognition for urban road data is widely performed with point cloud technology. A point cloud is a discrete set of points distributed in an N-dimensional (mainly three-dimensional) space, usually obtained by discretely sampling the surface of an object. The rapid development of three-dimensional scanning has made point cloud data ever easier to acquire, and point-cloud-driven computer graphics shows increasingly broad application prospects in fields such as intelligent robotics, automatic driving and digital cities. Point cloud processing techniques include filtering, segmentation, registration, feature extraction and surface reconstruction.
However, for road identifier recognition, objects suspended around the road, such as billboards, temporary road signs and traffic lights, generate noise in the point cloud data. In the prior art, such data are generally screened manually, which entails a heavy workload.
Disclosure of Invention
To overcome at least the above-mentioned shortcomings of the prior art, an object of the present application is to provide a road identifier recognition method and system based on point cloud data and image recognition.
In a first aspect, an embodiment of the present application provides a road identifier recognition method based on point cloud data and image recognition, comprising:
acquiring point cloud data in a preset area and a target image corresponding to the point cloud data; the target image comprises a road and a road mark configured on the road, and is shot along the direction of the road;
performing image recognition on the target image, extracting edge data of a road, and determining depth data according to sharpness information corresponding to the edge data;
assigning values to the pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix; the elements in the three-dimensional pixel matrix are six-dimensional sequences; the six-dimensional sequence comprises an R channel, a G channel, a B channel, an X coordinate, a Y coordinate and a Z coordinate; wherein the Z coordinate corresponds to the depth data;
performing cluster analysis on the point cloud data to obtain data of a suspected road identifier as suspected data;
matching the three-dimensional pixel matrix into the point cloud data, and acquiring elements in the three-dimensional pixel matrix corresponding to the suspected data in the point cloud data as suspected elements;
judging whether the suspected element is a road identifier and the corresponding road identifier type according to the R channel, G channel and B channel data of the suspected element.
When the embodiment of the present application is implemented, point cloud data and the corresponding target images must be acquired for one area. It should be understood that the same preset area may have more than one target image; the images are selected according to road conditions in the preset area, for example according to junctions and road lengths. The point cloud data and the target image can be acquired simultaneously, i.e. by the same survey vehicle. To ensure recognition accuracy, the target image must be shot along the road direction, and the shooting direction should be recorded in the metadata of the target image for subsequent use.
In this embodiment, the edge data of the road may be extracted with an existing edge detection technique, and the depth corresponding to the edge data is determined from the sharpness information of those edges. Since the target image is generally a 2D capture, the road length along the direction perpendicular to the image plane must be determined from the camera parameters and the sharpness; the result is the depth data. The depth data turn the two-dimensional target image into a three-dimensional representation with depth information, characterised concretely as a three-dimensional pixel (voxel) matrix. The coordinates corresponding to edge data in the voxel matrix should carry depth data; other entries can be assigned as needed.
Cluster analysis of the point cloud based on the prior art can extract data that may be road identifiers. It should be understood that the road identifiers discussed in the embodiments of the present application are suspended (overhead) signs, not markings painted on the road surface. Cluster analysis of point cloud data is a mature prior-art technique and is not repeated here. To judge more accurately whether a recognised candidate is indeed a road identifier, and which identifier it is, the embodiment reverse-maps the suspected data recognised in the point cloud back onto the target image; this effectively improves recognition accuracy and filters out noise from other suspended objects. At the same time, the category of each road identifier can be identified accurately, providing a reliable basis for subsequent traffic big-data adjustment.
In one possible implementation, determining depth data from sharpness information corresponding to the edge data includes:
calculating depth data of the target image according to related parameters of shooting equipment for shooting the target image;
searching points with sharpness greater than a preset value at two sides from the edge data according to the sharpness information to serve as sharpness boundary points;
calculating the depth coordinate of the sharpness demarcation point according to the depth data to serve as a reference depth coordinate, and taking the road starting point in the target image as a zero depth coordinate;
and performing depth data interpolation according to the depth data, the sharpness information, the zero depth coordinate and the reference depth coordinate to form depth data corresponding to the edge data.
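The interpolation step above can be sketched as a simple linear mapping between the zero depth coordinate (the road start, depth 0) and the reference depth coordinate at the sharpness demarcation point. The function below is an illustrative stand-in, not the patent's exact procedure; the row indices and the linear model are assumptions.

```python
def interpolate_depth(edge_rows, boundary_row, boundary_depth):
    """Linearly interpolate a depth value for each edge-pixel row.

    edge_rows:      image rows containing road-edge pixels, ordered from
                    the road start (zero depth coordinate) outward
    boundary_row:   row of the sharpness demarcation point
    boundary_depth: reference depth coordinate at that row
    Rows past the boundary are extrapolated with the same slope.
    """
    start = edge_rows[0]  # road start -> depth 0
    slope = boundary_depth / (boundary_row - start)
    return {row: (row - start) * slope for row in edge_rows}

depths = interpolate_depth([100, 150, 200, 250], boundary_row=200, boundary_depth=40.0)
```

With these illustrative numbers, row 150 (halfway to the boundary) receives depth 20.0 and row 250 is extrapolated to 60.0.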
In one possible implementation manner, performing depth data interpolation according to the depth data, the sharpness information, the zero depth coordinate and the reference depth coordinate to form depth data corresponding to the edge data includes:
taking a region between the zero depth coordinate and the reference depth coordinate as a first interpolation region, and taking a non-first interpolation region in the edge data as a second interpolation region;
calculating a depth conversion function in the target image based on the depth data; the depth conversion function is the corresponding relation between the length of the line in the target image and the length of the actual line;
performing depth data interpolation on the first interpolation region according to the depth conversion function to form a first depth coordinate, and performing depth data interpolation on the second interpolation region according to the depth conversion function to form a second depth coordinate;
obtaining sharpness information corresponding to the second depth coordinate, and correcting the second depth coordinate by using a sharpness change function and the depth conversion function in parallel to form a third depth coordinate; the sharpness changing function is the corresponding relation between the depth data and the sharpness in the second interpolation area;
and merging the first depth coordinate and the third depth coordinate to form depth data corresponding to the edge data.
In one possible implementation, assigning values to the pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix includes:
coordinates of edge data corresponding to the zero depth coordinates in the point cloud data are calculated according to the shooting position of the target image in the point cloud data to serve as reference coordinates;
calculating coordinates of each edge data in the point cloud data according to the reference coordinates and the shooting direction of the target image to serve as edge data coordinates;
and assigning the edge data coordinates and the depth data to the edge data in the pixel matrix to form a three-dimensional pixel matrix.
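As a sketch of the assignment step, each pixel can carry a six-dimensional sequence (R, G, B, X, Y, Z), with Z taken from the depth data for edge pixels. The dictionary representation and the default Z of 0.0 for non-edge pixels are illustrative assumptions.

```python
def build_voxel_matrix(rgb_image, edge_coords, depth_map):
    """Assign a six-dimensional (R, G, B, X, Y, Z) sequence to every pixel.

    rgb_image:   {(x, y): (r, g, b)} pixels of the target image
    edge_coords: set of (x, y) positions recognised as road edges
    depth_map:   {(x, y): z} depth data computed for the edge pixels
    """
    matrix = {}
    for (x, y), (r, g, b) in rgb_image.items():
        # Edge pixels carry their interpolated depth; others default to 0.0
        z = depth_map.get((x, y), 0.0) if (x, y) in edge_coords else 0.0
        matrix[(x, y)] = (r, g, b, x, y, z)
    return matrix

voxels = build_voxel_matrix(
    rgb_image={(0, 0): (255, 0, 0), (1, 0): (0, 255, 0)},
    edge_coords={(1, 0)},
    depth_map={(1, 0): 12.5},
)
```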
In one possible implementation manner, judging whether the suspected element is a road identifier, and its corresponding road identifier category, according to the R-channel, G-channel and B-channel data of the suspected element includes:
inputting the R channel, G channel and B channel data of the suspected elements into an identification recognition model, and taking the output result of the identification recognition model as a recognition result.
In one possible implementation manner, the establishing of the identification recognition model includes:
collecting a sample of a road identifier, and dividing the sample of the road identifier into a warning identifier, a forbidden identifier and an indication identifier; the sample of road identifications comprises images of road identifications acquired from a plurality of angles;
respectively establishing identification standards of warning marks, forbidden marks and indication marks in an RGB color space;
establishing a classification model according to the identification standard;
training a convolutional neural network based on the classified samples of the road identifications to form a warning identification model, a forbidden identification model and an indication identification model;
and taking the classification model, the warning identification recognition model, the forbidden identification recognition model and the indication identification recognition model as the identification recognition models.
In one possible implementation manner, inputting the R-channel, G-channel and B-channel data of the suspected element into an identification recognition model, and taking the output result of the identification recognition model as the recognition result includes:
inputting the R channel, G channel and B channel data of the suspected elements into an identification recognition model;
classifying the suspected elements through the classification model, wherein the classification result is warning identification, forbidden identification, indication identification or no identification;
identifying the suspected elements classified to the warning mark through the warning mark identification model;
identifying the suspected elements classified to the forbidden mark through the forbidden mark identification model;
identifying the suspected elements classified to the indication mark through the indication mark identification model;
and taking the output results of the warning identification model, the forbidden identification model and the indication identification model and the non-identification result output by the classification model as the output results of the identification model.
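The routing of a suspected element through the classification model and then the matching per-category recognition model can be sketched as below. The model callables are stubs, and the category labels are assumed names for the four classification outcomes.

```python
def recognise_element(element_rgb, classify, recognisers):
    """Classify a suspected element, then dispatch it to the recognition
    model for its category; 'none' is passed through unchanged."""
    category = classify(element_rgb)  # 'warning' | 'prohibition' | 'indication' | 'none'
    if category == 'none':
        return ('none', None)
    return (category, recognisers[category](element_rgb))

# Stub models standing in for the three trained CNN recognisers
recognisers = {
    'warning': lambda rgb: 'sharp-curve',
    'prohibition': lambda rgb: 'no-entry',
    'indication': lambda rgb: 'keep-right',
}
result = recognise_element((210, 180, 40), lambda rgb: 'warning', recognisers)
```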
In one possible implementation, establishing the identification criteria of the warning flag, the ban flag, and the indication flag in the RGB color space includes:
calculating an average value of R channel values in the road sign sample as a first average value, an average value of G channel values as a second average value and an average value of B channel values as a third average value;
the identification standard of the forbidden mark is that the first average value is in a first preset interval in one image, the second average value is smaller than a first preset value, and the third average value falls into a second preset interval or a third preset interval;
the recognition standard of the warning mark is that the proportion between the second average value and the third average value in one image is smaller than a second preset value, and the first average value is in the first preset interval;
the identification standard of the indication mark is that the third average value is larger than a third preset value in one image, and the ratio between the first average value and the second average value is smaller than the second preset value.
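The three identification criteria can be expressed directly as channel-mean tests. The patent leaves the preset intervals and thresholds unspecified, so every numeric default below is an illustrative placeholder; only the structure of the tests follows the text.

```python
def channel_means(pixels):
    """Mean R, G and B values over a sample's pixels."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n,
            sum(p[1] for p in pixels) / n,
            sum(p[2] for p in pixels) / n)

def classify_sign(pixels, r_interval=(150, 255), g_max=90,
                  b_intervals=((0, 60), (200, 255)),
                  ratio_max=0.5, b_min=180):
    """Apply the stated RGB identification criteria (placeholder thresholds)."""
    r, g, b = channel_means(pixels)
    r_ok = r_interval[0] <= r <= r_interval[1]
    # Prohibition: R mean in first interval, G mean below first preset
    # value, B mean in second or third preset interval
    if r_ok and g < g_max and any(lo <= b <= hi for lo, hi in b_intervals):
        return 'prohibition'
    # Warning: G/B ratio below second preset value, R mean in first interval
    if r_ok and b > 0 and g / b < ratio_max:
        return 'warning'
    # Indication: B mean above third preset value, R/G ratio below second preset value
    if b > b_min and g > 0 and r / g < ratio_max:
        return 'indication'
    return 'none'
```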
In a second aspect, an embodiment of the present application provides a road identifier recognition system based on point cloud data and image recognition, including:
an acquisition unit configured to acquire point cloud data in a preset area and a target image corresponding to the point cloud data; the target image comprises a road and a road mark configured on the road, and is shot along the direction of the road;
an image recognition unit configured to perform image recognition on the target image, extract edge data of a road, and determine depth data according to sharpness information corresponding to the edge data;
an assignment unit configured to assign values to a pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix; the elements in the three-dimensional pixel matrix are six-dimensional sequences; the six-dimensional sequence comprises an R channel, a G channel, a B channel, an X coordinate, a Y coordinate and a Z coordinate; wherein the Z coordinate corresponds to the depth data;
the clustering unit is configured to perform cluster analysis on the point cloud data to obtain data of suspected road identifications as suspected data;
the matching unit is configured to match the three-dimensional pixel matrix into the point cloud data and acquire elements in the three-dimensional pixel matrix corresponding to suspected data in the point cloud data as suspected elements;
and the judging unit is configured to judge whether the suspected element is a road identifier and a corresponding road identifier type according to the R channel, G channel and B channel data of the suspected element.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the road identification recognition method and system based on the point cloud data and the image recognition, the suspected data recognized in the point cloud data are reversely mapped back to the target image, so that the recognition accuracy of the road identification can be effectively improved, and other suspended type identification noise can be filtered; meanwhile, the type of the road mark can be accurately identified, and an accurate basis is provided for the adjustment of the subsequent traffic big data.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention. In the drawings:
FIG. 1 is a schematic diagram of steps of a method according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below with reference to the accompanying drawings. It should be understood that the drawings are provided for illustration and description only and are not intended to limit the protection scope of the present application; the schematic drawings are not drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments; the operations of a flowchart may be implemented out of order, and steps without logical dependency may be performed in reverse order or concurrently. Moreover, those skilled in the art may add one or more other operations to, or remove one or more operations from, a flowchart.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, which shows a flow chart of the road identifier recognition method based on point cloud data and image recognition according to an embodiment of the present invention, the method comprises steps S1 to S6 described below.
S1: acquiring point cloud data in a preset area and a target image corresponding to the point cloud data; the target image comprises a road and a road mark configured on the road, and is shot along the direction of the road;
S2: performing image recognition on the target image, extracting edge data of a road, and determining depth data according to sharpness information corresponding to the edge data;
S3: assigning values to the pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix; the elements in the three-dimensional pixel matrix are six-dimensional sequences; the six-dimensional sequence comprises an R channel, a G channel, a B channel, an X coordinate, a Y coordinate and a Z coordinate; wherein the Z coordinate corresponds to the depth data;
S4: performing cluster analysis on the point cloud data to obtain data of a suspected road identifier as suspected data;
S5: matching the three-dimensional pixel matrix into the point cloud data, and acquiring elements in the three-dimensional pixel matrix corresponding to the suspected data in the point cloud data as suspected elements;
S6: judging whether the suspected element is a road identifier and the corresponding road identifier type according to the R channel, G channel and B channel data of the suspected element.
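Steps S1 to S6 can be summarised as the following skeleton. Each stage is injected as a callable so that the sketch stays independent of any particular edge detector, clustering routine or classifier; all stage and parameter names are illustrative, and the clustering stage is assumed here to return image coordinates of suspected points.

```python
def recognise_road_identifiers(point_cloud, target_image, *,
                               edge_detect, depth_from_sharpness,
                               cluster, classify):
    """Skeleton of steps S1-S6 with pluggable stages.

    target_image: {(x, y): (r, g, b)} pixels acquired alongside the point cloud (S1)
    """
    edges = edge_detect(target_image)                 # S2: road edge data
    depth = depth_from_sharpness(edges)               # S2: depth from sharpness
    voxels = {(x, y): (r, g, b, x, y, depth.get((x, y), 0.0))
              for (x, y), (r, g, b) in target_image.items()}  # S3: 3-D pixel matrix
    suspects = cluster(point_cloud)                   # S4: suspected identifier data
    elements = [voxels[p] for p in suspects if p in voxels]   # S5: map back to image
    return [classify(e) for e in elements]            # S6: judge category

# Minimal stubs exercising the skeleton
labels = recognise_road_identifiers(
    point_cloud=[(1, 1, 5.0)],
    target_image={(0, 0): (255, 0, 0), (1, 1): (0, 0, 255)},
    edge_detect=lambda img: [(1, 1)],
    depth_from_sharpness=lambda e: {(1, 1): 5.0},
    cluster=lambda pc: [(1, 1)],
    classify=lambda e: 'indication' if e[2] > 128 else 'none',
)
```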
When the embodiment of the present application is implemented, point cloud data and the corresponding target images must be acquired for one area. It should be understood that the same preset area may have more than one target image; the images are selected according to road conditions in the preset area, for example according to junctions and road lengths. The point cloud data and the target image can be acquired simultaneously, i.e. by the same survey vehicle. To ensure recognition accuracy, the target image must be shot along the road direction, and the shooting direction should be recorded in the metadata of the target image for subsequent use.
In this embodiment, the edge data of the road may be extracted with an existing edge detection technique, and the depth corresponding to the edge data is determined from the sharpness information of those edges. Since the target image is generally a 2D capture, the road length along the direction perpendicular to the image plane must be determined from the camera parameters and the sharpness; the result is the depth data. The depth data turn the two-dimensional target image into a three-dimensional representation with depth information, characterised concretely as a three-dimensional pixel (voxel) matrix. The coordinates corresponding to edge data in the voxel matrix should carry depth data; other entries can be assigned as needed.
Cluster analysis of the point cloud based on the prior art can extract data that may be road identifiers. It should be understood that the road identifiers discussed in the embodiments of the present application are suspended (overhead) signs, not markings painted on the road surface. Cluster analysis of point cloud data is a mature prior-art technique and is not repeated here. To judge more accurately whether a recognised candidate is indeed a road identifier, and which identifier it is, the embodiment reverse-maps the suspected data recognised in the point cloud back onto the target image; this effectively improves recognition accuracy and filters out noise from other suspended objects. At the same time, the category of each road identifier can be identified accurately, providing a reliable basis for subsequent traffic big-data adjustment.
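The patent treats point-cloud clustering as mature prior art; purely as a minimal illustration, a naive single-linkage clustering over 3-D points can look like this. The eps radius and the list-based merge are assumptions, and a real pipeline would use an established algorithm such as Euclidean cluster extraction or DBSCAN.

```python
import math

def cluster_points(points, eps=1.0):
    """Group 3-D points so that any two points within eps of each other
    (directly or through intermediate points) share a cluster."""
    clusters = []
    for p in points:
        # Find every existing cluster this point touches, then merge them
        joined = [c for c in clusters if any(math.dist(p, q) <= eps for q in c)]
        merged = [p] + [q for c in joined for q in c]
        clusters = [c for c in clusters if c not in joined] + [merged]
    return clusters

groups = cluster_points([(0, 0, 0), (0.5, 0, 0), (10, 0, 0)], eps=1.0)
```

The two nearby points merge into one cluster while the distant point stays alone, mimicking how a suspended sign's returns separate from the road surface.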
In one possible implementation, determining depth data from sharpness information corresponding to the edge data includes:
calculating depth data of the target image according to related parameters of shooting equipment for shooting the target image;
searching points with sharpness greater than a preset value at two sides from the edge data according to the sharpness information to serve as sharpness boundary points;
calculating the depth coordinate of the sharpness demarcation point according to the depth data to serve as a reference depth coordinate, and taking the road starting point in the target image as a zero depth coordinate;
and performing depth data interpolation according to the depth data, the sharpness information, the zero depth coordinate and the reference depth coordinate to form depth data corresponding to the edge data.
When this embodiment is implemented, the depth-of-field data, i.e., the range that remains in focus in the target image, can be calculated from the aperture, focal length, and other parameters of the shooting equipment. The degree of sharpness is determined from the sharpness information: the demarcation point corresponding to the depth-of-field limit is located via the sharpness, and since the depth of field can be calculated, the depth at this demarcation point can be obtained and used as the reference depth coordinate. The road starting point in the target image is generally the shooting position, which is known; from these quantities, the depth data corresponding to the edge data is generated by interpolation.
In one possible implementation manner, performing depth data interpolation according to the depth data, the sharpness information, the zero depth coordinate and the reference depth coordinate to form depth data corresponding to the edge data includes:
taking a region between the zero depth coordinate and the reference depth coordinate as a first interpolation region, and taking a non-first interpolation region in the edge data as a second interpolation region;
calculating a depth conversion function in the target image based on the depth data; the depth conversion function is the corresponding relation between the length of the line in the target image and the length of the actual line;
performing depth data interpolation on the first interpolation region according to the depth conversion function to form a first depth coordinate, and performing depth data interpolation on the second interpolation region according to the depth conversion function to form a second depth coordinate;
obtaining sharpness information corresponding to the second depth coordinate, and correcting the second depth coordinate by using a sharpness change function and the depth conversion function in parallel to form a third depth coordinate; the sharpness changing function is the corresponding relation between the depth data and the sharpness in the second interpolation area;
and merging the first depth coordinate and the third depth coordinate to form depth data corresponding to the edge data.
When this embodiment is implemented, the depth conversion function of the image can be computed from the depth-of-field data; this computation is prior art, commonly used in image-based dimension measurement of photographed objects. The edge data is divided into a first and a second interpolation region. The first region is the sharper section and can be interpolated directly through the depth conversion function; the second region has lower sharpness and generally lies further from the shooting location, so the sharpness change function and the depth conversion function are combined there to obtain more accurate depth data.
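A minimal sketch of this two-region interpolation step follows. The patent leaves both functions abstract, so `to_depth` (the depth conversion function, image row to depth) and `from_sharpness` (the inverse of the sharpness change relation) are passed in as assumed callables, and the equal-weight blend in the second region is an illustrative choice, not the patent's specified correction.

```python
def interpolate_depth(edge_points, ref_row, to_depth, from_sharpness, blend=0.5):
    """Hedged sketch: depth interpolation over two edge regions.

    edge_points:   list of (image_row, sharpness) pairs; row 0 is the road
                   start (zero depth coordinate).
    ref_row:       image row of the sharpness demarcation point
                   (the reference depth coordinate).
    to_depth:      assumed depth conversion function, row -> depth.
    from_sharpness: assumed sharpness change model, sharpness -> depth.
    Rows up to ref_row (first interpolation region) use the conversion
    function directly; rows beyond it (second region) blend the conversion
    estimate with the sharpness-based estimate to form the corrected depth.
    """
    depths = []
    for row, sharp in edge_points:
        d_conv = to_depth(row)                    # conversion-function estimate
        if row <= ref_row:
            depths.append(d_conv)                 # first depth coordinates
        else:
            d_sharp = from_sharpness(sharp)       # sharpness-based estimate
            depths.append(blend * d_conv + (1 - blend) * d_sharp)
    return depths
```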
In one possible implementation, assigning values to the pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix includes:
calculating, according to the shooting position of the target image in the point cloud data, the coordinates in the point cloud data of the edge data corresponding to the zero depth coordinate, to serve as reference coordinates;
calculating coordinates of each edge data in the point cloud data according to the reference coordinates and the shooting direction of the target image to serve as edge data coordinates;
and assigning the edge data coordinates and the depth data to the edge data in the pixel matrix to form a three-dimensional pixel matrix.
When this embodiment is implemented, the target image must be registered to the coordinate system of the point cloud data; the point cloud generally already has a complete coordinate system, which is therefore not described further here. Specifically, the shooting position of the target image serves as the reference position, the coordinates of the edge data in the point cloud are computed from the shooting direction and the previously acquired depth data, and the three-dimensional pixel matrix is then completed.
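The registration step above can be sketched as follows: place each edge point into the point cloud's frame by advancing from the shooting position along the shooting direction by the point's depth, plus a perpendicular lateral offset. The planar heading convention and the parameter names are assumptions; the patent only says the shooting position and direction are known.

```python
import math

def edge_to_cloud_coords(shoot_pos, heading_deg, edge_points):
    """Hedged sketch: map edge data into the point-cloud coordinate system.

    shoot_pos:   (x, y) shooting position in the point cloud's XY plane;
                 this is the reference coordinate (zero depth).
    heading_deg: shooting direction as an angle in that plane (assumed
                 convention: 0 deg = +X axis, counter-clockwise).
    edge_points: list of (lateral_offset, depth) pairs, the lateral offset
                 measured perpendicular to the shooting direction.
    Returns (x, y, depth) triples; depth is kept as the Z value of the
    three-dimensional pixel matrix.
    """
    rad = math.radians(heading_deg)
    fwd = (math.cos(rad), math.sin(rad))      # along-road unit vector
    left = (-math.sin(rad), math.cos(rad))    # perpendicular unit vector
    out = []
    for lat, depth in edge_points:
        x = shoot_pos[0] + depth * fwd[0] + lat * left[0]
        y = shoot_pos[1] + depth * fwd[1] + lat * left[1]
        out.append((x, y, depth))
    return out
```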
In one possible implementation manner, determining whether the suspected element is a road identifier according to R-channel, G-channel and B-channel data of the suspected element, and the corresponding road identifier category includes:
inputting the R channel, G channel and B channel data of the suspected elements into an identification recognition model, and taking the output result of the identification recognition model as a recognition result.
In one possible implementation manner, the establishing of the identification recognition model includes:
collecting a sample of a road identifier, and dividing the sample of the road identifier into a warning identifier, a forbidden identifier and an indication identifier; the sample of road identifications comprises images of road identifications acquired from a plurality of angles;
respectively establishing identification standards of warning marks, forbidden marks and indication marks in an RGB color space;
establishing a classification model according to the identification standard;
training a convolutional neural network based on the classified samples of the road identifications to form a warning identification model, a forbidden identification model and an indication identification model;
and taking the classification model, the warning identification recognition model, the forbidden identification recognition model and the indication identification recognition model as the identification recognition models.
When this embodiment is implemented, road signs are recognized with the trained identification recognition model. The signs are first classified according to the current signage standards, with unusual signs removed, i.e., road signs are divided into warning signs, prohibition signs, and indication signs. After classification, a classification model is built from the RGB data of the different categories; since it classifies on RGB data alone, it is a comparatively simple model. A dedicated convolutional neural network is then trained for each category to produce the corresponding recognition model. Because the road signs are classified first, the per-category CNN models can be simpler; conventional YOLO-series models can be used directly, greatly reducing the training cost of the recognition models.
In one possible implementation manner, inputting the R-channel, G-channel and B-channel data of the suspected element into an identification recognition model, and taking the output result of the identification recognition model as the recognition result includes:
inputting the R channel, G channel and B channel data of the suspected elements into an identification recognition model;
classifying the suspected elements through the classification model, wherein the classification result is warning identification, forbidden identification, indication identification or no identification;
identifying the suspected elements classified to the warning mark through the warning mark identification model;
identifying the suspected elements classified to the forbidden mark through the forbidden mark identification model;
identifying the suspected elements classified to the indication mark through the indication mark identification model;
and taking the output results of the warning identification model, the forbidden identification model and the indication identification model and the non-identification result output by the classification model as the output results of the identification model.
When this embodiment is implemented, the outputs of the warning, prohibition, and indication identification recognition models may also include a recognition failure. That is, when any of these three models reports recognition failure, or when the classification model outputs "no identification", the candidate is judged not to be a true road sign.
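The routing and failure-handling logic of this two-stage design can be sketched as below. The classifier and the three per-category recognizers are passed in as callables (stand-ins for the RGB classification model and the trained CNNs); the category strings and return values are assumed names, not the patent's terminology.

```python
def build_sign_recognizer(classify, recognizers):
    """Hedged sketch of the two-stage pipeline: a cheap classifier routes a
    candidate element to one of three class-specific recognition models.

    classify:    callable, RGB data -> 'warning' | 'prohibition' |
                 'indication' | 'none' (the coarse RGB classification model).
    recognizers: dict mapping those three class names to callables that
                 return a concrete sign type or 'recognition_failed'.
    """
    def recognize(rgb):
        category = classify(rgb)
        if category == 'none':
            return 'not_a_road_sign'            # classification model: no sign
        result = recognizers[category](rgb)
        if result == 'recognition_failed':
            return 'not_a_road_sign'            # per-class model rejected it
        return category, result
    return recognize
```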
In one possible implementation, establishing the identification criteria of the warning flag, the ban flag, and the indication flag in the RGB color space includes:
calculating an average value of R channel values in the road sign sample as a first average value, an average value of G channel values as a second average value and an average value of B channel values as a third average value;
the identification standard of the forbidden mark is that the first average value is in a first preset interval in one image, the second average value is smaller than a first preset value, and the third average value falls into a second preset interval or a third preset interval;
the recognition standard of the warning mark is that the proportion between the second average value and the third average value in one image is smaller than a second preset value, and the first average value is in the first preset interval;
the identification standard of the indication mark is that the third average value is larger than a third preset value in one image, and the ratio between the first average value and the second average value is smaller than the second preset value.
In this embodiment, warning signs, prohibition signs, and indication signs can be distinguished by their characteristic colors. A prohibition sign contains a certain amount of red on a white or blue ground, possibly with black portions, so classification proceeds accordingly. Similarly, a warning sign is mainly a yellow ground with a black border, and its content is generally black, so the B-channel and G-channel values are required to be roughly similar. The indication sign, whose ground color is mainly blue, is judged along the same lines.
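The channel-mean criteria stated above can be sketched as follows. The patent leaves every "preset value" and "preset interval" unspecified, so all thresholds here are illustrative assumptions chosen only to make the rules executable.

```python
def classify_sign_rgb(pixels,
                      r_interval=(120, 255), g_max=90,
                      b_intervals=((0, 60), (150, 255)),
                      ratio_max=1.3, b_min=140):
    """Hedged sketch of the RGB-mean identification criteria.

    pixels: iterable of (r, g, b) tuples from one candidate sign image.
    Returns 'prohibition', 'warning', 'indication', or 'none'.
    All thresholds are assumed stand-ins for the patent's preset values.
    """
    n = 0
    r_sum = g_sum = b_sum = 0
    for r, g, b in pixels:
        r_sum += r; g_sum += g; b_sum += b; n += 1
    # First, second, and third average values in the patent's terms.
    r_avg, g_avg, b_avg = r_sum / n, g_sum / n, b_sum / n

    def within(v, lo_hi):
        return lo_hi[0] <= v <= lo_hi[1]

    # Prohibition: R mean in its interval, G mean small, B mean in one of
    # two intervals (low for red/white faces, high for blue-ground signs).
    if within(r_avg, r_interval) and g_avg < g_max and \
            any(within(b_avg, iv) for iv in b_intervals):
        return 'prohibition'
    # Warning: G/B mean ratio below the second preset value, R mean in its interval.
    if b_avg > 0 and g_avg / b_avg < ratio_max and within(r_avg, r_interval):
        return 'warning'
    # Indication: B mean above the third preset value, R/G ratio below the
    # second preset value.
    if b_avg > b_min and g_avg > 0 and r_avg / g_avg < ratio_max:
        return 'indication'
    return 'none'
```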
Based on the same inventive concept, there is also provided a road identification recognition system based on point cloud data and image recognition, the system comprising:
an acquisition unit configured to acquire point cloud data in a preset area and a target image corresponding to the point cloud data; the target image comprises a road and a road mark configured on the road, and is shot along the direction of the road;
an image recognition unit configured to perform image recognition on the target image, extract edge data of a road, and determine depth data according to sharpness information corresponding to the edge data;
an assignment unit configured to assign values to a pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix; the elements in the three-dimensional pixel matrix are six-dimensional sequences; the six-dimensional sequence comprises an R channel, a G channel, a B channel, an X coordinate, a Y coordinate and a Z coordinate; wherein the Z coordinate corresponds to the depth data;
the clustering unit is configured to perform cluster analysis on the point cloud data to obtain data of suspected road identifications as suspected data;
the matching unit is configured to match the three-dimensional pixel matrix into the point cloud data and acquire elements in the three-dimensional pixel matrix corresponding to suspected data in the point cloud data as suspected elements;
and the judging unit is configured to judge whether the suspected element is a road identifier and a corresponding road identifier type according to the R channel, G channel and B channel data of the suspected element.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two; the units and steps of the examples have been described above in functional terms to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The elements described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a grid device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing description of the embodiments illustrates the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention are intended to fall within its scope.
Claims (9)
1. The road identification recognition method based on the point cloud data and the image recognition is characterized by comprising the following steps of:
acquiring point cloud data in a preset area and a target image corresponding to the point cloud data; the target image comprises a road and a road mark configured on the road, and is shot along the direction of the road;
performing image recognition on the target image, extracting edge data of a road, and determining depth data according to sharpness information corresponding to the edge data;
assigning values to the pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix; the elements in the three-dimensional pixel matrix are six-dimensional sequences; the six-dimensional sequence comprises an R channel, a G channel, a B channel, an X coordinate, a Y coordinate and a Z coordinate; wherein the Z coordinate corresponds to the depth data;
performing cluster analysis on the point cloud data to obtain data of a suspected road identifier as suspected data;
matching the three-dimensional pixel matrix into the point cloud data, and acquiring elements in the three-dimensional pixel matrix corresponding to the suspected data in the point cloud data as suspected elements;
judging whether the suspected element is a road identifier and the corresponding road identifier type according to the R channel, G channel and B channel data of the suspected element.
2. The method of road identification recognition based on point cloud data and image recognition according to claim 1, wherein determining depth data from sharpness information corresponding to the edge data comprises:
calculating depth data of the target image according to related parameters of shooting equipment for shooting the target image;
searching points with sharpness greater than a preset value at two sides from the edge data according to the sharpness information to serve as sharpness boundary points;
calculating the depth coordinate of the sharpness demarcation point according to the depth data to serve as a reference depth coordinate, and taking the road starting point in the target image as a zero depth coordinate;
and performing depth data interpolation according to the depth data, the sharpness information, the zero depth coordinate and the reference depth coordinate to form depth data corresponding to the edge data.
3. The method according to claim 2, wherein interpolating depth data based on the depth data, the sharpness information, the zero depth coordinates, and the reference depth coordinates to form depth data corresponding to the edge data comprises:
taking a region between the zero depth coordinate and the reference depth coordinate as a first interpolation region, and taking a non-first interpolation region in the edge data as a second interpolation region;
calculating a depth conversion function in the target image based on the depth data; the depth conversion function is the corresponding relation between the length of the line in the target image and the length of the actual line;
performing depth data interpolation on the first interpolation region according to the depth conversion function to form a first depth coordinate, and performing depth data interpolation on the second interpolation region according to the depth conversion function to form a second depth coordinate;
obtaining sharpness information corresponding to the second depth coordinate, and correcting the second depth coordinate by using a sharpness change function and the depth conversion function in parallel to form a third depth coordinate; the sharpness changing function is the corresponding relation between the depth data and the sharpness in the second interpolation area;
and merging the first depth coordinate and the third depth coordinate to form depth data corresponding to the edge data.
4. The method of road identification based on point cloud data and image recognition of claim 2, wherein assigning values to the pixel matrix of the target image based on the edge data and the depth data to form a three-dimensional pixel matrix comprises:
coordinates of edge data corresponding to the zero depth coordinates in the point cloud data are calculated according to the shooting position of the target image in the point cloud data to serve as reference coordinates;
calculating coordinates of each edge data in the point cloud data according to the reference coordinates and the shooting direction of the target image to serve as edge data coordinates;
and assigning the edge data coordinates and the depth data to the edge data in the pixel matrix to form a three-dimensional pixel matrix.
5. The method for identifying road identifiers based on point cloud data and image recognition according to claim 2, wherein determining whether the suspected element is a road identifier according to R-channel, G-channel, and B-channel data of the suspected element, and the corresponding road identifier category includes:
inputting the R channel, G channel and B channel data of the suspected elements into an identification recognition model, and taking the output result of the identification recognition model as a recognition result.
6. The method for identifying road identifiers based on point cloud data and image recognition according to claim 5, wherein the establishing of the identifier identification model comprises:
collecting a sample of a road identifier, and dividing the sample of the road identifier into a warning identifier, a forbidden identifier and an indication identifier; the sample of road identifications comprises images of road identifications acquired from a plurality of angles;
respectively establishing identification standards of warning marks, forbidden marks and indication marks in an RGB color space;
establishing a classification model according to the identification standard;
training a convolutional neural network based on the classified samples of the road identifications to form a warning identification model, a forbidden identification model and an indication identification model;
and taking the classification model, the warning identification recognition model, the forbidden identification recognition model and the indication identification recognition model as the identification recognition models.
7. The method for identifying road identifiers based on point cloud data and image recognition according to claim 6, wherein inputting the R-channel, G-channel, and B-channel data of the suspected elements into an identifier identification model, and outputting the result of the identifier identification model as an identification result comprises:
inputting the R channel, G channel and B channel data of the suspected elements into an identification recognition model;
classifying the suspected elements through the classification model, wherein the classification result is warning identification, forbidden identification, indication identification or no identification;
identifying the suspected elements classified to the warning mark through the warning mark identification model;
identifying the suspected elements classified to the forbidden mark through the forbidden mark identification model;
identifying the suspected elements classified to the indication mark through the indication mark identification model;
and taking the output results of the warning identification model, the forbidden identification model and the indication identification model and the non-identification result output by the classification model as the output results of the identification model.
8. The method for identifying road identifications based on point cloud data and image identification according to claim 6, wherein establishing identification criteria of warning identifications, forbidden identifications and indication identifications in an RGB color space respectively comprises:
calculating an average value of R channel values in the road sign sample as a first average value, an average value of G channel values as a second average value and an average value of B channel values as a third average value;
the identification standard of the forbidden mark is that the first average value is in a first preset interval in one image, the second average value is smaller than a first preset value, and the third average value falls into a second preset interval or a third preset interval;
the recognition standard of the warning mark is that the proportion between the second average value and the third average value in one image is smaller than a second preset value, and the first average value is in the first preset interval;
the identification standard of the indication mark is that the third average value is larger than a third preset value in one image, and the ratio between the first average value and the second average value is smaller than the second preset value.
9. A road identification recognition system based on point cloud data and image recognition using the method of any one of claims 1 to 8, characterized by comprising:
an acquisition unit configured to acquire point cloud data in a preset area and a target image corresponding to the point cloud data; the target image comprises a road and a road mark configured on the road, and is shot along the direction of the road;
an image recognition unit configured to perform image recognition on the target image, extract edge data of a road, and determine depth data according to sharpness information corresponding to the edge data;
an assignment unit configured to assign values to a pixel matrix of the target image according to the edge data and the depth data to form a three-dimensional pixel matrix; the elements in the three-dimensional pixel matrix are six-dimensional sequences; the six-dimensional sequence comprises an R channel, a G channel, a B channel, an X coordinate, a Y coordinate and a Z coordinate; wherein the Z coordinate corresponds to the depth data;
the clustering unit is configured to perform cluster analysis on the point cloud data to obtain data of suspected road identifications as suspected data;
the matching unit is configured to match the three-dimensional pixel matrix into the point cloud data and acquire elements in the three-dimensional pixel matrix corresponding to suspected data in the point cloud data as suspected elements;
and the judging unit is configured to judge whether the suspected element is a road identifier and a corresponding road identifier type according to the R channel, G channel and B channel data of the suspected element.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310724151.0A CN116503821B (en) | 2023-06-19 | 2023-06-19 | Road identification recognition method and system based on point cloud data and image recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310724151.0A CN116503821B (en) | 2023-06-19 | 2023-06-19 | Road identification recognition method and system based on point cloud data and image recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116503821A true CN116503821A (en) | 2023-07-28 |
CN116503821B CN116503821B (en) | 2023-08-25 |
Family
ID=87323283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310724151.0A Active CN116503821B (en) | 2023-06-19 | 2023-06-19 | Road identification recognition method and system based on point cloud data and image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116503821B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117456285A (en) * | 2023-12-21 | 2024-01-26 | 宁波微科光电股份有限公司 | Foreign object detection method for subway screen doors based on TOF camera and deep learning model |
CN118781507A (en) * | 2024-09-12 | 2024-10-15 | 成都经开地理信息勘测设计院有限公司 | Road information recognition method and system based on vehicle-mounted point cloud and aerial photographs |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105374039A (en) * | 2015-11-16 | 2016-03-02 | 辽宁大学 | Monocular image depth information estimation method based on contour acuity |
CN106560835A (en) * | 2015-09-30 | 2017-04-12 | 高德软件有限公司 | Guideboard identification method and device |
CN107463918A (en) * | 2017-08-17 | 2017-12-12 | 武汉大学 | Lane line extracting method based on laser point cloud and image data fusion |
JP2018173749A (en) * | 2017-03-31 | 2018-11-08 | 株式会社パスコ | Road sign detection device, road sign detection method, program, and road surface detection device |
CN109959911A (en) * | 2019-03-25 | 2019-07-02 | 清华大学 | Multi-target autonomous positioning method and device based on lidar |
CN110378942A (en) * | 2018-08-23 | 2019-10-25 | 北京京东尚科信息技术有限公司 | Barrier identification method, system, equipment and storage medium based on binocular camera |
US20200401823A1 (en) * | 2019-06-19 | 2020-12-24 | DeepMap Inc. | Lidar-based detection of traffic signs for navigation of autonomous vehicles |
CN112967283A (en) * | 2021-04-22 | 2021-06-15 | 上海西井信息科技有限公司 | Target identification method, system, equipment and storage medium based on binocular camera |
CN113221648A (en) * | 2021-04-08 | 2021-08-06 | 武汉大学 | Fusion point cloud sequence image guideboard detection method based on mobile measurement system |
CN113484875A (en) * | 2021-07-30 | 2021-10-08 | 燕山大学 | Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering |
CN113674355A (en) * | 2021-07-06 | 2021-11-19 | 中国北方车辆研究所 | Target identification and positioning method based on camera and laser radar |
CN113935428A (en) * | 2021-10-25 | 2022-01-14 | 山东大学 | Three-dimensional point cloud clustering identification method and system based on image identification |
CN115761550A (en) * | 2022-12-20 | 2023-03-07 | 江苏优思微智能科技有限公司 | Water surface target detection method based on laser radar point cloud and camera image fusion |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106560835A (en) * | 2015-09-30 | 2017-04-12 | 高德软件有限公司 | Guideboard identification method and device |
CN105374039A (en) * | 2015-11-16 | 2016-03-02 | 辽宁大学 | Monocular image depth information estimation method based on contour acuity |
JP2018173749A (en) * | 2017-03-31 | 2018-11-08 | 株式会社パスコ | Road sign detection device, road sign detection method, program, and road surface detection device |
CN107463918A (en) * | 2017-08-17 | 2017-12-12 | 武汉大学 | Lane line extracting method based on laser point cloud and image data fusion |
CN110378942A (en) * | 2018-08-23 | 2019-10-25 | 北京京东尚科信息技术有限公司 | Barrier identification method, system, equipment and storage medium based on binocular camera |
CN109959911A (en) * | 2019-03-25 | 2019-07-02 | 清华大学 | Multi-target autonomous positioning method and device based on lidar |
US20200401823A1 (en) * | 2019-06-19 | 2020-12-24 | DeepMap Inc. | Lidar-based detection of traffic signs for navigation of autonomous vehicles |
CN113221648A (en) * | 2021-04-08 | 2021-08-06 | 武汉大学 | Fusion point cloud sequence image guideboard detection method based on mobile measurement system |
CN112967283A (en) * | 2021-04-22 | 2021-06-15 | 上海西井信息科技有限公司 | Target identification method, system, equipment and storage medium based on binocular camera |
CN113674355A (en) * | 2021-07-06 | 2021-11-19 | 中国北方车辆研究所 | Target identification and positioning method based on camera and laser radar |
CN113484875A (en) * | 2021-07-30 | 2021-10-08 | 燕山大学 | Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering |
CN113935428A (en) * | 2021-10-25 | 2022-01-14 | 山东大学 | Three-dimensional point cloud clustering identification method and system based on image identification |
CN115761550A (en) * | 2022-12-20 | 2023-03-07 | 江苏优思微智能科技有限公司 | Water surface target detection method based on laser radar point cloud and camera image fusion |
Non-Patent Citations (3)
Title |
---|
YAODONG CUI et al.: "Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review", Journal of LaTeX Class Files, pages 1-19 * |
CHENG Tiehong et al.: "A Road Facility Extraction Method Using UAV-Borne Point Clouds", Beijing Surveying and Mapping, vol. 34, no. 11, pages 1649-1652 * |
MA Li et al.: "A Depth Information Extraction Algorithm for Single-View Images Using Contour Sharpness", Journal of Chinese Computer Systems, vol. 37, no. 2, pages 316-320 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117456285A (en) * | 2023-12-21 | 2024-01-26 | 宁波微科光电股份有限公司 | Foreign object detection method for subway screen doors based on TOF camera and deep learning model |
CN118781507A (en) * | 2024-09-12 | 2024-10-15 | 成都经开地理信息勘测设计院有限公司 | Road information recognition method and system based on vehicle-mounted point cloud and aerial photographs |
CN118781507B (en) * | 2024-09-12 | 2024-11-22 | 成都经开地理信息勘测设计院有限公司 | Road information identification method and system based on vehicle-mounted point cloud and aerial photo |
Also Published As
Publication number | Publication date |
---|---|
CN116503821B (en) | 2023-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116503821B (en) | Road identification recognition method and system based on point cloud data and image recognition | |
CN107463918B (en) | Lane line extraction method based on fusion of laser point cloud and image data | |
CN107516077B (en) | Traffic sign information extraction method based on fusion of laser point cloud and image data | |
CN106296666B (en) | Shadow removal method for color images and application thereof | |
EP2811423B1 (en) | Method and apparatus for detecting target | |
CN111626277B (en) | Vehicle tracking method and device based on over-station inter-modulation index analysis | |
CN106709412B (en) | Traffic sign detection method and device | |
CN103020623A (en) | Traffic sign detection method and equipment | |
JP6653361B2 (en) | Road marking image processing apparatus, road marking image processing method, and road marking image processing program | |
CN109949593A (en) | A traffic signal recognition method and system based on prior knowledge of intersections | |
CN109635799B (en) | Method for recognizing number of character wheel of gas meter | |
CN114972177B (en) | Road hazard identification and management method, device and intelligent terminal | |
JP4747122B2 (en) | Specific area automatic extraction system, specific area automatic extraction method, and program | |
CN109858310A (en) | Vehicle and traffic sign detection method | |
Thomas et al. | Smart car parking system using convolutional neural network | |
CN112598674B (en) | Image processing method and device for vehicle and vehicle | |
CN110969135B (en) | Vehicle logo recognition method in natural scene | |
CN109800693B (en) | A night-time vehicle detection method based on color channel mixing features | |
CN112364844B (en) | Data acquisition method and system based on computer vision technology | |
CN118366107B (en) | Irregular parking identification method on roads | |
CN113269195A (en) | Reading table image character recognition method and device and readable storage medium | |
CN109919863B (en) | Full-automatic colony counter, system and colony counting method thereof | |
CN115457442A (en) | A vehicle line pressure detection method, device and storage medium | |
CN113361483A (en) | Traffic speed limit sign detection method, device, equipment and storage medium | |
CN115359346B (en) | Small micro-space identification method and device based on street view picture and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||