CN111079541A - Road stop line detection method based on monocular vision - Google Patents
Road stop line detection method based on monocular vision
- Publication number: CN111079541A
- Application number: CN201911137093.1A
- Authority
- CN
- China
- Prior art keywords: straight line, points, initial, effective, gradient
- Prior art date: 2019-11-19
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/582 — Recognition of traffic objects: traffic signs
- G06V10/25 — Image preprocessing: determination of region of interest [ROI] or volume of interest [VOI]
- G06V10/34 — Image preprocessing: smoothing or thinning of the pattern; morphological operations; skeletonisation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a road stop line detection method based on monocular vision. The method comprises converting the road image information to grayscale, Gaussian filtering for smoothing, setting an ROI, obtaining a gradient effective-point grayscale map, obtaining a region-growing source map, obtaining a region-growing result map of effective points, screening an initial target straight line, and determining the final position of the stop line. The detection method can accurately identify the stop line in the road in real time and obtain its position in the picture, and, combined with a coordinate conversion technique, can obtain the position of the stop line in the actual road in real time.
Description
Technical Field
The invention relates to the field of image-processing-based target detection for intelligent driving, and in particular to a road stop line detection method based on monocular vision.
Background
The identification of road traffic markings is an important component of intelligent vehicle technology. Road traffic markings come in many types, and the stop line is an important traffic marking that generally appears at road intersections. Accurate identification of the stop line lets the vehicle know the position of the intersection, so that it can decelerate in advance and drive in a more disciplined way. It helps avoid running a traffic signal by mistake or scraping against other vehicles and pedestrians, and improves the driving safety of the intelligent vehicle. Combined with other positioning technologies such as GPS, stop line identification can ensure accurate positioning of the intelligent vehicle while driving, so that more accurate decision and control can be made. Therefore, accurate identification of the stop line is one of the important components in achieving ADAS and automatic driving.
Disclosure of Invention
The invention aims to provide a road stop line detection method based on monocular vision to solve the problems in the prior art.
The technical solution adopted to achieve this aim is a road stop line detection method based on monocular vision, comprising the following steps:
1) Convert the image information acquired by the vehicle-mounted monocular camera to grayscale.
2) Set the ROI according to the distribution characteristics of the road traffic markings in the image.
3) Smooth the grayscale image within the ROI using Gaussian filtering.
4) Apply gray stretching to the image within the ROI to enhance its contrast.
5) Calculate the gray gradient values of all pixel points within the ROI in the x and y directions and the gradient direction angle θ(i,j). The pixel point in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix. G_x(i,j) is the gray gradient value of pixel point I in the x direction, G_y(i,j) is the gray gradient value of pixel point I in the y direction, and g(i,j) is the gray value of pixel point I.
G_x(i,j) = (g(i+1,j-1) + g(i+1,j) + g(i+1,j+1)) - (g(i-1,j-1) + g(i-1,j) + g(i-1,j+1))   (2)
G_y(i,j) = (g(i-1,j+1) + g(i,j+1) + g(i+1,j+1)) - (g(i-1,j-1) + g(i,j-1) + g(i+1,j-1))   (3)
In the formula, θ(i,j) has a value range of [0°, 180°].
6) The gradient mean and the gray level mean within the ROI are calculated.
In the formula, G_yaver is the mean gradient in the y direction, g_aver is the gray-level mean, r is the number of rows of pixels in the ROI, and c is the number of columns of pixels in the ROI.
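The mean formulas themselves are not reproduced in the text; a plausible plain-text reconstruction, consistent with the variable definitions above, is:
G_yaver = (1 / (r·c)) · Σ over all (i,j) in the ROI of G_y(i,j)
g_aver = (1 / (r·c)) · Σ over all (i,j) in the ROI of g(i,j)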
7) Determine the y-direction gradient judgment coefficient G_yjudge, the gray-scale judgment coefficient g_judge, and the gradient angle threshold θ_judge according to the variation law of the gradient and the gray level within the ROI. Here, 90° ≥ θ_judge ≥ 0°.
8) Judge the validity of the pixel points and clear the invalid points to obtain an initial effective gradient map. An initial effective point has the following characteristics:
g(i,j) ≥ g_aver · g_judge
G_y(i,j) ≥ G_yaver · G_yjudge
(180° - θ_judge) ≥ θ(i,j) ≥ θ_judge
9) Traverse the initial effective gradient map and judge whether the number of initial effective points in each initial effective point's neighborhood is greater than or equal to the interference-point threshold N_e, removing the interference points to obtain a region-growing source map.
10) Scan the region-growing source map from bottom to top according to the distribution characteristics of the road traffic markings to obtain an initial seed point set for region growing.
11) Perform region growing in the growth source map according to the seed coordinates in the initial seed set to obtain a region-growing result map composed of effective points containing lane information.
12) Apply the Hough transform to the region-growing result map to obtain the set of straight lines it contains.
13) Analyze and screen the straight line set to obtain an initial target straight line meeting the requirements.
14) Determine an initial scanning area according to the initial target straight line.
15) Scan the initial scanning area and obtain a final scanning area according to the scanning result.
16) Count the number of effective points in the final scanning area and calculate the percentage of effective points among the total number of pixel points in the final scanning area. When this percentage is greater than or equal to the density threshold D_thres, the initial target straight line is judged to be a valid straight line. When the percentage is less than the density threshold, the initial target straight line is judged to be an invalid straight line, and it is determined that there is no stop line in the road at this moment.
17) The image coordinates of the valid initial target straight line are extracted, the stop line is identified at the coordinates, and the stop line position is displayed in the original image.
Further, in step 9), an interference point is a pixel point whose surrounding neighborhood contains fewer effective points than the threshold N_e.
Further, step 11) specifically comprises the following steps:
11.1) Pop a seed point from the stack to obtain its coordinate information.
11.2) Traverse a neighborhood of a certain size around the seed point; if valid points exist in the neighborhood, push all valid points in the neighborhood onto the stack and mark them.
11.3) repeating the steps 11.1) and 11.2) until the stack is empty, and obtaining a region growing result graph consisting of effective points containing lane information.
Further, in step 13), the straight line information in the straight line set is the coordinate information of the two end points of each straight line in the set. The length of each straight line segment and its angle with the positive x direction are calculated from the coordinate information, and the initial target straight line l_init is then screened according to a straight-line length threshold L_thres and an angle threshold θ_thres. l_init is the straight line in the set that satisfies the angle requirement and has the longest length.
Further, in step 14), the length and width of the initial scanning area are both greater than the range of variation of the x and y coordinates of the initial target straight line, and the initial scanning area is strip-shaped.
Further, in step 15), the initial scanning area is traversed to find the valid points at its two ends; the length of the final scanning area is determined from the difference of the x coordinates of these two end points, and, together with the slope of the initial target straight line, the set pixel width and the midpoint coordinate of the initial target straight line, this determines the shape, size and position of the final scanning area.
The technical effects of the invention are as follows:
A. the stop line in the road can be accurately identified in real time, the position information of the stop line in the picture is obtained, and the position of the stop line in the actual road can be obtained in real time by combining a coordinate conversion technology;
B. by selecting the specific region of interest, the processing region is greatly reduced, so that the interference is reduced, and the processing speed is improved;
C. the algorithm is simple, and the real-time performance of the system is greatly improved.
Drawings
FIG. 1 is a flow chart of a detection method;
FIG. 2 is a schematic view of an intersection stop line;
FIG. 3 is a schematic view of a camera image;
FIG. 4 is a diagram of a region growing source;
FIG. 5 is a schematic view of a scanning of a region growing source map;
FIG. 6 is a graph showing the result of ideal region growing;
FIG. 7 is a schematic view of an initial scanning area;
FIG. 8 is a schematic view of a final scan area;
FIG. 9 is a schematic view of stop-line identification I;
FIG. 10 is a schematic view of stop-line identification II.
Detailed Description
The present invention is further illustrated by the following examples, but the scope of the above-described subject matter should not be construed as limited to them. Various substitutions and alterations made according to common technical knowledge and conventional means in the field, without departing from the technical idea of the invention, are covered by the scope of the invention.
Example 1:
Referring to fig. 1, the present embodiment discloses a road stop line detection method based on monocular vision, comprising the following steps:
1) Convert the image information acquired by the vehicle-mounted monocular camera to grayscale. A schematic diagram of an intersection stop line is shown in fig. 2, and a schematic view of the camera image is shown in fig. 3.
2) Set an ROI (region of interest) according to the distribution characteristics of the road traffic markings in the image to reduce the subsequent amount of calculation.
3) Smooth the grayscale image within the ROI using Gaussian filtering.
4) Apply gray stretching to the image within the ROI to enhance its contrast.
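A minimal Python/OpenCV sketch of steps 1)-4) is given below. The ROI row band and the 5x5 Gaussian kernel are assumed values chosen for illustration and are not prescribed by the embodiment.

```python
import cv2

def preprocess(frame_bgr, roi_rows=(300, 480)):
    """Steps 1)-4): graying, ROI selection, Gaussian smoothing, gray stretching.
    roi_rows is an assumed lower-image row band; in practice it follows the camera geometry."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)        # step 1: convert to grayscale
    roi = gray[roi_rows[0]:roi_rows[1], :]                    # step 2: ROI (band of rows)
    roi = cv2.GaussianBlur(roi, (5, 5), 0)                    # step 3: Gaussian smoothing
    roi = cv2.normalize(roi, None, 0, 255, cv2.NORM_MINMAX)   # step 4: gray stretch (contrast)
    return roi
```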
5) Calculate the gray gradient values of all pixel points within the ROI in the x and y directions and the gradient direction angle θ(i,j). The pixel point in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix. G_x(i,j) is the gray gradient value of pixel point I in the x direction, G_y(i,j) is the gray gradient value of pixel point I in the y direction, and g(i,j) is the gray value of pixel point I.
G_x(i,j) = (g(i+1,j-1) + g(i+1,j) + g(i+1,j+1)) - (g(i-1,j-1) + g(i-1,j) + g(i-1,j+1))   (2)
G_y(i,j) = (g(i-1,j+1) + g(i,j+1) + g(i+1,j+1)) - (g(i-1,j-1) + g(i,j-1) + g(i+1,j-1))   (3)
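The gradient computation of formulas (2) and (3) amounts to filtering with Prewitt-like 3x3 masks; a NumPy/OpenCV sketch follows. The angle formula is not reproduced in the text, so the arctan-based angle mapped into [0°, 180°) below is an assumption consistent with the stated value range.

```python
import cv2
import numpy as np

# 3x3 masks equivalent to formulas (2) and (3): differences of neighbouring columns/rows
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=np.float32)    # x-direction gradient mask, formula (2)
KY = np.array([[-1, -1, -1],
               [ 0,  0,  0],
               [ 1,  1,  1]], dtype=np.float32)  # y-direction gradient mask, formula (3)

def gradients(roi_gray):
    """Step 5): G_x, G_y and the gradient direction angle theta for every ROI pixel."""
    g = roi_gray.astype(np.float32)
    gx = cv2.filter2D(g, cv2.CV_32F, KX)             # G_x(i,j)
    gy = cv2.filter2D(g, cv2.CV_32F, KY)             # G_y(i,j)
    theta = np.degrees(np.arctan2(gy, gx)) % 180.0   # assumed angle definition, range [0°, 180°)
    return gx, gy, theta
```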
6) Calculate the gradient mean and the gray-level mean within the ROI, where G_yaver is the mean gradient in the y direction, g_aver is the gray-level mean, r is the number of rows of pixels in the ROI, and c is the number of columns of pixels in the ROI.
7) Determine the y-direction gradient judgment coefficient G_yjudge, the gray-scale judgment coefficient g_judge, and the gradient angle threshold θ_judge according to the variation law of the gradient and the gray level within the ROI; 90° ≥ θ_judge ≥ 0°. For example: G_yjudge = 1.2, g_judge = 1.1 and θ_judge = 20°.
8) Judge the validity of the pixel points and clear the invalid points to obtain an initial effective gradient map. An initial effective point has the following characteristics:
g(i,j) ≥ g_aver · g_judge
G_y(i,j) ≥ G_yaver · G_yjudge
(180° - θ_judge) ≥ θ(i,j) ≥ θ_judge
9) Traverse the initial effective gradient map and judge whether the number of initial effective points in each initial effective point's neighborhood is greater than or equal to the threshold N_e, thereby removing interference points and obtaining the region-growing source map. An interference point is a pixel point whose surrounding neighborhood (a 3x3 neighborhood in this example) contains fewer effective points than the threshold N_e (N_e = 3 in this example). For each pixel point, the number of effective points in its surrounding neighborhood is compared with the threshold N_e: if it is greater than or equal to N_e the point is retained; if it is less than N_e the point is discarded. The region-growing source map is finally obtained, see fig. 4.
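A sketch of steps 7)-9) using the example parameter values (G_yjudge = 1.2, g_judge = 1.1, θ_judge = 20°, N_e = 3). Counting neighborhood points with a 3x3 box filter is one straightforward implementation of the neighborhood test; whether the signed gradient or its magnitude is used, and whether the centre point is counted in its own neighborhood, are left open in the text, so the choices below are assumptions.

```python
import cv2
import numpy as np

def region_growing_source(roi_gray, gy, theta,
                          gy_judge=1.2, g_judge=1.1, theta_judge=20.0, n_e=3):
    """Steps 7)-9): initial effective-point test, then interference-point removal."""
    g = roi_gray.astype(np.float32)
    g_aver = g.mean()      # gray-level mean over the ROI
    gy_aver = gy.mean()    # y-direction gradient mean over the ROI (signed, per formula (3))
    valid = ((g >= g_aver * g_judge) &
             (gy >= gy_aver * gy_judge) &
             (theta >= theta_judge) &
             (theta <= 180.0 - theta_judge)).astype(np.uint8)   # step 8: initial effective points

    # Step 9: count effective points in each 3x3 neighborhood (centre excluded, an assumption)
    kernel = np.ones((3, 3), dtype=np.float32)
    kernel[1, 1] = 0.0
    neighbours = cv2.filter2D(valid.astype(np.float32), cv2.CV_32F, kernel)
    source = np.where((valid == 1) & (neighbours >= n_e), 255, 0).astype(np.uint8)
    return source
```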
10) Scan the region-growing source map from bottom to top according to the distribution characteristics of the road traffic markings to obtain an initial seed point set for region growing. In this embodiment, the scan lines are located at x-direction coordinates c/36 × 27 and c/36 × 9, see fig. 5. When a scan line meets an effective point, that point is used as a seed point for subsequent region growing and pushed onto a seed stack; when the number of seeds on one scan line exceeds 2, scanning of that line is stopped.
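A sketch of the bottom-to-top seed scan of step 10). The scan-line x coordinates follow this embodiment; the stop-after-more-than-two-seeds rule is implemented literally as stated.

```python
import numpy as np

def scan_seeds(source, max_seeds_per_line=2):
    """Step 10): scan vertical lines bottom-to-top and collect region-growing seed points."""
    rows, cols = source.shape
    scan_xs = [cols // 36 * 27, cols // 36 * 9]     # scan-line x coordinates from the embodiment
    seeds = []                                      # seed stack (Python list used as a stack)
    for x in scan_xs:
        found = 0
        for y in range(rows - 1, -1, -1):           # bottom-to-top traversal
            if source[y, x]:                        # effective point met on the scan line
                seeds.append((y, x))                # push seed point onto the stack
                found += 1
                if found > max_seeds_per_line:      # more than 2 seeds -> stop this scan line
                    break
    return seeds
```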
11) Perform region growing in the region-growing source map according to the seed coordinates in the initial seed stack to obtain a region-growing result map composed of effective points containing lane information. After region growing, a region-growing result map containing stop-line information is obtained; the ideal case is shown in fig. 6.
11.1) Pop a seed point from the stack to obtain its coordinate information.
11.2) Traverse the 3x3 neighborhood around the seed point; if valid points exist in that neighborhood, push all of them onto the stack and mark them.
11.3) repeat steps 11.1) and 11.2) until the stack is empty.
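A stack-based region-growing sketch matching sub-steps 11.1)-11.3), using the 3x3 neighborhood of this example.

```python
import numpy as np

def region_grow(source, seeds):
    """Step 11): grow regions from the seed stack; returns the region-growing result map."""
    rows, cols = source.shape
    result = np.zeros_like(source)
    marked = np.zeros((rows, cols), dtype=bool)
    stack = list(seeds)                              # initial seed stack
    while stack:                                     # 11.3) loop until the stack is empty
        y, x = stack.pop()                           # 11.1) pop a seed point
        if marked[y, x]:
            continue
        marked[y, x] = True                          # mark the point
        result[y, x] = 255
        for dy in (-1, 0, 1):                        # 11.2) traverse the 3x3 neighborhood
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    if source[ny, nx] and not marked[ny, nx]:
                        stack.append((ny, nx))       # push valid, unmarked neighbours
    return result
```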
12) Apply the Hough transform to the region-growing result map to obtain the set of straight lines it contains.
13) Analyze and screen the straight line set to obtain an initial target straight line meeting the requirements.
The straight line information in the straight line set is the coordinate information of the two end points of each straight line in the set. The length L_line of each straight line segment and its angle θ_line with the positive x direction are calculated from the coordinate information, and the initial target straight line l_init is then screened according to a straight-line length threshold L_thres and an angle threshold θ_thres (in this example, L_thres = 50 pixels and θ_thres = 20°). l_init is the longest straight line in the set that meets the angle requirement (0° ≤ θ_line ≤ θ_thres or 180° − θ_thres ≤ θ_line ≤ 180°).
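A sketch of steps 12)-13) using OpenCV's probabilistic Hough transform. The Hough vote threshold and gap parameters are assumed values chosen for illustration, while L_thres = 50 pixels and θ_thres = 20° follow this example.

```python
import cv2
import numpy as np

def screen_initial_line(result_map, l_thres=50.0, theta_thres=20.0):
    """Steps 12)-13): Hough transform, then pick the longest line meeting the angle requirement."""
    lines = cv2.HoughLinesP(result_map, 1, np.pi / 180, threshold=30,
                            minLineLength=30, maxLineGap=10)       # assumed Hough parameters
    if lines is None:
        return None
    best, best_len = None, 0.0
    for x1, y1, x2, y2 in lines[:, 0]:
        length = float(np.hypot(x2 - x1, y2 - y1))                 # L_line from the end points
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0   # theta_line vs. positive x axis
        near_horizontal = angle <= theta_thres or angle >= 180.0 - theta_thres
        if near_horizontal and length >= l_thres and length > best_len:
            best, best_len = (x1, y1, x2, y2), length              # keep the longest candidate
    return best                                                     # l_init, or None if absent
```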
14) Determine an initial scanning area according to the initial target straight line. Using the coordinate information of the two end points of the initial target straight line, an initial scanning area of suitable width and length is determined (in this example, the length of the initial scanning area is the length of the ROI and the width is 60 pixels). The effective points in the initial scanning area are scanned, the effective points at the two ends of the area are found, and their coordinate information is recorded. The initial scanning area is shown in fig. 7.
15) Obtain the final scanning area according to the scanning result. The slope of the initial target straight line l_init is calculated from its coordinate information. The final scanning area is then determined from the slope of l_init, the midpoint coordinate of the initial scanning line, the set width W_final of the final scanning region (its y-direction coordinate range), and a length given by the difference of the x-direction coordinates of the effective points at the two ends of the initial scanning area, as shown in fig. 8.
16) Traverse the final scanning area, count the number of effective points in it, and calculate the percentage of effective points among the total number of pixels in the final scanning area. When the percentage is greater than or equal to the density threshold D_thres, the initial target straight line is judged to be a valid straight line. When the percentage is less than the density threshold D_thres, the initial target straight line is judged to be an invalid straight line, and it is determined that there is no stop line in the road at this moment.
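A simplified sketch of steps 14)-16): instead of constructing the initial and final scanning areas explicitly, it counts effective points in a slanted band of set pixel width that follows the initial target line between its end points, and applies the density test. The band width W_final and the density threshold D_thres are assumed values, since the text leaves their numeric values to the implementer.

```python
import numpy as np

def density_check(result_map, l_init, width_final=20, d_thres=0.4):
    """Steps 14)-16), simplified: density of effective points in a band around l_init vs. D_thres.
    width_final and d_thres are assumed values for illustration."""
    x1, y1, x2, y2 = l_init
    if x2 == x1:
        return False                                   # degenerate (vertical) candidate line
    slope = (y2 - y1) / (x2 - x1)                      # slope of the initial target line
    x_lo, x_hi = sorted((int(x1), int(x2)))
    total, valid = 0, 0
    for x in range(x_lo, x_hi + 1):                    # the band follows the line's slope
        yc = int(round(y1 + slope * (x - x1)))         # centre row of the band at this column
        for y in range(yc - width_final // 2, yc + width_final // 2 + 1):
            if 0 <= y < result_map.shape[0]:
                total += 1
                valid += int(result_map[y, x] > 0)     # count effective points in the band
    return total > 0 and valid / total >= d_thres      # valid line => stop line present
```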
17) Extract the coordinate information of the valid initial target straight line, determine that the stop line is located at the position of this straight line, and display the stop line in the original image.
Referring to fig. 9, 9a is a Gaussian-filtered grayscale image, 9b and 9c are stop-line recognition result images, and 9d is a region-growing result image. From 9d it can be seen that the region-growing result map contains the stop-line information while most of the interference information has been removed, and from 9b and 9c it can be seen that the recognition result of the algorithm is accurate. Referring to fig. 10, 10a is the identification result, 10b is the region-growing source map with the interference points removed by the neighborhood removal threshold, 10c is the initial scanning area, and 10d is the final scanning area. The initial scanning area is parallel to the x axis because it does not take the slope of the initial target line into account, whereas the final scanning area is determined using that slope and is therefore a slanted strip. Since the difference of the x coordinates of the effective points at the two ends of the initial scanning area equals the length of the ROI, the length of the final scanning area equals the length of the ROI.
Example 2:
The present embodiment discloses a basic road stop line detection method based on monocular vision, comprising the following steps:
1) Convert the image information acquired by the vehicle-mounted monocular camera to grayscale.
2) Set the ROI according to the distribution characteristics of the road traffic markings in the image.
3) Smooth the grayscale image within the ROI using Gaussian filtering.
4) Apply gray stretching to the image within the ROI to enhance its contrast.
5) Calculate the gray gradient values of all pixel points within the ROI in the x and y directions and the gradient direction angle θ(i,j). The pixel point in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix. G_x(i,j) is the gray gradient value of pixel point I in the x direction, G_y(i,j) is the gray gradient value of pixel point I in the y direction, and g(i,j) is the gray value of pixel point I.
G_x(i,j) = (g(i+1,j-1) + g(i+1,j) + g(i+1,j+1)) - (g(i-1,j-1) + g(i-1,j) + g(i-1,j+1))   (2)
G_y(i,j) = (g(i-1,j+1) + g(i,j+1) + g(i+1,j+1)) - (g(i-1,j-1) + g(i,j-1) + g(i+1,j-1))   (3)
In the formula, θ(i,j) has a value range of [0°, 180°].
6) The gradient mean and the gray level mean within the ROI are calculated.
In the formula, G_yaver is the mean gradient in the y direction, g_aver is the gray-level mean, r is the number of rows of pixels in the ROI, and c is the number of columns of pixels in the ROI.
7) Determine the y-direction gradient judgment coefficient G_yjudge, the gray-scale judgment coefficient g_judge, and the gradient angle threshold θ_judge according to the variation law of the gradient and the gray level within the ROI. Here, 90° ≥ θ_judge ≥ 0°.
8) Judge the validity of the pixel points and clear the invalid points to obtain an initial effective gradient map. An initial effective point has the following characteristics:
g(i,j) ≥ g_aver · g_judge
G_y(i,j) ≥ G_yaver · G_yjudge
(180° - θ_judge) ≥ θ(i,j) ≥ θ_judge
9) Traverse the initial effective gradient map and judge whether the number of initial effective points in each initial effective point's neighborhood is greater than or equal to the interference-point threshold N_e, removing the interference points to obtain a region-growing source map.
10) Scan the region-growing source map from bottom to top according to the distribution characteristics of the road traffic markings to obtain an initial seed point set for region growing.
11) Perform region growing in the growth source map according to the seed coordinates in the initial seed set to obtain a region-growing result map composed of effective points containing lane information.
12) Apply the Hough transform to the region-growing result map to obtain the set of straight lines it contains.
13) Analyze and screen the straight line set to obtain an initial target straight line meeting the requirements.
14) Determine an initial scanning area according to the initial target straight line.
15) Scan the initial scanning area and obtain a final scanning area according to the scanning result.
16) Count the number of effective points in the final scanning area and calculate the percentage of effective points among the total number of pixel points in the final scanning area. When this percentage is greater than or equal to the density threshold D_thres, the initial target straight line is judged to be a valid straight line. When the percentage is less than the density threshold, the initial target straight line is judged to be an invalid straight line, and it is determined that there is no stop line in the road at this moment.
17) The image coordinates of the valid initial target straight line are extracted, the stop line is identified at the coordinates, and the stop line position is displayed in the original image.
Example 3:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 9) an interference point is a pixel point whose surrounding neighborhood contains fewer effective points than the threshold N_e.
Example 4:
the main steps of this embodiment are the same as those of embodiment 2, wherein step 11) specifically includes the following steps:
11.1) Pop a seed point from the stack to obtain its coordinate information.
11.2) Traverse a neighborhood of a certain size around the seed point; if valid points exist in the neighborhood, push all valid points in the neighborhood onto the stack and mark them.
11.3) repeating the steps 11.1) and 11.2) until the stack is empty, and obtaining a region growing result graph consisting of effective points containing lane information.
Example 5:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 13) the straight line information in the straight line set is the coordinate information of the two end points of each straight line in the set. The length of each straight line segment and its angle with the positive x direction are calculated from the coordinate information, and the initial target straight line l_init is then screened according to a straight-line length threshold L_thres and an angle threshold θ_thres. l_init is the straight line in the set that satisfies the angle requirement and has the longest length.
Example 6:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 14) the length and width of the initial scanning area are both greater than the range of variation of the x and y coordinates of the initial target straight line, and the initial scanning area is strip-shaped.
Example 7:
The main steps of this embodiment are the same as those of embodiment 2, except that in step 15) the initial scanning area is traversed to find the valid points at its two ends; the length of the final scanning area is determined from the difference of the x coordinates of these two end points, and, together with the slope of the initial target straight line, the set pixel width and the midpoint coordinate of the initial target straight line, this determines the shape, size and position of the final scanning area.
Claims (6)
1. A road stop line detection method based on monocular vision is characterized by comprising the following steps:
1) carrying out gray processing on image information acquired by the vehicle-mounted monocular camera;
2) setting an ROI according to the road traffic line distribution characteristics in the image;
3) smoothing the gray level image in the ROI by Gaussian filtering;
4) carrying out gray stretching on the image in the ROI to enhance the contrast of the image;
5) calculating the gray gradient values of all pixel points within the ROI in the x and y directions and the gradient direction angle θ(i,j); the pixel point in the i-th column and j-th row of the image matrix is I(i,j), and A is a 3x3 matrix; G_x(i,j) is the gray gradient value of pixel point I in the x direction, G_y(i,j) is the gray gradient value of pixel point I in the y direction, and g(i,j) is the gray value of pixel point I;
G_x(i,j) = (g(i+1,j-1) + g(i+1,j) + g(i+1,j+1)) - (g(i-1,j-1) + g(i-1,j) + g(i-1,j+1))   (2)
G_y(i,j) = (g(i-1,j+1) + g(i,j+1) + g(i+1,j+1)) - (g(i-1,j-1) + g(i,j-1) + g(i+1,j-1))   (3)
in the formula, θ(i,j) has a value range of [0°, 180°];
6) Calculating a gradient mean value and a gray level mean value in the ROI;
in the formula, G_yaver is the mean gradient in the y direction, g_aver is the gray-level mean, r is the number of rows of pixels in the ROI, and c is the number of columns of pixels in the ROI;
7) determining the y-direction gradient judgment coefficient G_yjudge, the gray-scale judgment coefficient g_judge, and the gradient angle threshold θ_judge according to the variation law of the gradient and the gray level within the ROI; wherein 90° ≥ θ_judge ≥ 0°;
8) judging the validity of the pixel points and clearing the invalid points to obtain an initial effective gradient map; wherein an initial effective point has the following characteristics:
g(i,j) ≥ g_aver · g_judge
G_y(i,j) ≥ G_yaver · G_yjudge
(180° - θ_judge) ≥ θ(i,j) ≥ θ_judge
9) traversing the initial effective gradient map, judging whether the number of initial effective points in each initial effective point's neighborhood is greater than or equal to the interference-point threshold N_e, and removing the interference points to obtain a region-growing source map;
10) scanning a region growing source diagram from bottom to top according to the distribution characteristics of road traffic lines to obtain a region growing initial seed point set;
11) according to the seed coordinates in the initial seed set, carrying out region growth in a growth source graph to obtain a region growth result graph consisting of effective points containing lane information;
12) Hough transformation is carried out on the region growing result graph, and a straight line set in the region growing result graph is obtained;
13) analyzing and screening the straight line set to obtain an initial target straight line meeting the requirement;
14) determining an initial scanning area according to the initial target straight line;
15) scanning the initial scanning area, and obtaining a final scanning area according to a scanning result;
16) scanning the number of effective points in the final scanning area, and calculating the percentage of effective points among the total number of pixel points in the final scanning area; when the percentage of effective points in the final scanning area is greater than or equal to the density threshold D_thres, judging the initial target straight line to be a valid straight line; when the percentage of effective points in the final scanning area is less than the density threshold, judging the initial target straight line to be an invalid straight line and determining that there is no stop line in the road at this moment;
17) the image coordinates of the valid initial target straight line are extracted, the stop line is identified at the coordinates, and the stop line position is displayed in the original image.
2. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 9), an interference point is a pixel point whose surrounding neighborhood contains fewer effective points than the threshold N_e.
3. The monocular vision-based road stop line detecting method according to claim 1 or 2, wherein: step 11) comprises the following steps:
11.1) popping the seed points to obtain coordinate information of the seed points;
11.2) traversing a neighborhood with a certain size around the seed point, if valid points exist in the neighborhood, stacking all the valid points in the neighborhood, and marking the valid points;
11.3) repeating the steps 11.1) and 11.2) until the stack is empty, and obtaining a region growing result graph consisting of effective points containing lane information.
4. The monocular vision-based road stop line detecting method according to claim 1 or 3, wherein: in step 13), the straight line information in the straight line set is the coordinate information of the two end points of each straight line in the set; the length of each straight line segment and its angle with the positive x direction are calculated from the coordinate information, and the initial target straight line l_init is then screened according to a straight-line length threshold L_thres and an angle threshold θ_thres; l_init is the straight line in the set that satisfies the angle requirement and has the longest length.
5. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 14), the length and width of the initial scanning area are both greater than the range of variation of the x and y coordinates of the initial target straight line, and the initial scanning area is strip-shaped.
6. The monocular vision-based road stop line detecting method according to claim 1, wherein: in step 15), the initial scanning area is traversed to find the valid points at its two ends; the length of the final scanning area is determined from the difference of the x coordinates of these two end points, and, together with the slope of the initial target straight line, the set pixel width and the midpoint coordinate of the initial target straight line, this determines the shape, size and position of the final scanning area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911137093.1A CN111079541B (en) | 2019-11-19 | 2019-11-19 | Road stop line detection method based on monocular vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911137093.1A CN111079541B (en) | 2019-11-19 | 2019-11-19 | Road stop line detection method based on monocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111079541A true CN111079541A (en) | 2020-04-28 |
CN111079541B CN111079541B (en) | 2022-03-08 |
Family
ID=70311069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911137093.1A Active CN111079541B (en) | 2019-11-19 | 2019-11-19 | Road stop line detection method based on monocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111079541B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112330604A (en) * | 2020-10-19 | 2021-02-05 | 香港理工大学深圳研究院 | Method for generating vectorized road model from point cloud data |
CN112712731A (en) * | 2020-12-21 | 2021-04-27 | 北京百度网讯科技有限公司 | Image processing method, device and system, road side equipment and cloud control platform |
CN113091693A (en) * | 2021-04-09 | 2021-07-09 | 天津大学 | Monocular vision long-range distance measurement method based on image super-resolution technology |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130051681A (en) * | 2011-11-10 | 2013-05-21 | 한국전자통신연구원 | System and method for recognizing road sign |
CN104504364A (en) * | 2014-11-23 | 2015-04-08 | 北京联合大学 | Real-time stop line recognition and distance measurement method based on temporal-spatial correlation |
CN105160309A (en) * | 2015-08-24 | 2015-12-16 | 北京工业大学 | Three-lane detection method based on image morphological segmentation and region growing |
CN105740827A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection and ranging algorithm on the basis of quick sign communication |
CN105740828A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection method based on quick sign communication |
CN105740832A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection and distance measurement algorithm applied to intelligent drive |
CN105740831A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection method applied to intelligent drive |
CN106250816A (en) * | 2016-07-19 | 2016-12-21 | 武汉依迅电子信息技术有限公司 | A kind of Lane detection method and system based on dual camera |
CN106354135A (en) * | 2016-09-19 | 2017-01-25 | 武汉依迅电子信息技术有限公司 | Lane keeping system and method based on Beidou high-precision positioning |
CN106503678A (en) * | 2016-10-27 | 2017-03-15 | 厦门大学 | Roadmarking automatic detection and sorting technique based on mobile laser scanning point cloud |
CN106529505A (en) * | 2016-12-05 | 2017-03-22 | 惠州华阳通用电子有限公司 | Image-vision-based lane line detection method |
KR20170052234A (en) * | 2015-11-04 | 2017-05-12 | 현대모비스 주식회사 | Method of crosswalk detection and location estimation |
CN108805060A (en) * | 2018-05-29 | 2018-11-13 | 杭州视氪科技有限公司 | A kind of zebra line style crossing detection method |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130051681A (en) * | 2011-11-10 | 2013-05-21 | 한국전자통신연구원 | System and method for recognizing road sign |
CN104504364A (en) * | 2014-11-23 | 2015-04-08 | 北京联合大学 | Real-time stop line recognition and distance measurement method based on temporal-spatial correlation |
CN105160309A (en) * | 2015-08-24 | 2015-12-16 | 北京工业大学 | Three-lane detection method based on image morphological segmentation and region growing |
KR20170052234A (en) * | 2015-11-04 | 2017-05-12 | 현대모비스 주식회사 | Method of crosswalk detection and location estimation |
CN105740831A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection method applied to intelligent drive |
CN105740832A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection and distance measurement algorithm applied to intelligent drive |
CN105740828A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection method based on quick sign communication |
CN105740827A (en) * | 2016-02-02 | 2016-07-06 | 大连楼兰科技股份有限公司 | Stop line detection and ranging algorithm on the basis of quick sign communication |
CN106250816A (en) * | 2016-07-19 | 2016-12-21 | 武汉依迅电子信息技术有限公司 | A kind of Lane detection method and system based on dual camera |
CN106354135A (en) * | 2016-09-19 | 2017-01-25 | 武汉依迅电子信息技术有限公司 | Lane keeping system and method based on Beidou high-precision positioning |
CN106503678A (en) * | 2016-10-27 | 2017-03-15 | 厦门大学 | Roadmarking automatic detection and sorting technique based on mobile laser scanning point cloud |
CN106529505A (en) * | 2016-12-05 | 2017-03-22 | 惠州华阳通用电子有限公司 | Image-vision-based lane line detection method |
CN108805060A (en) * | 2018-05-29 | 2018-11-13 | 杭州视氪科技有限公司 | A kind of zebra line style crossing detection method |
Non-Patent Citations (9)
Title |
---|
ASHWIN ARUNMOZHI et al.: "Stop Sign and Stop Line Detection and Distance Calculation for Autonomous Vehicle Control", 2018 IEEE International Conference on Electro/Information Technology (EIT) * |
CHENGMING YE et al.: "Semi-Automated Generation of Road Transition Lines Using Mobile Laser Scanning Data", IEEE Transactions on Intelligent Transportation Systems * |
SERGIU NEDEVSCHI et al.: "Intersection Representation Enhancement by Sensorial Data and Digital Map Alignment", Proceedings of the 2010 IEEE 6th International Conference on Intelligent Computer Communication and Processing * |
TIBERIU MARITA et al.: "Stop-line Detection and Localization Method for Intersection Scenarios", 2011 IEEE 7th International Conference on Intelligent Computer Communication and Processing * |
介炫惠: "Research on Detection Algorithms for Road Traffic Markings", China Master's Theses Full-text Database, Information Science and Technology Series * |
王云建 et al.: "A Fast Lane Line Detection Algorithm Based on DM6446", Journal of Hangzhou Dianzi University * |
谢锦 et al.: "Crosswalk and Stop Line Detection Based on Directional Edge Matching", Computer Engineering * |
郑永荣 et al.: "Real-time Intersection Localization Method for Intelligent Vehicles Based on Monocular Vision", Computer Engineering * |
钟鹏飞: "Research on Unstructured Road Recognition and Obstacle Detection Based on Machine Vision", China Master's Theses Full-text Database, Agricultural Science and Technology Series * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112330604A (en) * | 2020-10-19 | 2021-02-05 | 香港理工大学深圳研究院 | Method for generating vectorized road model from point cloud data |
CN112712731A (en) * | 2020-12-21 | 2021-04-27 | 北京百度网讯科技有限公司 | Image processing method, device and system, road side equipment and cloud control platform |
CN113091693A (en) * | 2021-04-09 | 2021-07-09 | 天津大学 | Monocular vision long-range distance measurement method based on image super-resolution technology |
Also Published As
Publication number | Publication date |
---|---|
CN111079541B (en) | 2022-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105678285B (en) | A kind of adaptive road birds-eye view transform method and road track detection method | |
CN109784344B (en) | Image non-target filtering method for ground plane identification recognition | |
CN104766058B (en) | A kind of method and apparatus for obtaining lane line | |
CN111079541B (en) | Road stop line detection method based on monocular vision | |
CN110210451B (en) | Zebra crossing detection method | |
CN104899554A (en) | Vehicle ranging method based on monocular vision | |
CN108921813B (en) | Unmanned aerial vehicle detection bridge structure crack identification method based on machine vision | |
CN108052904B (en) | Method and device for acquiring lane line | |
CN113221861B (en) | Multi-lane line detection method, device and detection equipment | |
CN106407924A (en) | Binocular road identifying and detecting method based on pavement characteristics | |
CN108171695A (en) | A kind of express highway pavement detection method based on image procossing | |
CN109635737A (en) | Automobile navigation localization method is assisted based on pavement marker line visual identity | |
CN104700072A (en) | Lane line historical frame recognition method | |
EP1796042B1 (en) | Detection apparatus and method | |
CN112949482A (en) | Non-contact type rail sleeper relative displacement real-time measurement method based on deep learning and visual positioning | |
CN110770741B (en) | Lane line identification method and device and vehicle | |
CN117078717A (en) | Road vehicle track extraction method based on unmanned plane monocular camera | |
CN114724119B (en) | Lane line extraction method, lane line detection device, and storage medium | |
CN113239733A (en) | Multi-lane line detection method | |
CN110733416B (en) | Lane departure early warning method based on inverse perspective transformation | |
CN109800641B (en) | Lane line detection method based on threshold value self-adaptive binarization and connected domain analysis | |
CN108765456A (en) | Method for tracking target, system based on linear edge feature | |
WO2019149213A1 (en) | Image-based road cone recognition method and apparatus, storage medium, and vehicle | |
CN107066985B (en) | Intersection zebra crossing detection method based on rapid Hough transform | |
JP5327241B2 (en) | Object identification device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |