
CN106778668B - A robust lane line detection method combining RANSAC and CNN - Google Patents

A robust lane line detection method combining RANSAC and CNN - Download PDF

Info

Publication number
CN106778668B
CN106778668B (application CN201611254172.7A)
Authority
CN
China
Prior art keywords
image
point
lane line
cnn
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611254172.7A
Other languages
Chinese (zh)
Other versions
CN106778668A (en)
Inventor
陈海沯
陈从华
谢超
叶德焰
任赋
王治家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ming Ming (xiamen) Technology Co Ltd
Original Assignee
Ming Ming (xiamen) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ming Ming (xiamen) Technology Co Ltd
Priority to CN201611254172.7A
Publication of CN106778668A
Application granted
Publication of CN106778668B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A robust lane line detection method combining RANSAC and CNN, comprising the following steps: S1, acquire an original image I from an image acquisition device and apply Gaussian smoothing to the original image I to obtain a smoothed image I0; S2, perform edge extraction on the smoothed image I0 to obtain an edge feature map I1; S3, perform image denoising on the edge feature map I1 to obtain a denoised image I2; S4, set a region of interest on the denoised image I2 to obtain an image of interest I3; S5, divide the edge feature points of image I3 into common points, left points and right points; S6, detect lane lines using the RANSAC algorithm; when the number of detected lane lines is greater than 3, or |x_t,upper − x_t−1,upper| > T_upper, or |x_t,lower − x_t−1,lower| > T_lower, or |P_t,vp(x,y) − P_t−1,vp(x,y)| > T_vp, go to S7, otherwise continue iterating up to the maximum number of iterations Imax to obtain the final lane lines; S7, first find candidate lane lines with the CNN, then apply step S6 again to the CNN-processed image containing the candidate lane lines.

Description

A robust lane line detection method combining RANSAC and CNN
Technical field
The invention belongs to the field of image processing and neural network technology, and specifically relates to a robust lane line detection method that combines a CNN (convolutional neural network) with RANSAC (random sample consensus).
Background technique
Analysis of a large number of traffic accidents shows that the type of accident causing the most casualties and the heaviest economic losses is the rear-end collision, and rear-end collisions caused by arbitrary lane changes account for a large share of them. As a safeguard of driving safety, an automotive driver assistance system can warn the driver when the vehicle is about to collide or is drifting out of its lane, thereby avoiding accidents. Accurate lane line localization and recognition is an important component of driver assistance functions such as lane departure warning and lane change assistance. Prior-art schemes are typically based on edge detection combined with RANSAC, or on a combination of RANSAC and the Hough transform. When the interfering noise points in a complex scene increase, RANSAC cannot fit the road accurately, and as the number of iterations increases it can no longer reach real-time processing speed. The above schemes therefore become unreliable when the road scene is complex and cannot meet the application requirements of the market.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a robust lane line detection method combining RANSAC and CNN. The technical solution is as follows:
A robust lane line detection method combining RANSAC and CNN, characterized by comprising the following steps:
S1, acquire an original image I from an image acquisition device, and apply Gaussian smoothing to the original image I to obtain a smoothed image I0;
S2, perform edge extraction on the smoothed image I0 to obtain an edge feature map I1;
S3, perform image denoising on the edge feature map I1 to obtain a denoised image I2;
S4, set a region of interest on the denoised image I2 to obtain an image of interest I3;
S5, divide the edge feature points of image I3: set the dividing parameters Vmin, Vcommon and Uvanish; when the ordinate of an edge point lies between Vmin and Vcommon it is classified as a common point, when the ordinate of an edge point is less than Uvanish it is classified as a left point, and the remaining points are classified as right points;
S6, detect lane lines using the RANSAC algorithm. The detailed process is: S61, construct the hyperbolic lane line model M, in which E_u = f/d_u, E_v = f/d_v, f is the camera focal length, d_u and d_v are the pixel height and width, and z0 and θ are estimated from the camera; assuming (u_L, v_L) is the set of left points and common points and (u_R, v_R) is the set of right points and common points, the model M reduces to a linear form in the coefficient vector a = (a1, a2, a3, a4)^T, from which the model parameters are recovered. S62, set the maximum number of iterations Imax; in each iteration, randomly select a point set S from the data set P, where S contains N points (N ≥ 4), and use it to estimate the parameters a of the hyperbolic model M; the data set P is the union of (u_L, v_L) and (u_R, v_R), where (u_L, v_L) is the set of left points and common points and (u_R, v_R) is the set of right points and common points. S63, use the instantiated model M to evaluate the data points S* in P that do not belong to the point set S; if the error e_f between S* and M is less than the error threshold e_t, add S* to a point set Si called the consensus set. S64, if the number of points in Si is greater than the consensus-set size threshold d, re-estimate the model M using the set Si, the evaluation criterion being the accumulated error e_f over the points in Si. S65, compare the models M obtained in two successive iterations and retain the model with the smaller error; when the errors of both models are less than the threshold T_e, retain both sets of model parameters and increment the model counter by one, where the number of models is the number of lane lines. S66, when the number of detected lane lines is greater than 3, or |x_t,upper − x_t−1,upper| > T_upper, or |x_t,lower − x_t−1,lower| > T_lower, or |P_t,vp(x,y) − P_t−1,vp(x,y)| > T_vp, where x_t,upper, x_t−1,upper, x_t,lower and x_t−1,lower denote the x coordinates of the top and bottom points of the lane lines in frames t and t−1, T_upper and T_lower are thresholds on the absolute difference of the top and bottom points between consecutive frames, P_t,vp(x,y) is the vanishing point position of frame t, P_t−1,vp(x,y) is the vanishing point position of frame t−1, and T_vp is the threshold on the difference of the vanishing points between consecutive frames, interrupt the RANSAC lane line detection and go to step S7; otherwise continue iterating up to the maximum number of iterations Imax to obtain the final lane lines;
S7, first find candidate lane lines with the CNN, then apply step S6 again to the CNN-processed image containing the candidate lane lines.
Further, the detailed process of step S1 is: convolve the original image I with a two-dimensional Gaussian filter function G(x, y) to obtain the smoothed image I0, where G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) and σ denotes the width of the filter.
Further, the detailed process of step S2 is: for each position b(x, y) of the smoothed image I0, compare the pixel at that position with the pixel b(x−m, y) to its left and the pixel b(x+m, y) to its right: B+m(x, y) = b(x, y) − b(x+m, y), B−m(x, y) = b(x, y) − b(x−m, y), where the distance m ≥ 1; a threshold T is set, and the value of the edge image I1 is obtained by thresholding B+m(x, y) and B−m(x, y) against T.
Further, step S3 performs denoising with sliding windows. The detailed process is: set two small sliding windows, referred to as the inner window and the outer window; the two windows act on the same pixel neighbourhood, but the outer window is 1.5% taller and wider than the inner window; slide the two windows over the entire edge map and compare the sums of the pixel values inside the two windows; if the two sums are equal, the pixels inside the window are judged to be isolated noise and are set to zero.
Further, the detailed process of step S4 is: under the image-plane rectangular coordinate system, set four parameters, namely the horizontal-axis maximum Xhigh, the horizontal-axis minimum Xlow, the vertical-axis maximum Yhigh and the vertical-axis minimum Ylow; if a position (x, y) in image I2 satisfies Xlow ≤ x ≤ Xhigh and Ylow ≤ y ≤ Yhigh, it is classified as belonging to the region of interest, otherwise it is classified as a region of no interest.
Further, the error e_f in step S6 is computed using the aromatic distance; the distance of a point (u, v) is computed from the quantities k1, k2, k3 and k4, where k2 = E_u·E_v·k·z0, k3 = E_v·θ − v and k4 = u + E_u·ψ; in k1, μ = −1 when the point belongs to the right point set (u_R, v_R) and μ = +1 when it belongs to the left point set (u_L, v_L).
Further, the detailed process of selecting candidate lane lines with the CNN in step S7 is: first arrange the detection image into a long line in column-major order, then input it into the trained convolutional neural network structure; the network processes each pixel of the long line, the MLP then outputs a resulting long line, and finally the output is rearranged in column-major order into a 100 × 15 image, where the 100 × 15 image is the image of the candidate lane lines.
Further, the convolutional neural network structure comprises 2 down-sampling layers, 3 convolutional layers and 1 MLP, wherein the kernel sizes of the 2 down-sampling layers are 8 × 2 and 4 × 2 respectively, and the MLP comprises 3 fully connected layers.
Further, the image acquisition device is a front camera or a driving recorder mounted on the vehicle.
By adopting the above technical solution, the invention has the following beneficial effects: the invention enhances the accuracy of lane line detection. The CNN can extract high-level features of the processed image, and the extracted features are not affected by noise, illumination and the like, while combining RANSAC with the CNN makes the detection algorithm more robust, so that lane lines can still be detected effectively under complex road conditions such as the appearance of fences or enclosing walls, or abrupt changes in illumination. The invention can therefore detect lane lines quickly and reliably and ensure the driving safety of the vehicle, while its hardware requirements are modest and its manufacturing cost is low, which is favorable for market promotion.
Brief description of the drawings
Fig. 1 shows a flow chart of the invention;
Fig. 2 shows a schematic diagram of the hat-shaped kernel structure;
Fig. 3 shows a schematic diagram of the sliding-window operator for noise removal;
Fig. 4 shows a schematic diagram of setting the region of interest;
Fig. 5 shows a schematic diagram of the CNN structure;
Fig. 6(a) is the original input image;
Fig. 6(b) is the result after edge detection;
Fig. 6(c) is the result of RANSAC detection alone;
Fig. 6(d) is the detection result of Fig. 6(c) displayed on the original image;
Fig. 6(e) is the result of processing Fig. 6(b) with the CNN;
Fig. 6(f) is the result of applying RANSAC to Fig. 6(e);
Fig. 6(g) is the detection result of Fig. 6(f) displayed on the original image.
Specific embodiment
To further illustrate the embodiments, the present invention is provided with accompanying drawings. These drawings are part of the disclosure of the invention and mainly serve to illustrate the embodiments; together with the associated description in the specification, they explain the operating principles of the embodiments. With reference to these contents, a person of ordinary skill in the art will understand other possible embodiments and advantages of the present invention. Components in the figures are not drawn to scale, and similar reference numerals are conventionally used to denote similar components.
The present invention is now further described in conjunction with the drawings and specific embodiments.
Referring to Figs. 1-5, the detailed process steps of the invention are described. The steps used by the present invention are as follows:
1.1. First, an original image I is acquired from an image acquisition device (for example, a front camera or a driving recorder mounted on the vehicle), and Gaussian smoothing is applied to the image I to obtain a smoothed image I0. The two-dimensional Gaussian filter function is G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), where the parameter σ denotes the width of the filter: the larger σ is, the wider the frequency band of the filter and the better the smoothing. In the present invention, σ is tuned through experiments on actually acquired image data so that a balance is struck between over-smoothing and under-smoothing the image. The original image I is convolved with G(x, y) to obtain the smoothed image I0: I0(x, y) = G(x, y) * I(x, y), where * is the convolution operator.
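As an illustration of step 1.1, the following is a minimal sketch using OpenCV; the kernel size and σ value are assumptions chosen for illustration, not values prescribed by this description.

```python
import cv2

def gaussian_smooth(image, sigma=1.5, ksize=5):
    """Step 1.1 sketch: convolve the input image with a 2-D Gaussian filter
    G(x, y) to obtain the smoothed image I0.
    sigma and ksize are illustrative values, not taken from the patent."""
    return cv2.GaussianBlur(image, (ksize, ksize), sigma)

# usage sketch: I = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE); I0 = gaussian_smooth(I)
```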
1.2. Edge extraction is performed on the smoothed image I0. In this embodiment, edge extraction convolves the smoothed image with a hat-shaped kernel (as shown in Fig. 2). The specific processing is as follows: for each position b(x, y) of the smoothed image I0, the pixel at that position is compared with the pixel b(x−m, y) to its left and the pixel b(x+m, y) to its right, where the distance m ≥ 1:
B+m(x, y) = b(x, y) − b(x+m, y)
B−m(x, y) = b(x, y) − b(x−m, y)    (2)
A threshold T is set, and the value of the edge image I1 is obtained by thresholding B+m(x, y) and B−m(x, y) against T. The edge extraction process thus ends and the edge feature map I1 is obtained. Edge extraction may, however, also be carried out by other methods without departing from the spirit and scope of the present invention.
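A minimal sketch of the comparison in step 1.2 follows. The decision rule used here, that a pixel is kept only when both B+m and B−m exceed T, as well as the values of m and T, are assumptions for illustration; the description above only states that B+m and B−m are thresholded against T.

```python
import numpy as np

def hat_kernel_edges(I0, m=3, T=20):
    """Step 1.2 sketch: compare each pixel with its horizontal neighbours at
    distance m and threshold the differences to obtain the edge map I1.
    m, T and the 'both differences exceed T' rule are illustrative assumptions."""
    I0 = I0.astype(np.int32)
    B_plus = np.zeros_like(I0)
    B_minus = np.zeros_like(I0)
    B_plus[:, :-m] = I0[:, :-m] - I0[:, m:]   # B+m(x, y) = b(x, y) - b(x+m, y)
    B_minus[:, m:] = I0[:, m:] - I0[:, :-m]   # B-m(x, y) = b(x, y) - b(x-m, y)
    return ((B_plus > T) & (B_minus > T)).astype(np.uint8)
```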
1.3. Image denoising is performed on the edge feature map obtained above. The denoising first finds isolated regions and then sets the isolated regions to zero. In this embodiment, the denoising is carried out with a sliding-window operator, as shown in Fig. 3. The specific operation is: set two small sliding windows, referred to as the inner window and the outer window; the two windows act on the same pixel neighbourhood, but the height h1 and width w1 of the outer window are 1.5% larger than the height h2 and width w2 of the inner window. The two windows are slid over the entire edge map and the sums of the pixel values inside the two windows are compared; if the two sums are equal, the pixels inside the window are judged to be isolated noise and are set to zero. To improve computational efficiency, the integral image of the edge map is used to compute the window sums. After denoising, the image I2 is obtained. Image denoising may, however, also be carried out by other methods without departing from the spirit and scope of the present invention.
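A minimal sketch of the step 1.3 operator, assuming a binary edge map: the window sizes, the one-pixel margin standing in for the slightly larger outer window, and the stride are illustrative assumptions; only the integral-image window sums and the equal-sum test come from the description.

```python
import numpy as np

def remove_isolated_noise(I1, h2=8, w2=8, margin=1):
    """Step 1.3 sketch: concentric inner/outer windows slide over the edge
    map; when the outer-window sum equals the inner-window sum, the edge
    pixels inside are isolated noise and are set to zero.
    h2, w2, margin and the window stride are illustrative assumptions."""
    H, W = I1.shape
    I2 = I1.copy()
    # integral image with a zero border gives O(1) window sums
    ii = np.pad(I1, ((1, 0), (1, 0)), mode="constant").cumsum(0).cumsum(1)

    def box_sum(y0, x0, y1, x1):              # sum of I1[y0:y1, x0:x1]
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    for y in range(margin, H - h2 - margin, h2):
        for x in range(margin, W - w2 - margin, w2):
            inner = box_sum(y, x, y + h2, x + w2)
            outer = box_sum(y - margin, x - margin,
                            y + h2 + margin, x + w2 + margin)
            if inner > 0 and inner == outer:   # nothing in the outer ring
                I2[y:y + h2, x:x + w2] = 0
    return I2
```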
1.4. A region of interest (ROI) is set for the denoised image I2. Under the image-plane rectangular coordinate system, four parameters are set: the horizontal-axis maximum Xhigh, the horizontal-axis minimum Xlow, the vertical-axis maximum Yhigh and the vertical-axis minimum Ylow; these four parameters are obtained by analysing actually acquired image data. A position (x, y) in I2 satisfying Xlow ≤ x ≤ Xhigh and Ylow ≤ y ≤ Yhigh is classified as belonging to the region of interest; the remaining positions are classified as regions of no interest and do not enter the subsequent processing steps. The image of interest is I3. The region of interest may, however, also be set by other methods without departing from the spirit and scope of the present invention.
1.5. Edge feature point division is carried out on image I3. As shown in Fig. 4, the dividing parameters Vmin, Vcommon and Uvanish are set on the vertical axis. In the image, an edge point whose ordinate lies between Vmin and Vcommon is classified as a common point; an edge point whose ordinate is less than Uvanish is classified as a left point; the remaining points are classified as right points.
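A minimal sketch of steps 1.4 and 1.5 combined, following the classification rule exactly as written above (ordinates compared with Vmin, Vcommon and Uvanish); the parameter names mirror the description, while their concrete values would come from analysing the captured image data.

```python
import numpy as np

def roi_and_split_points(I2, x_low, x_high, y_low, y_high,
                         v_min, v_common, u_vanish):
    """Steps 1.4-1.5 sketch: keep only edge pixels inside the rectangular
    ROI, then split them into common / left / right point sets by ordinate.
    All threshold parameters are inputs; no values are prescribed here."""
    I3 = np.zeros_like(I2)
    I3[y_low:y_high + 1, x_low:x_high + 1] = I2[y_low:y_high + 1,
                                                x_low:x_high + 1]
    vs, us = np.nonzero(I3)                   # edge point coordinates (v, u)
    common = (vs >= v_min) & (vs <= v_common)
    left = ~common & (vs < u_vanish)
    right = ~(common | left)
    return I3, (us[common], vs[common]), (us[left], vs[left]), (us[right], vs[right])
```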
1.6. Lane lines are detected using the RANSAC algorithm. The specific steps are as follows:
1.6.1. A hyperbolic lane line model is constructed (formula (4)), in which E_u = f/d_u, E_v = f/d_v, f is the camera focal length, d_u and d_v are the pixel height and width, and z0 and θ are estimated from the camera. Assuming (u_L, v_L) is the set of left points and common points and (u_R, v_R) is the set of right points and common points, formula (4) simplifies to a linear form (formula (5)) in the coefficient vector a = (a1, a2, a3, a4)^T; after a has been solved, the hyperbolic model parameters can be recovered from it according to formula (6).
1.6.2. The maximum number of iterations Imax is set. In each iteration, a point set S is randomly selected from the data set P; S contains N points, with N ≥ 4, and is used to estimate the parameters a of the hyperbolic model M. The data set P is the union of (u_L, v_L) and (u_R, v_R), where (u_L, v_L) is the set of left points and common points and (u_R, v_R) is the set of right points and common points.
1.6.3. The instantiated model M is used to evaluate the data points S* in P that do not belong to the point set S; if the value of the error function e_f between S* and M is less than the error threshold e_t, S* is added to a point set Si called the consensus set.
1.6.4. If the number of points in Si is greater than the consensus-set size threshold d, the model M is re-estimated using the set Si; the evaluation criterion is the accumulated error e_f over the points in Si.
1.6.5. The models M obtained in two successive iterations are compared and the model with the smaller error is retained; when the errors of both models are less than the threshold T_e, both sets of model parameters are retained and the model counter is incremented by one, where the number of models is the number of lane lines. The error e_f is computed using the aromatic distance. For a point (u, v), the distance is computed from the quantities k1, k2, k3 and k4 (formulas (11)-(14)), where
k2 = E_u·E_v·k·z0    (12)
k3 = E_v·θ − v    (13)
k4 = u + E_u·ψ    (14)
and, in k1, μ = −1 when the point belongs to the right point set (u_R, v_R) and μ = +1 when it belongs to the left point set (u_L, v_L). After the RANSAC iterative fitting, the hyperbolic model parameters a, i.e. the model parameters of the lane line, are obtained.
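The RANSAC loop of steps 1.6.2-1.6.5 can be sketched as below. The callables fit_model and point_error stand in for solving the linear system of formula (5) and for the distance of formulas (11)-(14), whose exact expressions are carried by the figures of the original document and are not reproduced here; the numeric defaults are illustrative assumptions.

```python
import numpy as np

def ransac_lane_fit(P, fit_model, point_error,
                    I_max=200, N=4, e_t=2.0, d=50):
    """Sketch of steps 1.6.2-1.6.5. P is an (n, 2) array of (u, v) edge
    points; fit_model(points) estimates the coefficient vector a of the
    hyperbolic model from at least N points; point_error(a, point) returns
    the error e_f of a point with respect to the model.
    I_max, e_t and d match the names used in the description; their
    default values here are assumptions."""
    best_a, best_err = None, np.inf
    for _ in range(I_max):
        idx = np.random.choice(len(P), N, replace=False)    # random sample S
        a = fit_model(P[idx])
        rest = np.delete(np.arange(len(P)), idx)            # points S* outside S
        errs = np.array([point_error(a, P[i]) for i in rest])
        Si = P[rest[errs < e_t]]                             # consensus set
        if len(Si) > d:                                      # step 1.6.4
            a = fit_model(Si)
            total = sum(point_error(a, p) for p in Si)       # accumulated e_f
            if total < best_err:                             # step 1.6.5: keep the smaller error
                best_a, best_err = a, total
    return best_a
```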
1.6.6. For lane lines extracted with RANSAC alone, the result easily becomes unreliable once the environment becomes complex, and if the extraction accuracy is improved by increasing the number of RANSAC iterations or raising the thresholds, the real-time requirement on processing speed can no longer be met even where the accuracy itself is attainable. The present invention therefore combines a CNN to enhance the accuracy of lane line extraction: the CNN can extract high-level features of the processed image, and the extracted features are not affected by noise, illumination and the like. Under which detection conditions RANSAC should stop and hand over to the CNN is, however, itself a problem. To this end, the invention makes the judgement using the parameters x_t,upper, x_t−1,upper, x_t,lower and x_t−1,lower, which denote the x coordinates of the top and bottom points of the lane lines in frames t and t−1; the parameters T_upper and T_lower, which are thresholds on the absolute difference of the top and bottom points between consecutive frames; the parameter P_t,vp(x, y), which denotes the vanishing point position of frame t; the parameter P_t−1,vp(x, y), which denotes the vanishing point position of frame t−1; and the parameter T_vp, which is the threshold on the difference of the vanishing points between consecutive frames. When the number of lane lines detected by the RANSAC algorithm is greater than 3, or |x_t,upper − x_t−1,upper| > T_upper, or |x_t,lower − x_t−1,lower| > T_lower, or |P_t,vp(x,y) − P_t−1,vp(x,y)| > T_vp, the lane lines detected in step 1.6 are abandoned and the method switches to CNN detection, i.e. step 1.7; otherwise the iterations continue up to the maximum number of iterations Imax and the final lane lines are obtained.
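A minimal sketch of the step 1.6.6 handoff test; the threshold defaults are illustrative assumptions, and the vanishing-point difference is taken here as a Euclidean distance, which is one plausible reading of |P_t,vp − P_t−1,vp|.

```python
def should_switch_to_cnn(num_lanes, x_upper_t, x_upper_prev,
                         x_lower_t, x_lower_prev, vp_t, vp_prev,
                         T_upper=30, T_lower=30, T_vp=20):
    """Step 1.6.6 sketch: abandon the RANSAC result and fall back to the CNN
    when too many lanes are detected or when the lane end points / vanishing
    point jump too much between frame t-1 and frame t.
    Threshold defaults and the Euclidean vanishing-point distance are
    illustrative assumptions. vp_t and vp_prev are (x, y) tuples."""
    vp_jump = ((vp_t[0] - vp_prev[0]) ** 2 + (vp_t[1] - vp_prev[1]) ** 2) ** 0.5
    return (num_lanes > 3
            or abs(x_upper_t - x_upper_prev) > T_upper
            or abs(x_lower_t - x_lower_prev) > T_lower
            or vp_jump > T_vp)
```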
1.7. Combining the CNN with RANSAC is, however, difficult; the key point is how to balance the time complexity of the two against the detection accuracy. The CNN has to perform multiple convolution operations to extract features; a deeper and wider network is of course better, but its time complexity is also correspondingly higher, and setting the depth and width of the network so as to balance quality and time complexity is a difficult point. To this end, the present invention designs a convolutional neural network structure whose accuracy and time complexity make it well suited to lane line detection, as shown in Fig. 5. The convolutional neural network structure comprises 2 down-sampling layers, 3 convolutional layers and 1 MLP (multilayer perceptron), arranged in the order convolutional layer 1, down-sampling layer 1, convolutional layer 2, down-sampling layer 2, convolutional layer 3 and MLP. The kernel sizes of the down-sampling layers are 8 × 2 and 4 × 2 respectively, and the MLP comprises 3 fully connected layers. The CNN input image size is 192 × 28 and the output is 100 × 15. The present invention is, however, not limited to the above convolutional neural network structure; convolutional neural network structures improved on this basis all fall within the protection scope of the present invention. In the CNN detection phase, the detection image is arranged into a long line in column-major order and input into the trained convolutional neural network; the network processes each pixel of the long line, the final MLP outputs a resulting long line, which is then rearranged in column-major order into a 100 × 15 image; this image contains the candidate lane lines found by the CNN. RANSAC is then used again to detect the lane lines and obtain the final result.
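A sketch of a network with the stated layer layout is given below in PyTorch. Only the layer order, the down-sampling kernel sizes (8 × 2 and 4 × 2), the three fully connected layers, the 192 × 28 input and the 100 × 15 output come from the description; the channel counts, convolution kernel sizes, activations, hidden sizes, the use of max pooling for down-sampling and the height/width orientation are assumptions.

```python
import torch
import torch.nn as nn

class LaneCNN(nn.Module):
    """Sketch of the Fig. 5 structure: conv1 -> pool1 (8x2) -> conv2 ->
    pool2 (4x2) -> conv3 -> 3-layer MLP, mapping a 192x28 input to a
    100x15 candidate lane-line map. Channel counts, conv kernels,
    activations and MLP widths are illustrative assumptions."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),   # conv layer 1
            nn.MaxPool2d(kernel_size=(2, 8)),                        # down-sampling layer 1 (8x2)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # conv layer 2
            nn.MaxPool2d(kernel_size=(2, 4)),                        # down-sampling layer 2 (4x2)
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # conv layer 3
        )
        self.mlp = nn.Sequential(                                    # MLP with 3 fully connected layers
            nn.Linear(64 * 7 * 6, 2048), nn.ReLU(),
            nn.Linear(2048, 2048), nn.ReLU(),
            nn.Linear(2048, 100 * 15), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, 1, 28, 192), H=28, W=192
        x = self.features(x)                   # -> (batch, 64, 7, 6)
        x = torch.flatten(x, 1)
        return self.mlp(x).view(-1, 15, 100)   # reshape to the 100x15 output image

# shape check: LaneCNN()(torch.zeros(1, 1, 28, 192)).shape == torch.Size([1, 15, 100])
```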
The present invention has good robustness and can still detect lane lines effectively in complex environments. Fig. 6 shows actual detection results of the invention. Fig. 6(a) is the original input image; the environment around the lane lines is complex and the interference is considerable. Fig. 6(b) is the result after edge detection; after edge detection there are many edges similar to the lane lines. Fig. 6(c) is the result of RANSAC detection alone; the detection result is wrong and deviates from the correct lane line position. Fig. 6(d) shows that detection result on the original image. Fig. 6(e) is the result of processing Fig. 6(b) with the CNN; the interference in the image is reduced and the lane line region stands out. Fig. 6(f) is the result of applying RANSAC to Fig. 6(e); the detection result accurately locates the lane lines. Fig. 6(g) shows the detection result of Fig. 6(f) on the original image. The actual detection results show that the present invention can accurately detect lane line positions in complex environments.
Although the present invention has been particularly shown and described in conjunction with preferred embodiments, those skilled in the art should understand that various changes in form and detail may be made to the present invention without departing from the spirit and scope of the invention as defined by the appended claims, and such changes fall within the protection scope of the present invention.

Claims (9)

1. A robust lane line detection method combining RANSAC and CNN, characterized by comprising the following steps:
S1, acquiring an original image I from an image acquisition device, and applying Gaussian smoothing to the original image I to obtain a smoothed image I0;
S2, performing edge extraction on the smoothed image I0 to obtain an edge feature map I1;
S3, performing image denoising on the edge feature map I1 to obtain a denoised image I2;
S4, setting a region of interest on the denoised image I2 to obtain an image of interest I3;
S5, dividing the edge feature points of image I3: setting dividing parameters Vmin, Vcommon and Uvanish; when the ordinate of an edge point lies between Vmin and Vcommon, classifying it as a common point; when the ordinate of an edge point is less than Uvanish, classifying it as a left point; and classifying the remaining points as right points;
S6, detecting lane lines using the RANSAC algorithm, the detailed process being: S61, constructing the hyperbolic lane line model M, in which E_u = f/d_u, E_v = f/d_v, f is the camera focal length, d_u and d_v are the pixel height and width, and z0 and θ are estimated from the camera; assuming (u_L, v_L) is the set of left points and common points and (u_R, v_R) is the set of right points and common points, the model M reduces to a linear form in the coefficient vector a = (a1, a2, a3, a4)^T, from which the model parameters are recovered; S62, setting the maximum number of iterations Imax; in each iteration, randomly selecting a point set S from the data set P, S containing N points with N ≥ 4, and using it to estimate the parameters a of the hyperbolic model M, the data set P being the union of (u_L, v_L) and (u_R, v_R), where (u_L, v_L) is the set of left points and common points and (u_R, v_R) is the set of right points and common points; S63, using the instantiated model M to evaluate the data points S* in P that do not belong to the point set S, and, if the error e_f between S* and M is less than the error threshold e_t, adding S* to a point set Si called the consensus set; S64, if the number of points in Si is greater than the consensus-set size threshold d, re-estimating the model M using the set Si, the evaluation criterion being the accumulated error e_f over the points in Si; S65, comparing the models M obtained in two successive iterations and retaining the model with the smaller error, and, when the errors of both models are less than the threshold T_e, retaining both sets of model parameters and incrementing the model counter by one, the number of models being the number of lane lines; S66, when the number of detected lane lines is greater than 3, or |x_t,upper − x_t−1,upper| > T_upper, or |x_t,lower − x_t−1,lower| > T_lower, or |P_t,vp(x,y) − P_t−1,vp(x,y)| > T_vp, where x_t,upper, x_t−1,upper, x_t,lower and x_t−1,lower denote the x coordinates of the top and bottom points of the lane lines in frames t and t−1, T_upper and T_lower are thresholds on the absolute difference of the top and bottom points between consecutive frames, P_t,vp(x,y) is the vanishing point position of frame t, P_t−1,vp(x,y) is the vanishing point position of frame t−1, and T_vp is the threshold on the difference of the vanishing points between consecutive frames, interrupting the RANSAC lane line detection and going to step S7; otherwise continuing to iterate up to the maximum number of iterations Imax to obtain the final lane lines;
S7, first finding candidate lane lines with the CNN, and then applying step S6 again to the CNN-processed image containing the candidate lane lines.
2. The method according to claim 1, characterized in that the detailed process of step S1 is: convolving the original image I with a two-dimensional Gaussian filter function G(x, y) to obtain the smoothed image I0, where G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) and σ denotes the width of the filter.
3. The method according to claim 1, characterized in that the detailed process of step S2 is: for each position b(x, y) of the smoothed image I0, comparing the pixel at that position with the pixel b(x−m, y) to its left and the pixel b(x+m, y) to its right: B+m(x, y) = b(x, y) − b(x+m, y), B−m(x, y) = b(x, y) − b(x−m, y), where the distance m ≥ 1; setting a threshold T; and obtaining the value of the edge image I1 by thresholding B+m(x, y) and B−m(x, y) against T.
4. The method according to claim 1, characterized in that step S3 performs denoising with sliding windows, the detailed process being: setting two small sliding windows, referred to as an inner window and an outer window, the two windows acting on the same pixel neighbourhood, the height and width of the outer window being 1.5% larger than the height and width of the inner window; sliding the two windows over the entire edge map and comparing the sums of the pixel values inside the two windows; and, if the two sums are equal, judging the pixels inside the window to be isolated noise and setting them to zero.
5. The method according to claim 1, characterized in that the detailed process of step S4 is: setting four parameters, namely a horizontal-axis maximum Xhigh, a horizontal-axis minimum Xlow, a vertical-axis maximum Yhigh and a vertical-axis minimum Ylow; and, if a position (x, y) in I2 satisfies Xlow ≤ x ≤ Xhigh and Ylow ≤ y ≤ Yhigh, classifying it as belonging to the region of interest, and otherwise classifying it as a region of no interest.
6. The method according to claim 1, characterized in that the error e_f in step S6 is computed using the aromatic distance; the distance of a point (u, v) is computed from the quantities k1, k2, k3 and k4, where k2 = E_u·E_v·k·z0, k3 = E_v·θ − v and k4 = u + E_u·ψ, and, in k1, μ = −1 when the point (u, v) belongs to the right point set (u_R, v_R) and μ = +1 when it belongs to the left point set (u_L, v_L).
7. The method according to claim 1, characterized in that the detailed process of selecting candidate lane lines with the CNN in step S7 is: first arranging the detection image into a long line in column-major order, then inputting it into the trained convolutional neural network structure; the network processes each pixel of the long line, the MLP then outputs a resulting long line, and the output is finally rearranged in column-major order into a 100 × 15 image, the 100 × 15 image being the image containing the candidate lane lines.
8. The method according to claim 7, characterized in that the convolutional neural network structure comprises 2 down-sampling layers, 3 convolutional layers and 1 MLP, wherein the kernel sizes of the 2 down-sampling layers are 8 × 2 and 4 × 2 respectively, and the MLP comprises 3 fully connected layers.
9. The method according to claim 1, characterized in that the image acquisition device is a front camera or a driving recorder mounted on the vehicle.
CN201611254172.7A 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN Active CN106778668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611254172.7A CN106778668B (en) 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611254172.7A CN106778668B (en) 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN

Publications (2)

Publication Number Publication Date
CN106778668A CN106778668A (en) 2017-05-31
CN106778668B true CN106778668B (en) 2019-08-09

Family

ID=58953261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611254172.7A Active CN106778668B (en) 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN

Country Status (1)

Country Link
CN (1) CN106778668B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590451A (en) * 2017-09-04 2018-01-16 中国科学院长春光学精密机械与物理研究所 A lane line detection method
CN108229386B (en) * 2017-12-29 2021-12-14 百度在线网络技术(北京)有限公司 Method, apparatus, and medium for detecting lane line
CN110348273B (en) * 2018-04-04 2022-05-24 北京四维图新科技股份有限公司 Neural network model training method and system and lane line identification method and system
CN108573242A (en) * 2018-04-26 2018-09-25 南京行车宝智能科技有限公司 A lane line detection method and device
CN108615242B (en) * 2018-05-04 2021-07-27 重庆邮电大学 High-speed guardrail tracking method
CN109580979B (en) * 2018-06-12 2021-02-09 苏州市职业大学 Vehicle speed real-time measurement method based on video processing
US10949951B2 (en) * 2018-08-23 2021-03-16 General Electric Company Patient-specific deep learning image denoising methods and systems
US10262214B1 (en) * 2018-09-05 2019-04-16 StradVision, Inc. Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same
CN112686080A (en) * 2019-10-17 2021-04-20 北京京东乾石科技有限公司 Method and device for detecting lane line
CN112216640B (en) * 2020-10-19 2021-08-06 高视科技(苏州)有限公司 Semiconductor chip positioning method and device
WO2022082574A1 (en) * 2020-10-22 2022-04-28 华为技术有限公司 Lane line detection method and apparatus
CN113033433B (en) * 2021-03-30 2024-03-15 北京斯年智驾科技有限公司 Port lane line detection method, device, system, electronic device and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592114A (en) * 2011-12-26 2012-07-18 河南工业大学 Method for extracting and recognizing lane line features of complex road conditions
CN102722705A (en) * 2012-06-12 2012-10-10 武汉大学 Method for detecting multi-lane line on basis of random sample consensus (RANSAC) algorithm
KR20140080105A (en) * 2012-12-20 2014-06-30 울산대학교 산학협력단 Method for detecting lane boundary by visual information
CN103902985A (en) * 2014-04-15 2014-07-02 安徽工程大学 High-robustness real-time lane detection algorithm based on ROI
CN105261020A (en) * 2015-10-16 2016-01-20 桂林电子科技大学 A fast lane line detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Jihun Kim, "Lane Detection System using CNN," IEMEK J. Embed. Sys. Appl., 2016-11-03, full text *
Mohamed Aly, "Real time Detection of Lane Markers in Urban Streets," IEEE, 2008-12-31, full text *
Jihun Kim, "Robust Lane Detection Based On Convolutional Neural Network and Random Sample Consensus," Springer, 2014-12-31, full text *
Gao Song (高嵩), "A lane line detection algorithm based on a hyperbola model" (一种基于双曲线模型的车道线检测算法), Journal of Xi'an Technological University (西安工业大学学报), vol. 33, no. 10, 2013-10-31, full text *

Also Published As

Publication number Publication date
CN106778668A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106778668B (en) A robust lane line detection method combining RANSAC and CNN
CN107808378B (en) Method for detecting potential defects of complex-structure casting based on vertical longitudinal and transverse line profile features
CN109117876B (en) Dense small target detection model construction method, dense small target detection model and dense small target detection method
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
CN103048329B (en) A kind of road surface crack detection method based on active contour model
CN107368792A (en) A kind of finger vein identification method and system based on wave filter and Bone Edge
CN113240623B (en) Pavement disease detection method and device
CN105447512A (en) Coarse-fine optical surface defect detection method and coarse-fine optical surface defect detection device
CN104794440B (en) A kind of false fingerprint detection method based on the multiple dimensioned LBP of more piecemeals
CN109376740A (en) A kind of water gauge reading detection method based on video
CN114067186B (en) Pedestrian detection method and device, electronic equipment and storage medium
CN104036516A (en) Camera calibration checkerboard image corner detection method based on symmetry analysis
CN102592128A (en) Method and device for detecting and processing dynamic image and display terminal
CN107492076A (en) A kind of freeway tunnel scene vehicle shadow disturbance restraining method
CN107808524A (en) A kind of intersection vehicle checking method based on unmanned plane
CN113076860B (en) Bird detection system under field scene
CN106447640A (en) Multi-focus image fusion method based on dictionary learning and rotating guided filtering and multi-focus image fusion device thereof
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
CN116740528A (en) Shadow feature-based side-scan sonar image target detection method and system
CN109543498B (en) Lane line detection method based on multitask network
CN113537211A (en) Deep learning license plate frame positioning method based on asymmetric IOU
CN104637060B (en) A kind of image partition method based on neighborhood principal component analysis-Laplce
CN115984186A (en) Fine product image anomaly detection method based on multi-resolution knowledge extraction
CN116524269A (en) Visual recognition detection system
CN101599176B (en) Method for partitioning internal layer of tubular structure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A robust lane detection method combining RANSAC and CNN

Granted publication date: 20190809

Pledgee: Xiamen Huli Sub branch of Agricultural Bank of China Co.,Ltd.

Pledgor: MINGJIAN (XIAMEN) TECHNOLOGY CO.,LTD.

Registration number: Y2024980009494