
CN106778668A - A robust lane line detection method combining RANSAC and CNN - Google Patents

A robust lane line detection method combining RANSAC and CNN Download PDF

Info

Publication number
CN106778668A
CN106778668A CN201611254172.7A CN201611254172A CN106778668A CN 106778668 A CN106778668 A CN 106778668A CN 201611254172 A CN201611254172 A CN 201611254172A CN 106778668 A CN106778668 A CN 106778668A
Authority
CN
China
Prior art keywords
image
point
lane line
cnn
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611254172.7A
Other languages
Chinese (zh)
Other versions
CN106778668B (en)
Inventor
陈海沯
陈从华
谢超
叶德焰
任赋
王治家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ming Ming (xiamen) Technology Co Ltd
Original Assignee
Ming Ming (xiamen) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ming Ming (xiamen) Technology Co Ltd filed Critical Ming Ming (xiamen) Technology Co Ltd
Priority to CN201611254172.7A priority Critical patent/CN106778668B/en
Publication of CN106778668A publication Critical patent/CN106778668A/en
Application granted granted Critical
Publication of CN106778668B publication Critical patent/CN106778668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A robust lane line detection method combining RANSAC and CNN comprises the following steps. S1: an original image I is obtained from an image acquisition device and Gaussian smoothing is applied to it to obtain a smoothed image I0. S2: edge extraction is applied to the smoothed image I0 to obtain an edge feature map I1. S3: image denoising is applied to the edge feature map I1 to obtain a denoised image I2. S4: a region of interest is set on the denoised image I2 to obtain an image of interest I3. S5: the edge feature points of image I3 are divided into common points, left points and right points. S6: lane lines are detected with the RANSAC algorithm; when the number of detected lane lines exceeds 3, or |x_t,upper - x_t-1,upper| > Tupper, or |x_t,lower - x_t-1,lower| > Tlower, or |P_t,vp(x,y) - P_t-1,vp(x,y)| > Tvp, the method goes to S7; otherwise iteration continues up to the maximum iteration count Imax to obtain the final lane lines. S7: candidate lane lines are first found with a CNN, and step S6 is then applied again to the image containing the candidate lane lines obtained by the CNN processing.

Description

A robust lane line detection method combining RANSAC and CNN
Technical field
The invention belongs to the field of image processing and neural network technology, and specifically relates to a robust lane line detection method combining a CNN (convolutional neural network) and RANSAC (random sample consensus).
Background technology
Analysis of a large number of traffic accidents shows that the accident type causing the most casualties and the heaviest economic losses is the rear-end collision, most of which are caused by random lane changes. As a driving-safety guarantee, an automotive driver assistance system can warn the driver when the vehicle is about to collide or departs from its lane, thereby avoiding accidents. Accurate lane line localization and recognition is an important component of driver assistance functions such as lane departure warning and lane change assistance. Prior-art schemes are typically based on edge detection combined with RANSAC, or with RANSAC and HOUGH. In scenes with many complex interfering noise points, RANSAC cannot accurately fit the road lines, and as the number of iterations grows it cannot reach real-time processing speed. Such schemes therefore become unreliable in complex road scenes and cannot meet the application requirements of the market.
The content of the invention
In view of the shortcomings of the prior art, the present invention proposes a robust lane line detection method combining RANSAC and CNN. The technical scheme is specifically as follows:
A robust lane line detection method combining RANSAC and CNN, characterized by comprising the following steps:
S1, obtaining an original image I from an image acquisition device, and performing Gaussian smoothing on the original image I to obtain a smoothed image I0;
S2, performing edge extraction on the smoothed image I0 to obtain an edge feature map I1;
S3, performing image denoising on the edge feature map I1 to obtain a denoised image I2;
S4, setting a region of interest on the denoised image I2 to obtain an image of interest I3;
S5, performing edge feature point division on image I3: division parameters Vmin, Vcommon and Uvanish are set; when an edge point's ordinate lies between Vmin and Vcommon it is judged a common point; when the edge point's coordinate is less than Uvanish it is judged a left point; the remaining points are judged right points;
S6, detecting lane lines with the RANSAC algorithm, the detailed process being: S61, building a hyperbolic lane line model M:
wherein Eu = f/du, Ev = f/dv, f is the camera focal length, du and dv are the pixel height and width, and z0 and θ are obtained from camera estimation; assuming (uL, vL) is the set of left and common points and (uR, vR) is the set of right and common points, the model M is simplified to
wherein a = (a1, a2, a3, a4)T are the model parameters. S62, a maximum iteration count Imax is set; in each iteration a point set S containing N points (N ≥ 4) is randomly drawn from the data set P to estimate the parameters a of the hyperbolic model M, the data set P being the union of (uL, vL) and (uR, vR). S63, the instantiated model M is used to evaluate the data points S* in P that do not belong to the point set S; if the error ef between S* and M is less than an error threshold et, S* is added to a point set Si called the consensus set. S64, if the number of points in Si exceeds a consensus-set size threshold d, the model M is re-estimated using the set Si, the evaluation criterion being the accumulated error ef over the points of Si. S65, the models M obtained in two successive iterations are compared and the model with the smaller error is retained; when both model errors are below a threshold Te, both sets of model parameters are retained and a model counter is incremented, the model count being the number of lane lines. S66, when the number of detected lane lines exceeds 3, or |x_t,upper - x_t-1,upper| > Tupper, or |x_t,lower - x_t-1,lower| > Tlower, or |P_t,vp(x,y) - P_t-1,vp(x,y)| > Tvp (wherein x_t,upper, x_t-1,upper, x_t,lower and x_t-1,lower denote the x-coordinates of the top and bottom points of the lane line in frames t and t-1, Tupper and Tlower are thresholds on the absolute difference of the top and bottom points between successive frames, P_t,vp(x,y) and P_t-1,vp(x,y) are the vanishing-point positions of frames t and t-1, and Tvp is the threshold on the vanishing-point difference between successive frames), RANSAC lane line detection is interrupted and the method goes to step S7; otherwise iteration continues up to the maximum iteration count Imax to obtain the final lane lines;
S7, first finding candidate lane lines with a CNN, and then applying step S6 again to the image containing the candidate lane lines obtained by the CNN processing.
Further, the detailed process of step S1 is: a two-dimensional Gaussian filter function G(x, y) is convolved with the original image I to obtain the smoothed image I0, where G(x, y) = (1/(2πσ^2)) exp(-(x^2 + y^2)/(2σ^2)) and σ denotes the width of the filter.
Further, the detailed process of step S2 is: for each position b(x, y) of the smoothed image I0, the pixel at that position is compared with its left neighbour b(x-m, y) and right neighbour b(x+m, y): B+m(x, y) = b(x, y) - b(x+m, y), B-m(x, y) = b(x, y) - b(x-m, y), where the distance m ≥ 1; a threshold T is set, and the value of the edge image I1 is:
Further, step S3 uses sliding-window denoising, the detailed process being: two small sliding windows, called the inner window and the outer window, are set; the two windows act on the same pixel neighbourhood, but the outer window is 1.5% taller and wider than the inner window; the two windows are slid across the whole edge map and the sums of the pixel values inside the two windows are compared; if the two window sums are equal, the pixels inside the window are judged to be isolated noise and set to zero.
Further, the detailed process of step S4 is: four parameters are set in the rectangular coordinate system of the image plane: a horizontal-axis maximum Xhigh, a horizontal-axis minimum Xlow, a vertical-axis maximum Yhigh and a vertical-axis minimum Ylow; if a position (x, y) in image I2 satisfies Xlow ≤ x ≤ Xhigh and Ylow ≤ y ≤ Yhigh it is judged to belong to the region of interest, otherwise it is judged to belong to the region of no interest.
Further, the error ef in step S6 is computed using the "aromatic distance"; the "aromatic distance" of a point (u, v) is computed by a formula wherein k2 = EuEvKz0, k3 = Evθ - vr and k4 = ur + Euψ, with μ = -1 when the point belongs to the right point set (uR, vR) and μ = +1 when it belongs to the left point set (uL, vL).
Further, the detailed process by which the CNN of step S7 selects candidate lane lines is: the detection image is first lined up row by row into one long line and input to the trained convolutional neural network structure; the network processes each pixel of the long line, the MLP then outputs a resulting long line, and the output is finally reassembled in row-major order into a 100 × 15 image, the 100 × 15 image being the image of candidate lane lines.
Further, the convolutional neural network structure comprises 2 down-sampling layers, 3 convolutional layers and 1 MLP, wherein the kernel sizes of the 2 down-sampling layers are 8 × 2 and 4 × 2 respectively, and the MLP comprises 3 fully connected layers.
Further, the image acquisition device is a front-facing camera or a driving recorder provided on the vehicle.
The present invention adopts the above technical scheme, with the beneficial effects that: the invention can increase the accuracy of lane line detection, the CNN being able to extract high-level features of the processed image that are unaffected by noise, illumination and the like, while the combination of RANSAC and CNN makes the detection algorithm more robust: lane lines can still be detected effectively when road conditions are complex, for example when fences or enclosing walls appear or illumination changes sharply. The invention can therefore detect lane lines quickly and reliably, ensuring driving safety, while its hardware requirements and manufacturing cost are low, which favours market promotion.
Brief description of the drawings
Fig. 1 shows the flow chart of the invention;
Fig. 2 shows a schematic diagram of the hat-shaped kernel structure;
Fig. 3 shows a schematic diagram of the sliding-window operator for noise removal;
Fig. 4 shows a schematic diagram of setting the region of interest;
Fig. 5 shows a schematic diagram of the CNN structure;
Fig. 6(a) is the original input image;
Fig. 6(b) is the result after edge detection;
Fig. 6(c) is the result of RANSAC detection alone;
Fig. 6(d) is the detection result of Fig. 6(c) displayed on the original image;
Fig. 6(e) is the result of Fig. 6(b) after CNN processing;
Fig. 6(f) is the RANSAC detection result on Fig. 6(e);
Fig. 6(g) is the detection result of Fig. 6(f) displayed on the original image.
Specific embodiment
To further illustrate the embodiments, the present invention is provided with accompanying drawings. These drawings are part of the disclosure of the invention and mainly serve to illustrate the embodiments; together with the associated description in the specification they explain the operating principle of the embodiments. With reference to these contents, those of ordinary skill in the art will understand other possible implementations and advantages of the present invention. Components in the figures are not drawn to scale, and similar element numbers are conventionally used to indicate similar components.
The present invention is further described below with reference to the drawings and specific embodiments.
With reference to Figs. 1-5, the specific flow of the invention is described. The steps used by the present invention are as follows:
1.1. First, an original image I is obtained from the image acquisition device (for example, a front-facing camera or driving recorder arranged on the vehicle), and Gaussian smoothing is applied to image I to obtain a smoothed image I0. The two-dimensional Gaussian filter function is G(x, y) = (1/(2πσ^2)) exp(-(x^2 + y^2)/(2σ^2)), where the parameter σ denotes the width of the filter; the larger σ is, the wider the filter's frequency band and the stronger the smoothing. In the present invention σ is tuned through experiments on actually acquired image data, so that a balance is achieved between over-smoothing and under-smoothing the image. G(x, y) is convolved with the original image I to obtain the smoothed image I0: I0(x, y) = G(x, y) * I(x, y), where * is the convolution operator.
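As an illustration of this step, the sketch below builds a normalised 2-D Gaussian kernel from the formula above and convolves it with an image by direct summation; the kernel size and the value of σ are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # G(x, y) ~ exp(-(x^2 + y^2) / (2*sigma^2)), normalised to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def smooth(image, sigma=1.5, size=5):
    # I0 = G * I, computed as a direct 2-D convolution with edge padding
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += k[dy, dx] * padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out
```

A larger σ widens the filter band and smooths more strongly, matching the trade-off described above.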
1.2. Edge extraction is performed on the smoothed image I0. In this embodiment, edge extraction convolves the smoothed image with a hat-shaped kernel (as shown in Fig. 2). The specific processing is as follows: for each position b(x, y) of the smoothed image I0, the pixel at that position is compared with its left neighbour b(x-m, y) and right neighbour b(x+m, y), where the distance m ≥ 1:
B+m(x, y) = b(x, y) - b(x+m, y)
B-m(x, y) = b(x, y) - b(x-m, y) (2)
A threshold T is set, and the value of the edge image I1 is:
The edge extraction process then ends, yielding the edge feature map I1. Edge extraction may, however, also be carried out using other methods without departing from the spirit and scope of the present invention.
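The comparison rule above can be sketched as follows. Formula (3) itself did not survive extraction in this text, so the thresholding here assumes the usual bright-stripe reading: a pixel is marked as an edge when it exceeds both its neighbours at distance m by more than T (lane markings are bright stripes on a darker road).

```python
import numpy as np

def hat_edge(smoothed, m=2, T=10.0):
    # B+m(x,y) = b(x,y) - b(x+m,y); B-m(x,y) = b(x,y) - b(x-m,y)
    b = smoothed.astype(float)
    h, w = b.shape
    Bplus = np.zeros((h, w))
    Bminus = np.zeros((h, w))
    Bplus[:, :w - m] = b[:, :w - m] - b[:, m:]   # compare with right neighbour
    Bminus[:, m:] = b[:, m:] - b[:, :w - m]      # compare with left neighbour
    # assumed reading of formula (3): edge where both differences exceed T
    return ((Bplus > T) & (Bminus > T)).astype(np.uint8)
```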
1.3. Image denoising is performed on the edge feature map obtained above. The denoising first finds isolated regions and then sets them to zero. In this embodiment a sliding-window operator is used, as shown in Fig. 3. The specific operation is: two small sliding windows, called the inner window and the outer window, are set; the two windows act on the same pixel neighbourhood, but the height h1 and width w1 of the outer window are 1.5% larger than the height h2 and width w2 of the inner window. The two windows are slid across the whole edge map and the sums of the pixel values inside the two windows are compared; if the two window sums are equal, the pixels inside the window are judged to be isolated noise and set to zero. To improve computational efficiency, the integral image of the edge map is used to compute the window sums. The denoised image I2 is obtained after denoising. Image denoising may, however, also be carried out using other methods without departing from the spirit and scope of the present invention.
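A minimal sketch of this operator follows, with the window sums taken from an integral image as described. The concrete window sizes are illustrative, and the "1.5% larger" outer window is realised here as a one-pixel border, which is roughly what 1.5% rounds to at such small window sizes.

```python
import numpy as np

def remove_isolated(edge, inner=4, border=1):
    # integral image: box(y0,x0,y1,x1) gives the sum over edge[y0:y1, x0:x1] in O(1)
    e = edge.astype(np.int64)
    ii = np.pad(e.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    def box(y0, x0, y1, x1):
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    h, w = e.shape
    out = edge.copy()
    o = inner + 2 * border  # outer window grows the inner one by `border` on each side
    for y in range(h - o + 1):
        for x in range(w - o + 1):
            outer_sum = box(y, x, y + o, x + o)
            inner_sum = box(y + border, x + border,
                            y + border + inner, x + border + inner)
            if outer_sum > 0 and outer_sum == inner_sum:
                # blob fully inside the inner window: isolated noise, zero it
                out[y + border:y + border + inner, x + border:x + border + inner] = 0
    return out
```

An edge that continues outside the outer window (like a lane line) always contributes pixels to the outer ring, so the two sums differ and it survives.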
1.4. A region of interest (ROI) is set on the denoised image I2. Four parameters are set in the rectangular coordinate system of the image plane: a horizontal-axis maximum Xhigh, a horizontal-axis minimum Xlow, a vertical-axis maximum Yhigh and a vertical-axis minimum Ylow; these four parameters are obtained by analysing actually acquired image data. For a position (x, y) in I2, if Xlow ≤ x ≤ Xhigh and Ylow ≤ y ≤ Yhigh, it is judged to belong to the region of interest; the remainder is judged to be of no interest and does not enter subsequent processing steps. The image of interest is I3. Setting the region of interest may, however, also be carried out using other methods without departing from the spirit and scope of the present invention.
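The ROI rule reduces to a rectangular mask; a minimal sketch:

```python
import numpy as np

def apply_roi(img, x_low, x_high, y_low, y_high):
    # keep pixels with Xlow <= x <= Xhigh and Ylow <= y <= Yhigh; zero the rest
    out = np.zeros_like(img)
    out[y_low:y_high + 1, x_low:x_high + 1] = img[y_low:y_high + 1, x_low:x_high + 1]
    return out
```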
1.5. Edge feature point division is performed on image I3. As shown in Fig. 4, division parameters Vmin, Vcommon and Uvanish are set on the coordinate axes. In the image, when an edge point's ordinate lies between Vmin and Vcommon it is judged a common point; when the edge point's coordinate is less than Uvanish it is judged a left point; the remaining points are judged right points.
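The division rule can be sketched as a small classifier. Note one assumption: the source compares an "ordinate" against Uvanish, but the name suggests a horizontal (u) coordinate of the vanishing point, and that reading is used here.

```python
def classify_edge_point(u, v, v_min, v_common, u_vanish):
    # common points lie between Vmin and Vcommon in the vertical direction;
    # the rest split into left/right of the assumed vanishing-point column
    if v_min <= v <= v_common:
        return "common"
    return "left" if u < u_vanish else "right"
```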
1.6. Lane lines are detected with the RANSAC algorithm; the specific steps are as follows:
1.6.1. The hyperbolic lane line model is built as:
In formula (4), Eu = f/du, Ev = f/dv, f is the camera focal length, du and dv are the pixel height and width, and z0 and θ are obtained from camera estimation. Assuming (uL, vL) is the set of left and common points and (uR, vR) is the set of right and common points, formula (4) simplifies to
where a = (a1, a2, a3, a4)T; after a is obtained, the hyperbolic model parameters can be recovered from the following equations:
1.6.2. A maximum iteration count Imax is set. In each iteration, a point set S containing N points (N ≥ 4) is randomly drawn from the data set P to estimate the parameters a of the hyperbolic model M. The data set P is the union of (uL, vL) and (uR, vR), where (uL, vL) is the set of left and common points and (uR, vR) is the set of right and common points.
1.6.3. The instantiated model M is used to evaluate the data points S* in P that do not belong to the point set S. If the value of the error function ef between S* and M is less than an error threshold et, S* is added to a point set Si called the consensus set.
1.6.4. If the number of points in Si exceeds a consensus-set size threshold d, the model M is re-estimated using the set Si, the evaluation criterion being the accumulated error ef over the points of Si.
1.6.5. The models M obtained in two successive iterations are compared and the model with the smaller error is retained; when both model errors are below a threshold Te, both sets of model parameters are retained and a model counter is incremented, the model count being the number of lane lines. The error ef is computed with the "aromatic distance" of formula (11), in which
k2 = EuEvKz0 (12)
k3 = Evθ - vr (13)
k4 = ur + Euψ (14)
and μ in k1 is -1 when the point belongs to the right point set (uR, vR) and +1 when it belongs to the left point set (uL, vL). The hyperbolic model parameters a, i.e. the model parameters of the lane line, are obtained after RANSAC iterative fitting.
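Steps 1.6.2-1.6.5 form the classical RANSAC loop. The hyperbolic model formulas did not survive extraction here, so the sketch below keeps the loop generic, with pluggable fit/error functions and a straight-line model standing in for the lane hyperbola; `n_sample`, `e_t` and `d` correspond to N, et and d above, and the comparison across iterations keeps the model with the smallest accumulated consensus error.

```python
import numpy as np

def ransac(points, fit, error, n_sample=4, max_iter=100, e_t=1.0, d=10, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_err = None, np.inf
    for _ in range(max_iter):
        sample = points[rng.choice(len(points), n_sample, replace=False)]
        model = fit(sample)                              # instantiate M from sample S
        consensus = points[error(model, points) < e_t]   # consensus set Si
        if len(consensus) > d:
            model = fit(consensus)                       # re-estimate M on Si
            total = error(model, consensus).sum()        # accumulated error over Si
            if total < best_err:
                best_model, best_err = model, total
    return best_model

# stand-in model: straight line v = a*u + b, fitted by least squares
def fit_line(pts):
    A = np.c_[pts[:, 0], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    return coef

def line_error(coef, pts):
    return np.abs(coef[0] * pts[:, 0] + coef[1] - pts[:, 1])
```

In the patent's setting, `fit` would solve for the hyperbola parameters a and `error` would be the distance of formula (11).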
1.6.6. Lane lines extracted with RANSAC alone easily become unreliable as the environment grows complex, and if the extraction accuracy is improved by increasing the RANSAC iteration count or raising the thresholds, the real-time processing requirement cannot be met even when the accuracy is reached. The present invention therefore combines a CNN to strengthen the accuracy of lane line extraction: the CNN can extract high-level features of the processed image that are unaffected by noise, illumination and the like. Under which detection conditions RANSAC should stop and hand over to the CNN is itself a question. To this end, the invention sets the parameters x_t,upper, x_t-1,upper, x_t,lower and x_t-1,lower, denoting the x-coordinates of the top and bottom points of the lane line in frames t and t-1; the parameters Tupper and Tlower, the thresholds on the absolute difference of the top and bottom points between successive frames; the parameter P_t,vp(x, y), the vanishing-point position of frame t; the parameter P_t-1,vp(x, y), the vanishing-point position of frame t-1; and the parameter Tvp, the threshold on the vanishing-point difference between successive frames. When the number of lane lines detected by the RANSAC algorithm exceeds 3, or |x_t,upper - x_t-1,upper| > Tupper, or |x_t,lower - x_t-1,lower| > Tlower, or |P_t,vp(x,y) - P_t-1,vp(x,y)| > Tvp, the lane lines detected in step 1.6 are abandoned and the method switches to CNN detection, i.e. step 1.7; otherwise iteration continues up to the maximum iteration count Imax to obtain the final lane lines.
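The hand-over test of this step can be written out directly; treating the vanishing-point difference |P_t,vp - P_t-1,vp| as a Euclidean distance is an assumption of this sketch.

```python
import math

def should_switch_to_cnn(n_lanes,
                         x_upper_t, x_upper_prev,
                         x_lower_t, x_lower_prev,
                         vp_t, vp_prev,
                         T_upper, T_lower, T_vp):
    # abandon the RANSAC result and fall back to the CNN when any condition fires
    vp_jump = math.hypot(vp_t[0] - vp_prev[0], vp_t[1] - vp_prev[1])
    return (n_lanes > 3
            or abs(x_upper_t - x_upper_prev) > T_upper
            or abs(x_lower_t - x_lower_prev) > T_lower
            or vp_jump > T_vp)
```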
1.7. Combining CNN and RANSAC is difficult; the key point is how to balance the time complexity of both against detection precision. The CNN must perform multiple convolutions to extract features; a deeper and wider network is naturally better, but its time complexity is correspondingly higher, and setting the network's depth and width to balance quality against time complexity is a difficult point. To this end, the present invention designs a convolutional neural network structure whose precision and time complexity are well suited to lane line detection, as shown in Fig. 5. The structure comprises 2 down-sampling layers, 3 convolutional layers and 1 MLP (multilayer perceptron), in the order convolutional layer 1, down-sampling layer 1, convolutional layer 2, down-sampling layer 2, convolutional layer 3 and the MLP. The kernel sizes of the down-sampling layers are 8 × 2 and 4 × 2 respectively, and the MLP comprises 3 fully connected layers. The CNN input image size is 192 × 28 and the output is 100 × 15. The present invention is, however, not limited to the above convolutional neural network structure; convolutional neural network structures improved on this basis all fall within the protection scope of the present invention. In the CNN detection phase, the detection image is lined up row by row into one long line and input to the trained convolutional neural network; the network processes each pixel of the long line, the MLP finally outputs a resulting long line, and the output is then reassembled in row-major order into a 100 × 15 image, which is the candidate lane line image found by the CNN. RANSAC is then applied again to detect the lane lines, yielding the final lane lines.
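The row-major hand-off around the network can be sketched as shape bookkeeping. `toy_net` is a placeholder standing in for the conv/pool/MLP stack of Fig. 5, present only to show the 192 × 28-in / 100 × 15-out contract; reading "192 × 28" as 28 rows by 192 columns is an assumption here.

```python
import numpy as np

def run_cnn_stage(img, net):
    # unroll the detection image row by row into one long line,
    # run the network, and reassemble the output row-major as 100 x 15
    line = img.reshape(-1)
    out_line = net(line)
    return out_line.reshape(15, 100)

def toy_net(line):
    # placeholder for the trained CNN: nearest-neighbour resize 28x192 -> 15x100
    img = line.reshape(28, 192)
    ys = (np.arange(15) * 28) // 15
    xs = (np.arange(100) * 192) // 100
    return img[np.ix_(ys, xs)].reshape(-1)
```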
The present invention has good robustness and can still effectively detect lane lines in complex environments. Fig. 6 shows actual detection results of the invention. Fig. 6(a) is the original input image; the environment around the lane lines is complex and the disturbances are numerous. Fig. 6(b) is the result after edge detection; the figure shows many detected edges similar to lane lines. Fig. 6(c) is the result of RANSAC detection alone; the detection result is wrong and deviates from the correct lane line position. Fig. 6(d) shows this detection result on the original image. Fig. 6(e) is the result of Fig. 6(b) after CNN processing; image disturbances are reduced and the lane line region stands out. Fig. 6(f) is the RANSAC detection result on Fig. 6(e); the detection accurately locates the lane lines. Fig. 6(g) shows the detection result of Fig. 6(f) on the original image. The actual detection results show that the present invention can accurately detect lane line positions in complex environments.
Although the present invention has been specifically shown and described with reference to preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made to the invention without departing from the spirit and scope of the present invention as defined by the appended claims, and such changes fall within the protection scope of the present invention.

Claims (9)

1. A robust lane line detection method combining RANSAC and CNN, characterized by comprising the following steps:
S1, obtaining an original image I from an image acquisition device, and performing Gaussian smoothing on the original image I to obtain a smoothed image I0;
S2, performing edge extraction on the smoothed image I0 to obtain an edge feature map I1;
S3, performing image denoising on the edge feature map I1 to obtain a denoised image I2;
S4, setting a region of interest on the denoised image I2 to obtain an image of interest I3;
S5, performing edge feature point division on image I3: division parameters Vmin, Vcommon and Uvanish are set; when an edge point's ordinate lies between Vmin and Vcommon it is judged a common point; when the edge point's coordinate is less than Uvanish it is judged a left point; the remaining points are judged right points;
S6, detecting lane lines with the RANSAC algorithm, the detailed process being: S61, building a hyperbolic lane line model M:
wherein Eu = f/du, Ev = f/dv, f is the camera focal length, du and dv are the pixel height and width, and z0 and θ are obtained from camera estimation; assuming (uL, vL) is the set of left and common points and (uR, vR) is the set of right and common points, the model M is simplified to
wherein a = (a1, a2, a3, a4)T are the model parameters. S62, a maximum iteration count Imax is set; in each iteration a point set S containing N points (N ≥ 4) is randomly drawn from the data set P to estimate the parameters a of the hyperbolic model M, the data set P being the union of (uL, vL) and (uR, vR). S63, the instantiated model M is used to evaluate the data points S* in P that do not belong to the point set S; if the error ef between S* and M is less than an error threshold et, S* is added to a point set Si called the consensus set. S64, if the number of points in Si exceeds a consensus-set size threshold d, the model M is re-estimated using the set Si, the evaluation criterion being the accumulated error ef over the points of Si. S65, the models M obtained in two successive iterations are compared and the model with the smaller error is retained; when both model errors are below a threshold Te, both sets of model parameters are retained and a model counter is incremented, the model count being the number of lane lines. S66, when the number of detected lane lines exceeds 3, or |x_t,upper - x_t-1,upper| > Tupper, or |x_t,lower - x_t-1,lower| > Tlower, or |P_t,vp(x,y) - P_t-1,vp(x,y)| > Tvp (wherein x_t,upper, x_t-1,upper, x_t,lower and x_t-1,lower denote the x-coordinates of the top and bottom points of the lane line in frames t and t-1, Tupper and Tlower are thresholds on the absolute difference of the top and bottom points between successive frames, P_t,vp(x,y) and P_t-1,vp(x,y) are the vanishing-point positions of frames t and t-1, and Tvp is the threshold on the vanishing-point difference between successive frames), RANSAC lane line detection is interrupted and the method goes to step S7; otherwise iteration continues up to the maximum iteration count Imax to obtain the final lane lines;
S7, first finding candidate lane lines with a CNN, and then applying step S6 again to the image containing the candidate lane lines obtained by the CNN processing.
2. The method of claim 1, characterized in that the detailed process of step S1 is: a two-dimensional Gaussian filter function G(x, y) is convolved with the original image I to obtain the smoothed image I0, where G(x, y) = (1/(2πσ^2)) exp(-(x^2 + y^2)/(2σ^2)) and σ denotes the width of the filter.
3. The method of claim 1, characterized in that the detailed process of step S2 is: for each position b(x, y) of the smoothed image I0, the pixel at that position is compared with its left neighbour b(x-m, y) and right neighbour b(x+m, y): B+m(x, y) = b(x, y) - b(x+m, y), B-m(x, y) = b(x, y) - b(x-m, y), where the distance m ≥ 1; a threshold T is set, and the value of the edge image I1 is:
4. The method of claim 1, characterized in that step S3 uses sliding-window denoising, the detailed process being: two small sliding windows, called the inner window and the outer window, are set; the two windows act on the same pixel neighbourhood, but the height and width of the outer window are 1.5% larger than those of the inner window; the two windows are slid across the whole edge map and the sums of the pixel values inside the two windows are compared; if the two window sums are equal, the pixels inside the window are judged to be isolated noise and set to zero.
5. The method of claim 1, characterized in that the detailed process of step S4 is: four parameters are set: a horizontal-axis maximum Xhigh, a horizontal-axis minimum Xlow, a vertical-axis maximum Yhigh and a vertical-axis minimum Ylow; if a position (x, y) in I2 satisfies Xlow ≤ x ≤ Xhigh and Ylow ≤ y ≤ Yhigh it is judged to belong to the region of interest, otherwise it is judged to belong to the region of no interest.
6. The method of claim 1, characterized in that the error ef in step S6 is computed using the "aromatic distance"; the "aromatic distance" of a point (u, v) is computed by a formula wherein k2 = EuEvKz0, k3 = Evθ - vr and k4 = ur + Euψ, with μ = -1 when the point (u, v) belongs to the right point set (uR, vR) and μ = +1 when it belongs to the left point set (uL, vL).
7. The method of claim 1, characterized in that the detailed process by which the CNN of step S7 selects candidate lane lines is: the detection image is first lined up row by row into one long line and input to the trained convolutional neural network structure; the network processes each pixel of the long line, the MLP then outputs a resulting long line, and the output is finally reassembled in row-major order into a 100 × 15 image, the 100 × 15 image being the image containing the candidate lane lines.
8. The method of claim 7, characterized in that the convolutional neural network structure comprises 2 down-sampling layers, 3 convolutional layers and 1 MLP, wherein the kernel sizes of the 2 down-sampling layers are 8 × 2 and 4 × 2 respectively, and the MLP comprises 3 fully connected layers.
9. The method of claim 1, characterized in that the image acquisition device is a front-facing camera or a driving recorder provided on the vehicle.
CN201611254172.7A 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN Active CN106778668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611254172.7A CN106778668B (en) 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611254172.7A CN106778668B (en) 2016-12-30 2016-12-30 A robust lane line detection method combining RANSAC and CNN

Publications (2)

Publication Number Publication Date
CN106778668A true CN106778668A (en) 2017-05-31
CN106778668B CN106778668B (en) 2019-08-09

Family

ID=58953261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611254172.7A Active CN106778668B (en) 2016-12-30 2016-12-30 A robust lane detection method combining RANSAC and CNN

Country Status (1)

Country Link
CN (1) CN106778668B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590451A (en) * 2017-09-04 2018-01-16 中国科学院长春光学精密机械与物理研究所 A kind of method for detecting lane lines
CN108229386A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 For detecting the method, apparatus of lane line and medium
CN108573242A (en) * 2018-04-26 2018-09-25 南京行车宝智能科技有限公司 A kind of method for detecting lane lines and device
CN108615242A (en) * 2018-05-04 2018-10-02 重庆邮电大学 A kind of highway guardrail tracking
CN109580979A (en) * 2018-06-12 2019-04-05 苏州市职业大学 Speed method for real-time measurement based on video processing
CN110348273A (en) * 2018-04-04 2019-10-18 北京四维图新科技股份有限公司 Neural network model training method, system and Lane detection method, system
CN110858391A (en) * 2018-08-23 2020-03-03 通用电气公司 Patient-specific deep learning image denoising method and system
CN110889318A (en) * 2018-09-05 2020-03-17 斯特拉德视觉公司 Lane detection method and apparatus using CNN
CN112216640A (en) * 2020-10-19 2021-01-12 惠州高视科技有限公司 Semiconductor chip positioning method and device
CN112654997A (en) * 2020-10-22 2021-04-13 华为技术有限公司 Lane line detection method and device
CN112686080A (en) * 2019-10-17 2021-04-20 北京京东乾石科技有限公司 Method and device for detecting lane line
CN113033433A (en) * 2021-03-30 2021-06-25 北京斯年智驾科技有限公司 Port lane line detection method, device, system, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592114A (en) * 2011-12-26 2012-07-18 河南工业大学 Method for extracting and recognizing lane line features of complex road conditions
CN102722705A (en) * 2012-06-12 2012-10-10 武汉大学 Method for detecting multi-lane line on basis of random sample consensus (RANSAC) algorithm
KR20140080105A (en) * 2012-12-20 2014-06-30 울산대학교 산학협력단 Method for detecting lane boundary by visual information
CN103902985A (en) * 2014-04-15 2014-07-02 安徽工程大学 High-robustness real-time lane detection algorithm based on ROI
CN105261020A (en) * 2015-10-16 2016-01-20 桂林电子科技大学 Method for detecting fast lane line


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIHUN KIM: "Lane Detection System using CNN", IEMEK J. Embed. Sys. Appl. *
JIHUN KIM: "Robust Lane Detection Based On Convolutional Neural Network and Random Sample Consensus", Springer *
MOHAMED ALY: "Real time Detection of Lane Markers in Urban Streets", IEEE *
GAO SONG: "A lane line detection algorithm based on a hyperbolic model", Journal of Xi'an Technological University *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590451A (en) * 2017-09-04 2018-01-16 中国科学院长春光学精密机械与物理研究所 A kind of method for detecting lane lines
CN108229386A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 For detecting the method, apparatus of lane line and medium
CN108229386B (en) * 2017-12-29 2021-12-14 百度在线网络技术(北京)有限公司 Method, apparatus, and medium for detecting lane line
CN110348273A (en) * 2018-04-04 2019-10-18 北京四维图新科技股份有限公司 Neural network model training method, system and Lane detection method, system
CN108573242A (en) * 2018-04-26 2018-09-25 南京行车宝智能科技有限公司 A kind of method for detecting lane lines and device
CN108615242A (en) * 2018-05-04 2018-10-02 重庆邮电大学 A kind of highway guardrail tracking
CN108615242B (en) * 2018-05-04 2021-07-27 重庆邮电大学 High-speed guardrail tracking method
CN109580979B (en) * 2018-06-12 2021-02-09 苏州市职业大学 Vehicle speed real-time measurement method based on video processing
CN109580979A (en) * 2018-06-12 2019-04-05 苏州市职业大学 Speed method for real-time measurement based on video processing
CN110858391A (en) * 2018-08-23 2020-03-03 通用电气公司 Patient-specific deep learning image denoising method and system
CN110858391B (en) * 2018-08-23 2023-10-10 通用电气公司 Patient-specific deep learning image denoising method and system
CN110889318A (en) * 2018-09-05 2020-03-17 斯特拉德视觉公司 Lane detection method and apparatus using CNN
CN110889318B (en) * 2018-09-05 2024-01-19 斯特拉德视觉公司 Lane detection method and device using CNN
CN112686080A (en) * 2019-10-17 2021-04-20 北京京东乾石科技有限公司 Method and device for detecting lane line
CN112216640A (en) * 2020-10-19 2021-01-12 惠州高视科技有限公司 Semiconductor chip positioning method and device
CN112216640B (en) * 2020-10-19 2021-08-06 高视科技(苏州)有限公司 Semiconductor chip positioning method and device
CN112654997A (en) * 2020-10-22 2021-04-13 华为技术有限公司 Lane line detection method and device
CN113033433A (en) * 2021-03-30 2021-06-25 北京斯年智驾科技有限公司 Port lane line detection method, device, system, electronic device and storage medium
CN113033433B (en) * 2021-03-30 2024-03-15 北京斯年智驾科技有限公司 Port lane line detection method, device, system, electronic device and storage medium

Also Published As

Publication number Publication date
CN106778668B (en) 2019-08-09

Similar Documents

Publication Publication Date Title
CN106778668A (en) A robust lane detection method combining RANSAC and CNN
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
WO2020062433A1 (en) Neural network model training method and method for detecting universal grounding wire
CN104408460B (en) A lane line detection and tracking method
CN103048329B (en) A road surface crack detection method based on an active contour model
CN104299008B (en) Vehicle type classification method based on multi-feature fusion
CN110599537A (en) Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system
CN104299009B (en) License plate character recognition method based on multi-feature fusion
CN102629326A (en) Lane line detection method based on monocular vision
CN111738314A (en) Deep learning method of multi-modal image visibility detection model based on shallow fusion
CN113139470B (en) Glass identification method based on Transformer
CN102819733B (en) Rapid detection and blurring method for faces in street-view images
CN113240623B (en) Pavement disease detection method and device
CN103632129A (en) Facial feature point positioning method and device
CN107368792A (en) A finger vein recognition method and system based on filtering and bone edges
CN114067186B (en) Pedestrian detection method and device, electronic equipment and storage medium
CN102682428A (en) Fingerprint image computer automatic mending method based on direction fields
CN104915642B (en) Front vehicles distance measuring method and device
CN112488046A (en) Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN113076860B (en) Bird detection system under field scene
CN110991264A (en) Front vehicle detection method and device
CN116758421A (en) Remote sensing image directed target detection method based on weak supervised learning
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
CN115984186A (en) Fine product image anomaly detection method based on multi-resolution knowledge extraction
CN112861785A (en) Shielded pedestrian re-identification method based on example segmentation and image restoration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A robust lane detection method combining RANSAC and CNN

Granted publication date: 20190809

Pledgee: Xiamen Huli Sub branch of Agricultural Bank of China Co.,Ltd.

Pledgor: MINGJIAN (XIAMEN) TECHNOLOGY CO.,LTD.

Registration number: Y2024980009494
