
CN115131968A - Matching fusion method based on lane line point set and attention mechanism

Info

Publication number
CN115131968A
Authority
CN
China
Prior art keywords: lane line, point set, lane, attention, fitted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210752033.6A
Other languages
Chinese (zh)
Other versions
CN115131968B (en)
Inventor
李锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210752033.6A
Publication of CN115131968A
Application granted
Publication of CN115131968B
Legal status: Active

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analysing of parameters relative to traffic conditions
    • G08G1/0125: Traffic data processing
    • G08G1/0129: Traffic data processing for creating historical data or processing based on historical data
    • G08G1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G08G1/0962: Arrangements having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967: Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708: Systems involving transmission of highway information where the received information might be used to generate an automatic action on the vehicle control
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems


Abstract

The invention discloses a matching fusion method based on a lane line point set and an attention mechanism. The method comprises: S1, receiving vehicle position and state information, and executing step S2 if the automatic driving system is started; S2, receiving road data and the lane line point sets detected by the sensors, and processing the point sets; S3, traversing the lane line point sets detected by the sensors and judging whether each point set lies in the attention focusing area; S4, storing the data of point sets inside the attention focusing area into the focus-domain to-be-fitted point set storage area and the data of all other point sets into the non-focus-domain to-be-fitted point set storage area, fitting each group separately, numbering the fitted lane lines, and calculating confidences, taking the weighted average of the point confidences as the confidence of each fitted lane line. The method effectively reduces the matching search space, lowers the risk of incorrect matching and fusion, and improves the stability and safety of path planning and steering wheel control for the automatic driving vehicle.

Description

Matching fusion method based on lane line point set and attention mechanism
Technical Field
The invention relates to the technical field of vehicle control, in particular to a matching fusion method based on a lane line point set and an attention mechanism.
Background
The safety of an automatic driving system depends on correct output from its sensors. A single sensor, such as a forward-looking camera or a forward-looking imaging lidar, has difficulty detecting lane lines continuously and stably because of its limited field of view, severe weather, occlusion, and similar conditions; a forward-looking camera is further affected by illumination and abrupt light-dark transitions. A multi-sensor system (such as a surround-view multi-camera system) can match and fuse the redundant information formed by the detection results of several sensors (such as cameras) facing different directions, improving the ability to detect lane lines continuously and stably and thereby improving the safety of automatic driving path planning. The key difficulties with multiple sensors are that each sensor obtains only local information, that the detections must be matched and fused correctly, and that the dimension of the matching search space grows with the number of sensors and must be reduced.
The invention patent with publication number CN112154449A discloses a lane line fusion method, device, vehicle, and storage medium. The method obtains an environment image around a movable platform and derives an initial lane line set from the image (S110); it then fits and optimizes the initial lane line data to obtain a target lane line set, where the fitting optimization uses both the initial lane line data and historical lane line data from a historical lane line set (S120); the target lane line set may include lane lines that are not parallel to each other. However, this patent has the following problems:
1. The initial lane lines and the historical lane lines are fused directly. Under working conditions where the view or illumination changes abruptly, the historical lane line data introduces recognition delay when the lane line information changes suddenly, and errors and noise (mismatches) in the historical data can degrade the accuracy of the currently fused lane lines.
2. As sensor recognition and measurement accuracy gradually improve, the value of historical lane line data decreases; when recognition accuracy approaches 100% and measurement accuracy reaches the millimeter level, using historical lane line data directly, or using it via prediction, still introduces delay, uncertainty, and noise into the fusion.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problem to be solved by the invention is: how to provide a matching fusion method based on a lane line point set and an attention mechanism that matches and fuses lane lines effectively and accurately in various scenes, effectively reduces the matching search space, lowers the risk of incorrect matching and fusion, and improves the stability and safety of path planning for the automatic driving vehicle.
In order to solve the technical problems, the invention adopts the following technical scheme:
a matching fusion method based on a lane line point set and an attention mechanism comprises the following steps:
s1, receiving the vehicle position and state information, and executing the step S2 if the automatic driving system is in the available state;
s2, receiving the road data and the lane line point set detected by the sensor, and processing the lane line point set;
s3, traversing the lane line point sets detected by the sensors, and judging whether the lane line point sets are in the attention focusing area;
s4, storing data of the lane line point set in the attention focusing area into a point set storage area to be fitted in a focusing area, storing data not in the attention focusing area into a point set storage area to be fitted in a non-focusing area, fitting the data respectively, numbering the fitted lane lines and calculating confidence coefficients, and taking the weighted average of the confidence coefficients of all the points as the confidence coefficient of the fitted lane line;
s5, merging lane lines, receiving all fitted lane lines in the S4 and numbering;
s6, outputting a global fusion lane line to the predictive planning control system according to the predictive planning control request;
s7, memorizing N frame fusion lane lines, uniformly predicting the current time of the memorized N frame fusion lane lines, and then performing inter-frame matching;
s8, classifying attention, and taking a fused lane line in a to-be-fitted point set storage area of a focus domain as a main attention lane line and a fused lane line in a to-be-fitted point set storage area of a non-focus domain as a secondary attention lane line;
and S9, calculating the central line of the attention focusing area, and determining the attention focusing area according to the sensor distance measurement error.
Further, in step S2, the road data includes drivable-area, guardrail, and road-edge data; according to the drivable-area, guardrail, and road-edge data, abnormal points lying outside the guardrails and road edges are removed from the lane line point set.
Further, in step S4, the lane line point set data stored in the focus-domain to-be-fitted point set storage area is processed: the to-be-fitted point set storage area of each focus domain is traversed, and the RANSAC algorithm is used to fit a polynomial function to the lane line point set of each focus domain in the to-be-fitted storage area.
Further, the RANSAC algorithm is adopted to process the lane line point set data stored in the to-be-fitted point set storage area of the focusing area, and the processing comprises removing or thinning points with long time headway, removing points with confidence coefficient smaller than an adaptive threshold, and removing dark gray tire imprints and old lane lines, so that the iteration times of the RANSAC algorithm are reduced.
Further, the adaptive confidence threshold is segmented by length; the adaptive confidence threshold AT_conf for the lane line points of each segment is:

AT_conf = [equation rendered as an image in the original]

where Conf_point is the confidence of a lane line point; λ ∈ (0,1] is the rejection-region factor; m is the number of lane line points obtained by the sensor within the segment after the lane line is segmented by length; and Conf_LLimit is the lower limit of the adaptive confidence threshold.
Further, in step S4, the step of processing the lane line point set data stored in the storage area for the point set to be fitted in the unfocused domain includes:
S41, processing the lane line point set data stored in the non-focus-domain to-be-matched area: the front-half-perimeter lane line point sets that are not in the attention focusing area are sorted by length, and the point sets whose length exceeds the adaptive threshold are selected as the front-half-perimeter long lane lines long_FP(i) (i = 1,...,m), while the rest serve as the front-half-perimeter short lane lines short_FP(i) (i = 1,...,n);

in order from long to short, each front-half-perimeter long lane line in the non-focus-domain to-be-fitted point set storage area is selected in turn as the reference, and the remaining long lane lines and the front-half-perimeter short lane lines are traversed for matching; when all front-half-perimeter lane lines have been traversed and matched, the method proceeds to S42;

S42, processing the lane line point set data stored in the non-focus-domain to-be-matched area: the rear-half-perimeter lane line point sets that are not in the attention focusing area are sorted by length, and the point sets whose length exceeds the adaptive threshold are selected as the rear-half-perimeter long lane lines long_RP(i) (i = 1,...,m), while the rest serve as the rear-half-perimeter short lane lines short_RP(i) (i = 1,...,n);

in order from long to short, each rear-half-perimeter long lane line in the non-focus-domain to-be-fitted point set storage area is selected in turn as the reference, and the remaining long lane lines and the rear-half-perimeter short lane lines are traversed for matching; when all rear-half-perimeter lane lines have been traversed and matched, the method proceeds to S43;

S43, taking the front-half-perimeter fused lane lines as references, the rear-half-perimeter fused lane lines of the vehicle are traversed for matching; if a front-half-perimeter lane line point set can be matched with a rear-half-perimeter lane line point set, all matched point sets are merged, unmatched abnormal points are eliminated, and a polynomial function is fitted.
Further, the lane line length adaptive threshold used when processing the lane line point set data stored in the non-focus-domain to-be-fitted point set storage area is:

LTH_line = [equation rendered as an image in the original]

where LTH_line is the calculated lane line length threshold; δ ∈ (0,1] is the rejection-region factor; n is the total number of lane lines in the non-focus-domain to-be-fitted point set storage area; and LTH_LLimit is the lower limit of the adaptive length threshold.
Further, in step S7, inter-frame matching is performed on the lane lines fitted with nth-order polynomial curves, according to:

[inequality rendered as an image in the original]

where A_ip1 is the ith coefficient of the nth-order polynomial in the later frame, A_ip2 is the ith coefficient of the nth-order polynomial in the earlier frame, Tr_IFM is the tolerance threshold for inter-frame matching, and ΔA_inon is the normalization factor for the inter-frame matching error of the ith polynomial coefficient;

when the inequality holds, inter-frame matching of the memorized lane line curve succeeds, and the curve is stored into the memorized lane line sequence.
Further, in step S9, the center line of the attention focusing area is calculated according to the predicted driving direction, the main-attention lane line coefficients, and the forgetting factor; wherein,

in predicting the driving direction, a main-attention lane line that differs too much from the predicted driving direction of the vehicle is excluded from the center line calculation; when the following discriminant inequality holds, the lane line is included in the center line calculation:

|A_0 + A_1·v_h·t_pred + (2·A_2 - Y_r)·(v_h·t_pred)^2| < Tr_pred

where v_h is the speed of the host vehicle, Y_r is the yaw rate of the host vehicle, t_pred is the set prediction time, A_0, A_1, A_2 are the fitted lane line equation coefficients, and Tr_pred is the discrimination threshold on the lateral distance between the lane line and the host vehicle after the prediction time t_pred;
the forgetting factor of the current frame is calculated and stored, with at most n_m frames stored, according to:

β_pi = [equation rendered as an image in the original]

where β_b is the forgetting factor reference value; Vr_pi is the far-end visible distance of the lane line in the driving direction; v_hpi is the speed of the host vehicle; conf_pi is the main-attention lane line confidence of the current frame; dt_pi is the time difference between the current frame and the previous frame; μ is the safety threshold; and i is the frame index (i = 1,...,n_m);
Calculating the central line of the attention focusing area, and expressing the central line by a polynomial of third order and above, wherein the coefficient of the central line is calculated by the following formula:
Figure BDA0003718577470000051
in the formula, n m For memorizing the upper limit of the frame number, when m < n m -1,(β p1p1 β p2 +...(β p1 β p2 ...β pm ))>Tr memo When the element belongs to (0,1),
Figure BDA0003718577470000052
wherein Tr memo Is a memory tolerance threshold;
Figure BDA0003718577470000053
is n m A polynomial coefficient vector of a main attention lane line of an ith frame in the matching memory sequence of the history frames;
the lateral width of the attention focusing area is determined from the sensor ranging error as:

Δd(l) = γ·σ_avg(l)

where Δd(l) is half the lateral width of the focusing area, γ ∈ (0,1] is a safety factor set according to the sensor's performance characteristics, and σ_avg(l) is the mean sensor ranging error at distance l;

with an nth-order polynomial representing the center line of the attention focusing area, the region between the following two curves is the attention focusing area:

A_upper(x) = A_0pf + A_1pf·x + A_2pf·x^2 + A_3pf·x^3 + ... + A_npf·x^n + Δd(x)
A_lower(x) = A_0pf + A_1pf·x + A_2pf·x^2 + A_3pf·x^3 + ... + A_npf·x^n - Δd(x).
Compared with the prior art, the invention has the following beneficial effects:
1. Using an attention focusing area based on a forgetting mechanism, the method stores the lane line point sets inside the attention focusing area into the focus-domain to-be-fitted storage area and removes abnormal points with the RANSAC algorithm or an improved variant before fitting, effectively eliminating low-confidence points, falsely detected points, and outliers. Lane line point sets outside the attention focusing area are stored into the non-focus-domain to-be-fitted storage area; the front-half-perimeter and rear-half-perimeter local regions are fused separately (the front half perimeter covers the forward and side-forward sensors, the rear half perimeter the rearward and side-rearward sensors), and the resulting half-perimeter information is then fused globally. Point set fusion inside the attention focusing area and outside it can run in parallel and asynchronously, reducing the latency of matching and fusion inside the focusing area. Historical lane line data does not participate directly in lane line fusion, since fusing historical data necessarily introduces delay and noise and cannot cope well with abrupt lane line changes; instead, the historical data is used to generate the attention focusing area through the forgetting mechanism, reducing the dimension of the matching search space. The method therefore effectively reduces the matching search space in various scenes, quickly matches and fuses the lane line point sets detected by multiple sensors, lowers the risk of incorrect matching and fusion, reduces fusion latency, and improves the stability and safety of path planning and control for the automatic driving vehicle.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a flow chart of a matching fusion method based on a lane line point set and an attention mechanism according to the present invention;
FIG. 2 is a schematic diagram of a sensor profile according to the present invention;
FIG. 3 is a schematic diagram of a usage scenario in the present invention;
FIG. 4 is a schematic diagram of an attention focusing region according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Because multiple sensors (such as cameras) detect lane lines in different directions and produce detection results represented as point sets, with a confidence for each point, the detection data forms redundant information; if the results are matched and fused with a correct method, the ability to detect lane lines continuously and stably improves, and with it the safety of automatic driving path planning.

However, lane lines are slender structures that may extend past the vehicle body and be occluded by various targets; in real, complex, and changing scenes it is difficult to correctly match and fuse the local lane lines (point sets) and their characteristic parameters detected by different cameras into a global lane line output. As the number of sensors grows, the traversal search space for matching between the point sets of different lane lines also grows sharply, affecting computation time; moreover, incorrect matching and fusion of lane lines detected by different cameras can cause serious curvature and heading angle errors, leading to wrong path planning results and serious safety accidents.
The specific implementation mode of the method is derived and explained based on the vehicle coordinate system, but the actual physical meaning and effect of the method are not influenced by changing the coordinate system.
As shown in fig. 1, fig. 2 and fig. 3, the present embodiment provides a matching fusion method based on a lane line point set and an attention mechanism, including the following steps:
S1, receiving the vehicle position and state information, and executing step S2 if the automatic driving system is in the available state;
S2, receiving the road data and the lane line point sets detected by the sensors, and processing the lane line point sets; when there are k sensors and a given sensor detects several lane lines, the lane line point set is:

[set notation rendered as an image in the original]

The lane line point set can be obtained directly from the sensor's detection results, or indirectly through sparse sampling and distance calculation on the sensor's semantic/instance segmentation results. The point set output by the sensor already contains the position of each point in the vehicle coordinate system, or that position can be calculated indirectly.
S3, traversing the lane line point sets detected by the sensors, and judging whether the lane line point sets are in the attention focusing area;
S4, storing the data of lane line point sets inside the attention focusing area into the focus-domain to-be-fitted point set storage area and the data of point sets outside it into the non-focus-domain to-be-fitted point set storage area, fitting each group separately, numbering the fitted lane lines, and calculating confidences, taking the weighted average of the point confidences as the confidence of each fitted lane line;

S5, merging lane lines: receiving and numbering all the lane lines fitted in S4;

S6, outputting the global fused lane lines and the lane lines around the front and rear of the vehicle to the predictive planning control system according to the predictive planning control request;

S7, memorizing N frames of fused lane lines, uniformly predicting the memorized frames to the current time, and then performing inter-frame matching.
S8, classifying attention: taking the fused lane lines in the focus-domain to-be-fitted point set storage area as primary-attention lane lines and the fused lane lines in the non-focus-domain to-be-fitted point set storage area as secondary-attention lane lines;
When a secondary-attention lane line can be stably matched between frames across N consecutive fused frames, it is promoted to a primary-attention lane line;

when a primary-attention lane line fails inter-frame matching in even one frame among N consecutive fused frames, it is demoted to a secondary-attention lane line.
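The promotion and demotion rules above can be read as a small per-lane-line state machine. A minimal sketch, assuming the matching results of the last N fused frames are available as a boolean history (function and argument names are illustrative):

```python
def update_attention_class(is_primary: bool,
                           matched_history: list[bool],
                           n_frames: int) -> bool:
    """Apply the promotion/demotion rules described above.

    matched_history holds, for this lane line, the inter-frame matching
    result of the most recent fused frames (True = matched).
    Returns the new value of is_primary.
    """
    recent = matched_history[-n_frames:]
    if len(recent) < n_frames:
        return is_primary          # not enough history yet: keep class
    if not is_primary and all(recent):
        return True                # secondary -> primary: N stable matches
    if is_primary and not all(recent):
        return False               # primary -> secondary: even 1 miss
    return is_primary
```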
S9, calculating the center line of the attention focusing area, and determining the attention focusing area according to the sensor ranging error.
Through the attention focusing area based on the forgetting mechanism, the method stores the lane line point set data inside the attention focusing area into the focus-domain to-be-fitted storage area and uses the RANSAC algorithm or an improved variant to remove outliers before fitting, effectively eliminating low-confidence and falsely detected points. Lane line point set data outside the attention focusing area is stored into the non-focus-domain to-be-fitted storage area, and the front-half-perimeter and rear-half-perimeter local regions are fused (the front half perimeter covers the forward and side-forward sensors, the rear half perimeter the rearward and side-rearward sensors) to obtain half-perimeter local information that is then fused globally. Historical lane line data does not participate directly in the fusion, since fused historical data necessarily introduces delay and noise and cannot cope well with abrupt lane line changes; the historical data instead generates the attention focusing area through the forgetting mechanism, reducing the matching search space dimension. The method therefore effectively reduces the matching search space in various scenes, speeds up matching and fusion of the lane line point sets detected by multiple sensors, lowers the risk of incorrect matching and fusion, and improves the stability and safety of path planning and steering wheel control for the automatic driving vehicle.
Referring to fig. 2, the sensor in the invention may be a surround-view camera system consisting of six cameras (front view, left-front view, left-rear view, right-front view, right-rear view, and rear view); the cameras may be replaced by other sensors capable of detecting lane lines (point sets) and outputting lane line point sets, such as imaging lidar, without affecting the use of the method.
The method is suitable for fusing the lane line point detection results of multiple vision sensors (results comprising the coordinates of the lane line point set and, for each point, a confidence and color/gray/saturation values), and also for fusing the lane line detection results of other sensors such as imaging lidar and high-channel-count lidar and their combinations (which can generate lane line point set data similar to camera detections, with a confidence per point).
The method is also suitable for multi-sensor fusion of the road edge and the generalized road boundary point set.
In this embodiment, the road data includes drivable-area, guardrail, and road-edge data; according to this data, abnormal points lying outside the guardrails and road edges are removed from the lane line point set.

In this way the lane line point set data is checked against the guardrails and road edges, multi-source verification of the drivable area is achieved, lane lines crossing guardrails and lane lines of oncoming traffic are eliminated, and the risk of incorrect matching and fusion is reduced.
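A minimal sketch of that outlier removal, assuming (purely for illustration) that the guardrail or road edge on each side is available as a lateral-offset function of longitudinal distance; the patent does not prescribe this representation. LanePoint reuses the structure sketched earlier:

```python
from typing import Callable

def remove_points_outside_boundaries(
        points: list[LanePoint],
        left_edge_y: Callable[[float], float],
        right_edge_y: Callable[[float], float]) -> list[LanePoint]:
    """Keep only lane line points between the road boundaries.

    left_edge_y(x)/right_edge_y(x) give the lateral position of the
    left/right guardrail or road edge at longitudinal distance x.
    Points outside these boundaries are treated as abnormal and removed.
    """
    return [p for p in points
            if right_edge_y(p.x) <= p.y <= left_edge_y(p.x)]
```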
Preferably, in this embodiment, a set of lane line points where more than three-fourths of the points are in the focus domain is stored in the storage area of the to-be-fitted point set in the focus domain.
In this embodiment, the lane line point set data stored in the to-be-fitted point set storage area of the focus area and the lane line point set data stored in the to-be-fitted point set storage area of the unfocused area are processed, and asynchronous processing can be performed in parallel to reduce delay of matching and fusion of the lane line point sets in the focus area.
For ease of understanding, each is now described separately.
I. Specifically, the lane line point set data lineFocusP(i, j) (i = 1,...,r; j = 1,...,s(r)) stored in the focus-domain to-be-fitted point set storage area is processed: the to-be-fitted storage area of each focus domain is traversed, and the RANSAC algorithm or an improved RANSAC algorithm is used to fit a polynomial function to the lane line point set of each focus domain.
Preferably, the RANSAC algorithm or its improved variant proceeds as follows:
1) Select an initial point set and go to step 2).
2) Randomly sample a group of a points from the initial point set, fit a polynomial function by least squares, and go to step 3).
3) Compute the nearest distance from each point to the polynomial curve, eliminate outliers whose distance exceeds a threshold, and count the points that remain. Go to step 4).
4) When the remaining points exceed half of the total, refit the curve, compute the sum of distances between the curve and the points and the mean confidence of the points, store them, and go to step 5); otherwise repeat steps 2) and 3).
5) When the sum of distances between the curve and the points is below a threshold, select the curve as a candidate optimal curve and store it; otherwise repeat steps 2), 3), and 4).
6) When the number of random samplings in step 2) exceeds a set threshold c and at least one candidate optimal curve has been obtained, output the candidate optimal curve as the optimal curve. Otherwise repeat steps 2) through 5) until b curves have been fitted, then use the weighted scoring mechanism of the RANSAC algorithm to select the highest-scoring of the b curves and output it as the current optimal lane line. The score is computed from the sum of distances between the curve and the points, the mean confidence of the point set, the number of lane line points, the lane line length, and the proportion of points with normal color.
Preferably, the scoring formula is:

Score = e_1·D_sum + e_2·Conf_mean + e_3·n_point + e_4·L_lane + e_5·P_color

where e_1,...,e_5 are weighting coefficients; D_sum is the normalized sum of distances between the points and the fitted curve; Conf_mean is the mean confidence of the point set; n_point is the normalized number of lane line points; L_lane is the normalized calculated lane line length; and P_color is the proportion of lane line points with normal color (neither gray nor black).
In the above steps, a is the number of points drawn per random sample, b is the upper limit on candidate optimal curves (an iteration termination condition), and c is the upper limit on random samplings (an iteration termination condition).
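A rough sketch of steps 1) to 6), using vertical residuals in place of true nearest-curve distance for simplicity; the threshold values, the default weights, and the sign convention on the distance term (negative, since a smaller distance sum should score higher) are illustrative assumptions, and the color term of the score is omitted:

```python
import random
import numpy as np

def ransac_fit_lane(points, degree=3, a=8, dist_th=0.3, sum_th=5.0,
                    b=5, c=200, weights=(-1.0, 1.0, 0.5)):
    """RANSAC polynomial fit over one focus-domain point set (sketch).

    points: iterable of (x, y, conf) rows. a = points per random sample,
    b = candidate optimal curve cap, c = random sampling cap, as above.
    """
    pts = np.asarray(list(points), dtype=float)
    if len(pts) < a:
        return None                                   # too few points
    candidates = []
    for _ in range(c):                                # step 6) sampling cap
        sample = pts[random.sample(range(len(pts)), a)]
        coef = np.polyfit(sample[:, 0], sample[:, 1], degree)    # step 2)
        resid = np.abs(np.polyval(coef, pts[:, 0]) - pts[:, 1])  # step 3)
        inliers = pts[resid < dist_th]
        if len(inliers) > len(pts) / 2:                          # step 4)
            coef = np.polyfit(inliers[:, 0], inliers[:, 1], degree)
            d_sum = np.abs(np.polyval(coef, inliers[:, 0])
                           - inliers[:, 1]).sum()
            if d_sum < sum_th:                                   # step 5)
                candidates.append((coef, d_sum, inliers))
        if len(candidates) >= b:
            break
    if not candidates:
        return None

    def score(cand):          # weighted scoring over the candidates
        coef, d_sum, inliers = cand
        e1, e2, e3 = weights
        return (e1 * d_sum + e2 * inliers[:, 2].mean()
                + e3 * len(inliers) / len(pts))

    return max(candidates, key=score)[0]              # best curve coeffs
```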
Preferably, because the original RANSAC algorithm requires an excessive number of samplings to obtain the b curves with 99% success probability when the number of points is too large, a specially improved RANSAC (random sample consensus) algorithm, based on the confidence of the lane line point set and fused with other information, can be used to fit the lane lines for points in the attention-mechanism focus domain.

Specifically, the method for selecting the initial point set in step 1) is improved as follows. First, points whose confidence is below the adaptive threshold are removed from the lane line point set of each focus domain in the to-be-fitted storage area. Then, points whose time headway (distance divided by average vehicle speed) exceeds 5 s are removed or sparsely sampled, and points whose time headway exceeds 2 s are sparsely sampled. In particular, for sensors that can detect color, dark-gray tire marks and old lane lines are removed, as are lines whose color differs too much from white and yellow. Fitting the polynomial function with the RANSAC algorithm then requires markedly fewer random samplings.
In the above selection of the initial point set, the adaptive confidence threshold (AT) is computed per length segment: the lane line is divided into segments of d meters each (for example, for an 8 MP camera, d = 5 within 50 m and d = 10 from 50 m to 100 m), and the adaptive confidence threshold AT_conf for the lane line points of each segment is:

AT_conf = [equation rendered as an image in the original]

where Conf_point is the confidence of a lane line point; λ ∈ (0,1] is the rejection-region factor; m is the number of lane line points obtained by the sensor within the segment after the lane line is segmented by length; and Conf_LLimit is the lower limit of the adaptive confidence threshold.
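A minimal sketch of this segment-wise screening. The AT_conf formula itself is an image in the original, so the form below, the rejection factor scaled by the segment's mean point confidence and floored at Conf_LLimit, is an assumption consistent with the stated symbols, not the patent's exact formula:

```python
def adaptive_conf_threshold(segment_confs: list[float],
                            lam: float = 0.6,
                            conf_llimit: float = 0.3) -> float:
    """Assumed form: max(lam * mean segment confidence, Conf_LLimit)."""
    m = len(segment_confs)
    if m == 0:
        return conf_llimit
    return max(lam * sum(segment_confs) / m, conf_llimit)

def filter_segment(points: list[LanePoint],
                   lam: float = 0.6,
                   conf_llimit: float = 0.3) -> list[LanePoint]:
    """Drop the points of one d-meter segment whose confidence falls
    below that segment's adaptive threshold."""
    th = adaptive_conf_threshold([p.conf for p in points], lam, conf_llimit)
    return [p for p in points if p.conf >= th]
```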
When fitting the polynomial with the improved RANSAC algorithm, the polynomial degree of the curve fit is chosen according to the scene. Preferably, when the sensor detects that the current scene is a structured road, the lane line is represented by a 3rd-order polynomial curve, and the fitted lane line function is:

A(x) = A_0 + A_1·x + A_2·x^2 + A_3·x^3

When the sensor detects that the current scene is an unstructured road, the lane line is represented by a 5th-order polynomial curve, and the fitted lane line function is:

A(x) = A_0 + A_1·x + A_2·x^2 + A_3·x^3 + A_4·x^4 + A_5·x^5
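The scene-dependent degree selection maps directly to a fit routine; a small sketch (numpy's polyfit returns coefficients with the highest degree first, so they are reversed to match the A_0 + A_1·x + ... convention above):

```python
import numpy as np

def fit_lane_polynomial(xs, ys, structured_road: bool):
    """Fit the lane line with a scene-dependent polynomial degree:
    cubic for structured roads, quintic for unstructured roads."""
    degree = 3 if structured_road else 5
    coef_desc = np.polyfit(xs, ys, degree)  # highest degree first
    return coef_desc[::-1]                  # ascending: A_0, A_1, ..., A_n
```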
all the fitted lane lines linefocus (i) (1., h) are numbered, confidence calculation is performed, and the weighted average of the confidence of each point is used as the confidence of the fitted lane line.
II. Specifically, the step of processing the lane line point set data stored in the non-focus-domain to-be-fitted point set storage area comprises the following steps:

S41, processing the lane line point set data in the non-focus-domain to-be-matched area: the front-half-perimeter lane line point sets that are not in the attention focusing area are sorted by length, and the point sets whose length exceeds the adaptive threshold are selected as the front-half-perimeter long lane lines long_FP(i) (i = 1,...,m), while the rest serve as the front-half-perimeter short lane lines short_FP(i) (i = 1,...,n).

In order from long to short, each front-half-perimeter long lane line long_FP(i) in the non-focus-domain to-be-fitted point set storage area is selected in turn as the reference, and the other long lane lines long_FP(j) (j ≠ i) and the front-half-perimeter short lane lines short_FP(i) (i = 1,...,n) are traversed for matching, specifically as follows:

1) If the reference long_FP(i) can be matched with other front-half-perimeter long lane lines long_FP(j) (j ≠ i), all matched lane line point sets are merged, unmatched abnormal points are eliminated, and the lane line is fitted; the matched point sets and the reference are merged into one line, long_FP(i), which serves as the new reference lane line. The matched lines long_FP(j) (j ≠ i) are removed from the front-half-perimeter long lane line set and no longer take part in the matching traversal of the remaining lane lines.

2) If the reference front-half-perimeter point set long_FP(i) can be matched with front-half-perimeter short lane line point sets short_FP(i) (i = 1,...,n), all matched lane line points are merged, unmatched abnormal points are eliminated, and the lane line is fitted.

3) If a lane line cannot be matched with any front-half-perimeter short lane line and its length exceeds the adaptive threshold, its point set is fitted directly as an independent lane line.

All the fitted lane lines lineFP(i) (i = 1,...,g) are numbered and their confidences calculated, taking the weighted average of the point confidences as the confidence of each fitted lane line.

When the front-half-perimeter lane lines of the vehicle have all been traversed and matched, the method proceeds to S42.

S42, processing the lane line point set data in the non-focus-domain to-be-matched area: the rear-half-perimeter lane line point sets that are not in the attention focusing area are sorted by length, and the point sets whose length exceeds the adaptive threshold are selected as the rear-half-perimeter long lane lines long_RP(i) (i = 1,...,m), while the rest serve as the rear-half-perimeter short lane lines short_RP(i) (i = 1,...,n).

In order from long to short, each rear-half-perimeter long lane line in the non-focus-domain to-be-fitted point set storage area is selected in turn as the reference, and the other long lane lines and the rear-half-perimeter short lane lines are traversed for matching, specifically as follows:

1) If the reference long_RP(i) can be matched with other rear-half-perimeter long lane lines long_RP(j) (j ≠ i), all matched lane line point sets are merged, mismatched abnormal points are eliminated, and the lane line is fitted; the matched lines and the reference are merged into one line, long_RP(i), which serves as the new reference lane line. The matched lines long_RP(j) (j ≠ i) are removed from the rear-half-perimeter long lane line set and no longer take part in the matching traversal of the remaining lane lines.

2) If the reference rear-half-perimeter long lane line can be matched with rear-half-perimeter short lane line point sets short_RP(i) (i = 1,...,n), all matched lane line points are merged, unmatched abnormal points are eliminated, and the lane line is fitted.

3) If a lane line cannot be matched with any rear-half-perimeter short lane line and its length exceeds the adaptive threshold, its point set is fitted directly as an independent lane line.

All the fitted lane lines lineR(i) (i = 1,...,h) are numbered and their confidences calculated, taking the weighted average of the point confidences as the confidence of each fitted lane line.

When the rear-half-perimeter lane lines have all been traversed and matched, the method proceeds to S43.

S43, taking the front-half-perimeter fused lane lines as references, the rear-half-perimeter fused lane lines of the vehicle are traversed for matching; if a front-half-perimeter lane line point set can be matched with a rear-half-perimeter lane line point set, all matched point sets are merged, unmatched abnormal points are eliminated, and a polynomial function is fitted. Preferably, the lane line is represented by a 3rd-order polynomial curve when the sensor detects that the current scene is a structured road, and by a 5th-order polynomial curve when it detects an unstructured road.
Specifically, the lane line length adaptive threshold used when processing the lane line point set data stored in the non-focus-domain to-be-fitted point set storage area is:

LTH_line = [equation rendered as an image in the original]

where LTH_line is the calculated lane line length threshold; δ ∈ (0,1] is the rejection-region factor; n is the total number of lane lines in the non-focus-domain to-be-fitted point set storage area; and LTH_LLimit is the lower limit of the adaptive length threshold.

The purpose of screening by length with the adaptive threshold formula is that particularly short lane lines detected by the sensors are hard to match and have a certain probability of being other markings, such as unclear arrow signs or unclear guide lines. The matching fusion strategy therefore does not select particularly short, hard-to-match lane lines as matching references.
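A skeleton of the longest-first matching loop of S41/S42, reusing the LaneLinePointSet sketch from earlier. The length threshold formula is an image in the original, so the max(mean-scaled, floor) form below is an assumption; can_match and merge stand in for the patent's matching test and point set merging, which are not spelled out here:

```python
def adaptive_length_threshold(lengths: list[float],
                              delta: float = 0.5,
                              lth_llimit: float = 3.0) -> float:
    """Assumed form: max(delta * mean lane line length, LTH_LLimit)."""
    n = len(lengths)
    if n == 0:
        return lth_llimit
    return max(delta * sum(lengths) / n, lth_llimit)

def match_half_perimeter(lines, can_match, merge):
    """Longest-first reference matching for one half perimeter (sketch)."""
    th = adaptive_length_threshold([ln.length() for ln in lines])
    long_lines = sorted((ln for ln in lines if ln.length() > th),
                        key=lambda ln: ln.length(), reverse=True)
    short_lines = [ln for ln in lines if ln.length() <= th]
    fused = []
    while long_lines:
        ref = long_lines.pop(0)          # longest remaining line as reference
        for other in long_lines[:]:      # 1) absorb matching long lines
            if can_match(ref, other):
                ref = merge(ref, other)
                long_lines.remove(other)
        for short in short_lines[:]:     # 2) absorb matching short lines
            if can_match(ref, short):
                ref = merge(ref, short)
                short_lines.remove(short)
        fused.append(ref)                # 3) unmatched long lines stand alone
    return fused
```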
In step S7, inter-frame matching is performed on the lane lines fitted with nth-order polynomial curves, according to:

[inequality rendered as an image in the original]

where A_ip1 is the ith coefficient of the nth-order polynomial in the later frame, A_ip2 is the ith coefficient in the earlier frame, Tr_IFM is the tolerance threshold for inter-frame matching, and ΔA_inon is the normalization factor for the inter-frame matching error of the ith polynomial coefficient.

When the inequality holds, inter-frame matching of the memorized lane line curve succeeds, and the successfully matched lane line curve is stored into the memorized lane line sequence.

When the polynomial order n = 3, inter-frame matching can be performed on lane lines fitted with cubic polynomial curves, according to:

[inequality rendered as an image in the original]

where A_0p1, A_1p1, A_2p1, A_3p1 are the coefficients of the fitted cubic lane line polynomial in the later frame, A_0p2, A_1p2, A_2p2, A_3p2 are the coefficients of the fitted cubic lane line polynomial in the earlier frame, and Tr_IFM is the tolerance threshold for inter-frame matching.

When the inequality holds, inter-frame matching of the memorized lane line curve succeeds, and the successfully matched lane line curve is stored into the memorized lane line sequence.
Referring to fig. 4, in step S9 the center line of the attention focusing area is calculated from the predicted driving direction, the main-attention lane line coefficients, and the forgetting factor; wherein,

in predicting the driving direction, a main-attention lane line that differs too much from the predicted driving direction of the vehicle is excluded from the center line calculation; when the following discriminant inequality holds, the lane line is included in the center line calculation:

|A_0 + A_1·v_h·t_pred + (2·A_2 - Y_r)·(v_h·t_pred)^2| < Tr_pred

where v_h is the speed of the host vehicle; Y_r is the yaw rate of the host vehicle; t_pred is the set prediction time; A_0, A_1, A_2 are the fitted lane line equation coefficients; and Tr_pred is the discrimination threshold on the lateral distance between the lane line and the host vehicle after the prediction time t_pred.
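This discriminant is fully specified in the text and translates directly (function and argument names are illustrative):

```python
def in_predicted_direction(a0: float, a1: float, a2: float,
                           v_h: float, yaw_rate: float,
                           t_pred: float, tr_pred: float) -> bool:
    """Keep a main-attention lane line for the center line calculation
    only if |A_0 + A_1 v_h t_pred + (2 A_2 - Y_r)(v_h t_pred)^2| < Tr_pred."""
    s = v_h * t_pred  # distance predicted to be travelled in t_pred
    return abs(a0 + a1 * s + (2.0 * a2 - yaw_rate) * s * s) < tr_pred
```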
The forgetting factor of the current frame is calculated and stored, with at most n_m frames stored, according to:

β_pi = [equation rendered as an image in the original]

where β_b is the forgetting factor reference value; Vr_pi is the far-end visible distance of the lane line in the driving direction; v_hpi is the speed of the host vehicle; conf_pi is the main-attention lane line confidence of the current frame; dt_pi is the time difference between the current frame and the previous frame; μ is the safety threshold; and i is the frame index (i = 1,...,n_m). That is, the longer the visible lane line and the slower the vehicle speed in a history frame, the smaller the forgetting factor and the slower the forgetting; the forgetting factor thus allows the historical information to be used more effectively to construct the attention focusing area.

The center line of the attention focusing area is then calculated from the predicted driving direction, the main-attention lane line coefficients, the confidence, and the forgetting factor. This mirrors bionic test results on human visual attention: attention follows the driving direction, and the lower the confidence, the faster the forgetting.
The center line of the attention focusing area is expressed as a polynomial of third order or above, with coefficients calculated by:

[equation rendered as an image in the original]

where n_m is the upper limit on the number of memorized frames; when m < n_m - 1 and (β_p1 + β_p1·β_p2 + ... + β_p1·β_p2···β_pm) > Tr_memo, with each β ∈ (0,1),

[equation rendered as an image in the original]

where Tr_memo is the memory tolerance threshold, and A_pi is the polynomial coefficient vector of the main-attention lane line of the ith frame in the matching memory sequence of the n_m history frames.
The lateral width of the attention focusing area is determined from the sensor ranging error as:

Δd(l) = γ·σ_avg(l)

where Δd(l) is half the lateral width of the focusing area, γ ∈ (0,1] is a safety factor set according to the sensor's performance characteristics, and σ_avg(l) is the mean sensor ranging error at distance l.

When a cubic polynomial represents the center line of the attention focusing area, the region between the following two curves is the attention focusing area:

A_upper(x) = A_0pf + A_1pf·x + A_2pf·x^2 + A_3pf·x^3 + Δd(x)
A_lower(x) = A_0pf + A_1pf·x + A_2pf·x^2 + A_3pf·x^3 - Δd(x)

When an nth-order polynomial represents the center line of the attention focusing area, the region between the following two curves is the attention focusing area:

A_upper(x) = A_0pf + A_1pf·x + A_2pf·x^2 + A_3pf·x^3 + ... + A_npf·x^n + Δd(x)
A_lower(x) = A_0pf + A_1pf·x + A_2pf·x^2 + A_3pf·x^3 + ... + A_npf·x^n - Δd(x)
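The width rule and the boundary curves are fully specified above; a small sketch that evaluates both boundaries of the focusing area (the linear ranging-error model in the usage example is an illustrative assumption):

```python
import numpy as np

def focus_region_bounds(center_coefs, x, gamma, sigma_avg):
    """Boundary curves of the attention focusing area.

    center_coefs: center line coefficients A_0pf..A_npf, ascending order;
    sigma_avg(l): mean sensor ranging error at distance l.
    Implements delta_d(l) = gamma * sigma_avg(l) and center(x) +/- delta_d(x).
    """
    x = np.asarray(x, dtype=float)
    center = sum(c * x ** i for i, c in enumerate(center_coefs))
    delta_d = gamma * np.vectorize(sigma_avg)(x)
    return center + delta_d, center - delta_d

# Example: cubic center line, ranging error growing linearly with distance.
upper, lower = focus_region_bounds(
    center_coefs=[0.0, 0.01, 1e-4, 1e-6],
    x=np.linspace(0.0, 100.0, 11),
    gamma=0.8,
    sigma_avg=lambda l: 0.02 + 0.005 * l)
```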
Finally, it is noted that the above embodiments are intended merely to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described with reference to preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein; all obvious changes derived from the technical solution of the invention remain within its protective scope.

Claims (9)

1. A matching fusion method based on a lane line point set and an attention mechanism is characterized by comprising the following steps:
S1, receiving the vehicle position and state information, and executing step S2 if the automatic driving system is in the available state;
S2, receiving the road data and the lane line point sets detected by the sensors, and processing the lane line point sets;
S3, traversing the lane line point sets detected by the sensors, and judging whether each point set is in the attention focusing area;
S4, storing the data of lane line point sets inside the attention focusing area into the focus-domain to-be-fitted point set storage area and the data of point sets outside it into the non-focus-domain to-be-fitted point set storage area, fitting each group separately, numbering the fitted lane lines, calculating confidences, and taking the weighted average of the point confidences as the confidence of each fitted lane line;
S5, merging lane lines: receiving and numbering all the lane lines fitted in S4;
S6, outputting the global fused lane lines to the predictive planning control system according to the predictive planning control request;
S7, memorizing N frames of fused lane lines, uniformly predicting the memorized frames to the current time, and then performing inter-frame matching;
S8, classifying attention: taking the fused lane lines in the focus-domain to-be-fitted point set storage area as primary-attention lane lines and the fused lane lines in the non-focus-domain to-be-fitted point set storage area as secondary-attention lane lines;
S9, calculating the center line of the attention focusing area, and determining the attention focusing area according to the sensor ranging error.
2. The matching fusion method based on the lane line point set and the attention mechanism according to claim 1, wherein in step S2 the road data comprises drivable-area, guardrail, and road-edge data; and abnormal lane line points lying outside the guardrails and road edges are removed from the point set according to this data.
3. The matching fusion method based on the lane line point set and the attention mechanism as claimed in claim 1, wherein in step S4, the data of the lane line point set stored in the to-be-fitted point set storage region of the focus region is processed, the to-be-fitted point set storage region of each focus region is traversed, and the RANSAC algorithm is adopted to fit the polynomial function to the lane line point set of each focus region in the to-be-fitted storage region.
4. The matching fusion method based on the lane line point set and the attention mechanism as claimed in claim 3, wherein RANSAC algorithm is adopted to process the lane line point set data stored in the storage area of the point set to be fitted in the focusing area, including removing or thinning points with long headway, removing points with confidence coefficient smaller than the adaptive threshold value, and removing dark gray tire imprints and old lane lines, so as to reduce the iteration number of RANSAC algorithm.
5. The matching fusion method based on the lane line point set and attention mechanism as claimed in claim 4, wherein the adaptive confidence threshold is segmented by length, and the adaptive confidence threshold AT_conf for the lane line points of each segment is:

AT_conf = [equation rendered as an image in the original]

where Conf_point is the confidence of a lane line point; λ ∈ (0,1] is the rejection-region factor; m is the number of lane line points obtained by the sensor within the segment after the lane line is segmented by length; and Conf_LLimit is the lower limit of the adaptive confidence threshold.
6. The matching fusion method based on the lane line point set and attention mechanism as claimed in claim 1, wherein in step S4, the step of processing the lane line point set data stored in the unfocused domain to-be-fitted point set storage area comprises:
S41, processing the lane line point set data stored in the non-focus-domain to-be-matched area: the front-half-perimeter lane line point sets that are not in the attention focusing area are sorted by length, and the point sets whose length exceeds the adaptive threshold are selected as the front-half-perimeter long lane lines long_FP(i) (i = 1,...,m), while the rest serve as the front-half-perimeter short lane lines short_FP(i) (i = 1,...,n);

in order from long to short, each front-half-perimeter long lane line in the non-focus-domain to-be-fitted point set storage area is selected in turn as the reference, and the remaining long lane lines and the front-half-perimeter short lane lines are traversed for matching; when all front-half-perimeter lane lines have been traversed and matched, the method proceeds to S42;

S42, processing the lane line point set data stored in the non-focus-domain to-be-matched area: the rear-half-perimeter lane line point sets that are not in the attention focusing area are sorted by length, and the point sets whose length exceeds the adaptive threshold are selected as the rear-half-perimeter long lane lines long_RP(i) (i = 1,...,m), while the rest serve as the rear-half-perimeter short lane lines short_RP(i) (i = 1,...,n);

in order from long to short, each rear-half-perimeter long lane line in the non-focus-domain to-be-fitted point set storage area is selected in turn as the reference, and the remaining long lane lines and the rear-half-perimeter short lane lines are traversed for matching; when all rear-half-perimeter lane lines have been traversed and matched, the method proceeds to S43;

S43, taking the front-half-perimeter fused lane lines as references, the rear-half-perimeter fused lane lines of the vehicle are traversed for matching; if a front-half-perimeter lane line point set can be matched with a rear-half-perimeter lane line point set, all matched point sets are merged, unmatched abnormal points are eliminated, and a polynomial function is fitted.
7. The matching fusion method based on the lane line point set and the attention mechanism as claimed in claim 6, wherein the lane line length adaptive threshold used when processing the lane line point set data stored in the non-focus-domain to-be-fitted point set storage area is:

LTH_line = [equation rendered as an image in the original]

where LTH_line is the calculated lane line length threshold; δ ∈ (0,1] is the rejection-region factor; n is the total number of lane lines in the non-focus-domain to-be-fitted point set storage area; and LTH_LLimit is the lower limit of the adaptive length threshold.
8. The matching fusion method based on the lane line point set and the attention mechanism, wherein in step S7, inter-frame matching is performed on the lane lines fitted with nth-order polynomial curves, according to:

[inequality rendered as an image in the original]

where A_ip1 is the ith coefficient of the nth-order polynomial in the later frame, A_ip2 is the ith coefficient of the nth-order polynomial in the earlier frame, Tr_IFM is the tolerance threshold for inter-frame matching, and ΔA_inon is the normalization factor for the inter-frame matching error of the ith polynomial coefficient;

when the inequality holds, inter-frame matching of the memorized lane line curve succeeds, and the curve is stored into the memorized lane line sequence.
9. The matching fusion method based on the lane line point set and the attention mechanism according to claim 1, wherein in step S9, the center line of the attention focusing area is calculated from the predicted driving direction, the main attention lane coefficients, and the forgetting factor; wherein,
for the predicted driving direction, a main attention lane line that deviates too far from the vehicle's predicted driving direction is excluded from the calculation of the attention focusing area's center line; a lane line is counted toward the center line only when the following discriminant inequality holds:
|A_0 + A_1·v_h·t_pred + (2A_2 − Y_r)·(v_h·t_pred)²| < Tr_pred
where v_h is the vehicle speed, Y_r the vehicle yaw rate, t_pred the set prediction time, A_0, A_1, A_2 the fitted lane line equation coefficients, and Tr_pred the discrimination threshold on the lane line's lateral position relative to the vehicle after the prediction time t_pred;
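The discriminant above is given in full, so it transcribes directly; only the function and argument names below are illustrative.

```python
def in_predicted_direction(a0, a1, a2, v_h, y_r, t_pred, tr_pred):
    """Count a lane line toward the focus-area center line only if its
    predicted lateral offset after t_pred stays within Tr_pred."""
    x = v_h * t_pred                     # predicted longitudinal travel
    return abs(a0 + a1 * x + (2.0 * a2 - y_r) * x * x) < tr_pred
```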
the forgetting factor of the current frame is calculated and stored, with at most n_m frames retained, according to:
[Equation image FDA0003718577460000031: forgetting factor formula]
where β_b is the forgetting factor reference value; Vr_pi is the far-end visible distance of the lane line in the driving direction; v_hpi is the vehicle speed; conf_pi is the confidence of the current frame's main attention lane line; dt_pi is the time difference between the current frame and the previous frame; μ is the safety threshold; and i is the frame index (i = 1, ..., n_m);
the center line of the attention focusing area is calculated and expressed as a polynomial of third order or above, its coefficients given by:
[Equation image FDA0003718577460000032: center line coefficient formula]
where n_m is the upper limit on the number of memorized frames; when m < n_m − 1 and (β_p1 + β_p1·β_p2 + … + β_p1·β_p2·…·β_pm) > Tr_memo, with β ∈ (0,1),
[Equation image FDA0003718577460000041]
where Tr_memo is the memory tolerance threshold;
[Equation image FDA0003718577460000042] is the polynomial coefficient vector of the main attention lane line of the i-th frame in the matching memory sequence of n_m historical frames;
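The coefficient formula itself is behind the equation images above; below is a hedged sketch, assuming the center line coefficients are a weighted average of the historical coefficient vectors with cumulative-product forgetting weights w_i = β_p1·…·β_pi, the form suggested by the partial-sum test against Tr_memo in this claim.

```python
import numpy as np

def focus_centerline_coeffs(coeff_history, betas):
    """Assumed combination: weight the i-th frame's coefficient vector
    by the running product of forgetting factors up to frame i."""
    weights = np.cumprod(np.asarray(betas, dtype=float))   # w_i = Πβ_pj
    coeffs = np.asarray(coeff_history, dtype=float)        # shape (m, n+1)
    return weights @ coeffs / weights.sum()
```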
the lateral width of the attention focusing region is determined from the sensor ranging error as:
Δd(l) = γ·σ_avg(l)
where Δd(l) is half the lateral width of the focusing region at distance l, γ ∈ (0,1] is a safety factor set according to the sensor's performance characteristics, and σ_avg(l) is the mean sensor ranging error at distance l;
the center line of the attention focusing region is expressed as an nth-order polynomial, and the region between the following two curves is the attention focusing region:
y_up(x) = A_0pf + A_1pf·x + A_2pf·x² + A_3pf·x³ + … + A_npf·xⁿ + Δd(x)
y_low(x) = A_0pf + A_1pf·x + A_2pf·x² + A_3pf·x³ + … + A_npf·xⁿ − Δd(x).
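The boundary construction transcribes directly; the sketch below assumes `a_pf` holds the fitted coefficients [A_0pf, ..., A_npf] and `sigma_avg` is a callable ranging-error model σ_avg(l).

```python
import numpy as np

def focus_region_bounds(a_pf, gamma, sigma_avg, x):
    """Upper and lower boundary curves of the attention focusing
    region: fitted center line ± Δd(x), with Δd(x) = γ·σ_avg(x)."""
    center = np.polyval(np.asarray(a_pf)[::-1], x)  # polyval wants A_n first
    half_w = gamma * sigma_avg(x)
    return center + half_w, center - half_w
```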
CN202210752033.6A 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism Active CN115131968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210752033.6A CN115131968B (en) 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210752033.6A CN115131968B (en) 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism

Publications (2)

Publication Number Publication Date
CN115131968A 2022-09-30
CN115131968B CN115131968B (en) 2023-07-11

Family

ID=83380118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210752033.6A Active CN115131968B (en) 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism

Country Status (1)

Country Link
CN (1) CN115131968B (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180025234A1 (en) * 2016-07-20 2018-01-25 Ford Global Technologies, Llc Rear camera lane detection
CN107133600A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 A kind of real-time lane line detection method based on intra-frame trunk
CN110386146A (en) * 2018-04-17 2019-10-29 通用汽车环球科技运作有限责任公司 For handling the method and system of driver attention's data
CN108647664A (en) * 2018-05-18 2018-10-12 河海大学常州校区 It is a kind of based on the method for detecting lane lines for looking around image
CN109409202A (en) * 2018-09-06 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Robustness method for detecting lane lines based on dynamic area-of-interest
CN110001782A (en) * 2019-04-29 2019-07-12 重庆长安汽车股份有限公司 Automatic lane-change method, system and computer readable storage medium
US20210350705A1 (en) * 2020-05-11 2021-11-11 National Chiao Tung University Deep-learning-based driving assistance system and method thereof
CN111582201A (en) * 2020-05-12 2020-08-25 重庆理工大学 Lane line detection system based on geometric attention perception
CN111950467A (en) * 2020-08-14 2020-11-17 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN114531913A (en) * 2020-09-09 2022-05-24 华为技术有限公司 Lane line detection method, related device, and computer-readable storage medium
CN112241728A (en) * 2020-10-30 2021-01-19 中国科学院合肥物质科学研究院 Real-time lane line detection method and system for learning context information by adopting attention mechanism
CN112862845A (en) * 2021-02-26 2021-05-28 长沙慧联智能科技有限公司 Lane line reconstruction method and device based on confidence evaluation
CN113468967A (en) * 2021-06-02 2021-10-01 北京邮电大学 Lane line detection method, device, equipment and medium based on attention mechanism
CN113591618A (en) * 2021-07-14 2021-11-02 重庆长安汽车股份有限公司 Method, system, vehicle and storage medium for estimating shape of road ahead
CN114022863A (en) * 2021-10-28 2022-02-08 广东工业大学 Deep learning-based lane line detection method, system, computer and storage medium
CN114120069A (en) * 2022-01-27 2022-03-01 四川博创汇前沿科技有限公司 Lane line detection system, method and storage medium based on direction self-attention

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
XI LI et al.: "A Fast Detection Method For Polynomial Fitting Lane With Self-attention Module Added", 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), pages 46-51
LIU, LEI: "Research on Lane Line and Road Object Detection Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 5, pages 035-424
LYU, CHUAN: "Research on Lane Line Recognition and Tracking Method Based on Key-point Detection", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 1, pages 035-344
LI, GUOQING: "Research on Visual Perception Algorithms for Autonomous Driving Vehicles Based on Deep Learning", no. 5, pages 035-336
LI, CHAO et al.: "A Real-time Lane Line Detection Algorithm Based on Inter-frame Correlation", Computer Science, no. 2, pages 317-323

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118068357A (en) * 2024-04-19 2024-05-24 智道网联科技(北京)有限公司 Road edge fusion processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115131968B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
Meuter et al. A decision fusion and reasoning module for a traffic sign recognition system
CN103020956B (en) Image matching method for judging Hausdorff distance based on decision
RU2764708C1 (en) Methods and systems for processing lidar sensor data
CN112668602A (en) Method, device and machine-readable storage medium for determining a quality level of a data set of a sensor
US12012102B2 (en) Method for determining a lane change indication of a vehicle
CN113591618A (en) Method, system, vehicle and storage medium for estimating shape of road ahead
JP2020067698A (en) Partition line detector and partition line detection method
CN114296095A (en) Method, device, vehicle and medium for extracting effective target of automatic driving vehicle
Redmill et al. A lane tracking system for intelligent vehicle applications
Fang et al. Camera and LiDAR fusion for on-road vehicle tracking with reinforcement learning
CN116758153A (en) Multi-factor graph-based back-end optimization method for accurate pose acquisition of robot
CN115131968A (en) Matching fusion method based on lane line point set and attention mechanism
Zhang et al. MixedFusion: An efficient multimodal data fusion framework for 3-D object detection and tracking
Zhao et al. Robust online tracking with meta-updater
CN113942503A (en) Lane keeping method and device
CN113895439A (en) Automatic driving lane change behavior decision method based on probability fusion of vehicle-mounted multisource sensors
KR20220168061A (en) Apparatus for controlling a vehicle, system having the same and method thereof
Duan [Retracted] Deep Learning‐Based Multitarget Motion Shadow Rejection and Accurate Tracking for Sports Video
CN115719485A (en) Road side traffic target detection method based on category guidance
CN115841514A (en) Automatic parking method, device and equipment
Sajjad et al. A Comparative Analysis of Camera, LiDAR and Fusion Based Deep Neural Networks for Vehicle Detection
CN118261915B (en) Jiao Lugong sequence state real-time detection method based on lightweight local feature matching
CN118259312B (en) Laser radar-based vehicle collision early warning method
US20230410490A1 (en) Deep Association for Sensor Fusion
US20240005647A1 (en) Vehicle and Control Method Thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant