[go: up one dir, main page]
More Web Proxy on the site http://driver.im/

CN115131968B - Matching fusion method based on lane line point set and attention mechanism - Google Patents

Matching fusion method based on lane line point set and attention mechanism Download PDF

Info

Publication number
CN115131968B
CN115131968B CN202210752033.6A CN202210752033A CN115131968B CN 115131968 B CN115131968 B CN 115131968B CN 202210752033 A CN202210752033 A CN 202210752033A CN 115131968 B CN115131968 B CN 115131968B
Authority
CN
China
Prior art keywords
lane line
lane
point set
attention
fitted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210752033.6A
Other languages
Chinese (zh)
Other versions
CN115131968A (en
Inventor
李锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210752033.6A priority Critical patent/CN115131968B/en
Publication of CN115131968A publication Critical patent/CN115131968A/en
Application granted granted Critical
Publication of CN115131968B publication Critical patent/CN115131968B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0129Traffic data processing for creating historical data or processing based on historical data
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a matching fusion method based on a lane line point set and an attention mechanism, which comprises the following steps of S1, receiving vehicle position and state information, and executing step S2 if an automatic driving system is started; s2, receiving road data and a lane line point set detected by a sensor, and processing the lane line point set; s3, traversing the lane line point sets detected by the sensors, and judging whether the lane line point sets are in the attention focusing area or not; and S4, storing the data of the lane line point set in the attention focusing area into a focusing area point set storage area to be fitted, otherwise storing into a non-focusing area point set storage area to be fitted, respectively fitting, numbering the fitted lane lines, calculating the confidence coefficient, and taking the weighted average value of the confidence coefficient of each point as the confidence coefficient of the lane lines after fitting. The invention can effectively reduce the matching search space, reduce the risks of false matching and fusion, and improve the stability and safety of automatic driving vehicle path planning and steering wheel control.

Description

Matching fusion method based on lane line point set and attention mechanism
Technical Field
The invention relates to the technical field of vehicle control, in particular to a matching fusion method based on a lane line point set and an attention mechanism.
Background
Autopilot system safety relies on the correct output of each sensor. For example, a single sensor such as a front-view camera or a front-view imaging laser radar can be difficult to continuously and stably detect lane lines due to the conditions of a field angle range, bad weather, shielding and the like, and the front-view camera can be influenced by factors such as illumination conditions, light and shade alternation and the like; the sensor system (such as a multi-camera periscope system) can utilize redundant information formed by detection results of a plurality of sensors (such as cameras) in different directions to match and fuse, so that the capability of continuously and stably detecting lane lines is improved, and the safety of automatic driving path planning is improved. The key difficulty of the multisensor is that each sensor can only obtain local information, how the multisensor performs matching and fusion, the dimension of the matching search space of the multisensor is reduced, and according to the results of bionic and psychological research, the human brain automatically ignores low-probability and low-value information by using an attention mechanism when processing the information, and forgets and associates context to reduce the dimension of the matching search space to fuse the information, so that the lane line matching fusion method based on the global attention mechanism is necessary for high-level automatic driving.
The invention patent with publication number CN112154449A discloses a lane line fusion method, a lane line fusion device, a vehicle and a storage medium, wherein the lane line fusion method comprises the steps of acquiring an environment image around a movable platform, and obtaining an initial lane line set of the movable platform according to the environment image (S110); performing fitting optimization on initial lane line data in the initial lane line set to obtain a target lane line set; wherein, the fitting optimization comprises: performing fitting optimization according to the initial lane line data and the historical lane line data in the historical lane line set to obtain a target lane line set (S120); the set of target lane lines includes lane lines that are not parallel to each other. But this patent has the following problems:
1. the initial lane line and the historical lane line are directly utilized for fusion, and recognition delay is generated due to the historical lane line data when the lane line information is suddenly changed under the working conditions of visual field and illumination suddenly changed, and errors and noise (mismatching) of the historical lane line data can also influence the accuracy of the lane line after current fusion.
2. When the recognition accuracy and the measurement accuracy of the sensor are gradually improved, the effect of the historical lane line data is reduced, and when the recognition accuracy of the sensor is close to 100% and the measurement accuracy is in millimeter level, the historical lane line data is directly used for lane line fusion or is used for lane line fusion through prediction, and delay, uncertainty and noise are introduced instead.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to solve the technical problems that: how to provide a matching fusion method based on a lane line point set and an attention mechanism, how to effectively and accurately match and fuse lane lines in various scenes, effectively reduce matching search space, reduce risks of incorrect matching and fusion, and improve stability and safety of automatic driving vehicle path planning.
In order to solve the technical problems, the invention adopts the following technical scheme:
a matching fusion method based on a lane line point set and an attention mechanism comprises the following steps:
s1, receiving vehicle position and state information, and executing a step S2 if an automatic driving system is in an available state;
s2, receiving road data and a lane line point set detected by a sensor, and processing the lane line point set;
s3, traversing the lane line point sets detected by the sensors, and judging whether the lane line point sets are in the attention focusing area or not;
s4, storing the data of the lane line point set in the focusing area into a focusing area point set storage area to be fitted, storing the data not in the focusing area into a non-focusing area point set storage area to be fitted, fitting respectively, numbering the fitted lane lines, calculating the confidence coefficient, and taking the weighted average value of the confidence coefficient of each point as the confidence coefficient of the lane lines after fitting;
s5, merging the lane lines, receiving all the fitted lane lines in the S4 and numbering;
s6, outputting a global fusion lane line to a predictive planning control system according to the predictive planning control request;
s7, memorizing N frames of fusion lane lines, uniformly predicting the memorized N frames of fusion lane lines to the current moment, and then carrying out inter-frame matching;
s8, attention grading, wherein a fusion lane line in a focus area to-be-fitted point set storage area is used as a primary attention lane line, and a fusion lane line in a non-focus area to-be-fitted point set storage area is used as a secondary attention lane line;
s9, calculating the central line of the attention focusing area, and determining the attention focusing area according to the distance measurement error of the sensor.
Further, in step S2, the road data includes a travelable area, guardrail, and road edge data; and removing abnormal points outside the guardrail and the road edge data in the lane line point set according to the drivable area, the guardrail and the road edge data.
Further, in step S4, lane line point set data stored in the focus region to-be-fitted point set storage area is processed, the focus region to-be-fitted point set storage area is traversed, and a RANSAC algorithm is adopted to fit a polynomial function to the lane line point sets of the focus regions in the to-be-fitted storage area.
Further, the RANSAC algorithm is adopted to process the lane line point set data stored in the to-be-fitted point set storage area of the focusing area, wherein the processing comprises the steps of eliminating or thinning the points with long head time distance, eliminating the points with confidence less than the self-adaptive threshold value, eliminating the dark grey tire marks and old lane lines, so that the iteration times of the RANSAC algorithm are reduced.
Further, the adaptive threshold value of the confidence is segmented by length, and the adaptive threshold value AT of the confidence of the lane line point of each segment conf The method comprises the following steps:
Figure BDA0003718577470000031
in Conf point Confidence of lane line points; lambda E (0, 1)]Is a reject domain factor; m is the number of lane line points obtained by the sensor in the section after the lane line is segmented according to the length; conf LLimit Is the lower limit of the adaptive threshold for confidence.
Further, in step S4, the step of processing the lane line point set data stored in the non-focusing region point set storage area to be fitted includes:
s41, processing the data of the lane line point sets stored in the non-focusing area to be matched, sorting the lane line point sets which are not in the focusing area of attention in the first half cycle according to the length, and screening the lane line point sets with the length larger than the self-adaptive threshold value as the lane line of the first half cycle long FP (i) (i=1, once again, m), otherwise, the line is used as a front half cycle short lane line short FP(i)(1,...,n);
Sequentially selecting the first half perimeter lane lines in the non-focusing region point set storage area to be fitted as a reference according to the sequence from long to short, traversing the rest lane lines and the first half perimeter short lane lines to match, and entering S42 when the current half perimeter lane lines are traversed and matched;
s42, processing the data of the lane line point sets stored in the non-focusing area to be matched, sorting the lane line point sets which are not in the focusing area of the attention in the second half cycle according to the length, and screening the lane line point sets with the length larger than the self-adaptive threshold value as the lane line of the second half cycle long RP (i) (i=1, once again, m), otherwise, the line is used as a short lane line of the second half cycle short RP(i)(1,...,n);
Sequentially selecting the second half perimeter lane lines in the non-focusing region point set storage area to be fitted as a reference according to the sequence from long to short, traversing the rest lane lines and the second half perimeter short lane lines to match, and entering S43 when the second half perimeter lane lines are traversed and matched;
s43, traversing and matching a second half cycle fusion lane line of the vehicle by taking the first half cycle fusion lane line as a reference, matching the current half cycle lane line point set with the second half cycle lane line point set, merging all matched lane line point sets, and removing the abnormal point fitting polynomial function.
Further, the formula of the lane line length self-adaptive threshold value in processing the lane line point set data stored in the non-focusing region point set storage area to be fitted is as follows:
Figure BDA0003718577470000032
in the formula, LTH line Calculating a length for the lane line; delta epsilon (0, 1)]Rejecting domain factors; n is the total number of lane lines in a storage area of a point set to be fitted in a non-focusing area; LTH (Low temperature Co-fired ceramic) LLimit Is the lower limit of the adaptive threshold for confidence.
Further, in step S7, the inter-frame matching is performed by fitting the polynomial curve of degree n to the lane line, and the formula is:
Figure BDA0003718577470000041
wherein A is ip1 Coefficients representing the n th order polynomial in the following frame, A ip2 Coefficients representing the n th order polynomial in the previous frame, tr IFM Tolerance threshold representing inter-frame matching; ΔA inon A normalization factor representing an interframe matching error of an ith coefficient of the polynomial curve;
when the inequality is established, the inter-frame matching of the memory lane line curves is successful, and the memory lane line sequences are stored.
Further, the method according to claim 1, wherein in step S9, the attention focusing area center line is calculated according to the predicted driving direction, the main attention lane coefficient and the forgetting factor; wherein,,
the main attention lane line with the excessive difference from the predicted vehicle driving direction is not counted in the attention focusing area central line calculation, and when the following discriminant function inequality is established, the lane line is counted as the attention focusing area central line calculation:
|A 0 +A 1 v h t pred +(2A 2 -Y r )(v h t pred ) 2 |<Tr pred
in the formula, v h The speed of the host vehicle is Y r Yaw for the host vehicleRate, t pred To set the prediction time, A 0 、A 1 、A 2 Respectively the fitted lane line equation coefficients, tr pred For predicted t pred Judging a threshold value of the distance between the lane line and the transverse position of the vehicle after the time;
calculating forgetting factor of current frame and storing n at most m A frame having the formula:
Figure BDA0003718577470000042
wherein beta is b A forgetting factor reference value; vr (Vr) pi The far-end visible distance of the lane line in the driving direction is set; v hpi For the speed of the host vehicle, conf pi Confidence, dt, of the main attention lane line for the current frame pi μ is a safety threshold, i is the number of frames (i=1,.. m );
Calculating the central line of the attention gathering area, and expressing the central line by using polynomials with third order and more, wherein the calculation formula of the coefficients is as follows:
Figure BDA0003718577470000051
wherein n is m For the upper limit of the memory frame number, when m is less than n m -1,(β p1p1 β p2 +...(β p1 β p2 ...β pm ))>Tr memo When E (0, 1),
Figure BDA0003718577470000052
wherein Tr is memo Is a memory margin threshold; />
Figure BDA0003718577470000053
Is n m Polynomial coefficient vectors of main attention lane lines of the i th frame in the matching memory sequence of the history frames;
the formula for determining the lateral width of the focus area of attention from the sensor range error is:
Δd(l)=γσ avg (l)
where Δd (l) is half the lateral width of the focal zone, γ ε (0, 1)]Sigma for a safety factor set according to sensor performance characteristics avg (l) The average value of the distance measurement error of the sensor on the distance l;
the central line of the attention focusing area is expressed by using an n-degree polynomial, the area between the lower two curves is the attention focusing area,
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +...+A npf x n +Δd(x)
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +...+A npf x n -Δd(x)。
compared with the prior art, the invention has the beneficial effects that:
1. according to the invention, through the attention focusing area based on the forgetting mechanism, the data of the lane line point set in the attention focusing area is stored in the point set storage area to be fitted in the focusing area, and abnormal points are removed by using the RANSAC algorithm or an improved algorithm thereof for fitting, so that points with low confidence and points detected by mistake and external points are effectively removed. And storing the internal data of the lane line point set which is not in the focusing area of the attention into a point set storage area to be fitted in a non-focusing area, and carrying out local fusion on the first half cycle and the second half cycle (the first half cycle comprises a forward sensor and a side forward sensor, the second half cycle comprises a backward sensor and a side backward sensor) to obtain half-local information and carrying out global fusion. The lane line point set fusion in the attention focusing area and the lane line point set fusion in the unfocused area can be calculated asynchronously in parallel, so that the delay of the lane line point set matching fusion in the attention focusing area is reduced. The lane line history data does not directly participate in the fusion of the lane lines, because the fusion of the lane line history data inevitably introduces delay and noise, and the lane line abrupt change to a certain extent cannot be well coped with. Instead, historical data is used to generate a forgetting mechanism-based attention focus area to reduce the matching search space dimension. Therefore, under various scenes, the matching search space can be effectively reduced, the lane line point sets detected by the multiple sensors are rapidly matched and fused, the risks of error matching and fusion are reduced, the fusion delay is reduced, and the stability and safety of automatic driving vehicle path planning and control are improved.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of a method of matching fusion based on lane line point sets and an attention mechanism of the present invention;
FIG. 2 is a schematic diagram of sensor distribution in the present invention;
FIG. 3 is a schematic diagram of a use scenario in the present invention;
fig. 4 is a schematic view of the focus area of attention in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Because the lane lines are detected by a plurality of sensors (such as cameras) in different directions, detection results represented by the point set and the confidence coefficient of each point are obtained, and redundant information formed by the detection result data can be obtained.
However, the lane lines are of an elongated structure, possibly penetrating through the vehicle body and being blocked by various targets, and the local lane lines (point sets) and characteristic parameters thereof detected by different cameras are difficult to be correctly matched and fused in a scene with actual complexity and variability, so that global lane line information is formed to be output. When the number of sensors is increased, the dimension of the matched traversal search space among the point sets of different lane lines is also greatly increased, the operation time is influenced, and the incorrect matching and fusion of the lane lines detected by different cameras can cause serious curvature and course angle detection errors, and cause incorrect path planning results, so that serious safety accidents are generated.
The method embodiments are derived and described based on a vehicle coordinate system, but changing the coordinate system does not affect the actual physical meaning and effect of the method.
As shown in fig. 1, 2 and 3, the present embodiment provides a matching fusion method based on a lane line point set and an attention mechanism, which includes the following steps:
s1, receiving vehicle position and state information, and executing a step S2 if an automatic driving system is in an available state;
s2, receiving road data and a lane line point set detected by a sensor, and processing the lane line point set; when k sensors exist and the kth sensor detects a root line, the lane line point set is as follows:
Figure BDA0003718577470000071
the lane line point set can be directly obtained through the detection result of the sensor, or can be indirectly obtained through sparse sampling and distance calculation of the sensor semantic/instance segmentation result. The lane line point set output by the sensor already contains the position information of each point in the vehicle coordinate system, or the position information of the lane line point in the vehicle coordinate system can be indirectly calculated.
S3, traversing the lane line point sets detected by the sensors, and judging whether the lane line point sets are in the attention focusing area or not;
s4, storing the data of the lane line point set in the focusing area into a focusing area point set storage area to be fitted, storing the data not in the focusing area into a non-focusing area point set storage area to be fitted, fitting respectively, numbering the fitted lane lines, calculating the confidence coefficient, and taking the weighted average value of the confidence coefficient of each point as the confidence coefficient of the lane lines after fitting;
s5, merging the lane lines, receiving all the fitted lane lines in the S4 and numbering;
s6, outputting global fusion lane lines, front half-cycle lane lines and rear half-cycle lane lines of the vehicle to a predictive planning control system according to the predictive planning control request;
s7, memorizing N frames of fusion lane lines, uniformly predicting the memorized N frames of fusion lane lines to the current moment, and then carrying out inter-frame matching.
S8, attention grading, wherein a fusion lane line in a focus area to-be-fitted point set storage area is used as a primary attention lane line, and a fusion lane line in a non-focus area to-be-fitted point set storage area is used as a secondary attention lane line;
when the secondary attention lane line is in the continuous N-frame fusion lane lines, the secondary attention lane line can be changed into a main attention lane line when the frames are matched stably;
when the primary attention lane line is in the continuous N-frame fusion lane line, the secondary attention lane line is changed to whenever 1 frame cannot be matched with each other.
S9, calculating the central line of the attention focusing area, and determining the attention focusing area according to the distance measurement error of the sensor.
According to the invention, through the attention focusing area based on the forgetting mechanism, the data of the lane line point set in the attention focusing area is stored in the point set storage area to be fitted in the focusing area, and the exterior points are removed by using the RANSAC algorithm or an improved algorithm thereof for fitting, so that the points with low confidence and the exterior points detected by mistake are effectively removed. And storing the internal data of the lane line point set which is not in the focusing area of the attention into a point set storage area to be fitted in a non-focusing area, and carrying out local fusion on the first half cycle and the second half cycle (the first half cycle comprises a forward sensor and a side forward sensor, the second half cycle comprises a backward sensor and a side backward sensor) to obtain half-local information and carrying out global fusion. The lane line history data is not directly involved in the fusion of the lane lines, and delay and noise are necessarily introduced in the fusion of the lane line history data, so that the lane line abrupt change to a certain extent cannot be well coped with. Instead, historical data is used to generate a forgetting mechanism-based attention focus area to reduce the matching search space dimension. Therefore, under various scenes, the matching search space can be effectively reduced, the lane line point sets detected by the multiple sensors are improved to be matched and fused rapidly, the risks of error matching and fusion are reduced, and the stability and safety of automatic driving vehicle path planning and steering wheel control are improved.
Referring to fig. 2, the sensor in the invention may be a panoramic camera system, which comprises a front view, a left rear view, a right front view, a right rear view, and a system formed by 6 cameras for rear view (the cameras may be replaced by other sensors capable of detecting lane lines (point sets), such as imaging lidar, to output the lane line point sets, so as not to affect the use of the method.
The method is suitable for fusion of lane line point detection results of a multi-vision sensor (the detection results comprise coordinates of a lane line point set, confidence coefficient of each point and color gray saturation), and is also suitable for fusion of lane line point detection results of other sensors such as imaging laser radar, high-line-number laser radar and the like and combination sensors thereof (lane line point set data similar to the lane line point detection results of a camera and the confidence coefficient of each point can be generated).
The method is also suitable for multi-sensor fusion of the road edge and the generalized road boundary point set.
In this embodiment, the road data includes a drivable area, a guardrail, and road edge data; and removing abnormal points outside the guardrail and the road edge data in the lane line point set according to the drivable area, the guardrail and the road edge data.
Therefore, the lane line point set data passes through the multi-source information verification of the guardrail and the road edge and the drivable area, the lane lines crossing the lane lines opposite to the guardrail are removed, and the risk of mismatching and fusion is reduced.
Preferably, in this embodiment, more than three quarters of the points are in the lane line point set in the focusing area, and are stored in the point set storage area to be fitted in the focusing area.
In this embodiment, the lane line point set data stored in the focusing region point set storage area to be fitted and the lane line point set data stored in the non-focusing region point set storage area to be fitted may be processed asynchronously in parallel to reduce delay of matching and fusion of the lane line point sets in the focusing region of attention.
For ease of understanding, the description will now be made separately.
I. Specifically, for lane line point set data linefcusp (i, j) (i=1,., r; j=1,..s (r)) (r focus fields, s (r) points per focus field) are processed, the storage area of the set of points to be fitted of each focus field is traversed, and the RANSAC (random sample consensus) algorithm or the modified algorithm RANSAC fitting polynomial function is adopted for the set of lane line points of each focus field in the storage area to be fitted.
Preferably, the steps of the RANSAC algorithm or its modified algorithm are adopted:
1) Selecting an initial point set and executing the step 2).
2) And (3) selecting a group of a points from the initial point set by adopting random sampling, and fitting a polynomial function by using a least square method to execute the step (3).
3) And calculating the nearest distance between each point and the polynomial function curve, removing the outer points with the distances larger than the threshold value, and calculating the number of points which are not removed. Executing step 4)
4) And when the number of the points which are not removed is greater than half of the total points, fitting the curve again, calculating the sum of the distances between the curve and the points, and storing the confidence mean value of the points, and executing the step 5), otherwise, executing the steps 2) and 3) again.
5) When the sum of the distances between the curve and each point is smaller than the threshold value, the curve is selected as an alternative optimal curve and stored. Otherwise, step 2), 3), and 4) are performed again).
6) And when the random sampling frequency exceeds the set threshold value c and at least one alternative optimal curve exists in the step 2), outputting the alternative optimal curve as an optimal curve. Otherwise, repeating the steps 2), 3), 4) and 5) until b curves are fitted. And selecting one of the b curves with the highest score as the current optimal lane line to output by adopting a weighted scoring mechanism in the RANSAC algorithm. The score is calculated based on the sum of the distances between the curve and each point, the confidence mean value of the point set, the number of lane line points, the length of the lane line and the proportion of the color normal points.
Preferably, the scoring formula is:
Score=e 1 D sum +e 2 ·Conf mean +e 3 ·n point +e 4 ·L lane +e 5 ·P color
in the formula e 1 ,...,e 5 As a weighting coefficient, D sum Conf is the sum of the distances between the normalized fitting curve and each point mean N is the confidence mean of the point set point For the normalized number of lane line points, L lane Calculating length for normalized lane lines, P color Is the proportion of the lane line color normal point (not gray, not black).
In the above step, a is the number of random extraction points preset, b is the upper limit of the alternative optimal curve (iteration termination), and c is the upper limit of the random sampling times (iteration termination).
Preferably, the problem with the original RANSAC algorithm is that when the number of points is too large, the number of times required to successfully select b curves is too large to reach a 99% probability. For the points in the focusing domain of the attention mechanism, a specially improved RANSAC (random sample consensus) algorithm based on the confidence coefficient of the lane line point set and fusing other information can be used for fitting the lane lines.
Specifically, the method for selecting the initial point set in the step 1) is improved as follows: for a lane line point set of each focusing domain in a storage area to be fitted, firstly removing points with confidence coefficient smaller than a self-adaptive threshold value; then, eliminating or thinning out points with the sampling more than 5s/Vehicle (average headway=average headway/average Vehicle speed); thinning samples points with the average headway larger than 2s/Vehicle (average headway=average headway/average Vehicle speed), particularly, for the sensor capable of detecting color, dark gray tire marks and old lane lines are removed, and lines with an excessive difference from white and yellow colors are removed. And then the RANSAC algorithm is used for fitting the polynomial function, so that the random point taking times of the RANSAC can be effectively reduced.
In the above method for calculating the adaptive threshold of Confidence in selecting the initial point set, the method is segmented according to the length, and each d (for an 8M camera, for example, within 50 meters, d=5, 50 meters to 100 meters, and d=10) meters is divided into a segment, and the lane line points of each segmentAdaptive Threshold (AT) for Confidence (Confidence) conf The method comprises the following steps:
Figure BDA0003718577470000101
in Conf point Confidence of lane line points; lambda E (0, 1)]Is a reject domain factor; m is the number of lane line points obtained by the sensor in the section after the lane line is segmented according to the length; conf LLimit Is the lower limit of the adaptive threshold for confidence.
When the improved RANSAC algorithm fits the polynomial, determining the polynomial degree of curve fitting according to the scene: preferably, when the sensor detects that the current scene is a structured road, the 3-degree polynomial curve of the lane line is represented, and the 3-degree polynomial function of the lane line obtained by fitting is as follows:
A(x)=A 0 +A 1 x+A 2 x 2 +A 3 x 3
when the sensor detects that the current scene is an unstructured road, the current scene is represented by a 5-degree polynomial curve, and a 5-degree polynomial function of a lane line obtained by fitting is as follows:
A(x)=A 0 +A 1 x+A 2 x 2 +A 3 x 3 +A 4 x 4 +A 5 x 5
numbering all the lane lines linelocus (i) (i=1,..once, h) after fitting, performing confidence calculation, and taking the weighted average of the confidence degrees of each point as the confidence degree of the lane lines after fitting.
II. Specifically, the step of processing the lane line point set data stored in the point set storage area to be fitted in the unfocused region includes:
s41, processing lane line point set data in a region to be matched in a non-focusing region, sorting lane line point sets which are not in a focusing region of attention in the first half cycle according to the length, and screening out lane line point sets with the length larger than an adaptive threshold value as lane line of the first half cycle long FP (i) (i=1, once again, m), otherwise, the line is used as a front half cycle short lane line short FP(i)(1,...,n);
Sequentially connecting the front half perimeter lane lines in the non-focusing region point set storage area to be fitted from long to short long FP (i) (i=1.,.. m) is selected as a reference to be used, traversing the rest long lane lines long FP (j) (j not equal i) and front half cycle lane line short FP (i) (1., etc., n) matching, specifically comprising:
1) If lane line long FP (i) (i=1.,.. m) can be combined with other front half perimeter lane lines long FP (j) (j not equal i) matching, merging all matched lane lines, removing unmatched abnormal point fit lane lines, and matching the matched lane line point set with the front half perimeter lane line long FP (j) (j not equal i) is combined into line long FP (i) serves as a new reference lane line. And the front half perimeter lane line long FP (j) (j not equal i) eliminates the reference lane line point set and does not correspond to the front half perimeter lane line any more long FP (j) (j not equal i), and performing the remaining lane matching traversal.
2) Lane line set line if the front half perimeter long FP (i) (i=1.,.. m) line set of lane line points capable of being matched with other front half cycle short lane lines Short FP (i) (i=1,., n) matched, merging the lane line points set on all the matches, eliminating the abnormal points of the mismatch, and performing lane line fitting.
3) If the lane line cannot be matched with the short lane line of the first half cycle and the length is larger than the self-adaptive threshold value, the lane line point set is directly used as an independent lane line to be fitted.
Numbering all the lane lines lineFP (i) (i=1..once., g) after fitting, and performing confidence calculation, wherein the weighted average of the confidence degrees of each point is used as the confidence degree of the lane lines after fitting.
And when the front half cycle lane lines of the vehicle are all traversed and matched, the process goes to S42.
S42, processing the data of the lane line point sets in the non-focusing area to-be-matched area, sorting the lane line point sets which are not in the focusing area of the attention in the second half cycle according to the length, and screening the lane line point sets with the length larger than the self-adaptive threshold value to be used as the lane line of the second half cycle long RP(i)(i=1,., m), and vice versa as a second half-cycle lane line short RP(i)(1,...,n);
Sequentially selecting the second half perimeter lane lines in the non-focusing region point set storage area to be fitted as a reference according to the sequence from long to short, traversing the rest lane lines and the second half perimeter short lane lines to match, and specifically, the method comprises the following steps:
1) If lane line long RP (i) (i=1.,.. m) can be combined with other rear half perimeter lane lines long RP (j) (j is not equal to i) is matched, combining all matched lane line points, removing unmatched abnormal point fit lane lines, and combining the matched lane lines and the second half perimeter lane lines into lines long RP (i) is used as a new reference lane line. The rear half perimeter lane line long RP (j) (j not equal to i) eliminates the reference lane line point set, and does not aim at line any more long RP (j) (j not equal i), and performing the matching traversal of the rest lane lines.
2) If the rear half circumference lane line can be matched with other rear half circumference short lane lines short And (3) on the matching of RP (i) (1, the..and n), merging all the lane line points matched, removing abnormal points which are not matched, and performing lane line fitting.
3) If the lane line cannot be matched with the short lane line of the second half cycle and the length is larger than the self-adaptive threshold value, the lane line point set is directly used as an independent lane line to be fitted.
Numbering all the lane lines line (i) (i=1,..h) after fitting, and performing confidence calculation, wherein each point confidence weighted mean value is used as the confidence of the lane line after fitting.
When the lane lines of the second half cycle are completely traversed and matched, S43 is entered;
s43, traversing and matching a second half cycle fusion lane line of the vehicle by taking the first half cycle fusion lane line as a reference, matching the current half cycle lane line point set with the second half cycle lane line point set, merging all matched lane line point sets, and removing the abnormal point fitting polynomial function. Preferably, when the sensor detects that the current scene is a structured road, the 3-degree polynomial curve of the lane line is represented, and when the sensor detects that the current scene is an unstructured road, the current scene is represented by the 5-degree polynomial curve.
Specifically, the formula of the lane line length self-adaptive threshold value in processing the lane line point set data stored in the non-focusing region point set storage area to be fitted is as follows:
Figure BDA0003718577470000121
in the formula, LTH line Calculating a length for the lane line; delta epsilon (0, 1)]Rejecting domain factors; n is the total number of lane lines in a storage area of a point set to be fitted in a non-focusing area; LTH (Low temperature Co-fired ceramic) LLimit Is the lower limit of the adaptive threshold for confidence.
The purpose of the length-wise screening with the adaptive threshold formula is to: because the particularly short lane lines detected by the sensor are not easily matched and there is a certain probability that other identification lines, such as unclear arrow signs or unclear drain lines, are present. Therefore, the lane lines which are extremely short and difficult to match are not selected as the matching references from the matching fusion strategy.
In the specific implementation, in step S7, the inter-frame matching is performed by fitting the polynomial curve of degree n to the lane line, where the formula is:
Figure BDA0003718577470000122
wherein A is ip1 Coefficients representing the n th order polynomial in the following frame, A ip2 Coefficients representing the n th order polynomial in the previous frame, tr IFM Tolerance threshold representing inter-frame matching; ΔA inon A normalization factor representing an interframe matching error of an ith coefficient of the polynomial curve;
when the inequality is established, the inter-frame matching of the memorized lane line curves is successful, and the successfully matched lane line curves are memorized in the memorized lane line sequence.
When the n degree polynomial n is 3, the lane line can be fitted with a cubic polynomial curve for interframe matching, and the formula is as follows:
Figure BDA0003718577470000123
wherein A is 0p1 、A 1p1 、A 2p1 、A 3p1 Coefficients representing a fitted lane line cubic polynomial in the subsequent frame, A 0p2 、A 1p2 、A 2p2 、A 3p2 Coefficients, tr, representing the fitted lane line cubic polynomial in the previous frame IFM Tolerance threshold for inter-frame matching;
when the inequality is established, the inter-frame matching of the memorized lane line curves is successful, and the successfully matched lane line curves are memorized in the memorized lane line sequence.
Referring to fig. 4, in step S9, an attention focusing region center line is calculated from a predicted traveling direction, a main attention lane coefficient, and a forgetting factor; wherein,,
predicting the driving direction, and calculating a main attention lane line which is excessively different from the driving direction of the predicted vehicle and does not consider the central line of the attention focusing area; when the following discriminant function inequality holds, the lane line is calculated as the attention focusing region center line:
|A 0 +A 1 v h t pred +(2A 2 -Y r )(v h t pred ) 2 |<Tr pred
in the formula, v h The speed of the vehicle; y is Y r Yaw rate of the vehicle; t is t pred Setting a prediction time; a is that 0 ,A 1 ,A 2 Respectively the fitted lane line equation coefficients, tr pred For predicted t pred Judging a threshold value of the distance between the lane line and the transverse position of the vehicle after the time;
calculating forgetting factor of current frame and storing n at most m A frame having the formula:
Figure BDA0003718577470000131
wherein beta is b A forgetting factor reference value; vr (Vr) pi In the driving direction for lane lineAn upward distal visual distance; v hpi The speed of the vehicle; conf pi A main attention lane line confidence level for the current frame; dt (dt) pi Is the time difference between the current frame and the previous frame; μ is a safety threshold; i is the number of frames (i=1.. m ). That is, as the lane line seen in the history frame is longer and the vehicle speed is slower, the forgetting factor is smaller and the forgetting is slower, and the attention focusing region can be more effectively constructed using the history information using the forgetting factor.
And calculating the central line of the attention focusing area according to the predicted driving direction, the main attention lane line coefficient, the confidence coefficient and the forgetting factor. The bionic test result of human brain vision is the same as that of the bionic test result of human brain vision, the running direction is concerned, and the lower the confidence is, the faster the user forgets.
Calculating the central line of the attention gathering area, and expressing the central line by using polynomials with third order and more, wherein the calculation formula of the coefficients is as follows:
Figure BDA0003718577470000132
wherein n is m The upper limit of the memory frame number; when m < n m -1,(β p1p1 β p2 +...(β p1 β p2 ...β pm ))>Tr memo When E (0, 1),
Figure BDA0003718577470000133
wherein Tr is memo Is a memory margin threshold; />
Figure BDA0003718577470000134
Is n m Polynomial coefficient vectors of main attention lane lines of the i th frame in the matching memory sequence of the history frames;
the formula for determining the lateral width of the focus area of attention from the sensor range error is:
Δd(l)=γσ avg (l)
where Δd (l) is half the lateral width of the focal zone, γ ε (0, 1)]For setting according to sensor performance characteristicsFull coefficient, sigma avg (l) The average value of the distance measurement error of the sensor on the distance l;
when the center line of the attention focusing area is expressed by using a cubic polynomial, the area between the lower two curves is the attention focusing area,
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +Δd(x)
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 -Δd(x);
when the center line of the attention focusing area is expressed by using the polynomial of degree n, the area between the lower two curves is the attention focusing area,
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +...+A npf x n +Δd(x)
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +...+A npf x n -Δd(x)。
finally, it is noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and that although the present invention has been described with reference to the preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein. Obvious changes that are extended by the technical proposal of the invention are still within the scope of the invention.

Claims (9)

1. A matching fusion method based on a lane line point set and an attention mechanism is characterized by comprising the following steps:
s1, receiving vehicle position and state information, and executing a step S2 if an automatic driving system is in an available state;
s2, receiving road data and a lane line point set detected by a sensor, and processing the lane line point set;
s3, traversing the lane line point sets detected by the sensors, and judging whether the lane line point sets are in the attention focusing area or not;
s4, storing the data of the lane line point set in the focusing area into a focusing area point set storage area to be fitted, storing the data not in the focusing area into a non-focusing area point set storage area to be fitted, fitting respectively, numbering the fitted lane lines and calculating the confidence coefficient, and taking the weighted average value of the confidence coefficient of each point as the confidence coefficient of the lane lines after fitting;
s5, receiving all the fitted lane lines in the S4 and numbering;
s6, outputting the fitted lane lines to a predictive planning control system according to the predictive planning control request and taking the fitted lane lines as fusion lane lines;
s7, memorizing N frames of fusion lane lines, uniformly predicting the memorized N frames of fusion lane lines to the current moment, and then carrying out inter-frame matching;
s8, attention grading, wherein a fusion lane line in a focus area to-be-fitted point set storage area is used as a primary attention lane line, and a fusion lane line in a non-focus area to-be-fitted point set storage area is used as a secondary attention lane line; when the secondary attention lane line is in the continuous N-frame fusion lane lines, the secondary attention lane line can be changed into a main attention lane line when the frames are matched stably; when the main attention lane line is in the continuous N-frame fusion lane lines, the main attention lane line is converted into the secondary attention lane line as long as 1 frame of fusion lane line can not be matched;
s9, calculating the central line of the attention focusing area by using the main attention lane line, and determining the transverse width of the attention focusing area according to the distance measurement error of the sensor so as to determine the attention focusing area.
2. The method of claim 1, wherein in step S2, the road data includes travelable area, guardrail and road edge data; and removing abnormal points outside the guardrail and the road edge data in the lane line point set according to the drivable area, the guardrail and the road edge data.
3. The method according to claim 1, wherein in step S4, the lane line point set data stored in the focal domain point set storage area to be fitted is processed, the focal domain point set storage areas to be fitted are traversed, and the RANSAC algorithm is used to fit the polynomial function to the lane line point sets of the focal domains in the focal domain point set storage areas to be fitted.
4. The method for matching and fusing lane line point set and attention mechanism based on claim 3, wherein the method is characterized in that a RANSAC algorithm is adopted to process lane line point set data stored in a focus domain point set storage area to be fitted, and the method comprises the steps of eliminating or thinning points far away from a vehicle head time, eliminating points with confidence coefficient smaller than a self-adaptive threshold value, eliminating dark grey tire marks and old lane lines, so as to reduce the iteration times of the RANSAC algorithm.
5. The method of matching fusion based on lane point set and attention mechanism as claimed in claim 4, wherein the confidence adaptive threshold value is segmented by length, and the confidence adaptive threshold value AT of lane point of each segment conf The method comprises the following steps:
Figure FDA0004272351970000021
in Conf point Confidence of lane line points; lambda E (0, 1)]Is a reject domain factor; m is the number of lane line points obtained by the sensor in the section after the lane line is segmented according to the length; conf LLimit Is the lower limit of the adaptive threshold for confidence.
6. The method according to claim 1, wherein in step S4, the step of processing the lane line point set data stored in the non-focusing region point set storage area to be fitted comprises:
s41, for storing in non-focusing domainProcessing the data of the lane line point sets in the region to be matched, sorting the lane line point sets which are not in the attention focusing region in the first half cycle according to the length, and screening the lane line point sets with the length larger than the self-adaptive threshold value to be used as the lane line of the first half cycle long FP (i), i=1, m; otherwise, the line is used as a front half cycle short lane line short FP(i),i=1,...,n;
Sequentially selecting the first half perimeter lane lines in the non-focusing region point set storage area to be fitted as a reference according to the sequence from long to short, traversing the rest lane lines and the first half perimeter short lane lines to match, and entering S42 when the current half perimeter lane lines are traversed and matched;
s42, processing the data of the lane line point sets stored in the non-focusing area to be matched, sorting the lane line point sets which are not in the focusing area of the attention in the second half cycle according to the length, and screening the lane line point sets with the length larger than the self-adaptive threshold value as the lane line of the second half cycle long RP (i), i=1, m; otherwise, the line is used as a short lane line of the second half cycle short RP(i),i=1,...,n;
Sequentially selecting the second half perimeter lane lines in the non-focusing region point set storage area to be fitted as a reference according to the sequence from long to short, traversing the rest lane lines and the second half perimeter short lane lines to match, and entering S43 when the second half perimeter lane lines are traversed and matched;
s43, traversing and matching a second half cycle fusion lane line of the vehicle by taking the first half cycle fusion lane line as a reference, matching the current half cycle lane line point set with the second half cycle lane line point set, merging all matched lane line point sets, and removing the abnormal point fitting polynomial function.
7. The method of claim 6, wherein the formula of the lane line length adaptive threshold in processing the lane line point set data stored in the non-focusing domain point set storage area to be fitted is:
Figure FDA0004272351970000031
in the formula, LTH line Calculating a length for the lane line; delta epsilon (0, 1)]Rejecting domain factors; n is the total number of lane lines in a storage area of a point set to be fitted in a non-focusing area; LTH (Low temperature Co-fired ceramic) LLimit Is the lower limit of the adaptive threshold for confidence.
8. The method of claim 1, wherein in step S7, the matching is performed by fitting n-degree polynomial curves to the lane lines, and the formula is:
Figure FDA0004272351970000032
wherein A is ip1 Coefficients representing the n th order polynomial in the following frame, A ip2 Coefficients representing the n th order polynomial in the previous frame, tr IFM Tolerance threshold representing inter-frame matching; ΔA inon A normalization factor representing an interframe matching error of an ith coefficient of the polynomial curve;
when the inequality is established, the inter-frame matching of the memory lane line curves is successful, and the memory lane line sequences are stored.
9. The method according to claim 1, wherein in step S9, the attention focusing area center line is calculated according to the predicted traveling direction, the main attention lane coefficient, and the forgetting factor; wherein,,
the main attention lane line with the excessive difference from the predicted vehicle driving direction is not counted in the attention focusing area central line calculation, and when the following discriminant function inequality is established, the lane line is counted as the attention focusing area central line calculation:
|A 0 +A 1 v h t pred +(2A 2 -Y r )(v h t pred ) 2 |<Tr pred
in the formula, v h The speed of the host vehicle is Y r Yaw rate of the vehicle, t pred To set the prediction time, A 0 、A 1 、A 2 Respectively the fitted lane line equation coefficients, tr pred For predicted t pred Judging a threshold value of the distance between the lane line and the transverse position of the vehicle after the time;
calculating forgetting factor of current frame and storing n at most m A frame having the formula:
Figure FDA0004272351970000041
wherein beta is b A forgetting factor reference value; vr (Vr) pi The far-end visible distance of the lane line in the driving direction is set; v hpi For the speed of the host vehicle, conf pi Confidence, dt, of the main attention lane line for the current frame pi For the time difference between the current frame and the previous frame, μ is a safety threshold, i is the number of frames, i=1, …, n m
Calculating the central line of the attention gathering area, and expressing the central line by using polynomials with third order and more, wherein the calculation formula of the coefficients is as follows:
Figure FDA0004272351970000042
wherein n is m For the upper limit of the memory frame number, when m is less than n m -1,(β p1p1 β p2 +…(β p1 β p2 ...β pm ))>Tr memo E (0, 1), beta pm ,…,β pnm-1 =0, where Tr memo Is a memory margin threshold;
Figure FDA0004272351970000043
i=1,…,n m is n m Polynomial coefficient vectors of main attention lane lines of the i th frame in the matching memory sequence of the history frames;
the formula for determining the lateral width of the focus area of attention from the sensor range error is:
Δd(l)=γσ avg (l)
where Δd (l) is half the lateral width of the focal zone, γ ε (0, 1)]Sigma for a safety factor set according to sensor performance characteristics avg (l) The average value of the distance measurement error of the sensor on the distance l;
the central line of the attention focusing area is expressed by using an n-degree polynomial, the area between the lower two curves is the attention focusing area,
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +…+A npf x n +Δd(x)
A 0 (x)=A 0pf +A 1pf x+A 2pf x 2 +A 3pf x 3 +…+A npf x n -Δd(x)。
CN202210752033.6A 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism Active CN115131968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210752033.6A CN115131968B (en) 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210752033.6A CN115131968B (en) 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism

Publications (2)

Publication Number Publication Date
CN115131968A CN115131968A (en) 2022-09-30
CN115131968B true CN115131968B (en) 2023-07-11

Family

ID=83380118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210752033.6A Active CN115131968B (en) 2022-06-28 2022-06-28 Matching fusion method based on lane line point set and attention mechanism

Country Status (1)

Country Link
CN (1) CN115131968B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118068357B (en) * 2024-04-19 2024-07-12 智道网联科技(北京)有限公司 Road edge fusion processing method and device, electronic equipment and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133600A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 A kind of real-time lane line detection method based on intra-frame trunk
CN108647664A (en) * 2018-05-18 2018-10-12 河海大学常州校区 It is a kind of based on the method for detecting lane lines for looking around image
CN109409202A (en) * 2018-09-06 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Robustness method for detecting lane lines based on dynamic area-of-interest
CN110001782A (en) * 2019-04-29 2019-07-12 重庆长安汽车股份有限公司 Automatic lane-change method, system and computer readable storage medium
CN110386146A (en) * 2018-04-17 2019-10-29 通用汽车环球科技运作有限责任公司 For handling the method and system of driver attention's data
CN111582201A (en) * 2020-05-12 2020-08-25 重庆理工大学 Lane line detection system based on geometric attention perception
CN111950467A (en) * 2020-08-14 2020-11-17 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN112241728A (en) * 2020-10-30 2021-01-19 中国科学院合肥物质科学研究院 Real-time lane line detection method and system for learning context information by adopting attention mechanism
CN112862845A (en) * 2021-02-26 2021-05-28 长沙慧联智能科技有限公司 Lane line reconstruction method and device based on confidence evaluation
CN113468967A (en) * 2021-06-02 2021-10-01 北京邮电大学 Lane line detection method, device, equipment and medium based on attention mechanism
CN113591618A (en) * 2021-07-14 2021-11-02 重庆长安汽车股份有限公司 Method, system, vehicle and storage medium for estimating shape of road ahead
CN114022863A (en) * 2021-10-28 2022-02-08 广东工业大学 Deep learning-based lane line detection method, system, computer and storage medium
CN114120069A (en) * 2022-01-27 2022-03-01 四川博创汇前沿科技有限公司 Lane line detection system, method and storage medium based on direction self-attention
CN114531913A (en) * 2020-09-09 2022-05-24 华为技术有限公司 Lane line detection method, related device, and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10762358B2 (en) * 2016-07-20 2020-09-01 Ford Global Technologies, Llc Rear camera lane detection
TWI734472B (en) * 2020-05-11 2021-07-21 國立陽明交通大學 Driving assistance system based on deep learning and the method thereof

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133600A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 A kind of real-time lane line detection method based on intra-frame trunk
CN110386146A (en) * 2018-04-17 2019-10-29 通用汽车环球科技运作有限责任公司 For handling the method and system of driver attention's data
CN108647664A (en) * 2018-05-18 2018-10-12 河海大学常州校区 It is a kind of based on the method for detecting lane lines for looking around image
CN109409202A (en) * 2018-09-06 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Robustness method for detecting lane lines based on dynamic area-of-interest
CN110001782A (en) * 2019-04-29 2019-07-12 重庆长安汽车股份有限公司 Automatic lane-change method, system and computer readable storage medium
CN111582201A (en) * 2020-05-12 2020-08-25 重庆理工大学 Lane line detection system based on geometric attention perception
CN111950467A (en) * 2020-08-14 2020-11-17 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN114531913A (en) * 2020-09-09 2022-05-24 华为技术有限公司 Lane line detection method, related device, and computer-readable storage medium
CN112241728A (en) * 2020-10-30 2021-01-19 中国科学院合肥物质科学研究院 Real-time lane line detection method and system for learning context information by adopting attention mechanism
CN112862845A (en) * 2021-02-26 2021-05-28 长沙慧联智能科技有限公司 Lane line reconstruction method and device based on confidence evaluation
CN113468967A (en) * 2021-06-02 2021-10-01 北京邮电大学 Lane line detection method, device, equipment and medium based on attention mechanism
CN113591618A (en) * 2021-07-14 2021-11-02 重庆长安汽车股份有限公司 Method, system, vehicle and storage medium for estimating shape of road ahead
CN114022863A (en) * 2021-10-28 2022-02-08 广东工业大学 Deep learning-based lane line detection method, system, computer and storage medium
CN114120069A (en) * 2022-01-27 2022-03-01 四川博创汇前沿科技有限公司 Lane line detection system, method and storage medium based on direction self-attention

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Fast Detection Method For Polynomial Fitting Lane With Self-attention Module Added;Xi Li等;《2021 International Conference on Control, Automation and Information Sciences (ICCAIS)》;46-51 *
一种基于帧间关联的实时车道线检测算法;李超等;《计算机科学》(第2期);317-323 *
基于关键点检测的车道线识别及跟踪方法研究;吕川;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》(第1期);C035-344 *
基于深度学习的车道线及道路目标检测研究;刘雷;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》(第5期);C035-424 *
李国庆.基于深度学习的自动驾驶汽车视觉感知算法研究.2022,(第5期),C035-336. *

Also Published As

Publication number Publication date
CN115131968A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
US12072705B2 (en) Intelligent decision-making method and system for unmanned surface vehicle
CN109800658B (en) Parking space type online identification and positioning system and method based on neural network
Kong et al. Vanishing point detection for road detection
Meuter et al. A decision fusion and reasoning module for a traffic sign recognition system
CN103020956B (en) Image matching method for judging Hausdorff distance based on decision
CN114972418A (en) Maneuvering multi-target tracking method based on combination of nuclear adaptive filtering and YOLOX detection
CN110427797B (en) Three-dimensional vehicle detection method based on geometric condition limitation
CN108764108A (en) A kind of Foregut fermenters method based on Bayesian inference
US20180065633A1 (en) Vehicle driving assist apparatus
JP2007274037A (en) Method and device for recognizing obstacle
CN114454875A (en) Urban road automatic parking method and system based on reinforcement learning
CN115131968B (en) Matching fusion method based on lane line point set and attention mechanism
CN117630907B (en) Sea surface target tracking method integrating infrared imaging and millimeter wave radar
CN111612818A (en) Novel binocular vision multi-target tracking method and system
CN106599918B (en) vehicle tracking method and system
Hartmann et al. Towards autonomous self-assessment of digital maps
Fang et al. Camera and LiDAR fusion for on-road vehicle tracking with reinforcement learning
CN113111707A (en) Preceding vehicle detection and distance measurement method based on convolutional neural network
CN113034378A (en) Method for distinguishing electric automobile from fuel automobile
Zhao et al. Robust online tracking with meta-updater
CN115346155A (en) Ship image track extraction method for visual feature discontinuous interference
CN115187799A (en) Single-target long-time tracking method
CN113942503A (en) Lane keeping method and device
CN107506739B (en) Night forward vehicle detection and distance measurement method
CN117576665B (en) Automatic driving-oriented single-camera three-dimensional target detection method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant