CN118269968B - Prediction method of automatic driving collision risk fused with online map uncertainty - Google Patents
Prediction method of automatic driving collision risk fused with online map uncertainty Download PDFInfo
- Publication number
- CN118269968B CN118269968B CN202410713791.6A CN202410713791A CN118269968B CN 118269968 B CN118269968 B CN 118269968B CN 202410713791 A CN202410713791 A CN 202410713791A CN 118269968 B CN118269968 B CN 118269968B
- Authority
- CN
- China
- Prior art keywords
- agent
- collision
- uncertainty
- lane
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 83
- 238000000605 extraction Methods 0.000 claims abstract description 5
- 239000003795 chemical substances by application Substances 0.000 claims description 99
- 239000013598 vector Substances 0.000 claims description 39
- 230000006870 function Effects 0.000 claims description 33
- 230000008569 process Effects 0.000 claims description 24
- 239000011159 matrix material Substances 0.000 claims description 23
- 238000004364 calculation method Methods 0.000 claims description 14
- 230000002093 peripheral effect Effects 0.000 claims description 12
- 230000003044 adaptive effect Effects 0.000 claims description 9
- 230000007246 mechanism Effects 0.000 claims description 8
- 230000004927 fusion Effects 0.000 claims description 6
- 230000002452 interceptive effect Effects 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 230000010354 integration Effects 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 230000009467 reduction Effects 0.000 claims description 3
- 230000001133 acceleration Effects 0.000 claims description 2
- 230000003993 interaction Effects 0.000 claims description 2
- 238000004590 computer program Methods 0.000 description 4
- 230000003213 activating effect Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000012423 maintenance Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000013058 risk prediction model Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- -1 pedestrians Substances 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000011144 upstream manufacturing Methods 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0953—Predicting travel path or likelihood of collision the prediction being responsive to vehicle dynamic parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0097—Predicting future conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0098—Details of control systems ensuring comfort, safety or stability not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/005—Handover processes
- B60W60/0059—Estimation of the risk associated with autonomous or manual driving, e.g. situation too complex, sensor failure or driver incapacity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/27—Regression, e.g. linear or logistic regression
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0019—Control system elements or transfer functions
- B60W2050/0028—Mathematical models, e.g. for simulation
- B60W2050/0031—Mathematical model of the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0043—Signal treatments, identification of variables or parameters, parameter estimation or state estimation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/408—Radar; Laser, e.g. lidar
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2552/00—Input parameters relating to infrastructure
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4042—Longitudinal speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4043—Lateral speed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/40—High definition maps
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mechanical Engineering (AREA)
- Transportation (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Evolutionary Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Computational Mathematics (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Analysis (AREA)
- Algebra (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention belongs to the field of traffic control, and relates to a prediction method of an automatic driving collision risk integrating on-line map uncertainty, which mainly comprises on-line map uncertainty generation, on-line map uncertainty track prediction and collision risk prediction, wherein on-line map uncertainty generation is to generate on-line maps and on-line map uncertainty parameters based on-vehicle cameras and radar collected on-line data, fuse the on-line map and on-line map uncertainty parameters into a downstream track prediction module to perform feature extraction, endpoint prediction and track generation, and the collision risk prediction module outputs track points of a target vehicle and Zhou Che according to track prediction to output collision probability in future time steps and calculate collision probability residual time.
Description
Technical Field
The invention belongs to the field of traffic control, relates to prediction of collision risk of an automatic driving vehicle, and particularly relates to a prediction method of the collision risk of the automatic driving by fusing uncertainty of an online map.
Background
Risk prediction is an important component of an automated driving stack, and affects planning and control modules downstream of an automated driving vehicle, and a good risk prediction method can provide effective and accurate safety assessment signals for a take-over system under co-driving of a human machine.
Most of the existing risk prediction is based on data provided by a high-precision map, but the high-precision map is high in labeling and maintenance cost, can only be used in a specific area, is poor in expansibility, and partial students can estimate the high-precision map on line by turning work to sensor data. Therefore, a new risk prediction method is needed to be designed, which is used for generating uncertainty of an online map and integrating the generated uncertainty into risk prediction so as to improve the accuracy of risk prediction, and a valuable reference is provided for the research of early warning of an automatic driving take-over system under the co-driving of a person.
Disclosure of Invention
In view of the above technical problems and drawbacks, an object of the present invention is to provide a method for predicting an automatic driving collision risk by fusing online map uncertainty, which generates online map and online map uncertainty parameters based on online data collected by a vehicle-mounted camera and a radar, and fuses the online map and online map uncertainty parameters to a downstream trajectory prediction module for feature extraction, endpoint prediction and trajectory generation, and then the collision risk prediction module outputs a trajectory point of a target vehicle and Zhou Che according to trajectory prediction, and outputs probability of collision occurring in a future time step, and calculates remaining time of collision probability.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a method of predicting an autopilot collision risk incorporating online map uncertainty, the method comprising the steps of:
step 1: generating an online map uncertainty;
step 1.1: collecting information of surrounding roads and agents acquired by the vehicle-mounted camera and the radar, and performing corresponding noise reduction treatment;
Step 1.2: inputting the noise-reduced data into a BEV encoder, and converting the acquired data into a common BEV spatial feature;
Step 1.3: establishing a Gaussian regression uncertainty model and a classification uncertainty model, taking BEV space characteristics as input, generating map element vertexes by using an online map regression model, wherein the map element vertexes comprise positions and types of the element vertexes, and calculating online map uncertainty related parameters by the Gaussian regression uncertainty model and the classification uncertainty model; the Gaussian regression uncertainty model calculates the predicted position of each map element vertex and the parameter of the uncertainty related to the position, and the classification uncertainty model calculates the confidence coefficient of the type of each map element vertex;
step 2: track prediction integrating uncertainty of an online map;
step 2.1: fusing the obtained parameters of the predicted position and the related uncertainty of the position of the map element vertex and the confidence coefficient of the type of the map element vertex into an online map by using multi-layer perceptron coding;
Step 2.2: for an online map fused with uncertainty parameters, performing multi-line segment coding by using VectorNet, performing feature extraction, when lane line features are coded, using VectorNet to code a multi-line sub-graph for each scene element, using two independent sub-graphs for an agent and a lane, and obtaining feature vectors through coding ;
Step 2.3: the feature vector to be obtainedThe characteristics are updated by being transmitted into an interactive modeling module formed by stacking an agent-to-lane module, a lane-to-agent module and an agent-to-agent module, and the updated characteristics are as follows; The lane-to-lane module focuses on the relationship between the agent and the lane, the lane-to-lane module focuses on the relationship between the lane and the lane, the lane-to-agent module focuses on the relationship between the lane and the agent, the agent-to-agent module focuses on the relationship between the agent and the agent, and the agent-to-lane module, the lane-to-agent module and the agent-to-agent module are all multi-head attention blocks;
step 2.4: based on meta information f and features Spliced feature vectorPredicting end points of the target vehicle and surrounding agent trajectories using a fixed point predictor and an environment adaptive predictor, and generating trajectories using a multi-layer perceptron MLP;
step 3: predicting collision risk based on the predicted trajectory;
step 3.1: acquiring coordinates of future time steps of the target vehicle and the surrounding agents according to the track generated in the step 2.4, inputting the predicted coordinates of the future time steps of the target vehicle and the surrounding agents into a collision probability prediction module, and predicting the probability of collision of the surrounding agents in the future total time steps;
Step 3.2: and (3) inputting the collision probability obtained in the step (3.1) into a critical collision probability remaining time calculation module, calculating critical collision probability remaining time, and transmitting the critical collision probability remaining time to a man-machine co-driving system.
As a preferred aspect of the present invention, in step 1.1, the surrounding road information includes: the road type, speed limit, whether at an intersection, traffic lights, and the slope of the road; the road type comprises an intersection, a left turning lane, a right turning lane and a straight running lane; the surrounding agent information includes: the type of agent, speed, acceleration, the type of agent includes pedestrians, vehicles.
As a preferred aspect of the present invention, in step 1.3, the gaussian regression uncertainty model established is expressed as:
;
wherein V is the number of M vertexes of the map element, and the coordinate of the ith vertex is ,Is a mean value, representing the expected value of the vertex position,Is the standard deviation, measures the uncertainty of the predicted position,Is the variance; i and j are used to represent the j-th dimension of the i-th vertex of map element M,The output of the Gaussian regression uncertainty model;
The classification uncertainty model outputs a confidence level for the type of each map element vertex using a multi-layer perceptron MLP.
As a preferred aspect of the present invention, in step 2.1, the model for fusing the parameters of the predicted position and the uncertainty related to the position of the map element vertex and the confidence of the type of the map element vertex into the online map is:
In the method, in the process of the invention, Is the gaussian distribution mean vector of the ith vertex,Is the standard deviation vector of uncertainty of the ith vertex of the gaussian regression uncertainty model output,Is a class probability vector composed of the confidence of the ith vertex, [; represents the connection of three vectors,Is a multi-layer perceptron MLP.
As a preferred aspect of the present invention, in step 2.3, the interactive modeling module updates the agent-to-agent module, lane-to-lane module using a self-attention encoder followed by a feed-forward network (FFN), and updates the lane-to-agent module, agent-to-lane module using a cross-attention encoder followed by a feed-forward network, each encoder with a multi-head attention Mechanism (MHA):
;
= K = V = ;
;
In the method, in the process of the invention, 、、Is a learned projection, norm regularization; q, K, V are query vectors, key vectors, value vectors of the attention mechanism,The function is activated and the function is activated,Are feature vectors;
The multi-head attention block is defined as follows:
;
Stacking agent-to-lane, lane-to-agent, agent-to-agent modules to update features Obtaining updated features。
Preferably, in step 2.4, the meta information f and the feature areThe spliced model is as follows:
= (,f);
In the method, in the process of the invention, For the multi-layer perceptron MLP,For meta-information and featuresThe spliced vector, meta information f includes direction and position information of the prediction time agent.
Preferably, in step 2.4, the fixed point predictor is used for predicting the endpoint of the target agent, and the fixed point predictor predicts the endpoint by means of a multi-layer perceptron, and the formula is as follows:
= ();
In the method, in the process of the invention, For the end point of the prediction,The multi-layer perceptron MLP,For meta-information f and featuresAnd the spliced characteristic vector.
As a preferred aspect of the present invention, in step 2.4, the environment adaptive predictor is configured to predict endpoints of a plurality of surrounding agents, where the environment adaptive predictor adapts to a specific situation of each agent by adopting a dynamic weight learning manner:
;
;
;
;
In the method, in the process of the invention, 、As a result of the parameters that can be trained,、The method is characterized in that the method is a weight dynamic adjustment matrix in an endpoint prediction module, norm is layer normalization, and ReLU is an activation function; Is through dynamic weight matrix And featuresRegularizing the matrix after nonlinear fitting,Is the predicted endpoint.
Preferably, in step 3.1, the collision probability prediction module establishes a collision intersection function for predicting the time steps between the target vehicle and the surrounding agentsA collision probability prediction function within a time range of (a); the collision intersection function is used for judging whether the matrix of the target vehicle and the peripheral vehicle has an overlapped part in each time step predicted in the future, if the matrix has an output 1, the matrix does not have an output 0;
The collision intersection function is:
;
In the method, in the process of the invention, In order to predict the time frame of the future trajectory,In order for the vehicle to be a target,For the surrounding agent to be a proxy,In the form of a matrix of the target vehicle,A matrix of surrounding agents;
Target vehicle and surrounding agents at predicted time steps The collision probability prediction function in the time range of (2) is:
;
In the method, in the process of the invention, For the event of a collision of the target vehicle with a surrounding agent,= ;Predicting time steps for a target vehicle and surrounding agentsProbability integration of collision; Refers to the target vehicle and surrounding agents at time [ Probability of distribution within ] by interactivity prediction;
Let the surrounding agents be n, define the target vehicle at T In range of at least one collision with one surrounding agent() In the future the total time stepProbability p of internal collision() The calculation formula of (c) is:
;
wherein p is% () Calculating the total time step of s to n cyclesIs a collision probability of (a).
In step 3.2, the calculation formula of the critical collision probability remaining time calculation module is preferably as follows:
;
In the method, in the process of the invention, Refer to all possible predicted time stepsIn (2) finding the smallest one so that at timeProbability of collisionExceeding the critical value CCP.
The invention has the advantages and beneficial effects that:
(1) The method provided by the invention uses the online map as the input of the collision risk prediction model, avoids the high maintenance cost of using the high-precision map, and improves the expandability of the collision risk prediction model.
(2) The invention establishes the generation model of the uncertainty of the online map, integrates the uncertainty of the online map into a downstream prediction task, and effectively improves the accuracy of collision risk prediction.
(3) The end point prediction in the vehicle track prediction module uses the fixed point predictor and the environment self-adaptive predictor, and when the environment self-adaptive predictor predicts the end points of a plurality of agents, the environment self-adaptive predictor can learn dynamic weights according to the environment of each agent, so that the waste of calculation resources can be effectively avoided, and the response speed of the risk prediction module is improved.
(4) The invention adopts a method combining a front-edge technology of deep learning with the traditional risk prediction, fully considers the problems encountered in an upstream module of the risk prediction module, provides a corresponding improvement scheme, and provides a new thought and view angle for the establishment of a risk early warning system in a co-driving take-over system of a man-machine.
(5) The method TTCCP of the present invention provides a specific time value that indicates how long after the system is expected to be in the face of a potential collision, the risk value of the vehicle reaches or exceeds the critical level defined by the system; the value can be used for activating safety precaution, and an effective and accurate safety assessment signal is provided for a take-over system of an automatic driving vehicle under the co-driving of a human and a machine.
Drawings
Other objects and attainments together with a more complete understanding of the invention will become apparent and appreciated by referring to the following description taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 is a flow chart of a method for predicting collision risk of automatic driving fused with uncertainty of an online map;
FIG. 2 is a flow chart of the context adaptive predictor of the present invention predicting endpoints;
fig. 3 is a flow chart of collision risk prediction based on a predicted trajectory according to the present invention.
Detailed Description
The following detailed description of the application, taken in conjunction with the accompanying drawings, is not intended to limit the scope of the application, so that those skilled in the art may better understand the technical solutions of the application and their advantages.
As shown in fig. 1 to 3, the method for predicting the risk of an automatic driving collision with fusion of uncertainty of an online map according to the present embodiment includes the following steps:
step 1: generating an online map uncertainty;
step 1.1: collecting information of surrounding roads and agents acquired by the vehicle-mounted camera and the radar, and performing corresponding noise reduction treatment;
In this embodiment, the surrounding road information includes: the road type, speed limit, whether at an intersection, traffic lights, and the slope of the road; the road types comprise an intersection, a left turning lane, a right turning lane, a straight running lane and the like; the surrounding agent information includes: the type of the agent comprises intelligent agents such as pedestrians, vehicles and the like, wherein the type of the agent mainly refers to vehicles, and the speed mainly refers to vehicle speed;
Step 1.2: inputting the noise-reduced data into a BEV encoder, and converting the acquired data into a common aerial view (BEV) spatial feature;
Step 1.3: establishing a Gaussian regression uncertainty model and a classification uncertainty model, taking BEV space characteristics as input, generating map element vertexes by using an online map regression model, wherein the map element vertexes comprise positions and types of the element vertexes, and calculating online map uncertainty related parameters by the Gaussian regression uncertainty model and the classification uncertainty model; the Gaussian regression uncertainty model calculates the predicted position of each map element vertex and the uncertainty parameter related to the position, and the classification uncertainty model calculates the confidence coefficient of the type of each map element vertex;
In this embodiment, the established gaussian regression uncertainty model is expressed as:
;
wherein V is the number of M vertexes of the map element, and the coordinate of the ith vertex is ,Is a mean value, representing the expected value of the vertex position,Is the standard deviation, measures the uncertainty of the predicted position,Is the variance; i and j are used to represent the j-th dimension of the i-th vertex of map element M,The output of the Gaussian regression uncertainty model;
The classification uncertainty model outputs a confidence level for the type of each map element vertex using a multi-layer perceptron MLP.
Step 2: track prediction integrating uncertainty of an online map;
step 2.1: fusing the obtained parameters of the predicted position and the uncertainty related to the position of the vertex of the map element and the confidence of the type of the vertex of the map element into an online map by using multi-layer perceptron coding;
In this embodiment, the model for fusing the predicted position of the map element vertex, the parameter of uncertainty related to the position, and the confidence of the type of the map element vertex into the online map is as follows:
;
In the method, in the process of the invention, Is the gaussian distribution mean vector of the ith vertex,Is the standard deviation vector of uncertainty of the ith vertex of the gaussian regression uncertainty model output,Is a class probability vector composed of the confidence of the ith vertex, [; represents the connection of three vectors,Is a multi-layer perceptron MLP.
Step 2.2: for an online map fused with uncertainty parameters, performing multi-line segment coding by using VectorNet and performing feature extraction, when lane line features are coded, using VectorNet to code a multi-line subgraph for each scene element (namely multi-line), using two independent subgraphs for an agent (vehicle) and a lane, and obtaining feature vectors through coding;
Step 2.3: the feature vector to be obtainedIs transmitted into an interactive modeling module formed by stacking agent-to-lane module (AL), lane-to-lane module (LL), lane-to-agent module (LA) and agent-to-agent module (AA) to update the characteristics, and the updated characteristics are that; The lane-to-lane module focuses on the relationship between the agent and the lane, the lane-to-lane module focuses on the relationship between the lane and the lane, the lane-to-agent module focuses on the relationship between the lane and the agent, and the agent-to-agent module focuses on the relationship between the agent and the agent;
Specifically, the interaction modeling module updates the internal relationships (AA, LL) using a self-attention encoder followed by a feed-forward network (FFN), and the cross-attention encoder followed by a feed-forward network (FFN), each with a multi-head attention Mechanism (MHA) to increase the expressive power of the model:
;
= K = V = ;
;
In the method, in the process of the invention, 、、Is a learned projection, norm regularization; q, K, V are query vectors, key vectors, value vectors of the attention mechanism,The function is activated and the function is activated,Are feature vectors;
The multi-head attention block is defined as follows:
;
AA, LL, LA, AL modules are stacked to update features Obtaining updated features。
Step 2.4: based on meta information f and featuresSpliced feature vectorPredicting end points of trajectories of the target vehicle and surrounding agents (peripheral vehicles) using a fixed-point predictor and an environment-adaptive predictor, and generating the trajectories using a multi-layer perceptron MLP;
Specifically, meta information f and characteristics The spliced model is as follows:
= (,f);
In the method, in the process of the invention, For the multi-layer perceptron MLP,For meta-information and featuresThe spliced vector, meta information f includes direction and position information of the prediction time agent.
In this embodiment, the fixed-point predictor is used for predicting the endpoint of the target agent (target vehicle), and the fixed-point predictor predicts the endpoint by means of a multi-layer perceptron, as follows:
= ();
In the method, in the process of the invention, For the end point of the prediction,The multi-layer perceptron MLP,For meta-information f and featuresAnd the spliced characteristic vector.
In this embodiment, the environment adaptive predictor is configured to predict endpoints of a plurality of agents (vehicles around), as shown in fig. 2, where the environment adaptive predictor adapts to a specific situation of each agent (vehicle) by adopting a dynamic weight learning manner:
;
;
;
;
In the method, in the process of the invention, 、As a result of the parameters that can be trained,、The method is characterized in that the method is a weight dynamic adjustment matrix in an endpoint prediction module, norm is layer normalization, and ReLU is an activation function; Is through dynamic weight matrix And featuresRegularizing the matrix after nonlinear fitting,Is the predicted endpoint.
Step 3: predicting collision risk based on the predicted trajectory;
step 3.1: acquiring coordinates of future time steps of the target vehicle and surrounding vehicles according to the track generated in the step 2.4, inputting the predicted coordinates of the future time steps of the target vehicle and the surrounding vehicles into a collision probability prediction module, and predicting the probability of collision of the surrounding agent in the future total time step; as shown in fig. 3, the method specifically comprises the following steps:
Step 3.1.1: acquiring coordinates of four vertexes of the target vehicle and Zhou Che (surrounding vehicles);
Step 3.1.2: respectively constructing four vertexes of a target vehicle and a peripheral vehicle into matrixes;
step 3.1.3: inputting a collision probability prediction module (collision judgment function) to judge whether collision occurs or not;
step 3.1.4: the probability calculation function (collision probability prediction function) of the target vehicle and one of the peripheral vehicles is input to calculate the probability;
step 3.1.5: and calculating the collision probability of the target vehicle and all the peripheral vehicles.
In this embodiment, the collision probability prediction module establishes a collision intersection function, and predicts time steps between the target vehicle and surrounding agentsA collision probability prediction function within a time range of (a); the collision intersection function is used for judging whether the matrix of the target vehicle and the peripheral vehicle has an overlapped part in each time step predicted in the future, and if the matrix has an output 1, the collision is represented; if no 0 is output, no collision occurs;
The collision intersection function is:
;
In the method, in the process of the invention, In order to predict the time frame of the future trajectory,In order for the vehicle to be a target,For the surrounding agent to be a proxy,In the form of a matrix of the target vehicle,A matrix of surrounding agents;
the target vehicle and surrounding agent (surrounding vehicle) are predicted to be time-stepped The collision probability prediction function in the time range of (2) is:
;
In the method, in the process of the invention, For the event of a collision of the target vehicle with the peripheral vehicle, = ;For the target vehicle and Zhou Che at a predicted time stepProbability integration of collision; Refers to the target vehicle and surrounding agents at time [ Probability of distribution within ] by interactivity prediction;
let the surrounding vehicles be n, define the target vehicle at T In the range of at least one collision with one surrounding vehicle() In the future the total time stepProbability p of internal collision() The calculation formula of (c) is:
;
wherein p is% () Calculating the total time step of s to n cyclesIs a collision probability of (a).
Step 3.2: inputting the collision probability obtained in the step 3.1 into a critical collision probability remaining time (TTCCP) calculating module, calculating critical collision probability remaining time, and transmitting the critical collision probability remaining time to a man-machine co-driving system; the calculation formula of the critical collision probability remaining time (TTCCP) calculation module is as follows:
;
In the method, in the process of the invention, Refer to all possible predicted time stepsIn (2) finding the smallest one so that at timeProbability of collisionExceeding the critical value CCP.
In this embodiment TTCCP provides a specific time value indicating how long after the system is expected to be in the face of a potential collision, the risk value of the vehicle reaches or exceeds the system-defined threshold level; the value can be used for activating safety precaution, and an effective and accurate safety assessment signal is provided for a take-over system of an automatic driving vehicle under the co-driving of a human and a machine.
The present invention also provides an electronic device including: one or more processors, memory; the memory is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors realize the prediction method of the automatic driving collision risk fused with the online map uncertainty.
The invention also provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the above-described method of predicting an autopilot collision risk incorporating online map uncertainty.
Those skilled in the art will appreciate that all or part of the functions of the various methods/modules in the above embodiments may be implemented by hardware, or may be implemented by a computer program. When all or part of the functions in the above embodiments are implemented by means of a computer program, the program may be stored in a computer readable storage medium, and the storage medium may include: read-only memory, random access memory, magnetic disk, optical disk, hard disk, etc., and the program is executed by a computer to realize the above-mentioned functions. For example, the program is stored in the memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above can be realized.
In addition, when all or part of the functions in the above embodiments are implemented by means of a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and the program in the above embodiments may be implemented by downloading or copying the program into a memory of a local device or updating a version of a system of the local device, and when the program in the memory is executed by a processor.
The foregoing description of the invention has been presented for purposes of illustration and description, and is not intended to be limiting. Several simple deductions, modifications or substitutions may also be made by a person skilled in the art to which the invention pertains, based on the idea of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for predicting the risk of an autopilot collision fused with an online map uncertainty, the method comprising the steps of:
step 1: generating an online map uncertainty;
step 1.1: collecting information of surrounding roads and agents acquired by the vehicle-mounted camera and the radar, and performing corresponding noise reduction treatment;
Step 1.2: inputting the noise-reduced data into a BEV encoder, and converting the acquired data into a common BEV spatial feature;
Step 1.3: establishing a Gaussian regression uncertainty model and a classification uncertainty model, taking BEV space characteristics as input, generating map element vertexes by using an online map regression model, wherein the map element vertexes comprise positions and types of the element vertexes, and calculating online map uncertainty related parameters by the Gaussian regression uncertainty model and the classification uncertainty model; the Gaussian regression uncertainty model calculates the predicted position of each map element vertex and the parameter of the uncertainty related to the position, and the classification uncertainty model calculates the confidence coefficient of the type of each map element vertex;
step 2: track prediction integrating uncertainty of an online map;
step 2.1: fusing the obtained parameters of the predicted position and the related uncertainty of the position of the map element vertex and the confidence coefficient of the type of the map element vertex into an online map by using multi-layer perceptron coding;
Step 2.2: for an online map fused with uncertainty parameters, performing multi-line segment coding by using VectorNet, performing feature extraction, when lane line features are coded, using VectorNet to code a multi-line sub-graph for each scene element, using two independent sub-graphs for an agent and a lane, and obtaining feature vectors through coding ;
Step 2.3: the feature vector to be obtainedThe characteristics are updated by being transmitted into an interactive modeling module formed by stacking an agent-to-lane module, a lane-to-agent module and an agent-to-agent module, and the updated characteristics are as follows; The lane-to-lane module focuses on the relationship between the agent and the lane, the lane-to-lane module focuses on the relationship between the lane and the lane, the lane-to-agent module focuses on the relationship between the lane and the agent, the agent-to-agent module focuses on the relationship between the agent and the agent, and the agent-to-lane module, the lane-to-agent module and the agent-to-agent module are all multi-head attention blocks;
step 2.4: based on meta information f and features Spliced feature vectorPredicting end points of the target vehicle and surrounding agent trajectories using a fixed point predictor and an environment adaptive predictor, and generating trajectories using a multi-layer perceptron MLP;
step 3: predicting collision risk based on the predicted trajectory;
step 3.1: acquiring coordinates of future time steps of the target vehicle and the surrounding agents according to the track generated in the step 2.4, inputting the predicted coordinates of the future time steps of the target vehicle and the surrounding agents into a collision probability prediction module, and predicting the probability of collision of the surrounding agents in the future total time steps;
Step 3.2: and (3) inputting the collision probability obtained in the step (3.1) into a critical collision probability remaining time calculation module, calculating critical collision probability remaining time, and transmitting the critical collision probability remaining time to a man-machine co-driving system.
2. The method for predicting risk of an automatic driving collision with fusion of uncertainty of an online map according to claim 1, wherein in step 1.1, the surrounding road information includes: the road type, speed limit, whether at an intersection, traffic lights, and the slope of the road; the road type comprises an intersection, a left turning lane, a right turning lane and a straight running lane; the surrounding agent information includes: the type of agent, speed, acceleration, the type of agent includes pedestrians, vehicles.
3. The method for predicting risk of an automatic driving collision with fusion of online map uncertainty as claimed in claim 1, wherein in step 1.3, the established gaussian regression uncertainty model is expressed as:
;
wherein V is the number of M vertexes of the map element, and the coordinate of the ith vertex is ,Is a mean value, representing the expected value of the vertex position,Is the standard deviation, measures the uncertainty of the predicted position,Is the variance; i and j are used to represent the j-th dimension of the i-th vertex of map element M,The output of the Gaussian regression uncertainty model;
The classification uncertainty model outputs a confidence level for the type of each map element vertex using a multi-layer perceptron MLP.
4. The method for predicting risk of an automatic driving collision with integrated uncertainty of an online map according to claim 1, wherein in step 2.1, the model of integrating the predicted position of the map element vertex and the parameter of the uncertainty related to the position and the confidence of the type of the map element vertex into the online map is:
;
In the method, in the process of the invention, Is the gaussian distribution mean vector of the ith vertex,Is the standard deviation vector of uncertainty of the ith vertex of the gaussian regression uncertainty model output,Is a class probability vector composed of the confidence of the ith vertex, [; represents the connection of three vectors,Is a multi-layer perceptron MLP.
5. The method of claim 1, wherein in step 2.3, the interaction modeling module uses a self-attention encoder followed by FFN to update the agent-to-agent module, the lane-to-lane module, and uses a cross-attention encoder followed by FFN to update the lane-to-agent module, the agent-to-lane module, each encoder using a multi-headed attention mechanism:
;
= K = V = ;
;
In the method, in the process of the invention, 、、Is a learned projection, norm regularization; q, K, V are query vectors, key vectors, value vectors of the attention mechanism,The function is activated and the function is activated,Are feature vectors;
The multi-head attention block is defined as follows:
;
Stacking agent-to-lane, lane-to-agent, agent-to-agent modules to update features Obtaining updated features。
6. The method for predicting risk of collision in automatic driving with fusion of uncertainty of online map as claimed in claim 1, wherein in step 2.4, the meta information f and the feature are as followsThe spliced model is as follows:
= (,f);
In the method, in the process of the invention, For the multi-layer perceptron MLP,For meta-information and featuresThe spliced vector, meta information f includes direction and position information of the prediction time agent.
7. The method for predicting risk of collision in automatic driving with integrated online map uncertainty as recited in claim 1, wherein in step 2.4, the fixed point predictor is used for predicting the end point of the target agent, and the fixed point predictor predicts the end point by means of a multi-layer perceptron, and the formula is as follows:
= ();
In the method, in the process of the invention, For the end point of the prediction,The multi-layer perceptron MLP,For meta-information f and featuresAnd the spliced characteristic vector.
8. The method for predicting the risk of an automatic driving collision fused with the uncertainty of an online map according to claim 1, wherein the environment adaptive predictor is used for predicting endpoints of a plurality of surrounding agents, and the environment adaptive predictor adapts to specific situations of each agent in a dynamic weight learning manner:
;
;
;
;
In the method, in the process of the invention, 、As a result of the parameters that can be trained,、The method is characterized in that the method is a weight dynamic adjustment matrix in an endpoint prediction module, norm is layer normalization, and ReLU is an activation function; Is through dynamic weight matrix And featuresRegularizing the matrix after nonlinear fitting,Is the predicted endpoint.
9. The method for predicting the risk of an automatic driving collision with fusion of on-line map uncertainty as claimed in claim 1, wherein the step 3.1 specifically comprises the steps of:
Step 3.1.1: acquiring coordinates of four vertexes of a target vehicle and a peripheral vehicle;
Step 3.1.2: respectively constructing four vertexes of a target vehicle and a peripheral vehicle into matrixes;
step 3.1.3: inputting a collision probability prediction module, and judging whether collision occurs or not;
step 3.1.4: inputting a probability calculation function of the target vehicle and one of the peripheral vehicles to calculate probability;
step 3.1.5: calculating collision probability of the target vehicle and all peripheral vehicles;
In step 3.1, the collision probability prediction module establishes a collision intersection function for predicting time steps between the target vehicle and surrounding agents A collision probability prediction function within a time range of (a); the collision intersection function is used for judging whether the matrix of the target vehicle and the peripheral vehicle has an overlapped part in each time step predicted in the future, if the matrix has an output 1, the matrix does not have an output 0;
The collision intersection function is:
;
In the method, in the process of the invention, In order to predict the time frame of the future trajectory,In order for the vehicle to be a target,For the surrounding agent to be a proxy,In the form of a matrix of the target vehicle,A matrix of surrounding agents;
Target vehicle and surrounding agents at predicted time steps The collision probability prediction function in the time range of (2) is:
;
In the method, in the process of the invention, For the event of a collision of the target vehicle with a surrounding agent, = ;Predicting time steps for a target vehicle and surrounding agentsProbability integration of collision; Refers to the target vehicle and surrounding agents at time [ Probability of distribution within ] by interactivity prediction;
Let the surrounding agents be n, define the target vehicle at T In range of at least one collision with one surrounding agent() In the future the total time stepProbability p of internal collision() The calculation formula of (c) is:
;
wherein p is% () Calculating the total time step of s to n cyclesIs a collision probability of (a).
10. The method for predicting the risk of an automatic driving collision with fusion of online map uncertainty as claimed in claim 1, wherein in step 3.2, the calculation formula of the critical collision probability remaining time calculation module is as follows:
;
In the method, in the process of the invention, Refer to all possible predicted time stepsIn (2) finding the smallest one so that at timeProbability of collisionExceeding the critical value CCP.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410713791.6A CN118269968B (en) | 2024-06-04 | 2024-06-04 | Prediction method of automatic driving collision risk fused with online map uncertainty |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410713791.6A CN118269968B (en) | 2024-06-04 | 2024-06-04 | Prediction method of automatic driving collision risk fused with online map uncertainty |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118269968A CN118269968A (en) | 2024-07-02 |
CN118269968B true CN118269968B (en) | 2024-09-24 |
Family
ID=91633868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410713791.6A Active CN118269968B (en) | 2024-06-04 | 2024-06-04 | Prediction method of automatic driving collision risk fused with online map uncertainty |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118269968B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115855084A (en) * | 2022-11-30 | 2023-03-28 | 北京百度网讯科技有限公司 | Map data fusion method and device, electronic equipment and automatic driving product |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2562049A (en) * | 2017-05-02 | 2018-11-07 | Kompetenzzentrum Das Virtuelle Fahrzeug | Improved pedestrian prediction by using enhanced map data in automated vehicles |
KR102138979B1 (en) * | 2018-11-29 | 2020-07-29 | 한국과학기술원 | Lane-based Probabilistic Surrounding Vehicle Motion Prediction and its Application for Longitudinal Control |
CN116176627A (en) * | 2023-03-15 | 2023-05-30 | 杭州电子科技大学 | Vehicle track prediction method based on heterogeneous node time-space domain sensing |
-
2024
- 2024-06-04 CN CN202410713791.6A patent/CN118269968B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115855084A (en) * | 2022-11-30 | 2023-03-28 | 北京百度网讯科技有限公司 | Map data fusion method and device, electronic equipment and automatic driving product |
Also Published As
Publication number | Publication date |
---|---|
CN118269968A (en) | 2024-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Muhammad et al. | Deep learning for safe autonomous driving: Current challenges and future directions | |
US20210341927A1 (en) | Verifying Predicted Trajectories Using A Grid-Based Approach | |
Zhang et al. | Prediction of pedestrian-vehicle conflicts at signalized intersections based on long short-term memory neural network | |
Yoon et al. | Interaction-aware probabilistic trajectory prediction of cut-in vehicles using Gaussian process for proactive control of autonomous vehicles | |
Jeong et al. | Bidirectional long shot-term memory-based interactive motion prediction of cut-in vehicles in urban environments | |
CN112163446A (en) | Obstacle detection method and device, electronic equipment and storage medium | |
CN114926823B (en) | WGCN-based vehicle driving behavior prediction method | |
CN117056153A (en) | Methods, systems, and computer program products for calibrating and verifying driver assistance systems and/or autopilot systems | |
Kim et al. | Vision-based uncertainty-aware lane keeping strategy using deep reinforcement learning | |
Jiao et al. | End-to-end uncertainty-based mitigation of adversarial attacks to automated lane centering | |
Du et al. | A Novel Intelligent Approach to Lane‐Change Behavior Prediction for Intelligent and Connected Vehicles | |
CN112446466A (en) | Measuring confidence in deep neural networks | |
CN118470974B (en) | Driving safety early warning system and method based on Internet of vehicles | |
Wu et al. | A deep learning-based car accident detection approach in video-based traffic surveillance | |
CN118269968B (en) | Prediction method of automatic driving collision risk fused with online map uncertainty | |
Verstraete et al. | Pedestrian collision avoidance in autonomous vehicles: A review | |
CN117392834B (en) | Intelligent route planning method and system | |
US20240112044A1 (en) | Methods and systems for ontology construction with ai-mediated crowdsourcing and concept mining for high-level activity understanding | |
CN117818659A (en) | Vehicle safety decision method and device, electronic equipment, storage medium and vehicle | |
Lu et al. | DeepQTest: Testing Autonomous Driving Systems with Reinforcement Learning and Real-world Weather Data | |
WO2023021208A1 (en) | Support tools for av testing | |
KR20230011774A (en) | Method and system for data recording of vehicle | |
US20240112473A1 (en) | Object trajectory clustering with hybrid reasoning for machine learning | |
Huang et al. | Digital twin edge services with proximity-aware longitudinal lane changing model for connected vehicles | |
Wang | Machine Learning for Autonomous Vehicle Collision Prediction and Avoidance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |