WO2022247303A1 - Predictive control method, apparatus, device and computer-readable storage medium - Google Patents
Predictive control method, apparatus, device and computer-readable storage medium
- Publication number
- WO2022247303A1 (PCT/CN2022/071016)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B17/00—Systems involving the use of models or simulators of said systems
- G05B17/02—Systems involving the use of models or simulators of said systems electric
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Definitions
- the present application relates to the field of control technologies, and in particular to methods, devices, equipment and computer-readable storage media for predictive control.
- the simulation system is a very important offline algorithm module.
- the current simulation system is mainly divided into world-based simulation and log-based simulation.
- In log-based simulation, a 30-second to 1-minute segmented scene is generally constructed from online data or scene editing to form a virtual unmanned-vehicle environment.
- The value of log-based simulation is mainly reflected in the following three aspects: 1. verifying software modules such as planning and control when new algorithms are designed and upgraded; 2. reproducing scenarios that need to be optimized; 3. providing extreme boundary scenarios to verify the safety of unmanned-vehicle algorithms.
- Part of the existing simulation software lacks result-level noise of actual perception data (such as over-segmentation, broken frames, etc.), so prediction and control in the loop cannot be truly and effectively implemented.
- The present application provides a predictive control method, apparatus, electronic device and computer-readable storage medium, which can improve the predictive-control effect of a simulation system.
- A predictive control method, comprising: predicting the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment; predicting the true perception value of the target vehicle for surrounding obstacles at the next simulation moment according to the target vehicle's true perception value for surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and predicting the simulated control amount of the target vehicle at the next simulation moment based on the distribution estimation results of the positioning error and perception error of the target vehicle, as well as the real coordinates and true perception value of the target vehicle at the next simulation moment.
- A predictive control device, comprising: a positioning prediction unit, configured to predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment; a perception prediction unit, configured to predict the true perception value of the target vehicle for surrounding obstacles at the next simulation moment according to the target vehicle's true perception value for surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and a control prediction unit, configured to predict the simulated control amount of the target vehicle at the next simulation moment based on the distribution estimation results of the positioning error and perception error of the target vehicle, as well as the real coordinates and true perception value of the target vehicle at the next simulation moment.
- An electronic device includes: a processor and a memory; the memory is used to store a computer program; the processor is used to execute the above predictive control method by invoking the computer program.
- a computer readable storage medium on which a computer program is stored, and the above predictive control method is implemented when the program is executed by a processor.
- Fig. 1 is a schematic flow chart of a predictive control method shown in the present application
- Fig. 2 is the block diagram of generating noise data and vehicle dynamics model shown in the present application
- FIG. 3 is a schematic diagram of the composition of a predictive control device shown in the present application.
- FIG. 4 is a schematic structural diagram of an electronic device shown in the present application.
- Although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
- The embodiment of the present application provides a predictive control method, which is implemented by a planning simulation system, specifically a complete prediction-control-in-the-loop planning simulation system that supports uncertainty observations.
- the system can better simulate the performance of the actual scenes on the line, and provide a verification environment for the development and improvement of the actual unmanned vehicle planning control system.
- As shown in FIG. 1, which is a schematic flowchart of a predictive control method provided by the embodiment of the present application, the method includes the following steps S101-S103:
- S101 Predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment.
- the target vehicle may be any type of unmanned vehicle, such as a sweeping vehicle, an articulated vehicle, a floor washing vehicle, or a passenger vehicle.
- the planning simulation system has a kernel that implements the simulation algorithm of uncertain observations.
- The algorithm kernel internally maintains, based on real observations, the real coordinates St of the self-vehicle in a global coordinate system, and at the same time maintains the true perception value Xt of the traffic participants in the simulation scene (that is, the surrounding obstacles of the target vehicle). Based on this, the real coordinates St of the target vehicle at the current simulation moment can be obtained from the kernel.
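The kernel state described above can be pictured as a small container holding the true ego pose St and the perceived ground truth Xt. The sketch below is illustrative only; the field names are assumptions, not identifiers from the application:

```python
from dataclasses import dataclass, field

@dataclass
class KernelState:
    """Ground-truth state maintained by the simulation kernel at time slice t."""
    t: int                 # current simulation time slice
    ego_pose: tuple        # S_t: true (x, y, heading) of the target vehicle
    obstacles: dict = field(default_factory=dict)  # X_t: obstacle id -> true state

# one possible initial state for a scene
state = KernelState(t=0, ego_pose=(0.0, 0.0, 0.0),
                    obstacles={"ped_1": (5.0, 2.0, 0.0)})
```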
- the planning simulation system can run the predictive planning control algorithm.
- Through the predictive planning control algorithm, the simulated control amount Ut of the target vehicle at the current simulation moment can be output.
- The algorithm kernel can then obtain the real coordinates St+1 of the target vehicle at the next simulation moment based on the real coordinates St of the target vehicle at the current simulation moment and the simulated control amount Ut of the target vehicle at the current simulation moment.
- Here, "predicting the real coordinates of the target vehicle at the next simulation moment based on the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment" may include: predicting the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment, the dynamic model of the target vehicle, and the simulated control amount of the target vehicle at the current simulation moment.
- Specifically, a dynamic model of the target vehicle can be established in advance, so that the algorithm kernel can obtain the real coordinates St+1 of the target vehicle at the next simulation moment based on the real coordinates St of the target vehicle at the current simulation moment, the dynamic model of the target vehicle, and the simulated control amount Ut at the current simulation moment.
- the dynamic model of the target vehicle may be obtained by learning the real running data of the target vehicle, and the dynamic model may be a non-parametric dynamic model.
- the real running data of the target vehicle may include the target vehicle’s positioning, perception, and running data fed back from the bottom layer of the vehicle.
- The running data includes, but is not limited to, the data generated by each module during autonomous driving, and the data generated by manual driving with the sensors and software modules turned on.
- The vehicle dynamics model can be estimated from the real running data through a machine learning model. Specifically, the lateral and longitudinal observations of the target vehicle and the vehicle control inputs in different scenarios can be obtained, and a deep learning algorithm can be used to obtain the dynamic model of the target vehicle.
- The dynamic model of the target vehicle can be built by the data acquisition module 21, the vehicle power system analysis module 27, and the dynamic model building module 28 shown in FIG. 2.
- The embodiment of the present application uses a non-parametric dynamic model to learn the vehicle power system from a large amount of driving data, which can provide a better power-system model. Compared with a traditional manually calibrated dynamic model, it can predict the state of the vehicle in each scene with higher precision.
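The application does not specify which non-parametric model is used. As one illustrative choice (an assumption, not the patented method), a k-nearest-neighbor regressor over logged (state, control) pairs is a simple non-parametric dynamics model: the predicted state change is averaged from the most similar logged samples.

```python
import numpy as np

class KNNDynamicsModel:
    """Non-parametric dynamics sketch: predict the state delta from the k
    most similar logged (state, control) samples. Illustrative only."""
    def __init__(self, k=3):
        self.k = k
        self.inputs = None   # rows: logged [state..., control...] vectors
        self.deltas = None   # rows: observed next_state - state

    def fit(self, inputs, deltas):
        self.inputs = np.asarray(inputs, dtype=float)
        self.deltas = np.asarray(deltas, dtype=float)
        return self

    def step(self, x):
        # distance to every logged sample, then average the k nearest deltas
        d = np.linalg.norm(self.inputs - np.asarray(x, dtype=float), axis=1)
        nearest = np.argsort(d)[:self.k]
        return self.deltas[nearest].mean(axis=0)

# toy log: input = [speed, throttle], delta = [dx over one time slice]
model = KNNDynamicsModel(k=2).fit(
    inputs=[[1.0, 0.1], [1.1, 0.1], [5.0, 0.9]],
    deltas=[[0.10], [0.11], [0.50]],
)
delta = model.step([1.05, 0.1])  # averages the two nearest logged deltas
```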
- S102 Predict the true value of the target vehicle's perception of the surrounding obstacles at the next simulation time according to the target vehicle's true perception of the surrounding obstacles at the current simulation moment and the multi-modal future trajectory obtained by predicting the surrounding obstacles.
- the surrounding obstacles refer to the traffic participants around the target vehicle in the simulation scene.
- The embodiment of the present application does not limit the type of the surrounding obstacles, which may be vehicles, pedestrians, etc.; nor does it limit the information included for each obstacle, which may include the obstacle type, shape information, and motion state.
- The algorithm kernel of the planning simulation system not only maintains the above real coordinates St of the self-vehicle, but also maintains the true perception value Xt of the surrounding obstacles in the simulation scene at the current simulation moment.
- On this basis, the true perception value Xt+1 of the target vehicle for the surrounding obstacles at the next simulation moment is estimated.
- Predicting the true perception value of the target vehicle for surrounding obstacles at the next simulation moment may specifically include: according to the true perception value of the target vehicle for surrounding obstacles at the current simulation moment, predicting the probability distribution of the multi-modal future trajectories of the surrounding obstacles; and, based on the predicted probability distribution, the degree of danger of the surrounding obstacles and/or the driving trajectory of the target vehicle, obtaining the true perception value of the target vehicle for surrounding obstacles at the next simulation moment.
- The true perception value Xt of the obstacles of the target vehicle may be the result of manual labeling. It should be noted that the method of generating the true perception value Xt will be introduced in the subsequent step S103.
- The true perception value Xt can be input to a pre-built multi-modal prediction system, which simultaneously considers non-drivable areas, traffic lights, lane lines, etc., and, through deep learning algorithms (including but not limited to grid-map-based CNN neural networks and/or graph-neural-network-based deep learning algorithms), outputs the multi-modal future trajectories of the obstacles and the corresponding probability distribution.
- The degree of danger of each obstacle in the simulation scene and/or the trajectory of the target vehicle can be further considered, wherein the degree of danger of an obstacle includes but is not limited to its aggressiveness (for example, a vehicle is more aggressive while a pedestrian is less aggressive) and whether it obeys traffic rules (for example, whether a vehicle strictly follows the lane lines, or whether a pedestrian jaywalks).
- In this way, the true perception value Xt+1 of the target vehicle for the surrounding obstacles at the next simulation moment is obtained.
- The embodiment of the present application constructs reasonable prediction probabilities of multi-modal trajectories for the surrounding obstacles in the simulation scene, and provides reasonable interaction scenes between the target vehicle and the surrounding obstacles. In this way, the authenticity of the simulation scene can be effectively improved, so that the performance of actual online scenes can be better simulated, which in turn provides a verification environment for the development and improvement of the actual unmanned-vehicle planning and control system.
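Given a set of predicted modes with probabilities, selecting an obstacle's next true state can be sketched as sampling one mode, optionally skewed by a danger score. The weighting scheme below (an exponent on the probabilities) is an illustrative assumption; the application only says the danger level is "considered":

```python
import random

def next_obstacle_state(modes, danger=1.0, rng=random):
    """Pick the next true obstacle state from a multi-modal prediction.

    modes: list of (probability, trajectory) pairs, where trajectory is a
    list of future positions. `danger` > 1 flattens the distribution so the
    rarer, more aggressive modes are sampled more often (illustrative knob).
    """
    probs = [p ** (1.0 / danger) for p, _ in modes]
    total = sum(probs)
    r = rng.uniform(0, total)
    acc = 0.0
    for w, (_, traj) in zip(probs, modes):
        acc += w
        if r <= acc:
            return traj[0]   # first waypoint becomes X_{t+1} for this obstacle
    return modes[-1][1][0]   # numeric-fallback: last mode

# two modes: keep straight (p=0.7) vs. cut in (p=0.3)
modes = [(0.7, [(1.0, 0.0)]), (0.3, [(1.0, 1.0)])]
state = next_obstacle_state(modes, danger=1.0)
```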
- S103 Predict the simulation control amount of the target vehicle at the next simulation time based on the distribution estimation results of the positioning error and the perception error of the target vehicle, as well as the real coordinates and the true perception value of the target vehicle at the next simulation time.
- Specifically, the algorithm kernel of the planning simulation system can predict the simulated control amount Ut+1 of the target vehicle at the next simulation moment based on the distribution estimation results of the positioning error and perception error of the target vehicle, as well as the real coordinates St+1 and the true perception value Xt+1.
- The real coordinates St+1 and the true perception value Xt+1 of the target vehicle at the next simulation moment can be converted by the positioning noise system (including module 22 and module 23 shown in FIG. 2) and the perception noise system (including module 24, module 25 and module 26 shown in FIG. 2); through this conversion, NSt+1 and NXt+1, which introduce observation uncertainty, are obtained.
- The planning simulation system then runs the predictive planning control algorithm on NSt+1 and NXt+1, which carry the uncertain observations, and outputs the simulated control amount Ut+1 for the target vehicle at the next simulation moment.
- Cycling the time slice t from 0 to the time slice n at the end of the scene finally realizes predictive planning control simulation for the scene based on uncertain observations. It can be seen that introducing observation uncertainty according to the actual positioning and perception-result distributions of the target vehicle can effectively improve the authenticity of the simulation scene, thereby better simulating the performance of actual online scenes, and in turn providing a verification environment for the development and improvement of the actual unmanned-vehicle planning and control system.
- In some embodiments, the distribution estimation result of the positioning error of the target vehicle can be generated in the following manner, including steps A1-A2. Step A1: from the real operating data of the target vehicle, obtain the online positioning result of the target vehicle, and predict abnormal positioning scenes based on the online positioning result and positioning auxiliary information, where the positioning auxiliary information is auxiliary information for determining abnormal positioning.
- Specifically, the online positioning result of the target vehicle can be obtained from the real operating data of the target vehicle; then the online positioning result and the auxiliary information for determining abnormal positioning are input, and possible abnormal positioning scenes are output.
- The auxiliary information includes, but is not limited to, landmark objects detected online (such as road edges), kinematics-based dead reckoning from INS and other sensors, and machine-learning positioning-status classification modules based on time-series features. This function can be realized by the location anomaly analysis module 22 shown in FIG. 2.
- Step A2: For each predicted abnormal positioning scene, determine the true positioning value of the target vehicle in the abnormal positioning scene, and obtain the positioning error of the target vehicle in the abnormal positioning scene based on the determined true positioning value.
- Specifically, a set of positioning algorithms with a large amount of computation and poor real-time performance can be used offline to recalculate the true positioning value for the abnormal positioning scene, so that the positioning error in the abnormal positioning scene can be calculated from the recalculated true positioning value of the target vehicle and the erroneous online positioning result.
- the distribution estimation results of the positioning error in each abnormal positioning scene of the target vehicle can be obtained.
- This function can be realized by the abnormal location and distribution module 23 shown in FIG. 2 .
- In some embodiments, the distribution estimation result of the perception error of the target vehicle can be generated in the following manner, including steps B1-B3. Step B1: from the real operating data of the target vehicle, obtain the online perception results of the target vehicle for the surrounding obstacles, and predict abnormal perception scenes based on the online perception results.
- Specifically, the online perception results of the target vehicle for the surrounding obstacles can be obtained from the real operating data of the target vehicle; then, based on the online perception results, heuristic analysis is performed on the life cycle of each obstacle, the convex-hull distance of the obstacle, the speed change of the obstacle, the difference between the obstacle's reported speed and the speed interpolated from its center of mass, etc., to obtain candidate complex perception scenarios where mistakes may occur, that is, possible abnormal perception scenes.
- This function can be realized by the perception anomaly analysis module 24 shown in FIG. 2 .
- Step B2: For each predicted abnormal perception scene, obtain the manual labeling result of the obstacle perception result in the abnormal perception scene.
- During actual operation, the target vehicle needs to obtain information about the obstacles around it; such information may include three-dimensional point-cloud data of obstacles obtained through radar, or obstacle image data obtained through cameras.
- The perception results obtained in step B1 for the abnormal perception scene can be projected onto the obstacle three-dimensional point-cloud data and/or obstacle image data, and the obstacle labeling results can be re-given manually. The re-labeling content includes, but is not limited to, re-segmentation of over-segmented and under-segmented obstacles, so as to give correct obstacle perception results (such as the correct obstacle category, the real three-dimensional state, etc.).
- This function can be realized by the perceptual labeling module 25 shown in FIG. 2 .
- Step B3: Obtain the true perception value of the target vehicle for the surrounding obstacles at the current moment from the obtained manual labeling results, and, based on the acquired true perception value, obtain the perception error of the target vehicle in the abnormal perception scene.
- According to the true-value labeling results and the errors of the online perception system in category, speed, segmentation, etc., the perception error in each abnormal perception scene is calculated from the true-value labeling results and the erroneous online perception results, so that the distribution estimation results of the perception error of the target vehicle in each abnormal perception scene can be obtained.
- This function can be realized by the abnormality awareness distribution module 26 shown in FIG. 2 .
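The heuristic screening in step B1 (short obstacle life cycle, implausible speed jumps, mismatch between reported speed and centroid-interpolated speed) can be sketched as a simple rule filter. The thresholds and field names below are illustrative assumptions, not values from the application:

```python
def is_abnormal_perception(track, min_life=5, max_speed_jump=3.0,
                           max_speed_mismatch=1.5):
    """Flag a perception track as a candidate abnormal scene (step B1 sketch).

    track: dict with 'life' (frames observed), 'speeds' (reported speed per
    frame) and 'centroid_speed' (speed interpolated from centroid positions).
    """
    if track["life"] < min_life:              # short-lived: possible broken frames
        return True
    speeds = track["speeds"]
    jumps = [abs(b - a) for a, b in zip(speeds, speeds[1:])]
    if jumps and max(jumps) > max_speed_jump:  # implausible acceleration
        return True
    # reported speed vs. speed implied by centroid motion
    if abs(speeds[-1] - track["centroid_speed"]) > max_speed_mismatch:
        return True
    return False

stable = {"life": 20, "speeds": [5.0, 5.1, 5.0], "centroid_speed": 5.0}
flicker = {"life": 2, "speeds": [5.0], "centroid_speed": 5.0}
```

Tracks flagged this way would then go to the manual re-labeling of step B2.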
- As shown in FIG. 3, which is a schematic diagram of the composition of a predictive control device provided by an embodiment of the present application:
- The device includes: a positioning prediction unit 310, configured to predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment;
- a perception prediction unit 320, configured to predict the true perception value of the target vehicle for the surrounding obstacles at the next simulation moment according to the target vehicle's true perception value for the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles;
- a control prediction unit 330, configured to predict the simulated control amount of the target vehicle at the next simulation moment based on the distribution estimation results of the positioning error and perception error of the target vehicle, as well as the real coordinates and true perception value of the target vehicle at the next simulation moment.
- Optionally, the positioning prediction unit 310 is specifically configured to predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment, the dynamic model of the target vehicle, and the simulated control amount at the current simulation moment.
- the dynamic model is obtained by learning real operating data of the target vehicle.
- the dynamic model is a non-parametric dynamic model.
- Optionally, the perception prediction unit 320 is specifically configured to: predict the probability distribution of the multi-modal future trajectories of the surrounding obstacles based on the true perception value of the target vehicle for the surrounding obstacles at the current simulation moment; and, based on the predicted probability distribution, the degree of danger of the surrounding obstacles and/or the driving trajectory of the target vehicle, obtain the true perception value of the target vehicle for the surrounding obstacles at the next simulation moment.
- Optionally, the device further includes a first estimation unit, configured to generate the distribution estimation result of the positioning error of the target vehicle in the following manner: from the real operating data of the target vehicle, obtain the online positioning result of the target vehicle, and predict abnormal positioning scenes based on the online positioning result and positioning auxiliary information, the positioning auxiliary information being auxiliary information for determining abnormal positioning; and, for each predicted abnormal positioning scene, determine the true positioning value of the target vehicle in the abnormal positioning scene, and obtain the positioning error of the target vehicle in that scene based on the determined true positioning value.
- Optionally, the device further includes a second estimation unit, configured to generate the distribution estimation result of the perception error of the target vehicle in the following manner: from the real operating data of the target vehicle, obtain the online perception results of the target vehicle for the surrounding obstacles, and predict abnormal perception scenes based on the online perception results; for each predicted abnormal perception scene, obtain the manual labeling result of the obstacle perception result in that scene; and, from the obtained manual labeling results, obtain the true perception value of the target vehicle for the surrounding obstacles at the current moment, and, based on the acquired true perception value, obtain the perception error of the target vehicle in that abnormal perception scene.
- Since the device embodiment basically corresponds to the method embodiment, for related parts, please refer to the description of the method embodiment.
- The device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this application, which can be understood and implemented by those skilled in the art without creative effort.
- the embodiment of the present application also provides an electronic device.
- The structure of the electronic device is shown in FIG. 4; the device includes a processor 4001 and a memory 4002 that are electrically connected. The memory 4002 is configured to store at least one computer-executable instruction, and the processor 4001 is configured to execute the at least one computer-executable instruction, thereby performing the predictive control method of any embodiment or any optional implementation in the present application.
- The processor 4001 can be an FPGA (Field-Programmable Gate Array) or another device with logic processing capabilities, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
- The embodiment of the present application also provides a computer-readable storage medium storing a computer program, which, when executed by a processor, implements the steps of the predictive control method provided by any embodiment or any optional implementation in the present application.
- The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
Claims (10)
- 一种预测控制方法,包括:根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标;根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测所述目标车辆在下一模拟时刻对周围障碍物的感知真值;基于所述目标车辆的定位误差和感知误差的分布估计结果,以及所述目标车辆在下一模拟时刻的真实坐标和感知真值,预测下一模拟时刻对所述目标车辆的模拟控制量。
- 根据权利要求1所述的方法,其特征在于,所述根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标,包括:根据目标车辆在当前模拟时刻的真实坐标、所述目标车辆的动力学模型、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标。
- 根据权利要求2所述的方法,其特征在于,所述动力学模型是通过对所述目标车辆的真实运行数据进行学习得到的。
- 根据权利要求2所述的方法,其特征在于,所述动力学模型是非参数动力学模型。
- 根据权利要求1所述的方法,其特征在于,所述根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测所述目标车辆在下一模拟时刻对周围障碍物的感知真值,包括:根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值,预测周围障碍物在未来一段时间的多模态运行轨迹的概率分布;基于预测得到的概率分布、周围障碍物的危险程度和/或所述目标车辆的行驶轨迹,得到所述目标车辆在下一模拟时刻对周围障碍物的感知真值。
- 根据权利要求1-5任一项所述的方法,其特征在于,按照下述方式生成所述目标车辆的定位误差的分布估计结果:从所述目标车辆的真实运行数据中,获取所述目标车辆的线上定位结果,并基于所述线上定位结果以及定位辅助信息,对异常定位场景进行预测,所述定位辅助信息为用于判定定位异常的辅助信息;对于预测出的每一异常定位场景,确定所述目标车辆在该异常定位场景下的定位真值,并基于确定的定位真值,得到所述目标车辆在该异常定位场景下的定位误差。
- The method according to any one of claims 1-5, wherein the distribution estimate of the target vehicle's perception error is generated as follows: obtaining the target vehicle's online perception results for surrounding obstacles from its real operating data, and predicting abnormal perception scenarios based on the online perception results; for each predicted abnormal perception scenario, obtaining manual annotations of the obstacle perception results in that scenario; and, from the obtained annotations, obtaining the target vehicle's perception ground truth of surrounding obstacles at the current time and, based on the obtained ground truth, obtaining the target vehicle's perception error in that scenario.
- A predictive control apparatus, comprising: a positioning prediction unit configured to predict the true coordinates of a target vehicle at the next simulation time according to the target vehicle's true coordinates at the current simulation time and the simulated control input applied at the current simulation time; a perception prediction unit configured to predict the target vehicle's perception ground truth of surrounding obstacles at the next simulation time according to its perception ground truth of surrounding obstacles at the current simulation time and the multi-modal future trajectories predicted for the surrounding obstacles; and a control prediction unit configured to predict the simulated control input at the next simulation time based on distribution estimates of the target vehicle's positioning error and perception error, together with the target vehicle's true coordinates and perception ground truth at the next simulation time.
- An electronic device, comprising a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to invoke the computer program to perform the predictive control method according to any one of claims 1-7.
- A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the predictive control method according to any one of claims 1-7.
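The closed-loop simulation loop described in claim 1 can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function names (`dynamics_model`, `predict_perception`, `predict_control`, `simulate`), the toy additive dynamics, the origin-seeking control law, and the Gaussian error parameters are all hypothetical stand-ins for the components the claims describe.

```python
import random

def dynamics_model(coord, control):
    # Propagate the vehicle's true coordinates one simulation step
    # forward given the current simulated control input (claims 1-2).
    # A trivial additive model stands in for the learned dynamics of claim 3.
    x, y = coord
    dx, dy = control
    return (x + dx, y + dy)

def predict_perception(multimodal_trajectories):
    # Claim 5 (simplified): collapse each obstacle's multi-modal
    # trajectory distribution to its most probable mode and use that
    # as the next-step perception ground truth.
    return {obs: max(modes, key=lambda m: m[0])[1]
            for obs, modes in multimodal_trajectories.items()}

def predict_control(coord, perception_gt, loc_err, perc_err):
    # Claim 1: sample positioning and perception errors from their
    # estimated distributions (claims 6-7) and inject them before the
    # control law runs. Stand-in control law: steer toward the origin.
    noisy_coord = (coord[0] + random.gauss(*loc_err),
                   coord[1] + random.gauss(*loc_err))
    noisy_perception = {o: (p[0] + random.gauss(*perc_err),
                            p[1] + random.gauss(*perc_err))
                        for o, p in perception_gt.items()}
    _ = noisy_perception  # a real controller would also react to obstacles
    return (-0.1 * noisy_coord[0], -0.1 * noisy_coord[1])

def simulate(steps, coord, control, trajectories,
             loc_err=(0.0, 0.1), perc_err=(0.0, 0.2)):
    # One rollout: true coordinates, perception ground truth, and the
    # next simulated control input are predicted in turn each step.
    for _ in range(steps):
        coord = dynamics_model(coord, control)
        perception_gt = predict_perception(trajectories)
        control = predict_control(coord, perception_gt, loc_err, perc_err)
    return coord, control
```

The key point the sketch illustrates is the separation the claims draw between the simulated "ground truth" (coordinates and perception) and the error-perturbed inputs the controller actually sees, with the perturbations drawn from error distributions estimated offline from real operating data.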
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110586196.7 | 2021-05-27 | ||
CN202110586196.7A CN115236997B (zh) | 2021-05-27 | 2021-05-27 | Predictive control method, apparatus, device, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022247303A1 (zh) | 2022-12-01 |
Family
ID=83666466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/071016 WO2022247303A1 (zh) | 2022-01-10 | Predictive control method, apparatus, device, and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115236997B (zh) |
WO (1) | WO2022247303A1 (zh) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005122272A (ja) * | 2003-10-14 | 2005-05-12 | Toyota Motor Corp | Vehicle travel-route predictive control device |
CN102358287A (zh) * | 2011-09-05 | 2012-02-22 | 北京航空航天大学 | Trajectory-tracking control method for a vehicle automatic-driving robot |
JP2014118138A (ja) * | 2013-07-29 | 2014-06-30 | Daihatsu Motor Co Ltd | Driving assistance device |
CN108227709A (zh) * | 2017-12-29 | 2018-06-29 | 深圳地平线机器人科技有限公司 | Method and apparatus for controlling autonomous driving of a vehicle |
CN109085840A (zh) * | 2018-09-21 | 2018-12-25 | 大连维德智能视觉技术创新中心有限公司 | Binocular-vision-based vehicle navigation control system and control method |
CN109572694A (zh) * | 2018-11-07 | 2019-04-05 | 同济大学 | Autonomous-driving risk assessment method considering uncertainty |
CN109572693A (zh) * | 2019-01-24 | 2019-04-05 | 湖北亿咖通科技有限公司 | Vehicle obstacle-avoidance assistance method and system, and vehicle |
CN109866752A (zh) * | 2019-03-29 | 2019-06-11 | 合肥工业大学 | Dual-mode parallel vehicle trajectory-tracking driving system and method based on predictive control |
CN111260950A (zh) * | 2020-01-17 | 2020-06-09 | 清华大学 | Trajectory-tracking method based on trajectory prediction, medium, and on-board device |
CN112415995A (zh) * | 2020-09-22 | 2021-02-26 | 重庆智行者信息科技有限公司 | Planning control method based on real-time safety boundaries |
CN112578683A (zh) * | 2020-10-16 | 2021-03-30 | 襄阳达安汽车检测中心有限公司 | Optimized in-the-loop simulation test method for an automotive driver-assistance controller |
CN112666975A (zh) * | 2020-12-18 | 2021-04-16 | 中山大学 | UAV safe-trajectory tracking method based on predictive control and barrier functions |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN208717823U (zh) * | 2018-04-28 | 2019-04-09 | 上海仙途智能科技有限公司 | Unmanned sweeping system |
CN108502053A (zh) * | 2018-06-13 | 2018-09-07 | 安徽新华学院 | Precise control method for a wheeled robot platform |
EP3653459B1 (en) * | 2018-11-15 | 2021-07-14 | Volvo Car Corporation | Vehicle safe stop |
CN109598066B (zh) * | 2018-12-05 | 2023-08-08 | 百度在线网络技术(北京)有限公司 | Effect evaluation method, apparatus, device, and storage medium for a prediction module |
CN111459995B (zh) * | 2020-03-11 | 2021-11-23 | 南京航空航天大学 | Multi-modal vehicle-speed prediction method based on driving data |
CN111505965B (zh) * | 2020-06-17 | 2020-09-29 | 深圳裹动智驾科技有限公司 | Method, apparatus, computer device, and storage medium for simulation testing of autonomous vehicles |
- 2021-05-27: CN application CN202110586196.7A filed; granted as patent CN115236997B (status: Active)
- 2022-01-10: WO application PCT/CN2022/071016 filed (Application Filing)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116991157A (zh) * | 2023-04-14 | 2023-11-03 | 北京百度网讯科技有限公司 | Autonomous-driving model with human-expert driving capability, training method, and vehicle |
CN117113722A (zh) * | 2023-09-20 | 2023-11-24 | 广东省水利水电第三工程局有限公司 | BIM simulation method and system for hoisting large concrete molds |
CN117113722B (zh) * | 2023-09-20 | 2024-03-15 | 广东省水利水电第三工程局有限公司 | BIM simulation method and system for hoisting large concrete molds |
Also Published As
Publication number | Publication date |
---|---|
CN115236997A (zh) | 2022-10-25 |
CN115236997B (zh) | 2023-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11458991B2 (en) | Systems and methods for optimizing trajectory planner based on human driving behaviors | |
US11487988B2 (en) | Augmenting real sensor recordings with simulated sensor data | |
US20200160598A1 (en) | Systems and Methods for Generating Synthetic Light Detection and Ranging Data via Machine Learning | |
JP2021534484A (ja) | 手続き的な世界の生成 | |
US20190065637A1 (en) | Augmenting Real Sensor Recordings With Simulated Sensor Data | |
JP2020125102A (ja) | ライダ、レーダ及びカメラセンサのデータを使用する強化学習に基づく自律走行時の最適化されたリソース割当てのための方法及び装置 | |
CN108509820B (zh) | 障碍物分割方法及装置、计算机设备及可读介质 | |
US12099351B2 (en) | Operational testing of autonomous vehicles | |
CN105793730A (zh) | 对象运动的基于激光雷达的分类 | |
CN109558854B (zh) | 障碍物感知方法、装置、电子设备及存储介质 | |
WO2022247303A1 (zh) | 预测控制的方法、装置、设备及计算机可读存储介质 | |
Danescu et al. | Particle grid tracking system stereovision based obstacle perception in driving environments | |
JP2024511043A (ja) | モデル注入を用いた点群データ拡張のためのシステム、および方法 | |
US20230080540A1 (en) | Lidar simulation system | |
CN109376664A (zh) | 机器学习训练方法、装置、服务器和介质 | |
Roos et al. | A framework for simulative evaluation and optimization of point cloud-based automotive sensor sets | |
Agafonov et al. | 3D objects detection in an autonomous car driving problem | |
JP2022081613A (ja) | 自動運転特徴の特定方法、装置、設備、媒体及びコンピュータプログラム | |
US20220156517A1 (en) | Method for Generating Training Data for a Recognition Model for Recognizing Objects in Sensor Data from a Surroundings Sensor System of a Vehicle, Method for Generating a Recognition Model of this kind, and Method for Controlling an Actuator System of a Vehicle | |
CN114966736A (zh) | 一种基于点云数据进行目标速度预测的处理方法 | |
US20230278589A1 (en) | Autonomous driving sensor simulation | |
US12106528B2 (en) | Generating scene flow labels for point clouds using object labels | |
CN114663503B (zh) | 从图像进行三维位置预测 | |
US11644331B2 (en) | Probe data generating system for simulator | |
CN114663879A (zh) | 目标检测方法、装置、电子设备及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22810042; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22810042; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as the address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.05.2024) |