
WO2022247303A1 - Method, apparatus, device, and computer-readable storage medium for predictive control - Google Patents

Method, apparatus, device, and computer-readable storage medium for predictive control

Info

Publication number
WO2022247303A1
WO2022247303A1 (PCT/CN2022/071016)
Authority
WO
WIPO (PCT)
Prior art keywords
target vehicle
perception
positioning
simulation
surrounding obstacles
Prior art date
Application number
PCT/CN2022/071016
Other languages
English (en)
French (fr)
Inventor
黄超
黎罗河
彭莹
Original Assignee
上海仙途智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海仙途智能科技有限公司
Publication of WO2022247303A1

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B17/00Systems involving the use of models or simulators of said systems
    • G05B17/02Systems involving the use of models or simulators of said systems electric
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • The present application relates to the field of control technologies, and in particular to methods, apparatuses, devices, and computer-readable storage media for predictive control.
  • In the field of unmanned driving, the simulation system is a very important offline algorithm module.
  • Current simulation systems are mainly divided into world-based simulation and log-based simulation.
  • In log-based simulation, a 30-second to 1-minute segment scene is generally constructed from online data or scene editing to serve as a virtual unmanned vehicle environment.
  • The importance of log-based simulation is mainly reflected in the following three aspects: 1. supporting the software modules that verify planning, control, and other algorithms in designing and upgrading new algorithms; 2. assisting in fixing takeover incidents that arise during unmanned vehicle road tests, as well as scenarios that need optimization; 3. providing extreme boundary scenarios to verify the safety of unmanned vehicle algorithms.
  • Some simulation software lacks the result-level noise present in actual perception data (such as over-segmentation and broken frames), making it impossible to truly and effectively realize prediction and control in the loop.
  • The present application provides a method, apparatus, device, and computer-readable storage medium for predictive control, which can improve the predictive control effect of a simulation system.
  • A predictive control method, comprising: predicting the real coordinates of a target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount applied to the target vehicle at the current simulation moment; predicting the perceived true values of the target vehicle for surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and predicting the simulated control amount for the target vehicle at the next simulation moment based on distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates and perceived true values of the target vehicle at the next simulation moment.
  • A predictive control device, comprising: a positioning prediction unit, configured to predict the real coordinates of a target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount applied to the target vehicle at the current simulation moment; a perception prediction unit, configured to predict the perceived true values of the target vehicle for surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and a control prediction unit, configured to predict the simulated control amount for the target vehicle at the next simulation moment based on distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates and perceived true values of the target vehicle at the next simulation moment.
  • An electronic device, comprising a processor and a memory; the memory is configured to store a computer program, and the processor is configured to execute the above predictive control method by invoking the computer program.
  • A computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the above predictive control method is implemented.
  • FIG. 1 is a schematic flowchart of a predictive control method shown in the present application;
  • FIG. 2 is a block diagram of the components for generating noise data and a vehicle dynamics model shown in the present application;
  • FIG. 3 is a schematic diagram of the composition of a predictive control device shown in the present application;
  • FIG. 4 is a schematic structural diagram of an electronic device shown in the present application.
  • Although the terms first, second, third, and so on may be used in this application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
  • The embodiment of the present application provides a predictive control method implemented by a planning simulation system; specifically, the system is a complete predictive-control-in-the-loop planning simulation system that supports uncertain observations. The system can better simulate the performance of actual online scenarios and provide a verification environment for the development and improvement of real unmanned vehicle planning and control systems.
  • Referring to FIG. 1, a schematic flowchart of a predictive control method provided by the embodiment of the present application is shown; the method includes the following steps S101-S103:
  • S101: Predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment.
  • the target vehicle may be any type of unmanned vehicle, such as a sweeping vehicle, an articulated vehicle, a floor washing vehicle, or a passenger vehicle.
  • The planning simulation system has a kernel that implements the simulation algorithm for uncertain observations. Inside the algorithm kernel, a global coordinate system St of the self-vehicle based on real observations is maintained, together with the perceived true values Xt of the traffic participants in the vehicle scene (that is, the surrounding obstacles of the target vehicle). On this basis, the real coordinates St of the target vehicle at the current simulation moment can be obtained from the global coordinate system maintained by the kernel.
  • In addition, the planning simulation system can run a predictive planning control algorithm, which outputs the simulated control amount Ut for the target vehicle at the current simulation moment. The algorithm kernel can then obtain the real coordinates St+1 of the target vehicle at the next simulation moment based on the real coordinates St at the current simulation moment and the simulated control amount Ut at the current simulation moment.
  • In one implementation of the embodiment, "predicting the real coordinates of the target vehicle at the next simulation moment based on the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment" in S101 may include: predicting the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment, the dynamic model of the target vehicle, and the simulated control amount of the target vehicle at the current simulation moment.
  • In this implementation, a dynamic model of the target vehicle can be established in advance, so that the algorithm kernel can obtain the real coordinates St+1 of the target vehicle at the next simulation moment based on the real coordinates St at the current simulation moment, the dynamic model of the target vehicle, and the simulated control amount Ut at the current simulation moment.
  • the dynamic model of the target vehicle may be obtained by learning the real running data of the target vehicle, and the dynamic model may be a non-parametric dynamic model.
  • The real running data of the target vehicle may include the target vehicle's positioning and perception data, as well as running data fed back from the vehicle's underlying systems. The running data includes, but is not limited to, data generated by the modules during autonomous driving operation, as well as data generated by manual driving with the sensors and software modules turned on.
  • After the real running data of the target vehicle is obtained, the vehicle dynamics model can be estimated through a machine learning model based on this data; specifically, the lateral and longitudinal observations of the target vehicle and the vehicle control inputs in different scenarios can be collected and processed by a deep learning algorithm to obtain the dynamic model of the target vehicle.
  • The construction of the dynamic model of the target vehicle can be realized by the data acquisition module 21, the vehicle power system analysis module 27, and the dynamic model building module 28 shown in FIG. 2.
  • The embodiment of the present application uses a non-parametric dynamic model to learn the vehicle power system in a streaming fashion from a large amount of driving data, which can provide a better power system model; compared with a traditionally hand-calibrated dynamic model, it predicts the vehicle state in each scenario with higher precision.
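  • As a rough illustration of the non-parametric idea described above (the patent does not disclose the concrete model), a nearest-neighbor lookup over logged (state, control, next-state) transitions can stand in for the learned dynamics model; all class and variable names here are hypothetical:

```python
import math

class NonParametricDynamics:
    """Toy non-parametric dynamics model: stores logged
    (state, control, next_state) transitions and predicts by
    returning the next state of the closest stored pair."""

    def __init__(self):
        self.transitions = []  # ((x, y, v), u, next_state) tuples

    def add(self, state, control, next_state):
        self.transitions.append((state, control, next_state))

    def predict(self, state, control):
        # 1-nearest-neighbor over a combined state/control distance.
        def score(t):
            s, u, _ = t
            return math.dist(s, state) + abs(u - control)
        return min(self.transitions, key=score)[2]

# "Learn" from two logged transitions, then query near the first one.
model = NonParametricDynamics()
model.add((0.0, 0.0, 1.0), 0.5, (1.0, 0.0, 1.5))
model.add((1.0, 0.0, 1.5), 0.0, (2.5, 0.0, 1.5))
next_state = model.predict((0.1, 0.0, 1.0), 0.5)  # → (1.0, 0.0, 1.5)
```

A deep-learning regressor, as the text suggests, would replace the 1-NN lookup, but the interface (state and control in, next true state out) is the same one the algorithm kernel needs for S101.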
  • S102: Predict the perceived true values of the target vehicle for the surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles.
  • the surrounding obstacles refer to the traffic participants around the target vehicle in the simulation scene.
  • The embodiment of the present application does not limit the type of the surrounding obstacles, which may be vehicles, pedestrians, and so on; nor does it limit the information included in the perceived true values, which may include the obstacle type, shape information, and motion state.
  • the algorithm core of the planning simulation system not only maintains the above global coordinate system St of the self-vehicle, but also maintains the perceived true value Xt of the surrounding obstacles in the simulation scene at the current simulation moment.
  • In addition, information about the multi-modal future trajectories of the surrounding obstacles needs to be predicted, so that the algorithm kernel can estimate the perceived true values Xt+1 of the target vehicle for the surrounding obstacles at the next simulation moment based on the perceived true values Xt at the current simulation moment and the trajectory information.
  • In one implementation of the embodiment, predicting the perceived true values of the surrounding obstacles at the next simulation moment in S102 may specifically include: predicting the probability distribution of the multi-modal running trajectories of the surrounding obstacles over a future period according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment; and obtaining the perceived true values of the target vehicle for the surrounding obstacles at the next simulation moment based on the predicted probability distribution, the degree of danger of the surrounding obstacles, and/or the driving trajectory of the target vehicle.
  • In this implementation, for each obstacle around the target vehicle, the perceived true value Xt of the target vehicle for that obstacle at the current simulation moment needs to be obtained in advance; this perceived true value Xt may be the result of manual labeling. It should be noted that the way the perceived true value Xt is generated will be introduced in the subsequent step S103.
  • Then, the perceived true value Xt can be output to a pre-built multi-modal prediction system, which simultaneously considers non-drivable areas, traffic lights, lane lines, and so on, and, through deep learning algorithms (including but not limited to grid-map-based CNN neural networks and/or graph-neural-network-based deep learning algorithms), outputs the multi-modal future trajectories of each obstacle and computes the probability distribution over those trajectories.
  • On this basis, the degree of danger of each obstacle in the simulation scene and/or the driving trajectory of the target vehicle can be further considered, where the degree of danger of an obstacle includes but is not limited to its aggressiveness (for example, a vehicle may behave more aggressively while a pedestrian behaves less aggressively) and whether it obeys traffic rules (for example, whether a vehicle strictly follows the lane lines, or whether a pedestrian jaywalks).
  • In this way, the perceived true value Xt+1 of the target vehicle for the surrounding obstacles at the next simulation moment is obtained.
  • It can be seen that the embodiment of the present application constructs reasonable predicted probabilities of multi-modal running trajectories for the surrounding obstacles in the simulation scene and provides reasonable interaction between the target vehicle and the surrounding obstacles. This effectively improves the authenticity of the simulation scene, so that the performance of actual online scenarios can be better simulated, thereby providing a verification environment for the development and improvement of real unmanned vehicle planning and control systems.
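  • The mode selection described above can be sketched as follows; the two candidate trajectories, their probabilities, and the danger-based re-weighting are invented for illustration, since the patent does not fix a concrete scheme:

```python
# Each obstacle gets several candidate future trajectories ("modes"),
# each with a predicted probability; a danger weight re-scales them
# (e.g. an aggressive cut-in could be weighted up).
modes = [
    {"name": "keep_lane", "traj": [(0.0, 0.0), (0.0, 1.0)], "p": 0.6},
    {"name": "cut_in",    "traj": [(0.0, 0.0), (1.0, 1.0)], "p": 0.4},
]

def pick_mode(modes, danger_weights):
    # Re-weight mode probabilities by danger, renormalize, take the argmax.
    scores = [m["p"] * w for m, w in zip(modes, danger_weights)]
    total = sum(scores)
    probs = [s / total for s in scores]
    best = max(range(len(modes)), key=lambda i: probs[i])
    return modes[best], probs

best_mode, probs = pick_mode(modes, [1.0, 1.2])
# With these toy numbers keep_lane still dominates (0.6 vs 0.48), so its
# next trajectory point would become the perceived truth at t+1.
```

In the patent's system the probabilities come from the deep-learning prediction module and the selected mode feeds Xt+1; this sketch only shows the re-weighting and selection step.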
  • S103: Predict the simulated control amount of the target vehicle at the next simulation moment based on the distribution estimation results of the positioning error and the perception error of the target vehicle, as well as the real coordinates and the perceived true values of the target vehicle at the next simulation moment.
  • In this step, the algorithm kernel of the planning simulation system can predict the simulated control amount Ut+1 for the target vehicle at the next simulation moment based on the distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates St+1 and the perceived true value Xt+1 at the next simulation moment.
  • Specifically, the real coordinates St+1 and the perceived true value Xt+1 of the target vehicle at the next simulation moment can be converted by the positioning noise system (including module 22 and module 23 shown in FIG. 2) and the perception noise system (including module 24, module 25, and module 26 shown in FIG. 2); through this conversion, NSt+1 and NXt+1 with uncertain observations introduced are obtained.
  • Then, by running the predictive planning control algorithm, the planning simulation system outputs the simulated control amount Ut+1 for the target vehicle at the next simulation moment based on NSt+1 and NXt+1, into which observation uncertainty has been introduced.
  • By cycling the time slice t from 0 to the time slice n at the end of the scene, predictive planning control simulation of the scene based on uncertain observations is finally realized. It can be seen that introducing observation uncertainty according to the actual positioning of the target vehicle and the distribution of perception results effectively improves the authenticity of the simulation scene, so that the performance of actual online scenarios can be better simulated, thereby providing a verification environment for the development and improvement of real unmanned vehicle planning and control systems.
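  • The t = 0 … n loop can be sketched as a closed loop in which the controller only ever sees a noise-corrupted observation of the true state; the 1-D state, proportional controller, and Gaussian localization noise below are illustrative assumptions, not the patent's actual algorithm:

```python
import random

random.seed(0)  # reproducible noise draws

def simulate(s0, steps, dynamics, controller, loc_noise_std):
    """Predictive-control-in-the-loop sketch: the kernel keeps the true
    state St, while the planner only receives a noisy observation NSt."""
    s = s0
    trace = []
    for t in range(steps):
        ns = s + random.gauss(0.0, loc_noise_std)  # NSt: noisy localization
        u = controller(ns)                          # Ut from the noisy view
        s = dynamics(s, u)                          # St+1: true next state
        trace.append((t, s, u))
    return trace

# 1-D toy: drive the state toward 10 with a proportional controller.
trace = simulate(
    s0=0.0, steps=20,
    dynamics=lambda s, u: s + u,
    controller=lambda obs: 0.5 * (10.0 - obs),
    loc_noise_std=0.1,
)
```

The point of the structure is that the controller never touches the true state directly, which is exactly what lets the simulation expose how a planner degrades under observation uncertainty.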
  • In one implementation of the embodiment, the distribution estimation result of the target vehicle's positioning error can be generated through the following steps A1-A2. Step A1: obtain the online positioning result of the target vehicle from its real operating data, and predict abnormal positioning scenes based on the online positioning result and positioning auxiliary information, where the positioning auxiliary information is auxiliary information for determining abnormal positioning.
  • In this step, the online positioning result of the target vehicle can be obtained from its real operating data; then the online positioning result and the auxiliary information for determining abnormal positioning are taken as input, and the possible abnormal positioning scenes are output.
  • The auxiliary information includes, but is not limited to, landmark objects detected online (such as road edges), kinematics-based dead reckoning from an INS and similar sensors, and machine-learning positioning-status classification modules based on time-series features. This function can be realized by the location anomaly analysis module 22 shown in FIG. 2.
  • Step A2: For each predicted abnormal positioning scene, determine the true positioning value of the target vehicle in that scene, and obtain the positioning error of the target vehicle in that scene based on the determined true positioning value.
  • In this step, a set of positioning algorithms with heavy computation and poor real-time performance can be used to recalculate, offline, the true positioning value in each abnormal positioning scene; the positioning error in that scene is then calculated from the recalculated true positioning value and the erroneous online positioning result. In this way, the distribution estimation results of the positioning error across the target vehicle's abnormal positioning scenes can be obtained.
  • This function can be realized by the abnormal location and distribution module 23 shown in FIG. 2 .
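  • Steps A1-A2 reduce to comparing erroneous online positions against offline-recomputed truth and summarizing the residuals; the numbers below are made up, and the Gaussian mean/standard deviation is only one possible form of the "distribution estimation result":

```python
import statistics

# Offline-recomputed true positions vs. the erroneous online results
# for a few hypothetical abnormal positioning scenes (1-D for brevity).
online_pos    = [10.2, 33.9, 57.5, 81.1]
offline_truth = [10.0, 34.0, 57.0, 81.0]

errors = [on - tr for on, tr in zip(online_pos, offline_truth)]

# Summarize the error distribution with a simple Gaussian fit.
mu = statistics.mean(errors)      # bias of the online localizer
sigma = statistics.stdev(errors)  # spread of the positioning error
```

The fitted (mu, sigma) pair is then what a positioning noise system, such as the one built from modules 22 and 23, could sample from when corrupting St+1 into NSt+1.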
  • In one implementation of the embodiment, the distribution estimation result of the target vehicle's perception error can be generated through the following steps B1-B3. Step B1: obtain the target vehicle's online perception results for the surrounding obstacles from its real operating data, and predict abnormal perception scenes based on the online perception results.
  • In this step, the target vehicle's online perception results for the surrounding obstacles can be obtained from its real operating data; then heuristic analysis is performed based on the online perception results and cues such as the obstacle's life cycle, the obstacle's convex-hull distance, changes in the obstacle's speed, and the difference between the obstacle's speed and the interpolated speed of its center of mass, to obtain candidate complex perception scenarios that are prone to errors, that is, possible abnormal perception scenes.
  • This function can be realized by the perception anomaly analysis module 24 shown in FIG. 2 .
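  • The heuristic screening in step B1 can be sketched as a simple rule over per-track statistics; the cue names follow the text (life cycle, speed change), but the thresholds and frame layout are invented:

```python
# Screen perception tracks for candidate abnormal scenes using cues from
# the text: a track that dies too quickly or whose speed jumps implausibly
# between frames is flagged for manual re-labeling.
LIFE_CYCLE_MIN = 3    # frames; illustrative threshold
SPEED_JUMP_MAX = 2.0  # m/s between frames; illustrative threshold

def is_abnormal(track):
    return (track["life_cycle"] < LIFE_CYCLE_MIN
            or abs(track["speed_jump"]) > SPEED_JUMP_MAX)

tracks = [
    {"id": 1, "life_cycle": 10, "speed_jump": 0.2},  # normal
    {"id": 2, "life_cycle": 2,  "speed_jump": 0.1},  # broken track
    {"id": 3, "life_cycle": 8,  "speed_jump": 3.5},  # speed spike
]
candidates = [t["id"] for t in tracks if is_abnormal(t)]
# candidates → [2, 3]
```

Only the flagged scenes go on to the expensive manual re-labeling of step B2, which is the point of the heuristic pre-filter.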
  • Step B2: For each predicted abnormal perception scene, obtain the manual labeling result of the obstacle perception result in that scene.
  • In this step, since the target vehicle needs to obtain information about surrounding obstacles during actual operation, such information may include three-dimensional point cloud data of obstacles obtained through radar, or obstacle image data obtained through cameras.
  • Therefore, the perception results obtained in step B1 for the abnormal perception scene can be projected onto the obstacle three-dimensional point cloud data and/or obstacle image data, and the obstacle labeling results can be re-given manually; the re-labeling content includes, but is not limited to, re-segmentation of over-segmented and under-segmented obstacles, so as to give correct obstacle perception results (such as the correct obstacle category and real three-dimensional state).
  • This function can be realized by the perceptual labeling module 25 shown in FIG. 2 .
  • Step B3: Obtain the perceived true values of the target vehicle for the surrounding obstacles at the current moment from the obtained manual labeling results, and based on the acquired perceived true values, obtain the perception error of the target vehicle in each abnormal perception scene.
  • In this step, for each abnormal perception scene, the perception error is calculated based on the perceived-true-value labeling results and the erroneous online perception results (covering the category, speed, and segmentation errors of the online perception system), so that the distribution estimation results of the perception error across the target vehicle's abnormal perception scenes can be obtained.
  • This function can be realized by the abnormality awareness distribution module 26 shown in FIG. 2 .
  • Referring to FIG. 3, a schematic diagram of the composition of a predictive control device provided by an embodiment of the present application is shown.
  • The device includes: a positioning prediction unit 310, configured to predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount of the target vehicle at the current simulation moment; a perception prediction unit 320, configured to predict the perceived true values of the target vehicle for the surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and a control prediction unit 330, configured to predict the simulated control amount of the target vehicle at the next simulation moment based on the distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates and perceived true values of the target vehicle at the next simulation moment.
  • In one implementation, the positioning prediction unit 310 is specifically configured to predict the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment, the dynamic model of the target vehicle, and the simulated control amount at the current simulation moment.
  • the dynamic model is obtained by learning real operating data of the target vehicle.
  • the dynamic model is a non-parametric dynamic model.
  • In one implementation, the perception prediction unit 320 is specifically configured to: predict the probability distribution of the multi-modal running trajectories of the surrounding obstacles over a future period based on the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment; and obtain the perceived true values of the target vehicle for the surrounding obstacles at the next simulation moment based on the predicted probability distribution, the degree of danger of the surrounding obstacles, and/or the driving trajectory of the target vehicle.
  • In one implementation, the device further includes a first estimation unit, configured to generate the distribution estimation result of the target vehicle's positioning error in the following manner: obtain the online positioning result of the target vehicle from its real operating data, and predict abnormal positioning scenes based on the online positioning result and positioning auxiliary information, where the positioning auxiliary information is auxiliary information for determining abnormal positioning; and for each predicted abnormal positioning scene, determine the true positioning value of the target vehicle in that scene, and obtain the positioning error of the target vehicle in that scene based on the determined true positioning value.
  • In one implementation, the device further includes a second estimation unit, configured to generate the distribution estimation result of the target vehicle's perception error in the following manner: obtain the target vehicle's online perception results for the surrounding obstacles from its real operating data, and predict abnormal perception scenes based on the online perception results; for each predicted abnormal perception scene, obtain the manual labeling result of the obstacle perception result in that scene; and from the obtained manual labeling results, obtain the perceived true values of the target vehicle for the surrounding obstacles at the current moment, and based on the acquired perceived true values, obtain the perception error in each abnormal perception scene.
  • Since the device embodiment basically corresponds to the method embodiment, for related parts, reference may be made to the description of the method embodiment.
  • The device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this application, which can be understood and implemented by those skilled in the art without creative effort.
  • the embodiment of the present application also provides an electronic device.
  • A structural diagram of the electronic device is shown in FIG. 4; the device includes a processor 4001 and a memory 4002 that are electrically connected. The memory 4002 is configured to store at least one computer-executable instruction, and the processor 4001 is configured to execute the at least one computer-executable instruction, thereby performing the predictive control method provided by any embodiment or any optional implementation of the present application.
  • The processor 4001 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capability, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
  • The embodiment of the present application also provides another computer-readable storage medium storing a computer program; when executed by a processor, the computer program implements the steps of the predictive control method provided by any embodiment or any optional implementation of the present application.
  • The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards, or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method, apparatus, device, and computer-readable storage medium for predictive control. The method includes: predicting the real coordinates of the target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount applied to the target vehicle at the current simulation moment (step S101); predicting the perceived true values of the target vehicle for the surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles (step S102); and predicting the simulated control amount for the target vehicle at the next simulation moment based on the distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates and perceived true values of the target vehicle at the next simulation moment (step S103).

Description

Method, Apparatus, Device, and Computer-Readable Storage Medium for Predictive Control

Technical Field

The present application relates to the field of control technologies, and in particular to methods, apparatuses, devices, and computer-readable storage media for predictive control.

Background

In the field of unmanned driving, the simulation system is a very important offline algorithm module. Current simulation systems are mainly divided into world-based simulation and log-based simulation. For log-based simulation, a 30-second to 1-minute segment scene is generally constructed from online data or scene editing to serve as a virtual unmanned vehicle environment.

The importance of log-based simulation is mainly reflected in the following three aspects: 1. supporting the software modules that verify planning, control, and other algorithms in designing and upgrading new algorithms; 2. assisting in fixing takeover incidents that arise during unmanned vehicle road tests, as well as scenarios that need optimization; 3. providing extreme boundary scenarios to verify the safety of unmanned vehicle algorithms.
At present, there are many open-source log-based simulation systems, such as Apollo's simulation system, which provides playback of recorded online logs and simulation based on scene editing, as well as simulation systems such as Carla, AirSim, and CommonRoad.

Owing to the generality required of such open-source software, they have defects in the following three aspects: 1. Most simulation software assumes a perfect positioning environment, perfect perception results, and perfect vehicle low-level feedback, which causes the above software modules supporting planning, control, and other algorithms to underestimate the impact that uncertain observations have on the algorithms in actual unmanned vehicles, thereby distorting the evaluation of the actual performance of these software modules.

2. Some simulation software lacks the result-level noise present in actual perception data (such as over-segmentation and broken frames), making it impossible to truly and effectively realize prediction and control in the loop.

3. Traffic participants in the simulation are modeled with a simple IDM model, which lacks realism, so the performance of interaction algorithms is distorted in the simulation system.

Because of these defects, the predictive control achieved with existing simulation systems is not ideal.
Summary of the Invention

In view of this, the present application provides a method, apparatus, device, and computer-readable storage medium for predictive control, which can improve the predictive control effect of a simulation system.

Specifically, the present application is realized through the following technical solutions:

A predictive control method, comprising: predicting the real coordinates of a target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount applied to the target vehicle at the current simulation moment; predicting the perceived true values of the target vehicle for surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and predicting the simulated control amount for the target vehicle at the next simulation moment based on distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates and perceived true values of the target vehicle at the next simulation moment.

A predictive control apparatus, comprising: a positioning prediction unit, configured to predict the real coordinates of a target vehicle at the next simulation moment according to the real coordinates of the target vehicle at the current simulation moment and the simulated control amount applied to the target vehicle at the current simulation moment; a perception prediction unit, configured to predict the perceived true values of the target vehicle for surrounding obstacles at the next simulation moment according to the target vehicle's perceived true values of the surrounding obstacles at the current simulation moment and the multi-modal future trajectories obtained by predicting the surrounding obstacles; and a control prediction unit, configured to predict the simulated control amount for the target vehicle at the next simulation moment based on distribution estimation results of the target vehicle's positioning error and perception error, together with the real coordinates and perceived true values of the target vehicle at the next simulation moment.

An electronic device, comprising a processor and a memory; the memory is configured to store a computer program, and the processor is configured to execute the above predictive control method by invoking the computer program.

A computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the above predictive control method is implemented.

In the technical solutions provided by the present application, reasonable multi-modal running trajectories are constructed for the surrounding obstacles in the simulation scene, and observation uncertainty (that is, positioning errors and perception errors) is introduced according to the actual positioning of the target vehicle and the distribution of perception results. This effectively improves the authenticity of the simulation scene, so that the performance of actual online scenarios can be better simulated, thereby providing a verification environment for the development and improvement of real unmanned vehicle planning and control systems.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a predictive control method shown in the present application;

FIG. 2 is a block diagram of the components for generating noise data and a vehicle dynamics model shown in the present application;

FIG. 3 is a schematic diagram of the composition of a predictive control device shown in the present application;

FIG. 4 is a schematic structural diagram of an electronic device shown in the present application.
Detailed Description of the Embodiments

Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.

The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, and so on may be used in the present application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
需要说明的是,本申请实施例提供了一种预测控制方法,该方法由一种规划仿真系统实现,该规划仿真系统具体是一种完整的支持不确定性观测的预测控制在环的规划仿真系统,能较好地模拟线上实际各场景的表现,为实际的无人车规划控制系统的开发和改进提供验证环境。
参见图1,为本申请实施例提供的一种预测控制方法的流程示意图,该方法包括以下步骤S101-S103:
S101:根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对目标车辆的模拟控制量,预测目标车辆在下一模拟时刻的真实坐标。
在本申请实施例中,目标车辆可以是任一种类型的无人车,比如清扫车、铰链车、洗地车、乘用车等。
在上述提及的规划仿真系统中,该规划仿真系统具有实现不确定性观测仿真算法的内核,算法内核的内部维护了一个基于真实观测的自车全局坐标系St,同时,还维护了车辆场景中的交通参与者(即目标车辆的周围障碍物)的感知真值Xt。基于此,可以从内核维护的全局坐标系St中,获取目标车辆在当前模拟时刻的真实坐标St。
此外,规划仿真系统可以运行预测规划控制算法,通过该预测规划控制算法,可以输出当前模拟时刻对目标车辆的模拟控制量Ut,这样,算法内核便可以基于目标车辆在当前模拟时刻的真实坐标St、以及当前模拟时刻对目标车辆的模拟控制量Ut,得到目标车辆在下一模拟时刻的真实坐标St+1。
在本申请实施例的一种实现方式中,S101中的“根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对目标车辆的模拟控制量,预测目标车辆在下一模拟时刻的真实坐标”,可以包括:根据目标车辆在当前模拟时刻的真实坐标、目标车辆的动力学模型、以及当前模拟时刻对目标车辆的模拟控制量,预测目标车辆在下一模拟时刻的真实坐标。
在本实现方式中,可以预先建立一个关于目标车辆的动力学模型,使算法内核可以基于目标车辆在当前模拟时刻的真实坐标St、目标车辆的动力学模型、以及当前模拟时刻对目标车辆的模拟控制量Ut,得到目标车辆在下一模拟时刻的真实坐标St+1。
其中,关于目标车辆的动力学模型,可以是通过对目标车辆的真实运行数据进行学习得到的,而且,该动力学模型可以是非参数动力学模型。
具体来讲,需要预先获取目标车辆的真实运行数据(也即线上运行数据),该真实运行数据中可以包括目标车辆产生的定位、感知、以及车辆底层反馈的运行数据等,该运行数据包括但不限于自动驾驶运营过程中模块产生、以及在开启传感器和软件模块下人工驾驶产生的数据。当得到目标车辆的真实运行数据后,可以基于该真实运行数据,通过机器学习模型进行车辆动力学模型的估计,具体可以通过获取不同场景下对于目标车辆的横向纵向观测以及车辆控制输入,经过深度学习算法,得到目标车辆的动力学模型。
参见图2所示的生成噪声数据和车辆动力学模型的组成框图,可以由图2所示的数据获取模块21、车辆动力系统分析模块27和动力模型建立模块28,实现目标车辆的动力学模型的构建。
需要说明的是,在现有的一部分仿真软件中,使用的是简单的事先标定的动力学模型,比如针对车辆底层数据的标定,这使得预测和控制在环无法真正有效实现。而本申请实施例通过非参数动力学模型,基于大量行车数据流式学习车辆动力系统,可以提供更好的动力系统,相对传统人工标定的动力学模型,其应对各场景的车辆状态的预测精度更高。
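作为示意,下面给出上述动力学模型单步状态预测的一个最简草图:此处以运动学自行车模型作为占位实现(函数名、参数以及时间步长dt、轴距等取值均为本文示例假设,实际系统中应替换为基于真实运行数据学习得到的非参数动力学模型):

```python
import math

def predict_next_state(state, control, dt=0.1, wheelbase=2.5):
    """根据当前模拟时刻的真实状态 S_t 与模拟控制量 U_t,预测下一模拟时刻的状态 S_{t+1}。

    state:   (x, y, yaw, v) —— 自车全局坐标、航向角与速度
    control: (accel, steer) —— 纵向加速度与前轮转角
    这里采用运动学自行车模型作为占位,仅为示例。
    """
    x, y, yaw, v = state
    accel, steer = control
    # 按运动学自行车模型推进一个时间片
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt
    v += accel * dt
    return (x, y, yaw, v)
```

例如,自车以10m/s沿x轴匀速直行、控制量为零时,一个时间片(0.1s)后x坐标前进1米,其余状态不变。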
S102:根据目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测目标车辆在下一模拟时刻对周围障碍物的感知真值。
其中,周围障碍物是指仿真场景中的目标车辆周围的交通参与者,本申请实施例不对周围障碍物的类型进行限定,其可以是车辆、行人等;本申请实施例也不对感知真值所包含的信息进行限定,其可以包括障碍物的类型、外形信息以及运动状态等。
在上述提及的规划仿真系统中,该规划仿真系统的算法内核,不但维护了上述的自车全局坐标系St,同时还维护了仿真场景中周围障碍物在当前模拟时刻的感知真值Xt。此外,还需要预测出周围障碍物的多模态未来运行轨迹的相关信息,这样,算法内核便可以基于目标车辆在当前模拟时刻对周围障碍物的感知真值Xt、以及周围障碍物的多模态未来运行轨迹的相关信息,估计出目标车辆在下一模拟时刻对周围障碍物的感知真值Xt+1。
在本申请实施例的一种实现方式中,S102中的“根据目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测目标车辆在下一模拟时刻对周围障碍物的感知真值”,具体可以包括:根据目标车辆在当前模拟时刻对周围障碍物的感知真值,预测周围障碍物在未来一段时间的多模态运行轨迹的概率分布;基于预测得到的概率分布、周围障碍物的危险程度和/或目标车辆的行驶轨迹,得到目标车辆在下一模拟时刻对周围障碍物的感知真值。
在本实现方式中,对于目标车辆周围的每一障碍物来讲,需要预先获取目标车辆在当前模拟时刻对该障碍物的感知真值Xt,该感知真值Xt可以是人工标注的结果,需要说明的是,该感知真值Xt的生成方式将在后续步骤S103中进行介绍。
当获取到感知真值Xt后,可以将该感知真值Xt输出到一个预先构建的多模态预测系统中,该多模态预测系统会同时考虑不可行驶区域、交通信号灯、车道线等,通过深度学习算法(包括但不限于基于栅格图的CNN结构神经网络和/或基于图神经网络的深度学习算法),输出该障碍物在未来一段时间的多模态运行轨迹,并计算出该障碍物的未来概率分布,以近似表征该障碍物的未来可能分布。此外,还可以进一步考虑仿真场景中各个障碍物的危险程度和/或目标车辆的行驶轨迹,其中,障碍物的危险程度包括但不限于障碍物的侵略性(比如车辆的侵略性较强、行人的侵略性较弱)、遵守交通规则与否(比如车辆是否严格沿车道线行驶、行人横穿马路等)。
然后,基于预测得到的周围障碍物的未来多模态运行轨迹的概率分布、周围障碍物的危险程度和/或目标车辆的行驶轨迹,通过蒙特卡罗等模拟算法,得到目标车辆在下一模拟时刻对周围障碍物的感知真值Xt+1。
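上述“按多模态运行轨迹的概率分布,通过蒙特卡罗等模拟算法得到下一模拟时刻障碍物感知真值”的过程,可以用如下草图示意(模态的数据组织方式与函数名均为本文示例假设,并非专利原文的实现):

```python
import random

def sample_next_obstacle_state(modes, rng=None):
    """按多模态运行轨迹的概率分布进行蒙特卡罗采样,得到下一模拟时刻的感知真值 X_{t+1}。

    modes: [(prob, trajectory), ...],其中 trajectory 为该模态下未来各时刻的状态列表;
    采样出一个模态后,取其第一个时刻的状态作为 X_{t+1}。
    """
    rng = rng or random.Random()
    probs = [p for p, _ in modes]
    trajs = [t for _, t in modes]
    # 按模态概率加权随机选取一条未来运行轨迹
    chosen = rng.choices(trajs, weights=probs, k=1)[0]
    return chosen[0]
```

实际系统中,模态概率来自前文所述的深度学习预测系统,采样时还可结合障碍物危险程度、目标车辆行驶轨迹等因素对权重进行调整。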
可见,本申请实施例在仿真场景中为周围障碍物构建了合理的多模态运行轨迹的预测概率,并在仿真场景中提供了目标车辆与周围障碍物之间的合理的交互场景,这样,可以有效提升仿真场景的真实性,从而能较好地模拟线上实际各场景的表现,进而为实际的无人车规划控制系统的开发和改进提供验证环境。
S103:基于目标车辆的定位误差和感知误差的分布估计结果,以及目标车辆在下一模拟时刻的真实坐标和感知真值,预测下一模拟时刻对目标车辆的模拟控制量。
在本申请实施例中,需要预先获取目标车辆的定位误差和感知误差的分布估计结果,具体可以基于上述提及的目标车辆的真实运行数据,对定位误差(也即定位噪声)以及感知误差(也即感知噪声)进行分布估计。这样,在上述提及的规划仿真系统中,该规划仿真系统的算法内核,便可以基于目标车辆的定位误差和感知误差的分布估计结果、以及目标车辆在下一模拟时刻的真实坐标St+1和感知真值Xt+1,预测下一模拟时刻对目标车辆的模拟控制量Ut+1。
具体地,可以基于目标车辆的定位误差和感知误差的分布估计结果,对目标车辆在下一模拟时刻的真实坐标St+1和感知真值Xt+1进行转换,该转换操作可以分别由定位噪声系统(包括图2所示的模块22和模块23)和感知噪声系统(包括图2所示的模块24、模块25和模块26)实现,通过该转换操作,得到引入不确定性观测的NSt+1和NXt+1。即,使真实坐标St+1带有坐标误差ΔSt+1,作为NSt+1;使感知真值Xt+1带有感知误差ΔXt+1,作为NXt+1,其中,感知误差包括但不限于原始的过分割、跟踪断帧、速度误差等。最后,规划仿真系统通过运行的预测规划控制算法,基于引入不确定性观测的NSt+1和NXt+1,输出下一模拟时刻对目标车辆的模拟控制量Ut+1。
通过上述方式,循环时间片t从0到场景结束的时间片n,最终实现了对场景的基于不确定观测的预测规划控制仿真。可见,根据目标车辆的实际定位以及感知结果分布情况引入观测不确定性,可以有效提升仿真场景的真实性,从而能较好地模拟线上实际各场景的表现,进而为实际的无人车规划控制系统的开发和改进提供验证环境。
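上述由真值到带不确定性观测的转换,可以示意如下(此处以零均值高斯分布近似定位误差ΔS与感知误差ΔX的分布,仅为示例假设;实际的误差分布应来自步骤A1-A2、B1-B3得到的分布估计结果):

```python
import random

def add_observation_noise(true_coord, true_perception, loc_sigma, perc_sigma, rng):
    """由定位误差和感知误差的分布采样 ΔS_{t+1}、ΔX_{t+1},
    将真值 S_{t+1}、X_{t+1} 转换为带不确定性的观测 NS_{t+1}、NX_{t+1}。"""
    # 定位噪声系统:为真实坐标各分量叠加采样得到的定位误差
    noisy_coord = tuple(c + rng.gauss(0.0, loc_sigma) for c in true_coord)
    # 感知噪声系统:为感知真值各分量叠加采样得到的感知误差
    noisy_perception = tuple(p + rng.gauss(0.0, perc_sigma) for p in true_perception)
    return noisy_coord, noisy_perception
```

仿真主循环即按时间片t从0到n,依次执行“真值推进→注入噪声→由带噪观测输出下一控制量”。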
其中,在本申请实施例的一种实现方式中,可以按照下述方式生成目标车辆的定位误差的分布估计结果,包括以下步骤A1-A2:步骤A1:从目标车辆的真实运行数据中,获取目标车辆的线上定位结果,并基于线上定位结果以及定位辅助信息,对异常定位场景进行预测,其中,定位辅助信息为用于判定定位异常的辅助信息。
在本申请实施例中,可以从目标车辆的真实运行数据中,获取目标车辆的线上定位结果,然后,以线上定位结果以及辅助判定定位异常的辅助信息为输入,输出可能异常的定位场景。其中,辅助信息包括但不限于在线检测的标志性物体(比如路沿等)、INS等基于运动学的航线推演、基于时序特征的机器学习定位状态分类模块等。该功能可以由图2所示的定位异常分析模块22实现。
步骤A2:对于预测出的每一异常定位场景,确定目标车辆在该异常定位场景下的定位真值,并基于确定的定位真值,得到目标车辆在该异常定位场景下的定位误差。
在本申请实施例中,当得到每一异常定位场景后,可以利用一套计算量大、实时性差但精度更高的定位算法,重新离线计算出该异常定位场景下的定位真值,从而可以基于计算出的定位真值以及线上的错误定位结果,计算出该异常定位场景下的定位误差,这样,便可以得到目标车辆的各个异常定位场景下的定位误差的分布估计结果。该功能可以由图2所示的异常定位分布模块23实现。
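由离线重算的定位真值与线上错误定位结果计算定位误差分布估计结果的过程,可以用高斯近似示意如下(以均值/标准差刻画误差分布仅为本文示例假设):

```python
import math

def estimate_error_distribution(offline_truth, online_results):
    """由离线重算的定位真值序列与线上定位结果序列,
    估计定位误差的分布参数(此处以高斯分布的均值与标准差近似)。"""
    errors = [o - t for o, t in zip(online_results, offline_truth)]
    n = len(errors)
    mean = sum(errors) / n
    # 总体方差;实际系统中也可按场景类型分别统计
    var = sum((e - mean) ** 2 for e in errors) / n
    return mean, math.sqrt(var)
```

感知误差的分布估计(后文步骤B1-B3)也可采用同样的汇总方式,只是误差来源换为感知真值标注结果与线上错误感知结果之差。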
其中,在本申请实施例的一种实现方式中,可以按照下述方式生成目标车辆的感知误差的分布估计结果,包括以下步骤B1-B3:步骤B1:从目标车辆的真实运行数据中,获取目标车辆对周围障碍物的线上感知结果,并基于线上感知结果,对异常感知场景进行预测。
在本申请实施例中,可以从目标车辆的真实运行数据中,获取目标车辆对周围障碍物的线上感知结果,然后,基于线上感知结果,对障碍物的生命周期、障碍物凸包距离、障碍物的速度变化情况、障碍物速度与质心插值得到的速度差等,进行启发式分析,得到候选的可能出错的复杂感知场景,即可能的异常感知场景。该功能可以由图2所示的感知异常分析模块24实现。
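以障碍物速度变化情况的启发式分析为例,下面给出一个筛选疑似异常感知帧的最简草图(阈值取值与函数名均为本文示例假设):

```python
def find_anomalous_frames(speeds, max_jump=3.0):
    """对某一障碍物的逐帧速度序列做启发式分析:
    相邻帧速度突变超过阈值 max_jump(m/s)的帧,标记为疑似异常感知场景。"""
    return [i for i in range(1, len(speeds))
            if abs(speeds[i] - speeds[i - 1]) > max_jump]
```

实际系统中,还会结合障碍物生命周期、凸包距离、速度与质心插值速度差等多种启发式指标共同判定。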
步骤B2:对于预测出的每一异常感知场景,获取该异常感知场景下的障碍物感知结果的人工标注结果。
在本申请实施例中,由于目标车辆在实际运行时,需要获取其周围障碍物的信息,这些信息可以包括通过激光雷达得到的障碍物三维点云数据、也可以包括通过相机得到的障碍物图像数据。基于此,可以将上述步骤B1得到的异常感知场景下的感知结果,投影在障碍物三维点云数据和/或障碍物图像数据上,由人工重新给出障碍物标注结果,重新标注的内容包括但不限于对于过分割和欠分割障碍物等进行重新分割,从而给出正确的障碍物感知结果(比如正确的障碍物类别、真实3D状态等)。该功能可以由图2所示的感知标注模块25实现。
步骤B3:从获取的人工标注结果中,获取目标车辆在当前时刻对周围障碍物的感知真值,并基于获取的感知真值,得到目标车辆在该异常感知场景下的感知误差。
在本申请实施例中,当得到感知真值标注结果、以及线上感知系统对于感知结果的错误类别、速度和分割等之后,对于每一异常感知场景,可以根据感知真值标注结果以及线上错误感知结果,计算出该异常感知场景下的感知误差,这样,便可以得到目标车辆的各个异常感知场景下的感知误差的分布估计结果。该功能可以由图2所示的异常感知分布模块26实现。
在以上本申请实施例提供的预测控制方法中,不但在仿真场景中为周围障碍物构建了合理的多模态运行轨迹,还根据目标车辆的实际定位以及感知结果的分布情况引入观测不确定性,即引入定位误差和感知误差,有效提升了仿真场景的真实性,从而能较好地模拟线上实际各场景的表现,进而为实际的无人车规划控制系统的开发和改进提供验证环境。
参见图3,为本申请实施例提供的一种预测控制装置的组成示意图,该装置包括:定位预测单元310,用于根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标;感知预测单元320,用于根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测所述目标车辆在下一模拟时刻对周围障碍物的感知真值;控制预测单元330,用于基于所述目标车辆的定位误差和感知误差的分布估计结果,以及所述目标车辆在下一模拟时刻的真实坐标和感知真值,预测下一模拟时刻对所述目标车辆的模拟控制量。
在本申请实施例的一种实现方式中,定位预测单元310,具体用于:根据目标车辆在当前模拟时刻的真实坐标、所述目标车辆的动力学模型、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标。
在本申请实施例的一种实现方式中,所述动力学模型是通过对所述目标车辆的真实运行数据进行学习得到的。
在本申请实施例的一种实现方式中,所述动力学模型是非参数动力学模型。
在本申请实施例的一种实现方式中,感知预测单元320,具体用于:根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值,预测周围障碍物在未来一段时间的多模态运行轨迹的概率分布;基于预测得到的概率分布、周围障碍物的危险程度和/或所述目标车辆的行驶轨迹,得到所述目标车辆在下一模拟时刻对周围障碍物的感知真值。
在本申请实施例的一种实现方式中,所述装置还包括第一估计单元,用于按照下述方式生成所述目标车辆的定位误差的分布估计结果:从所述目标车辆的真实运行数据中,获取所述目标车辆的线上定位结果,并基于所述线上定位结果以及定位辅助信息,对异常定位场景进行预测,所述定位辅助信息为用于判定定位异常的辅助信息;对于预测出的每一异常定位场景,确定所述目标车辆在该异常定位场景下的定位真值,并基于确定的定位真值,得到所述目标车辆在该异常定位场景下的定位误差。
在本申请实施例的一种实现方式中,所述装置还包括第二估计单元,用于按照下述方式生成所述目标车辆的感知误差的分布估计结果:从所述目标车辆的真实运行数据中,获取所述目标车辆对周围障碍物的线上感知结果,并基于所述线上感知结果,对异常感知场景进行预测;对于预测出的每一异常感知场景,获取该异常感知场景下的障碍物感知结果的人工标注结果;从获取的人工标注结果中,获取所述目标车辆在当前时刻对周围障碍物的感知真值,并基于获取的感知真值,得到所述目标车辆在该异常感知场景下的感知误差。
上述装置中各个单元的功能和作用的实现过程具体详见上述方法中对应步骤的实现过程,在此不再赘述。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本申请方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
本申请实施例还提供了一种电子设备,该电子设备的结构示意图如图4所示,该电子设备4000包括至少一个处理器4001、存储器4002和总线4003,至少一个处理器4001均与存储器4002电连接;存储器4002被配置用于存储有至少一个计算机可执行指令,处理器4001被配置用于执行该至少一个计算机可执行指令,从而执行如本申请中任意一个实施例或任意一种可选实施方式提供的任意一种预测控制方法的步骤。
进一步,处理器4001可以是FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其它具有逻辑处理能力的器件,如MCU(Microcontroller Unit,微控制单元)、CPU(Central Processing Unit,中央处理器)。
应用本申请实施例,不但在仿真场景中为周围障碍物构建了合理的多模态运行轨迹,还根据目标车辆的实际定位以及感知结果的分布情况引入观测不确定性,即引入定位误差和感知误差,有效提升了仿真场景的真实性,从而能较好地模拟线上实际各场景的表现,进而为实际的无人车规划控制系统的开发和改进提供验证环境。
本申请实施例还提供了另一种计算机可读存储介质,存储有计算机程序,该计算机程序用于被处理器执行时实现本申请中任意一个实施例或任意一种可选实施方式提供的任意一种预测控制方法的步骤。
本申请实施例提供的计算机可读存储介质包括但不限于任何类型的盘(包括软盘、硬盘、光盘、CD-ROM、和磁光盘)、ROM(Read-Only Memory,只读存储器)、RAM(Random Access Memory,随机存取存储器)、EPROM(Erasable Programmable Read-Only Memory,可擦写可编程只读存储器)、EEPROM(Electrically Erasable Programmable Read-Only Memory,电可擦可编程只读存储器)、闪存、磁性卡片或光学卡片。也就是,可读存储介质包括由设备(例如,计算机)以能够读取的形式存储或传输信息的任何介质。
应用本申请实施例,不但在仿真场景中为周围障碍物构建了合理的多模态运行轨迹,还根据目标车辆的实际定位以及感知结果的分布情况引入观测不确定性,即引入定位误差和感知误差,有效提升了仿真场景的真实性,从而能较好地模拟线上实际各场景的表现,进而为实际的无人车规划控制系统的开发和改进提供验证环境。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (10)

  1. 一种预测控制方法,包括:
    根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标;
    根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测所述目标车辆在下一模拟时刻对周围障碍物的感知真值;
    基于所述目标车辆的定位误差和感知误差的分布估计结果,以及所述目标车辆在下一模拟时刻的真实坐标和感知真值,预测下一模拟时刻对所述目标车辆的模拟控制量。
  2. 根据权利要求1所述的方法,其特征在于,所述根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标,包括:
    根据目标车辆在当前模拟时刻的真实坐标、所述目标车辆的动力学模型、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标。
  3. 根据权利要求2所述的方法,其特征在于,所述动力学模型是通过对所述目标车辆的真实运行数据进行学习得到的。
  4. 根据权利要求2所述的方法,其特征在于,所述动力学模型是非参数动力学模型。
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测所述目标车辆在下一模拟时刻对周围障碍物的感知真值,包括:
    根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值,预测周围障碍物在未来一段时间的多模态运行轨迹的概率分布;
    基于预测得到的概率分布、周围障碍物的危险程度和/或所述目标车辆的行驶轨迹,得到所述目标车辆在下一模拟时刻对周围障碍物的感知真值。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,按照下述方式生成所述目标车辆的定位误差的分布估计结果:
    从所述目标车辆的真实运行数据中,获取所述目标车辆的线上定位结果,并基于所述线上定位结果以及定位辅助信息,对异常定位场景进行预测,所述定位辅助信息为用于判定定位异常的辅助信息;
    对于预测出的每一异常定位场景,确定所述目标车辆在该异常定位场景下的定位真值,并基于确定的定位真值,得到所述目标车辆在该异常定位场景下的定位误差。
  7. 根据权利要求1-5任一项所述的方法,其特征在于,按照下述方式生成所述目标车辆的感知误差的分布估计结果:
    从所述目标车辆的真实运行数据中,获取所述目标车辆对周围障碍物的线上感知结果,并基于所述线上感知结果,对异常感知场景进行预测;
    对于预测出的每一异常感知场景,获取该异常感知场景下的障碍物感知结果的人工标注结果;
    从获取的人工标注结果中,获取所述目标车辆在当前时刻对周围障碍物的感知真值,并基于获取的感知真值,得到所述目标车辆在该异常感知场景下的感知误差。
  8. 一种预测控制装置,其特征在于,包括:
    定位预测单元,用于根据目标车辆在当前模拟时刻的真实坐标、以及当前模拟时刻对所述目标车辆的模拟控制量,预测所述目标车辆在下一模拟时刻的真实坐标;
    感知预测单元,用于根据所述目标车辆在当前模拟时刻对周围障碍物的感知真值、以及对周围障碍物进行预测得到的多模态未来运行轨迹,预测所述目标车辆在下一模拟时刻对周围障碍物的感知真值;
    控制预测单元,用于基于所述目标车辆的定位误差和感知误差的分布估计结果,以及所述目标车辆在下一模拟时刻的真实坐标和感知真值,预测下一模拟时刻对所述目标车辆的模拟控制量。
  9. 一种电子设备,其特征在于,包括:处理器、存储器;
    所述存储器,用于存储计算机程序;
    所述处理器,用于通过调用所述计算机程序,执行如权利要求1-7中任一项所述的预测控制方法。
  10. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现权利要求1-7任一项所述的预测控制方法。
PCT/CN2022/071016 2021-05-27 2022-01-10 预测控制的方法、装置、设备及计算机可读存储介质 WO2022247303A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110586196.7 2021-05-27
CN202110586196.7A CN115236997B (zh) 2021-05-27 2021-05-27 预测控制方法、装置、设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2022247303A1 true WO2022247303A1 (zh) 2022-12-01

Family

ID=83666466

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071016 WO2022247303A1 (zh) 2021-05-27 2022-01-10 预测控制的方法、装置、设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN115236997B (zh)
WO (1) WO2022247303A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116991157A (zh) * 2023-04-14 2023-11-03 北京百度网讯科技有限公司 具备人类专家驾驶能力的自动驾驶模型、训练方法和车辆
CN117113722A (zh) * 2023-09-20 2023-11-24 广东省水利水电第三工程局有限公司 一种大型混泥土模具吊装bim仿真方法及系统

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005122272A (ja) * 2003-10-14 2005-05-12 Toyota Motor Corp 車輌の走行経路予測制御装置
CN102358287A (zh) * 2011-09-05 2012-02-22 北京航空航天大学 一种用于车辆自动驾驶机器人的轨迹跟踪控制方法
JP2014118138A (ja) * 2013-07-29 2014-06-30 Daihatsu Motor Co Ltd 運転支援装置
CN108227709A (zh) * 2017-12-29 2018-06-29 深圳地平线机器人科技有限公司 用于控制车辆的自动驾驶的方法和装置
CN109085840A (zh) * 2018-09-21 2018-12-25 大连维德智能视觉技术创新中心有限公司 一种基于双目视觉的车辆导航控制系统及控制方法
CN109572694A (zh) * 2018-11-07 2019-04-05 同济大学 一种考虑不确定性的自动驾驶风险评估方法
CN109572693A (zh) * 2019-01-24 2019-04-05 湖北亿咖通科技有限公司 车辆避障辅助方法、系统及车辆
CN109866752A (zh) * 2019-03-29 2019-06-11 合肥工业大学 基于预测控制的双模式并行车辆轨迹跟踪行驶系统及方法
CN111260950A (zh) * 2020-01-17 2020-06-09 清华大学 一种基于轨迹预测的轨迹跟踪方法、介质和车载设备
CN112415995A (zh) * 2020-09-22 2021-02-26 重庆智行者信息科技有限公司 基于实时安全边界的规划控制方法
CN112578683A (zh) * 2020-10-16 2021-03-30 襄阳达安汽车检测中心有限公司 一种优化的汽车辅助驾驶控制器在环仿真测试方法
CN112666975A (zh) * 2020-12-18 2021-04-16 中山大学 一种基于预测控制和屏障函数的无人机安全轨迹跟踪方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN208717823U (zh) * 2018-04-28 2019-04-09 上海仙途智能科技有限公司 无人清扫系统
CN108502053A (zh) * 2018-06-13 2018-09-07 安徽新华学院 一种轮式机器人平台精确控制方法
EP3653459B1 (en) * 2018-11-15 2021-07-14 Volvo Car Corporation Vehicle safe stop
CN109598066B (zh) * 2018-12-05 2023-08-08 百度在线网络技术(北京)有限公司 预测模块的效果评估方法、装置、设备和存储介质
CN111459995B (zh) * 2020-03-11 2021-11-23 南京航空航天大学 一种基于驾驶数据的多模态车速预测方法
CN111505965B (zh) * 2020-06-17 2020-09-29 深圳裹动智驾科技有限公司 自动驾驶车辆仿真测试的方法、装置、计算机设备及存储介质


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116991157A (zh) * 2023-04-14 2023-11-03 北京百度网讯科技有限公司 具备人类专家驾驶能力的自动驾驶模型、训练方法和车辆
CN117113722A (zh) * 2023-09-20 2023-11-24 广东省水利水电第三工程局有限公司 一种大型混泥土模具吊装bim仿真方法及系统
CN117113722B (zh) * 2023-09-20 2024-03-15 广东省水利水电第三工程局有限公司 一种大型混泥土模具吊装bim仿真方法及系统

Also Published As

Publication number Publication date
CN115236997A (zh) 2022-10-25
CN115236997B (zh) 2023-08-25

Similar Documents

Publication Publication Date Title
US11458991B2 (en) Systems and methods for optimizing trajectory planner based on human driving behaviors
US11487988B2 (en) Augmenting real sensor recordings with simulated sensor data
US20200160598A1 (en) Systems and Methods for Generating Synthetic Light Detection and Ranging Data via Machine Learning
JP2021534484A (ja) 手続き的な世界の生成
US20190065637A1 (en) Augmenting Real Sensor Recordings With Simulated Sensor Data
JP2020125102A (ja) ライダ、レーダ及びカメラセンサのデータを使用する強化学習に基づく自律走行時の最適化されたリソース割当てのための方法及び装置
CN108509820B (zh) 障碍物分割方法及装置、计算机设备及可读介质
US12099351B2 (en) Operational testing of autonomous vehicles
CN105793730A (zh) 对象运动的基于激光雷达的分类
CN109558854B (zh) 障碍物感知方法、装置、电子设备及存储介质
WO2022247303A1 (zh) 预测控制的方法、装置、设备及计算机可读存储介质
Danescu et al. Particle grid tracking system stereovision based obstacle perception in driving environments
JP2024511043A (ja) モデル注入を用いた点群データ拡張のためのシステム、および方法
US20230080540A1 (en) Lidar simulation system
CN109376664A (zh) 机器学习训练方法、装置、服务器和介质
Roos et al. A framework for simulative evaluation and optimization of point cloud-based automotive sensor sets
Agafonov et al. 3D objects detection in an autonomous car driving problem
JP2022081613A (ja) 自動運転特徴の特定方法、装置、設備、媒体及びコンピュータプログラム
US20220156517A1 (en) Method for Generating Training Data for a Recognition Model for Recognizing Objects in Sensor Data from a Surroundings Sensor System of a Vehicle, Method for Generating a Recognition Model of this kind, and Method for Controlling an Actuator System of a Vehicle
CN114966736A (zh) 一种基于点云数据进行目标速度预测的处理方法
US20230278589A1 (en) Autonomous driving sensor simulation
US12106528B2 (en) Generating scene flow labels for point clouds using object labels
CN114663503B (zh) 从图像进行三维位置预测
US11644331B2 (en) Probe data generating system for simulator
CN114663879A (zh) 目标检测方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810042

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22810042

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 22810042

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.05.2024)