
WO2024045178A1 - Sensing method, device and system - Google Patents

Sensing method, device and system

Info

Publication number
WO2024045178A1
WO2024045178A1 (PCT/CN2022/116830)
Authority
WO
WIPO (PCT)
Prior art keywords
area
driving device
perception
intelligent driving
information
Prior art date
Application number
PCT/CN2022/116830
Other languages
English (en)
French (fr)
Inventor
胡伟辰
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2022/116830
Publication of WO2024045178A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/16 Anti-collision systems

Definitions

  • the present application relates to the field of autonomous driving, and more specifically, to sensing methods, devices and systems.
  • Autonomous driving technology has gradually become a research hotspot.
  • Road conditions differ between areas; for example, wild animals rarely appear on urban roads but may appear on mountainous roads.
  • As a result, the sensing capability of the same perception algorithm may differ significantly between areas. This causes the perception system to exhibit biases when the vehicle drives on unfamiliar roads (for example, false detections or missed detections in obstacle recognition), which reduces the safety of the autonomous driving system.
  • This application provides a sensing method, device and system that can reduce the perception deviation of intelligent driving equipment on roads in different areas, improve the perception accuracy of intelligent driving equipment on characteristic obstacles in different areas, and thereby improve driving safety.
  • the intelligent driving equipment involved in this application may include road vehicles, water vehicles, air vehicles, industrial equipment, agricultural equipment, or entertainment equipment, etc.
  • The intelligent driving device can be a vehicle in the broad sense, such as a means of transportation (e.g., commercial vehicle, passenger car, motorcycle, flying car, or train), an industrial vehicle (e.g., forklift, trailer, or tractor), an engineering vehicle (e.g., excavator, bulldozer, or crane), agricultural equipment (e.g., lawn mower or harvester), amusement equipment, or a toy vehicle.
  • This application does not specifically limit the types of vehicles.
  • the intelligent driving device can be a means of transportation such as an airplane or a ship.
  • The first aspect provides a sensing method, which can be executed by an intelligent driving device; or by a chip or circuit used in an intelligent driving device; or, when the intelligent driving device is a vehicle, by the vehicle's mobile data center (MDC) or by an electronic control unit (ECU) of the vehicle, such as an on-board unit (OBU) or a telematics box (T-Box). This application does not limit this.
  • the method includes: obtaining a first feature model corresponding to a first area that the intelligent driving device needs to pass through.
  • The first feature model is one of a plurality of feature models, and different feature models among the plurality are used to perceive the characteristic obstacles of different areas; the method further includes sensing a first set of obstacles in the first area according to the first feature model.
  • the first feature model is obtained when it is determined that the intelligent driving device is about to arrive or has arrived at the first area.
  • When the intelligent driving device is about to drive, or is already driving, in the first area, obtaining the first feature model corresponding to the first area allows the characteristic obstacles of the first area to be identified, which helps improve the device's perception accuracy in that area.
  • When the intelligent driving device travels to different areas, it can sense the different characteristic obstacles corresponding to each area by obtaining the feature model for that area. This reduces missed detections and false detections caused by insufficient perception capability of the perception algorithm, thereby improving driving safety.
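  • As an illustration of the idea above (a minimal sketch, not the application's implementation), the following Python code assumes a hypothetical per-area registry that maps an area identifier to its feature model and treats a feature model as a callable over raw sensor data; the names Obstacle, FeatureModel, get_feature_model, and perceive_obstacles are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Obstacle:
    kind: str   # e.g. "wild_animal", "pedestrian" (illustrative labels)
    x: float    # position relative to the device, in meters
    y: float

# A "feature model" is sketched as any callable that turns raw sensor data
# into the obstacles it can recognize for its area.
FeatureModel = Callable[[dict], List[Obstacle]]

# Hypothetical registry: area id -> feature model trained on that area's data.
AREA_MODELS: Dict[str, FeatureModel] = {}

def get_feature_model(area_id: str) -> FeatureModel:
    """Return the first feature model for the area the device is about to enter.

    In the application this model may come from a roadside device or from
    locally stored history; here it is simply looked up in a registry.
    """
    return AREA_MODELS[area_id]

def perceive_obstacles(area_id: str, sensor_data: dict) -> List[Obstacle]:
    """Sense the first obstacle set of the area using that area's feature model."""
    model = get_feature_model(area_id)
    return model(sensor_data)
```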
  • In some implementations, the method further includes: sensing a second set of obstacles in the first area according to a second feature model of the intelligent driving device; within a first preset range of the intelligent driving device, controlling the intelligent driving device to travel according to the union of the first obstacle set and the second obstacle set; outside the first preset range and within a second preset range of the intelligent driving device, controlling the intelligent driving device to travel according to the first obstacle set; and outside the second preset range, controlling the intelligent driving device to travel according to a third obstacle set, where the third obstacle set includes the obstacles common to the first obstacle set and the second obstacle set.
  • the first feature model is trained on the data of the first area.
  • The data of the first area includes information on the characteristic obstacles of the first area; therefore, those characteristic obstacles can be perceived based on the first feature model.
  • the second feature model is the feature model of the smart driving device itself.
  • The second feature model can be trained on data from the areas where the smart driving device usually travels, or on sensor data acquired during the historical driving of one or more smart driving devices. It should be understood that the areas where the intelligent driving device usually travels may not include the first area.
  • the historical driving of the one or more intelligent driving devices may include trips through the first area, but the characteristic obstacles of the first area may not be accurately perceived based on the second feature model due to insufficient data volume.
  • perceiving the obstacles in the first area based on the first feature model can improve the accuracy of perceiving the characteristic obstacles in the first area.
  • For example, the first preset range may be a range of 30 meters around the smart driving device, and the second preset range may be a range of 60 meters around the smart driving device.
  • The first preset range and the second preset range may also take other values.
  • Controlling the driving of the intelligent driving device according to the union of the first obstacle set and the second obstacle set may be: controlling the driving of the intelligent driving device according to the obstacles, in the union of the first obstacle set and the second obstacle set, that are within the first preset range.
  • Controlling the driving of the smart driving device according to the first obstacle set may be: controlling the driving of the intelligent driving device according to the obstacles in the first obstacle set that are outside the first preset range and within the second preset range.
  • Controlling the driving of the smart driving device according to the third obstacle set may be: controlling the driving of the intelligent driving device according to the obstacles in the third obstacle set that are outside the second preset range.
  • In the above technical solution, different obstacle sets are used to plan the driving path of the intelligent driving device for different preset ranges. For example, for the range close to the intelligent driving device, path planning uses the obstacles (possibly larger in number) in the union of the two obstacle sets, while for areas far away from the intelligent driving device, path planning uses only the obstacles common to the first obstacle set and the second obstacle set (possibly smaller in number). Since the intelligent driving device may be moving while the path is planned, this ensures that the number of obstacles considered grows from few to many as the distance to the device decreases, which helps improve the smoothness of the planning algorithm, reduces unnecessary waste of computing power, and helps save energy consumption.
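  • The range-based rule above can be sketched as follows, reusing the illustrative Obstacle type from the earlier sketch and assuming a caller-supplied distance function; the 30 m / 60 m defaults are only the example values mentioned above, and the boundary convention (a distance equal to a range counts as "within" it) follows the convention stated later in this application.

```python
from typing import Callable, List

def obstacles_for_planning(first_set: List,
                           second_set: List,
                           distance: Callable[[object], float],
                           r1: float = 30.0,
                           r2: float = 60.0) -> List:
    """Select the obstacles fed to the planner, band by band.

    - within r1:            union of first and second obstacle sets
    - beyond r1, within r2: first obstacle set only
    - beyond r2:            obstacles common to both sets (third set)
    """
    third_set = [o for o in first_set if o in second_set]             # common obstacles
    union = first_set + [o for o in second_set if o not in first_set]

    selected = [o for o in union if distance(o) <= r1]
    selected += [o for o in first_set if r1 < distance(o) <= r2]
    selected += [o for o in third_set if distance(o) > r2]
    return selected
```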
  • In some implementations, the method further includes: determining a first perception level according to the first obstacle set and the second obstacle set, the first perception level indicating the perception accuracy of the intelligent driving device when passing through the first area.
  • the perception algorithm required for obstacle recognition can be determined based on the first perception level.
  • In some implementations, the method further includes: sending the identification information of the first area and the information of the first perception level to the server, so that the server updates the perception level corresponding to the first area.
  • Sending the information of the first perception level to the server allows the server to update the perception level corresponding to the first area, so that the server can indicate a more accurate perception accuracy to other intelligent driving devices that need to pass through the first area. This reduces safety accidents caused by misjudging the perception level of the first area and helps improve driving safety.
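  • The application does not specify how the first perception level is derived from the two obstacle sets; the sketch below assumes, purely for illustration, that the level reflects how many of the obstacles found with the area's feature model were also found with the device's own model, mapped onto three discrete levels. The thresholds, the numeric levels, and the send callback are assumptions.

```python
from typing import Callable, List

def first_perception_level(first_set: List, second_set: List) -> int:
    """Illustrative perception level: 2 (high), 1 (medium), 0 (low).

    Assumption: the more of the area-specific obstacles (first set) that the
    device's own model (second set) also detected, the higher the device's
    perception accuracy when passing through this area.
    """
    if not first_set:
        return 2
    common = sum(1 for o in first_set if o in second_set)
    ratio = common / len(first_set)
    if ratio >= 0.9:
        return 2
    if ratio >= 0.5:
        return 1
    return 0

def report_perception_level(send: Callable[[dict], None], area_id: str, level: int) -> None:
    """Send the area identification and the first perception level to the server.

    `send` stands in for whatever transport the system actually uses.
    """
    send({"area_id": area_id, "perception_level": level})
```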
  • In some implementations, obtaining the first feature model corresponding to the first area that the intelligent driving device needs to pass through includes: receiving information of the first feature model sent by a roadside device in the first area, where the roadside device is located at the entrance of the first area.
  • the intelligent driving device obtains the information of the first characteristic model from the roadside device. There is no need for the intelligent driving device to save the first characteristic model, which helps save the storage space of the intelligent driving device.
  • Since the roadside device is deployed at the entrance of the first area, there is no need to set up multiple roadside devices so that their detection ranges cover the entire road, nor to deploy roadside devices with sensing capability, which can greatly reduce the cost of deploying roadside devices.
  • In some implementations, before receiving the information of the first feature model sent by the roadside device in the first area, the method further includes: sending information of the sensor of the intelligent driving device to the roadside device; and receiving the information of the first feature model sent by the roadside device in the first area includes: receiving the information of the first feature model sent by the roadside device based on the sensor information.
  • the sensor information may include sensor type information; or, the sensor information may also be other information that can indicate the sensor type.
  • For example, the sensor information may include a vehicle identification number (VIN).
  • the roadside device can determine the type of sensor used by the vehicle based on the VIN code.
  • Receiving the information of the first feature model that the roadside device sends based on the sensor information means that the first feature model is determined according to the sensors of the intelligent driving device, which ensures that the first feature model is compatible with the sensor data of the intelligent driving device.
  • In some implementations, sending the information of the sensor of the intelligent driving device to the roadside device includes: receiving information of a third perception level sent by the roadside device, the third perception level indicating the recommended perception accuracy corresponding to the first area; and, when the perception accuracy of a second perception level is lower than the perception accuracy of the third perception level, sending the information of the sensor of the smart driving device to the roadside device, where the second perception level indicates the current perception accuracy corresponding to the first area or the default perception accuracy of the intelligent driving device.
  • For example, the second perception level indicates the current perception accuracy corresponding to the first area. If the intelligent driving device determined its feature model according to the perception accuracy indicated by the second perception level and then performed obstacle recognition, that accuracy might be insufficient. When the perception accuracy of the second perception level is lower than that of the third perception level, the first feature model can instead be obtained from the roadside device to perform obstacle perception, which helps reduce false detections and missed detections caused by insufficient perception accuracy and improves driving safety.
  • In some implementations, when the second perception level indicates the current perception accuracy corresponding to the first area, the method further includes: sending the identification information of the first area to the server; and receiving the information of the second perception level sent by the server according to the identification information.
  • The second perception level sent by the server is determined based on the perception levels reported by multiple intelligent driving devices that have driven in the first area. For example, the perception level reported most often by those devices is determined to be the second perception level.
  • In this way, the perception accuracy can be determined from the perception level information obtained from the server, and the device can then decide whether it needs to obtain the first feature model from the roadside device. This helps reduce safety accidents caused by misjudging the perception level and improves driving safety.
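  • The decision flow described in the implementations above can be sketched as follows; the roadside, server, and device interfaces, and the assumption that a larger numeric level means higher perception accuracy, are illustrative only.

```python
def decide_and_fetch_model(roadside, server, device, area_id: str, third_level: int):
    """Fetch the first feature model only when the current accuracy is too low.

    third_level: recommended perception level broadcast by the roadside device.
    The second level is the current level for the area reported by the server,
    or the device's default level if the server has nothing for this area.
    """
    second_level = server.query_perception_level(area_id)
    if second_level is None:
        second_level = device.default_perception_level

    if second_level < third_level:
        # Current accuracy is below the recommendation: send the sensor
        # information and receive a feature model matched to the sensors.
        return roadside.request_feature_model(device.sensor_info)

    # Otherwise keep using the device's own (second) feature model.
    return None
```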
  • In a second aspect, a sensing method is provided, which can be executed by a roadside unit (RSU); alternatively, it can also be executed by a chip or circuit used in the roadside device.
  • the roadside device can communicate with the intelligent driving device, and the roadside device can include a high-performance computing unit required for data processing, etc.
  • roadside devices can also include perception devices such as cameras, millimeter-wave radar, and lidar.
  • The method includes: receiving sensor information sent by an intelligent driving device that needs to pass through the first area; determining a first feature model based on the sensor information, where the first feature model is one of a plurality of feature models, different feature models among the plurality of feature models are used to perceive the characteristic obstacles of different areas, and the first feature model corresponds to the first area; and sending the information of the first feature model to the intelligent driving device, so that the intelligent driving device senses a first set of obstacles in the first area according to the first feature model.
  • By sending the first feature model corresponding to the first area to the intelligent driving device that needs to pass through the first area, the intelligent driving device is helped to identify the characteristic obstacles of the first area based on the first feature model. This helps improve the perception accuracy of the intelligent driving device in that area and reduces missed detections and false detections caused by insufficient perception capability of the perception algorithm, thereby improving driving safety.
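  • On the roadside side, the selection of the first feature model can be sketched as a lookup keyed by area and sensor type; the table, its keys, the file paths, and the VIN-to-sensor mapping below are illustrative assumptions, not part of the application.

```python
from typing import Dict, Optional

# Hypothetical table: (area id, sensor type) -> feature model information,
# e.g. a path or URL from which the model can be fetched.
MODEL_TABLE: Dict[tuple, str] = {
    ("area-001", "lidar"): "models/area-001-lidar.bin",
    ("area-001", "camera"): "models/area-001-camera.bin",
}

def sensor_type_from_vin(vin: Optional[str]) -> Optional[str]:
    """Placeholder: map a VIN to the vehicle's sensor type (assumed mapping)."""
    vin_sensor_map: Dict[str, str] = {}  # would be filled from registration data
    return vin_sensor_map.get(vin) if vin else None

def handle_sensor_info(area_id: str, sensor_info: dict) -> Optional[str]:
    """Determine the first feature model for this device's sensors.

    sensor_info is assumed to contain either an explicit sensor type or a VIN
    from which the roadside device derives the sensor type.
    """
    sensor_type = sensor_info.get("sensor_type") or sensor_type_from_vin(sensor_info.get("vin"))
    return MODEL_TABLE.get((area_id, sensor_type))
```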
  • In some implementations, before receiving the sensor information sent by the intelligent driving device that needs to pass through the first area, the method further includes: sending perception level information to the intelligent driving device, the perception level indicating the recommended perception accuracy corresponding to the first area.
  • the roadside device can directly send the third perception level information to the intelligent driving device without request from the intelligent driving device, which helps to save signaling.
  • the method further includes: sending identification information of the first area to the intelligent driving device.
  • Sending the identification information of the first area to the intelligent driving device helps the intelligent driving device determine the area it is in and then decide whether it needs to obtain the first feature model, which helps improve driving safety.
  • In a third aspect, a sensing device is provided. The device includes an acquisition unit and a first processing unit, wherein the acquisition unit is used to acquire a first feature model corresponding to a first area that the intelligent driving device needs to pass through, the first feature model being one of a plurality of feature models, and different feature models among the plurality being used to perceive the characteristic obstacles of different areas; and the first processing unit is used to sense a first set of obstacles in the first area according to the first feature model.
  • In some implementations, the device further includes a second processing unit and a third processing unit, wherein the second processing unit is configured to sense a second obstacle set in the first area according to a second feature model of the intelligent driving device; and the third processing unit is configured to: within a first preset range of the intelligent driving device, control the driving of the intelligent driving device according to the union of the first obstacle set and the second obstacle set; outside the first preset range and within a second preset range of the intelligent driving device, control the driving of the intelligent driving device according to the first obstacle set; and outside the second preset range, control the driving of the intelligent driving device according to a third obstacle set, where the third obstacle set includes the obstacles common to the first obstacle set and the second obstacle set.
  • In some implementations, the device further includes a fourth processing unit configured to determine a first perception level according to the first obstacle set and the second obstacle set, the first perception level indicating the perception accuracy of the intelligent driving device when passing through the first area.
  • In some implementations, the device further includes a first communication unit configured to send the identification information of the first area and the information of the first perception level to the server, so that the server updates the perception level corresponding to the first area.
  • In some implementations, the device further includes a second communication unit, and the acquisition unit is configured to obtain the information of the first feature model that the second communication unit receives from the roadside device in the first area, where the roadside device is located at the entrance of the first area.
  • In some implementations, the second communication unit is further used to: send the information of the sensor of the intelligent driving device to the roadside device; and receive the information of the first feature model sent by the roadside device according to the sensor information.
  • In some implementations, the second communication unit is further configured to: receive information of a third perception level sent by the roadside device, the third perception level indicating the recommended perception accuracy corresponding to the first area; and, when the perception accuracy of a second perception level is lower than the perception accuracy of the third perception level, send the information of the sensor of the intelligent driving device to the roadside device, where the second perception level indicates the current perception accuracy corresponding to the first area or the default perception accuracy of the intelligent driving device.
  • In some implementations, when the second perception level indicates the current perception accuracy corresponding to the first area, the first communication unit is further configured to: send the identification information of the first area to the server; and receive the information of the second perception level sent by the server based on the identification information.
  • In a fourth aspect, a sensing device is provided. The device includes a first communication unit and a determination unit, wherein the first communication unit is used to receive sensor information sent by an intelligent driving device that needs to pass through the first area; the determination unit is configured to determine a first feature model according to the sensor information, the first feature model being one of a plurality of feature models, different feature models in the plurality being used to sense the characteristic obstacles of different areas, and the first feature model corresponding to the first area; and the first communication unit is further used to send the information of the first feature model to the intelligent driving device, so that the intelligent driving device perceives a first set of obstacles in the first area according to the first feature model.
  • In some implementations, the device further includes a second communication unit configured to: before the first communication unit receives the sensor information sent by the intelligent driving device, send perception level information to the intelligent driving device, where the perception level indicates the recommended perception accuracy corresponding to the first area.
  • the device further includes a third communication unit, the third communication unit is further configured to: send the identification information of the first area to the intelligent driving device.
  • In a fifth aspect, a sensing device is provided, which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device executes the method in any possible implementation of the first aspect or the second aspect.
  • The above-mentioned processing unit may include at least one processor, and the above-mentioned storage unit may be a memory, where the memory may be a storage unit within the chip (for example, a register or a cache), or a storage unit located in the intelligent driving device outside the chip (for example, read-only memory or random access memory).
  • a sixth aspect provides an intelligent driving device, which includes the device in any implementation manner of the third aspect.
  • the intelligent driving device is a vehicle.
  • a seventh aspect provides a roadside equipment, which includes the device in any implementation manner of the fourth aspect.
  • An eighth aspect provides a sensing system, which includes the device described in any implementation of the third aspect and the device described in any implementation of the fourth aspect, or includes the intelligent driving device described in any implementation of the sixth aspect and the roadside device described in any implementation of the seventh aspect.
  • In some implementations, the perception system may further include a server configured to: receive the identification information of the first area sent by an intelligent driving device that needs to pass through the first area; and send, according to the identification information, information of a second perception level to the intelligent driving device, the second perception level indicating the current perception accuracy corresponding to the first area.
  • A ninth aspect provides a computer program product, which includes computer program code. When the computer program code is run on a computer, it enables the computer to execute the method in any possible implementation of the first aspect or the second aspect.
  • the above computer program code may be stored in whole or in part on the first storage medium, where the first storage medium may be packaged together with the processor, or may be packaged separately from the processor.
  • A tenth aspect provides a computer-readable medium that stores instructions; when the instructions are executed by a processor, the processor implements the method in any possible implementation of the first aspect or the second aspect.
  • An eleventh aspect provides a chip, which includes a circuit, and the circuit is used to perform the method in any possible implementation of the first aspect or the second aspect.
  • Figure 1 is a schematic diagram of a scenario in which a vehicle senses obstacles
  • Figure 2 is a schematic functional block diagram of an intelligent driving device provided by an embodiment of the present application.
  • Figure 3 is a schematic block diagram of the sensing system architecture provided by the embodiment of the present application.
  • Figure 4 is a schematic diagram of the application scenario of the sensing method provided by the embodiment of the present application.
  • Figure 5 is a schematic flow chart of the sensing method provided by the embodiment of the present application.
  • Figure 6 is a schematic flow chart of the sensing method provided by the embodiment of the present application.
  • Figure 7 is a schematic flow chart of the sensing method provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of an obstacle set provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of a scene for determining obstacles according to a preset range provided by an embodiment of the present application.
  • Figure 10 is a schematic block diagram of a sensing device provided by an embodiment of the present application.
  • Figure 11 is a schematic block diagram of a sensing device provided by an embodiment of the present application.
  • Figure 12 is a schematic block diagram of a sensing device provided by an embodiment of the present application.
  • At least one of a, b, or c can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c can be singular or multiple.
  • Prefixes such as "first" and "second" are used in the embodiments of this application only to distinguish different described objects, and have no limiting effect on the position, order, priority, quantity, or content of the described objects; nor does the use of such ordinal prefixes to distinguish the described objects constitute a redundant restriction on them.
  • the perception data (or sensor data) obtained by the vehicle's perception system is processed to obtain obstacle information around the vehicle, and then the vehicle's driving path is planned based on the obstacle information.
  • the perception data of the roadside device and the vehicle's perception data can be combined to determine the obstacle information around the vehicle.
  • For example, the vehicle can interact with the roadside equipment (or roadside infrastructure equipment) through vehicle-to-infrastructure (V2I) communication.
  • The roadside device can be set up beside the road and can obtain sensing results within the detection range of its sensing device (as shown in Figure 1). Since the detection range of a roadside device is relatively limited, multiple roadside devices need to be installed along the road in order to cover the entire road.
  • If the perception algorithm has not been trained on data from mountainous roads, it may falsely detect or miss wild animals during obstacle recognition. In the former case (false detection), subsequent vehicle path planning may incorrectly predict the movement speed or path of the obstacle (such as a falsely detected wild animal), which leads to an unsafe planned driving path and may, for example, cause the vehicle to collide with an obstacle. In the latter case (missed detection), the impact of the obstacle may be completely ignored when planning the vehicle path, which seriously affects driving safety.
  • embodiments of the present application provide a sensing method, device and system.
  • When an intelligent driving device (such as a vehicle) needs to pass through an area, it can obtain the feature model corresponding to that area (such as the first feature model below). Since this feature model is trained on data from that area, the device can perceive the characteristic obstacles of the area based on the feature model, which reduces the perception deviation of intelligent driving devices on roads in different areas, improves their perception accuracy in different areas, and thereby improves driving safety.
  • FIG. 2 is a functional block diagram of the intelligent driving device 100 provided by the embodiment of the present application.
  • the smart driving device 100 may include a sensing system 120 and a computing platform 150 , where the sensing system 120 may include several types of sensors that sense information about the environment around the smart driving device 100 .
  • the sensing system 120 may include a positioning system, and the positioning system may be a global positioning system (GPS), Beidou system, or other positioning systems.
  • the sensing system 120 may include one or more of an inertial measurement unit (IMU), lidar, millimeter wave radar, ultrasonic radar, and camera device.
  • the computing platform 150 may include processors 151 to 15n (n is a positive integer).
  • the processor is a circuit with signal processing capabilities.
  • the processor may be a circuit with instruction reading and execution capabilities.
  • For example, the processor may be a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU), or a digital signal processor (DSP).
  • the processor can realize certain functions through the logical relationship of the hardware circuit. The logical relationship of the hardware circuit is fixed or can be reconstructed.
  • For example, the processor may be an application-specific integrated circuit (ASIC).
  • the process of the processor loading the configuration file and realizing the hardware circuit configuration can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • The processor can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU).
  • the computing platform 150 may also include a memory, which is used to store instructions. Some or all of the processors 151 to 15n may call instructions in the memory to implement corresponding functions.
  • The above-mentioned computing platform 150 may include at least one of an MDC, a vehicle domain controller (VDC), and a chassis domain controller (CDC); or it may also include other computing platforms, such as an in-car application server (ICAS) controller, a body domain controller (BDC), a special equipment system (SAS), a media graphics unit (MGU), a body super core (BSC), an ADAS super core, etc.
  • ICAS may include at least one of the following: vehicle control server ICAS1, intelligent driving server ICAS2, intelligent cockpit server ICAS3, and infotainment server ICAS4.
  • the smart driving device 100 may include an advanced driving assist system (ADAS).
  • The ADAS uses a variety of sensors in the perception system 120 (including but not limited to lidar, millimeter-wave radar, camera devices, ultrasonic sensors, the global positioning system, and the inertial measurement unit) to obtain information about the surroundings of the intelligent driving device, analyzes and processes that information, and implements functions such as obstacle perception, target recognition, positioning of the intelligent driving device, path planning, and driver monitoring/reminding, thereby improving the safety, automation, and comfort of the intelligent driving device.
  • FIG 3 shows a schematic block diagram of the sensing system architecture provided by the embodiment of the present application.
  • the sensing system includes a roadside data processing center 200, a cloud server 300 and a vehicle-side data processing center 400.
  • the roadside data processing center 200 can be a computing platform installed in the roadside equipment, or it can also be a server used to control the roadside equipment;
  • The vehicle-side data processing center 400 can be the computing platform 150 installed in the intelligent driving device 100 shown in Figure 2, or it can also be a server used to control the above-mentioned intelligent driving device 100.
  • The roadside data processing center 200 includes: a broadcast module 210 and a deployment module 220.
  • The broadcast module 210 is used to send, to the intelligent driving devices within the detection range of the roadside equipment, the identification information of the area the roadside equipment is responsible for (for example, an area identification number) and the perception accuracy recommended for intelligent driving devices in that area (hereinafter referred to as the recommended perception accuracy information).
  • the deployment module 220 is configured to receive sensor information sent by the smart driving device, and then send information about the feature model of the area it is responsible for to the smart driving device based on the sensor information.
  • The feature model is used to sense the characteristic obstacles of the area the roadside device is responsible for.
  • The "characteristic obstacles" of a certain area involved in the embodiments of this application may be obstacles unique to that area and different from other areas, for example, wild animals in mountainous areas, people wearing clothing with regional characteristics, or means of transportation unique to the area.
  • the cloud server 300 includes: a transceiver module 310 and a query module 320.
  • The transceiver module 310 can receive the identification information of an area sent by the intelligent driving device; the query module 320 determines the perception accuracy corresponding to that area according to the identification information, and the information of that perception accuracy is then sent to the intelligent driving device through the transceiver module 310.
  • The perception accuracy corresponding to this area is the perception accuracy that the cloud server 300 determines to be the one most used by intelligent driving devices in this area.
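  • A minimal sketch of how the query module might pick the "most used" perception level for an area, assuming the cloud server simply accumulates the levels reported by passing devices; the storage layout and function names are assumptions.

```python
from collections import Counter, defaultdict
from typing import DefaultDict, List, Optional

# Illustrative store: area id -> perception levels reported by passing devices.
REPORTED_LEVELS: DefaultDict[str, List[int]] = defaultdict(list)

def record_report(area_id: str, level: int) -> None:
    """Called when an intelligent driving device reports its perception level."""
    REPORTED_LEVELS[area_id].append(level)

def query_perception_level(area_id: str) -> Optional[int]:
    """Return the most frequently reported level for this area, if any."""
    levels = REPORTED_LEVELS.get(area_id)
    if not levels:
        return None
    return Counter(levels).most_common(1)[0][0]
```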
  • The vehicle-side data processing center 400 includes: an area awareness scheduling node 410, an area sensing node 420, a first fusion node 430, a sensing node 440, a second fusion node 450, a regional fusion node 460, and a regulation node 470.
  • The area awareness scheduling node 410 is used to interact with the roadside data processing center 200 and the cloud server 300, including but not limited to: receiving the identification information of the area and the recommended perception accuracy information sent by the broadcast module 210; sending the sensor information of the intelligent driving device to the deployment module 220; receiving the feature model information sent by the deployment module 220 and deploying the feature model to the area sensing node 420 and the first fusion node 430; sending the identification information of the area to the transceiver module 310; and receiving the information of the perception accuracy corresponding to the area sent by the transceiver module 310.
  • The area sensing node 420 is used to process sensor data from the perception system of the intelligent driving device based on the feature model information received from the roadside data processing center 200 to obtain obstacle information around the intelligent driving device.
  • the area sensing node 420 includes one or more sub-nodes, each sub-node is used to determine obstacle information (such as the location and type of obstacles, etc.) based on a type of sensor data.
  • the area sensing node 420 includes a sub-node 421, a sub-node 422, and a sub-node 423.
  • The sub-node 421 is used to determine obstacle information based on lidar data, the sub-node 422 is used to determine obstacle information based on camera data, and the sub-node 423 is used to determine obstacle information based on GPS data.
  • the first fusion node 430 is used to fuse the obstacle information processed by the area sensing node 420 to obtain a first obstacle set.
  • the above-mentioned fusion processing can be understood as: based on the obstacle information determined by each sub-node in the above-mentioned area sensing node 420, further determining the type of obstacles and their positions around the intelligent driving device.
  • the sensing node 440 is used to process sensor data from the sensing system of the smart driving device based on the smart driving device's own feature model to obtain obstacle information around the smart driving device.
  • the sensing node 440 may include one or more child nodes.
  • the second fusion node 450 is used to fuse the obstacle information processed by the sensing node 440 to obtain a second obstacle set.
  • The regional fusion node 460 may include a receiving management module 461, a fusion evaluation module 462, and a sending management module 463. The receiving management module 461 is used to receive the obstacle set information from the first fusion node 430 and the second fusion node 450 and send it to the fusion evaluation module 462; the fusion evaluation module 462 is used to determine a new obstacle set according to that information and send it to the sending management module 463; and the sending management module 463 is used to send the information of the new obstacle set to the regulation node 470.
  • the regulation node 470 plans the driving path of the intelligent driving device according to the new obstacle set, and generates a control signal to control the driving of the intelligent driving device.
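  • The data flow between the nodes in Figure 3 can be summarized with the following sketch, in which the node interfaces are simplified stand-ins (plain callables) for the modules described above rather than the actual implementation.

```python
def vehicle_side_pipeline(sensor_data: dict,
                          area_model,         # first feature model from the roadside device
                          own_model,          # the device's own (second) feature model
                          fuse,               # fusion step: per-sensor results -> obstacle set
                          evaluate,           # fusion evaluation: two sets -> new set(s)
                          plan_and_control):  # regulation step
    """Sketch of the Figure 3 flow from raw sensor data to a control decision."""
    # Area sensing node + first fusion node -> first obstacle set.
    first_set = fuse(area_model(sensor_data))
    # Sensing node + second fusion node -> second obstacle set.
    second_set = fuse(own_model(sensor_data))
    # Regional fusion node -> new obstacle set(s) for the regulation node.
    new_sets = evaluate(first_set, second_set)
    # Regulation node plans the path and generates the control signal.
    return plan_and_control(new_sets)
```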
  • The system architecture shown in Figure 3 is only an exemplary illustration. In a specific implementation, the above system may include more or fewer modules or nodes, and modules or nodes may be added or removed according to actual conditions.
  • the perception accuracy involved in this application can be understood as the accuracy of sensing obstacles. The higher the perception accuracy, the smaller the probability of misdetection or missed detection of obstacles.
  • the roadside data processing center 200 can be set up in the roadside equipment.
  • The roadside equipment can be set up at the entrance of the area it is responsible for (such as the first area), for example at least one of the entrances a, b, and c shown in Figure 4.
  • the roadside device communicates with the intelligent driving device within its detection range, and sends relevant information of "the area it is responsible for" to the intelligent driving device, such as identification information of the area and/or information suggesting perception accuracy.
  • FIG. 5 shows a schematic flow chart of the sensing method 500 provided by the embodiment of the present application.
  • The method 500 can be executed by the intelligent driving device 100 shown in FIG. 2, or by one or more of the processors of the computing platform 150 in FIG. 2, or by the vehicle-side data processing center 400 shown in FIG. 3.
  • The following description takes execution by an intelligent driving device as an example.
  • the method 500 may include steps S501 and S502.
  • S501 Obtain a first feature model corresponding to a first area that the intelligent driving device needs to pass through, where the first feature model is one of multiple feature models, and different feature models among the multiple feature models are used to sense the characteristic obstacles of different areas.
  • the "characteristic obstacles" of a region can be obstacles that are unique to the region and different from other regions, such as wild animals in mountainous areas, or people wearing clothing with regional characteristics. It should be understood that the first feature model corresponding to the first region is used to identify characteristic obstacles in the first region.
  • The first area may be divided based on geographical location, or based on administrative region, or based on both geographical location and administrative region; for example, the Qinling Mountains nature reserve in Shaanxi province (an administrative region) may be taken as the first area. It may also be divided on some other basis.
  • When acquiring the first feature model, the intelligent driving device may have already entered the first area, or it may not yet have entered the first area but has entered the detection range of the roadside device of the first area.
  • The intelligent driving device can obtain the first feature model from, or based on, the information of the first feature model received from the roadside device.
  • For example, when the roadside device detects the intelligent driving device, it sends the information of the first feature model to the intelligent driving device.
  • Alternatively, when the intelligent driving device determines that it is driving into the first area, it requests the first feature model from the roadside device, and the roadside device responds to the request and sends it the information of the first feature model.
  • Alternatively, the smart driving device may have saved the first feature model corresponding to the first area, and when it determines that it has driven into the first area, it retrieves the first feature model from its stored historical information.
  • The way in which the intelligent driving device determines that it is driving into the first area may include, but is not limited to: determining this based on sensor information (such as GPS signals) obtained from the perception system; or determining this based on the identification information of the first area received from the roadside device.
  • the identification information of the first area may be an area number, or may be other information that can identify the first area.
  • S502 Perceive the first set of obstacles in the first area according to the first feature model.
  • the intelligent driving device processes sensor data from the perception system according to the first feature model, and perceives the first set of obstacles in the first area.
  • the first obstacle set may include characteristic obstacles of the first area.
  • a driving path can be planned for the intelligent driving device according to the first set of obstacles, and then the driving of the intelligent driving device can be controlled.
  • the intelligent driving device may also sense the second set of obstacles in the first area according to the second feature model.
  • the second feature model is the feature model of the smart driving device itself.
  • The second feature model can be trained on sensor data obtained during the historical driving of the smart driving device; through the second feature model, obstacles on the roads the device frequently travels can be perceived. It should be understood that the characteristic obstacles of the first area may not be perceived, or may not be perceived accurately, based on the second feature model.
  • a driving path can be planned for the intelligent driving device based on the first obstacle set and the second obstacle set, and then the intelligent driving device can be controlled to travel.
  • According to the sensing method provided by the embodiment of the present application, when the intelligent driving device is about to drive, or is already driving, in the first area, the characteristic obstacles of the first area can be identified by obtaining the first feature model corresponding to the first area, which helps improve the perception accuracy of the intelligent driving device in that area.
  • When the intelligent driving device travels to different areas, it can sense the different characteristic obstacles corresponding to each area, which reduces missed detections and false detections caused by insufficient perception capability of the perception algorithm and thereby improves driving safety.
  • FIG. 6 shows a schematic flowchart of the sensing method 600 provided by the embodiment of the present application.
  • The method 600 can be executed by a roadside device that interacts with the intelligent driving device 100 shown in FIG. 2, or by the computing platform of that roadside device, or by the roadside data processing center 200 shown in Figure 3.
  • The following description takes execution by the roadside device as an example.
  • the method 600 may include steps S601-S603.
  • S601 Receive sensor information sent by an intelligent driving device that needs to pass through the first area.
  • The first area may be the first area in the method 500, and for the way it is divided, reference may be made to the description in the method 500, which is not repeated here.
  • the sensor information may include sensor type information, for example, information indicating that the sensor is a radar, or a camera device.
  • the sensor information can also be other information that can indicate the sensor type.
  • the sensor information can include a VIN code.
  • The roadside device can determine the type of sensor used in the vehicle based on the VIN. It should be understood that different types of sensors output sensor data in different formats, and different feature models may be required when processing the data of different sensors to obtain sensing results (such as obstacle sets).
  • S602 Determine a first feature model based on the sensor information, where the first feature model is one of multiple feature models, and different feature models among the multiple feature models are used to sense the characteristic obstacles of different areas.
  • the first feature model corresponds to this first region.
  • the first feature model may be the first feature model in method 500.
  • S603 Send the information of the first characteristic model to the intelligent driving device, so that the intelligent driving device perceives the first set of obstacles in the first area according to the first characteristic model.
  • the information of the first feature model may include the first feature model, or the information of the first feature model may also be information that can identify or obtain the first feature model.
  • The sensing method provided by the embodiment of the present application sends the first feature model corresponding to the first area to the smart driving device that needs to pass through the first area, which helps the smart driving device recognize the characteristic obstacles of the first area based on the first feature model. This helps improve the perception accuracy of the intelligent driving device in that area and reduces missed detections and false detections caused by insufficient perception capability of the perception algorithm, thereby improving driving safety.
  • Figure 7 shows a schematic flow chart of the sensing method 700 provided by the embodiment of the present application.
  • the method 700 is an extension of the method 500 and the method 600.
  • the method 700 can be executed in parallel with the methods 500 and 600 .
  • The steps performed by the roadside device in the method 700 can be executed by the roadside data processing center 200 shown in Figure 3, and the steps performed by the server in the method 700 can be executed by the cloud server 300 shown in FIG. 3.
  • the steps or operations of the sensing method shown in FIG. 7 are only examples, and embodiments of the present application may also perform other operations or modifications of each operation in FIG. 7 .
  • the various steps in FIG. 7 may be performed in a different order than presented in FIG. 7 , and it is possible that not all operations in FIG. 7 may be performed.
  • the method 700 may include:
  • S701 When the roadside device recognizes the intelligent driving device, send the identification information of the first area and the information of the third perception level to the intelligent driving device.
  • The identification information of the first area may be the identification information of the first area in the above embodiments; it may be an area number, or other information that can identify the first area.
  • the roadside device may respectively send the identification information of the first area and the information of the third perception level to the intelligent driving device, where the third perception level indicates the recommended perception accuracy.
  • The recommended perception accuracy can be understood as the accuracy with which the intelligent driving device is recommended to sense obstacles in this area in order to ensure driving safety.
  • the roadside device is set at the entrance of the first area.
  • S702 The intelligent driving device sends the identification information of the first area to the server.
  • S703 The server determines the second perception level according to the identification information of the first area.
  • the second perception level indicates the current perception accuracy of the first area.
  • the second perception level may be determined by the server based on information on the perception level reported by the intelligent driving device passing through the first area.
  • S704 The server sends the second perception level information to the intelligent driving device.
  • S703 may be skipped and S704 may be executed directly, that is, the server sends the second perception level information to the intelligent driving device according to the identification information of the first area.
  • S705 The intelligent driving device determines that the perception accuracy indicated by the second perception level is lower than the perception accuracy indicated by the third perception level.
  • the second perception level may indicate the current perception accuracy of the first area, or may also indicate the default perception accuracy of the intelligent driving device.
  • the default perception accuracy may be set when the smart driving device leaves the factory, or may be set by the user of the smart driving device.
  • S702 to S704 may not be executed, and S705 may be executed directly after S701 is executed. That is, if the intelligent driving device does not obtain information indicating the perception accuracy of the first area from the server, the intelligent driving device determines whether its default perception accuracy is less than the perception accuracy indicated by the third perception level.
  • Otherwise, the intelligent driving device can sense surrounding obstacles using the feature model it currently uses (such as the feature model of the intelligent driving device itself, that is, the second feature model below); or the intelligent driving device determines the feature model for perceiving obstacles based on the perception accuracy indicated by the second perception level.
  • S706 The smart driving device sends its sensor information to the roadside device.
  • the sensor information may be the sensor information in method 600.
  • S707 The roadside device determines the first feature model corresponding to the first area based on the sensor information.
  • the roadside device also determines the execution files and dependencies associated with the first feature model, and sends them to the intelligent driving device along with the first feature model.
  • S708 The roadside device sends the information of the first feature model to the intelligent driving device.
  • S707 can be skipped and S708 can be executed directly, that is, the roadside device sends the information of the first feature model to the intelligent driving device according to the sensor information.
  • S709 The intelligent driving device senses the first set of obstacles in the first area according to the first feature model.
  • For example, the intelligent driving device deploys the first feature model (for example, to the area sensing node 420 in FIG. 3) and processes the sensor data from the perception system according to the first feature model to obtain the first obstacle set.
  • the recommended perception accuracy indicated by the third perception level in S701 can also be used to characterize the accuracy of perceiving obstacles based on the first feature model.
  • S710 The intelligent driving device senses the second set of obstacles in the first area according to the second feature model of the intelligent driving device.
  • the default perception accuracy of the intelligent driving device involved in S705 can also be used to characterize the accuracy of obstacle perception based on the second feature model.
  • S709 and S710 can also be executed simultaneously.
  • The intelligent driving device controls its driving according to the first obstacle set and the second obstacle set.
  • the Hungarian matching algorithm is used to match the obstacles in the first obstacle set and the second obstacle set to obtain a new obstacle set, and then the driving path of the intelligent driving device is planned according to the new obstacle set.
  • a new obstacle set obtained after matching two obstacle sets can be as shown in Figure 8.
  • frame 810 is the first obstacle set, including obstacle 5 to obstacle 10, and frame 820 is the second obstacle set, including obstacle 1 to obstacle 7.
  • Obstacle sets A, B, and C are the new obstacle sets obtained by matching the first obstacle set and the second obstacle set: obstacle set A may include obstacles 1 to 4, obstacle set B may include obstacles 5 to 7, and obstacle set C may include obstacles 8 to 10.
  • the obstacle set B (or the third obstacle set) includes the obstacles common to the first obstacle set and the second obstacle set; obstacle set A is the obstacle set obtained by removing the obstacles in obstacle set B from the second obstacle set; and obstacle set C is the obstacle set obtained by removing the obstacles in obstacle set B from the first obstacle set.
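The matching and partition described above can be sketched as follows. This is a minimal illustration, assuming each obstacle carries an estimated (x, y) center, that the matching cost is the Euclidean distance between centers, and that a Hungarian assignment is only accepted below a gating distance; the data layout, the gate value, and the choice to keep the area model's estimate for matched pairs are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def partition_obstacles(first_set, second_set, gate=2.0):
    """Match obstacles from the area model (first_set) and the vehicle's own
    model (second_set) with the Hungarian algorithm, then split them into:
      B: obstacles found by both models (matched pairs),
      A: obstacles only in second_set (second_set minus B),
      C: obstacles only in first_set (first_set minus B).
    Each obstacle is assumed to be a dict with an (x, y) "center" entry.
    """
    if not first_set or not second_set:
        return list(second_set), [], list(first_set)   # A, B, C

    cost = np.array([[np.hypot(f["center"][0] - s["center"][0],
                               f["center"][1] - s["center"][1])
                      for s in second_set] for f in first_set])
    rows, cols = linear_sum_assignment(cost)

    matched_first, matched_second, set_b = set(), set(), []
    for i, j in zip(rows, cols):
        if cost[i, j] <= gate:          # accept only close pairs as matches
            matched_first.add(i)
            matched_second.add(j)
            set_b.append(first_set[i])  # keep the area model's estimate

    set_a = [s for j, s in enumerate(second_set) if j not in matched_second]
    set_c = [f for i, f in enumerate(first_set) if i not in matched_first]
    return set_a, set_b, set_c
```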
  • the information of the above-mentioned obstacle sets A, B, and C is sent to the regulation and control module (such as the regulation and control node 470), so that it can plan the driving path of the intelligent driving device based on the above-mentioned obstacle information.
  • different obstacle sets are used to plan the driving path of the intelligent driving device for different preset ranges. For example, within the first preset range of the intelligent driving device, the intelligent driving device is controlled to travel according to the union of the first obstacle set and the second obstacle set; outside the first preset range and within the second preset range of the intelligent driving device, the intelligent driving device is controlled to travel according to the first obstacle set; and outside the second preset range, the intelligent driving device is controlled to travel according to the third obstacle set.
  • "within" may include the boundary value itself, while "outside" may exclude it. For example, within 30 meters of the intelligent driving device includes distances less than or equal to 30 meters from the intelligent driving device, and beyond 30 meters of the intelligent driving device includes distances greater than 30 meters from it.
  • Taking the vehicle 910 in Figure 9 as the intelligent driving device, within the first preset range 920 of the vehicle 910, path planning can be performed based on the obstacle sets A', B', and C', where A' is the set of obstacles in obstacle set A that are within the first preset range 920 of the vehicle 910, B' is the set of obstacles in obstacle set B that are within the first preset range 920, and C' is the set of obstacles in obstacle set C that are within the first preset range 920. The union of A', B', and C' can also be understood as the set of obstacles in the union of the first obstacle set and the second obstacle set that are within the first preset range 920 of the vehicle 910.
  • Outside the first preset range 920 of the vehicle 910 and within its second preset range 930, path planning can be performed based on the obstacle sets B" and C", where B" is the set of obstacles in obstacle set B that are outside the first preset range 920 and within the second preset range 930 of the vehicle 910, and C" is the set of obstacles in obstacle set C that are outside the first preset range 920 and within the second preset range 930. The union of B" and C" can also be understood as the set of obstacles in the union of the first obstacle set and the second obstacle set that are outside the first preset range 920 and within the second preset range 930 of the vehicle 910.
  • Outside the second preset range 930 of the vehicle 910, path planning can be performed based on the obstacle set B"', where B"' is the set of obstacles in obstacle set B that are outside the second preset range 930 of the vehicle 910. The obstacle set B"' can also be understood as the set of obstacles in the third obstacle set that are outside the second preset range 930 of the vehicle 910.
  • the above-mentioned first preset range (or first preset range 920) may be a range of 30 meters from the intelligent driving device (or vehicle 910), and the above-mentioned second preset range (or second preset range 930) may be a range of 60 meters from the intelligent driving device (or vehicle 910).
  • the above-mentioned setting of the preset range is only an exemplary explanation.
  • other preset ranges can also be used.
  • the closest in-path vehicle (CIPV) method can be used to determine the preset range.
  • the value of the preset range can be different in different directions of the intelligent driving device.
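A minimal sketch of the range-based selection described above, assuming planar Euclidean distance from the vehicle and the illustrative 30 m / 60 m ranges mentioned above; direction-dependent or CIPV-based ranges are not modeled here, and the function and field names are illustrative assumptions.

```python
import math

def obstacles_for_planning(set_a, set_b, set_c, ego_pos,
                           first_range=30.0, second_range=60.0):
    """Select which obstacles are passed to the planner, by distance band:
      - within first_range:              A ∪ B ∪ C  (union of both models)
      - (first_range, second_range]:     B ∪ C      (the first obstacle set)
      - beyond second_range:             B          (the third obstacle set)
    """
    def dist(obs):
        return math.hypot(obs["center"][0] - ego_pos[0],
                          obs["center"][1] - ego_pos[1])

    selected = []
    for obs in set_a:
        if dist(obs) <= first_range:      # A contributes only in the near band
            selected.append(obs)
    for obs in set_b:
        selected.append(obs)              # B is used in every band
    for obs in set_c:
        if dist(obs) <= second_range:     # C contributes up to the second range
            selected.append(obs)
    return selected
```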
  • the intelligent driving device determines the first perception level based on the first obstacle set and the second obstacle set.
  • the first perception level indicates the perception accuracy of the intelligent driving device when passing through the first area.
  • the perception accuracy of the intelligent driving device when passing through the first area can also be understood as the perception accuracy of the intelligent driving device in the first area.
  • the intelligent driving device determines the first perception level based on the obstacle sets A, B, and C. Specifically, the intelligent driving equipment determines the cumulative number of matches s, the cumulative number of false detections in the area fd, and the cumulative number of missed detections in the area lf. Among them, the cumulative number of matches s is the total number of obstacles included in the obstacle set B determined once or multiple times, and the cumulative number of regional false detections fd is the total number of obstacles included in the obstacle set A determined once or multiple times. The cumulative area missed detection number lf is the total number of obstacles included in the obstacle set C determined once or multiple times. Further, the false detection rate a and the accuracy rate b are calculated according to the following formulas:
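Under the conventional definitions implied by s (matched obstacles), fd (obstacles only in set A), and lf (obstacles only in set C), the two rates can be written as follows; this is one consistent reading rather than a verbatim reproduction of the patent's expressions:

    a = fd / (s + fd)    (false detection rate)
    b = s / (s + lf)     (accuracy rate)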
  • the first perception level is determined according to the false detection rate a and the accuracy rate b.
  • the relationship between the perception level, the false detection rate, and the accuracy rate can be shown in Table 1. It should be understood that the lower the false detection rate and the higher the accuracy rate, the higher the perception accuracy indicated by the perception level. As shown in Table 1, the higher the perception level value, the higher the perception accuracy. That is, the perception accuracy indicated by perception level 1 is smaller than the perception accuracy indicated by perception level 2, and so on.
  • For example, when the false detection rate a is less than or equal to the false detection rate threshold corresponding to perception level 1 and greater than the threshold corresponding to perception level 2, and the accuracy rate b is greater than or equal to the accuracy rate threshold corresponding to perception level 1 and less than the threshold corresponding to perception level 2, the first perception level is determined to be perception level 1.
  • a0 to a3 shown in Table 1 may be 30%, 20%, 10%, and 5% respectively, and b0 to b3 may be 80%, 85%, 90%, and 95% respectively; alternatively, a0 to a3 and b0 to b3 may take other values.
  • the perception level can also be determined by further taking into account the time c required by the perception algorithm to perceive obstacles. For example, when the false detection rate a and the accuracy rate b each meet the conditions corresponding to perception level 1, and the time required by the perception algorithm to perceive obstacles is greater than or equal to the duration threshold corresponding to perception level 1 and less than the duration threshold corresponding to perception level 2, the first perception level is determined to be perception level 1. For example, c0 to c3 shown in Table 1 may be 0.005 hours, 0.010 hours, 0.015 hours, and 0.020 hours respectively, or c0 to c3 may take other values.
  • Table 1: Correspondence between perception level, false detection rate threshold, accuracy rate threshold, and duration threshold

    | Perception level | False detection rate threshold | Accuracy rate threshold | Duration threshold |
    |------------------|--------------------------------|-------------------------|--------------------|
    | 1                | a0                             | b0                      | c0                 |
    | 2                | a1                             | b1                      | c1                 |
    | 3                | a2                             | b2                      | c2                 |
    | 4                | a3                             | b3                      | c3                 |
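A minimal sketch of applying Table 1, assuming the first perception level is the highest level whose false detection rate and accuracy rate thresholds are both satisfied; the duration criterion can be added analogously, and the threshold values used here are the illustrative percentages given above, not fixed values from the patent.

```python
# Illustrative thresholds from Table 1:
# level -> (false detection rate threshold a_k, accuracy rate threshold b_k)
LEVEL_THRESHOLDS = {1: (0.30, 0.80), 2: (0.20, 0.85),
                    3: (0.10, 0.90), 4: (0.05, 0.95)}

def perception_level(a: float, b: float) -> int:
    """Return the highest perception level whose thresholds are satisfied:
    the false detection rate a must not exceed the level's threshold and the
    accuracy rate b must reach it. Returns 0 if even level 1 is not met."""
    for level in sorted(LEVEL_THRESHOLDS, reverse=True):
        a_thr, b_thr = LEVEL_THRESHOLDS[level]
        if a <= a_thr and b >= b_thr:
            return level
    return 0
```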
  • the intelligent driving device can save the information of the first perception level, and when driving to the first area next time, the perception algorithm required for obstacle recognition can be determined based on the first perception level.
  • the intelligent driving device sends the identification information of the first area and the information of the first perception level to the server.
  • the intelligent driving device sends the identification information of the first area and the information of the first perception level to the server, so that the server updates the perception level corresponding to the first area.
  • when the first perception level is different from the second perception level received through the above S704, the intelligent driving device sends the information of the first perception level to the server.
  • S712 and S713 may not be executed.
  • With the sensing method provided by the embodiments of the present application, when the intelligent driving device is about to drive or has already driven in the first area, the first feature model corresponding to the first area can be obtained from the roadside device, and the characteristic obstacles in the first area can be identified according to the first feature model, which helps improve the perception accuracy of the intelligent driving device in this area. Furthermore, when the intelligent driving device travels to different areas, it can sense the different characteristic obstacles corresponding to those areas. Therefore, missed detections and false detections caused by insufficient perception capability of the perception algorithm can be reduced, thereby improving driving safety.
  • the sensing method provided by the embodiments of this application can also use different obstacle sets to plan the driving path for different preset ranges of the intelligent driving device, which helps to improve the smoothness of the planning algorithm, reduce unnecessary waste of computing power, and save energy.
  • the implementation of the sensing method provided by the embodiments of the present application makes it possible to deploy a roadside device only at the entrance of the area, without the need to set up multiple roadside devices so that their detection range covers the entire road, and without the need to deploy perception-type roadside equipment, which can greatly save the cost of deploying roadside equipment.
  • embodiments of the present application can also evaluate the perception level of the intelligent driving device in the first area, so that the server updates the perception level corresponding to the first area, thereby reducing safety accidents caused by misjudgment of the perception level of the first area and helping to improve driving safety.
  • Figure 10 shows a schematic block diagram of a sensing device 1000 provided by an embodiment of the present application.
  • the device 1000 includes an acquisition unit 1010 and a first processing unit 1020.
  • the apparatus 1000 may include means for performing the method in FIG. 5 or FIG. 7 . Moreover, each unit in the device 1000 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes of the method embodiment in FIG. 5 or FIG. 7 .
  • the acquisition unit 1010 can be used to execute S501 in the method 500
  • the first processing unit 1020 can be used to execute S502 in the method 500.
  • the acquisition unit 1010 is used to: acquire the first feature model corresponding to the first area that the intelligent driving device needs to pass through.
  • the first feature model is one of a plurality of feature models, and different feature models among the plurality of feature models are used to perceive characteristic obstacles in different areas;
  • the first processing unit 1020 is used to: perceive the first set of obstacles in the first area according to the first feature model.
  • the device 1000 further includes a second processing unit and a third processing unit, wherein the second processing unit is configured to: sense the second obstacle set in the first area according to the second feature model of the intelligent driving device; and the third processing unit is configured to: within the first preset range of the intelligent driving device, control the driving of the intelligent driving device according to the union of the first obstacle set and the second obstacle set; outside the first preset range and within a second preset range of the intelligent driving device, control the driving of the intelligent driving device according to the first obstacle set; and outside the second preset range, control the driving of the intelligent driving device according to a third obstacle set, wherein the third obstacle set includes obstacles common to the first obstacle set and the second obstacle set.
  • the device further includes a fourth processing unit configured to: determine a first perception level according to the first obstacle set and the second obstacle set, the first perception level indicating the perception accuracy of the intelligent driving device when passing through the first area.
  • the device further includes a first communication unit configured to: send the identification information of the first area and the information of the first perception level to the server, so that the server updates the perception level corresponding to the first area.
  • the device further includes a second communication unit, and the acquisition unit 1010 is configured to: acquire the information of the first feature model, received by the second communication unit, sent by the roadside equipment of the first area, where the roadside equipment is located at the entrance of the first area.
  • the first communication unit and the second communication unit may be the same communication unit, or they may be different communication units, which is not specifically limited in the embodiments of the present application.
  • the second communication unit is also configured to: send information about the sensor of the intelligent driving device to the roadside device; and receive information about the first characteristic model sent by the roadside device based on the information of the sensor.
  • the second communication unit is also configured to: receive information of a third perception level sent by the roadside device, where the third perception level indicates the recommended perception accuracy corresponding to the first area; and, when the perception accuracy of the second perception level is less than the perception accuracy of the third perception level, send the information of the sensor of the intelligent driving device to the roadside device, wherein the second perception level indicates the current perception accuracy corresponding to the first area or the default perception accuracy of the intelligent driving device.
  • when the second perception level indicates the current perception accuracy corresponding to the first area, the first communication unit is also configured to: send the identification information of the first area to the server; and receive the information of the second perception level sent by the server according to the identification information.
  • the above-mentioned acquisition unit 1010 may include the area-aware scheduling node 410 shown in Figure 3; the above-mentioned first processing unit 1020 may include the area sensing node 420 and the first fusion node 430 shown in Figure 3; the above-mentioned second processing unit may include the sensing node 440 and the second fusion node 450 shown in Figure 3; the above-mentioned third processing unit may include the area fusion node 460 shown in Figure 3, or the above-mentioned third processing unit may include the area fusion node 460 and the regulation and control node 470 shown in Figure 3; the above-mentioned fourth processing unit may include the area fusion node 460 shown in Figure 3; the above-mentioned first communication unit or second communication unit may include the area-aware scheduling node 410 shown in Figure 3.
  • the above-mentioned acquisition unit 1010 and the first communication unit (and/or the second communication unit) may be the same unit, or the acquisition unit 1010 includes the first communication unit (and/or the second communication unit) .
  • each operation performed by the acquisition unit 1010 and the first processing unit 1020 may be performed by the same processor, or may be performed by different processors, for example, by multiple processors.
  • one or more processors can be connected to one or more sensors in the perception system 120 in Figure 2 to obtain information such as the type and location of obstacles around the intelligent driving device from the one or more sensors, and The above information is processed according to the first feature model and/or the second feature model to obtain an obstacle set (for example, the first obstacle set and/or the second obstacle set).
  • one or more processors can also regulate the driving path of the intelligent driving device according to the above obstacle set.
  • the one or more processors may be processors provided in a vehicle machine, or may also be processors provided in other vehicle-mounted terminals.
  • the device 1000 may be a chip provided in a vehicle machine or other vehicle-mounted terminal.
  • the above-mentioned device 1000 may be a computing platform 150 as shown in Figure 2 provided in an intelligent driving device.
  • Figure 11 shows a schematic block diagram of a sensing device 1100 provided by an embodiment of the present application.
  • the device 1100 includes a first communication unit 1110 and a determining unit 1120.
  • the apparatus 1100 may include means for performing the method in FIG. 6 or FIG. 7 . Moreover, each unit in the device 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes of the method embodiment in FIG. 6 or FIG. 7 .
  • the first communication unit 1110 can be used to execute S601 and S603 in the method 600
  • the determining unit 1120 can be used to execute S602 in the method 600.
  • the first communication unit 1110 is used to: receive the sensor information of the intelligent driving device sent by an intelligent driving device that needs to pass through the first area; the determining unit 1120 is used to: determine the first feature model according to the information of the sensor.
  • the first feature model is one of a plurality of feature models, different feature models among the plurality of feature models are used to perceive feature obstacles in different areas, and the first feature model corresponds to the first area;
  • the first communication unit 1110 is also configured to: send the information of the first feature model to the intelligent driving device, so that the intelligent driving device perceives the first set of obstacles in the first area according to the first feature model.
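A minimal sketch of this roadside-side selection, assuming a simple registry keyed by the reported sensor configuration of the area this roadside device serves; the registry layout, model identifiers, and file paths are illustrative assumptions rather than the patent's data structures.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class FeatureModelInfo:
    model_id: str
    artifact_path: str            # execution file and dependencies to deploy
    sensors: FrozenSet[str]       # sensor configuration the model expects

AREA_ID = "area-001"              # identifier of the area this unit serves

# Feature models maintained by the roadside device for its area.
MODEL_REGISTRY = {
    frozenset({"lidar", "camera"}): FeatureModelInfo(
        "area-001-lidar-camera-v3", "models/area001_lidar_camera_v3.bin",
        frozenset({"lidar", "camera"})),
    frozenset({"camera"}): FeatureModelInfo(
        "area-001-camera-v2", "models/area001_camera_v2.bin",
        frozenset({"camera"})),
}

def select_first_feature_model(reported_sensors) -> Optional[FeatureModelInfo]:
    """Return the first feature model matching the sensor configuration
    reported by the intelligent driving device (S706/S707), or None if the
    area has no model trained for that configuration."""
    return MODEL_REGISTRY.get(frozenset(reported_sensors))
```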
  • the device further includes a second communication unit, the second communication unit being configured to: before the first communication unit 1110 receives the sensor information sent by the intelligent driving device, send information of a perception level to the intelligent driving device, where the perception level indicates the recommended perception accuracy corresponding to the first area.
  • the first communication unit and the second communication unit may be the same communication unit, or they may be different communication units, which is not specifically limited in the embodiments of the present application.
  • the device further includes a third communication unit, the third communication unit is further configured to: send the identification information of the first area to the intelligent driving device.
  • the third communication unit and the second communication unit may be the same communication unit; or the third communication unit and the first communication unit may be the same communication unit; or the first communication unit, the second communication unit, and the third communication unit may all be the same communication unit, or they may be different communication units.
  • the above-mentioned first communication unit 1110 may include the deployment module 220 shown in FIG. 3; the above-mentioned determining unit 1120 may also include the deployment module 220 shown in FIG. 3; the above-mentioned second communication unit may include the broadcast module shown in FIG. 3 Module 210; the above-mentioned third communication unit may include the broadcast module 210 shown in Figure 3.
  • each operation performed by the first communication unit 1110 and the determination unit 1120 may be performed by the same processor, or may be performed by different processors, for example, by multiple processors.
  • the one or more processors mentioned above may be processors provided in the roadside device, for example, a processor provided in the computing platform of the roadside device.
  • the device 1100 may be a chip provided in a roadside device.
  • the above-mentioned device 1100 may be a computing platform provided in a roadside device.
  • Embodiments of the present application also provide a device, which includes a processing unit and a storage unit, where the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device performs the method performed in the above embodiments or step.
  • the division of units in the above device is only a division of logical functions.
  • the units may be fully or partially integrated into a physical entity, or may be physically separated.
  • the unit in the device can be implemented in the form of a processor calling software; for example, the device includes a processor, the processor is connected to a memory, instructions are stored in the memory, and the processor calls the instructions stored in the memory to implement any of the above methods.
  • the processor is, for example, a general-purpose processor, such as a CPU or a microprocessor
  • the memory is a memory within the device or a memory outside the device.
  • the units in the device can be implemented in the form of hardware circuits, and some or all of the functions of the units can be implemented through the design of the hardware circuits, which can be understood as one or more processors. For example, in one implementation, the hardware circuit is an ASIC, which realizes the functions of some or all of the above units through the design of the logical relationships of the components in the circuit; for another example, in another implementation, the hardware circuit can be implemented through a PLD, taking an FPGA as an example, which can include a large number of logic gate circuits whose connection relationships are configured through a configuration file, so as to realize the functions of some or all of the above units. All units of the above device may be fully realized by the processor calling software, or fully realized by hardware circuits, or partly realized by the processor calling software with the remaining part realized by hardware circuits.
  • the processor is a circuit with signal processing capabilities.
  • the processor may be a circuit with instruction reading and execution capabilities, such as a CPU, a microprocessor, a GPU, or DSP, etc.; in another implementation, the processor can realize certain functions through the logical relationship of the hardware circuit. The logical relationship of the hardware circuit is fixed or can be reconstructed.
  • the processor is a hardware circuit implemented by an ASIC or a PLD, for example, an FPGA.
  • the process of the processor loading the configuration file and realizing the hardware circuit configuration can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • it can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as NPU, TPU, DPU, etc.
  • each unit in the above device can be one or more processors (or processing circuits) configured to implement the above method, such as: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA , or a combination of at least two of these processor forms.
  • each unit in the above device may be integrated together in whole or in part, or may be implemented independently. In one implementation, these units are integrated together and implemented as a system-on-a-chip (SOC).
  • SOC may include at least one processor for implementing any of the above methods or implementing the functions of each unit of the device.
  • the at least one processor may be of different types, such as a CPU and an FPGA, a CPU and an artificial intelligence processor, or a CPU and a GPU, etc.
  • FIG. 12 is a schematic block diagram of a sensing device according to an embodiment of the present application.
  • the sensing device 1200 shown in FIG. 12 may include: a processor 1210, a transceiver 1220, and a memory 1230.
  • the processor 1210, the transceiver 1220 and the memory 1230 are connected through an internal connection path.
  • the memory 1230 is used to store instructions, and the processor 1210 is used to execute the instructions stored in the memory 1230, so that the transceiver 1220 receives/sends some parameters.
  • the memory 1230 can be coupled with the processor 1210 through an interface or integrated with the processor 1210 .
  • transceiver 1220 may include but is not limited to a transceiver device such as an input/output interface to realize communication between the device 1200 and other devices or communication networks.
  • the device 1200 may be provided in the roadside data processing center 200 shown in FIG. 3 , or in the cloud server 300 , or in the vehicle-side data processing center 400 .
  • the device 1200 can be provided in the intelligent driving device 100 shown in FIG. 2, and the processor 1210 can use a general-purpose CPU, microprocessor, ASIC, GPU or one or more integrated The circuit is used to execute relevant programs to implement the sensing method of the method embodiment of the present application.
  • the processor 1210 may also be an integrated circuit chip with signal processing capabilities.
  • each step of the sensing method of the present application can be completed by instructions in the form of hardware integrated logic circuits or software in the processor 1210 .
  • the above-mentioned processor 1210 can also be a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory 1230.
  • the processor 1210 reads the information in the memory 1230 and executes the sensing method of the method embodiment of the present application in conjunction with its hardware.
  • the memory 1230 may be a read only memory (ROM), a static storage device, a dynamic storage device or a random access memory (RAM).
  • the transceiver 1220 uses a transceiver device such as, but not limited to, a transceiver to implement communication between the device 1200 and other devices or communication networks. For example, when the device 1200 is installed in an intelligent driving device, the information of the first characteristic model can be received through the transceiver 1220; when the device 1200 is installed in a roadside device, the information of the first characteristic model can be sent through the transceiver 1220.
  • For another example, when the device 1200 is installed in an intelligent driving device, the identification information of the first area and the information of the first perception level can be sent through the transceiver 1220; when the device 1200 is installed in a cloud server, the identification information of the first area and the information of the first perception level can be received through the transceiver 1220.
  • Embodiments of the present application also provide a perception system, which includes the device 1000 shown in Figure 10 and the device 1100 shown in Figure 11; or, the perception system includes the above-mentioned intelligent driving equipment and roadside equipment.
  • the perception system may also include a server, the server being configured to: receive the identification information of the first area sent by an intelligent driving device that needs to pass through the first area; and send information of a second perception level to the intelligent driving device according to the identification information, where the second perception level indicates the current perception accuracy corresponding to the first area.
  • Embodiments of the present application also provide a computer program product.
  • the computer program product includes computer program code.
  • when the computer program code is run on a computer, it causes the computer to implement the method in the embodiments of the present application.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable medium stores computer instructions. When the computer instructions are run on a computer, they enable the computer to implement the method in the embodiments of the present application.
  • An embodiment of the present application also provides a chip, including a circuit, for executing the method in the embodiment of the present application.
  • each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor.
  • the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware processor for execution, or can be executed by a combination of hardware and software modules in the processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or power-on erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • the memory may include read-only memory (ROM) or random access memory (RAM), and provide instructions and data to the processor.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence or in part, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: U disk, mobile hard disk, ROM, RAM, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

本申请提供了一种感知方法、装置和系统。感知装置获取智能驾驶设备需通过的第一区域所对应的第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物。该感知装置根据该第一特征模型感知该第一区域中的第一障碍物集合。本申请的方法可以应用于智能车辆、电动车辆等自动驾驶车辆中,能够降低感知算法的感知能力不足导致的漏检、误检的情况,进而提高行驶安全。

Description

感知方法、装置和系统 技术领域
本申请涉及自动驾驶领域,并且更具体地,涉及感知方法、装置和系统。
背景技术
自动驾驶技术逐渐成为研究热点,然而不同区域的道路情况存在一定的差异性(例如,城市的道路很少出现野生动物,而山区道路可能出现野生动物),同样的感知算法在不同区域的感知能力可能存在很大的差异。这导致车辆在陌生的道路上行驶时,感知系统会出现感知偏差的情况(例如,在障碍物识别中出现误检或漏检的情况),这会降低自动驾驶系统的安全性。
因此,如何减少自动驾驶系统在不同区域道路上的感知偏差,以提高自动驾驶系统在不同区域的感知精度,进而提高车辆行驶安全,成为一个亟待解决的问题。
发明内容
本申请提供一种感知方法、装置和系统,能够降低智能驾驶设备在不同区域道路上的感知偏差,提高智能驾驶设备对不同区域中的特征障碍物的感知精度,进而提高行驶安全。
本申请涉及的智能驾驶设备可以包括路上交通工具、水上交通工具、空中交通工具、工业设备、农业设备、或娱乐设备等。例如智能驾驶设备可以为车辆,该车辆为广义概念上的车辆,可以是交通工具(如商用车、乘用车、摩托车、飞行车、火车等),工业车辆(如:叉车、挂车、牵引车等),工程车辆(如挖掘机、推土车、吊车等),农用设备(如割草机、收割机等),游乐设备,玩具车辆等,本申请对车辆的类型不作具体限定。再如,智能驾驶设备可以为飞机、或轮船等交通工具。
第一方面,提供了一种感知方法,该方法可以由智能驾驶设备执行;或者,也可以由用于智能驾驶设备的芯片或电路执行;或者,在智能驾驶设备为车辆时,也可以由车辆的移动数据中心(mobile data center,MDC),或者还可以由车辆的电子控制单元(electronic control unit,ECU),例如车载单元(on board unit,OBU)或车载信息服务盒子(telematics box,T-Box)内的控制部件来执行,本申请对此不作限定。
该方法包括:获取智能驾驶设备需通过的第一区域所对应的第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物;根据该第一特征模型感知该第一区域中的第一障碍物集合。
在一些可能的实现方式中,确定智能驾驶设备即将或已经到达第一区域时,获取该第一特征模型。
在上述技术方案中,在智能驾驶设备即将或已经在第一区域行驶时,通过获取第一区域对应的第一特征模型,能够识别到该第一区域的特征障碍物,有助于提高智能驾驶设备在该区域的感知精度。进一步地,智能驾驶设备行驶至不同区域时,可以通过获取与该区 域对应的特征模型,感知不同区域对应的不同的特征障碍物。因此,能够降低感知算法的感知能力不足导致的漏检、误检的情况,进而提高行驶安全。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据该智能驾驶设备的第二特征模型感知该第一区域中的第二障碍物集合;在该智能驾驶设备的第一预设范围内,根据该第一障碍物集合和该第二障碍物集合的并集控制该智能驾驶设备行驶;在该第一预设范围外且在该智能驾驶设备的第二预设范围内,根据该第一障碍物集合控制该智能驾驶设备行驶;以及在该第二预设范围外,根据第三障碍物集合控制该智能驾驶设备行驶,其中,该第三障碍物集合包括该第一障碍物集合和该第二障碍物集合共有的障碍物。
示例性地,第一特征模型是针对第一区域的数据进行训练得到的,该第一区域的数据包括该第一区域的特征障碍物的信息,因此基于该第一特征模型能够感知到该区域的特征障碍物。第二特征模型为智能驾驶设备自己的特征模型,该第二特征模型可以是根据该智能驾驶设备经常行驶的区域的数据进行训练得到的,或者也可以是基于一个或多个智能驾驶设备的历史行驶过程中获取的传感器数据进行训练得到的。应理解,上述该智能驾驶设备经常行驶的区域可能不包括第一区域。进一步地,基于该第二特征模型可以感知到智能驾驶设备经常行驶的道路中存在的障碍物;或者,基于该第二特征模型可以感知多条道路中共同存在的障碍物,例如小型汽车、自行车、摩托车等。应理解,该一个或多个智能驾驶设备历史行驶可能包括经过第一区域的行程,但是可能由于数据量不足,基于该第二特征模型无法准确感知该第一区域的特征障碍物。
因此,基于第一特征模型对第一区域的障碍物进行感知,能够提升对第一区域的特征障碍物感知的准确性。
示例性地,第一预设范围可以为与智能驾驶设备距离30米的范围,第二预设范围可以是与智能驾驶设备距离60米的范围,或者,上述第一预设范围和第二预设范围也可以为其他数值。
其中,在该智能驾驶设备的第一预设范围内,根据该第一障碍物集合和该第二障碍物集合的并集控制该智能驾驶设备行驶,可以为:根据该第一障碍物集合和该第二障碍物集合的并集中,在该第一预设范围内的障碍物,控制该智能驾驶设备行驶。在该第一预设范围外且在该智能驾驶设备的第二预设范围内,根据该第一障碍物集合控制该智能驾驶设备行驶,可以为:根据该第一障碍物集合中,在该第一预设范围外且在该第二预设范围内的障碍物控制该智能驾驶设备行驶。在该智能驾驶设备的第二预设范围外,根据该第三障碍物集合控制该智能驾驶设备行驶,可以为:根据该第三障碍物集合中,在该第二预设范围外的障碍物,控制该智能驾驶设备行驶。
在上述技术方案中,针对智能驾驶设备的不同预设范围,使用不同的障碍物集合对其行驶路径进行规划,例如,对距离智能驾驶设备较近的范围内,根据第一障碍物集合和第二障碍物集合的并集中的障碍物(可能数量较多)进行路径规划,同时对距离智能驾驶设备较远的区域,仅使用第一障碍物集合和第二障碍物集合共有的障碍物(可能数量较少)进行路径规划。由于路径规划过程中,智能驾驶设备可能处于行驶状态中,上述方法能够保证路径规划过程中,由远及近,考虑的障碍物是从零变为较少,进而变为较多的过渡过程,因而有助于提高规划算法的平滑度,并且能够减少不必要的算力浪费,有助于节约能耗。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:根据该第一障碍物集合和该第二障碍物集合确定第一感知等级,该第一感知等级指示该智能驾驶设备在通过该第一区域时的感知精度。
在上述技术方案中,通过评估智能驾驶设备在第一区域的感知等级,便于智能驾驶设备下次行驶至第一区域时,可以根据该第一感知等级确定进行障碍物识别所需的感知算法。
结合第一方面,在第一方面的某些实现方式中,该方法还包括:向服务器发送该第一区域的标识信息和该第一感知等级的信息,以使该服务器更新该第一区域对应的感知等级。
在上述技术方案中,向服务器发送该第一感知等级的信息,能够使得服务器更新该第一区域对应的感知等级,进而使得服务器能够为需通过第一区域的智能驾驶设备指示更为准确的感知精度,减少对第一区域的感知等级的误判造成的安全事故,有助于提高行驶安全。
结合第一方面,在第一方面的某些实现方式中,该获取智能驾驶设备所处第一区域所对应的第一特征模型,包括:接收该第一区域的路侧设备发送的该第一特征模型的信息,该路侧设备位于该第一区域的入口。
在上述技术方案中,智能驾驶设备从路侧设备处获取第一特征模型的信息,无需智能驾驶设备保存第一特征模型,有助于节省智能驾驶设备的存储空间。此外,该路侧设备部署在第一区域的入口,无需设置多个路侧设备以使其检测范围覆盖到整条道路,且无需部署感知类路侧设备,能够极大节省部署路侧设备所需成本。
结合第一方面,在第一方面的某些实现方式中,该接收该第一区域的路侧设备发送的该第一特征模型的信息之前,该方法还包括:向该路侧设备发送该智能驾驶设备的传感器的信息;该接收该第一区域的路侧设备发送的该第一特征模型的信息,包括:接收该路侧设备根据该传感器的信息发送的该第一特征模型的信息。
可选地,该传感器的信息可以包括传感器类型的信息;或者,该传感器的信息也可以为其他能够指示传感器类型的信息,一示例中,以智能驾驶设备为例,该传感器的信息可以包括车辆识别码(vehicle identification number,VIN),在一些可能的实现方式中,路侧设备可以根据VIN码确定车辆所使用的传感器的类型。
在上述技术方案中,接收路侧设备根据该传感器的信息发送的该第一特征模型的信息,使得该第一特征模型是根据智能驾驶设备的传感器的信息确定的,能够保证第一特征模型与智能驾驶设备的传感器数据的兼容性。
结合第一方面,在第一方面的某些实现方式中,该向该路侧设备发送该智能驾驶设备的传感器的信息,包括:接收该路侧设备发送的第三感知等级的信息,该第三感知等级指示该第一区域对应的建议感知精度;当第二感知等级的感知精度小于该第三感知等级的感知精度时,向该路侧设备发送该智能驾驶设备的传感器的信息,其中,该第二感知等级指示该第一区域对应的当前感知精度或该智能驾驶设备的默认感知精度。
在一些可能的实现方式中,第二感知等级指示第一区域对应的当前感知精度,第一区域对应的当前感知精度大于或等于第一区域对应的建议感知精度时,智能驾驶设备根据第二感知等级指示的感知精度确定特征模型,进而进行障碍物识别。
在上述技术方案中,在建议感知精度大于智能驾驶设备的默认感知精度或第一区域对应的当前感知精度时,能够根据从路侧设备处获取第一特征模型,进而进行障碍物感知, 有助于减少感知精度不足造成的误检、漏检,有助于提高行驶安全。
结合第一方面,在第一方面的某些实现方式中,当该第二感知等级指示该第一区域对应的当前感知精度时,该方法还包括:向服务器发送该第一区域的标识信息;接收该服务器根据该标识信息发送的该第二感知等级的信息。
在一些可能的实现方式中,服务器发送的第二感知等级,是根据在该第一区域行驶的多个智能驾驶设备上报的感知等级确定的,例如多个智能驾驶设备上报的感知等级中,确定使用最多的感知等级为该第二感知等级。
在上述技术方案中,能够根据从服务器处获取的感知等级的信息确定感知精度,进而确定是否需要从路侧设备处获取第一特征模型,有助于减少对感知等级的误判造成的安全事故,有助于提高行驶安全。
第二方面,提供了一种感知方法,该方法可以由路侧设备(road side unit,RSU)执行;或者,也可以由用于路侧设备的芯片或电路执行。本申请中,路侧设备可以与智能驾驶设备通信,该路侧设备可以包括数据处理所需的高性能计算单元等。在一些可能的实现方式中,路侧设备还可以包含摄像头,毫米波雷达,激光雷达等感知设备。
该方法包括:接收需通过第一区域的智能驾驶设备发送的该智能驾驶设备的传感器的信息;根据该传感器的信息确定第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物,该第一特征模型与该第一区域相对应;向该智能驾驶设备发送第一特征模型的信息,以使该智能驾驶设备根据该第一特征模型感知该第一区域中的第一障碍物集合。
上述技术方案中,通过向需要通过第一区域的智能驾驶设备,发送该第一区域对应的第一特征模型,有助于智能驾驶设备基于该第一特征模型,识别到该第一区域的特征障碍物,有助于提高智能驾驶设备在该区域的感知精度,能够降低感知算法的感知能力不足导致的漏检、误检的情况,进而提高行驶安全。
结合第二方面,在第二方面的某些实现方式中,该接收需通过第一区域的智能驾驶设备发送的该智能驾驶设备的传感器的信息之前,该方法还包括:向该智能驾驶设备发送感知等级的信息,该感知等级指示该第一区域对应的建议感知精度。
上述技术方案中,无需智能驾驶设备的请求,路侧设备可以直接向智能驾驶设备发送第三感知等级的信息,有助于节约信令。
结合第二方面,在第二方面的某些实现方式中,该方法还包括:向该智能驾驶设备发送该第一区域的标识信息。
上述技术方案中,向该智能驾驶设备发送该第一区域的标识信息,有助于使智能驾驶设备确定自己所处区域,进而判断是否需要获取第一特征模型,有助于提高行驶安全。
第三方面,提供了一种感知装置,该装置包括获取单元和第一处理单元,其中,该获取单元用于:获取智能驾驶设备需通过的第一区域所对应的第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物;该第一处理单元用于:根据该第一特征模型感知该第一区域中的第一障碍物集合。
结合第三方面,在第三方面的某些实现方式中,该装置还包括第二处理单元和第三处理单元,其中,该第二处理单元用于:根据该智能驾驶设备的第二特征模型感知该第一区域中的第二障碍物集合;该第三处理单元用于:在该智能驾驶设备的第一预设范围内,根 据该第一障碍物集合和该第二障碍物集合的并集控制该智能驾驶设备行驶;在该第一预设范围外且在该智能驾驶设备的第二预设范围内,根据该第一障碍物集合控制该智能驾驶设备行驶;以及在该第二预设范围外,根据第三障碍物集合控制该智能驾驶设备行驶,其中,该第三障碍物集合包括该第一障碍物集合和该第二障碍物集合共有的障碍物。
结合第三方面,在第三方面的某些实现方式中,该装置还包括第四处理单元,该第四处理单元用于:根据该第一障碍物集合和该第二障碍物集合确定第一感知等级,该第一感知等级指示该智能驾驶设备在通过该第一区域时的感知精度。
结合第三方面,在第三方面的某些实现方式中,该装置还包括第一通信单元,该第一通信单元用于:向服务器发送该第一区域的标识信息和该第一感知等级的信息,以使该服务器更新该第一区域对应的感知等级。
结合第三方面,在第三方面的某些实现方式中,该装置还包括第二通信单元,该获取单元用于:获取该第二通信单元接收的该第一区域的路侧设备发送的该第一特征模型的信息,该路侧设备位于该第一区域的入口。
结合第三方面,在第三方面的某些实现方式中,该第二通信单元还用于:向该路侧设备发送该智能驾驶设备的传感器的信息;接收该路侧设备根据该传感器的信息发送的该第一特征模型的信息。
结合第三方面,在第三方面的某些实现方式中,该第二通信单元还用于:接收该路侧设备发送的第三感知等级的信息,该第三感知等级指示该第一区域对应的建议感知精度;当第二感知等级的感知精度小于该第三感知等级的感知精度时,向该路侧设备发送该智能驾驶设备的传感器的信息,其中,该第二感知等级指示该第一区域对应的当前感知精度或该智能驾驶设备的默认感知精度。
结合第三方面,在第三方面的某些实现方式中,当该第二感知等级指示该第一区域对应的当前感知精度时,该第一通信单元还用于:向服务器发送该第一区域的标识信息;接收该服务器根据该标识信息发送的该第二感知等级的信息。
第四方面,提供了一种感知装置,该装置包括第一通信单元和确定单元,其中,该第一通信单元用于:接收需通过第一区域的智能驾驶设备发送的该智能驾驶设备的传感器的信息;该确定单元用于:根据该传感器的信息确定第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物,该第一特征模型与该第一区域相对应;该第一通信单元还用于:向该智能驾驶设备发送第一特征模型的信息,以使该智能驾驶设备根据该第一特征模型感知该第一区域中的第一障碍物集合。
结合第四方面,在第四方面的某些实现方式中,该装置还包括第二通信单元,该第二通信单元用于:在该第一通信单元接收该智能驾驶设备发送的该智能驾驶设备的传感器的信息之前,向该智能驾驶设备发送感知等级的信息,该感知等级指示该第一区域对应的建议感知精度。
结合第四方面,在第四方面的某些实现方式中,该装置还包括第三通信单元,该第三通信单元还用于:向该智能驾驶设备发送该第一区域的标识信息。
第五方面,提供了一种感知装置,该装置包括处理单元和存储单元,其中存储单元用于存储指令,处理单元执行存储单元所存储的指令,以使该装置执行第一方面或第二方面 中任一种可能实现方式中的方法。
可选地,上述处理单元可以包括至少一个处理器,上述存储单元可以是存储器,其中存储器可以是芯片内的存储单元(例如,寄存器、缓存等),也可以是智能驾驶设备内位于芯片外部的存储单元(例如,只读存储器、随机存取存储器等)。
第六方面,提供了一种智能驾驶设备,该智能驾驶设备包括上述第三方面中任一种实现方式中的装置。
结合第六方面,在第六方面的某些实现方式中,该智能驾驶设备为车辆。
第七方面,提供了一种路侧设备,该路侧设备包括上述第四方面中任一种实现方式中的装置。
第八方面,提供了一种感知系统,该系统包括第三方面中任一项所述的装置和第四方面中任一项所述的装置,或者,包括第六方面中任一项所述的智能驾驶设备和第七方面中任一项所述的路侧设备。
结合第八方面,在第八方面的某些实现方式中,该感知系统还可以包括服务器,该服务器用于:接收需通过第一区域的智能驾驶设备发送的该第一区域的标识信息;根据该标识信息向该智能驾驶设备发送第二感知等级的信息,该第二感知等级指示该第一区域对应的当前感知精度。
第九方面,提供了一种计算机程序产品,上述计算机程序产品包括:计算机程序代码,当上述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面或第二方面中任一种可能实现方式中的方法。
需要说明的是,上述计算机程序代码可以全部或部分存储在第一存储介质上,其中第一存储介质可以与处理器封装在一起的,也可以与处理器单独封装。
第十方面,提供了一种计算机可读介质,上述计算机可读介质存储有指令,当上述指令被处理器执行时,使得处理器实现上述第一方面或第二方面中任一种可能实现方式中的方法。
第十一方面,提供了一种芯片,该芯片包括电路,该电路用于执行上述第一方面或第二方面中任一种可能实现方式中的方法。
附图说明
图1是车辆感知障碍物的一种场景的示意图;
图2是本申请实施例提供的智能驾驶设备的功能性框图示意;
图3是本申请实施例提供的感知系统架构的示意性框图;
图4是本申请实施例提供的感知方法应用场景的示意图;
图5是本申请实施例提供的感知方法的示意性流程图;
图6是本申请实施例提供的感知方法的示意性流程图;
图7是本申请实施例提供的感知方法的示意性流程图;
图8是本申请实施例提供的障碍物集合的示意图;
图9是本申请实施例提供的根据预设范围确定障碍物的场景示意图;
图10是本申请实施例提供的感知装置的示意性框图;
图11是本申请实施例提供的感知装置的示意性框图;
图12是本申请实施例提供的感知装置的示意性框图。
具体实施方式
在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本申请实施例中采用诸如“第一”、“第二”的前缀词,仅仅为了区分不同的描述对象,对被描述对象的位置、顺序、优先级、数量或内容等没有限定作用。本申请实施例中对序数词等用于区分描述对象的前缀词的使用不对所描述对象构成限制,对所描述对象的陈述参见权利要求或实施例中上下文的描述,不应因为使用这种前缀词而构成多余的限制。
随着自动驾驶技术的发展,为了保证行驶安全,开发人员越来越重视车辆对其周围障碍物的感知精度。在当前技术背景下,车辆在行驶过程中,通过对车辆的感知系统获取的感知数据(或称传感器数据)进行处理,得到车辆周围的障碍物信息,进而根据障碍物信息规划车辆的行驶路径。为了提高车辆对道路信息的感知精度,可以结合路侧设备的感知数据和车辆的感知数据,确定车辆周围的障碍物信息。如图1所示,车辆行驶在道路上的过程中,车辆可以通过车辆与路边基础设施(vehicle to infrastructure,V2I)通信,与路侧设备(或称路边基础设备)进行信息交互。其中,路侧设备可以被设置在道路旁,其可以获取位于感知设备检测范围(如图1所示)内的感知结果。由于路侧设备的检测范围相对有限,为了将路侧设备的检测范围覆盖到整条道路,在该条道路上需要设置多个路侧设备。
对于上述技术方案,一方面,为了提高对车辆周围的障碍物的感知精度,需要依赖来自多个路侧设备的感知数据,这提高了自动驾驶技术推广及普及的成本;另一方面,车辆结合路侧设备的感知数据和车辆的感知数据确定障碍物信息时,使用的仍是车辆的感知算法,该感知算法一般是根据车辆经常行驶的区域的感知数据进行训练的,这会导致车辆在陌生的道路上行驶时,感知系统会出现感知偏差的情况。例如,经常在城市道路上行驶的车辆,行驶至山区道路时,由于感知算法未针对山区道路的数据进行训练过,可能导致感知算法在障碍物识别中出现误检或漏检的情况,如误检或漏检野生动物。对于前一种情况(即误检),可能导致后续对车辆路径规划时,错误的预测障碍物(例如被误检的野生动物)的运动速度或路径,进而导致规划的车辆的行驶路径不安全,例如,可能导致车辆与障碍物相撞;对于后一种情况(即漏检),可能会导致在对车辆路径规划时,完全忽略障碍物对车辆的影响,进而严重影响行车安全。
鉴于此,本申请实施例提供一种感知方法、装置及系统,智能驾驶设备(如车辆)行驶至某一区域时,可以获取该区域对应的特征模型(如下文中的第一特征模型),由于该特征模型是针对该区域的数据进行训练得到的,因此能够基于该特征模型感知到该区域的特征障碍物,能够减少智能驾驶设备在不同区域道路上的感知偏差,提高智能驾驶设备在不同区域的感知精度,进而提高行驶安全。下面将结合附图,对本申请实施例中的技术方 案进行描述。
图2是本申请实施例提供的智能驾驶设备100的一个功能框图示意。智能驾驶设备100可以包括感知系统120和计算平台150,其中,感知系统120可以包括感测关于智能驾驶设备100周边的环境的信息的若干种传感器。例如,感知系统120可以包括定位系统,定位系统可以是全球定位系统(global positioning system,GPS),也可以是北斗系统或者其他定位系统。感知系统120可以包括惯性测量单元(inertial measurement unit,IMU)、激光雷达、毫米波雷达、超声雷达以及摄像装置中的一种或者多种。
智能驾驶设备100的部分或所有功能可以由计算平台150控制。计算平台150可包括处理器151至15n(n为正整数),处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如中央处理单元(central processing unit,CPU)、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的硬件电路,例如现场可编辑逻辑门阵列(field programmable gate array,FPGA)。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,处理器还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(neural network processing unit,NPU)、张量处理单元(tensor processing unit,TPU)、深度学习处理单元(deep learning processing unit,DPU)等。此外,计算平台150还可以包括存储器,存储器用于存储指令,处理器151至15n中的部分或全部处理器可以调用存储器中的指令,以实现相应的功能。
示例性地,上述计算平台150可以包括:MDC、车辆域控制器(vehicle domain controller,VDC)、底盘域控制器(chassis domain controller,CDC)中的至少一个;或者还可以包括其他计算平台,例如车载应用服务(in-car application-server,ICAS)控制器,车身控制器(body domain controller,BDC),特殊装备系统(special equipment system,SAS),媒体图形单元(media graphics unit,MGU),车身超级核心(body super core,BSC),ADAS超级核心(ADAS super core)等,本申请对此不做限定。其中,ICAS可以包括如下至少一项:车辆控制服务器ICAS1、智能驾驶服务器ICAS2、智能座舱服务器ICAS3、信息娱乐服务器ICAS4。
智能驾驶设备100可以包括高级驾驶辅助系统(advanced driving assistant system,ADAS),ADAS利用感知系统120中的多种传感器(包括但不限于:激光雷达、毫米波雷达、摄像装置、超声波传感器、全球定位系统、惯性测量单元)从智能驾驶设备周围获取信息,并对获取的信息进行分析和处理,实现例如障碍物感知、目标识别、智能驾驶设备定位、路径规划、驾驶员监控/提醒等功能,从而提升智能驾驶设备的安全性、自动化程度和舒适度。
在介绍感知方法之前,首先介绍本申请实施例提供的感知方法所适用的系统架构。
图3示出了本申请实施例提供的感知系统架构的示意性框图。该感知系统包括路端数据处理中心200、云端服务器300和车端数据处理中心400。其中,路端数据处理中心200 可以为设置于路侧设备中的计算平台,或者也可以为用于控制路侧设备的服务器;车端数据处理中心400可以为设置于图2所示的智能驾驶设备100中的计算平台150,或者也可以为用于控制上述智能驾驶设备100的服务器。
具体地,路端数据处理中心200包括:广播模块210和部署模块220。其中,广播模块210用于向路侧设备的检测范围内的智能驾驶设备,发送路侧设备负责的区域的标识信息(例如,区域标识号),以及建议智能驾驶设备在该区域的感知精度(以下简称建议感知精度)的信息。部署模块220用于接收智能驾驶设备发送的传感器的信息,进而根据传感器的信息,向智能驾驶设备发送其负责区域的特征模型的信息,该特征模型用于感知该路侧设备负责的区域的特征障碍物。需要说明的是,本申请实施例涉及的某区域的“特征障碍物”可以为该区域特有的、不同于其他区域的障碍物,例如,山区的野生动物,或者穿着具有区域特色服装的人,或者该区域特有的交通工具等。
云端服务器300包括:收发模块310和查询模块320。其中,收发模块310可以接收智能驾驶设备发送的区域的标识信息,查询模块320根据区域的标识信息确定区域对应的感知精度,进而通过收发模块310将区域对应的感知精度的信息发送给智能驾驶设备。可选地,该区域对应的感知精度是由云端服务器300确定的、该区域中智能驾驶设备使用的最多的感知精度。
车端数据处理中心400包括:区域感知调度节点410、区域感知节点420、第一融合节点430、感知节点440、第二融合节点450、区域融合节点460、规控节点470。其中,区域感知调度节点410用于与路端数据处理中心200、云端服务器300进行信息交互,包括但不限于:接收广播模块210发送的区域的标识信息以及建议感知精度的信息;向部署模块220发送智能驾驶设备的传感器的信息;接收部署模块220发送的特征模型的信息,并将特征模型部署至区域感知节点420和第一融合节点430;向收发模块310发送区域的标识信息;接收收发模块310发送的区域对应的感知精度的信息。区域感知节点420用于:基于从路端数据处理中心200处接收的特征模型的信息,对来自于智能驾驶设备感知系统的传感器数据进行处理,得到智能驾驶设备周围的障碍物信息。在一些可能的实现方式中,区域感知节点420包括一个或多个子节点,每个子节点用于根据一种传感器数据确定障碍物信息(如障碍物的位置、类型等)。例如,区域感知节点420包括子节点421、子节点422和子节点423,其中,子节点421用于根据激光雷达数据确定障碍物信息,子节点422用于根据摄像装置数据确定障碍物信息,子节点423用于根据GPS数据确定障碍物信息。第一融合节点430用于对区域感知节点420处理得到的障碍物信息进行融合处理,得到第一障碍物集合。其中,上述融合处理可以理解为:根据上述区域感知节点420中各子节点确定的障碍物信息,进一步确定障碍物的类型及其在智能驾驶设备周围的位置。感知节点440用于:基于智能驾驶设备自己的特征模型,对来自于智能驾驶设备感知系统的传感器数据进行处理,得到智能驾驶设备周围的障碍物信息。在一些可能的实现方式中,感知节点440可以包括一个或多个子节点。第二融合节点450用于对感知节点440处理得到的障碍物信息进行融合处理,得到第二障碍物集合。区域融合节点460可以包括接收管理模块461、融合评估模块462和发送管理模块463,其中,接收管理模块461用于从第一融合节点430和第二融合节点450处接收障碍物集合的信息,并将其发送给融合评估模块462,融合评估模块462用于根据障碍物集合的信息确定新的障碍物集合,并将其发送给发送管 理模块463,发送管理模块463用于将该新的障碍物集合的信息发送至规控节点470。规控节点470根据新的障碍物集合规划智能驾驶设备的行驶路径,并生成控制信号,以控制智能驾驶设备行驶。
应理解,图3所示的系统架构仅为示例性说明,在具体实现过程中,上述系统可能包括更多或更少的模块或节点,并且可以根据实际情况对上述模块或节点进行删减或增加。
本申请涉及的感知精度可以理解为感知障碍物的准确度,感知精度越高,对障碍物误检、漏检的几率越小。
图3所示的感知系统中,路端数据处理中心200可以设置于路侧设备中,如图4所示,该路侧设备可以设置在其负责的区域(如第一区域)的入口,例如图4中所示的入口a、b、c中的至少一处。应理解,路侧设备与其检测范围内智能驾驶设备通信,向该智能驾驶设备发送“其负责的区域”的相关信息,例如,该区域的标识信息和/或建议感知精度的信息。
图5示出了本申请实施例提供的感知方法500的示意性流程图,该方法500可以由图2所示的智能驾驶设备100执行,或者也可以由图2中的计算平台150的某一个或多个处理器执行,或者还可以由图3所示的车端数据处理中心400执行。以下以智能驾驶设备执行为例进行说明,该方法500可以包括步骤S501和S502。
S501,获取智能驾驶设备需通过的第一区域所对应的第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物。
其中,区域的“特征障碍物”可以为该区域特有的、不同于其他区域的障碍物,例如,山区的野生动物,或者穿着具有区域特色服装的人等。应理解,第一区域所对应的第一特征模型用于识别第一区域中的特征障碍物。
示例性地,该第一区域可以是根据地理位置划分的;或者也可以是根据行政区域划分的;或者还可以是根据地理位置以及行政区域划分的,例如,将陕西省(行政区域)的秦岭自然保护区(地理位置)划分为第一区域;或者还可以是根据其他依据进行划分的。
示例性地,在获取第一特征模型时,智能驾驶设备可能已驶入第一区域,或者智能驾驶设备可能还未驶入第一区域,但是已驶入第一区域的路侧设备的检测范围。
可选地,智能驾驶设备可以从路侧设备处接收的第一特征模型的信息中,获取该第一特征模型;或者,智能驾驶设备可以根据路侧设备处接收的第一特征模型的信息,获取该第一特征模型。一示例中,路侧设备检测到该智能驾驶设备时,向该智能驾驶设备发送第一特征模型的信息。又一示例中,智能驾驶设备确定自己行驶至第一区域时,向路侧设备请求第一特征模型,路侧设备响应于智能驾驶设备的请求,向其发送第一特征模型的信息。
在一些可能的实现方式中,智能驾驶设备曾经在第一区域行驶过,则智能驾驶设备可能保存有第一区域所对应的第一特征模型,则智能驾驶设备确定自己行驶至第一区域时,在其存储的历史信息中查找到第一特征模型。
示例性地,智能驾驶设备确定自己行驶至第一区域的方式,可以包括但不限于:根据从感知系统获取的传感器信息(如GPS信号)确定自己行驶至第一区域;根据从第一区域的路侧设备处接收的第一区域的标识信息确定自己行驶至第一区域。其中,第一区域的标识信息可以为区域号,或者也可以为其他能够标识该第一区域的信息。
S502,根据该第一特征模型感知该第一区域中的第一障碍物集合。
示例性地,智能驾驶设备根据第一特征模型,对来自于感知系统的传感器数据进行处理,感知该第一区域的第一障碍物集合。应理解,第一障碍物集合可以包括该第一区域的特征障碍物。
进一步地,可以根据该第一障碍物集合为智能驾驶设备规划行驶路径,进而控制该智能驾驶设备行驶。
可选地,智能驾驶设备还可以根据第二特征模型感知该第一区域中的第二障碍物集合。其中,第二特征模型为智能驾驶设备自己的特征模型,该第二特征模型可以是根据智能驾驶设备历史行驶过程中获取的传感器数据进行训练得到的,通过该第二特征模型可以感知到智能驾驶设备经常行驶的道路中存在的障碍物。应理解,基于该第二特征模型无法感知或者无法准确感知该第一区域的特征障碍物。
可选地,可以根据第一障碍物集合和第二障碍物集合为智能驾驶设备规划行驶路径,进而控制智能驾驶设备行驶。本申请实施例提供的感知方法,在智能驾驶设备即将或已经在第一区域行驶时,通过获取第一区域对应的第一特征模型,能够识别到该第一区域的特征障碍物,有助于提高智能驾驶设备在该区域的感知精度。进一步地,使得智能驾驶设备行驶至不同区域时,可以感知不同区域对应的不同的特征障碍物。因此,能够降低感知算法的感知能力不足导致的漏检、误检的情况,进而提高行驶安全。
图6示出了本申请实施例提供的感知方法600的示意性流程图,该方法600可以由与图2所示的智能驾驶设备100进行信息交互的路侧设备执行,或者也可以由上述路侧设备的计算平台执行,或者还可以由图3所示的路端数据处理中心200执行。以下以路侧设备执行为例进行说明,该方法600可以包括步骤S601-S603。
S601,接收需通过第一区域的智能驾驶设备发送的该智能驾驶设备的传感器的信息。
示例性地,第一区域可以为方法500中的第一区域,其划分方法可以参考方法500中的描述,在此不再赘述。
示例性地,该传感器的信息可以包括传感器类型的信息,例如指示传感器为雷达,或者摄像装置的信息。或者,该传感器的信息也可以为其他能够指示传感器类型的信息,一示例中,以智能驾驶设备为例,该传感器的信息可以包括VIN码,在一些可能的实现方式中,路侧设备可以根据VIN码确定车辆所使用的传感器的类型。应理解,不同类型传感器输出的传感器数据的格式不同,进而对不同传感器的数据进行处理以得到感知结果(如障碍物集合)的过程中,可能需要不同的特征模型。
S602,根据该传感器的信息确定第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物,该第一特征模型与该第一区域相对应。
示例性地,该第一特征模型可以为方法500中的第一特征模型。
S603,向该智能驾驶设备发送第一特征模型的信息,以使该智能驾驶设备根据该第一特征模型感知该第一区域中的第一障碍物集合。
在一些可能的实现方式中,该第一特征模型的信息可以包含该第一特征模型,或者该第一特征模型的信息也可以为能够识别到或获取到该第一特征模型的信息。
本申请实施例提供的感知方法,通过向需要通过第一区域的智能驾驶设备,发送该第一区域对应的第一特征模型,有助于智能驾驶设备基于该第一特征模型,识别到该第一区 域的特征障碍物,有助于提高智能驾驶设备在该区域的感知精度,能够降低感知算法的感知能力不足导致的漏检、误检的情况,进而提高行驶安全。
以上结合图5至图6分别介绍了本申请实施例提供的感知方法,应用于智能驾驶设备和路侧设备时的方法流程。以下结合图7,对智能驾驶设备根据第一特征模型感知障碍物集合的具体流程进行举例说明。
图7示出了本申请实施例提供的感知方法700的示意性流程图,该方法700是对方法500和方法600的扩展。例如,该方法700可以与方法500、方法600并行执行。示例性地,该方法700中路侧设备执行的步骤可以由图3所示的路端数据处理中心200执行,该方法700中智能驾驶设备执行的步骤可以由图3所示的车端数据处理中心400执行,该方法700中服务器执行的步骤可以由图3所示的云端服务器300执行。应理解,图7示出的感知方法的步骤或操作仅是示例,本申请实施例还可以执行其他操作或者图7中的各个操作的变形。此外,图7中的各个步骤可以按照与图7呈现的不同的顺序来执行,并且有可能并非要执行图7中的全部操作。
该方法700可以包括:
S701,在路侧设备识别到智能驾驶设备时,向智能驾驶设备发送第一区域的标识信息和第三感知等级的信息。
示例性地,第一区域的标识信息可以为上述实施例中的第一区域的标识信息。例如,可以为区域号,或者也可以为其他能够标识该第一区域的信息。
可选地,路侧设备可以向智能驾驶设备分别发送第一区域的标识信息和第三感知等级的信息,该第三感知等级指示建议感知精度。其中,建议感知精度可以为:为保证行驶安全,建议智能驾驶设备在该区域感知障碍物的准确度。
在一些可能的实现方式中,该路侧设备设置在第一区域的入口。
S702,智能驾驶设备向服务器发送第一区域的标识信息。
S703,服务器根据第一区域的标识信息确定第二感知等级。
其中,第二感知等级指示第一区域的当前感知精度。
在一些可能的实现方式中,该第二感知等级可以为:服务器根据经过该第一区域的智能驾驶设备上报的感知等级的信息确定的。
S704,服务器向智能驾驶设备发送第二感知等级的信息。
在一些可能的实现方式中,可以跳过S703,直接执行S704,即服务器根据第一区域的标识信息向智能驾驶设备发送第二感知等级的信息。
S705,智能驾驶设备确定第二感知等级指示的感知精度小于第三感知等级指示的感知精度。
示例性地,第二感知等级可以指示第一区域的当前感知精度,或者还可以指示智能驾驶设备的默认感知精度。该默认感知精度可以为智能驾驶设备出厂时设定的,或者也可以为智能驾驶设备的用户自行设定的。
在一些可能的实现方式中,可以不执行S702至S704,执行S701后直接执行S705。即,智能驾驶设备未从服务器处获取到指示第一区域的感知精度的信息,则智能驾驶设备判断其默认感知精度是否小于第三感知等级指示的感知精度。
应理解,在第二感知等级指示的感知精度小于第三感知等级指示的感知精度时,继续 执行S706。
在一些可能的实现方式中,若第二感知等级指示的感知精度大于或等于第三感知等级指示的感知精度,则智能驾驶设备可以根据其当前使用的特征模型(如智能驾驶设备自己的特征模型,即下文中的第二特征模型),感知周围的障碍物;或者智能驾驶设备根据第二感知等级指示的感知精度,确定用于感知障碍物的特征模型。
S706,智能驾驶设备向路侧设备发送智能驾驶设备的传感器的信息。
示例性地,该传感器的信息可以为方法600中的传感器的信息。
S707,路侧设备根据传感器的信息确定与第一区域对应的第一特征模型。
在一些可能的实现方式中,路侧设备还确定与第一特征模型关联的执行文件及依赖等,并随第一特征模型发送给智能驾驶设备。
S708,路侧设备向智能驾驶设备发送第一特征模型的信息。
在一些可能的实现方式中,可以跳过S707,直接执行S708,即路侧设备根据传感器的信息向智能驾驶设备发送第一特征模型的信息。
S709,智能驾驶设备根据第一特征模型感知第一区域中的第一障碍物集合。
在一些可能的实现方式中,智能驾驶设备部署第一特征模型(例如,将第一特征模型部署至图3中的区域感知节点420中),并根据该第一特征模型对来自感知系统的传感器数据进行处理,得到第一障碍物集合。
在一些可能的实现方式中,S701中第三感知等级指示的建议感知精度还可以用于表征基于第一特征模型感知障碍物的准确度。
S710,智能驾驶设备根据智能驾驶设备的第二特征模型感知第一区域中的第二障碍物集合。
在一些可能的实现方式中,S705中涉及的智能驾驶设备的默认感知精度,还可以用于表征基于第二特征模型感知障碍物的准确度。
在一些可能的实现方式中,S709和S710也可以同时执行。
S711,智能驾驶设备根据第一障碍物集合和第二障碍物集合,控制智能驾驶设备行驶。
示例性地,使用匈牙利匹配算法对第一障碍物集合和第二障碍物集合中的障碍物进行匹配得到新的障碍物集合,进而根据新的障碍物集合对智能驾驶设备的行驶路径进行规划。
在一些可能的实现方式中,对两个障碍物集合进行匹配后得到的新的障碍物集合可以如图8所示。其中,框810为第一障碍物集合,包括障碍物5至障碍物10,框820为第二障碍物集合,包括障碍物1至障碍物7。障碍物集合A、B、C是对第一障碍物集合和第二障碍物集合进行匹配后得到的新的障碍物集合。具体地,障碍物集合A可以包括障碍物1至障碍物4,障碍物集合B可以包括障碍物5至障碍物7,障碍物集合C可以包括障碍物8至障碍物10。可以理解的是,障碍物集合B(或称第三障碍物集合)包括第一障碍物集合和第二障碍物集合共有的障碍物;障碍物集合A为第二障碍物集合中除去障碍物集合B中的障碍物,得到的障碍物集合;障碍物集合C为第一障碍物集合中除去障碍物集合B中的障碍物,得到的障碍物集合。
进一步地,将上述障碍物集合A、B、C的信息发送至规控模块(如规控节点470),以使其根据上述障碍物信息对智能驾驶设备的行驶路径进行规划。
在一些可能的实现方式中,针对智能驾驶设备的不同预设范围,使用不同的障碍物集 合对其行驶路径进行规划。例如,在智能驾驶设备的第一预设范围内,根据第一障碍物集合和第二障碍物集合的并集控制智能驾驶设备行驶;在第一预设范围外且在智能驾驶设备的第二预设范围内,根据第一障碍物集合控制智能驾驶设备行驶;以及在第二预设范围外,根据第三障碍物集合控制智能驾驶设备行驶。
本申请实施例涉及的“以内”可以包含本数,“以外”可以不包含本数。例如,智能驾驶设备的30米以内,包括与智能驾驶设备距离小于或等于30米的范围;智能驾驶设备的30米以外,包括与智能驾驶设备距离大于30米的范围。
以智能驾驶设备如图9中车辆910为例,在车辆910的第一预设范围920以内,可以根据障碍物集合A’、B’和C’进行路径规划。其中,A’为障碍物集合A中处于车辆910的第一预设范围920以内的障碍物构成的集合;B’为障碍物集合B中处于车辆910的第一预设范围920以内的障碍物构成的集合;C’为障碍物集合C中处于车辆910的第一预设范围920以内的障碍物构成的集合。该障碍物集合A’、B’和C’的并集也可以理解为:第一障碍物集合和第二障碍物集合的并集中,处于车辆910的第一预设范围920以内的障碍物构成的集合。
在车辆910的第一预设范围920以外,且在车辆910的第二预设范围930以内,可以根据障碍物集合B”和C”进行路径规划。其中,B”为障碍物集合B中处于车辆910的第一预设范围920以外,且处于车辆910的第二预设范围930以内的障碍物构成的集合;C”为障碍物集合C中处于车辆910的第一预设范围920以外,且处于车辆910的第二预设范围930以内的障碍物构成的集合。该障碍物集合B”和C”的并集也可以理解为:第一障碍物集合和第二障碍物集合的并集中,处于车辆910的第一预设范围920以外,且处于车辆910的第二预设范围930以内的障碍物构成的集合。
在车辆910的第二预设范围930以外,可以根据障碍物集合B”’进行路径规划。其中,B”’为障碍物集合B中处于车辆910的第二预设范围930以外的障碍物构成的集合。该障碍物集合B”’也可以理解为:第三障碍物集合中,处于车辆910的第二预设范围930以外的障碍物构成的集合。
示例性地,上述第一预设范围(或第一预设范围920)可以为距离智能驾驶设备(或车辆910)30米的范围,上述第二预设范围(或第二预设范围930)可以为距离智能驾驶设备(或车辆910)60米的范围。应理解,上述对预设范围的设定仅为示例性说明,在具体实现过程中,还可以采用其他预设范围,例如,可以使用最近路径车辆(closest in-path vehicle,CIPV)方法确定预设范围,在智能驾驶设备的不同方位,该预设范围的数值可以不同。
S712,智能驾驶设备根据第一障碍物集合和第二障碍物集合确定第一感知等级。
应理解,该第一感知等级指示智能驾驶设备在通过该第一区域时的感知精度。该智能驾驶设备在通过该第一区域时的感知精度,也可以理解为,智能驾驶设备在第一区域内的感知精度。
示例性地,智能驾驶设备根据障碍物集合A、B、C确定第一感知等级。具体地,智能驾驶设备确定累计匹配个数s、累计区域误检个数fd和累计区域漏检个数lf。其中,累计匹配个数s为一次或多次确定的障碍物集合B包括的障碍物的总数,累计区域误检个数fd为一次或多次确定的障碍物集合A包括的障碍物的总数,累计区域漏检个数lf为一次 或多次确定的障碍物集合C包括的障碍物的总数。进一步地,根据如下公式计算误检率a和准确率b:
Figure PCTCN2022116830-appb-000001
Figure PCTCN2022116830-appb-000002
进一步地,根据误检率a和准确率b确定第一感知等级。示例性地,感知等级与误检率、准确率之间的关系可以如表1所示。应理解,误检率越低,且准确率越高,则感知等级指示的感知精度越高。如表1所示,感知等级数值越高,代表的感知精度越高,即感知等级1指示的感知精度小于感知等级2指示的感知精度,以此类推。示例性地,在误检率a小于或等于感知等级1对应的误检率阈值,且误检率a大于感知等级2对应的误检率阈值,且准确率b大于或等于感知等级1对应的准确率阈值,且准确率b小于感知等级2对应的准确率阈值时,确定第一感知等级为该感知等级1。示例性地,表1中所示的a0至a3可以分别为30%、20%、10%、5%,b0至b3可以分别为80%、85%、90%、95%,或者a0至a3以及b0至b3也可以取其他数值。在一些可能的实现方式中,还可以结合感知算法感知障碍物所需时长c确定感知等级。示例性地,误检率a和准确率b分别满足感知等级1对应条件的情况下,在感知算法感知障碍物所需时长大于或等于感知等级1对应的时长阈值,且小于感知等级2对应的时长阈值时,确定第一感知等级为该感知等级1。示例性地,表1中所示的c0至c3可以分别为0.005小时、0.010小时、0.015小时、0.020小时,或者c0至c3也可以取其他数值。
表1感知等级与误检率和准确率对应表
感知等级 误检率阈值 准确率阈值 时长阈值
1 a0 b0 c0
2 a1 b1 c1
3 a2 b2 c2
4 a3 b3 c3
应理解,表1所示感知等级与误检率和准确率之间的关系仅为示例性说明,在具体实现过程中,可以划分更多或更少的感知等级。
在一些可能的实现方式中,智能驾驶设备可以保存该第一感知等级的信息,待下次行驶至第一区域时,可以根据该第一感知等级确定进行障碍物识别所需的感知算法。
S713,智能驾驶设备向服务器发送第一区域的标识信息和第一感知等级的信息。
应理解,智能驾驶设备向服务器发送该第一区域的标识信息和该第一感知等级的信息,以使服务器更新该第一区域对应的感知等级。
在一些可能的实现方式中,在第一感知等级与通过上述S704接收的第二感知等级不同时,智能驾驶设备向服务器发送该第一感知等级的信息。
在一些可能的实现方式中,也可以不执行S712和S713。
本申请实施例提供的感知方法,在智能驾驶设备即将或已经在第一区域行驶时,可以从路侧设备处获取第一区域对应的第一特征模型,根据该第一特征模型能够识别到该第一区域的特征障碍物,有助于提高智能驾驶设备在该区域的感知精度。进一步地,使得智能驾驶设备行驶至不同区域时,可以感知不同区域对应的不同的特征障碍物。因此,能够降 低感知算法的感知能力不足导致的漏检、误检的情况,进而提高行驶安全。本申请实施例提供的感知方法,针对智能驾驶设备的不同预设范围,还能够使用不同的障碍物集合对其行驶路径进行规划,有助于提高规划算法的平滑度,减少不必要的算力浪费,有助于节约能耗。此外,本申请实施例提供的感知方法的实施,使得进行路侧设备部署时,只需在区域的入口处部署路侧设备,无需设置多个路侧设备以使其检测范围覆盖到整条道路,且无需部署感知类路侧设备,能够极大节省部署路侧设备所需成本。此外,本申请实施例还能够评估智能驾驶设备在第一区域的感知等级,以使得服务器更新该第一区域对应的感知等级,进而减少对第一区域的感知等级的误判造成的安全事故,有助于提高行驶安全。
在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,各个实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
上文中结合图3至图9详细说明了本申请实施例提供的方法。下面将结合图10至图12详细说明本申请实施例提供的装置。应理解,装置实施例的描述与方法实施例的描述相互对应,因此,未详细描述的内容可以参见上文方法实施例,为了简洁,这里不再赘述。
图10示出了本申请实施例提供的感知装置1000的示意性框图,该装置1000包括获取单元1010和第一处理单元1020。
该装置1000可以包括用于执行图5或图7中的方法的单元。并且,该装置1000中的各单元和上述其他操作和/或功能分别为了实现图5或图7中的方法实施例的相应流程。
其中,当该装置1000用于执行图5中的方法500时,获取单元1010可用于执行方法500中的S501,第一处理单元1020可用于执行方法500中的S502。
具体地,该获取单元1010用于:获取智能驾驶设备需通过的第一区域所对应的第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物;该第一处理单元1020用于:根据该第一特征模型感知该第一区域中的第一障碍物集合。
可选地,该装置1000还包括第二处理单元和第三处理单元,其中,该第二处理单元用于:根据该智能驾驶设备的第二特征模型感知该第一区域中的第二障碍物集合;该第三处理单元用于:在该智能驾驶设备的第一预设范围内,根据该第一障碍物集合和该第二障碍物集合的并集控制该智能驾驶设备行驶;在该第一预设范围外且在该智能驾驶设备的第二预设范围内,根据该第一障碍物集合控制该智能驾驶设备行驶;以及在该第二预设范围外,根据第三障碍物集合控制该智能驾驶设备行驶,其中,该第三障碍物集合包括该第一障碍物集合和该第二障碍物集合共有的障碍物。
可选地,该装置还包括第四处理单元,该第四处理单元用于:根据该第一障碍物集合和该第二障碍物集合确定第一感知等级,该第一感知等级指示该智能驾驶设备在通过该第一区域时的感知精度。
可选地,该装置还包括第一通信单元,该第一通信单元用于:向服务器发送该第一区域的标识信息和该第一感知等级的信息,以使该服务器更新该第一区域对应的感知等级。
可选地,该装置还包括第二通信单元,该获取单元1010用于:获取该第二通信单元接收的该第一区域的路侧设备发送的该第一特征模型的信息,该路侧设备位于该第一区域的入口。
在一些可能的实现方式中,第一通信单元和第二通信单元可以是同一个通信单元,或者,也可以为不同的通信单元,本申请实施例对此不作具体限定。
可选地,该第二通信单元还用于:向该路侧设备发送该智能驾驶设备的传感器的信息;接收该路侧设备根据该传感器的信息发送的该第一特征模型的信息。
可选地,该第二通信单元还用于:接收该路侧设备发送的第三感知等级的信息,该第三感知等级指示该第一区域对应的建议感知精度;当第二感知等级的感知精度小于该第三感知等级的感知精度时,向该路侧设备发送该智能驾驶设备的传感器的信息,其中,该第二感知等级指示该第一区域对应的当前感知精度或该智能驾驶设备的默认感知精度。
可选地,当该第二感知等级指示该第一区域对应的当前感知精度时,该第一通信单元还用于:向服务器发送该第一区域的标识信息;接收该服务器根据该标识信息发送的该第二感知等级的信息。
示例性地,上述获取单元1010可以包括图3所示的区域感知调度节点410;上述第一处理单元1020可以包括图3所示的区域感知节点420和第一融合节点430;上述第二处理单元可以包括图3所示的感知节点440和第二融合节点450;上述第三处理单元可以包括图3所示的区域融合节点460,或者上述第三处理单元可以包括图3所示的区域融合节点460和规控节点470;上述第四处理单元可以包括图3所示的区域融合节点460;上述第一通信单元或第二通信单元可以包括图3所示的区域感知调度节点410。
在一些可能的实现方式中,上述获取单元1010和第一通信单元(和/或第二通信单元)可以为同一单元,或者,获取单元1010包括第一通信单元(和/或第二通信单元)。
在具体实现过程中,上述获取单元1010和第一处理单元1020所执行的各项操作可以由同一个处理器执行,或者,也可以由不同的处理器执行,例如分别由多个处理器执行。一示例中,一个或多个处理器可以与图2中的感知系统120中一个或多个传感器相连接,从一个或多个传感器中获取智能驾驶设备周围的障碍物类型及位置等信息,并根据第一特征模型和/或第二特征模型对上述信息进行处理得到障碍物集合(例如第一障碍物集合和/或第二障碍物集合)。进而,一个或多个处理器还可以根据上述障碍物集合规控智能驾驶设备的行驶路径。示例性地,在具体实现过程中,上述一个或多个处理器可以为设置在车机中的处理器,或者也可以为设置在其他车载终端中的处理器。一示例中,在具体实现过程中,上述装置1000可以为设置在车机或者其他车载终端中的芯片。又一示例中,在具体实现过程中,上述装置1000可以为设置在智能驾驶设备中如图2所示的计算平台150。
图11示出了本申请实施例提供的感知装置1100的示意性框图,该装置1100包括第一通信单元1110和确定单元1120。
该装置1100可以包括用于执行图6或图7中的方法的单元。并且,该装置1100中的各单元和上述其他操作和/或功能分别为了实现图6或图7中的方法实施例的相应流程。
其中,当该装置1100用于执行图6中的方法600时,第一通信单元1110可用于执行方法600中的S601和S603,确定单元1120可用于执行方法600中的S602。
具体地,该第一通信单元1110用于:接收需通过第一区域的智能驾驶设备发送的该智能驾驶设备的传感器的信息;该确定单元1120用于:根据该传感器的信息确定第一特征模型,该第一特征模型为多个特征模型之一,该多个特征模型中的不同的特征模型用于感知不同区域的特征障碍物,该第一特征模型与该第一区域相对应;该第一通信单元1110 还用于:向该智能驾驶设备发送第一特征模型的信息,以使该智能驾驶设备根据该第一特征模型感知该第一区域中的第一障碍物集合。
Optionally, the apparatus further includes a second communication unit, configured to send information about a perception level to the intelligent driving device before the first communication unit 1110 receives the sensor information sent by the intelligent driving device, where the perception level indicates the recommended perception accuracy corresponding to the first area.
In some possible implementations, the first communication unit and the second communication unit may be the same communication unit or different communication units, which is not specifically limited in the embodiments of this application.
Optionally, the apparatus further includes a third communication unit, configured to send the identification information of the first area to the intelligent driving device.
In some possible implementations, the third communication unit and the second communication unit may be the same communication unit; or the third communication unit and the first communication unit may be the same communication unit; or the first communication unit, the second communication unit, and the third communication unit may all be the same communication unit, or may be different communication units.
Exemplarily, the first communication unit 1110 may include the deployment module 220 shown in FIG. 3; the determining unit 1120 may also include the deployment module 220 shown in FIG. 3; the second communication unit may include the broadcast module 210 shown in FIG. 3; and the third communication unit may include the broadcast module 210 shown in FIG. 3.
In a specific implementation process, the operations performed by the first communication unit 1110 and the determining unit 1120 may be performed by the same processor, or may be performed by different processors, for example, by multiple processors respectively. In a specific implementation process, the one or more processors may be processors provided in the roadside device, for example, processors provided in the computing platform of the roadside device. In an example, in a specific implementation process, the apparatus 1100 may be a chip provided in the roadside device. In another example, in a specific implementation process, the apparatus 1100 may be a computing platform provided in the roadside device.
An embodiment of this application further provides an apparatus including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the apparatus performs the methods or steps performed in the foregoing embodiments.
It should be understood that the division of the units in the foregoing apparatus is merely a division of logical functions; in actual implementation, the units may be fully or partially integrated into one physical entity, or may be physically separated. In addition, the units in the apparatus may be implemented in the form of a processor invoking software; for example, the apparatus includes a processor, the processor is connected to a memory, the memory stores instructions, and the processor invokes the instructions stored in the memory to implement any one of the foregoing methods or to implement the functions of the units of the apparatus, where the processor is, for example, a general-purpose processor such as a CPU or a microprocessor, and the memory is a memory inside or outside the apparatus. Alternatively, the units in the apparatus may be implemented in the form of hardware circuits, and the functions of some or all of the units may be implemented by designing the hardware circuits, which may be understood as one or more processors; for example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the foregoing units are implemented by designing the logical relationships of the elements in the circuit; for another example, in another implementation, the hardware circuit may be implemented by a PLD; taking an FPGA as an example, it may include a large number of logic gate circuits, and the connection relationships between the logic gate circuits are configured through a configuration file, thereby implementing the functions of some or all of the foregoing units. All of the units of the foregoing apparatus may be implemented entirely in the form of a processor invoking software, or entirely in the form of hardware circuits, or partly in the form of a processor invoking software with the remainder implemented in the form of hardware circuits.
In the embodiments of this application, a processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction reading and execution capability, such as a CPU, a microprocessor, a GPU, or a DSP; in another implementation, the processor may implement a certain function through the logical relationships of hardware circuits, where the logical relationships of the hardware circuits are fixed or reconfigurable, for example, the processor is a hardware circuit implemented by an ASIC or a PLD, such as an FPGA. In a reconfigurable hardware circuit, the process in which the processor loads a configuration file to configure the hardware circuit may be understood as a process in which the processor loads instructions to implement the functions of some or all of the foregoing units. In addition, the processor may be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as an NPU, a TPU, or a DPU.
It can be seen that the units in the foregoing apparatus may be one or more processors (or processing circuits) configured to implement the foregoing methods, for example: a CPU, a GPU, an NPU, a TPU, a DPU, a microprocessor, a DSP, an ASIC, an FPGA, or a combination of at least two of these processor forms.
In addition, the units in the foregoing apparatus may be fully or partially integrated, or may be implemented independently. In one implementation, these units are integrated and implemented in the form of a system-on-a-chip (SOC). The SOC may include at least one processor configured to implement any one of the foregoing methods or to implement the functions of the units of the apparatus, and the types of the at least one processor may be different, for example, including a CPU and an FPGA, a CPU and an artificial intelligence processor, or a CPU and a GPU.
FIG. 12 is a schematic block diagram of a perception apparatus according to an embodiment of this application. The perception apparatus 1200 shown in FIG. 12 may include a processor 1210, a transceiver 1220, and a memory 1230. The processor 1210, the transceiver 1220, and the memory 1230 are connected through an internal connection path, the memory 1230 is configured to store instructions, and the processor 1210 is configured to execute the instructions stored in the memory 1230, so that the transceiver 1220 receives/sends some parameters. Optionally, the memory 1230 may be coupled to the processor 1210 through an interface, or may be integrated with the processor 1210.
It should be noted that the transceiver 1220 may include, but is not limited to, a transceiver apparatus such as an input/output interface, to implement communication between the apparatus 1200 and other devices or communication networks.
In some possible implementations, the apparatus 1200 may be provided in the roadside data processing center 200 shown in FIG. 3, or in the cloud server 300, or in the vehicle-side data processing center 400.
In some possible implementations, the apparatus 1200 may be provided in the intelligent driving device 100 shown in FIG. 2. In this case, the processor 1210 may be a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, configured to execute related programs to implement the perception method of the method embodiments of this application. The processor 1210 may alternatively be an integrated circuit chip with signal processing capability. In a specific implementation process, each step of the perception method of this application may be completed by an integrated logic circuit of hardware in the processor 1210 or by instructions in the form of software. The processor 1210 may alternatively be a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of this application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1230, and the processor 1210 reads the information in the memory 1230 and completes, in combination with its hardware, the perception method of the method embodiments of this application.
The memory 1230 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
The transceiver 1220 uses a transceiver apparatus such as, but not limited to, a transceiver to implement communication between the apparatus 1200 and other devices or communication networks. For example, when the apparatus 1200 is provided in the intelligent driving device, the information about the first feature model may be received through the transceiver 1220; when the apparatus 1200 is provided in the roadside device, the information about the first feature model may be sent through the transceiver 1220. For another example, when the apparatus 1200 is provided in the intelligent driving device, the identification information of the first area and the information about the first perception level may be sent through the transceiver 1220; when the apparatus 1200 is provided in the cloud server, the identification information of the first area and the information about the first perception level may be received through the transceiver 1220.
An embodiment of this application further provides a perception system, which includes the apparatus 1000 shown in FIG. 10 and the apparatus 1100 shown in FIG. 11; or the perception system includes the foregoing intelligent driving device and roadside device.
Optionally, the perception system may further include a server, configured to: receive the identification information of a first area sent by an intelligent driving device that needs to pass through the first area; and send information about a second perception level to the intelligent driving device according to the identification information, where the second perception level indicates the current perception accuracy corresponding to the first area.
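As a non-limiting illustration of the server behavior described above, the following sketch keeps the per-area perception level in a simple in-memory mapping; the class name, method names, and storage choice are assumptions made for this example and are not mandated by the embodiments.

```python
# Illustrative server-side sketch: report the current perception level of an
# area and accept updates from devices; names and storage are assumptions.
class PerceptionLevelRegistry:
    def __init__(self):
        self._levels = {}  # area_id -> current perception level

    def handle_level_query(self, area_id: str, default_level: int = 1) -> int:
        """Return the second perception level for the identified area."""
        return self._levels.get(area_id, default_level)

    def handle_level_report(self, area_id: str, first_level: int) -> None:
        """Update the area's perception level from a device's reported first level."""
        self._levels[area_id] = first_level
```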
An embodiment of this application further provides a computer program product, which includes computer program code; when the computer program code is run on a computer, the computer is caused to implement the methods in the embodiments of this application.
An embodiment of this application further provides a computer-readable storage medium, which stores computer instructions; when the computer instructions are run on a computer, the computer is caused to implement the methods in the embodiments of this application.
An embodiment of this application further provides a chip, including a circuit, configured to perform the methods in the embodiments of this application.
In the implementation process, each step of the foregoing methods may be completed by an integrated logic circuit of hardware in a processor or by instructions in the form of software. The methods disclosed with reference to the embodiments of this application may be directly embodied as being performed by a hardware processor, or performed by a combination of hardware and software modules in a processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the foregoing methods in combination with its hardware. To avoid repetition, details are not described here again.
It should be understood that in the embodiments of this application, the memory may include a read-only memory (ROM) and a random access memory (RAM), and provide instructions and data to the processor.
Reference to "an embodiment", "some embodiments", or the like described in this specification means that a specific feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Therefore, the statements "in an embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and variants thereof all mean "including but not limited to", unless otherwise specifically emphasized.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of this application.
A person skilled in the art may clearly understand that, for the convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part thereof, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any change or replacement readily conceived by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (29)

  1. A perception method, comprising:
    obtaining a first feature model corresponding to a first area through which an intelligent driving device needs to pass, wherein the first feature model is one of multiple feature models, and different feature models among the multiple feature models are used to perceive characteristic obstacles of different areas; and
    perceiving a first obstacle set in the first area according to the first feature model.
  2. The method according to claim 1, wherein the method further comprises:
    perceiving a second obstacle set in the first area according to a second feature model of the intelligent driving device;
    within a first preset range of the intelligent driving device, controlling driving of the intelligent driving device according to a union of the first obstacle set and the second obstacle set;
    outside the first preset range and within a second preset range of the intelligent driving device, controlling driving of the intelligent driving device according to the first obstacle set; and
    outside the second preset range, controlling driving of the intelligent driving device according to a third obstacle set, wherein the third obstacle set comprises obstacles shared by the first obstacle set and the second obstacle set.
  3. The method according to claim 2, wherein the method further comprises:
    determining a first perception level according to the first obstacle set and the second obstacle set, wherein the first perception level indicates perception accuracy of the intelligent driving device when passing through the first area.
  4. The method according to claim 3, wherein the method further comprises:
    sending identification information of the first area and information about the first perception level to a server, so that the server updates a perception level corresponding to the first area.
  5. The method according to any one of claims 1 to 4, wherein the obtaining a first feature model corresponding to a first area through which the intelligent driving device needs to pass comprises:
    receiving information about the first feature model sent by a roadside device of the first area, wherein the roadside device is located at an entrance of the first area.
  6. The method according to claim 5, wherein before the receiving information about the first feature model sent by the roadside device of the first area, the method further comprises:
    sending information about sensors of the intelligent driving device to the roadside device; and
    the receiving information about the first feature model sent by the roadside device of the first area comprises:
    receiving the information about the first feature model sent by the roadside device according to the information about the sensors.
  7. The method according to claim 6, wherein the sending information about sensors of the intelligent driving device to the roadside device comprises:
    receiving information about a third perception level sent by the roadside device, wherein the third perception level indicates recommended perception accuracy corresponding to the first area; and
    when perception accuracy of a second perception level is lower than the perception accuracy of the third perception level, sending the information about the sensors of the intelligent driving device to the roadside device, wherein the second perception level indicates current perception accuracy corresponding to the first area or default perception accuracy of the intelligent driving device.
  8. The method according to claim 7, wherein when the second perception level indicates the current perception accuracy corresponding to the first area, the method further comprises:
    sending identification information of the first area to a server; and
    receiving information about the second perception level sent by the server according to the identification information.
  9. A perception method, comprising:
    receiving information about sensors of an intelligent driving device that needs to pass through a first area, wherein the information is sent by the intelligent driving device;
    determining a first feature model according to the information about the sensors, wherein the first feature model is one of multiple feature models, different feature models among the multiple feature models are used to perceive characteristic obstacles of different areas, and the first feature model corresponds to the first area; and
    sending information about the first feature model to the intelligent driving device, so that the intelligent driving device perceives a first obstacle set in the first area according to the first feature model.
  10. The method according to claim 9, wherein before the receiving information about sensors of an intelligent driving device that needs to pass through a first area, the method further comprises:
    sending information about a perception level to the intelligent driving device, wherein the perception level indicates recommended perception accuracy corresponding to the first area.
  11. The method according to claim 9 or 10, wherein the method further comprises:
    sending identification information of the first area to the intelligent driving device.
  12. A perception apparatus, comprising an obtaining unit and a first processing unit, wherein
    the obtaining unit is configured to:
    obtain a first feature model corresponding to a first area through which an intelligent driving device needs to pass, wherein the first feature model is one of multiple feature models, and different feature models among the multiple feature models are used to perceive characteristic obstacles of different areas; and
    the first processing unit is configured to:
    perceive a first obstacle set in the first area according to the first feature model.
  13. The apparatus according to claim 12, wherein the apparatus further comprises a second processing unit and a third processing unit, wherein
    the second processing unit is configured to:
    perceive a second obstacle set in the first area according to a second feature model of the intelligent driving device; and
    the third processing unit is configured to:
    within a first preset range of the intelligent driving device, control driving of the intelligent driving device according to a union of the first obstacle set and the second obstacle set;
    outside the first preset range and within a second preset range of the intelligent driving device, control driving of the intelligent driving device according to the first obstacle set; and
    outside the second preset range, control driving of the intelligent driving device according to a third obstacle set, wherein the third obstacle set comprises obstacles shared by the first obstacle set and the second obstacle set.
  14. The apparatus according to claim 13, wherein the apparatus further comprises a fourth processing unit, and the fourth processing unit is configured to:
    determine a first perception level according to the first obstacle set and the second obstacle set, wherein the first perception level indicates perception accuracy of the intelligent driving device when passing through the first area.
  15. The apparatus according to claim 14, wherein the apparatus further comprises a first communication unit, and the first communication unit is configured to:
    send identification information of the first area and information about the first perception level to a server, so that the server updates a perception level corresponding to the first area.
  16. The apparatus according to any one of claims 12 to 15, wherein the apparatus further comprises a second communication unit, and the obtaining unit is configured to:
    obtain the information about the first feature model that is sent by a roadside device of the first area and received by the second communication unit, wherein the roadside device is located at an entrance of the first area.
  17. The apparatus according to claim 16, wherein the second communication unit is further configured to:
    send information about sensors of the intelligent driving device to the roadside device; and
    receive the information about the first feature model sent by the roadside device according to the information about the sensors.
  18. The apparatus according to claim 17, wherein the second communication unit is configured to:
    receive information about a third perception level sent by the roadside device, wherein the third perception level indicates recommended perception accuracy corresponding to the first area; and
    when perception accuracy of a second perception level is lower than the perception accuracy of the third perception level, send the information about the sensors of the intelligent driving device to the roadside device, wherein the second perception level indicates current perception accuracy corresponding to the first area or default perception accuracy of the intelligent driving device.
  19. The apparatus according to claim 18, wherein when the second perception level indicates the current perception accuracy corresponding to the first area, the first communication unit is further configured to:
    send identification information of the first area to the server; and
    receive information about the second perception level sent by the server according to the identification information.
  20. A perception apparatus, comprising a first communication unit and a determining unit, wherein
    the first communication unit is configured to:
    receive information about sensors of an intelligent driving device that needs to pass through a first area, wherein the information is sent by the intelligent driving device;
    the determining unit is configured to:
    determine a first feature model according to the information about the sensors, wherein the first feature model is one of multiple feature models, different feature models among the multiple feature models are used to perceive characteristic obstacles of different areas, and the first feature model corresponds to the first area; and
    the first communication unit is further configured to:
    send information about the first feature model to the intelligent driving device, so that the intelligent driving device perceives a first obstacle set in the first area according to the first feature model.
  21. The apparatus according to claim 20, wherein the apparatus further comprises a second communication unit, and the second communication unit is configured to:
    before the first communication unit receives the information about the sensors of the intelligent driving device sent by the intelligent driving device, send information about a perception level to the intelligent driving device, wherein the perception level indicates recommended perception accuracy corresponding to the first area.
  22. The apparatus according to claim 20 or 21, wherein the apparatus further comprises a third communication unit, and the third communication unit is configured to:
    send identification information of the first area to the intelligent driving device.
  23. A perception apparatus, comprising:
    a memory, configured to store a computer program; and
    a processor, configured to execute the computer program stored in the memory, so that the apparatus performs the method according to any one of claims 1 to 8.
  24. A perception apparatus, comprising:
    a memory, configured to store a computer program; and
    a processor, configured to execute the computer program stored in the memory, so that the apparatus performs the method according to any one of claims 9 to 11.
  25. An intelligent driving device, comprising the apparatus according to any one of claims 12 to 19 and 23.
  26. A roadside device, comprising the apparatus according to any one of claims 20 to 22 and 24.
  27. A perception system, comprising the intelligent driving device according to claim 25 and the roadside device according to claim 26.
  28. The system according to claim 27, wherein the system further comprises a server, and the server is configured to: receive identification information of a first area sent by an intelligent driving device that needs to pass through the first area; and
    send information about a second perception level to the intelligent driving device according to the identification information, wherein the second perception level indicates current perception accuracy corresponding to the first area.
  29. A chip, comprising a circuit, wherein the circuit is configured to perform the method according to any one of claims 1 to 11.
PCT/CN2022/116830 2022-09-02 2022-09-02 感知方法、装置和系统 WO2024045178A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/116830 WO2024045178A1 (zh) 2022-09-02 2022-09-02 Perception method, apparatus, and system

