CN113537287A - Multi-sensor information fusion method and device, storage medium and automatic driving system - Google Patents
- Publication number: CN113537287A (application CN202110655285.2A)
- Authority: CN (China)
- Prior art keywords: data, sensor, vehicle, target data, LCM
- Prior art date: 2021-06-11
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/25 — Electric digital data processing; Pattern recognition; Analysing; Fusion techniques
- G06T7/10 — Image analysis; Segmentation; Edge detection
- G06T7/80 — Image analysis; Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/10028 — Image acquisition modality; Range image; Depth image; 3D point clouds
- G06T2207/20221 — Special algorithmic details; Image combination; Image fusion; Image merging
Abstract
The invention discloses a multi-sensor information fusion method and device, a storage medium and an automatic driving system. The multi-sensor information fusion method comprises the following steps: subscribing the data of each sensor from each sensor data channel using the LCM protocol; processing the data of each sensor to determine the target data of each sensor; calibrating each sensor so as to unify the target data of all sensors in the same coordinate system; fusing the target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base to determine obstacle data; and sending the obstacle data to a vehicle decision module using the LCM protocol so as to generate vehicle control instructions. The method can therefore improve the accuracy of detecting surrounding targets, meet the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improve the driving safety of the user.
Description
Technical Field
The invention relates to the technical field of automatic driving, and in particular to a multi-sensor information fusion method for use during automatic driving of a vehicle, a computer-readable storage medium, a vehicle automatic driving system, and a multi-sensor information fusion device for use during automatic driving of a vehicle.
Background
When an autonomous vehicle drives on a highway, its high speed places stringent real-time requirements on the multi-sensor fusion system, and the data transmission schemes in common use at present cannot meet them. In addition, the large volume of sensor data in a multi-sensor fusion system occupies considerable bandwidth, which the related art struggles to accommodate. Furthermore, different sensors detect a given target obstacle differently over different ranges, and every sensor's detection performance degrades with distance; a 16-line lidar, for example, may lose target information as distance increases, and a single sensor may under-cluster or over-cluster an obstacle. As a result, the accuracy with which the vehicle detects surrounding targets is reduced and time delays arise, endangering the personal safety of the user.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. An object of the present invention is therefore to provide a multi-sensor information fusion method for automatic driving of a vehicle that can improve the accuracy of detecting surrounding targets, meet the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improve the driving safety of the user.
A second object of the invention is to propose a computer-readable storage medium.
A third object of the invention is to propose a vehicle automatic driving system.
A fourth object of the invention is to propose a multi-sensor information fusion device for use during automatic driving of a vehicle.
In order to achieve the above object, an embodiment of the first aspect of the present invention provides a multi-sensor information fusion method for use during automatic driving of a vehicle, which includes the following steps: subscribing the data of each sensor from each sensor data channel using the LCM (Lightweight Communications and Marshalling) protocol; processing the data of each sensor to determine the target data of each sensor; calibrating the sensors to unify the target data of the sensors in the same coordinate system; fusing the target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base to determine obstacle data; and sending the obstacle data to a vehicle decision module using the LCM protocol so as to generate a vehicle control instruction.
According to the multi-sensor information fusion method of the embodiment, a plurality of sensors are arranged on the vehicle. First, the data collected by each sensor are subscribed from each sensor data channel using the LCM protocol. The data are then processed to determine the target data of each sensor, after which each sensor is calibrated so that the target data of all sensors are unified in the same coordinate system. The target data of each sensor in the same coordinate system are fused based on a fusion criterion in a preset rule base to determine obstacle data, and finally the obstacle data are sent to a vehicle decision module using the LCM protocol to generate a vehicle control command. The method can therefore improve the accuracy of detecting surrounding targets, meet the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improve the driving safety of users.
In addition, the multi-sensor information fusion method in the automatic driving of the vehicle according to the above embodiment of the present invention may further have the following additional technical features:
according to one embodiment of the invention, subscribing the data of each sensor from each sensor data channel using the LCM protocol comprises one or more of the following: receiving GPS data via the LCM protocol; receiving the target data parsed by the ESR millimeter-wave radar via the LCM protocol; receiving 4-line lidar raw data via the LCM protocol; receiving 16-line lidar raw data via the LCM protocol; receiving vehicle state data via the LCM protocol; receiving the target data and lane line data of the camera via the LCM protocol; receiving the target data parsed by the angle radar via the LCM protocol; and receiving map data via the LCM protocol.
According to one embodiment of the invention, processing the data of the sensors comprises: parsing the raw data to obtain a point cloud, and segmenting, clustering, filtering and tracking the obtained point cloud to obtain obstacle-set target data; and parsing the received LCM data packets to obtain the obstacle target information.
According to one embodiment of the invention, calibrating the sensors includes: calibrating the target data of any one sensor to the vehicle body coordinate system, and calibrating the target data of the other sensors to that sensor's target data; when correlating the target data of different sensors, regarding the data acquired by each sensor at the same moment as data in the same time period; and then, through spatial synchronization, synchronizing the target data of the same obstacle detected by each sensor to the same position in three-dimensional space.
According to an embodiment of the present invention, fusing the target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base includes: judging and screening the target data of each sensor in the same coordinate system based on the fusion criterion in the preset rule base, and putting the single-sensor data meeting the fusion criterion into a container, so that the data stored in the container serve as the obstacle data.
To achieve the above object, a second embodiment of the present invention provides a computer-readable storage medium, on which a multi-sensor information fusion program for vehicle autonomous driving is stored, which, when executed by a processor, implements the multi-sensor information fusion method for vehicle autonomous driving according to the above embodiment.
Through the multi-sensor information fusion program stored on it, the computer-readable storage medium of the embodiment of the invention can improve the accuracy of detecting surrounding targets during automatic driving of the vehicle, meet the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improve the driving safety of users.
In order to achieve the above object, a third aspect of the present invention provides a vehicle automatic driving system, which includes a memory, a processor, and a vehicle automatic driving multi-sensor information fusion program stored in the memory and operable on the processor, wherein the processor implements the vehicle automatic driving multi-sensor information fusion method according to the above embodiment when executing the vehicle automatic driving multi-sensor information fusion program.
The vehicle automatic driving system of the embodiment of the invention comprises the memory and the processor; by executing the multi-sensor information fusion program stored in the memory, the processor can improve the accuracy of detecting surrounding targets, meet the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improve the driving safety of the user.
In order to achieve the above object, a fourth aspect of the present invention provides a multi-sensor information fusion device for use during automatic driving of a vehicle, including: a data receiving module that subscribes the data of each sensor from each sensor data channel using the LCM protocol; a data processing module that processes the data of each sensor and determines the target data of each sensor; a calibration module that calibrates each sensor so as to unify the target data of all sensors in the same coordinate system; a fusion module that fuses the target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base to determine obstacle data; and a data transmission module that transmits the obstacle data to a vehicle decision module using the LCM protocol to generate a vehicle control instruction.
With the multi-sensor information fusion device of the embodiment of the invention, a plurality of sensors are arranged on the vehicle. The data receiving module subscribes the data of each sensor from the data channels of the sensors on the vehicle using the LCM protocol; the data processing module processes the received data to determine the target data of each sensor; the calibration module calibrates each sensor so that the target data of all sensors are unified in the same coordinate system; the fusion module fuses the target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base to determine obstacle data; and the data transmission module finally transmits the obstacle data to the vehicle decision module according to the LCM protocol to generate a vehicle control command. The device can therefore improve the accuracy of detecting surrounding targets, meet the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improve the driving safety of users.
In addition, the multi-sensor information fusion device for vehicle automatic driving according to the above embodiment of the present invention may further have the following additional technical features:
according to an embodiment of the present invention, the data receiving module is specifically configured to receive GPS data via the LCM protocol; receive the target data parsed by the ESR millimeter-wave radar via the LCM protocol; receive 4-line lidar raw data via the LCM protocol; receive 16-line lidar raw data via the LCM protocol; receive vehicle state data via the LCM protocol; receive the target data and lane line data of the camera via the LCM protocol; receive the target data parsed by the angle radar via the LCM protocol; and receive map data via the LCM protocol.
According to an embodiment of the present invention, the calibration module is specifically configured to calibrate the target data of any one sensor to the vehicle body coordinate system and calibrate the target data of the other sensors to that sensor's target data; to regard the data acquired by each sensor at the same moment as data in the same time period when correlating the target data of different sensors; and then, through spatial synchronization, to synchronize the target data of the same obstacle detected by each sensor to the same position in three-dimensional space.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram of a method for multi-sensor information fusion in vehicle autonomous driving according to one embodiment of the invention;
FIG. 2 is a schematic flow chart of processing data from a sensor according to an embodiment of the present invention;
FIG. 3 is a diagram of the fusion criterion rule base according to an embodiment of the present invention;
FIG. 4 is a block diagram of the configuration of the automatic driving system of the vehicle according to the embodiment of the invention;
fig. 5 is a block diagram showing the configuration of the multi-sensor information fusion device in the automatic driving of the vehicle according to the embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a multi-sensor information fusion method and device, a computer-readable storage medium, and a vehicle automatic driving system in vehicle automatic driving according to an embodiment of the present invention with reference to the accompanying drawings.
FIG. 1 is a flow chart of a multi-sensor information fusion method during automatic driving of a vehicle according to an embodiment of the invention.
As shown in fig. 1, the multi-sensor information fusion method in the automatic driving of the vehicle according to the embodiment of the present invention includes the following steps:
S10, subscribing the data of each sensor from each sensor data channel using the LCM protocol.
First, it should be noted that, during automatic driving, the vehicle needs to acquire data about its surroundings at all times in order to obtain an accurate and stable picture of the motion state of each obstacle around the autonomous vehicle. In this embodiment, various sensors may be provided on the vehicle for acquiring data about the vehicle surroundings; the sensors that may be provided include, but are not limited to, a camera, millimeter-wave radar, and lidar.
Specifically, the present embodiment subscribes the data of the sensors arranged on the vehicle from the various sensor data channels through the LCM protocol. It should be noted that LCM is a set of libraries and tools for message passing and data marshalling, whose objective is to provide a publish/subscribe message-passing model for real-time systems that need high bandwidth and low latency.
In some embodiments of the invention, subscribing the data of each sensor from each sensor data channel using the LCM protocol may include one or more of: receiving GPS (Global Positioning System) data via the LCM protocol; receiving the target data parsed by the ESR millimeter-wave radar via the LCM protocol; receiving 4-line lidar raw data via the LCM protocol; receiving 16-line lidar raw data via the LCM protocol; receiving vehicle state data via the LCM protocol; receiving the target data and lane line data of the camera via the LCM protocol; receiving the target data parsed by the angle radar via the LCM protocol; and receiving map data via the LCM protocol.
It is understood that each kind of data received via the LCM protocol relates to the automatic driving of the vehicle; this may include GPS data for locating the vehicle's position, lidar data for analyzing surrounding obstacles, and the like. Data from other sensors provided on the vehicle may of course also be received, which is not limited here. It is understood that the more kinds of vehicle data are received, the clearer the acquired obstacle data and states become.
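By way of illustration only, this subscription step could be sketched with the open-source LCM library's Python bindings as follows; the channel names and the handler body are assumptions for illustration and are not fixed by the disclosure:

```python
import lcm

def on_sensor_data(channel, data):
    # 'data' is the raw encoded payload; a deployed system would decode it
    # with the lcm-gen generated message type for the given sensor.
    print(f"received {len(data)} bytes on channel {channel}")

lc = lcm.LCM()  # default UDP-multicast provider
# Hypothetical channel names -- one subscription per sensor data channel.
for channel in ("GPS", "ESR_RADAR", "LIDAR4_RAW", "LIDAR16_RAW", "CAMERA_TARGETS"):
    lc.subscribe(channel, on_sensor_data)

while True:
    lc.handle()  # blocks until a message arrives on a subscribed channel
```

LCM's UDP-multicast transport is what gives its publish/subscribe model low latency on a local vehicle network, which matches the real-time motivation stated above.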
S20, processing the data of each sensor to determine the target data of each sensor.
After the data of each sensor are received through the LCM protocol, they may be processed to obtain the target data of each sensor. It should be noted that different sensors may call for correspondingly different data processing methods.
In some embodiments of the invention, processing the data of each sensor comprises: parsing the raw data to obtain a point cloud, and segmenting, clustering, filtering and tracking the obtained point cloud to obtain obstacle-set target data; and parsing the received LCM data packets to obtain the obstacle target information.
Specifically, the processing of 16-line lidar raw data is taken as an example; this raw data is handled by a 16-line lidar data processing module. In a specific embodiment, this module can receive the raw data of two 16-line lidars (left and right), parse the received raw data into a point cloud, and segment, cluster, filter and track the resulting point cloud to obtain a stable obstacle set. Referring to fig. 2, the module first determines whether its real-time mode flag is true; if so, it parses the timestamp and builds the basic raster image, and if not, it builds the basic raster image after processing the playback-mode data. Once the basic raster image is obtained, the module determines whether a calibration option needs to be selected and, if so, performs calibration; it then determines whether the vehicle information display flag is true (if no calibration is needed, this is determined directly). When the vehicle information display flag is true, information such as vehicle speed and azimuth angle is displayed on the multimedia interface, obstacles are detected by means of a grid map, tracked, and drawn as required, and the process ends.
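As a minimal sketch of the grid-map obstacle detection step described above (the cell size, occupancy threshold and connected-component clustering are illustrative assumptions, not the patented processing chain):

```python
import numpy as np
from scipy import ndimage

def grid_obstacles(points, cell=0.2, min_pts=3):
    """Project a point cloud (N x 3, vehicle frame) onto a 2D occupancy
    grid and group occupied cells into obstacle candidates."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = ((xy - origin) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)   # points per cell
    occupied = grid >= min_pts                   # drop sparse cells as noise
    labels, n = ndimage.label(occupied)          # connected-component clustering
    centers = []
    for k in range(1, n + 1):
        cells = np.argwhere(labels == k)
        centers.append(cells.mean(axis=0) * cell + origin)  # centroid in metres
    return centers
```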
In some embodiments, the main functions of the ESR millimeter-wave radar data processing module are to acquire the ESR millimeter-wave radar raw data through a CAN (Controller Area Network) card, send them to the local network through LCM, and parse and display the received LCM data packets. It should be noted that before LCM receives the local-network ESR data, the device may be initialized, including initialization of the form controls and of each receiving and processing thread. During device initialization, communication and data transmission take place over the CAN network; the CAN interface functions include starting, connecting, opening and closing the CAN card, reading data, and the like.
Since the 4-line lidar, the camera and the angle millimeter-wave radar all output obstacle target information containing noise-point data, they are collectively referred to in this embodiment as the other data processing modules; each can output the position and speed of an obstacle, and their data are processed similarly. After preliminary target data are obtained, the data need to be filtered. During data association, obstacles associated more than 6 consecutive times are regarded as potentially stable obstacles, and the remaining obstacles that have not reached this association count are filtered out; when a potentially stable obstacle is lost, tracking prediction is performed and association is carried out again, and stable obstacle information is finally obtained.
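A minimal sketch of this association-count filtering, under the assumption of a simple one-step constant-velocity prediction for lost obstacles (the track structure is an illustration, not the disclosed implementation):

```python
from dataclasses import dataclass

STABLE_AFTER = 6  # consecutive associations before a track counts as stable

@dataclass
class Track:
    position: tuple   # (x, y) in the vehicle frame
    velocity: tuple   # (vx, vy) per update cycle
    hits: int = 0     # consecutive successful associations
    stable: bool = False

def update_track(track, detection):
    """Associate one detection with a track, or predict when it is lost."""
    if detection is not None:
        track.position, track.velocity = detection
        track.hits += 1
        track.stable = track.stable or track.hits > STABLE_AFTER
    elif track.stable:
        # A stable obstacle that is temporarily lost is predicted forward
        # one constant-velocity step so it can be re-associated later.
        x, y = track.position
        vx, vy = track.velocity
        track.position = (x + vx, y + vy)
    else:
        track.hits = 0  # unstable tracks that miss are filtered out
    return track
```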
S30, calibrating each sensor to unify the target data of each sensor in the same coordinate system.
Specifically, in order to obtain high-accuracy data after fusing the data acquired by the autonomous vehicle, the sensors must be calibrated before the fusion, so that the data detected by the sensors have temporal and/or spatial consistency.
In one embodiment of the present invention, calibrating each sensor may include: calibrating the target data of any one sensor to the vehicle body coordinate system, and calibrating the target data of the other sensors to that sensor's target data; when correlating the target data of different sensors, regarding the data acquired by each sensor at the same moment as data in the same time period; and then, through spatial synchronization, synchronizing the target data of the same obstacle detected by each sensor to the same position in three-dimensional space.
Specifically, in the multi-sensor joint calibration process of this embodiment, the data of one sensor may be calibrated to the vehicle body coordinate system and the data of the other sensors calibrated to that sensor's data; when the target data of different sensors are correlated, the data acquired by each sensor at the same moment may be regarded as data in the same time period, and the same obstacle data detected by each sensor are then synchronized to the same position in three-dimensional space through spatial synchronization, thereby implementing the joint calibration. For example, the obstacle data detected by the 4-line lidar are translated into a vehicle body coordinate system centered on the rear axle of the vehicle, with the positive x-axis pointing right, the positive y-axis pointing forward, and the positive z-axis pointing up. The obstacle data detected by the ESR millimeter-wave radar and by the 16-line lidar are then calibrated in turn to the obstacle data detected by the 4-line lidar; in this process, calibration of the obstacle data in the x and y directions is mainly completed.
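The unification into the rear-axle body frame amounts to a rigid transform per sensor. In the sketch below, the extrinsic yaw and lever-arm values are placeholders for illustration, not calibration results from the disclosure:

```python
import numpy as np

def to_body_frame(points, yaw_rad, offset):
    """Rotate and translate sensor-frame points (N x 2, x right / y forward)
    into the rear-axle-centered vehicle body coordinate system."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s],
                  [s,  c]])
    return points @ R.T + np.asarray(offset)

# Placeholder extrinsics: sensor yawed 2 degrees, mounted 1.5 m ahead of the
# rear axle and 0.3 m left of the centerline (values are illustrative only).
lidar_targets = np.array([[1.0, 20.0], [-2.0, 35.0]])
body_targets = to_body_frame(lidar_targets, np.deg2rad(2.0), (-0.3, 1.5))
```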
S40, fusing the target data of each sensor in the same coordinate system based on the fusion criterion in the preset rule base to determine the obstacle data.
Specifically, after the calibration of each sensor is completed, the target data of the sensors may be fused; in this embodiment, the target data may be fused based on a fusion criterion in a preset rule base so as to determine the obstacle data. Compared with a single-sensor system, the data fusion system extends the spatial detection range, extends the temporal detection range, and enhances the reliability of the system; compared with an uncalibrated multi-sensor system, it guarantees the spatio-temporal consistency of the data and improves the reliability of the obstacle data.
In some embodiments of the present invention, fusing the target data of each sensor in the same coordinate system based on the fusion criterion in the preset rule base includes: judging and screening the target data of each sensor in the same coordinate system based on the fusion criterion in the preset rule base, and putting the single-sensor data meeting the fusion criterion into a container so that the data stored in the container serve as the obstacle data. Optionally, single-sensor data that do not meet the fusion criterion are discarded, or are verified a second time to confirm either that they indeed do not meet the criterion or that the first judgment was made in error.
More specifically, referring to fig. 3, the fusion criterion for the millimeter-wave radar is as follows. In the ego lane, once a dynamic millimeter-wave obstacle obtains a width, it is matched longitudinally with a camera obstacle: when the longitudinal difference is less than 7 meters and the lateral difference is less than 2.3 meters, the width of the camera obstacle is assigned to the dynamic millimeter-wave obstacle. When the longitudinal distance of a dynamic millimeter-wave obstacle is less than 80 meters, and it fuses with the 4-line lidar with a longitudinal difference of less than 3.5 meters and a lateral difference of less than 1.6 meters, and the 4-line lidar detection is judged to be in the lane, the dynamic millimeter-wave obstacle is added directly to the fusion vector; when its longitudinal distance is greater than 80 meters, it is added directly to the fusion vector. In the left and right adjacent lanes, the width of a millimeter-wave obstacle that matches a camera obstacle within a longitudinal difference of 7 meters and a lateral difference of 2.3 meters is set to the width of the camera obstacle, and dynamic millimeter-wave obstacles beyond 70 meters are added directly to the fusion vector.
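The ego-lane portion of this criterion can be sketched as a rule check; the obstacle field names and the fusion-vector container below are assumptions for illustration:

```python
def fuse_ego_lane(mmw, camera=None, lidar4=None, fusion_vector=None):
    """Apply the ego-lane millimeter-wave rules described above.
    x = lateral offset (m), y = longitudinal distance (m)."""
    fusion_vector = [] if fusion_vector is None else fusion_vector
    # Width borrowing: a close longitudinal/lateral match with a camera
    # obstacle gives the millimeter-wave obstacle the camera width.
    if camera and abs(mmw["y"] - camera["y"]) < 7.0 and abs(mmw["x"] - camera["x"]) < 2.3:
        mmw["width"] = camera["width"]
    if mmw["y"] < 80.0:
        # Within 80 m, require an in-lane 4-line lidar detection that agrees
        # within 3.5 m longitudinally and 1.6 m laterally.
        if lidar4 and lidar4["in_lane"] \
                and abs(mmw["y"] - lidar4["y"]) < 3.5 \
                and abs(mmw["x"] - lidar4["x"]) < 1.6:
            fusion_vector.append(mmw)
    else:
        # Beyond 80 m the dynamic millimeter-wave obstacle is added directly.
        fusion_vector.append(mmw)
    return fusion_vector
```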
S50, sending the obstacle data to the vehicle decision module using the LCM protocol to generate a vehicle control command.
Specifically, the fused obstacle data are sent to the vehicle decision module using the LCM protocol so as to generate a vehicle control instruction that controls the vehicle; the vehicle can thus accurately avoid obstacles during automatic driving, ensuring the safety of drivers and passengers.
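This transmission step mirrors the subscription sketch above; the channel name and the raw byte payload are again assumptions, since a deployed system would encode the obstacle list with an lcm-gen generated message type:

```python
import lcm

lc = lcm.LCM()
# A real system would serialize the fused obstacle list with an lcm-gen
# generated message type; a raw byte payload stands in for it here.
payload = b"fused-obstacle-list"
lc.publish("FUSION_OBSTACLES", payload)  # hypothetical decision-module channel
```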
In conclusion, the multi-sensor information fusion method during automatic driving of the vehicle provided by the embodiment of the invention can improve the detection accuracy of surrounding targets, meet the requirements of real-time performance and accuracy of an environment sensing system for automatic driving of the vehicle under a high-speed driving condition, and improve the driving safety of a user.
Further, the present invention proposes a computer-readable storage medium on which a multi-sensor information fusion program at the time of vehicle autonomous driving is stored, which when executed by a processor implements the multi-sensor information fusion method at the time of vehicle autonomous driving as in the above-described embodiment.
By executing, through the processor, the multi-sensor information fusion program stored on it, the computer-readable storage medium of the embodiment of the invention can implement the multi-sensor information fusion method of the above embodiment, thereby improving the accuracy of detecting surrounding targets, meeting the real-time and accuracy requirements that high-speed driving places on the environment perception system of an autonomous vehicle, and improving the driving safety of the user.
Fig. 4 is a block diagram of the configuration of the automatic driving system of the vehicle according to the embodiment of the present invention.
Further, as shown in fig. 4, the present invention provides a vehicle automatic driving system 10, where the driving system 10 includes a memory 11, a processor 12, and a multi-sensor information fusion program stored in the memory 11 and operable on the processor 12, and when the processor 12 executes the multi-sensor information fusion program during vehicle automatic driving, the multi-sensor information fusion method during vehicle automatic driving as described in the above embodiments is implemented.
The vehicle automatic driving system comprises the memory 11 and the processor 12, and the processor 12 executes the multi-sensor information fusion program stored in the memory 11 during the vehicle automatic driving, so that the multi-sensor information fusion method during the vehicle automatic driving in the embodiment can be realized, the detection accuracy of surrounding targets can be improved, the requirements of the automatic driving vehicle on the real-time performance and the accuracy of an environment sensing system under the high-speed driving condition can be met, and the driving safety of a user can be improved.
Fig. 5 is a block diagram showing the configuration of the multi-sensor information fusion device in the automatic driving of the vehicle according to the embodiment of the present invention.
Further, as shown in fig. 5, the present invention provides a multi-sensor information fusion device 100 for vehicle automatic driving, where the fusion device 100 includes a data receiving module 101, a data processing module 102, a calibration module 103, a fusion module 104, and a data sending module 105.
The data receiving module 101 subscribes the data of each sensor from each sensor data channel using the LCM protocol; the data processing module 102 processes the data of each sensor and determines the target data of each sensor; the calibration module 103 calibrates each sensor so as to unify the target data of all sensors in the same coordinate system; the fusion module 104 fuses the target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base to determine obstacle data; and the data transmission module 105 transmits the obstacle data to the vehicle decision module using the LCM protocol to generate vehicle control instructions.
First, it should be noted that, during automatic driving, the vehicle needs to acquire data about its surroundings at all times in order to obtain an accurate and stable picture of the motion state of each obstacle around the autonomous vehicle. In this embodiment, various sensors may be provided on the vehicle for acquiring data about the vehicle surroundings; the sensors that may be provided include, but are not limited to, a camera, millimeter-wave radar, and lidar.
Specifically, in the present embodiment the data receiving module 101 subscribes the data of the sensors arranged on the vehicle from the various sensor data channels according to the LCM protocol. As noted above, LCM is a set of libraries and tools for message passing and data marshalling, whose objective is to provide a publish/subscribe message-passing model for real-time systems that need high bandwidth and low latency.
In some embodiments of the present invention, the data receiving module 101 is specifically configured to receive GPS data via the LCM protocol; receive the target data parsed by the ESR millimeter-wave radar via the LCM protocol; receive 4-line lidar raw data via the LCM protocol; receive 16-line lidar raw data via the LCM protocol; receive vehicle state data via the LCM protocol; receive the target data and lane line data of the camera via the LCM protocol; receive the target data parsed by the angle radar via the LCM protocol; and receive map data via the LCM protocol.
After the data receiving module 101 receives the data of each sensor through the LCM protocol, the received data may be processed by the data processing module 102 to obtain the target data of each sensor; it should be noted that different sensors may call for correspondingly different data processing methods. For the data processing manner of each sensor, reference may be made to the specific implementations in the above method embodiment, which are not repeated here.
In order to obtain high-accuracy data after fusing the data acquired by the autonomous vehicle, the calibration module 103 may be used to calibrate the sensors before the fusion, so that the data detected by the sensors have temporal and/or spatial consistency. In an embodiment of the present invention, the calibration module 103 is specifically configured to calibrate the target data of any one sensor to the vehicle body coordinate system and calibrate the target data of the other sensors to that sensor's target data; to regard the data acquired by each sensor at the same moment as data in the same time period when correlating the target data of different sensors; and then, through spatial synchronization, to synchronize the target data of the same obstacle detected by each sensor to the same position in three-dimensional space.
After the calibration module 103 completes the calibration of each sensor, the fusion module 104 may be used to fuse the target data of the sensors; in this embodiment, the target data may be fused based on a fusion criterion in a preset rule base to determine the obstacle data. Compared with a single-sensor system, the data fusion system extends the spatial detection range, extends the temporal detection range, and enhances the reliability of the system; compared with an uncalibrated multi-sensor system, it guarantees the spatio-temporal consistency of the data and improves the reliability of the obstacle data. Finally, the data transmission module 105 transmits the obstacle data fused by the fusion module 104 to the vehicle decision module using the LCM protocol so as to generate a vehicle control instruction that controls the vehicle; the vehicle can thus accurately avoid obstacles during automatic driving, ensuring the safety of drivers and passengers.
In some embodiments of the present invention, the data processing module 102 is further configured to parse the raw data to obtain a point cloud and to segment, cluster, filter and track the obtained point cloud to obtain obstacle-set target data; and to parse the received LCM data packets to obtain the obstacle target information.
In some embodiments of the present invention, the fusion module 104 is further configured to determine and filter target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base, and place single sensor data meeting the fusion criterion into a container, so as to use data stored in the container as obstacle data.
It should be noted that, as another specific implementation manner of the embodiment of the present invention, reference may be made to the specific implementation manner of the multi-sensor information fusion method during automatic driving of the vehicle, which is not described herein again.
In conclusion, the multi-sensor information fusion device used during automatic driving of the vehicle provided by the embodiment of the invention can improve the detection accuracy of surrounding targets, meet the requirements of real-time performance and accuracy of an environment sensing system for automatic driving of the vehicle under a high-speed driving condition, and improve the driving safety of a user.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first", "second", and the like used in the embodiments of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, such as two, three, four, etc., unless specifically limited otherwise.
In the present invention, unless otherwise explicitly stated or limited by the relevant description or limitation, the terms "mounted," "connected," and "fixed" in the embodiments are to be understood in a broad sense, for example, the connection may be a fixed connection, a detachable connection, or an integrated connection, and it may be understood that the connection may also be a mechanical connection, an electrical connection, etc.; of course, they may be directly connected or indirectly connected through intervening media, or they may be interconnected within one another or in an interactive relationship. Those of ordinary skill in the art will understand the specific meaning of the above terms in the present invention according to their specific implementation.
In the present invention, unless otherwise expressly stated or limited, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through an intermediate. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (10)
1. A multi-sensor information fusion method in automatic driving of a vehicle is characterized by comprising the following steps:
subscribing data of each sensor from each sensor data channel by utilizing an LCM protocol;
processing the data of each sensor to determine target data of each sensor;
calibrating the sensors to unify the target data of the sensors in the same coordinate system;
fusing target data of each sensor in the same coordinate system based on a fusion criterion in a preset rule base to determine obstacle data;
and sending the obstacle data to a vehicle decision module by utilizing the LCM protocol so as to generate a vehicle control instruction.
2. The multi-sensor information fusion method for automatic driving of a vehicle according to claim 1, wherein subscribing the data of each sensor from each sensor data channel using the LCM protocol comprises one or more of the following:
receiving GPS data via the LCM protocol;
receiving the target data parsed by the ESR millimeter-wave radar via the LCM protocol;
receiving 4-line lidar raw data via the LCM protocol;
receiving 16-line lidar raw data via the LCM protocol;
receiving vehicle state data via the LCM protocol;
receiving the target data and lane line data of the camera via the LCM protocol;
receiving the target data parsed by the angle radar via the LCM protocol; and
receiving map data via the LCM protocol.
3. The method for fusing the multi-sensor information during automatic driving of the vehicle according to claim 2, wherein the processing of the data of each sensor comprises:
parsing the raw data to obtain a point cloud, and segmenting, clustering, filtering and tracking the obtained point cloud to obtain obstacle-set target data;
and carrying out data analysis on the received LCM data packet to obtain the obstacle target information.
4. The method for fusing the information of multiple sensors during automatic driving of the vehicle according to any one of claims 1 to 3, wherein calibrating the sensors comprises:
the method comprises the steps of calibrating target data of any one sensor to a vehicle body coordinate system, calibrating target data of other sensors to the target data of the sensor, regarding the data of the sensors acquired at the same moment as data in the same time period when the target data of different sensors are correlated, and then synchronizing the target data of the same obstacle detected by the sensors to the same position in a three-dimensional space through space synchronization.
5. The multi-sensor information fusion method in the automatic driving of the vehicle according to any one of claims 1 to 3, wherein fusing the target data of the sensors in the same coordinate system based on the fusion criterion in the preset rule base comprises:
and judging and screening target data of each sensor under the same coordinate system based on a fusion criterion in a preset rule base, and putting single sensor data meeting the fusion criterion into a container so as to take data stored in the container as the obstacle data.
6. A computer-readable storage medium on which a multi-sensor information fusion program for vehicle automatic driving is stored, characterized in that the program, when executed by a processor, implements the multi-sensor information fusion method for vehicle automatic driving according to any one of claims 1 to 5.
7. A vehicle automatic driving system, comprising a memory, a processor and a vehicle automatic driving multi-sensor information fusion program stored in the memory and operable on the processor, wherein the processor implements the vehicle automatic driving multi-sensor information fusion method according to any one of claims 1 to 5 when executing the vehicle automatic driving multi-sensor information fusion program.
8. A multi-sensor information fusion apparatus at the time of automatic driving of a vehicle, characterized by comprising:
the data receiving module subscribes data of each sensor from each sensor data channel by utilizing an LCM protocol;
the data processing module is used for processing the data of each sensor and determining the target data of each sensor;
the calibration module is used for calibrating each sensor so as to unify the target data of each sensor to the same coordinate system;
the fusion module fuses target data of each sensor in the same coordinate system based on fusion criteria in a preset rule base to determine obstacle data;
a data transmission module that transmits the obstacle data to a vehicle decision module using the LCM protocol to generate a vehicle control instruction.
9. The multi-sensor information fusion device for automatic driving of a vehicle according to claim 8, wherein the data receiving module is specifically configured to:
receive GPS data via the LCM protocol;
receive the target data parsed by the ESR millimeter-wave radar via the LCM protocol;
receive 4-line lidar raw data via the LCM protocol;
receive 16-line lidar raw data via the LCM protocol;
receive vehicle state data via the LCM protocol;
receive the target data and lane line data of the camera via the LCM protocol;
receive the target data parsed by the angle radar via the LCM protocol; and
receive map data via the LCM protocol.
10. The multi-sensor information fusion device for automatic driving of a vehicle according to claim 8 or 9, wherein the calibration module is specifically configured to: calibrate the target data of any one sensor to the vehicle body coordinate system and calibrate the target data of the other sensors to that sensor's target data; regard the data acquired by each sensor at the same moment as data in the same time period when correlating the target data of different sensors; and then, through spatial synchronization, synchronize the target data of the same obstacle detected by each sensor to the same position in three-dimensional space.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110655285.2A | 2021-06-11 | 2021-06-11 | Multi-sensor information fusion method and device, storage medium and automatic driving system
Publications (1)

Publication Number | Publication Date
---|---
CN113537287A | 2021-10-22

Family: ID=78124913

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110655285.2A | Multi-sensor information fusion method and device, storage medium and automatic driving system | 2021-06-11 | 2021-06-11

Country Status (1)

Country | Link
---|---
CN | CN113537287A (en)
Patent Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20100235129A1 * | 2009-03-10 | 2010-09-16 | Honeywell International Inc. | Calibration of multi-sensor system
CN110007669A * | 2019-01-31 | 2019-07-12 | 吉林微思智能科技有限公司 | A kind of intelligent driving barrier-avoiding method for automobile
CN112241007A * | 2020-07-01 | 2021-01-19 | 北京新能源汽车技术创新中心有限公司 | Calibration method and arrangement structure of automatic driving environment perception sensor and vehicle
CN112101092A * | 2020-07-31 | 2020-12-18 | 北京智行者科技有限公司 | Automatic driving environment sensing method and system

Non-Patent Citations (1)

Title
---
陆峰;徐友春;李永乐;王德宇;谢德胜: "基于信息融合的智能车障碍物检测方法", 计算机应用, no. 2, 20 December 2017 (2017-12-20) *

* Cited by examiner
Cited By (11)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN114112426A * | 2021-11-08 | 2022-03-01 | 东风汽车集团股份有限公司 | Automatic driving test method, system and device
CN114170274A * | 2022-02-11 | 2022-03-11 | 北京宏景智驾科技有限公司 | Target tracking method and device, electronic equipment and storage medium
CN114608589A * | 2022-03-04 | 2022-06-10 | 西安邮电大学 | Multi-sensor information fusion method and system
CN114926816A * | 2022-05-18 | 2022-08-19 | 中国第一汽车股份有限公司 | Data fusion result display method and system and vehicle
CN115339453A * | 2022-10-19 | 2022-11-15 | 禾多科技(北京)有限公司 | Vehicle lane change decision information generation method, device, equipment and computer medium
CN115339453B * | 2022-10-19 | 2022-12-23 | 禾多科技(北京)有限公司 | Vehicle lane change decision information generation method, device, equipment and computer medium
CN116248778A * | 2023-05-15 | 2023-06-09 | 珠海迈科智能科技股份有限公司 | Data fusion transmission method and system in multi-protocol environment
CN116248778B * | 2023-05-15 | 2023-08-11 | 珠海迈科智能科技股份有限公司 | Data fusion transmission method and system in multi-protocol environment
CN117574314A * | 2023-11-28 | 2024-02-20 | 东风柳州汽车有限公司 | Information fusion method, device and equipment of sensor and storage medium
CN117454316A * | 2023-12-25 | 2024-01-26 | 安徽蔚来智驾科技有限公司 | Multi-sensor data fusion method, storage medium and intelligent device
CN117454316B * | 2023-12-25 | 2024-04-26 | 安徽蔚来智驾科技有限公司 | Multi-sensor data fusion method, storage medium and intelligent device
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination