CN114386481A - Vehicle perception information fusion method, device, equipment and storage medium - Google Patents
- Publication number
- CN114386481A (application CN202111524574.5A)
- Authority
- CN
- China
- Prior art keywords
- information
- obstacle detection
- auxiliary
- main
- vehicle
- Prior art date
- Legal status
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
Abstract
The embodiment of the invention discloses a method, a device, equipment and a storage medium for fusing vehicle perception information, wherein the method comprises the following steps: acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle, wherein the auxiliary vehicle is a vehicle whose position is in a perception area associated with the current vehicle position; determining main obstacle detection object information and main obstacle detection category information according to the main perception information, and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information; and fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information. Perception information fusion is carried out at the obstacle level, so that the fusion of the perception information is more reliable.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for fusing vehicle perception information.
Background
With the development of new-generation communication technology, autonomous vehicles are gradually becoming networked, and cooperative perception has become a new technical field. With the support of communication technology, vehicles can exchange perception information with one another; the multi-view perception data brings more environmental information to the vehicle while providing more basis for the subsequent decision and execution modules.
However, the prior art has at least the following technical problems: many challenges remain in the collaborative process of sharing perception data. In terms of communication, it must be considered whether the bandwidth and latency of the internet of vehicles can support the transmission of data of this magnitude; during fusion, the fusion end needs to reconstruct the received data, and the larger the data volume, the larger the computation required by the reconstruction process. Therefore, how to provide a reasonable and effective perception information fusion method is a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for fusing vehicle perception information, so as to realize reasonable and effective fusion of the vehicle perception information.
In a first aspect, an embodiment of the present invention provides a vehicle perception information fusion method, including:
acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle, wherein the auxiliary vehicle is a vehicle with a position in a perception area related to the current vehicle position;
determining main obstacle detection object information and main obstacle detection category information according to the main perception information, and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information;
and fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection type information and the auxiliary obstacle detection type information to obtain target obstacle detection type information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection type information.
In a second aspect, an embodiment of the present invention further provides a vehicle perception information fusion device, including:
the system comprises a perception information acquisition module, a perception information acquisition module and a perception information acquisition module, wherein the perception information acquisition module is used for acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle, and the auxiliary vehicle is a vehicle with a position in a perception area related to the current vehicle position;
the obstacle information acquisition module is used for determining main obstacle detection object information and main obstacle detection category information according to the main perception information and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information;
and the perception information fusion module is used for fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the vehicle perception information fusion method provided by any embodiment of the present invention.
In a fourth aspect, the embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the vehicle perception information fusion method provided in any embodiment of the present invention.
The vehicle perception information fusion method provided by the embodiment of the invention acquires the main perception information of the current vehicle and the auxiliary perception information of the auxiliary vehicle; determines main obstacle detection object information and main obstacle detection category information according to the main perception information, and determines auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information; fuses the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fuses the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtains a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information. Perception information fusion is carried out by focusing on obstacle-level perception information, so that the fusion of the perception information is more comprehensive and more reliable.
Drawings
Fig. 1 is a schematic flowchart of a method for fusing vehicle perception information according to an embodiment of the present invention;
fig. 2 is a schematic view of a multi-vehicle cooperative sensing process according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a vehicle perception information fusion device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flowchart of a vehicle perception information fusion method according to an embodiment of the present invention. The embodiment can be applied to the situation of the fusion of the perception information of multiple vehicles. The method may be performed by a vehicle awareness information fusion apparatus, which may be implemented in software and/or hardware, for example, which may be configured in a computer device, such as a vehicle. As shown in fig. 1, the method includes:
and S110, acquiring main perception information of the current vehicle and auxiliary perception information of the auxiliary vehicle.
In this embodiment, the perception information of a vehicle may be obtained by performing target detection on the point cloud data collected by the vehicle's sensors. To reduce the amount of data processing computation, each vehicle acquires point cloud data, performs target detection to obtain perception data, and transmits the perception data through the internet of vehicles. For example, after the current vehicle collects its current point cloud data, target detection is performed on the point cloud data to obtain the main perception information of the current vehicle. After an auxiliary vehicle collects auxiliary point cloud data, target detection is performed on the auxiliary point cloud data to obtain the auxiliary perception information of the auxiliary vehicle. The auxiliary vehicle transmits the auxiliary perception information to the current vehicle through the internet of vehicles, so that the current vehicle obtains both its own main perception information and the auxiliary perception information of the auxiliary vehicle. An auxiliary vehicle is a vehicle whose position lies in the perception area associated with the current vehicle position, and the number of auxiliary vehicles may be one or more.
The perception area associated with the current vehicle position may be an area within a set range centered on the current vehicle position, where the set range can be set in advance according to actual requirements; the perception area associated with the current vehicle position may also be an area formed by positions adjacent to the current vehicle position, which is not limited herein.
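As a minimal illustrative sketch (not part of the patent), a circular perception area of an assumed radius could be checked as follows; the function name and the radius value are hypothetical:

```python
import math

def is_auxiliary_vehicle(current_pos, other_pos, sensing_radius_m=100.0):
    """Check whether another vehicle lies inside the circular perception area
    centred on the current vehicle (one possible reading of the 'set range')."""
    dx = other_pos[0] - current_pos[0]
    dy = other_pos[1] - current_pos[1]
    return math.hypot(dx, dy) <= sensing_radius_m
```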
It can be understood that the current vehicle is the vehicle that performs the vehicle perception information fusion method. The current vehicle acquires its own main perception information and the auxiliary perception information transmitted by the auxiliary vehicle through the internet of vehicles, and fuses the main perception information and the auxiliary perception information to achieve accurate identification of obstacles.
The coordinate systems adopted by different vehicles for point cloud data acquisition are different, so the coordinate systems of the perception information of different vehicles are different. That is, the main perception information of the current vehicle and the auxiliary perception information of the auxiliary vehicle are not in the same coordinate system. To ensure the accuracy of perception information fusion, the perception information in different coordinate systems needs to undergo coordinate transformation and then be fused in the same coordinate system. Optionally, the coordinate system of the perception information of any one vehicle may be set as a reference coordinate system, and the perception information of the other vehicles is coordinate-transformed into the reference coordinate system for fusion of the perception information.
In one embodiment, before obtaining the auxiliary perception information of the auxiliary vehicle, the method includes: acquiring vehicle perception information of the auxiliary vehicle, and performing coordinate transformation on the vehicle perception information of the auxiliary vehicle to obtain the auxiliary perception information in the current vehicle coordinate system. Considering that the current vehicle uses the fused perception information for functions such as vehicle navigation, the coordinate system of the perception information of the current vehicle can be used as the reference coordinate system, and the vehicle perception information of the auxiliary vehicle is coordinate-transformed into the reference coordinate system. It should be noted that the vehicle perception information of each auxiliary vehicle needs to be coordinate-transformed, so that the perception information of all vehicles is in the same coordinate system. The coordinate transformation of the vehicle perception information may follow a coordinate transformation method in the prior art, which is not limited herein.
Optionally, performing coordinate transformation on the vehicle perception information of the auxiliary vehicle to obtain the auxiliary perception information in the current vehicle coordinate system includes: determining the obstacle detection frames in the vehicle perception information of the auxiliary vehicle, and performing coordinate transformation on the coordinates of the obstacle detection frames to obtain the auxiliary perception information in the current vehicle coordinate system. Specifically, the coordinate transformation of the vehicle perception information is the coordinate transformation of the obstacle detection frames in the vehicle perception information. All obstacle detection frames in the vehicle perception information of each auxiliary vehicle are determined, coordinate transformation is performed on each obstacle detection frame to obtain auxiliary detection frame information of the obstacle detection frame in the reference coordinate system, and the auxiliary detection frame information of all obstacle detection frames in the reference coordinate system is taken as the auxiliary perception information.
The coordinate transformation process maps the obstacle detection frames detected by the auxiliary vehicles participating in the cooperation from the pixel coordinate system of each vehicle to the pixel coordinate system of the current vehicle. Since the coordinate transformation between vehicles is a three-dimensional transformation, depth information needs to be known in addition to the two-dimensional position information in the image, so an RGB-D image containing depth information needs to be used. For example, assume that a coordinate to be mapped in the auxiliary vehicle pixel coordinate system is (u1, v1); after mapping to the current vehicle pixel coordinate system, the coordinate becomes (u2, v2).
First, based on the depth information d corresponding to (u1, v1), the coordinate is transformed from the two-dimensional pixel coordinate system into the three-dimensional camera coordinate system of the auxiliary vehicle, and the corresponding coordinate becomes (x_c1, y_c1, z_c1). The transformation process is as follows:
z_c1 = d / s, x_c1 = (u1 - c_x1) · z_c1 / f_x1, y_c1 = (v1 - c_y1) · z_c1 / f_y1
where s is the scaling factor of the depth map corresponding to the image, f_x1 and f_y1 are the focal lengths of the auxiliary vehicle camera on the x and y axes, and (c_x1, c_y1) is the aperture center of the auxiliary vehicle camera; these parameters all belong to the internal parameters of the auxiliary vehicle camera.
Then, (x_c1, y_c1, z_c1) needs to be transformed from the camera coordinate system into the Inertial Measurement Unit (IMU) coordinate system of the auxiliary vehicle, and then into the world coordinate system based on the position information provided by the IMU. The relation is as follows:
[x_w, y_w, z_w, 1]^T = T_imu1→world · T_cam1→imu1 · [x_c1, y_c1, z_c1, 1]^T
where [x_w, y_w, z_w] is the three-dimensional coordinate obtained after transformation into the world coordinate system, T_cam1→imu1 denotes the coordinate transformation matrix from the camera coordinate system of the auxiliary vehicle to its IMU coordinate system, and T_imu1→world denotes the transformation matrix from the IMU coordinate system of the auxiliary vehicle to the world coordinate system.
After the transformation into the world coordinate system, the inverse of the above process is applied based on the parameters of the current vehicle, and [x_w, y_w, z_w] is transformed into the pixel coordinate system of the current vehicle to obtain (u2, v2). The transformation process is
[x_c2, y_c2, z_c2, 1]^T = (T_cam2→imu2)^(-1) · (T_imu2→world)^(-1) · [x_w, y_w, z_w, 1]^T,
u2 = f_x2 · x_c2 / z_c2 + c_x2, v2 = f_y2 · y_c2 / z_c2 + c_y2
where T_cam2→imu2 and T_imu2→world are the coordinate transformation matrices corresponding to the current vehicle, and f_x2, f_y2, c_x2, c_y2 are the internal parameters of the current vehicle camera.
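The chain of transformations above can be sketched in code. The following Python snippet is an illustration only, not part of the patent; it assumes the 3x3 camera intrinsic matrix K and the 4x4 homogeneous camera→IMU and IMU→world transforms are available for each vehicle:

```python
import numpy as np

def pixel_to_world(u, v, depth, s, K, T_cam_to_imu, T_imu_to_world):
    """Back-project a pixel (u, v) and its depth value into the world frame.

    K is the intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]],
    s is the depth-map scaling factor, and the two 4x4 matrices are the
    camera->IMU and IMU->world homogeneous transforms of the auxiliary vehicle.
    """
    z_c = depth / s                               # metric depth in the camera frame
    x_c = (u - K[0, 2]) * z_c / K[0, 0]
    y_c = (v - K[1, 2]) * z_c / K[1, 1]
    p_cam = np.array([x_c, y_c, z_c, 1.0])
    return T_imu_to_world @ T_cam_to_imu @ p_cam  # homogeneous world point

def world_to_pixel(p_world, K, T_cam_to_imu, T_imu_to_world):
    """Project a homogeneous world point into the current vehicle's pixel
    coordinate system by inverting the same chain with its own parameters."""
    p_cam = np.linalg.inv(T_cam_to_imu) @ np.linalg.inv(T_imu_to_world) @ p_world
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return u, v
```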
And S120, determining main obstacle detection object information and main obstacle detection type information according to the main perception information, and determining auxiliary obstacle detection object information and auxiliary obstacle detection type information according to the auxiliary perception information.
In the present embodiment, the fusion of the perception information is specifically divided into the fusion of the obstacle detection target information and the fusion of the obstacle detection category information. Therefore, it is necessary to determine the main obstacle detection object information and the main obstacle detection type information from the main sensing information and determine the auxiliary obstacle detection object information and the auxiliary obstacle detection type information from the auxiliary sensing information, respectively. Optionally, the main obstacle detection object information and the main obstacle detection category information may be directly extracted from the main sensing information, and the auxiliary obstacle detection object information and the auxiliary obstacle detection category information may be extracted from the auxiliary sensing information.
The obstacle detection object information may include information such as coordinates and shapes of the obstacle detection object, and the obstacle detection category information may include information such as categories and probabilities of the obstacle detection object.
S130, fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection type information and the auxiliary obstacle detection type information to obtain target obstacle detection type information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection type information.
In the present embodiment, the fusion of the obstacle detection object information and the fusion of the obstacle detection category information are performed separately. The two fusions are performed independently, but the obstacle detection object information and the obstacle detection category information are related to each other. Illustratively, a detected obstacle has both obstacle detection object information (coordinates, etc.) and obstacle detection category information (category, probability, etc.). The obstacle detection object information and the obstacle detection category information of the obstacle are fused separately, and the perception information fusion result of the obstacle is then determined based on the fusion result of the obstacle detection object information and the fusion result of the obstacle detection category information.
As is clear from the above, when the fusion of the obstacle detection information and the fusion of the obstacle detection types are performed, it is necessary to fuse the obstacle detection target information of the same obstacle and fuse the obstacle detection types of the same obstacle. Therefore, before information fusion, it is necessary to associate main obstacle detection information and auxiliary obstacle detection information belonging to the same obstacle.
It is understood that the sensing ranges of the main sensing information and the auxiliary sensing information may be different. Therefore, there may be auxiliary obstacle detection information that is not associated with the main obstacle detection information, and/or there may be main obstacle detection information that is not associated with the auxiliary obstacle detection information. If there is auxiliary obstacle detection information that is not associated with the main obstacle detection information and/or if there is main obstacle detection information that is not associated with the auxiliary obstacle detection information, the auxiliary obstacle detection information may be directly used as partial information in the target obstacle detection target information and/or the main obstacle detection information may be used as partial information in the target obstacle detection target information.
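The pass-through behaviour described above can be illustrated with a small sketch (not part of the patent); `match_pairs` and `fuse_pair` are hypothetical callables standing in for the association and fusion steps described later:

```python
def fuse_object_info(main_boxes, aux_boxes, match_pairs, fuse_pair):
    """Fuse matched detections and keep unmatched ones unchanged,
    mirroring the pass-through rule described above."""
    unmatched_main, unmatched_aux, matched = match_pairs(main_boxes, aux_boxes)
    fused = [fuse_pair(m, a) for m, a in matched]
    return fused + unmatched_main + unmatched_aux
```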
In one embodiment of the present invention, a method for fusing main obstacle detection object information and auxiliary obstacle detection object information to obtain target obstacle detection object information, includes: acquiring main detection frame information in main obstacle detection object information; matching the main detection frame information with the auxiliary detection frame information, determining the auxiliary detection frame information matched with the main detection frame information, and associating the matched main detection frame information with the auxiliary detection frame information; and for each group of associated main detection frame information and auxiliary detection frame information, fusing main detection frame coordinates of the main detection frame information and auxiliary detection frame coordinates of the auxiliary detection frame information to obtain target detection frame coordinates, and taking the target detection frame coordinates as target obstacle detection object information. The main obstacle detection target information and the auxiliary obstacle detection target information are already located in the same coordinate system, and therefore whether the main detection frame information and the auxiliary detection frame information are detection frame information of the same obstacle can be determined according to the degree of coincidence of the main detection frame information and the auxiliary detection frame information. And when the main detection frame information and the auxiliary detection frame information are the detection frame information of the same obstacle, associating the main detection frame information with the auxiliary detection frame information to obtain at least one group of associated main detection frame information and auxiliary detection frame information. And for each group of associated main detection frame information and auxiliary detection frame information, fusing main detection frame coordinates of the main detection frame information and auxiliary detection frame coordinates of the auxiliary detection frame information to obtain target detection frame coordinates, and taking the target detection frame coordinates as target obstacle detection object information.
The fusion of the main detection frame coordinates and the auxiliary detection frame coordinates may be fusion in a manner of averaging, weighting, summing, and the like of the corresponding feature point coordinates, and is not limited herein.
In one embodiment, to improve the accuracy of the fused information, the coordinates of the main detection frame and the coordinates of the auxiliary detection frame may be fused according to the confidence degrees of the coordinates of the main detection frame and the coordinates of the auxiliary detection frame. That is to say, the process of fusing the main detection frame coordinates of the main detection frame information and the auxiliary detection frame coordinates of the auxiliary detection frame information to obtain the target detection frame coordinates includes: acquiring a main confidence degree of the vehicle corresponding to the main detection frame information and an auxiliary confidence degree of the vehicle corresponding to the auxiliary detection frame information; and fusing the coordinates of the main detection frame and the coordinates of the auxiliary detection frame based on the main confidence coefficient and the auxiliary confidence coefficient to obtain the coordinates of the target detection frame. The confidence of the coordinates of the main detection frame may be the confidence of the current vehicle, and the confidence of the auxiliary detection frame may be the confidence of the auxiliary vehicle. The confidence level of the vehicle may be set based on practical experience.
Illustratively, suppose that the detection frame position information of the two vehicles to be fused is expressed as bbox1 and bbox2 respectively, both of which are four-element vectors, and that the confidences of the two vehicles are P(V1) and P(V2) respectively. A weighted average of the detection frame coordinates is carried out based on the confidences of the two vehicles, giving more weight to the position information with high confidence and less weight to the position information with low confidence. The weights of the two vehicles are respectively:
w1 = P(V1) / (P(V1) + P(V2)), w2 = P(V2) / (P(V1) + P(V2))
Based on the above formula, the position information of the fused detection frame can be expressed as bbox_fused = w1 · bbox1 + w2 · bbox2.
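A minimal sketch of this confidence-weighted averaging (illustrative only; the boxes and confidences in the usage line are made-up values):

```python
import numpy as np

def fuse_bbox(bbox1, bbox2, p_v1, p_v2):
    """Confidence-weighted average of two associated detection frames,
    each given as [x_min, y_min, x_max, y_max]."""
    w1 = p_v1 / (p_v1 + p_v2)
    w2 = p_v2 / (p_v1 + p_v2)
    return w1 * np.asarray(bbox1, dtype=float) + w2 * np.asarray(bbox2, dtype=float)

# The more trusted vehicle pulls the fused box towards its own estimate.
fused = fuse_bbox([100, 50, 180, 120], [104, 54, 186, 126], p_v1=0.9, p_v2=0.6)
```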
In one embodiment of the present invention, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information includes: acquiring the main obstacle detection probability in the main obstacle detection category information and the auxiliary obstacle detection probability in the auxiliary obstacle detection category information; and fusing the main obstacle detection probability and the auxiliary obstacle detection probability to obtain a target obstacle detection category probability, and determining the target obstacle detection category information based on the target obstacle detection category probability. It is to be understood that the fusion of obstacle category information depends on the associated main detection frame information and auxiliary detection frame information determined in the above-described process. For each group of associated main detection frame information and auxiliary detection frame information, the main obstacle detection probability of the main detection frame information and the auxiliary obstacle detection probability of the auxiliary detection frame information are fused to obtain a target obstacle detection category probability, and the target obstacle detection category probability is taken as the target obstacle detection category information. Optionally, the target obstacle detection category information further includes a target obstacle detection category identifier in addition to the target obstacle detection category probability. The target obstacle detection category identifier may be a unique identifier of the obstacle category, and may be represented by characters, numbers, character strings, and the like. For example, the target obstacle detection category identifier may be vehicle, pedestrian, non-motorized vehicle, or the like.
The fusion of the detection probabilities is a fusion of the probabilities of the same obstacle detection category identifier. Assume that the main obstacle detection category information includes a main obstacle detection category 1 and a main obstacle detection category 2, the probability corresponding to main obstacle detection category 1 is main obstacle detection category probability 1A, and the probability corresponding to main obstacle detection category 2 is main obstacle detection category probability 2A; the auxiliary obstacle detection category information includes an auxiliary obstacle detection category 1 and an auxiliary obstacle detection category 2, the probability corresponding to auxiliary obstacle detection category 1 is auxiliary obstacle detection category probability 1B, and the probability corresponding to auxiliary obstacle detection category 2 is auxiliary obstacle detection category probability 2B. Probability fusion is performed for detection category 1 and detection category 2 respectively. Specifically, for detection category 1, main obstacle detection category probability 1A and auxiliary obstacle detection category probability 1B are fused to obtain target obstacle detection category probability 1C; for detection category 2, main obstacle detection category probability 2A and auxiliary obstacle detection category probability 2B are fused to obtain target obstacle detection category probability 2C. Detection category 1 with probability 1C and detection category 2 with probability 2C may be output together directly as the target obstacle detection category information; alternatively, the magnitudes of probability 1C and probability 2C may be compared, and the larger probability and its corresponding category output as the target obstacle detection category information. For example, when probability 1C is greater than probability 2C, detection category 1 and probability 1C are output as the target obstacle detection category information.
In one embodiment, the fusing the main obstacle detection probability and the auxiliary obstacle detection probability to obtain the target obstacle detection category probability includes: and fusing the main obstacle detection probability and the auxiliary obstacle detection probability based on the log-likelihood ratio to obtain the target obstacle detection category probability.
Taking the vehicle category as an example, the corresponding obstacle detection category probability may be represented as P(car | Vi), where Vi indicates that the category information is provided by the perception algorithm of vehicle i. A conditional probability is used because the classification probability is based on the assumption that the information provided by vehicle i is trustworthy; that is, if the perception information of vehicle i is correct, then the probability that the object belongs to the vehicle category is P(car | Vi). The classification probabilities of the two vehicles are fused based on the log-likelihood ratio; because the log-likelihood ratio has a wider value range than the probability, unnecessary truncation errors can be avoided. The log-likelihood ratio corresponding to the classification probability of vehicle i can be expressed as
L_i = log( P(car | Vi) / (1 - P(car | Vi)) )
It can be seen that if P(car | Vi) > 0.5, the log-likelihood ratio is positive and plays an enhancing role in the subsequent fusion; if P(car | Vi) < 0.5, the log-likelihood ratio is negative and plays an attenuating role in the subsequent fusion. Based on the confidences P(V1) and P(V2) of the two vehicles, the relationship between the log-likelihood ratios before and after the fusion is expressed as
P(V1) · L_1 + P(V2) · L_2 = P(V1 + V2) · L_fused
where L_fused represents the log-likelihood ratio corresponding to the fused classification probability, and P(V1 + V2) represents the confidence of the two fused vehicles. The meaning of the equation is that the sum of the two classification-probability log-likelihood ratios, each multiplied by the confidence of its vehicle, is equal to the fused log-likelihood ratio multiplied by the fused confidence. Since the semantic information provided by the two vehicles is obtained from their respective sensor data and perception algorithms and is not correlated, the semantic information from V1 and the semantic information from V2 can be assumed to be independent of each other, so that P(V1 + V2) = P(V1) · P(V2). Substituting P(V1 + V2) = P(V1) · P(V2) into the above equation, the fused log-likelihood ratio can be obtained and expressed as:
L_fused = ( P(V1) · L_1 + P(V2) · L_2 ) / ( P(V1) · P(V2) )
Finally, the log-likelihood ratio is converted back into a probability as the fused target obstacle detection category probability P(car | V1 + V2), expressed as:
P(car | V1 + V2) = 1 / (1 + exp(-L_fused))
after the fused target obstacle detection category probability is obtained, target obstacle detection category information can be obtained by combining the obstacle detection categories.
After the target obstacle detection object information and the target obstacle detection category information are determined, a perception information fusion result can be obtained by combining the target obstacle detection object information and the target obstacle detection category information related to the target obstacle detection object information.
The vehicle perception information fusion method provided by the embodiment of the invention acquires the main perception information of the current vehicle and the auxiliary perception information of the auxiliary vehicle; determines main obstacle detection object information and main obstacle detection category information according to the main perception information, and determines auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information; fuses the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fuses the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtains a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information. Perception information fusion is carried out by focusing on obstacle-level perception information, so that the fusion of the perception information is more comprehensive and more reliable.
Example two
Fig. 2 is a schematic view of a multi-vehicle cooperative sensing process according to a second embodiment of the present invention. The present embodiment provides a preferred embodiment based on the above-described scheme. As shown in fig. 2, the multi-vehicle cooperative sensing provided in this embodiment performs multi-vehicle cooperative semantic fusion, which is implemented by a semantic fusion module. In general, the inputs of the semantic fusion module are the obstacle-level semantic information (i.e., the obstacle detection object information and the obstacle detection category information) provided by each vehicle and the semantic information confidence obtained by a confidence evaluation algorithm. The semantic information is extracted from the original image data by a target detection network and includes obstacle detection object information (such as the position information of an obstacle) and obstacle detection category information. Generally, the position information of an obstacle is represented by a rectangular bounding box (bbox) in the image, which is determined by the two coordinate values of the upper-left corner and the lower-right corner of the rectangular bounding box and is expressed as [x_min, y_min, x_max, y_max]; the obstacle detection category information mainly includes the classification probability output by the detection network, indicating the probability that the obstacle belongs to the category. First, the position information of the detection frames detected from multiple viewing angles needs to be mapped to the pixel coordinate system of the host vehicle, and then the semantic information of each vehicle is matched and fused in the host vehicle coordinate system. The main steps of the fusion process are described below by taking two vehicles as an example. Suppose the two vehicles participating in the fusion are vehicle 1 and vehicle 2, where vehicle 2 is the current vehicle and vehicle 1 is the auxiliary vehicle.
(1.1) coordinate transformation
The coordinate conversion process refers to that the detection frames detected by vehicles participating in the coordination are mapped from the pixel coordinate system of each vehicle to the pixel coordinate system of the host vehicle, and since the coordinate conversion between the vehicles is three-dimensional conversion, the corresponding depth information needs to be known besides two-dimensional position information in the image, and therefore the adopted image is an RGB-D image and contains depth information. The coordinates of the detection frame of the auxiliary vehicle, vehicle 1, are transformed into the coordinate system of vehicle 2 based on the information in the RGB-D image.
(1.2) data Association
After the detection frames of multiple vehicles are mapped to the coordinate system of the host vehicle, the detection frames belonging to the same obstacle need to be associated in preparation for the subsequent fusion process. The detection frames may be matched based on the intersection over union (IOU) between them, and detection frames whose IOU is greater than a threshold are associated with each other. The detection frame matching process of vehicles 1 and 2 is as follows:
1) the detection frame sets corresponding to the two vehicles are BBOX1 = {bbox_1^1, ..., bbox_m^1} and BBOX2 = {bbox_1^2, ..., bbox_n^2}; successfully matched detection frames are placed into BBOX in pairs, and BBOX is initialized to be empty;
2) the IOU between every pair of elements in BBOX1 and BBOX2 is calculated to obtain an IOU matrix;
3) the maximum value iou_max = IOU[i][j] in the IOU matrix is selected, and the two detection frames bbox_i^1 and bbox_j^2 corresponding to the subscripts are matched;
a) if iou_max > threshold, the matching is successful; the pair (bbox_i^1, bbox_j^2) is added to BBOX, and bbox_i^1 and bbox_j^2 are then removed from BBOX1 and BBOX2 respectively;
b) if iou_max < threshold, no matching pair remains in BBOX1 and BBOX2, and the process jumps to step 5);
4) it is judged whether BBOX1 and BBOX2 are empty;
a) if neither is empty, step 2) is repeated;
b) otherwise, the process jumps to step 5);
5) BBOX1, BBOX2, and BBOX are returned.
The detection frames in BBOX are the associated detection frames, while those remaining in BBOX1 and BBOX2 are unassociated detection frames. The associated detection frames are output after fusion, and the unassociated detection frames are output directly.
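A compact sketch of this greedy association (illustrative only; boxes are [x_min, y_min, x_max, y_max] lists and the threshold value is an assumption):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes [x_min, y_min, x_max, y_max]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_boxes(bboxes1, bboxes2, threshold=0.5):
    """Greedy association: repeatedly pair the remaining boxes with the
    largest IOU until it no longer exceeds the threshold."""
    remaining1, remaining2, matched = list(bboxes1), list(bboxes2), []
    while remaining1 and remaining2:
        ious = np.array([[iou(a, b) for b in remaining2] for a in remaining1])
        i, j = np.unravel_index(np.argmax(ious), ious.shape)
        if ious[i, j] <= threshold:
            break                      # no remaining pair overlaps enough
        matched.append((remaining1.pop(i), remaining2.pop(j)))
    return remaining1, remaining2, matched   # unmatched boxes plus matched pairs
```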
(1.3) data fusion
After data association, some detection frames of vehicle 1 may not be successfully matched; the semantic information of these detection frames is displayed directly to expand the field of view of the host vehicle, without further processing. The other detection frames are matched with detection frames of vehicle 2 belonging to the same obstacles, and for these mutually associated detection frames, the semantic information provided by the two vehicles is fused based on the confidences. Semantic information fusion includes position fusion of the detection frames (i.e., fusion of obstacle detection object information) and classification probability fusion (i.e., fusion of obstacle detection category information).
Both the position fusion and the classification probability fusion are realized based on the confidences of the vehicles; the specific fusion manner may refer to the above embodiment and is not repeated herein.
It should be noted that the embodiment of the present invention verifies the multi-vehicle semantic fusion algorithm based on image data acquired in real scenes. From the original data set, several dangerous scenes frequently encountered in actual driving are selected, namely strong-light scenes, left-turn scenes, slope scenes and occlusion scenes; in these scenes, the detection performance of the perception information fusion method provided by this embodiment is better than that of prior-art perception information fusion methods.
The embodiment of the invention provides a semantic-level multi-vehicle cooperative sensing scheme, which considers problems such as the instability and susceptibility to attack of deep learning technology in the sensing process, and designs both a confidence measure for semantic information and a multi-view semantic information fusion algorithm. Compared with existing semantic-level collaborative perception research, the fusion method provided by the embodiment of the invention considers the uncertainty of semantic information, designs a confidence-based semantic fusion algorithm, and focuses on obstacle-level semantic information, which has a small data volume and is critical to the driving process, so that the perception information fusion is more comprehensive and reliable.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a vehicle perception information fusion device according to a third embodiment of the present invention. The vehicle perception information fusion device can be implemented in software and/or hardware, for example, the vehicle perception information fusion device can be configured in a computer device. As shown in fig. 3, the apparatus includes a perception information obtaining module 310, an obstacle information obtaining module 320, and a perception information fusing module 330, wherein:
the perception information acquiring module 310 is configured to acquire main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle, where the auxiliary vehicle is a vehicle whose position is in a perception area associated with the current vehicle position;
the obstacle information acquiring module 320 is configured to determine main obstacle detection object information and main obstacle detection category information according to the main sensing information, and determine auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary sensing information;
the perception information fusion module 330 is configured to fuse the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fuse the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtain a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information.
The vehicle perception information fusion method provided by the embodiment of the invention acquires the main perception information of the current vehicle and the auxiliary perception information of the auxiliary vehicle; determines main obstacle detection object information and main obstacle detection category information according to the main perception information, and determines auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information; fuses the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fuses the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtains a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information. Perception information fusion is carried out by focusing on obstacle-level perception information, so that the fusion of the perception information is more comprehensive and more reliable.
Optionally, on the basis of the above scheme, the perception information fusion module 330 is specifically configured to:
acquiring main detection frame information in main obstacle detection object information;
matching the main detection frame information with the auxiliary detection frame information, determining the auxiliary detection frame information matched with the main detection frame information, and associating the matched main detection frame information with the auxiliary detection frame information;
and for each group of associated main detection frame information and auxiliary detection frame information, fusing main detection frame coordinates of the main detection frame information and auxiliary detection frame coordinates of the auxiliary detection frame information to obtain target detection frame coordinates, and taking the target detection frame coordinates as target obstacle detection object information.
Optionally, on the basis of the above scheme, the perception information fusion module 330 is specifically configured to:
acquiring a main confidence degree of the vehicle corresponding to the main detection frame information and an auxiliary confidence degree of the vehicle corresponding to the auxiliary detection frame information;
and fusing the coordinates of the main detection frame and the coordinates of the auxiliary detection frame based on the main confidence coefficient and the auxiliary confidence coefficient to obtain the coordinates of the target detection frame.
Optionally, on the basis of the above scheme, the perception information fusion module 330 is specifically configured to:
acquiring main obstacle detection probability in the main obstacle detection category information and auxiliary obstacle detection probability in the auxiliary obstacle detection category information;
and fusing the main obstacle detection probability and the auxiliary obstacle detection probability to obtain a target obstacle detection category probability, and determining target obstacle detection category information based on the target obstacle detection category probability.
Optionally, on the basis of the above scheme, the perception information fusion module 330 is specifically configured to:
and fusing the main obstacle detection probability and the auxiliary obstacle detection probability based on the log-likelihood ratio to obtain the target obstacle detection category probability.
Optionally, on the basis of the foregoing scheme, the sensing information obtaining module 310 is specifically configured to:
and acquiring vehicle perception information of the auxiliary vehicle, and performing coordinate transformation on the vehicle perception information of the auxiliary vehicle to obtain auxiliary perception information which is positioned in the same coordinate system with the main perception information.
Optionally, on the basis of the foregoing scheme, the sensing information obtaining module 310 is specifically configured to:
and determining an obstacle detection frame in the vehicle perception information of the auxiliary vehicle, and performing coordinate assistance on the coordinates of the obstacle detection frame to obtain the auxiliary perception information under the current vehicle coordinate system.
The vehicle perception information fusion device provided by the embodiment of the invention can execute the vehicle perception information fusion method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 4 illustrates a block diagram of an exemplary computer device 412 suitable for use in implementing embodiments of the present invention. The computer device 412 shown in FIG. 4 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 4, computer device 412 is in the form of a general purpose computing device. Components of computer device 412 may include, but are not limited to: one or more processors 416, a system memory 428, and a bus 418 that couples the various system components (including the system memory 428 and the processors 416).
The system memory 428 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 430 and/or cache memory 432. The computer device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage 434 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 418 by one or more data media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428, such program modules 442 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The computer device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 424, etc.), with one or more devices that enable a user to interact with the computer device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the computer device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, computer device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) through network adapter 420. As shown, network adapter 420 communicates with the other modules of computer device 412 over bus 418. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 416 executes programs stored in the system memory 428 to perform various functional applications and data processing, such as implementing a vehicle perception information fusion method provided by an embodiment of the present invention, the method including:
acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle;
determining main obstacle detection object information and main obstacle detection category information according to the main perception information, and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information;
and fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information.
Of course, those skilled in the art can understand that the processor may also implement the technical solution of the vehicle perception information fusion method provided by any embodiment of the present invention.
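For purposes of illustration only, the following is a minimal sketch of the fusion flow described in this embodiment, written in Python under assumptions the patent does not fix: each detection is represented as a dictionary carrying an axis-aligned detection frame [x1, y1, x2, y2] already expressed in the current vehicle's coordinate system, a per-class probability vector, and the reporting vehicle's confidence. All function and field names are hypothetical, and the simple averaging applied to matched pairs is a placeholder for the confidence-weighted and log-likelihood-ratio fusion sketched after the claims.

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_perception(main_dets, aux_dets, iou_threshold=0.5):
    """Associate main and auxiliary detections by IoU, fuse each matched pair,
    and pass unmatched detections through unchanged."""
    fused, used = [], set()
    for m in main_dets:
        best_j, best_overlap = None, iou_threshold
        for j, a in enumerate(aux_dets):
            overlap = iou(m["box"], a["box"])
            if j not in used and overlap >= best_overlap:
                best_j, best_overlap = j, overlap
        if best_j is None:
            fused.append(m)  # no auxiliary counterpart: keep the main detection
            continue
        used.add(best_j)
        a = aux_dets[best_j]
        fused.append({
            "box": (np.asarray(m["box"], float) + np.asarray(a["box"], float)) / 2.0,
            "probs": (np.asarray(m["probs"], float) + np.asarray(a["probs"], float)) / 2.0,
            "conf": max(m["conf"], a["conf"]),
        })
    # Unmatched auxiliary detections extend the current vehicle's field of view.
    fused.extend(a for j, a in enumerate(aux_dets) if j not in used)
    return fused

if __name__ == "__main__":
    main = [{"box": [0.0, 0.0, 2.0, 2.0], "probs": [0.7, 0.3], "conf": 0.9}]
    aux = [{"box": [0.1, 0.1, 2.1, 2.1], "probs": [0.8, 0.2], "conf": 0.6}]
    print(fuse_perception(main, aux))
```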
EXAMPLE five
The fifth embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the vehicle perception information fusion method provided by the embodiments of the present invention, the method including:
acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle;
determining main obstacle detection object information and main obstacle detection category information according to the main perception information, and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information;
and fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information.
Of course, the computer program stored on the computer-readable storage medium provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations of the vehicle perception information fusion method provided by any embodiment of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from the spirit of the invention, its scope being determined by the scope of the appended claims.
Claims (10)
1. A vehicle perception information fusion method is characterized by comprising the following steps:
acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle, wherein the auxiliary vehicle is a vehicle whose position lies within a perception area associated with the position of the current vehicle;
determining main obstacle detection object information and main obstacle detection category information according to the main perception information, and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information;
and fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information.
2. The method according to claim 1, wherein fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information includes:
acquiring main detection frame information in the main obstacle detection object information and auxiliary detection frame information in the auxiliary obstacle detection object information;
determining a main detection frame area according to the main detection frame information and an auxiliary detection frame area according to the auxiliary detection frame information;
matching the main detection frame information with the auxiliary detection frame information according to the intersection-over-union of the main detection frame area and the auxiliary detection frame area, determining the auxiliary detection frame information matched with the main detection frame information, and associating the matched main detection frame information with the auxiliary detection frame information;
and for each group of associated main detection frame information and auxiliary detection frame information, fusing main detection frame coordinates of the main detection frame information and auxiliary detection frame coordinates of the auxiliary detection frame information to obtain target detection frame coordinates, and taking the target detection frame coordinates as the target obstacle detection object information.
3. The method according to claim 2, wherein the fusing the main detection frame coordinates of the main detection frame information and the auxiliary detection frame coordinates of the auxiliary detection frame information to obtain target detection frame coordinates comprises:
acquiring a main confidence of the vehicle corresponding to the main detection frame information and an auxiliary confidence of the vehicle corresponding to the auxiliary detection frame information;
and fusing the coordinates of the main detection frame and the coordinates of the auxiliary detection frame based on the main confidence and the auxiliary confidence to obtain the coordinates of the target detection frame.
4. The method according to claim 2, wherein the fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information comprises:
acquiring a main obstacle detection probability in the main obstacle detection category information and an auxiliary obstacle detection probability in the auxiliary obstacle detection category information;
and fusing the main obstacle detection probability and the auxiliary obstacle detection probability to obtain a target obstacle detection category probability, and determining target obstacle detection category information based on the target obstacle detection category probability.
5. The method according to claim 4, wherein the fusing the main obstacle detection probability and the auxiliary obstacle detection probability to obtain a target obstacle detection category probability comprises:
and fusing the main obstacle detection probability and the auxiliary obstacle detection probability based on a log-likelihood ratio to obtain a target obstacle detection category probability.
6. The method according to claim 1, wherein acquiring the auxiliary perception information of the auxiliary vehicle comprises:
and acquiring vehicle perception information of the auxiliary vehicle, and performing coordinate transformation on the vehicle perception information of the auxiliary vehicle to obtain auxiliary perception information located in the same coordinate system as the main perception information.
7. The method according to claim 6, wherein the coordinate transformation of the vehicle perception information of the auxiliary vehicle to obtain the auxiliary perception information in the same coordinate system as the main perception information comprises:
and determining an obstacle detection frame in the vehicle perception information of the auxiliary vehicle, and performing coordinate transformation on the coordinates of the obstacle detection frame to obtain the auxiliary perception information in the current vehicle coordinate system.
8. A vehicle perception information fusion apparatus, comprising:
the perception information acquisition module is used for acquiring main perception information of a current vehicle and auxiliary perception information of an auxiliary vehicle, wherein the auxiliary vehicle is a vehicle whose position lies within a perception area associated with the position of the current vehicle;
the obstacle information acquisition module is used for determining main obstacle detection object information and main obstacle detection category information according to the main perception information and determining auxiliary obstacle detection object information and auxiliary obstacle detection category information according to the auxiliary perception information;
and the perception information fusion module is used for fusing the main obstacle detection object information and the auxiliary obstacle detection object information to obtain target obstacle detection object information, fusing the main obstacle detection category information and the auxiliary obstacle detection category information to obtain target obstacle detection category information, and obtaining a perception information fusion result based on the target obstacle detection object information and the target obstacle detection category information.
9. A computer device, the device comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the vehicle perception information fusion method according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the vehicle perception information fusion method according to any one of claims 1 to 7.
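Claim 3 fuses the main and auxiliary detection frame coordinates on the basis of the confidences of the two vehicles, building on the intersection-over-union association of claim 2 already sketched in the embodiment above. The claim does not spell out the weighting; the sketch below simply assumes a normalized confidence-weighted average, so the function name and the weighting scheme are illustrative assumptions rather than the patented formula.

```python
import numpy as np

def fuse_box_coordinates(main_box, aux_box, main_conf, aux_conf):
    """Confidence-weighted average of two associated detection frames.
    Boxes are [x1, y1, x2, y2]; main_conf and aux_conf are the confidences
    of the current vehicle and the auxiliary vehicle for this obstacle."""
    w = main_conf / (main_conf + aux_conf + 1e-9)  # weight of the main detection
    return w * np.asarray(main_box, float) + (1.0 - w) * np.asarray(aux_box, float)

# The more confident main detection dominates the fused target coordinates.
print(fuse_box_coordinates([0.0, 0.0, 2.0, 2.0], [0.2, 0.2, 2.2, 2.2], 0.9, 0.3))
```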
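Claims 4 and 5 obtain the target obstacle detection category probability by fusing the two vehicles' detection probabilities on the basis of a log-likelihood ratio. One plausible realization, shown here only as an assumption about how such a fusion could be implemented, is to add the per-class log-odds of the two probability vectors and map the sum back to probabilities:

```python
import numpy as np

def fuse_class_probs_llr(main_probs, aux_probs, eps=1e-6):
    """Fuse two per-class detection probability vectors by summing their
    log-likelihood ratios (log-odds) and converting back to probabilities."""
    p = np.clip(np.asarray(main_probs, float), eps, 1.0 - eps)
    q = np.clip(np.asarray(aux_probs, float), eps, 1.0 - eps)
    llr = np.log(p / (1.0 - p)) + np.log(q / (1.0 - q))  # combined evidence per class
    fused = 1.0 / (1.0 + np.exp(-llr))                    # back to probabilities
    return fused / fused.sum()                            # renormalize over classes

# Two moderately confident detections of the same class reinforce each other.
probs = fuse_class_probs_llr([0.7, 0.2, 0.1], [0.6, 0.3, 0.1])
print(probs, "-> target category index:", int(np.argmax(probs)))
```

Summing log-odds treats the two vehicles as independent observers, which is why agreement between them sharpens the fused category probability relative to either vehicle alone.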
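Claims 6 and 7 transform the auxiliary vehicle's detection frames into the current vehicle's coordinate system before fusion. A minimal two-dimensional sketch follows, under the assumption (not stated in the claims) that both vehicles know their pose, i.e. position and heading, in a shared world frame; all names are hypothetical.

```python
import numpy as np

def aux_to_ego_frame(points, aux_pose, ego_pose):
    """Transform 2-D points (e.g. detection frame corners) from the auxiliary
    vehicle's coordinate system into the current (ego) vehicle's system.
    A pose is (x, y, yaw) in a shared world frame; points has shape (N, 2)."""
    def pose_matrix(x, y, yaw):
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0.0, 0.0, 1.0]])
    T_world_aux = pose_matrix(*aux_pose)   # auxiliary frame -> world frame
    T_world_ego = pose_matrix(*ego_pose)   # ego frame -> world frame
    T_ego_aux = np.linalg.inv(T_world_ego) @ T_world_aux
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return (T_ego_aux @ pts_h.T).T[:, :2]

# Corners of an auxiliary detection frame mapped into the ego coordinate system.
corners = np.array([[1.0, 0.0], [2.0, 0.0], [2.0, 1.0], [1.0, 1.0]])
print(aux_to_ego_frame(corners, aux_pose=(10.0, 5.0, np.pi / 2), ego_pose=(0.0, 0.0, 0.0)))
```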
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111524574.5A CN114386481A (en) | 2021-12-14 | 2021-12-14 | Vehicle perception information fusion method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114386481A true CN114386481A (en) | 2022-04-22 |
Family
ID=81195176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111524574.5A Pending CN114386481A (en) | 2021-12-14 | 2021-12-14 | Vehicle perception information fusion method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114386481A (en) |
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631414A (en) * | 2015-12-23 | 2016-06-01 | 上海理工大学 | Vehicle-borne multi-obstacle classification device and method based on Bayes classifier |
US20190375425A1 (en) * | 2018-06-06 | 2019-12-12 | Metawave Corporation | Geographically disparate sensor fusion for enhanced target detection and identification in autonomous vehicles |
US20190384292A1 (en) * | 2018-06-15 | 2019-12-19 | Allstate Insurance Company | Processing System For Evaluating Autonomous Vehicle Control Systems Through Continuous Learning |
CN109143215A (en) * | 2018-08-28 | 2019-01-04 | 重庆邮电大学 | It is a kind of that source of early warning and method are cooperateed with what V2X was communicated based on binocular vision |
US20200160559A1 (en) * | 2018-11-16 | 2020-05-21 | Uatc, Llc | Multi-Task Multi-Sensor Fusion for Three-Dimensional Object Detection |
CN109635868A (en) * | 2018-12-10 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Determination method, apparatus, electronic equipment and the storage medium of barrier classification |
CN109829386A (en) * | 2019-01-04 | 2019-05-31 | 清华大学 | Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method |
US10627823B1 (en) * | 2019-01-30 | 2020-04-21 | StradVision, Inc. | Method and device for performing multiple agent sensor fusion in cooperative driving based on reinforcement learning |
EP3690817A1 (en) * | 2019-01-31 | 2020-08-05 | StradVision, Inc. | Method for providing robust object distance estimation based on camera by performing pitch calibration of camera more precisely with fusion of information acquired through camera and information acquired through v2v communication and device using the same |
CN111696373A (en) * | 2019-03-15 | 2020-09-22 | 北京图森智途科技有限公司 | Motorcade cooperative sensing method, motorcade cooperative control method and motorcade cooperative control system |
CN109996176A (en) * | 2019-05-20 | 2019-07-09 | 北京百度网讯科技有限公司 | Perception information method for amalgamation processing, device, terminal and storage medium |
WO2020257642A1 (en) * | 2019-06-21 | 2020-12-24 | Intel Corporation | For enabling collective perception in vehicular networks |
CN110606071A (en) * | 2019-09-06 | 2019-12-24 | 中国第一汽车股份有限公司 | Parking method, parking device, vehicle and storage medium |
CN113064415A (en) * | 2019-12-31 | 2021-07-02 | 华为技术有限公司 | Method and device for planning track, controller and intelligent vehicle |
CN111950501A (en) * | 2020-08-21 | 2020-11-17 | 东软睿驰汽车技术(沈阳)有限公司 | Obstacle detection method and device and electronic equipment |
CN112113578A (en) * | 2020-09-23 | 2020-12-22 | 安徽工业大学 | Obstacle motion prediction method for automatic driving vehicle |
CN112418092A (en) * | 2020-11-23 | 2021-02-26 | 中国第一汽车股份有限公司 | Fusion method, device, equipment and storage medium for obstacle perception |
CN112651359A (en) * | 2020-12-30 | 2021-04-13 | 深兰科技(上海)有限公司 | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium |
CN113335276A (en) * | 2021-07-20 | 2021-09-03 | 中国第一汽车股份有限公司 | Obstacle trajectory prediction method, obstacle trajectory prediction device, electronic device, and storage medium |
Non-Patent Citations (2)
Title |
---|
WEI Zhiqing et al.: "Challenges and Trends of Sensing-Communication-Computing Converged Intelligent Vehicular Networks" (感知-通信-计算融合的智能车联网挑战与趋势), ZTE Technology Journal (《中兴通讯技术》), vol. 26, no. 01, 29 February 2020 (2020-02-29), pages 45 - 49 *
LU Feng et al.: "Multi-View Fusion Object Detection and Recognition Based on DSmT Theory" (基于DSmT理论的多视角融合目标检测识别), Robot (《机器人》), vol. 40, no. 05, 12 February 2018 (2018-02-12), pages 723 - 733 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115438712A (en) * | 2022-07-26 | 2022-12-06 | 中智行(苏州)科技有限公司 | Perception fusion method, device and equipment based on convolution neural network and vehicle-road cooperation and storage medium |
CN115438712B (en) * | 2022-07-26 | 2024-09-06 | 中国电信股份有限公司 | Awareness fusion method, device, equipment and storage medium based on cooperation of convolutional neural network and vehicle road |
CN115249355A (en) * | 2022-09-22 | 2022-10-28 | 杭州枕石智能科技有限公司 | Object association method, device and computer-readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112861653B (en) | Method, system, equipment and storage medium for detecting fused image and point cloud information | |
EP3627180B1 (en) | Sensor calibration method and device, computer device, medium, and vehicle | |
CN110163930B (en) | Lane line generation method, device, equipment, system and readable storage medium | |
CN109188457B (en) | Object detection frame generation method, device, equipment, storage medium and vehicle | |
CN109271944B (en) | Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium | |
CN109214980B (en) | Three-dimensional attitude estimation method, three-dimensional attitude estimation device, three-dimensional attitude estimation equipment and computer storage medium | |
CN110095752B (en) | Positioning method, apparatus, device and medium | |
CN109582880B (en) | Interest point information processing method, device, terminal and storage medium | |
CN109931945B (en) | AR navigation method, device, equipment and storage medium | |
JP7422105B2 (en) | Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device | |
CN114386481A (en) | Vehicle perception information fusion method, device, equipment and storage medium | |
CN113793370B (en) | Three-dimensional point cloud registration method and device, electronic equipment and readable medium | |
CN112650300A (en) | Unmanned aerial vehicle obstacle avoidance method and device | |
CN109635868B (en) | Method and device for determining obstacle type, electronic device and storage medium | |
CN115817463B (en) | Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium | |
CN116844129A (en) | Road side target detection method, system and device for multi-mode feature alignment fusion | |
CN113297958A (en) | Automatic labeling method and device, electronic equipment and storage medium | |
CN115578516A (en) | Three-dimensional imaging method, device, equipment and storage medium | |
CN113222968B (en) | Detection method, system, equipment and storage medium fusing millimeter waves and images | |
CN112668596B (en) | Three-dimensional object recognition method and device, recognition model training method and device | |
CN114627438A (en) | Target detection model generation method, target detection method, device and medium | |
WO2021189420A1 (en) | Data processing method and device | |
CN115866229B (en) | Viewing angle conversion method, device, equipment and medium for multi-viewing angle image | |
CN114429631B (en) | Three-dimensional object detection method, device, equipment and storage medium | |
CN116642490A (en) | Visual positioning navigation method based on hybrid map, robot and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||