CN112883846A - Three-dimensional data acquisition imaging system for detecting vehicle front target
- Publication number: CN112883846A
- Application number: CN202110143593.7A
- Authority: CN (China)
- Prior art keywords: fusion, module, data, target, time
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06F18/2415 — Classification techniques relating to the classification model, based on parametric or probabilistic models
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes
Abstract
The invention discloses a three-dimensional data acquisition imaging system for detecting a target in front of a vehicle, and relates to the technical field of three-dimensional data acquisition imaging systems; its purpose is to improve the quality of three-dimensional data acquisition imaging. The system comprises a target identification module and a radar and vision sensor information fusion module. The target identification module comprises a deep convolutional neural network feature learning module, a multi-feature-fusion target identification module and an automatic target tracking module; the radar and vision sensor information fusion module comprises a data space-time registration module for the laser radar and the vision sensor, a target detection data fusion module and an inter-sensor information control module, and the radar and vision sensor fusion is divided into data-level fusion, feature-level fusion and decision-level fusion. Through the convolutional neural network, the invention can learn the mapping between input and output autonomously and without supervision, so that no manual feature extraction is required.
Description
Technical Field
The invention relates to the technical field of three-dimensional data acquisition imaging systems, in particular to a three-dimensional data acquisition imaging system for detecting a target in front of a vehicle.
Background
In complex environments, it is difficult for a vehicle to identify targets (vehicles or pedestrians) effectively by relying on a single feature or a single identification means. Current target identification methods based on a vision sensor are mainly single-feature vehicle identification methods. Such methods are simple to implement, have short processing times and achieve high detection rates, but because they identify a vehicle by one specific feature only, their feature representation is limited; their ability to reject interference is therefore weak, and they suffer from a high false-detection rate.
A prior-art search shows that Chinese patent application No. CN201910876938.2 discloses a method and a system for detecting a vehicle ahead which can improve the reliability and accuracy of vehicle detection and positioning. The method comprises the following steps: acquiring point cloud data and an image of the vehicle ahead through a laser radar and a vision sensor, respectively, and obtaining the label types of the targets in the point cloud data and the image; extracting features from the acquired point cloud data and image with a feature extractor, and performing pixel-level mapping fusion on the extracted features.
The method and system for detecting the vehicle ahead in that patent have the following defect: during detection of the vehicle ahead, the three-dimensional data acquisition imaging cannot be output quickly, so the safety of the vehicle while driving cannot be guaranteed.
Disclosure of Invention
The invention aims to overcome the defect in the prior art that three-dimensional data acquisition imaging cannot be output quickly while the vehicle ahead is being detected, so that driving safety cannot be guaranteed, and provides a three-dimensional data acquisition imaging system for detecting a target in front of a vehicle.
In order to achieve the purpose, the invention adopts the following technical scheme:
the utility model provides a three-dimensional data acquisition imaging system for detecting vehicle place ahead target, includes target identification module, radar and vision sensor information fusion module, the target identification module includes target identification module and the target automatic tracking module of degree of depth convolution neural network algorithm feature learning module, multi-feature fusion, radar and vision sensor information fusion module include the data space-time registration module of laser radar and vision sensor, target detection data fusion module and the inter-sensor information control module group, radar and vision sensor fusion divide into data level fusion, characteristic layer fusion and decision-making layer fusion.
Preferably: the registration method of the data space-time registration module of the laser radar and the vision sensor comprises the following steps:
s1: the laser radar and the vision sensor are spatially registered;
s2: the lidar is time registered with the vision sensor.
Preferably: the spatial registration of the laser radar and the vision sensor in step S1 includes laser radar calibration and vision sensor calibration, and the laser radar calibration includes the following steps:
S11: constructing a geometric model from the distance and angle information in the raw laser radar data, mapping this information into the vehicle body coordinate system and converting it into conventional three-dimensional Cartesian coordinates;
S12: solving for the transformation matrix from the laser radar coordinate system to the vehicle body coordinate system;
S13: adjusting the pitch, roll and yaw angles of the laser radar detector, measuring its distance, height and lateral offset relative to the vehicle bumper, and converting the result into the vehicle body coordinate system to complete the laser radar calibration.
Preferably: the calibration of the vision sensor is performed on the basis of the conversions between the pixel coordinate system and the physical image coordinate system, between the camera coordinate system and the physical image coordinate system, and between the camera coordinate system and the vehicle body coordinate system, together with a distortion parameter matrix.
Preferably: the time alignment of the multi-sensor information fusion in step S2 unifies, by means of an algorithm, the asynchronous measurements that the sensors produce for the same target onto a common fusion time; registration is carried out within the same time slice, and the measurement data collected by the sensors within the time slice are interpolated and extrapolated so that the data on the high-precision time base can be computed at the low-precision time base.
Preferably: the time registration of the laser radar and the vision sensor in step S2 includes the following steps:
S21: selecting time slices of lengths Ta and Tb and partitioning the fusion time according to the motion state of the target, with time slices on the order of seconds for high-speed motion, on the order of minutes for low-speed motion and on the order of hours for a static target;
S22: sorting the measurement data obtained by the different types of sensors in increasing order of sensor accuracy;
S23: synchronizing the observation data on the high-precision time base to the observation data on the low-precision time base by interpolation and extrapolation.
Preferably: the target detection data fusion module comprises a classical ICP algorithm and a multi-frame ICP algorithm, and the multi-frame ICP algorithm comprises the following steps:
S31: acquiring an initial detection value through a sensor and the corresponding detection algorithm, and transmitting it to the terminal via a T-Box;
S32: taking the laser radar target detection point set as the source, taking the image detection data as the target point set, and fusing multi-frame data to remove noise points, thereby obtaining matched initial point sets.
On the basis of the scheme: the inter-sensor information control module comprises a temperature sensor, a GPS sensor and a speed sensor.
On the basis of the foregoing scheme, it is preferable that: the deep convolutional neural network feature learning module comprises a sample library and a deep convolutional neural network model; the data of the sample library are collected on the basis of the MIT and INRIA vehicle identification databases; each convolutional layer of the deep convolutional neural network model consists of a plurality of feature maps, all neurons in one feature map share the parameters of the same convolution kernel, and each convolution kernel is convolved with the input image of the previous layer to produce its feature map.
It is further preferable on the basis of the foregoing scheme that: the multi-feature-fusion target identification module performs multi-feature-fusion vehicle identification based on Dempster-Shafer theory and on the Choquet integral.
The invention has the beneficial effects that:
1. The mapping between input and output can be learned autonomously and without supervision by the convolutional neural network, so no manual feature extraction is needed; on the basis of clearly classified samples, the desired output can be obtained by training the convolutional neural network, and building a test sample library effectively improves the accuracy of vehicle three-dimensional data acquisition.
2. After several features are selected and expressed in fuzzified form, decisions are made with a data fusion technique. Taking vehicle identification as an example, the project selects several vehicle features, such as the vehicle symmetry feature, the tail-light feature, the horizontal-edge feature and the classifier feature based on a deep convolutional neural network, expresses them in fuzzified form, and expresses the result of each feature-based identification method as a probability, so that each feature can be applied efficiently within the feature fusion framework.
3. A mass is assigned to the event that a vehicle is present in the image, while the remaining belief is assigned to the whole frame of discernment, because the environment may still contain vehicle events to which no belief has yet been assigned. D-S evidence theory can therefore give a certain amount of belief to vehicles missed in the image, which improves the accuracy of vehicle identification. The project fuses the vehicle symmetry feature, the tail-light feature, the horizontal-edge feature and the classifier feature based on the deep convolutional neural network; a sketch of this Dempster-Shafer combination is given below.
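As a hedged illustration (not taken from the patent), the sketch below combines two such fuzzified feature outputs with Dempster's rule over the frame of discernment {vehicle, no vehicle}; the mass values and the helper function ds_combine are illustrative assumptions.

```python
# Minimal Dempster-Shafer sketch: fuse two feature-based evidences about the
# hypothesis "vehicle present in the image". Belief not committed to either
# singleton stays on "Theta" (the whole frame of discernment, i.e. ignorance).
from itertools import product

def ds_combine(m1, m2):
    """Combine two mass functions given as dicts over {'vehicle','no_vehicle','Theta'}."""
    combined = {"vehicle": 0.0, "no_vehicle": 0.0, "Theta": 0.0}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        if a == "Theta":
            focus = b                      # Theta ∩ B = B
        elif b == "Theta" or a == b:
            focus = a                      # A ∩ Theta = A, A ∩ A = A
        else:                              # contradictory singletons -> conflict
            conflict += ma * mb
            continue
        combined[focus] += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict, evidences cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example: symmetry-feature evidence and CNN-classifier evidence (made-up numbers).
m_symmetry = {"vehicle": 0.6, "no_vehicle": 0.1, "Theta": 0.3}
m_cnn      = {"vehicle": 0.8, "no_vehicle": 0.1, "Theta": 0.1}

m_fused = ds_combine(m_symmetry, m_cnn)
print(m_fused)   # belief in "vehicle" increases after fusion
```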
Drawings
Fig. 1 is an overall schematic diagram of a three-dimensional data acquisition imaging system for detecting an object in front of a vehicle according to the present invention;
FIG. 2 is a schematic diagram of an overall radar and vision sensor information fusion module of a three-dimensional data acquisition imaging system for detecting a target in front of a vehicle according to the present invention;
fig. 3 is an overall schematic diagram of an object recognition module of a three-dimensional data acquisition imaging system for detecting an object in front of a vehicle according to the present invention.
Detailed Description
The technical solution of the present patent will be described in further detail with reference to the following embodiments.
Reference will now be made in detail to embodiments of the present patent, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present patent and are not to be construed as limiting the present patent.
In the description of this patent, it is to be understood that the terms "center," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in the orientations and positional relationships indicated in the drawings for the convenience of describing the patent and for the simplicity of description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are not to be considered limiting of the patent.
In the description of this patent, it is noted that unless otherwise specifically stated or limited, the terms "mounted," "connected," and "disposed" are to be construed broadly and can include, for example, fixedly connected, disposed, detachably connected, disposed, or integrally connected and disposed. The specific meaning of the above terms in this patent may be understood by those of ordinary skill in the art as appropriate.
A three-dimensional data acquisition imaging system for detecting a target in front of a vehicle, as shown in Fig. 1, comprises a target identification module and a radar and vision sensor information fusion module. The target identification module comprises a deep convolutional neural network feature learning module, a multi-feature-fusion target identification module and an automatic target tracking module. The radar and vision sensor information fusion module comprises a data space-time registration module for the laser radar and the vision sensor, a target detection data fusion module and an inter-sensor information control module, and the radar and vision sensor fusion is divided into data-level fusion, feature-level fusion and decision-level fusion.
The registration method of the data space-time registration module of the laser radar and the vision sensor comprises the following steps:
s1: the laser radar and the vision sensor are spatially registered;
s2: the lidar is time registered with the vision sensor.
The spatial registration of the laser radar and the vision sensor in step S1 includes laser radar calibration and vision sensor calibration, and the laser radar calibration includes the following steps (a coordinate-conversion sketch is given after the list):
S11: constructing a geometric model from the distance and angle information in the raw laser radar data, mapping this information into the vehicle body coordinate system and converting it into conventional three-dimensional Cartesian coordinates;
S12: solving for the transformation matrix from the laser radar coordinate system to the vehicle body coordinate system;
S13: adjusting the pitch, roll and yaw angles of the laser radar detector, measuring its distance, height and lateral offset relative to the vehicle bumper, and converting the result into the vehicle body coordinate system to complete the laser radar calibration.
The calibration of the vision sensor is performed on the basis of the conversions between the pixel coordinate system and the physical image coordinate system, between the camera coordinate system and the physical image coordinate system, and between the camera coordinate system and the vehicle body coordinate system, together with a distortion parameter matrix.
By converting between the stereo measurements formed by the laser radar and the planar image formed by the vision sensor in this way, the spatial registration of the laser radar and the vision sensor is achieved; a pinhole-projection sketch of these coordinate conversions follows.
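A hedged sketch of the coordinate conversions behind the vision sensor calibration; the intrinsic matrix, distortion coefficients and camera extrinsics below are assumed placeholder values, not calibrated parameters from the patent.

```python
# Map a point in the vehicle body frame into the camera frame and then into
# distorted pixel coordinates (pinhole model with radial/tangential distortion).
import numpy as np

K = np.array([[800.0,   0.0, 640.0],      # fx, 0, cx
              [  0.0, 800.0, 360.0],      # 0, fy, cy
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0])  # k1, k2, p1, p2 (assumed values)

def project_to_pixel(p_cam):
    """Camera-frame 3-D point -> distorted pixel coordinates."""
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]
    r2 = x * x + y * y
    k1, k2, p1, p2 = dist
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u, v, _ = K @ np.array([xd, yd, 1.0])
    return np.array([u, v])

# Assumed extrinsics (vehicle body -> camera): camera 1.4 m above the body origin,
# looking forward; used here to project a point already expressed in the body frame.
R_cam_body = np.array([[0, -1, 0], [0, 0, -1], [1, 0, 0]], dtype=float)
t_cam_body = np.array([0.0, 1.4, 0.0])

p_body = np.array([15.0, -0.5, 0.8])             # a point 15 m ahead of the vehicle
pixel = project_to_pixel(R_cam_body @ p_body + t_cam_body)
```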
The time alignment of the multi-sensor information fusion in step S2 unifies, by means of an algorithm, the asynchronous measurements that the sensors produce for the same target onto a common fusion time; registration is carried out within the same time slice, and the measurement data collected by the sensors within the time slice are interpolated and extrapolated so that the data on the high-precision time base can be computed at the low-precision time base.
The time registration of the laser radar and the vision sensor in step S2 includes the following steps (a small interpolation sketch follows the list):
S21: selecting time slices of lengths Ta and Tb and partitioning the fusion time according to the motion state of the target, with time slices on the order of seconds for high-speed motion, on the order of minutes for low-speed motion and on the order of hours for a static target;
S22: sorting the measurement data obtained by the different types of sensors in increasing order of sensor accuracy;
S23: synchronizing the observation data on the high-precision time base to the observation data on the low-precision time base by interpolation and extrapolation.
The target detection data fusion module comprises a classical ICP algorithm and a multi-frame ICP algorithm, and the multi-frame ICP algorithm comprises the following steps (a single ICP alignment step is sketched after the list):
S31: acquiring an initial detection value through a sensor and the corresponding detection algorithm, and transmitting it to the terminal via a T-Box;
S32: taking the laser radar target detection point set as the source, taking the image detection data as the target point set, and fusing multi-frame data to remove noise points, thereby obtaining matched initial point sets.
The inter-sensor information control module comprises a temperature sensor, a GPS sensor and a speed sensor.
The deep convolutional neural network feature learning module comprises a sample library and a deep convolutional neural network model; the data of the sample library are collected on the basis of the MIT and INRIA vehicle identification databases; each convolutional layer of the deep convolutional neural network model consists of a plurality of feature maps, all neurons in one feature map share the parameters of the same convolution kernel, and each convolution kernel is convolved with the input image of the previous layer to produce its feature map, as sketched below.
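A minimal sketch, assuming a 64x64 input size and a two-class vehicle/non-vehicle head that the patent does not specify, of a convolutional feature-learning model in which every feature map is produced by one shared convolution kernel applied to the previous layer's input.

```python
# Small convolutional feature-learning model: each Conv2d layer holds a set of
# shared kernels, and every kernel convolved with the previous layer's input
# yields one feature map. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VehicleFeatureCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),   # 16 shared kernels -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),          # assumes 64x64 input crops
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Sample crops from a vehicle sample library (e.g. MIT/INRIA-style 64x64 images).
batch = torch.randn(8, 3, 64, 64)
logits = VehicleFeatureCNN()(batch)   # per-image vehicle / non-vehicle scores
```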
The multi-feature-fusion target identification module performs multi-feature-fusion vehicle identification based on Dempster-Shafer theory and on the Choquet integral.
The automatic target tracking module uses a high-speed DSP chip to compute frame differences on the image, can automatically identify the direction in which an object moves within the field of view, and automatically controls the pan-tilt platform to track the moving object. An automatic zoom lens assists the tracking, and from the moment the target object enters the field of view of the intelligent tracking dome camera until it leaves, every action of the object is transmitted to the monitoring centre as a clear close-up; a frame-differencing sketch is given below.
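A hedged sketch of the frame-differencing idea: differencing two consecutive frames highlights moving pixels, and the centroid of the moving region indicates the direction in which a pan-tilt controller would steer; the threshold value and the synthetic frames are illustrative assumptions.

```python
# Frame differencing: detect the centroid of moving pixels between two grayscale
# frames. OpenCV is assumed only for convenience; the threshold is illustrative.
import numpy as np
import cv2

def motion_centroid(prev_gray, curr_gray, thresh=25):
    """Return the (x, y) centroid of moving pixels between two frames, or None."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Two synthetic frames: a bright block moves 10 pixels to the right.
prev_frame = np.zeros((240, 320), dtype=np.uint8)
curr_frame = np.zeros((240, 320), dtype=np.uint8)
prev_frame[100:120, 100:130] = 200
curr_frame[100:120, 110:140] = 200

centroid = motion_centroid(prev_frame, curr_frame)
# A controller would compare the centroid with the image centre and pan/tilt toward it.
```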
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification of the technical solution and inventive concept of the present invention that a person skilled in the art could conceive within the technical scope disclosed herein shall fall within the scope of protection of the present invention.
Claims (10)
1. A three-dimensional data acquisition imaging system for detecting a target in front of a vehicle, comprising a target identification module and a radar and vision sensor information fusion module, characterized in that the target identification module comprises a deep convolutional neural network feature learning module, a multi-feature-fusion target identification module and an automatic target tracking module; the radar and vision sensor information fusion module comprises a data space-time registration module for the laser radar and the vision sensor, a target detection data fusion module and an inter-sensor information control module; and the radar and vision sensor fusion is divided into data-level fusion, feature-level fusion and decision-level fusion.
2. The three-dimensional data acquisition imaging system for detecting the target in front of the vehicle as claimed in claim 1, wherein the registration method of the data space-time registration module of the laser radar and the vision sensor comprises the following steps:
s1: the laser radar and the vision sensor are spatially registered;
s2: the lidar is time registered with the vision sensor.
3. The system of claim 2, wherein the spatial registration of the laser radar and the vision sensor in step S1 includes laser radar calibration and vision sensor calibration, and the laser radar calibration comprises the following steps:
S11: constructing a geometric model from the distance and angle information in the raw laser radar data, mapping this information into the vehicle body coordinate system and converting it into conventional three-dimensional Cartesian coordinates;
S12: solving for the transformation matrix from the laser radar coordinate system to the vehicle body coordinate system;
S13: adjusting the pitch, roll and yaw angles of the laser radar detector, measuring its distance, height and lateral offset relative to the vehicle bumper, and converting the result into the vehicle body coordinate system to complete the laser radar calibration.
4. The system of claim 3, wherein the calibration of the vision sensor is performed on the basis of the conversions between the pixel coordinate system and the physical image coordinate system, between the camera coordinate system and the physical image coordinate system, and between the camera coordinate system and the vehicle body coordinate system, together with a distortion parameter matrix.
5. The system of claim 2, wherein the time alignment of the multi-sensor information fusion in step S2 unifies, by means of an algorithm, the asynchronous measurements that the sensors produce for the same target onto a common fusion time; registration is carried out within the same time slice, and the measurement data collected by the sensors within the time slice are interpolated and extrapolated so that the data on the high-precision time base can be computed at the low-precision time base.
6. The system of claim 2, wherein the time registration of the laser radar and the vision sensor in step S2 comprises the following steps:
S21: selecting time slices of lengths Ta and Tb and partitioning the fusion time according to the motion state of the target, with time slices on the order of seconds for high-speed motion, on the order of minutes for low-speed motion and on the order of hours for a static target;
S22: sorting the measurement data obtained by the different types of sensors in increasing order of sensor accuracy;
S23: synchronizing the observation data on the high-precision time base to the observation data on the low-precision time base by interpolation and extrapolation.
7. The system of claim 1, wherein the target detection data fusion module comprises a classical ICP algorithm and a multi-frame ICP algorithm, and the multi-frame ICP algorithm comprises the following steps:
S31: acquiring an initial detection value through a sensor and the corresponding detection algorithm, and transmitting it to the terminal via a T-Box;
S32: taking the laser radar target detection point set as the source, taking the image detection data as the target point set, and fusing multi-frame data to remove noise points, thereby obtaining matched initial point sets.
8. The system of claim 1, wherein the inter-sensor information control module comprises a temperature sensor, a GPS sensor and a speed sensor.
9. The system of claim 1, wherein the deep convolutional neural network feature learning module comprises a sample library and a deep convolutional neural network model; the data of the sample library are collected on the basis of the MIT and INRIA vehicle identification databases; each convolutional layer of the deep convolutional neural network model consists of a plurality of feature maps, all neurons in one feature map share the parameters of the same convolution kernel, and each convolution kernel is convolved with the input image of the previous layer to produce its feature map.
10. The system of claim 1, wherein the multi-feature-fusion target identification module performs multi-feature-fusion vehicle identification based on Dempster-Shafer theory and on the Choquet integral.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110143593.7A | 2021-02-02 | 2021-02-02 | Three-dimensional data acquisition imaging system for detecting vehicle front target
Publications (1)

Publication Number | Publication Date
---|---
CN112883846A (en) | 2021-06-01
Family

ID=76055760

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110143593.7A (Pending) | Three-dimensional data acquisition imaging system for detecting vehicle front target | 2021-02-02 | 2021-02-02

Country Status (1)

Country | Link
---|---
CN (1) | CN112883846A (en)
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114295139A (en) * | 2021-12-14 | 2022-04-08 | 武汉依迅北斗时空技术股份有限公司 | Cooperative sensing positioning method and system |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210601