CN110414418B - Road detection method for multi-scale fusion of image-laser radar image data - Google Patents
- Publication number
- CN110414418B (application CN201910677344.9A)
- Authority
- CN
- China
- Prior art keywords
- laser radar
- image
- fusion
- data
- channels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention discloses a road detection method based on multi-scale fusion of image and laser radar data, which comprises the following steps: spatially synchronizing the laser radar data and projecting it onto the corresponding image plane through a coordinate system transformation matrix; selecting three channels of the laser radar projection point cloud and converting them, through processing, into a compact laser radar projection image; feeding the two kinds of data simultaneously into two identical, relatively independent encoders for multi-scale dense fusion; and inputting the multi-scale dense fusion result into an image-laser radar fusion road detection network to obtain a pixel-level lane classification result. The LCR fusion module makes better use of the laser radar characteristics, fuses the global information of the laser radar, and consolidates the number of output channels; by constructing the image-laser radar fusion road detection network, road detection is more robust than results from single-modality data and is less affected by shadow and illumination.
Description
Technical Field
The invention relates to the technical field of computer vision and image processing, in particular to a road detection method based on multi-scale fusion of image and laser radar data.
Background
Road area detection is an important basis for intelligent vehicle driving, as it ensures the vehicle's understanding and grasp of the road environment. A human driver judges whether the road ahead is passable by identifying the drivable area in front of the vehicle and checking for obstacle areas such as pedestrians, vehicles, and potholes, then adjusts the driving speed and vehicle attitude and controls the vehicle to act accordingly. In an intelligent vehicle perception system, road area detection provides the most important basis for subsequent control strategies: once the vehicle detects the extent of the road ahead, it can perform local path planning and decision control according to its relative position in the road. For example, when the vehicle deviates within the road area, it can correct its trajectory according to the deviation amount and correct its pose according to the deviation angle.
Patent publication No. CN108985247A describes a multi-spectral image road recognition method, which uses road features in spectral images to perform road segmentation with superpixels and structure tensors. This method does not take into account the three-dimensional characteristics of the road or the spatial characteristics of road obstacles.
Patent publication No. CN107808140A describes a monocular road recognition algorithm based on image fusion, which segments the drivable road by fusing the original image data with an illumination-invariant image. Likewise, the method ignores the features of the road in the three-dimensional scene and does not take into account the geometric information of objects.
Patent publication No. CN108052933A describes a road identification method based on a convolutional neural network, which uses a classical fully convolutional neural network to segment image road regions. However, because it uses only the original image data, the method cannot cope with road segmentation in complex scenes and is sensitive to shadow and illumination.
Existing image processing methods, including deep-learning-based semantic segmentation, rely on visual information alone and are therefore easily affected by outdoor lighting changes and road shadow occlusion; they cannot effectively exploit the depth information of obstacles in the three-dimensional environment, leading to false detections and missed detections. For example, road edges, median strips, and sidewalks whose color and texture resemble the road may be included in the road detection result, causing false detections; occlusion by building shadows and changing illumination conditions cause shadowed regions within the road to be misclassified, causing missed detections.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a road detection method based on multi-scale fusion of image and laser radar data.
The purpose of the invention is achieved by the following technical scheme: a road detection method based on multi-scale fusion of image and laser radar data comprises the following steps:
spatially synchronizing the laser radar data and projecting it onto the corresponding image plane through a coordinate system transformation matrix;
selecting three channels of the laser radar projection point cloud and converting them, through processing, into a compact laser radar projection image;
feeding the two kinds of data simultaneously into two identical, relatively independent encoders for multi-scale dense fusion;
and inputting the multi-scale dense fusion result into the image-laser radar fusion road detection network to obtain a pixel-level lane classification result.
Before the detection method is carried out, training of the image-laser radar fusion road detection network must be completed.
The training of the image-laser radar fusion road detection network comprises the following contents:
projecting the laser radar data, after spatial synchronization of coordinate systems, onto the corresponding image plane, wherein the three channels of the laser radar data contain depth information in the X, Y and Z directions respectively;
taking the three channels of the laser radar projection point cloud, performing dense upsampling on each through dilation filtering, normalizing them together with the image data to the [0,1] interval, and unifying the sizes of the laser radar projection image and the original picture to a resolution of 1248 × 384;
initializing the neural network parameters with a uniform distribution over [0,1], feeding the two kinds of data into the network with a batch_size of 1, and updating the model parameters of the established cooperative network with a Cross Entropy Loss function and the Adam optimizer.
The three channels contain depth information in the X, Y and Z directions respectively.
The three channels of the laser radar projection point cloud are converted into a compact laser radar projection image through dense upsampling by dilation filtering.
The two kinds of data comprise image features and laser radar features; the laser radar data constrains the image data at different scale features through an LCR fusion module, and the LCR fusion module uses a 1 × 1 convolution to fuse the global feature expression of the laser radar projection and to consolidate the number of output channels.
The invention has the beneficial effects that: the road detection method based on multi-scale fusion of image and laser radar data can apply multi-scale feature constraints with the laser radar features while preserving the image data features, making the network more robust and reducing the probability of overfitting; the LCR fusion module makes better use of the laser radar characteristics, fuses the global information of the laser radar, and consolidates the number of output channels; by constructing the image-laser radar fusion road detection network, road detection is more robust than results from single-modality data and is less affected by shadow and illumination.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a multi-scale fusion architectural diagram;
FIG. 3 is a diagram of an LCR fusion module;
FIG. 4 is a diagram of a basic network architecture;
FIG. 5 is a diagram of the image-laser radar fusion road detection network;
FIG. 6 is a graph showing the effect of the experiment according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "upper", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings or orientations or positional relationships that the products of the present invention conventionally use, which are merely for convenience of description and simplification of description, but do not indicate or imply that the devices or elements referred to must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly: for example, as fixedly connected, detachably connected, or integrally connected; as mechanically or electrically connected; as directly connected or indirectly connected through intervening media; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art in specific cases.
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in FIG. 1, a road detection method based on multi-scale fusion of image and laser radar data comprises the following steps:
spatially synchronizing the laser radar data and projecting it onto the corresponding image plane through a coordinate system transformation matrix;
Further, the laser radar points are projected to pixel coordinates through the coordinate system transformation matrix, which combines the laser-radar-to-camera extrinsic transformation with the camera projection.
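The matrix itself is not reproduced in this text. As an illustration, the sketch below assumes KITTI-style calibration; the names `Tr_velo_to_cam`, `R_rect`, `P_rect` and the homogeneous-coordinate convention are assumptions, not taken from the patent:

```python
import numpy as np

def project_lidar_to_image(points_xyz, Tr_velo_to_cam, R_rect, P_rect):
    """Project an N x 3 lidar point cloud onto the image plane.

    Tr_velo_to_cam (3x4), R_rect (3x3) and P_rect (3x4) follow the KITTI
    calibration convention -- an assumed stand-in for the patent's matrix.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])    # N x 4 homogeneous
    cam = R_rect @ (Tr_velo_to_cam @ pts_h.T)           # 3 x N, rectified camera frame
    img_h = P_rect @ np.vstack([cam, np.ones((1, n))])  # 3 x N, homogeneous pixels
    uv = (img_h[:2] / img_h[2:3]).T                     # N x 2 pixel coordinates
    in_front = cam[2] > 0                               # keep points ahead of the camera
    return uv[in_front], points_xyz[in_front]
```

Each retained point then writes its (X, Y, Z) values into the three channels of the projection image at its (u, v) pixel, producing the sparse projection that is densified in the next step.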
as shown in fig. 2, three channels of the laser radar projection point cloud are selected to be subjected to dense up-sampling processing through extended filtering and converted into a compact laser radar projection image. Normalized to [0,1] simultaneously with the lidar image data, while the lidar projection image and the original picture were uniformly sized to 1248x 384.
Furthermore, dilation filtering increases the density of the scattered, sparse laser radar point cloud projection, forming a denser projection in which object contours are clearer and boundaries are smoother; this facilitates feature learning by the neural network and improves the road segmentation MIoU.
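A minimal sketch of this densification step, assuming OpenCV morphological dilation; the 5 × 5 kernel size is an illustrative choice, not a value from the patent:

```python
import cv2
import numpy as np

def densify_projection(sparse_xyz):
    """Densify a sparse H x W x 3 float32 lidar projection (X, Y, Z channels)
    by morphological dilation, resize to 1248 x 384, and normalize to [0, 1]."""
    kernel = np.ones((5, 5), np.uint8)             # illustrative kernel size
    dense = cv2.dilate(sparse_xyz, kernel)         # fill gaps between projected points
    dense = cv2.resize(dense, (1248, 384))         # (width, height) per the patent
    flat = dense.reshape(-1, 3)
    mins, maxs = flat.min(axis=0), flat.max(axis=0)
    return (dense - mins) / (maxs - mins + 1e-8)   # per-channel [0, 1] normalization
```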
The two kinds of data are fed simultaneously into two identical, relatively independent encoders for multi-scale dense fusion;
and the multi-scale dense fusion result is input into the image-laser radar fusion road detection network to obtain a pixel-level lane classification result.
Before the detection method is carried out, training of the image-laser radar fusion road detection network must be completed.
The training of the image-laser radar fusion road detection network comprises the following contents:
the model respectively passes a visual image and a laser radar projection image through two paths of encoders on the basis of a UNet encoder, wherein each path is (R, G, B) and (X, Y, Z) three-channel data. And (3) utilizing a laser radar projection characteristic diagram to restrain image characteristics in each downsampling layer, and finally recovering the fusion data into the original image size and the channel number through a symmetrical deconvolution structure.
Training:
1) Project the laser radar data, after spatial synchronization of coordinate systems, onto the corresponding image plane; the three channels of the laser radar data contain depth information in the X, Y and Z directions respectively.
2) Take the three channels of the laser radar projection point cloud, perform dense upsampling on each through dilation filtering, normalize them together with the image data to the [0,1] interval, and unify the sizes of the laser radar projection image and the original picture to a resolution of 1248 × 384.
3) Initialize the neural network parameters with a uniform distribution over [0,1], feed the two kinds of data into the network with a batch_size of 1, and update the model parameters of the established cooperative network with a Cross Entropy Loss function and the Adam optimizer; a minimal sketch of this setup follows.
The three channels contain depth information in the X, Y and Z directions respectively.
As shown in FIG. 3, the two kinds of data comprise image features and laser radar features; the laser radar data constrains the image data at different scale features through the LCR fusion module, and the LCR fusion module uses a 1 × 1 convolution to fuse the global feature expression of the laser radar projection and to consolidate the number of output channels.
Further, the LCR fusion module uses a 1 × 1 convolution to consolidate the number of output feature channels and fuse the laser radar global information representation. Compared with a 3 × 3 convolution, this operation significantly reduces the number of parameters and lowers the risk of overfitting; the 1 × 1 convolution can be understood as a fully connected layer across channels, expressing global information. The comparison below makes this saving concrete.
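For a fusion layer mapping 2C input channels to C output channels, a 1 × 1 convolution holds 2C² + C weights against 18C² + C for a 3 × 3 convolution, roughly a 9× reduction. A quick check with an illustrative width C = 128 (an assumption, not a value from the patent):

```python
import torch.nn as nn

c = 128                                            # illustrative channel width
conv1x1 = nn.Conv2d(2 * c, c, kernel_size=1)
conv3x3 = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)
n1 = sum(p.numel() for p in conv1x1.parameters())  # 2*c*c + c = 32,896
n3 = sum(p.numel() for p in conv3x3.parameters())  # 18*c*c + c = 295,040
print(n1, n3, n3 / n1)                             # ~9x fewer weights for the 1x1 fusion
```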
As shown in FIG. 4 and FIG. 5, a fully convolutional road segmentation model is constructed according to the multi-scale dense fusion scheme: fusion information is built at the three downsampling layers of an FCN network, and three deconvolution layers restore the original image size, yielding pixel-level lane classification. The neural network parameters are initialized with a uniform distribution over [0,1], the two kinds of data are fed into the network with a batch_size of 1, and the model parameters of the established cooperative network are updated with a Cross Entropy Loss function and the Adam optimizer.
Furthermore, the LCR module fuses the data of the two modalities at different scales of the network, so that their characteristics are exploited more fully and combined more closely. The two modalities do not affect each other's parameters during encoder feature extraction; the two kinds of features differ from and promote each other, producing a regularization effect so that the feature learning of both sides moves toward stronger generalization. The network input size is 1248 × 384, with 3 channels for each of the two inputs; Adam is selected as the optimization method, the initial learning rate is 0.0001, and the batch size is set to 1. The learning rate adjustment strategy is: multiply the learning rate by 0.5 when the loss has not decreased after 4 validation rounds (sketched below). The network weights are randomly initialized.
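A sketch of this schedule with PyTorch's ReduceLROnPlateau, continuing the training setup above; `train_one_epoch`, `validate`, and the epoch count are hypothetical, not from the patent:

```python
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Multiply the learning rate by 0.5 when the validation loss has not
# decreased for 4 validation rounds, per the strategy described above.
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=4)

for epoch in range(50):                        # epoch count is an illustrative choice
    train_one_epoch(model, loader, optimizer)  # hypothetical helper
    val_loss = validate(model, val_loader)     # hypothetical helper
    scheduler.step(val_loss)
```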
As shown in FIG. 6, after processing with the present method, the final result is less affected by shadow and illumination: sidewalk areas with similar color and texture, and road areas affected by shadow and illumination, are constrained by the three-dimensional object information contained in the laser radar, so the segmentation is more robust than with single image data. Compared with other fusion schemes, such as early fusion and late fusion, the multi-scale fusion scheme fuses the data more thoroughly and therefore yields superior segmentation.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structural or equivalent process modifications made using the contents of the present specification and drawings, applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.
Claims (4)
1. A road detection method based on multi-scale fusion of image and laser radar data, characterized in that the detection method comprises the following steps:
spatially synchronizing the laser radar data and projecting it onto the corresponding image plane through a coordinate system transformation matrix;
selecting three channels of the laser radar projection point cloud and converting them, through processing, into a compact laser radar projection image;
feeding the two kinds of data simultaneously into two identical, relatively independent encoders for multi-scale dense fusion;
inputting the multi-scale dense fusion result into an image-laser radar fusion road detection network to obtain a pixel-level lane classification result;
before the detection method is carried out, training of the image-laser radar fusion road detection network must be completed;
the training of the image-laser radar fusion road detection network comprises the following contents:
on the basis of a UNet encoder, the model passes the visual image and the laser radar projection image through two separate encoder paths, each taking three-channel data: (R, G, B) for the image and (X, Y, Z) for the laser radar projection;
in each downsampling layer, the laser radar projection feature map is used to constrain the image features, and finally the fused data is restored to the original image size and channel number through a symmetric deconvolution structure;
projecting the laser radar data, after spatial synchronization of coordinate systems, onto the corresponding image plane, wherein the three channels of the laser radar data contain depth information in the X, Y and Z directions respectively;
taking the three channels of the laser radar projection point cloud, performing dense upsampling on each through dilation filtering, normalizing them together with the image data to the [0,1] interval, and unifying the sizes of the laser radar projection image and the original picture to a resolution of 1248 × 384;
initializing the neural network parameters with a uniform distribution over [0,1], feeding the two kinds of data into the network with a batch_size of 1, and updating the model parameters of the established cooperative network with a Cross Entropy Loss function and the Adam optimizer.
2. The road detection method based on multi-scale fusion of image and laser radar data according to claim 1, characterized in that: the three channels contain depth information in the X, Y and Z directions respectively.
3. The road detection method based on multi-scale fusion of image and laser radar data according to claim 1, characterized in that: the three channels of the laser radar projection point cloud are converted into a compact laser radar projection image through dense upsampling by dilation filtering.
4. The road detection method based on multi-scale fusion of image and laser radar data according to claim 1, characterized in that: the two kinds of data comprise image features and laser radar features; the laser radar data constrains the image data at different scale features through an LCR fusion module, and the LCR fusion module uses a 1 × 1 convolution to fuse the global feature expression of the laser radar projection and to consolidate the number of output channels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910677344.9A CN110414418B (en) | 2019-07-25 | 2019-07-25 | Road detection method for multi-scale fusion of image-laser radar image data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910677344.9A CN110414418B (en) | 2019-07-25 | 2019-07-25 | Road detection method for multi-scale fusion of image-laser radar image data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110414418A CN110414418A (en) | 2019-11-05 |
CN110414418B (en) | 2022-06-03
Family
ID=68363264
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910677344.9A Active CN110414418B (en) | 2019-07-25 | 2019-07-25 | Road detection method for multi-scale fusion of image-laser radar image data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110414418B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274976B (en) * | 2020-01-22 | 2020-09-18 | 清华大学 | Lane detection method and system based on multi-level fusion of vision and laser radar |
CN111291676B (en) * | 2020-02-05 | 2020-12-11 | 清华大学 | Lane line detection method and device based on laser radar point cloud and camera image fusion and chip |
CN111626217B (en) * | 2020-05-28 | 2023-08-22 | 宁波博登智能科技有限公司 | Target detection and tracking method based on two-dimensional picture and three-dimensional point cloud fusion |
CN111967373B (en) * | 2020-08-14 | 2021-03-30 | 东南大学 | Self-adaptive enhanced fusion real-time instance segmentation method based on camera and laser radar |
CN112017170B (en) * | 2020-08-26 | 2024-05-14 | 广东建科交通工程质量检测中心有限公司 | Road pavement pit identification method, device and equipment based on three-dimensional shadow model |
CN111951306B (en) * | 2020-08-31 | 2024-06-07 | 华通科技有限公司 | Target detection method for fusion of laser radar and image video |
CN112433228B (en) * | 2021-01-05 | 2023-02-03 | 中国人民解放军国防科技大学 | Multi-laser radar decision-level fusion method and device for pedestrian detection |
CN113655494B (en) * | 2021-07-27 | 2024-05-10 | 上海智能网联汽车技术中心有限公司 | Road side camera and 4D millimeter wave fused target detection method, device and medium |
CN113838030B (en) * | 2021-09-24 | 2024-05-14 | 北京杰迈科技股份有限公司 | Switch state detection method |
CN114612869A (en) * | 2022-03-14 | 2022-06-10 | 合肥工业大学 | Information fusion method for road side laser radar and vehicle-mounted laser radar |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9989969B2 (en) * | 2015-01-19 | 2018-06-05 | The Regents Of The University Of Michigan | Visual localization within LIDAR maps |
US10200683B2 (en) * | 2016-12-21 | 2019-02-05 | Microvision, Inc. | Devices and methods for providing foveated scanning laser image projection with depth mapping |
- 2019-07-25: CN application CN201910677344.9A filed; granted as patent CN110414418B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009199284A (en) * | 2008-02-21 | 2009-09-03 | Univ Of Tokyo | Road object recognition method |
CN103308925A (en) * | 2013-05-31 | 2013-09-18 | 中国科学院合肥物质科学研究院 | Integral three-dimensional color laser radar data point cloud generating method and device thereof |
CN104374376A (en) * | 2014-11-05 | 2015-02-25 | 北京大学 | Vehicle-mounted three-dimensional measurement system device and application thereof |
CN108445496A (en) * | 2018-01-02 | 2018-08-24 | 北京汽车集团有限公司 | Ranging caliberating device and method, distance-measuring equipment and distance measuring method |
CN109461178A (en) * | 2018-09-10 | 2019-03-12 | 中国科学院自动化研究所 | A kind of monocular image depth estimation method and device merging sparse known label |
CN109917419A (en) * | 2019-04-12 | 2019-06-21 | 中山大学 | A kind of depth fill-in congestion system and method based on laser radar and image |
Non-Patent Citations (1)
Title |
---|
"Pedestrian Detection Combining RGB and Dense LIDAR Data";Cristiano Premebida etc;《2014 IEEE/RSJ International Conference on Intelligent Robots and Systems》;20140918;正文第4113页-4114页 * |
Also Published As
Publication number | Publication date |
---|---|
CN110414418A (en) | 2019-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110414418B (en) | Road detection method for multi-scale fusion of image-laser radar image data | |
CN110588653B (en) | Control system, control method and controller for autonomous vehicle | |
CN111798475B (en) | Indoor environment 3D semantic map construction method based on point cloud deep learning | |
CN109740465B (en) | Lane line detection algorithm based on example segmentation neural network framework | |
CN110531753B (en) | Control system, control method and controller for autonomous vehicle | |
CN111563415B (en) | Binocular vision-based three-dimensional target detection system and method | |
CN117441197A (en) | Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field | |
CN111274976A (en) | Lane detection method and system based on multi-level fusion of vision and laser radar | |
EP3264367A2 (en) | Image generating apparatus, image generating method, and recording medium | |
CN108416292B (en) | Unmanned aerial vehicle aerial image road extraction method based on deep learning | |
CN107633220A (en) | A kind of vehicle front target identification method based on convolutional neural networks | |
CN111046781B (en) | Robust three-dimensional target detection method based on ternary attention mechanism | |
CN110321877B (en) | Three-eye rearview mirror and three-eye vision safe driving method and system | |
CN107886043B (en) | Vision-aware anti-collision early warning system and method for forward-looking vehicles and pedestrians of automobile | |
CN115019043B (en) | Cross-attention mechanism-based three-dimensional object detection method based on image point cloud fusion | |
CN111967373B (en) | Self-adaptive enhanced fusion real-time instance segmentation method based on camera and laser radar | |
CN113095152A (en) | Lane line detection method and system based on regression | |
CN117975436A (en) | Three-dimensional target detection method based on multi-mode fusion and deformable attention | |
CN116188999B (en) | Small target detection method based on visible light and infrared image data fusion | |
CN117058646B (en) | Complex road target detection method based on multi-mode fusion aerial view | |
CN111461221A (en) | Multi-source sensor fusion target detection method and system for automatic driving | |
CN111339830A (en) | Target classification method based on multi-modal data features | |
US20220301099A1 (en) | Systems and methods for generating object detection labels using foveated image magnification for autonomous driving | |
CN117111055A (en) | Vehicle state sensing method based on thunder fusion | |
CN112529011B (en) | Target detection method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||