CN116229118A - Bird's eye view target detection method based on manifold matching
- Publication number: CN116229118A
- Application number: CN202310484058.7A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/74 - Image or video pattern matching; proximity measures in feature spaces
- G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V2201/07 - Target detection (indexing scheme relating to image or video recognition or understanding)
- Y02T10/40 - Engine management systems (climate change mitigation technologies related to transportation)
Abstract
A bird's-eye view target detection method based on manifold matching designs an embeddable manifold matching network from the idea of manifold matching and embeds it into the pipeline network of existing bird's-eye view target detection. The manifold matching network matches data from multiple sensors, such as images and point clouds, around the same manifold and fuses the two data types, so that the spatial and geometric information of different sensor data is better aligned, information redundancy and repetition are reduced, and the accuracy and robustness of target detection are improved.
Description
Technical Field
The invention relates to the field of target detection, and in particular to a bird's-eye view target detection method based on manifold matching.
Background
The autonomous driving perception system is one of the core components of a self-driving car. It is mainly responsible for acquiring and analyzing data from various sensors observing the surrounding environment to obtain a comprehensive understanding of the situation around the vehicle, thereby supporting correct decisions by the decision and control modules. Autonomous driving perception systems typically include a variety of sensors, such as lidar, cameras, millimeter-wave radar, GPS and inertial measurement units. These sensors provide environmental information around the vehicle, including roads and road signs, and the position and dynamics of other vehicles and pedestrians. Tasks of the perception system include target detection and tracking, lane line detection, obstacle identification and classification, landmark identification, and map matching. Lidar sensors can work normally in severe weather and provide detailed three-dimensional scene information, while cameras offer the perception system low cost and rich semantic information. However, in the autonomous driving target detection task, a single sensor has the following drawbacks:
(1) Limited sensing range: a monocular camera can only acquire image information from a single planar viewing angle, and the range that radar and lidar can cover is limited. A single sensor therefore cannot obtain omnidirectional environmental information, and some important targets may be missed.
(2) Accumulation of sensor errors: the accuracy and robustness of a single sensor are limited, and repeated acquisitions may each carry errors; these errors can accumulate and lead to inaccurate target detection results.
(3) Target occlusion: in complex traffic environments, objects may occlude one another, and a single sensor has difficulty identifying all targets accurately. For example, when a pedestrian is occluded by a car, a monocular camera may fail to detect the pedestrian.
In related research of recent years, multi-sensor fusion target detection network models largely remedy these drawbacks, using multiple sensors to acquire effective perception information and thereby improving the robustness and real-time performance of the models. At the same time, the data types and characteristics of the different sensors call for different processing, and issues such as accuracy consistency between sensors must be corrected.
Disclosure of Invention
In view of this, the disclosure provides a manifold-matching-based bird's-eye view target detection method. It applies a matching algorithm to the widely used lidar and camera data, fuses the two data sources more effectively, and improves accuracy in the target detection task.
The manifold-matching-based bird's-eye view target detection method provided by the disclosure comprises the following steps:
step S1: constructing a bird's-eye view target detection model M, wherein model M comprises a bird's-eye view generation module for each of a plurality of sensor branches, a fusion module for the bird's-eye views generated by the sensors, and a target detection module; and acquiring an original sample data set D for training;
step S2: building a manifold matching network G and embedding it into model M after the bird's-eye view fusion module to obtain a manifold-matching-based bird's-eye view target detection model M'; the network G is used for mapping data acquired by different sensors into the same manifold space;
Step S3: training the model M' on the original sample data set D; the trained model is used for bird's-eye view target detection.
Further, the bird's-eye view target detection model M comprises an image voxel branch and a point cloud voxel branch, wherein the image voxel branch is constructed from a ResNet-50 backbone network, an FPN neck network and an LSS image bird's-eye view generation network, and the point cloud voxel branch is constructed from a SECOND backbone network, a SECOND-FPN neck network and a VoxelNet point cloud bird's-eye view generation network.
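For concreteness, a minimal PyTorch sketch of the two branches is given below. The wrapper classes and the way the backbone, neck and bird's-eye view generation networks are plugged together are illustrative assumptions, not the reference implementation of the disclosure.

```python
import torch
import torch.nn as nn

class ImageVoxelBranch(nn.Module):
    """Camera branch: backbone -> neck -> BEV lifting (modules are placeholders)."""
    def __init__(self, backbone: nn.Module, neck: nn.Module, bev_lifter: nn.Module):
        super().__init__()
        self.backbone = backbone      # e.g. a ResNet-50 feature extractor
        self.neck = neck              # e.g. an FPN over the backbone stages
        self.bev_lifter = bev_lifter  # e.g. an LSS-style view transformer

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.neck(self.backbone(images))
        return self.bev_lifter(feats)  # (B, C, H_bev, W_bev)

class PointCloudVoxelBranch(nn.Module):
    """LiDAR branch: voxel encoding -> backbone -> neck, producing a BEV map."""
    def __init__(self, voxel_encoder: nn.Module, backbone: nn.Module, neck: nn.Module):
        super().__init__()
        self.voxel_encoder = voxel_encoder  # e.g. a VoxelNet-style voxel feature encoder
        self.backbone = backbone            # e.g. the SECOND sparse 3D backbone
        self.neck = neck                    # e.g. a SECOND-FPN neck

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.neck(self.backbone(self.voxel_encoder(points)))  # (B, C, H_bev, W_bev)
```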
Further, the manifold matching network G comprises a tensor matching module, which converts data from different sensors into corresponding voxel tensors and performs matching alignment of the tensor features.
Further, the tensor matching is accomplished by similarity calculation or matching matrix calculation.
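As an illustration of the two options, the following hedged sketch assumes the two voxel/BEV tensors have already been brought to the same shape; the function names and the scaled-softmax form of the matching matrix are assumptions, not the disclosure's fixed formulas.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_loss(img_feat: torch.Tensor, pc_feat: torch.Tensor) -> torch.Tensor:
    """Similarity calculation: align per-cell image and point cloud voxel features.
    img_feat, pc_feat: (B, C, H, W) voxel tensors from the two branches."""
    img = F.normalize(img_feat.flatten(2), dim=1)  # (B, C, H*W), unit channel vectors
    pc = F.normalize(pc_feat.flatten(2), dim=1)
    return 1.0 - (img * pc).sum(dim=1).mean()      # 1 - mean cosine similarity

def matching_matrix(img_feat: torch.Tensor, pc_feat: torch.Tensor) -> torch.Tensor:
    """Matching-matrix calculation: soft assignment between BEV cells of the two
    modalities; returns a (B, H*W, H*W) row-stochastic matching matrix."""
    img = img_feat.flatten(2).transpose(1, 2)      # (B, N, C)
    pc = pc_feat.flatten(2)                        # (B, C, N)
    scores = torch.bmm(img, pc) / img.shape[-1] ** 0.5  # scaled dot-product scores
    return scores.softmax(dim=-1)
```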
Further, the manifold matching network G comprises a generating network, a discriminating network and a convolution module connected in sequence, wherein the generating network comprises 1 linear layer and 7 convolutional layers, and the discriminating network comprises 1 linear layer and 6 convolutional layers.
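A minimal sketch of this layer layout follows; the disclosure fixes only the layer counts, so the kernel sizes, channel width, activations, and the use of 1x1 convolutions to realize the per-cell linear layers are all assumptions.

```python
import torch.nn as nn

def conv_stack(n_layers: int, channels: int) -> nn.Sequential:
    """n_layers 3x3 convolutions with ReLU, preserving the channel count."""
    layers = []
    for _ in range(n_layers):
        layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class ManifoldMatchingNetwork(nn.Module):
    """Generating network (1 linear + 7 conv), discriminating network (1 linear + 6 conv),
    and a convolution module whose depth is chosen per task."""
    def __init__(self, channels: int = 256, conv_module_layers: int = 3):
        super().__init__()
        # a 1x1 convolution acts as a per-BEV-cell linear layer (assumption)
        self.generator = nn.Sequential(nn.Conv2d(channels, channels, 1), conv_stack(7, channels))
        self.discriminator = nn.Sequential(nn.Conv2d(channels, channels, 1), conv_stack(6, channels))
        self.conv_module = conv_stack(conv_module_layers, channels)

    def forward(self, fused_bev):
        matched = self.generator(fused_bev)
        return self.conv_module(matched)  # discriminating network is used only during training
```

During training, the discriminating network would score the generated features against the target manifold in the usual generator/discriminator fashion; at inference only the generating network and the convolution module are on the forward path.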
A manifold-matching-based bird's-eye view target detection device corresponding to the above method comprises:
a bird's-eye view generation module for each sensor branch;
a bird's-eye view fusion module for fusing the bird's-eye views generated by the sensors;
a manifold matching network module for mapping the fused data from the different sensors into the same manifold space;
and a target detection module for extracting and detecting targets based on the output of the manifold matching network.
According to the manifold-matching-based bird's-eye view target detection method, an embeddable manifold matching network is designed from the idea of manifold matching and can be freely embedded into existing target detection pipeline networks wherever appropriate. The manifold matching network matches the image and point cloud data around the same manifold and fuses the two data types. Embedding the manifold matching network into the bird's-eye view fusion module of the constructed bird's-eye view target detection model and training on the data set yields a target detection model with higher detection accuracy.
Compared with the prior art, the beneficial effects of the present disclosure are: (1) an embeddable manifold matching network is adopted, which can be freely embedded into existing target detection pipeline networks; (2) a matching algorithm is applied to the widely used lidar and camera data, so the two data sources can be fused well and their accuracy in the target detection task improved; (3) the method can establish fitted mapping relations between different data modalities in an autonomous driving perception system, improving detection accuracy.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
Fig. 1 shows a flow chart according to an exemplary embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a manifold matching network in an exemplary embodiment according to the present disclosure.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are illustrated in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The disclosure provides a bird's-eye view target detection method based on manifold matching. Various sensors (such as a camera and a lidar) respectively acquire image information and three-dimensional spatial information of the environment, which are fused to detect surrounding targets. For multi-sensor fusion target detection, compared with traditional methods based on simple weighted fusion, the manifold-matching-based method better aligns the spatial and geometric information of different sensor data, reduces information redundancy and repetition, and improves the accuracy and robustness of target detection. The manifold matching network comprises a generating network, a discriminating network and a convolution block, where the number of layers of the convolution block can be set according to the task and requirements to improve the detection performance of the model.
The flow of an exemplary embodiment is shown in Fig. 1.
Taking a test on an MPSoC ZCU105 development board as the embedded platform as an example, and following Fig. 1, the main steps are further described:
step one: constructing a bird's eye view target detection modelAnd acquiring a nuScenes data set used for training, and dividing the data set into a training set and a testing set according to the proportion.
Step two: configuring model parameters and hyperparameters for the constructed model M, training it on the training set, and optimizing and tuning the model to obtain a well-performing bird's-eye view target detection model M.
Step three: building the manifold matching network G, which is used to match the two kinds of sensor data and improve the perception and decision-making capability of the model, and embedding it into the bird's-eye view fusion module of model M to obtain the manifold-matching-based bird's-eye view target detection model M'. Specifically, the image voxel branch and the point cloud voxel branch each generate a bird's-eye view; the two bird's-eye views are fused; the fused result passes through the manifold matching network; and the result is input to the detection head to obtain the final detection result. The manifold matching network consists of a generating network, a discriminating network and a convolution module, and the specific number of convolution layers can be chosen according to the task. In particular, the feature spaces of the image and the point cloud need to be aligned and matched during training, so the two types of data are converted into corresponding voxel tensors inside the manifold matching network, where the two tensors are feature-matched and aligned.
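Putting the pieces together, the forward pass of step three could look like the following sketch; the concatenation-plus-1x1-convolution fusion and the detection head interface are assumptions built on the branch and network sketches above.

```python
import torch
import torch.nn as nn

class BEVManifoldDetector(nn.Module):
    """Pipeline: two BEV branches -> fusion -> manifold matching network -> detection head."""
    def __init__(self, img_branch: nn.Module, pc_branch: nn.Module,
                 manifold_net: nn.Module, det_head: nn.Module, channels: int = 256):
        super().__init__()
        self.img_branch = img_branch
        self.pc_branch = pc_branch
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # assumed concat + 1x1 fusion
        self.manifold_net = manifold_net
        self.det_head = det_head

    def forward(self, images: torch.Tensor, points: torch.Tensor):
        bev_img = self.img_branch(images)                  # (B, C, H, W)
        bev_pc = self.pc_branch(points)                    # (B, C, H, W)
        fused = self.fuse(torch.cat([bev_img, bev_pc], dim=1))
        matched = self.manifold_net(fused)                 # align on the shared manifold
        return self.det_head(matched)                      # boxes, classes, scores
```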
Step four: configuring model parameters and hyperparameters for model M', training it on the training set, and optimizing and tuning to obtain the trained model M'.
Step five: evaluating the target detection accuracy of the trained models M and M' on the test set, and comparing the best result against the bird's-eye view target detection network without the embedded manifold matching network. The evaluation results show that the average detection accuracy of model M' is higher than that of model M, i.e. model M' achieves better performance on the same training and test sets.
In the disclosure, the manifold matching network constructed in step three is embedded into the bird's-eye view target detection model constructed in step one; for the tensor matching and alignment of step four, similarity calculation and matching-matrix calculation are needed in the network.
In the disclosure, the manifold matching network constructed in step three maps the point cloud data and the image data acquired by different sensors into the same manifold space, so that the information from different sensors can be fused organically. In this manifold space, feature representations are extracted from the point cloud data and the image data; these features represent not only the data's own information but also the positional relationships of the data within the manifold space.
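As one way to picture this shared space, the sketch below projects the per-cell features of both modalities onto a unit hypersphere serving as the common manifold; the choice of hypersphere and the projection heads are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManifoldProjector(nn.Module):
    """Map image and point cloud BEV features into one manifold space;
    here the manifold is assumed to be the unit hypersphere."""
    def __init__(self, channels: int = 256, dim: int = 128):
        super().__init__()
        self.img_head = nn.Conv2d(channels, dim, 1)  # per-cell linear projection
        self.pc_head = nn.Conv2d(channels, dim, 1)

    def forward(self, img_feat: torch.Tensor, pc_feat: torch.Tensor):
        # L2-normalise so features of both modalities lie on the same unit sphere;
        # a cell's position on the sphere then encodes its cross-modal relations
        z_img = F.normalize(self.img_head(img_feat), dim=1)
        z_pc = F.normalize(self.pc_head(pc_feat), dim=1)
        return z_img, z_pc
```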
Application and test examples:
the MPSoCZCU105 development board was used as an embedded test platform. The bird's eye view target detection performance based on manifold matching in this embodiment was tested by the following experiment.
The autonomous driving public dataset nuScenes from https://www.nuscenes.org/ was used. The basic properties of the dataset are: (a) sensors comprising 6 cameras, 1 lidar and 5 millimeter-wave radars, where the 6 cameras cover 360 degrees with overlapping fields of view at an acquisition rate of 12 Hz, the lidar has 32 beams at an acquisition rate of 20 Hz, and the millimeter-wave radars acquire at 13 Hz; (b) 1000 driving scenes covering Boston and Singapore, each scene 20 s long, with an image resolution of 1600 x 900; (c) annotated 3D boxes, category information and important attributes for 23 classes of targets in total, with the target detection task supporting detection of 10 target classes.
The experimental method is as follows:
1) Test on the test set on a GPU using the manifold-matching bird's-eye view target detection model trained on the nuScenes data set, obtaining the final evaluation results.
2) Deploy the tested manifold-matching bird's-eye view target detection model on an ARM processor through format conversion.
3) Run the tests on the autonomous driving public dataset nuScenes, with the test program written in the C++ programming language.
The experimental results are shown in Table 1, where FusionPainting, MVP and PointAugmenting are classical camera-lidar fusion methods, and MvMBev is the abbreviation of the method proposed in this disclosure. The nuScenes data set was used for training and testing. In the experiments, the manifold matching network was embedded into the bird's-eye view target detection model for training, so that the model fuses the data of the two different modalities more efficiently and accurately in the same manifold space.
Table 1. Average precision and nuScenes detection score of the manifold-matching-based bird's-eye view target detection model compared with other models

| Target detection method | NDS (nuScenes detection score) | mAP (mean average precision) |
|---|---|---|
| FusionPainting | 71.6% | 68.1% |
| MVP | 70.5% | 66.4% |
| PointAugmenting | 71.1% | 66.8% |
| MvMBev | 72.1% | 69.0% |
Table 1 lists the average precision and nuScenes detection score of the different target detection methods: the higher the average precision, the better the model detects targets, and the higher the nuScenes detection score, the stronger the model's overall performance. The experimental results show that both the nuScenes detection score and the average precision of the bird's-eye view target detection model with the embedded manifold matching network are higher than those of the other three models. The method proposed in this disclosure therefore outperforms existing common methods in target detection accuracy, and is also shown to be practical on an embedded computing platform.
The foregoing embodiments are merely exemplary embodiments of the present invention. Those skilled in the art will appreciate that variations may be made in light of the above teachings and principles of the present invention, and that these variations may be applied to other specific tasks without being limited to the manner described herein; the embodiments are therefore preferred examples and are not intended to be limiting.
Claims (6)
1. A bird's-eye view target detection method based on manifold matching, comprising the following steps:
step S1: constructing a bird's-eye view target detection model M, wherein model M comprises a bird's-eye view generation module for each of a plurality of sensor branches, a fusion module for the bird's-eye views generated by the sensors, and a target detection module; and acquiring an original sample data set D for training;
step S2: building a manifold matching network G and embedding it into model M after the bird's-eye view fusion module to obtain a manifold-matching-based bird's-eye view target detection model M'; the network G being used for mapping data acquired by different sensors into the same manifold space;
step S3: training the model M' on the original sample data set D, the trained model being used for bird's-eye view target detection.
2. The method of claim 1, wherein the bird's-eye view target detection model M comprises an image voxel branch and a point cloud voxel branch, the image voxel branch being constructed from a ResNet-50 backbone network, an FPN neck network and an LSS image bird's-eye view generation network, and the point cloud voxel branch being constructed from a SECOND backbone network, a SECOND-FPN neck network and a VoxelNet point cloud bird's-eye view generation network.
3. The method of claim 1, wherein the manifold matching network G comprises a tensor matching module for converting data from different sensors into corresponding voxel tensors and performing matching alignment of the tensor features.
4. The method according to claim 3, wherein the tensor matching is accomplished by similarity calculation or matching-matrix calculation.
5. The method of claim 3 or 4, wherein the manifold matching network G comprises a generating network, a discriminating network and a convolution module connected in sequence, wherein the generating network comprises 1 linear layer and 7 convolutional layers, and the discriminating network comprises 1 linear layer and 6 convolutional layers.
6. A manifold-matching-based bird's-eye view target detection device, comprising:
a bird's-eye view generation module for each sensor branch;
a bird's-eye view fusion module for fusing the bird's-eye views generated by the sensors;
a manifold matching network module for mapping the fused data from the different sensors into the same manifold space;
and a target detection module for extracting and detecting targets based on the output of the manifold matching network.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310484058.7A | 2023-05-04 | 2023-05-04 | Bird's eye view target detection method based on manifold matching |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310484058.7A | 2023-05-04 | 2023-05-04 | Bird's eye view target detection method based on manifold matching |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116229118A | 2023-06-06 |
Family
ID=86580849

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310484058.7A (pending) | Bird's eye view target detection method based on manifold matching | 2023-05-04 | 2023-05-04 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116229118A (en) |
2023-05-04: application CN202310484058.7A filed with CN; patent published as CN116229118A (en); status: active, pending
Patent Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113159151A * | 2021-04-12 | 2021-07-23 | University of Science and Technology of China | Multi-sensor depth fusion 3D target detection method for automatic driving |
| CN115205633A * | 2022-07-27 | 2022-10-18 | Peking University | Automatic driving multi-mode self-supervision pre-training method based on aerial view comparison learning |
| CN115953662A * | 2022-12-29 | 2023-04-11 | Signal & Communication Research Institute, China Academy of Railway Sciences | Multi-mode fusion recognition-based train operation environment obstacle sensing method |
Non-Patent Citations (7)

- HONGYU ZHOU ET AL.: "PersDet: Monocular 3D Detection in Perspective Bird's-Eye-View", arXiv
- LITEAI: "MIT Han Lab & OmniML | BEVFusion: multi-task multi-sensor fusion with a unified bird's-eye view representation", retrieved from the Internet: https://zhuanlan.zhihu.com/p/522372079
- YIN ZHOU ET AL.: "VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection", arXiv
- ZHIJIAN LIU ET AL.: "BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation", arXiv
- ZHANG Yi; XIANG Zhiyu; QIAO Chengyu; CHEN Shuya: "High-precision real-time target detection based on bird's-eye views of 3D point clouds", Robot, no. 2
- ZHAO Kun: "Research on all-weather 3D vehicle detection based on image and point cloud fusion", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 2
- HUANG Wenfeng: "Attacks and defenses for multi-modal fusion perception algorithms in autonomous driving", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2, p. 2
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118628533A * | 2024-08-13 | 2024-09-10 | Zhejiang Dahua Technology Co., Ltd. | Target tracking method and computer equipment |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20230606 |