
CN114255264B - Multi-base-station registration method and device, computer equipment and storage medium

Publication number: CN114255264B
Application number: CN202011018673.1A
Authority: CN (China)
Prior art keywords: point cloud data, target, base station, radar
Legal status: Active (granted)
Other versions: CN114255264A
Original language: Chinese (zh)
Inventors: 孟令钊, 房颜明, 王邓江, 关喜嘉
Assignee: Beijing Wanji Technology Co Ltd
Application filed by Beijing Wanji Technology Co Ltd; priority to CN202011018673.1A; application published as CN114255264A; grant published as CN114255264B

Classifications

    • G06T 7/30 — Image analysis; determination of transform parameters for the alignment of images, i.e. image registration
    • G01S 7/497 — Details of lidar systems; means for monitoring or calibrating
    • G06T 2207/10028 — Image acquisition modality: range image, depth image, 3D point clouds
    • G06T 2207/10032 — Image acquisition modality: satellite or aerial image, remote sensing
    • G06T 2207/10044 — Image acquisition modality: radar image

Abstract

The embodiments of the present disclosure relate to a multi-base-station registration method and device, computer equipment, and a storage medium. The method obtains, for each base station, radar point cloud data and the corresponding map point cloud data. For each single base station, it iteratively matches the radar point cloud data against the corresponding map point cloud data to obtain a matching result for the position coordinates of the origin of the lidar coordinate system, adjusts the base station's original registration parameters according to that result until the output matching result meets a preset condition, and outputs the adjusted registration parameters; it then calculates the relative registration parameters between base stations from the per-station registration parameters. Because each station's registration parameters are obtained by matching high-precision map point cloud data against radar point cloud data, the lidar is calibrated with high-precision data, and the resulting registration parameters are more accurate.

Description

Multi-base-station registration method and device, computer equipment and storage medium
Technical Field
The embodiments of the present disclosure relate to the technical field of map measurement, and in particular to a multi-base-station registration method and device, computer equipment, and a storage medium.
Background
With the development of measurement technology, roadside lidar has reached mature detection capability. When several roadside lidars jointly measure a target object, their relative positions generally need to be registered so that a base station can fuse the point cloud data they output, according to those relative positions, and accurately identify the target object.
At present, the relative positions of roadside lidars are typically registered by placing a marker in the overlapping detection area of the lidars and registering them with the marker as a reference.
However, this registration method suffers from inaccurate registration.
Disclosure of Invention
The embodiments of the present disclosure provide a multi-base-station registration method and device, a computer device, and a storage medium, which can improve the registration accuracy of multiple lidars or multiple base stations.
In a first aspect, an embodiment of the present disclosure provides a method for multi-base-station registration, where the method includes:
acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
for each single base station, iteratively matching the base station's radar point cloud data against the corresponding map point cloud data to obtain a matching result for the position coordinates of the origin of the lidar coordinate system, adjusting the base station's original registration parameters according to the matching result until the output matching result meets a preset condition, and outputting the adjusted registration parameters of each base station, where the registration parameters of a base station comprise the longitude, latitude and altitude of the origin of the base station's coordinate system and its rotation angles about the longitude, latitude and altitude axes;
and calculating the relative registration parameters of the base stations according to the registration parameters of the base stations.
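A minimal sketch of the per-station loop just described, assuming numpy/scipy and a local metric frame for clarity (the patent expresses the six registration parameters geodetically); the random-perturbation adjustment rule and all names here are illustrative assumptions, not the patent's prescribed optimizer:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def match_once(radar_xyz, map_xyz, params):
    # params = (tx, ty, tz, rx, ry, rz); score the match as the mean
    # nearest-neighbor distance of the transformed radar cloud to the map.
    t, angles = params[:3], params[3:]
    R = Rotation.from_euler("xyz", angles).as_matrix()
    dists, _ = cKDTree(map_xyz).query(radar_xyz @ R.T + t)
    return dists.mean()

def register_base_station(radar_xyz, map_xyz, init_params,
                          tol=0.05, max_iters=200):
    # Iteratively adjust the original registration parameters until the
    # matching result meets the preset condition (here: mean distance < tol).
    params = np.asarray(init_params, dtype=float)
    best = match_once(radar_xyz, map_xyz, params)
    step = np.array([0.5, 0.5, 0.2, 0.02, 0.02, 0.02])
    for _ in range(max_iters):
        if best < tol:
            break
        candidate = params + np.random.normal(size=6) * step
        score = match_once(radar_xyz, map_xyz, candidate)
        if score < best:
            params, best = candidate, score
        else:
            step = step * 0.97
    return params, best

# Relative registration parameters between base stations i and j then follow
# from the absolute per-station parameters, e.g. rel_ij = params_i - params_j.
```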
In one embodiment, the manner of obtaining the map point cloud data includes:
acquiring original map point cloud data of each base station; the precision of the original map point cloud data is greater than the preset precision threshold;
and extracting, from the original map point cloud data, the map point cloud data of the area within the preset scanning range according to the actual installation position of each base station's lidar and the lidar's preset scanning range, and using it as the map point cloud data to be matched.
In one embodiment, before matching the radar point cloud data of each base station with the corresponding map point cloud data to obtain the matching result for the position coordinates of the origin of the lidar coordinate system, the method further includes:
removing dynamic radar point cloud data in the radar point cloud data to obtain static radar point cloud data;
matching the radar point cloud data of each base station with the corresponding map point cloud data to obtain the matching result for the position coordinates of the origin of the lidar coordinate system then comprises the following step:
and matching the static radar point cloud data of each base station with the corresponding map point cloud data to obtain a position coordinate matching result of the origin of the laser radar coordinate system.
In one embodiment, matching the static radar point cloud data of each base station with the corresponding map point cloud data to obtain a matching result of the position coordinate of the origin of the laser radar coordinate system, includes:
extracting features in the static radar point cloud data to obtain a first feature set; the first set of features comprises at least two first features;
extracting features in the map point cloud data to obtain a second feature set;
and matching the first characteristic set with the second characteristic set to obtain a matching result of the position coordinate of the origin of the laser radar coordinate system.
In one embodiment, matching the first feature set with the second feature set to obtain the matching result for the position coordinates of the origin of the lidar coordinate system includes:
acquiring a first projection line segment of a line segment between two first features in the first feature set on a preset plane in a radar coordinate system, and acquiring a first projection included angle between the first projection line segment and a corresponding coordinate axis;
acquiring a second projection line segment of a line segment between two second features in the second feature set on a preset plane in a geographic coordinate system to obtain a second projection included angle between the second projection line segment and a corresponding coordinate axis; the type of two second features in the second feature set is the same as the type of two first features in the first feature set;
performing difference operation on the first projection included angle and the second projection included angle to obtain a rotation angle of the origin of the laser radar coordinate system;
and obtaining a matching result of the position coordinate of the origin of the laser radar coordinate system according to the rotation angle of the origin of the laser radar coordinate system, the first characteristic set and the second characteristic set.
In one embodiment, obtaining a matching result of the position coordinate of the origin of the lidar coordinate system according to the rotation angle of the origin of the lidar coordinate system, the first feature set and the second feature set includes:
matching each first feature in the first feature set with each second feature in the second feature set to obtain a target first feature and a target second feature which belong to the same type;
correcting the position coordinates of the target first feature according to the value of the cosine function of the rotation angle, to obtain corrected position coordinates of the target first feature;
and determining the matching result for the position coordinates of the origin of the lidar coordinate system according to the corrected position coordinates of the target first feature and the position coordinates of the target second feature.
In one embodiment, the rotation angle of the lidar coordinate system origin comprises rotation angles about the longitude, latitude and altitude axes, and the position coordinates of the origin of the lidar coordinate system comprise longitude, latitude and altitude.
In a second aspect, embodiments of the present disclosure provide a method for environmental awareness, the method comprising:
acquiring relative registration parameters of each base station based on the multi-base station registration method in the first aspect;
converting the position coordinates of each base station into the coordinate system of one common base station according to the relative registration parameters of each base station, to obtain converted position coordinates for each base station;
and identifying target objects in each base station's sensing area using a preset identification method, according to each base station's converted position coordinates and radar point cloud data.
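As a sketch of the conversion step, assuming the relative registration parameters are expressed as a metric translation plus three Euler angles (an assumption; the patent states them geodetically):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def to_reference_frame(points_xyz, rel_params):
    # rel_params = (tx, ty, tz, rx, ry, rz): this base station's relative
    # registration parameters with respect to the chosen reference station.
    t = np.asarray(rel_params[:3], dtype=float)
    R = Rotation.from_euler("xyz", rel_params[3:]).as_matrix()
    return points_xyz @ R.T + t

# Fuse every station into the reference station's coordinate system before
# running the preset identification method on the merged cloud:
# merged = np.vstack([to_reference_frame(c, rel[i]) for i, c in enumerate(clouds)])
```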
In a third aspect, an embodiment of the present disclosure provides a multi-base station registration apparatus, where the apparatus includes:
the acquisition module is used for acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
and the registration module, used to iteratively match, for each single base station, the base station's radar point cloud data against the corresponding map point cloud data to obtain a matching result for the position coordinates of the origin of the lidar coordinate system, adjust the base station's original registration parameters according to the matching result until the output matching result meets a preset condition, and output the adjusted registration parameters of each base station, where the registration parameters of a base station comprise the longitude, latitude and altitude of the origin of the base station's coordinate system and its rotation angles about the longitude, latitude and altitude axes;
and the calculation module is used for calculating the relative registration parameters of all the base stations according to the registration parameters of all the base stations.
In a fourth aspect, an embodiment of the present disclosure provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method of the first aspect when executing the computer program.
In a fifth aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of the first aspect.
With the multi-base-station registration method and device, computer equipment and storage medium provided by the embodiments of the present disclosure, radar point cloud data and the corresponding map point cloud data are acquired for each base station. For each single base station, the radar point cloud data is iteratively matched against the corresponding map point cloud data to obtain a matching result for the position coordinates of the origin of the lidar coordinate system; the original registration parameters of the base station are adjusted according to the matching result until the output matching result meets a preset condition, and the adjusted registration parameters are output. The relative registration parameters between base stations are then calculated from the per-station registration parameters. Because each station's registration parameters are obtained by matching high-precision map point cloud data against radar point cloud data, the lidar is calibrated with high-precision data; the per-station registration parameters are therefore more accurate, as are the relative registration parameters derived from them, which greatly improves multi-base-station registration accuracy.
Drawings
FIG. 1 is a diagram of an application environment of a multi-base-station cooperative sensing method in an embodiment;
FIG. 2 is a schematic flow chart of a multi-base-station cooperative sensing method in an embodiment;
FIG. 3 is a schematic flow chart illustrating a method for calibrating a lidar according to one embodiment;
FIG. 4 is a flowchart illustrating the step S1003 in the embodiment of FIG. 3;
FIG. 5 is a flowchart illustrating the step S1006 in the embodiment of FIG. 4;
FIG. 6 is a flowchart illustrating the step S1006 in the embodiment of FIG. 4;
FIG. 7 is a flowchart illustrating the step S1011 in the embodiment of FIG. 6;
FIG. 8 is a flowchart illustrating the step S1002 in the embodiment of FIG. 3;
fig. 9 is a schematic flowchart of a multi-base station registration method in an embodiment;
FIG. 10 is a schematic flow chart diagram illustrating a multi-base station registration method in one embodiment;
fig. 11 is a schematic flowchart of a multi-base station registration method in an embodiment;
FIG. 12 is a flowchart illustrating the step S1209 in the embodiment of FIG. 11;
FIG. 13 is a flowchart illustrating the step S1213 in the embodiment of FIG. 12;
FIG. 14 is a flow diagram illustrating the determination of a target region of interest in one embodiment;
FIG. 15 is a flowchart illustrating the step S1303 in the embodiment of FIG. 14;
FIG. 16 is a schematic illustration of a target region of interest in one embodiment;
FIG. 17 is a flowchart illustrating the step S1306 in the embodiment of FIG. 16;
FIG. 18 is a schematic illustration of a target region of interest in one embodiment;
FIG. 19 is a flow diagram illustrating the determination of a target region of interest in one embodiment;
FIG. 20 is a flow diagram that illustrates a data processing method in one embodiment;
FIG. 21 is a flowchart illustrating the step S1402 in the embodiment of FIG. 20;
FIG. 22 is a schematic view showing the flow of step S1404 in the embodiment of FIG. 21;
FIG. 23 is a flowchart illustrating step S1406 in the embodiment of FIG. 22;
FIG. 24 is a schematic view of the flowchart of step S1407 in the embodiment of FIG. 22;
FIG. 25 is a flow diagram illustrating a data processing method according to one embodiment;
FIG. 26 is a flowchart illustrating the step S1417 in the embodiment of FIG. 25;
FIG. 27 is a flowchart illustrating a step S1419 in the embodiment of FIG. 26;
FIG. 28 is a flowchart illustrating a step S1419, shown in FIG. 26;
FIG. 29 is a flowchart illustrating a step S1419 in the embodiment of FIG. 26;
FIG. 30 is a flowchart illustrating the step S1419 in the embodiment of FIG. 26;
FIG. 31 is a flow diagram illustrating a data processing method according to one embodiment;
FIG. 32 is a schematic flow chart diagram illustrating a method for object detection in one embodiment;
FIG. 33 is a schematic flowchart of a method for object detection in another embodiment;
FIG. 34 is a schematic flow chart diagram of a target detection method in another embodiment;
FIG. 35 is a schematic flow chart diagram of a method for object detection in another embodiment;
FIG. 36 is a schematic flow chart illustrating a roadside radar positioning monitoring method according to an embodiment;
FIG. 37 is a flow diagram illustrating a process for obtaining second spatial information according to one embodiment;
FIG. 38 is a schematic illustration of an initial point cloud in one embodiment;
FIG. 39 is a flow diagram illustrating a process for obtaining first spatial information according to one embodiment;
FIG. 40 is a flowchart illustrating the determination of whether a roadside radar is abnormally located in one embodiment;
FIG. 41 is a schematic diagram illustrating a process for determining whether a roadside radar is abnormally located in accordance with another embodiment;
fig. 42 is a block diagram of the structure of a multi-base station registration apparatus in one embodiment;
FIG. 43 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The following describes technical solutions related to the embodiments of the present disclosure with reference to a scenario in which the embodiments of the present disclosure are applied.
The multi-base-station cooperative sensing method provided by the embodiments of the present disclosure can be applied in the application environment shown in Fig. 1. A server is connected to several base stations; each base station can be equipped with a lidar, the base stations can communicate with one another, and they communicate with the server over a network. Each lidar scans its surrounding environment, outputs radar point cloud data, and sends the data to the corresponding server and/or base station; the server and/or base station can process the received radar point cloud data to perform target detection, target tracking and similar processes. The lidar may be a roadside lidar or another type of lidar. The base station processes the received data and may take various forms, such as a Road Side Unit (RSU) or an edge computing device such as an edge server; the server may be implemented as a standalone server or as a cluster of servers.
In one embodiment, as shown in fig. 2, a multi-base station cooperative sensing method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
s10, receiving radar point cloud data and corresponding map point cloud data sent by a plurality of base stations; and the precision of the map point cloud data is greater than a preset precision threshold.
And S11, matching the radar point cloud data on each base station with the corresponding map point cloud data to obtain the position coordinates of each base station.
And S12, converting the radar point cloud data of each base station into a preset coordinate system according to the position coordinates of each base station and the relative position coordinates between the base stations to obtain target radar point cloud data.
S13, determining a target region of interest according to the target radar point cloud data, the first map and the second map; the first map is map point cloud data in a point cloud format, and the second map is map point cloud data in a vector format.
And S14, extracting radar point cloud data in the target interest area from the target radar point cloud data.
And S15, identifying the target object in the target interest area according to the radar point cloud data in the target interest area to obtain the characteristic information of the target object.
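Steps S13 and S14 amount to clipping the fused cloud to a target region of interest. A minimal sketch of that clipping, assuming the ROI boundary comes out of the vector-format second map as a 2-D polygon (`roi_polygon_xy` is a hypothetical input; determining the ROI itself is detailed later in the description):

```python
import numpy as np
from matplotlib.path import Path

def extract_roi_points(target_cloud, roi_polygon_xy):
    # target_cloud: fused point cloud in the common frame, shape (N, 3).
    # roi_polygon_xy: ROI boundary polygon in the same frame, shape (M, 2),
    # here assumed to come from the vector-format second map.
    inside = Path(roi_polygon_xy).contains_points(target_cloud[:, :2])
    return target_cloud[inside]
```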
The multi-base-station cooperative sensing method of this embodiment achieves wider information sensing through the cooperation of several base stations. Note that the method may be applied to a server, an edge computing device, or an integrated system comprising both, as long as the computing power of the chosen platform is sufficient for the operations involved in the method's steps; the present application does not restrict which of these the method runs on. When the method is applied to an integrated system comprising a server and edge devices, how tasks are allocated between them can be chosen flexibly according to actual requirements and device configuration, and the present application does not restrict this either.
Before a lidar is used for environment sensing or for multi-base-station cooperative sensing, the lidar of each base station in the multi-base-station system generally needs to be calibrated (registered). The following embodiments describe this calibration (registration) process in detail. (Note that either the server or the base station may calibrate the lidar, with the same method in both cases. The calibration described in this embodiment is based on map point cloud data and yields the position coordinates of the origin of the lidar coordinate system; for a lidar, or a base station with a lidar installed, it is the process of converting the acquired point cloud data into the coordinate system of the map point cloud data.)
In one embodiment, a method for calibrating a lidar is provided, as shown in fig. 3, the method comprising:
s1001, radar point cloud data of the laser radar in a preset scanning range are obtained.
The preset scanning range may be determined in advance by the server according to the identification requirements, or according to the lidar's capabilities; for example, a typical lidar has a 360° scanning range. The radar point cloud data is obtained after the lidar scans the surrounding environment. The lidar may be of various types, and when used to collect point cloud data in a road environment it may be mounted on any fixed object, e.g. on a tree trunk or on a lamp post.
Specifically, when the laser radar scans the surrounding environment within a preset scanning range, the laser radar outputs radar point cloud data and sends the radar point cloud data to a base station connected with the laser radar, the base station receives the radar point cloud data and then sends the radar point cloud data to a server, and the server identifies objects in the surrounding environment where the base station is located by analyzing the radar point cloud data.
S1002, obtaining map point cloud data of an area to be matched corresponding to a preset scanning range from the map point cloud data of the preset scanning range; and the precision of the map point cloud data is greater than a preset precision threshold.
The preset precision threshold can be determined by the server according to the actual measurement precision requirements. The precision of the map point cloud data described in this embodiment exceeds the preset precision threshold, so when the threshold is high the map point cloud data is high-precision map point cloud data. The area to be matched corresponds to the area covered by the lidar's preset scanning range. Note that the coordinate system of the map point cloud data is the world coordinate system.
Specifically, before the laser radar is calibrated, high-precision map point cloud data can be obtained, and when a preset scanning range of the laser radar is determined, the map point cloud data in the preset scanning range can be further extracted from the map point cloud data according to the preset scanning range to serve as the map point cloud data of the area to be matched, so that the radar point cloud data output by the laser radar can be registered according to the map point cloud data of the area to be matched, and the laser radar can be calibrated.
S1003, matching the map point cloud data and the radar point cloud data of the area to be matched to obtain calibration parameters of the laser radar; the map point cloud data of the area to be matched and the radar point cloud data in the preset scanning range contain the same object to be matched.
The calibration parameters comprise the longitude, latitude and altitude of the origin of the lidar coordinate system and its rotation angles about the longitude, latitude and altitude axes. Optionally, the object to be matched may be any of several object types, such as lane lines, lamp posts, and vehicles. Specifically, once the map point cloud data of the area to be matched has been obtained in the steps above, the original calibration parameters may be used to match features in that map point cloud data against features in the radar point cloud data to obtain a matching result; the original calibration parameters are then adjusted until the matching result meets a preset standard, the adjusted calibration parameters are output, and the calibration process ends. Note that matching the map point cloud data of the area to be matched against the radar point cloud data presupposes that both contain the same object to be matched, so that whether the matching result meets the preset standard can be judged on that shared object. Specifically, this can be judged by computing the coordinate distance between corresponding point cloud points on the object to be matched in the radar point cloud data and in the map point cloud data; the preset condition can then be that the coordinate distance is minimal, or that the distance is below a preset threshold.
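A minimal sketch of that preset-condition check, assuming the threshold variant and a nearest-neighbor notion of coordinate distance (scipy's cKDTree; the patent does not name a library):

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_result_ok(radar_xyz, map_xyz, dist_threshold=0.1):
    # Coordinate distance between each radar point on the object to be
    # matched and its nearest map point; the preset condition used here is
    # a mean distance below the threshold.
    dists, _ = cKDTree(map_xyz).query(radar_xyz)
    return float(dists.mean()) < dist_threshold
```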
In the lidar calibration method above, radar point cloud data within the lidar's preset scanning range is acquired; map point cloud data of the area to be matched, corresponding to the preset scanning range, is then extracted from map point cloud data whose precision exceeds a preset precision threshold; and the map point cloud data of the area to be matched is matched against the radar point cloud data to obtain the lidar's calibration parameters. Because the map point cloud data is high-precision, calibrating the lidar by matching it against the radar point cloud data means the lidar is calibrated with high-precision data, which improves calibration accuracy. Compared with the traditional approach of calibrating a lidar with dedicated positioning equipment, the calibration method of the disclosed embodiments neither loses accuracy to the limited precision of positioning equipment nor loses efficiency to its instability. The method can therefore improve both calibration accuracy and calibration efficiency.
In practical application, the radar point cloud data acquired by the laser radar can be converted into a coordinate system of map point cloud data by using the calibration parameters acquired in the calibration process. Usually, the coordinate position of the map point cloud data is based on longitude and latitude, and on this basis, after the laser radar acquires the radar point cloud data, the absolute position coordinates (longitude and latitude) can be acquired according to the calibration parameters. Since the interaction of most road scenes is based on absolute position coordinates, the calibration process is beneficial to the wide application of the laser radar.
Optionally, given that absolute position coordinates of a target can be obtained, the position coordinates of the origin of the lidar coordinate system enable the server to carry out position identification, type identification, attribute identification and similar recognition of a target object, using the object's absolute position coordinates and a corresponding recognition algorithm. This achieves target detection and can be applied in any field that needs target recognition, such as vehicle identification, obstacle identification, and road detection in the field of autonomous driving and navigation.
In an embodiment, before the step S1003 "matching the map point cloud data and the radar point cloud data of the area to be matched to obtain the calibration parameter of the laser radar", the method in the embodiment of fig. 3 further includes the steps of: and eliminating dynamic radar point cloud data in the radar point cloud data to obtain static radar point cloud data.
The dynamic radar point cloud data comprises the points belonging to moving objects in the radar point cloud data, e.g. the points of a moving vehicle. The static radar point cloud data comprises the points belonging to stationary objects, e.g. the points of a roadside lamp post.
Specifically, when the server acquires the radar point cloud data output by the lidar, it can further filter the data: it first determines which points belong to moving objects, i.e. the dynamic radar point cloud data, then removes those points from the radar point cloud data, keeping the points belonging to stationary objects, i.e. the static radar point cloud data; the server then calibrates the lidar using the static radar point cloud data.
In the embodiment above, because the objects in the static radar point cloud data are stationary, the static points acquired by the lidar carry relatively less error than the dynamic points, so calibrating the lidar against the static radar point cloud data improves calibration accuracy.
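One simple way to separate static from dynamic points, offered purely as an illustration (the patent does not specify the elimination technique), is to keep only points whose voxel stays occupied across most consecutive sweeps:

```python
import numpy as np
from collections import Counter

def static_points(frames, voxel=0.5, min_frac=0.8):
    # frames: list of (N_i, 3) radar sweeps. A voxel occupied in at least
    # min_frac of the sweeps is treated as static; everything else is
    # considered dynamic and removed from the latest sweep.
    counts = Counter()
    for f in frames:
        counts.update(set(map(tuple, np.floor(f / voxel).astype(int))))
    keep = {v for v, c in counts.items() if c >= min_frac * len(frames)}
    last = frames[-1]
    vox = map(tuple, np.floor(last / voxel).astype(int))
    mask = np.fromiter((v in keep for v in vox), dtype=bool, count=len(last))
    return last[mask]
```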
Specifically, when the server obtains the static radar point cloud data, the static radar point cloud data can be used for calibrating the laser radar. Therefore, the step S1003 specifically includes: and matching the map point cloud data and the static radar point cloud data of the area to be matched to obtain the calibration parameters of the laser radar.
Having obtained the static radar point cloud data in the steps above, the server can match the features in the static radar point cloud data against the features in the map point cloud data of the area to be matched to obtain a matching result, adjust the original calibration parameters until the matching result meets the preset standard, output the adjusted calibration parameters, and end the calibration process. Alternatively, the point cloud points themselves in the static radar point cloud data can be matched against those in the map point cloud data of the area to be matched, with the same adjust-and-output procedure. In either case the original calibration parameters may be obtained by measurement, or may be the lidar's current calibration parameters.
Building on the implementation of S1003 described in the embodiment above, the embodiments of the present disclosure further provide a specific implementation, shown in Fig. 4, which comprises the following steps:
s1004, extracting features in the static radar point cloud data to obtain a first feature set; the first set of features includes at least two first features.
The features in the static radar point cloud data represent static objects in the environment scanned by the lidar, e.g. road edges, markers around the road, trees, and lamp posts. The first features included in the first feature set may be all features extracted from the static radar point cloud data, or only part of them. For example, if the road and the lamp posts beside it are extracted from the static radar point cloud data, the corresponding first feature set comprises two first features: the road and the roadside lamp posts.
Specifically, when the server acquires the static radar point cloud data, all features in the static radar point cloud data can be extracted through an existing feature extraction algorithm, or partial features in the static radar point cloud data can be extracted according to matching requirements, so that at least two extracted features, namely a first feature set comprising at least two first features, are obtained. The feature extraction algorithm may be a neural network feature extraction algorithm, or may be other feature extraction algorithms, which is not limited herein.
S1005, extracting the features in the map point cloud data of the area to be matched to obtain a second feature set.
The features in the map point cloud data of the area to be matched represent objects within the map area, such as road edges, markers around the road, trees, and lamp posts. The second features included in the second feature set may be all features extracted from the map point cloud data, or only part of them. For example, if the road and the lamp posts beside it are extracted from the map point cloud data, the corresponding second feature set comprises two second features: the road and the roadside lamp posts.
Specifically, when the server acquires the map point cloud data, all or part of features in the map point cloud data can be acquired correspondingly; optionally, the server may also extract all features in the map point cloud data through an existing feature extraction algorithm, or extract part of features in the map point cloud data according to matching requirements, so as to obtain at least two extracted features, that is, a second feature set including at least two second features. The number of the second features included in the second feature set may be the same as or different from the number of the first features.
And S1006, matching the first characteristic set with the second characteristic set to obtain calibration parameters of the laser radar.
When the server obtains the first feature set and the second feature set based on the above steps, the server may match each first feature in the first feature set with each second feature in the second feature set to obtain a group of matched first features and second features, or multiple groups of matched first features and second features, to obtain a matching result, adjust the original calibration parameters to make the matching result meet the preset standard, output the adjusted calibration parameters, and end the calibration process. The information related to the feature may be information such as position coordinates, direction, size, and heading angle of the feature.
The embodiment disclosed above realizes calibration of the laser radar based on the matching features in the static radar point cloud data and the map point cloud data. Because the features in the map point cloud data are accurate features, the high-precision features are used for calibrating the laser radar, and the calibration accuracy can be improved. Moreover, because the features in the map point cloud data are easy to obtain, no additional equipment is needed to obtain, and compared with a method that additional positioning equipment is needed to position by using a traditional calibration method, the calibration method provided by the embodiment can also reduce calibration cost.
In one embodiment, the calibration parameters include the longitude, latitude and altitude of the origin of the coordinate system of the radar point cloud data, and its rotation angles about the longitude, latitude and altitude axes. Matching the first feature set with the second feature set in step S1006 to obtain the lidar's calibration parameters then proceeds as shown in Fig. 5:
s1007, obtaining a first projection line segment of a line segment between two first features in the first feature set on a preset plane in a radar coordinate system, and obtaining a first projection included angle between the first projection line segment and a corresponding coordinate axis.
The radar coordinate system may be a rectangular coordinate system. The preset plane is a plane spanned by any two coordinate axes of the radar coordinate system; for example, the axes of the radar coordinate system are the X, Y and Z axes, and the corresponding preset planes are the XY, YZ and XZ planes.
Specifically, when the server obtains the first feature set, any two first features may be selected from the first feature set, and then the two first features are connected in the radar coordinate system according to the position coordinates of the two first features to obtain a line segment between the two first features, and then the line segment is projected onto a preset plane to obtain a first projected line segment. And further selecting a coordinate axis corresponding to the preset plane to obtain a first projection included angle between the coordinate axis and the first projection line segment. For example, if the preset plane is an XZ plane, the coordinate axis corresponding to the XZ plane is a Z axis, and accordingly, an included angle between a first projection line segment of the two first features on the XZ plane and the corresponding Z axis is a first projection included angle; if the preset plane is a YZ plane, the coordinate axis corresponding to the YZ plane is a Y axis, and correspondingly, the included angle between the first projection line segment of the two first characteristics on the YZ plane and the corresponding Y axis is a first projection included angle; if the preset plane is an XY plane, the coordinate axis corresponding to the XY plane is an X axis, and accordingly, an included angle between the first projection line segment of the two first features on the XY plane and the corresponding X axis is a first projection included angle.
S1008, obtaining a second projection line segment of a line segment between two second features in the second feature set on a preset plane in a geographic coordinate system, and obtaining a second projection included angle between the second projection line segment and a corresponding coordinate axis; the type of the two second features in the second set of features is the same as the type of the two first features in the first set of features.
Here the preset plane is a plane spanned by any two coordinate axes of the geographic coordinate system; for example, the axes of the geographic coordinate system are the longitude, latitude and altitude axes, and the corresponding preset planes are the longitude, latitude and altitude planes.
Specifically, when the server obtains the second feature set, it may select two second features from it, connect them in the geographic coordinate system according to their position coordinates to obtain the line segment between them, and project that segment onto the preset plane to obtain the second projection line segment. A coordinate axis corresponding to the preset plane is then chosen, giving the second projection included angle between that axis and the second projection line segment. For example, if the preset plane is the altitude plane, its corresponding axis is the altitude axis, and the angle between the second projection line segment of the two second features on the altitude plane and the altitude axis is the second projection included angle; if the preset plane is the latitude plane, its corresponding axis is the latitude axis, and the angle between the second projection line segment on the latitude plane and the latitude axis is the second projection included angle; if the preset plane is the longitude plane, its corresponding axis is the longitude axis, and the angle between the second projection line segment on the longitude plane and the longitude axis is the second projection included angle. Note that the two second features selected here must be of the same types as the two first features selected from the first feature set; for example, if the two selected first features are a road and a lamp post, the two second features must correspondingly also be a road and a lamp post.
S1009, performing difference operation on the first projection included angle and the second projection included angle to obtain a rotation angle of the origin of the laser radar coordinate system; the rotation angle of the origin of the laser radar coordinate system comprises a rotation angle around longitude, a rotation angle around latitude and a rotation angle around altitude.
Having obtained the first and second projection included angles in the steps above, the server takes the difference between them and uses the resulting angle as the calibrated rotation angle of the lidar. When the line segment between the two first features, or between the two second features, is projected onto a preset plane in its own coordinate system, different preset planes (with their different corresponding coordinate axes) yield different rotation angles.
For example, if a preset plane in the radar coordinate system is an XZ plane, a coordinate axis corresponding to the XZ plane is a Z axis, a preset plane in the geographic coordinate system is an elevation plane, a coordinate axis corresponding to the elevation plane is an elevation axis, and a rotation angle of the laser radar is a rotation angle around the latitude; if the preset plane in the radar coordinate system is a YZ plane, the coordinate axis corresponding to the YZ plane is a Y axis, the preset plane in the geographic coordinate system is a latitude plane, the coordinate axis corresponding to the latitude plane is a latitude axis, and the rotation angle of the laser radar is a warp-winding degree rotation angle; if the preset plane in the radar coordinate system is an XY plane, the coordinate axis corresponding to the XY plane is an X axis, the preset plane in the geographic coordinate system is a longitude plane, the coordinate axis corresponding to the longitude plane is a longitude axis, and the rotation angle of the laser radar is the rotation angle around the altitude.
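A sketch of S1007-S1009 for one plane-axis pairing, assuming position coordinates as 3-vectors and radians throughout (axis indices and names are illustrative):

```python
import numpy as np

def projection_angle(p_a, p_b, ref_axis, other_axis):
    # Angle, measured on the plane spanned by (ref_axis, other_axis), between
    # the projection of the segment p_a -> p_b and ref_axis.
    # Axis indices: radar frame 0=X, 1=Y, 2=Z; geographic frame
    # 0=longitude, 1=latitude, 2=altitude.
    d = np.asarray(p_b, dtype=float) - np.asarray(p_a, dtype=float)
    return np.arctan2(d[other_axis], d[ref_axis])

# Rotation about latitude, per the XZ-plane / elevation-plane pairing above:
# a1 = projection_angle(f1_radar, f2_radar, ref_axis=2, other_axis=0)  # vs. Z axis
# a2 = projection_angle(f1_map, f2_map, ref_axis=2, other_axis=0)      # vs. altitude axis
# theta_about_latitude = a1 - a2
```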
In an embodiment, the step S1006 of matching the first feature set with the second feature set to obtain the calibration parameters of the lidar, as shown in fig. 6, includes:
s1010, matching each first feature in the first feature set with each second feature in the second feature set to obtain a target first feature and a target second feature which belong to the same type.
When the server obtains the first feature set and the second feature set based on the above steps, the server may screen out first features and second features of the same type from the first feature set and the second feature set, and use the screened first features as target first features and the screened second features as target second features. Of course, any type of feature may be selected in the screening as long as the type of the first feature and the second feature selected is the same. For example, a first feature belonging to a light pole type is screened from the first feature set, and a second feature belonging to a light pole type is also screened from the second feature set.
S1011, determining the position coordinate of the origin of the laser radar coordinate system according to the position coordinate of the first characteristic of the target and the position coordinate of the second characteristic of the target; the position coordinates include longitude, latitude, altitude.
Having screened out the target first feature and the target second feature in the steps above, the server can obtain their position coordinates. Because the position coordinates of the target first feature are relative quantities in the radar coordinate system while those of the target second feature are absolute quantities in the geographic coordinate system, substituting both into the corresponding radian calculation formulas yields the position coordinates of the origin of the lidar coordinate system; these position coordinates comprise longitude, latitude and altitude. In this way the position coordinates of the lidar's coordinate-system origin are computed from the same feature's coordinates in the radar coordinate system and in the geographic coordinate system, so the lidar can be calibrated with simple operations that are easy to implement.
In practical applications, when the lidar scans the surrounding environment to obtain point cloud data, it usually scans horizontally, but sometimes it does not. In that case the position coordinates of the first features extracted from the point cloud data need to be corrected, and the position coordinates of the origin of the lidar coordinate system are then determined from the corrected coordinates. This embodiment therefore further provides an implementation of S1011 above.
As shown in fig. 7, the step S1011 "of determining the position coordinates of the origin of the laser radar coordinate system based on the position coordinates of the first feature of the target and the position coordinates of the second feature of the target" includes:
s1012, substituting the rotation angle into a preset cosine function to obtain a cosine function value, and correcting the position coordinate of the first target characteristic according to the cosine function value to obtain the corrected position coordinate of the first target characteristic.
The present embodiment relates to a specific calculation method for correcting the position coordinate of the target first feature, and specifically, the rotation angle may be substituted into the following relation (1) to calculate, so as to obtain the corrected position coordinate of the target first feature:
A' = A × cos(θ)    (1)
In the formula above, A denotes a position coordinate of the target first feature and may be its X, Y or Z coordinate; A' denotes the corresponding corrected position coordinate of the target first feature; and θ denotes the rotation angle of the lidar. If A is the X coordinate, θ is the rotation angle about the altitude axis and A' is the corrected X coordinate; if A is the Y coordinate, θ is the rotation angle about the longitude axis and A' is the corrected Y coordinate; if A is the Z coordinate, θ is the rotation angle about the latitude axis and A' is the corrected Z coordinate.
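Applied per axis, relation (1) becomes a one-liner; the axis-to-angle mapping follows the text above:

```python
import numpy as np

def correct_coordinates(xyz, rot_about_altitude, rot_about_longitude, rot_about_latitude):
    # Relation (1) per axis: X is corrected with the rotation about altitude,
    # Y with the rotation about longitude, Z with the rotation about latitude.
    theta = np.array([rot_about_altitude, rot_about_longitude, rot_about_latitude])
    return np.asarray(xyz, dtype=float) * np.cos(theta)
```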
And S1013, determining the position coordinate of the origin of the laser radar coordinate system according to the corrected position coordinate of the first target feature and the corrected position coordinate of the second target feature.
After the base station corrects the position coordinate of the first feature of the target based on the above steps, the method described in S1011 may be used to determine the position coordinate of the origin of the laser radar coordinate system according to the corrected position coordinate of the first feature of the target and the position coordinate of the second feature of the target. For a specific method, reference is made to the description of S1011, which is not repeated herein.
In this way the position coordinates of the origin of the lidar coordinate system are determined from the corrected position coordinates of the target first feature and the position coordinates of the target second feature, eliminating the error introduced when the lidar acquires radar point cloud data under non-horizontal scanning and improving the accuracy of the resulting calibration parameters.
Specifically, the position coordinates of the origin of the lidar coordinate system comprise longitude, latitude and altitude. When the base station executes step S1013, it determines the longitude of the origin from the corrected X coordinate of the target first feature and the longitude coordinate of the target second feature; the latitude of the origin from the corrected Y coordinate of the target first feature and the latitude coordinate of the target second feature; and the altitude of the origin from the corrected Z coordinate of the target first feature and the altitude coordinate of the target second feature. The determination of longitude, latitude and altitude follows the description of S1011 above and is not repeated here.
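The patent does not quote its radian calculation formulas; as a stand-in, a standard local-tangent-plane approximation shows the shape of the computation (EARTH_R and the small-offset assumption are mine, not the patent's):

```python
import numpy as np

EARTH_R = 6378137.0  # WGS-84 equatorial radius in metres (assumption)

def origin_geodetic(feat_radar_xyz, feat_lon_lat_alt):
    # Subtract the feature's corrected radar-frame offset (metres, converted
    # to degrees on a local tangent plane) from its geographic coordinates
    # to place the origin of the lidar coordinate system.
    x, y, z = feat_radar_xyz
    lon, lat, alt = feat_lon_lat_alt
    lat0 = lat - np.degrees(y / EARTH_R)
    lon0 = lon - np.degrees(x / (EARTH_R * np.cos(np.radians(lat))))
    return lon0, lat0, alt - z
```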
In an embodiment, a specific implementation of S1002 above is further provided. As shown in Fig. 8, S1002, obtaining the map point cloud data of the area to be matched corresponding to the preset scanning range from the map point cloud data, includes:
and S1014, determining an initial origin according to the installation position of the laser radar.
The initial origin refers to the position coordinates of the laser radar in the map point cloud data.
Specifically, when determining which map point cloud data in the map point cloud data to extract as the map point cloud data of the area to be matched, an initial origin of the laser radar may be determined in the map point cloud data according to an actual installation position of the laser radar, so that the server may determine the area to be matched in the map point cloud data according to the initial origin. The actual installation position of the laser radar can be any position, for example, in an actual application, if the laser radar is installed on a lamp post, the lamp post is found in the map point cloud data correspondingly, and then the position of the lamp post is determined as an initial origin of the laser radar.
And S1015, selecting the map point cloud data within the preset scanning range from the map point cloud data, with the initial origin as the center, as the map point cloud data of the area to be matched.
After the server determines the initial origin of the laser radar based on the above steps, it takes the initial origin as the center, obtains the map point cloud data of the area within the preset scanning range around it, and uses the obtained map point cloud data as the map point cloud data of the area to be matched.
According to the method, the initial origin of the laser radar is determined on the high-precision map through the actual installation position of the laser radar, so that the map point cloud data of the area to be matched can correspond to the radar point cloud data of the laser radar in the preset scanning range, the accuracy of the subsequent matching result is improved, and the accuracy of the calibration process is improved.
In practical applications, when multiple roadside laser radars are used to measure a target object, they generally need to be registered with one another, so that the server can spatially synchronize the point cloud data of the multiple roadside laser radars and then perform processing such as target detection, target tracking or environment perception based on that data. However, current registration methods suffer from inaccurate registration. To solve this problem, the present application provides a multi-base-station registration method.
That is, the above S12 is a method for the server to register multiple base stations, and the following embodiment describes the process in detail as follows: in one embodiment, a multi-base station registration method is provided, as shown in fig. 9, the method includes:
S1201, acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold.
The radar point cloud data is the point cloud data obtained after the laser radar of a base station scans the surrounding environment, and is used for representing distance information of objects in that environment. The laser radar can be of various types; when it is used for collecting point cloud data in a road environment, it can be installed on any marker, for example at a predetermined roadside position, on a mounting bracket, or on a lamp post. The preset precision threshold may be determined by the base station according to the actual measurement accuracy requirement. The map point cloud data is the map point cloud data to be matched, and can be the map point cloud data of the area where the base station is located, or of the scanning area covered by the laser radar within its scanning range. The precision of the map point cloud data described in this embodiment is greater than the preset precision threshold; therefore, when the preset precision threshold is high, the map point cloud data is high-precision map point cloud data.
Specifically, the base station may obtain the map point cloud data in a database, or may obtain the map point cloud data in other manners, which is not limited herein. Meanwhile, the base station can start the laser radar to perform scanning operation of a preset scanning range on the surrounding area, so that radar point cloud data are obtained through data acquired by the laser radar.
And S1202, for each base station individually, iteratively matching its radar point cloud data with the corresponding map point cloud data to obtain a matching result of the position coordinate of the origin of the laser radar coordinate system, and adjusting the original registration parameters of each base station according to the matching result, until the output matching result meets a preset condition, then outputting the adjusted registration parameters of each base station. The registration parameters of a base station comprise the longitude, latitude and altitude of the origin of its coordinate system, and the rotation angles around longitude, around latitude and around altitude.
The radar point cloud data and the map point cloud data contain the same object to be matched. The object to be matched can represent different types of objects such as lane lines, lamp posts and vehicles. The position coordinates of the origin of the laser radar coordinate system include a longitude coordinate, a latitude coordinate and an altitude coordinate. Specifically, after obtaining the map point cloud data and the radar point cloud data based on the above steps, each base station can send the data to the server; the server then matches the features in the map point cloud data with the features of the object to be matched extracted from the radar point cloud data to obtain a matching result, adjusts the original calibration parameters until the matching result meets the preset standard, and finally outputs the adjusted registration parameters of each base station. Optionally, the position coordinates of the point clouds in the map point cloud data and in the radar point cloud data may instead be matched to obtain the matching result; the original calibration parameters are likewise adjusted until the matching result meets the preset standard, and the adjusted registration parameters of each base station are output.
After the registration parameters are obtained for each base station, the data obtained from each base station may be converted into the coordinate system of the map point cloud data (the world coordinate system) by using the registration parameters. It should be noted that, besides the laser radar, each base station may include other sensors, such as a camera and a millimeter-wave radar. Together these sensors constitute an information sensing system. The system is calibrated during operation to obtain calibration parameters, which convert the data of each sensor in the system into the same coordinate system (the spatial synchronization process), for example converting images acquired by a camera into the coordinate system of the laser radar.
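As a concrete illustration of this conversion, the following minimal Python sketch applies a six-parameter registration (translation plus three rotation angles) to an N x 3 array of radar points. It assumes the longitude, latitude and altitude components have already been expressed in a local metric frame and that the rotations compose in Z-Y-X order; the patent itself does not fix these conventions.

import numpy as np

def rotation_matrix(rx, ry, rz):
    # Elementary rotations about the X, Y and Z axes, composed Z*Y*X
    # (composition order is an assumption, not specified by the patent).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def to_world(points, reg):
    # reg = (tx, ty, tz, rx, ry, rz): position of the lidar origin and its
    # three rotation angles, both expressed in a local metric frame.
    tx, ty, tz, rx, ry, rz = reg
    R = rotation_matrix(rx, ry, rz)
    return points @ R.T + np.array([tx, ty, tz])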
And S1203, calculating relative registration parameters of the base stations according to the registration parameters of the base stations.
The relative registration parameters can be used for converting the data of other base stations into the coordinate system of the base station, so that the spatial synchronization based on the coordinate system of the base station is realized.
Specifically, after the server obtains the registration parameters of each base station based on the above steps, it may perform a difference operation on the registration parameters of the base stations to obtain the relative registration parameters between them. The server can then register the base stations with one another based on the obtained relative registration parameters.
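A minimal sketch of this difference operation, assuming each base station's registration parameters are held as a six-component vector (longitude, latitude, altitude and the three rotation angles) and that the components can be subtracted independently, which holds for small angles:

import numpy as np

def relative_registration(reg_a, reg_b):
    # Relative registration parameters of base station B with respect to
    # base station A, obtained by the difference operation described above.
    return np.asarray(reg_b) - np.asarray(reg_a)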
According to the above registration method, the radar point cloud data and corresponding map point cloud data of each base station are obtained; for each base station individually, the radar point cloud data is iteratively matched with the corresponding map point cloud data to obtain a matching result of the position coordinate of the origin of the laser radar coordinate system; the original registration parameters of each base station are adjusted according to the matching result until the output matching result meets the preset condition, and the adjusted registration parameters of each base station are output; the relative registration parameters of the base stations are then calculated from these registration parameters. Because the registration parameters of each base station are obtained by matching high-precision map point cloud data against radar point cloud data, the laser radar is calibrated with high-precision data, so the obtained registration parameters of each base station are more accurate, the relative registration parameters derived from them are in turn more accurate, and the accuracy of multi-base-station registration is greatly improved.
In an embodiment, a manner for each base station to obtain the map point cloud data is provided, as shown in fig. 10, the manner includes:
S1205, acquiring original map point cloud data of each base station; the precision of the original map point cloud data is greater than a preset precision threshold.
The original map point cloud data can be map data in an area where each base station is located, or map data of a laser radar on each base station in a preset scanning range area. Specifically, each base station may obtain respective original map point cloud data in a database, or may obtain respective original map point cloud data in other manners, which is not limited herein. The accuracy of the original map point cloud data is the same as that of the map point cloud data.
And S1206, obtaining, from the original map point cloud data, the map point cloud data of the area within a preset scanning range according to the actual installation position of the laser radar of each base station and the preset scanning range of the laser radar, and using it as the map point cloud data of that base station.
The preset scanning range may be determined by each base station in advance according to the identification requirement, or may be determined according to the performance of the lidar on each base station, for example, the preset scanning range of a general lidar is a 360 ° scanning range. Specifically, each base station may determine an initial origin of the respective lidar in the respective raw map point cloud data according to an actual installation location of the respective lidar. And then selecting map point cloud data of an area in a preset scanning range from the respective original map point cloud data by taking the initial origin as a center as the map point cloud data corresponding to each base station.
It should be noted that, in the above process, when determining which map point cloud data in the original map point cloud data is extracted as the map point cloud data, an initial origin of the lidar of each base station may be determined in the original map point cloud data according to an actual installation position of the lidar of each base station, so that each base station may determine a corresponding area of the map point cloud data in the original map point cloud data according to the initial origin. For example, in practical application, if the laser radar is installed on a lamp post, the lamp post is found in the original map point cloud data correspondingly, and then the position of the lamp post is determined as the initial origin of the laser radar. After each base station determines the initial origin of the laser radar based on the steps, corresponding map point cloud data can be obtained on an area in a preset scanning range around the initial origin by taking the initial origin as the center, and the obtained map point cloud data is used as the map point cloud data.
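The selection of map point cloud data around the initial origin can be sketched as follows (Python/numpy). A horizontal-distance test is assumed, matching a 360° scanning lidar, and scan_radius stands in for the preset scanning range:

import numpy as np

def crop_map(map_points, origin, scan_radius):
    # Keep map points whose horizontal distance to the initial origin
    # lies within the preset scanning range.
    d = np.linalg.norm(map_points[:, :2] - np.asarray(origin)[:2], axis=1)
    return map_points[d <= scan_radius]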
According to the above method, the initial origin of the laser radar is determined on the high-precision map through the actual installation position of the laser radar, so that the map point cloud data corresponds to the radar point cloud data of the laser radar within the preset scanning range; this improves the matching degree between the map point cloud data and the radar point cloud data matched later, and thereby improves the accuracy of the later calibration of the laser radar.
In an embodiment, before the step S1202 of matching the radar point cloud data of each base station with the corresponding map point cloud data to obtain the matching result of the position coordinate of the origin of the laser radar coordinate system, the method in the embodiment of fig. 9 further includes the steps of: and eliminating dynamic radar point cloud data in the radar point cloud data to obtain static radar point cloud data.
The dynamic radar point cloud data comprises the point cloud data of objects in a moving state in the radar point cloud data, such as the point cloud data of a running vehicle. The static radar point cloud data comprises the point cloud data of objects in a static state in the radar point cloud data, such as roadside lamp posts.
Specifically, when the server acquires the radar point cloud data output by the laser radars on the base stations, it can further filter the data: it first determines the point cloud data of moving objects in the radar point cloud data, namely the dynamic radar point cloud data, then removes the dynamic radar point cloud data from the radar point cloud data, and retains the point cloud data of static objects, namely the static radar point cloud data, so that the server can calibrate the laser radar on each base station according to the static radar point cloud data. It should be noted that map point cloud data generally contains only static point cloud data and therefore needs no such filtering; but if the map point cloud data obtained by a base station does contain dynamic point cloud data, this embodiment also provides a way to remove it, which is consistent with the method for removing dynamic radar point cloud data, as described above.
In the above embodiment, since the objects in the static radar point cloud data are static, the error of the static radar point cloud data obtained by the laser radar is relatively small compared with the dynamic radar point cloud data, which improves the calibration accuracy when the server calibrates the laser radar according to the static radar point cloud data.
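The patent does not mandate a particular algorithm for separating dynamic from static points. One common approach, sketched below under that assumption, keeps only the points whose voxel is occupied in nearly every frame of a short scan sequence, since moving objects occupy any given voxel only briefly:

import numpy as np

def static_points(frames, voxel=0.5, min_ratio=0.9):
    # frames: list of N x 3 point arrays from consecutive lidar scans.
    # Count, per voxel, in how many frames it is occupied.
    counts = {}
    for pts in frames:
        for key in {tuple(v) for v in np.floor(pts / voxel).astype(int)}:
            counts[key] = counts.get(key, 0) + 1
    # Keep points of the latest frame whose voxel was occupied almost always.
    last = frames[-1]
    keys = [tuple(v) for v in np.floor(last / voxel).astype(int)]
    keep = np.array([counts[k] >= min_ratio * len(frames) for k in keys])
    return last[keep]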
Specifically, when the server obtains the static radar point cloud data, the static radar point cloud data can be used for calibrating the laser radar on each base station. Therefore, the step S1202 of matching the radar point cloud data of each base station with the corresponding map point cloud data to obtain the matching result of the position coordinate of the origin of the laser radar coordinate system specifically includes: matching the static radar point cloud data with the corresponding map point cloud data to obtain the position coordinate matching result of the origin of the laser radar coordinate system.
When the server obtains the static radar point cloud data based on the steps, the server can match the features in the static radar point cloud data with the features in the map point cloud data, and then obtains the position calibration of the laser radar by analyzing the matched features; optionally, the point cloud in the static radar point cloud data and the point cloud in the map point cloud data may be matched, and the position calibration of the laser radar may be obtained by analyzing the matched point cloud.
On the basis of the implementation manner of S1202 described in the above embodiment, the embodiment of the present disclosure further provides a specific implementation of "matching the static radar point cloud data with the corresponding map point cloud data to obtain the position coordinate matching result of the origin of the laser radar coordinate system". As shown in fig. 11, the method comprises the following steps:
S1207, extracting features in the static radar point cloud data to obtain a first feature set; the first feature set includes at least two first features.
The features in the static radar point cloud data represent static objects in the surrounding environment scanned by the laser radar, and for example, the features may be edges of a road, markers around the road, trees, lamp posts, and the like. The first features included in the first feature set may be all features extracted from the static radar point cloud data or may be part of the extracted features. For example, a road and a lamp post beside the road in the static radar point cloud data are extracted, and the corresponding first feature set comprises two first features of the road and the lamp post beside the road.
Specifically, when the server acquires the static radar point cloud data, all features in the static radar point cloud data can be extracted through an existing feature extraction algorithm, or partial features in the static radar point cloud data can be extracted according to matching requirements, so that at least two extracted features, namely a first feature set comprising at least two first features, are obtained. The feature extraction algorithm may be a neural network feature extraction algorithm, or may be other feature extraction algorithms, which is not limited herein.
And S1208, extracting the features in the map point cloud data to obtain a second feature set.
Wherein the features in the map point cloud data represent the objects within the area covered by the map, e.g. the edges of a road, markers around the road, trees, lamp posts, etc. The second features included in the second feature set may be all of the features extracted from the map point cloud data, or only part of them. For example, if the road and the lamp post beside the road in the map point cloud data are extracted, the corresponding second feature set comprises the two second features of the road and the lamp post beside it.
Specifically, when the server acquires the map point cloud data, all or part of features in the map point cloud data can be acquired correspondingly; optionally, the server may also extract all features in the map point cloud data through an existing feature extraction algorithm, or extract part of features in the map point cloud data according to matching requirements, so as to obtain at least two extracted features, that is, a second feature set including at least two second features. The number of the second features included in the second feature set may be the same as or different from the number of the first features.
S1209, matching the first characteristic set with the second characteristic set to obtain a matching result of the position coordinate of the origin of the laser radar coordinate system.
When the server obtains the first feature set and the second feature set based on the above steps, each first feature in the first feature set may be matched with each second feature in the second feature set to obtain one matched pair, or multiple matched pairs, of first and second features; a matching result of the position coordinates of the origin of the laser radar coordinate system may then be obtained by analyzing the relevant information of the matched first and second features. The relevant information of a feature may be its position coordinates, direction, size, heading angle, etc.
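A minimal matching sketch, under the assumption that each extracted feature carries a type label and a position held as a numpy array. Neither the feature representation nor the pairing strategy is fixed by the patent; greedy nearest-neighbour pairing within a distance gate is used here for illustration only:

import numpy as np

def match_features(first_set, second_set, max_dist=5.0):
    # Each feature is a dict {'type': str, 'pos': np.ndarray}. Pair radar
    # features with unused map features of the same type, nearest first.
    pairs, used = [], set()
    for f in first_set:
        best, best_d = None, max_dist
        for j, s in enumerate(second_set):
            if j in used or s['type'] != f['type']:
                continue
            d = np.linalg.norm(f['pos'] - s['pos'])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((f, second_set[best]))
            used.add(best)
    return pairs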
The embodiment disclosed above realizes calibration of the laser radar based on matched features in the static radar point cloud data and the map point cloud data. Because the features in the map point cloud data are accurate, calibrating the laser radar with these high-precision features improves the calibration accuracy. In addition, because the features in the map point cloud data are easy to obtain and require no additional equipment, compared with the traditional calibration method, which needs additional positioning equipment, the calibration method of this embodiment also reduces the calibration cost, and hence the subsequent cost of multi-base-station registration.
In an embodiment, a specific implementation manner of the foregoing S1209 is provided, and as shown in fig. 12, the method includes:
S1210, obtaining a first projection line segment of a line segment between two first features in the first feature set on a preset plane in the radar coordinate system, and obtaining a first projection included angle between the first projection line segment and the corresponding coordinate axis.
Wherein the radar coordinate system may be a rectangular coordinate system. The preset plane is the plane defined by any two coordinate axes of the radar coordinate system; for example, the coordinate axes of the radar coordinate system include the X axis, the Y axis and the Z axis, and the corresponding preset planes include the XY plane, the YZ plane and the XZ plane.
Specifically, when the server obtains the first feature set, any two first features may be selected from the first feature set, and then the two first features are connected in the radar coordinate system according to the position coordinates of the two first features to obtain a line segment between the two first features, and then the line segment is projected onto a preset plane to obtain a first projected line segment. And further selecting a coordinate axis corresponding to the preset plane to obtain a first projection included angle between the coordinate axis and the first projection line segment. For example, if the preset plane is an XZ plane, the coordinate axis corresponding to the XZ plane is a Z axis, and accordingly, an included angle between a first projection line segment of the two first features on the XZ plane and the corresponding Z axis is a first projection included angle; if the preset plane is a YZ plane, a coordinate axis corresponding to the YZ plane is a Y axis, and accordingly, an included angle between a first projection line segment of the two first features on the YZ plane and the corresponding Y axis is a first projection included angle; if the preset plane is an XY plane, the coordinate axis corresponding to the XY plane is an X axis, and accordingly, an included angle between the first projection line segment of the two first features on the XY plane and the corresponding X axis is a first projection included angle.
S1211, obtaining a second projection line segment of a line segment between two second features in the second feature set on a preset plane in a geographic coordinate system, and obtaining a second projection included angle between the second projection line segment and a corresponding coordinate axis; the type of the two second features in the second set of features is the same as the type of the two first features in the first set of features.
The preset plane is a plane where any two coordinate axes in the geographic coordinate system are located, for example, the coordinate axes of the geographic coordinate system include a longitude axis, a latitude axis and an altitude axis, and the corresponding preset plane includes: a longitude plane, a latitude plane, and an altitude plane.
Specifically, when the server obtains the second feature set, two second features may be selected from it; the two second features are then connected in the geographic coordinate system according to their position coordinates to obtain a line segment between them, and the line segment is projected onto the preset plane to obtain a second projected line segment. A coordinate axis corresponding to the preset plane is further selected to obtain the second projection included angle between that axis and the second projection line segment. For example, if the preset plane is the altitude plane, the corresponding coordinate axis is the altitude axis, and the included angle between the second projection line segment of the two second features on the altitude plane and the altitude axis is the second projection included angle; if the preset plane is the latitude plane, the corresponding axis is the latitude axis, and the included angle between the second projection line segment on the latitude plane and the latitude axis is the second projection included angle; if the preset plane is the longitude plane, the corresponding axis is the longitude axis, and the included angle between the second projection line segment on the longitude plane and the longitude axis is the second projection included angle. It should be noted that the two second features selected here are of the same types as the two first features selected from the first feature set; for example, if the two selected first features are a road and a lamp post, the two second features are correspondingly a road and a lamp post.
And S1212, performing difference operation on the first projection included angle and the second projection included angle to obtain a rotation angle of the origin of the laser radar coordinate system.
Wherein the rotation angle of the origin of the laser radar coordinate system includes the rotation angle around longitude, the rotation angle around latitude, and the rotation angle around altitude. Specifically, after the base station obtains the first projection included angle and the second projection included angle based on the above steps, it performs a difference operation on the two angles, and the calculated difference angle is used as the calibrated rotation angle of the laser radar. When the line segment between the two first features, or between the two second features, is projected onto a preset plane in its respective coordinate system, different preset planes and their corresponding coordinate axes yield different rotation angles.
For example, if the preset plane in the radar coordinate system is the XZ plane, whose corresponding coordinate axis is the Z axis, and the preset plane in the geographic coordinate system is the altitude plane, whose corresponding coordinate axis is the altitude axis, then the resulting rotation angle of the origin of the laser radar coordinate system is the rotation angle around latitude; if the preset plane in the radar coordinate system is the YZ plane (corresponding to the Y axis) and the preset plane in the geographic coordinate system is the latitude plane (corresponding to the latitude axis), the resulting rotation angle is the rotation angle around longitude; if the preset plane in the radar coordinate system is the XY plane (corresponding to the X axis) and the preset plane in the geographic coordinate system is the longitude plane (corresponding to the longitude axis), the resulting rotation angle is the rotation angle around altitude.
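The two included angles and their difference can be computed as in the sketch below (Python/numpy). The pairing of projection planes with reference axes follows the text above; representing the geographic coordinates in a local metric frame, so that both segments live in comparable rectangular coordinates, is an assumption:

import numpy as np

def projection_angle(p, q, drop_axis, ref_axis):
    # Project the segment p->q onto the plane obtained by dropping the
    # coordinate `drop_axis`, then measure its signed angle against the
    # reference axis `ref_axis`.
    keep = [a for a in range(3) if a != drop_axis]
    v = np.array([q[a] - p[a] for a in keep], dtype=float)
    e = np.zeros(2)
    e[keep.index(ref_axis)] = 1.0
    cross = e[0] * v[1] - e[1] * v[0]
    return np.arctan2(cross, e @ v)

def rotation_angle(p1, q1, p2, q2, drop_axis, ref_axis):
    # S1212: the rotation angle is the difference of the two included
    # angles; e.g. dropping Y (the XZ plane, reference axis Z) yields
    # the rotation around latitude.
    return (projection_angle(p1, q1, drop_axis, ref_axis)
            - projection_angle(p2, q2, drop_axis, ref_axis))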
S1213, obtaining a matching result of the position coordinate of the origin of the laser radar coordinate system according to the rotation angle of the origin of the laser radar coordinate system, the first characteristic set and the second characteristic set.
The position coordinates of the origin of the laser radar coordinate system described in this embodiment include longitude, latitude and altitude. Specifically, when the server obtains the rotation angle of the origin of the laser radar coordinate system based on the above steps, the rotation angle may be used to correct the position coordinates of the features in the first feature set; the corrected first feature set is then matched with the second feature set to obtain one or more matched pairs of first and second features, and the position coordinates of the origin of the laser radar coordinate system are obtained by analyzing the relevant information of the matched first and second features.
Further, in an embodiment, there is provided a specific implementation of S1213 above, as shown in fig. 13, the implementation includes:
and S1214, matching each first feature in the first feature set with each second feature in the second feature set to obtain a target first feature and a target second feature which belong to the same type.
Specifically, when the server obtains the first feature set and the second feature set based on the above steps, the server may screen out first features and second features of the same type from the first feature set and the second feature set, and use the screened first features as target first features and the screened second features as target second features. Of course, any type of feature may be selected in the screening as long as the types of the first feature and the second feature to be selected are the same. For example, a first feature belonging to a light pole type is screened from the first feature set, and a second feature belonging to a light pole type is also screened from the second feature set.
And S1215, correcting the position coordinate of the first characteristic of the target according to the value of the cosine function of the rotation angle to obtain the corrected position coordinate of the first characteristic of the target.
The present embodiment relates to a specific calculation method for correcting the position coordinate of the target first feature, and specifically, the rotation angle may be substituted into the following relation (1) to calculate, so as to obtain the corrected position coordinate of the target first feature:
A' = A × cos(θ)    (1)
In the above formula, A represents a position coordinate of the target first feature, and may be one of its X, Y and Z coordinates; A' represents the corresponding corrected position coordinate of the target first feature; θ represents a rotation angle of the origin of the laser radar coordinate system. If A represents the X coordinate, the corresponding θ is the rotation angle around altitude, and A' is the corrected X coordinate; if A represents the Y coordinate, the corresponding θ is the rotation angle around longitude, and A' is the corrected Y coordinate; if A represents the Z coordinate, the corresponding θ is the rotation angle around latitude, and A' is the corrected Z coordinate.
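A direct transcription of relation (1) and the axis-to-angle pairing above, as a hedged sketch (angles in radians; not the patent's reference implementation):

import numpy as np

def correct_first_feature(pos, rot_lon, rot_lat, rot_alt):
    # X is corrected with the rotation around altitude, Y with the
    # rotation around longitude, Z with the rotation around latitude.
    x, y, z = pos
    return np.array([x * np.cos(rot_alt),
                     y * np.cos(rot_lon),
                     z * np.cos(rot_lat)])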
And S1216, determining the matching result of the position coordinates of the origin of the laser radar coordinate system according to the corrected position coordinates of the target first feature and the position coordinates of the target second feature.
Specifically, after the server corrects the position coordinates of the target first feature based on the above steps, the position coordinates of the origin of the laser radar coordinate system may be determined from the corrected position coordinates of the target first feature and the position coordinates of the target second feature by the method described in S1209, so as to obtain the matching result of the position coordinates of the origin of the laser radar coordinate system. For the specific method, reference is made to the description of S1209, which is not repeated here.
When the position coordinates of the origin of the laser radar coordinate system include longitude, latitude and altitude, the base station, when executing the step S1216, specifically determines the longitude of the origin of the laser radar coordinate system according to the corrected X coordinate in the position coordinates of the target first feature and the longitude coordinate in the position coordinates of the target second feature; determines the latitude of the origin according to the corrected Y coordinate of the target first feature and the latitude coordinate of the target second feature; and determines the altitude of the origin according to the corrected Z coordinate of the target first feature and the altitude coordinate of the target second feature. The respective determination of the longitude, the latitude and the altitude can be referred to the description of the foregoing S1213, which is not repeated here.
According to the above method, the position coordinate of the origin of the laser radar coordinate system is determined from the corrected position coordinate of the target first feature and the position coordinate of the target second feature, which eliminates the error of the laser radar acquiring radar point cloud data under non-horizontal scanning conditions and improves the accuracy of calibrating the laser radar from the radar point cloud data.
In practical applications, before the server identifies a target object from the radar point cloud data, the background data belonging to non-road areas needs to be removed from the radar point cloud data before the target object is identified, so as to improve the accuracy of target object identification. However, the existing methods for removing background data from point cloud data suffer from low accuracy. In view of this problem, the present application provides a method for determining a target region of interest, which determines the target region of interest accurately and thereby improves the accuracy of later target object identification within that region.
That is, the above-mentioned S13 is the process by which the server determines the target region of interest; the following embodiments describe this process in detail. (Note: the server can determine a target region of interest according to the received radar point cloud data, and the base station can likewise determine a target region of interest according to its radar point cloud data; the determination methods are the same. After a base station determines the target region of interest, it can further identify the feature information of the target objects within it, and then send that feature information to the server for processing. The following embodiments take the server determining the target region of interest as the example.)
In one embodiment, a method for determining a target region of interest is provided, as shown in fig. 14, the method includes:
And S1301, acquiring the spatial position of the laser radar according to the registration parameters and the point cloud data acquired by the laser radar. The registration parameter is obtained by registering the point cloud data with a first map, where the first map is map data in point cloud format with a preset precision; the preset precision is greater than a preset precision threshold.
The preset accuracy threshold may be determined by the base station according to the actual identification accuracy requirement. The accuracy of the first map described in this embodiment is greater than the preset accuracy threshold, and therefore, when the preset accuracy threshold is higher, the first map is high-accuracy map data. The spatial position of the lidar represents the spatial position of the lidar in a coordinate system of the map data, which may include information such as longitude, latitude, altitude, etc. of the lidar.
Specifically, the position coordinate of the laser radar in the point cloud data coordinate system may be determined first, and then the position coordinate of the laser radar in the point cloud data coordinate system may be converted into the map data coordinate system by using a registration parameter obtained by registering the point cloud data acquired by the laser radar and the first map, so as to obtain the spatial position of the laser radar.
And S1302, determining the scanning range of the laser radar according to the space position of the laser radar and the scanning radius of the laser radar.
The scanning radius of the laser radar can be determined by the base station according to the identification requirement in advance, and can also be determined according to the performance of the laser radar. Specifically, when the base station obtains the spatial position of the laser radar, the scanning range of the laser radar is determined by taking the spatial position as the center and the scanning radius of the laser radar as the radius. For example, when the lidar scans at 360 °, the base station can determine the scanning range of a circular area according to the spatial position and the scanning radius of the lidar.
S1303, determining a target region of interest according to the scanning range of the laser radar and the target region in the second map; the target area is determined by the vector data of the second map; the second map is map data in a vector format.
The second map in the vector format contains description information of objects such as road edges, road center lines, the positions of zebra crossings and the types of the zebra crossings. The base station may obtain, from a database or by other means, map data with an accuracy greater than the preset accuracy threshold, and then convert the map data into the second map in vector format for later use. It can be understood that when the map data is converted into the first map in point cloud format and the second map in vector format, the first map and the second map keep the same accuracy as the map data. The target area contains vector data representing, for example, roads, zebra crossings, lamp posts, road markers and vehicles. The target region of interest is the area to be identified. Specifically, when the base station obtains the scanning range of the laser radar and the second map based on the foregoing steps, the intersection of the target area in the second map with the scanning range of the laser radar may be taken to obtain a target region of interest containing all of the vector data in the target area; the base station may also obtain a target region of interest containing only part of the vector data in the target area. It can be understood that the base station can select the target area in the second map to customize a suitable target region of interest according to the actual identification requirement.
According to the above method for determining the target region, the spatial position of the laser radar is obtained according to the registration parameters and the point cloud data collected by the laser radar; the scanning range of the laser radar is determined according to its spatial position and scanning radius; and the target region of interest is then determined according to the scanning range of the laser radar and the target area in the second map. The registration parameter is obtained by registering the point cloud data with the first map, the first map is map data in point cloud format whose preset precision is greater than the preset precision threshold, the target area is determined by the vector data in the second map, and the second map is map data in vector format. Because the first map is high-precision map data, registering it against the point cloud data to obtain the scanning range of the laser radar improves the accuracy of the scanning range, and hence the accuracy of the target region of interest determined from it.
In an embodiment, an implementation manner of the above S1303 is provided, and as shown in fig. 15, the above S1303 "determining a target region of interest according to the scanning range of the lidar and the target region in the second map" includes:
and S1305, determining a scanning range outline on the second map according to the scanning range of the laser radar.
When the base station obtains the scanning range of the laser radar, the scanning range profile may be further determined according to the area range corresponding to the scanning range on the second map. Specifically, the profile of that area range may be directly determined as the scanning range profile. Optionally, the profile of the area range may first be corrected, and the corrected profile determined as the scanning range profile. Optionally, the scanning range of the laser radar may be narrowed according to the actual geographic environment, and the scanning range profile determined from the area range corresponding to the narrowed scanning range. For example, if, in combination with the actual geographic environment, the scanning range of the laser radar is found to include an invalid region, that invalid region is removed from the scanning range, i.e. the scanning range is narrowed so that it contains only the valid region. The invalid region may be determined according to actual identification requirements; for example, it may be a region containing data of objects such as mountains and buildings.
Optionally, the specific method for determining the scanning range profile may include: and taking the space position of the laser radar as a center, and projecting the scanning range on the second map in the horizontal direction to obtain the scanning range profile. That is, the scanning range is projected in the horizontal direction on the second map to obtain a contour formed by connecting the farthest points in the projection area, and the contour is determined as the scanning range contour.
And S1306, performing intersection operation on the target area and the scanning range outline to obtain a target region of interest.
Specifically, after the base station determines the target area in the second map, the target area and the obtained scanning range profile are further directly subjected to intersection operation to obtain a target area of interest. For example, as shown in fig. 16, if the target area includes a road side line a and a region side line b occupied by a lamp post, and the scanning range profile is a circular profile c, the road side line a and the region side line b occupied by the lamp post are subjected to intersection operation with the scanning range profile c, and then the target interest region d can be obtained.
In the embodiment, the intersection operation is performed on the target area and the scanning range profile to obtain the target region of interest, and the target area contains the vector data of the target object to be identified, so that redundant vector data of the target object not to be identified is effectively removed from the target region of interest obtained after the intersection operation, and the extraction of the target region of interest containing the target object to be identified is realized.
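Such an intersection is straightforward with a geometry library; the following sketch uses shapely (the patent names no library, and the coordinates are illustrative only):

from shapely.geometry import Point, Polygon

# Scanning range contour: a circle of scanning radius 50 m around the
# lidar's projected spatial position (here placed at the origin).
scan_contour = Point(0.0, 0.0).buffer(50.0)

# Target area from the vector-format second map, e.g. a road polygon
# with hypothetical vertices.
road = Polygon([(-100, -5), (100, -5), (100, 5), (-100, 5)])

# The target region of interest is their intersection.
target_roi = road.intersection(scan_contour)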
In an embodiment, another implementation manner of the foregoing S1306 is provided, and as shown in fig. 17, the foregoing S1306 "intersecting the target region with the scan range profile to obtain the target region of interest" includes:
S1307, selecting the area outline where the target vector data is located from the vector data included in the target area.
The target vector data may be determined according to a user requirement or an identification requirement; for example, if a road in the target area needs to be identified, the corresponding target vector data is the vector data representing the road. Specifically, when the base station determines the area outline of the target vector data according to the identification requirement or the user requirement, that outline is selected from the target area obtained from the second map, so that the target region of interest is determined according to the area outline of the target vector data. Optionally, the target vector data may include at least one of vector data representing a vehicle road, vector data of a pedestrian road, vector data of a roadside marker, and vector data of a vehicle.
And S1308, performing intersection operation on the outline of the area where the target vector data is located and the outline of the scanning range to obtain a target interested area.
When the base station obtains the outline of the area where the target vector data is located based on the above steps, the outline of that area and the scanning range outline can be directly subjected to an intersection operation to obtain the target region of interest. For example, as shown in fig. 18, if the contour of the region where the target vector data is located is a road edge L1, and the contour of the scanning range is a circular contour L2, the intersection of the road edge L1 and the scanning range contour L2 yields the target region of interest L3.
In the method described in the foregoing embodiment, because the target vector data may be specified by the user, the data processing method provided in this embodiment can set the target region of interest according to user requirements, which narrows the recognition range for later target object recognition and thereby improves its accuracy.
In one embodiment, the method of the embodiment of fig. 14, as shown in fig. 19, further includes:
and S1309, extracting point cloud data in the target region of interest from the point cloud data acquired by the laser radar according to the target region of interest.
When the base station obtains the target region of interest based on the above implementation, the point cloud data contained in the target region of interest can be extracted from the point cloud data acquired by the laser radar; specifically, the base station can do so by adopting an existing segmentation algorithm.
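A minimal stand-in for that segmentation step, assuming the target region of interest is available as a 2D polygon (shapely, as in the earlier sketch) and testing each point's horizontal position against it:

import numpy as np
from shapely.geometry import Point

def points_in_roi(points, roi):
    # points: N x 3 array; roi: shapely polygon in the same XY frame.
    mask = np.array([roi.contains(Point(p[0], p[1])) for p in points])
    return points[mask]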
S1310, identifying an object in the target region of interest according to the point cloud data in the target region of interest.
When the base station obtains the point cloud data in the target region of interest, an existing identification or analysis algorithm can be used to analyze that point cloud data and obtain the attribute information of the objects contained in the target region of interest. Because the target region of interest determined by the above method is accurate, the data it contains is all valid, which avoids the interference of invalid data when the base station identifies the objects in the region, and improves the identification accuracy. Meanwhile, the processing of invalid data during identification is reduced, which improves the base station's identification efficiency.
In practical applications, after the server (information processing platform) identifies the target objects in the radar scanning area corresponding to each base station and obtains their feature information, it usually displays the target objects in those scanning areas. However, because scanning blind areas exist between the radar scanning areas of the base stations, the server cannot effectively acquire the target objects inside those blind areas, and the display becomes discontinuous. To enable the server to display the target objects in the radar scanning areas of all base stations simultaneously and continuously, the following embodiments describe the process of acquiring data in the blind areas in detail.
In one embodiment, a method for acquiring data of a sensing blind area is provided. As shown in fig. 20, the method includes: (Note: after the server identifies the target objects, it can further process their feature information so as to display the target objects in the sensing areas corresponding to the base stations on a display screen; this can be applied in fields such as navigation and automatic driving. The following embodiments take the server as the example.)
S1401, judging whether a sensing blind area exists between sensing areas corresponding to base stations in a target region of interest;
The sensing blind area is the unscanned area that exists between the sensing areas when the laser radars on the base stations scan their surrounding areas. Specifically, the server first obtains the range and position of the sensing area corresponding to each base station, and then determines, according to these ranges and positions, whether a scanning blind area exists between the sensing areas of the radars; if so, the range and position of the scanning blind area are taken as the range and position of the sensing blind area. Specifically, the registration parameters of each base station may first be used to convert the sensing areas of the base stations into the same coordinate system, and whether a sensing blind area exists between the sensing areas corresponding to the base stations is then determined according to the positions of the base stations in that coordinate system. Optionally, the registration parameters of each base station may be obtained by performing the steps S1201-S1202 described above; see the foregoing description for details, which are not repeated here.
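One plausible reading of this judgment, sketched with circular sensing areas in the common coordinate system (the sensing-area shape and the use of shapely are assumptions):

from shapely.geometry import Point
from shapely.ops import unary_union

def blind_zone(stations, radius, region):
    # stations: (x, y) lidar positions in the common frame; region: the
    # area to check, e.g. the target region of interest.
    covered = unary_union([Point(x, y).buffer(radius) for x, y in stations])
    gap = region.difference(covered)
    # Return the uncovered part as the sensing blind area, or None.
    return None if gap.is_empty else gap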
S1402, if a sensing blind area exists between the sensing areas corresponding to the base stations, identifying the feature information of the target object in the sensing area corresponding to each base station, and generating the feature information of the target object in the sensing blind area according to the feature information of the target objects of the base stations.
The target object represents an object to be recognized; there may be one or more such objects, which is not limited here. The feature information may be any information that describes the target object, such as its size, type, position, moving speed, heading angle and moving direction; the feature information of the target object in this embodiment is the feature information identified by the base station or the server according to the point cloud data output by the laser radar.
When the server determines the sensing blind areas among the base stations based on the steps and obtains the characteristic information of the target object of each base station, the server can predict the characteristic information of the target object in the sensing blind areas by analyzing the characteristic information of the target object in the sensing areas corresponding to the base stations; optionally, the server may also select feature information of the target object in the sensing region meeting the preset condition, and predict feature information of the target object in the sensing blind area by analyzing the feature information of the target object in the sensing region meeting the preset condition; the preset condition may be predetermined by the server, for example, the preset condition may be a sensing area closest to the blind sensing area, or may be a sensing area containing a moving target object.
In the above method for acquiring data of the sensing blind area, the server judges whether a sensing blind area exists between the sensing areas corresponding to the base stations and, if so, predicts the feature information of the target object in the sensing blind area according to the feature information of the target objects of the base stations. Because the server predicts from the feature information of the target objects in the sensing areas corresponding to the base stations, and that feature information corresponds to the actual scene, the feature information predicted by this method better conforms to the actual environment.
In an embodiment, a specific implementation of the foregoing S1402 is provided, and as shown in fig. 21, the foregoing S1402 "generating feature information of a target object in a blind sensing area according to feature information of the target object of each base station" includes:
S1403, extracting the feature information of the target object in the target area from the feature information of the target objects of the base stations; the target area is a sensing area adjacent to the sensing blind area.
Specifically, when the server obtains the feature information of the target object in the sensing areas corresponding to the base stations and a sensing blind area exists between the sensing areas corresponding to the base stations, the server may further determine the sensing area adjacent to the sensing blind area, that is, the target area, and extract the feature information of the target object in the target area from the feature information of the target object in the sensing areas, so as to perform prediction according to the feature information of the target object in the target area.
And S1404, generating characteristic information of the target object in the perception blind area according to the characteristic information of the target object in the target area.
When the server obtains the feature information of the target object in the target area based on the above steps, it can predict the feature information of the target object in the sensing blind area by analyzing that information. Optionally, the server may also select the feature information of target objects in the target areas meeting a preset condition, and predict from that information only; the preset condition may be predetermined by the server, for example a target area containing a moving target object. In this embodiment, the target object in the sensing blind area is predicted according to the feature information of the target object in the target area; since the target area is a sensing area adjacent to the sensing blind area, the feature information of its target objects is closer to that of the sensing blind area, so the accuracy of the prediction is higher.
Further, in an embodiment, a specific implementation manner of the foregoing S1404 is provided, and as shown in fig. 22, the foregoing S1404 "generating feature information of a target object in a blind sensing area according to feature information of the target object in a target area" includes:
S1405, determining whether the target object has feature information predicted at the previous moment in the sensing blind area; if it does not, step S1406 is executed, and if it does, step S1407 is executed.
The feature information predicted at the previous moment is the feature information that the server generated for the target object in the perception blind area at the previous moment, when it received the feature information of the target objects sent by the base stations (or identified it itself) and determined that a perception blind area existed between the corresponding perception areas. The step described in this embodiment is a judging step for determining whether the target object has feature information predicted at the previous moment in the blind sensing area. If it does not, the server made no prediction for the target object in the blind area at the previous moment, that is, no target object had moved into the blind area; the target object may therefore have been moving towards the blind area or already located on its boundary. Correspondingly, if it does, the server did predict the target object in the blind area at the previous moment, that is, the target object had moved into the blind area; the target object at the current moment may therefore still be moving within the blind area or be about to move out of it. The following steps describe the different operations performed by the server under these two judgment results.
S1406, predicting and generating the feature information of the target object in the perception blind area at the current moment according to the feature information of the target object in the target area, and storing the feature information.
In this case, after obtaining the feature information of the target object in the target area, the server may directly predict the feature information of the target object in the blind perception area at the current moment by analyzing that information, obtain a prediction result, and store it so that it can be used when predicting the feature information of the target object in the blind area at the next moment.
S1407, determining whether feature information of the target object in the sensing blind area at the current time needs to be predicted according to the feature information of the target object in the target area, if so, executing step S1408, and if not, executing step S1409.
This embodiment relates to the case where the server finds feature information predicted at the previous moment in the sensing blind area, which indicates that the target object at the current moment is either still moving within the blind area or has moved out of it. If the target object is still moving within the blind area, the server needs to continue predicting its feature information at the current moment. If the target object has moved out of the blind area, no prediction is needed at the current moment: the target object has entered the sensing area corresponding to a base station, no longer belongs to the blind area, and the server can acquire its feature information from the data reported by that base station.
S1408, predicting the feature information of the target object in the perception blind area at the current moment according to the feature information of the target object predicted at the previous moment, and storing the feature information.
In this case, the server directly obtains the feature information of the target object predicted at the previous time from the stored information, predicts the feature information of the target object in the blind sensing area at the current time by analyzing the feature information of the target object predicted at the previous time, and stores the predicted feature information at the same time, so that the feature information of the target object in the blind sensing area at the next time can be predicted later. For example, if the target object is a vehicle, the server may acquire the speed and the traveling direction of the vehicle predicted at the previous time, and when the server predicts the traveling information of the vehicle at the current time, the server may use the speed and the traveling direction as the speed and the traveling direction of the vehicle predicted at the current time.
S1409, stopping prediction.
This embodiment relates to the case where the server does not need to predict the feature information of the target object in the perception blind area at the current moment; in this case, the server directly stops the prediction, that is, it makes no prediction for the target object in the perception blind area.
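To make the decision flow of S1405-S1409 concrete, the sketch below summarizes it in Python. It is an illustration only, not the patented implementation: the data layout (dicts keyed by target id with position, speed, and heading fields) and all function names are assumptions introduced here for clarity.

```python
import math

# Minimal sketch of the S1405-S1409 decision flow. The data layout is an
# illustrative assumption, not taken from the patent.

def dead_reckon(feat, dt=0.1):
    # S1408: carry the previous speed and heading forward one time step.
    x, y = feat["pos"]
    rad = math.radians(feat["heading"])
    return {**feat, "pos": (x + feat["speed"] * dt * math.cos(rad),
                            y + feat["speed"] * dt * math.sin(rad))}

def update_blind_area(cache, boundary_ids, target_area_ids, boundary_feats):
    """cache: id -> features predicted at the previous moment.
    boundary_ids: targets currently on the blind-area boundary (S1410).
    target_area_ids: targets currently seen in the adjacent target area."""
    predictions = {}
    for obj_id in list(cache):
        if obj_id in target_area_ids:
            del cache[obj_id]                  # S1413/S1415: reappeared, stop
        else:
            predictions[obj_id] = dead_reckon(cache[obj_id])  # S1414/S1408
    for obj_id in boundary_ids:
        if obj_id not in cache and obj_id not in predictions:
            predictions[obj_id] = dead_reckon(boundary_feats[obj_id])  # S1406
    cache.update(predictions)                  # stored for the next moment
    return predictions

# Example: a vehicle enters the blind area and is tracked for one step.
cache = {}
feats = {"car1": {"pos": (0.0, 0.0), "speed": 10.0, "heading": 90.0}}
print(update_blind_area(cache, {"car1"}, set(), feats))
```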
In one embodiment, a specific implementation manner of the foregoing S1406 is provided, and as shown in fig. 23, the foregoing S1406 "predicting and generating feature information of a target object in a blind perception region at the current time according to feature information of the target object in the target region" includes:
S1410, determining whether a target object exists at the boundary of the perception dead zone according to the feature information of the target object in the target zone; if a target object exists at the boundary of the perception dead zone, step S1411 is executed, and if not, step S1412 is executed.
This embodiment relates to the case where the server made no prediction for the target object in the perception blind area at the previous moment. In this case, either the target object was moving towards the blind area at the previous moment and is still outside it at the current moment, or it was already located on the boundary of the blind area at the previous moment and enters the blind area at the current moment. The server can distinguish these two by judging whether a target object is located at the boundary of the blind area: if so, the target object at the current moment is in the blind area; if not, it is not.
S1411, predicting the characteristic information of the target object in the sensing blind area at the current moment according to the characteristic information of the target object at the boundary of the sensing blind area.
This embodiment relates to the case where the server judges that a target object exists at the boundary of the perception blind area, that is, the target object at the current moment is located in the blind area. In this case, the server extracts the feature information of the target object at the boundary of the blind area from the feature information of the target object in the target area, and then predicts the feature information of the target object in the blind area at the current moment from the extracted information.
S1412, selecting preset background data from the target area and filling it into the perception blind area at the current moment.
This embodiment relates to the case where the server determines that no target object exists at the boundary of the blind sensing area, that is, the target object at the current moment is not in the blind area. In this case, the server may determine the background data adjoining the blind sensing area from the target area, for example, the background data in the region of the target area that borders the boundary of the blind area, and fill that background data directly into the blind area at the current moment, so that the blind area contains valid data even when no moving target object is present.
The method provided by the embodiment can effectively predict the data in the perception blind area no matter whether the perception blind area contains the moving target object or not, so that the method is suitable for data prediction of various scenes and has high applicability.
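As a concrete illustration of the background-filling fallback of S1412, the sketch below copies static background points from the strip of the target area that borders the blind area. The axis-aligned box representation, the strip width, and the assumption that the blind area begins where the target area ends along x are all made up for the example.

```python
import numpy as np

# Sketch of S1412: fill the blind area with background data taken from
# the adjacent band of the target area.

def fill_with_background(target_points, blind_min, blind_max, strip=2.0):
    """target_points: Nx3 array; blind_min/blind_max: (x, y) box corners."""
    band = ((target_points[:, 0] >= blind_min[0] - strip) &
            (target_points[:, 0] < blind_min[0]) &
            (target_points[:, 1] >= blind_min[1]) &
            (target_points[:, 1] <= blind_max[1]))
    background = target_points[band].copy()
    background[:, 0] += strip        # shift the band into the blind area
    return background

pts = np.array([[-1.0, 0.5, 0.0], [-0.5, 1.0, 0.1], [-5.0, 0.0, 0.0]])
print(fill_with_background(pts, blind_min=(0.0, 0.0), blind_max=(4.0, 2.0)))
```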
In one embodiment, a specific implementation manner of the foregoing S1407 is provided, as shown in fig. 24, the foregoing S1407 "determining whether it is necessary to predict the feature information of the target object in the blind perception region at the current time according to the feature information of the target object in the target region" includes:
S1413, determining whether a target object corresponding to the feature information predicted at the previous moment appears in the target area according to the feature information of the target object in the target area; if no such target object appears in the target area, executing step S1414, and if such a target object appears in the target area, executing step S1415.
The present embodiment relates to a situation where the server has already predicted the target object in the blind sensing area at the previous time, in this case, it is described that the target object at the previous time is in the blind sensing area, and then the target object at the current time may still move in the blind sensing area or move out of the blind sensing area. In this embodiment, the server may determine whether the target object at the current time is still in the blind sensing area by determining whether the target object corresponding to the feature information predicted at the previous time appears in the target area. If a target object corresponding to the predicted characteristic information at the previous moment appears in the target area, the target object at the previous moment is shown to move out of the perception blind area at the current moment and enter the perception area corresponding to the base station adjacent to the perception blind area; if the target object corresponding to the feature information predicted at the previous moment does not appear in the target area, the target object at the previous moment still moves in the perception blind area at the current moment.
S1414, determining that the feature information of the target object in the perception blind area at the current moment needs to be predicted.
This embodiment relates to the case where the server determines that the target object corresponding to the feature information predicted at the previous moment does not appear in the target area; in this case, the server needs to continue predicting the feature information of the target object in the blind perception area at the current moment.
S1415, determining that the feature information of the target object in the perception blind area at the current moment does not need to be predicted.
The embodiment relates to a case where the server determines that the target object corresponding to the feature information predicted at the previous moment appears in the target area, in which case, the server does not need to predict the feature information of the target object in the perception blind area at the current moment.
The method enables the server to perform different prediction operations for the feature information in the perception blind area according to the application scenario: predicting the feature information of a target object that is about to move into the blind area, of one moving within the blind area, and of one moving out of the blind area. Because the trajectories of target objects in the perception blind area are considered across these scenarios, the information predicted by this method has high accuracy.
In practical application, after a server (information processing platform) identifies the target objects in the sensing areas corresponding to the base stations and obtains their feature information, it usually displays the target objects in those sensing areas. However, because overlapping areas exist between the sensing areas corresponding to the base stations, a target object may be displayed repeatedly, which degrades the display effect. The following embodiments describe this process in detail.
In an embodiment, as shown in fig. 25, a data acquisition method for an overlapping area is further provided, described by taking its application to the server in fig. 1 as an example; after step S1401 in the embodiment of fig. 20, the method further includes the following steps:
S1416, judging whether an overlapping area exists between the sensing areas corresponding to the base stations in the target region of interest.
Specifically, the server first obtains the range and position of the sensing area corresponding to each base station, and then determines whether an overlapping or coinciding area exists between the sensing areas according to those ranges and positions; if so, the range and position of that overlapping or coinciding area are taken as the range and position of the overlapping area. Specifically, the registration parameters of each base station may first be used to convert the sensing areas of the base stations into the same coordinate system, and whether an overlapping area exists between the sensing areas corresponding to the base stations is then determined according to the positions of the base stations in that coordinate system. Optionally, the registration parameters of each base station may be obtained by performing steps S1201-S1202 described above in this application; the details are not repeated here.
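A minimal way to perform the S1416 check, assuming each sensing area can be approximated by a circle (radar position as center, detection range as radius) in the common coordinate system obtained from the registration parameters; the circle approximation and the numbers below are assumptions for illustration.

```python
import math

# Sketch of S1416: two circular sensing areas overlap exactly when the
# distance between their centers is less than the sum of their radii.

def sensing_areas_overlap(center_a, radius_a, center_b, radius_b):
    d = math.hypot(center_a[0] - center_b[0], center_a[1] - center_b[1])
    return d < radius_a + radius_b

# Two base stations 150 m apart, each with a 100 m sensing radius.
print(sensing_areas_overlap((0.0, 0.0), 100.0, (150.0, 0.0), 100.0))  # True
```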
S1417, if an overlapping area exists between the sensing areas corresponding to the base stations, identifying the feature information of the target object in the sensing area corresponding to each base station, and performing de-duplication processing on the target objects in the overlapping area according to the feature information of the target objects sent by the base stations.
When the server has determined the overlapping area among the base stations based on the foregoing steps and obtained the feature information of the target objects sent by each base station, it can determine whether the overlapping area contains repeated target objects by analyzing the feature information of the target objects in the sensing areas corresponding to the base stations. If repeated target objects exist, one target object is retained in the overlapping area and the other duplicates are removed.
In the data acquisition method for the overlapping area, the server judges whether an overlapping area exists between the sensing areas corresponding to the base stations and, if so, performs de-duplication processing on the target objects in the overlapping area according to the feature information of the target objects sent by the base stations. Because the feature information describes multiple attributes of a target object, determining repeated target objects in the overlapping area based on it improves the accuracy of de-duplication.
In an embodiment, a specific implementation manner of the foregoing S1417 is provided, and as shown in fig. 26, the "performing de-duplication processing on a target object in an overlapping area according to feature information of the target object sent by each base station" in the foregoing S1417 includes:
S1418, extracting the feature information of the target object in the overlapping area from the feature information of the target objects transmitted by each base station.
Specifically, when the server acquires feature information of target objects of multiple base stations and an overlapping area exists between sensing areas corresponding to the multiple base stations, the server may further extract the feature information of the target object in the overlapping area, so as to determine whether a repeated target object exists according to the feature information of the target object in the overlapping area.
S1419, detecting whether repeated target objects exist in the overlapping area according to the characteristic information of the target objects in the overlapping area through preset judgment conditions.
The preset determination condition may be determined by the server in advance according to the actual determination requirement; for example, if the type of a target object is unique, whether two target objects belong to the same type may be used as the preset determination condition. Specifically, when the server has obtained the feature information of the target objects in the overlap area based on the foregoing steps, it may compare or analyze the feature information of each target object in the overlap area and judge whether any target objects meet the preset determination condition. Target objects that meet the condition are determined to be repeated target objects; if no target objects meet the condition, it is determined that no repeated target object exists in the overlapping area.
S1420, if a repeated target object exists in the overlapping area, performing de-duplication processing on the target objects in the overlapping area.
The present embodiment relates to a case where the server determines that there is a duplicate target object in the overlap area, and in this case, the server directly performs an operation of performing deduplication processing on the target object in the overlap area.
Optionally, the characteristic information of the target object may include a position of a center point of the target object, a type of the target object, and a heading angle of the target object. Different feature information may correspond to different methods for detecting whether a repeated target object exists in the overlapping area, and the following embodiment exemplifies four detection methods.
The first detection method comprises the following steps: the step S1419 of detecting whether there is a repeated target object in the overlap area according to the feature information of the target object in the overlap area by using a preset determination condition includes:
and S1421, calculating the distance between the central point positions of any two target objects in the overlapping area.
When the server obtains the feature information of the target objects in the overlapping area, two target objects can be arbitrarily selected as target objects to be determined, the central point positions of the two target objects are extracted from the feature information of the two target objects, and then the distance between the central point positions of the two target objects is calculated.
S1422, determine whether the distance is smaller than a preset distance threshold, if the distance is smaller than the preset distance threshold, execute step S1423, and if the distance is greater than or equal to the preset distance threshold, execute step S1424.
The preset distance threshold value can be determined by the server according to the identification precision. The embodiment relates to a step of judging whether the distance between the central point positions of two target objects is smaller than a preset distance threshold by a server, wherein if the distance is smaller than the preset distance threshold, the probability that the two target objects belong to the same target object is very high; if the distance is greater than or equal to the preset distance threshold, it is indicated that the probability that the two target objects belong to the same target object is very small. And then the server executes different operations according to different judgment results.
S1423, determining that the two target objects are repeated target objects.
The embodiment relates to a case that the server determines that the distance is smaller than the preset distance threshold, in which case, the server directly determines that two target objects are repetitive target objects.
S1424, determining that the two target objects are not repeated target objects.
The embodiment relates to a case that the server judges that the distance is greater than or equal to a preset distance threshold, and in this case, the server directly determines that the two target objects are not repeated target objects.
This method enables the server to judge directly, from the center point positions alone, whether two target objects belong to the same target object; it is simple and practical.
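A minimal sketch of this first detection method follows; the threshold value is a placeholder, since the patent leaves it to the server's identification precision.

```python
import math

# Sketch of S1421-S1424: two targets in the overlapping area are treated
# as the same (repeated) target when their center points are closer than
# a preset distance threshold.

def is_duplicate_by_distance(center_a, center_b, dist_threshold=0.5):
    d = math.hypot(center_a[0] - center_b[0], center_a[1] - center_b[1])
    return d < dist_threshold

print(is_duplicate_by_distance((10.0, 5.0), (10.2, 5.1)))  # True
```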
The second detection method comprises the following steps: the step S1419, as shown in fig. 28, of detecting whether there is a repeated target object in the overlapping area according to the feature information of the target object in the overlapping area by using a preset determination condition, includes:
and S1425, calculating the distance between the central point positions of any two target objects in the overlapping area.
The present step is the same as the content described in the foregoing step S1421, and details are referred to the foregoing description, which is not repeated herein.
S1426, determining whether the distance is smaller than the preset distance threshold and the types of the two target objects are consistent; if both conditions hold, performing step S1427, and otherwise performing step S1428.
The embodiment relates to a step of judging whether a distance between center point positions of two target objects is smaller than a preset distance threshold value by a server and whether types of the two target objects are consistent, wherein the server has four possible application scenes, namely the distance is smaller than the preset distance threshold value and the types of the two target objects are consistent; the distance is smaller than a preset distance threshold value, and the types of the two target objects are inconsistent; the distance is greater than or equal to a preset distance threshold, and the types of the two target objects are consistent; the distance is greater than or equal to a preset distance threshold, and the types of the two target objects are inconsistent. The server then performs different operations according to different possible application scenarios. If the distance is smaller than the preset distance threshold value and the types of the two target objects are consistent, the probability that the two target objects belong to the same target object is very high, and at the moment, it is accurate to judge that the two target objects belong to the same target object; if the distance and the type satisfy the conditions except that the distance is smaller than the preset distance threshold value and the types of the two target objects are consistent, it is indicated that the probability that the two target objects belong to the same target object is very small, and it is accurate to judge that the two target objects do not belong to the same target object.
S1427, determining that the two target objects are repeated target objects.
The embodiment relates to a case where the server determines that the distance is smaller than the preset distance threshold and the types of the two target objects are consistent, in which case, the server directly determines that the two target objects are repetitive target objects.
S1428, determining that the two target objects are not repeated target objects.
The present embodiment relates to a case where the server determines that the distance and the type satisfy conditions other than "the distance is smaller than the preset distance threshold and the types of the two target objects are identical", in which case the server directly determines that the two target objects are not repetitive target objects.
This method has the server judge whether two target objects belong to the same target object by combining two conditions, the center point position and the type of the target object, and is therefore more accurate.
The third detection method comprises the following steps: the step S1419, as shown in fig. 29, of detecting whether there is a repeated target object in the overlapping area according to the characteristic information of the target object in the overlapping area by using a preset determination condition, includes:
and S1429, calculating the distance between the central point positions of any two target objects in the overlapping area.
The present step is the same as the content described in the foregoing step S1421, and details are referred to the foregoing description, which is not repeated herein.
S1430, calculating the difference between the course angles of the two target objects in the overlapping area.
When the server calculates the distance between the central point positions of the two target objects based on the foregoing steps, the heading angles of the two target objects may be further extracted from the feature information of the two target objects, and then the difference between the heading angles of the two target objects is calculated.
S1431, determining whether the distance is smaller than the preset distance threshold and the difference is smaller than the preset difference threshold; if both conditions hold, executing step S1432, and otherwise executing step S1433.
The preset difference threshold may be determined by the server according to the recognition accuracy, for example, the preset difference threshold may be different angle values such as 5 °, 6 °, 7 °, and the like, which is not limited herein. The embodiment relates to a judging step that a server judges whether the distance between the central point positions of two target objects is smaller than a preset distance threshold or not and whether the difference value between the course angles of the two target objects is smaller than a preset difference threshold or not, wherein the judging step comprises four possible application scenes, namely, the distance is smaller than the preset distance threshold, and the difference value between the course angles of the two target objects is smaller than the preset difference threshold; the distance is smaller than a preset distance threshold, and the difference value between the course angles of the two target objects is larger than or equal to a preset difference threshold; the distance is greater than or equal to a preset distance threshold, and the difference value between the course angles of the two target objects is smaller than a preset difference threshold; the distance is greater than or equal to a preset distance threshold, and the difference between the course angles of the two target objects is greater than or equal to a preset difference threshold. The server then performs different operations according to different possible application scenarios. If the distance is smaller than the preset distance threshold value and the difference value is smaller than the preset difference value threshold value, the probability that the two target objects belong to the same target object is very high, and at the moment, the fact that the two target objects belong to the same target object is judged to be accurate; if the distance and the difference value satisfy the conditions except that the distance is smaller than the preset distance threshold value and the difference value is smaller than the preset difference threshold value, the probability that the two target objects belong to the same target object is very small, and at this moment, it is accurate to judge that the two target objects do not belong to the same target object.
S1432, it is determined that the two target objects are repetitive target objects.
The embodiment relates to a case where the server determines that the distance is smaller than the preset distance threshold and the difference is smaller than the preset difference threshold, in which case the server directly determines that the two target objects are repetitive target objects.
S1433, it is determined that the two target objects are not duplicate target objects.
This embodiment relates to the case where the server determines that the distance and the difference satisfy any condition other than "the distance is smaller than the preset distance threshold and the difference is smaller than the preset difference threshold"; in this case, the server directly determines that the two target objects are not repeated target objects.
This method has the server judge whether two target objects belong to the same target object by combining two conditions, the center point position and the course angle of the target object, and is therefore more accurate.
The fourth detection method is as follows: the feature information of the target object may include a position of a center point of the target object, a type of the target object, and a heading angle of the target object, as shown in fig. 30, in the S1419, "detecting whether there is a repeated target object in the overlapping area according to the feature information of the target object in the overlapping area through a preset determination condition," includes:
and S1434, calculating the distance between the central point positions of any two target objects in the overlapping area.
This step is the same as the foregoing step S1421; details are referred to the foregoing description and are not repeated here.
S1435, calculating the difference between the course angles of the two target objects in the overlapping area.
The step is the same as the step S1430, and the details are referred to the foregoing description, which is not repeated herein.
S1436, determining whether the distance is smaller than the preset distance threshold, the types of the two target objects are consistent, and the difference is smaller than the preset difference threshold; if all three conditions hold, executing step S1437, and otherwise executing step S1438.
This embodiment relates to a judging step in which the server determines whether the distance between the center point positions of the two target objects is smaller than the preset distance threshold, whether the difference between their course angles is smaller than the preset difference threshold, and whether their types are consistent; there are multiple possible combinations in total, which are not listed here.
S1437, it is determined that the two target objects are duplicate target objects.
The embodiment relates to a case where the server determines that the distance is smaller than the preset distance threshold, the types of the two target objects are consistent, and the difference value is smaller than the preset difference threshold, in which case the server directly determines that the two target objects are repetitive target objects.
S1438, it is determined that the two target objects are not duplicate target objects.
This embodiment relates to the case where the server determines that the distance, the types, and the difference satisfy any condition other than "the distance is smaller than the preset distance threshold, the types of the two target objects are consistent, and the difference is smaller than the preset difference threshold"; in this case, the server directly determines that the two target objects are not repeated target objects.
With this method, the server judges whether two target objects belong to the same target object by combining three conditions, the center point position, the type, and the course angle of the target object, and is therefore even more accurate.
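The fourth method simply stacks the three cues, so a single check covers the second and third methods as special cases; a sketch under the same placeholder thresholds, with the field names being assumptions made for the example:

```python
import math

# Sketch of S1434-S1438: combine center distance, object type, and
# heading-angle difference. Field names and thresholds are illustrative.

def is_duplicate(obj_a, obj_b, dist_threshold=0.5, angle_threshold=5.0):
    dx = obj_a["center"][0] - obj_b["center"][0]
    dy = obj_a["center"][1] - obj_b["center"][1]
    close = math.hypot(dx, dy) < dist_threshold
    same_type = obj_a["type"] == obj_b["type"]
    diff = abs(obj_a["heading"] - obj_b["heading"]) % 360.0
    diff = min(diff, 360.0 - diff)       # wrap into [0, 180] degrees
    return close and same_type and diff < angle_threshold

a = {"center": (10.0, 5.0), "type": "vehicle", "heading": 88.0}
b = {"center": (10.2, 5.1), "type": "vehicle", "heading": 91.0}
print(is_duplicate(a, b))  # True: close, same type, similar heading
```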
In an embodiment, there is further provided a data processing method, as shown in fig. 31, based on the methods described in the embodiments of fig. 20 and fig. 25, after the step of S1404, the method further includes:
S1439, displaying the feature information of the target objects in the sensing areas and the sensing blind area corresponding to the base stations.
After the server has obtained the feature information of the target objects in the sensing areas corresponding to the base stations based on the methods described in the embodiments of fig. 20 and fig. 25, performed de-duplication processing on the target objects in the overlapping area according to that feature information, and predicted the feature information of the target object in the sensing blind area, the method described in this embodiment may display in one screen both the de-duplicated feature information of the target objects detected by the base stations and the predicted feature information in the sensing blind area. It should be noted that the display frequency of the server may be the same as or different from the frequency at which the base stations collect lidar data; for example, if the lidar data is collected at 10 Hz, the server display frequency may also be 10 Hz.
In this method, the de-duplication processing eliminates the repeated display of target objects caused by overlapping radar scanning areas, and displaying the server's predicted feature information in the perception dead zone eliminates discontinuities in the final picture, so the display effect is improved.
In one embodiment, the present application further provides a target detection method, as shown in fig. 32, the method including:
S1601, point cloud data of the laser radar and vector data of a corresponding high-precision map are obtained.
The point cloud data of the laser radar is the target object information scanned by the laser radar and recorded in the form of points, each of which includes three-dimensional coordinates. The high-precision map is an electronic map with higher precision and more data dimensions: the higher precision is reflected in centimeter-level accuracy, and the additional data dimensions are reflected in the fact that the map contains, in addition to road information, static traffic-related information about the surroundings. The vector data of the high-precision map refers to a large amount of driving assistance information stored as structured data, which can be divided into two types: road data, such as lane information on the position, type, width, gradient, and curvature of lane lines; and fixed object information around the lanes, such as traffic signs and traffic lights, lane height limits, sewer openings, obstacles, and other road details.
Specifically, the server first obtains point cloud data of the laser radar and vector data of a corresponding high-precision map. Optionally, the server may obtain point cloud data of the lidar from the lidar installed in the target area. Alternatively, the server may obtain high-accuracy map data of the target area from the high-accuracy map memory. Optionally, the vector data of the high-precision map data includes at least one of a road edge, a road center line, a road direction line, and a zebra crossing. Optionally, the target area may be an intersection or a road where the vehicle travels. Alternatively, the lidar may include 8-line lidar, 16-line lidar, 24-line lidar, 32-line lidar, 64-line lidar, 128-line lidar, and the like.
S1602, converting the coordinates of the point cloud data into the coordinate system of the high-precision map by using the calibration parameters of the laser radar to obtain the point cloud data to be detected.
The calibration parameters of the laser radar include the longitude, latitude, and altitude of the origin of the laser radar coordinate system and the rotation angles about the longitude, latitude, and altitude axes. Specifically, the server converts the coordinates of the point cloud data into the coordinate system of the high-precision map by using the calibration parameters of the laser radar to obtain the point cloud data to be detected. Optionally, the server may first convert the origin coordinate of the laser radar into the coordinate system of the high-precision map using the calibration parameters, and then convert the coordinates of the point cloud data into that coordinate system according to the corresponding coordinate of the laser radar origin, so as to obtain the point cloud data to be detected.
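A sketch of the S1602 conversion is shown below, assuming the geodetic part (converting the radar origin's longitude/latitude/altitude into map coordinates) has already been done, leaving a rotation about the three axes plus a translation; the Z-Y-X rotation order is one common convention, not mandated by the patent.

```python
import numpy as np

# Sketch of S1602: transform lidar points into the map coordinate system.
# (tx, ty, tz) is the lidar origin already expressed in map coordinates;
# roll/pitch/yaw are the three calibration rotation angles in radians.

def lidar_to_map(points, tx, ty, tz, roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                     # Z-Y-X rotation order
    return points @ R.T + np.array([tx, ty, tz])

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(lidar_to_map(pts, 100.0, 200.0, 10.0, 0.0, 0.0, np.pi / 2))
```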
S1603, performing target detection on the point cloud data to be detected to obtain a target detection result.
Specifically, the server performs target detection on the obtained point cloud data to be detected to obtain a target detection result. Optionally, the server may perform target detection on the point cloud data to be detected by using a preset roadside perception detection algorithm to obtain a target detection result. Alternatively, the target object may be a vehicle or a pedestrian. Optionally, the target detection result may include a position of the target object, a type of the target object, a heading angle of the target object, and the like.
S1604, judging whether the target detection result is abnormal or not according to the vector data and the traffic rules.
Specifically, the server judges whether the target detection result is abnormal according to the vector data of the high-precision map and the traffic rules, where the vector data of the high-precision map represents the road information in the target area. Optionally, the vector data of the high-precision map data includes at least one of a road edge, a road center line, a road direction line, and a zebra crossing. It can be understood that, because the ranging of the lidar device can be inaccurate, the lidar data may be severely occluded by moving objects, and few points are returned from targets far from the lidar, the roadside perception algorithm may detect targets in the lidar data inaccurately; the target detection result therefore needs to be corrected. Illustratively, if the target detection result places the vehicle on the edge of the road while the vector data of the high-precision map data indicates that the vehicle is in the center of the road, the server determines from the vector data and the traffic rules that the vehicle should be located in the center of the road, and therefore determines that the target detection result is abnormal.
S1605, if the target detection result is abnormal, correcting the target detection result by using the vector data.
Specifically, if the server determines that the target detection result is abnormal, it corrects the target detection result by using the vector data of the high-precision map. Continuing the example in which the target detection result places the vehicle on the edge line of the road while the vector data of the high-precision map data places it in the center of the road, the server corrects the target detection result by moving the vehicle to the center of the road.
In the target detection method, the coordinates of the point cloud data of the laser radar can be converted into the coordinate system of the high-precision map by using the calibration parameters of the laser radar to obtain the point cloud data to be detected, so that target detection can be performed on the point cloud data to be detected to obtain a target detection result; whether the target detection result is abnormal can then be judged through the vector data of the high-precision map data and the traffic rules and, if it is abnormal, the result can be corrected using the vector data, improving the accuracy of detection.
In the above-described scenario in which the target detection result is corrected using the vector data of the high-precision map data, the target detection result includes the position of the target object. In one embodiment, as shown in fig. 33, the above S1605 includes:
and S1606, determining the road position according to the vector data.
Specifically, the server determines the road position from the vector data of the high-precision map data. Optionally, the server may determine the road position according to at least one of a road edge, a road center line, a road direction line, and a zebra crossing in the vector data of the high-precision map data. For example, the server may determine the road position from the road direction line in the vector data of the high-precision map data.
S1607, judging whether the target object is on the road position according to the position of the target object.
Specifically, the server determines whether the target object is at the road position based on the position of the target object. Optionally, the server may compare the position of the target object with the road position to make this judgment. For example, if the position of the target object is in the grass at the side of the road, the server determines that the target object is not at the road position.
S1608, if not, correcting the position of the target object to the road position to obtain the corrected target detection result.
Specifically, if the server determines that the target object is not located at the road position, the server corrects the position of the target object to the road position to obtain a corrected target detection result. Optionally, the server may translate the target object to the road position to obtain the corrected target detection result, or may directly drag the target object in the roadside sensing result to the road position to obtain the corrected target detection result.
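One way to implement the correction of S1606-S1608 is to project the detected center onto the nearest segment of a road center line from the vector data; a sketch follows, with the polyline representation of the center line assumed for illustration.

```python
import numpy as np

# Sketch of S1606-S1608: snap an off-road detection to the closest point
# on a road center line represented as a polyline of Nx2 vertices.

def snap_to_road(position, centerline):
    p = np.asarray(position, dtype=float)
    best, best_d = None, float("inf")
    for a, b in zip(centerline[:-1], centerline[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        proj = a + t * ab                  # closest point on this segment
        d = np.linalg.norm(p - proj)
        if d < best_d:
            best, best_d = proj, d
    return best

road = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 20.0]])
print(snap_to_road((30.0, 6.0), road))     # projects onto the first segment
```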
In this embodiment, the server can accurately determine the road position according to the vector data of the high-precision map data, and then can accurately determine whether the target object is at the road position according to the position of the target object, so that the position of the target object can be accurately corrected to the road position according to whether the target object is at the road position, a corrected target detection result is obtained, and the accuracy of the obtained corrected target detection result is improved.
In the above-described scenario in which the target detection result is corrected using the vector data of the high-precision map data, the target detection result includes the position of the target object and the type of the target object. In one embodiment, as shown in fig. 34, the above S1605 includes:
and S1609, determining the road position according to the vector data.
Specifically, the server determines the road position from the vector data of the high-precision map data. Optionally, the server may determine the road position according to at least one of a road edge, a road center line, a road direction line, and a zebra crossing in the vector data of the high-precision map data. For example, the server may determine the road position from the road center line in the vector data of the high-precision map data.
S1610, according to the position of the target object and the road position, determining the road type of the road where the target object is located.
Specifically, the server determines the road type of the road where the target object is located according to the position of the target object and the determined road position. For example, if the target object is a vehicle, the position of the target object is on a road, and the road position is a road center line, the server determines that the road type of the road where the target object is located is a motor vehicle lane.
S1611, determining a target road type corresponding to the target object according to the corresponding relation between the object type and the road type.
Specifically, the server determines the target road type corresponding to the target object according to the correspondence between the object type and the road type. For example, the correspondence between the object type and the road type may be: if the object type is a vehicle, the road type is a motor vehicle lane; if the object type is a pedestrian, the road type is a non-motor vehicle lane. Correspondingly, if the target object is a vehicle, the target road type determined by the server for the target object is a motor vehicle lane.
S1612, if the road type of the road where the target object is currently located is not consistent with the target road type, correcting the type of the target object to the type matched with the road type, and obtaining a corrected target detection result.
Specifically, if the server determines that the road type of the road where the target object is currently located is not consistent with the target road type, the type of the target object is corrected to be a type matched with the road type of the road where the target object is currently located, and a corrected target detection result is obtained. Illustratively, if the road type of the road where the target object is currently located is a non-motor vehicle lane, the target road type is a motor vehicle lane, and the road type of the road where the target object is currently located is not consistent with the target road type, the server corrects the type of the target object to a type matched with the non-motor vehicle lane, and obtains a corrected target detection result.
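The correspondence of S1611 can be kept as a simple lookup table; a sketch whose entries mirror the vehicle/pedestrian example above, with the table name and labels being illustrative assumptions:

```python
# Sketch of S1611-S1612: correct the object type when it does not match
# the road type it sits on. Table entries and labels are illustrative.

ROAD_TYPE_FOR_OBJECT = {
    "vehicle": "motor_lane",
    "pedestrian": "non_motor_lane",
}

def corrected_type(obj_type, current_road_type):
    if ROAD_TYPE_FOR_OBJECT.get(obj_type) == current_road_type:
        return obj_type                    # type already matches the road
    for t, road in ROAD_TYPE_FOR_OBJECT.items():
        if road == current_road_type:
            return t                       # re-label to match the road
    return obj_type

print(corrected_type("vehicle", "non_motor_lane"))  # 'pedestrian'
```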
In this embodiment, the server may accurately determine the road position according to the vector data of the high-precision map data, and may accurately determine the road type of the road where the target object is currently located according to the position of the target object and the determined road position, so as to accurately determine the target road type corresponding to the target object according to the correspondence between the object type and the road type; when the current road type is inconsistent with the target road type, the type of the target object can then be accurately corrected, improving the accuracy of the corrected target detection result.
In the scene in which the target detection result is corrected by using the vector data of the high-precision map data, the target detection result includes the course angle of the target object. In one embodiment, as shown in fig. 35, the step S1605 includes:
S1613, obtaining the number of times that the course angle of the target object is larger than the preset threshold within the preset number of frames according to the vector data of the high-precision map data.
Specifically, the server obtains the times that the course angle of the target object is larger than the preset threshold value within the preset frame number according to the vector data of the high-precision map data. Optionally, the preset number of frames may be ten frames, the preset threshold may be 45 degrees, and the number of times that the heading angle of the target object is greater than the preset threshold may be two or more than two.
S1614, judging whether the number of times that the course angle of the target object is larger than the preset threshold within the preset number of frames is larger than the preset times threshold.
Specifically, the server judges whether the times that the course angle of the target object is greater than the preset threshold value within the preset frame number is greater than the preset time threshold value. For example, the preset number of times threshold is three times, and the number of times that the course angle of the target object is greater than the preset threshold within the preset number of frames is four times, the server determines that the number of times that the course angle of the target object is greater than the preset threshold within the preset number of frames is greater than the preset number of times threshold.
S1615, if yes, correcting the course angle of the target object to obtain a corrected target detection result.
Specifically, if the number of times that the course angle of the target object is greater than the preset threshold within the preset number of frames is greater than the preset times threshold, the server corrects the course angle of the target object to obtain a corrected result. Optionally, the server may modify the course angle of the target object to the preset threshold value to obtain the corrected result.
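A sketch of S1613-S1615 under the illustrative values mentioned above (a ten-frame window, a 45-degree threshold); the clamping correction follows the optional behavior described in S1615, and the count threshold is a placeholder.

```python
# Sketch of S1613-S1615: over a window of frames, count how often the
# course angle exceeds the preset threshold; if the count exceeds the
# preset times threshold, clamp the offending angles to the threshold.

def correct_heading(headings, angle_threshold=45.0, times_threshold=3):
    """headings: course angles (degrees) over the preset number of frames."""
    count = sum(1 for h in headings if abs(h) > angle_threshold)
    if count > times_threshold:
        return [angle_threshold if abs(h) > angle_threshold else h
                for h in headings]
    return headings

frames = [10.0, 50.0, 52.0, 48.0, 47.0, 12.0, 9.0, 55.0, 11.0, 10.0]
print(correct_heading(frames))   # five exceedances > 3, so angles clamped
```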
In this embodiment, the server can accurately obtain the number of times that the course angle of the target object is greater than the preset threshold within the preset number of frames according to the vector data of the high-precision map data, and can then accurately judge whether that number of times is greater than the preset times threshold; if so, the course angle of the target object can be accurately corrected, yielding an accurately corrected target detection result and improving the accuracy of the obtained result.
In the scene in which the coordinates of the point cloud data are converted into the coordinate system of the high-precision map by using the calibration parameters of the laser radar to obtain the point cloud data to be detected, in an embodiment, the method further includes: registering the point cloud data of the laser radar according to the point cloud data of the high-precision map to obtain registered point cloud data.
Specifically, the server registers the point cloud data of the laser radar according to the point cloud data of the high-precision map to obtain the registered point cloud data. Optionally, the server may register the point cloud data of the laser radar and the point cloud data of the high-precision map to obtain a registration parameter, and convert the point cloud data of the laser radar into a coordinate system corresponding to the point cloud data of the high-precision map according to the registration parameter to obtain the registered point cloud data. Further, after the server obtains the registered point cloud data, the coordinate of the registered point cloud data can be converted into the coordinate system of the high-precision map by using the calibration parameters of the laser radar, and the point cloud data to be detected is obtained.
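The patent does not name a registration algorithm; as one possibility, the registration parameters could be estimated with point-to-point ICP, for which a sketch using Open3D is shown below. The file paths and the correspondence distance are placeholders, and this algorithm choice is an assumption for illustration.

```python
import open3d as o3d

# Sketch: register the lidar point cloud against the high-precision map's
# point cloud with ICP, then apply the resulting transformation.

source = o3d.io.read_point_cloud("lidar_frame.pcd")     # lidar points
target = o3d.io.read_point_cloud("hd_map_points.pcd")   # map point cloud

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=1.0,                    # meters, illustrative
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())

source.transform(result.transformation)  # point cloud now in map coordinates
```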
In this embodiment, the server can register the point cloud data of the laser radar according to the point cloud data of the high-precision map to obtain the registered point cloud data, and then can convert the coordinates of the registered point cloud data into the coordinate system of the high-precision map by using the calibration parameters of the laser radar to obtain the point cloud data to be detected, so that the accuracy of the obtained point cloud data to be detected is improved.
In an embodiment, the present application further provides a method for monitoring laser radar positioning, described by taking its application to a computer device as an example; it may be understood that the method may also be applied to a server, or to a system including a roadside radar and a server, implemented through interaction between the roadside radar and the server. As shown in fig. 36, the method includes the following steps:
S1701, real-time point cloud data of the roadside radar is collected in real time, and first spatial information between the real-time target and the standard target is obtained according to the real-time point cloud data and the standard point cloud data.
The standard point cloud data is obtained by adopting a high-precision map. The real-time target is a target object determined according to the real-time point cloud data, and the standard target is a target object determined according to the standard point cloud data.
In this embodiment, the roadside radar may be a laser radar.
The real-time point cloud data comprises absolute positions of all static targets in the coverage area of the roadside radar acquired when the roadside radar is used in real time. The standard point cloud data comprises a pre-obtained high-precision map of lane levels corresponding to a roadside radar coverage area stored in a point cloud format. A high-precision map in point cloud format records the absolute position of each static target object on the actual road surface. Real-time targets and standard targets are used to refer to static target objects within a coverage area, including road borders, landmarks around roads, trees, light poles, and the like.
Optionally, the first spatial information may be used to characterize a position relationship between the real-time target and the standard target when the roadside radar is used in real time, such as at least one of an offset angle of the real-time target relative to the standard target, an offset direction of the real-time target relative to the standard target, and an offset distance of the real-time target relative to the standard target.
Specifically, the computer equipment acquires the real-time point cloud data collected by the roadside radar in real time, performs feature recognition on it to obtain the real-time target, performs feature recognition on the pre-stored standard point cloud data to obtain the standard target, and matches the real-time target with the standard target to obtain the first spatial information between a real-time target and a standard target that refer to the same static target object.
S1702, comparing the first spatial information with the second spatial information obtained initially.
The second spatial information is the spatial information between the initial target and the standard target, obtained according to the initially collected initial point cloud data of the roadside radar and the standard point cloud data.
The initial point cloud data comprises the distance information of each static target object relative to the roadside radar in the coverage area of the roadside radar, acquired when the roadside radar is used for the first time. The initial target is a target object determined according to the initial point cloud data, and in this embodiment is used to refer to a static target object in the coverage area.
Optionally, the second spatial information may be used to characterize a positional relationship between the initial target and the standard target when the roadside radar is in initial use, such as at least one of an offset angle of the initial target with respect to the standard target, an offset direction of the initial target with respect to the standard target, and an offset distance of the initial target with respect to the standard target.
The first spatial information and the second spatial information comprise the same type of information. For example, the first spatial information includes a deviation angle of the real-time target from the standard target, and the second spatial information also includes a deviation angle of the initial target from the standard target.
S1703, determining whether the roadside radar is abnormally positioned according to the comparison result of the first spatial information and the second spatial information.
Specifically, the computer device compares the first spatial information between the real-time target and the standard target, obtained during real-time use, with the second spatial information between the initial target and the standard target, obtained when the roadside radar was first used, and determines from the comparison result whether the positioning has changed as the service time of the roadside radar increases, that is, whether the positioning is abnormal.
In this embodiment, the computer device takes the standard point cloud data as the common reference. It obtains the first spatial information between the real-time target and the standard target from the real-time point cloud data and the standard point cloud data, obtains the second spatial information between the initial target and the standard target from the initially collected initial point cloud data and the standard point cloud data, compares the two, and determines from the comparison result whether the real-time positioning of the roadside radar has changed relative to the initial positioning. The positioning performance of the roadside radar is thus checked in real time, positioning abnormalities are discovered promptly, and the efficiency of roadside radar positioning monitoring is improved.
In an embodiment, before the positioning of the roadside radar is monitored in real time, the second spatial information from the first use of the roadside radar needs to be acquired in advance. As shown in fig. 37, before S1701, the roadside radar positioning monitoring method further includes:
S1704, acquiring the initial point cloud data, performing feature recognition on the initial point cloud data to obtain initial targets, and acquiring the relative position relationship between the initial targets.
And S1705, performing feature recognition on the standard point cloud data to obtain a standard target.
Specifically, the computer device acquires the initial point cloud data of the coverage area collected by the roadside radar during its first use, performs feature recognition on the initial point cloud data to obtain initial targets, determines the relative position relationship between the initial targets according to the position of the geometric center of each initial target relative to the roadside radar, and performs feature recognition on the pre-stored standard point cloud data of the same coverage area to obtain standard targets.
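As an illustrative aid only, the relative position relationship described above can be sketched in Python as follows, assuming the point cloud has already been segmented into per-target point arrays; the function name relative_positions and the input layout are our assumptions, not the patent's:

    import numpy as np

    def relative_positions(targets, radar_origin=(0.0, 0.0)):
        # targets: dict mapping a target id to an (N, 3) array of points
        # expressed in the roadside radar coordinate system.
        relations = {}
        for tid, points in targets.items():
            center = points.mean(axis=0)                   # geometric center
            dx = center[0] - radar_origin[0]
            dy = center[1] - radar_origin[1]
            l = float(np.hypot(dx, dy))                    # distance l to the origin
            alpha = float(np.degrees(np.arctan2(dy, dx)))  # offset angle alpha
            relations[tid] = (l, alpha)
        return relations

The same helper applies unchanged to the real-time targets of S1709 below.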
And S1706, performing feature matching on the initial target and the standard target to obtain a corresponding relation between the matched initial target and the matched standard target.
S1707, obtaining the absolute position of the standard target matched with the roadside radar origin among the initial targets, and obtaining the absolute positions of the initial targets according to the relative position relationship between the initial targets.
Here, the roadside radar origin is the position where the roadside radar is installed, and the initial targets include the roadside radar origin. The absolute position includes absolute position coordinates, such as longitude and latitude coordinates, and may also include elevation and rotation about the east-west, north-south, or vertical axes.
In this embodiment, the absolute position coordinate is an absolute position coordinate of a central point of the static target.
Specifically, the computer device performs feature matching on the initial targets and the standard targets to obtain the matched initial and standard targets and the correspondence between them, and then acquires the absolute position of the standard target matched with the roadside radar origin. FIG. 38 is a diagram of the initial targets obtained from the initial point cloud, where M is the roadside radar origin and m denotes each of the remaining initial targets; each initial target m lies at a distance l and an offset angle α from the roadside radar origin M. After the feature matching, the computer device obtains the absolute position, such as the longitude and latitude coordinates, of the standard target matched with the roadside radar origin, and then obtains the absolute position of each initial target m from its distance l and offset angle α relative to the origin M.
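A minimal sketch of this position transfer, under two simplifying assumptions of ours (the offset angle α is measured clockwise from true north, and a flat-Earth approximation is acceptable within a single radar's coverage area):

    import math

    EARTH_RADIUS_M = 6371000.0  # mean Earth radius; adequate over a coverage area

    def absolute_position(origin_lat, origin_lon, l, alpha_deg):
        # Offset the radar-origin latitude/longitude by distance l at angle alpha.
        d_north = l * math.cos(math.radians(alpha_deg))
        d_east = l * math.sin(math.radians(alpha_deg))
        lat = origin_lat + math.degrees(d_north / EARTH_RADIUS_M)
        lon = origin_lon + math.degrees(
            d_east / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat))))
        return lat, lon

Evaluating absolute_position(M_lat, M_lon, l, alpha) once per initial target m then yields the absolute positions required by S1707.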
S1708, obtaining second spatial information between the matched initial target and the standard target according to the absolute position of the initial target and the absolute position of the standard target matched with the initial target.
Specifically, the computer device acquires the distance between the absolute positions of the matched initial target and the standard target as the second spatial information.
When the absolute position of the initial target is a longitude and latitude coordinate, the second spatial information between the initial target and the standard target is the distance between the longitude and latitude coordinate of the initial target and the longitude and latitude coordinate of the corresponding standard target.
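The patent does not fix a particular distance metric for longitude and latitude coordinates; one reasonable reading is the great-circle (haversine) distance, sketched here with the hypothetical helper haversine_m:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two latitude/longitude points.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 2 * 6371000.0 * math.asin(math.sqrt(a))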
In this embodiment, when the roadside radar is used for the first time, the computer device performs feature recognition on the collected initial point cloud data and on the pre-stored standard point cloud data to obtain the initial targets and the standard targets respectively, obtains the correspondence between the matched initial targets and standard targets by feature matching, acquires the absolute position of the roadside radar origin among the initial targets, and obtains the absolute position of each initial target in combination with the relative position relationship between the initial targets. The second spatial information between the initial targets and the standard targets is then determined from the absolute positions of the matched initial targets and standard targets. Because the positioning of a roadside radar at first use is accurate, the second spatial information accurately reflects the difference from the standard targets under accurate positioning; using it as the reference standard makes it possible to effectively monitor whether the roadside radar is positioned abnormally during subsequent real-time use, improving the accuracy of positioning monitoring.
In an embodiment, when the positioning of the roadside radar is monitored in real time, the first spatial information needs to be obtained from the real-time point cloud data collected by the roadside radar. As shown in fig. 39, S1701 includes:
S1709, acquiring real-time point cloud data, performing feature recognition on the real-time point cloud data to obtain real-time targets, and acquiring the relative position relationship between the real-time targets. Specifically, the roadside radar scans 360 degrees around its own position to form its corresponding coverage area and collects real-time point cloud data of that area. The computer device acquires the real-time point cloud data of the coverage area collected by the roadside radar during real-time use, performs feature recognition on it to obtain real-time targets, and determines the relative position relationship between the real-time targets according to the position of the geometric center of each real-time target relative to the roadside radar.
S1710, obtaining the absolute position of the standard target matched with the roadside radar origin among the real-time targets, and obtaining the absolute positions of the real-time targets according to the relative position relationship between the real-time targets.
Specifically, the computer device obtains the absolute position of the standard target matched with the roadside radar origin among the real-time targets, and obtains the absolute position of each real-time target in combination with the relative position relationship between the real-time targets.
And S1711, acquiring the absolute position of the standard target matched with the real-time target according to the corresponding relation between the matched initial target and the standard target.
The real-time targets correspond one-to-one with the initial targets and with the static target objects indicated by the standard targets.
And S1712, obtaining first space information between the matched real-time target and the standard target according to the absolute position of the real-time target and the absolute position of the standard target matched with the real-time target.
Specifically, the computer device acquires the absolute position of the standard target matched with each real-time target according to the correspondence between the matched initial targets and standard targets, combined with the one-to-one correspondence between the real-time targets and the initial targets, and takes the distance between the absolute positions of a matched real-time target and standard target as the first spatial information.
The second spatial information is the distance between the longitude and latitude coordinates of the initial target and the longitude and latitude coordinates of the corresponding matched standard target, and the first spatial information is the distance between the longitude and latitude coordinates of the real-time target and the longitude and latitude coordinates of the corresponding matched standard target.
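Continuing the sketch, with the hypothetical haversine_m helper above and purely illustrative per-target coordinate tables, the two kinds of spatial information can be tabulated as:

    # Hypothetical absolute (lat, lon) tables for one matched static target.
    initial_pos = {"pole_1": (39.904200, 116.407400)}
    realtime_pos = {"pole_1": (39.904203, 116.407404)}
    standard_pos = {"pole_1": (39.904201, 116.407401)}

    second_info = {tid: haversine_m(*initial_pos[tid], *standard_pos[tid])
                   for tid in standard_pos}   # first-use distances (S1708)
    first_info = {tid: haversine_m(*realtime_pos[tid], *standard_pos[tid])
                  for tid in standard_pos}    # real-time distances (S1712)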
In this embodiment, after the first use of the roadside radar, the computer device performs feature recognition on the collected real-time point cloud data to obtain real-time targets, determines the absolute position of the roadside radar origin among the real-time targets according to the correspondence between the initial targets and the standard targets, and obtains the absolute position of each real-time target in combination with the relative position relationship between the real-time targets, so that during subsequent use of the roadside radar the first spatial information between the matched real-time targets and standard targets can be determined from their absolute positions. The first spatial information reflects, in real time, the difference between the real-time targets and the standard targets, so that the real-time positioning state of the roadside radar can be monitored accurately.
In one embodiment, whether the roadside radar is positioned abnormally is determined in real time by comparing the second spatial information obtained when the roadside radar is used for the first time with the first spatial information obtained after the first use. As shown in fig. 40, S1703 includes:
And S1713, acquiring a difference value between the second spatial information and the first spatial information.
And S1714, judging whether the roadside radar is abnormally positioned according to the difference.
Specifically, the computer device obtains the difference between the second spatial information and the first spatial information of the same static target, and judges whether the roadside radar is positioned abnormally according to whether the difference is greater than a preset difference. If the obtained difference is greater than the preset difference, the positioning of the roadside radar is abnormal; if the obtained difference is less than or equal to the preset difference, the positioning of the roadside radar is normal.
Optionally, the computer device may obtain an average value or a maximum value of a difference value between the first spatial information and the second spatial information corresponding to the static target, and determine whether the roadside radar is positioned abnormally according to whether the average value or the maximum value of the difference value is greater than a preset difference value.
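Both variants of S1713-S1714 fit in a few lines; the threshold value below is illustrative only, and first_info/second_info are the hypothetical tables from the previous sketch:

    from statistics import mean

    def positioning_abnormal(first_info, second_info, preset_diff_m=0.5,
                             reduce=max):
        # Per-target difference between second and first spatial information,
        # reduced to a single value (max by default; pass reduce=mean for the
        # average-value variant) and compared with the preset difference.
        diffs = [abs(second_info[t] - first_info[t]) for t in second_info]
        return reduce(diffs) > preset_diff_m

Calling positioning_abnormal(first_info, second_info, reduce=mean) implements the average-value variant described above.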
In this embodiment, the computer device obtains the difference between the second spatial information between the initial target and the standard target, obtained when the roadside radar is used for the first time, and the first spatial information between the real-time target and the standard target, obtained during subsequent real-time use. The positioning change of the roadside radar between first use and real-time use is thus quantified by this difference, which improves the accuracy of roadside radar positioning monitoring.
In one embodiment, to further improve the accuracy of roadside radar location monitoring, as shown in fig. 41, S1714 includes:
And S1715, acquiring the ratio of the difference value to the second spatial information.
And S1716, judging whether the ratio meets a preset range.
And if so, determining that the roadside radar is positioned normally.
If not, determining that the positioning of the roadside radar is abnormal.
Specifically, the computer device determines whether the roadside radar is positioned abnormally by acquiring the ratio of the difference to the second spatial information and judging whether the ratio falls within a preset range. For example, the second spatial information is the initial distance between an initial target and a standard target obtained when the roadside radar is first used, the first spatial information is the real-time distance between a real-time target and the standard target obtained during subsequent real-time use, and the difference between them is the deviation between the initial distance and the real-time distance. The computer device compares this distance deviation with the initial distance and judges whether their ratio falls within a preset range of 2%. If the obtained ratio is within the preset range, namely less than or equal to 2%, the roadside radar is determined to be positioned normally; if the obtained ratio exceeds the preset range, namely is greater than 2%, the positioning of the roadside radar is determined to be abnormal.
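The ratio test of S1715-S1716, with the 2% range from the example above; the zero-distance guard is our addition for safety:

    def ratio_abnormal(initial_m, realtime_m, preset_ratio=0.02):
        # Distance deviation relative to the initial distance; True means the
        # positioning of the roadside radar is abnormal.
        if initial_m == 0:
            return False  # degenerate case with no meaningful ratio (our choice)
        return abs(realtime_m - initial_m) / initial_m > preset_ratio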
In this embodiment, the computer device further obtains the ratio of the difference between the first spatial information and the second spatial information to the second spatial information, and judges whether the ratio satisfies the preset range, so as to determine whether the positioning abnormality is caused by problems of the roadside radar itself, such as abnormal ranging, reduced rotation speed, or point loss. This ratio-based judgment improves the applicability of the whole positioning monitoring method and further improves the accuracy of roadside radar positioning monitoring.
In one embodiment, after determining that the roadside radar is abnormally located, the roadside radar location monitoring method further includes:
And if the positioning of the roadside radar is abnormal, sending an abnormal alarm instruction to the control platform.
The abnormal alarm instruction comprises the radar number of the roadside radar whose positioning is abnormal.
Specifically, if the roadside radar is abnormally positioned, the radar number of the roadside radar is acquired, abnormal alarm information including the radar number of the roadside radar is generated, and the abnormal alarm information is sent to the control platform.
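This step admits a very small sketch; the message fields and the platform client's send method are assumptions made for illustration:

    import json

    def send_abnormal_alarm(radar_number, platform_client):
        # Package the radar number of the abnormally positioned roadside radar
        # into an abnormal-alarm message and push it to the control platform.
        alarm = {"event": "positioning_abnormal", "radar_number": radar_number}
        platform_client.send(json.dumps(alarm))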
In this embodiment, after determining that the roadside radar is positioned abnormally, the computer device further sends abnormal alarm information to the control platform, so that the relevant personnel can promptly learn, through the control platform, the number of the roadside radar whose positioning is abnormal. This facilitates targeted maintenance of the abnormally positioned roadside radar and improves the maintenance efficiency of the roadside radar.
It should be understood that although the steps in the flowcharts of figs. 2-41 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 2-41 may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments, and which are not necessarily executed sequentially but may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 42, there is provided a multi-base station registration apparatus including: an acquisition module 11, a registration module 12 and a calculation module 13, wherein:
the acquisition module 11 is used for acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
and the registration module 12 is configured to iteratively perform, for each base station individually, matching between the radar point cloud data of the base station and the corresponding map point cloud data to obtain a matching result of the position coordinates of the origin of the laser radar coordinate system, adjust the original registration parameters of each base station according to the matching result until the output matching result meets a preset condition, and output the adjusted registration parameters of each base station. The registration parameters of the base station comprise a longitude, a latitude, an altitude, a rotation angle about the longitude axis, a rotation angle about the latitude axis, and a rotation angle about the altitude axis of an origin of a coordinate system of the base station;
a calculating module 13, configured to calculate the relative registration parameters of each base station according to the registration parameters of each base station.
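The three modules of fig. 42 map naturally onto a small class skeleton. This is an illustrative structure only; all names and the ellipsis bodies stand in for the processing described in the method embodiments:

    class MultiBaseStationRegistration:
        def acquire(self, base_stations):
            # acquisition module 11: radar point cloud data plus the
            # corresponding high-precision map point cloud data per base station
            return {bs: (bs.radar_cloud, bs.map_cloud) for bs in base_stations}

        def register(self, clouds, preset_condition):
            # registration module 12: per-station iterative matching of the
            # lidar-origin position coordinates, adjusting the registration
            # parameters until the matching result meets the preset condition
            ...

        def relative_parameters(self, registration_params):
            # calculation module 13: relative registration parameters between
            # the base stations
            ...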
For the specific definition of the multi-base-station registration apparatus, reference may be made to the above definition of the multi-base-station registration method, which is not repeated here. Each module in the multi-base-station registration apparatus can be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 43. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of multi-base station registration. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 43 is merely a block diagram of part of the structure related to the solution of the present disclosure and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that, when executing the computer program, performs the steps of:
acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
and respectively and iteratively performing matching between the radar point cloud data of each base station and the corresponding map point cloud data, base station by base station, to obtain a matching result of the position coordinates of the origin of the laser radar coordinate system, adjusting the original registration parameters of each base station according to the matching result until the output matching result meets a preset condition, and outputting the adjusted registration parameters of each base station, wherein the registration parameters of the base station comprise a longitude, a latitude, an altitude, a rotation angle about the longitude axis, a rotation angle about the latitude axis, and a rotation angle about the altitude axis of an origin of a coordinate system of the base station;
and calculating the relative registration parameters of the base stations according to the registration parameters of the base stations.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon; the computer program, when executed by a processor, implements the steps of:
acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
and respectively and iteratively performing matching between the radar point cloud data of each base station and the corresponding map point cloud data, base station by base station, to obtain a matching result of the position coordinates of the origin of the laser radar coordinate system, adjusting the original registration parameters of each base station according to the matching result until the output matching result meets a preset condition, and outputting the adjusted registration parameters of each base station, wherein the registration parameters of the base station comprise a longitude, a latitude, an altitude, a rotation angle about the longitude axis, a rotation angle about the latitude axis, and a rotation angle about the altitude axis of an origin of a coordinate system of the base station;
and calculating the relative registration parameters of the base stations according to the registration parameters of the base stations.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to the memory, storage, database, or other media used in the embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present disclosure, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make variations and modifications without departing from the concept of the embodiments of the present disclosure, and these all fall within the protection scope of the embodiments of the present disclosure. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A method of multi-base station registration, the method comprising:
acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
acquiring a first feature set of the radar point cloud data of each base station and a second feature set of the corresponding map point cloud data, acquiring a first projection included angle of the first feature set and a second projection included angle of the second feature set, and obtaining a matching result of the position coordinates of the origin of a laser radar coordinate system according to the first feature set, the second feature set, the first projection included angle and the second projection included angle; the radar point cloud data of each base station is static radar point cloud data; the first projection included angle is an included angle between a projection line segment, in a preset plane in the laser radar coordinate system, of a line segment between any two first features in the first feature set and a corresponding coordinate axis; the second projection included angle is an included angle between a projection line segment, in a preset plane in a geographic coordinate system, of a line segment between any two second features in the second feature set and a corresponding coordinate axis;
adjusting the original registration parameters of each base station according to the matching result until the output matching result meets the preset condition, and outputting the adjusted registration parameters of each base station; the registration parameters of the base station comprise a longitude, a latitude, an altitude, a rotation angle about the longitude axis, a rotation angle about the latitude axis, and a rotation angle about the altitude axis of an origin of a coordinate system of the base station;
and calculating the relative registration parameters of the base stations according to the registration parameters of the base stations.
2. The method of claim 1, wherein obtaining the map point cloud data comprises:
acquiring original map point cloud data of each base station; the precision of the original map point cloud data is greater than the preset precision threshold;
and obtaining, from the original map point cloud data, the map point cloud data of the area within the preset scanning range according to the actual installation position of the laser radar of each base station and the preset scanning range of the laser radar, as the map point cloud data.
3. The method according to claim 1, wherein before obtaining the first feature set of the radar point cloud data of each base station and the second feature set of the corresponding map point cloud data, obtaining the first projection included angle of the first feature set and the second projection included angle of the second feature set, and obtaining the matching result of the position coordinates of the origin of the laser radar coordinate system according to the first feature set, the second feature set, the first projection included angle, and the second projection included angle, the method further comprises:
and eliminating dynamic radar point cloud data in the radar point cloud data of each base station to obtain static radar point cloud data of each base station.
4. The method of claim 1, wherein obtaining the first feature set of radar point cloud data and the corresponding second feature set of map point cloud data for each base station comprises:
extracting features in the static radar point cloud data of each base station to obtain a first feature set; the first set of features comprises at least two first features;
and extracting the features in the map point cloud data of each base station to obtain a second feature set.
5. The method according to claim 4, wherein obtaining the matching result of the position coordinates of the origin of the laser radar coordinate system according to the first feature set, the second feature set, the first projection included angle and the second projection included angle comprises:
performing difference operation on the first projection included angle and the second projection included angle to obtain a rotation angle of the origin of the laser radar coordinate system;
and obtaining a matching result of the position coordinate of the origin of the laser radar coordinate system according to the rotation angle of the origin of the laser radar coordinate system, the first characteristic set and the second characteristic set.
6. The method according to claim 5, wherein obtaining the matching result of the position coordinate of the origin of the laser radar coordinate system according to the rotation angle of the origin of the laser radar coordinate system, the first feature set and the second feature set comprises:
matching each first feature in the first feature set with each second feature in the second feature set to obtain a target first feature and a target second feature which belong to the same type;
correcting the position coordinate of the target first feature according to the value of the cosine function of the rotation angle to obtain a corrected position coordinate of the target first feature;
and determining the matching result of the position coordinate of the origin of the laser radar coordinate system according to the corrected position coordinate of the target first feature and the position coordinate of the target second feature.
7. The method of claim 6, wherein the rotation angle of the origin of the laser radar coordinate system comprises: a rotation angle about the longitude axis, a rotation angle about the latitude axis, and a rotation angle about the altitude axis; and the position coordinates of the origin of the laser radar coordinate system comprise: longitude, latitude, and altitude.
8. A method of environmental awareness, the method comprising:
acquiring relative registration parameters of each base station based on the multi-base-station registration method of any one of claims 1 to 7;
converting the position coordinates of each base station into the coordinate system in which the position coordinates of a same base station are located according to the relative registration parameters of each base station, to obtain the converted position coordinates of each base station;
and identifying the target object in the sensing area of each base station by adopting a preset identification method according to the converted position coordinates of each base station and the radar point cloud data of each base station.
9. A multi-base station registration apparatus, the apparatus comprising:
the acquisition module is used for acquiring radar point cloud data and corresponding map point cloud data of each base station; the precision of the map point cloud data is greater than a preset precision threshold;
the registration module is used for acquiring a first feature set of the radar point cloud data of each base station and a second feature set of the corresponding map point cloud data, acquiring a first projection included angle of the first feature set and a second projection included angle of the second feature set, obtaining a matching result of the position coordinates of the origin of the laser radar coordinate system according to the first feature set, the second feature set, the first projection included angle and the second projection included angle, adjusting the original registration parameters of each base station according to the matching result until the output matching result meets a preset condition, and outputting the adjusted registration parameters of each base station; the radar point cloud data of each base station is static radar point cloud data; the first projection included angle is an included angle between a projection line segment, in a preset plane in the laser radar coordinate system, of a line segment between any two first features in the first feature set and a corresponding coordinate axis; the second projection included angle is an included angle between a projection line segment, in a preset plane in a geographic coordinate system, of a line segment between any two second features in the second feature set and a corresponding coordinate axis; the registration parameters of the base station comprise a longitude, a latitude, an altitude, a rotation angle about the longitude axis, a rotation angle about the latitude axis, and a rotation angle about the altitude axis of an origin of a coordinate system of the base station;
and the calculation module is used for calculating the relative registration parameters of all the base stations according to the registration parameters of all the base stations.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented by the processor when executing the computer program.
11. A storage medium having a computer program stored thereon, the computer program, when being executed by a processor, realizing the steps of the method of any one of claims 1 to 8.
CN202011018673.1A 2020-09-24 2020-09-24 Multi-base-station registration method and device, computer equipment and storage medium Active CN114255264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011018673.1A CN114255264B (en) 2020-09-24 2020-09-24 Multi-base-station registration method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114255264A CN114255264A (en) 2022-03-29
CN114255264B (en) 2023-03-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant