CN111815687A - Point cloud matching method, positioning method, device and storage medium - Google Patents
Point cloud matching method, positioning method, device and storage medium
- Publication number
- CN111815687A CN111815687A CN202010565092.3A CN202010565092A CN111815687A CN 111815687 A CN111815687 A CN 111815687A CN 202010565092 A CN202010565092 A CN 202010565092A CN 111815687 A CN111815687 A CN 111815687A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- feature
- cloud data
- preset
- semantic information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 52
- 238000003860 storage Methods 0.000 title claims abstract description 12
- 238000001914 filtration Methods 0.000 claims abstract description 6
- 239000013598 vector Substances 0.000 claims description 51
- 238000000605 extraction Methods 0.000 claims description 6
- 230000011218 segmentation Effects 0.000 claims description 3
- 238000010586 diagram Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000011065 in-situ storage Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a point cloud matching method, a positioning method, a device, and a storage medium. The point cloud matching method comprises the following steps: determining semantic information corresponding to a first point cloud feature of target point cloud data, wherein the target point cloud data is obtained by scanning the surrounding environment of a target pose; filtering point features corresponding to dynamic objects in the first point cloud features based on the semantic information; and performing feature matching on the filtered first point cloud features and second point cloud features of preset point cloud data to obtain a matching result. By the method, the target point cloud data can be accurately matched with the preset point cloud data.
Description
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a point cloud matching method, a point cloud positioning method, an electronic device, and a computer-readable storage medium.
Background
Relocalization is one of the most common problems to be solved in simultaneous localization and mapping (SLAM). In practical production application scenarios, the application of map positioning technology is often challenged by uncertain external environments. For example, when the pose of the robot on the global map is determined without any prior information about the environment, if the robot is moved by an external force, the continuity of the acquired data is broken and pose positioning fails. For another example, in modern production environments, more and more robots need to move autonomously in highly dynamic environments, and when the external environment of the robot changes severely, the robot's localization often drifts or is even lost. In the situations listed above, the pose of the robot needs to be re-determined. In robot map positioning, the point cloud matching process is particularly critical, but existing point cloud matching methods are not highly accurate.
Disclosure of Invention
The present application mainly solves the technical problem that existing point cloud matching methods have low accuracy.
In order to solve the technical problem, the application adopts a technical scheme that: provided is a point cloud matching method, which includes: determining semantic information corresponding to a first point cloud feature of target point cloud data, wherein the target point cloud data is obtained by scanning the surrounding environment of a target pose; filtering point features corresponding to the dynamic object in the first point cloud features based on the semantic information; and performing feature matching on the filtered first point cloud features and second point cloud features of the preset point cloud data to obtain a matching result.
In order to solve the technical problem, the application adopts a technical scheme that: there is provided a positioning method, the method comprising: taking the current pose as a target pose, and scanning the surrounding environment of the target pose by using a scanning device to obtain target point cloud data; obtaining a plurality of candidate point cloud data respectively corresponding to a plurality of candidate poses by using the global map data; taking each frame of candidate point cloud data as preset point cloud data, performing feature matching on the target point cloud data and each frame of preset point cloud data, and selecting preset point cloud data which meets preset conditions in matching with the target point cloud data; positioning the current pose by using the pose corresponding to the selected preset point cloud data; the process of performing feature matching on the target point cloud data and each frame of preset point cloud data can be realized by using the point cloud matching method.
In order to solve the above technical problem, another technical solution adopted by the present application is: an electronic device is provided, which includes a processor and a memory coupled to the processor, wherein the memory stores program instructions and the processor is configured to execute the program instructions to implement the above method.
In order to solve the above technical problem, the present application adopts another technical solution: a computer-readable storage medium is provided, which stores program instructions that, when executed, implement the above method.
The beneficial effect of this application is: by implementing the scheme, the semantic information is corresponding to the second point cloud features of the preset point cloud data, so that after the semantic information corresponding to the first point cloud features of the target point cloud data is determined, the features of the target point cloud data and the preset point cloud data can be further matched based on the determined semantic information; and before matching, filtering the point characteristics corresponding to the dynamic object in the first point cloud characteristics based on the semantic information corresponding to the first point cloud characteristics, and matching the filtered first point cloud characteristics with the second point cloud characteristics of the preset point cloud data, so that the obtained matching result is more accurate.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a point cloud matching method according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating a point cloud matching method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of a detailed flow chart of S220 in FIG. 2;
FIG. 4 is a schematic diagram of a detailed flow chart of S220 in FIG. 2;
FIG. 5 is a detailed flowchart of S223 in FIG. 4;
FIG. 6 is a schematic view of a detailed flow chart of S150 in FIG. 1;
FIG. 7 is a detailed flowchart of S151 in FIG. 6;
FIG. 8 is a schematic flow chart diagram illustrating an embodiment of a positioning method of the present application;
FIG. 9 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 10 is a schematic diagram of another embodiment of an electronic device of the present application;
FIG. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flow chart of an embodiment of a point cloud matching method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 1 is not limited in this embodiment. As shown in fig. 1, the present embodiment may include:
S110: Acquiring a target image set and target point cloud data obtained by respectively photographing and scanning the surrounding environment of the target pose.
The target pose may be, but is not limited to, the current pose of the robot. The target image set comprises a plurality of images of the target pose surrounding environment, the images can be acquired through a device with a shooting function (such as a camera sensor), and in a specific application scene, the robot can acquire the images of the target pose surrounding environment through in-situ rotation and the like, so that the acquired target pose surrounding environment information is more comprehensive. Of course, the robot may also acquire the image of the environment around the target pose in other ways, which is not limited herein.
The target point cloud data may be point cloud data within a preset range around the target pose, which may be obtained by scanning the environment around the target pose, and may specifically be obtained by scanning with a scanning device (such as a laser radar sensor). The target point cloud data may be composed of a plurality of points, which may include information such as coordinates of the points. The point in the target point cloud data may be a point directly acquired by the scanning device, or a point obtained by down-sampling or voxelization of the point acquired by the scanning device.
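A minimal sketch of the down-sampling/voxelization mentioned above; the voxel size and the use of voxel centroids are illustrative assumptions, not values taken from this application:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.1):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(int)          # voxel index of each point
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```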
S120: Performing semantic segmentation on each image in the target image set to obtain semantic information of the image, and performing feature extraction on the target point cloud data to obtain a first point cloud feature of the target point cloud data.
Each image in the target image set may be semantically segmented using, but not limited to, a semantic segmentation neural network such as SwiftNet to obtain semantic information of the image. This semantic segmentation network relies on a lightweight architecture as the primary recognition engine; it achieves a 75.5% mean intersection over union (mIoU) on the classic Cityscapes dataset and a processing speed of approximately 40 Hz at a resolution of 1024 x 2048 on a GTX 1080Ti. The semantic information of the image may include categories such as road surface, sidewalk, tree, bush, pedestrian, building, ground, wall, vehicle, and goods shelf.
The first point cloud feature may be a set of features of the points in the target point cloud data, which may be extracted, but is not limited to, by using a point cloud feature extraction network (KPConv). Specifically, for each point x_i in the target point cloud data, the point cloud feature extraction network takes x_i as the center and r as the radius to determine a sphere, and the sphere covers other points x_i' in the point cloud besides x_i. H core points are determined in the sphere; these are not points in the point cloud, but special pose points calculated through a specific rule. Each core point has a weight matrix, and a kernel function can be used to compute, for each x_i' within the sphere, its correlation with the core points, after which the weight matrices transform the features of x_i'. The features of every x_i' falling within the sphere are obtained in this way, and finally the transformed features of all x_i' are summed up as the feature vector p_i of x_i, i.e., the feature of x_i.
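A simplified Python sketch of this neighborhood-based feature computation is given below; it is not the actual KPConv implementation, and the core point positions, the linear correlation function, and the weight matrices are illustrative assumptions:

```python
import numpy as np

def kpconv_like_feature(points, feats, i, kernel_points, weights, r=0.5, sigma=0.3):
    """Simplified KPConv-style feature for the point x_i = points[i].

    points:        (N, 3) coordinates of the target point cloud
    feats:         (N, C_in) input features of the points
    kernel_points: (H, 3) core point offsets placed inside the sphere of radius r
    weights:       (H, C_in, C_out) one weight matrix per core point
    """
    center = points[i]
    dists = np.linalg.norm(points - center, axis=1)
    mask = dists <= r                                    # neighbors x_i' inside the sphere
    neighbors = points[mask] - center                    # coordinates relative to x_i
    neighbor_feats = feats[mask]

    out = np.zeros(weights.shape[2])
    for x, f in zip(neighbors, neighbor_feats):
        for kp, W in zip(kernel_points, weights):
            # linear correlation kernel: core points closer to x_i' contribute more
            corr = max(0.0, 1.0 - np.linalg.norm(x - kp) / sigma)
            out += corr * (f @ W)                        # transform the feature of x_i' and accumulate
    return out                                           # feature vector p_i of x_i
```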
S130: Determining semantic information corresponding to the first point cloud feature of the target point cloud data.
The semantic information corresponding to each point feature in the first point cloud features can be determined based on the shooting parameters of the shooting device and the scanning parameters of the scanning device. In other words, semantic information of the image may be projected onto the first point cloud feature based on the shooting parameters of the shooting device and the scanning parameters of the scanning device, so that each point feature in the first point cloud feature carries the semantic information.
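A minimal sketch of this projection, assuming a pinhole camera model; the intrinsic matrix K and the LiDAR-to-camera transform T_cam_lidar stand in for the shooting parameters and scanning parameters mentioned above:

```python
import numpy as np

def project_semantics(points, label_img, K, T_cam_lidar):
    """Attach a semantic label from the segmented image to each 3D point.

    points:      (N, 3) points in the LiDAR frame
    label_img:   (H, W) per-pixel semantic class labels from segmentation
    K:           (3, 3) camera intrinsic matrix
    T_cam_lidar: (4, 4) transform from the LiDAR frame to the camera frame
    Returns (N,) labels; -1 for points that do not project into the image.
    """
    H, W = label_img.shape
    homo = np.hstack([points, np.ones((len(points), 1))])       # (N, 4) homogeneous points
    cam = (T_cam_lidar @ homo.T).T[:, :3]                        # points in the camera frame
    labels = np.full(len(points), -1, dtype=label_img.dtype)
    in_front = cam[:, 2] > 0                                     # keep points in front of the camera
    uvz = (K @ cam[in_front].T).T
    uv = (uvz[:, :2] / uvz[:, 2:3]).round().astype(int)          # pixel coordinates (u, v)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.where(in_front)[0][valid]
    labels[idx] = label_img[uv[valid, 1], uv[valid, 0]]
    return labels
```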
S140: Filtering point features corresponding to dynamic objects in the first point cloud features based on the semantic information.
The point feature corresponding to a dynamic object may be a point feature whose corresponding semantic information in the first point cloud features indicates a dynamic object, where a dynamic object may also be referred to as a potentially moving object, that is, an object that may move, such as a vehicle, a pedestrian, or a shelf.
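Once each point feature carries a semantic label, this filtering step reduces to a boolean mask over the first point cloud features; a minimal sketch, where the set of dynamic categories simply follows the examples named above:

```python
import numpy as np

# potentially moving ("dynamic") categories named in the text above
DYNAMIC_CLASSES = {"vehicle", "pedestrian", "shelf"}

def filter_dynamic(point_feats, labels):
    """Keep only the point features whose semantic label is not a dynamic object."""
    keep = np.array([label not in DYNAMIC_CLASSES for label in labels])
    return point_feats[keep], [l for l, k in zip(labels, keep) if k]
```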
S150: Performing feature matching on the filtered first point cloud features and second point cloud features of the preset point cloud data to obtain a matching result.
The preset point cloud data may be preset point cloud data corresponding to a pose in the global map, that is, point cloud data in a preset range around the pose in the global map. The global map can be a map constructed in advance according to the planned path of the robot, and the pose in the global map can be the pose in the planned path of the robot. The second point cloud feature extraction method is the same as the first point cloud feature extraction method. And performing feature matching on the remaining point features (static features) in the filtered first point cloud features and the second point cloud features of the preset point cloud data, so that the influence of a dynamic object on the subsequent judgment of the current pose of the robot can be reduced, and the finally obtained pose is more accurate.
Referring to fig. 2, S150 may be preceded by:
S210: Counting the semantic information of the filtered first point cloud features to obtain semantic statistical information of the first point cloud features.
Optionally, the semantic statistical information of the first point cloud feature includes a semantic information category corresponding to the first point cloud feature and a ratio of each semantic information category.
The semantic information category corresponding to the first point cloud feature may be a semantic information category corresponding to a point feature in the first point cloud feature, and the proportion of each semantic information category in the first point cloud feature may be the proportion of each point feature corresponding to the same semantic information category in the first point cloud feature.
For example, semantic information corresponding to a point feature in the first point cloud feature includes "sidewalk" and "building", and the semantic information category corresponding to the first point cloud feature is "sidewalk" and "building". The occupation ratio of the point features corresponding to the semantic information sidewalk in the first point cloud features is the occupation ratio of the semantic information category sidewalk, and the occupation ratio of the point features corresponding to the semantic information building in the first point cloud features is the occupation ratio of the semantic information category building.
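A minimal sketch of this statistic, assuming one semantic label per (filtered) point feature:

```python
from collections import Counter

def semantic_statistics(labels):
    """Return the semantic categories present and the proportion of each category."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# semantic_statistics(["sidewalk"] * 34 + ["building"] * 66)
# -> {"sidewalk": 0.34, "building": 0.66}
```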
S220: Comparing the semantic statistical information of the first point cloud feature and the second point cloud feature.
Optionally, the semantic statistical information of the second point cloud features includes semantic information categories corresponding to the second point cloud features and a ratio of each semantic information category, and an obtaining manner of the semantic statistical information refers to an obtaining manner of corresponding semantic information in the first point cloud features.
A specific way of comparing the semantic statistical information of the first point cloud feature and the second point cloud feature may be: respectively comparing the semantic information categories corresponding to the first point cloud feature and the second point cloud feature, and the proportions of the semantic information categories, to obtain a semantic similarity and a proportion similarity.
Referring to fig. 3, in S220, comparing semantic information categories corresponding to the first point cloud feature and the second point cloud feature to obtain semantic similarity may include:
S221: Establishing category vectors for the first point cloud feature and the second point cloud feature respectively according to the semantic information categories corresponding to the first point cloud feature and the second point cloud feature.
Each bit of the category vector corresponds to a semantic information category, and corresponding bits of the category vectors of the first point cloud feature and the second point cloud feature correspond to the same semantic information category. If a bit of the category vector is the first character, the semantic information category corresponding to that bit exists in the corresponding point cloud feature; if a bit of the category vector is the second character, the semantic information category corresponding to that bit does not exist in the corresponding point cloud feature.
The number of bits in the category vector may be the total number of semantic information categories, the semantic information category sequence corresponding to the bits in the category vector may be preset, the semantic information category sequence corresponding to the category vector bit of the first point cloud feature is the same as the semantic information category sequence corresponding to the category vector bit of the second point cloud feature, and if the characters on the bits corresponding to the category vectors of the first point cloud feature and the second point cloud feature are the first characters, it is determined that the categories corresponding to the bits exist in both the first point cloud feature and the second point cloud feature.
For example, "1" is set as the first character, "0" is set as the second character, and the initial category vector is (0,0,0). The order of the semantic information categories corresponding to the 3 bits of the initial category vector is preset: the first bit corresponds to the semantic information category "sidewalk", the second bit corresponds to "tree", and the third bit corresponds to "building". If the semantic information categories corresponding to the first point cloud feature are sidewalk and tree, and the semantic information categories corresponding to the second point cloud feature are sidewalk and building, the category vector of the first point cloud feature is (1,1,0) and the category vector of the second point cloud feature is (1,0,1).
S222: Calculating the Hamming distance between the category vectors of the first point cloud feature and the second point cloud feature to obtain the semantic similarity.
The semantic similarity may also be referred to as the semantic information category similarity, and may be the number of semantic information categories in which the first point cloud feature and the second point cloud feature differ, that is, the number of differing bits in the category vectors of the first point cloud feature and the second point cloud feature, which can be represented as the Hamming distance between the two category vectors. For example, if the category vector of the first point cloud feature is (1,1,0) and the category vector of the second point cloud feature is (1,0,1), the number of differing bits in the two category vectors is 2, the Hamming distance between the two category vectors is 2, and the semantic similarity between the first point cloud feature and the second point cloud feature is 2.
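A minimal sketch of S221–S222, assuming a preset category order of only the three categories used in the example above:

```python
CATEGORY_ORDER = ["sidewalk", "tree", "building"]   # preset order; 3 bits for illustration

def category_vector(stats):
    """Bit is the first character '1' if the category is present in the point cloud feature, else '0'."""
    return [1 if c in stats else 0 for c in CATEGORY_ORDER]

def hamming(a, b):
    """Number of positions at which two equal-length vectors differ."""
    return sum(x != y for x, y in zip(a, b))

# category_vector({"sidewalk": 0.4, "tree": 0.6})      -> [1, 1, 0]
# category_vector({"sidewalk": 0.3, "building": 0.7})  -> [1, 0, 1]
# hamming([1, 1, 0], [1, 0, 1])                        -> 2  (the semantic similarity above)
```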
Referring to fig. 4, in S220, comparing the proportions of the semantic information categories of the first point cloud feature and the second point cloud feature to obtain the proportion similarity may include:
S223: Establishing a corresponding proportion vector based on the proportion of each semantic information category.
Wherein the value of the proportion vector is used for representing the proportion of the corresponding semantic information category.
Referring to fig. 5, S223 may include:
S2231: Establishing a proportion vector with a preset number of bits.
Wherein each bit represents a preset proportion.
For example, if the preset number of bits of the proportion vector is 20, the preset proportion is 5%; that is, the total proportion of 100% is divided into 20 equal parts, and each bit in the proportion vector represents a 5% share.
S2232: Acquiring the quotient between the proportion of the semantic information category and the preset proportion, and rounding up the quotient to obtain an integer quotient.
For example, if the preset proportion is 5% and the proportion of the semantic information category "sidewalk" corresponding to the first point cloud feature is 34%, then 34% / 5% = 6.8, and rounding up gives an integer quotient of 7 for the semantic information category "sidewalk".
S2233: Assigning, in a preset order, the first integer-quotient bits of the proportion vector as the first character and the remaining bits as the second character.
For example, the initial proportion vector is set to a 20-bit vector of all zeros. If the integer quotient corresponding to the semantic information category "sidewalk" of the first point cloud feature is 7, the proportion vector of the semantic information category "sidewalk" corresponding to the first point cloud feature is (1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0); that is, the first 7 bits of the proportion vector are assigned "1" and the remaining 13 bits are assigned "0". Whenever the proportion of a semantic information category corresponding to the first point cloud feature falls within (30%, 35%], the first 7 bits are "1" and the remaining 13 bits are "0".
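A minimal sketch of S2231–S2233 with the 20-bit, 5%-per-bit setting used in the example; the rounding guard against floating-point error is an implementation detail, not part of the described method:

```python
import math

def proportion_vector(ratio, bits=20):
    """First ceil(ratio / (1/bits)) bits are the first character '1', the rest are '0'."""
    ones = min(bits, math.ceil(round(ratio * bits, 6)))   # round() avoids float artifacts at exact multiples
    return [1] * ones + [0] * (bits - ones)

# proportion_vector(0.34) -> [1]*7 + [0]*13   (34% / 5% = 6.8, rounded up to 7)
```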
S224: Calculating the Hamming distance between the proportion vectors of the same semantic information category corresponding to the first point cloud feature and the second point cloud feature.
For example, if the first point cloud feature and the second point cloud feature both correspond to the semantic information category "sidewalk", the proportion vector of "sidewalk" for the first point cloud feature is (1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0), and the proportion vector of "sidewalk" for the second point cloud feature is (1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0), then the two vectors differ in one bit, and the Hamming distance of the "sidewalk" proportion vectors is 1.
S225: Obtaining the proportion similarity based on the Hamming distance corresponding to each semantic information category shared by the first point cloud feature and the second point cloud feature.
Optionally, the Hamming distances corresponding to each semantic information category of the first point cloud feature and the second point cloud feature are summed to obtain the proportion similarity. In other words, the proportion similarity is the sum of the Hamming distances corresponding to each semantic information category of the first point cloud feature and the second point cloud feature.
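A minimal sketch of S224–S225; the per-category proportion vectors are assumed to have been built as in the sketch after S2233:

```python
def proportion_similarity(vectors_a, vectors_b):
    """Sum of Hamming distances between the proportion vectors of categories shared by both features.

    vectors_a, vectors_b: {category: proportion vector} for the first and second point cloud features.
    """
    total = 0
    for category in set(vectors_a) & set(vectors_b):
        total += sum(x != y for x, y in zip(vectors_a[category], vectors_b[category]))
    return total
```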
S230: it is determined whether to perform S150 according to the comparison result.
In a specific embodiment of the present application, if the sum of the semantic similarity and the proportion similarity is smaller than a similarity threshold, the point cloud feature matching degree between the target point cloud data and the preset point cloud data is calculated. In other words, when the sum of the semantic similarity and the proportion similarity of the first point cloud feature and the second point cloud feature is smaller than the similarity threshold, the comparison result of the semantic statistical information of the first point cloud feature and the second point cloud feature is considered to meet the requirement, and the filtered first point cloud features are then feature-matched with the second point cloud features of the preset point cloud data to obtain a matching result, namely S150 is executed.
In another embodiment of the present application, in the case that the positioning pose of the robot is lost, S150 is executed if the sum of the semantic similarity and the proportion similarity is smaller than the similarity threshold and/or the pose corresponding to the preset point cloud data is within a preset range of the pose before the robot was lost.
Referring to fig. 6, S150 may include:
S151: Matching the filtered first point cloud features with the second point cloud features of the preset point cloud data to obtain a plurality of matching point pairs between the target point cloud data and the preset point cloud data.
Referring to fig. 7, S151 may include:
S1511: Taking each point of the target point cloud data as a point to be matched, calculating the feature distance between the point to be matched and each point in the preset point cloud data by using the first point cloud feature and the second point cloud feature, and calculating the coordinate distance between the point to be matched and each point in the preset point cloud data.
In one embodiment, the feature distance between the point to be matched and each point in the preset point cloud data may be the Hamming distance between the point feature (vector) corresponding to the point to be matched in the first point cloud features and a point feature (vector) in the second point cloud features.
For example, let P1 = (p11, p12, p13) be the first point cloud feature of the target point cloud data, where p1i is the i-th point feature vector of the first point cloud feature P1, and let P2 = (p21, p22, p23) be the second point cloud feature of the preset point cloud data, where p2i is the i-th point feature vector of the second point cloud feature P2. Taking p11 as the feature vector corresponding to the point to be matched, the Hamming distances between p11 and p21~p23 are calculated respectively: the Hamming distance between p11 and p21 is taken as the feature distance between the point to be matched and point 1 (the point corresponding to p21 in the preset point cloud data), the Hamming distance between p11 and p22 is taken as the feature distance between the point to be matched and point 2, and the Hamming distance between p11 and p23 is taken as the feature distance between the point to be matched and point 3.
In one embodiment, the coordinate distance between the point to be matched and each point in the preset point cloud data may be a Euclidean distance.
S1512: Respectively weighting the feature distance and the coordinate distance between the point to be matched and each point in the preset point cloud data to obtain the corresponding points between the target point cloud data and the preset point cloud data and the similarity of the corresponding points.
The feature distance and the coordinate distance between the point to be matched and a point in the preset point cloud data can be weighted according to the following formula:

score = w1·s + w2·o,

where score represents the similarity between the point to be matched and the point in the preset point cloud data, w1 represents the weight of the feature distance between the point to be matched and the point in the preset point cloud data, s represents that feature distance, w2 represents the weight of the coordinate distance between the point to be matched and the point in the preset point cloud data, and o represents that coordinate distance.
By traversing the points of the preset point cloud data one by one, the point with the minimum similarity score with respect to the point to be matched is selected from the preset point cloud data as the point corresponding to the point to be matched; that is, the point in the preset point cloud data with the minimum score and the point to be matched are taken as a pair of corresponding points, and the score between them is taken as the similarity of the corresponding points.
S1513: Judging whether the similarity of the corresponding points is smaller than a preset threshold value.
Whether the similarity of each group of corresponding points between the target point cloud data and the preset point cloud data is smaller than the preset threshold value is judged respectively. If yes, S1514 is executed.
S1514: Taking the corresponding points as a matching point pair.
The corresponding points between the target point cloud data and the preset point cloud data whose similarity is smaller than the preset threshold value are respectively taken as matching point pairs.
S152: Taking the number of the matching point pairs as the point cloud feature matching degree between the target point cloud data and the preset point cloud data.
The point cloud feature matching degree between the target point cloud data and the preset point cloud data may be the number of matching point pairs between the target point cloud data and the preset point cloud data; the greater the number of matching point pairs, the higher the point cloud feature matching degree between the target point cloud data and the preset point cloud data.
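A minimal sketch of S1511–S152; note that it substitutes a Euclidean feature distance for the Hamming distance used in the example above, and the weights w1, w2 and the score threshold are illustrative placeholders:

```python
import numpy as np

def match_point_clouds(tgt_pts, tgt_feats, ref_pts, ref_feats,
                       w1=0.5, w2=0.5, score_thresh=1.0):
    """Return the matching point pairs and their count (the point cloud feature matching degree).

    For each point to be matched, the preset point with the lowest weighted score
    score = w1 * feature_distance + w2 * coordinate_distance is taken as its
    corresponding point, and the pair is kept only if the score is below score_thresh.
    """
    pairs = []
    for i, (p, f) in enumerate(zip(tgt_pts, tgt_feats)):
        feat_d = np.linalg.norm(ref_feats - f, axis=1)    # feature distance (Euclidean stand-in)
        coord_d = np.linalg.norm(ref_pts - p, axis=1)     # coordinate (Euclidean) distance
        scores = w1 * feat_d + w2 * coord_d
        j = int(np.argmin(scores))                        # most similar point in the preset point cloud
        if scores[j] < score_thresh:
            pairs.append((i, j))
    return pairs, len(pairs)                              # matching degree = number of matching point pairs
```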
Through the implementation of the embodiment, the second point cloud feature of the preset point cloud data corresponds to the semantic information, so that after the semantic information corresponding to the first point cloud feature of the target point cloud data is determined, the features of the target point cloud data and the preset point cloud data can be further matched based on the determined semantic information; and before matching, filtering the point characteristics corresponding to the dynamic object in the first point cloud characteristics based on the semantic information corresponding to the first point cloud characteristics, and matching the filtered first point cloud characteristics with the second point cloud characteristics of the preset point cloud data, so that the obtained matching result is more accurate.
Fig. 8 is a flowchart illustrating an embodiment of a positioning method according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 8 is not limited in this embodiment. As shown in fig. 8, the present embodiment may include:
S310: Taking the current pose as the target pose, and scanning the surrounding environment of the target pose by using a scanning device to obtain target point cloud data.
The current pose referred to in this embodiment is the current pose of the robot, that is, the actual pose of the robot.
S320: Obtaining a plurality of candidate point cloud data respectively corresponding to a plurality of candidate poses by using the global map data.
The global map data may be point cloud data corresponding to the global map mentioned in the above embodiment, that is, data corresponding to a planned path of the robot, and may include candidate point cloud data corresponding to a plurality of candidate poses in the planned path of the robot. The candidate point cloud data of the candidate pose can refer to point cloud data in a preset range around the candidate pose.
S330: Taking each frame of candidate point cloud data as preset point cloud data, performing feature matching on the target point cloud data and each frame of preset point cloud data, and selecting the preset point cloud data whose match with the target point cloud data meets a preset condition.
The specific feature matching in this step can be realized by the point cloud matching method in the first embodiment. In addition, referring to the first embodiment, in the feature matching process, if the sum of the semantic similarity and the proportion similarity between the target point cloud data and the current frame of preset point cloud data is smaller than a preset threshold and/or the pose corresponding to the preset point cloud data is within a preset range of the pose before the positioning was lost, the current frame of preset point cloud data is added to the candidate matching set. The point cloud feature matching degree between the target point cloud data and each preset point cloud data in the candidate matching set is then calculated, so as to find, from the candidate matching set, the preset point cloud data whose match with the target point cloud data meets the preset condition.
The preset point cloud data whose match with the target point cloud data meets the preset condition may be the preset point cloud data whose point cloud feature matching degree is greater than a matching degree threshold.
S340: Positioning the current pose by using the pose corresponding to the selected preset point cloud data.
When multiple frames of preset point cloud data exist, one frame of preset point cloud data can be selected from the multiple frames of preset point cloud data, and the current pose is located in the global map by using the pose corresponding to the frame of preset point cloud data. For example, the pose corresponding to the preset point cloud data with the highest point cloud feature matching degree between the preset point cloud data and the target point cloud data is selected as an initial value and transmitted into a pose positioning algorithm, and the current pose is positioned in the global map. Of course, the pose corresponding to each frame of the selected preset point cloud data can also be used as the initial value of the pose positioning algorithm to position the current pose in the global map. The pose positioning algorithm can be an ICP algorithm, an NDT algorithm and the like.
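A minimal sketch of how the pieces above could be combined for relocalization; the two callables stand in for the semantic-statistics comparison (S210–S230) and the point-level matching (S151–S152), and the thresholds are illustrative placeholders:

```python
def relocalize(target, candidates, compare_stats, match, sim_thresh=5.0, match_thresh=50):
    """Select the candidate pose whose preset point cloud best matches the target scan.

    target:        the (filtered) target point cloud data, features and semantic statistics
    candidates:    list of dicts, each holding a candidate 'pose' and its preset point cloud data
    compare_stats: callable(target, candidate) -> semantic similarity + proportion similarity
    match:         callable(target, candidate) -> point cloud feature matching degree
    """
    best_pose, best_degree = None, -1
    for cand in candidates:
        # coarse screening with semantic statistics: skip frames that differ too much
        if compare_stats(target, cand) >= sim_thresh:
            continue
        # fine matching on point features
        degree = match(target, cand)
        if degree > match_thresh and degree > best_degree:
            best_pose, best_degree = cand['pose'], degree
    return best_pose   # typically passed as the initial value to an ICP/NDT pose positioning algorithm
```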
Through the implementation of the embodiment, the target point cloud data and the multi-frame candidate point cloud data in the global map data can be matched by using the point cloud matching method provided by the first embodiment, so that the candidate point cloud data which is most matched with the target point cloud data is obtained, and the current pose of the robot can be positioned in the global map according to the candidate pose corresponding to the candidate point cloud data.
Fig. 9 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in fig. 9, the electronic device includes a processor 410 and a memory 420 coupled to the processor 410.
Wherein the memory 420 stores program instructions for implementing the method of any of the above embodiments; the processor 410 is configured to execute program instructions stored by the memory 420 to implement the steps of the above-described method embodiments. The processor 410 may also be referred to as a Central Processing Unit (CPU), among others. The processor 410 may be an integrated circuit chip having signal processing capabilities. The processor 410 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In a specific embodiment of the present application, the electronic device may be a robot. Referring to fig. 10, the robot includes a robot body 430, and a scanning device 431 and a photographing device 432 provided on the robot body 430, in addition to the processor 410 and the memory 420 described above.
The scanning device 431 may be a device with a scanning function, such as a laser radar, which may be used to scan the surrounding environment of the robot to obtain point cloud data of the surrounding environment of the robot. The photographing device 432 may be a device having a photographing function, such as a camera sensor, which may be used to photograph an image of the surroundings of the robot. Also, the scanning device 431 and the photographing device 432 may transmit the acquired data to the processor 410 for processing.
FIG. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application. The computer-readable storage medium 500 of the embodiments of the present application stores program instructions 510, and the program instructions 510, when executed, implement the methods provided by the above-described embodiments of the present application. The program instructions 510 may form a program file stored in the computer-readable storage medium 500 in the form of a software product, so as to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned computer-readable storage medium 500 includes various media capable of storing program code, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.
Claims (13)
1. A point cloud matching method, comprising:
determining semantic information corresponding to a first point cloud feature of target point cloud data, wherein the target point cloud data is obtained by scanning the surrounding environment of a target pose;
filtering point features corresponding to the dynamic object in the first point cloud features based on the semantic information;
and performing feature matching on the filtered first point cloud features and second point cloud features of the preset point cloud data to obtain a matching result.
2. The method of claim 1, wherein prior to the determining semantic information corresponding to the first point cloud feature of the target point cloud data, the method further comprises:
acquiring a target image set and target point cloud data which are obtained by respectively shooting and scanning the surrounding environment of a target pose and correspondingly;
performing semantic segmentation on each image in the target image set to obtain semantic information of the image, and performing feature extraction on the target point cloud data to obtain a first point cloud feature of the target point cloud data;
the determining semantic information corresponding to the first point cloud feature of the target point cloud data comprises:
and determining the semantic information corresponding to each point feature in the first point cloud features based on the shooting parameters and the scanning parameters.
3. The method of claim 1, further comprising, before the feature matching the filtered first point cloud features with the second point cloud features of the preset point cloud data to obtain a matching result:
counting the semantic information of the filtered first point cloud features to obtain semantic statistical information of the first point cloud features;
comparing semantic statistical information of the first point cloud feature and the second point cloud feature;
and determining whether to perform characteristic matching on the filtered first point cloud characteristics and the second point cloud characteristics of the preset point cloud data according to the comparison result so as to obtain a matching result.
4. The method of claim 3,
the semantic statistical information of the first point cloud features comprises semantic information categories corresponding to the first point cloud features and the occupation ratio of each semantic information category, and the semantic statistical information of the second point cloud features comprises semantic information categories corresponding to the second point cloud features and the occupation ratio of each semantic information category.
5. The method of claim 4,
the comparing semantic statistical information of the first point cloud feature and the second point cloud feature comprises:
respectively comparing the semantic information categories corresponding to the first point cloud feature and the second point cloud feature, and the occupation ratios of the semantic information categories, to obtain a semantic similarity and an occupation ratio similarity;
the step of determining whether to perform feature matching of the filtered first point cloud features and the second point cloud features of the preset point cloud data according to the comparison result to obtain a matching result includes:
if the sum of the semantic similarity and the proportion similarity is smaller than a preset threshold value, performing feature matching on the filtered first point cloud features and second point cloud features of the preset point cloud data to obtain a matching result; and/or,
under the condition that the positioning pose of the robot is lost, the target pose is the current pose of the robot, and if the pose corresponding to the preset point cloud data is within the preset range of the pose before loss, the step of performing feature matching on the filtered first point cloud features and the second point cloud features of the preset point cloud data is executed to obtain a matching result.
6. The method of claim 5,
comparing semantic information categories corresponding to the first point cloud feature and the second point cloud feature to obtain semantic similarity, wherein the semantic similarity comprises the following steps:
establishing category vectors for the first point cloud feature and the second point cloud feature respectively according to semantic information category conditions corresponding to the first point cloud feature and the second point cloud feature, wherein each bit of the category vector corresponds to one semantic information category, corresponding bits of the category vectors of the first point cloud feature and the second point cloud feature correspond to the same semantic information category, if the bit of the category vector is a first character, the semantic information category corresponding to the bit exists in the corresponding point cloud feature, and if the bit of the category vector is a second character, the semantic information category corresponding to the bit does not exist in the corresponding point cloud feature;
calculating the Hamming distance between the category vectors of the first point cloud feature and the second point cloud feature to obtain the semantic similarity;
comparing the ratio between the first point cloud feature and the second point cloud feature to correspondingly obtain ratio similarity, wherein the step of comparing the ratio between the first point cloud feature and the second point cloud feature comprises the following steps:
establishing a corresponding proportion vector based on the proportion of each semantic information category, wherein the value of the proportion vector is used for expressing the proportion of the corresponding semantic information category;
calculating the Hamming distance of the ratio vector of the same semantic information category corresponding to the first point cloud feature and the second point cloud feature respectively;
and obtaining the occupation similarity based on the corresponding Hamming distance of each semantic information category corresponding to the first point cloud feature and the second point cloud feature.
7. The method according to claim 6, wherein establishing a corresponding proportion vector based on the proportion of each semantic information category comprises:
establishing a ratio vector of preset digits, wherein each digit represents a preset ratio;
obtaining a quotient between the ratio of the semantic information category and the preset ratio, and rounding up the quotient to obtain an integer quotient;
assigning the integer quotient bits in the ratio vector to be first characters according to a preset sequence, and assigning the remaining bits to be second characters;
the obtaining the occupation similarity based on the corresponding hamming distance of each semantic information category corresponding to the first point cloud feature and the second point cloud feature comprises:
and summing the corresponding Hamming distances of each semantic information category of the first point cloud feature and the second point cloud feature to obtain the proportion similarity.
8. The method of claim 1, wherein the feature matching the filtered first point cloud features with the second point cloud features of the preset point cloud data to obtain a matching result comprises:
matching the filtered first point cloud characteristics with second point cloud characteristics of the preset point cloud data to obtain a plurality of matching point pairs between the target point cloud data and the preset point cloud data;
and taking the number of the matching point pairs as the point cloud feature matching degree between the target point cloud data and the preset point cloud data.
9. The method of claim 8, wherein the matching the filtered first point cloud features and the second point cloud features of the pre-set point cloud data to obtain a number of pairs of matching points between the target point cloud data and the pre-set point cloud data comprises:
taking each point of the target point cloud data as a point to be matched, calculating a characteristic distance between the point to be matched and each point in the preset point cloud data by using the first point cloud characteristic and the second point cloud characteristic, and calculating a coordinate distance between the point to be matched and each point in the preset point cloud data;
respectively weighting the characteristic distance and the coordinate distance between the point to be matched and each point in the preset point cloud data to obtain a corresponding point between the target point cloud data and the preset point cloud data and the similarity of the corresponding point;
and if the similarity of the corresponding points is smaller than a preset threshold value, taking the corresponding points as matching point pairs.
10. A method of positioning, comprising:
taking the current pose as a target pose, and scanning the surrounding environment of the target pose by using a scanning device to obtain target point cloud data;
obtaining a plurality of candidate point cloud data respectively corresponding to a plurality of candidate poses by using global map data;
taking each frame of candidate point cloud data as preset point cloud data, performing feature matching on the target point cloud data and each frame of preset point cloud data, and selecting preset point cloud data which meets preset conditions in matching with the target point cloud data;
positioning the current pose by using the pose corresponding to the selected preset point cloud data;
wherein, the process of feature matching between the target point cloud data and the preset point cloud data is realized by using the method of any one of claims 1 to 9.
11. An electronic device comprising a processor, a memory coupled to the processor, wherein,
the memory stores program instructions;
the processor is configured to execute the program instructions stored by the memory to implement the method of any of claims 1-10.
12. The electronic device of claim 11, wherein the electronic device is a robot, the robot further comprising a robot body and a scanning device and a photographing device disposed on the robot body.
13. A computer-readable storage medium, characterized in that the storage medium stores program instructions which, when executed, implement the steps of the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010565092.3A CN111815687B (en) | 2020-06-19 | 2020-06-19 | Point cloud matching method, positioning method, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010565092.3A CN111815687B (en) | 2020-06-19 | 2020-06-19 | Point cloud matching method, positioning method, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111815687A true CN111815687A (en) | 2020-10-23 |
CN111815687B CN111815687B (en) | 2024-09-03 |
Family
ID=72845376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010565092.3A Active CN111815687B (en) | 2020-06-19 | 2020-06-19 | Point cloud matching method, positioning method, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111815687B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112784873A (en) * | 2020-12-25 | 2021-05-11 | 华为技术有限公司 | Semantic map construction method and equipment |
CN113064576A (en) * | 2021-04-09 | 2021-07-02 | 北京云迹科技有限公司 | Volume adjusting method and device, mobile equipment and storage medium |
CN113656418A (en) * | 2021-07-27 | 2021-11-16 | 追觅创新科技(苏州)有限公司 | Semantic map storage method and device, storage medium and electronic device |
CN113726891A (en) * | 2021-08-31 | 2021-11-30 | 中联重科建筑起重机械有限责任公司 | Method and device for establishing communication connection and engineering machinery |
CN114526720A (en) * | 2020-11-02 | 2022-05-24 | 北京四维图新科技股份有限公司 | Positioning processing method, device, equipment and storage medium |
CN114926649A (en) * | 2022-05-31 | 2022-08-19 | 中国第一汽车股份有限公司 | Data processing method, device and computer readable storage medium |
EP4068208A1 (en) * | 2021-03-31 | 2022-10-05 | Topcon Corporation | Point cloud information processing device, point cloud information processing method, and point cloud information processing program |
CN115619871A (en) * | 2022-09-05 | 2023-01-17 | 中汽创智科技有限公司 | Vehicle positioning method, device, equipment and storage medium |
WO2023131203A1 (en) * | 2022-01-04 | 2023-07-13 | 深圳元戎启行科技有限公司 | Semantic map updating method, path planning method, and related apparatuses |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180275277A1 (en) * | 2017-03-22 | 2018-09-27 | Here Global B.V. | Method, apparatus and computer program product for mapping and modeling a three dimensional structure |
CN109285220A (en) * | 2018-08-30 | 2019-01-29 | 百度在线网络技术(北京)有限公司 | A kind of generation method, device, equipment and the storage medium of three-dimensional scenic map |
CN110068824A (en) * | 2019-04-17 | 2019-07-30 | 北京地平线机器人技术研发有限公司 | A kind of sensor pose determines method and apparatus |
WO2019153245A1 (en) * | 2018-02-09 | 2019-08-15 | Baidu.Com Times Technology (Beijing) Co., Ltd. | Systems and methods for deep localization and segmentation with 3d semantic map |
CN110335319A (en) * | 2019-06-26 | 2019-10-15 | 华中科技大学 | Camera positioning and the map reconstruction method and system of a kind of semantics-driven |
CN111008660A (en) * | 2019-12-03 | 2020-04-14 | 北京京东乾石科技有限公司 | Semantic map generation method, device and system, storage medium and electronic equipment |
CN111190981A (en) * | 2019-12-25 | 2020-05-22 | 中国科学院上海微系统与信息技术研究所 | Method and device for constructing three-dimensional semantic map, electronic equipment and storage medium |
WO2020103108A1 (en) * | 2018-11-22 | 2020-05-28 | 深圳市大疆创新科技有限公司 | Semantic generation method and device, drone and storage medium |
CN111209978A (en) * | 2020-04-20 | 2020-05-29 | 浙江欣奕华智能科技有限公司 | Three-dimensional visual repositioning method and device, computing equipment and storage medium |
-
2020
- 2020-06-19 CN CN202010565092.3A patent/CN111815687B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180275277A1 (en) * | 2017-03-22 | 2018-09-27 | Here Global B.V. | Method, apparatus and computer program product for mapping and modeling a three dimensional structure |
WO2019153245A1 (en) * | 2018-02-09 | 2019-08-15 | Baidu.Com Times Technology (Beijing) Co., Ltd. | Systems and methods for deep localization and segmentation with 3d semantic map |
CN109285220A (en) * | 2018-08-30 | 2019-01-29 | 百度在线网络技术(北京)有限公司 | A kind of generation method, device, equipment and the storage medium of three-dimensional scenic map |
WO2020103108A1 (en) * | 2018-11-22 | 2020-05-28 | 深圳市大疆创新科技有限公司 | Semantic generation method and device, drone and storage medium |
CN110068824A (en) * | 2019-04-17 | 2019-07-30 | 北京地平线机器人技术研发有限公司 | A kind of sensor pose determines method and apparatus |
CN110335319A (en) * | 2019-06-26 | 2019-10-15 | 华中科技大学 | Camera positioning and the map reconstruction method and system of a kind of semantics-driven |
CN111008660A (en) * | 2019-12-03 | 2020-04-14 | 北京京东乾石科技有限公司 | Semantic map generation method, device and system, storage medium and electronic equipment |
CN111190981A (en) * | 2019-12-25 | 2020-05-22 | 中国科学院上海微系统与信息技术研究所 | Method and device for constructing three-dimensional semantic map, electronic equipment and storage medium |
CN111209978A (en) * | 2020-04-20 | 2020-05-29 | 浙江欣奕华智能科技有限公司 | Three-dimensional visual repositioning method and device, computing equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
王任栋; 徐友春; 齐尧; 韩栋斌; 李华: "A robust point cloud registration method for complex dynamic urban scenes", Robot (机器人), no. 03 *
王宪伦; 张海洲; 安立雄: "Object pose estimation based on image semantic segmentation", Machine Building & Automation (机械制造与自动化), no. 02, 20 April 2020 (2020-04-20) *
薛耀红; 梁学章; 马婷; 梁英; 车翔玖: "An automatic registration method for scanned point clouds", Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), no. 02 *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114526720A (en) * | 2020-11-02 | 2022-05-24 | 北京四维图新科技股份有限公司 | Positioning processing method, device, equipment and storage medium |
CN114526720B (en) * | 2020-11-02 | 2024-04-16 | 北京四维图新科技股份有限公司 | Positioning processing method, device, equipment and storage medium |
CN112784873A (en) * | 2020-12-25 | 2021-05-11 | 华为技术有限公司 | Semantic map construction method and equipment |
CN112784873B (en) * | 2020-12-25 | 2024-08-23 | 华为技术有限公司 | Semantic map construction method and device |
EP4068208A1 (en) * | 2021-03-31 | 2022-10-05 | Topcon Corporation | Point cloud information processing device, point cloud information processing method, and point cloud information processing program |
CN113064576A (en) * | 2021-04-09 | 2021-07-02 | 北京云迹科技有限公司 | Volume adjusting method and device, mobile equipment and storage medium |
CN113656418A (en) * | 2021-07-27 | 2021-11-16 | 追觅创新科技(苏州)有限公司 | Semantic map storage method and device, storage medium and electronic device |
CN113656418B (en) * | 2021-07-27 | 2023-08-22 | 追觅创新科技(苏州)有限公司 | Semantic map storage method and device, storage medium and electronic device |
CN113726891A (en) * | 2021-08-31 | 2021-11-30 | 中联重科建筑起重机械有限责任公司 | Method and device for establishing communication connection and engineering machinery |
WO2023131203A1 (en) * | 2022-01-04 | 2023-07-13 | 深圳元戎启行科技有限公司 | Semantic map updating method, path planning method, and related apparatuses |
CN114926649A (en) * | 2022-05-31 | 2022-08-19 | 中国第一汽车股份有限公司 | Data processing method, device and computer readable storage medium |
CN115619871A (en) * | 2022-09-05 | 2023-01-17 | 中汽创智科技有限公司 | Vehicle positioning method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111815687B (en) | 2024-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111815687B (en) | Point cloud matching method, positioning method, equipment and storage medium | |
CN110657803B (en) | Robot positioning method, device and storage device | |
US10019657B2 (en) | Joint depth estimation and semantic segmentation from a single image | |
CN109035304B (en) | Target tracking method, medium, computing device and apparatus | |
KR101896357B1 (en) | Method, device and program for detecting an object | |
CN111652217A (en) | Text detection method and device, electronic equipment and computer storage medium | |
CN110443258B (en) | Character detection method and device, electronic equipment and storage medium | |
CN114677412B (en) | Optical flow estimation method, device and equipment | |
CN111652181B (en) | Target tracking method and device and electronic equipment | |
CN114677565B (en) | Training method and image processing method and device for feature extraction network | |
CN111950389B (en) | Depth binary feature facial expression recognition method based on lightweight network | |
CN112101360A (en) | Target detection method and device and computer readable storage medium | |
CN112836625A (en) | Face living body detection method and device and electronic equipment | |
CN114359665A (en) | Training method and device of full-task face recognition model and face recognition method | |
CN112597918A (en) | Text detection method and device, electronic equipment and storage medium | |
CN111179270A (en) | Image co-segmentation method and device based on attention mechanism | |
CN115620321B (en) | Table identification method and device, electronic equipment and storage medium | |
CN112241736B (en) | Text detection method and device | |
CN113379592B (en) | Processing method and device for sensitive area in picture and electronic equipment | |
CN112084371A (en) | Film multi-label classification method and device, electronic equipment and storage medium | |
WO2023241372A1 (en) | Camera intrinsic parameter calibration method and related device | |
CN117994561A (en) | Target detection method and device based on directed bounding box and electronic equipment | |
CN116258873A (en) | Position information determining method, training method and device of object recognition model | |
CN114511877A (en) | Behavior recognition method and device, storage medium and terminal | |
CN113139629A (en) | Font identification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |