CN114004869A - Positioning method based on 3D point cloud registration - Google Patents
Positioning method based on 3D point cloud registration
- Publication number
- CN114004869A (application CN202111109705.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- points
- data
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration (G06T7/00 Image analysis)
- G01S17/06 — Systems determining position data of a target, using reflection of electromagnetic waves other than radio waves (G01S17/02)
- G01S17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/89 — Lidar systems specially adapted for mapping or imaging (G01S17/88 Lidar systems specially adapted for specific applications)
- G06F18/23 — Clustering techniques (G06F18/00 Pattern recognition; G06F18/20 Analysing)
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T2207/10028 — Range image; depth image; 3D point clouds (G06T2207/10 Image acquisition modality)
Abstract
The invention discloses a positioning method based on 3D point cloud registration, which comprises the following steps: 3D SLAM, data acquisition, feature point extraction and matching optimization, and path planning. In the 3D SLAM step, a mobile robot is used to build a point cloud map of the complete working environment. During navigation, real-time point cloud data are obtained with the multi-line laser radar, and the point cloud data are preprocessed and feature points are extracted. Laser odometer data are obtained through inter-frame matching and then optimized. Point cloud data adjacent to the current frame point cloud are retrieved with a nearest neighbor search. An ICP matching algorithm calculates the relative transformation between the current robot pose and the robot pose in the point cloud map, and this transformation is added to the pose graph as a constraint to optimize the laser odometer, so that accurate robot pose information is obtained. Autonomous navigation of the robot is then realized with the navigation tool. The invention improves the positioning precision of the mobile robot during navigation and at the same time improves the stability and reliability of positioning.
Description
Technical Field
The invention relates to the technical field of autonomous positioning and navigation of robots, in particular to a positioning method based on 3D point cloud registration.
Background
With the development of computer technology, mobile robots have received wide attention. Autonomous navigation is a basic function of a mobile robot, and positioning is a prerequisite for achieving it. The robot acquires information about itself and its environment through the sensor modules it carries. Common sensors in existing positioning technology include inertial navigation sensors (IMU), wheel encoders, GPS, and laser radar. GPS is suitable for positioning and navigation in outdoor environments; in indoor environments with poor signals, its precision can hardly meet the requirement and it may even fail to work at all.
At present, positioning technology that combines a laser radar with an inertial navigation sensor and a wheel encoder is widely applied to indoor mobile robots. The wheel encoder combined with the inertial navigation sensor (IMU) provides position and attitude information of the robot: while the robot moves, motion information such as velocity and acceleration is obtained by resolving the raw data of the wheel encoder and the inertial navigation sensor, and combining the attitude and motion information yields the robot's odometer information. Matching the current environment information scanned by the laser radar against a stored two-dimensional map gives the estimated pose of the robot in the environment. The maximum detection range of a single-line laser radar is 10-20 m, which cannot capture enough environmental information in an open, large indoor environment, so robot positioning fails. The detection range of a multi-line laser radar can exceed 100 m; converting its three-dimensional laser point cloud into two-dimensional laser data to replace the single-line laser radar in existing laser positioning schemes effectively enlarges the effective range of the laser data and improves positioning precision. However, in an environment with sparse feature points, the pose obtained by registering laser data that contains only planar information against a two-dimensional map is inaccurate, because planar laser data cannot describe the robot's surroundings precisely and a two-dimensional map cannot store the environment information completely. Thus, in some special scenarios, such as long corridors, robot positioning may suffer large errors.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a positioning method based on 3D point cloud registration, which improves positioning precision and enhances applicability and robustness in special environments.
The method uses a multi-line laser radar and matches the environment point cloud obtained by real-time scanning, after processing, against a three-dimensional environment point cloud map built with 3D SLAM technology, so as to obtain the estimated pose of the robot in the working environment. This estimate is fused with the data of the inertial navigation sensor and the wheel encoder to calculate more accurate robot pose information. The pose information from the positioning module is then used as input, and the driving path of the robot is planned with the move_base and teb algorithms to realize the navigation function of the robot.
A positioning method based on 3D point cloud registration comprises the following steps:
step 1: calibrating external parameters of the sensor; installing an inertial navigation sensor on a base of the robot, and taking the position as an origin of a base coordinate system; calculating the relative position relationship between a radar coordinate system and a base coordinate system by manually measuring the relative position relationship between the laser radar and the inertial navigation sensor; issuing TF conversion relation topics of a radar coordinate system and a base coordinate system on an industrial personal computer by using a communication mechanism of an ROS (robot operating system);
step 2: 3D mapping; controlling the robot to move in a working environment; acquiring a 3D point cloud map of a robot working environment by using a 3DSLAM technology, and performing downsampling on point clouds, wherein the 3D point cloud map is stored in an octree data format; extracting point clouds in a set height range of the point cloud map, and projecting the point clouds onto a horizontal plane of an origin of a coordinate system of a chassis of the robot to form a two-dimensional grid map;
step 3: extracting and matching the feature points;
step 3-1: point cloud pretreatment; acquiring real-time point cloud data by using a 3D laser radar; selecting a starting point and a final point, keeping the angle of the two points between pi and 3 pi, and controlling the angle of a frame of point cloud within the range of the starting point and the final point; storing the received point cloud information into a two-dimensional array form, wherein the first index of the array represents the number of lines where points are located, the second index represents the number of points on the lines, and the data stored in the array represents the distance between the points and the laser radar; removing point clouds with too far or too close distances according to the distance of the point clouds; calculating a horizontal included angle between each point and the previous point in a laser beam below the horizontal direction in the multi-line laser radar, and marking two points of which the horizontal included angle between the two points is smaller than a certain threshold value as ground points; clustering the point clouds meeting the clustering quantity into a plurality of point cloud sets by using a point cloud clustering algorithm, and rejecting the point clouds which cannot form clusters as noise points;
step 3-2: extracting feature points; calculating curvature values of all clustered points except the ground points; the calculation formula is as follows:
wherein r_i represents the distance value of the i-th point and S represents the number of points; dividing a frame of point cloud into a plurality of areas according to azimuth, sorting the points in each area according to their curvature, marking points with curvature values larger than a certain threshold as corner points and points with curvature values smaller than the threshold as plane points; the corner points with the largest curvature and the plane points with the smallest curvature are further marked as large corner points and large plane points;
step 3-3: frame matching; performing matching calculation between the current point cloud and the corner points and plane points of the previous frame of point cloud using a point cloud matching algorithm; performing point-to-line matching between the large corner points of the current frame and the corner points of the previous frame to obtain the change of the pitch angle, roll angle and z axis of the robot pose; performing point-to-plane matching between the large plane points of the current frame and the plane points of the previous frame, with the pitch angle, roll angle and z-axis change as constraints, to obtain the change of x, y and yaw angle of the robot pose; assuming that the movement of the laser radar is uniform, after the transformation matrix of the end point of a frame relative to its starting point is calculated, any point in the frame can be interpolated according to the time, relative to the starting point, at which it was acquired, giving the pose at each point; the formula for interpolation is as follows:
wherein t_i represents the time of the i-th IMU datum within a frame of laser point cloud data, and the corresponding pose represents the pose of the laser odometer when the i-th IMU datum is received after the k-th frame of laser point cloud data has been received; a rotation matrix R and a translation T are used to represent the correspondence between a point in the current frame data and a point in the previous frame data:
the point-to-line and point-to-plane distances are differentiated with respect to the rotation matrix, and an error function for optimization is obtained:
each row of f represents a feature point; the pose information obtained by matching the point cloud frames is optimized by a Newton iteration method to obtain laser odometer data, and the laser odometer data are published as a ROS topic;
step 3-4: point cloud map matching; matching and searching the current point cloud against the point cloud map established in step 2 using a kd-tree nearest neighbor search algorithm; storing the map point clouds meeting the threshold condition as historical key frame point clouds for matching; calculating the relative transformation between the historical key frame point cloud and the current point cloud with a point cloud matching algorithm to obtain the final pose transformation; taking the laser odometer calculated in step 3-3 as the optimization target, and adding the pose constraints obtained from the matching to the pose graph to optimize the target, finally obtaining accurate robot pose data;
step 4: converting the point cloud data into two-dimensional laser data and, together with the robot pose information obtained in step 3-4, using them as input data, and using the 2D grid map from step 2 as the global navigation map; performing path planning with the ROS navigation algorithm to realize autonomous navigation of the robot;
the invention has the advantages and positive effects that:
1. the map point cloud is subjected to down-sampling compression, so that the number of point clouds in the point cloud map is reduced on the premise of not influencing the positioning precision, and the operation speed is increased.
2. The invention acquires the environmental information with a multi-line laser radar, whose detection range is far larger than that of a single-line laser radar and which can acquire point cloud information of the vertical space, so richer environment information can be obtained.
3. The invention uses a clustering method to segment and filter the point cloud information. Point clouds with the same attribute are registered separately, which improves registration precision, reduces the registration calculation amount, and increases calculation speed.
4. The invention is based on a 3D point cloud registration method, and the real-time laser point cloud data and the established environment point cloud map are registered, so that the influence of feature point sparsity on positioning is reduced, and the positioning precision is improved.
5. The invention projects the multi-line laser point cloud data according to actual use requirements to obtain appropriate two-dimensional laser data, improving the flexibility of navigation.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an established environment point cloud map.
Fig. 3 is environmental point cloud information collected in real time.
Fig. 4a to 4b are schematic diagrams of point cloud information after point cloud segmentation, where fig. 4a is a schematic diagram of corner points and plane points, and fig. 4b is a schematic diagram of ground points.
Fig. 5 shows the two-dimensional laser data obtained by converting the three-dimensional point cloud data.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A working flow diagram of the positioning method based on 3D point cloud registration is shown in figure 1; the method comprises four steps: 3D SLAM, data acquisition, feature point extraction and matching optimization, and path planning. Fig. 2 shows an environment point cloud map constructed with 3D SLAM technology. The point cloud includes location information and attribute information. Fig. 3 is the real-time point cloud data of the current environment acquired by the multi-line lidar. Fig. 4 is the point cloud data after point cloud segmentation, including corner points, plane points, and ground points. Obstacle avoidance during navigation is realized with the two-dimensional laser data shown in fig. 5.
With reference to fig. 1-5, the embodiments of the present invention are as follows:
Step 1: calibrating the external parameters of the sensors. An inertial navigation sensor is installed on the base of the robot and serves as the origin of the base coordinate system. A RoboSense 16-line laser radar is installed 1 m above the inertial navigation sensor; the industrial personal computer is an NVIDIA TX2 development board, provided with the Ubuntu 16.04 operating system + ROS Kinetic + PCL + GTSAM as the software platform. The relative position between the radar coordinate system and the base coordinate system is calculated by manually measuring the relative position between the laser radar and the inertial navigation sensor, and the TF transform between the radar coordinate system and the base coordinate system is published as a topic using the communication mechanism of the ROS (robot operating system).
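A minimal sketch of this step, assuming the ROS1 Python API; the frame names and offsets below are illustrative:

```python
# Publish the manually measured lidar-to-base extrinsic as a static TF.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("lidar_extrinsic_publisher")
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "base_link"      # base coordinate system (IMU origin)
t.child_frame_id = "rslidar"         # lidar coordinate system
t.transform.translation.x = 0.0      # manually measured offsets (example values)
t.transform.translation.y = 0.0
t.transform.translation.z = 1.0      # lidar mounted 1 m above the IMU
t.transform.rotation.w = 1.0         # identity rotation (lidar assumed level)

broadcaster.sendTransform(t)
rospy.spin()
```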
Step 2: 3D mapping. The robot is controlled to move in the working environment with a Logitech remote control handle so that the laser radar can scan in all directions. During the robot's motion, the IMU data of the inertial navigation sensor and the point cloud data of the laser radar are used as input data, the LeGO-LOAM 3D SLAM algorithm is run to build a point cloud map of the experimental environment, and a point cloud compression algorithm is used to compress and downsample the point cloud. After the robot completes the map construction, the point cloud map is saved with the PCL point cloud tools. The point cloud within the height range of 1-2 m is extracted from the point cloud map and projected onto the horizontal plane of the origin of the robot chassis coordinate system to form a two-dimensional grid map.
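A minimal sketch of the downsampling and 2D projection described above, using plain numpy; the voxel size, grid resolution, and map extent are assumptions:

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Keep one point per voxel of side `voxel` metres (points: (N,3) array)."""
    keys = np.floor(points[:, :3] / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def project_to_grid(points, z_min=1.0, z_max=2.0, resolution=0.05, size=40.0):
    """Mark grid cells occupied by points whose height lies in [z_min, z_max]."""
    band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    cells = int(size / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)
    ij = np.floor((band[:, :2] + size / 2) / resolution).astype(int)
    ij = ij[(ij >= 0).all(axis=1) & (ij < cells).all(axis=1)]
    grid[ij[:, 1], ij[:, 0]] = 100   # occupied, ROS occupancy-grid convention
    return grid
```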
Step 3: extracting and matching the feature points.
Step 3-1: point cloud preprocessing. During robot navigation, the surrounding environment is scanned in real time with the multi-line radar, and the received point cloud data are preprocessed. A ROS topic is created to receive the laser point cloud data and convert them into the ROS format. The point cloud data received during one full revolution of the laser radar are taken as one frame, with the first point received in the frame as the starting point and the last point received as the end point. The horizontal angles of the starting point and the end point are calculated with respect to the negative x-axis; if the difference between the starting angle and the end angle is greater than 3π, 2π is subtracted from the end angle, and if the difference is smaller than π, 2π is added to the end angle. In this way the angle difference between the starting point and the end point is kept between π and 3π. All point cloud data are projected into a two-dimensional array, where the first index of the array represents the line (ring) a point belongs to, the second index represents the position of the point on that line, and the value stored in the array is the distance from the point to the laser radar. Points that are too far away or too close are removed according to their distance. The laser radar used in the invention has 16 laser beams, and since the ground on which the robot runs can be regarded as a horizontal plane, only the lower 8 lines can scan the ground. All points of these 8 laser beams are traversed, and for the current point the point adjacent in the first index with the same second index value is found in the two-dimensional array, i.e. the point at the same horizontal angle on the adjacent laser beam. The angle between the current point and this adjacent point is calculated, and if the angle is less than 10 degrees, both points are marked as ground points. The points other than ground points are then labeled with a breadth-first search: a core point is taken, the distance between the core point and each point in its eight-neighborhood is calculated, and a point is considered to belong to the same cluster if the distance is smaller than a threshold and not to belong if it is larger. The eight-neighborhood points in the same cluster are in turn taken as core points, and the distances to the not-yet-screened points in their eight-neighborhoods are calculated in the same way, and so on, until none of the points in the eight-neighborhoods of the screened points belong to the cluster, which ends the search for one cluster. If the number of points in a cluster is greater than 30, the points in the cluster are marked as usable cluster points; if it is smaller than 30, all points in the cluster are marked as discrete points.
Noise points in the laser point cloud data are thus effectively removed by the clustering method, and the processed point cloud data are published as a ROS topic.
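A sketch of this preprocessing, assuming a 16-beam sensor and a LeGO-LOAM style ground test based on the inclination of the segment joining vertically adjacent points; the resolution, thresholds, and ring ordering are illustrative:

```python
import numpy as np

N_SCAN, HORIZON = 16, 1800           # rings x points per ring (0.2 deg steps)
range_img = np.full((N_SCAN, HORIZON), np.nan)
xyz_img = np.full((N_SCAN, HORIZON, 3), np.nan)

def fill_range_image(points, ring, col):
    """points: (N,3) xyz; ring/col: per-point row and column indices."""
    range_img[ring, col] = np.linalg.norm(points, axis=1)
    xyz_img[ring, col] = points

def mark_ground(angle_thresh_deg=10.0, ground_rows=8):
    """Mark ground points among the lower beams of the range image."""
    ground = np.zeros((N_SCAN, HORIZON), dtype=bool)
    for r in range(ground_rows - 1):         # only the lower beams can see ground
        d = xyz_img[r + 1] - xyz_img[r]      # vector between vertically adjacent points
        incl = np.degrees(np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1])))
        flat = np.abs(incl) < angle_thresh_deg
        ground[r, flat] = True
        ground[r + 1, flat] = True
    return ground
```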
Step 3-2: extracting the feature points. A ROS node is created to receive the point cloud topic published in step 3-1. The processed point cloud data are interpolated using the fused data of the inertial navigation sensor and the wheel encoder to remove motion distortion. Each point except the ground points is traversed, and the deviation of each point from its five neighbors on either side is calculated as the curvature value c, where the calculation formula is as follows:
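Following the standard LOAM-style smoothness definition (assumed here), this can be written as

$$ c_i = \frac{1}{|S| \cdot r_i} \left| \sum_{j \in S,\, j \neq i} \left( r_j - r_i \right) \right| $$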
wherein r_i represents the distance value of the i-th point and S represents the number of points; S is 10 in the invention. One revolution of point cloud is equally divided into 6 sector areas, the points in each area are sorted according to their curvature, points with a curvature value larger than a threshold are regarded as corner points, and points with a curvature value smaller than the threshold are regarded as plane points. The 2 corner points with the largest curvature are marked as large corner points, and the 10 plane points with the smallest curvature are marked as large plane points. The pose transformation between two frames of point clouds is then found through point-to-point matching: the large corner point set of the current frame is matched with the corner point set of the previous frame, and the large plane point set of the current frame is matched with the plane point set of the previous frame.
Step 3-3: frame matching. In order to further improve the matching accuracy, point-to-line and point-to-plane matching methods are used. The correspondence between a corner point and a corner line is established by first finding the two points j and l in the previous frame of point cloud data that are nearest to corner point i, and then judging whether both of them are corner points; j and l are necessarily points on different lines, because one line has at most one edge point within a given sector. A plane point corresponds to a plane in the previous frame of point cloud: the closest point in the previous frame is found, then another point on the same scan line, and then a point on an adjacent scan line, which ensures that the three points are not on one line and therefore form a plane. With the point-to-line and point-to-plane correspondences established, the point-to-line and point-to-plane distances are then required. First the distance d_ε from the point to the line segment is obtained; the formula is as follows:
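A form consistent with the LOAM point-to-line distance (notation assumed: X_{k+1,i} is the corner point in the current frame, X_{k,j} and X_{k,l} are the two nearest corner points on different lines of the previous frame) is

$$ d_{\varepsilon} = \frac{\left\| (X_{k+1,i} - X_{k,j}) \times (X_{k+1,i} - X_{k,l}) \right\|}{\left\| X_{k,j} - X_{k,l} \right\|} $$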
wherein X_{k+1,i} represents the i-th point of the (k+1)-th frame of point cloud, and X_{k+1,i} − X_{k,j} represents the vector from the i-th point of the (k+1)-th frame to the j-th point of the k-th frame. The numerator of the formula is the cross product of two such vectors, whose norm corresponds to the area of the triangle formed by them; the denominator is the norm of the vector between the two previous-frame points, corresponding to the length of the base of the triangle. Dividing the area of the triangle by that edge gives the height from the corresponding vertex, i.e. the distance from the corner point to the corner line.
Then the distance d_H from the plane point to the corresponding plane is obtained; the formula is as follows:
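A form consistent with the LOAM point-to-plane distance (notation assumed: X_{k,j}, X_{k,l}, X_{k,m} are the three plane points found in the previous frame) is

$$ d_{H} = \frac{\left| (X_{k+1,i} - X_{k,j}) \cdot \left[ (X_{k,j} - X_{k,l}) \times (X_{k,j} - X_{k,m}) \right] \right|}{\left\| (X_{k,j} - X_{k,l}) \times (X_{k,j} - X_{k,m}) \right\|} $$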
the upper part of the molecule in the formula is calculated to obtain a vector which represents the height of the cube, the lower part of the molecule is calculated to obtain a vector, the cross multiplication of the two vectors is performed, then the modulus is taken to represent the area of the triangle, and the volume of the cube is represented by the multiplication of the two vectors. And the denominator represents the noodles with triangular bottom surfaces of the cube, and the obtained height is the distance from the plane point to the plane. The movement of the lidar is assumed to be uniform. After the conversion matrix of the end point of one frame data relative to the starting point is calculated, interpolation can be carried out on any point in the frame data according to the time relative to the starting point when the conversion matrix is obtained, and the pose of each point is obtained. The formula for interpolation is as follows:
wherein t_i represents the time of the i-th IMU datum within a frame of laser point cloud data, and T_i represents the pose of the laser odometer when the i-th IMU datum is received after the k-th frame of laser point cloud data has been received. A rotation matrix R and a translation T are used to represent the correspondence between a point in the current frame data and a point in the previous frame data:
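Under the uniform-motion assumption, with t_s and t_e the start and end times of the frame and T_e the transform of the frame end point relative to the starting point (notation assumed), the interpolation and the point correspondence can be sketched as

$$ T_i = \frac{t_i - t_s}{t_e - t_s}\, T_e, \qquad X_{k,i} = R\, X_{k+1,i} + T $$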
The point-to-line and point-to-plane distances are then differentiated with respect to the rotation matrix, and an error function for optimization is obtained:
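One way to write this, assuming each residual is a point-to-line or point-to-plane distance and a Gauss-Newton style step for the Newton iteration, is

$$ f(R, T) = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix}, \qquad \begin{bmatrix} \Delta R \\ \Delta T \end{bmatrix} = -\left( J^{\top} J \right)^{-1} J^{\top} f, \quad J = \frac{\partial f}{\partial (R, T)} $$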
and each line of f represents a characteristic point, and the position and posture information obtained by matching point cloud frames through a Newton iteration method is optimized to obtain laser odometer data and is issued in an ROS topic mode.
Step 3-4: point cloud map matching. To reduce the computational load of matching, the current point cloud is downsampled. The current point cloud is matched against the point cloud map with a kd-tree nearest neighbor search algorithm, and the map point clouds with a matching distance smaller than 0.5 are stored as historical key frame point clouds for matching. The relative transformation between the historical key frame point cloud and the current point cloud is calculated with an ICP matching algorithm to obtain the final pose transformation. The laser odometer calculated in step 3-3 is taken as the optimization target, the pose constraints obtained from the matching are added to the pose graph with GTSAM, and the optimization target is optimized with a graph optimization algorithm, finally yielding accurate robot pose data.
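A simplified sketch of this map-matching step, using scipy's kd-tree and a plain point-to-point ICP in place of the full pipeline; the iteration count and the point-to-point formulation are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Estimate R, t aligning the scan `source` (N,3) to the map `target` (M,3)."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)          # nearest map point for each scan point
        tgt = target[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:     # guard against reflections
            Vt[2] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t                           # relative transform used as a pose-graph constraint
```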
Step 4: navigation is performed with the navigation toolkit, which requires two kinds of input data: a two-dimensional global map and two-dimensional laser data. The two-dimensional global environment map is obtained in step 2, and the two-dimensional laser data are formed by projecting the point cloud data of the 5th to 10th laser beams of the multi-line laser radar onto a two-dimensional horizontal plane. The robot pose data obtained by the adaptive Monte Carlo localization algorithm are replaced with the robot pose calculated in step 3-4. The global driving path is planned by the move_base algorithm, a local cost map is formed after inflation of the two-dimensional laser data, and the local path is planned with the teb algorithm, realizing autonomous navigation of the robot.
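A sketch of the projection to two-dimensional laser data, assuming the range-image layout from step 3-1; the ring indexing and angular bin count are illustrative:

```python
import numpy as np

def rings_to_scan(xyz_img, first_ring=4, last_ring=9, n_bins=720):
    """Flatten points from the selected rings into a LaserScan-like range array,
    keeping the closest return per angular bin."""
    pts = xyz_img[first_ring:last_ring + 1].reshape(-1, 3)
    pts = pts[~np.isnan(pts).any(axis=1)]
    ranges = np.hypot(pts[:, 0], pts[:, 1])           # planar range
    angles = np.arctan2(pts[:, 1], pts[:, 0])         # bearing in the base plane
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    scan = np.full(n_bins, np.inf)
    np.minimum.at(scan, bins, ranges)                 # keep nearest obstacle per bin
    return scan
```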
Claims (6)
1. A positioning method based on 3D point cloud registration comprises the following specific steps:
step 1: calibrating external parameters of the sensor; installing an inertial navigation sensor on a base of the robot, and taking the position as an origin of a base coordinate system; calculating the relative position relationship between a radar coordinate system and a base coordinate system by manually measuring the relative position relationship between the laser radar and the inertial navigation sensor; a communication mechanism of a robot operating system ROS is used on an industrial personal computer to issue TF conversion relation topics of a radar coordinate system and a base coordinate system;
step 2: 3D mapping; controlling the robot to move in a working environment; acquiring a 3D point cloud map of a robot working environment by using a 3DSLAM technology, and performing downsampling on point clouds, wherein the 3D point cloud map is stored in an octree data format; extracting point clouds in a set height range of the point cloud map, and projecting the point clouds onto a horizontal plane of an origin of a coordinate system of a chassis of the robot to form a two-dimensional grid map;
step 3: extracting and matching the feature points;
step 3-1: point cloud pretreatment; acquiring real-time point cloud data by using a 3D laser radar; selecting a starting point and a final point, keeping the angle of the two points between pi and 3 pi, and controlling the angle of a frame of point cloud within the range of the starting point and the final point; storing the received point cloud information into a two-dimensional array form, wherein the first index of the array represents the number of lines where points are located, the second index represents the number of points on the lines, and the data stored in the array represents the distance between the points and the laser radar; removing point clouds with too far or too close distances according to the distance of the point clouds; calculating a horizontal included angle between each point and the previous point in a laser beam below the horizontal direction in the multi-line laser radar, and marking two points of which the horizontal included angle between the two points is smaller than a certain threshold value as ground points; clustering the point clouds meeting the clustering quantity into a plurality of point cloud sets by using a point cloud clustering algorithm, and rejecting the point clouds which cannot form clusters as noise points;
step 3-2: extracting feature points; calculating curvature values of all clustering points except for the ground points; the calculation formula is as follows:
wherein r_i represents the distance value of the i-th point and S represents the number of points; dividing a frame of point cloud into multiple regions according to orientation, sorting the points in each region according to their curvature, marking points with curvature values larger than a certain threshold as corner points and points with curvature values smaller than the threshold as plane points; the corner points with the largest curvature and the plane points with the smallest curvature are further marked as large corner points and large plane points;
step 3-3: frame matching; performing matching calculation between the current point cloud and the corner points and plane points of the previous frame of point cloud using a point cloud matching algorithm; performing point-to-line matching between the large corner points of the current frame and the corner points of the previous frame to obtain the change of the pitch angle, roll angle and z axis of the robot pose; performing point-to-plane matching between the large plane points of the current frame and the plane points of the previous frame, with the pitch angle, roll angle and z-axis change as constraints, to obtain the change of x, y and yaw angle of the robot pose; assuming that the movement of the laser radar is uniform, after the transformation matrix of the end point of a frame relative to its starting point is calculated, any point in the frame can be interpolated according to the time, relative to the starting point, at which it was acquired, giving the pose at each point; the formula for interpolation is as follows:
wherein t_i represents the time of the i-th IMU datum within a frame of laser point cloud data, and the corresponding pose represents the pose of the laser odometer when the i-th IMU datum is received after the k-th frame of laser point cloud data has been received; a rotation matrix R and a translation T are used to represent the correspondence between a point in the current frame data and a point in the previous frame data:
the point-to-line and point-to-plane distances are differentiated with respect to the rotation matrix, and an error function for optimization is obtained:
each row of f represents a feature point; the pose information obtained by matching the point cloud frames is optimized by a Newton iteration method to obtain laser odometer data, and the laser odometer data are published as a ROS topic;
step 3-4: point cloud map matching; matching and searching the current point cloud against the point cloud map established in step 2 using a kd-tree nearest neighbor search algorithm; storing the map point clouds meeting the threshold condition as historical key frame point clouds for matching; calculating the relative transformation between the historical key frame point cloud and the current point cloud with a point cloud matching algorithm to obtain the final pose transformation; taking the laser odometer calculated in step 3-3 as the optimization target, and adding the pose constraints obtained from the matching to the pose graph to optimize the target, finally obtaining accurate robot pose data;
step 4: converting the point cloud data into two-dimensional laser data and, together with the robot pose information obtained in step 3-4, using them as input data, and using the 2D grid map from step 2 as the global navigation map; and performing path planning with the ROS navigation algorithm to realize autonomous navigation of the robot.
2. The 3D point cloud registration-based localization method of claim 1, wherein: in step 1, an NVIDIA TX2 development board is used as the hardware platform, and the development board is provided with the Ubuntu 16.04 operating system + ROS Kinetic + PCL + GTSAM as the software platform.
3. The 3D point cloud registration-based localization method of claim 1, wherein: the set height range for extracting the point cloud from the point cloud map in step 2 is 1-2 meters; the algorithm for constructing the 3D point cloud map is the LeGO-LOAM SLAM algorithm.
4. The 3D point cloud registration-based localization method of claim 1, wherein: the 3D laser radar in the step 3-1 has 16 laser beams, the working ground of the robot during operation can be regarded as a horizontal plane, and the laser radar is horizontally arranged, so that only 8 lines from bottom to top can possibly scan the ground; judging that the angle threshold of the ground point cloud is 5 degrees; and judging whether the clustered point clouds meet the requirement, wherein the threshold is 40.
5. The 3D point cloud registration-based localization method of claim 1, wherein: the number of the fan-shaped areas for dividing the circle of point cloud in the step 3-2 is 6; the number of large angular points extracted from the point cloud is 3, and the number of large plane points is 15.
6. The 3D point cloud registration-based localization method of claim 1, wherein: and 4, projecting the multi-line laser radar to a two-dimensional horizontal plane to form two-dimensional laser data, wherein the number of the adopted laser beams is from 5 th to 10 th.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111109705.3A CN114004869A (en) | 2021-09-18 | 2021-09-18 | Positioning method based on 3D point cloud registration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111109705.3A CN114004869A (en) | 2021-09-18 | 2021-09-18 | Positioning method based on 3D point cloud registration |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114004869A true CN114004869A (en) | 2022-02-01 |
Family
ID=79921994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111109705.3A Pending CN114004869A (en) | 2021-09-18 | 2021-09-18 | Positioning method based on 3D point cloud registration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114004869A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110221603A (en) * | 2019-05-13 | 2019-09-10 | 浙江大学 | A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud |
CN111929699A (en) * | 2020-07-21 | 2020-11-13 | 北京建筑大学 | Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system |
CN112862894A (en) * | 2021-04-12 | 2021-05-28 | 中国科学技术大学 | Robot three-dimensional point cloud map construction and expansion method |
CN113269837A (en) * | 2021-04-27 | 2021-08-17 | 西安交通大学 | Positioning navigation method suitable for complex three-dimensional environment |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114608600A (en) * | 2022-03-21 | 2022-06-10 | 江苏盛海智能科技有限公司 | Automatic driving system building method and terminal |
CN114758095B (en) * | 2022-04-11 | 2024-08-27 | 重庆大学 | Self-adaptive scanning method for surface of complex curved surface part |
CN114758095A (en) * | 2022-04-11 | 2022-07-15 | 重庆大学 | Self-adaptive scanning method for surface of complex curved surface part |
CN114593738B (en) * | 2022-05-09 | 2022-07-26 | 山东大学 | Robot global positioning method and system based on octree search reflective column |
CN114593738A (en) * | 2022-05-09 | 2022-06-07 | 山东大学 | Robot global positioning method and system based on octree search reflective column |
CN115512054A (en) * | 2022-09-06 | 2022-12-23 | 武汉大学 | Method for constructing three-dimensional space-time continuous point cloud map |
CN115328163A (en) * | 2022-09-16 | 2022-11-11 | 西南交通大学 | Speed and precision optimization method for inspection robot radar odometer |
CN115328163B (en) * | 2022-09-16 | 2023-03-28 | 西南交通大学 | Speed and precision optimization method for inspection robot radar odometer |
CN115509230A (en) * | 2022-09-28 | 2022-12-23 | 北京星航机电装备有限公司 | ROS-based outdoor robot autonomous navigation method and device and robot |
CN116359938B (en) * | 2023-05-31 | 2023-08-25 | 未来机器人(深圳)有限公司 | Object detection method, device and carrying device |
CN116359938A (en) * | 2023-05-31 | 2023-06-30 | 未来机器人(深圳)有限公司 | Object detection method, device and carrying device |
CN117761704A (en) * | 2023-12-07 | 2024-03-26 | 上海交通大学 | Method and system for estimating relative positions of multiple robots |
CN117649495A (en) * | 2024-01-30 | 2024-03-05 | 山东大学 | Indoor three-dimensional point cloud map generation method and system based on point cloud descriptor matching |
CN117649495B (en) * | 2024-01-30 | 2024-05-28 | 山东大学 | Indoor three-dimensional point cloud map generation method and system based on point cloud descriptor matching |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114004869A (en) | Positioning method based on 3D point cloud registration | |
CN111563442B (en) | Slam method and system for fusing point cloud and camera image data based on laser radar | |
CN111429574B (en) | Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion | |
CN112070770B (en) | High-precision three-dimensional map and two-dimensional grid map synchronous construction method | |
CN113865580B (en) | Method and device for constructing map, electronic equipment and computer readable storage medium | |
CN110726409B (en) | Map fusion method based on laser SLAM and visual SLAM | |
CN109186606B (en) | Robot composition and navigation method based on SLAM and image information | |
CN113345008B (en) | Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation | |
CN113781582A (en) | Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration | |
CN112066982B (en) | Industrial mobile robot positioning method in high dynamic environment | |
CN113587933A (en) | Indoor mobile robot positioning method based on branch-and-bound algorithm | |
CN114526745A (en) | Drawing establishing method and system for tightly-coupled laser radar and inertial odometer | |
CN111862214B (en) | Computer equipment positioning method, device, computer equipment and storage medium | |
CN113778096B (en) | Positioning and model building method and system for indoor robot | |
CN114565726A (en) | Simultaneous positioning and mapping method in unstructured dynamic environment | |
CN116380039A (en) | Mobile robot navigation system based on solid-state laser radar and point cloud map | |
CN116977628A (en) | SLAM method and system applied to dynamic environment and based on multi-mode semantic framework | |
CN113554705A (en) | Robust positioning method for laser radar in changing scene | |
CN117029870A (en) | Laser odometer based on road surface point cloud | |
CN116659500A (en) | Mobile robot positioning method and system based on laser radar scanning information | |
CN116679307A (en) | Urban rail transit inspection robot positioning method based on three-dimensional laser radar | |
CN114419155B (en) | Visual image construction method based on laser radar assistance | |
CN117537803B (en) | Robot inspection semantic-topological map construction method, system, equipment and medium | |
CN117232514A (en) | Indoor path planning method based on two-dimensional laser radar and binocular camera | |
Jež | 3D mapping and localization using leveled map accelerated ICP |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |