CN113985429B - Unmanned aerial vehicle environment scanning and reconstructing method based on three-dimensional laser radar
- Publication number: CN113985429B
- Application number: CN202111114995.0A
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
Abstract
The invention relates to a method for scanning and reconstructing an outdoor environment with a three-dimensional laser radar carried on board a quadrotor unmanned aerial vehicle, and aims to provide an environment scanning and reconstruction method that enables the quadrotor to perform real-time autonomous positioning and mapping with the airborne three-dimensional laser radar. The disclosed environment scanning and reconstruction method based on a three-dimensional laser radar is implemented on a quadrotor unmanned aerial vehicle equipped with an onboard three-dimensional laser radar. The invention is mainly applied to autonomous positioning of quadrotor unmanned aerial vehicles.
Description
Technical Field
The invention relates to a method for scanning and reconstructing an outdoor environment with a three-dimensional laser radar carried on a quadrotor unmanned aerial vehicle, and in particular to an online environment scanning and reconstruction method suited to the limited onboard computing resources of a quadrotor unmanned aerial vehicle.
Background
The quadrotor unmanned aerial vehicle is an aircraft with a multi-rotor structure. Owing to its convenience, low cost and strong maneuverability, it is widely used in scientific research, civil and military applications. Complex and changing flight environments now require the quadrotor to fly autonomously, and an autonomous quadrotor system must solve the problems of positioning and mapping. In environments without a motion capture system or with GPS (Global Positioning System) signal rejection, the unmanned aerial vehicle must localize itself and perceive the environment using only onboard sensors. The problem of an unmanned aerial vehicle performing simultaneous localization and mapping with onboard sensors is the SLAM (Simultaneous Localization and Mapping) problem. Benefiting from the way a laser radar collects data, such a system still works normally in texture-less and low-illumination environments, for example inside buildings or at night, and provides measurements with a high signal-to-noise ratio; adopting a three-dimensional laser radar SLAM algorithm for autonomous positioning and map construction of the quadrotor unmanned aerial vehicle is therefore one of the current research hotspots.
In 2014, the Ji Zhang team at Carnegie Mellon University proposed a lidar SLAM algorithm named LOAM (conference: Robotics: Science and Systems Conference; authors: Zhang J, Singh S; publication date: July 2014; article title: LOAM: Lidar odometry and mapping in real-time [C]; pages 1-9). The algorithm achieves low drift and low computational complexity without requiring high-accuracy ranging or inertial measurement data. In 2017, the same team carried out a quadrotor three-dimensional point cloud mapping test based on an airborne two-dimensional laser radar (journal: Autonomous Robots; authors: Zhang J, Singh S; publication year: 2017; article title: Low-drift and real-time lidar odometry and mapping; pages 401-416); the researchers used an octorotor carrying a two-dimensional laser radar and an inertial measurement unit to build a three-dimensional point cloud map of a small bridge. In 2015, the team proposed the V-LOAM algorithm, whose sensors are a nodding two-dimensional laser radar and a monocular camera. V-LOAM combines the advantages of the two sensors and overcomes their individual shortcomings, further improving performance over SLAM algorithms based on a single sensor (conference: 2015 IEEE International Conference on Robotics and Automation; authors: Zhang J, Singh S; publication year: 2015; article title: Visual-lidar odometry and mapping: low-drift, robust, and fast [C]; pages 2174-2181). In 2017, the team proposed a novel SLAM algorithm for autonomous positioning and mapping using a three-dimensional laser radar, a monocular camera and an IMU (conference: 2017 IEEE International Conference on Robotics and Automation; authors: Zhang J, Singh S; publication date: May 2017; article title: Enabling aggressive motion estimation at low-drift and accurate mapping in real-time [C]; pages 5051-5058). This algorithm performs high-frequency sensor pose estimation while constructing a dense and accurate three-dimensional map under a variety of illumination conditions and environment structures.
In 2017, the Vijay Kumar team at the University of Pennsylvania proposed a SLAM algorithm with a three-dimensional lidar, multiple cameras and an IMU as the primary sensors (journal: IEEE Robotics and Automation Letters; authors: Ozaslan T, Loianno G, Keller J, et al.; publication year: 2017; article title: Autonomous navigation and mapping for inspection of penstocks and tunnels with MAVs; pages 1740-1747). A quadrotor unmanned aerial vehicle carrying the laser radar and cameras was used for environment scanning and mapping in a tunnel environment.
In 2016, researchers from the National University of Singapore proposed a navigation system that allows a small quadrotor unmanned aerial vehicle to navigate autonomously with a two-dimensional lidar in a GPS-denied wooded environment (journal: Journal of Intelligent & Robotic Systems; authors: Cui J, Lai S, Dong X, et al.; publication year: 2016; article title: Autonomous navigation of UAV in foliage environment; pages 259-276). The system comprises double-loop control of the unmanned aerial vehicle, pose estimation during navigation, and online path planning.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide an environment scanning and reconstruction method based on the airborne three-dimensional laser radar of a quadrotor unmanned aerial vehicle, so that the quadrotor can perform real-time autonomous positioning and mapping with the airborne three-dimensional laser radar. The technical scheme adopted by the invention is an environment scanning and reconstruction method based on a three-dimensional laser radar, implemented on a quadrotor unmanned aerial vehicle equipped with an airborne three-dimensional laser radar. The method performs point cloud segmentation on the real-time three-dimensional point cloud acquired by the laser radar, extracts feature points from the point cloud for feature matching, performs pose graph optimization and loop closure detection to obtain high-precision three-dimensional positioning data and a three-dimensional point cloud map, and finally performs environment reconstruction with the qPCV (Portion of Visible Sky) method.
The method comprises the following specific steps:
First, one rotation of the laser transmitter in the laser radar is defined as a scanning period, and the point cloud data obtained during one scanning period form one frame of point cloud, denoted the k-th frame, where k is a positive integer. All laser points in one frame are denoted P, and P_k denotes all laser points in the k-th frame, P_k = {p_k1, p_k2, ..., p_kn}, where p_ki is one laser point of the point cloud P_k at time k. P_k is projected onto a two-dimensional image with resolution M x N, where M is determined by the horizontal resolution of the radar and N by its vertical resolution. After re-projection, each laser point of the three-dimensional point cloud becomes a pixel of the two-dimensional image, and each pixel carries coordinate information and depth information: the coordinate p_ki = [x_ki, y_ki, z_ki]^T gives the position of the laser point in the point cloud at time k in a coordinate system whose origin is the center of the laser radar, and the depth information r_ki is the Euclidean distance from p_ki to the laser radar center at time k;
The pixels estimated to be ground are then marked in the depth image. An efficient partitioning of the data is achieved by representing the x-y plane as a circle of radius R = infinity and dividing it into several independent sectors; the parameter Δα describes the angle covered by each sector, giving M sectors S_i. The sector index to which a point p_ki maps is denoted sector(p_ki) and is computed from the angle atan2(y_ki, x_ki), i.e. the angle between the positive x-axis and the point (x_ki, y_ki), wrapped into [0, 2π), divided by Δα (equation (1)). The set of all points mapped to the same sector S_s is denoted P_s:

P_s = { p_i ∈ P | sector(p_i) = s }   (2)
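As an illustration of how the sector index of equations (1)-(2) can be computed, the following sketch maps a point's (x, y) coordinates to a sector number; the floor-based binning and the helper name are assumptions for illustration, not the patent's exact formula.

```python
import numpy as np

def sector_index(x, y, delta_alpha):
    # Angle between the positive x-axis and the point (x, y), wrapped into [0, 2*pi)
    angle = np.arctan2(y, x)
    if angle < 0.0:
        angle += 2.0 * np.pi
    # Bin the wrapped angle by the sector width delta_alpha (radians)
    return int(angle // delta_alpha)
```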
Next, within each sector the points are distributed, according to the distance from the point (x_ki, y_ki, z_ki) to (0, 0, z_ki), into a number of bins covering different distance ranges, j = 1, 2, ..., B, where B is the maximum number of bins and each bin has a minimum and a maximum covered distance. If the distance of a point p_ki ∈ P_s lies between the minimum and maximum distance of a bin, the point falls into that bin, and the set of all points falling into the bin is recorded. In this way the three-dimensional point set of each sector is mapped onto a two-dimensional point set (equation (3));
For each non-empty bin a representative point is computed; the point with the lowest z coordinate is chosen as the representative. The representative points are then fitted with straight lines by an incremental algorithm, and the slope of each fitted line is used to judge whether the line can represent the ground. For every point, the line closest to it within its sector is found; if that line is judged to represent the ground and the distance from the point to the line is below a threshold, the point is marked as ground, otherwise it is left unmarked;
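A compact sketch of this ground-labelling step for a single sector follows, assuming the bins are given by planar-range edges and that one least-squares line is fitted to the representative points rather than the incremental fit of the text; the threshold values are illustrative only.

```python
import numpy as np

def label_ground_in_sector(points, bin_edges, slope_thresh_deg=10.0, dist_thresh=0.2):
    """points: (n, 3) array of [x, y, z] belonging to one sector."""
    rng = np.hypot(points[:, 0], points[:, 1])          # distance from (x, y, z) to (0, 0, z)
    reps = []                                            # one representative point per non-empty bin
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (rng >= lo) & (rng < hi)
        if not mask.any():
            continue
        i = np.argmin(points[mask, 2])                   # lowest z: most likely on the ground
        reps.append((rng[mask][i], points[mask, 2][i]))
    if len(reps) < 2:
        return np.zeros(len(points), dtype=bool)         # not enough points to fit a line
    reps = np.asarray(reps)
    a, b = np.polyfit(reps[:, 0], reps[:, 1], 1)         # z = a * range + b
    if np.degrees(np.arctan(abs(a))) > slope_thresh_deg:
        return np.zeros(len(points), dtype=bool)         # line too steep to represent the ground
    resid = np.abs(points[:, 2] - (a * rng + b))
    return resid < dist_thresh                           # points close to the fitted ground line
```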
The non-ground points in the two-dimensional depth image are then segmented and marked. Let the scanned laser points A and B be adjacent laser points obtained from adjacent laser beams OA and OB, where O is the center point of the laser radar. Consider a coordinate system with origin O whose y-axis coincides with the longer of the two beams OA and OB; here OA is assumed to be the longer one. The angle β is defined as the angle between the y-axis and the straight line through A and B. Since the two-dimensional depth image carries the depth of each point, i.e. the distance from each laser point to the laser radar center, the distances from A and B to the origin O are known and are denoted r_A and r_B respectively, giving the calculation formula for β:

β = atan2( r_B · sin α, r_A − r_B · cos α )

where α is the angle between two adjacent laser beams of the laser radar; it is determined by the resolution of the laser radar and is a fixed value;
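Under the geometry just described (OA the longer beam, the y-axis along OA), the angle β can be computed as in the following sketch; the helper name is illustrative.

```python
import numpy as np

def beta_angle(r_a, r_b, alpha):
    # r_a: range of the longer beam OA, r_b: range of OB, alpha: angle between the beams
    # beta is the angle between the y-axis (along OA) and the line through A and B
    return np.arctan2(r_b * np.sin(alpha), r_a - r_b * np.cos(alpha))
```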
Point cloud segmentation is then performed on the two-dimensional depth image obtained above. If two laser points are adjacent in the depth image and the angle β between them is larger than a threshold θ, they are regarded as originating from the same object. Computation starts from the laser point in the upper-left corner and proceeds from left to right and from top to bottom. For each unlabeled point a breadth-first search is performed, and all laser points originating from the same object as that point are given the same label. When the breadth-first search of the current point is finished, the next unlabeled point is found in order and the operation is repeated, which guarantees that the whole two-dimensional depth image is labeled in O(n) time;
The segmented data is further processed as follows: 1) If the number of laser points under a certain label exceeds 30, the label is valid; 2) If the number of laser points under a certain label is less than 30 and more than or equal to 5, counting the number of laser points in the vertical direction; 3) If the number of laser points in the vertical direction is more than 3, the label is effective; 4) The rest of the tags are marked as unusable classes, i.e. the tags are not valid;
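A sketch of the breadth-first labelling of the depth image is given below; depth values of 0 are treated as missing returns, columns wrap around because the scan covers 360°, and the threshold value θ is illustrative since the patent does not fix it. The label-validity rules 1)-4) above can then be applied to the returned label image.

```python
from collections import deque
import numpy as np

def segment_depth_image(depth, alpha_h, alpha_v, theta=np.radians(60.0)):
    """depth: (rows, cols) range image; alpha_h/alpha_v: horizontal/vertical beam spacing."""
    rows, cols = depth.shape
    labels = np.zeros((rows, cols), dtype=int)           # 0 = unlabeled
    next_label = 1

    def beta(r1, r2, alpha):                             # the angle beta defined above
        ra, rb = max(r1, r2), min(r1, r2)
        return np.arctan2(rb * np.sin(alpha), ra - rb * np.cos(alpha))

    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0, c0] or depth[r0, c0] <= 0:
                continue
            labels[r0, c0] = next_label
            queue = deque([(r0, c0)])
            while queue:                                 # breadth-first growth of one segment
                r, c = queue.popleft()
                for dr, dc, a in ((1, 0, alpha_v), (-1, 0, alpha_v),
                                  (0, 1, alpha_h), (0, -1, alpha_h)):
                    rr, cc = r + dr, (c + dc) % cols     # wrap columns around the 360-degree scan
                    if 0 <= rr < rows and not labels[rr, cc] and depth[rr, cc] > 0 \
                            and beta(depth[r, c], depth[rr, cc], a) > theta:
                        labels[rr, cc] = next_label
                        queue.append((rr, cc))
            next_label += 1
    return labels
```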
After this processing, each laser point carries three pieces of information: 1) the label value after segmentation, denoted L_ki; 2) its row and column numbers in the two-dimensional depth image; 3) the depth information r_ki. Feature extraction is then performed on these points;
The first step of feature extraction is to define the smoothness c of each laser point in the two-dimensional depth image, where the set S consists of the point p_ki and 10 neighbouring points on the same row, five on each side, so |S| = 10. A threshold c_th is set: points with c > c_th are edge feature points and points with c < c_th are planar feature points. To extract features in all directions, the whole two-dimensional depth image is divided into several sub-images along the horizontal direction. For each sub-image, the smoothness of every point on every row is computed and the points are sorted by it. Then, from the non-ground points of each row, the points with the largest c values are selected as edge feature points, forming the larger edge feature point set; from the ground points or validly labeled laser points of each row, the points with the smallest c values are selected as planar feature points, forming the larger planar feature point set. To facilitate the subsequent feature matching, a smaller edge feature point set F_e is selected from the larger edge set by taking the edge feature points with the largest c values, and a smaller planar feature point set F_p is selected by taking the planar feature points with the smallest c values that belong to the ground points. Obviously F_e is a subset of the larger edge feature point set and F_p a subset of the larger planar feature point set;
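The sketch below computes a LOAM-style smoothness for every point of one row and indicates how the largest and smallest values could be picked; the normalisation used here is an assumption, since the smoothness equation itself is not reproduced in the text, and the feature counts are illustrative.

```python
import numpy as np

def row_smoothness(row_range, half_window=5):
    """Smoothness c of every point in one row of the depth image (|S| = 2*half_window)."""
    n = len(row_range)
    c = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        s = row_range[i - half_window : i + half_window + 1]   # the point and its 10 neighbours
        c[i] = abs(np.sum(s - row_range[i])) / (2 * half_window * row_range[i])
    return c

# Per row (after masking the unset boundary values): the largest-c non-ground points
# become edge features, the smallest-c ground or validly labeled points become planar
# features, e.g.
#   edge_idx  = np.argsort(c)[::-1][:n_edge]
#   plane_idx = np.argsort(c)[:n_plane]
```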
The extracted features are then used to estimate the pose change between two consecutive scans. Let the current time be k; the smaller edge and planar feature point sets of the current scan are F_e and F_p, and the larger edge and planar feature point sets of the previous scan are their counterparts extracted at time k-1. Let t_{k-1} denote the start time of the (k-1)-th scan. At the end of each scan, the point cloud P_{k-1} received during that scan is re-projected to the time stamp t_k. During the k-th scan, the re-projected cloud is used together with P_k to estimate the motion of the radar; the larger edge and planar feature point sets of the previous scan are therefore also projected to time t_k. Next, for the points of F_e and F_p, corresponding points must be found in the re-projected larger edge and planar feature point sets;
For an edge feature point i in the set F_e, the nearest point j having the same label as i is searched in the re-projected larger edge feature point set of the previous scan. The feature corresponding to an edge point is a straight line, and two points determine a straight line, so the edge feature point l having the same label as j and closest to it is also searched, which constructs a point-to-line geometric constraint. Denoting the coordinates of points i, j and l, the pose transformation is converted into a point-to-line distance calculation (equation (6)), which constitutes the optimization equation of the edge feature points;
For a planar feature point i in the set F_p, the nearest planar feature point j having the same label as i is searched in the re-projected larger planar feature point set of the previous scan. The feature corresponding to a planar point is a plane, and three points determine a plane, so the two planar feature points l and m having the same label as j and closest to it are also searched, which constructs a point-to-plane geometric constraint. Denoting the coordinates of points i, j, l and m, the pose transformation is converted into a point-to-plane distance calculation, which constitutes the optimization equation of the planar feature points. The nearest-point searches above use the nearest-neighbour search of a K-D tree (K-Dimensional Tree).
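A minimal sketch of the label-constrained nearest-neighbour search with a K-D tree follows, built here with scipy's cKDTree; building one tree per label is just one simple way to enforce the same-label requirement.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_with_same_label(query_pt, query_label, ref_points, ref_labels):
    """Return the closest reference point carrying the same segment label, or None."""
    mask = ref_labels == query_label
    if not mask.any():
        return None
    tree = cKDTree(ref_points[mask])        # K-D tree over the same-label points only
    dist, idx = tree.query(query_pt)        # nearest-neighbour lookup
    return ref_points[mask][idx], dist
```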
Motion estimation is then performed as follows. During one scan, the angular velocity and linear velocity of the radar are assumed to be uniform. Let t be the current time and t_k the start time of the k-th scan, and let T_k be the radar pose transformation over the interval [t_k, t]; T_k comprises the rigid motion of the radar in the six-degree-of-freedom space, T_k = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T. Let t_i be the time stamp of the laser point p_ki, p_ki ∈ P_k, and T_(k,i) the radar pose transformation over [t_k, t_i]; then T_(k,i) is obtained by linear interpolation:

T_(k,i) = ( (t_i − t_k) / (t − t_k) ) · T_k   (8)
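A one-line sketch of the interpolation of equation (8): the full transform T_k over [t_k, t] is scaled down to the sub-interval [t_k, t_i] of laser point i, under the stated constant-velocity assumption.

```python
import numpy as np

def interpolate_pose(T_k, t_k, t, t_i):
    # T_k = [tx, ty, tz, theta_x, theta_y, theta_z]; scale by the elapsed fraction of the sweep
    s = (t_i - t_k) / (t - t_k)
    return s * np.asarray(T_k, dtype=float)
```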
To solve the motion estimation problem of the radar, the geometric relationship between the current edge and planar feature sets and their re-projected counterparts must be established separately. From equation (8), equation (9) is obtained, where V_(k,i) denotes the coordinates of a laser point i in the edge or planar feature set, with a corresponding re-projected point whose coordinates are related to it through T_(k,i)(1:3), the first to third entries of T_(k,i), i.e. (t_x, t_y, t_z), and a rotation matrix R defined by the Rodrigues formula:

R = I + ω̂ sin θ + ω̂^2 (1 − cos θ)   (10)
where θ is the rotation angle:

θ = ||T_(k,i)(4:6)||,   (11)

ω is the unit vector of the rotation direction:

ω = T_(k,i)(4:6) / ||T_(k,i)(4:6)||,   (12)

and ω̂ denotes the skew-symmetric matrix spanned by the vector ω, i.e. for ω = [ω_1, ω_2, ω_3]^T:

ω̂ = [  0    −ω_3   ω_2
       ω_3    0    −ω_1
      −ω_2   ω_1    0  ]   (13)
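The rotation matrix of equations (10)-(13) can be assembled as in the following sketch from the rotation-vector part T_(k,i)(4:6).

```python
import numpy as np

def rotation_from_rotvec(rotvec):
    """Rodrigues formula: R = I + sin(theta)*w_hat + (1 - cos(theta))*w_hat @ w_hat."""
    theta = np.linalg.norm(rotvec)          # rotation angle, equation (11)
    if theta < 1e-12:
        return np.eye(3)                    # no rotation
    w = rotvec / theta                      # unit rotation axis, equation (12)
    w_hat = np.array([[0.0, -w[2], w[1]],   # skew-symmetric matrix of w, equation (13)
                      [w[2], 0.0, -w[0]],
                      [-w[1], w[0], 0.0]])
    return np.eye(3) + np.sin(theta) * w_hat + (1.0 - np.cos(theta)) * (w_hat @ w_hat)
```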
According to equation (6) and equations (8)-(12), the geometric relationship between each edge feature point and its corresponding edge line is obtained:

f_e(T_k) = h_e,   (14)

and in the same way the geometric relationship between each planar feature point and its corresponding plane is obtained:

f_p(T_k) = h_p.   (15)
Taking equation (14) as an example, a Levenberg-Marquardt (L-M) optimization is performed. Each row of f_e corresponds to an edge feature point, and h_e contains the distances between the edge feature points and their corresponding edge lines. The Jacobian matrix of f_e with respect to T_k is computed and denoted J. The L-M method then solves the optimization problem of equation (14), yielding the iterative formula for T_k:

T_k ← T_k − (J^T J + λ diag(J^T J))^{-1} J^T h_e.   (16)

where λ is a coefficient factor in the L-M method. This yields a pose estimate T_k = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T; a second L-M optimization is then carried out based on [t_z, θ_x, θ_y] of this estimate and equation (15), and the final pose estimate is formed by combining [t_x, t_y, θ_z] selected from T_k with the [t_z, θ_x, θ_y] obtained from the first optimization;
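One update of the iteration in equation (16) can be written as below; residuals and jacobian stand in for the feature-distance function f_e and its Jacobian J, which depend on the correspondences built above and are therefore only assumed callables here.

```python
import numpy as np

def lm_step(T_k, residuals, jacobian, lam=1e-2):
    """One Levenberg-Marquardt update of equation (16); lam is the coefficient factor lambda."""
    J = jacobian(T_k)                                   # Jacobian of f_e with respect to T_k
    h = residuals(T_k)                                  # current point-to-line distances h_e
    A = J.T @ J + lam * np.diag(np.diag(J.T @ J))       # damped normal-equation matrix
    return T_k - np.linalg.solve(A, J.T @ h)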
The precondition for performing the L-M optimization in two steps is that the ground changes little while the laser radar moves. If the ground changes drastically, or the unmanned aerial vehicle flies at high altitude so that fewer than 7200 ground points are scanned, the planar features are obtained from non-ground points, and the two-step L-M optimization is not used for pose estimation; instead equations (14)-(15) are combined to obtain:

f(T_k) = h,   (17)

where each row of f corresponds to a planar or edge feature point and h contains the distances to the corresponding planes or edge lines. Denoting the Jacobian matrix of f with respect to T_k as J, the six-degree-of-freedom pose estimate is obtained by a single L-M optimization with the iterative formula:

T_k ← T_k − (J^T J + λ diag(J^T J))^{-1} J^T h.   (18)
Pose graph optimization is then designed. The projection of all point clouds scanned before time k into the inertial coordinate system is defined as Q_{k-1}; Q_{k-1} is thus the three-dimensional point cloud map built up to time k. The purpose of the back-end optimization is to match the feature points obtained at time k into the point cloud map Q_{k-1}, thereby obtaining a set of pose estimates;
The point cloud map at each time is stored as the feature point set obtained from each scan, and Q_{k-1} stores the collection of all feature point sets before time k; each set in Q_{k-1} is associated with the radar pose at the time of the corresponding scan. Because the sensor pose estimate after pose graph optimization drifts little, it is assumed to be drift-free over a short period; S denotes the size of this collection. The radar poses serve as nodes of the pose graph, the relative pose estimates between them as constraints on the nodes, and the feature sets as the observation data attached to the nodes. Adding the new node to the pose graph gives the objective function of the pose graph optimization,
where V represents the pose estimates of all pose nodes in the pose graph, E is the set of all edges of the pose graph, e_ij is the error of the pose estimate represented by edge (i, j), and Σ_ij is the covariance matrix of that pose estimate. The optimization problem thus becomes a least-squares problem, which is solved with the L-M method to obtain a more accurate pose transformation estimate;
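The objective function itself is not reproduced in the text; written in the standard pose-graph least-squares form implied by the symbols defined above (an assumption, not necessarily the patent's exact equation), it reads:

```latex
\min_{V} \sum_{(i,j) \in E} e_{ij}^{T} \, \Sigma_{ij}^{-1} \, e_{ij}
```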
For a new feature point set, the iterative closest point (ICP) method is used to match it against previously stored feature point sets. If the matching succeeds, a loop closure has been detected; a new pose estimation constraint is obtained and added to the pose graph, an optimization is run with the iSAM method, and the pose estimates are updated. If the matching fails, the unmanned aerial vehicle has not previously visited its current position and no new constraint is added;
The map construction module takes the pose estimates published at 2 Hz by the pose graph optimization and projects each frame of the local point cloud map into the inertial coordinate system, thereby obtaining the three-dimensional point cloud map, which is published at 2 Hz. The origin of the local point cloud map is the sensor position at the current time, and the origin of the inertial coordinate system is the sensor position when the algorithm starts. The transform integration module interpolates between the 10 Hz pose estimates published by the radar odometry and the 2 Hz pose estimates obtained from the pose graph optimization, and publishes the final pose estimate at 10 Hz. Simultaneous localization and mapping is thereby realized;
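A sketch of the transform-integration idea, with poses represented as 4x4 homogeneous matrices (an assumed representation; the patent does not fix one): the odometry motion accumulated since the last pose-graph update is applied on top of the optimised pose, so the 10 Hz output follows the 2 Hz corrections.

```python
import numpy as np

def integrate_transform(T_map, T_odom_at_map, T_odom_now):
    """T_map: latest 2 Hz optimised pose; T_odom_at_map: 10 Hz odometry pose at that instant;
    T_odom_now: current 10 Hz odometry pose. All are 4x4 homogeneous matrices."""
    delta = np.linalg.inv(T_odom_at_map) @ T_odom_now   # odometry increment since the map update
    return T_map @ delta                                # corrected pose published at 10 Hz
```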
Finally, the three-dimensional point cloud map is processed with the qPCV algorithm integrated in the CloudCompare software to obtain the reconstructed three-dimensional point cloud map.
The invention has the characteristics and beneficial effects that:
The invention provides an environment scanning and reconstruction method based on a three-dimensional laser radar that achieves good real-time autonomous positioning and mapping on a quadrotor unmanned aerial vehicle carrying the three-dimensional laser radar.
Drawings
Fig. 1 is a technical roadmap of the invention.
Fig. 2 is a schematic view of ground point cloud segmentation.
Fig. 3 is a schematic diagram of determining whether adjacent laser points are from the same object.
Fig. 4 is a point cloud growth schematic.
Fig. 5 is a schematic diagram of constructing the feature point geometric constraint relationships, in which Fig. 5(a) shows the geometric constraint constructed for an edge feature point and Fig. 5(b) shows the geometric constraint constructed for a planar feature point.
Fig. 6 is a graph of the position change of the unmanned aerial vehicle during the outdoor fixed-point (hovering) experiment.
Fig. 7 is the three-dimensional point cloud map of Beiyang Square of Tianjin University obtained in the outdoor mapping experiment.
Fig. 8 is the three-dimensional point cloud map of Beiyang Square of Tianjin University after the reconstruction processing.
Detailed Description
In order to overcome the defects of the prior art, the invention aims to provide an environment scanning and reconstruction method based on the airborne three-dimensional laser radar of a quadrotor unmanned aerial vehicle, so that the quadrotor can perform real-time autonomous positioning and mapping with the airborne three-dimensional laser radar. The technical scheme adopted by the invention is an environment scanning and reconstruction method based on a three-dimensional laser radar, implemented on a quadrotor unmanned aerial vehicle equipped with an airborne three-dimensional laser radar. The method performs point cloud segmentation on the real-time three-dimensional point cloud acquired by the laser radar, extracts feature points from the point cloud for feature matching, performs pose graph optimization and loop closure detection to obtain high-precision three-dimensional positioning data and a three-dimensional point cloud map, and finally performs environment reconstruction with the qPCV (Portion of Visible Sky) method.
The method first defines one rotation of the laser transmitter in the laser radar as a scanning period; the point cloud data obtained during one scanning period form one frame of point cloud, denoted the k-th frame, where k is a positive integer. All laser points in one frame are denoted P, and P_k denotes all laser points in the k-th frame, P_k = {p_k1, p_k2, ..., p_kn}, where p_ki is one laser point of the point cloud P_k at time k. P_k is projected onto a two-dimensional image with resolution M x N, where M is determined by the horizontal resolution of the radar and N by its vertical resolution. Taking the Velodyne VLP-16 three-dimensional laser radar as an example, the laser transmitter of the VLP-16 rotates through 360° per revolution with a horizontal angular resolution of 0.2°, so M = 360/0.2 = 1800, and the VLP-16 has 16 scanning lines in the vertical direction, so N = 16. After re-projection, each laser point of the three-dimensional point cloud becomes a pixel of the two-dimensional image, and each pixel carries coordinate information and depth information. The coordinate p_ki = [x_ki, y_ki, z_ki]^T gives the position of the laser point in the point cloud at time k in a coordinate system whose origin is the center of the laser radar, and the depth information r_ki is the Euclidean distance from p_ki to the laser radar center at time k. This image is hereinafter referred to as the depth image.
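The projection onto the depth image can be sketched as follows. The sizes 1800 x 16 and the vertical field of view match the VLP-16 example; the row/column index computation is a common convention and an assumption, not the patent's exact formula.

```python
import numpy as np

def project_to_depth_image(points, n_rows=16, n_cols=1800,
                           fov_up=np.radians(15.0), fov_down=np.radians(-15.0)):
    """Project an (n, 3) point cloud onto the 2-D depth image described above."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                     # depth r_ki of each point
    azimuth = np.arctan2(y, x)                             # horizontal angle in (-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))         # vertical angle

    col = ((azimuth + np.pi) / (2.0 * np.pi) * n_cols).astype(int) % n_cols
    row = ((fov_up - elevation) / (fov_up - fov_down) * (n_rows - 1)).astype(int)
    row = np.clip(row, 0, n_rows - 1)

    depth = np.zeros((n_rows, n_cols))                     # 0 marks pixels without a return
    depth[row, col] = r
    return depth
```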
The pixels estimated to be ground are then marked in the depth image. In an outdoor environment the ground cannot be assumed to be entirely flat, so the ground model must be able to describe flat terrain, inclined terrain, and anything in between. To this end, an efficient partitioning of the data is achieved by representing the x-y plane as a circle of radius R = infinity and dividing it into several independent sectors; the parameter Δα describes the angle covered by each sector, giving M sectors S_i. The sector index to which a point p_ki maps is denoted sector(p_ki) and is computed from the angle atan2(y_ki, x_ki), i.e. the angle between the positive x-axis and the point (x_ki, y_ki), wrapped into [0, 2π), divided by Δα (equation (1)). The set of all points mapped to the same sector S_s is denoted P_s:

P_s = { p_i ∈ P | sector(p_i) = s }   (2)
Next, within each sector the points are distributed, according to the distance from the point (x_ki, y_ki, z_ki) to (0, 0, z_ki), into a number of bins covering different distance ranges, j = 1, 2, ..., B, where B is the maximum number of bins and each bin has a minimum and a maximum covered distance. If the distance of a point p_ki ∈ P_s lies between the minimum and maximum distance of a bin, the point falls into that bin, and the set of all points falling into the bin is recorded. In this way the three-dimensional point set of each sector is mapped onto a two-dimensional point set (equation (3)). This mapping is clearly one-to-many: a bin may contain several points or none at all. For each non-empty bin a representative point is therefore computed; taking the point with the lowest z coordinate gives the best result, since it is the most likely to lie on the ground plane. The representative points are then fitted with straight lines by an incremental algorithm, and whether a line can represent the ground is finally judged from its slope. For every point, the line closest to it within its sector is found; if that line is judged to represent the ground and the distance from the point to the line is below a threshold, the point is marked as ground, otherwise it is left unmarked.
For the point cloud obtained at time k by the VLP-16 laser radar, taking Δα = 0.2° gives M = 1800 sectors in total, equal to the number of columns of the two-dimensional depth image. The vertical scanning range of the VLP-16 is [-15°, 15°], and laser points scanned on the ground can only appear on the 8 scanning lines in the range [-15°, -1°], corresponding to the lower 8 rows of the two-dimensional depth image. Taking B = 8, the 8 points in each sector fall into 8 separate bins, each bin containing exactly one point, so the representative point of every bin is determined accordingly. The straight line is fitted starting from the lowest row, since the points of the lowest row are the most likely to be scanned on the ground. If the absolute value of the slope of the fitted straight line is at most 10°, the line is judged to represent the ground and its points are marked as ground points.
The method then segments and marks the non-ground points in the two-dimensional depth image. The key of the adopted segmentation method is to judge whether the laser points obtained by any two laser beams come from scanning the same object. Assume that the scanned laser points A and B are adjacent laser points obtained from adjacent laser beams OA and OB, where O is the center point of the laser radar. Without loss of generality, assume a coordinate system with origin O whose y-axis coincides with the longer of OA and OB; here OA is assumed to be the longer one. The angle β is defined as the angle between the y-axis and the straight line through A and B. Since the two-dimensional depth image carries the depth of each point, i.e. the distance from each laser point to the laser radar center, the distances from A and B to the origin O are known and are denoted r_A and r_B respectively. From this the calculation formula for β is derived:

β = atan2( r_B · sin α, r_A − r_B · cos α )
where α is the angle between two adjacent laser beams of the laser radar; it is determined by the resolution of the laser radar and is a fixed value. For the VLP-16 lidar, α = 0.2° when A and B are horizontally adjacent points and α = 2° when they are vertically adjacent points.
Point cloud segmentation is then performed on the two-dimensional depth image obtained above. Two laser points are considered to originate from the same object if they are adjacent in the depth image and the angle β between them is larger than a threshold θ. For each two-dimensional depth image, computation starts from the laser point in the upper-left corner and proceeds from left to right and from top to bottom. For each unlabeled point a breadth-first search is performed, so that all laser points originating from the same object as that point can be given the same label. When the breadth-first search of the current point is finished, the next unlabeled point is found in order and the operation is repeated, which guarantees that the labeling of the whole two-dimensional depth image is finished in O(n) time.
It should be noted that this judgment fails in one scenario: when the lidar is very close to a wall. For a laser point that is far from the lidar but still on the wall, β will be small, so the part of the wall far from the lidar may be split into several parts. Essentially, this means that when β is smaller than θ it is difficult to tell whether the two points originate from two different objects or are merely points on a wall nearly parallel to the laser beam direction. Moreover, when the lidar is in a grassland environment, the weeds on the ground are divided into many parts, and these parts contribute little to feature matching and may even reduce the matching accuracy. Considering both cases, the segmented data is further processed as follows: 1) if the number of laser points under a certain label exceeds 30, the label is valid; 2) if the number of laser points under a certain label is less than 30 but at least 5, the number of laser points in the vertical direction is counted; 3) if the number of laser points in the vertical direction is more than 3, the label is valid; 4) the remaining labels are marked as an unusable class, i.e. those labels are not valid.
After this processing, each laser point carries three pieces of information: 1) the label value after segmentation, denoted L_ki; 2) its row and column numbers in the two-dimensional depth image; 3) the depth information r_ki. Feature extraction is then performed on these points.
The first step of feature extraction is to define the smoothness c of each laser point in the two-dimensional depth image. The set S consists of the point p_ki and 10 neighbouring points on the same row, five on each side, so |S| = 10. A threshold c_th is set: points with c > c_th are edge feature points and points with c < c_th are planar feature points. To extract features in all directions, the whole two-dimensional depth image is divided into several sub-images along the horizontal direction. For each sub-image, the smoothness of every point on every row is computed and the points are sorted by it. Then, from the non-ground points of each row, the points with the largest c values are selected as edge feature points, forming the larger edge feature point set; from the ground points or validly labeled laser points of each row, the points with the smallest c values are selected as planar feature points, forming the larger planar feature point set. To facilitate the subsequent feature matching, a smaller edge feature point set F_e is selected from the larger edge set by taking the edge feature points with the largest c values, and a smaller planar feature point set F_p is selected by taking the planar feature points with the smallest c values that belong to the ground points. Obviously F_e is a subset of the larger edge feature point set and F_p a subset of the larger planar feature point set. In this implementation, each two-dimensional depth image is divided into 6 sub-images, each with a resolution of 300 x 16.
The extracted features are then used to estimate the pose change between two consecutive scans. Let the current time be k; the smaller edge and planar feature point sets of the current scan are F_e and F_p, and the larger edge and planar feature point sets of the previous scan are their counterparts extracted at time k-1. Let t_{k-1} denote the start time of the (k-1)-th scan. At the end of each scan, the point cloud P_{k-1} received during that scan is re-projected to the time stamp t_k. During the k-th scan, the re-projected cloud is used together with P_k to estimate the motion of the radar; the larger edge and planar feature point sets of the previous scan are therefore also projected to time t_k. Next, for the points of F_e and F_p, corresponding points must be found in the re-projected larger edge and planar feature point sets.
For an edge feature point i in the set F_e, the nearest point j having the same label as i is searched in the re-projected larger edge feature point set of the previous scan. The feature corresponding to an edge point is a straight line, and two points determine a straight line, so the edge feature point l having the same label as j and closest to it is also searched, which constructs a point-to-line geometric constraint. Denoting the coordinates of points i, j and l, the pose transformation is converted into a point-to-line distance calculation (equation (6)), which constitutes the optimization equation of the edge feature points.
For a planar feature point i in the set F_p, the nearest planar feature point j having the same label as i is searched in the re-projected larger planar feature point set of the previous scan. The feature corresponding to a planar point is a plane, and three points determine a plane, so the two planar feature points l and m having the same label as j and closest to it are also searched, which constructs a point-to-plane geometric constraint. Denoting the coordinates of points i, j, l and m, the pose transformation is converted into a point-to-plane distance calculation, which constitutes the optimization equation of the planar feature points. The nearest-point searches above use the nearest-neighbour search of a K-D tree (K-Dimensional Tree). Motion estimation is performed as follows.
During one scan, the angular velocity and linear velocity of the radar are assumed to be uniform. Let t be the current time and t_k the start time of the k-th scan, and let T_k be the radar pose transformation over the interval [t_k, t]; T_k comprises the rigid motion of the radar in the six-degree-of-freedom space, T_k = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T. For a laser point p_ki, p_ki ∈ P_k, let t_i be its time stamp and T_(k,i) the radar pose transformation over [t_k, t_i]; then T_(k,i) can be derived by linear interpolation:

T_(k,i) = ( (t_i − t_k) / (t − t_k) ) · T_k   (8)
To solve the motion estimation problem of the radar, the geometric relationship between the current edge and planar feature sets and their re-projected counterparts must be established separately. From equation (8), equation (9) is obtained, where V_(k,i) denotes the coordinates of a laser point i in the edge or planar feature set, with a corresponding re-projected point whose coordinates are related to it through T_(k,i)(1:3), the first to third entries of T_(k,i), i.e. (t_x, t_y, t_z), and a rotation matrix R defined by the Rodrigues formula:

R = I + ω̂ sin θ + ω̂^2 (1 − cos θ)   (10)

where θ is the rotation angle:

θ = ||T_(k,i)(4:6)||,   (11)

ω is the unit vector of the rotation direction:

ω = T_(k,i)(4:6) / ||T_(k,i)(4:6)||,   (12)

and ω̂ denotes the skew-symmetric matrix spanned by the vector ω, i.e. for ω = [ω_1, ω_2, ω_3]^T:

ω̂ = [  0    −ω_3   ω_2
       ω_3    0    −ω_1
      −ω_2   ω_1    0  ]   (13)
According to equation (6) and equations (8)-(12), the geometric relationship between each edge feature point and its corresponding edge line is obtained:

f_e(T_k) = h_e,   (14)

and in the same way the geometric relationship between each planar feature point and its corresponding plane is obtained:

f_p(T_k) = h_p.   (15)
Taking equation (14) as an example, a Levenberg-Marquardt (L-M) optimization is performed. Each row of f_e corresponds to an edge feature point, and h_e contains the distances between the edge feature points and their corresponding edge lines. The Jacobian matrix of f_e with respect to T_k is computed and denoted J. The L-M method then solves the optimization problem of equation (14), yielding the iterative formula for T_k:

T_k ← T_k − (J^T J + λ diag(J^T J))^{-1} J^T h_e.   (16)

where λ is a coefficient factor in the L-M method. This yields a pose estimate T_k = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T; a second L-M optimization is then carried out based on [t_z, θ_x, θ_y] of this estimate and equation (15), and the final pose estimate is formed by combining [t_x, t_y, θ_z] selected from T_k with the [t_z, θ_x, θ_y] obtained from the first optimization.
It should be noted that the precondition for performing the L-M optimization in two steps is that the ground changes little while the laser radar moves. If the ground changes drastically, or the unmanned aerial vehicle flies at high altitude so that fewer than 7200 ground points are scanned, the planar features are obtained from non-ground points, and the two-step L-M optimization is not used for pose estimation; instead equations (14)-(15) are combined to obtain:

f(T_k) = h,   (17)

where each row of f corresponds to a planar or edge feature point and h contains the distances to the corresponding planes or edge lines. Denoting the Jacobian matrix of f with respect to T_k as J, the six-degree-of-freedom pose estimate is obtained by a single L-M optimization with the iterative formula:

T_k ← T_k − (J^T J + λ diag(J^T J))^{-1} J^T h.   (18)
Pose graph optimization is then designed. The projection of all point clouds scanned before time k into the inertial coordinate system is defined as Q_{k-1}; Q_{k-1} is thus the three-dimensional point cloud map built up to time k. The purpose of the back-end optimization is to match the feature points obtained at time k into the point cloud map Q_{k-1}, thereby obtaining a set of pose estimates that have a lower frequency but a higher precision than those of the front-end feature matching.
The point cloud map at each time is stored as the feature point set obtained from each scan, and Q_{k-1} stores the collection of all feature point sets before time k; each set in Q_{k-1} is associated with the radar pose at the time of the corresponding scan. Because the sensor pose estimate after pose graph optimization drifts little, it is assumed to be drift-free over a short period; S denotes the size of this collection. The radar poses serve as nodes of the pose graph, the relative pose estimates between them as constraints on the nodes, and the feature sets as the observation data attached to the nodes. Adding the new node to the pose graph gives the objective function of the pose graph optimization,
where V represents the pose estimates of all pose nodes in the pose graph, ε is the set of all edges of the pose graph, e_ij is the error of the pose estimate represented by edge (i, j), and Σ_ij is the covariance matrix of that pose estimate. The optimization problem thus becomes a least-squares problem, which is solved with the L-M method to obtain a more accurate pose transformation estimate.
For a new feature point set, the iterative closest point (ICP) method is used to match it against previously stored feature point sets. If the matching succeeds, a loop closure has been detected; a new pose estimation constraint is obtained and added to the pose graph, an optimization is run with the iSAM method, and the pose estimates are updated. If the matching fails, the unmanned aerial vehicle has not previously visited its current position and no new constraint is added.
The map construction module takes the pose estimates published at 2 Hz by the pose graph optimization and projects each frame of the local point cloud map into the inertial coordinate system, thereby obtaining the three-dimensional point cloud map, which is published at 2 Hz. The origin of the local point cloud map is the sensor position at the current time, and the origin of the inertial coordinate system is the sensor position when the algorithm starts. The transform integration module interpolates between the 10 Hz pose estimates published by the radar odometry and the 2 Hz pose estimates obtained from the pose graph optimization, and publishes the final pose estimate at 10 Hz. Simultaneous localization and mapping is thereby realized.
Finally, the three-dimensional point cloud map is processed with the qPCV algorithm integrated in the CloudCompare software to obtain the reconstructed three-dimensional point cloud map.
The invention aims to solve the technical problem of providing an environment scanning and reconstructing method based on a four-rotor unmanned aerial vehicle-mounted three-dimensional laser radar, which realizes real-time autonomous positioning and mapping by using the four-rotor unmanned aerial vehicle-mounted three-dimensional laser radar.
The technical scheme adopted by the invention is that the environment scanning and reconstruction method based on the three-dimensional laser radar is realized on a four-rotor unmanned aerial vehicle provided with an airborne three-dimensional laser radar, and the method comprises the steps of carrying out point cloud segmentation on a real-time three-dimensional point cloud image acquired by the three-dimensional laser radar, extracting characteristic points from the point cloud image for characteristic matching, carrying out pose image optimization and loop detection to obtain high-precision three-dimensional positioning data and a three-dimensional point cloud map, and finally carrying out environment reconstruction by utilizing a qPCV (Portion of Visible Sky) method. The method comprises the following steps:
Firstly, defining a circle of rotation of a laser transmitter in a laser radar as a scanning period, obtaining point cloud data in the scanning period as a frame of point cloud picture, recording as a kth frame of point cloud picture, All laser points in the point cloud for each frame are denoted as P, P k representing all laser points in the point cloud for the kth frame. P k={pk1,pk2,…,pkn},pki is one laser point in the point cloud P k at time k. P k is projected onto a two-dimensional image with a resolution of mxn, where M is determined by the horizontal resolution of the radar and N is determined by the vertical resolution of the radar. In the method, as an example, a VLP-16 three-dimensional laser radar of Velodyne company is taken, a laser emitter in the VLP-16 laser radar rotates for 360 degrees in one circle, the angular resolution in the horizontal direction is 0.2 degrees, so M=360/0.2=1800, and the VLP-16 has 16 scanning lines in the vertical direction, so N=16. After re-projection, each laser point in the three-dimensional point cloud becomes a pixel point in the two-dimensional image, wherein each pixel point carries coordinate information and depth information. The coordinate p ki=[xki,yki,zki]T represents the position of the laser point in the point cloud diagram at the time k in a coordinate system with the center of the laser radar as the origin. Depth information r ki then represents the euclidean distance of the laser radar center point at time p ki to time k. This image is hereinafter referred to as a depth image.
And then marking the pixel points estimated to be the ground in the depth image. In an outdoor environment, it is not possible that all the ground is flat, so the ground plane model must be adapted to describe both flat and inclined terrain and terrain in between. To solve this problem, an efficient partitioning of data can be achieved by representing the x-y plane as a circle with radius r= infinity and dividing it into several independent sectors, the parameter Δα being used to describe the angle covered by each sector, thus yielding M segments of sectors S i,The sector number to which point p ki maps is represented by sector (p ki), and is calculated as follows
Where atan2 (y ki,xki) gives the angle between the positive x-axis of the plane and the point (x ki,yki) and ensures that it is within 0,2 pi. The method represents by P s the set of all points mapped to the same sector S s,
Ps={pi∈P|sector(pi)=ss}#(2)
Next, based on the distance from point (x ki,yki,zki) to (0, z ki) in the same sectorIs dispersed to a plurality of sets of different distance rangesJ=1, 2, …, B. The method is respectively used forAndRepresenting a collectionMinimum and maximum distances of coverage. If the distance of a point p ki∈Ps is satisfiedThen it indicates that the point falls onIn the aggregate, useRepresenting all points falling in the set. Thereby, three-dimensional points are integratedMapping to a two-dimensional point set
Obviously, this mapping is one-to-many, oneMultiple points may be included or none may be included. Thus, for each non-empty setCan calculate a representative pointThe use of the lowest point of the z-axis coordinates yields better results because it is most likely to lie on the ground plane. Subsequently, the representative point is utilizedAnd performing linear fitting by an increment algorithm. And finally judging whether the straight line can represent the ground according to the slope of the fitted straight line. For each point, finding the nearest straight line to the point in the sector to which the point belongs, if the straight line is judged to be capable of representing the ground, and the distance from the point to the straight line is smaller than a threshold value, marking the point as the ground, otherwise, marking no mark.
For a point cloud image of k moments obtained by VLP-16 laser radar scanning, taking Δα=0.2°, there are m=1800 sectors in total, which is equal to the number of columns of the two-dimensional depth image. The vertical scanning range of VLP-16 lidar is [ -15 deg., 15 deg. ], and the laser point scanned to the ground must appear on 8 scanning lines ranging from [ -15 deg. ], 1 deg. ], corresponding to the next 8 lines of data of the two-dimensional depth image. Let b=8, drop 8 points in each sector to 8 points, respectivelyIn (3), each set is exactly one point, representing a pointAnd also determined accordingly. The straight line is fitted starting from the lowest row, since the points of the lowest row are most likely to be scanned to the ground. And if the absolute value of the slope of the straight line after fitting is smaller than or equal to 10 degrees, judging the straight line as a ground point.
The method then segments and marks non-ground points in the two-dimensional depth image. The key of the segmentation method adopted by the method is to judge whether the laser points obtained by any two laser beams come from scanning the same object. Assume that the scanned laser points a and B are adjacent laser points derived from adjacent laser beams OA and OB, and O represents the center point of the laser radar. Without loss of generality, it is assumed that there is a coordinate system with O as the origin and the y-axis of the coordinate system coincides with the longer of OA and OB, here OA is assumed to be longer. The angle beta is defined as the angle between the y-axis and the straight line passing through A and B. Since the two-dimensional depth image carries depth information for each point, i.e. the distance from each laser point to the laser radar center, we can know the distances from A and B to the origin O, defined as r A and r B, respectively. From this a calculation formula for beta can be derived,
Wherein alpha is the included angle between two adjacent laser beams of the laser radar, is determined by the resolution of the laser radar, and is a fixed value. For VLP-16 lidar, α=0.2° when a and B are horizontally adjacent points and α=2° when a and B are vertically adjacent points.
And then performing point cloud segmentation according to the two-dimensional depth image obtained before. Two laser points are considered to originate from the same object if they are adjacent in the depth image and the angle β between them is larger than a threshold θ. For each two-dimensional depth image, calculation starts from the laser spot in the upper left corner, from left to right, from top to bottom. For each unlabeled feature point, a breadth first search is performed, whereby all laser points from the same object as the point can be labeled as the same label. After the breadth-first search of the current point is finished, the next unmarked point is found in sequence, and the operation is repeated, so that the marking of the whole two-dimensional depth image can be ensured to be finished in O (n) time.
It should be noted that this judgment method has a failed scenario: when the lidar is very close to a wall. For a laser spot that is far from the lidar but still on the wall, β will be small, so the part of the wall that is far from the lidar may be split into multiple parts. Essentially, this means that if β is smaller than θ, it is difficult to infer whether the two points originate from two different objects or are merely points located on a wall nearly parallel to the laser line direction. Moreover, when the lidar is in a grassland environment, the weeds on the ground will be divided into multiple parts, but these parts have little help with feature matching or even reduce the matching accuracy. In consideration of both cases, the segmented data is further processed as follows: 1) If the number of laser points under a certain label exceeds 30, the label is valid; 2) If the number of laser points under a certain label is less than 30 and more than or equal to 5, counting the number of laser points in the vertical direction; 3) If the number of laser points in the vertical direction is more than 3, the label is effective; 4) The remaining tags are marked as unusable classes, i.e. the tag is not valid.
After this processing, each laser point carries three pieces of information: 1) the label value after segmentation, denoted L ki; 2) its row and column numbers in the two-dimensional depth image; 3) its depth information r ki. Feature extraction is then performed on these points.
The first step in feature extraction is to define the smoothness c of each laser point in the two-dimensional depth image.
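The smoothness expression appears only as an image in the source; in the position of the document's formula (5), a LOAM-style reconstruction consistent with the definition of the set S in the next sentence is:

\[ c = \frac{1}{|S| \cdot \lVert r_{ki} \rVert} \left\lVert \sum_{j \in S,\, j \neq i} \left( r_{kj} - r_{ki} \right) \right\rVert \]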
The set S consists of the point p ki and 10 neighbouring points on the same row, 5 on each side, so |S| = 10. A threshold c_th is set: points with c > c_th are edge feature points, and points with c < c_th are planar feature points. In order to extract features in all directions, the whole two-dimensional depth image is split in the horizontal direction into several sub-images. For each sub-image, the smoothness of every point on each row is computed and sorted. Then, from the non-ground points of each row, the points with the largest c values are selected as edge feature points, forming an edge feature point set denoted here F̂_e; from the ground points or the points with valid labels of each row, the points with the smallest c values are selected as planar feature points, forming a planar feature point set denoted here F̂_p. To facilitate subsequent feature matching, a smaller number of edge feature points with the largest c values is selected from F̂_e to form the smaller edge feature point set F_e, and a smaller number of planar feature points with the smallest c values and belonging to the ground points is selected from F̂_p to form the smaller planar feature point set F_p. Clearly F_e ⊆ F̂_e and F_p ⊆ F̂_p. In this work, each two-dimensional depth image is divided into 6 sub-images, each with a resolution of 300×16.
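The patent states neither the per-row feature counts nor the value of c_th (they survive only as images), so the Python sketch below treats them as parameters; it computes the smoothness over 5 neighbours on each side and performs the per-row selection described above for one sub-image.

import numpy as np

def extract_features(depth, ground_mask, valid_mask, c_th=0.1,
                     n_edge=2, n_edge_small=1, n_plane=4, n_plane_small=2):
    """Smoothness c and per-row feature selection on one sub-image.

    depth       : array of ranges r_ki (rows = scan lines, cols = azimuth)
    ground_mask : True where a point was marked as ground
    valid_mask  : True where the segmentation label is valid
    The per-row feature counts and c_th are illustrative assumptions.
    """
    rows, cols = depth.shape
    c = np.full_like(depth, np.nan, dtype=float)
    for j in range(5, cols - 5):
        # sum_{j in S, j != i} (r_kj - r_ki), 5 neighbours on each side
        diff = depth[:, j - 5:j + 6].sum(axis=1) - 11.0 * depth[:, j]
        c[:, j] = np.abs(diff) / (10.0 * np.abs(depth[:, j]) + 1e-9)

    Fe_big, Fe_small, Fp_big, Fp_small = [], [], [], []
    for r in range(rows):
        order = np.argsort(c[r])  # ascending c (NaN columns sort last)
        edge = [j for j in order[::-1]
                if c[r, j] > c_th and not ground_mask[r, j]]
        plane = [j for j in order
                 if c[r, j] < c_th and (ground_mask[r, j] or valid_mask[r, j])]
        Fe_big += [(r, j) for j in edge[:n_edge]]
        Fe_small += [(r, j) for j in edge[:n_edge_small]]
        Fp_big += [(r, j) for j in plane[:n_plane]]
        ground_plane = [j for j in plane if ground_mask[r, j]]
        Fp_small += [(r, j) for j in ground_plane[:n_plane_small]]
    return c, Fe_big, Fe_small, Fp_big, Fp_small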
The extracted features are then used to estimate the pose change between two successive scan frames. Let the current time be k; the smaller edge and planar feature point sets at the current time are F_e^k and F_p^k, and the larger edge and planar feature point sets from the previous time are F̂_e^{k-1} and F̂_p^{k-1}. Let t_{k-1} denote the start time of the (k-1)th scan. At the end of each scan, the point cloud P_{k-1} received during that scan is re-projected to the time stamp t_k, giving the re-projected point cloud. During the kth scan, the re-projected point cloud and P_k are used together to estimate the motion of the radar. Because F̂_e^{k-1} and F̂_p^{k-1} are subsets of P_{k-1}, they are likewise projected to the time t_k. Next, the points corresponding to F_e^k and F_p^k must be found in the projected F̂_e^{k-1} and F̂_p^{k-1}.
For an edge feature point i in the set F_e^k, the edge feature point j with the same label as i and closest to it is found in the projected F̂_e^{k-1}. Since the feature corresponding to an edge feature point is a straight line, and two points determine a line, the edge feature point l with the same label as j and closest to it is also found in the projected F̂_e^{k-1}. A point-to-line geometric constraint is thus constructed. Denoting the coordinates of points i, j and l as X_i^k, X_j^{k-1} and X_l^{k-1} respectively, the pose transformation is converted into a point-to-line distance calculation:
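The distance expression is shown only as an image in the source; in the position of formula (6), a reconstruction consistent with the point-to-line constraint above is:

\[ d_e = \frac{\left| \left( X_i^{k} - X_j^{k-1} \right) \times \left( X_i^{k} - X_l^{k-1} \right) \right|}{\left| X_j^{k-1} - X_l^{k-1} \right|} \]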
An optimization equation of the edge feature points is constructed.
For a planar feature point i in the set F_p^k, the planar feature point j with the same label as i and closest to it is found in the projected F̂_p^{k-1}. Since the feature corresponding to a planar feature point is a plane, and three points determine a plane, the two planar feature points l and m with the same label as j and closest to it are also found in the projected F̂_p^{k-1}. A point-to-plane geometric constraint is thus constructed. Denoting the coordinates of points i, j, l and m as X_i^k, X_j^{k-1}, X_l^{k-1} and X_m^{k-1} respectively, the pose transformation is converted into a point-to-plane distance calculation:
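The distance expression is likewise an image in the source; in the position of formula (7), a reconstruction consistent with the point-to-plane constraint above is:

\[ d_p = \frac{\left| \left( X_i^{k} - X_j^{k-1} \right) \cdot \left( \left( X_j^{k-1} - X_l^{k-1} \right) \times \left( X_j^{k-1} - X_m^{k-1} \right) \right) \right|}{\left| \left( X_j^{k-1} - X_l^{k-1} \right) \times \left( X_j^{k-1} - X_m^{k-1} \right) \right|} \]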
An optimization equation of the planar feature points is thus established. The nearest-point searches above use the nearest-neighbour search of a K-D Tree (K-Dimensional Tree). Motion estimation is then performed as follows.
It is assumed that both the angular velocity and the linear velocity of the radar are constant during one scan. Let t be the current time and t_k the time at which the kth scan starts, and let T_k be the radar pose transformation over [t_k, t]. T_k comprises the rigid-body motion of the radar in six degrees of freedom, T_k = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T. For a laser point p_ki ∈ P_k, let t_i be its timestamp and T_(k,i) the radar pose transformation over [t_k, t_i]. Then T_(k,i) can be obtained by linear interpolation:
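The interpolation expression is an image in the source; in the position of formula (8), the linear interpolation described above reads:

\[ T_{(k,i)} = \frac{t_i - t_k}{t - t_k} \, T_k \]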
To solve the motion estimation problem of the radar, the geometric relationships between F_e^k and the projected F̂_e^{k-1}, and between F_p^k and the projected F̂_p^{k-1}, must be established separately. According to formula (8), one obtains
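The relation itself is an image in the source; one reconstruction consistent with the definitions in the next sentence (and with the LOAM-style formulation this step follows) would be, in the position of formula (9),

\[ V_{(k,i)} = \mathbf{R} \, \tilde{V}_{(k,i)} + T_{(k,i)}(1\!:\!3) \]

where the direction of the mapping between V_(k,i) and its corresponding point Ṽ_(k,i) is an assumption.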
where V_(k,i) denotes the coordinates of a laser point i in F_e^k or F_p^k, Ṽ_(k,i) denotes the point coordinates corresponding to it, and T_(k,i)(1:3) denotes the first to third components of T_(k,i), i.e. (t_x, t_y, t_z). R denotes a rotation matrix defined by the Rodrigues formula:
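The Rodrigues formula appears only as an image in the source; its standard form, consistent with the definitions of θ and the skew-symmetric matrix that follow, is, in the position of formula (10),

\[ \mathbf{R} = e^{\hat{\omega}\theta} = \mathbf{I} + \hat{\omega}\sin\theta + \hat{\omega}^{2}\,(1 - \cos\theta) \]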
Wherein θ is the rotation angle:
θ = ||T_(k,i)(4:6)||,    (11)
The unit vector ω representing the rotation direction is:
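The expression is an image in the source; consistently with the definition of θ in formula (11), it reads, in the position of formula (12),

\[ \omega = \frac{T_{(k,i)}(4\!:\!6)}{\lVert T_{(k,i)}(4\!:\!6) \rVert} \]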
ω̂ denotes the skew-symmetric matrix spanned by the vector ω; that is, for a vector ω = (ω_x, ω_y, ω_z)^T, it is:
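The matrix itself is an image in the source; the standard skew-symmetric form, in the position of formula (13), is:

\[ \hat{\omega} = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix} \]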
According to formula (6) and formulas (8)–(12), the geometric relationship between the laser points in F_e^k and their corresponding points can be obtained:
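The relation is shown only as an image; in the position of formula (14), its functional form (the point-to-line distance as a function of the point and the pose) can be written as:

\[ f_e\!\left( V_{(k,i)},\, T_k \right) = d_e, \qquad i \in F_e^{k} \]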
In the same way, according to formula (7) and formulas (8)–(12), the geometric relationship between the laser points in F_p^k and their corresponding points is obtained:
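Analogously, in the position of formula (15), the functional form of the point-to-plane relation can be written as:

\[ f_p\!\left( V_{(k,i)},\, T_k \right) = d_p, \qquad i \in F_p^{k} \]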
Taking formula (14) as an example, a Levenberg-Marquardt (L-M) optimization is performed. Each row of f_e corresponds to an edge feature point, and h_e contains the distances between the edge feature points and their corresponding edge lines. The Jacobian matrix of f_e with respect to T_k, denoted J, is computed; the L-M method then solves the optimization problem of formula (14) and yields the iterative formula for T_k:
T_k ← T_k − (J^T J + λ diag(J^T J))^{-1} J^T h_e.    (16)
where λ is a coefficient factor in the L-M method. A pose estimate T_k = [t_x, t_y, t_z, θ_x, θ_y, θ_z]^T is thus obtained. A second L-M optimization is then carried out based on the components [t_z, θ_x, θ_y] of this estimate together with formula (15); the components [t_x, t_y, θ_z] selected from T_k and the components [t_z, θ_x, θ_y] obtained from the first optimization together form the final pose estimate.
It should be noted that the precondition for performing the two L-M optimizations is that the ground changes little while the lidar is in motion. If the ground changes severely or the unmanned aerial vehicle flies at high altitude, the planar features are obtained from non-ground points whenever fewer than 7200 ground points are scanned; in that case, pose estimation does not use the two L-M optimizations, and formulas (14)–(15) are combined to obtain:
f(T_k) = h,    (17)
Each row of f corresponds to a planar feature point or an edge feature point, and h contains the distances to the corresponding planar or edge features. The Jacobian matrix of f with respect to T_k, denoted J, is computed, and the six-degree-of-freedom pose estimate is obtained through a single L-M optimization. The iteration is as follows:
T_k ← T_k − (J^T J + λ diag(J^T J))^{-1} J^T h.    (18)
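Neither iteration is given as code in the patent; a minimal Python sketch of one damped update of the form of (16)/(18) is shown below. The names residual_fn and jacobian_fn are placeholders standing in for the feature-matching stage described above, not functions from the source.

import numpy as np

def lm_step(T_k, residual_fn, jacobian_fn, lam=1e-2):
    """One Levenberg-Marquardt update of the 6-DoF pose vector T_k.

    residual_fn(T) returns the stacked point-to-line / point-to-plane
    distances h; jacobian_fn(T) returns dh/dT with shape (n, 6).
    """
    h = residual_fn(T_k)
    J = jacobian_fn(T_k)
    JTJ = J.T @ J
    # solve (J^T J + lam * diag(J^T J)) * delta = J^T h, then step downhill
    delta = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ h)
    return T_k - delta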
Pose graph optimization is then designed. Define Q_{k-1} as the projection of all point clouds scanned before time k into the inertial coordinate system; Q_{k-1} is thus the three-dimensional point cloud map built up to time k. The purpose of the back-end optimization is to match the feature points at time k, F̂_e^k and F̂_p^k, into the point cloud map Q_{k-1}, thereby obtaining a set of pose estimates with a lower frequency but higher precision than those obtained by the frame-to-frame feature matching.
The point cloud map at each moment is stored as the feature point set {F̂_e^k, F̂_p^k} obtained by each scan. Let Q_{k-1} = {{F̂_e^1, F̂_p^1}, …, {F̂_e^{k-1}, F̂_p^{k-1}}} store the collection of all feature point sets before time k. Each set in Q_{k-1} is associated with the radar pose at the time of the corresponding scan. Because the sensor pose estimate drifts less after pose graph optimization, it is assumed to be drift-free over a short time; S is defined as the size of Q_{k-1}. The feature point sets in Q_{k-1} are taken as nodes of the pose graph, and the pose estimates are taken as constraints on the nodes, i.e. the observed data on the nodes. The new set {F̂_e^k, F̂_p^k} is added to the pose graph as a new node, yielding the objective function of the pose graph optimization:
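The objective appears only as an image in the source; a standard least-squares pose-graph form consistent with the definitions in the next sentence, in the position of formula (19), is:

\[ \min_{V} \ \sum_{(i,j) \in \varepsilon} e_{ij}^{T} \, \Sigma_{ij}^{-1} \, e_{ij} \]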
where V represents the pose estimates of all pose nodes in the pose graph, ε is the set of all edges in the pose graph, e_ij is the error of the pose estimate represented by edge (i, j), and Σ_ij is the covariance matrix of the pose estimate. The optimization problem thus becomes a least-squares problem, and solving it with L-M optimization yields a more accurate pose transformation estimate.
For a new feature point set {F̂_e^k, F̂_p^k}, matching against the feature point sets of previous moments is performed using the Iterative Closest Point (ICP) method. If the matching succeeds, a loop closure has been detected; the resulting new pose estimation constraint is added to the pose graph, and optimization is then carried out with the iSAM method to update the pose estimates. If the matching fails, the unmanned aerial vehicle has not previously visited its current position, and no new constraint is added.
According to the pose estimates published at 2 Hz by the pose graph optimization, the map construction module projects each frame's local point cloud map into the inertial coordinate system, thereby obtaining the three-dimensional point cloud map; the point cloud map is likewise published at 2 Hz. The origin of the local point cloud map is the sensor position at the current moment, and the origin of the inertial coordinate system is the sensor position at the start of the algorithm. The transform integration module interpolates between the 10 Hz pose estimates published by the radar odometry and the 2 Hz pose estimates obtained from the pose graph optimization, and publishes the final pose estimate at 10 Hz. Simultaneous localization and mapping is thus achieved.
Finally, the three-dimensional point cloud map is processed with the qPCV algorithm integrated in the CloudCompare software to obtain the reconstructed three-dimensional point cloud map.
To verify the effectiveness of the environment scanning and reconstruction method based on the three-dimensional lidar, practical experiments were carried out.
1. Introduction to experimental platform
The experimental platform takes a quadrotor unmanned aerial vehicle with a 0.65 m wheelbase as its main body. A three-dimensional lidar is mounted on the UAV for data acquisition, a computing board with an x86 architecture runs the environment scanning and reconstruction algorithm, and an onboard differential GPS sensor provides UAV positioning data for comparing positioning accuracy against the localization part of the method.
2. Positioning accuracy comparison experiment
Fig. 6 shows the position curve estimated by the method and the position curve measured by the differential GPS sensor during an outdoor fixed-point experiment. The positioning accuracy of the method is 5 cm in the z direction, which benefits from the point cloud segmentation in the lidar SLAM algorithm; the positioning accuracy in the x-y direction is 20 cm.
The above analysis demonstrates the effectiveness of the positioning algorithm part of the invention.
3. Map construction and environment reconstruction experiments
Fig. 7 shows the three-dimensional point cloud map of Beiyang Square at Tianjin University obtained in an outdoor mapping experiment; the map has no obvious ghosting and the mapping effect is good. Fig. 8 shows the three-dimensional point cloud map obtained after applying the reconstruction processing to Fig. 7; objects in the reconstructed map look more stereoscopic and are easier to distinguish.
The above analysis demonstrates the effectiveness of the algorithm provided by the invention.
The foregoing description of the preferred embodiments is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.
Claims (1)
1. The environment scanning and reconstructing method based on the three-dimensional laser radar is characterized by being implemented on a four-rotor unmanned aerial vehicle provided with an onboard three-dimensional laser radar, and comprises the steps of carrying out point cloud segmentation on a real-time three-dimensional point cloud image acquired by the three-dimensional laser radar, extracting characteristic points from the point cloud image for characteristic matching, carrying out pose image optimization and loop detection to obtain three-dimensional positioning data and a three-dimensional point cloud map with higher precision, and finally carrying out environment reconstruction by using a qPCV (Portion of Visible Sky) method; the method comprises the following specific steps:
Firstly, one full rotation of the laser transmitter in the lidar is defined as a scanning period, and the point cloud data obtained within one scanning period constitutes one frame of point cloud image, recorded as the kth frame, k being a positive integer; all laser points in each frame are denoted P, and P k denotes all laser points in the kth frame, P k={pk1,pk2,…,pkn}, where p ki is one laser point in the point cloud P k at time k; P k is projected onto a two-dimensional image of resolution M × N, where M is determined by the horizontal resolution of the radar and N by its vertical resolution; after re-projection, each laser point of the three-dimensional point cloud becomes a pixel of the two-dimensional image carrying both coordinate information and depth information, the coordinate p ki=[xki,yki,zki]T representing the position of the laser point in the point cloud image at time k with the lidar center as origin, and the depth information r ki representing the Euclidean distance from p ki to the lidar center point at time k;
And then the pixel points estimated to be the ground are marked in the depth image: an efficient partitioning of the data is achieved by representing the x-y plane as a circle with radius R = ∞ and dividing it into several independent sectors, the parameter Δα describing the angle covered by each sector, thus obtaining M sectors S i; the sector to which point p ki maps is denoted sector(p ki), and is calculated as follows
where atan2(y ki, x ki) gives the angle between the positive x-axis of the plane and the point (x ki, y ki) and ensures that it lies within [0, 2π); the set of all points mapped to the same sector S s is denoted P s,
P s = {p i ∈ P | sector(p i) = S s} (2)
Next, within the same sector, the points are dispersed into a number of sets covering different distance ranges according to the distance from each point (x ki, y ki, z ki) to (0, 0, z ki), where B denotes the maximum number of such sets and each set covers a range between a minimum and a maximum distance; if the distance of a point p ki ∈ P s falls within the range covered by a set, the point falls into that set, and the collection of all points falling into the set is recorded, thereby mapping the three-dimensional points into a two-dimensional point set:
for each non-empty set, a representative point is calculated, the point with the lowest z-axis coordinate in the set being selected as the representative point; subsequently, the representative points are used for line fitting with an incremental algorithm, and whether a fitted line can represent the ground is judged from its slope; for each point, the closest line within the sector to which the point belongs is found, and if that line is judged to represent the ground and the distance from the point to the line is smaller than a threshold, the point is marked as ground, otherwise the point is left unmarked;
Then dividing and marking non-ground points in the two-dimensional depth image, wherein the scanned laser points A and B are adjacent laser points obtained by adjacent laser beams OA and OB, O represents the center point of the laser radar, a coordinate system taking O as an origin exists, the y axis of the coordinate system coincides with a longer laser beam in the OA and OB, here, assuming that the OA is longer, an angle beta is defined as an included angle between the y axis and a straight line passing through the A and B, and because the two-dimensional depth image is provided with depth information of each point, namely, the distance from each laser point to the center of the laser radar, the distance from the A and B to the origin point O is respectively defined as r A and r B, and a calculation formula of beta is obtained:
wherein alpha is the included angle between two adjacent laser beams of the laser radar, is determined by the resolution of the laser radar, and is a fixed value;
Then, point cloud segmentation is carried out according to the two-dimensional depth image obtained before; if two laser points are adjacent in the depth image and the angle β between them is larger than a threshold θ, the two laser points are regarded as originating from the same object; the calculation starts from the laser point in the upper-left corner and proceeds from left to right and from top to bottom; for each unlabeled point, a breadth-first search is performed first, marking all laser points that originate from the same object as that point with the same label; after the breadth-first search of the current point is finished, the next unlabeled point is found in sequence and the operation is repeated, thereby ensuring that the whole two-dimensional depth image is labeled in O(n) time;
The segmented data is further processed as follows: 1) If the number of laser points under a certain label exceeds 30, the label is valid; 2) If the number of laser points under a certain label is less than 30 and more than or equal to 5, counting the number of laser points in the vertical direction; 3) If the number of laser points in the vertical direction is more than 3, the label is effective; 4) The rest of the tags are marked as unusable classes, i.e. the tags are not valid;
After this treatment, each laser spot has three pieces of information: 1) The label value after segmentation is denoted by L ki; 2) A row number and a column number in the two-dimensional depth image; 3) Depth information r ki, after which feature extraction is performed on these points;
The first step in feature extraction is to define the smoothness c of each laser point in the two-dimensional depth image
wherein the set S consists of the point p ki and 10 neighbouring points on the same row, 5 on each side, so |S| = 10; a threshold c th is set, points with c > c th being edge feature points and points with c < c th being planar feature points; in order to extract features in all directions, the whole two-dimensional depth image is divided in the horizontal direction into a plurality of sub-images; for each sub-image, the smoothness of every point on each row is calculated and sorted; then, from the non-ground points of each row, the points with the largest c values are selected as edge feature points, forming an edge feature point set; from the ground points or the points with valid labels of each row, the points with the smallest c values are selected as planar feature points, forming a planar feature point set; to facilitate subsequent feature matching, the edge feature points with the largest c values are selected from the former set to form a smaller edge feature point set F e, and the planar feature points with the smallest c values that belong to the ground points are selected from the latter set to form a smaller planar feature point set F p, F e and F p obviously being subsets of the respective larger sets;
Then, the pose change between two successive scan frames is estimated using the extracted features; let the current time be k, the smaller edge and planar feature point sets at the current time being F_e^k and F_p^k and the larger edge and planar feature point sets of the previous time being the corresponding larger sets of time k-1; let t k-1 denote the start time of the (k-1)th scan; at the end of each scan, the point cloud P k-1 received during the scan is re-projected to the time stamp t k, giving the re-projected point cloud; during the kth scan, the re-projected point cloud and P k are used together to estimate the motion of the radar; because the larger feature point sets of time k-1 are subsets of P k-1, they are likewise projected to the time t k; next, the points corresponding to F_e^k and F_p^k are found in these projected sets;
for an edge feature point i in the set F_e^k, the edge feature point j with the same label as i and closest to it is found in the projected larger edge feature point set of time k-1; since the feature corresponding to an edge feature point is a straight line and two points determine a line, the edge feature point l with the same label as j and closest to it is also found, thereby constructing a point-to-line geometric constraint; denoting the coordinates of points i, j and l respectively, the pose transformation is thereby converted into a point-to-line distance calculation
An optimization equation of the edge feature points is constructed;
for a planar feature point i in the set F_p^k, the planar feature point j with the same label as i and closest to it is found in the projected larger planar feature point set of time k-1; since the feature corresponding to a planar feature point is a plane and three points determine a plane, the two planar feature points l and m with the same label as j and closest to it are also found, thereby constructing a point-to-plane geometric constraint; denoting the coordinates of points i, j, l and m respectively, the pose transformation is thereby converted into a point-to-plane distance calculation
The optimization equation of the plane feature point is established, the algorithm for finding the nearest point uses a nearest neighbor search method of a K-D Tree (K-Dimensional Tree), and motion estimation is performed as follows:
In the scanning process, the angular velocity and the linear velocity of the radar are assumed to be constant; let t be the current time and t k the time at which the kth scan starts; let T k be the radar pose transformation over [t k, t], T k comprising the rigid-body motion of the radar in the six-degree-of-freedom space, T k=[tx,ty,tz,θx,θy,θz]T; for a laser point p ki ∈ P k, let t i be its timestamp and T (k,i) the radar pose transformation over [t k, t i]; then T (k,i) is obtained by linear interpolation:
to solve the motion estimation problem of the radar, the geometric relationships between F_e^k and the projected larger edge feature point set, and between F_p^k and the projected larger planar feature point set, must be established separately; according to formula (8), one obtains
where V (k,i) denotes the coordinates of a laser point i in F_e^k or F_p^k, the point coordinates corresponding to it being denoted accordingly, T (k,i) (1:3) being the first to third components of T (k,i), i.e. (t x, t y, t z), and R representing a rotation matrix defined by the Rodrigues formula
Wherein θ is the rotation angle:
θ=||T(k,i)(4:6)|| (11)
the unit vector representing the rotation direction is:
the skew-symmetric matrix spanned by the rotation-direction vector is denoted by the hat operator; that is, for that vector one has:
according to formula (6) and formulas (8)–(12), the geometric relationship between the laser points in F_e^k and their corresponding points can be obtained:
in the same way, the geometric relationship between the laser points in F_p^k and their corresponding points is obtained:
taking formula (14) as an example, a Levenberg-Marquardt (L-M) optimization is performed, each row of f e representing an edge feature point and h e comprising the distances between the edge feature points and the corresponding edge lines; the Jacobian matrix of f e with respect to T k, denoted J, is calculated, and the L-M method is then used to solve the optimization problem of formula (14), giving the iterative formula for T k:
T k ← T k − (J^T J + λ diag(J^T J))^{-1} J^T h e (16)
wherein λ is a coefficient factor in the L-M method, whereby the pose estimate T k=[tx,ty,tz,θx,θy,θz]T is obtained; a second L-M optimization is performed based on the components [t z, θ x, θ y] of this estimate together with formula (15), and the components [t x, t y, θ z] selected from T k together with the components [t z, θ x, θ y] obtained by the first optimization form the final pose estimate;
the precondition for carrying out the two L-M optimizations is that the ground changes little while the lidar is in motion; if the ground changes severely or the unmanned aerial vehicle flies at high altitude, the planar features are obtained from non-ground points when fewer than 7200 ground points are scanned, and the pose estimation then no longer uses the two L-M optimizations but combines formulas (14)-(15) to obtain:
f(Tk)=h (17)
each row of f represents a planar feature point or an edge feature point, h comprises the distances to the corresponding planar or edge features, and the Jacobian matrix of f with respect to T k, denoted J, is computed; the six-degree-of-freedom pose estimate is obtained through a single L-M optimization, with the iteration formula:
Tk←Tk-(JTJ+λdiag(JTJ))-1JTh (18)
Then, pose graph optimization is designed; the projection of all point clouds scanned before time k into the inertial coordinate system is defined as Q k-1, Q k-1 being the three-dimensional point cloud map built up to time k; the aim of the back-end optimization is to match the feature points at time k into the point cloud map Q k-1, thereby obtaining a group of pose estimation results;
the point cloud map at each moment is stored as the feature point set obtained by each scan; Q k-1 is used to store the collection of all feature point sets before time k, each set in Q k-1 being associated with the radar pose at the time of the scan; since the sensor pose estimate drifts less after pose graph optimization, it is assumed to be drift-free over a short time, and S is defined as the size of Q k-1; the feature point sets in Q k-1 are taken as nodes in the pose graph, with the pose estimates as constraints on the nodes, i.e. the observed data on the nodes; the new feature point set is added to the pose graph as a new node, giving the objective function of the pose graph optimization
wherein V represents the pose estimates of all pose nodes in the pose graph, ε is the set of all edges in the pose graph, e ij represents the error of the pose estimate represented by edge (i, j), and Σ ij represents the covariance matrix of the pose estimate, so that the optimization problem becomes a least-squares problem, which is solved with L-M optimization to obtain a more accurate pose transformation estimate;
for a new feature point set, matching against the feature point sets of previous moments is performed using the Iterative Closest Point (ICP) method; if the matching succeeds, a loop closure has been detected, so a new pose estimation constraint is obtained and added to the pose graph, and optimization is then carried out with the iSAM method to update the pose estimation; if the matching fails, the unmanned aerial vehicle has not previously visited its current position and no new constraint is added;
according to the pose estimates published at 2 Hz by the pose graph optimization, the map construction module projects each frame's local point cloud map into the inertial coordinate system, thereby obtaining the three-dimensional point cloud map, which is likewise published at 2 Hz; the origin of the local point cloud map is the sensor position at the current moment, and the origin of the inertial coordinate system is the sensor position at the start of the algorithm; the transform integration module interpolates between the 10 Hz pose estimates published by the radar odometry and the 2 Hz pose estimates obtained from the pose graph optimization, and publishes the final pose estimate at 10 Hz; simultaneous localization and mapping is thereby achieved;
And finally, processing the three-dimensional point cloud map by utilizing qPCV algorithm integrated in CloudCompare software to obtain a reconstructed three-dimensional point cloud map.