
CN109308737A - A mobile robot V-SLAM method based on a three-stage point cloud registration method - Google Patents

A mobile robot V-SLAM method based on a three-stage point cloud registration method Download PDF

Info

Publication number
CN109308737A
CN109308737A
Authority
CN
China
Prior art keywords
point
stage
point cloud
rgb
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810757384.XA
Other languages
Chinese (zh)
Inventor
胡章芳
漆保凌
罗元
张毅
方继康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201810757384.XA priority Critical patent/CN109308737A/en
Publication of CN109308737A publication Critical patent/CN109308737A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention claims a mobile robot V-SLAM method based on a three-stage point cloud registration method, comprising the following steps: S1, acquire the RGB information and Depth information of the surrounding environment; S2, generate three-dimensional point cloud data; S3, extract ORB image features from the acquired RGB images and match point-set elements using FLANN; S4, screen the point pairs of the RGB images with a RANSAC sampling strategy to obtain inliers, completing the preprocessing stage; S5, complete the coarse registration stage of the point clouds using a dual distance threshold method for corresponding points based on rigid-transform consistency; S6, given a good initial pose, introduce an ICP fine registration method with a dynamic iteration angle factor to complete the fine registration stage; S7, in the back end, introduce a key-frame screening mechanism combining a sliding window with random sampling, and optimize the robot pose trajectory with the g2o algorithm to realize three-dimensional point cloud environment reconstruction. The present invention improves the registration accuracy and efficiency of the point cloud map in three-dimensional environment reconstruction.

Description

A mobile robot V-SLAM method based on a three-stage point cloud registration method
Technical field
The invention belongs to the field of mobile robot V-SLAM, and in particular to a method for point cloud map reconstruction.
Background technique
In recent years, with the rapid development of mobile robot intelligence and computer vision processing technology, mobile robots have been widely used in industries such as home life, catering, virtual reality and autonomous navigation. Visual simultaneous localization and mapping (V-SLAM) means that a mobile robot, while travelling, reconstructs the surrounding environment and estimates its own position through its on-board visual sensors. Because visual sensors are low in cost and high in precision, they are increasingly used on robots; in particular, the Kinect depth camera, with its rich environmental information and fast update rate, has been applied successfully in mobile robot V-SLAM systems.
V-SLAM is generally divided into a visual front end and an optimization back end. The feature-point-based front end has long been the mainstream approach to visual odometry (VO); its task is to estimate the camera motion between adjacent images and to provide a good initial value for the back-end optimization. The back end then handles the optimization of the map model: starting from the camera poses measured by the front-end visual odometry, it performs optimization to obtain a globally consistent trajectory and map. The inter-frame registration part of visual odometry suffers from large registration errors and low efficiency, which prevents robots from achieving high-precision mapping and localization.
Summary of the invention
The present invention aims to solve the problems of existing point cloud registration algorithms, which easily fall into local optima and have large matching errors. It proposes a V-SLAM method with a three-stage point cloud registration algorithm that is more accurate and can better estimate the robot pose. The technical scheme of the present invention is as follows:
A mobile robot V-SLAM method based on a three-stage point cloud registration method, comprising the following steps:
S1, acquire an RGB colour image and a Depth map of the surrounding environment with a Kinect camera;
S2, obtain the intrinsic matrix of the Kinect to convert the RGB image and depth map into three-dimensional space points, and generate three-dimensional point cloud data from the pixel coordinates of the RGB image obtained in step S1 combined with the Depth data of the 2D image points;
S3, extract ORB image features from the RGB image of step S1, and match the point-set elements using FLANN (fast approximate nearest neighbours) to obtain RGB image feature point pairs;
S4, on the basis of the RGB image feature point pairs obtained in step S3, reject mismatches with the RANSAC (random sample consensus) algorithm to obtain inliers, completing the preprocessing stage;
S5, complete the coarse registration stage of the point clouds using a dual distance threshold method for corresponding points based on rigid-transform consistency;
S6, given an initial pose that satisfies the conditions, introduce an ICP (iterative closest point) fine registration method with a dynamic iteration angle factor to complete the fine registration stage;
S7, introduce a key-frame screening mechanism combining a sliding window with random sampling, and optimize the robot pose trajectory with the g2o algorithm to realize three-dimensional point cloud environment reconstruction.
Further, the step of generating three-dimensional point cloud data from the RGB image and the Depth map in step S2 is as follows:
By combining the pixel coordinates of the RGB image with the Depth data at the corresponding points, the spatial position of each point is obtained, producing the three-dimensional point cloud data. A spatial point [x, y, z] and its pixel coordinate [u, v, d] in the image (u, v are the pixel coordinates of the 2D image point, d is the Depth value) satisfy the following correspondence:
z = d/s
x = (u − cx)·z/fx
y = (v − cy)·z/fy
where fx and fy are the focal lengths of the camera along the x and y axes, cx and cy are the coordinates of the camera's optical centre, and s is the scaling factor of the depth map, i.e. the ratio between the Depth data and the actual distance. fx, fy, cx and cy are usually collected into the intrinsic matrix C of the camera.
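The back-projection above can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the patent's implementation; the intrinsics fx, fy, cx, cy and the scale s are placeholders to be taken from the camera's intrinsic matrix C.

```python
import numpy as np

def depth_to_cloud(depth, rgb, fx, fy, cx, cy, s=1000.0):
    """Back-project a depth map into a coloured 3-D point cloud (pinhole model).

    depth : (H, W) raw Depth values d; z = d / s
    rgb   : (H, W, 3) colour image aligned with the depth map
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / s
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0                      # d == 0 means "no depth reading"
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    cols = rgb[valid]
    return pts, cols
```

Each valid pixel yields one coloured spatial point; invalid (zero-depth) pixels are simply dropped.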
Further, in step S3 the real-time image feature ORB is extracted from the RGB information of step S1; it finds the "corner points" in the image and describes the regions around the extracted features, and bi-directional matching is performed with the FLANN algorithm.
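FLANN performs approximate nearest-neighbour search over the descriptors; the bi-directional (cross-check) matching logic it accelerates can be illustrated with a brute-force NumPy version. Euclidean distance is used here purely for simplicity; real ORB descriptors are binary and would be compared with the Hamming distance.

```python
import numpy as np

def match_bidirectional(desc_a, desc_b):
    """Cross-check nearest-neighbour matching (the logic FLANN accelerates).

    desc_a, desc_b : (Na, D) and (Nb, D) descriptor arrays.
    Returns pairs (i, j) where i's nearest neighbour in B is j
    AND j's nearest neighbour in A is i.
    """
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)          # best match in B for each row of A
    b_to_a = d.argmin(axis=0)          # best match in A for each row of B
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

Keeping only mutually nearest pairs removes many one-sided mismatches before RANSAC even runs.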
Further, step S4 rejects mismatches among the matched point pairs generated by step S3 using the RANSAC algorithm, obtaining inliers and completing the preprocessing. The specific steps of the preprocessing stage are: preprocess the feature point pairs of two adjacent RGB images; set a threshold d and reject the matched pairs exceeding the threshold to obtain qualified inliers. For the feature point sets on the source point set A and the target point set B, let d1 and d2 be the Euclidean distances from a point on A to its nearest and second-nearest points in the target point set; require d1 < αd2 with α = 0.5, and take the minimum inlier count threshold as N = 13. If this is not satisfied, the current RGB image does not meet the requirements of this stage, and preprocessing proceeds to the next pair of RGB images. The two preprocessed point clouds are denoted P and Q; the pose pi of the camera at time i and the pose pj of the camera at time j satisfy the transformation relation pj = pi·Ti^j, where
Ti^j = [R3×3, t3×1; 0, 1], R3×3 is the rotation matrix, t3×1 is the translation vector, and Ti^j is the transformation matrix.
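A minimal sketch of RANSAC-style inlier screening over 3-D correspondences, assuming (as an illustration, not the patent's exact procedure) that a rigid transform is fitted to random 3-point samples with the SVD-based Kabsch method and that pairs within a distance threshold count as inliers; the threshold and iteration count are arbitrary placeholders.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares R, t with Q ~= P @ R.T + t (Kabsch / SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_inliers(P, Q, thresh=0.05, iters=200, seed=0):
    """Keep the pairs consistent with the best rigid transform found."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_fit(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inl = err < thresh
        if inl.sum() > best.sum():
            best = inl
    return best
```

Once the inlier mask is known, a final least-squares fit over all inliers refines the pose estimate.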
Further, step S5 completes the coarse registration stage of the point clouds using the dual distance threshold method for corresponding points based on rigid-transform consistency, which specifically includes:
The topological structure of any point in a point cloud and its neighbours remains unchanged under a rigid transform (rigid-transform consistency). Therefore, for any point pi in point cloud P and its neighbour pi′, the corresponding points should also be neighbours in the matching point cloud Q. A good initial state is obtained on the preprocessed basis: if the obtained matched pairs are correct, then any two pairs (pi, qj) and (pi′, qj′) should satisfy the equal-distance condition between point pairs, i.e. the following two constraints:
where pi and pi′ denote any point in point cloud P and its neighbour, qj and qj′ denote the corresponding matched points of pi and pi′ in the matching point cloud Q, and u1 and u2 are distance thresholds. The coarse registration rejects most noise points and yields matched point clouds S and L with a good initial state.
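The rigid-transform consistency idea can be sketched as follows: a rigid motion preserves pairwise distances, so a matched pair whose nearest-neighbour distance changes between the two clouds is likely noise. The single threshold u below is a simplification of the patent's dual thresholds u1, u2, whose exact constraint formulas appear only in the original figures.

```python
import numpy as np

def rigidity_consistent(P, Q, u=0.02):
    """Flag matched pairs whose neighbour distances are preserved.

    P[i] is matched to Q[i]. A rigid transform preserves distances,
    so ||p_i - p_i'|| should equal ||q_j - q_j'|| for the matched
    neighbour pair; pairs violating this beyond u are treated as noise.
    """
    n = len(P)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # nearest neighbour of p_i within P (excluding itself)
        d = np.linalg.norm(P - P[i], axis=1)
        d[i] = np.inf
        k = d.argmin()
        if abs(d[k] - np.linalg.norm(Q[i] - Q[k])) > u:
            keep[i] = False
    return keep
```

This check needs no pose estimate at all, which is why it is usable before registration proper begins.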
Further, step S6 starts from the good initial pose provided by the preprocessing stage (S4) and the coarse registration stage (S5), and then performs ICP fine registration with the dynamic iteration angle factor. The process is: 1) coarse registration yields the initial corresponding point sets Si0 and Li0; 2) the fine registration point sets Si1 and Li1 are obtained by the dynamic angle iteration method, where si1 denotes a three-dimensional point on the point set S at the first iteration and li1 denotes a three-dimensional point on the point set L at the first iteration; 3) the rotation matrix R and translation vector t between the point sets are solved with SVD; 4) after each iteration, the dynamic angle threshold is reduced; 5) steps 2)–4) are repeated until dt < ε, where ε is the error threshold between the point clouds and dt is the average Euclidean distance between the point sets.
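Steps 1)–5) follow the classical point-to-point ICP loop. A minimal NumPy version (without the dynamic angle factor, whose weight formulas are given only in the patent figures) looks like this:

```python
import numpy as np

def icp(src, dst, eps=1e-6, max_iter=50):
    """Point-to-point ICP: iterate NN correspondence + SVD pose solve
    until the mean correspondence distance d_t drops below eps."""
    R_all, t_all = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(max_iter):
        # nearest neighbour in dst for every current source point
        idx = np.linalg.norm(cur[:, None] - dst[None, :], axis=2).argmin(1)
        tgt = dst[idx]
        cp, cq = cur.mean(0), tgt.mean(0)
        U, _, Vt = np.linalg.svd((cur - cp).T @ (tgt - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        cur = cur @ R.T + t
        R_all, t_all = R @ R_all, R @ t_all + t   # compose transforms
        if np.linalg.norm(cur - tgt, axis=1).mean() < eps:   # d_t < eps
            break
    return R_all, t_all
```

The coarse registration stage matters here: ICP only converges to the correct pose when the initial misalignment is small enough for nearest-neighbour correspondences to be mostly right.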
Further, in the dynamic angle threshold method, the deviation angle between the iterated point cloud S and the matched point cloud L decreases after every completed iteration. The sine of the angle between the direction vectors of a matched pair is selected as the match-point error measure; for a matched pair it is
sin θ = ‖ni × nj‖ / (‖ni‖·‖nj‖)
where ni and nj are the approximate normal vectors of the points si and lj respectively, and θ is the angle between the normal vectors of the two matched points. The weight of a match point is calculated by the following formula:
where E is the dynamic angle factor, and the number of candidate matched points is obtained from the weight equation by the following formula:
Further, the dynamic angle factor is reduced according to:
E = E·Ddec, Ddec ∈ (0, 1)
where Ddec is the scaling factor.
Further, step S7 introduces the key-frame screening mechanism combining a sliding window with random sampling. The method is: if there are N + m frames at the current time, remove the nearest m frames selected by the sliding-window method, leaving N frames; the probability P(i) that the i-th frame is selected is then given by the following formula:
According to the selection probability, the frames are screened; the robot pose is optimized with the g2o graph optimization algorithm to obtain the mobile robot's travel trajectory and finally generate the three-dimensional point cloud map.
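A sketch of the sliding-window plus random-sampling screening: the newest m frames are kept outright, and k of the older frames are sampled with a probability that grows with recency. The exact P(i) is given by a formula in the patent figures; a linear weight is assumed here purely for illustration.

```python
import numpy as np

def select_keyframes(n_total, m, k, seed=0):
    """Sliding-window + random-sampling key-frame screening (sketch).

    Keep the m most recent frames (the sliding window), then draw k of
    the remaining N = n_total - m frames with probability increasing
    with frame index (recent past favoured); the weight below is an
    assumed stand-in for the patent's P(i).
    """
    rng = np.random.default_rng(seed)
    window = list(range(n_total - m, n_total))   # newest m frames
    older = np.arange(n_total - m)
    w = (older + 1).astype(float)                # assumed linear weight
    picks = rng.choice(older, size=min(k, len(older)),
                       replace=False, p=w / w.sum())
    return sorted(int(i) for i in picks) + window
```

Sampling (rather than keeping every frame) bounds the size of the pose graph handed to g2o while still retaining occasional older frames for loop consistency.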
The advantages and beneficial effects of the present invention are as follows:
The present invention provides a mobile robot V-SLAM method based on a three-stage point cloud registration method, adopting a hierarchical point cloud registration approach. The RGB images are screened for point pairs with a RANSAC (random sample consensus) sampling strategy to obtain inliers, completing the preprocessing; the proposed dual distance threshold method for corresponding points completes the coarse registration stage, ensuring a good initial pose between the point clouds; on the basis of the preprocessing and coarse registration stages, an ICP fine registration method with a dynamic iteration angle factor is proposed, reducing the mismatches of inter-frame registration and improving registration efficiency. The back end introduces a key-frame screening mechanism combining the sliding-window method with random sampling, and optimizes the robot pose trajectory with the g2o (general graph optimization) algorithm. The three-stage point cloud registration of this method can realize a globally consistent V-SLAM system and can be applied to actual mobile robots to obtain more accurate three-dimensional environment maps and efficient robot running trajectories.
Detailed description of the invention
Fig. 1 is a flow diagram of the mobile robot V-SLAM method based on the three-stage point cloud registration method provided by the preferred embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the drawings in the embodiments. The described embodiments are only a part of the embodiments of the present invention.
The technical solution of the present invention for solving the above technical problems is:
As shown in Fig. 1, the present invention provides a mobile robot V-SLAM method based on a three-stage point cloud registration method, which comprises the following steps:
S1, acquire the RGB colour information and Depth information of the surrounding environment with a Kinect camera;
S2, obtain the intrinsic matrix of the Kinect to perform the rigid transformation between the RGB camera and the depth camera, and generate three-dimensional point cloud data by combining the pixel coordinates with the Depth data of each point. Using the pixel coordinates of the RGB image together with the Depth data of the point, the spatial position of each point can be obtained, producing the three-dimensional point cloud data. A spatial point [x, y, z] and its pixel coordinate [u, v, d] in the image (d is the Depth value) satisfy the following correspondence:
z = d/s
x = (u − cx)·z/fx
y = (v − cy)·z/fy
where fx and fy are the focal lengths of the camera along the x and y axes, cx and cy are the coordinates of the camera's optical centre, and s is the scaling factor of the depth map (the ratio between the Depth data and the actual distance), set to 1000. fx, fy, cx and cy are usually collected into the intrinsic matrix C of the camera.
S3, extract ORB image features from the RGB images, and match the point-set elements using FLANN (fast approximate nearest neighbours);
S4, reject the mismatches of the RGB images using the RANSAC (random sample consensus) algorithm to obtain inliers, completing the preprocessing stage. Preprocess the feature point pairs of two adjacent RGB images: set a threshold d and reject the matched pairs exceeding the threshold, keeping the qualified (truly corresponding) inliers. For the feature point sets on the source point set A and the target point set B, let d1 and d2 be the Euclidean distances from a point on A to its nearest and second-nearest points in the target point set; following engineering practice, d1 < αd2 is required here (with α = 0.5), and the minimum inlier count threshold is N = 13. If this is not satisfied, the current RGB image does not meet the requirements of this stage, and preprocessing proceeds to the next pair of RGB images. A least-squares pose estimate computed from the inliers can fine-tune the source point set A and the target point set B; the two preprocessed point clouds are denoted P and Q. The pose pi of the camera at time i and the pose pj of the camera at time j satisfy the transformation relation pj = pi·Ti^j, where Ti^j = [R3×3, t3×1; 0, 1].
S5, complete the coarse registration stage of the point clouds using the dual distance threshold method for corresponding points based on rigid-transform consistency. This method exploits the fact that the topological structure of any point in a point cloud and its neighbours remains unchanged under a rigid transform. Therefore, for any point pi in point cloud P and its neighbour pi′, the corresponding points should also be neighbours in the matching point cloud Q. A good initial state is obtained on the preprocessed basis: if the obtained matched pairs are correct, then any two pairs (pi, qj) and (pi′, qj′) should satisfy the equal-distance condition between point pairs, i.e. the following two constraints:
where pi and pi′ denote any point in point cloud P and its neighbour, qj and qj′ denote the corresponding matched points of pi and pi′ in the matching point cloud Q, and u1 and u2 are distance thresholds. The coarse registration rejects most noise points and yields matched point clouds S and L with a good initial state.
S6, introduce the ICP (iterative closest point) fine registration method with the dynamic iteration angle factor to complete the fine registration stage: 1) coarse registration yields the initial corresponding point sets Si0 and Li0; 2) the fine registration point sets Si1 and Li1 are further obtained by the dynamic angle iteration method; 3) the rotation matrix R and translation vector t between the point sets Si1 and Li1 are solved with SVD; 4) after each iteration, the dynamic angle threshold is reduced; 5) steps 2)–4) are then repeated until the following condition is met:
The dynamic iteration threshold method of the fine registration stage is as follows: the sine of the angle between the direction vectors of a matched pair is selected as the match-point error measure,
sin θ = ‖ni × nj‖ / (‖ni‖·‖nj‖)
where ni and nj are the approximate normal vectors of the points si and lj respectively, and θ is the angle between the normal vectors of the two matched points. The weight of a match point is calculated by the following formula:
where E is the dynamic angle factor, and the number of candidate matched points is obtained from the weight equation by the following formula:
S7, introduce the key-frame screening mechanism combining a sliding window with random sampling. Considering that a robot is unlikely to return to its current position within a short period, frames from the more distant past are less likely to be selected and frames from the recent past are more likely. If there are N + m frames at the current time, remove the nearest m frames selected by the sliding-window method, leaving N frames; the probability P(i) that the i-th frame is selected is then given by the following formula:
According to the selection probability, the frames are screened. The robot pose is optimized with the g2o graph optimization algorithm to obtain the mobile robot's travel trajectory and finally generate the three-dimensional point cloud map.
The above embodiments should be understood as merely illustrating the present invention rather than limiting its scope. After reading the content recorded in the present invention, a person skilled in the art can make various changes or modifications to the present invention, and these equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (9)

1. A mobile robot V-SLAM method based on a three-stage point cloud registration method, characterized by comprising the following steps:
S1, acquire an RGB colour image and a Depth map of the surrounding environment with a Kinect camera;
S2, obtain the intrinsic matrix of the Kinect to convert the RGB image and depth map into three-dimensional space points, and generate three-dimensional point cloud data from the pixel coordinates of the RGB image obtained in step S1 combined with the Depth data of the 2D image points;
S3, extract ORB image features from the RGB image of step S1, and match the point-set elements using FLANN (fast approximate nearest neighbours) to obtain RGB image feature point pairs;
S4, on the basis of the RGB image feature point pairs obtained in step S3, reject mismatches with the RANSAC (random sample consensus) algorithm to obtain inliers, completing the preprocessing stage;
S5, complete the coarse registration stage of the point clouds using a dual distance threshold method for corresponding points based on rigid-transform consistency;
S6, given an initial pose that satisfies the conditions, introduce an ICP (iterative closest point) fine registration method with a dynamic iteration angle factor to complete the fine registration stage;
S7, introduce a key-frame screening mechanism combining a sliding window with random sampling, and optimize the robot pose trajectory with the g2o algorithm to realize three-dimensional point cloud environment reconstruction.
2. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 1, characterized in that the step of generating three-dimensional point cloud data from the RGB image and the Depth map in step S2 is as follows:
By combining the pixel coordinates of the RGB image with the Depth data at the corresponding points, the spatial position of each point is obtained, producing the three-dimensional point cloud data. A spatial point [x, y, z] and its pixel coordinate [u, v, d] in the image (u, v are the pixel coordinates of the 2D image point, d is the Depth value) satisfy the following correspondence:
z = d/s
x = (u − cx)·z/fx
y = (v − cy)·z/fy
where fx and fy are the focal lengths of the camera along the x and y axes, cx and cy are the coordinates of the camera's optical centre, and s is the scaling factor of the depth map, i.e. the ratio between the Depth data and the actual distance; fx, fy, cx and cy are usually collected into the intrinsic matrix C of the camera.
3. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 1, characterized in that in step S3 the real-time image feature ORB is extracted from the RGB information of step S1; it finds the "corner points" in the image and describes the regions around the extracted features, and bi-directional matching is performed with the FLANN algorithm.
4. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 3, characterized in that step S4 rejects mismatches among the matched point pairs generated by step S3 using the RANSAC algorithm, obtaining inliers and completing the preprocessing. The specific steps of the preprocessing stage include: preprocess the feature point pairs of two adjacent RGB images; set a threshold d and reject the matched pairs exceeding the threshold to obtain qualified inliers. For the feature point sets on the source point set A and the target point set B, let d1 and d2 be the Euclidean distances from a point on A to its nearest and second-nearest points in the target point set; require d1 < αd2 with α = 0.5, and take the minimum inlier count threshold as N = 13. If this is not satisfied, the current RGB image does not meet the requirements of this stage, and preprocessing proceeds to the next pair of RGB images. The two preprocessed point clouds are denoted P and Q; the pose pi of the camera at time i and the pose pj of the camera at time j satisfy the transformation relation pj = pi·Ti^j, where
Ti^j = [R3×3, t3×1; 0, 1], R3×3 is the rotation matrix, t3×1 is the translation vector, and Ti^j is the transformation matrix.
5. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 4, characterized in that step S5 completes the coarse registration stage of the point clouds using the dual distance threshold method for corresponding points based on rigid-transform consistency, which specifically includes:
The topological structure of any point in a point cloud and its neighbours remains unchanged under a rigid transform (rigid-transform consistency). Therefore, for any point pi in point cloud P and its neighbour pi′, the corresponding points should also be neighbours in the matching point cloud Q. A good initial state is obtained on the preprocessed basis: if the obtained matched pairs are correct, then any two pairs (pi, qj) and (pi′, qj′) should satisfy the equal-distance condition between point pairs, i.e. the following two constraints:
where pi and pi′ denote any point in point cloud P and its neighbour, qj and qj′ denote the corresponding matched points of pi and pi′ in the matching point cloud Q, and u1 and u2 are distance thresholds. The coarse registration rejects most noise points and yields matched point clouds S and L with a good initial state.
6. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 5, characterized in that step S6 starts from the good initial pose provided by the preprocessing stage (S4) and the coarse registration stage (S5), and then performs ICP fine registration with the dynamic iteration angle factor. The process is: 1) coarse registration yields the initial corresponding point sets Si0 and Li0; 2) the fine registration point sets Si1 and Li1 are obtained by the dynamic angle iteration method, where si1 denotes a three-dimensional point on the point set S at the first iteration and li1 denotes a three-dimensional point on the point set L at the first iteration; 3) the rotation matrix R and translation vector t between the point sets are solved with SVD; 4) after each iteration, the dynamic angle threshold is reduced; 5) steps 2)–4) are repeated until dt < ε, where ε is the error threshold between the point clouds and dt is the average Euclidean distance between the point sets.
7. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 6, characterized in that in the dynamic angle threshold method the deviation angle between the iterated point cloud S and the matched point cloud L decreases after every completed iteration; the sine of the angle between the direction vectors of a matched pair is selected as the match-point error measure,
sin θ = ‖ni × nj‖ / (‖ni‖·‖nj‖)
where ni and nj are the approximate normal vectors of the points si and lj respectively, and θ is the angle between the normal vectors of the two matched points. The weight of a match point is calculated by the following formula:
where E is the dynamic angle factor, and the number of candidate matched points is obtained from the weight equation by the following formula:
8. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 7, characterized in that the dynamic angle factor is reduced according to:
E = E·Ddec, Ddec ∈ (0, 1)
where Ddec is the scaling factor.
9. The mobile robot V-SLAM method based on a three-stage point cloud registration method according to claim 7, characterized in that step S7 introduces the key-frame screening mechanism combining a sliding window with random sampling. The method is: if there are N + m frames at the current time, remove the nearest m frames selected by the sliding-window method, leaving N frames; the probability P(i) that the i-th frame is selected is then given by the following formula:
According to the selection probability, the frames are screened; the robot pose is optimized with the g2o graph optimization algorithm to obtain the mobile robot's travel trajectory and finally generate the three-dimensional point cloud map.
CN201810757384.XA 2018-07-11 2018-07-11 A kind of mobile robot V-SLAM method of three stage point cloud registration methods Pending CN109308737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810757384.XA CN109308737A (en) 2018-07-11 2018-07-11 A kind of mobile robot V-SLAM method of three stage point cloud registration methods

Publications (1)

Publication Number Publication Date
CN109308737A true CN109308737A (en) 2019-02-05

Family

ID=65225904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810757384.XA Pending CN109308737A (en) 2018-07-11 2018-07-11 A kind of mobile robot V-SLAM method of three stage point cloud registration methods

Country Status (1)

Country Link
CN (1) CN109308737A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
US20160100034A1 (en) * 2011-10-28 2016-04-07 Magic Leap, Inc. System and method for augmented and virtual reality
CN107590827A (en) * 2017-09-15 2018-01-16 重庆邮电大学 A kind of indoor mobile robot vision SLAM methods based on Kinect
EP3333538A1 (en) * 2016-12-07 2018-06-13 Hexagon Technology Center GmbH Scanner vis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LÜ, XIANWEI: "Research on SLAM Algorithms Based on RGB-D Data", China Master's Theses Full-text Database, Information Science and Technology Series *
ZHANG, YI ET AL.: "Three-dimensional visual SLAM for mobile robots in indoor environments", CAAI Transactions on Intelligent Systems *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903326A (en) * 2019-02-28 2019-06-18 北京百度网讯科技有限公司 Method and apparatus for determining the rotation angle of engineering mechanical device
KR20200105381A (en) * 2019-02-28 2020-09-07 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Method and apparatus for determining rotation angle of engineering mechanical device
KR102315535B1 (en) * 2019-02-28 2021-10-20 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Method and apparatus for determining rotation angle of engineering mechanical device
US11182928B2 (en) 2019-02-28 2021-11-23 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for determining rotation angle of engineering mechanical device
CN109859256B (en) * 2019-03-13 2023-03-31 大连理工大学 Three-dimensional point cloud registration method based on automatic corresponding point matching
CN109859256A (en) * 2019-03-13 2019-06-07 大连理工大学 A kind of three-dimensional point cloud method for registering based on automatic corresponding point matching
CN110148164A (en) * 2019-05-29 2019-08-20 北京百度网讯科技有限公司 Transition matrix generation method and device, server and computer-readable medium
CN110287873A (en) * 2019-06-25 2019-09-27 清华大学深圳研究生院 Noncooperative target pose measuring method, system and terminal device based on deep neural network
CN110243390B (en) * 2019-07-10 2021-07-27 北京华捷艾米科技有限公司 Pose determination method and device and odometer
CN110243390A (en) * 2019-07-10 2019-09-17 北京华捷艾米科技有限公司 The determination method, apparatus and odometer of pose
CN112154303B (en) * 2019-07-29 2024-04-05 深圳市大疆创新科技有限公司 High-precision map positioning method, system, platform and computer readable storage medium
CN112154303A (en) * 2019-07-29 2020-12-29 深圳市大疆创新科技有限公司 High-precision map positioning method, system, platform and computer readable storage medium
CN110554405A (en) * 2019-08-27 2019-12-10 华中科技大学 normal scanning registration method and system based on cluster combination
CN110554405B (en) * 2019-08-27 2021-07-30 华中科技大学 Normal scanning registration method and system based on cluster combination
CN110471422A (en) * 2019-08-29 2019-11-19 南京理工大学 The detection of obstacles and automatic obstacle avoiding method of intelligent wheel chair
CN112241010A (en) * 2019-09-17 2021-01-19 北京新能源汽车技术创新中心有限公司 Positioning method, positioning device, computer equipment and storage medium
CN110738695A (en) * 2019-10-12 2020-01-31 哈尔滨工业大学 image feature point mismatching and removing method based on local transformation model
CN110738695B (en) * 2019-10-12 2021-08-13 哈尔滨工业大学 Image feature point mismatching and removing method based on local transformation model
CN110866934B (en) * 2019-10-14 2022-09-09 中国科学院自动化研究所 Normative coding-based complex point cloud segmentation method and system
CN110866934A (en) * 2019-10-14 2020-03-06 中国科学院自动化研究所 Normative coding-based complex point cloud segmentation method and system
CN110852356A (en) * 2019-10-24 2020-02-28 华南农业大学 Method for extracting characteristic points of V-SLAM dynamic threshold image of mobile robot
CN110852356B (en) * 2019-10-24 2023-05-23 华南农业大学 Method for extracting V-SLAM dynamic threshold image feature points of mobile robot
CN110910452B (en) * 2019-11-26 2023-08-25 上海交通大学 Low-texture industrial part pose estimation method based on deep learning
CN110910452A (en) * 2019-11-26 2020-03-24 上海交通大学 Low-texture industrial part pose estimation method based on deep learning
CN113119099A (en) * 2019-12-30 2021-07-16 深圳富泰宏精密工业有限公司 Computer device and method for controlling mechanical arm to clamp and place object
CN111415377B (en) * 2020-02-19 2023-09-15 湖南贝靼科技有限公司 Incomplete point cloud registration method based on partial quality optimal transmission theory
CN111415377A (en) * 2020-02-19 2020-07-14 重庆邮电大学 Partial quality optimal transmission theory-based incomplete point cloud registration method
CN111612829A (en) * 2020-06-03 2020-09-01 纵目科技(上海)股份有限公司 Method, system, terminal and storage medium for constructing high-precision map
CN111612829B (en) * 2020-06-03 2024-04-09 纵目科技(上海)股份有限公司 High-precision map construction method, system, terminal and storage medium
CN111612841A (en) * 2020-06-22 2020-09-01 上海木木聚枞机器人科技有限公司 Target positioning method and device, mobile robot and readable storage medium
CN111612841B (en) * 2020-06-22 2023-07-14 上海木木聚枞机器人科技有限公司 Target positioning method and device, mobile robot and readable storage medium
CN111862343A (en) * 2020-07-17 2020-10-30 歌尔科技有限公司 Three-dimensional reconstruction method, device and equipment and computer readable storage medium
CN111862343B (en) * 2020-07-17 2024-02-02 歌尔科技有限公司 Three-dimensional reconstruction method, device, equipment and computer readable storage medium
CN112446952B (en) * 2020-11-06 2024-01-26 杭州易现先进科技有限公司 Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium
CN112446952A (en) * 2020-11-06 2021-03-05 杭州易现先进科技有限公司 Three-dimensional point cloud normal vector generation method and device, electronic equipment and storage medium
CN113284184A (en) * 2021-05-24 2021-08-20 湖南大学 Robot RGBD visual perception oriented 6D pose estimation method and system
CN113223062A (en) * 2021-06-04 2021-08-06 武汉工控仪器仪表有限公司 Point cloud registration method based on angular point feature point selection and quick descriptor
CN113223062B (en) * 2021-06-04 2024-05-07 武汉工控仪器仪表有限公司 Point cloud registration method based on corner feature point selection and quick description

Similar Documents

Publication Publication Date Title
CN109308737A (en) A kind of mobile robot V-SLAM method of three stage point cloud registration methods
CN109974707B (en) Indoor mobile robot visual navigation method based on improved point cloud matching algorithm
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN105719352B (en) Face three-dimensional point cloud super-resolution fusion method and apply its data processing equipment
CN113538218B (en) Weak pairing image style migration method based on pose self-supervision countermeasure generation network
Zhang et al. Vehicle global 6-DoF pose estimation under traffic surveillance camera
CN104751493A (en) Sparse tracking method on basis of gradient texture features
CN111582232A (en) SLAM method based on pixel-level semantic information
CN111860651A (en) Monocular vision-based semi-dense map construction method for mobile robot
Gao et al. Pose refinement with joint optimization of visual points and lines
Li et al. Sparse-to-local-dense matching for geometry-guided correspondence estimation
Zhang et al. Hybrid iteration and optimization-based three-dimensional reconstruction for space non-cooperative targets with monocular vision and sparse lidar fusion
Zhou et al. Information-efficient 3-D visual SLAM for unstructured domains
Zhang et al. Accurate real-time SLAM based on two-step registration and multimodal loop detection
Huang et al. MC-VEO: A visual-event odometry with accurate 6-DoF motion compensation
Zhang et al. Point-plane SLAM based on line-based plane segmentation approach
Zhang et al. PLI-VIO: Real-time monocular visual-inertial odometry using point and line interrelated features
Hou et al. Octree-based approach for real-time 3d indoor mapping using rgb-d video data
Sun et al. Scale‐aware camera localization in 3D LiDAR maps with a monocular visual odometry
Wang et al. ReLoc: indoor visual localization with hierarchical sitemap and view synthesis
Sun et al. Indoor Li-DAR 3D mapping algorithm with semantic-based registration and optimization
Park et al. Localization of an unmanned ground vehicle based on hybrid 3D registration of 360 degree range data and DSM
Pei et al. Loop closure in 2d lidar and rgb-d slam
Hu et al. Accurate fiducial mapping for pose estimation using manifold optimization
Xiang et al. OverlapMamba: Novel Shift State Space Model for LiDAR-based Place Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190205