
CN107862735B - RGBD three-dimensional scene reconstruction method based on structural information - Google Patents

RGBD three-dimensional scene reconstruction method based on structural information

Info

Publication number
CN107862735B
CN107862735B
Authority
CN
China
Prior art keywords
plane
frame
point
points
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710865372.4A
Other languages
Chinese (zh)
Other versions
CN107862735A (en)
Inventor
齐越
王晨
衡亦舒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Research Institute Of Beijing University Of Aeronautics And Astronautics
Beihang University
Original Assignee
Qingdao Research Institute Of Beijing University Of Aeronautics And Astronautics
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Research Institute Of Beijing University Of Aeronautics And Astronautics and Beihang University
Priority to CN201710865372.4A
Publication of CN107862735A
Application granted
Publication of CN107862735B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of three-dimensional image processing, and particularly relates to an RGBD three-dimensional scene reconstruction method based on structural information, comprising the following steps: S1, detecting scene information in the ith frame and marking it in the model data corresponding to the ith frame; S2, estimating the camera pose corresponding to the (i+1)th frame, adding the scene information marked in the model data corresponding to the ith frame to the calculation; S3, merging the (i+1)th frame into the model data corresponding to the ith frame according to the camera pose corresponding to the (i+1)th frame, to obtain the model data corresponding to the (i+1)th frame; S4, detecting scene information in a projection of the model data corresponding to the (i+1)th frame and back-projecting it into that model data; and S5, repeating steps S1-S4 for each value of i = 1, 2, 3, ..., N-1, where N is the total number of frames, to complete the three-dimensional reconstruction. The method detects the geometric structure information present in the scene in real time and minimizes the registration error during camera pose estimation, thereby better realizing real-time three-dimensional reconstruction.

Description

RGBD three-dimensional scene reconstruction method based on structural information
Technical Field
The invention belongs to the technical field of three-dimensional image processing, and particularly relates to an RGBD three-dimensional scene reconstruction method based on structural information.
Background
Real-time three-dimensional reconstruction has long been a research hotspot in the modeling field; the popularization and development of depth sensors have provided very favorable preconditions for it and have greatly improved both its feasibility and its precision. In real-time modeling, pose estimation of the camera is the core problem, and the stability and reliability of camera pose estimation are key to the final modeling result. A commonly used pose estimation algorithm is ICP, but because the acquired depth is inaccurate during actual scanning, the ICP algorithm accumulates errors to varying degrees over continuous operation, which can eventually cause pose estimation to fail. Common remedies combine geometric image information with color image information, including pre-registration based on color image features, point-pair registration weighted by color information, and the use of edge structure information as a weighting standard. These methods operate on the raw data of the acquired images, computing directly at the pixel or point-cloud level without exploiting prior knowledge of the structure in the scene.
Disclosure of Invention
The invention aims to provide an RGBD three-dimensional scene reconstruction method based on structural information, which detects the geometric structure information present in a scene in real time and minimizes the registration error during camera pose estimation, thereby better realizing real-time three-dimensional reconstruction.
In order to achieve this purpose, the invention adopts the following technical scheme: an RGBD three-dimensional scene reconstruction method based on structural information, comprising:
s1, detecting scene information in the ith frame, and marking the scene information in model data corresponding to the ith frame;
s2, estimating a camera attitude corresponding to the (i + 1) th frame, and adding scene information marked in model data corresponding to the (i) th frame during calculation;
s3, according to the camera posture corresponding to the (i + 1) th frame, the (i + 1) th frame is merged into the model data corresponding to the (i) th frame to obtain the model data corresponding to the (i + 1) th frame;
s4, detecting scene information in a projection graph of the model data corresponding to the (i + 1) th frame, and back-projecting the scene information to the model data corresponding to the (i + 1) th frame;
and S5, repeating the steps S1-S4 for each value of i = 1, 2, 3, ..., N-1, where N is the total number of frames, to complete the three-dimensional reconstruction work.
Further, the scene information is plane structure information.
Further, the specific step of step S1 is:
s11, preprocessing the ith frame to obtain the coordinates and normal vectors of each point of the ith frame in a world coordinate system;
and S12, detecting the plane structure information in the ith frame, and marking the plane structure information in the model data corresponding to the ith frame.
Further, the specific steps of detecting the plane structure information in step S12 are as follows:
s121, selecting a plane point in the three-dimensional point cloud of the ith frame in a world coordinate system;
s122, setting an initial alternative plane;
s123, clustering an initial plane area;
s124, obtaining a plane in the three-dimensional point cloud, and recalculating a plane equation of the plane;
and S125, merging planes.
Further, the specific steps of step S121 are:
(1) calculating the curvature value of each point in the three-dimensional point cloud,
[Equation (4), rendered as an image in the source: the curvature value k(u, v) of the point with pixel coordinates (u, v), computed from the four neighborhood triangle normal vectors n0, n1, n2, n3]
where k(u, v) represents the curvature value of the point with pixel coordinates (u, v), n0 is the normal vector of the triangle formed by the right and upper neighborhood points of the position to be solved, and n1, n2, n3 are the normal vectors of the triangles corresponding to the other three directions;
(2) dividing the points in the three-dimensional point cloud into plane area points and non-plane area points according to the curvature values: setting a threshold, and for each point in the three-dimensional point cloud, if the curvature value of the point is greater than the threshold, considering the point a non-planar area point and excluding it from the subsequent calculation process.
Further, the specific step of step S122 is:
(1) dividing the plane area points;
(2) calculating a plane equation corresponding to each area,
C = AᵀA
wherein m is the total number of points in the region block and A is the m×4 matrix formed by all the points; the eigenvalues and eigenvectors of the matrix C are solved, the eigenvector corresponding to the minimum eigenvalue is taken as the equation parameter of the plane area corresponding to the region, and the normalized eigenvectors P = {p1, p2, ..., pi, ..., pn} are taken as the initial candidate planes.
Further, the specific steps of step S123 are:
(1) calculating the geometric relation between all the plane area points and the initial candidate plane, namely calculating the distance between each plane area point and the initial candidate plane and the included angle between the normal vector of each point and the normal vector of the initial candidate plane;
(2) setting a threshold, and regarding a plane area point, if the distance from the plane area point to the initial candidate plane and the included angle between the normal vector of the plane area point and the normal vector of the initial candidate plane are both smaller than the threshold, determining that the point belongs to the point corresponding to the initial plane area;
(3) and solving the equation parameter corresponding to the minimum measurement value to serve as the plane where the plane point is located, and further clustering out an initial plane area.
Further, the specific steps of step S124 are: calculating the area of the initial plane region obtained by clustering, setting a threshold value, regarding each initial plane region obtained by clustering, if the area of the initial plane region is larger than the threshold value, considering the plane as the plane in the three-dimensional point cloud, and otherwise, removing the plane.
Further, the specific steps of step S125 are:
(1) selecting any two planes, calculating an included angle between normal vectors of the two planes and an average value of distances from all points in one plane to the other plane, setting a threshold value, and combining the two planes if the two values are smaller than the threshold value;
(2) for all the retained initial plane regions, respectively calculating the similarity between the corresponding plane equations, and merging two regions when their plane equations are sufficiently close.
Further, the specific step of step S4 is:
s41, projecting the model data corresponding to the (i + 1) th frame to obtain a model projection diagram corresponding to the (i + 1) th frame;
s42, determining whether the unmarked points in the model projection graph corresponding to the (i + 1) th frame belong to a known plane or not;
s43, determining whether a new plane is generated or not for points which are not marked in the model projection graph corresponding to the (i + 1) th frame and do not belong to the known plane;
and S44, marking the points determined in step S42 as belonging to a known plane together with the new planes generated in step S43, and back-projecting the points into the model data corresponding to the (i + 1)th frame.
The method deeply analyzes the requirements on RGBD frames in three-dimensional reconstruction and, compared with prior techniques for three-dimensional reconstruction, has the following advantages:
(1) In consideration of the real-time requirement, a blocking strategy is adopted in the calculation of the plane structure, and the plane-marking work is parallelized, which can greatly improve the operating efficiency.
(2) The structural information of the scene is treated as favorable prior knowledge and added to the ICP point-cloud registration process; structural-information constraints are added both to the paired point-cloud search and to the energy optimization, so the registration error can be reduced as much as possible.
Drawings
FIG. 1 shows the result of structure detection of an initial frame in a scene 1 according to the present invention, wherein (a) is an original color image, (b) is a normal vector projection diagram, and (c) is the detection result;
FIG. 2 shows the result of the structure detection of the second frame in scene 1, where (a) is the original color image, (b) is the projection of the normal vector of the model, and (c) is the detection result;
FIG. 3 shows the result of the overall structure detection of the model after a period of data acquisition in scene 1 in the present invention;
FIG. 4 shows the results of modeling part of the data of scene 1 in the present invention;
FIG. 5 shows the result of the structure detection of the initial frame in scene 2 in the present invention, wherein (a) is the original color image and (b) is the detection result;
fig. 6 shows the result of structure detection after a period of time data acquisition in scene 2 in the present invention, where (a) is the original color image and (b) is the detection result;
FIG. 7 shows the results of modeling part of the data of scene 2 in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The principle of the invention is as follows: the first frame of the scene is collected, the two-dimensional data are converted into a three-dimensional point cloud using the camera's intrinsic and extrinsic parameters, the normal vector of each point is calculated from its neighborhood, the camera position of the current frame is designated as the world coordinate system, and the plane structure is detected in this initial frame.

First, the curvature value of each point is calculated to exclude non-planar structure points; all planar structure points to be processed are partitioned into blocks, and a plane parameter is estimated for each data block as a candidate plane. Blocking permits parallel operation and meets the real-time requirement. Then the geometric relation between each point and the candidate planes is calculated in turn, chiefly the distance from the point to the plane and the angle between the point normal and the plane normal; through these point-plane judgments all points in the projection map are clustered, regions with larger areas are retained as clustering results, and the plane parameters of those regions are recalculated. Because of the prior blocking, several planes corresponding to different blocks may belong to the same region of the real scene, so a region merging operation must be performed on all calculated plane regions; the premise of merging is that the plane equations of the two regions have high similarity. These steps complete the marking of the structure information of the first frame.

Subsequent frames are then acquired one by one, and the camera pose is estimated for each frame. First, point pairs matching the model projection map with the current frame point cloud are sought; during this search, for the reliability of the information, marked structure points in the model projection map serve as candidate points, and the point pair closest in geometric information is sought at the corresponding position in the current frame. Second, an energy equation is constructed: on top of the original geometric measure of point-pair distance, an energy term for structural-similarity measurement is added, minimizing the registration error as far as possible. The current frame data are fused with the model data according to the calculated camera pose to form new model data, a new model projection map is obtained under the current camera pose, and the structure identifiers of the projection map are recalculated. This calculation has two main parts: region expansion of the existing planes and detection of new planes. The method adopted is region growing, which greatly improves the operation speed. All marking results are fused into the new model data, providing reliable structural information for subsequent pose estimation and modeling.
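As an illustration only, this frame-to-model loop can be sketched as follows; every helper name here (detect_planes, estimate_pose, fuse, project, update_planes) is a hypothetical placeholder for the corresponding step described in this section, not a function of any existing library:

```python
import numpy as np

# Hypothetical sketch of the S1-S5 frame-to-model loop; all helpers are
# placeholders standing in for the steps detailed in this description.
def reconstruct(frames, K):
    """frames: list of (color, depth) pairs; K: 3x3 depth-camera intrinsics."""
    pose = np.eye(4)                                      # first camera pose = world origin
    model = fuse(None, frames[0], pose)                   # initialize the model with frame 1
    planes = detect_planes(frames[0], K)                  # S1: detect and mark plane structure
    for frame in frames[1:]:
        pose = estimate_pose(frame, model, planes, pose)  # S2: plane-constrained ICP
        model = fuse(model, frame, pose)                  # S3: merge the frame into the model
        proj = project(model, pose)                       # S4: render the model projection map
        planes = update_planes(proj, planes)              # S4: grow/detect planes, back-project marks
    return model
```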
Based on the principle, the RGBD three-dimensional scene reconstruction method based on the structural information specifically comprises the following steps:
s1, detecting scene information in the ith frame, and marking the scene information in model data corresponding to the ith frame;
s11, preprocessing the ith frame to obtain the coordinates and normal vectors of each point of the ith frame in a world coordinate system;
firstly, obtaining depth image data of an ith frame, and preprocessing the depth image data, wherein a fast double-filtering method is mainly adopted:
[Equation (1), rendered as an image in the source: a fast bilateral filter on the depth map, in which each filtered depth is a normalized weighted sum over its neighborhood, with a spatial weight in the pixel distance between pi and pj controlled by δ1 and a range weight in the depth difference between Ii and Ij controlled by δ2]
where pi, pj denote pixel positions, Ii, Ij the corresponding depth values, and δ1, δ2 the spatial and range smoothing parameters.
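Since equation (1) is only available as an image, the following plain-numpy depth bilateral filter is an assumed reading of it: δ1 (here sigma_s) weights pixel distance and δ2 (here sigma_r) weights depth difference; the parameter values are illustrative, not taken from the patent:

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=3.0, sigma_r=0.05):
    """Edge-preserving smoothing of a depth map (meters); depth == 0 marks invalid pixels."""
    out = np.zeros_like(depth)
    wsum = np.zeros_like(depth)
    valid = depth > 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)   # neighbor depths (wraps at borders)
            ok = valid & np.roll(np.roll(valid, dy, axis=0), dx, axis=1)
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            w_range = np.exp(-((depth - shifted) ** 2) / (2.0 * sigma_r ** 2))
            w = np.where(ok, w_spatial * w_range, 0.0)
            out += w * np.where(ok, shifted, 0.0)
            wsum += w
    # normalize; fall back to the raw depth where no valid neighbor contributed
    return np.where(wsum > 0, out / np.maximum(wsum, 1e-12), depth)
```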
Then, converting the two-dimensional data into a three-dimensional point cloud according to the internal parameters of the depth camera:
v(u, v) = K⁻¹ · (u, v, d)ᵀ (2)
where u, v are the pixel coordinates in the filtered depth data map, d is the corresponding depth value, and K⁻¹ is the inverse of the depth camera's intrinsic matrix.
The camera position corresponding to the initial frame scene is set as the origin of the world coordinate system, and the ith frame is converted into a three-dimensional point cloud under the world coordinate system to obtain the coordinates of each point. The normal vector of each point in the three-dimensional point cloud under the current world coordinate system is then calculated in turn from the three-dimensional coordinates of the adjacent pixels:
n(u, v) = ((v(u+1, v) − v(u, v)) × (v(u, v+1) − v(u, v))) / ‖(v(u+1, v) − v(u, v)) × (v(u, v+1) − v(u, v))‖ (3)
where n(u, v) represents the normal vector of the point with pixel coordinates u, v.
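A sketch of this preprocessing, under two assumptions: equation (2) is read as the standard pinhole back-projection (the homogeneous pixel scaled by its depth), and equation (3) as the normalized cross product of the right- and down-neighbor differences:

```python
import numpy as np

def backproject(depth, K):
    """Eq. (2): map each pixel (u, v) with depth d to v(u, v) = K^{-1}(u*d, v*d, d)^T.

    The patent writes (u, v, d)^T as shorthand; scaling the homogeneous pixel
    by its depth is the standard pinhole back-projection (an assumption here).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))         # u = column, v = row index
    pix = np.stack([u * depth, v * depth, depth], axis=-1)
    return pix @ np.linalg.inv(K).T                        # (h, w, 3) point map

def normal_map(points):
    """Eq. (3): per-pixel unit normal from the right- and down-neighbor differences."""
    du = np.roll(points, -1, axis=1) - points              # v(u+1, v) - v(u, v)
    dv = np.roll(points, -1, axis=0) - points              # v(u, v+1) - v(u, v)
    n = np.cross(du, dv)
    return n / np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-12)
```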
S12, detecting the plane structure information in the ith frame, and marking the plane structure information in model data corresponding to the ith frame;
the specific steps for detecting the plane structure information in the ith frame are as follows:
s121, selecting a plane point in the three-dimensional point cloud of the ith frame in a world coordinate system;
calculating the curvature value of each point in the three-dimensional point cloud,
[Equation (4), rendered as an image in the source: the curvature value k(u, v) of the point with pixel coordinates (u, v), computed from the four neighborhood triangle normal vectors n0, n1, n2, n3]
where k(u, v) represents the curvature value of the point with pixel coordinates (u, v), n0 is the normal vector of the triangle formed by the right and upper neighborhood points of the position to be solved, and n1, n2, n3 are the normal vectors of the triangles corresponding to the other three directions. The points in the three-dimensional point cloud are divided into plane area points and non-plane area points according to the curvature values: a threshold is set, and each point whose curvature value exceeds the threshold is considered a non-planar area point and excluded from the subsequent calculation. In this embodiment, the threshold is set to 0.01: when k(u, v) is greater than 0.01, the point (u, v) is considered a non-planar point; otherwise it is a planar point.
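Equation (4) is only available as an image; the sketch below therefore uses an assumed curvature proxy, namely how far the four neighborhood triangle normals n0-n3 deviate from one another (zero on a perfect plane), with the embodiment's 0.01 threshold:

```python
import numpy as np

def planar_mask(points, thresh=0.01):
    """Split the (h, w, 3) point map into planar / non-planar pixels (S121).

    Assumed proxy for Eq. (4): k = 1 - mean cosine between the normal n0 of
    the right/upper triangle and the normals n1..n3 of the other quadrants.
    """
    r = np.roll(points, -1, axis=1) - points   # vector to the right neighbor
    u = np.roll(points,  1, axis=0) - points   # vector to the upper neighbor
    l = np.roll(points,  1, axis=1) - points   # vector to the left neighbor
    d = np.roll(points, -1, axis=0) - points   # vector to the lower neighbor
    unit = lambda n: n / np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-12)
    n0 = unit(np.cross(r, u))                  # triangle: right + upper neighbors
    n1 = unit(np.cross(u, l))                  # remaining three quadrants
    n2 = unit(np.cross(l, d))
    n3 = unit(np.cross(d, r))
    k = 1.0 - (np.einsum('...i,...i', n0, n1) +
               np.einsum('...i,...i', n0, n2) +
               np.einsum('...i,...i', n0, n3)) / 3.0
    return k <= thresh                         # True where the point counts as planar
```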
S122, setting an initial alternative plane;
the specific operation steps of step S122 are: firstly, the plane area points are partitioned, in the embodiment, the partitioning is carried out according to the area size of 30 × 30, the plane equation corresponding to each area is calculated,
C = AᵀA (5)
where m is the total number of points in the region block and A is the m×4 matrix formed by all the points. The eigenvalues and eigenvectors of the matrix C are solved, the eigenvector corresponding to the minimum eigenvalue is taken as the equation parameter of the plane corresponding to the block, and the normalized eigenvectors P = {p1, p2, ..., pi, ..., pn} are taken as the initial candidate planes.
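A sketch of this per-block fit: homogeneous coordinates are stacked into A, C = AᵀA is formed, and the eigenvector of the smallest eigenvalue of C is taken as the block's plane parameters, assuming `points` is the (h, w, 3) point map and `mask` the planar mask from S121:

```python
import numpy as np

def candidate_planes(points, mask, block=30):
    """One candidate plane (a, b, c, d) per 30x30 block of planar points (Eq. (5))."""
    h, w, _ = points.shape
    planes = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            m = mask[y:y + block, x:x + block]
            if m.sum() < 3:                                # too few points to define a plane
                continue
            pts = points[y:y + block, x:x + block][m]
            A = np.hstack([pts, np.ones((len(pts), 1))])   # m x 4 homogeneous coordinates
            C = A.T @ A
            _, vecs = np.linalg.eigh(C)                    # eigh: ascending eigenvalues
            p = vecs[:, 0]                                 # eigenvector of the smallest eigenvalue
            p = p / max(np.linalg.norm(p[:3]), 1e-12)      # unit plane normal
            planes.append(p)
    return np.asarray(planes)
```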
S123, clustering an initial plane area;
the method comprises the steps of firstly calculating the geometric relation between all plane area points and an initial candidate plane, namely calculating the distance between each point and the initial candidate plane and the included angle between the normal vector of each point and the normal vector of the initial candidate plane for each plane area point, setting a threshold value, regarding a plane area point, if the distance between each point and the initial candidate plane and the included angle between the normal vector of each point and the normal vector of the initial candidate plane are smaller than the threshold value, regarding the point as the point corresponding to the initial plane area, solving an equation parameter corresponding to the minimum metric value as the plane where the plane point is located, and further clustering the initial plane area. In this embodiment, the threshold value of the distance from the set point to the plane is (0.04 × depth value of the point), and the threshold value of the angle between the normal vector of the point and the normal vector of the plane is 15 degrees.
S124, obtaining a plane in the three-dimensional point cloud, and recalculating a plane equation of the plane;
calculating the area of the initial plane region obtained by clustering, setting a threshold, regarding each initial plane region obtained by clustering, if the area is greater than the threshold, the plane is considered to be a plane in the three-dimensional point cloud, otherwise, removing the plane, specifically, calculating a bounding box of each initial plane region, and solving the corresponding plane area, in this embodiment, setting that the initial plane region is removed when the area is less than (0.06m × 0.06 m). The plane equations for the remaining planes are then recalculated by the RANSAC algorithm.
S125, merging planes;
since in a real scene, a common planar structure such as a ground is separated by some furniture and is identified as two planar areas, the planes obtained in step S124 need to be merged. The method comprises the specific steps of selecting any two planes, calculating an included angle between normal vectors of the two planes and an average value of distances from all points in one plane to the other plane, setting a threshold value, and combining the two planes if the two values are smaller than the threshold value. In this embodiment, the threshold of the angle between the normal vectors of the two planes is 2 degrees, and the average threshold of the distance from all points on one plane to the other plane is 0.01 m.
For all the retained initial plane regions, the similarity between the corresponding plane equations is calculated pairwise, and two regions are merged when their plane equations are sufficiently close:
[Equation (6), rendered as an image in the source: a similarity measure between the plane parameters of two regions, computed over the points of one region]
where pi, pj are the plane parameters corresponding to two different regions, Ak is the matrix composed of all the points contained in region k, and ni is the number of points in region i.
And finally, marking the obtained plane structure information in model data corresponding to the ith frame.
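A sketch of the pairwise merge test with the embodiment's thresholds (2 degrees between normals, 0.01 m mean distance); since equation (6) is only available as an image, using the mean point-to-plane distance as the similarity measure is an assumption:

```python
import numpy as np

def merge_labels(planes, regions, angle_deg=2.0, dist_tol=0.01):
    """Greedy merge of near-coincident plane regions (S125).

    planes: (k, 4) normalized plane parameters; regions: list of (n_i, 3)
    point arrays, one per plane. Returns a group label per plane.
    """
    label = list(range(len(planes)))
    cos_min = np.cos(np.radians(angle_deg))
    for i in range(len(planes)):
        for j in range(i + 1, len(planes)):
            cosang = abs(planes[i][:3] @ planes[j][:3])
            hom = np.hstack([regions[i], np.ones((len(regions[i]), 1))])
            mean_d = np.abs(hom @ planes[j]).mean()        # mean distance of region i to plane j
            if cosang > cos_min and mean_d < dist_tol:
                label[j] = label[i]                        # j joins i's group
    return label
```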
S2, estimating a camera attitude corresponding to the (i + 1) th frame, and adding scene information marked in model data corresponding to the (i) th frame during calculation;
s21, calculating three-dimensional point cloud under a world coordinate system corresponding to the (i + 1) th frame, and recording the three-dimensional point cloud
as V_{i+1}, and transforming it to the model coordinate system corresponding to the ith frame, recording it as V_{i+1→i}, where V_{i+1→i} = T_{i+1→i} · V_{i+1} and T_{i+1→i} is the camera pose matrix to be solved.
S22, obtaining projection point cloud of model data corresponding to the ith frame and recording the projection point cloud
as V̂_i.
S23, calculating the matching point pairs between V_{i+1→i} and V̂_i:
[Equation (7), rendered as an image in the source: a per-pair matching cost combining a point-distance term E_d, a normal-angle term E_n, and a point-to-plane term E_p]
where E_d represents the distance measure between the point pair, E_n represents the measure of the angle between the normals of the point pair, and E_p, formed from the point and the parameter equation of the plane on which it is marked, represents the distance measure from the point to that plane. The cost is calculated in a 3 × 3 neighborhood of V̂_i, and the point with the minimum result is taken as the matching point.
And S24, adding the constraint of the plane structure information when constructing the energy equation, to obtain the camera pose corresponding to the (i + 1)th frame:
[Equation (9), rendered as an image in the source: the registration energy, which extends the point-pair distance terms with a term built from the parameter equations of the marked planes]
The above equation is iterated by a least-squares optimization method until convergence, yielding the camera pose T_{i+1→i}.
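Equations (7)-(9) are only available as images, so the following sketch merely illustrates the spirit of S23-S24: a point-to-plane ICP residual augmented, for pairs whose model point carries a plane mark, with a term penalizing deviation from the marked plane; the equal weighting w_plane = 1 is an assumption:

```python
import numpy as np

def registration_energy(src, dst, dst_normals, plane_params, plane_label, T, w_plane=1.0):
    """Plane-constrained registration energy in the spirit of Eq. (9).

    src: (n, 3) points of frame i+1; dst, dst_normals: their matched model
    points/normals; plane_params: (k, 4) marked plane equations; plane_label:
    per-pair plane index array, or -1 for unmarked pairs; T: 4x4 candidate pose.
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])
    moved = (src_h @ T.T)[:, :3]                                      # transformed source points
    e_icp = np.einsum('ij,ij->i', dst_normals, moved - dst) ** 2      # point-to-plane ICP term
    e_struct = np.zeros(len(src))
    marked = plane_label >= 0
    if marked.any():
        p = plane_params[plane_label[marked]]                         # plane of the matched model point
        moved_h = np.hstack([moved[marked], np.ones((int(marked.sum()), 1))])
        e_struct[marked] = np.einsum('ij,ij->i', moved_h, p) ** 2     # structure-consistency term
    return float((e_icp + w_plane * e_struct).sum())
```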
S3, according to the camera posture corresponding to the (i + 1) th frame, the (i + 1) th frame is merged into the model data corresponding to the (i) th frame to obtain the model data corresponding to the (i + 1) th frame;
s4, detecting scene information in a projection graph of the model data corresponding to the (i + 1) th frame, and back-projecting the scene information to the model data corresponding to the (i + 1) th frame;
s41, projecting the model data corresponding to the (i + 1) th frame to obtain a model projection diagram corresponding to the (i + 1) th frame; there are already some points in the projection that were marked in the previous calculation.
S42, determining whether the unmarked points in the model projection graph corresponding to the (i + 1) th frame belong to a known plane or not;
[Equation, rendered as an image in the source: for each unmarked point, a test against every known plane comparing the point-to-plane distance and the angle between the point normal and the plane normal with thresholds]
where v and n are the position and normal vector of the point to be examined, P is the set of all known plane parameters, and d_v is the depth value of the point v.
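Since the membership test above is only available as an image, the following sketch is an assumed reading of it, reusing the clustering thresholds of S123 (0.04 × depth for the distance, 15 degrees for the normal angle):

```python
import numpy as np

def known_plane_index(v, n, d_v, planes, dist_scale=0.04, angle_deg=15.0):
    """Return the index of the known plane an unmarked point belongs to, or -1 (S42).

    v, n: position and normal of the point; d_v: its depth value;
    planes: (k, 4) known plane parameters with unit normals.
    """
    hom = np.append(v, 1.0)
    dist = np.abs(planes @ hom)                       # distance to every known plane
    cosang = np.abs(planes[:, :3] @ n)                # normal agreement with every plane
    ok = (dist < dist_scale * abs(d_v)) & (cosang > np.cos(np.radians(angle_deg)))
    return int(np.argmin(np.where(ok, dist, np.inf))) if ok.any() else -1
```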
S43, determining whether a new plane is generated or not by a region growing method for points which are not marked in the model projection diagram corresponding to the (i + 1) th frame and do not belong to the known plane;
Firstly, a seed point is selected arbitrarily, and the relation between the seed point and its four neighborhood points is judged in sequence; when the value of formula (10) is close to 0, the neighborhood point and the seed point are considered to belong to the same plane, and the neighborhood point is then used as a new seed point for continued iterative calculation until the condition is no longer met.
D = (arccos(n_neighbor · n_seed) × 180/π − 15) (10)
And after the calculation marks are completed for all the points, the points are back projected into the model data corresponding to the (i + 1) th frame, and the data of the whole model are updated.
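A minimal region-growing pass for S43 follows; per formula (10), a 4-neighbor is absorbed when the angle between its normal and the current seed normal stays under 15 degrees (reading "close to 0" as D < 0 is an assumption):

```python
import numpy as np
from collections import deque

def grow_plane(normals, unlabeled, seed, angle_deg=15.0):
    """Grow a new plane region from a seed pixel over unlabeled points (S43).

    normals: (h, w, 3) unit normal map; unlabeled: (h, w) bool mask of points
    not yet assigned to any plane; seed: (row, col) starting pixel.
    """
    h, w, _ = normals.shape
    region, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in region or not unlabeled[y, x]:
            continue
        region.add((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region and unlabeled[ny, nx]:
                cosang = np.clip(normals[ny, nx] @ normals[y, x], -1.0, 1.0)
                if np.degrees(np.arccos(cosang)) - angle_deg < 0:   # formula (10): D < 0
                    queue.append((ny, nx))
    return region
```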
And S5, repeating the steps S1-S4 for each value of i = 1, 2, 3, ..., N-1, where N is the total number of frames, to complete the three-dimensional reconstruction work.
In order to verify the effectiveness and practicability of the method, simulations were carried out on two kinds of data: an existing data set and a real scene. As the comparison of the detection results in fig. 1(c), fig. 2(c) and fig. 3 (the data set corresponding to figs. 1-4) shows, as calculation of the scene model accumulates, the data are effectively supplemented and the plane information of each region in the scene is effectively segmented, finally forming the relatively complete three-dimensional scene model of fig. 4. Figs. 5-7 correspond to the process of acquiring data and modeling in a real scene; the plane geometric information in the scene is effectively detected and identified, with the results shown in figs. 5(b) and 6(b), finally forming the relatively complete three-dimensional scene model of fig. 7.
Compared with other existing modeling methods, this method first adds the structural information of the scene to the optimization of the camera pose as prior knowledge, which reduces the registration error as much as possible; second, it adopts a more effective parallel method in the detection of the structural information, achieving higher time efficiency and more accurate positioning.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (10)

1. An RGBD three-dimensional scene reconstruction method based on structural information is characterized in that,
s1, detecting scene information in the ith frame, and marking the scene information in model data corresponding to the ith frame;
s2, estimating a camera attitude corresponding to the (i + 1) th frame, and adding scene information marked in model data corresponding to the (i) th frame during calculation;
s3, according to the camera posture corresponding to the (i + 1) th frame, the (i + 1) th frame is merged into the model data corresponding to the (i) th frame to obtain the model data corresponding to the (i + 1) th frame;
s4, detecting scene information in a projection graph of the model data corresponding to the (i + 1) th frame, and back-projecting the scene information to the model data corresponding to the (i + 1) th frame;
and S5, repeating the steps S1-S4 for each value of i = 1, 2, 3, ..., N-1, where N is the total number of frames, to complete the three-dimensional reconstruction work.
2. The RGBD three-dimensional scene reconstruction method based on the structural information as claimed in claim 1, wherein the scene information is planar structural information.
3. The RGBD three-dimensional scene reconstruction method based on structural information according to claim 2, wherein the specific steps of the step S1 are as follows:
s11, preprocessing the ith frame to obtain the coordinates and normal vectors of each point of the ith frame in a world coordinate system;
and S12, detecting the plane structure information in the ith frame, and marking the plane structure information in the model data corresponding to the ith frame.
4. The RGBD three-dimensional scene reconstruction method based on the structural information according to claim 3, wherein the specific step of detecting the plane structural information in step S12 is as follows:
s121, selecting a plane point in the three-dimensional point cloud of the ith frame in a world coordinate system;
s122, setting an initial alternative plane;
s123, clustering an initial plane area;
s124, obtaining a plane in the three-dimensional point cloud, and recalculating a plane equation of the plane;
and S125, merging planes.
5. The RGBD three-dimensional scene reconstruction method based on the structural information as claimed in claim 4, wherein the step S121 comprises the following steps:
(1) calculating the curvature value of each point in the three-dimensional point cloud,
[Equation, rendered as an image in the source: the curvature value k(u, v) of the point with pixel coordinates (u, v), computed from the four neighborhood triangle normal vectors n0, n1, n2, n3]
where k(u, v) represents the curvature value of the point with pixel coordinates (u, v), n0 is the normal vector of the triangle formed by the right and upper neighborhood points of the position to be solved, and n1, n2, n3 are the normal vectors of the triangles corresponding to the other three directions;
(2) dividing the points in the three-dimensional point cloud into plane area points and non-plane area points according to the curvature values: setting a threshold, and for each point in the three-dimensional point cloud, if the curvature value of the point is greater than the threshold, considering the point a non-planar area point and excluding it from the subsequent calculation process.
6. The RGBD three-dimensional scene reconstruction method based on the structural information as claimed in claim 5, wherein the step S122 comprises the following steps:
(1) dividing the plane area points;
(2) calculating a plane equation corresponding to each area,
C = AᵀA
wherein m is the total number of points in the region block and A is the m×4 matrix formed by all the points; solving the eigenvalues and eigenvectors of the matrix C, taking the eigenvector corresponding to the minimum eigenvalue as the equation parameter of the plane area corresponding to the region, and taking the normalized eigenvectors P = {p1, p2, ..., pi, ..., pn} as the initial candidate planes.
7. The RGBD three-dimensional scene reconstruction method based on the structural information as claimed in claim 6, wherein the step S123 comprises the following steps:
(1) calculating the geometric relation between all the plane area points and the initial candidate plane, namely calculating the distance between each plane area point and the initial candidate plane and the included angle between the normal vector of each point and the normal vector of the initial candidate plane;
(2) setting a threshold, and regarding a plane area point, if the distance from the plane area point to the initial candidate plane and the included angle between the normal vector of the plane area point and the normal vector of the initial candidate plane are both smaller than the threshold, determining that the point belongs to the point corresponding to the initial plane area;
(3) and solving the equation parameter corresponding to the minimum measurement value to serve as the plane where the plane point is located, and further clustering out an initial plane area.
8. The RGBD three-dimensional scene reconstruction method based on structural information according to claim 7, wherein the specific steps of the step S124 are as follows: calculating the area of the initial plane region obtained by clustering, setting a threshold value, regarding each initial plane region obtained by clustering, if the area of the initial plane region is larger than the threshold value, considering the plane as the plane in the three-dimensional point cloud, and otherwise, removing the plane.
9. The RGBD three-dimensional scene reconstruction method based on structural information according to claim 7, wherein the specific steps of the step S125 are as follows:
(1) selecting any two planes, calculating an included angle between normal vectors of the two planes and an average value of distances from all points in one plane to the other plane, setting a threshold value, and combining the two planes if the two values are smaller than the threshold value;
(2) for all the retained initial plane regions, respectively calculating the similarity between the corresponding plane equations, and merging two regions when their plane equations are sufficiently close.
10. The RGBD three-dimensional scene reconstruction method based on structural information according to any of claims 2-9, wherein the specific steps of step S4 are:
s41, projecting the model data corresponding to the (i + 1) th frame to obtain a model projection diagram corresponding to the (i + 1) th frame;
s42, determining whether the unmarked points in the model projection graph corresponding to the (i + 1) th frame belong to a known plane or not;
s43, determining whether a new plane is generated or not for points which are not marked in the model projection graph corresponding to the (i + 1) th frame and do not belong to the known plane;
and S44, marking the points which are determined in the step S42 and belong to the known plane and the new plane generated in the step S43, and back projecting the points to the model data corresponding to the (i + 1) th frame.
CN201710865372.4A 2017-09-22 2017-09-22 RGBD three-dimensional scene reconstruction method based on structural information Active CN107862735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710865372.4A CN107862735B (en) 2017-09-22 2017-09-22 RGBD three-dimensional scene reconstruction method based on structural information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710865372.4A CN107862735B (en) 2017-09-22 2017-09-22 RGBD three-dimensional scene reconstruction method based on structural information

Publications (2)

Publication Number Publication Date
CN107862735A CN107862735A (en) 2018-03-30
CN107862735B true CN107862735B (en) 2021-03-05

Family

ID=61698147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710865372.4A Active CN107862735B (en) 2017-09-22 2017-09-22 RGBD three-dimensional scene reconstruction method based on structural information

Country Status (1)

Country Link
CN (1) CN107862735B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242984B (en) * 2018-08-27 2020-06-16 百度在线网络技术(北京)有限公司 Virtual three-dimensional scene construction method, device and equipment
WO2020073982A1 (en) * 2018-10-11 2020-04-16 Shanghaitech University System and method for extracting planar surface from depth image
CN109544677B (en) * 2018-10-30 2020-12-25 山东大学 Indoor scene main structure reconstruction method and system based on depth image key frame
CN110276839B (en) * 2019-06-20 2023-04-25 武汉大势智慧科技有限公司 Bottom fragment removing method based on live-action three-dimensional data
CN110378349A (en) * 2019-07-16 2019-10-25 北京航空航天大学青岛研究院 The mobile terminal Android indoor scene three-dimensional reconstruction and semantic segmentation method
CN112258618B (en) * 2020-11-04 2021-05-14 中国科学院空天信息创新研究院 Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map
CN112767551B (en) * 2021-01-18 2022-08-09 贝壳找房(北京)科技有限公司 Three-dimensional model construction method and device, electronic equipment and storage medium
CN115421509B (en) * 2022-08-05 2023-05-30 北京微视威信息科技有限公司 Unmanned aerial vehicle flight shooting planning method, unmanned aerial vehicle flight shooting planning device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622775A (en) * 2012-01-12 2012-08-01 北京理工大学 Heart real-time dynamic rebuilding technology based on model interpolation compensation
CN103413352A (en) * 2013-07-29 2013-11-27 西北工业大学 Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170064287A1 (en) * 2015-08-24 2017-03-02 Itseez3D, Inc. Fast algorithm for online calibration of rgb-d camera

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622775A (en) * 2012-01-12 2012-08-01 北京理工大学 Heart real-time dynamic rebuilding technology based on model interpolation compensation
CN103413352A (en) * 2013-07-29 2013-11-27 西北工业大学 Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions

Also Published As

Publication number Publication date
CN107862735A (en) 2018-03-30

Similar Documents

Publication Publication Date Title
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
CN107093205B (en) A kind of three-dimensional space building window detection method for reconstructing based on unmanned plane image
CN112001926B (en) RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping
CN110009732B (en) GMS feature matching-based three-dimensional reconstruction method for complex large-scale scene
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
CN108776989B (en) Low-texture planar scene reconstruction method based on sparse SLAM framework
CN113269094B (en) Laser SLAM system and method based on feature extraction algorithm and key frame
Micusik et al. Descriptor free visual indoor localization with line segments
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN111612728A (en) 3D point cloud densification method and device based on binocular RGB image
CN110570474B (en) Pose estimation method and system of depth camera
CN110533716B (en) Semantic SLAM system and method based on 3D constraint
CN116449384A (en) Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN111998862A (en) Dense binocular SLAM method based on BNN
CN108305278A (en) Images match correlation improved method in a kind of ORB-SLAM algorithms
CN110544279A (en) pose estimation method combining image identification and genetic algorithm fine registration
CN106709432B (en) Human head detection counting method based on binocular stereo vision
KR102494552B1 (en) a Method for Indoor Reconstruction
Yong-guo et al. The navigation of mobile robot based on stereo vision
CN108694348B (en) Tracking registration method and device based on natural features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant