CN111242990B - 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching - Google Patents
- Publication number
- CN111242990B (application CN202010010168.6A)
- Authority
- CN
- China
- Prior art keywords
- camera
- dimensional
- phase
- point cloud
- reconstruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching, which can rapidly realize 360-degree reconstruction of the three-dimensional point cloud of a measured object and nonlinearly optimize the reconstruction result. The scheme is as follows: first, a digital projector and a camera array are calibrated, the corresponding structured-light deformation images are acquired, the phase orders of the deformed-fringe pixel points are calculated, and the epipolar lines of the deformed-fringe pixel points on the different camera imaging planes of the camera array are determined, thereby establishing a joint epipolar-geometry and equiphase constraint; dense matching of the structured-light images of different viewing angles is then calculated, generating dense matching relations between the deformed-fringe phases of different angles. Next, the camera transformation matrices and the initial three-dimensional point cloud are initialized using the phase dense matching relations and the triangulation principle, and an objective function and its graph optimization model are constructed and solved. Finally, triangulated surface reconstruction is performed on the optimized three-dimensional point cloud to obtain a complete 360-degree three-dimensional reconstruction model of the measured target.
Description
Technical Field
The invention relates to a three-dimensional reconstruction technology, in particular to a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching.
Background
Three-dimensional reconstruction is widely applied in fields such as industrial production, reverse engineering, aerial measurement, and virtual reality. Three-dimensional reconstruction based on structured-light projection and image information can obtain the three-dimensional information of the measured object with extremely high precision, and the multi-view geometry of a multi-camera system or a moving camera can produce a more complete three-dimensional model covering a larger angular range; the multi-camera structured-light reconstruction method is therefore an effective means of obtaining a complete, high-precision, 360-degree three-dimensional model. In the structured-light reconstruction method, registration of the three-dimensional point clouds and optimization of the reconstruction result are the two key technologies determining the reconstruction quality of the three-dimensional model.
Registration of three-dimensional point clouds largely determines the reconstruction accuracy of three-dimensional models and is of great interest to those skilled in the art. In general, to obtain a complete three-dimensional model of an object, the datasets from different perspectives need to be transformed into the same coordinate system, a process called three-dimensional data registration. Registration of three-dimensional data between different viewing angles is particularly important, as it directly affects the reconstruction precision and the degree of automation of three-dimensional reconstruction. When reconstructing a three-dimensional model, the limitations of the observation direction and the object's shape require three-dimensional data of the object surface to be acquired from different angles, so as to obtain a three-dimensional object surface with real, natural textures that can be rendered under arbitrary illumination and viewing angles. Point cloud data is a collection of three-dimensional data points representing object surface information and spatial distribution obtained by various three-dimensional acquisition devices, usually represented as unstructured, spatially discrete geometric points. The most basic constituent elements of a point cloud are these discrete spatial points and their associated surface properties. The difficulty of registration lies in establishing correspondences between two three-dimensional point clouds.
In the structured-light reconstruction method, a complete three-dimensional model is mainly generated by fusing several local three-dimensional point clouds. Structured-light three-dimensional measurement projects grating stripes modulated by a periodic function onto the surface of the measured object through a projection device; variations in the height of the object surface shift the phase of the grating stripes at each point, from which the three-dimensional information of the object surface can be recovered. Because the visibility of the optical scanning system is limited, single-view scanning suffers from occlusion blind spots, so multiple scans must be registered and fused to obtain a complete model: point clouds with a certain overlapping area, acquired under different viewing angles, are registered together by exploiting the consistency of the overlapping regions, so that they can be fused into a whole under the same coordinate system. The key technology of three-dimensional point cloud fusion is three-dimensional point cloud registration: finding the mapping relations among point clouds at different viewing angles and applying a rigid-body transformation (rotation and translation) that matches and aligns point clouds under different coordinate systems. Its core is computing the coordinate-transformation parameters, namely the rotation matrix and translation vector that transform the source point cloud into the coordinate system of the target point cloud. Structured-light three-dimensional point cloud registration faces a number of interfering factors.
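The rigid-body registration step described above can be sketched in a few lines of numpy (a generic illustration; `register_rigid` is a hypothetical helper, not part of the patent):

```python
import numpy as np

def register_rigid(source, R, t):
    """Apply the rigid-body transform (rotation matrix R, translation
    vector t) to a source point cloud of shape (N, 3), bringing it
    into the target's coordinate system: q = R p + t for each point."""
    return source @ R.T + t
```

Computing the best R and t from overlapping regions is the actual registration problem; the transform itself is this one line.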
The first is registration noise: in the process of reconstructing point cloud data with structured light, the generated three-dimensional data often contains many small-amplitude noise points and outliers caused by human interference, ambient illumination, and abrupt changes of the object's surface type, making the reconstructed model rough and disordered. The second is the huge amount of calculation: in large-scale data operations, the size of the point cloud strongly affects the efficiency of later processing, and registration may even diverge when the initial states of the scan data and the model differ too much. Point clouds generally contain thousands or even millions of points; if all of them participate in calculation and repeated traversal searches, inefficiency is inevitable. Three-dimensional point cloud registration also requires feature matching between corresponding point sets, which demands a large amount of computation time.
Optimization of the reconstruction result is a further fine adjustment of the three-dimensional point cloud registration result during structured-light reconstruction, so that the reconstructed three-dimensional model and the camera poses attain a globally minimal error. Three-dimensional point cloud registration obtains the rigid transformation relations among multiple point clouds of different viewing angles, but because each single-view point cloud contains various small fluctuation noises, obtaining a higher-precision complete reconstruction model requires fine adjustment of the spatial position of every point in every three-dimensional point cloud. Such adjustment cannot be achieved by a rigid transformation, so an optimization model targeting the noise of the complete reconstructed point cloud must be constructed. How to design such a reconstruction-result optimization algorithm is a technical difficulty in obtaining a higher-precision three-dimensional reconstruction model. When registering three-dimensional data of the object at different viewing angles to the same reference coordinate system, registration errors accumulate as the reference viewing angle changes continuously; global optimization of the data registration as a whole can reduce these errors. In reconstruction-result optimization, the construction of the error function (or cost function) is the key step in designing the optimization algorithm.
In multi-view passive three-dimensional reconstruction based on feature matching, the reprojection error measures the difference between the pixel coordinates of the same spatial point imaged from different viewing angles, and the two-norm of the overall reprojection error is the key quantity in constructing the error function. Solving the error function is an iterative search for the optimum of a nonlinear optimization: a perturbation model gives the derivative of each error term with respect to the quantities to be optimized, continuous iteration yields one or several minima, and the quantities corresponding to the global minimum are judged to be the globally optimal values. In practical applications, point clouds range from thousands to millions of points, and iteratively solving the error function costs a great deal of time and memory; it is therefore also necessary to design a rapidly converging error function and an optimization initial value that enables it to converge rapidly.
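The reprojection-error term discussed above can be sketched as follows (a minimal generic illustration, not the patent's exact formulation; `K`, `R`, `t`, `X`, `uv` are stand-in names):

```python
import numpy as np

def reprojection_error(K, R, t, X, uv):
    """Squared two-norm reprojection error of one 3D point X against
    its observed pixel uv, for a camera with intrinsic matrix K and
    pose R|t: project X, dehomogenize, compare with the observation."""
    x = K @ (R @ X + t)          # project into the image
    proj = x[:2] / x[2]          # pixel coordinates
    return np.sum((proj - uv) ** 2)
```

A bundle-adjustment-style optimizer sums this term over all cameras and points and minimizes it jointly over poses and points.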
Disclosure of Invention
The invention aims to provide a method that can rapidly perform 360-degree reconstruction of the three-dimensional point cloud of a measured object and nonlinearly optimize the reconstruction result.
The invention provides a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching, with the following technical characteristics. In the structured-light projection and camera array 1 acquisition process, the digital projector 2 and the camera array 1 are first calibrated; after calibration, the digital projector 2 projects the structured-light stripes and the camera array 1 captures the deformed stripes from different angles, obtaining the corresponding structured-light deformation images. The phase orders of the deformed-fringe pixel points are then calculated, and the epipolar lines of the deformed-fringe pixel points on the different camera imaging planes of the camera array 1 are determined, thereby establishing a joint epipolar-geometry and equiphase constraint; dense matching of the structured-light images of different viewing angles is calculated, generating dense matching relations between the deformed-fringe phases of different angles. The camera transformation matrices and the initial three-dimensional points are initialized using the phase dense matching relations and the triangulation principle; a globally optimized objective function characterizing the overall error is designed, and its graph optimization model is constructed and solved. Through iteration, the optimal solutions of the different camera poses and of the whole three-dimensional point cloud are calculated, completing the iterative optimization of the objective function. The optimized three-dimensional point cloud is used to generate the complete point cloud, and triangulated surface reconstruction is performed on the optimized three-dimensional model to obtain a complete 360-degree three-dimensional reconstruction model of the measured target, completing the generation of the complete three-dimensional target model.
Compared with the prior art, the invention has the following beneficial effects.
The method uses the calibrated camera array 1 and the digital projector to acquire the corresponding structured-light deformation images simultaneously from different angles; it then uses the joint epipolar-geometry and structured-light equiphase constraints of the camera array 1 to calculate dense matching of the structured-light images of different viewing angles, and uses the triangulation principle to calculate the initial values of the optimization iteration; it next designs an objective function representing the overall error, constructs its graph optimization model, and iteratively calculates the optimal solutions of the different camera poses and the overall three-dimensional point cloud; finally, it performs triangulated surface reconstruction on the optimized three-dimensional model to obtain a complete 360-degree reconstruction model of the measured three-dimensional target. Through the four processes of structured-light projection and camera array 1 acquisition, continuous phase dense matching at different viewing angles, objective function construction and iterative optimization calculation, and complete three-dimensional object model generation, rapid 360-degree three-dimensional point cloud reconstruction of the measured object is realized. Experimental results show that the invention has the advantages of high reconstruction precision, low texture dependence, few rotations of the measured object, non-contact operation, and mutually independent per-point calculation. Compared with the traditional iterative-closest-point three-dimensional registration method, the method is more efficient, accurate, and stable.
According to the method, the corresponding structured-light deformation images are acquired simultaneously from different angles, the phase orders of the deformed-fringe pixel points are calculated, their epipolar lines are determined, and the joint epipolar-geometry and equiphase constraint is established; the epipolar-geometry and structured-light equiphase joint constraints of the camera array 1 are then used to calculate dense matching of the deformed-fringe phases of different viewing angles, while the globally optimized objective function is designed, its graph optimization model constructed, and the solution carried out. Because the objective function takes into account both the transformation matrices representing the camera poses and the spatial positions of the three-dimensional point cloud, global optimization of the structured-light three-dimensional reconstruction process is realized; at the same time, thanks to the dense matching relations of continuous phases across viewing angles and the accurate initial-value design of the objective function, the precision of the optimization result is greatly improved and the time consumed by the calculation is reduced.
In the continuous phase dense matching process across viewing angles, the calibrated camera array and digital projector acquire the corresponding structured-light deformation images simultaneously from different angles; the epipolar-geometry and structured-light equiphase joint constraints of the camera array are then used to calculate dense matching of the structured-light images of different viewing angles, and the triangulation principle is used to calculate the initial values of the optimization iteration. The method therefore offers high reconstruction precision, low texture dependence, few rotations of the measured object, non-contact operation, and mutually independent per-point calculation.
Drawings
FIG. 1 is a schematic flow chart of a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching;
FIG. 2 is a schematic diagram of a 360-degree three-dimensional reconstruction optimization method device based on continuous phase dense matching;
FIG. 3 is a diagram of the epipolar geometry and equiphase joint constraint;
FIG. 4 is a schematic representation of an objective function diagram optimization.
In the figure: 1 — camera array; 2 — digital projector; 3 — projected structured-light fringes; 4 — measured three-dimensional target; 5 — first camera imaging plane; 6 — second camera imaging plane; 7 — equiphase line on the measured three-dimensional target surface; 8 — camera pose vertex; 9 — three-dimensional point cloud vertex.
It should be understood that the above-described figures are merely schematic and are not drawn to scale.
Detailed Description
An exemplary embodiment of the 360° three-dimensional reconstruction optimization method based on continuous phase dense matching of the present invention is described in detail below. It is noted that the following examples are given for the purpose of illustration only and are not to be construed as limiting the scope of the invention, since numerous insubstantial modifications and adaptations of the invention made by one skilled in the art in light of the foregoing disclosure still fall within its scope.
See fig. 1. According to the invention, the three-dimensional reconstruction optimization method comprises four processes: structured-light projection and camera array 1 acquisition, continuous phase dense matching of different viewing angles, objective function construction and iterative optimization calculation, and complete three-dimensional object model generation. In the structured-light projection and camera array 1 acquisition process, the digital projector 2 and the camera array 1 are first calibrated; after calibration, the digital projector 2 projects the structured-light stripes and the camera array 1 captures the deformed stripes from different angles, obtaining the corresponding structured-light deformation images. The phase orders of the deformed-fringe pixel points are then calculated, and the epipolar lines of the deformed-fringe pixel points on the different camera imaging planes of the camera array 1 are determined, thereby establishing the joint epipolar-geometry and equiphase constraint; dense matching of the structured-light images of different viewing angles is calculated, generating the dense matching relations between the deformed-fringe phases of different angles. The camera transformation matrices and the initial three-dimensional points are initialized using the phase dense matching relations and the triangulation principle; the globally optimized objective function representing the overall error is designed, its graph optimization model constructed and solved, and the optimal solutions of the different camera poses and of the overall three-dimensional point cloud are calculated through iteration, completing the objective function construction and iterative optimization calculation. The optimized three-dimensional point cloud is used to generate the complete point cloud, and triangulated surface reconstruction is performed on the optimized three-dimensional model to obtain a complete 360-degree three-dimensional reconstruction model of the measured target, completing the generation of the complete three-dimensional target model.
See fig. 2-3. The structured-light three-dimensional reconstruction optimization device based on continuous phase dense matching comprises: a camera array 1 composed of M_K × N_K cameras with inter-camera spacing d_cc, and a digital projector 2 arranged on the central axis of the plane of the camera array 1. The optical axes of the digital projector 2 and the camera array 1 converge on the measured three-dimensional target 4, generating the projected structured-light fringes 3 covering the measured three-dimensional target 4. The optical centers of the first and second cameras are O_1 and O_2, corresponding to the first camera imaging plane 5 and the second camera imaging plane 6, respectively.
In three-dimensional space, a point P on the surface of the measured object is imaged on the first camera imaging plane I_1 and the second camera imaging plane I_2 as pixel points p_1 and p_2, respectively. O_1, O_2, and the spatial point P form a triangle whose apex P lies on an equiphase line 7 of the measured three-dimensional target surface, and this equiphase line 7 is imaged by the first and second cameras respectively.
In the structured-light projection and camera array 1 acquisition process, the digital projector projects a checkerboard calibration image onto a plane, the camera array 1 acquires the checkerboard images, and image corner detection is performed, so that the fundamental matrix between each camera and the digital projector is calculated. Decomposing the fundamental matrix yields the rotation matrix R_kc and translation matrix t_kc of the kc-th camera relative to the digital projector, where kc is the index number of the camera. The digital projector then projects structured-light stripes P_i(u, v) with a sinusoidal intensity distribution, satisfying:

P_i(u, v) = A_p(u, v) + B_p(u, v)·cos(2πu/T + 2πi/N)   (1)
where (u, v) is any pixel coordinate in the projector pixel coordinate system, A_p(u, v) is the DC-component intensity, B_p(u, v) is the stripe amplitude, 2πi/N is the phase-shift amount of the i-th stripe, and N is the total number of phase-shift steps of the structured-light stripes, an integer greater than or equal to 4. Simultaneously, the camera array acquires the measured three-dimensional target, obtaining the reflected deformed fringe images I_i(x, y), which satisfy:

I_i(x, y) = A_c(x, y) + B_c(x, y)·cos(φ(x, y) + 2πi/N)   (2)
where (x, y) is any pixel coordinate in the camera pixel coordinate system, A_c(x, y) is the background intensity of the measured object, and B_c(x, y) is the acquired fringe amplitude. From these, the phase function of the deformed fringes modulated by the object surface, i.e. the truncated phase φ(x, y), can be solved:

φ(x, y) = −arctan[ Σ_{i=0}^{N−1} I_i(x, y)·sin(2πi/N) / Σ_{i=0}^{N−1} I_i(x, y)·cos(2πi/N) ]   (3)
The phase function φ(x, y) takes values in (−π, π]. Gray-code projections of different frequencies are then used to determine the order of each phase period, and the phase is unwrapped to obtain the corresponding absolute phase Φ(x, y):

Φ(x, y) = φ(x, y) + 2π·⌊u/T⌋   (4)
where T is the structured-light phase period and u is the truncated-phase coordinate corresponding to the current pixel. The absolute phase is continuous, and the equiphase-line direction is consistent with the direction of the originally projected stripes.
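The standard N-step phase-shifting recovery described by equations (1)-(4) can be sketched in numpy (function names are illustrative; the Gray-code decoding of the fringe order is assumed to be done elsewhere):

```python
import numpy as np

def truncated_phase(images):
    """Recover the wrapped (truncated) phase from N phase-shifted
    fringe images I_i = A + B*cos(phi + 2*pi*i/N), via the standard
    arctangent of the sine- and cosine-weighted sums."""
    N = len(images)
    shifts = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, shifts))
    den = sum(I * np.cos(d) for I, d in zip(images, shifts))
    return -np.arctan2(num, den)  # wrapped into (-pi, pi]

def absolute_phase(phi, order):
    """Unwrap using the per-pixel fringe order k (e.g. decoded from
    Gray-code patterns): Phi = phi + 2*pi*k."""
    return phi + 2 * np.pi * order
```

Both functions work element-wise, so `images` may hold scalars or full image arrays.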
See fig. 3. The epipolar geometry constraint of the matching points is determined first. Viewed from the first camera, p_1 is the projection of the spatial point P; its possible projection positions on the second camera imaging plane lie on the line through e_2 and p_2, i.e. on the epipolar line L_2. The intrinsic matrix of the camera array 1 is unified as K, and the pixel points p_1 and p_2 in homogeneous coordinates satisfy:
p_1 = K·P,  p_2 = K·(R·P + t)   (5)
The normalized image-plane coordinates x̂_1 and x̂_2 corresponding to pixel points p_1 and p_2 satisfy the epipolar constraint:

x̂_2^T · E_12 · x̂_1 = 0   (6)
where [·]^T denotes the matrix transpose and E_12 is the essential matrix between the two cameras. A pixel matching point satisfying the epipolar geometry constraint determines the possible corresponding epipolar line in the other pixel coordinate system, but does not yet determine the specific coordinates of the corresponding pixel. The structured-light equiphase constraint of the matching points is therefore determined next. As shown in fig. 3, in the image I_1 corresponding to the first camera, the absolute phase Φ_1(x, y) of the pixel point p_1 is obtained from equation (4). In three-dimensional space, Φ_1(x, y) corresponds to the equiphase line 7 on the measured three-dimensional target surface, denoted as the contour line S on which the spatial point P lies. This contour line is imaged by the second camera, likewise forming an equiphase curve S_2 in image I_2, which can be expressed as:
S_2(x_2, y_2) = Φ_1(p_1) = Φ_1(x_1, y_1)   (7)
Thus, viewed from the first camera, the re-projection of the pixel point p_1 of image I_1 onto the second camera imaging plane I_2 can only lie on the equiphase curve S_2.
Finally, the epipolar geometry constraint and the structured-light equiphase constraint of the camera array 1 are combined for the point P. In image I_2, the intersection p_2 of the epipolar line L_2 and the curve S_2 is solved; its pixel coordinates are the matching coordinates of p_1 in image I_1, giving the exact match of the point P on the imaging planes of the first and second cameras. Similarly, all pixel points of image I_1 are traversed; matching points falling outside the coordinate range of image I_2 are discarded, and every remaining point finds its correct correspondence, thereby obtaining a dense match between image I_1 and image I_2. The initial values of the three-dimensional point cloud and of the transformation matrix in each camera coordinate system are then obtained using the triangulation principle and a multi-point projection algorithm.
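The intersection of the epipolar line with the iso-phase curve can be sketched as a search along the line for the pixel whose absolute phase matches that of p_1. This is a simplified, hypothetical sketch (integer-pixel sampling via the fundamental matrix F, no sub-pixel interpolation):

```python
import numpy as np

def match_on_epipolar(p1, phase1, phase2, F, tol=0.05):
    """For pixel p1 = (x, y) in image 1, walk the epipolar line
    l2 = F @ p1~ in image 2 and return the pixel whose absolute
    phase is closest to phase1[p1] (within tol), or None."""
    h, w = phase2.shape
    target = phase1[p1[1], p1[0]]
    a, b, c = F @ np.array([p1[0], p1[1], 1.0])  # line: a*x + b*y + c = 0
    best, best_err = None, tol
    for x in range(w):
        if abs(b) < 1e-12:          # (near-)vertical line: skip in this sketch
            continue
        y = int(round(-(a * x + c) / b))
        if 0 <= y < h:
            err = abs(phase2[y, x] - target)
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

A production version would interpolate the phase along the line for sub-pixel matches and handle vertical epipolar lines by sweeping y instead of x.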
In the objective function construction and iterative optimization calculation process, the initial three-dimensional point cloud values obtained by the triangulation principle carry relatively large errors, which are reduced to a minimum through iterative optimization. Likewise, the transformation matrices R_kc|t_kc of the different cameras are optimized to obtain the optimal three-dimensional reconstruction result of the measured object. This process first constructs the optimized objective function, taking as the objective the distances between the observed and estimated values of the three-dimensional point cloud over all cameras. The transformation matrix R|t is expressed in the Lie algebra se(3), on which the exponential map is denoted exp(ξ^), satisfying:

T = exp(ξ^),  ξ = [ρ; φ] ∈ se(3)   (8)
and where ρ is the first three dimensions of ξ, representing the translation in the three-dimensional point cloud transformation, and φ is the last three dimensions of ξ, representing the rotation in the three-dimensional point cloud transformation.
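The exponential map from the six-dimensional vector ξ = [ρ, φ] to a 4×4 transform follows directly from the Rodrigues formula and the left Jacobian of SO(3); this is a standard construction, sketched here in NumPy for reference:

```python
import numpy as np

def hat(v):
    """Skew-symmetric (wedge) matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def se3_exp(xi):
    """Exponential map se(3) -> SE(3).
    xi = [rho, phi]: first three components the translation part,
    last three the rotation part, as in the text."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    if theta < 1e-10:
        R, J = np.eye(3), np.eye(3)
    else:
        a = phi / theta
        A = hat(a)
        # Rodrigues formula for the rotation block
        R = (np.cos(theta)*np.eye(3)
             + (1 - np.cos(theta))*np.outer(a, a)
             + np.sin(theta)*A)
        # left Jacobian: maps rho to the translation t = J @ rho
        J = ((np.sin(theta)/theta)*np.eye(3)
             + (1 - np.sin(theta)/theta)*np.outer(a, a)
             + ((1 - np.cos(theta))/theta)*A)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = J @ rho
    return T
```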
The set of estimated values of the three-dimensional point cloud is denoted {Q_j(x, y, z)}, and the objective function is set to:

min over ξ_k, Q_j of (1/2) Σ_k Σ_{j=1..M} ‖ s_j^k K⁻¹ p_j^k − exp(ξ_k∧) Q_j ‖²   (10)
where s_j^k is the depth distance corresponding to the jth three-dimensional point in the kth camera coordinate system, p_j^k is the pixel coordinate corresponding to the jth three-dimensional point in the kth camera coordinate system, K⁻¹ is the inverse of the intrinsic matrix of each camera, and M is the total number of points in the three-dimensional point cloud. The process then solves the constructed objective function globally.
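A direct (unoptimized) evaluation of such an objective might look as follows; the data layout — one (u, v, s) row per point per camera — and the identity intrinsics used in the example are assumptions for illustration only:

```python
import numpy as np

def objective(Q, poses, obs, K):
    """Total squared error between back-projected observations and the
    estimated cloud, one term per (camera k, point j).
    Q: (M, 3) estimated points; poses: list of (R, t) per camera;
    obs[k]: (M, 3) array of (u, v, s) = pixel coords + depth in camera k."""
    Kinv = np.linalg.inv(K)
    err = 0.0
    for (R, t), o in zip(poses, obs):
        uv1 = np.column_stack([o[:, 0], o[:, 1], np.ones(len(o))])
        # observed 3-D point in camera k: depth * K^-1 * homogeneous pixel
        P_obs = o[:, 2:3] * (Kinv @ uv1.T).T
        # estimated point transformed into camera k's coordinates
        P_est = Q @ R.T + t
        err += np.sum((P_obs - P_est) ** 2)
    return err
```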
See FIG. 4. The solution of the objective function is constructed as a graph optimization problem. In FIG. 4, the solid triangles represent camera pose vertices and the dotted triangles represent the pose uncertainty of those vertices: the farther a dotted triangle is from its solid triangle and the larger its angle, the larger the deviation of the camera pose observation from the true value. The solid circles represent three-dimensional point cloud vertices and the dotted circles represent point cloud uncertainty: the larger the dotted circle, the larger the deviation of the point cloud observation from the true value. The dashed lines connecting point cloud vertices and camera pose vertices represent the observation model. The vertices of the graph are all the three-dimensional space points and the poses of the camera array 1, and represent the optimization variables of the graph optimization problem; the edges of the graph connect vertices, represent the observation relations between different vertices in a common region, and are the error terms of the graph optimization problem. The process optimizes the three-dimensional point cloud and the poses of the camera array 1 simultaneously, setting a three-dimensional point cloud vertex type and a pose vertex type respectively. In solving the optimization problem, the vertex and edge types are defined first.
The three-dimensional point cloud vertex is 3-dimensional, and the camera pose vertex is a 6-dimensional Lie algebra element. The observation equation is implemented for each three-dimensional point in each camera; special attention must be paid to applying the Rodrigues transformation to the 6-dimensional Lie algebra pose vertex, projecting with the observation equation only after the transformation matrix R|t of each camera is obtained. The graph of the problem is then constructed. As given by objective function formula (10) and FIG. 4, the graph is composed mainly of the observed three-dimensional coordinate values corresponding to the jth three-dimensional point in the kth camera coordinate system; the initial values of the graph are obtained from the camera array 1 calibration data and from dense-matching triangulation. An optimization algorithm is then selected. Here the descent strategy of the Levenberg-Marquardt method is chosen, and the automatic differentiation library of g2o is used, avoiding manual derivation of the Jacobian matrix (first-order derivatives) and the Hessian matrix (second-order derivatives) of the high-dimensional problem. Meanwhile, the marginalization method from simultaneous localization and mapping (SLAM) technology is introduced to realize Schur elimination in the descent strategy and accelerate the computation of the optimization problem. Finally, an optimization threshold is set and the iteration results are analyzed until convergence.
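As a toy stand-in for the g2o graph described above, the same Levenberg-Marquardt refinement can be demonstrated on a single point observed by two cameras using SciPy; the two camera poses, the identity intrinsics, and the single-point scope are illustrative assumptions, far smaller than the full bundle adjustment in the text:

```python
import numpy as np
from scipy.optimize import least_squares

# Two known camera poses (assumed for illustration): identity pose and a
# one-unit baseline along x. Intrinsics K = I for simplicity.
poses = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([-1.0, 0.0, 0.0]))]

def project(P, R, t):
    """Pinhole projection with K = I."""
    p = R @ P + t
    return p[:2] / p[2]

def residuals(Q, observations):
    """Stack the reprojection errors of one 3-D point across all cameras."""
    return np.concatenate([project(Q, R, t) - uv
                           for (R, t), uv in zip(poses, observations)])

true_point = np.array([0.5, 0.2, 4.0])
obs = [project(true_point, R, t) for R, t in poses]

# Levenberg-Marquardt refinement from a rough initial value,
# analogous to (but much smaller than) the graph optimization in the text
sol = least_squares(residuals, x0=np.array([0.0, 0.0, 2.0]),
                    args=(obs,), method='lm')
print(sol.x)  # close to true_point
```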
In the complete three-dimensional target model generation process, the optimized three-dimensional point cloud is triangulated and surface-reconstructed to obtain the complete 360-degree three-dimensional reconstruction model of the measured three-dimensional target; meanwhile, the optimized transformation matrix of each camera determines the pose relation of the camera array 1 in the complete three-dimensional model coordinate system, corresponding to each camera's pose relative to the point cloud.
Through the four processes of structured light projection and camera array 1 acquisition, continuous phase dense matching across different view angles, objective function construction and iterative optimization calculation, and complete three-dimensional object model generation, the method achieves fast 360-degree three-dimensional point cloud reconstruction of the measured object, with the advantages of high reconstruction accuracy, low texture dependence, few rotations of the measured object, contactless operation, and independent computation of each point.
While the invention has been described in detail in connection with the drawings, it should be understood that the foregoing is only illustrative of the preferred embodiment of the invention and is not intended to limit the invention thereto, but rather that various modifications, equivalents, improvements and substitutions can be made therein by those skilled in the art without departing from the spirit and principles of the invention, and are intended to be included within the scope of the appended claims.
Claims (7)
1. A 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching, having the following technical characteristics: in the structured light projection and camera array (1) acquisition process, the digital projector (2) and the camera array (1) are first calibrated; after calibration, the digital projector (2) projects structured light stripes, and the camera array (1) captures the deformed stripes from different angles, obtaining the corresponding structured light deformation images; the phase orders of the deformed stripe pixel points are then calculated, and at the same time the epipolar lines of the deformed stripe pixel points on the imaging planes of the different cameras of the camera array (1) are determined, thereby establishing the joint constraint of epipolar geometry and equal phase, computing the dense matching of the structured light images of different visual angles, and generating the dense matching relation of the deformed stripe phases at different angles; the camera transformation matrices and the initial three-dimensional point cloud are initialized using the phase dense matching relation and the triangulation principle, a globally optimized objective function characterizing the overall error is designed, and an objective function graph optimization model is constructed and solved; through iteration, the optimal solutions of the different camera poses and the whole three-dimensional point cloud are calculated, completing the iterative optimization calculation of the objective function; the optimized three-dimensional point cloud is used to generate a complete three-dimensional point cloud, and triangulated surface reconstruction is performed on the optimized three-dimensional point cloud to obtain the complete 360-degree three-dimensional reconstruction model of the measured target, completing the generation of the complete three-dimensional target model;
determining the epipolar geometry constraint of the matching points: from the first camera angle, p₁ is the projection of the spatial point P, so its possible projection positions in the second camera imaging plane lie on the line through e₂ and p₂, i.e. the epipolar line L₂; the intrinsic matrix of the camera array 1 is unified as K, where the pixel points p₁ and p₂ satisfy, in homogeneous coordinates: p₁ = KP, p₂ = K(RP + t),
and the normalized plane coordinates x₁ and x₂ corresponding to the pixel points p₁ and p₂ satisfy the epipolar constraint condition:

x₂ᵀ E₁₂ x₁ = 0,  E₁₂ = t₁₂∧ R₁₂

where (·)ᵀ denotes the matrix transpose and E₁₂ is the essential matrix between the two cameras;
in the image I₁ corresponding to the first camera, the pixel point p₁ has absolute phase Φ₁(x, y), derived from the absolute phase Φ(x, y); Φ₁(x, y) corresponds in three-dimensional space to the equiphase line (7) on the surface of the measured three-dimensional object, and the spatial point P lies on this contour, which is likewise projected by the second camera, forming an equiphase curve S₂ expressed in image I₂ as:

S₂(x₂, y₂) = Φ₁(w₁) = Φ₁(x₁, y₁)
finally, combining the epipolar geometry constraint and the structured light equiphase constraint of point P for the camera array (1): in image I₂, the intersection p₂ of the epipolar line L₂ and the curve S₂ is solved; its pixel coordinates are the matching coordinates of p₁ in image I₁, giving the exact match of point P on the imaging planes of the first and second cameras; similarly, all pixel points in image I₁ are traversed, matching points beyond the coordinate range of image I₂ are removed, and the remaining points find their correct correspondences, yielding the dense matching between image I₁ and image I₂; the initial values of the three-dimensional point cloud and of the transformation matrix in each camera coordinate system are obtained using the triangulation principle and a multi-point projection algorithm; the distances between the observed and estimated values of the three-dimensional point clouds corresponding to all cameras are used as the optimization objective function, and the three-dimensional point cloud and the transformation matrix R|t are expressed in the Lie algebra se(3), where the transformation matrix R|t belongs to the Lie group SE(3); the exponential map onto it is denoted exp(ξ) and satisfies:

exp(ξ∧) = [ exp(φ∧)  Jρ ; 0ᵀ  1 ] = [ R  t ; 0ᵀ  1 ]
where ρ is the first three dimensions of ξ, representing the translation in the three-dimensional point cloud transformation, and φ is the last three dimensions of ξ, representing the rotation in the three-dimensional point cloud transformation; the set of estimated values of the three-dimensional point cloud is denoted {Q_j(x, y, z)}, and the objective function is set to:

min over ξ_k, Q_j of (1/2) Σ_k Σ_{j=1..M} ‖ s_j^k K⁻¹ p_j^k − exp(ξ_k∧) Q_j ‖²
where s_j^k is the depth distance corresponding to the jth three-dimensional point in the kth camera coordinate system, p_j^k is the pixel coordinate corresponding to the jth three-dimensional point in the kth camera coordinate system, K⁻¹ is the inverse of the intrinsic matrix of each camera, and M is the total number of points in the three-dimensional point cloud.
2. The 360° three-dimensional reconstruction optimization method based on continuous phase dense matching as claimed in claim 1, wherein: in three-dimensional space, a point P on the surface of the measured object forms the pixel points p₁ and p₂ on the first camera imaging plane I₁ and the second camera imaging plane I₂ respectively; O₁, O₂ and the point P in three-dimensional space form a triangle, the triangle vertex P lies on the equiphase line (7) of the measured three-dimensional object surface, and the equiphase line (7) is imaged by the first camera and the second camera respectively.
3. The 360° three-dimensional reconstruction optimization method based on continuous phase dense matching as claimed in claim 1, wherein: in the structured light projection and camera array (1) acquisition process, the digital projector (2) projects checkerboard calibration images onto a plane, the camera array (1) acquires the checkerboard images and performs image corner detection, and the fundamental matrix between each camera and the digital projector is then obtained by calculation.
4. A 360° three-dimensional reconstruction optimization method based on continuous phase dense matching as claimed in claim 3, wherein: according to the camera index kc, the fundamental matrix is decomposed to obtain the rotation matrix R_kc and the translation matrix t_kc of the kc-th camera relative to the digital projector (2); then, according to the structured light phase period T, the total number of phase-shift steps N of the structured light stripes, the phase shift 2π/N per step at any pixel coordinate (u, v) of the projector pixel coordinate system, the direct-current component intensity A^p(u, v), and the stripe amplitude B^p(u, v), the structured light stripe P_i(u, v) projected by the digital projector, satisfying a sinusoidal light-dark distribution, is obtained:

P_i(u, v) = A^p(u, v) + B^p(u, v) cos(2πu/T + 2πi/N)

where the value of N is an integer greater than or equal to 4.
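Generating such N-step sinusoidal patterns is a few lines of NumPy; the default values for A, B, and T below are arbitrary 8-bit-range choices for illustration, not values from the claim:

```python
import numpy as np

def fringe_patterns(width, height, T=32, N=4, A=127.5, B=127.5):
    """Generate N phase-shifted sinusoidal stripe patterns (vertical stripes):
    P_i(u, v) = A + B * cos(2*pi*u/T + 2*pi*i/N)."""
    u = np.arange(width)
    patterns = []
    for i in range(N):
        row = A + B * np.cos(2*np.pi*u/T + 2*np.pi*i/N)
        patterns.append(np.tile(row, (height, 1)))  # same row for every v
    return patterns
```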
5. The 360° three-dimensional reconstruction optimization method based on continuous phase dense matching of claim 4, wherein: the camera array is used to capture the measured three-dimensional target; according to the background intensity A^c(x, y) of the measured object at any pixel coordinate (x, y) of the camera pixel coordinate system, the fringe amplitude B^c(x, y), and the phase function φ(x, y) of the deformed fringes after modulation by the object surface, the reflected deformed fringe image I_i(x, y) is obtained:

I_i(x, y) = A^c(x, y) + B^c(x, y) cos(φ(x, y) + 2πi/N)

which is solved to obtain the truncated phase:

φ(x, y) = −arctan[ Σ_{i=0..N−1} I_i(x, y) sin(2πi/N) / Σ_{i=0..N−1} I_i(x, y) cos(2πi/N) ]

where the phase function φ(x, y) takes values in (−π, π].
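The truncated-phase recovery can be sketched with the standard N-step arctangent demodulation; the sign convention used here is one common choice and may differ from the patent's figures:

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shift demodulation: recover the truncated phase
    phi from the N deformed fringe images I_i."""
    N = len(images)
    num = sum(I * np.sin(2*np.pi*i/N) for i, I in enumerate(images))
    den = sum(I * np.cos(2*np.pi*i/N) for i, I in enumerate(images))
    return -np.arctan2(num, den)   # phi in (-pi, pi] up to boundary convention
```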
6. The 360° three-dimensional reconstruction optimization method based on continuous phase dense matching of claim 5, wherein: Gray code projections of different frequencies are used to determine the orders of the different phase periods, which are unwrapped to obtain the corresponding absolute phase Φ(x, y):

Φ(x, y) = φ(x, y) + 2π⌊u/T⌋

where T is the structured light phase period and u is the truncated-phase coordinate corresponding to the current pixel; the absolute phase is continuous, and the equiphase line direction is consistent with the original projected stripe direction.
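A minimal sketch of the order decoding and phase unwrapping step; it assumes the Gray code captured per pixel has already been binarized into an integer, which is a simplification of the multi-frequency projection in the claim:

```python
import numpy as np

def gray_to_binary(g):
    """Decode a Gray-code integer to the plain binary fringe order."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def unwrap_phase(phi, order):
    """Absolute phase Phi = phi + 2*pi*k from the truncated phase phi
    and the fringe order k; the result is continuous across periods."""
    return np.asarray(phi) + 2*np.pi*np.asarray(order)
```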
7. A structured light three-dimensional reconstruction optimization device based on continuous phase dense matching for implementing the 360° three-dimensional reconstruction optimization method based on continuous phase dense matching as claimed in any one of claims 1 to 6, comprising: a camera array (1) composed of M_K × N_K cameras with spacing d_cc between cameras, and a digital projector (2) arranged on the central axis of the camera array (1) plane, characterized in that: the optical axes of the digital projector (2) and the camera array (1) converge on the measured three-dimensional target (4) and generate projected structured light fringes (3) covering the three-dimensional target (4); the centers of the first camera and the second camera are O₁ and O₂ respectively, and the points O₁ and O₂ correspond to the first camera imaging plane (5) and the second camera imaging plane (6) respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010010168.6A CN111242990B (en) | 2020-01-06 | 2020-01-06 | 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111242990A CN111242990A (en) | 2020-06-05 |
CN111242990B true CN111242990B (en) | 2024-01-30 |
Family
ID=70877630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010010168.6A Active CN111242990B (en) | 2020-01-06 | 2020-01-06 | 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111242990B (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11906294B2 (en) * | 2020-07-28 | 2024-02-20 | Ricoh Company, Ltd. | Alignment apparatus, alignment system, alignment method, and recording medium |
CN112053432B (en) * | 2020-09-15 | 2024-03-26 | 成都贝施美医疗科技股份有限公司 | Binocular vision three-dimensional reconstruction method based on structured light and polarization |
CN112785685B (en) * | 2020-12-25 | 2024-10-15 | 新拓三维技术(深圳)有限公司 | Assembly guiding method and system |
CN113516775B (en) * | 2021-02-09 | 2023-02-28 | 天津大学 | Three-dimensional reconstruction method for acquiring stamp auxiliary image by mobile phone camera |
CN112967342B (en) * | 2021-03-18 | 2022-12-06 | 深圳大学 | High-precision three-dimensional reconstruction method and system, computer equipment and storage medium |
CN113074667B (en) * | 2021-03-22 | 2022-08-23 | 苏州天准软件有限公司 | Global absolute phase alignment method based on mark points, storage medium and system |
CN113074661B (en) * | 2021-03-26 | 2022-02-18 | 华中科技大学 | Projector corresponding point high-precision matching method based on polar line sampling and application thereof |
CN113345039B (en) * | 2021-03-30 | 2022-10-28 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Three-dimensional reconstruction quantization structure optical phase image coding method |
CN113205592B (en) * | 2021-05-14 | 2022-08-05 | 湖北工业大学 | Light field three-dimensional reconstruction method and system based on phase similarity |
CN113256795B (en) * | 2021-05-31 | 2023-10-03 | 中国科学院长春光学精密机械与物理研究所 | Endoscopic three-dimensional detection method |
CN113432550B (en) * | 2021-06-22 | 2023-07-18 | 北京航空航天大学 | Three-dimensional measurement splicing method for large-size part based on phase matching |
CN113658260B (en) * | 2021-07-12 | 2024-07-23 | 南方科技大学 | Robot pose calculation method, system, robot and storage medium |
CN113724368B (en) * | 2021-07-23 | 2023-02-07 | 北京百度网讯科技有限公司 | Image acquisition system, three-dimensional reconstruction method, device, equipment and storage medium |
CN113587816B (en) * | 2021-08-04 | 2024-07-26 | 天津微深联创科技有限公司 | Array type large scene structured light three-dimensional scanning measurement method and device thereof |
CN114708316B (en) * | 2022-04-07 | 2023-05-05 | 四川大学 | Structured light three-dimensional reconstruction method and device based on circular stripes and electronic equipment |
CN114943814B (en) * | 2022-06-08 | 2024-07-26 | 长春理工大学 | Three-dimensional scanning auxiliary face modeling analysis system |
CN114863036B (en) * | 2022-07-06 | 2022-11-15 | 深圳市信润富联数字科技有限公司 | Data processing method and device based on structured light, electronic equipment and storage medium |
CN114972544B (en) * | 2022-07-28 | 2022-10-25 | 星猿哲科技(深圳)有限公司 | Method, device and equipment for self-calibration of external parameters of depth camera and storage medium |
CN116778066B (en) * | 2023-08-24 | 2024-01-26 | 先临三维科技股份有限公司 | Data processing method, device, equipment and medium |
CN117333649B (en) * | 2023-10-25 | 2024-06-04 | 天津大学 | Optimization method for high-frequency line scanning dense point cloud under dynamic disturbance |
CN117635875B (en) * | 2024-01-25 | 2024-05-14 | 深圳市其域创新科技有限公司 | Three-dimensional reconstruction method, device and terminal |
CN118052939B (en) * | 2024-04-15 | 2024-06-18 | 清华大学深圳国际研究生院 | High-speed fringe projection three-dimensional reconstruction method |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104240289A (en) * | 2014-07-16 | 2014-12-24 | 崔岩 | Three-dimensional digitalization reconstruction method and system based on single camera |
CN104331897A (en) * | 2014-11-21 | 2015-02-04 | 天津工业大学 | Polar correction based sub-pixel level phase three-dimensional matching method |
JP2015158749A (en) * | 2014-02-21 | 2015-09-03 | 株式会社リコー | Image processor, mobile body, robot, device control method and program |
CN104952075A (en) * | 2015-06-16 | 2015-09-30 | 浙江大学 | Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method |
CN106600686A (en) * | 2016-12-06 | 2017-04-26 | 西安电子科技大学 | Three-dimensional point cloud reconstruction method based on multiple uncalibrated images |
CN106683173A (en) * | 2016-12-22 | 2017-05-17 | 西安电子科技大学 | Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching |
CN107833181A (en) * | 2017-11-17 | 2018-03-23 | 沈阳理工大学 | A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision |
CN108090960A (en) * | 2017-12-25 | 2018-05-29 | 北京航空航天大学 | A kind of Object reconstruction method based on geometrical constraint |
CN108257089A (en) * | 2018-01-12 | 2018-07-06 | 北京航空航天大学 | A kind of method of the big visual field video panorama splicing based on iteration closest approach |
CN108648240A (en) * | 2018-05-11 | 2018-10-12 | 东南大学 | Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration |
CN108898630A (en) * | 2018-06-27 | 2018-11-27 | 清华-伯克利深圳学院筹备办公室 | A kind of three-dimensional rebuilding method, device, equipment and storage medium |
CN109064536A (en) * | 2018-07-27 | 2018-12-21 | 电子科技大学 | A kind of page three-dimensional rebuilding method based on binocular structure light |
WO2019113531A1 (en) * | 2017-12-07 | 2019-06-13 | Ouster, Inc. | Installation and use of vehicle light ranging system |
CN109919876A (en) * | 2019-03-11 | 2019-06-21 | 四川川大智胜软件股份有限公司 | A kind of true face model building of three-dimensional and three-dimensional true face photographic system |
CN110288642A (en) * | 2019-05-25 | 2019-09-27 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | Three-dimension object fast reconstructing method based on camera array |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9208612B2 (en) * | 2010-02-12 | 2015-12-08 | The University Of North Carolina At Chapel Hill | Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information |
2020-01-06: Application CN202010010168.6A filed; granted as CN111242990B (Active)
Non-Patent Citations (7)
Title |
---|
"Active integral imaging system based on multiple structured light method";XIONG Z L, WANG Q H, XING Y, et al.;《Optics Express DOI: 10.1364/OE.23.027094》;全文 * |
Di Jia,Mingyuan Zhao."FDM: fast dense matching based on sparse matching".《Signal, Image and Video Processing》 .2019,全文. * |
SERVIN M,GARNICA G,ESTRADA J C,et al.."High-resolution low-noise 360-degree digital solid reconstruction using phase stepping profilometry".《Optics Express https://doi.org/10.1364/OE.22.010914》.2014,全文. * |
Yi Zhou, Guillermo Gallego, Henri Rebecq, Laurent Kneip, Hongdong Li, Davide Scaramuzza ."Semi-Dense 3D Reconstruction with a Stereo Event Camera" .《European Conference on Computer Vision (ECCV)》.2018,全文. * |
基于Bayes理论的散斑三维重建方法;赵碧霞;张华;计算机工程(第12期);全文 * |
杨振发 ; 万刚 ; 曹雪峰 ; 李锋 ; 谢理想."基于几何结构特征的点云表面重建方法".《系统仿真学报》.2017,全文. * |
江泽涛 ; 郑碧娜 ; 吴敏.一种基于立体像对稠密匹配的三维重建方法.第八届全国信号与信息处理联合学术会议.2009,全文. * |
Also Published As
Publication number | Publication date |
---|---|
CN111242990A (en) | 2020-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111242990B (en) | 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching | |
CN110288642B (en) | Three-dimensional object rapid reconstruction method based on camera array | |
CN110514143B (en) | Stripe projection system calibration method based on reflector | |
CN110672039B (en) | Object omnibearing three-dimensional measurement method based on plane reflector | |
Sitnik et al. | Digital fringe projection system for large-volume 360-deg shape measurement | |
CN115345822A (en) | Automatic three-dimensional detection method for surface structure light of aviation complex part | |
JP2016075637A (en) | Information processing apparatus and method for the same | |
Sweeney et al. | Large scale sfm with the distributed camera model | |
CN111473744A (en) | Three-dimensional shape vision measurement method and system based on speckle embedded phase shift stripe | |
CN112308963A (en) | Non-inductive three-dimensional face reconstruction method and acquisition reconstruction system | |
CN107167073A (en) | A kind of three-dimensional rapid measurement device of linear array structure light and its measuring method | |
Zhou et al. | A novel laser vision sensor for omnidirectional 3D measurement | |
Guehring | Reliable 3D surface acquisition, registration and validation using statistical error models | |
Garrido-Jurado et al. | Simultaneous reconstruction and calibration for multi-view structured light scanning | |
JP2016217941A (en) | Three-dimensional evaluation device, three-dimensional data measurement system and three-dimensional measurement method | |
Lin et al. | Vision system for fast 3-D model reconstruction | |
Dubreuil et al. | Mesh-based shape measurements with stereocorrelation: principle and first results | |
Habib et al. | A comparative analysis of two approaches for multiple-surface registration of irregular point clouds | |
CN116295113A (en) | Polarization three-dimensional imaging method integrating fringe projection | |
Ruchay et al. | Accuracy analysis of 3D object reconstruction using RGB-D sensor | |
Ozan et al. | Calibration of double stripe 3D laser scanner systems using planarity and orthogonality constraints | |
Trebuňa et al. | 3D Scaning–technology and reconstruction | |
Li et al. | Using laser measuring and SFM algorithm for fast 3D reconstruction of objects | |
Pedersini et al. | 3D area matching with arbitrary multiview geometry | |
Wang et al. | Implementation and experimental study on fast object modeling based on multiple structured stripes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||