CN115147540A - Improved three-dimensional reconstruction method - Google Patents
- Publication number
- CN115147540A (application CN202210489218.2A)
- Authority
- CN
- China
- Prior art keywords
- reconstruction
- model
- point cloud
- dimensional
- dimensional reconstruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T15/00 — 3D [Three Dimensional] image rendering; G06T15/005 — General purpose rendering architectures
- G06T19/00 — Manipulating 3D models or images for computer graphics; G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Abstract
The invention discloses an improved three-dimensional reconstruction method, which belongs to the field of three-dimensional reconstruction and comprises the following steps: S1: establishing a three-dimensional reconstruction system; S2: connecting the software with the hardware through a visual interface; S3: carrying out system calibration; S4: obtaining available parameters of the binocular camera; S5: newly building a reconstruction project; S6: scanning the surface of an object with a Digital Light Processing (DLP) projector using the multi-frequency heterodyne three-dimensional reconstruction technique, i.e. projecting sinusoidal grating fringes of multiple frequencies onto the object surface to complete a single measurement; S7: displaying the three-dimensional model after the single measurement and removing redundant scanned parts; S8: optimizing the model, or continuing to scan in preparation for multi-layer point cloud stitching; S9: after a new point cloud model is collected, stitching the new point cloud with the old model; S10: globally optimizing and coloring the multi-layer point cloud model. The invention can improve reconstruction precision and reconstruction speed, supplement the model from additional viewing angles, and enrich the reconstruction details of the model.
Description
Technical Field
The invention relates to a three-dimensional reconstruction method, in particular to an improved three-dimensional reconstruction method.
Background
Three-dimensional reconstruction using optical techniques obtains the external topography of a measured object in a non-contact manner with suitable optical and electronic instruments. Optical three-dimensional measurement techniques are generally divided into two main categories according to the imaging illumination mode: passive three-dimensional measurement and active three-dimensional measurement.
Passive three-dimensional reconstruction techniques extract the three-dimensional information of an object directly from images acquired by one or more camera systems, without structured light illumination. The hardware structure is simple and the method is usually realized directly in software, which gives it unique advantages when structured light illumination cannot be used; however, the density of the obtained three-dimensional points is low and the reconstruction precision is relatively poor, so the method suits scenes with low requirements on the details of the measured object and sees little application in industrial inspection.
Active three-dimensional measurement techniques mainly use various projection devices to cast different types of structured light onto the measured object, photograph the structured light image modulated and deformed by the object surface, and then calculate the three-dimensional shape data of the measured object from the image carrying the surface shape information. Among active techniques, structured light three-dimensional measurement has developed most rapidly, producing several branches including the laser scanning method (LS), Fourier transform profilometry (FTP), phase measuring profilometry (PMP) and color-coded fringe projection (CFP), of which FTP and PMP are the most widely used.
The basic idea of the PMP method is to calculate the phase value of each pixel from several grating fringe images with a fixed phase difference, and then calculate the height information of the object from those phase values. At least three grating fringe images are needed for the phase calculation, and the measured object must remain still during shooting, so the PMP method is better suited to three-dimensional measurement of static objects. Generally, the more grating fringe images used in the measurement, the higher the three-dimensional reconstruction accuracy; PMP is therefore widely used in the industrial measurement field.
In recent years, on the basis of tracking, digesting and absorbing advanced technologies from abroad, systematic research on structured light measurement technology has been carried out in China, and commercial structured light measurement systems have become available.
However, some structured light measurement systems at home and abroad still use the Gray code plus phase shift method for phase calculation, which, compared with the multi-frequency heterodyne phase calculation method, suffers from low phase calculation precision, low measurement precision, and strong sensitivity to dark regions on the surface of the measured workpiece.
Accordingly, those skilled in the art have provided an improved three-dimensional reconstruction method to solve the problems set forth in the background above.
Disclosure of Invention
The invention aims to provide an improved three-dimensional reconstruction method that improves reconstruction precision and reconstruction speed, supplements the model from additional viewing angles and enriches the reconstruction details of the model, so as to solve the problems raised in the background art above.
In order to achieve the purpose, the invention provides the following technical scheme:
an improved three-dimensional reconstruction method comprising the steps of:
s1: establishing a three-dimensional reconstruction system, wherein the three-dimensional reconstruction system comprises hardware and software, and the hardware comprises a binocular camera and a DLP projector;
s2: connecting the software with the hardware through a visual interface;
s3: carrying out system calibration;
s4: acquiring available parameters of a binocular camera by adopting a calibration method;
s5: newly building a reconstruction project, including automatic splicing of the mark points and single measurement;
s6: scanning the surface of an object by using a Digital Light Processing (DLP) projector through a multi-frequency heterodyne three-dimensional reconstruction technology, namely projecting sinusoidal grating stripes with multiple frequencies on the surface of the object to finish single measurement;
s7: displaying the three-dimensional model after single measurement and removing redundant scanning reconstruction parts;
s8: after the model reconstruction is completed, globally optimizing and coloring the model, and storing reconstructed data; or continuing to scan the object for later multi-layer point cloud splicing, and if mark point splicing is executed, turning to the step S6;
s9: after a new point cloud model is acquired, splicing the new point cloud with the old model;
s10: and carrying out global optimization and coloring on the multilayer point cloud model, and storing point cloud data after finishing the global optimization and coloring.
As a still further scheme of the invention: in S6, a fitting negative exponential phase unwrapping algorithm is selected for phase calculation.
As a still further scheme of the invention: in S6, the frequencies are 1, 1/4, 1/16, 1/64, and 1/256 respectively, the first four frequencies adopt a four-step phase shift method, that is, 4 images are acquired respectively, the sinusoidal light with the frequency of 1/256 adopts an eight-step phase shift method, that is, 8 images are acquired, and the 24 images and the white light projection image form an object reconstruction image set with 25 images as a unit.
As a still further scheme of the invention: in S8, the storing of the reconstruction data is specifically to automatically store the reconstruction data according to a storage path selected before the new project, and the reconstruction data is a point cloud file in a ply format.
As a still further scheme of the invention: in the step S9, a conversion relation matrix between the two pieces of point cloud is obtained through a mark point stitching algorithm, so that the two pieces of point cloud are stitched.
As a still further scheme of the invention: the marking points are the same points under the surface of the same object corresponding to two or more pieces of point clouds.
As a still further scheme of the invention: in S10, the global optimization and coloring specifically includes automatically finding a particle position according to a marked particle adhered to the surface of the object before shooting, moving relative positions of a plurality of point clouds to optimize a reconstruction model, and performing color rendering on the model through an OpenGL graphics library.
Compared with the prior art, the invention has the beneficial effects that:
1) When the PMP structured light technique is used, the grating fringes are projected not with Gray codes plus the traditional phase shift method but as sinusoidal grating fringe patterns in a multi-frequency heterodyne mode, yielding better reconstruction precision and reconstruction speed.
2) A point cloud stitching ICP algorithm is added: multiple point clouds are overlaid together by means of marked points, so the model can be supplemented from additional viewing angles, enriching its reconstruction details.
3) Presentation through the visual interface makes operation friendlier and more direct.
Drawings
Fig. 1 is a flow chart of an improved three-dimensional reconstruction method.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
An improved three-dimensional reconstruction method comprising the steps of:
s1: establishing a three-dimensional reconstruction system which comprises hardware and software;
the hardware comprises an industrial binocular camera, a DLP projector and a film calibration board built together; the software comprises a program integrating the binocular stereo vision algorithm, the multi-frequency heterodyne three-dimensional reconstruction technique and the marked point stitching algorithm; the hardware is connected to a computer loaded with the software through a VGA signal line, an HDMI signal line and a USB data line, after which the power line of the DLP projector is connected;
S2: connecting the software with the hardware through a visual interface;
transmitting picture data acquired by an industrial binocular camera to software of a computer;
s3: carrying out system calibration;
calibrating the film calibration plate by a binocular stereo vision algorithm to finish the detection of the centers of 99 marking circles on the film calibration plate;
s4: acquiring available parameters of a binocular camera by adopting a calibration method;
adopting a seven-step calibration method: the 7 collected film calibration plate pictures are input into the program, and the available parameters of the binocular camera are obtained through the binocular stereo vision algorithm;
S5: newly building a reconstruction project, including automatic splicing of the mark points and single measurement;
the mark point splicing can stack a plurality of pieces of point clouds through a mark point splicing algorithm, and a piece of point cloud is generated through single measurement;
s6: in step S4, parameters of a binocular camera are obtained through calibration, and then the surface of an object is scanned by using a DLP projector through a multi-frequency heterodyne three-dimensional reconstruction technology, namely sinusoidal grating stripes with multiple frequencies are projected on the surface of the object and are collected by the binocular camera at the same time, so that single measurement is completed;
s7: displaying the three-dimensional model after single measurement and removing redundant scanning reconstruction parts through a tool preset in a program;
s8: after the model reconstruction is completed, globally optimizing and coloring the model, and storing reconstructed data; or continuing to scan the object for later multi-layer point cloud splicing, and if mark point splicing is executed, turning to the step S6;
s9: after a new point cloud model is collected, splicing the new point cloud and the old model;
after a new point cloud model is acquired, a stitching operation over N marked points can be carried out to compensate for the angles missing from the reconstructed model because structured light can only be captured from one angle at a time; several marked points of the point clouds are manually selected and overlaid together, and choosing precise alignment completes the stitching of the new point cloud with the old model;
s10: and carrying out global optimization and coloring on the multilayer point cloud model, and storing point cloud data after finishing the global optimization and coloring.
In this embodiment: the binocular stereo vision algorithm is specifically as follows.
the binocular camera also follows the pinhole model when imaging. And setting the projection of a point A in the three-dimensional space on the planes of the left camera and the right camera, wherein the real object image corresponding to the binocular system comprises a binocular camera, a projection light source, a calibration object and other fixed supports. The coordinates of the three-dimensional world coordinates of the binocular system on the homogeneous coordinates and the pixel coordinates of the two-dimensional image in the imaging process can be expressed as follows:
[X W Y W Z W ] T is the coordinate of A in a world coordinate system; [ u, v ]] T Is the pixel coordinate of a on the left image plane. Wherein,
the same can be deduced that the point A projects the homogeneous coordinate on the image of the right camera. In the formula (1-2): s is a proportionality coefficient, K is a camera intrinsic parameter, (u 0, v 0) is a principal point coordinate of an image plane, and f u And f v The focal lengths of the images on the u-axis and the v-axis are respectively. R and t are the rotation and translation matrices from the world coordinate system to the camera coordinate system, which constitute the extrinsic parameters of the camera.
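For illustration, a minimal numpy sketch of the projection relation (1-2); all numeric values below are hypothetical placeholders rather than calibration results:

```python
import numpy as np

# Hedged sketch of s*[u, v, 1]^T = K [R|t] [Xw, Yw, Zw, 1]^T (formula 1-2).
f_u, f_v, u0, v0 = 1200.0, 1200.0, 640.0, 480.0   # made-up intrinsics
K = np.array([[f_u, 0.0, u0],
              [0.0, f_v, v0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # world -> camera rotation
t = np.array([[0.0], [0.0], [1000.0]])   # world -> camera translation

A = np.array([[50.0], [30.0], [200.0], [1.0]])  # homogeneous world point A
m = K @ np.hstack([R, t]) @ A                   # = s * [u, v, 1]^T
u, v = (m[:2] / m[2]).ravel()                   # divide out the scale s
print(u, v)
```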
One of the main aspects in which a binocular stereo vision system differs from a single camera is that, in addition to the internal parameters of each camera requiring calibration, the relative position and attitude between the two cameras must be precisely determined, i.e. the external parameters of the binocular camera need to be calibrated. Define the world coordinate system O-XYZ at the perspective center of the left camera and let the coordinate system of the right camera be O_r-X_rY_rZ_r; then the rotation and translation of the left and right cameras relative to the world coordinate system are [I | 0] and [R | t] respectively (I is the identity matrix, 0 the zero vector), and the rotation-translation matrix between the two cameras is [R | t]. In summary, the camera parameters that need to be calibrated in the binocular stereo vision system are f_u, f_v, (u_0, v_0), k_1, k_2, p_1, p_2, R and t.
When the binocular system is calibrated, a plane-target calibration method based on Zhang Zhengyou's technique is adopted. The target is laid out as a 9 × 11 array of circles, divided into 12 large circles and 87 small circles, and a circle-center positioning method is used to locate the image coordinates of the 99 circle centers on the calibration plate. The distance between the center points is known precisely from the manufacture of the calibration plate, so a spatial coordinate system can be established from these known distances. The coordinate system of the calibration plate is anchored at the center of the first marking circle in its upper-left corner, so the world coordinates of all marking circle centers on the plate are P_i = (x_i, y_i, 0)^T, i.e. the z coordinates are all 0. From formula (1-2) it can be obtained that:
the correspondence between the three-dimensional coordinate points M of the 99 circles on the plane of the calibration plate and the coordinate points M of the two-dimensional image projected onto the image is therefore a matrix transformation H, in which:
Wherein H = K [ r ] 1 r 2 t] (1-5)
The matrix H is a 3 × 3 homography matrix. Thus, the homography matrix can be written as H = [ H ] 1 h 2 h 3 ]。
[h 1 h 2 h 3 ]=λK[r 1 r 2 t] (1-6)
The property of the rotation matrix R having unity orthogonality is utilized, and thus the orthogonality of the rotation matrix R can be obtained from the equations (1-6):
Substituting B into (1-7) to obtain
As can be seen from the equations (1-9), by checking the image coordinates of each image taken by the binocular system, a set of such equations can be obtained, and if the camera takes n images of the calibration plate from different positions, for example, n sets of such equations can be obtained, where the homography matrix H can be obtained using the eigenvectors corresponding to the minimum eigenvalues. After H is obtained, the camera internal parameter K and the external parameter of each image can be obtained,
namely r 1 =λK -1 h 1 ,r 2 =λK -1 h 2 ,r 3 =r 1 ×r 2 ,t=λK -1 h 3
Wherein λ =1/| K -1 h 1 ||=1/||K -1 h 2 ||。
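A minimal sketch of recovering the per-view extrinsics from an estimated homography H and known intrinsics K, following the relations above (function name illustrative; in practice the resulting R is usually re-orthogonalized, e.g. by SVD):

```python
import numpy as np

# r1 = lambda*K^-1*h1, r2 = lambda*K^-1*h2, r3 = r1 x r2, t = lambda*K^-1*h3
def extrinsics_from_homography(K, H):
    Kinv = np.linalg.inv(K)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(Kinv @ h1)   # lambda = 1/||K^-1 h1||
    r1 = lam * (Kinv @ h1)
    r2 = lam * (Kinv @ h2)
    r3 = np.cross(r1, r2)
    t = lam * (Kinv @ h3)
    R = np.stack([r1, r2, r3], axis=1)      # approximate rotation matrix
    return R, t
```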
The intrinsic parameters of the camera are then solved from B by a Cholesky-style matrix decomposition; for example, the skew term is

γ = −B_12·α²·β/λ

and the remaining intrinsic parameters (α, β, u_0, v_0) follow from the corresponding closed-form expressions of Zhang's plane calibration method.
When the camera collects images, limitations of the camera manufacturing process introduce certain errors into the collected images, which manifest as imaging distortion. The camera distortion model adopts a second-order radial distortion model, namely:

x_d = x·(1 + k_1·r² + k_2·r⁴), y_d = y·(1 + k_1·r² + k_2·r⁴), with r² = x² + y²   (1-10)

where (x_d, y_d) are the actual pixel coordinates, (x, y) the ideal pixel coordinates and (k_1, k_2) the distortion coefficients. From the relationship between the ideal pixel coordinates and the image coordinates, and between the actual pixel coordinates and the image coordinates, a linear system (1-11) in the distortion coefficients can be obtained. Writing (1-11) as A·X = B, the initial values of the distortion coefficients k_1, k_2 can be solved by least squares:

X = (A^T·A)^(-1)·A^T·B
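A minimal sketch of this least-squares step, assuming the distortion is measured about the principal point; the function name and its arguments are illustrative, with ideal and observed coordinates coming from reprojected versus detected circle centers:

```python
import numpy as np

def solve_k1_k2(ideal_xy, observed_xy, principal_point):
    """Stack two equations per point, (xd - x) = (x - u0)(k1 r^2 + k2 r^4)
    and the analogue in y, then solve A X = B by X = (A^T A)^-1 A^T B."""
    u0, v0 = principal_point
    rows, rhs = [], []
    for (x, y), (xd, yd) in zip(ideal_xy, observed_xy):
        r2 = (x - u0) ** 2 + (y - v0) ** 2
        rows.append([(x - u0) * r2, (x - u0) * r2 ** 2]); rhs.append(xd - x)
        rows.append([(y - v0) * r2, (y - v0) * r2 ** 2]); rhs.append(yd - y)
    A, B = np.asarray(rows), np.asarray(rhs)
    return np.linalg.inv(A.T @ A) @ A.T @ B   # [k1, k2]
```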
The calibration of the internal parameters of the binocular camera can be completed by the above method, but the rotation-translation relationship between the two cameras still needs to be calibrated, i.e. the external parameters of the binocular vision system. Since the coordinate system of the binocular system is established at the center of the left camera, what needs to be calibrated is the rotation-translation relationship of the right camera relative to the left camera. Let R_rl denote the rotation matrix from the right camera to the left camera and T_rl the corresponding translation vector; then the following transformation holds:

X_l = R_rl·X_r + T_rl   (1-12)

In formula (1-12), X_l and X_r are the coordinates of a three-dimensional point P in the left and right camera coordinate systems respectively. From the rotation matrices R_l and R_r and the translation vectors T_l and T_r obtained from the world coordinate system to the left and right camera coordinate systems, it is then easy to deduce:

R_rl = R_l·R_r^T, T_rl = T_l − R_l·R_r^T·T_r   (1-13)
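The composition in (1-13) as a small sketch (function name illustrative):

```python
import numpy as np

# Given per-view world->camera poses (R_l, t_l) and (R_r, t_r), the fixed
# right->left relationship follows from eliminating the world point:
# X_l = R_l R_r^T X_r + (t_l - R_l R_r^T t_r).
def stereo_extrinsics(R_l, t_l, R_r, t_r):
    R_rl = R_l @ R_r.T
    T_rl = t_l - R_rl @ t_r
    return R_rl, T_rl
```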
the internal and external parameters of the camera obtained by the above algorithm are rough parameters, and further optimization processing needs to be performed according to the parameters. The LM algorithm based on least square is used during optimization, the internal parameters and the external parameters of the left camera and the right camera are bound together for optimization, and all optimized parameters comprise K l ,k 1l ,k 2l ,p 1l ,p 2l ,R li ,t li ,K r ,k 1r ,k 2r ,p 1r ,p 2r ,R rl ,t rl . Wherein K l ,K r ,R li ,t li Is obtained by a plate calibration method, k 1l ,k 2l ,p 1l ,p 2l ,k 1r ,k 2r ,p 1r ,p 2r Radial and tangential distortion parameters for the left and right cameras, respectively, are set to 0 as initial values. Since these initial values are already closer to the exact result, a fast convergence of the Levenberg-Marquardt algorithm can be guaranteed. The objective function for the Levenberg-Marquardt algorithm minimization is:
min Σ_i Σ_j ( ||a_lij − â_lij||² + ||a_rij − â_rij||² )   (1-14)

In formula (1-14), â_lij is the projection, through the projective transformation, of the real-space three-dimensional coordinates of circle center P_lj on the calibration plate onto the i-th image of the left camera, and a_lij represents the actual two-dimensional image coordinates of spatial point P_lj on the i-th image of the left camera; â_rij is the projection of circle center P_lj onto the i-th image of the right camera, and a_rij denotes the actual two-dimensional image coordinates of marked point P_lj on the i-th image of the right camera. Performing least-squares LM optimization of formula (1-14) finally completes the calibration of the binocular stereo measurement system.
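As one possible realization, OpenCV's stereoCalibrate performs this kind of joint Levenberg-Marquardt refinement; a hedged sketch, with argument names illustrative and the observation lists assumed to hold per-view circle-center coordinates:

```python
import cv2

def refine_stereo(obj_pts, img_pts_l, img_pts_r, K_l0, d_l0, K_r0, d_r0, size):
    # Start from the rough plane-calibration results and refine everything
    # (both intrinsics, distortion coefficients and the right->left pose).
    flags = cv2.CALIB_USE_INTRINSIC_GUESS
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
    rms, K_l, d_l, K_r, d_r, R_rl, T_rl, E, F = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K_l0, d_l0, K_r0, d_r0, size,
        flags=flags, criteria=crit)
    return rms, K_l, d_l, K_r, d_r, R_rl, T_rl
```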
In this embodiment: the multi-frequency heterodyne three-dimensional reconstruction technique is specifically as follows.
The wrapped phase must be obtained for each point before the phase can be unwrapped, and the most mature and reliable algorithm for this is the phase shift method, which is already in commercial use. Its greatest advantage is that solving for the initial phase of the object is a point-wise operation: in principle, the phase value of a point is not influenced by the light intensity of neighboring points, which avoids errors caused by uneven reflectivity of the object surface.
The basic idea is as follows: a grating with a sinusoidal light intensity distribution is projected as structured light onto the surface of the measured workpiece, and the detected intensity of the sinusoidal grating, modulated by the surface height of the measured object, can be represented by the following formula:
I(x, y, δ_j) = A(x, y) + B(x, y)·cos[φ(x, y) + δ_j]   (2-1)
where I(x, y, δ_j) is the light intensity function, A(x, y) is the background light intensity, B(x, y) is the modulation depth of the fringes, δ_j is the phase shift value, and φ(x, y) is the phase of the object point, representing the height information of the measured object. The invention adopts a standard four-step phase shift method to calculate the wrapped phase of the grating image; the phase shifts of the four grating images are respectively:

δ_0 = 0, δ_1 = π/2, δ_2 = π, δ_3 = 3π/2
From (2-1), the four intensity equations are obtained:

I_0 = A + B·cos(φ), I_1 = A − B·sin(φ), I_2 = A − B·cos(φ), I_3 = A + B·sin(φ)   (2-2)
Combining the four equations, the phase function can be solved:

φ(x, y) = arctan[ (I_3(x, y) − I_1(x, y)) / (I_0(x, y) − I_2(x, y)) ], −π ≤ φ ≤ π   (2-3)
the phase determined here is the wrapped phase.
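A minimal numpy sketch of (2-3); using arctan2 resolves the quadrant so the result spans the full (−π, π] range stated above:

```python
import numpy as np

# I0..I3: fringe images captured at phase shifts 0, pi/2, pi, 3*pi/2,
# as numpy arrays of identical shape.
def wrapped_phase(I0, I1, I2, I3):
    return np.arctan2(I3 - I1, I0 - I2)
```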
The wrapped phase calculated by the phase shift algorithm is unique within one phase period, but has a plurality of grating stripes in the whole measuring space, the wrapped phase is distributed in a zigzag shape, and phase unwrapping must be performed on phase values of spatial points to obtain continuous unwrapped phase values. There are many methods for phase unwrapping algorithms, which are mainly classified into two categories, spatial phase unwrapping and temporal phase unwrapping. When spatial phase unwrapping is adopted, the profile of the surface to be measured is mostly required to be continuous, and the surface with discontinuity or steps is difficult to measure. Therefore, the time-phase unwrapping algorithm is widely used. Among them, the gray code plus phase shift method and the multi-frequency heterodyne method are most applied. However, in the two methods, the multi-frequency heterodyne method has higher measurement accuracy and stronger anti-interference capability, and is gradually receiving more and more attention.
The basic process is to project a series of fringe patterns with different frequencies onto the surface of the measured object and to shoot synchronously, obtaining a group of deformed gratings modulated by the object surface; the phase of each point is calculated independently along this time sequence. Boundaries and regions with a low signal-to-noise ratio therefore do not affect other, better data points, and error propagation is avoided in principle.
At present, typical multi-frequency heterodyne algorithms include linear phase unwrapping, fitting exponential phase unwrapping, fitting negative exponential phase unwrapping, Fourier-corrected phase unwrapping and others. The linear phase unwrapping algorithm needs to project fringe sets whose fringe number runs from 1 to s, so the four-step phase shift method requires 4s pictures, making the measurement time and calculation process too cumbersome. The present application proposes a simplified algorithm, the fitting exponential phase unwrapping algorithm, which typically needs to project only 5 or 6 sets of fringes to reach the accuracy achieved by 32 or 64 sets. But since the fitted data are concentrated at the smaller t values of the Φ-t curve, the error is larger, and the method can be further improved: the starting point is changed to the position of the maximum fringe number (t = s), and the fringe numbers are then successively reduced by 1, 2, 4, 8, ..., s/2, so that the measured phase values all cluster at the larger t values of the Φ-t curve and the error is reduced.
In this embodiment: the invention selects a fitting negative exponential phase unwrapping algorithm to carry out phase calculation. The specific calculation method comprises the following steps:
Step one: solve the wrapped phase map Φw(m, n, t) after each set of fringe measurements;
Step two: solve the phase difference at the same point on the wrapped phase maps of two adjacent sets of fringes, together with the number of 2π discontinuities:
ΔΦw(m,n,t)=Φw(m,n,t)-Φw(m,n,t-1) (2-4)
d(m,n,t)=NINT(ΔΦw(m,n,t)/2π) (2-5)
where ΔΦw(m, n, t) is the wrapped phase difference at the same point (m, n) between the t-th and (t−1)-th sets of frequencies, and the NINT operation denotes rounding to the nearest integer; the subscript w represents the wrapped phase, t = 2, 3, ..., s, and d(m, n, 1) = 0 is specified.
Step three: unwrap the phase.

When the fitting negative exponential phase unwrapping algorithm is adopted, only the wrapped phase values at fringe numbers s, s−1, s−2, s−4, ..., s/2 are used (s is the maximum fringe number). For t' = 1, 2, 4, ..., s/2:

ΔΦu(s−t', s−2t') = U{ΔΦw(s−t', s−2t'), ΔΦu(s, s−t')}   (2-7)

ΔΦu(s, s−2t') = ΔΦu(s−t', s−2t') + ΔΦu(s, s−t')   (2-8)

where:

ΔΦw(2t, t) = Φw(2t) − Φw(t)

ΔΦu(s, s−1) = ΔΦw(s, s−1)

ΔΦu(s, 0) = Φu(s)

U{Φ1, Φ2} = Φ1 − 2π·NINT[(Φ1 − Φ2)/(2π)]
In the ideal case, Φu(s) is the final unwrapped phase value at that point. In practice, however, the intermediate phase values are also used for reliability: the intermediate data are least-squares fitted to obtain the fitted slope w, and

Φu = w × s   (2-10)

Φu is then the final unwrapped phase of that point.
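A minimal numpy sketch of steps two and three under the conventions above; `wrapped` is assumed to be a dict mapping fringe number t to the wrapped phase map Φw(·, ·, t), with Φw at fringe number 0 taken as zero:

```python
import numpy as np

def wrap_to_pi(x):
    """Reduce a phase difference into (-pi, pi]."""
    return x - 2 * np.pi * np.rint(x / (2 * np.pi))

def U(phi1, phi2):
    """U{phi1, phi2} = phi1 - 2*pi*NINT[(phi1 - phi2)/(2*pi)]."""
    return phi1 - 2 * np.pi * np.rint((phi1 - phi2) / (2 * np.pi))

def unwrap_negative_exponential(wrapped, s):
    # Delta Phi_u(s, s-1) = Delta Phi_w(s, s-1)
    acc = wrap_to_pi(wrapped[s] - wrapped[s - 1])
    t = 1
    while 2 * t <= s:                        # t' = 1, 2, 4, ..., s/2
        lo = wrapped.get(s - 2 * t, 0.0)     # Phi_w(0) treated as 0
        dw = wrap_to_pi(wrapped[s - t] - lo)  # Delta Phi_w(s-t', s-2t')
        du = U(dw, acc)                       # (2-7)
        acc = du + acc                        # (2-8) -> Delta Phi_u(s, s-2t')
        t *= 2
    return acc                               # Delta Phi_u(s, 0) = Phi_u(s)
```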
In this embodiment: in S6, the frequencies are 1, 1/4, 1/16, 1/64 and 1/256 respectively. The first four frequencies adopt a four-step phase shift method, i.e. 4 images are acquired for each, while the sinusoidal light with frequency 1/256 adopts an eight-step phase shift method, i.e. 8 images are acquired; these 24 images plus the white light projection image form an object reconstruction image set of 25 images per measurement.
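A minimal sketch of composing such a 25-pattern projection set, interpreting the listed values as frequencies relative to a base fringe count N0 (N0 and the projector resolution are hypothetical assumptions, not values from the patent):

```python
import numpy as np

def build_pattern_set(width=1280, height=800, N0=256):
    """16 four-step images + 8 eight-step images + 1 white-light image = 25."""
    x = np.arange(width) / width
    patterns = []
    for f, steps in [(1, 4), (1/4, 4), (1/16, 4), (1/64, 4), (1/256, 8)]:
        fringes = f * N0                      # fringe count under the N0 assumption
        for j in range(steps):
            delta = 2 * np.pi * j / steps     # phase shift delta_j
            row = 0.5 + 0.5 * np.cos(2 * np.pi * fringes * x + delta)
            patterns.append(np.tile(row, (height, 1)))
    patterns.append(np.ones((height, width)))  # white-light image for coloring
    return patterns                             # 25 images
```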
In this embodiment: in S8, the reconstruction data is stored by saving it automatically under the storage path selected before the project was newly created, and the reconstruction data is a point cloud file in .ply format.
In this embodiment: in the step S9, a conversion relation matrix between the two pieces of point cloud is obtained through a mark point stitching algorithm, so that the two pieces of point cloud are stitched.
The marked point stitching algorithm is specifically as follows.
Stitching two point clouds requires a rigid motion: a rotation R and a translation T. To finally determine the absolute position between the cameras, the transformation relationship between the two point clouds must be obtained, which can be realized by the stitching algorithm, specifically:
Suppose there are two point clouds {m_i} and {d_i}, i = 1, 2, ..., N, where {m_i} and {d_i} have the same scale and share N (N ≥ 3) common points. The point cloud {m_i} is to be aligned into the coordinate system of point cloud {d_i}, i.e.:

d_i = R·m_i + T + V_i   (3-1)
where V_i is a noise vector. To obtain the optimal solution (R̂, T̂), the least squares method is used to minimize the following relation:

Σ² = Σ_{i=1}^{N} ||d_i − (R·m_i + T)||²   (3-2)

Since the point sets {m_i} and {d_i} have the same center of gravity after alignment, from this constraint the centroids m̄ = (1/N)·Σ m_i and d̄ = (1/N)·Σ d_i and the centered points m_i' = m_i − m̄, d_i' = d_i − d̄ can be introduced.
Equation (3-2) can then be rewritten as:

Σ² = Σ_{i=1}^{N} ||d_i' − R·m_i'||² = Σ_{i=1}^{N} (d_i'^T·d_i' + m_i'^T·m_i' − 2·d_i'^T·R·m_i')   (3-3)

Minimizing this equation corresponds to maximizing its last term, which is equivalent to maximizing Trace(R·H), where H is defined as the relationship matrix:

H = Σ_{i=1}^{N} m_i'·d_i'^T   (3-4)
Suppose H is decomposed by SVD into H = U·Λ·V^T; then the rotation matrix R̂ can be obtained according to:

R̂ = V·U^T   (3-5)

and then:

T̂ = d̄ − R̂·m̄   (3-6)
therefore, a conversion relation matrix between the two pieces of point clouds can be obtained.
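A minimal numpy sketch of (3-1)-(3-6), including the usual determinant check against reflections (a standard safeguard, not spelled out in the text):

```python
import numpy as np

def rigid_transform(m, d):
    """m, d: (N, 3) arrays of corresponding marker points; returns R, T
    such that d ~= R m + T in the least squares sense."""
    m_bar, d_bar = m.mean(axis=0), d.mean(axis=0)
    H = (m - m_bar).T @ (d - d_bar)      # relationship matrix (3-4)
    U_, S, Vt = np.linalg.svd(H)         # H = U Lambda V^T
    R = Vt.T @ U_.T                      # (3-5)
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U_.T
    T = d_bar - R @ m_bar                # (3-6)
    return R, T
```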
In this embodiment: the marked points are identical points on the surface of the same object that correspond across two or more point clouds.
In this embodiment: in S10, the global optimization and coloring specifically comprise automatically finding the marker positions according to the marked particles pasted on the object surface before shooting and moving the relative positions of the multiple point clouds to optimize the reconstructed model, while performing color rendering of the model through the OpenGL graphics library.
When the PMP structured light technique is used, the grating fringes are projected not with Gray codes plus the traditional phase shift method but as sinusoidal grating fringe patterns in a multi-frequency heterodyne mode, yielding better reconstruction precision and reconstruction speed. In addition, the invention adds a point cloud stitching ICP algorithm: multiple point clouds are overlaid together by means of marked points, so the model can be supplemented from additional viewing angles and its reconstruction details enriched.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent replacement or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to its technical solutions and inventive concept, shall fall within the protection scope of the present invention.
Claims (7)
1. An improved three-dimensional reconstruction method, comprising the steps of:
s1: establishing a three-dimensional reconstruction system, wherein the three-dimensional reconstruction system comprises hardware and software, and the hardware comprises a binocular camera and a DLP projector;
s2: connecting the software with hardware through a visual interface;
s3: carrying out system calibration;
s4: acquiring available parameters of the binocular camera by adopting a calibration method;
s5: newly building a reconstruction project, including automatic splicing of the mark points and single measurement;
s6: scanning the surface of an object by using a Digital Light Processing (DLP) projector through a multi-frequency heterodyne three-dimensional reconstruction technology, namely projecting sinusoidal grating stripes with multiple frequencies on the surface of the object to finish single measurement;
s7: displaying the three-dimensional model after single measurement and removing redundant scanning reconstruction parts;
s8: after the model reconstruction is completed, globally optimizing and coloring the model, and storing reconstructed data; or continuing to scan the object for later multi-layer point cloud splicing, and if mark point splicing is executed, turning to the step S6;
s9: after a new point cloud model is acquired, splicing the new point cloud with the old model;
s10: and carrying out global optimization and coloring on the multilayer point cloud model, and storing point cloud data after finishing the global optimization and coloring.
2. An improved three-dimensional reconstruction method as claimed in claim 1, wherein in said S6, a fitting negative exponential phase unwrapping algorithm is selected for phase calculation.
3. An improved three-dimensional reconstruction method as claimed in claim 1, wherein in said S6 the plurality of frequencies are 1, 1/4, 1/16, 1/64 and 1/256 respectively; the first four frequencies adopt a four-step phase shift method, i.e. 4 images are collected for each, while the sinusoidal light with frequency 1/256 adopts an eight-step phase shift method, i.e. 8 images are collected; these 24 images plus the white light projection image form an object reconstruction image set of 25 images per measurement.
4. The improved three-dimensional reconstruction method according to claim 1, wherein in S8, the reconstruction data is saved automatically under the storage path selected before the project is newly created, and the reconstruction data is a point cloud file in .ply format.
5. The improved three-dimensional reconstruction method of claim 1, wherein in S9, a transformation relation matrix between two pieces of point cloud is obtained by a mark point stitching algorithm, so as to achieve stitching of the two pieces of point cloud.
6. The improved three-dimensional reconstruction method of claim 5, wherein the marked points are identical points on the surface of the same object that correspond across two or more point clouds.
7. The improved three-dimensional reconstruction method according to claim 1, wherein in the S10, the global optimization and rendering specifically includes automatically finding a particle position according to a marked particle pasted on the surface of the object before shooting and moving relative positions of a plurality of point clouds to optimize the reconstruction model, and simultaneously performing color rendering on the model through an OpenGL graphics library.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210489218.2A CN115147540A (en) | 2022-05-07 | 2022-05-07 | Improved three-dimensional reconstruction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210489218.2A CN115147540A (en) | 2022-05-07 | 2022-05-07 | Improved three-dimensional reconstruction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115147540A true CN115147540A (en) | 2022-10-04 |
Family
ID=83407248
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210489218.2A Pending CN115147540A (en) | 2022-05-07 | 2022-05-07 | Improved three-dimensional reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115147540A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117333649A (en) * | 2023-10-25 | 2024-01-02 | 天津大学 | Optimization method for high-frequency line scanning dense point cloud under dynamic disturbance |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322544A (en) * | 2019-05-14 | 2019-10-11 | 广东康云科技有限公司 | A kind of visualization of 3 d scanning modeling method, system, equipment and storage medium |
CN111750805A (en) * | 2020-07-06 | 2020-10-09 | 山东大学 | Three-dimensional measuring device and method based on binocular camera imaging and structured light technology |
WO2021088481A1 (en) * | 2019-11-08 | 2021-05-14 | 南京理工大学 | High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322544A (en) * | 2019-05-14 | 2019-10-11 | 广东康云科技有限公司 | A kind of visualization of 3 d scanning modeling method, system, equipment and storage medium |
WO2020228326A1 (en) * | 2019-05-14 | 2020-11-19 | 广东康云科技有限公司 | Visual three-dimensional scanning modeling method, system and device, and storage medium |
WO2021088481A1 (en) * | 2019-11-08 | 2021-05-14 | 南京理工大学 | High-precision dynamic real-time 360-degree omnibearing point cloud acquisition method based on fringe projection |
CN111750805A (en) * | 2020-07-06 | 2020-10-09 | 山东大学 | Three-dimensional measuring device and method based on binocular camera imaging and structured light technology |
Non-Patent Citations (2)
Title |
---|
Yu Liangwei: "Research on Structured Light Measurement Technology for Centrifugal Pump Impellers", China Master's Theses Full-text Database (Engineering Science and Technology II) *
Han Cheng, National Defense Industry Press *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117333649A (en) * | 2023-10-25 | 2024-01-02 | 天津大学 | Optimization method for high-frequency line scanning dense point cloud under dynamic disturbance |
CN117333649B (en) * | 2023-10-25 | 2024-06-04 | 天津大学 | Optimization method for high-frequency line scanning dense point cloud under dynamic disturbance |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11808564B2 (en) | Calibration method for fringe projection systems based on plane mirrors | |
US9605961B2 (en) | Information processing apparatus that performs three-dimensional shape measurement, information processing method, and storage medium | |
US20120176478A1 (en) | Forming range maps using periodic illumination patterns | |
JP2019516983A (en) | Three-dimensional scanning system and scanning method thereof | |
CN111563564A (en) | Speckle image pixel-by-pixel matching method based on deep learning | |
CN110940295B (en) | High-reflection object measurement method and system based on laser speckle limit constraint projection | |
CN107610183B (en) | Calibration method of fringe projection phase height conversion mapping model | |
CN105043298A (en) | Quick three-dimensional shape measurement method without phase unwrapping based on Fourier transform | |
CN108195313A (en) | A kind of high dynamic range method for three-dimensional measurement based on Intensity response function | |
CN110702025B (en) | Grating type binocular stereoscopic vision three-dimensional measurement system and method | |
CN113091646B (en) | Three-dimensional shape measurement method based on stripe calibration | |
CN111724464A (en) | Mobile measurement point cloud coloring method and device | |
CN111649694B (en) | Implicit phase-parallax mapping binocular measurement missing point cloud interpolation method | |
CN112945089A (en) | Structured light coding method based on stripe width modulation | |
CN115147540A (en) | Improved three-dimensional reconstruction method | |
CN112562007B (en) | Three-mesh constraint-based wrapping phase non-unfolding fast stereo matching method | |
CN113551617A (en) | Binocular double-frequency complementary three-dimensional surface type measuring method based on fringe projection | |
CN117450955B (en) | Three-dimensional measurement method for thin object based on space annular feature | |
CN113237436A (en) | Device and method for measuring shape of polarized light space phase shift non-Lambertian metal object | |
Petković et al. | Multiprojector multicamera structured light surface scanner | |
Li et al. | Fast phase-based stereo matching method for 3D shape measurement | |
CN116518869A (en) | Metal surface measurement method and system based on photometric stereo and binocular structured light | |
CN114998518A (en) | Three-dimensional reconstruction method and system for visual structured light | |
CN110006364B (en) | Three-dimensional real-time microscopic measurement method based on circular stripe radial space carrier phase shift | |
CN114264253A (en) | High-temperature object three-dimensional contour non-contact measuring device and measuring method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20221004 |