CN115451866B - Three-dimensional measurement method of highly reflective surface based on light field equivalent camera array model - Google Patents
- Publication number
- CN115451866B (application CN202210967265.3A)
- Authority
- CN
- China
- Legal status: Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
Building on conventional fringe projection 3D measurement technology, the present invention proposes a light field equivalent camera array model and combines it with a light field camera to perform 3D measurement of highly reflective surfaces. The light field equivalent camera array model of the present invention avoids the cumbersome calibration process and complex calibration results of the light field camera, achieving fast and accurate calibration. The highly reflective surface 3D measurement method based on this model effectively solves the problem of information loss in the 3D reconstruction of highly reflective surfaces.
Description
Technical Field
The present invention relates to a three-dimensional reconstruction method, and in particular to a three-dimensional reconstruction method for objects with highly reflective regions.
Background
Fringe projection profilometry is widely used to measure the three-dimensional surface topography of an object. A phase-shifted fringe pattern is projected onto the object surface, where it is deformed, and is then captured by a camera; the three-dimensional coordinates of the point corresponding to each camera pixel can be computed from the phase information contained in the captured deformed fringe patterns. However, because objects differ in surface machining and surface roughness, both specular and diffuse reflection occur when light strikes the surface. Specular reflection is directional and is the main cause of image oversaturation, in which the incident light intensity far exceeds the maximum gray value of the camera's image sensor. Oversaturation introduces large errors into the subsequent phase demodulation and phase unwrapping, leading to phase disorder and ultimately producing point cloud data that is either error-ridden or overly sparse.
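As a hedged illustration of the demodulation step described above (not code from the patent), a four-step phase-shift sequence I_k = A + B·cos(φ + kπ/2) yields the wrapped phase in closed form, and oversaturated pixels, where the formula breaks down, can be flagged beforehand; the function names here are our own:

```python
import numpy as np

def wrapped_phase(images):
    """Four-step phase-shift demodulation.

    images: four fringe images I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    Returns the wrapped phase phi in (-pi, pi].
    """
    I0, I1, I2, I3 = [np.asarray(im, dtype=float) for im in images]
    # I3 - I1 = 2B*sin(phi), I0 - I2 = 2B*cos(phi)
    return np.arctan2(I3 - I1, I0 - I2)

def saturated_mask(image, max_gray=255):
    """Flag oversaturated pixels, where the recovered phase is unreliable."""
    return np.asarray(image) >= max_gray
```

The mask lets later steps exclude phase values computed from clipped intensities rather than propagate them into the point cloud.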
Summary of the Invention
Light field imaging technology records both the position and the direction of light rays. Compared with a traditional camera, a light field camera therefore captures four-dimensional light field information, from which multi-view images of the scene, called sub-aperture images, can be extracted. Sub-aperture images from different viewing angles collect different ray information for the same world point, so the sub-aperture images exhibit a degree of difference and complementarity. In principle, scene information lost to overexposure in the image recorded by one sub-aperture can be supplemented by sub-aperture images from other viewing angles. Therefore, building on conventional fringe projection 3D measurement, the present invention proposes a method and system for 3D measurement of highly reflective surfaces based on a light field equivalent camera array model, which effectively solves the problem of information loss in the 3D reconstruction of highly reflective surfaces.
According to a first aspect of an embodiment of the present invention, a light field equivalent camera array model is proposed, which avoids the cumbersome calibration process and complex calibration results of the light field camera and achieves fast and accurate calibration.
According to a second aspect of an embodiment of the present invention, a method for three-dimensional measurement of a highly reflective surface is proposed. A calibrated light field camera performs multi-view three-dimensional reconstruction of the highly reflective surface, yielding multi-directional point cloud information of the surface at different viewing angles. The light field camera is regarded as an equivalent camera array consisting of a main camera and auxiliary cameras: the central viewing angle of the sub-aperture images is taken as the main camera's viewing angle, the auxiliary cameras' images correspond to the remaining viewing angles, and the coordinate systems of all auxiliary cameras are unified into the main camera coordinate system. The point cloud reconstructed under the main camera is selected as the target point cloud, and the point clouds reconstructed under the auxiliary cameras serve as auxiliary point clouds. All reconstructed point clouds are unified into the main camera coordinate system and then projected onto the XOY plane. From the projection images, connected domain analysis extracts the pixel position and size of the missing part of each point cloud projection, identifying the different-view point cloud information useful for repairing the target point cloud. Complementary point clouds are then extracted from the different-view point clouds and spliced into the corresponding regions of the target point cloud. The target point cloud is repaired iteratively in this way until the information lost to overexposure is fully restored, achieving complete three-dimensional measurement of highly reflective objects.
According to a third aspect of an embodiment of the present invention, a system for three-dimensional measurement of a highly reflective surface is proposed, comprising a light field camera and a three-dimensional reconstruction module. The module is configured to use the calibrated light field camera to perform multi-view three-dimensional reconstruction of the highly reflective surface, yielding multi-directional point cloud information at different viewing angles. The light field camera is regarded as an equivalent camera array consisting of a main camera and auxiliary cameras: the central viewing angle of the sub-aperture images serves as the main camera's viewing angle, the auxiliary cameras' images correspond to the remaining viewing angles, and the coordinate systems of all auxiliary cameras are unified into the main camera coordinate system. The point cloud reconstructed under the main camera is the target point cloud and the point clouds reconstructed under the auxiliary cameras are auxiliary point clouds; all reconstructed point clouds are unified into the main camera coordinate system and projected onto the XOY plane. From the projection images, connected domain analysis extracts the pixel position and size of the missing part of each point cloud projection, identifying the different-view point cloud information useful for repairing the target point cloud; complementary point clouds are then extracted from the different-view point clouds and spliced into the corresponding regions of the target point cloud, repairing the target point cloud iteratively until the information lost to overexposure is fully restored and complete three-dimensional measurement of highly reflective objects is achieved.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings of the embodiments are briefly introduced below.
FIG. 1(a) is a schematic diagram of the light field camera model provided by an embodiment of the present invention.
FIG. 1(b) is a schematic diagram of the equivalent camera array model provided by an embodiment of the present invention.
FIG. 2 is a schematic diagram of the three-dimensional measurement system based on the light field equivalent array model provided by an embodiment of the present invention.
FIG. 3 is a schematic diagram of the sinusoidal phase-shifted fringes provided by an embodiment of the present invention.
FIG. 4 is a schematic diagram of the light field equivalent camera array three-dimensional measurement system provided by an embodiment of the present invention.
FIG. 5 is a schematic diagram of adaptive point cloud repair provided by an embodiment of the present invention.
FIG. 6 is a schematic diagram of the surface reconstruction of a metal cylinder provided by an embodiment of the present invention, where (a) is the original light field image, (b) the reconstruction result of traditional fringe projection, and (c) the reconstruction result of the present invention.
Detailed Description
The sub-aperture images extracted from the raw light field image are images of the scene from different viewing angles, so a focused light field camera is equivalent to a camera array: the four-dimensional light field L(s, t, u, v) simplifies to a two-dimensional L(u, v) array, with the angular resolution (s, t) of the light field camera used for positioning within the array. On this basis, the present invention proposes an equivalent camera array model for the focused light field camera. To calibrate a focused light field camera of unknown internal structure, it is regarded as an equivalent camera array composed of a main camera and auxiliary cameras, with the central viewing angle of the sub-aperture images selected as the main camera's viewing angle. Considering how sub-aperture images are extracted, all cameras in the model can be regarded as sharing the same intrinsic matrix, and the different viewing angles of the sub-aperture images are reflected in changes of camera pose. The imaging of the auxiliary cameras corresponds to the viewing angles other than the central one. The perspective projection from a world point P_w = (x, y, z)^T to the point m_m = (u_m, v_m)^T on the main camera image plane and the point m_a = (u_a, v_a)^T on an auxiliary camera image plane can therefore be expressed as:

s_m \tilde{m}_m = A_C [R_m | T_m] \tilde{P}_w,    s_a \tilde{m}_a = A_C [R_a | T_a] \tilde{P}_w

where the tilde denotes homogeneous coordinates, s_m and s_a are scale factors, A_C is the intrinsic matrix shared by the cameras, and [R_m | T_m] and [R_a | T_a] are the extrinsic matrices of the main camera and the auxiliary camera, respectively.
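The projection relation above can be sketched numerically. The function below is an illustrative implementation of s·m̃ = A_C [R | T] P̃_w, usable for the main or any auxiliary camera; the function name and array layout are our assumptions:

```python
import numpy as np

def project(A_c, R, T, P_w):
    """Pinhole projection s * m~ = A_c [R|T] P_w~ for a (3, N) block of world points.

    A_c: (3, 3) intrinsic matrix; R: (3, 3) rotation; T: (3,) translation.
    Returns (2, N) pixel coordinates after perspective division.
    """
    P_cam = R @ P_w + T.reshape(3, 1)   # world -> camera coordinates
    m_h = A_c @ P_cam                   # homogeneous pixel coordinates (scale s = depth)
    return m_h[:2] / m_h[2]             # divide out the scale factor
```

A point on the optical axis projects to the principal point, which gives a quick sanity check of the intrinsic matrix.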
To avoid inconsistent reconstructions of the same world point by the main camera and the auxiliary cameras, the coordinate systems of all auxiliary cameras must be unified into the main camera coordinate system; the main camera coordinate system is therefore taken as the equivalent camera array coordinate system. In this case the position and pose of an auxiliary camera can be expressed relative to the main camera, and the pose change from an auxiliary camera to the main camera is a rigid body transformation. The rigid body transformation from the auxiliary camera coordinate system to the main camera coordinate system can thus be written as:

P_m = R_s P_a + T_s

where [R_s | T_s] are the structural parameters of the auxiliary camera, i.e. the rigid body transformation from the auxiliary camera coordinate system to the main camera coordinate system, and P_m and P_a are the three-dimensional coordinates of a point in the main camera and auxiliary camera coordinate systems, respectively. To distinguish the different auxiliary camera poses, C(i, j) denotes the camera in row i and column j of the equivalent camera array, P_{a,ij} denotes coordinates in the C(i, j) camera coordinate system, and m_{a,ij} denotes the coordinates in the corresponding image pixel coordinate system. In the main camera coordinate system, the projections of the main camera and an auxiliary camera can be expressed as:

s_m \tilde{m}_m = A_C [I | 0] \tilde{P}_m,    s_{a,ij} \tilde{m}_{a,ij} = A_C [R_{s,ij} | T_{s,ij}] \tilde{P}_m

where s_{a,ij} is the scale factor of the equivalent camera in row i, column j, I is the identity matrix, and R_{s,ij}, T_{s,ij} are the structural parameters of C(i, j). Unifying the P_{a,ij} computed under the different auxiliary camera viewing angles into the main coordinate system yields the equivalent camera array model of the focused light field camera, as shown in FIG. 1.
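The unification of all auxiliary point clouds into the main camera frame via P_m = R_s P_a + T_s can be sketched as follows; this is a minimal illustration, and the dictionary layout keyed by (i, j) is our assumption:

```python
import numpy as np

def unify_to_main(clouds, structure_params):
    """Map each auxiliary point cloud into the main-camera frame: P_m = R_s P_a + T_s.

    clouds: dict {(i, j): (3, N) array} of per-view point clouds.
    structure_params: dict {(i, j): (R_s, T_s)} rigid transforms to the main frame.
    Returns a dict of transformed (3, N) clouds.
    """
    unified = {}
    for ij, P_a in clouds.items():
        R_s, T_s = structure_params[ij]
        unified[ij] = R_s @ P_a + T_s.reshape(3, 1)   # rigid body transform per view
    return unified
```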
A three-dimensional measurement system for highly reflective surfaces based on the light field equivalent camera array model, shown in FIG. 2, was built from a DLP4500 structured light projector, a Raytrix R8 light field camera, and a host computer. Three groups of phase-shifted sinusoidal fringe patterns with a resolution of 912×1140 were generated in MATLAB, with fringe frequencies of 15, 12, and 10. Each group contains four horizontal and four vertical phase-shifted fringe patterns, with a phase difference of π/2 between adjacent patterns, giving 24 sinusoidal fringe patterns in total, as shown in FIG. 3. A checkerboard calibration board was placed at different positions in the light field camera's field of view, covering as many angles as possible; at each position the projector projected the 24 sinusoidal fringe patterns. With the calibration board placed in 10 different poses, 10 groups of pattern sequences were obtained, each containing one checkerboard image and 24 sinusoidal fringe patterns, i.e. 25 images per group. After acquisition, the raw light field data captured by the light field camera were decoded with a light field sub-aperture image extraction algorithm. To ensure the accuracy of the system's three-dimensional reconstruction, the angular and spatial resolutions of the light field camera must be balanced: the angular resolution was set to 6×6, so that 36 sub-aperture views can be extracted from the raw image, and the spatial resolution was 1430×784, i.e. each extracted sub-aperture image has a resolution of 1430×784. Decoding the captured calibration patterns yields 36 groups of patterns, one group per viewing angle, each group consisting of the 250 calibration images captured at that viewing angle. Open-source calibration software was used to obtain the intrinsic and extrinsic parameters of each equivalent camera, the camera array structural parameters, and the intrinsic and extrinsic parameters of the projector.
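The sub-aperture decoding step can be illustrated with a heavily simplified sketch. Real focused-plenoptic (Raytrix) decoding involves microlens calibration and patch rendering; the function below only shows the reindexing for an idealized raw light field in which each microlens contributes an s×t block of angular samples, and all names and the raw layout are our assumptions:

```python
import numpy as np

def subaperture_views(raw, s=6, t=6):
    """Split an idealized raw light field into s*t sub-aperture images.

    raw: (H*s, W*t) array where each s x t block holds the angular samples
    under one microlens. Returns an array of shape (s, t, H, W), i.e. one
    H x W sub-aperture image per angular position (si, ti).
    """
    H, W = raw.shape[0] // s, raw.shape[1] // t
    L = raw.reshape(H, s, W, t)         # separate spatial and angular axes
    return L.transpose(1, 3, 0, 2)      # reorder to (s, t, H, W)
```

With the parameters in the text (s = t = 6, spatial resolution 1430×784), this reindexing would yield 36 sub-aperture views.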
After calibration has produced the equivalent camera array model and the projector's intrinsic and extrinsic matrices, the three-dimensional reconstruction of the measured object at different viewing angles, in a unified coordinate system, follows from:

s_{c,ij} \tilde{m}_{c,ij} = A_C E_{ij} \tilde{P}_c,    s_p \tilde{m}_p = A_p [R_p | T_p] \tilde{P}_c,    u_p = \Phi_i(u_c, v_c) W / (2\pi N)

where E_{ij} is the structural parameter of C(i, j) in the equivalent camera array, s_p is the projector scale factor, and s_{c,ij} is the scale factor of the equivalent camera in row i, column j. A_p is the intrinsic matrix of the projector, [R_p | T_p] is the extrinsic matrix of the projector, u_p is the horizontal coordinate of the corresponding point in the projector pixel coordinate system, Φ_i(u_c, v_c) is the absolute phase value at the point (u_c, v_c), W is the projected fringe width, and N is the maximum integer fringe period. P_c = R_c P_w + T_c gives the coordinates in the camera coordinate system, with R_c, T_c the extrinsic parameters of the camera. The main camera of the array is the camera at the central viewing angle, so the main camera coordinate system is chosen as the coordinate system of the measurement system, and E_{ij} can be defined as:

E_{ij} = [R_{s,ij} | T_{s,ij}],  with E_{ij} = [I | 0] at the central view i = (s+1)/2, j = (t+1)/2

where (s, t) is the angular resolution of the focused light field camera and R_{s,ij}, T_{s,ij} are the extrinsic parameters of the equivalent camera in row i, column j. The focused light field can therefore be used for multi-view three-dimensional reconstruction, yielding multi-directional point cloud information of the measured object at different viewing angles, as shown in FIG. 4.
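As a hedged numerical sketch of this reconstruction step, a camera ray through (u_c, v_c) can be intersected with the projector column selected by the absolute phase via u_p = Φ·W/(2πN). The linear solve below is a standard fringe-projection triangulation written to mirror the symbols above; the function name and the W, N defaults (912-pixel fringe width, frequency 15, matching the experiment) are our assumptions, not the patent's code:

```python
import numpy as np

def triangulate(A_c, A_p, R_p, T_p, u_c, v_c, phase, W=912, N=15):
    """Recover a 3D point in the camera frame from one pixel and its absolute phase.

    Maps phase -> projector column u_p = phase * W / (2*pi*N), then solves the
    3x3 linear system formed by the two camera constraints and one projector
    column constraint.
    """
    u_p = phase * W / (2 * np.pi * N)
    M_c = A_c @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera: A_c [I|0]
    M_p = A_p @ np.hstack([R_p, T_p.reshape(3, 1)])        # projector: A_p [R_p|T_p]
    rows = [u_c * M_c[2] - M_c[0],          # u_c constraint
            v_c * M_c[2] - M_c[1],          # v_c constraint
            u_p * M_p[2] - M_p[0]]          # projector column constraint
    A = np.array([r[:3] for r in rows])
    b = -np.array([r[3] for r in rows])
    return np.linalg.solve(A, b)            # point in the camera coordinate frame
```

Applying this per pixel of each equivalent camera, with its E_ij folded into the extrinsics, would produce the per-view point clouds.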
In the measurement system, highly reflective regions on the surface of the measured object can leave the point clouds reconstructed from multiple viewing angles incomplete: image overexposure produces holes in the reconstructed point clouds. Because the light intensity the camera records from the same world point differs between viewing angles, the position and size of the lost information differ between the point clouds reconstructed from different views of the highly reflective surface. An incomplete point cloud can therefore be repaired with complementary information from point clouds reconstructed at other viewing angles, as shown in FIG. 5.
In the light field equivalent camera array measurement system, the point cloud {P}_m reconstructed under the main camera is selected as the target point cloud, and the rest serve as auxiliary point clouds {P}_a. To avoid redundant computation, all reconstructed point clouds are unified into the main camera coordinate system and then projected onto the XOY plane; the projection images are obtained by:

B_m = \Omega_z(\{P\}_m),    B_a = \Omega_z(\{P\}_a)

where B_m is the binary projection image of the target point cloud, whose zero-pixel regions mark missing point cloud regions; B_a is the binary projection image of a source (auxiliary) point cloud; and Ω_z denotes projecting a point cloud onto the XOY plane. From the projection images, connected domain analysis extracts the pixel position and size of the missing part in each point cloud projection, which determines the different-view point cloud information useful for repairing the target point cloud, expressed as:
\Lambda = \Psi(B_m) \Delta \Psi(B_a)
where Ψ denotes connected domain analysis and A Δ B denotes extracting the coordinate set Λ from point set B at the positions of the zero pixels of point set A. Complementary point clouds can then be extracted from the different-view point clouds and spliced into the corresponding regions of the target point cloud. The repaired point cloud {P}_re can be expressed as:

\{P\}_{re} = \{P\}_m \oplus \delta_\Lambda(\{P\}_a)

where δ_Λ denotes extracting a point cloud according to the coordinate set Λ and ⊕ denotes point cloud splicing. Repeating the above steps repairs the target point cloud iteratively until the information lost to overexposure is fully restored, so that the light field equivalent camera array measurement system performs a complete three-dimensional measurement of the highly reflective object.
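One pass of the projection-and-splice repair loop can be sketched as below. This is our illustrative reading of the procedure: the occupancy grid size, the projection bounds, and the use of SciPy's connected-component labelling for Ψ are assumptions, not the patent's implementation:

```python
import numpy as np
from scipy import ndimage

def repair(target, aux_list, grid=512, bounds=(-1.0, 1.0)):
    """One repair pass: find holes in the target cloud's XOY projection and
    splice in auxiliary points that land inside those holes.

    target: (3, N) target point cloud; aux_list: list of (3, M) auxiliary clouds,
    all already unified into the main camera frame.
    """
    lo, hi = bounds
    def binarize(P):                                  # Omega_z: XOY occupancy image
        idx = ((P[:2] - lo) / (hi - lo) * (grid - 1)).astype(int)
        idx = np.clip(idx, 0, grid - 1)
        B = np.zeros((grid, grid), bool)
        B[idx[0], idx[1]] = True
        return B, idx
    B_m, _ = binarize(target)
    holes, _ = ndimage.label(~B_m)                    # Psi: connected empty regions
    patches = [target]
    for P_a in aux_list:
        _, idx_a = binarize(P_a)
        keep = holes[idx_a[0], idx_a[1]] > 0          # aux points falling in holes
        patches.append(P_a[:, keep])                  # delta_Lambda, then splice
    return np.concatenate(patches, axis=1)            # {P}_re
```

Iterating this function until no auxiliary points fall into remaining holes mirrors the iterative repair described above; a production version would additionally restrict the hole labels to the object silhouette.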
The point cloud processing described above can be implemented by a three-dimensional reconstruction module configured on the host computer. When the three-dimensional reconstruction module is implemented as a software function module and sold or used as an independent product, it can be stored in a computer-readable storage medium. On this understanding, the technical solution of the present application, or the part of it that contributes over the prior art, can be embodied in whole or in part as a software product stored in a storage medium, including instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Taking a polished metal cylinder as an example, the original light field image is shown in FIG. 6(a), the result of the traditional fringe projection three-dimensional reconstruction in FIG. 6(b), and the reconstruction result of the present invention in FIG. 6(c).
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210967265.3A (CN115451866B) | 2022-08-12 | 2022-08-12 | Three-dimensional measurement method of highly reflective surface based on light field equivalent camera array model |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115451866A | 2022-12-09 |
| CN115451866B | 2024-06-25 |
Family
ID=84298214
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210967265.3A | CN115451866B (Active) | 2022-08-12 | 2022-08-12 |
Citations (1)
| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CA3126592A1 | 2020-08-03 | 2022-02-03 | Institut De La Recherche Scientifique | Method and system for band-limited high-speed illumination profilometry with two objectives |
Family Cites Families (6)
| Publication Number | Priority Date | Publication Date | Title |
|---|---|---|---|
| CN107123156A | 2017-03-10 | 2017-09-01 | Active light source projection three-dimensional reconstruction method combined with binocular stereo vision |
| GB2569656B | 2017-12-22 | 2020-07-22 | Method and system for generating a three-dimensional image of an object |
| CN111750806B | 2020-07-20 | 2021-10-08 | Multi-view three-dimensional measurement system and method |
| CN113108721B | 2021-04-09 | 2022-02-15 | High-reflectivity object three-dimensional measurement method based on multi-beam adaptive complementary matching |
| CN113205592B | 2021-05-14 | 2022-08-05 | Light field three-dimensional reconstruction method and system based on phase similarity |
| CN113205593B | 2021-05-17 | 2022-06-07 | Highly reflective surface structured light field three-dimensional reconstruction method based on adaptive point cloud repair |
Non-Patent Citations (1)
Xing Wei; Zhang Fumin; Feng Wei; Qu Xinghua. Three-dimensional measurement method for highly glossy objects based on a digital micromirror device. Acta Optica Sinica, 2017-12-10 (12).
Also Published As
Publication number | Publication date |
---|---|
CN115451866A (en) | 2022-12-09 |
Similar Documents
| Publication | Title |
|---|---|
| CN110880185B | High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection |
| CN106595528B | Telecentric microscopic binocular stereo vision measurement method based on digital speckle |
| Nehab et al. | Efficiently combining positions and normals for precise 3D geometry |
| WO2018201677A1 | Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system |
| CN113205592B | Light field three-dimensional reconstruction method and system based on phase similarity |
| CN109307483A | Phase unwrapping method based on geometric constraints of structured light systems |
| Pagani et al. | Dense 3D point cloud generation from multiple high-resolution spherical images |
| CN107633518A | Product design detection method based on Kinect |
| US8350893B2 | Three-dimensional imaging apparatus and a method of generating a three-dimensional image of an object |
| Aliaga et al. | A self-calibrating method for photogeometric acquisition of 3D objects |
| KR100603602B1 | 3D mesh generation method using dense unaligned 3D measurement points |
| CN117994446B | Complementary three-dimensional reconstruction method and system based on polarization binocular line structured light fusion |
| CN115451866B | Three-dimensional measurement method of highly reflective surface based on light field equivalent camera array model |
| CN118482665A | Structured light three-dimensional measurement method and device using polarization self-rotation filtering |
| Hafeez et al. | The effect of patterns on image-based modelling of texture-less objects |
| KR20210065030A | Method and apparatus for reconstructing a three-dimensional image via a diffraction grating |
| Grifoni et al. | 3D multi-modal point clouds data fusion for metrological analysis and restoration assessment of a panel painting |
| CN117830435A | Multi-camera system calibration method based on 3D reconstruction |
| Yang et al. | Unidirectional structured light system calibration with auxiliary camera and projector |
| Skabek et al. | Comparison of photogrammetric techniques for surface reconstruction from images to reconstruction from laser scanning |
| CN113432550A | Large-size part three-dimensional measurement splicing method based on phase matching |
| CN119006464B | Defect detection method, device, system, computer equipment and readable storage medium |
| CN118537484A | Three-dimensional reconstruction method, device and equipment for high-reflectivity objects based on the phase shift method |
| Martins et al. | Camera calibration using reflections in planar mirrors and object reconstruction using volume carving method |
| JP7327484B2 | Template creation device, object recognition processing device, template creation method, object recognition processing method, and program |
Legal Events
| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |