
CN106846383A - High dynamic range image imaging method based on 3D digital microscopic imaging system - Google Patents

High dynamic range image imaging method based on 3D digital microscopic imaging system

Info

Publication number
CN106846383A
Authority
CN
China
Prior art keywords
image
high dynamic
focus
images
dynamic range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710057799.1A
Other languages
Chinese (zh)
Other versions
CN106846383B (en)
Inventor
郑驰 (Zheng Chi)
邱国平 (Qiu Guoping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Nottingham Ningbo China
Original Assignee
University of Nottingham Ningbo China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Nottingham Ningbo China filed Critical University of Nottingham Ningbo China
Priority to CN201710057799.1A priority Critical patent/CN106846383B/en
Publication of CN106846383A publication Critical patent/CN106846383A/en
Application granted granted Critical
Publication of CN106846383B publication Critical patent/CN106846383B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Microscopes, Condenser (AREA)

Abstract

The invention relates to a high dynamic range image imaging method based on a 3D digital microscopic imaging system. A high dynamic range image of the object to be observed is generated and an original high-dynamic multi-focus image sequence of the sample is acquired; a phase matching method and the Fourier transform are used for image registration and for shifting the images at the super-pixel level; the target object is then separated out with a foreground/background segmentation method. The segmented images are decomposed with a quadtree, the sharp image blocks in the image sequence are marked, and the height information corresponding to each image is recorded. Finally, the marked sharp image blocks are fused into the three-dimensional shape of the object to be observed, and a median filter is applied to the generated three-dimensional shape to remove the sawtooth (aliasing) effect caused by an insufficient sampling frequency, so that the generated three-dimensional shape of the object to be observed is smoother.

Description

High Dynamic Range Image Imaging Method Based on a 3D Digital Microscopic Imaging System

Technical Field

The invention relates to the technical field of high-definition, high-precision microscopic imaging and detection, and in particular to a high dynamic range image imaging method based on a 3D digital microscopic imaging system.

Background Art

Shape from Focus (SFF), a multi-focal-length 3D technique, is currently one of the most commonly used 3D techniques in digital microscopic image processing. It has attracted wide attention from researchers because it can recover the three-dimensional shape of an observed sample with nothing more than a conventional monocular microscope. Unlike stereo vision, which obtains depth information from a binocular lens pair, SFF only varies the distance between the observed object and the lens and detects the in-focus regions in each image, from which the depth information of the object can be reconstructed.

The main drawback of SFF, however, is that when the observed sample is highly reflective, the limited dynamic range of the captured images leaves some regions with too little detail, or even no detail at all, which greatly reduces the accuracy of the reconstructed three-dimensional shape. Nevertheless, much current research still concentrates on how the focus measure affects the accuracy of 3D shape reconstruction, while ignoring how the dynamic-range quality of the original images affects the reconstruction result.

High dynamic range imaging was proposed to overcome the effect of insufficient dynamic range in the captured images. With this technique a high dynamic range (HDR) image can be obtained: after calibration, images of the same scene taken with different exposure times are fused into a 32-bit high dynamic range radiance map of the scene. These 32-bit radiance maps reflect the dynamic range of the scene accurately and faithfully; local tone mapping then maps them to ordinary 8-bit images so that conventional display devices can show and store them. However, because of the high computational complexity of HDR imaging, the microscopic 3D reconstruction methods currently on the market are still limited in implementing this technique.

Summary of the Invention

The technical problem to be solved by the present invention is to provide, in view of the above prior art, a high dynamic range image imaging method based on a 3D digital microscopic imaging system. The method overcomes the inability of existing imaging methods to capture high-dynamic scenes clearly and, at the same time, accurately generates the three-dimensional shape of the object to be observed, giving the observer an all-round 3D stereoscopic view.

The technical solution adopted by the present invention to solve the above technical problem is a high dynamic range image imaging method based on a 3D digital microscopic imaging system, characterized in that it comprises the following steps:

Step 1: for the object to be observed on the microscope stage, adjust the height of the stage and use the camera to acquire a high-dynamic multi-focus image of every layer from the bottom to the top of the object, so as to obtain the original high-dynamic multi-focus image sequence required for three-dimensional imaging;

Step 2: register the original high-dynamic multi-focus image sequence with a phase matching method, so that the spatial position, scale and image size of each pair of consecutive images in the sequence correspond, yielding a registered high-dynamic multi-focus image sequence;

Step 3: for the registered high-dynamic multi-focus image sequence, extract the observed-sample region for which the three-dimensional shape is to be generated, using a foreground/background segmentation method based on background accumulation;

Step 4: segment the observed-sample region with a quadtree decomposition method, detect the sharp part of every image in the high-dynamic multi-focus sequence, and record the height information corresponding to each image;

Step 5: fuse the detected sharp parts of the images to generate the three-dimensional shape of the object to be observed.

Further, the process of acquiring the high dynamic range image of each layer with the camera in step 1 comprises:

(a) calibrating the response curve of the camera; (b) acquiring images of the same scene at different exposure values; (c) generating a 32-bit radiance map of the scene using the calibrated response curve; (d) mapping the 32-bit radiance map to an ordinary 8-bit image with local tone mapping, and saving the ordinary image in a format that a computer can display and store.
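
The per-layer HDR acquisition in (a)-(d) follows the classic multi-exposure pipeline. Below is a minimal sketch using OpenCV's Debevec calibration and Drago tone mapping; the exposure times and file names are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

# Assumed exposure bracket for one focal plane (illustrative values, in seconds).
exposure_times = np.array([1/60, 1/15, 1/4, 1.0], dtype=np.float32)
images = [cv2.imread(f"layer_exp{i}.png") for i in range(4)]  # hypothetical file names

# (a) Calibrate the camera response curve from the bracketed exposures.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)

# (c) Fuse the exposures into a 32-bit radiance map of the scene.
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)      # float32 radiance map

# (d) Local tone mapping down to an ordinary 8-bit image for display and storage.
tonemap = cv2.createTonemapDrago(gamma=1.0)
ldr = tonemap.process(hdr)                                  # values roughly in [0, 1]
cv2.imwrite("layer_hdr_8bit.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```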

Further, in step 1, the original high-dynamic multi-focus image sequence is obtained as follows: first, by moving the stage in height, the distance between the object to be observed and the objective lens of the microscope is changed, producing an image sequence of different focal planes of the monocular microscope; second, the height information of each focal-plane image is recorded; third, focus detection is performed on each focal-plane image, and the pixels with the greatest focus sharpness in each focal-plane image are recorded for the subsequent three-dimensional shape reconstruction.

Specifically, in step 2, the registration of the original high-dynamic multi-focus image sequence by the phase matching method comprises:

first, in the original high-dynamic multi-focus image sequence, for every pair of consecutive images, converting each image of the pair into a grayscale image, giving a grayscale image pair;

second, extracting the phase information of each frequency band from the converted grayscale image pair with complex band-pass filters;

third, using the extracted phase information, shifting the grayscale image pair at the super-pixel level by means of the Fourier transform, so as to guarantee the positional consistency of the two consecutive images;

finally, repeating this process for every pair of images in the original high-dynamic multi-focus image sequence until the scale and displacement of all images in the sequence are consistent.
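
One common way to realize the Fourier-domain registration described above is phase correlation: the translation between two consecutive grayscale frames is read off the phase of their cross-power spectrum, and the shift is then applied via the Fourier shift theorem. The NumPy sketch below estimates integer-pixel shifts only and omits the per-band complex band-pass filtering; it is an illustration, not the patent's exact procedure.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the (dy, dx) translation that aligns `mov` to `ref`
    from the phase of their cross-power spectrum."""
    F1 = np.fft.fft2(ref.astype(np.float64))
    F2 = np.fft.fft2(mov.astype(np.float64))
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts larger than half the image wrap around to negative values.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def shift_by_fourier(img, dy, dx):
    """Translate an image by (dy, dx) using the Fourier shift theorem."""
    rows = np.fft.fftfreq(img.shape[0])[:, None]
    cols = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (dy * rows + dx * cols))
    return np.fft.ifft2(np.fft.fft2(img) * ramp).real
```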

Specifically, the segmentation of the observed-sample region by the quadtree decomposition method in step 4 comprises:

first, feeding the original high-dynamic multi-focus image sequence into the quadtree as the root layer;

second, setting an image decomposition condition and processing each layer of the quadtree according to whether it satisfies the condition:

if a layer of images satisfies the image decomposition condition, the images of this layer are split into four and passed to the next layer of the quadtree, and so on, until even the smallest image blocks obtained by decomposing the image sequence no longer satisfy the image decomposition condition, at which point the quadtree decomposition ends; the image decomposition condition is set as follows:

for every decomposed image block at each layer of the image sequence in the quadtree, its maximum difference of focus measure MDFM and its gradient difference SMDG are computed; the formulas for MDFM and SMDG are respectively:

MDFM = FMmax − FMmin;

SMDG = ΣΣ[gradmax(x, y) − gradmin(x, y)] = ΣΣ gradmax(x, y) − ΣΣ gradmin(x, y);

where FMmax is the maximum of the focus measure and FMmin is the minimum of the focus measure; gradmax(x, y) is the maximum gradient value and gradmin(x, y) is the minimum gradient value;

for a layer of image blocks in the quadtree, if MDFM ≥ 0.98 × SMDG, indicating that a fully focused image block exists in the image sequence of this layer, the image blocks of this layer are not decomposed further; otherwise, the image blocks of this layer continue to be decomposed until all images in the quadtree have been decomposed into sub-image blocks that cannot be decomposed any further.
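
The decomposition rule above can be sketched as a recursive routine that keeps splitting a block into four quadrants until the MDFM ≥ 0.98 × SMDG test indicates a fully focused block or the block becomes too small. The sketch below assumes the per-image gradient maps have already been computed and stacked; the minimum block size is an assumption.

```python
import numpy as np

def quadtree_decompose(grads, y0, y1, x0, x1, leaves, min_size=8):
    """Recursively split a region of the gradient stack `grads` (shape (n, H, W))
    until MDFM >= 0.98 * SMDG holds for the block or it reaches `min_size`.
    Accepted blocks (y0, y1, x0, x1) are collected in `leaves`."""
    block = grads[:, y0:y1, x0:x1]
    fm = block.reshape(block.shape[0], -1).sum(axis=1)        # FM_i of this block
    mdfm = fm.max() - fm.min()                                # focus-measure spread
    smdg = (block.max(axis=0) - block.min(axis=0)).sum()      # per-pixel gradient spread
    if mdfm >= 0.98 * smdg or (y1 - y0) <= min_size or (x1 - x0) <= min_size:
        leaves.append((y0, y1, x0, x1))
        return
    ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
    for ys, ye, xs, xe in [(y0, ym, x0, xm), (y0, ym, xm, x1),
                           (ym, y1, x0, xm), (ym, y1, xm, x1)]:
        quadtree_decompose(grads, ys, ye, xs, xe, leaves, min_size)

# usage: leaves = []; quadtree_decompose(grads, 0, grads.shape[1], 0, grads.shape[2], leaves)
```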

Specifically, the maximum focus measure FMmax and the minimum focus measure FMmin are obtained as follows:

first, the gradient matrix of every pixel in the root-layer images of the quadtree is computed as:

GMi = gradient(Ii), i = 1, 2, …, n;

where Ii is the i-th original high-dynamic multi-focus image, GMi is the gradient matrix corresponding to Ii, and n is the total number of images in the original high-dynamic multi-focus image sequence;

second, the largest and the smallest of all gradient matrices at every point of this layer of images are found:

GMmax = max(GMi(x, y)), i = 1, 2, …, n;

GMmin = min(GMi(x, y)), i = 1, 2, …, n;

third, the sum of the gradient matrix over all points of each image of this layer is computed:

FMi = Σx Σy gradi(x, y), i = 1, 2, …, n;

finally, the maximum and minimum of the above sums are found:

FMmax = max{FMi}, i = 1, 2, …, n;

FMmin = min{FMi}, i = 1, 2, …, n.
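
The quantities GMi, GMmax, GMmin, FMi, FMmax and FMmin defined above can be computed directly from the image stack. In the sketch below the gradient operator is taken to be the Sobel magnitude, which is an assumption, since the text only specifies "gradient"; grayscale input frames are also assumed.

```python
import numpy as np
from scipy import ndimage

def focus_measures(images):
    """Gradient matrices GM_i and focus measures FM_i for a multi-focus sequence
    of grayscale frames; returns (GM stack, GM_max, GM_min, FM_max, FM_min)."""
    grads = []
    for img in images:
        gray = np.asarray(img, dtype=np.float64)
        gx = ndimage.sobel(gray, axis=1)          # horizontal gradient
        gy = ndimage.sobel(gray, axis=0)          # vertical gradient
        grads.append(np.hypot(gx, gy))            # GM_i(x, y)
    grads = np.stack(grads)                       # shape (n, H, W)
    gm_max = grads.max(axis=0)                    # GM_max(x, y)
    gm_min = grads.min(axis=0)                    # GM_min(x, y)
    fm = grads.reshape(len(images), -1).sum(axis=1)   # FM_i = sum over (x, y)
    return grads, gm_max, gm_min, fm.max(), fm.min()
```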

Specifically, the fusion of the sharp parts of the images in step 5 comprises: taking all the obtained sharp parts as sharp sub-image blocks, recording their height information respectively, and fusing all the sharp sub-image blocks into one complete three-dimensional image of the observed sample.

As an improvement, step 5 further comprises: filtering the generated three-dimensional shape with a median filter, so as to remove the sawtooth (aliasing) effect caused by an insufficient sampling frequency and make the generated three-dimensional shape smoother.
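
Step 5 and the median-filtering improvement can be sketched as a per-pixel depth-from-focus fusion: at each pixel the frame with the largest gradient is taken as the in-focus one, the recorded stage heights give the depth, and a median filter smooths the resulting height map. The 5 × 5 window size is an assumption; grayscale frames are assumed as in the previous sketch.

```python
import numpy as np
from scipy import ndimage

def fuse_depth_and_texture(grads, heights, images):
    """Pick the sharpest frame per pixel, build the height map Z(x, y) from the
    recorded stage heights, and median-filter it to remove sampling 'sawtooth'."""
    best = np.argmax(grads, axis=0)                      # index of sharpest frame per pixel
    depth = np.asarray(heights, dtype=np.float64)[best]  # Z(x, y) from recorded heights
    depth = ndimage.median_filter(depth, size=5)         # smooth aliasing jaggies
    stack = np.stack(images)                             # (n, H, W) grayscale frames
    fused = np.take_along_axis(stack, best[None], axis=0)[0]   # all-in-focus texture
    return depth, fused
```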

Compared with the prior art, the present invention has the following advantages:

First, the high dynamic range image imaging method provided by the invention combines high dynamic range imaging, three-dimensional imaging and multi-depth-of-field image fusion. Image sequences of the same scene at different exposure times are acquired and a 32-bit radiance map of the scene is generated; the 32-bit radiance map is then mapped to an ordinary 8-bit image with local tone mapping and saved in a high dynamic range video format that a computer can display and store. Using tone mapping, the high dynamic range microscopic video can be displayed and transmitted in real time, so that the observer can watch the object to be observed dynamically in real time.

Second, since applying high dynamic range video involves a high computational complexity, the invention uses phase matching, quadtree decomposition and related methods so that the video signal can be processed in real time and a real-time microscopic video display can be generated, thereby reducing the computational complexity.

Third, the high dynamic range image imaging method of the invention can observe the object in real time and in high definition, overcoming the inability of current imaging techniques to resolve the reflective and non-reflective areas of a high-contrast sample at the same time.

Finally, the high dynamic range image imaging method of the invention can obtain an all-in-focus image synthesized from images with different focal points; while processing these images, the depth of each point in the image is obtained automatically, so that the three-dimensional coordinates of points on the object surface are recovered, providing strong auxiliary support for quality inspection of emerging materials.

Brief Description of the Drawings

Fig. 1 is a flow diagram of the high dynamic range image imaging method based on a 3D digital microscopic imaging system in Embodiment 1 of the present invention;

Fig. 2 is a schematic diagram of the 3D digital microscopic imaging system in Embodiment 1 of the present invention;

Fig. 3 shows the original high-dynamic multi-focus image sequence of the metal screw in Embodiment 1;

Fig. 4 compares the high dynamic range images of the metal screw obtained in Embodiment 1 with ordinary auto-exposure images; the left column shows the high dynamic range images and the right column the corresponding ordinary auto-exposure images;

Fig. 5 is a schematic diagram of the foreground extraction in Embodiment 1;

Fig. 6a is the 3D stereoscopic image without texture mapping generated from the high dynamic range images in Embodiment 1;

Fig. 6b is the 3D stereoscopic image with texture mapping generated from the high dynamic range images in Embodiment 1;

Fig. 6c is the 3D stereoscopic image without texture mapping generated from the original auto-exposure images in Embodiment 1;

Fig. 6d is the 3D stereoscopic image with texture mapping generated from the original auto-exposure images in Embodiment 1;

Fig. 6e is the ground-truth 3D stereoscopic image without texture mapping in Embodiment 1;

Fig. 6f is the ground-truth 3D stereoscopic image with texture mapping in Embodiment 1;

Fig. 7a is the 3D stereoscopic image without texture mapping generated from the high dynamic range images in Embodiment 2;

Fig. 7b is the 3D stereoscopic image with texture mapping generated from the high dynamic range images in Embodiment 2;

Fig. 7c is the 3D stereoscopic image without texture mapping generated from the original auto-exposure images in Embodiment 2;

Fig. 7d is the 3D stereoscopic image with texture mapping generated from the original auto-exposure images in Embodiment 2;

Fig. 7e is the ground-truth 3D stereoscopic image without texture mapping in Embodiment 2;

Fig. 7f is the ground-truth 3D stereoscopic image with texture mapping in Embodiment 2;

Fig. 8 compares, for Embodiment 2, the square root errors of the 3D shape generation method using high dynamic range images and of the method not using high dynamic range images;

Fig. 9a is the 3D stereoscopic image without texture mapping generated from the high dynamic range images in Embodiment 3;

Fig. 9b is the 3D stereoscopic image with texture mapping generated from the high dynamic range images in Embodiment 3;

Fig. 9c is the 3D stereoscopic image without texture mapping generated from the original auto-exposure images in Embodiment 3;

Fig. 9d is the 3D stereoscopic image with texture mapping generated from the original auto-exposure images in Embodiment 3;

Fig. 9e is the ground-truth 3D stereoscopic image without texture mapping in Embodiment 3;

Fig. 9f is the ground-truth 3D stereoscopic image with texture mapping in Embodiment 3;

Fig. 10 compares the square root errors of the 3D stereoscopic images generated from high dynamic range images and from the original auto-exposure images.

Detailed Description

The present invention is described in further detail below in conjunction with the accompanying drawings and embodiments.

Embodiment 1

As shown in Fig. 2, the 3D digital microscopic imaging system used in Embodiment 1 comprises a conventional optical microscope, a motorized stage that can move in any direction along the X, Y and Z axes, a CMOS camera and a computer. The object to be observed in Embodiment 1 is a metal screw, which is placed on the motorized stage. As shown in Fig. 1, the high dynamic range image imaging method based on the 3D digital microscopic imaging system in Embodiment 1 comprises the following steps:

Step 1: for the object to be observed on the microscope stage, here the metal screw, the height of the stage is adjusted so that the CMOS camera focuses on each layer of the object, i.e. each layer of the metal screw, and the camera acquires a high-dynamic multi-focus image of every layer from the bottom to the top of the screw, giving the original high-dynamic multi-focus image sequence required for three-dimensional imaging; the original high-dynamic multi-focus image sequence of the metal screw is shown in Fig. 3. Acquiring the high-dynamic multi-focus images of the object comprises two processes, high dynamic range image acquisition and multi-focus image acquisition; specifically, the high dynamic range image of each layer of the object is acquired as follows:

(a) the response curve of the camera is calibrated; (b) images of the same scene are acquired at different exposure values; (c) a 32-bit radiance map of the scene is generated using the calibrated response curve; (d) the 32-bit radiance map is mapped to an ordinary 8-bit image with local tone mapping, and the ordinary image is saved in a format that a computer can display and store. The high dynamic range image of each layer of the metal screw is shown in the left column of Fig. 4;

Step 2: the original high-dynamic multi-focus image sequence is registered with a phase matching method, so that the spatial position, scale and image size of each pair of consecutive images in the sequence correspond, yielding a registered high-dynamic multi-focus image sequence; the registration of the original sequence by the phase matching method comprises:

first, in the original high-dynamic multi-focus image sequence, for every pair of consecutive images, each image of the pair is converted into a grayscale image, giving a grayscale image pair;

second, the phase information of each frequency band is extracted from the converted grayscale image pair with complex band-pass filters;

third, using the extracted phase information, the grayscale image pair is shifted at the super-pixel level by means of the Fourier transform, so as to guarantee the positional consistency of the two consecutive images;

finally, this process is repeated for every pair of images in the original high-dynamic multi-focus image sequence until the scale and displacement of all images in the sequence are consistent.

Step 3: for the registered high-dynamic multi-focus image sequence, the observed-sample region for which the three-dimensional shape is to be generated is extracted with a foreground/background segmentation method based on background accumulation; as shown in Fig. 5, the background corresponding to the metal screw is extracted using inter-frame differences and then thresholded to obtain the foreground image;
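
The background-accumulation segmentation of step 3 can be sketched as follows: the background is accumulated as the per-pixel median over the registered sequence, and the foreground is obtained by thresholding the accumulated inter-frame difference. Both the use of the median as the accumulated background and the threshold value are assumptions for illustration.

```python
import numpy as np

def extract_foreground(stack, thresh=15.0):
    """Boolean foreground mask for a registered image stack of shape (n, H, W)."""
    stack = np.asarray(stack, dtype=np.float64)
    background = np.median(stack, axis=0)               # accumulated background
    diff = np.abs(stack - background).mean(axis=0)      # accumulated frame difference
    return diff > thresh                                # observed-sample region
```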

Step 4: the observed-sample region is segmented with a quadtree decomposition method, the sharp part of every image in the high-dynamic multi-focus sequence is detected, and the height information corresponding to each image is recorded;

the quadtree decomposition method used in Embodiment 1 is described as follows:

first, the original high-dynamic multi-focus image sequence is fed into the quadtree as the root layer;

second, an image decomposition condition is set, and each layer of the quadtree is processed according to whether it satisfies the condition:

if a layer of images satisfies the image decomposition condition, the images of this layer are split into four and passed to the next layer of the quadtree, and so on, until even the smallest image blocks obtained by decomposing the image sequence no longer satisfy the image decomposition condition, at which point the quadtree decomposition ends; the image decomposition condition is as follows:

for every decomposed image block at each layer of the image sequence in the quadtree, its maximum difference of focus measure MDFM and its gradient difference SMDG are computed; the formulas for MDFM and SMDG are respectively:

MDFM = FMmax − FMmin;

SMDG = ΣΣ[gradmax(x, y) − gradmin(x, y)] = ΣΣ gradmax(x, y) − ΣΣ gradmin(x, y);

where FMmax is the maximum of the focus measure and FMmin is the minimum of the focus measure, gradmax(x, y) is the maximum gradient value and gradmin(x, y) is the minimum gradient value; FMmax and FMmin are computed as follows:

first, the gradient matrix of every pixel in the root-layer images of the quadtree is computed as:

GMi = gradient(Ii), i = 1, 2, …, n;

where Ii is the i-th original high-dynamic multi-focus image, GMi is the gradient matrix corresponding to Ii, and n is the total number of images in the original high-dynamic multi-focus image sequence;

second, the largest and the smallest of all gradient matrices at every point of this layer of images are found:

GMmax = max(GMi(x, y)), i = 1, 2, …, n;

GMmin = min(GMi(x, y)), i = 1, 2, …, n;

third, the sum of the gradient matrix over all points of each image of this layer is computed:

FMi = Σx Σy gradi(x, y), i = 1, 2, …, n;

finally, the maximum and minimum of the above sums are found:

FMmax = max{FMi}, i = 1, 2, …, n; FMmin = min{FMi}, i = 1, 2, …, n.

The detection of the sharp part in every image of the high-dynamic multi-focus sequence is described as follows:

for every image-block sequence in the quadtree, the image block with the largest gradient matrix in the sequence is found, and the position of that block in the image sequence and its height information are recorded;

that is, at each position the index i = 1, 2, …, n maximizing fmi(x, y) is selected, where fmi(x, y) denotes the gradient matrix of the i-th image in the image sequence.

Step 5: the sharp parts detected in the images are fused to generate the three-dimensional shape of the object to be observed, i.e. the three-dimensional image of the metal screw; the three-dimensional image corresponding to the metal screw is denoted Z:

Z(x, y) = zi(x, y), where zi(x, y) denotes the sharp image block of the i-th image in the image sequence.

To compare the conventional method of generating a 3D shape from the original auto-exposure images with the method of the present invention, which generates the 3D shape from high dynamic range images, Embodiment 1 gives comparison images of the metal screw produced by the two methods, shown in Fig. 6. In the present invention, the technique using the original auto-exposure images is denoted Normal SFF, and the technique using high dynamic range images is denoted HDR-SFF. Specifically:

to compare the accuracy of the two 3D shape generation methods, Embodiment 1 introduces the square root error to measure, under the same conditions, the gap between each method and the ground truth:

where GT(i, j) denotes the ground-truth value and Z(i, j) denotes the Normal SFF or HDR-SFF value.
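
The square root error equation itself is not reproduced in this text; the sketch below assumes the usual root-mean-square form over the ground-truth depth map GT and the reconstructed depth Z.

```python
import numpy as np

def square_root_error(gt, z):
    """Root-mean-square error between ground truth GT(i, j) and reconstruction Z(i, j);
    the mean normalization is an assumption, since the original equation is not shown."""
    gt = np.asarray(gt, dtype=np.float64)
    z = np.asarray(z, dtype=np.float64)
    return np.sqrt(np.mean((gt - z) ** 2))
```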

Table 1 gives the square root errors of the two 3D shape generation methods with 22 different focus measures. Comparing the results in Table 1 shows that, for the same focus measure, the square root error of the 3D shape generated from high dynamic range images is smaller than that of the 3D shape generated without high dynamic range images. The results in Table 1 indicate that the 3D shape generated with the high dynamic range images of the present invention is more accurate than the 3D shape generated without them.

Table 1

Embodiment 2

In Embodiment 2, a plastic bank card carrying a lowercase letter "d" is used as the object to be observed. The steps for generating its three-dimensional image are the same as those for the metal screw in Embodiment 1 and are not repeated here.

In Embodiment 2, to verify the accuracy and robustness of the high dynamic range image imaging method of the present invention, the high dynamic range images generated for the bank card are given in Figs. 7a to 7f. Fig. 8 compares, for the bank card of Embodiment 2, the square root errors of the 3D shape generation method using high dynamic range images and of the method not using high dynamic range images.

As can be seen from Fig. 8, for the same focus measure, the square root error of the 3D shape generated from high dynamic range images is smaller than that of the 3D shape generated without high dynamic range images. The 3D shape generated with the high dynamic range images of the present invention is therefore more accurate than the 3D shape generated without them.

Embodiment 3

In Embodiment 3, a metal chip is used as the object to be observed. The steps for generating its three-dimensional image are the same as those for the metal screw in Embodiment 1 and are not repeated here.

Fig. 10 compares the square root errors of the 3D stereoscopic images generated by the method using high dynamic range images and by the method using the original auto-exposure images.

As can be seen from Fig. 10, for the same focus measure, the square root error of the 3D shape generated from high dynamic range images is smaller than that of the 3D shape generated without high dynamic range images. The 3D shape generated with the high dynamic range images of the present invention is therefore more accurate than the 3D shape generated without them.

Claims (8)

1. A high dynamic range image imaging method based on a 3D digital microscopic imaging system, characterized in that it comprises the following steps:

Step 1: for the object to be observed on the microscope stage, adjusting the height of the stage and using the camera to acquire a high-dynamic multi-focus image of every layer from the bottom to the top of the object, so as to obtain the original high-dynamic multi-focus image sequence required for three-dimensional imaging;

Step 2: registering the original high-dynamic multi-focus image sequence with a phase matching method, so that the spatial position, scale and image size of each pair of consecutive images in the sequence correspond, yielding a registered high-dynamic multi-focus image sequence;

Step 3: for the registered high-dynamic multi-focus image sequence, extracting the observed-sample region for which the three-dimensional shape is to be generated, using a foreground/background segmentation method based on background accumulation;

Step 4: segmenting the observed-sample region with a quadtree decomposition method, detecting the sharp part of every image in the high-dynamic multi-focus sequence, and recording the height information corresponding to each image;

Step 5: fusing the detected sharp parts of the images to generate the three-dimensional shape of the object to be observed.

2. The high dynamic range image imaging method according to claim 1, characterized in that the process of acquiring the high dynamic range image of each layer with the camera in step 1 comprises: (a) calibrating the response curve of the camera; (b) acquiring images of the same scene at different exposure values; (c) generating a 32-bit radiance map of the scene using the calibrated response curve; (d) mapping the 32-bit radiance map to an ordinary 8-bit image with local tone mapping, and saving the ordinary image in a format that a computer can display and store.

3. The high dynamic range image imaging method according to claim 1, characterized in that, in step 1, the original high-dynamic multi-focus image sequence is obtained as follows: first, by moving the stage in height, the distance between the object to be observed and the objective lens of the microscope is changed, producing an image sequence of different focal planes of the monocular microscope; second, the height information of each focal-plane image is recorded; third, focus detection is performed on each focal-plane image, and the pixels with the greatest focus sharpness in each focal-plane image are recorded for the subsequent three-dimensional shape reconstruction.

4. The high dynamic range image imaging method according to claim 1, characterized in that, in step 2, the registration of the original high-dynamic multi-focus image sequence by the phase matching method comprises: first, in the original high-dynamic multi-focus image sequence, for every pair of consecutive images, converting each image of the pair into a grayscale image, giving a grayscale image pair; second, extracting the phase information of each frequency band from the converted grayscale image pair with complex band-pass filters; third, using the extracted phase information, shifting the grayscale image pair at the super-pixel level by means of the Fourier transform, so as to guarantee the positional consistency of the two consecutive images; finally, repeating this process for every pair of images in the original high-dynamic multi-focus image sequence until the scale and displacement of all images in the sequence are consistent.

5. The high dynamic range image imaging method according to claim 1, characterized in that the segmentation of the observed-sample region by the quadtree decomposition method in step 4 comprises:

first, feeding the original high-dynamic multi-focus image sequence into the quadtree as the root layer;

second, setting an image decomposition condition and processing each layer of the quadtree according to whether it satisfies the condition:

if a layer of images satisfies the image decomposition condition, the images of this layer are split into four and passed to the next layer of the quadtree, and so on, until even the smallest image blocks obtained by decomposing the image sequence no longer satisfy the image decomposition condition, at which point the quadtree decomposition ends; the image decomposition condition is set as follows:

for every decomposed image block at each layer of the image sequence in the quadtree, its maximum difference of focus measure MDFM and its gradient difference SMDG are computed; the formulas for MDFM and SMDG are respectively:

MDFM = FMmax − FMmin;

SMDG = ΣΣ[gradmax(x, y) − gradmin(x, y)] = ΣΣ gradmax(x, y) − ΣΣ gradmin(x, y);

where FMmax is the maximum of the focus measure and FMmin is the minimum of the focus measure, gradmax(x, y) is the maximum gradient value and gradmin(x, y) is the minimum gradient value;

for a layer of image blocks in the quadtree, if MDFM ≥ 0.98 × SMDG, indicating that a fully focused image block exists in the image sequence of this layer, the image blocks of this layer are not decomposed further; otherwise, the image blocks of this layer continue to be decomposed until all images in the quadtree have been decomposed into sub-image blocks that cannot be decomposed any further.

6. The high dynamic range image imaging method according to claim 5, characterized in that the maximum focus measure FMmax and the minimum focus measure FMmin are obtained as follows:

first, the gradient matrix of every pixel in the root-layer images of the quadtree is computed as:

GMi = gradient(Ii), i = 1, 2, …, n;

where Ii is the i-th original high-dynamic multi-focus image, GMi is the gradient matrix corresponding to Ii, and n is the total number of images in the original high-dynamic multi-focus image sequence;

second, the largest and the smallest of all gradient matrices at every point of this layer of images are found:

GMmax = max(GMi(x, y)), i = 1, 2, …, n;

GMmin = min(GMi(x, y)), i = 1, 2, …, n;

third, the sum of the gradient matrix over all points of each image of this layer is computed:

FMi = Σx Σy gradi(x, y), i = 1, 2, …, n;

finally, the maximum and minimum of the above sums are found:

FMmax = max{FMi}, i = 1, 2, …, n;

FMmin = min{FMi}, i = 1, 2, …, n.

7. The high dynamic range image imaging method according to claim 5, characterized in that the fusion of the sharp parts of the images in step 5 comprises: taking all the obtained sharp parts as sharp sub-image blocks, recording their height information respectively, and fusing all the sharp sub-image blocks into one complete three-dimensional image of the observed sample.

8. The high dynamic range image imaging method according to claim 1, characterized in that step 5 further comprises: filtering the generated three-dimensional shape with a median filter, so as to remove the sawtooth (aliasing) effect caused by an insufficient sampling frequency and make the generated three-dimensional shape smoother.
CN201710057799.1A 2017-01-23 2017-01-23 High dynamic range image imaging method based on 3D digital microscopic imaging system Expired - Fee Related CN106846383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710057799.1A CN106846383B (en) 2017-01-23 2017-01-23 High dynamic range image imaging method based on 3D digital microscopic imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710057799.1A CN106846383B (en) 2017-01-23 2017-01-23 High dynamic range image imaging method based on 3D digital microscopic imaging system

Publications (2)

Publication Number Publication Date
CN106846383A true CN106846383A (en) 2017-06-13
CN106846383B CN106846383B (en) 2020-04-17

Family

ID=59121732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710057799.1A Expired - Fee Related CN106846383B (en) 2017-01-23 2017-01-23 High dynamic range image imaging method based on 3D digital microscopic imaging system

Country Status (1)

Country Link
CN (1) CN106846383B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392946A (en) * 2017-07-18 2017-11-24 宁波永新光学股份有限公司 A kind of micro- multiple focal length images series processing method rebuild towards 3D shape
CN107680152A (en) * 2017-08-31 2018-02-09 太原理工大学 Target surface topography measurement method and apparatus based on image procossing
CN108470149A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on light-field camera
CN109360163A (en) * 2018-09-26 2019-02-19 深圳积木易搭科技技术有限公司 A kind of fusion method and emerging system of high dynamic range images
CN110197463A (en) * 2019-04-25 2019-09-03 深圳大学 High dynamic range image tone mapping method and its system based on deep learning
CN112489196A (en) * 2020-11-30 2021-03-12 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation
CN110849266B (en) * 2019-11-28 2021-05-25 江西瑞普德测量设备有限公司 Telecentric lens telecentricity debugging method of image measuring instrument
CN117784388A (en) * 2024-02-28 2024-03-29 宁波永新光学股份有限公司 High dynamic range metallographic image generation method based on camera response curve

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100194902A1 (en) * 2009-02-05 2010-08-05 National Chung Cheng University Method for high dynamic range imaging
CN103946732A (en) * 2011-09-26 2014-07-23 微软公司 Video display modification based on sensor input for a see-through near-to-eye display
CN104224127A (en) * 2014-09-17 2014-12-24 西安电子科技大学 Optical projection tomography device and method based on camera array
US20160125630A1 (en) * 2014-10-30 2016-05-05 PathPartner Technology Consulting Pvt. Ltd. System and Method to Align and Merge Differently Exposed Digital Images to Create a HDR (High Dynamic Range) Image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100194902A1 (en) * 2009-02-05 2010-08-05 National Chung Cheng University Method for high dynamic range imaging
CN103946732A (en) * 2011-09-26 2014-07-23 微软公司 Video display modification based on sensor input for a see-through near-to-eye display
CN104224127A (en) * 2014-09-17 2014-12-24 西安电子科技大学 Optical projection tomography device and method based on camera array
US20160125630A1 (en) * 2014-10-30 2016-05-05 PathPartner Technology Consulting Pvt. Ltd. System and Method to Align and Merge Differently Exposed Digital Images to Create a HDR (High Dynamic Range) Image

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392946A (en) * 2017-07-18 2017-11-24 宁波永新光学股份有限公司 A kind of micro- multiple focal length images series processing method rebuild towards 3D shape
CN107392946B (en) * 2017-07-18 2020-06-16 宁波永新光学股份有限公司 Microscopic multi-focus image sequence processing method for three-dimensional shape reconstruction
CN107680152A (en) * 2017-08-31 2018-02-09 太原理工大学 Target surface topography measurement method and apparatus based on image procossing
CN108470149A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on light-field camera
CN109360163A (en) * 2018-09-26 2019-02-19 深圳积木易搭科技技术有限公司 A kind of fusion method and emerging system of high dynamic range images
CN110197463A (en) * 2019-04-25 2019-09-03 深圳大学 High dynamic range image tone mapping method and its system based on deep learning
CN110197463B (en) * 2019-04-25 2023-01-03 深圳大学 High dynamic range image tone mapping method and system based on deep learning
CN110849266B (en) * 2019-11-28 2021-05-25 江西瑞普德测量设备有限公司 Telecentric lens telecentricity debugging method of image measuring instrument
CN112489196A (en) * 2020-11-30 2021-03-12 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation
CN112489196B (en) * 2020-11-30 2022-08-02 太原理工大学 Particle three-dimensional shape reconstruction method based on multi-scale three-dimensional frequency domain transformation
CN117784388A (en) * 2024-02-28 2024-03-29 宁波永新光学股份有限公司 High dynamic range metallographic image generation method based on camera response curve
CN117784388B (en) * 2024-02-28 2024-05-07 宁波永新光学股份有限公司 High dynamic range metallographic image generation method based on camera response curve

Also Published As

Publication number Publication date
CN106846383B (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN106846383B (en) High dynamic range image imaging method based on 3D digital microscopic imaging system
Hu et al. Revisiting single image depth estimation: Toward higher resolution maps with accurate object boundaries
CN108053367B (en) A 3D point cloud stitching and fusion method based on RGB-D feature matching
CN112116576B (en) Polarization structure light imaging and improved defect detection method
Genovese et al. Stereo-digital image correlation (DIC) measurements with a single camera using a biprism
CN109580630B (en) Visual inspection method for defects of mechanical parts
EP3005292B1 (en) 3d recording device, method for producing a 3d image, and method for setting up a 3d recording device
KR20100019455A (en) Single-lens, single-aperture, single-sensor 3-d imaging device
KR102253320B1 (en) Method for displaying 3 dimension image in integral imaging microscope system, and integral imaging microscope system implementing the same
CN109544628A (en) A kind of the accurate reading identifying system and method for pointer instrument
CN108362469A (en) Size based on pressure sensitive paint and light-field camera and surface pressure measurement method and apparatus
CN107786808B (en) Method and apparatus for generating data representative of a shot associated with light field data
CN107977938A (en) A kind of Kinect depth image restorative procedure based on light field
CN116952357A (en) Spectral imaging visual vibration measurement system and method based on combination of line-plane cameras
Leach et al. Fusion of photogrammetry and coherence scanning interferometry data for all-optical coordinate measurement
CN110378995A (en) A method of three-dimensional space modeling is carried out using projection feature
CN103438803B (en) Computer vision technique accurately measures the method for Rectangular Parts size across visual field
CN114065650B (en) Material crack tip multi-scale strain field measurement tracking method based on deep learning
CN112070675B (en) A graph-based normalized light-field super-resolution method and light-field microscopy device
CN109697695B (en) Visible light image guided ultra-low resolution thermal infrared image interpolation algorithm
Zakeri et al. Guided optimization framework for the fusion of time-of-flight with stereo depth
CN115112509A (en) Material surface indentation measuring method based on Mask R-CNN network
CN110084749B (en) Splicing method of light field images with inconsistent focal lengths
Buat et al. Active chromatic depth from defocus for industrial inspection
Diskin et al. UAS exploitation by 3D reconstruction using monocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200417