CN106447708A - OCT eye fundus image data registration method - Google Patents
- Publication number: CN106447708A (application CN201610883700.9A)
- Authority: CN (China)
- Prior art keywords: point, point cloud, grid, spatial, algorithm
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
- G06T2207/30041—Eye; Retina; Ophthalmic
(All under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T2207/00—Indexing scheme for image analysis or image enhancement.)
Abstract
The present invention selects OCT fundus images, uses the Canny edge detection method to extract the retinal edges of the OCT fundus images, and collects these edges as point cloud data. Next, the feature points of the point cloud data are extracted by spatial grid division. A singular value decomposition algorithm then computes the transformation matrix between the point clouds to be registered, eliminating gross position error. Finally, an improved iterative closest point algorithm performs precise registration, and the resulting rotation and translation matrices are applied to the original OCT fundus images to obtain the final result. When processing dense point clouds with large data volumes, the algorithm of the present invention has clear advantages in time complexity and registration accuracy. In most cases it improves the efficiency of the traditional iterative closest point algorithm by 70%. The invention is effective not only for registering and stitching OCT fundus images, but also for accurately creating fundus retinal images with a larger field of view.
Description
Technical Field
The present application relates to image registration technology, and in particular to a method for registering three-dimensional fundus image data acquired by OCT imaging.
Background
Image registration generally means finding a series of spatial transformations for one image so that its feature information occupies the same spatial positions as the corresponding features of one or more other images. For fundus data from three-dimensional OCT imaging, registration should align the retinal layers across different fundus images without visible discontinuities.
Current image registration techniques fall into two main categories: feature-based registration and mutual-information-based registration. Feature-based methods first extract the feature information of the image and then register using these features as a model. The result of feature extraction is a set of data describing image features, each datum represented by a group of attributes; further descriptions of the attributes include the orientation of edges and the size of their curvature. These features constitute the local features of the image, and relationships exist among them, such as geometric, radiometric, and topological relations; global features can be described through these relationships. Registration based on local features mostly relies on points, lines, or edges, while registration of global features exploits the relations among the local features. Mutual-information-based registration usually refers to maximum-mutual-information methods: mutual information is estimated from the generalized distance between the joint probability distribution of two images and their distribution under complete independence, and serves as a similarity measure for multimodal medical image registration. When two images of a common scene are optimally registered, the gray-level mutual information of their corresponding pixels should be maximal. Because mutual-information-based registration is sensitive to noise, the images generally need preprocessing such as filtering and segmentation, followed by sampling, transformation, interpolation, and optimization to achieve registration.
There is a large literature on two-dimensional medical image registration. Kratika Sharma and Ajay Goyal (Sharma K, Goyal A. Classification based survey of image registration methods. International Conference on Computing, Communications & Networking Technologies, 2013, 1-7) organized image registration methods into four families: spatial-relation methods, scaling methods, pyramid methods, and invariant-descriptor wavelet methods. Mei-sen Pan et al. (Pan M, Jiang J, Rong Q, et al. A modified medical image registration. Multimedia Tools & Applications, 2013, 70:1585-1615) proposed a B-spline gradient-operator method based on edge detection; it is simple to implement, has a low computational load, and achieves good registration accuracy. For two fully overlapping images contaminated by noise, Lucian Ciobanu and Luís (Ciobanu L, L. Iterative filtering of SIFT keypoint matches for multi-view registration in Distributed Video Coding. Multimedia Tools & Applications, 2011, 55:557-578) used an iterative strategy to rebuild the relationships between point pairs and thereby eliminate noisy points; this significantly reduces the number of noisy points while preserving fast, accurate matching. To make image registration algorithms more accurate and robust, a novel idea is to model the distribution of the entire image set. Shihui Ying et al. (Ying S, Wu G, Wang Q, et al. Groupwise Registration via Graph Shrinkage on the Image Manifold. IEEE Conference on Computer Vision & Pattern Recognition, 2013, 2323-2330) first introduced this concept and cast the feature-point registration step as a dynamic shrinkage problem. Besides the above, other feature-based registration algorithms can also obtain good results.
However, OCT fundus data are three-dimensional volumes formed by stacking many two-dimensional OCT images, and the above methods face memory and computing-time limits on such data. To register three-dimensional fundus data and obtain OCT volume data with a larger field of view, one scheme is the montage method (Li Y, Gregori G, Lam B L, et al. Automatic montage of SD-OCT datasets. Optics Express, 2011, 19:26239-26248), which takes vascular ridges as the features of interest and uses resampling, interpolation, and cross-correlation to piece together a complete OCT volume. This montage approach can stitch scattered, partially overlapping OCT images into a single three-dimensional image with a larger field of view; however, it fails when the vascular ridges in the fundus images are blurred. Other researchers exploit existing tools and platforms to generate large-field volume data. Meng Lu (Meng L. Acceleration method of 3D medical images registration based on compute unified device architecture. Bio-Medical Materials and Engineering, 2014, 24:1109-1116) developed an accelerated registration method based on NVIDIA's compute unified device architecture; it improves the performance of three-dimensional medical image registration, speeds up computation, and suits large-scale data. Stephan Preibisch et al. (Preibisch S, Saalfeld S, Tomancak P. Globally optimal stitching of tiled 3D microscopic image acquisitions. Bioinformatics, 2009, 25:1463-1465) implemented a stitching plug-in on the ImageJ platform that reconstructs scattered sub-volumes into a whole without prior knowledge, and other stitching tools have since come into use. However, because of bottlenecks in OCT devices themselves and involuntary eye movement during scanning, the acquired OCT fundus data contain small non-rigid deformations (Chen Guolin. Research and Implementation of Non-rigid Medical Image Registration Methods [D]. Master's thesis, Nanjing University of Science and Technology, 2009). The above methods therefore have limitations when handling such clinical ophthalmic OCT images.
In summary, for fundus data from three-dimensional OCT imaging, an algorithm that can quickly and accurately create fundus images with a larger field of view, thereby assisting clinicians in diagnosing and treating ophthalmic diseases, is a technical problem that the prior art urgently needs to solve.
Summary of the Invention
The object of the present invention is to provide an OCT fundus image data registration method that can quickly and accurately create fundus retinal image data with a larger field of view.
To this end, the present invention adopts the following technical scheme:
An OCT fundus image data registration method, comprising the following steps:
Step 1. Denoise the image and detect edges with the Canny edge detection method:
Convert the original image to grayscale, filter it, compute the gradient magnitude of the image, apply non-maximum suppression to the gradient magnitude, and then detect and link edges with a double-threshold algorithm;
Step 2. Extract and visualize the image point cloud data:
Stack the images processed in Step 1, in their original order, into a three-dimensional volume, and turn each edge point into a point of a three-dimensional point cloud according to its spatial position; the three-dimensional coordinates of each cloud point correspond to the spatial coordinates of the stacked volume;
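As a minimal illustration of Step 2, the stacking can be sketched in NumPy. The axis convention used here (edge pixel (row, col) of slice k becomes the point (col, row, k)) is an assumption for the sketch; the patent only requires that cloud coordinates match the stacked volume's spatial coordinates.

```python
import numpy as np

def edges_to_point_cloud(edge_slices):
    """Stack per-slice binary edge maps, given in acquisition order, into one
    3-D point cloud: edge pixel (row, col) of slice k -> point (col, row, k)."""
    pts = []
    for k, edge_map in enumerate(edge_slices):
        rows, cols = np.nonzero(edge_map)
        pts.append(np.stack([cols, rows, np.full_like(cols, k)], axis=1))
    return np.vstack(pts).astype(float)
```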
Step 3. Extract the boundary feature points of the point cloud data:
Assign the points of the cloud to different spatial grid cells according to their spatial coordinates, then find all grid cells that belong to the boundary of the cloud, and finally extract the points contained in the boundary cells as the boundary feature points of the point cloud data;
Step 4. Complete the initial registration of the point cloud data with the singular value decomposition algorithm:
Let P denote the original set and Q the reference set. Referring to Formula 3, the target matrix is defined as follows:
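The image for Formula 3 is not reproduced in this text. A cross-covariance form consistent with the symbols defined below, matching classical SVD-based rigid registration, is given here as an assumed reconstruction rather than the patent's verbatim expression; under this form the matrix B used later is typically diag(1, 1, det(UV^T)), which excludes reflection solutions.

```latex
% Formula 3 (assumed reconstruction): cross-covariance of the two centred sets
E = \frac{1}{M} \sum_{i=1}^{M} \left( Q_i - C_Q \right) \left( P_i - C_P \right)^{T}
```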
C_P and C_Q are the centroids of the original set P and the reference set Q respectively, M is the number of points in the point cloud data set, and P_i and Q_i are the i-th points of the original set P and the reference set Q. Applying singular value decomposition (SVD) to the target matrix E gives E = UDV^T, where the columns of U are the eigenvectors of EE^T, the columns of V are the eigenvectors of E^TE, V^T is the transpose of V, and D = diag(d_i) is the diagonal matrix of the singular values d_i of E. Let
Then the rotation matrix is R = UBV^T and the translation matrix is T = C_Q - RC_P. The matrices R and T so obtained are applied to the original set P to eliminate any large displacement error that may initially exist between the original set P and the reference set Q;
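A minimal NumPy sketch of this initial registration step. Since the patent's formula images are not reproduced, the cross-covariance construction of E and the reflection-guarding choice of B are assumptions following the classical SVD registration recipe.

```python
import numpy as np

def svd_initial_registration(P, Q):
    """Coarse rigid alignment of paired point sets P, Q (both M x 3 arrays),
    returning R, T such that Q is approximately P @ R.T + T. The construction
    of E is an assumption (the patent's Formula 3 image is not reproduced)."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    C_P, C_Q = P.mean(axis=0), Q.mean(axis=0)      # centroids
    E = (Q - C_Q).T @ (P - C_P)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(E)                    # E = U D V^T
    # B = diag(1, 1, det(U V^T)) guards against a reflection solution
    B = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ B @ Vt                                 # rotation matrix
    T = C_Q - R @ C_P                              # translation matrix
    return R, T
```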
Step 5. Complete the precise registration of the point cloud data with the improved iterative closest point algorithm:
Step 5.1 Initialization. For the two given point cloud sets P and Q, specify a convergence threshold τ,
Step 5.2 Use Formula 5 to compute the weight of each point in the point cloud data, compare against the threshold ε, and exclude the points with lower weights,
where Q_B is the point of set Q corresponding to point P_A of set P, Dis(P_A, Q_B) is the Euclidean distance between P_A and Q_B, and Dis_MAX is the maximum Euclidean distance over all point pairs; the Euclidean distance is computed with Formula 6,
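The images for Formulas 5 and 6 are not reproduced in this text. A pair weighting and a weighted distance consistent with the surrounding description are given here as assumed reconstructions, not the patent's verbatim expressions.

```latex
% Formula 5 (assumed): pair weight from the normalized distance
\omega(P_A) = 1 - \frac{\mathrm{Dis}(P_A, Q_B)}{\mathrm{Dis}_{MAX}}
% Formula 6 (assumed): Euclidean distance with per-axis M-estimation weights
\mathrm{Dis}(P_A, Q_B) = \sqrt{\omega_x (x_A - x_B)^2 + \omega_y (y_A - y_B)^2 + \omega_z (z_A - z_B)^2}
```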
Step 5.3 Iterate the following steps until the minimum root-mean-square error of Formula 7 converges to the given threshold τ:
Step 5.3.1 Compute the Euclidean distances between the points of sets P and Q according to Formula 6,
where ω_x, ω_y and ω_z are the M-estimation weight factors in each coordinate direction, and (x_A, y_A, z_A) and (x_B, y_B, z_B) are the spatial coordinates of point A in set P and point B in set Q,
Step 5.3.2 For the set P with the low-weight points excluded, find for each point the closest point in set Q by Euclidean distance as its corresponding point, and store it in the closest-point set,
Step 5.3.3 Use Formula 7 with the least-squares method to compute the rotation matrix R and the translation matrix T between set P and the closest-point set,
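The image for Formula 7 is not reproduced in this text. A least-squares objective consistent with the description (its root mean square being tested against τ) would be, as an assumed reconstruction:

```latex
% Formula 7 (assumed): least-squares alignment objective over N point pairs
f(R, T) = \frac{1}{N} \sum_{i=1}^{N} \left\| Q_i - \left( R P_i + T \right) \right\|^{2}
```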
Step 5.3.4 Apply the rotation matrix R and the translation matrix T to set P to obtain a new set, and use Formula 7 to check whether the minimum root-mean-square error has converged to the given threshold τ; if so, end the computation, otherwise continue iterating from Step 5.3.
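The Step 5 loop can be sketched as follows. This is a simplified illustration under stated assumptions: the pair weight is taken as 1 - d/d_max (Formula 5's image is not reproduced), nearest neighbours are found by brute force, and the per-axis M-estimation weights of Formula 6 are omitted.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, T mapping paired src onto dst (Step 5.3.3, via SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((dst - c_dst).T @ (src - c_src))
    B = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ B @ Vt
    return R, c_dst - R @ c_src

def improved_icp(P, Q, tau=0.2, eps=0.3, max_iter=100):
    """Weighted ICP sketch (Steps 5.1-5.3): pairs whose assumed weight
    1 - d/d_max falls below eps are excluded before each rigid fit."""
    P = np.asarray(P, dtype=float).copy()
    Q = np.asarray(Q, dtype=float)
    rms = np.inf
    for _ in range(max_iter):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                 # closest-point set
        dmin = d[np.arange(len(P)), nearest]
        rms = np.sqrt(np.mean(dmin ** 2))
        if rms < tau:                              # convergence test
            break
        w = 1.0 - dmin / dmin.max() if dmin.max() > 0 else np.ones_like(dmin)
        keep = w >= eps                            # exclude low-weight points
        if keep.sum() < 3:                         # keep the fit well-posed
            keep = np.ones(len(P), dtype=bool)
        R, T = rigid_fit(P[keep], Q[nearest[keep]])
        P = P @ R.T + T                            # apply R, T (Step 5.3.4)
    return P, rms
```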
Further, in Step 1, the grayscale conversion formula for the original image is Gray = 0.299R + 0.587G + 0.114B, the image is filtered with a Gaussian filter, and first-order finite differences are used to compute the partial derivatives of the image pixel matrix in the horizontal and vertical directions.
Further, in Step 1, a double threshold is used to detect and link edges. The high threshold suppresses edges: only gradient magnitudes above it are treated as edges. The low threshold links edges: all points whose gradient magnitude is above the low threshold are treated as edge candidates and linked together to form the final edge detection result.
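The double-threshold linking just described can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: pixels at or above the high threshold seed the edges, and pixels at or above the low threshold survive only if 8-connected (possibly transitively) to a seed.

```python
import numpy as np

def hysteresis_threshold(mag, low, high):
    """Double-threshold edge linking on a gradient-magnitude array."""
    weak = mag >= low          # candidate edge pixels
    edges = mag >= high        # seed edge pixels
    while True:
        grown = edges.copy()
        # OR in all eight shifted copies of the current edge set
        grown[1:, :] |= edges[:-1, :]
        grown[:-1, :] |= edges[1:, :]
        grown[:, 1:] |= edges[:, :-1]
        grown[:, :-1] |= edges[:, 1:]
        grown[1:, 1:] |= edges[:-1, :-1]
        grown[1:, :-1] |= edges[:-1, 1:]
        grown[:-1, 1:] |= edges[1:, :-1]
        grown[:-1, :-1] |= edges[1:, 1:]
        grown &= weak          # growth is confined to weak pixels
        if (grown == edges).all():
            return edges       # fixed point reached
        edges = grown
```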
Further, in Step 3, an oriented bounding box is used as the minimum bounding box of the point cloud data, in order to divide the cloud points into different spatial grid cells.
The point cloud data are decomposed into spatial grid cells of equal volume, and the size of each cell is defined as:
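The image for Formula 1 is not reproduced in this text; reconstructed from the sentence that follows it (cell size as K times the reciprocal of the point cloud density), it would read:

```latex
% Formula 1 (assumed reconstruction)
S_{grid} = K \cdot \frac{V}{L}
```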
where L is the number of points in the point cloud data and V is the volume of the minimum bounding box. V/L, the reciprocal of the point cloud density, represents the average space occupied by each cloud point. The initial cell size S_grid is set to K times the reciprocal of the point cloud density, and the minimum bounding box of the cloud is divided into cells of size S_grid.
Further, a boundary seed grid algorithm is used to find all boundary cells, and the cloud points contained in these boundary cells are extracted as the boundary feature points of the point cloud data. Cells are divided into two classes, empty and solid: a cell containing no cloud point is empty, and every other cell is solid. Let the spatial coordinates (x, y, z) denote the position of a cell and let the function f denote its type: f(x, y, z) = 1 if the cell is solid, otherwise f(x, y, z) = 0. Formula 2 is used to decide whether a cell is a boundary cell:
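The image for Formula 2 is not reproduced in this text. A pairwise occupancy test of the six face neighbours, consistent with the U(x, y, z) ≤ 1 criterion stated next, would be, as an assumed reconstruction:

```latex
% Formula 2 (assumed reconstruction)
U(x, y, z) = f(x+1, y, z)\,f(x-1, y, z) + f(x, y+1, z)\,f(x, y-1, z) + f(x, y, z+1)\,f(x, y, z-1)
```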
When U(x, y, z) ≤ 1, at most one of the up-down, left-right, and front-back neighbor pairs among the cell's six neighbors consists of two solid cells, and the cell is then a boundary cell.
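The grid division and boundary test of Step 3 can be sketched together as follows. Two simplifying assumptions are made for the sketch: an axis-aligned rather than oriented bounding box, and S_grid treated as a cell volume whose cube root gives the cell edge.

```python
import numpy as np

def boundary_feature_points(points, K=16):
    """Divide the cloud into equal cells of size S_grid = K * (V / L) and keep
    the points lying in boundary cells, i.e. cells with U(x, y, z) <= 1."""
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0)
    V = float(np.prod(points.max(axis=0) - lo))   # bounding-box volume
    s = (K * V / len(points)) ** (1.0 / 3.0)      # cell edge length
    idx = np.floor((points - lo) / s).astype(int)
    solid = set(map(tuple, idx))                  # f(x, y, z) = 1 for these

    def f(c):
        return 1 if c in solid else 0

    keep = []
    for p, (x, y, z) in zip(points, idx):
        U = (f((x + 1, y, z)) * f((x - 1, y, z))
             + f((x, y + 1, z)) * f((x, y - 1, z))
             + f((x, y, z + 1)) * f((x, y, z - 1)))
        if U <= 1:                                # boundary cell
            keep.append(p)
    return np.asarray(keep)
```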
Further, in Step 3, 8 ≤ K ≤ 24.
Further, in Step 5, the convergence threshold is τ = 0.2 and the comparison threshold ε satisfies 0.2 ≤ ε ≤ 0.4.
Further, in Step 5, the M-estimation weight formula in each coordinate direction is
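The image of this weight formula is not reproduced in the text. The value c = 1.345 given below is the classical tuning constant of the Huber M-estimator, whose weight function, given here as an assumed reconstruction, is:

```latex
% Huber M-estimator weight (assumed reconstruction; c = 1.345 is its
% classical 95%-efficiency tuning constant)
\omega(v) = \begin{cases} 1, & |v| \le c \\ c / |v|, & |v| > c \end{cases}
```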
where v is the normalized residual in each coordinate direction and c is a constant.
Further, c = 1.345.
The present invention selects OCT fundus images, uses the Canny edge detection method to extract the retinal edges of the OCT fundus images, and collects these edges as point cloud data; next, it extracts the feature points of the point cloud data by spatial grid division; then it applies the singular value decomposition algorithm to compute the transformation matrix between the point clouds to be registered, eliminating obvious position error; finally, it uses the improved iterative closest point algorithm for precise registration and applies the resulting rotation and translation matrices to the original OCT fundus images to obtain the final result.
When processing dense point clouds with large data volumes, the algorithm of the present invention has clear advantages in time complexity and registration accuracy. In most cases it improves the efficiency of the traditional iterative closest point algorithm by 70%. The invention is effective not only for registering and stitching OCT fundus images, but also for accurately creating fundus retinal images with a larger field of view.
Brief Description of the Drawings
Figure 1 is a flow chart of the OCT fundus image data registration method according to a specific embodiment of the present invention;
Figure 2 is a state diagram of the boundary grid cells obtained with the boundary grid seed algorithm according to a specific embodiment of the present invention;
Figure 3 is a comparison of the point cloud data registration process according to a specific embodiment of the present invention;
Figure 4 is an enlarged comparison of partial results in the overlapping region of the point cloud data according to a specific embodiment of the present invention;
Figure 5 is a comparison of the registered positions after the transformation matrices are applied to the original set and the reference set according to a specific embodiment of the present invention;
Figure 6 is a comparison of registration results on the "Stanford Bunny" point cloud data according to a specific embodiment of the present invention;
Figure 7 is a comparison of the time consumption of the improved algorithm according to the present invention and the traditional iterative algorithm.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for convenience of description, the drawings show only the parts related to the present invention rather than the complete structure.
The principle of the present invention is as follows: the Canny edge detection method is used to denoise the medical fundus images, the resulting edges are taken as image features and extracted into point cloud data, the boundary feature points of the cloud are extracted with the minimum-bounding-box method, initial registration is completed on the boundary feature points by singular value decomposition, and the point cloud data are then precisely registered with the improved iterative closest point method.
Referring to Figure 1, a flow chart of the OCT fundus image data registration method according to the present invention is shown; the method comprises the following steps:
步骤一、采用Canny边缘检测方法对图像进行去噪、边缘检测:Step 1. Use the Canny edge detection method to perform denoising and edge detection on the image:
图像的边缘指图像从某一灰度值急剧变化到另一灰度的临界区域,它通常分隔开两快亮度变化显著的区域。图像的边缘携带了图像大部分信息,是图像特征提取的重要部分,该步骤一中包括如下子步骤:The edge of the image refers to the critical area where the image changes sharply from a certain gray value to another gray level, and it usually separates two areas where the brightness changes significantly. The edge of the image carries most of the information of the image and is an important part of image feature extraction. The first step includes the following sub-steps:
(1)对原始图像进行灰度化:实际应用场景中,大部分图像为彩色格式。首先需要将彩色格式的图像灰度化。较为常见的灰度化方法为对图像RGB三个通道的采样值进行加权平均,即Gray=(R+G+B)/3。对于医学眼科图像,考虑到人眼的生理特点,本发明中灰度化公式为Gray=0.299R+0.587G+0.114B。(1) Grayscale the original image: In practical application scenarios, most images are in color format. The image in color format needs to be grayscaled first. A more common grayscale method is to perform weighted average of the sampling values of the three RGB channels of the image, that is, Gray=(R+G+B)/3. For medical ophthalmology images, considering the physiological characteristics of the human eye, the gray scale formula in the present invention is Gray=0.299R+0.587G+0.114B.
(2)对图像进行滤波,然后计算图像的梯度幅值:(2) Filter the image, and then calculate the gradient magnitude of the image:
Image filtering is an important step in image processing: it suppresses and removes the influence of noise while preserving the original features of the image as far as possible, improving the accuracy and reliability of subsequent processing. Common filtering methods include mean filtering, median filtering, Gaussian filtering, and bilateral filtering. The Canny edge detection algorithm in the present invention uses Gaussian filtering.
Image edges generally lie in regions where the gray value changes sharply, so first-order finite differences can be used to compute the partial derivatives of the image pixel matrix in the horizontal and vertical directions. A point with a larger partial derivative lies in a region of larger gray-level change and may be an edge point of the image.
(3) Apply non-maximum suppression to the gradient magnitude, then detect and connect edges with a double-threshold algorithm:
From the first-derivative analysis it follows that a point whose gradient magnitude is a local maximum is not necessarily the point where the regional gray value changes most, and is therefore not necessarily an edge point; the Canny edge detection algorithm in the present invention therefore performs non-maximum suppression. That is, the eight-neighborhood of each gradient-magnitude maximum is searched, only the point with the largest gradient magnitude is kept, and the other points are discarded, ensuring that any small region contains at most one edge point.
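The eight-neighborhood suppression described above can be sketched as follows. This is an assumption-laden illustration: it implements the text literally (keep a pixel only if it is the maximum of its 8-neighborhood), which differs from the gradient-direction comparison used in many Canny implementations, and it processes interior pixels only.

```python
def non_max_suppression(mag):
    # Keep a pixel only if its gradient magnitude is the maximum of its
    # 8-neighborhood, as the description states; border pixels are zeroed.
    h, w = len(mag), len(mag[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neighbors = [mag[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0)]
            if mag[i][j] > 0 and mag[i][j] >= max(neighbors):
                out[i][j] = mag[i][j]
    return out
```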
The Canny edge detection algorithm uses two thresholds to detect and connect edges. The high threshold suppresses weak responses: only gradient magnitudes above it are accepted as edges, so the higher it is, the fewer edges are detected. The low threshold connects edges: because of the suppression by the high threshold, many detected edge points cannot be joined into a continuous edge, so the Canny algorithm additionally treats candidate points whose gradient magnitude exceeds the low threshold as edge points and connects them to form the final edge detection result.
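The double-threshold linking step can be sketched as a flood fill from the strong edges. This is a minimal sketch under the usual hysteresis interpretation (a weak point is kept only when 8-connected to a strong point); the function name and matrix representation are assumptions.

```python
def hysteresis_threshold(mag, low, high):
    # Pixels with magnitude >= high are strong edges; pixels >= low are kept
    # only if 8-connected (directly or transitively) to a strong edge.
    h, w = len(mag), len(mag[0])
    edges = [[False] * w for _ in range(h)]
    stack = [(i, j) for i in range(h) for j in range(w) if mag[i][j] >= high]
    for i, j in stack:
        edges[i][j] = True
    while stack:
        i, j = stack.pop()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not edges[ni][nj] and mag[ni][nj] >= low:
                    edges[ni][nj] = True
                    stack.append((ni, nj))
    return edges
```

With low = 20 and high = 50, an isolated magnitude of 40 is rejected, while a 30 next to a 60 survives because the strong edge pulls it in.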
Step 2. Extraction and visualization of the image point cloud data
Point cloud data consist of a set of points in three-dimensional space and are usually used to describe the three-dimensional spatial structure of an object. Each point contains at least one set of three-dimensional coordinates representing its spatial position; some point clouds also carry higher-dimensional coordinates that represent information such as color or reflected light intensity. After the original fundus optical coherence tomography images are processed by Step 1, a series of images containing the edges of each retinal layer is obtained.
These images are stacked in their original order into a three-dimensional volume, and every edge point is extracted as one point of the three-dimensional point cloud according to its spatial position; the three-dimensional coordinates of each point cloud point correspond to the spatial coordinates of the stacked volume data.
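The stacking step can be sketched as follows, assuming binary edge images where nonzero pixels are edge points; the coordinate convention (x = column, y = row, z = slice index) is an assumption for the example.

```python
def edge_slices_to_point_cloud(slices):
    # Stack binary edge images (z = slice index) into a 3-D point cloud:
    # every nonzero pixel (x = column, y = row) becomes one point (x, y, z).
    cloud = []
    for z, image in enumerate(slices):
        for y, row in enumerate(image):
            for x, value in enumerate(row):
                if value:
                    cloud.append((x, y, z))
    return cloud
```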
Step 3. Extraction of the boundary feature points of the point cloud data
In this step, the points of the point cloud are first assigned to different spatial grids according to their spatial coordinates; next, all spatial grids belonging to the point cloud boundary are found; finally, the point cloud points contained in these boundary grids are extracted as the boundary feature points of the point cloud data.
Further, to assign the point cloud points to different spatial grids, the minimum bounding box that contains the entire point cloud must first be found; this bounding box is then divided into equal spatial grids so that every point cloud point is contained in some grid.
The present invention uses an oriented bounding box as the minimum bounding box of the point cloud data; an oriented bounding box may take any orientation and can therefore enclose the whole object more compactly. After the minimum bounding box is obtained, the point cloud data are decomposed into spatial grids of equal volume, and the size of each spatial grid is defined as

S_grid = K · (V / L)  (formula 1)
In formula 1, L is the number of points in the point cloud data and V is the volume of the minimum bounding box. V/L is the reciprocal of the point cloud density and represents the average volume occupied by one point of the cloud. The initial grid size S_grid is set to K times the reciprocal of the point cloud density, and the minimum bounding box is divided into spatial grids of size S_grid. K in formula 1 is a tunable parameter; for the dense point cloud data processed by the present invention, experiments show that when K is assigned a value between 8 and 24, each spatial grid contains sufficiently many point cloud points.
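Formula 1 and the binning step can be sketched as follows. This is a simplified, axis-aligned sketch: the patent bins points inside an oriented bounding box, whereas here an ordinary axis-aligned grid is used, and the cell edge length is derived from the cell volume S_grid.

```python
def grid_size(volume, num_points, k=16):
    # Formula 1: S_grid = K * V / L, with K typically between 8 and 24.
    return k * volume / num_points

def assign_to_grids(cloud, cell_volume):
    # Bin points into cubic cells of volume `cell_volume` (axis-aligned sketch;
    # the patent bins inside an oriented bounding box instead).
    grids = {}
    edge = cell_volume ** (1.0 / 3.0)  # edge length of a cell of that volume
    for (x, y, z) in cloud:
        key = (int(x // edge), int(y // edge), int(z // edge))
        grids.setdefault(key, []).append((x, y, z))
    return grids
```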
Further, the present invention uses a boundary seed grid algorithm to find all boundary spatial grids and extracts the point cloud points they contain as the boundary feature points of the point cloud data. To extract the boundary grids, the spatial grids are divided into two classes, empty grids and solid grids: a grid containing no point cloud points is empty, and all other grids are solid. Let the spatial coordinates (x, y, z) denote the position of a grid and let the function f denote the type of the grid: if the grid is solid, f(x, y, z) = 1, otherwise f(x, y, z) = 0. The six neighboring grids of the grid at (x, y, z) are located at (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1), and (x, y, z+1). Formula 2 is used to determine whether a spatial grid is a boundary grid:

U(x, y, z) = f(x-1, y, z)·f(x+1, y, z) + f(x, y-1, z)·f(x, y+1, z) + f(x, y, z-1)·f(x, y, z+1)  (formula 2)
U(x, y, z) is the sum of three products. Letting the coordinates of the central grid be (x, y, z), the positions (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1), (x, y, z+1) are its six-neighborhood grids. f(x-1, y, z)·f(x+1, y, z) = 1 if and only if the grids above and below are both solid; f(x, y-1, z)·f(x, y+1, z) = 1 if and only if the grids to the left and right are both solid; and f(x, y, z-1)·f(x, y, z+1) = 1 if and only if the grids in front and behind are both solid. When U(x, y, z) ≤ 1, at most one of the three opposite pairs (up-down, left-right, front-back) among the six neighbors is entirely solid, and the grid is a boundary grid.
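The criterion of formula 2 can be sketched directly. This is a minimal illustration that represents the set of solid grids as a Python set of integer coordinates; the function name is an assumption.

```python
def is_boundary_grid(solid, x, y, z):
    # Formula 2: a solid grid is a boundary grid when at most one opposite
    # pair of its six neighbors is fully solid, i.e. U(x, y, z) <= 1.
    f = lambda p: 1 if p in solid else 0
    u = (f((x - 1, y, z)) * f((x + 1, y, z))
         + f((x, y - 1, z)) * f((x, y + 1, z))
         + f((x, y, z - 1)) * f((x, y, z + 1)))
    return (x, y, z) in solid and u <= 1
```

A grid surrounded by all six solid neighbors gives U = 3 and is classified as interior, while a solid grid at the tip of the cloud gives U = 0 and is classified as boundary.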
Referring to Fig. 2, which shows the states of boundary grids under the boundary seed grid algorithm according to a specific embodiment of the present invention: the left panel shows a plane boundary, the middle panel a line boundary, and the right panel a point boundary.
Step 4. Complete the initial registration of the point cloud data with the singular value decomposition algorithm
For the boundary feature points of the point cloud data obtained above, the singular value decomposition algorithm is used to compute the rotation matrix and translation matrix between the sets to be registered, completing the initial registration of the point cloud data.
Let P denote the original set and Q the reference set. Referring to formula 3, the target matrix is defined as

E = Σ_{i=1..M} (Q_i − C_Q)(P_i − C_P)^T  (formula 3)
C_P and C_Q are the centroids of the original set P and the reference set Q, respectively, M is the number of points in the point cloud data set, and P_i and Q_i are the i-th points of P and Q. Applying singular value decomposition (SVD) to the target matrix E gives E = UDV^T, where the columns of U are the eigenvectors of EE^T, the columns of V are the eigenvectors of E^TE, V^T is the transpose of V, and D is a diagonal matrix, D = diag(d_i), whose entries d_i are the singular values of E, i.e., the square roots of the eigenvalues of E^TE. Let B = diag(1, 1, det(UV^T)). Then the rotation matrix is R = UBV^T and the translation matrix is T = C_Q − R·C_P. Applying the computed matrices R and T to the original set P, e.g., P' = RP + T, eliminates any large displacement error that may initially exist between the original set P and the reference set Q, providing a good starting state for the precise registration step.
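Step 4 can be sketched in NumPy as follows. This is a sketch, not the patented implementation: the exact form of the target matrix and of matrix B is not reproduced in the text, so the standard centroid-shifted cross-covariance and the reflection guard diag(1, 1, det(UV^T)) are assumed here.

```python
import numpy as np

def svd_initial_registration(P, Q):
    # Compute R, T mapping P onto Q: build the target matrix from
    # centroid-shifted points, decompose it, and set T = C_Q - R @ C_P.
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    E = (Q - cq).T @ (P - cp)              # 3x3 target (cross-covariance) matrix
    U, _, Vt = np.linalg.svd(E)
    B = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # reflection guard
    R = U @ B @ Vt
    T = cq - R @ cp
    return R, T
```

Given two copies of the same rigid body, the routine recovers the rotation and translation between them exactly (up to floating-point tolerance).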
Step 5. Complete the precise registration of the point cloud data with the improved iterative closest point algorithm
The main idea of the traditional iterative closest point algorithm is first to find all corresponding points of P and Q according to some geometric feature and take these correspondences as the objects of registration; then to construct an objective equation expressing the degree of registration and compute the current rotation and translation matrices by finding its optimal solution; and finally to iterate this process until the objective equation satisfies a set threshold.
The traditional iterative closest point algorithm uses the point-to-point distance as the geometric feature for finding corresponding points. For two points P_i(x_p, y_p, z_p) and Q_i(x_q, y_q, z_q) in the sets P and Q, the Euclidean distance between them is expressed as

Dis(P_i, Q_i) = √((x_p − x_q)² + (y_p − y_q)² + (z_p − z_q)²)
For any point in the set P, the above formula is used to compute the distance from every point of Q to this point, and the point with the smallest Euclidean distance is selected as its corresponding point in Q. Then a rotation matrix R and translation matrix T are sought; applying them to P_i gives the transformed point RP_i + T, and the objective equation is constructed by the least squares method:

E(R, T) = Σ_{i=1..N} ||Q_i − (R·P_i + T)||²
Here N is the number of points in the point cloud and E is the sum of squared distances from every transformed point of the set P to its corresponding point in the set Q.
As the objective equation shows, the smaller E is, the closer the relative positions of the corresponding points of this iteration are. The rotation and translation matrices that minimize E are therefore the optimal solution of the iteration. They are solved by separating translation from rotation, i.e., the translation matrix T is first estimated. The centers of gravity of P and Q are computed as

p_c = (1/N_P) Σ P_i  and  q_c = (1/N_Q) Σ Q_i

The translation between the sets P and Q is then estimated as p_c − q_c, and the objective equation simplifies to

E(R) = Σ ||(Q_i − q_c) − R·(P_i − p_c)||²
From this, the rotation matrix R of the current iteration is obtained, the translation matrix is computed as T = q_c − R·p_c, and the above process is repeated until the optimal value E of the objective equation converges below the given threshold.
For point cloud data with good initial conditions, the iterative closest point algorithm can obtain fairly accurate registration results. However, the traditional iterative closest point algorithm also has some shortcomings.
First, the iterative closest point algorithm assumes that the two point cloud sets are in an inclusion relation, i.e., one point cloud set is a subset of the other, a condition that is usually difficult to satisfy.
Second, when selecting corresponding point pairs, for any point of one point cloud set the algorithm computes the Euclidean distance from this point to every point of the other set. If the two point cloud sets contain N_P and N_Q points respectively, the time complexity of this step is O(N_P × N_Q), so the algorithm spends a great deal of time computing the Euclidean distances of corresponding point pairs and the computational cost is high. In addition, the algorithm takes the pair with the smallest Euclidean distance as a correspondence by default; because a certain amount of noise points exist, this step introduces errors that can trap the algorithm in a local minimum.
The present invention therefore proposes an improved iterative closest point algorithm. In the traditional algorithm, all points of the point cloud sets are given the same weight, so every point participates in the point-to-point Euclidean distance computation, which is the bottleneck of the algorithm. The algorithm of the present invention assigns different weights to different points: the larger the Euclidean distance between a point pair, the smaller its weight. Let P_A be a point of the set P; the weight of P_A is expressed by formula 5 as

W(P_A) = 1 − Dis(P_A, Q_B)/Dis_MAX  (formula 5)
Q_B is the corresponding point of P_A in the set Q, and Dis(P_A, Q_B) is the Euclidean distance between P_A and Q_B. Dis_MAX is the maximum Euclidean distance over all point pairs. For a given threshold ε, all corresponding point pairs whose weight falls below ε are discarded and do not participate in the iterative computation. The threshold ε is a tunable parameter that trades off the time complexity of registration against its accuracy: reducing ε means that more corresponding point pairs participate in the iteration, making the result more accurate, but also increases the number of computations and the time complexity of the algorithm. Preferably, ε is 0.2 to 0.4.
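The weighting and rejection step can be sketched as follows. Note that the exact form of formula 5 is not reproduced in the source text; the form W = 1 − Dis/Dis_MAX is an assumption consistent with "larger distance, smaller weight" and with ε in the 0.2-0.4 range.

```python
def pair_weights(distances):
    # Assumed form of formula 5: W = 1 - Dis / Dis_MAX, so the farthest pair
    # gets weight 0 and a coincident pair gets weight 1.
    d_max = max(distances)
    return [1.0 - d / d_max for d in distances]

def filter_pairs(pairs, distances, eps=0.3):
    # Discard correspondences whose weight falls below eps (patent: 0.2-0.4).
    weights = pair_weights(distances)
    return [p for p, w in zip(pairs, weights) if w >= eps]
```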
Because the point cloud data may contain noise that affects processes such as the computation of the point cloud centroids and of the iterative closest point objective equation, reducing registration accuracy, the present invention introduces M-estimation to exclude the influence of noise points on the experimental results. The basic idea of M-estimation robust regression is to determine the weight of each point from the size of its regression residual so as to achieve robustness: to reduce the influence of outliers, different points are given different weights, with larger weights for points with small residuals and smaller weights for points with large residuals. The weight formula of the M-estimator in each coordinate direction in the present invention is given below.
The weight factor of a point is defined as

w_v = 1 for |v| ≤ c, and w_v = c/|v| for |v| > c

where v is the standardized residual in the given coordinate direction and c is a constant, generally taken as 1.345. When v lies in the interval (−c, c), the M-estimator degenerates to the classical least squares estimator; when the residual v exceeds c, the weight factor w_v decreases as the residual grows. The Euclidean distance between a point A of the set P and its corresponding point B of the set Q is then computed by formula 6:

Dis(P_A, Q_B) = √(ω_x(x_A − x_B)² + ω_y(y_A − y_B)² + ω_z(z_A − z_B)²)  (formula 6)
Here ω_x, ω_y, ω_z are the weight factors of the M-estimation in the three coordinate directions, and (x_A, y_A, z_A), (x_B, y_B, z_B) are the spatial coordinates of points A and B. The iterative equation then becomes

E(R, T) = Σ Dis(R·P_A + T, Q_B)²  (formula 7)

where Dis is the weighted distance of formula 6 and the sum runs over the corresponding point pairs retained after weight screening.
In summary, the improved iterative closest point algorithm is as follows:
Step 5.1 Initialization. For the two given point cloud sets P and Q, specify a convergence threshold τ; preferably, τ is 0.2.
Step 5.2 Use formula 5 to compute the weight of each point in the point cloud data, compare it against the threshold ε, and exclude the points with low weights; preferably, ε is 0.2 to 0.4.
Step 5.3 Iterate the following steps until the minimum root mean square error of formula 7 converges below the given threshold τ:
Step 5.3.1 Compute the Euclidean distances between the point clouds of the sets P and Q according to formula 6.
Step 5.3.2 For the set P with low-weight points excluded, find for each point the point of the set Q with the smallest Euclidean distance as its corresponding point and store it in the nearest point set.
Step 5.3.3 Use formula 7 with the least squares method to compute the rotation matrix R and translation matrix T between the set P and the nearest point set.
Step 5.3.4 Apply the rotation matrix R and the translation matrix T to the set P to obtain a new set, and use formula 7 to check whether the minimum root mean square error has converged below the given threshold τ; if so, end the computation, otherwise continue iterating from Step 5.3.
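The loop of steps 5.1 to 5.3 can be sketched in NumPy as follows. This is a minimal illustration rather than the patented implementation: it uses brute-force nearest-neighbour search, the assumed weight form W = 1 − Dis/Dis_MAX for the rejection step, omits the per-axis M-estimation weights of formula 6 for brevity, and stops when the RMS error change falls below τ.

```python
import numpy as np

def icp_improved(P, Q, eps=0.3, tau=1e-6, max_iter=50):
    # Improved ICP sketch: reject far correspondences by weight, then solve
    # each iteration's R, T with an SVD fit; returns the accumulated R, T.
    P = np.asarray(P, float).copy()
    Q = np.asarray(Q, float)
    R_total, T_total = np.eye(3), np.zeros(3)
    prev_rms = np.inf
    for _ in range(max_iter):
        # Steps 5.3.1-5.3.2: nearest point of Q for every point of P.
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)
        d = np.sqrt(d2[np.arange(len(P)), idx])
        # Step 5.2 (assumed weight form): drop pairs with 1 - d/d_max < eps.
        w = 1.0 - d / max(d.max(), 1e-12)
        keep = w >= eps
        if keep.sum() < 3:          # safeguard: never drop below a solvable set
            keep = np.ones(len(P), bool)
        # Step 5.3.3: least-squares R, T between P and the nearest point set.
        A, B = P[keep], Q[idx[keep]]
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        E = (B - cb).T @ (A - ca)
        U, _, Vt = np.linalg.svd(E)
        Bm = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ Bm @ Vt
        T = cb - R @ ca
        # Step 5.3.4: apply the transform and test convergence.
        P = P @ R.T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        rms = np.sqrt((d[keep] ** 2).mean())
        if abs(prev_rms - rms) < tau:
            break
        prev_rms = rms
    return R_total, T_total
```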
In summary, the present invention selects OCT fundus images and uses the Canny edge detection method to extract the retinal edges of the OCT fundus images, collecting these edges in the form of point cloud data; next, the feature points of the point cloud data are extracted by spatial grid division; then the singular value decomposition algorithm is applied to compute the transformation matrix between the point clouds to be registered, eliminating obvious position errors; finally, the improved iterative closest point algorithm performs precise registration, and the resulting rotation and translation matrices are applied to the original OCT fundus images to obtain the final result.
Example 1:
The present invention is further described below in combination with experiments, and is compared with the classic iterative closest point method to verify its accuracy and robustness.
The computer used in the experiments has an Intel Core E7500 dual-core CPU at 2.93 GHz and 2 × 1 GB DDR2 memory; the operating system is Microsoft Windows 7 SP1 Ultimate, and the algorithms were implemented on Microsoft Visual Studio 2010.
To evaluate the registration performance of the algorithm, two fundus optical coherence tomography volumes, covering adjacent parts of the human retinal structure, are used as experimental data. The overlap of the two data sets is approximately 75 × 500 × 375 voxels. Fig. 3 shows the registration process of the point cloud data from four different viewing angles. The left region of each picture shows the initial positions of the two point cloud sets (red points represent the original set and green points the reference set); the right region shows the real-time registration result during the iteration, and the outer contour around the right-hand point cloud is its minimum oriented bounding box.
After the above iterative process ends, the final registration result is obtained. To make the experimental result more intuitive, Fig. 4 shows partially enlarged views of the registration result. As shown in Fig. 4, the left side displays the positional relationship of the overlapping regions of the two data sets before registration, where a large misalignment between the original set and the reference set can be observed; the right side displays the registration result of the algorithm of the present invention, in which most of the previously misaligned points are registered fairly accurately.
After the registration process ends, the resulting sequence of transformation matrices is applied to the original and reference sets, which are then rendered with ImageJ. Fig. 5 visualizes the original data and the registered result as four fundus optical coherence tomography volumes: the upper two images are the original set and the reference set mentioned above, and the lower two are the experimental results of the present invention, with the lower left a side view and the lower right a top view of the registration result; the depression visible in the figure is the macular center of the human eye. Traditional fundus volume registration algorithms often leave cliff-like discontinuities in the registered retinal inner nuclear layer, photoreceptor layer, and pigment epithelium. As the results in Fig. 5 show, however, the registration result exhibits no obvious retinal layer discontinuity, and no clear mosaic seams are observed in the overlapping region of the data; such optical coherence tomography volumes with a larger field of view may help clinicians prevent and diagnose ophthalmic diseases.
In the present invention, Figs. 3-5 are only used to illustrate the effects obtained by the present invention; the effects of the present invention are not limited to what Figs. 3-5 show.
Example 2
In addition, the present invention uses the registration error to evaluate the registration accuracy of the algorithm and the time consumed to evaluate its registration efficiency. The registration error is the percentage of points whose corresponding points fail to match during registration among all point cloud points; it is computed as

Error = (1/N) Σ (1 − Success(P_A, Q_B))
where Success(P_A, Q_B) is defined as

Success(P_A, Q_B) = 1 if the Euclidean distance between P_A and Q_B after registration is less than δ, and 0 otherwise
In the above formulas, N is the number of corresponding point pairs participating in the computation and (P_A, Q_B) is a corresponding point pair. Success(P_A, Q_B) expresses the registration result of the pair (P_A, Q_B): if the Euclidean distance between the corresponding points after registration is smaller than the given threshold δ, the pair is registered successfully and Success(P_A, Q_B) = 1. In the algorithm of the present invention, δ is set to 0.15 times the Euclidean distance of (P_A, Q_B) before precise registration; that is, a corresponding point pair whose distance after registration is below 15% of the original distance is regarded as successfully registered. Through statistics and computation, Table 1 compares the algorithm of the present invention with the traditional iterative closest point algorithm proposed by Besl in terms of time consumption and registration error.
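The error metric described above can be sketched as follows, assuming per-pair distances before and after registration are available; the function name is illustrative.

```python
def registration_error(dist_before, dist_after, ratio=0.15):
    # Registration error: fraction of corresponding pairs whose
    # post-registration distance is NOT below delta = ratio * pre-registration
    # distance (Success = 1 when the pair shrinks below delta).
    n = len(dist_before)
    success = sum(1 for db, da in zip(dist_before, dist_after) if da < ratio * db)
    return (n - success) / n
```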
Table 1 Comparison of the experimental results of the algorithm of the present invention and the traditional iterative closest point algorithm
As an experimental comparison, the present invention also runs experiments on some open-source point cloud data sets. Fig. 6 shows the registration result of the algorithm of the present invention on the public "Stanford Bunny" point cloud data: the left picture is the initial position of the point cloud data and the right picture the registered point cloud data. This point cloud contains 9731 points in total; the traditional iterative closest point algorithm takes 11.023 s with a registration error of 0.001130, while the improved iterative closest point algorithm takes 3.994 s with a registration error of 0.000794. By improving the traditional iterative closest point algorithm, the present invention noticeably improves point cloud registration in both computation time and registration accuracy.
To compare the improved algorithm and the traditional algorithm more intuitively, comparative experiments were conducted on point cloud data of different sizes, and the differences in computation time are shown with a line chart. In Table 2, the first data set, with 9731 points, is the "Stanford Bunny"; the second is the "Dragon" from the Stanford 3D Scanning Repository (http://graphics.stanford.edu/data/3Dscanrep/); the rest are point cloud data from actual projects. Table 2 and Fig. 7 below compare the registration times of the algorithm of the present invention and the traditional iterative closest point algorithm on the different data sets.
Table 2 Comparison of the computation times of the algorithm of the present invention and the traditional iterative closest point algorithm on point cloud data sets of different sizes
As shown in Table 2 and Fig. 7, when dense point clouds with large data volumes are processed, the algorithm of the present invention has clear advantages in time complexity and registration accuracy. In most cases, the algorithm of the present invention improves the efficiency of the traditional iterative closest point algorithm by 70%.
The above experimental results verify the effectiveness of the present invention for OCT fundus image registration and stitching, and at the same time verify its accuracy and robustness in creating fundus retinal images with a larger field of view.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, but it cannot be concluded that the specific embodiments of the present invention are limited to these. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may also be made without departing from the concept of the present invention, and all of these should be regarded as falling within the scope of protection of the present invention as determined by the submitted claims.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610883700.9A CN106447708A (en) | 2016-10-10 | 2016-10-10 | OCT eye fundus image data registration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610883700.9A CN106447708A (en) | 2016-10-10 | 2016-10-10 | OCT eye fundus image data registration method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106447708A true CN106447708A (en) | 2017-02-22 |
Family
ID=58173153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610883700.9A Pending CN106447708A (en) | 2016-10-10 | 2016-10-10 | OCT eye fundus image data registration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106447708A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120206438A1 (en) * | 2011-02-14 | 2012-08-16 | Fatih Porikli | Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images |
CN103279987A (en) * | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Object fast three-dimensional modeling method based on Kinect |
Non-Patent Citations (1)
Title |
---|
XIN WANG ET AL.: "An iterative closest point approach for the registration of volumetric human retina image data obtained by optical coherence tomography", Multimedia Tools and Applications * |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629804B (en) * | 2017-03-20 | 2021-06-18 | 北京大学口腔医学院 | Three-dimensional face symmetric reference plane extraction method with weight distribution mechanism |
CN108629804A (en) * | 2017-03-20 | 2018-10-09 | 北京大学口腔医学院 | A kind of three-dimensional face symmetric reference plane extracting method with weight distribution mechanism |
CN107481274A (en) * | 2017-08-11 | 2017-12-15 | Wuhan University of Technology | A robust reconstruction method for point clouds of three-dimensional work objects |
CN109523581A (en) * | 2017-09-19 | 2019-03-26 | 华为技术有限公司 | A kind of method and apparatus of three-dimensional point cloud alignment |
CN108090901A (en) * | 2017-12-28 | 2018-05-29 | 西安中科微光影像技术有限公司 | A kind of biological support alignment schemes and device based on cardiovascular OCT images |
CN108107444A (en) * | 2017-12-28 | 2018-06-01 | 国网黑龙江省电力有限公司检修公司 | Substation's method for recognizing impurities based on laser data |
CN108107444B (en) * | 2017-12-28 | 2021-12-14 | 国网黑龙江省电力有限公司检修公司 | Transformer substation foreign matter identification method based on laser data |
CN108090901B (en) * | 2017-12-28 | 2021-11-19 | 中科微光医疗研究中心(西安)有限公司 | Biological stent alignment method and device based on cardiovascular OCT (optical coherence tomography) image |
CN108961294A (en) * | 2018-07-17 | 2018-12-07 | 北醒(北京)光子科技有限公司 | A kind of dividing method and device of three-dimensional point cloud |
CN110544274A (en) * | 2019-07-18 | 2019-12-06 | 山东师范大学 | A method and system for fundus image registration based on multispectral |
CN110544274B (en) * | 2019-07-18 | 2022-03-29 | 山东师范大学 | Multispectral-based fundus image registration method and system |
CN110717884B (en) * | 2019-08-30 | 2022-02-22 | 温州医科大学 | Method for expressing corneal irregular change based on ocular surface structure change consistency |
CN110648368B (en) * | 2019-08-30 | 2022-05-17 | 广东奥普特科技股份有限公司 | A method for discriminating corner points of calibration board based on edge features |
CN110717884A (en) * | 2019-08-30 | 2020-01-21 | 温州医科大学 | A method for expressing the structural changes of corneal irregularities based on the change consistency parameters of anterior segment tomography |
CN110648368A (en) * | 2019-08-30 | 2020-01-03 | 广东奥普特科技股份有限公司 | Calibration board corner point discrimination method based on edge features |
CN110946659A (en) * | 2019-12-25 | 2020-04-03 | 武汉中科医疗科技工业技术研究院有限公司 | Registration method and system for image space and actual space |
CN111612847B (en) * | 2020-04-30 | 2023-10-20 | 湖北煌朝智能自动化装备有限公司 | Point cloud data matching method and system for robot grabbing operation |
CN111612847A (en) * | 2020-04-30 | 2020-09-01 | 重庆见芒信息技术咨询服务有限公司 | Point cloud data matching method and system for robot grabbing operation |
CN112155734A (en) * | 2020-09-29 | 2021-01-01 | 苏州微创畅行机器人有限公司 | Readable storage medium, bone modeling and registering system and bone surgery system |
CN112155734B (en) * | 2020-09-29 | 2022-01-28 | 苏州微创畅行机器人有限公司 | Readable storage medium, bone modeling and registering system and bone surgery system |
CN112529945B (en) * | 2020-11-17 | 2023-02-21 | 西安电子科技大学 | A Registration Method of Multi-view 3D ISAR Scattering Point Set |
CN112529945A (en) * | 2020-11-17 | 2021-03-19 | 西安电子科技大学 | Registration method for multi-view three-dimensional ISAR scattering point set |
CN112669386A (en) * | 2020-12-29 | 2021-04-16 | 北京大学 | Magnetoencephalogram automatic positioning and registering method and device based on three-dimensional optical scanning |
CN112669386B (en) * | 2020-12-29 | 2022-09-23 | 北京大学 | Method and device for automatic positioning and registration of magnetoencephalography based on three-dimensional optical scanning |
CN113192197A (en) * | 2021-05-24 | 2021-07-30 | 北京京东乾石科技有限公司 | Method, device, equipment and storage medium for constructing global point cloud map |
CN113192197B (en) * | 2021-05-24 | 2024-04-05 | 北京京东乾石科技有限公司 | Global point cloud map construction method, device, equipment and storage medium |
CN113436070A (en) * | 2021-06-20 | 2021-09-24 | 四川大学 | Fundus image splicing method based on deep neural network |
CN113436070B (en) * | 2021-06-20 | 2022-05-17 | 四川大学 | Fundus image splicing method based on deep neural network |
CN115100258A (en) * | 2022-08-29 | 2022-09-23 | 杭州三坛医疗科技有限公司 | Hip joint image registration method, device, equipment and storage medium |
CN115909302A (en) * | 2023-03-09 | 2023-04-04 | 菏泽学院 | A data processing method for drug disintegration performance identification |
CN117323002A (en) * | 2023-11-30 | 2024-01-02 | 北京万特福医疗器械有限公司 | A neuroendoscopic surgery visualization system based on mixed reality technology |
CN119152040A (en) * | 2024-11-19 | 2024-12-17 | 菲特(天津)检测技术有限公司 | Position estimation method, device and electronic device |
CN119152040B (en) * | 2024-11-19 | 2025-02-14 | 菲特(天津)检测技术有限公司 | Pose estimation method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106447708A (en) | OCT eye fundus image data registration method | |
Izadinia et al. | Im2cad | |
CN111243093B (en) | Three-dimensional face grid generation method, device, equipment and storage medium | |
Shen | Accurate multiple view 3d reconstruction using patch-based stereo for large-scale scenes | |
CN106910242B (en) | Method and system for 3D reconstruction of indoor complete scene based on depth camera | |
Furukawa et al. | Dense 3d motion capture from synchronized video streams | |
CN103247045B (en) | A kind of method obtaining artificial scene principal direction and image border from multi views | |
EP3545497B1 (en) | System for acquiring a 3d digital representation of a physical object | |
CN111968238A (en) | Human body color three-dimensional reconstruction method based on dynamic fusion algorithm | |
WO2022040920A1 (en) | Digital-twin-based ar interactive system and method | |
WO2005081178A1 (en) | Method and apparatus for matching portions of input images | |
CN102222357B (en) | Foot Shape 3D Surface Reconstruction Method Based on Image Segmentation and Mesh Subdivision | |
CN113192179A (en) | Three-dimensional reconstruction method based on binocular stereo vision | |
CN109766866B (en) | Face characteristic point real-time detection method and detection system based on three-dimensional reconstruction | |
CN108596961A (en) | Point cloud registration method based on Three dimensional convolution neural network | |
WO2015006224A1 (en) | Real-time 3d computer vision processing engine for object recognition, reconstruction, and analysis | |
CN108010123A (en) | A kind of three-dimensional point cloud acquisition methods for retaining topology information | |
Turetken et al. | Detecting irregular curvilinear structures in gray scale and color imagery using multi-directional oriented flux | |
CN110633628B (en) | 3D model reconstruction method of RGB image scene based on artificial neural network | |
Liu et al. | 3D real human reconstruction via multiple low-cost depth cameras | |
WO2018133119A1 (en) | Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera | |
CN116878524A (en) | Dynamic SLAM dense map construction method based on pyramid L-K optical flow and multi-view geometric constraint | |
Köser et al. | Dense 3d reconstruction of symmetric scenes from a single image | |
CN112435211A (en) | Method for describing and matching dense contour feature points in endoscope image sequence | |
CN113223189B (en) | Method for repairing holes of three-dimensional point cloud model of object grabbed by mechanical arm and fitting ruled body |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20170222 |