
CN106447708A - OCT eye fundus image data registration method - Google Patents

OCT eye fundus image data registration method

Info

Publication number
CN106447708A
Authority
CN
China
Prior art keywords
point
space
cloud
registration
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610883700.9A
Other languages
Chinese (zh)
Inventor
王欣
赵振龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN201610883700.9A priority Critical patent/CN106447708A/en
Publication of CN106447708A publication Critical patent/CN106447708A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an OCT fundus image data registration method comprising the following steps: first, extract the retinal edges of an OCT fundus image with the Canny edge detection method and collect these edges in the form of point cloud data; second, extract feature points of the point cloud data using a spatial grid partitioning method; third, compute a transformation matrix between the point clouds to be registered with a singular value decomposition (SVD) algorithm to eliminate gross position errors; finally, perform fine registration with an improved iterative closest point algorithm and apply the resulting rotation and translation matrices to the original OCT fundus image to obtain the final result. When processing dense point clouds with large data volumes, the method has clear advantages in time complexity and registration accuracy. In most cases it improves the efficiency of the conventional iterative closest point algorithm by 70 percent; it not only registers and stitches OCT fundus images effectively but also preserves the accuracy of the resulting wide-field fundus retinal image.

Description

OCT fundus image data registration method
Technical field
The present application relates to image registration techniques, and in particular to a method for registering three-dimensional fundus image data acquired by OCT imaging.
Background technology
Image registration generally means finding, for one image, a spatial transformation such that its characteristic information occupies the same spatial position as the corresponding information in one or more other images. For OCT three-dimensional fundus data, the registration result should align each retinal layer across the different fundus images without obvious discontinuities.
Image registration methods currently fall into two broad classes: feature-based image registration and mutual-information-based image registration. Feature-based image registration first extracts the characteristic information of the images and then performs registration with these features as the model. The result of feature extraction is a set of data containing the image features together with descriptions of them; each item is represented by a group of attributes that further describe properties such as the orientation, size, and curvature of an edge. These features constitute the local features of the image, and relations exist among the local features, such as geometric relations, radiometric relations, and topological relations; these relations among local features can be used to describe global features. Registration based on local features is usually based on points, lines, or edges, whereas registration based on global features uses the relations among the local features. Mutual-information-based registration usually refers to maximum mutual information image registration: the mutual information, a generalized distance between the joint probability distribution of two images and the distribution they would have if they were completely independent, is used as the similarity measure for multimodal medical image registration. When two images of a common scene reach optimal registration, the gray-level mutual information of their corresponding pixels should be maximal. Because mutual-information-based registration is relatively sensitive to noise, the images usually need to be preprocessed by filtering, segmentation, and similar methods, and then sampled, transformed, interpolated, and optimized to achieve registration.
There is a large literature on two-dimensional medical image registration. Kratika Sharma and Ajay Goyal (Sharma K, Goyal A. Classification based survey of image registration methods. International Conference on Computing, Communications & Networking Technologies, 2013, 1-7) classify image registration methods into four groups: spatial-relation methods, scaling methods, pyramid methods, and invariant-descriptor wavelet methods. Mei-sen Pan et al. (Pan M, Jiang J, Rong Q, et al. A modified medical image registration. Multimedia Tools & Applications, 2013, 70: 1585-1615) propose a B-spline gradient operator method based on edge detection; the method is simple to implement and offers relatively low computational load and good registration accuracy. For completely overlapping images contaminated by noise, Lucian Ciobanu et al. (Ciobanu L, et al. Iterative filtering of SIFT keypoint matches for multi-view registration in Distributed Video Coding. Multimedia Tools & Applications, 2011, 55: 557-578) adopt an iterative strategy to rebuild the relations between point pairs and thereby eliminate noise points; this strategy greatly reduces the total number of noise points while retaining the original fast and accurate matching. To make image registration algorithms more accurate and robust, a novel idea is to model the distribution of entire images. Shihui Ying et al. (Ying S, Wu G, Wang Q, et al. Groupwise Registration via Graph Shrinkage on the Image Manifold. IEEE Conference on Computer Vision & Pattern Recognition, 2013, 2323-2330) first introduced this concept and formulated the image feature point matching step as a dynamic shrinkage problem. Besides the above methods, some other feature-based registration algorithms can also obtain good results.
However, optical coherence tomography fundus data is three-dimensional data composed of multiple superimposed two-dimensional optical coherence tomography images, and the above methods face limitations of running memory and computation time when processing such data. One scheme for registering three-dimensional fundus data and obtaining an optical coherence tomography volume with a larger field of view is therefore the montage method (Li Y, Gregori G, Lam B L, et al. Automatic montage of SD-OCT data sets. Optics Express, 2011, 19: 26239-26248). This method uses vascular ridges as the features of interest and assembles a complete optical coherence tomography volume using resampling, interpolation, and cross-correlation; it can stitch scattered, partially overlapping optical coherence tomography images into a single three-dimensional image with a wider field of view. However, when the vascular ridges used as features of interest are blurred in the fundus image, the method cannot complete the registration. In addition, some researchers generate volume data with a larger field of view using existing instruments and platforms. Meng Lu (Meng L. Acceleration method of 3D medical images registration based on compute unified device architecture. Bio-medical Materials and Engineering, 2014, 24: 1109-1116) developed an accelerated registration method based on the compute unified device architecture provided by NVIDIA; the algorithm improves the performance of three-dimensional medical image registration, speeds up the computation, and is suitable for large-scale data. Furthermore, Stephan Preibisch et al. (Preibisch S, Saalfeld S, Tomancak P. Globally optimal stitching of tiled 3D microscopic image acquisitions. Bioinformatics, 2009, 25: 1463-1465) implemented a stitching plug-in on the ImageJ platform that can reconstruct scattered small volumes into a whole without requiring prior knowledge, and other kinds of stitching tools have also come into use. However, owing to bottlenecks of the optical coherence tomography apparatus itself and the involuntary motion of the human eye during scanning, the acquired optical coherence tomography fundus data contains small non-rigid deformations (Chen Guolin. Research and Implementation of Non-rigid Medical Image Registration Methods [D]. Master's thesis, Nanjing University of Science and Technology, 2009). The above methods are therefore of limited use when processing such special clinical ophthalmic optical coherence tomography images.
In summary, for OCT three-dimensional fundus data, how to quickly and accurately create fundus images with a wider field of view, and thereby help clinicians diagnose and treat ophthalmic diseases, is a technical problem that the prior art urgently needs to solve.
Summary of the invention
It is an object of the present invention to provide an OCT fundus image data registration method for quickly and accurately creating fundus image data with a larger field of view.
To this end, the present invention adopts the following technical solution:
An OCT fundus image data registration method, comprising the following steps:
Step 1: denoise the image and detect edges using the Canny edge detection method:
convert the original image to grayscale, filter the image, compute the gradient magnitude of the image, apply non-maximum suppression to the gradient magnitude, and then detect and link edges with a dual-threshold algorithm;
Step 2: extract and visualize the image point cloud data:
stack the images processed in Step 1 into a three-dimensional volume according to their original layered structure, and take each edge point as a point of the three-dimensional point cloud according to its spatial position, the three-dimensional coordinates of the point cloud points corresponding to the spatial coordinates of the superimposed three-dimensional volume data;
Step 3: extract the edge feature points of the point cloud data:
assign the points of the point cloud to different spatial grid cells according to their spatial coordinates, then find all grid cells that belong to the point cloud boundary, and finally extract the points in the boundary cells as the edge feature points of the point cloud data;
Step 4: complete the initial registration of the point cloud data using a singular value decomposition algorithm:
Let P denote the original set and Q the comparison set, and define the objective matrix as in formula 3, where C_P and C_Q are the centroids of the original set P and the comparison set Q respectively, M is the number of points in the point cloud data, and P_i and Q_i are the i-th points of P and Q. Apply singular value decomposition (SVD) to the objective matrix E, so that E = UDV^T, where the columns of U are the eigenvectors of EE^T, the columns of V are the eigenvectors of E^T E, V^T is the transpose of V, and D = diag(d_i) is a diagonal matrix whose entries d_i are the singular values of E.
The rotation matrix is then R = UBV^T and the translation matrix is T = C_Q - RC_P; applying the matrices R and T to the original set P eliminates the large displacement error that may exist between the original set P and the comparison set Q in their initial positions;
Step 5: complete the fine registration of the point cloud data using an improved iterative closest point algorithm:
Step 5.1: initialization. For the two given point sets P and Q, specify a convergence threshold τ.
Step 5.2: compute the weight of each point in the point cloud data using formula 5, compare against a threshold ε, and exclude the points with low weights,
where Q_B is the point of set Q corresponding to point P_A, Dis(P_A, Q_B) is the Euclidean distance between P_A and Q_B, Dis_MAX is the maximum Euclidean distance over all point pairs, and Euclidean distances are computed with formula 6.
Step 5.3: iterate the following steps until the root-mean-square error of formula 7 converges to the given threshold τ:
Step 5.3.1: compute the Euclidean distances between the point clouds of P and Q according to formula 6,
where ω_x, ω_y, ω_z are the M-estimation weights in each coordinate direction and (x_A, y_A, z_A), (x_B, y_B, z_B) are the spatial coordinates of point A in P and point B in Q,
Step 5.3.2: for the set P with the low-weight points removed, find for each point the nearest point of Q by Euclidean distance as its corresponding point and store it in the nearest-point set,
Step 5.3.3: compute the rotation matrix R and translation matrix T between the set P and the nearest-point set by least squares using formula 7,
Step 5.3.4: apply the rotation matrix R and translation matrix T to the set P to obtain a new set, and use formula 7 to check whether the root-mean-square error has converged to the given threshold τ; if so, terminate the computation, otherwise repeat the iteration of Step 5.3.
Further, in Step 1, the grayscale conversion formula for the original image is Gray = 0.299R + 0.587G + 0.114B, the image is filtered with a Gaussian filter, and first-order finite differences are used to compute the horizontal and vertical partial derivatives of the image pixel matrix.
Further, in Step 1, edges are detected and linked using dual thresholds: the high threshold suppresses edges, since only gradient magnitudes above it are considered edges, while the low threshold links edges, with every point whose gradient magnitude exceeds the low threshold being treated as an edge and connected to form the final edge detection result.
Further, in Step 3, an oriented bounding box is used as the minimum bounding box of the point cloud data, and the points of the point cloud are divided into different spatial grid cells.
The point cloud data is decomposed into grid cells of equal volume, with the size of each cell defined by formula 1,
where L is the number of points in the point cloud data and V is the volume of the minimum bounding box. V/L is the reciprocal of the point cloud density and represents the average space occupied by each point of the point cloud. The initial cell size S_grid is set to K times the reciprocal of the point cloud density, and the minimum bounding box of the point cloud data is divided into grid cells of size S_grid.
Further, all boundary grid cells are found using the boundary seed grid algorithm, and the points contained in the boundary cells are extracted as the edge feature points of the point cloud data. The grid cells are divided into two classes, empty cells and solid cells: a cell containing no point of the point cloud is an empty cell, and any other cell is a solid cell. Let the spatial coordinates (x, y, z) denote the position of a cell and let the function f denote its type, with f(x, y, z) = 1 if the cell is solid and f(x, y, z) = 0 otherwise. Formula 2 is used to determine whether a cell is a boundary cell:
when U(x, y, z) ≤ 1, at most one of the three pairs of neighboring cells (up/down, left/right, front/back) in the six-neighborhood of the cell consists of two solid cells, and the cell is a boundary cell.
Further, in Step 3, 8 ≤ K ≤ 24.
Further, in Step 5, the convergence threshold τ = 0.2 and the comparison threshold ε satisfies 0.2 ≤ ε ≤ 0.4.
Further, in Step 5, the weight equation of the M-estimation in each coordinate direction is w = 1 for |v| ≤ c and w = c/|v| for |v| > c,
where v is the standardized residual in the corresponding coordinate direction and c is a constant.
Further, c=1.345.
The present invention takes an OCT fundus image, extracts the retinal edges of the OCT fundus image using the Canny edge detection method, and collects these edges in the form of point cloud data; secondly, feature points of the point cloud data are extracted using the spatial grid partitioning method; thirdly, the singular value decomposition algorithm is applied to compute the transformation matrix between the point clouds to be registered, eliminating obvious position errors; finally, fine registration is performed using the improved iterative closest point algorithm, and the resulting rotation and translation matrices are applied to the original OCT fundus image to obtain the final result.
When processing dense point clouds with large data volumes, the algorithm of the invention has clear advantages in time complexity and registration accuracy. In most cases it improves the efficiency of the traditional iterative closest point algorithm by 70%. The invention not only registers and stitches OCT fundus images effectively, but also preserves the accuracy of the created wide-field fundus retinal image.
Brief description of the drawings
Fig. 1 is the flow chart of the OCT fundus image data registration method according to a particular embodiment of the invention;
Fig. 2 shows the states of boundary cells obtained with the boundary cell seed algorithm according to a particular embodiment of the invention;
Fig. 3 is a comparison of the point cloud data registration process according to a particular embodiment of the invention;
Fig. 4 is an enlarged comparison of partial results in the overlapping region of the point cloud data according to a particular embodiment of the invention;
Fig. 5 is a comparison of the positions of the original set and the comparison set after the transformation matrices obtained by registration are applied, according to a particular embodiment of the invention;
Fig. 6 is a comparison of the registration results for the "Stanford bunny" point cloud data according to a particular embodiment of the invention;
Fig. 7 is a comparison of the time consumption of the improved algorithm according to the invention and the conventional iterative algorithm.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. It should also be noted that, for convenience of description, the drawings show only the parts related to the present invention rather than the entire structure.
The principle of the present invention is as follows: the fundus image is denoised with the Canny edge detection method and the obtained edges are taken as image features; these features are extracted as point cloud data; the edge feature points of the point cloud data are extracted using the minimum bounding box method; the edge feature points are used to complete the initial registration by singular value decomposition; and the point cloud data is then finely registered with the improved iterative closest point method.
Referring to Fig. 1, which shows the flow chart of the OCT fundus image data registration method according to the present invention, the method specifically includes the following steps:
Step 1: denoise the image and detect edges using the Canny edge detection method.
An image edge is a critical region where the image changes sharply from one gray level to another; it usually separates two regions whose brightness differs significantly. Edges carry most of the information in an image and are an important part of image feature extraction. Step 1 includes the following sub-steps:
(1) Convert the original image to grayscale. In practical applications most images are in color, so the color image must first be converted to grayscale. A common grayscale method averages the sampled values of the three RGB channels, i.e. Gray = (R + G + B)/3. For medical ophthalmic images, considering the physiological characteristics of the human eye, the grayscale formula used in the present invention is Gray = 0.299R + 0.587G + 0.114B.
(2) Filter the image, then compute its gradient magnitude.
Image filtering is an important step in image processing: it suppresses or removes noise and its influence on subsequent processing while preserving the original image features as much as possible, thereby improving the accuracy and reliability of the processing. Common filtering methods include mean filtering, median filtering, Gaussian filtering, and bilateral filtering. In the present invention, the Canny edge detection algorithm uses Gaussian filtering.
Image edges usually lie in regions where the gray values change strongly, so first-order finite differences can be used to compute the horizontal and vertical partial derivatives of the image pixel matrix. A point with a large partial derivative indicates a large gray-level change in its neighborhood and may be an edge point.
(3) Apply non-maximum suppression to the gradient magnitude, then detect and link edges with the dual-threshold algorithm.
From the properties of low-order derivatives, a point of locally maximal gradient magnitude is not necessarily the point where the regional gray value changes most and cannot by itself be declared an edge point, so the Canny edge detection algorithm performs non-maximum suppression. Within the eight-neighborhood of each local gradient-magnitude maximum, only the point with the largest gradient magnitude is retained and the other points are discarded, ensuring that any small region contains at most one edge point.
The Canny edge detection algorithm then detects and links edges with dual thresholds. The high threshold suppresses edges: only gradient magnitudes above it are considered edges, so the higher the high threshold, the fewer edges are detected. The low threshold links edges: because of the suppressing effect of the high threshold, many detected edge points cannot be connected into a line, so the algorithm treats every edge point whose gradient magnitude exceeds the low threshold as an edge and connects them to form the final edge detection result.
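A minimal sketch of this step using OpenCV is given below. The cv2 package, the Gaussian kernel size, and the two thresholds shown are illustrative assumptions for readability, not values prescribed by the patent; cv2.Canny performs the gradient computation, non-maximum suppression, and dual-threshold linking internally.

```python
import cv2

def extract_retina_edges(image_path, low_thresh=50, high_thresh=150):
    """Grayscale conversion, Gaussian filtering, and Canny edge detection."""
    img = cv2.imread(image_path)                   # color fundus slice (BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # uses the 0.299/0.587/0.114 weights
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)
    edges = cv2.Canny(blurred, low_thresh, high_thresh)
    return edges                                   # binary edge map, 255 at edge pixels
```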
Step 2: extract and visualize the image point cloud data.
Point cloud data consists of points in three-dimensional space and is commonly used to describe the three-dimensional structure of an object. Each point contains at least one set of three-dimensional coordinates representing its spatial position; some point clouds also carry higher-dimensional attributes such as color or reflected light intensity. After the original fundus optical coherence tomography images are processed by Step 1, a series of images containing the edges of each retinal layer is obtained.
These images are stacked into a three-dimensional volume according to their original layered structure, each edge point is taken as a point of the three-dimensional point cloud according to its spatial position, and the three-dimensional coordinates of the point cloud points correspond to the spatial coordinates of the superimposed volume data.
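A minimal sketch of this conversion, assuming the edge maps from Step 1 are available as a list of equally sized binary arrays; the NumPy usage, array layout, and axis order are illustrative assumptions.

```python
import numpy as np

def edge_maps_to_point_cloud(edge_maps):
    """Stack 2D binary edge maps into a volume and return an (N, 3) point cloud.

    edge_maps: iterable of 2D uint8 arrays (one per B-scan), nonzero at edge pixels.
    """
    volume = np.stack(edge_maps, axis=0)           # shape (slices, rows, cols)
    z, y, x = np.nonzero(volume)                   # indices of every edge voxel
    points = np.column_stack((x, y, z)).astype(np.float64)
    return points                                  # each row is one point cloud point
```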
Step 3: extract the edge feature points of the point cloud data.
In this step, the points of the point cloud are first assigned to different spatial grid cells according to their spatial coordinates; all cells belonging to the point cloud boundary are then found; finally, the points in the boundary cells are extracted as the edge feature points of the point cloud data.
To divide the point cloud points into different grid cells, the minimum bounding box containing the entire point cloud must first be found; this bounding box is then divided into grid cells so that every point of the point cloud is contained in some cell.
The present invention uses an oriented bounding box as the minimum bounding box of the point cloud data; an oriented bounding box is a bounding volume of arbitrary orientation that encloses the whole object compactly. After the minimum bounding box of the point cloud data is obtained, the point cloud data is decomposed into grid cells of equal volume, with the size of each cell defined by formula 1, where L is the number of points in the point cloud data and V is the volume of the minimum bounding box. V/L is the reciprocal of the point cloud density and represents the average space occupied by each point of the point cloud. The initial cell size S_grid is set to K times the reciprocal of the point cloud density, and the minimum bounding box of the point cloud data is divided into grid cells of size S_grid. In formula 1, K is a tunable parameter; for the dense point cloud data handled by the present invention, experiments show that with K between 8 and 24 the cell size is large enough to contain a sufficient number of points.
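A minimal sketch of the grid partition follows. It is simplified to an axis-aligned bounding box rather than the oriented bounding box described above, and it assumes that formula 1 has the form S_grid = K · V / L with S_grid interpreted as a cell volume; both simplifications are assumptions made for illustration.

```python
import numpy as np

def partition_into_grid(points, K=16):
    """Assign each point to a spatial grid cell of roughly equal volume.

    points: (N, 3) array. Returns a dict mapping integer cell indices (i, j, k)
    to arrays of point indices, plus the cell edge length used.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extent = np.maximum(maxs - mins, 1e-9)
    V = float(np.prod(extent))                 # bounding-box volume
    L = len(points)
    cell_volume = K * V / L                    # assumed form of formula 1
    edge = cell_volume ** (1.0 / 3.0)          # linear cell size
    idx = np.floor((points - mins) / edge).astype(int)
    cells = {}
    for n, key in enumerate(map(tuple, idx)):
        cells.setdefault(key, []).append(n)
    return {k: np.array(v) for k, v in cells.items()}, edge
```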
Further, the present invention finds all boundary grid cells using the boundary seed grid algorithm and extracts the points contained in these cells as the edge feature points of the point cloud data. To extract the boundary cells, the grid cells are divided into two classes, empty cells and solid cells: a cell containing no point of the point cloud is an empty cell, and any other cell is a solid cell. Let the spatial coordinates (x, y, z) denote the position of a cell and let the function f denote its type, with f(x, y, z) = 1 if the cell is solid and f(x, y, z) = 0 otherwise. The coordinates of the six neighboring cells of a cell are (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1), and (x, y, z+1). Formula 2 is used to determine whether a cell is a boundary cell:
U(x, y, z) is the sum of three products. With (x, y, z) the coordinates of the central cell and (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1), (x, y, z+1) its six neighboring cells, f(x-1, y, z)·f(x+1, y, z) = 1 if and only if the cells above and below are both solid, f(x, y-1, z)·f(x, y+1, z) = 1 if and only if the cells to the left and right are both solid, and f(x, y, z-1)·f(x, y, z+1) = 1 if and only if the cells in front and behind are both solid. When U(x, y, z) ≤ 1, at most one of the three opposite pairs in the six-neighborhood consists of two solid cells, and the cell is a boundary cell.
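A minimal sketch of the boundary test, building on the cell dictionary returned by the previous sketch; U(x, y, z) is computed exactly as the sum of the three neighbor products described above.

```python
import numpy as np

def boundary_cells(cells):
    """Return the indices (x, y, z) of cells satisfying the boundary test U(x, y, z) <= 1."""
    solid = set(cells)                         # cells containing at least one point: f = 1

    def f(key):
        return 1 if key in solid else 0

    boundary = []
    for (x, y, z) in solid:
        # U is the sum of three products over the opposite neighbor pairs
        U = (f((x - 1, y, z)) * f((x + 1, y, z)) +
             f((x, y - 1, z)) * f((x, y + 1, z)) +
             f((x, y, z - 1)) * f((x, y, z + 1)))
        if U <= 1:
            boundary.append((x, y, z))
    return boundary

# The edge feature points are the points stored in the boundary cells, e.g.:
# feature_idx = np.concatenate([cells[k] for k in boundary_cells(cells)])
```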
Referring to Fig. 2, which shows the states of boundary cells obtained with the boundary cell seed algorithm according to a particular embodiment of the invention: the left panel shows a surface boundary, the middle panel a line boundary, and the right panel a point boundary.
Step 4: complete the initial registration of the point cloud data using the singular value decomposition algorithm.
For the above edge feature points of the point cloud data, the rotation matrix and translation matrix between the sets to be registered are computed with the singular value decomposition algorithm to complete the initial registration of the point cloud data.
Let P denote the original set and Q the comparison set, and define the objective matrix as in formula 3, where
C_P and C_Q are the centroids of the original set P and the comparison set Q respectively, M is the number of points in the point cloud data, and P_i and Q_i are the i-th points of P and Q. Applying singular value decomposition (SVD) to the objective matrix E gives E = UDV^T, where the columns of U are the eigenvectors of EE^T, the columns of V are the eigenvectors of E^T E, V^T is the transpose of V, and D = diag(d_i) is a diagonal matrix whose entries d_i are the singular values of E, i.e. the square roots of the eigenvalues of E^T E.
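The formula images are not reproduced in this text. For the standard SVD-based rigid registration that this step describes, the objective (cross-covariance) matrix of formula 3 and the correction matrix B typically take the following form; this is a reconstruction offered for readability, under the stated definitions, and not the patent's verbatim formulas.

```latex
E = \sum_{i=1}^{M} (P_i - C_P)\,(Q_i - C_Q)^{\mathsf T}, \qquad
E = U D V^{\mathsf T}, \qquad
B = \operatorname{diag}\bigl(1,\; 1,\; \det(U V^{\mathsf T})\bigr)
```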
The rotation matrix is then R = UBV^T and the translation matrix is T = C_Q - RC_P. Applying the matrices R and T to the original set P, e.g. P' = RP + T, eliminates the large displacement error that may exist between P and Q in their initial positions and provides a good starting state for the fine registration step.
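A minimal sketch of this initial alignment in Python/NumPy, using the standard SVD (Kabsch-style) solution consistent with the reconstruction above. Treating P and Q as matched point arrays of equal length and guarding against reflections with a determinant-based B are assumptions of this sketch, since the patent's own definition of B is not reproduced.

```python
import numpy as np

def svd_initial_registration(P, Q):
    """Coarse alignment of point set P (N, 3) onto point set Q (N, 3).

    P and Q are assumed to be matched edge feature points of equal length.
    Returns the rotation matrix R, translation vector T, and the transformed P.
    """
    C_P, C_Q = P.mean(axis=0), Q.mean(axis=0)            # centroids
    E = (P - C_P).T @ (Q - C_Q)                          # cross-covariance (objective) matrix
    U, D, Vt = np.linalg.svd(E)
    B = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ B @ U.T
    T = C_Q - R @ C_P
    return R, T, P @ R.T + T
```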
Step 5: complete the fine registration of the point cloud data using the improved iterative closest point algorithm.
The core idea of the traditional iterative closest point algorithm is first to find all corresponding points of P and Q according to some geometric feature and to take these correspondences as the objects of registration; then to build a target equation that measures the degree of registration and to compute the rotation and translation matrices for the current situation by finding the optimal solution of the target equation; and finally to iterate this process until the target equation meets a preset threshold.
The traditional iterative closest point algorithm uses the point-to-point distance as the geometric feature for finding corresponding points. For two points P_i(x_p, y_p, z_p) and Q_i(x_q, y_q, z_q) in sets P and Q, the Euclidean distance between them is Dis(P_i, Q_i) = sqrt((x_p - x_q)^2 + (y_p - y_q)^2 + (z_p - z_q)^2).
For any point in set P, the above formula is used to compute its distance to every point of Q, and the point of Q with the smallest Euclidean distance is chosen as its corresponding point. A rotation matrix R and a translation matrix T are then sought; after they act on P_i the resulting position is RP_i + T, and the target equation is constructed by least squares,
where N is the number of points in the point cloud and E is the sum of squared distances between each transformed point of set P and its corresponding point in set Q.
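The image of the target equation (formula 7) is likewise not reproduced. From the surrounding description, the least-squares objective and the root-mean-square convergence criterion of Step 5.3 conventionally take the form below; this is a reconstruction, and in particular the 1/N normalization is an assumption.

```latex
E(R, T) = \frac{1}{N} \sum_{i=1}^{N} \bigl\lVert R\,P_i + T - Q_i \bigr\rVert^{2},
\qquad
\mathrm{RMSE} = \sqrt{E(R, T)}
```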
As can be seen from the target equation, when E is minimal the corresponding points of the current iteration are closest to each other, so the rotation and translation matrices that minimize E are the optimal solution of this iteration. To solve for the rotation and translation matrices, translation and rotation are separated. An initial estimate of the translation matrix T is obtained first: the centroids p_c and q_c of P and Q are computed as the means of their respective points,
the estimated translation between P and Q is then p_c - q_c, and the target equation simplifies accordingly.
The rotation matrix R of this iteration is obtained from the simplified equation, the translation matrix is computed as T = Q - RP, and the above procedure is repeated until the optimal value E of the target equation converges to the given threshold.
For point cloud data with a good initial position, the iterative closest point algorithm can obtain fairly accurate registration results. However, the traditional iterative closest point algorithm also has some shortcomings.
First, the algorithm assumes that the two point sets are in an inclusion relation, i.e. that one point set is a subset of the other, a condition that is usually difficult to satisfy.
Second, when choosing corresponding point pairs, for every point in one point cloud set the algorithm computes its Euclidean distance to every point in the other set. If the two point sets contain N_P and N_Q points respectively, the time complexity of this step is O(N_P × N_Q), so the algorithm spends a great deal of time computing the distances of candidate corresponding pairs and the computational cost is very high. Moreover, the algorithm simply takes the pair with the smallest Euclidean distance as a corresponding pair; because a certain number of noise points exist, this step introduces errors and can trap the algorithm in a local minimum.
The present invention therefore proposes an improved iterative closest point algorithm. In the conventional iterative closest point algorithm every point in a point set is given the same weight, so every point takes part in the computation of the distances between points; this is the bottleneck of the algorithm. The algorithm of the invention assigns different weights to different points: the larger the Euclidean distance between a point pair, the smaller its weight. If P_A is a point of set P, its weight is expressed by formula 5,
where Q_B is the corresponding point of P_A in set Q, Dis(P_A, Q_B) is the Euclidean distance between P_A and Q_B, and Dis_MAX is the maximum Euclidean distance over all point pairs. For a given threshold ε, every corresponding pair whose weight is below ε is rejected and does not take part in the iterative computation. The threshold ε is a tunable parameter that trades off the time complexity of the registration against its accuracy: decreasing ε allows more corresponding pairs to take part in the iteration, which makes the registration result more accurate but also increases the number of computations and the time complexity of the algorithm. Preferably, ε is between 0.2 and 0.4.
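A minimal sketch of the weighting and filtering step follows. Formula 5 is not reproduced in this text; the weight w_A = 1 - Dis(P_A, Q_B) / Dis_MAX used here is an assumed form consistent with the description (it decreases as the pair gets farther apart), and SciPy's cKDTree is used for the nearest-neighbor search as an implementation convenience.

```python
import numpy as np
from scipy.spatial import cKDTree

def correspondence_weights(P, Q, eps=0.3):
    """Weight each point of P by how close its nearest neighbor in Q is.

    Returns the indices of P that survive the threshold eps and their matches in Q.
    """
    tree = cKDTree(Q)
    dist, nearest = tree.query(P)                       # nearest neighbor of every P_A in Q
    weights = 1.0 - dist / (dist.max() + 1e-12)         # assumed form of formula 5
    keep = weights >= eps                               # low-weight pairs are rejected
    return np.nonzero(keep)[0], nearest[keep]
```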
Because the point cloud data may contain noise, which affects computations such as the point cloud centroid and the iterative closest point target equation and thereby reduces the registration accuracy, the invention introduces M-estimation to exclude the influence of noise points on the result. The basic idea of M-estimation robust regression is to determine the weight of each point from the size of its regression residual: to reduce the influence of outliers, points with small residuals are given larger weights and points with large residuals are given smaller weights. In the present invention, the M-estimation weight in each coordinate direction,
i.e. the weight of a point in that direction, is defined as w = 1 for |v| ≤ c and w = c/|v| for |v| > c,
where v is the standardized residual in the corresponding coordinate direction and c is a constant, usually taken as 1.345. When v lies in the interval (-c, c), the M-estimation therefore degenerates to classical least-squares estimation, while when the residual v exceeds c the weight w_v decreases as the residual grows. The Euclidean distance between a point A of set P and a point B of set Q is correspondingly computed as in formula 6:
where ω_x, ω_y, ω_z are the M-estimation weights in each coordinate direction and (x_A, y_A, z_A), (x_B, y_B, z_B) are the spatial coordinates of point A and point B; the iteration is then carried out with this weighted distance.
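A minimal sketch of the Huber-style weight and the weighted distance, assuming formula 6 has the weighted form Dis(A, B) = sqrt(ω_x(x_A - x_B)^2 + ω_y(y_A - y_B)^2 + ω_z(z_A - z_B)^2) suggested by the description; the function names are illustrative.

```python
import numpy as np

def m_weight(v, c=1.345):
    """Huber-type M-estimation weight: 1 inside (-c, c), c/|v| outside."""
    v = np.abs(v)
    return np.where(v <= c, 1.0, c / np.maximum(v, 1e-12))

def weighted_distance(a, b, residuals, c=1.345):
    """Weighted Euclidean distance between points a and b (assumed form of formula 6).

    residuals: standardized residuals (v_x, v_y, v_z) in the three coordinate directions.
    """
    w = m_weight(np.asarray(residuals), c)         # (omega_x, omega_y, omega_z)
    diff = np.asarray(a) - np.asarray(b)
    return float(np.sqrt(np.sum(w * diff ** 2)))
```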
In summary, the improved iterative closest point algorithm is as follows:
Step 5.1: initialization. For the two given point sets P and Q, specify a convergence threshold τ; preferably, τ = 0.2.
Step 5.2: compute the weight of each point of the point cloud data using formula 5, compare against the threshold ε, and exclude the points with low weights; preferably, ε is between 0.2 and 0.4.
Step 5.3: iterate the following steps until the root-mean-square error of formula 7 converges to the given threshold τ:
Step 5.3.1: compute the Euclidean distances between the point clouds of P and Q according to formula 6,
Step 5.3.2: for the set P with the low-weight points removed, find for each point the nearest point of Q by Euclidean distance as its corresponding point and store it in the nearest-point set,
Step 5.3.3: compute the rotation matrix R and translation matrix T between the set P and the nearest-point set by least squares using formula 7,
Step 5.3.4: apply the rotation matrix R and translation matrix T to the set P to obtain a new set, and use formula 7 to check whether the root-mean-square error has converged to the given threshold τ; if so, terminate the computation; otherwise, repeat the iteration of Step 5.3.
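A compact end-to-end sketch of Steps 5.1 to 5.3 under the assumptions already stated: the assumed forms of formulas 5 and 7, the M-estimation weights of formula 6 fixed to 1 for brevity, and SciPy's cKDTree used for the nearest-neighbor search. It is a sketch, not the patent's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation T mapping P onto Q (SVD solution)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    E = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(E)
    B = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ B @ U.T
    return R, cQ - R @ cP

def improved_icp(P, Q, tau=0.2, eps=0.3, max_iter=50):
    """Weighted ICP sketch: low-weight pairs are dropped before each least-squares step."""
    P = P.copy()
    tree = cKDTree(Q)
    for _ in range(max_iter):
        dist, nearest = tree.query(P)
        weights = 1.0 - dist / (dist.max() + 1e-12)   # assumed form of formula 5
        keep = weights >= eps                         # Step 5.2: reject low-weight pairs
        R, T = best_rigid_transform(P[keep], Q[nearest[keep]])   # Step 5.3.3
        P = P @ R.T + T                               # Step 5.3.4: apply the transform
        rmse = np.sqrt(np.mean(np.sum((P[keep] - Q[nearest[keep]]) ** 2, axis=1)))
        if rmse <= tau:                               # convergence check (formula 7)
            break
    return P, R, T
```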
Thus, the present invention takes an OCT fundus image, extracts the retinal edges of the OCT fundus image using the Canny edge detection method, and collects these edges in the form of point cloud data; secondly, the feature points of the point cloud data are extracted using the spatial grid partitioning method; thirdly, the singular value decomposition algorithm is applied to compute the transformation matrix between the point clouds to be registered, eliminating obvious position errors; finally, fine registration is performed using the improved iterative closest point algorithm, and the resulting rotation and translation matrices are applied to the original OCT fundus image to obtain the final result.
Embodiment 1:
The present invention is further explained below with reference to experiments, in which it is compared with the classical iterative closest point approach to verify its accuracy and robustness.
The computer used in the experiments has an Intel Core E7500 dual-core CPU with a clock frequency of 2.93 GHz and 2 × 1 GB of DDR2 memory; the operating system is Microsoft Windows 7 SP1 Ultimate and the algorithm is implemented on Microsoft Visual Studio 2010.
To assess the registration performance of the algorithm, two fundus optical coherence tomography volumes covering adjacent parts of the human retinal structure are used as experimental data. The overlapping part of the two data sets is about 75 × 500 × 375 voxels. Fig. 3 shows the registration process of the point cloud data from four different viewing angles: the left part of each picture shows the initial positions of the two point sets (red points represent the original set and green points the comparison set), the right part shows the real-time registration result during the iteration, and the outer contour of the right-hand point cloud is the minimum oriented bounding box of the point cloud.
After the above iterative process ends, the final registration result is obtained. To make the experimental result more intuitive, Fig. 4 shows enlarged partial views of the registration result. As shown in Fig. 4, the left region of the image shows the initial positions of the original set and the comparison set, between which a clear misalignment can be observed; the left image shows the position relationship of the overlapping region before registration, and the right image shows the registration result of the proposed algorithm. In the result after processing by the algorithm of the invention, shown on the right side of the image, most of the previously misaligned points have been registered accurately.
After the registration process ends, the resulting transformation matrices are applied to the original set and the comparison set, and the data are rendered with ImageJ. Fig. 5 visualizes the original data and the registered result, rendering four fundus optical coherence tomography volumes: the upper two images are the original set and the comparison set mentioned above, followed by two experimental results of the invention; the lower-left image is a side view of the registration result and the lower-right a top view, in which the depression at the center of the macula of the human eye (the fovea) is visible. Traditional fundus volume registration algorithms often leave cliff-like dislocations in the retinal inner nuclear layer, the photoreceptor layer, and the pigment epithelium layer after registration. In the result shown in Fig. 5, however, no obvious retinal layer discontinuity appears, and no clear mosaic traces are observed in the overlapping region of the data; such wide-field optical coherence tomography volume data may be of some help to clinicians in the prevention and diagnosis of ophthalmic diseases.
In the present invention, Figs. 3 to 5 serve only to illustrate the effect obtained by the invention; the effect of the invention is not represented by Figs. 3 to 5 alone.
Embodiment 2
In addition, the present invention uses the registration error to assess the accuracy of the algorithm and the time consumption to assess its efficiency. The registration error is the percentage of points whose corresponding-point matching failed among the total number of point cloud points, computed as in the following formulas,
where Success(P_A, Q_B) is defined per corresponding pair as follows:
N is the number of corresponding pairs taking part in the computation and (P_A, Q_B) denotes a corresponding pair. Success(P_A, Q_B) represents the registration result of the pair (P_A, Q_B): if the Euclidean distance between the corresponding points after registration is smaller than a given threshold δ, the pair is considered successfully registered and Success(P_A, Q_B) = 1. In the algorithm of the invention, δ is 0.15 times the Euclidean distance of (P_A, Q_B) before fine registration, i.e. a corresponding pair is regarded as successfully registered if its Euclidean distance after registration is smaller than 15% of the original distance. Through statistics and computation, Table 1 compares the algorithm of the invention with the conventional iterative closest point method proposed by Besl in terms of time consumption and registration error.
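A minimal sketch of this evaluation metric under the stated definition; the exact layout of the patent's formulas is not reproduced, the parameter names are illustrative, and success is tested exactly as described, with δ = 0.15 × the pre-registration pair distance.

```python
import numpy as np

def registration_error(P_before, P_after, Q_matched, delta_ratio=0.15):
    """Fraction of corresponding pairs whose post-registration distance does NOT
    fall below delta = delta_ratio * pre-registration distance (failed matches / N)."""
    d_before = np.linalg.norm(P_before - Q_matched, axis=1)
    d_after = np.linalg.norm(P_after - Q_matched, axis=1)
    success = d_after < delta_ratio * d_before     # Success(P_A, Q_B) = 1
    return 1.0 - success.mean()                    # percentage of failed matches
```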
Table 1: comparison of the experimental results of the algorithm of the invention and the conventional iterative closest point algorithm
As a comparison experiment, the invention is also tested on several open-source point cloud data sets. Fig. 6 shows the registration result of the algorithm of the invention on the open "Stanford bunny" point cloud data: the left figure is the initial position of the point cloud data and the right figure the point cloud data after registration. This point cloud has 9731 points; the conventional iterative closest point algorithm takes 11.023 seconds with a registration error of 0.001130, while the improved iterative closest point algorithm takes 3.994 seconds with a registration error of 0.000794. Through the improvement of the conventional iterative closest point algorithm, the present invention achieves a marked improvement in both the computation time and the registration accuracy of point cloud registration.
To compare the improved algorithm with the traditional algorithm more intuitively, the present invention carries out comparison experiments on point cloud data of different sizes and shows the difference in computation time with a line chart. In Table 2, the first data set, with 9731 points, is the "Stanford bunny"; the second is the "dragon" from the Stanford 3D Scanning Repository (http://graphics.stanford.edu/data/3Dscanrep/); the rest are point cloud data from actual projects. Table 2 below and Fig. 7 compare the registration time of the algorithm of the invention and the conventional iterative closest point algorithm on the different data sets.
Table 2: comparison of computation time between the algorithm of the invention and the conventional iterative closest point algorithm on point cloud data sets of different sizes
As shown in Table 2 and Fig. 7, when processing dense point clouds with large data volumes, the algorithm of the invention has clear advantages in time complexity and registration accuracy. In most cases, the algorithm of the invention improves the efficiency of the traditional iterative closest point algorithm by 70%.
The above experimental results show the effectiveness of the present invention in registering and stitching OCT fundus images, and at the same time demonstrate its accuracy and robustness in creating wide-field fundus retinal images.
The above is a further detailed description of the present invention with reference to specific preferred embodiments, but it cannot be concluded that the specific embodiments of the present invention are limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, several simple deductions or substitutions can be made without departing from the inventive concept, and all of them should be regarded as falling within the scope of protection determined by the claims submitted.

Claims (9)

1. An OCT fundus image data registration method, comprising the following steps:
Step 1: denoise the image and detect edges using the Canny edge detection method:
convert the original image to grayscale, filter the image, compute the gradient magnitude of the image, apply non-maximum suppression to the gradient magnitude, and then detect and link edges with a dual-threshold algorithm;
Step 2: extract and visualize the image point cloud data:
stack the images processed in Step 1 into a three-dimensional volume according to their original layered structure, and take each edge point as a point of the three-dimensional point cloud according to its spatial position, the three-dimensional coordinates of the point cloud points corresponding to the spatial coordinates of the superimposed three-dimensional volume data;
Step 3: extract the edge feature points of the point cloud data:
assign the points of the point cloud to different spatial grid cells according to their spatial coordinates, then find all grid cells belonging to the point cloud boundary, and finally extract the points in the boundary cells as the edge feature points of the point cloud data;
Step 4: complete the initial registration of the point cloud data using a singular value decomposition algorithm:
let P denote the original set and Q the comparison set, and define the objective matrix as in formula 3, where C_P and C_Q are the centroids of the original set P and the comparison set Q respectively, M is the number of points in the point cloud data, and P_i and Q_i are the i-th points of P and Q; apply singular value decomposition (SVD) to the objective matrix E so that E = UDV^T, where the columns of U are the eigenvectors of EE^T, the columns of V are the eigenvectors of E^T E, V^T is the transpose of V, and D = diag(d_i) is a diagonal matrix whose entries d_i are the singular values of E;
the rotation matrix is then R = UBV^T and the translation matrix is T = C_Q - RC_P, and the matrices R and T are applied to the original set P to eliminate the large displacement error that may exist between the original set P and the comparison set Q in their initial positions;
Step 5: complete the fine registration of the point cloud data using an improved iterative closest point algorithm:
Step 5.1: initialization: for the two given point sets P and Q, specify a convergence threshold τ;
Step 5.2: compute the weight of each point in the point cloud data using formula 5, compare against a threshold ε, and exclude the points with low weights,
where Q_B is the point of set Q corresponding to point P_A, Dis(P_A, Q_B) is the Euclidean distance between P_A and Q_B, Dis_MAX is the maximum Euclidean distance over all point pairs, and Euclidean distances are computed with formula 6;
Step 5.3: iterate the following steps until the root-mean-square error of formula 7 converges to the given threshold τ:
Step 5.3.1: compute the Euclidean distances between the point clouds of sets P and Q according to formula 6,
where ω_x, ω_y, ω_z are the M-estimation weights in each coordinate direction and (x_A, y_A, z_A), (x_B, y_B, z_B) are the spatial coordinates of point A in set P and point B in set Q,
Step 5.3.2: for the set P with the low-weight points removed, find for each point the nearest point of set Q by Euclidean distance as its corresponding point and store it in the nearest-point set,
Step 5.3.3: compute the rotation matrix R and translation matrix T between the set P and the nearest-point set by least squares using formula 7,
Step 5.3.4: apply the rotation matrix R and translation matrix T to the set P to obtain a new set, and use formula 7 to determine whether the root-mean-square error converges to the given threshold τ; if so, terminate the computation; otherwise, repeat the iteration of Step 5.3.
2. The OCT fundus image data registration method according to claim 1, characterized in that:
in Step 1, the grayscale conversion formula for the original image is Gray = 0.299R + 0.587G + 0.114B, the image is filtered with a Gaussian filter, and first-order finite differences are used to compute the horizontal and vertical partial derivatives of the image pixel matrix.
3. The OCT fundus image data registration method according to claim 2, characterized in that:
in Step 1, edges are detected and linked using dual thresholds: the high threshold suppresses edges, since only gradient magnitudes above it are considered edges, while the low threshold links edges, with every point whose gradient magnitude exceeds the low threshold being treated as an edge and connected to form the final edge detection result.
4. The OCT fundus image data registration method according to claim 1, characterized in that:
in Step 3, an oriented bounding box is used as the minimum bounding box of the point cloud data, and the points of the point cloud are divided into different spatial grid cells;
the point cloud data is decomposed into grid cells of equal volume, the size of each cell being defined by formula 1,
where L is the number of points in the point cloud data and V is the volume of the minimum bounding box; V/L is the reciprocal of the point cloud density and represents the average space occupied by each point of the point cloud; the initial cell size S_grid is set to K times the reciprocal of the point cloud density, and the minimum bounding box of the point cloud data is divided into grid cells of size S_grid.
5. The OCT fundus image data registration method according to claim 4, characterized in that:
all boundary grid cells are found using the boundary seed grid algorithm, and the points contained in the boundary cells are extracted as the edge feature points of the point cloud data; the grid cells are divided into two classes, empty cells and solid cells, a cell containing no point of the point cloud being an empty cell and any other cell being a solid cell; the spatial coordinates (x, y, z) denote the position of a cell and the function f denotes its type, with f(x, y, z) = 1 if the cell is solid and f(x, y, z) = 0 otherwise; formula 2 is used to determine whether a cell is a boundary cell:
when U(x, y, z) ≤ 1, at most one of the three pairs of neighboring cells (up/down, left/right, front/back) in the six-neighborhood of the cell consists of two solid cells, and the cell is a boundary cell.
6. The OCT fundus image data registration method according to claim 4 or 5, characterized in that:
in Step 3, 8 ≤ K ≤ 24.
7. The OCT fundus image data registration method according to claim 1, characterized in that:
in Step 5, the convergence threshold τ = 0.2 and the comparison threshold ε satisfies 0.2 ≤ ε ≤ 0.4.
8. The OCT fundus image data registration method according to claim 7, characterized in that:
in Step 5, the weight equation of the M-estimation in each coordinate direction is
w = 1, if |v| ≤ c; w = c/|v|, if |v| > c,
where v is the standardized residual in the corresponding coordinate direction and c is a constant.
9. The OCT fundus image data registration method according to claim 8, characterized in that:
c = 1.345.
CN201610883700.9A 2016-10-10 2016-10-10 OCT eye fundus image data registration method Pending CN106447708A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610883700.9A CN106447708A (en) 2016-10-10 2016-10-10 OCT eye fundus image data registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610883700.9A CN106447708A (en) 2016-10-10 2016-10-10 OCT eye fundus image data registration method

Publications (1)

Publication Number Publication Date
CN106447708A true CN106447708A (en) 2017-02-22

Family

ID=58173153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610883700.9A Pending CN106447708A (en) 2016-10-10 2016-10-10 OCT eye fundus image data registration method

Country Status (1)

Country Link
CN (1) CN106447708A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206438A1 (en) * 2011-02-14 2012-08-16 Fatih Porikli Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images
CN103279987A (en) * 2013-06-18 2013-09-04 厦门理工学院 Object fast three-dimensional modeling method based on Kinect

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xin Wang et al.: "An iterative closest point approach for the registration of volumetric human retina image data obtained by optical coherence tomography", Multimedia Tools and Applications *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629804B (en) * 2017-03-20 2021-06-18 北京大学口腔医学院 Three-dimensional face symmetric reference plane extraction method with weight distribution mechanism
CN108629804A (en) * 2017-03-20 2018-10-09 北京大学口腔医学院 A kind of three-dimensional face symmetric reference plane extracting method with weight distribution mechanism
CN107481274A (en) * 2017-08-11 2017-12-15 武汉理工大学 A kind of three-dimensional makees the robustness reconstructing method of object point cloud
CN109523581A (en) * 2017-09-19 2019-03-26 华为技术有限公司 A kind of method and apparatus of three-dimensional point cloud alignment
CN108090901A (en) * 2017-12-28 2018-05-29 西安中科微光影像技术有限公司 A kind of biological support alignment schemes and device based on cardiovascular OCT images
CN108107444A (en) * 2017-12-28 2018-06-01 国网黑龙江省电力有限公司检修公司 Substation's method for recognizing impurities based on laser data
CN108107444B (en) * 2017-12-28 2021-12-14 国网黑龙江省电力有限公司检修公司 Transformer substation foreign matter identification method based on laser data
CN108090901B (en) * 2017-12-28 2021-11-19 中科微光医疗研究中心(西安)有限公司 Biological stent alignment method and device based on cardiovascular OCT (optical coherence tomography) image
CN108961294A (en) * 2018-07-17 2018-12-07 北醒(北京)光子科技有限公司 A kind of dividing method and device of three-dimensional point cloud
CN110544274B (en) * 2019-07-18 2022-03-29 山东师范大学 Multispectral-based fundus image registration method and system
CN110544274A (en) * 2019-07-18 2019-12-06 山东师范大学 multispectral-based fundus image registration method and system
CN110648368B (en) * 2019-08-30 2022-05-17 广东奥普特科技股份有限公司 Calibration board corner point discrimination method based on edge features
CN110717884A (en) * 2019-08-30 2020-01-21 温州医科大学 Method for expressing corneal irregularity structure change based on change consistency parameters of anterior segment tomography technology
CN110648368A (en) * 2019-08-30 2020-01-03 广东奥普特科技股份有限公司 Calibration board corner point discrimination method based on edge features
CN110717884B (en) * 2019-08-30 2022-02-22 温州医科大学 Method for expressing corneal irregular change based on ocular surface structure change consistency
CN110946659A (en) * 2019-12-25 2020-04-03 武汉中科医疗科技工业技术研究院有限公司 Registration method and system for image space and actual space
CN111612847B (en) * 2020-04-30 2023-10-20 湖北煌朝智能自动化装备有限公司 Point cloud data matching method and system for robot grabbing operation
CN111612847A (en) * 2020-04-30 2020-09-01 重庆见芒信息技术咨询服务有限公司 Point cloud data matching method and system for robot grabbing operation
CN112155734A (en) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling and registering system and bone surgery system
CN112155734B (en) * 2020-09-29 2022-01-28 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling and registering system and bone surgery system
CN112529945A (en) * 2020-11-17 2021-03-19 西安电子科技大学 Registration method for multi-view three-dimensional ISAR scattering point set
CN112529945B (en) * 2020-11-17 2023-02-21 西安电子科技大学 Multi-view three-dimensional ISAR scattering point set registration method
CN112669386B (en) * 2020-12-29 2022-09-23 北京大学 Magnetoencephalogram automatic positioning and registering method and device based on three-dimensional optical scanning
CN112669386A (en) * 2020-12-29 2021-04-16 北京大学 Magnetoencephalogram automatic positioning and registering method and device based on three-dimensional optical scanning
CN113192197A (en) * 2021-05-24 2021-07-30 北京京东乾石科技有限公司 Method, device, equipment and storage medium for constructing global point cloud map
CN113192197B (en) * 2021-05-24 2024-04-05 北京京东乾石科技有限公司 Global point cloud map construction method, device, equipment and storage medium
CN113436070B (en) * 2021-06-20 2022-05-17 四川大学 Fundus image splicing method based on deep neural network
CN113436070A (en) * 2021-06-20 2021-09-24 四川大学 Fundus image splicing method based on deep neural network
CN115100258A (en) * 2022-08-29 2022-09-23 杭州三坛医疗科技有限公司 Hip joint image registration method, device, equipment and storage medium
CN115909302A (en) * 2023-03-09 2023-04-04 菏泽学院 Data processing method for identifying disintegration performance of medicine
CN117323002A (en) * 2023-11-30 2024-01-02 北京万特福医疗器械有限公司 Neural endoscopic surgery visualization system based on mixed reality technology

Similar Documents

Publication Publication Date Title
CN106447708A (en) OCT eye fundus image data registration method
Hong et al. Stereopifu: Depth aware clothed human digitization via stereo vision
CN108830826A (en) A kind of system and method detecting Lung neoplasm
CN110390650B (en) OCT image denoising method based on dense connection and generation countermeasure network
Luo et al. Multi-view hair capture using orientation fields
US10878574B2 (en) 3D quantitative analysis of retinal layers with deep learning
Zheng et al. Detailed reconstruction of 3D plant root shape
CN106997605B (en) A method of foot type video is acquired by smart phone and sensing data obtains three-dimensional foot type
CN107230206A (en) A kind of 3D Lung neoplasm dividing methods of the super voxel sequence lung images based on multi-modal data
CN106934761A (en) A kind of method for registering of three-dimensional non-rigid optical coherence tomographic image
CN103021017A (en) Three-dimensional scene rebuilding method based on GPU acceleration
CN110148217A (en) A kind of real-time three-dimensional method for reconstructing, device and equipment
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
Giancardo et al. Textureless macula swelling detection with multiple retinal fundus images
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
CN109215085A (en) A kind of article statistic algorithm using computer vision and image recognition
CN108764250A (en) A method of extracting essential image with convolutional neural networks
CN110555908A (en) three-dimensional reconstruction method based on indoor moving target background restoration
CN103679801A (en) Angiocarpy three-dimensional reconstruction method based on multi-view X-ray film
CN110533113A (en) Branch's point detecting method of tree in a kind of digital picture
CN106327479A (en) Apparatus and method for identifying blood vessels in angiography-assisted congenital heart disease operation
CN106846338A (en) Retina OCT image based on mixed model regards nipple Structural Techniques
CN101996415B (en) Three-dimensional modeling method for eyeball
CN105930793A (en) Human body detection method based on SAE characteristic visual learning
Wang et al. An iterative closest point approach for the registration of volumetric human retina image data obtained by optical coherence tomography

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170222

WD01 Invention patent application deemed withdrawn after publication