
CN101388115A - Depth image auto-registration method combined with texture information - Google Patents

Depth image auto-registration method combined with texture information Download PDF

Info

Publication number
CN101388115A
CN101388115A (application CNA200810224183XA)
Authority
CN
China
Prior art keywords
image
registration
pixel
depth image
subsequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA200810224183XA
Other languages
Chinese (zh)
Other versions
CN101388115B (en)
Inventor
齐越
赵沁平
杨棽
沈旭昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing University of Aeronautics and Astronautics
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN200810224183XA
Publication of CN101388115A
Application granted
Publication of CN101388115B
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An automatic depth image registration method combining texture information, mainly used to reconstruct three-dimensional models of real objects, comprises the following steps: first, a texture image is extracted from the scan data, or generated from the depth image; second, interest pixels are extracted from the texture image based on SIFT features, and a candidate set of matched interest pixels is found by cross-checking; third, correctly matched pixel pairs are selected from the candidate set under geometric-information constraints; fourth, the vertex pairs in three-dimensional space corresponding to the matched pixel pairs are found, and the rigid transformation matrix between the two depth images is computed; fifth, the result is refined with an improved ICP algorithm; sixth, based on pairwise registration, the input sequence of depth images is divided into several strip-shaped subsequences; finally, the subsequences are merged by a forward-search strategy and a complete three-dimensional model is constructed. The invention can quickly register large-scale three-dimensional scan data into a three-dimensional model, with strong noise resistance and good generality.

Description

A depth image auto-registration method combining texture information
Technical field
The present invention relates to a method for automatically reconstructing three-dimensional object models from 3D scan data, and in particular to a depth image auto-registration method combining texture information. It belongs to the fields of computer virtual reality and computer graphics, and is mainly used to reconstruct lifelike three-dimensional models of real objects, in particular three-dimensional models of cultural relics in digital museums.
Background technology
Reconstructing three-dimensional models of real-world objects is increasingly applied in virtual simulation, computer animation, reverse engineering, computer-aided design (CAD), and digital museums. With the continuous development of 3D scanning devices, reconstructing models of real objects from scanned data has gradually become a popular three-dimensional modeling approach. Mainstream 3D scanners acquire not only the depth of points on the model surface but also their color, i.e., they can simultaneously capture a depth image (range image) and a texture image of equal resolution; a depth image is also called point cloud data (PCD). Because real objects often have complex geometry and a scanner's viewing angle is limited, obtaining the complete geometry of an object's surface requires scanning from several different viewpoints and then stitching the individual scans back into one complete three-dimensional model. This process is the registration of scan data.
At present, scan-data registration divides into two processes: pair-wise registration of two views and multi-view registration. For two depth images with an overlapping region, pair-wise registration seeks the optimal motion matrix between them, splicing one depth image onto the other with minimal error. Pair-wise registration is generally performed in two steps: coarse registration and fine (accurate) registration. The coarse registration stage estimates the rigid transformation matrix between the two depth images; the fine registration stage then refines this result by minimizing a predefined error function. Similarly, multi-view registration splices a given set of depth images and mainly involves two tasks: finding the adjacency relations among the depth images, i.e., which pairs of depth images overlap, and minimizing the global error.
Two kinds of coarse registration are in common use. The first class defines a local feature descriptor, computes feature values for all points, finds three or more corresponding feature points between the two depth images by comparing and matching these values, and finally computes the rigid transformation matrix between the two scans with Horn's method (see Horn, B. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America 4 (1987), 629-642). Common local feature descriptors include: spin images (see Huber, D.F. Automatic three-dimensional modeling from reality. PhD thesis, Carnegie Mellon University, 2002. Chair: Martial Hebert); descriptors based on surface derivative characteristics (see Gal, R., Cohen-Or, D. Salient geometric features for partial shape matching and similarity. ACM Trans. Graph. 2006, 25, 130-150); descriptors based on integral volumes (see Gelfand, N., Mitra, N.J., Guibas, L.J., and Pottmann, H. Robust global registration. In Symposium on Geometry Processing (2005), pp. 197-206); and descriptors based on scale space (see Li, X., and Guskov, I. Multi-scale features for approximate alignment of point-based surfaces. In SGP '05: Proceedings of the third Eurographics symposium on Geometry Processing (Aire-la-Ville, Switzerland, 2005), Eurographics Association, p. 217). The second class of coarse registration methods is based on randomized algorithms such as RANdom SAmple Consensus (RANSAC). For example, the method of Chen et al. (see Chen, C.-S., Hung, Y.-P., and Cheng, J.-B. RANSAC-based DARCES: A new approach to fast automatic registration of partially overlapping range images. IEEE Trans. Pattern Anal. Mach. Intell. 21, 11 (1999), 1229-1234) improves the RANSAC algorithm by preselecting some reference points on the two depth images and introducing spatial constraints, striking a balance between registration speed and robustness of the result.
Although existing local geometric features are invariant under rigid transformations and remain robust under a certain amount of noise, the inherent tension between registration speed and stability for low- versus high-dimensional features, together with the noise and numerous geometric defects present in depth images, makes it difficult to obtain good results for large-scale data using local geometric features alone. Moreover, for any local feature descriptor, the feature value at each vertex is computed from the vertices in its neighborhood; hence different surface vertices whose neighborhoods have similar geometry receive similar feature values, which makes finding matching feature points between two depth images difficult. Pruning with additional constraints or aggregating features are both effective remedies, but they greatly increase the time complexity of the feature-point matching algorithm.
Since research on two-dimensional image registration has achieved good results, researchers have begun attempting to solve the pair-wise depth image registration problem with the help of texture images. Roth (see Roth, G. Registering two overlapping range images. In: 3-D Digital Imaging and Modeling, Proceedings. Second International Conference on, 1999, 191-200) finds correct correspondences through triangles formed by matched interest vertices, but the efficiency and accuracy of this method are not high. Bendels et al. (see Bendels, G.H., Degener, P., Wahl, R., Kortgen, M., Klein, R. Image-based registration of 3d-range data using feature surface elements. In VAST, 2004, 115-124), after finding the vertices corresponding to the interest pixels, first extract features from these vertices and their neighborhoods, then use geometric constraints to find the correct matches among them, and finally complete the registration. Bendels' method obtains good registration results when the scan data are of high quality and the angle between scans changes little. Seo et al. (see Seo, J.K., Sharp, G.C., Lee, S.W. Range data registration using photometric features. In: Computer Vision and Pattern Recognition, CVPR 2005, 1140-1145) build on Bendels' method and use photometric correction to handle the three-dimensional rotation and projective distortion between texture images, but they can only correct textures on planar or near-planar regions, so objects with complex surface geometry are hard to handle. Liu Xiaoli et al. (see Liu Xiaoli, Peng Xiang, Li Ameng, Gao Pengdong. Range image registration combining texture information. Journal of Computer-Aided Design and Computer Graphics, 2007, 19(3), 340-345) extract feature pixels on the texture image with the Sobel operator, find the corresponding feature pixels by normalized correlation coefficients, and finally verify the validity of these feature-pixel pairs with the Hausdorff distance. This method relies purely on texture-image information to find matched pixels and requires human interaction to select the overlapping region, so it is only applicable to registering texture images with small changes in shooting angle.
For the fine registration of two depth images, researchers have proposed many accurate registration methods, the most influential of which is the Iterative Closest Point (ICP) algorithm (see Besl, P.J., and McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14, 2 (1992), 239-256; and Chen, Y., and Medioni, G. Object modelling by registration of multiple range images. Image Vision Comput. 10, 3 (1992), 145-155). The ICP algorithm takes the initial position of the two depth images as its starting state and then iteratively reduces the sum of distances between all corresponding points of the two depth images until a threshold is reached, thereby computing the motion matrix between them. ICP and its variants generally require a good initial state, so a coarse registration step is needed to estimate the rigid motion between the two depth images as the starting state for fine registration.
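The baseline point-to-point ICP loop described above can be sketched as follows. This is a minimal illustrative version, not the patent's improved ICP variant; the function names and the brute-force nearest-neighbour search are choices made for this sketch, and the least-squares rigid fit is the standard SVD (Kabsch/Horn-style) solution.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with R @ s + t ~ d (SVD fit)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # keep a proper rotation, not a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-10):
    """Point-to-point ICP: alternate closest-point pairing and rigid fit."""
    cur = src.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(1)
        err = np.sqrt(d2[np.arange(len(cur)), nn]).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ R.T + t
    # composite transform from the original source to its final position
    R, t = best_rigid_transform(src, cur)
    return R, t, err
```

Because the pairing step only finds the true correspondences when the clouds start close together, this loop needs the coarse-registration estimate as its initial state, exactly as the text argues.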
When registering multiple depth images, a graph G(V, E) describing the topological relations among the depth images of the whole model is usually built, called the model graph. Each vertex v_i of the graph represents one depth image; each edge e_{i,j} indicates that the two depth images represented by v_i and v_j have an overlapping region, and the motion matrix between them is attached to that edge. When constructing the edge set of G, in the worst case every pair of depth images must be tested for overlap, giving a time complexity of O(n^2). Pingi et al. (Pingi, P., Fasano, A., Cignoni, P., Montani, C., Scopigno, R. Exploiting the scanning sequence for automatic registration of large sets of range maps. Computer Graphics Forum 2005, 24(3), 517-526) simplify this exhaustive algorithm by predefining a scanning strategy. A spanning tree of G represents a minimal set of depth images that covers the complete model, so once G has been built, the complete model can be reconstructed by finding a spanning tree of G. However, the band-oriented scanning and registration strategy Pingi proposes is too idealized: in practice one cannot guarantee that every pair of adjacent scans in the range image sequence overlaps.
Summary of the invention
The technical problem solved by the present invention: overcoming the deficiencies of the prior art by providing an automatic depth image registration method combining texture information, which can rapidly process large volumes of scan data into a three-dimensional model while being highly resistant to noise.
Technical solution of the present invention: an automatic depth image registration method combining texture information, divided into the registration of two depth images and the registration of multiple depth images, wherein:

The registration of two depth images proceeds as follows.

Let P and Q be the scan data to be registered, with texture image I_p and depth image R_p for P, and texture image I_q and depth image R_q for Q.

Step 1: extract the texture images I_p and I_q from P and Q, or generate texture images I_p and I_q from the depth images R_p and R_q.

Step 2: extract the interest pixels in I_p and I_q based on SIFT features, and find among them the candidate set C of matched pixel pairs by cross-checking.

Step 3: using an adaptive RANSAC algorithm, find the correctly matched pixel pairs in candidate set C under geometric-information constraints.

Step 4: find the matched vertex pairs in three-dimensional space corresponding to the matched pixel pairs, and from these vertex correspondences compute the rigid transformation matrix between the two depth images.

Step 5: refine the result of step 4 with an improved ICP algorithm, completing the accurate registration of the two depth images.

The registration of multiple scans proceeds as follows.

Let G(V, E) be the model graph expressing the topological relations among the depth images, and let S = (s_1, s_2, ..., s_n) be the input sequence of n depth images to be registered.

Step 6: based on pair-wise registration, divide the input sequence S into k strip-shaped subsequences (W_1, W_2, ..., W_k), where each W_i (i = 1...k) is a subsequence of depth images in which any two adjacent depth images have an overlapping region.

Step 7: merge the subsequences with a forward-search strategy, and construct the complete three-dimensional model from the merged subsequences.
In step 1, the method of generating texture images I_p and I_q from the depth images R_p and R_q is: apply Laplacian smoothing to the depth image to obtain its base model, compute for each vertex of the depth image the distance to its corresponding vertex on the base model, and use each distance value as the brightness of the corresponding pixel in the texture image.
In step 2, the interest pixels are extracted from I_p and I_q based on SIFT features, and the candidate set C of matched pixel pairs is found by cross-checking, as follows:

(1) use the SIFT algorithm to find the interest pixel sets of I_p and I_q respectively;

(2) using the Euclidean distance between the SIFT feature vectors of two pixels, search for corresponding pixels between the interest pixel sets of I_p and I_q:

(2.1) delete interest pixels that have no corresponding vertex in the depth image;

(2.2) for an interest pixel i in the texture image, if the vertex corresponding to i lies on the boundary of the depth image mesh, delete i;

(2.3) for each key pixel i_p in I_p, find in I_q the two key pixels i_1^q and i_2^q minimizing d(i_p, .), with d(i_p, i_1^q) ≤ d(i_p, i_2^q). Define the difference ratio between i_1^q and i_2^q as diff_{i_p}(i_1^q, i_2^q) = (d(i_p, i_2^q) - d(i_p, i_1^q)) / d(i_p, i_1^q), and predefine a threshold δ_diff. If diff_{i_p}(i_1^q, i_2^q) ≥ δ_diff, then i_1^q and i_2^q differ significantly and i_1^q can serve as the corresponding pixel of i_p; otherwise i_p has multiple plausible correspondences in I_q and is discarded. When diff_{i_p}(i_1^q, i_2^q) ≥ δ_diff, use the same method to find in I_p the corresponding pixel of i_1^q; if that pixel is i_p itself, then (i_p, i_1^q) is a pair of candidate corresponding pixels between I_p and I_q and is added to the candidate set C; otherwise i_p is still considered to have no corresponding pixel in I_q.
In step 3, the adaptive RANSAC algorithm finds the correctly matched pixel pairs in candidate set C under geometric-information constraints as follows:

(1) sort all elements (i_p, i_q) of candidate set C in ascending order of d(i_p, i_q), and place the result in a queue L;

(2) test whether the first l elements of L are correct matches, as follows:

(2.1) randomly pick 3 elements from the first l elements of L as a sample;

(2.2) compute the rigid transformation matrix from the geometric information of the vertices corresponding to the 3 matched pixel pairs in the depth images;

(2.3) from the geometric information of the corresponding vertices, judge whether each of the remaining l-3 elements is a correct match under this rigid transformation; the correctly matching elements are inliers and the rest are outliers, all inliers form the consensus set of this sample, and the sampling-count upper bound is then updated according to the number of inliers;

(2.4) repeat steps (2.1)-(2.3) until the number of iterations reaches the sampling-count upper bound, then take the sample with the largest consensus set (i.e., the most inliers), and use its consensus set as the set of correctly matched elements.
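The sample-fit-count loop above can be sketched directly on the 3-D vertex pairs (P[i], Q[i]) of the candidate matches. The patent does not spell out its rule for updating the sampling-count upper bound, so the standard adaptive bound N = log(1-p) / log(1-w^3) (w = current inlier ratio, p = desired confidence) is assumed here; function names and thresholds are illustrative.

```python
import numpy as np

def fit_rigid(A, B):
    """Least-squares rigid transform (R, t) with R @ a + t ~ b (SVD fit)."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # keep a proper rotation, not a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def adaptive_ransac_rigid(P, Q, thresh=0.05, p=0.99, max_iters=1000, seed=0):
    """Largest consensus set of matches P[i] -> Q[i] under one rigid
    transform; the sampling bound shrinks as better samples are found."""
    rng = np.random.default_rng(seed)
    n = len(P)
    best = np.zeros(n, dtype=bool)
    bound = float(max_iters)
    it = 0
    while it < min(bound, max_iters):
        it += 1
        idx = rng.choice(n, size=3, replace=False)
        # skip degenerate samples: 3 collinear points give centred rank < 2
        if np.linalg.matrix_rank(P[idx] - P[idx].mean(0)) < 2:
            continue
        R, t = fit_rigid(P[idx], Q[idx])
        resid = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
            w = best.mean()
            bound = np.log(1 - p) / np.log(1 - w**3 + 1e-12)
    R, t = fit_rigid(P[best], Q[best])   # refit on the full consensus set
    return R, t, best
```

Refitting on the whole consensus set at the end mirrors step (2.4): the winning sample's inliers, not the 3-point sample itself, determine the final transformation.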
In step 6, based on pair-wise registration, the input sequence of depth images is divided into several strip-shaped subsequences as follows:

(1) add s_1 to the first subsequence W_1;

(2) register all adjacent depth images s_i, s_{i+1} of sequence S in turn (i = 1...n-1): if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i and add the corresponding edge to graph G; otherwise start a new subsequence with s_{i+1}.
In step 7, the subsequences are merged by forward search and the complete three-dimensional model is constructed from the merged subsequences as follows:

(1) for each depth image of subsequence W_i (i = 2...k), search the subsequences W_j (j = i-1...1) in turn for a depth image that overlaps a depth image of W_i; if such a depth image exists, W_i and W_j can be merged by inserting each element of W_j before the head element of W_i, and the relevant edges are added to graph G;

(2) repeat step (1) until only one subsequence remains, or no two of the remaining subsequences overlap;

(3) reconstruct the whole model according to graph G.
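The split-then-merge structure of steps 6 and 7 can be sketched as follows. The sketch abstracts a "depth image" to any object and takes the pairwise overlap test as a black-box predicate; the function names are illustrative, and the actual registration (and edge bookkeeping in G) is elided.

```python
def split_into_strips(seq, overlaps):
    """Step 6: walk the input sequence once; consecutive images stay in
    the same strip as long as each overlaps the next."""
    strips = [[seq[0]]]
    for a, b in zip(seq, seq[1:]):
        if overlaps(a, b):
            strips[-1].append(b)
        else:
            strips.append([b])
    return strips

def merge_strips(strips, overlaps):
    """Step 7 (forward search): for each strip W_i, look back through
    W_{i-1}..W_1 for any overlapping image; splice W_j before W_i's head."""
    strips = [list(s) for s in strips]
    changed = True
    while changed and len(strips) > 1:
        changed = False
        for i in range(len(strips) - 1, 0, -1):
            for j in range(i - 1, -1, -1):
                if any(overlaps(x, y) for x in strips[i] for y in strips[j]):
                    strips[i] = strips[j] + strips[i]  # prepend W_j
                    del strips[j]
                    changed = True
                    break
            if changed:
                break
    return strips
```

Because only adjacent pairs are registered in the split pass and only cross-strip overlaps are tested in the merge pass, far fewer pairwise registrations are performed than the exhaustive O(n^2) construction of the model graph.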
Compared with the prior art, the beneficial effects of the present invention are:

(1) The present invention extracts the texture images I_p and I_q from P and Q, or generates them from the depth images R_p and R_q. Compared with traditional methods based on local geometric feature matching, when the texture images are known the inventive method does not compute geometric features for every vertex, which greatly improves speed, and it handles data with geometric noise well.

(2) Extracting the texture images I_p and I_q from P and Q, or generating them from the depth images R_p and R_q: compared with existing registration methods based on texture information, when the texture images are unknown the inventive method can generate texture images from low-dimensional geometric features of the vertices in three-dimensional space, which is clearly more general.

(3) Compared with traditional methods based on local geometric feature matching, extracting the interest pixels of I_p and I_q with SIFT features and finding the candidate set C of matched pixel pairs by cross-checking allows image registration techniques to find the vertex correspondences, giving a clear speed advantage when processing scans of complex models.

(4) Compared with existing registration methods that combine texture images, the inventive method extracts the interest pixels with SIFT features and finds the candidate set C by cross-checking, i.e., it introduces a cross-check of the interest pixels and selects the correct matched pixels with the help of the depth images, improving the accuracy of the registration results.

(5) The present invention divides the input sequence S of depth images into several strip-shaped subsequences, merges them with a forward-search strategy, and constructs the complete three-dimensional model from the merged subsequences. For an input of many depth images, this divide-and-conquer strategy builds the model graph quickly, avoids the traditional need to register every pair of depth images, and greatly improves the speed of multi-view registration, sharply reducing the total time of the whole multi-view registration process.
Description of drawings
Fig. 1 is the flow chart of the pair-wise depth image registration process of the present invention;

Fig. 2 shows the automatic registration of two depth images in an embodiment of the invention, in which (a) shows the two depth images to be registered, (b) shows the two texture images extracted from the scan data, with white points marking the corresponding pixels between the two images, (c) marks with black points the vertices in three-dimensional space corresponding to the matched pixels between the texture images, and (d) is the final registration result;

Fig. 3 shows the registration of multiple depth images in an embodiment of the invention: for 13 input depth images, all adjacent pairs are first registered, dividing the whole scan sequence into 5 strip subsequences; these 5 subsequences are then merged to finally produce one complete model.

Figs. 4, 5, 6, and 7 show registration results of scan data of different models obtained with the method of the invention: Fig. 4 is the registration result of big-foot model no. 1, Fig. 5 of big-foot model no. 2, Fig. 6 of a Buddha model, and Fig. 7 of a terracotta warrior model.
Embodiment
As shown in Fig. 1, the present invention includes the registration of two scans and the registration of multiple scans, as follows:

1. Extract or generate the texture images from the scan data.

Let P and Q be the scan data to be registered, with texture image I_p and depth image R_p for P, and texture image I_q and depth image R_q for Q. If P and Q contain texture images, extract them directly and convert them to grayscale images as I_p and I_q; if P and Q contain no texture images, or the texture images they contain cannot be registered, first generate texture images from the depth images.
Given a depth image R, let Γ be a geometric feature descriptor: for every v ∈ R³, Γ(v) is the feature value at v. Apply a linear transformation to the feature values of all vertices so that their range is [0, 1], and define the texture image generated by Γ as

I_Γ(x, y) = Γ(R(x, y)) if R(x, y) ≠ φ, and I_Γ(x, y) = φ if R(x, y) = φ,

so that a two-dimensional texture image can be generated from the three-dimensional geometric feature Γ. In theory any geometric feature descriptor can be used to generate a texture image, but for algorithmic efficiency and accuracy of the result, Γ should discriminate well and be easy to compute.
The basic idea of texture generation in the present invention is: let the depth image contained in the scan data be R⁰ = {v_i⁰ | v_i⁰ ∈ R³, i = 1...n}, and let R^k = {v_i^k | v_i^k ∈ R³, i = 1...n} be the base model obtained by smoothing R⁰ k times, where v_i^k ∈ R^k is the smoothed counterpart of v_i⁰ ∈ R⁰. Define

Γ_δ(v_i⁰) = sign(v_i⁰) · ||v_i⁰ − v_i^k||, i = 1...n,

where sign(v_i⁰) = 1 if (v_i⁰ − v_i^k) · norm(v_i⁰) ≥ 0 and −1 otherwise, and norm(v_i⁰) is the normal vector at v_i⁰. In the implementation, the present invention obtains the base model with mesh-based Laplacian smoothing. Since the depth image R⁰ already contains the two-dimensional structure of the point cloud, meshing R⁰ is simple and fast. The topological relations between vertices remain unchanged during smoothing. Let M^k = (V^k, E) be the triangle mesh corresponding to R^k, where V^k and E are the sets of vertices and edges respectively, and ∂M^k is the boundary of M^k. For every v_i^k = (x_i^k, y_i^k, z_i^k) ∈ M^k,

v_i^{k+1} = (1/d_i) Σ_{j ∈ N(i)} v_j^k if v_i^k ∉ ∂M^k, and v_i^{k+1} = v_i^k if v_i^k ∈ ∂M^k,

where N(i) = {j | (i, j) ∈ E} and d_i = |N(i)|. In experiments, k = 64 is generally used.
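Under the assumption that the depth image is stored as a regular grid of 3-D vertices, the signed-distance texture just described can be sketched as follows. The boundary-preserving umbrella smoothing matches the update rule above (here with a 4-neighbourhood on the grid); using the z-axis in place of the per-vertex normal norm(v_i⁰) is a simplification for this height-field sketch, as is the min-max rescaling to [0, 1].

```python
import numpy as np

def laplacian_smooth(V, iters):
    """Umbrella-operator smoothing on a gridded point cloud: each interior
    vertex moves to the average of its 4 grid neighbours; boundary fixed."""
    Vk = V.copy()
    for _ in range(iters):
        nxt = Vk.copy()
        nxt[1:-1, 1:-1] = (Vk[:-2, 1:-1] + Vk[2:, 1:-1] +
                           Vk[1:-1, :-2] + Vk[1:-1, 2:]) / 4.0
        Vk = nxt
    return Vk

def texture_from_depth(V, k=64):
    """Signed distance from each vertex to its smoothed ('base model')
    counterpart, rescaled to [0, 1] as a grey-level texture image."""
    Vk = laplacian_smooth(V, k)
    diff = V - Vk
    # vertical axis stands in for the vertex normal in this sketch
    sign = np.where(diff[..., 2] >= 0.0, 1.0, -1.0)
    g = sign * np.linalg.norm(diff, axis=-1)
    lo, hi = g.min(), g.max()
    return (g - lo) / (hi - lo) if hi > lo else np.zeros_like(g)
```

Surface detail that the smoothing removes (bumps and dents relative to the base model) thus shows up as bright and dark pixels, which is what gives the generated texture its discriminative power for SIFT.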
2. find out the Candidate Set of respective pixel between texture image.
(1) consider that the SIFT feature not only has unchangeability to the convergent-divergent and the rotation of image, simultaneously illumination variation, noise, block with less conditions such as viewpoint variation under robustness is preferably also arranged, the present invention uses the SIFT algorithm to find out I respectively pAnd I qIn the interest set of pixels;
(2) at I pAnd I qThe interest set of pixels between seek corresponding pixel, for detected each the interest pixel i of SIFT algorithm, SIFT proper vector with sift (i) expression i, d (i then, j)=and ‖ sift (i), sift (j) ‖ can represent the similarity degree between two interest pixel i and the j, more little then these two pixels of this distance are similar more, otherwise big more then their difference of distance are big more.Owing to may there be the complicacy of angle variation and texture image self between depth image; the simple Euclidean distance of SIFT proper vector of using is sought respective pixel as evaluation criterion; introduce a large amount of wrong couplings through regular meeting; simultaneously because the interest pixel of using the SIFT algorithm to find out is too much; the present invention adopts following steps to find out the Candidate Set C of respective pixel between two width of cloth texture images, and its detailed process is:
(2.1) for guaranteeing that the pixel in the result set all has corresponding summit in depth image, deletion does not have the interest pixel of corresponding vertex in depth image;
(2.2) for the interest pixel i in the texture image, if i is then deleted in the summit of i correspondence in depth image on the border of depth image grid;
(2.3) Find the candidate set of corresponding pixels between I_p and I_q by cross-checking. For each key pixel i^p in I_p, find in I_q the two key pixels i_1^q and i_2^q with the smallest d(i^p, .); without loss of generality assume d(i^p, i_1^q) <= d(i^p, i_2^q). Define the difference degree between i_1^q and i_2^q as diff_{i^p}(i_1^q, i_2^q) = (d(i^p, i_2^q) - d(i^p, i_1^q)) / d(i^p, i_1^q), and predefine a difference-degree threshold δ_diff. If diff_{i^p}(i_1^q, i_2^q) >= δ_diff, then i_1^q and i_2^q are considered significantly different and i_1^q can serve as the corresponding pixel of i^p; otherwise i^p is considered to have multiple correspondences in I_q and should be discarded. When diff_{i^p}(i_1^q, i_2^q) >= δ_diff, the same method is applied in the reverse direction to find in I_p the corresponding pixel i_1^p of the pixel i_1^q; if i_1^p and i^p are the same pixel, then i^p and i_1^q are considered a pair of candidate corresponding pixels between I_p and I_q, and (i^p, i_1^q) is added to the candidate set C; otherwise i^p is still considered to have no corresponding pixel in I_q. In the experiments, δ_diff = 0.2 is generally used.
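The cross-check of step (2.3) can be sketched as follows. This is an illustrative NumPy fragment, not the patent's implementation; the function names (`best_two`, `cross_check_matches`) are hypothetical, and plain arrays stand in for the SIFT descriptors:

```python
import numpy as np

def best_two(desc, descs):
    """Indices and distances of the two nearest descriptors to desc."""
    d = np.linalg.norm(descs - desc, axis=1)   # Euclidean distances d(i, j)
    order = np.argsort(d)
    return order[0], order[1], d[order[0]], d[order[1]]

def ratio(d1, d2):
    """Difference degree diff = (d2 - d1) / d1 (infinite when d1 == 0)."""
    return (d2 - d1) / d1 if d1 > 0 else float('inf')

def cross_check_matches(descs_p, descs_q, delta_diff=0.2):
    """Candidate set C of corresponding pixels between I_p and I_q."""
    C = []
    for ip, dp in enumerate(descs_p):
        i1q, _, d1, d2 = best_two(dp, descs_q)
        if ratio(d1, d2) < delta_diff:
            continue                  # multiple correspondences in I_q: discard i^p
        # reverse check: the best match of i_1^q in I_p must be i^p again
        j1p, _, e1, e2 = best_two(descs_q[i1q], descs_p)
        if ratio(e1, e2) >= delta_diff and j1p == ip:
            C.append((ip, i1q))       # (i^p, i_1^q) is a candidate pair
    return C
```

The default `delta_diff=0.2` matches the value the text says is generally used in the experiments.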
3. Choose the set of correctly matched pixels according to the depth images
The candidate set C of corresponding pixels found using only SIFT feature vectors still contains false matches, so another method is needed to pick out the correctly matched pixel pairs in C. When the three-dimensional information of the vertices corresponding to the pixels is known, the correctness of a match can be judged entirely from three-dimensional constraints. The present invention uses an adaptive RANSAC algorithm combined with the three-dimensional information of the vertices to find the correctly matched pixel pairs in the candidate set C, as follows:
(1) Sort all elements (i^p, i^q) of the candidate set C in ascending order of d(i^p, i^q), and put the result into a queue L;
(2) Since recovering a rigid transformation matrix requires at least three pairs of non-collinear matched vertices in three-dimensional space, the present invention only checks whether the first l elements of L are correct matches; l = 50 is generally used in the experiments.
(2.1) Randomly pick 3 elements from the first l elements of L as a sample;
(2.2) Compute the rigid transformation matrix from the geometric information of the vertices corresponding to these 3 matched pixel pairs in the depth images;
(2.3) According to the geometric information of the corresponding vertices, judge whether each of the remaining l - 3 elements is a correct match under this rigid transformation. The correctly matched elements are inliers and the rest are outliers; all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers;
(2.4) Repeat the above three steps until the number of iterations reaches the sampling upper bound, and finally take the consensus set of the sample with the most inliers as the set of correctly matched elements.
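Steps (2.1)-(2.4) can be sketched as below. The callables `fit_rigid` and `is_inlier` are placeholders for the geometric steps of the patent, and the update of the trial bound uses the standard RANSAC formula N = log(1 - p) / log(1 - w^3) for 3-element samples, which is one common reading of "update the sampling upper limit according to the number of inliers":

```python
import math
import random

def adaptive_ransac(pairs, fit_rigid, is_inlier, p_success=0.99, l=50, cap=1000):
    """Adaptive RANSAC over the first l candidate pairs of the queue L.

    fit_rigid(sample): rigid transform from a sample of 3 matched pairs;
    is_inlier(pair, T): whether a pair agrees with transform T.  Both are
    placeholder callables for the geometric steps of the patent.
    """
    data = pairs[:l]
    best_consensus = []
    max_trials = cap                          # safety cap on the number of samples
    trials = 0
    while trials < max_trials:
        sample = random.sample(data, 3)                       # step (2.1)
        T = fit_rigid(sample)                                 # step (2.2)
        consensus = [pr for pr in data if is_inlier(pr, T)]   # step (2.3)
        if len(consensus) > len(best_consensus):
            best_consensus = consensus
            w = len(consensus) / len(data)                    # current inlier ratio
            if w >= 1.0:
                break                          # every pair is already an inlier
            # standard adaptive bound: enough samples to draw one
            # all-inlier triple with probability p_success
            max_trials = min(max_trials,
                             math.ceil(math.log(1 - p_success)
                                       / math.log(1 - w ** 3)))
        trials += 1
    return best_consensus                                     # step (2.4)
```

The bound shrinks as larger consensus sets are found, so clean data terminates after only a few samples.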
4. Find the matched vertex pairs in the depth images corresponding to the correctly matched pixel pairs, and compute an estimate of the rigid transformation matrix between the two depth images with a quaternion-based method.
Between a depth image and its texture image there exist mappings F_p: R_p -> I_p and F_q: R_q -> I_q. Because steps 2 and 3 of the algorithm guarantee that every pixel in the result set C_r has a corresponding vertex in its depth image, i.e. for every (i_k^p, i_k^q) ∈ C_r there exist v_k^p ∈ R_p and v_k^q ∈ R_q such that F_p(v_k^p) = i_k^p and F_q(v_k^q) = i_k^q, once C_r is found, the set of corresponding vertices between R_p and R_q can be obtained through F_p and F_q, and the rigid transformation matrix is then solved with Horn's method.
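Horn's closed-form, quaternion-based solution of the absolute orientation problem, used here to solve for the rigid transformation from the vertex correspondences, can be sketched as follows (an independent NumPy reimplementation, not the patent's code):

```python
import numpy as np

def horn_rigid_transform(P, Q):
    """Closed-form rigid transform (R, t) with Q ≈ P @ R.T + t, after Horn (1987).

    P, Q: (n, 3) arrays of corresponding vertices.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cp, Q - cq
    S = X.T @ Y                                  # cross-covariance matrix
    # Build Horn's symmetric 4x4 matrix N from S
    tr = np.trace(S)
    D = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    N = np.empty((4, 4))
    N[0, 0] = tr
    N[0, 1:] = N[1:, 0] = D
    N[1:, 1:] = S + S.T - tr * np.eye(3)
    # The optimal rotation is the eigenvector of N with the largest eigenvalue
    w, V = np.linalg.eigh(N)
    q0, qx, qy, qz = V[:, np.argmax(w)]          # unit quaternion (q0, qx, qy, qz)
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - q0*qz),     2*(qx*qz + q0*qy)],
        [2*(qx*qy + q0*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - q0*qx)],
        [2*(qx*qz - q0*qy),     2*(qy*qz + q0*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    t = cq - R @ cp
    return R, t
```

Because the rotation matrix is quadratic in the quaternion, the sign ambiguity of the eigenvector has no effect on the result.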
5. Fine registration
Once the estimate of the rigid transformation matrix between the two depth images has been computed, the ICP algorithm or one of its variants can be run with this result as the initial state to complete the iterative optimization. In the fine registration process, the present invention improves the classical ICP algorithm in two ways: first, vertices lying on or near the boundary of a depth image are not considered during the iteration, which yields better accuracy; second, the distance threshold is adjusted dynamically according to the distances between corresponding points at each iteration, so that in each iteration only the corresponding points within the threshold are used to compute the new transformation matrix and to minimize the sum of distances between corresponding points.
During fine registration, if the sum of distances between corresponding points does not converge, or converges to a value exceeding a predefined threshold, the two depth images are considered unregistrable, i.e. the rough registration result is incorrect or the two depth images have no overlapping region. In the registration of multiple depth images, this criterion is used to decide whether two depth images have an overlapping region.
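An illustrative sketch of an ICP refinement loop with the two improvements described above (boundary vertices excluded, distance threshold adjusted every iteration). It uses SciPy's k-d tree for nearest-neighbour search and re-estimates the transform with the SVD variant rather than the quaternion form; the 3x-median rule for the dynamic threshold is an assumed stand-in, since the patent does not give its exact adjustment formula:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, R, t, boundary_mask=None, iters=50, tol=1e-6):
    """ICP refinement in the spirit of step 5 (an illustrative sketch).

    src, dst: (n, 3) and (m, 3) vertex arrays; R, t: the rough-registration
    estimate.  boundary_mask marks dst vertices on or near the mesh
    boundary, which are excluded from matching.
    """
    keep = ~boundary_mask if boundary_mask is not None else np.ones(len(dst), bool)
    tree = cKDTree(dst[keep])
    prev_err = err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dists, idx = tree.query(moved)             # closest-point correspondences
        thresh = 3.0 * np.median(dists) + 1e-12    # dynamic distance threshold
        sel = dists < thresh
        p, q = src[sel], dst[keep][idx[sel]]
        # re-estimate (R, t) from the thresholded correspondences (SVD form)
        cp, cq = p.mean(0), q.mean(0)
        U, _, Vt = np.linalg.svd((p - cp).T @ (q - cq))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        err = dists[sel].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t, err
```

The returned `err` supports the convergence criterion above: if it fails to fall below a predefined threshold, the pair is treated as having no overlapping region.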
The steps for registering multiple scans are as follows:
When registering n depth images, let the model graph representing the topological relations between the depth images be G(V, E), where each vertex v_i ∈ V of G represents a depth image, |V| = n, and each edge e_{i,j} of G indicates that the two depth images represented by the vertices v_i and v_j have an overlapping region, the weight of this edge being the rigid transformation matrix from v_i to v_j. Clearly, if e_{i,j} exists then e_{j,i} also exists, and its weight is the inverse matrix of the weight of e_{i,j}. A spanning tree of G is the minimal set of elements from which the complete model can be built out of these depth images, so once the graph G is constructed, the complete model can be rebuilt by finding a spanning tree of G.
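Once the graph G is built, chaining the edge weights along a spanning tree places every scan in one common coordinate frame. A minimal sketch, assuming `edges[(i, j)]` holds the 4x4 transform that maps scan j into scan i's frame (one possible reading of the edge-weight convention, with `(j, i)` holding the inverse, as in the model graph):

```python
import numpy as np
from collections import deque

def poses_from_spanning_tree(edges, root=0):
    """Accumulate scan poses by chaining edge weights over a BFS tree of G.

    edges: dict mapping (i, j) to the 4x4 rigid transform that maps scan j
    into scan i's coordinate frame, with (j, i) holding the inverse matrix.
    Returns each reachable scan's pose in the root scan's frame; scans
    missing from the result mean G is disconnected.
    """
    adj = {}
    for (i, j) in edges:
        adj.setdefault(i, []).append(j)
    pose = {root: np.eye(4)}
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j in adj.get(i, []):
            if j not in pose:                      # (i, j) is a tree edge
                pose[j] = pose[i] @ edges[(i, j)]  # x_root = pose[i] T_ij x_j
                queue.append(j)
    return pose
```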
6. Based on pairwise depth image registration, divide the input sequence of multiple depth images into several strip-shaped subsequences.
Let S = (s_1, s_2, ..., s_n) be the input sequence of n depth images to be registered. When registering these data, the present invention likewise needs to build the model graph G(V, E). Although every pair of depth images in S could be registered once to judge whether they have an overlapping region, computing the relevant rigid transformation matrix whenever they do and thereby constructing all edges of G, this is a very time-consuming task for large-scale data. Under the scanning strategy of Pingi, every s_i ∈ S (1 < i < n) has overlapping regions with both s_{i-1} and s_{i+1}, so the adjacent depth images s_i, s_{i+1} ∈ S (i = 1...n-1) can simply be registered in turn to construct the edges of G and guarantee that G is connected. However, this scanning and registration strategy of Pingi is too idealized: in practice it is hard to guarantee that every two adjacent scans s_i, s_{i+1} ∈ S (i = 1...n-1) have an overlapping region. The present invention only assumes that the user scans in a certain order, i.e. completes the scan in strips, but two successive strips are not necessarily contiguous: the last depth image of one strip and the first depth image of the next strip do not necessarily overlap. Under this assumption, the sequence S can be divided into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), where every W_i ∈ S satisfies Pingi's assumption, i.e. any two adjacent depth images in W_i have an overlapping region. The concrete division method is:
(1) Add s_1 to the first subsequence W_1;
(2) Register all adjacent depth images s_i, s_{i+1} ∈ S (i = 1...n-1) in turn; if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i and add the corresponding edge to the graph G, otherwise put s_{i+1} into a new subsequence.
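The division method (1)-(2) can be sketched as follows, with a hypothetical predicate `overlaps(a, b)` standing in for the pairwise registration test of step 5:

```python
def divide_into_strips(S, overlaps):
    """Divide the scan sequence S into strip subsequences (step 6).

    overlaps(a, b): True when pairwise registration finds an overlapping
    region between scans a and b.  Returns the subsequences and the
    edges collected for the model graph G.
    """
    subsequences = [[S[0]]]            # (1) s_1 opens the first subsequence W_1
    edges = []
    for i in range(len(S) - 1):        # (2) register adjacent pairs in turn
        if overlaps(S[i], S[i + 1]):
            subsequences[-1].append(S[i + 1])      # extend the current strip
            edges.append((S[i], S[i + 1]))         # edge of the model graph G
        else:
            subsequences.append([S[i + 1]])        # start a new strip
    return subsequences, edges
```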
7. Adopt a forward-search strategy to merge these subsequences and construct the complete three-dimensional model.
(1) For each depth image in subsequence W_i ∈ S (i = 2...k), search the subsequences W_j ∈ S (j = i-1...1) in turn for a depth image that has an overlapping region with a depth image in W_i; if such a depth image exists, W_i and W_j can be merged, i.e. each element of W_j is inserted before the head element of W_i, and the relevant edge is added to the graph G;
(2) Repeat process (1) until only one subsequence remains, or until no two of the remaining subsequences have an overlapping region. In the former case the graph G is connected, and a spanning tree of it can be found to construct the complete model; in the latter case each remaining subsequence represents a part of the model, but these parts cannot be stitched together with the automatic registration algorithm of the present invention.
(3) Rebuild the whole model according to the spanning tree of the graph G.
Obviously, the strategy proposed by Pingi can be regarded as a special case of the present invention. Compared with traditional multi-view registration methods that must register every pair of scans with O(n^2) time complexity, the method of the present invention does not register scans within the same subsequence again, and therefore greatly reduces the time of the whole multi-view registration process.
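The forward-search merging of steps (1)-(2) can be sketched in the same style, again with a hypothetical `overlaps(a, b)` predicate standing in for the pairwise registration test:

```python
def merge_subsequences(subseqs, overlaps):
    """Forward-search merging of strip subsequences (step 7).

    overlaps(a, b): same hypothetical pairwise-registration predicate as
    in step 6.  W_i is merged with an earlier W_j that shares an
    overlapping scan, W_j's elements going before W_i's head; the loop
    repeats until one subsequence remains or nothing more merges.
    """
    subseqs = [list(w) for w in subseqs]
    merged = True
    while merged and len(subseqs) > 1:
        merged = False
        for i in range(1, len(subseqs)):
            for j in range(i - 1, -1, -1):             # search earlier strips
                if any(overlaps(a, b) for a in subseqs[i] for b in subseqs[j]):
                    subseqs[i] = subseqs[j] + subseqs[i]   # prepend W_j to W_i
                    del subseqs[j]
                    merged = True
                    break
            if merged:
                break
    return subseqs     # a single subsequence iff the graph G is connected
```

If more than one subsequence survives, each represents a disconnected part of the model, matching case (2) above.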
As shown in Figure 2, (a) shows the two depth images to be registered; (b) shows the two texture images extracted from the scan data, where the white points mark the corresponding pixels between the two images; the black points in (c) are the vertices in three-dimensional space corresponding to the matched pixels between the texture images; (d) is the final registration result. Figure 2 thus illustrates the process of automatic registration of two depth images with an example.
As shown in Figure 3, for the 13 input depth images, all pairs of adjacent scans are registered first, so that the whole scan sequence can be divided into 5 strip-shaped subsequences; these 5 subsequences are then merged to finally generate a complete model.
Figures 4, 5, 6 and 7 show the registration results of the scan data of different models obtained with the method of the present invention: Figure 4 is the registration result of the Bigfoot No. 1 model, Figure 5 that of the Bigfoot No. 2 model, Figure 6 that of the Buddha model, and Figure 7 that of the terracotta warrior model. The scan data of the Bigfoot No. 1 and Bigfoot No. 2 models include texture images, so registration can be based directly on the known texture images, while the scan data of the Buddha and terracotta warrior models contain no texture images, so texture images must first be generated from the depth images and registration is then based on the generated texture images. The registration results show that the present invention registers both kinds of scan data well, whether or not they include texture images.

Claims (6)

1. A depth image automatic registration method combining texture information, characterized in that it is divided into the registration of two depth images and the registration of multiple depth images, wherein:
The registration steps for two depth images are as follows:
Let P and Q be the scan data to be registered, the texture image and depth image of P being I_p and R_p respectively, and the texture image and depth image of Q being I_q and R_q respectively;
In the first step, extract the texture images I_p and I_q from P and Q, or generate the texture images I_p and I_q from the depth images R_p and R_q;
In the second step, extract the interest pixels in the texture images I_p and I_q based on SIFT features, and find the candidate set C of matched pixel pairs among them by cross-checking;
In the third step, use an adaptive RANSAC algorithm to find the correctly matched pixel pairs in the candidate set C according to geometric information constraints;
In the fourth step, find the matched vertex pairs in three-dimensional space corresponding to the matched pixel pairs, and compute the rigid transformation matrix between the two depth images from these vertex correspondences;
In the fifth step, use the improved ICP algorithm to optimize the result of the fourth step and complete the accurate registration of the two depth images;
The registration steps for multiple scans are as follows:
Let the model graph representing the topological relations between the depth images be G(V, E), and let S = (s_1, s_2, ..., s_n) be the input sequence of n depth images to be registered;
In the sixth step, based on pairwise depth image registration, divide the input sequence S of multiple depth images into k strip-shaped subsequences S = (W_1, W_2, ..., W_k), where each W_i ∈ S (i = 1...k) is a subsequence of several depth images, and any two adjacent depth images in W_i ∈ S (i = 1...k) have an overlapping region;
In the seventh step, adopt a forward-search strategy to merge the subsequences, and construct the complete three-dimensional model from the merged subsequences.
2. The depth image automatic registration method combining texture information according to claim 1, characterized in that in the first step, the method of generating the texture images I_p and I_q from the depth images R_p and R_q is:
Apply Laplacian smoothing to the depth image to obtain its base model, then compute the distance from each vertex in the depth image to its corresponding vertex on the base model, and use each distance value as the brightness value of the pixel corresponding to that vertex in the texture image.
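A minimal sketch of this texture-generation rule: Laplacian smoothing gives the base model, and each vertex's displacement becomes the brightness of its texture pixel. The iteration count, smoothing weight, and [0, 255] normalization are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def texture_from_depth(vertices, neighbors, iters=20, lam=0.5):
    """Brightness values generated from a depth mesh (sketch of claim 2).

    vertices: (n, 3) mesh vertices; neighbors: 1-ring adjacency lists.
    Laplacian smoothing yields the base model; each vertex's distance to
    its smoothed counterpart becomes the brightness of its texture pixel.
    """
    base = np.asarray(vertices, float).copy()
    for _ in range(iters):
        avg = np.array([base[nb].mean(axis=0) if nb else base[k]
                        for k, nb in enumerate(neighbors)])
        base += lam * (avg - base)                 # one Laplacian smoothing step
    dist = np.linalg.norm(np.asarray(vertices, float) - base, axis=1)
    span = dist.max() - dist.min()                 # normalize to [0, 255]
    return np.zeros_like(dist) if span == 0 else (dist - dist.min()) / span * 255.0
```

Vertices far from the smoothed base model (fine geometric detail) map to bright texture pixels, giving SIFT something to detect even without scanned colour.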
3. The depth image automatic registration method combining texture information according to claim 1, characterized in that in the second step, the method of extracting the interest pixels in the texture images I_p and I_q based on SIFT features and finding the candidate set C of matched pixel pairs among them by cross-checking is:
(1) Use the SIFT algorithm to find the interest pixel sets in I_p and I_q respectively;
(2) Search for corresponding pixels between the interest pixel sets of I_p and I_q according to the Euclidean distance between the SIFT feature vectors of two pixels, as follows:
(2.1) Delete the interest pixels that have no corresponding vertex in the depth image;
(2.2) For an interest pixel i in a texture image, if the vertex corresponding to i lies on the boundary of the depth image mesh, delete i;
(2.3) For each key pixel i^p in I_p, find in I_q the two key pixels i_1^q and i_2^q with the smallest d(i^p, .), with d(i^p, i_1^q) <= d(i^p, i_2^q). Define the difference degree between i_1^q and i_2^q as diff_{i^p}(i_1^q, i_2^q) = (d(i^p, i_2^q) - d(i^p, i_1^q)) / d(i^p, i_1^q), and predefine a difference-degree threshold δ_diff. If diff_{i^p}(i_1^q, i_2^q) >= δ_diff, then i_1^q and i_2^q are significantly different and i_1^q can serve as the corresponding pixel of i^p; otherwise i^p has multiple correspondences in I_q and should be discarded. When diff_{i^p}(i_1^q, i_2^q) >= δ_diff, the same method is applied in the reverse direction to find in I_p the corresponding pixel i_1^p of the pixel i_1^q; if i_1^p and i^p are the same pixel, then i^p and i_1^q are a pair of candidate corresponding pixels between I_p and I_q and (i^p, i_1^q) is added to the candidate set C; otherwise i^p is still considered to have no corresponding pixel in I_q.
4. The depth image automatic registration method combining texture information according to claim 1, characterized in that in the third step, the method of using an adaptive RANSAC algorithm to find the correctly matched pixel pairs in the candidate set C according to geometric information constraints is:
(1) Sort all elements (i^p, i^q) of the candidate set C in ascending order of d(i^p, i^q), and put the result into a queue L;
(2) Check whether the first l elements of L are correct matches, as follows:
(2.1) Randomly pick 3 elements from the first l elements of L as a sample;
(2.2) Compute the rigid transformation matrix from the geometric information of the vertices corresponding to the 3 matched pixel pairs in the depth images;
(2.3) According to the geometric information of the corresponding vertices, judge whether each of the remaining l - 3 elements is a correct match under this rigid transformation; the correctly matched elements are inliers, the rest are outliers, all inliers form the consensus set of this sample, and the upper bound on the number of samples is then updated according to the number of inliers;
(2.4) Repeat steps (2.1)-(2.3) until the number of iterations reaches the sampling upper bound, then find the sample whose consensus set is largest, i.e. the sample with the most inliers, and take its consensus set as the set of correctly matched elements.
5. The depth image automatic registration method combining texture information according to claim 1, characterized in that in the sixth step, the steps of dividing the input sequence of multiple depth images into several strip-shaped subsequences based on pairwise depth image registration are:
(1) Add s_1 to the first subsequence W_1;
(2) Register all adjacent depth images s_i, s_{i+1} ∈ S (i = 1...n-1) in turn; if s_i and s_{i+1} have an overlapping region, add s_{i+1} to the subsequence containing s_i and add the corresponding edge to the graph G, otherwise put s_{i+1} into a new subsequence.
6. The depth image automatic registration method combining texture information according to claim 1, characterized in that in the seventh step, the steps of adopting a forward-search strategy to merge these subsequences and constructing the complete three-dimensional model from the merged subsequences are:
(1) For each depth image in subsequence W_i ∈ S (i = 2...k), search the subsequences W_j ∈ S (j = i-1...1) in turn for a depth image that has an overlapping region with a depth image in W_i; if such a depth image exists, W_i and W_j can be merged, i.e. each element of W_j is inserted before the head element of W_i, and the relevant edge is added to the graph G;
(2) Repeat step (1) until only one subsequence remains, or until no two of the remaining subsequences W_i have an overlapping region;
(3) Rebuild the whole model according to the graph G.
CN200810224183XA 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information Expired - Fee Related CN101388115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810224183XA CN101388115B (en) 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810224183XA CN101388115B (en) 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information

Publications (2)

Publication Number Publication Date
CN101388115A true CN101388115A (en) 2009-03-18
CN101388115B CN101388115B (en) 2011-07-27

Family

ID=40477519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810224183XA Expired - Fee Related CN101388115B (en) 2008-10-24 2008-10-24 Depth image autoegistration method combined with texture information

Country Status (1)

Country Link
CN (1) CN101388115B (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916456A (en) * 2010-08-11 2010-12-15 李浩民 Method for producing personalized three-dimensional cartoon
CN101930618A (en) * 2010-08-20 2010-12-29 李浩民 Method for producing individual two-dimensional anime
CN102034274A (en) * 2010-12-10 2011-04-27 中国人民解放军国防科学技术大学 Method for calculating corresponding relationship between frame and frame in 3D point cloud sequence
CN102202159A (en) * 2011-03-29 2011-09-28 段连飞 Digital splicing method for unmanned aerial photographic photos
CN102208033A (en) * 2011-07-05 2011-10-05 北京航空航天大学 Data clustering-based robust scale invariant feature transform (SIFT) feature matching method
CN101697232B (en) * 2009-09-18 2012-03-07 浙江大学 SIFT characteristic reducing method facing close repeated image matching
CN102034231B (en) * 2009-09-25 2012-06-06 汉王科技股份有限公司 Image sequence registration method
CN101719282B (en) * 2009-11-11 2012-06-27 哈尔滨工业大学 Digital jointing method of three-dimensional object fractures based on fracture section
WO2012139268A1 (en) * 2011-04-11 2012-10-18 Intel Corporation Gesture recognition using depth images
CN102945289A (en) * 2012-11-30 2013-02-27 苏州搜客信息技术有限公司 Image search method based on CGCI-SIFT (consistence index-scale invariant feature transform) partial feature
CN103236064A (en) * 2013-05-06 2013-08-07 东南大学 Point cloud automatic registration method based on normal vector
CN103236081A (en) * 2013-04-25 2013-08-07 四川九洲电器集团有限责任公司 Registration method for color point cloud
CN103247040A (en) * 2013-05-13 2013-08-14 北京工业大学 Layered topological structure based map splicing method for multi-robot system
CN103279955A (en) * 2013-05-23 2013-09-04 中国科学院深圳先进技术研究院 Image matching method and system
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
CN104956404A (en) * 2013-01-30 2015-09-30 高通股份有限公司 Real-time 3d reconstruction with power efficient depth sensor usage
CN105574847A (en) * 2014-11-03 2016-05-11 韩华泰科株式会社 Camera system and image registration method
CN105913492A (en) * 2016-04-06 2016-08-31 浙江大学 Method for complementing object shape in RGBD image
CN106204710A (en) * 2016-07-13 2016-12-07 四川大学 The method that texture block based on two-dimensional image comentropy is mapped to three-dimensional grid model
CN108520055A (en) * 2018-04-04 2018-09-11 天目爱视(北京)科技有限公司 A kind of product testing identification method compared based on temmoku point cloud
CN109035384A (en) * 2018-06-06 2018-12-18 广东您好科技有限公司 Pixel synthesis technology based on three-dimensional scanning and model automatic vertex processing engine
CN109215110A (en) * 2017-07-21 2019-01-15 湖南拓视觉信息技术有限公司 Whole scene scanning means and 3-D scanning modeling
WO2020063986A1 (en) * 2018-09-30 2020-04-02 先临三维科技股份有限公司 Method and apparatus for generating three-dimensional model, device, and storage medium
CN110946654A (en) * 2019-12-23 2020-04-03 中国科学院合肥物质科学研究院 Bone surgery navigation system based on multimode image fusion
CN112734821A (en) * 2019-10-28 2021-04-30 阿里巴巴集团控股有限公司 Depth map generation method, computing node cluster and storage medium
CN115953406A (en) * 2023-03-14 2023-04-11 杭州太美星程医药科技有限公司 Matching method, device, equipment and readable medium for medical image registration

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570823B (en) * 2016-10-11 2019-06-18 山东科技大学 Based on matched thick joining method of cloud of plane characteristic

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
GB0208909D0 (en) * 2002-04-18 2002-05-29 Canon Europa Nv Three-dimensional computer modelling
CN1267849C (en) * 2004-07-02 2006-08-02 清华大学 Finger print identifying method based on broken fingerprint detection
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Method for recognizing iris with matched characteristic and graph based on partial bianry mode

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697232B (en) * 2009-09-18 2012-03-07 浙江大学 SIFT characteristic reducing method facing close repeated image matching
CN102034231B (en) * 2009-09-25 2012-06-06 汉王科技股份有限公司 Image sequence registration method
CN101719282B (en) * 2009-11-11 2012-06-27 哈尔滨工业大学 Digital jointing method of three-dimensional object fractures based on fracture section
CN101916456A (en) * 2010-08-11 2010-12-15 李浩民 Method for producing personalized three-dimensional cartoon
CN101930618A (en) * 2010-08-20 2010-12-29 李浩民 Method for producing individual two-dimensional anime
CN101930618B (en) * 2010-08-20 2012-05-30 无锡幻影科技有限公司 Method for manufacturing personalized two-dimensional cartoon
CN102034274A (en) * 2010-12-10 2011-04-27 中国人民解放军国防科学技术大学 Method for calculating corresponding relationship between frame and frame in 3D point cloud sequence
CN102202159A (en) * 2011-03-29 2011-09-28 段连飞 Digital splicing method for unmanned aerial photographic photos
CN102202159B (en) * 2011-03-29 2014-08-27 段连飞 Digital splicing method for unmanned aerial photographic photos
CN103477352A (en) * 2011-04-11 2013-12-25 英特尔公司 Gesture recognition using depth images
WO2012139268A1 (en) * 2011-04-11 2012-10-18 Intel Corporation Gesture recognition using depth images
US8718327B2 (en) 2011-04-11 2014-05-06 Intel Corporation Gesture recognition using depth images
CN102208033A (en) * 2011-07-05 2011-10-05 北京航空航天大学 Data clustering-based robust scale invariant feature transform (SIFT) feature matching method
CN102945289A (en) * 2012-11-30 2013-02-27 苏州搜客信息技术有限公司 Image search method based on CGCI-SIFT (consistence index-scale invariant feature transform) partial feature
CN104956404A (en) * 2013-01-30 2015-09-30 高通股份有限公司 Real-time 3d reconstruction with power efficient depth sensor usage
CN103236081B (en) * 2013-04-25 2016-04-27 四川九洲电器集团有限责任公司 A kind of method for registering of colour point clouds
CN103236081A (en) * 2013-04-25 2013-08-07 四川九洲电器集团有限责任公司 Registration method for color point cloud
CN103236064B (en) * 2013-05-06 2016-01-13 东南大学 A kind of some cloud autoegistration method based on normal vector
CN103236064A (en) * 2013-05-06 2013-08-07 东南大学 Point cloud automatic registration method based on normal vector
CN103247040B (en) * 2013-05-13 2015-11-25 北京工业大学 Based on the multi-robot system map joining method of hierarchical topology structure
CN103247040A (en) * 2013-05-13 2013-08-14 北京工业大学 Layered topological structure based map splicing method for multi-robot system
CN103279955A (en) * 2013-05-23 2013-09-04 中国科学院深圳先进技术研究院 Image matching method and system
CN103279955B (en) * 2013-05-23 2016-03-09 中国科学院深圳先进技术研究院 Image matching method and system
CN105574847A (en) * 2014-11-03 2016-05-11 韩华泰科株式会社 Camera system and image registration method
CN105574847B (en) * 2014-11-03 2020-11-10 韩华泰科株式会社 Camera system and image registration method thereof
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
CN104408701B (en) * 2014-12-03 2018-10-09 中国矿业大学 A kind of large scene video image joining method
CN105913492A (en) * 2016-04-06 2016-08-31 浙江大学 Method for complementing object shape in RGBD image
CN105913492B (en) * 2016-04-06 2019-03-05 浙江大学 A kind of complementing method of RGBD objects in images shape
CN106204710A (en) * 2016-07-13 2016-12-07 四川大学 The method that texture block based on two-dimensional image comentropy is mapped to three-dimensional grid model
CN109215110A (en) * 2017-07-21 2019-01-15 湖南拓视觉信息技术有限公司 Whole scene scanning means and 3-D scanning modeling
CN108520055A (en) * 2018-04-04 2018-09-11 天目爱视(北京)科技有限公司 A kind of product testing identification method compared based on temmoku point cloud
CN109035384A (en) * 2018-06-06 2018-12-18 广东您好科技有限公司 Pixel synthesis technology based on three-dimensional scanning and model automatic vertex processing engine
WO2020063986A1 (en) * 2018-09-30 2020-04-02 先临三维科技股份有限公司 Method and apparatus for generating three-dimensional model, device, and storage medium
US11978157B2 (en) 2018-09-30 2024-05-07 Shining 3D Tech Co., Ltd. Method and apparatus for generating three-dimensional model, device, and storage medium
CN112734821A (en) * 2019-10-28 2021-04-30 阿里巴巴集团控股有限公司 Depth map generation method, computing node cluster and storage medium
CN112734821B (en) * 2019-10-28 2023-12-22 阿里巴巴集团控股有限公司 Depth map generation method, computing node cluster and storage medium
CN110946654A (en) * 2019-12-23 2020-04-03 中国科学院合肥物质科学研究院 Bone surgery navigation system based on multimode image fusion
CN115953406A (en) * 2023-03-14 2023-04-11 杭州太美星程医药科技有限公司 Matching method, device, equipment and readable medium for medical image registration
CN115953406B (en) * 2023-03-14 2023-05-23 杭州太美星程医药科技有限公司 Matching method, device, equipment and readable medium for medical image registration

Also Published As

Publication number Publication date
CN101388115B (en) 2011-07-27

Similar Documents

Publication Publication Date Title
CN101388115B (en) Depth image autoegistration method combined with texture information
Yuan et al. Pcn: Point completion network
US10109055B2 (en) Multiple hypotheses segmentation-guided 3D object detection and pose estimation
Vanegas et al. Building reconstruction using manhattan-world grammars
Bojsen-Hansen et al. Tracking surfaces with evolving topology.
WO2015139574A1 (en) Static object reconstruction method and system
US20200380711A1 (en) Method and device for joint segmentation and 3d reconstruction of a scene
Li et al. Confidence-based large-scale dense multi-view stereo
CN112015275A (en) Digital twin AR interaction method and system
CN108921895A (en) A kind of sensor relative pose estimation method
CN110910433A (en) Point cloud matching method based on deep learning
Liu et al. High-quality textured 3D shape reconstruction with cascaded fully convolutional networks
CN116310098A (en) Multi-view three-dimensional reconstruction method based on attention mechanism and variable convolution depth network
Zhu et al. Automatic multi-view registration of unordered range scans without feature extraction
Fan et al. Convex hull aided registration method (CHARM)
Gao et al. Pose refinement with joint optimization of visual points and lines
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
Lovi Incremental free-space carving for real-time 3d reconstruction
Guo et al. Line-based 3d building abstraction and polygonal surface reconstruction from images
Dai et al. A novel two-stage algorithm for accurate registration of 3-D point clouds
Wu et al. Photogrammetric reconstruction of free-form objects with curvilinear structures
Lin et al. High-resolution multi-view stereo with dynamic depth edge flow
Budd et al. Hierarchical shape matching for temporally consistent 3D video
CN112837420B (en) Shape complement method and system for terracotta soldiers and horses point cloud based on multi-scale and folding structure
Zhou et al. Multi-attribute joint point cloud super-resolution with adversarial feature graph networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110727

Termination date: 20141024

EXPY Termination of patent right or utility model