CN108399631A - Scale-invariant oblique image multi-view dense matching method - Google Patents
Scale-invariant oblique image multi-view dense matching method
- Publication number
- CN108399631A CN108399631A CN201810172144.3A CN201810172144A CN108399631A CN 108399631 A CN108399631 A CN 108399631A CN 201810172144 A CN201810172144 A CN 201810172144A CN 108399631 A CN108399631 A CN 108399631A
- Authority
- CN
- China
- Prior art keywords
- image
- point
- object space
- depth
- picture
- Prior art date: 2018-03-01
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a scale-invariant multi-view dense matching method for oblique images. Because oblique images exhibit large scale differences, large viewing-angle changes and severe occlusion, dense image matching of oblique images is considerably more difficult. The multi-view image matching algorithm proposed by the invention follows the route of first extracting a candidate match point and then verifying it, combined with strategies such as left-right bidirectional matching and image pyramids, which ensures the reliability of the matched points. The match window is resampled along the epipolar line, which solves the viewing-angle problem of oblique images. When the match window is sampled along the epipolar line, the sampling window is scaled according to the ground resolution of the current image pair at the object point, which solves the scale-difference problem of oblique images.
Description
Technical field
The present invention relates to the technical field of surveying and mapping, and in particular to a scale-invariant multi-view dense matching method for oblique images.
Background Art
In traditional aerial photogrammetry, products such as the DEM/DSM are 2.5-dimensional and cannot express information at positions such as building facades. With the growing demand of application fields such as digital city and smart city for high-precision urban three-dimensional models, oblique photography technology has developed rapidly.
Oblique photography adds cameras pointing in oblique directions to conventional vertical aerial photography; the most common configuration is a five-lens camera system that shoots towards the front, rear, left, right and straight down at the same time. The side-view cameras of oblique photography can capture information that traditional vertical aerial photography can hardly obtain, such as building facades, which facilitates the construction of high-precision, realistic three-dimensional city models. Dense image matching is the process of finding, for every pixel of an image, the corresponding image point on the other images according to the principle of stereo vision and computing an object-space three-dimensional point cloud; it is one of the key steps of digital city construction. However, compared with traditional nadir images, oblique images suffer from large scale differences, large viewing-angle differences and extensive occlusion areas, which makes dense matching of oblique images very difficult.
Summary of the Invention
The object of the present invention is to provide a scale-invariant multi-view dense matching method for oblique images, which overcomes the dense-matching difficulties of oblique images caused by large scale differences, large viewing-angle differences and extensive occlusion areas, and which can be applied to high-precision digital city construction.
The object of the present invention is achieved by the following technical solution:
A scale-invariant multi-view dense matching method for oblique images comprises:
adopting a pyramid matching strategy in which the matching result of the upper pyramid level is passed to the lower pyramid level as a matching constraint;
in the reference-image depth-map initialization stage, initializing the depth map of the reference image with aerial-triangulation tie points obtained in advance or with the depth map obtained by matching at the upper pyramid level;
in the reference-image pixel matching stage, first computing, according to the initial depth map of the reference image, the initial object point of the image point selected on the reference image, obtaining the back-projected image point of this initial object point on the first target image, and then computing the candidate match point on the first target image; then computing a new object point from the candidate match point and verifying the candidate match point with the other target images, and updating the corresponding object point if the verification succeeds; repeating the above matching process until all image points of the reference image have been matched, and outputting the matching result;
if the current level is the bottom pyramid level, the matching is finished; otherwise, the matching result is transferred to the lower pyramid level.
It can be seen from the technical solution provided by the invention that the route of first extracting a candidate match point and then verifying it, combined with strategies such as left-right bidirectional matching and image pyramids, ensures the reliability of the matched points. The match window is resampled along the epipolar line, which solves the viewing-angle problem of oblique images. When the match window is sampled along the epipolar line, the sampling window is scaled according to the ground resolution of the current image pair at the object point, which solves the scale-difference problem of oblique images.
Description of the drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the scale-invariant multi-view dense matching method for oblique images provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the scale-invariant multi-view dense matching method for oblique images provided by an embodiment of the present invention.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention provides a scale-invariant multi-view dense matching method for oblique images. As shown in Fig. 1, the method adopts a pyramid matching strategy in which the matching result of the upper pyramid level is passed to the lower pyramid level as a matching constraint. In this embodiment the number of pyramid levels may be set to 3. The method mainly includes the following steps:
Step 1: data acquisition.
The acquired data include the oblique photography images, the corresponding interior and exterior orientation elements of the images, and the aerial-triangulation tie points, wherein:
the oblique photography images include one reference image and several target images, and the target images are sorted from high to low by their overlap with the reference image (the number of target images should be no less than two; at the same time, considering matching efficiency, an upper limit on the number of target images should be applied, e.g. 8);
the interior orientation elements of an image include the camera focal length, and the exterior orientation elements include the rotation matrix and the line elements.
Step 2: initialization of the reference-image depth map.
In the embodiment of the present invention, in the reference-image depth-map initialization stage, the depth map of the reference image is initialized with the aerial-triangulation tie points obtained in advance or with the depth map obtained by matching at the upper pyramid level. Specifically, if the current level is the top pyramid level, the reference-image depth map is initialized with the aerial-triangulation tie points obtained in advance; if the current level is not the top pyramid level, the reference-image depth map is initialized with the matching result obtained at the upper pyramid level.
The steps of initializing the reference-image depth map with the aerial-triangulation tie points are as follows:
among the aerial-triangulation tie points, find all tie points visible to the reference image (i.e. tie points whose image points lie on the reference image), and take the distance from each visible tie point to the projection centre of the reference image as the depth value of that tie point; then construct a Delaunay triangulation on the reference image from the image-space coordinates of these visible tie points, and interpolate within the Delaunay triangulation to obtain a depth value for every pixel of the reference image, which constitutes the initial depth map of the reference image.
The method of initializing the reference-image depth map with the matching result obtained at the upper pyramid level is the same as above; the matching result refers to the object points and their corresponding image points on the multiple target images.
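For illustration, the following is a minimal sketch (not the patented implementation) of the tie-point initialization described above, assuming the visible tie points are given as reference-image coordinates plus depth values; scipy's LinearNDInterpolator builds the Delaunay triangulation internally and interpolates linearly inside each triangle.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def init_reference_depth_map(tie_xy, tie_depth, width, height):
    """tie_xy: (N, 2) image coordinates of the tie points visible in the reference image.
    tie_depth: (N,) distances from the projection centre to the corresponding tie points.
    Returns a (height, width) array of interpolated depths (NaN outside the convex hull)."""
    # Delaunay triangulation plus linear interpolation inside each triangle.
    interpolate = LinearNDInterpolator(np.asarray(tie_xy, float), np.asarray(tie_depth, float))
    cols, rows = np.meshgrid(np.arange(width), np.arange(height))
    return interpolate(np.column_stack([cols.ravel(), rows.ravel()])).reshape(height, width)
```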
Step 3: pixel matching on the reference image.
In the embodiment of the present invention, in the reference-image pixel matching stage, matching may be performed for every image point of the reference image or for one image point taken at a fixed interval.
1) According to the initial depth map of the reference image, compute the initial object point of the image point selected on the reference image, so as to obtain the back-projected image point of this initial object point on the first target image, and then compute the candidate match point on the first target image. The main process is as follows:
For an image point i_A on the reference image I_A, obtain its initial depth value d from the initial depth map of the reference image, and compute the object-space three-dimensional coordinates of its corresponding initial object point P_A according to the following relation,
wherein (X, Y, Z) are the object-space three-dimensional coordinates of the initial object point P_A, (x, y) are the image-space coordinates of the image point i_A, f is the camera focal length, R is the rotation matrix among the exterior orientation elements of the image, and (X_S, Y_S, Z_S) are the line elements among the exterior orientation elements of the image.
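A minimal sketch of this back-projection relation, under the assumptions that the depth d is the distance from the projection centre to the object point (consistent with the depth definition in Step 2) and that the image coordinates (x, y) are measured relative to the principal point:

$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
=
\begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}
+ \frac{d}{\sqrt{x^{2}+y^{2}+f^{2}}}\, R \begin{bmatrix} x \\ y \\ -f \end{bmatrix}
$$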
Back-project the initial object point P_A onto the first target image I_B to obtain the back-projected image point i_B1.
Centred on the back-projected image point i_B1, resample along the epipolar line on the first target image I_B to obtain a band-shaped search window (the striped rectangle in Fig. 2), and on the reference image I_A resample along the epipolar line, centred on the image point i_A, to obtain the source window. When the search window is resampled along the epipolar line, it is scaled by the factor r_iA / r_iB1, where r_iA is the ground resolution at the image point i_A and r_iB1 is the ground resolution at the image point i_B1.
Taking the ground resolution r_iA of the image point i_A as an example, r_iA is calculated as follows:
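As a hedged sketch of one common way to express such a ground resolution, assuming a physical sensor pixel size \( \mu \) (an assumed symbol, not introduced above) and taking the object-space footprint of one pixel at the depth of the point:

$$
r_{i_A} \approx \frac{d_A \,\mu}{f_A}
$$

where d_A is the depth (distance from the projection centre to the object point) for i_A and f_A is the focal length of image I_A; r_iB1 follows analogously, so the scale factor r_iA / r_iB1 compensates for the different ground sampling distances of the two images.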
Within the search window, use the correlation-coefficient method to compute the location point i_B2 whose correlation coefficient with the source window is the largest and exceeds a threshold (which may be set to 0.7, for example). Then perform reverse matching: centred on the location point i_B2, take a match window along the epipolar line on the first target image I_B, and on the reference image I_A take a search window along the epipolar line centred on the image point i_A (the grid-filled rectangle in Fig. 2); use the correlation-coefficient matching method to obtain the location point in the search region of the reference image I_A whose correlation coefficient with respect to i_B2 is the largest, and compute the distance between this location point and the image point i_A. If this distance is smaller than one pixel, the location point i_B2 is regarded as the candidate match point of i_A; otherwise the computation of the candidate match point is regarded as failed.
In the above manner, continue to compute the candidate match points of the remaining image points.
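A minimal sketch of the correlation-coefficient (normalized cross-correlation) search, assuming the source window and the band-shaped search window have already been resampled along the epipolar line and the search window has already been scaled by r_iA / r_iB1; the 0.7 threshold follows the embodiment, while the function names and data layout are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized grey-value windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match_along_band(source_win, search_band, threshold=0.7):
    """Slide the source window column by column along the band-shaped search window
    and return (centre column of the best window, score), or (None, score) if the
    maximum correlation does not exceed the threshold."""
    h, w = source_win.shape
    best_score, best_col = -1.0, None
    for col in range(search_band.shape[1] - w + 1):
        score = ncc(source_win, search_band[:, col:col + w])
        if score > best_score:
            best_score, best_col = score, col
    if best_col is not None and best_score > threshold:
        return best_col + w // 2, best_score
    return None, best_score
```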
2) Compute a new object point from the candidate match point, and then verify the candidate match point with the other target images.
A person skilled in the art will understand that the candidate match point i_B2 on the first target image I_B is a candidate corresponding image point of i_A, but it is not necessarily accurate, so it needs to be verified with the other target images. The main process is as follows:
from the candidate match point i_B2, compute a new object point O_A by the principle of forward intersection;
for the other target images, back-project the new object point O_A onto each corresponding target image in turn to obtain the corresponding back-projected image points;
as shown in Fig. 2, the back-projected image point on the target image I_C is denoted i_C1; centred on the image point i_A on the reference image I_A and on the back-projected image point i_C1 respectively, sample the source window and the search window along the epipolar lines on the reference image I_A and on the target image I_C; within the search window, compute the location point i_C2 with the largest correlation coefficient with respect to the source window; if this correlation coefficient exceeds the threshold, the location point i_C2 is regarded as a corresponding image point of the image point i_A; the remaining target images are processed in the same way;
if at least N corresponding image points are successfully matched on the other images, the candidate match point i_B2 is considered verified; it is then taken as the corresponding image point of the image point i_A on the first target image I_B, after which the corresponding object point can be updated from all corresponding image points by multi-view forward intersection.
In the embodiment of the present invention, N may be set to 1, i.e. the candidate match point i_B2 is considered verified as soon as one corresponding image point is successfully matched on an image other than the first target image I_B. Taking the foregoing as an example, when N = 1, if the location point i_C2 is a corresponding image point of the image point i_A, then the candidate match point i_B2 is verified, and i_B2 and i_C2 are both corresponding image points of the image point i_A; if the remaining target images also yield corresponding image points, the corresponding object point can be updated from all corresponding image points by multi-view forward intersection.
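A minimal sketch of the multi-view forward intersection used to compute and update the object point, assuming each corresponding image point has already been converted into a ray with projection centre C_i (the line elements) and unit direction v_i = R_i (x_i, y_i, -f_i)^T / ||R_i (x_i, y_i, -f_i)^T|| in object space; the object point is taken as the least-squares point closest to all rays.

```python
import numpy as np

def forward_intersection(centres, directions):
    """centres: (N, 3) projection centres; directions: (N, 3) unit ray directions, N >= 2.
    Returns the object point minimising the sum of squared distances to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for C, v in zip(np.asarray(centres, float), np.asarray(directions, float)):
        P = np.eye(3) - np.outer(v, v)   # projector onto the plane orthogonal to the ray
        A += P
        b += P @ C
    return np.linalg.solve(A, b)
```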
In the example shown in Fig. 2 of the embodiment of the present invention, since the target images selected for the reference image are all its neighbouring images, the reference image is located at the centre and the target images are distributed evenly around it. However, a person skilled in the art will understand that the spatial relationship between the images is determined by the exterior orientation elements mentioned above, from which the epipolar relations used during matching are computed.
3) Repeat the above matching processes 1) and 2) until all image points of the reference image have been matched, and output the matching result (i.e. all corresponding image points and their corresponding object points).
Step 4: if the current level is the bottom pyramid level, the matching is finished; otherwise, the matching result is transferred to the lower pyramid level (for initializing the reference-image depth map of the lower pyramid level).
In the embodiment of the present invention, the reference-image depth map recomputed from the matching result after matching is more accurate than the initial depth map of Step 2.
A person skilled in the art will understand that the specific values of the "threshold" and the "set value" involved in the above steps can be set by the relevant technical personnel according to the actual situation or experience.
Through the above description of the embodiments, a person skilled in the art can clearly understand that the above embodiments can be implemented by software, or by software plus a necessary general hardware platform. Based on this understanding, the technical solutions of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive or a removable hard disk) and includes instructions that cause a computer device (such as a personal computer, a server or a network device) to execute the methods described in the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.
Claims (7)
1. A scale-invariant multi-view dense matching method for oblique images, characterized by comprising:
adopting a pyramid matching strategy in which the matching result of the upper pyramid level is passed to the lower pyramid level as a matching constraint;
in the reference-image depth-map initialization stage, initializing the depth map of the reference image with aerial-triangulation tie points obtained in advance or with the depth map obtained by matching at the upper pyramid level;
in the reference-image pixel matching stage, first computing, according to the initial depth map of the reference image, the initial object point of the image point selected on the reference image, obtaining the back-projected image point of this initial object point on the first target image, and then computing the candidate match point on the first target image; then computing a new object point from the candidate match point and verifying the candidate match point with the other target images, and updating the corresponding object point if the verification succeeds; repeating the above matching process until all image points of the reference image have been matched, and outputting the matching result;
if the current level is the bottom pyramid level, finishing the matching; otherwise, transferring the matching result to the lower pyramid level.
2. The scale-invariant multi-view dense matching method for oblique images according to claim 1, characterized in that the method further comprises a data acquisition step, the acquired data including the oblique photography images, the corresponding interior and exterior orientation elements of the images, and the aerial-triangulation tie points;
the oblique photography images include one reference image and several target images, and the target images are sorted from high to low by their overlap with the reference image;
the interior orientation elements of an image include the camera focal length, and the exterior orientation elements include the rotation matrix and the line elements.
3. The scale-invariant multi-view dense matching method for oblique images according to claim 1, characterized in that, in the reference-image depth-map initialization stage, if the current level is the top pyramid level, the reference-image depth map is initialized with the aerial-triangulation tie points obtained in advance; if the current level is not the top pyramid level, the reference-image depth map is initialized with the matching result obtained by matching at the upper pyramid level.
4. The scale-invariant multi-view dense matching method for oblique images according to claim 1 or 3, characterized in that the step of initializing the reference-image depth map with the aerial-triangulation tie points is as follows:
among the aerial-triangulation tie points, finding all tie points visible to the reference image, taking the distance from each visible tie point to the projection centre of the reference image as the depth value of the corresponding visible tie point, then constructing a Delaunay triangulation on the reference image from the image-space coordinates of these visible tie points, and interpolating within the Delaunay triangulation to obtain a depth value for every pixel of the reference image, thereby constituting the initial depth map of the reference image.
5. The scale-invariant multi-view dense matching method for oblique images according to claim 1, characterized in that, in the reference-image pixel matching stage, matching is performed for every image point of the reference image or for one image point taken at a fixed interval.
6. The scale-invariant multi-view dense matching method for oblique images according to claim 5, characterized in that the step of computing, according to the initial depth map of the reference image, the initial object point of the image point selected on the reference image, obtaining the back-projected image point of this initial object point on the first target image, and then computing the candidate match point on the first target image is as follows:
for an image point i_A on the reference image I_A, obtaining its initial depth value d from the initial depth map of the reference image, and computing the object-space three-dimensional coordinates of its corresponding initial object point P_A according to the following relation,
wherein (X, Y, Z) are the object-space three-dimensional coordinates of the initial object point P_A, (x, y) are the image-space coordinates of the image point i_A, f is the camera focal length, R is the rotation matrix among the exterior orientation elements of the image, and (X_S, Y_S, Z_S) are the line elements among the exterior orientation elements of the image;
back-projecting the initial object point P_A onto the first target image I_B to obtain the back-projected image point i_B1;
centred on the back-projected image point i_B1, resampling along the epipolar line on the first target image I_B to obtain a band-shaped search window, and on the reference image I_A resampling along the epipolar line, centred on the image point i_A, to obtain a source window, wherein, when the search window is resampled along the epipolar line, the search window is scaled by the factor r_iA / r_iB1 according to the ground resolution r_iA of the image point i_A and the ground resolution r_iB1 of the image point i_B1;
within the search window, using the correlation-coefficient method to compute the location point i_B2 whose correlation coefficient with the source window is the largest and exceeds a threshold; then performing reverse matching: centred on the location point i_B2, taking a match window along the epipolar line on the first target image I_B, and on the reference image I_A taking a search window along the epipolar line centred on the image point i_A, using the correlation-coefficient matching method to obtain the location point in the search region of the reference image I_A with the largest correlation coefficient with respect to i_B2, and computing the distance between this location point and the image point i_A; if the distance between the two is smaller than one pixel, regarding the location point i_B2 as the candidate match point of i_A;
in the above manner, continuing to compute the candidate match points of the remaining image points.
7. The scale-invariant multi-view dense matching method for oblique images according to claim 1 or 6, characterized in that the step of computing a new object point from the candidate match point, verifying the candidate match point with the other target images, and updating the corresponding object point if the verification succeeds is as follows:
from the candidate match point i_B2, computing a new object point O_A by the principle of forward intersection;
for the other target images, back-projecting the new object point O_A onto each corresponding target image in turn to obtain the corresponding back-projected image points;
wherein the back-projected image point on the target image I_C is denoted i_C1; centred on the image point i_A on the reference image I_A and on the back-projected image point i_C1 respectively, sampling the source window and the search window along the epipolar lines on the reference image I_A and on the target image I_C; within the search window, computing the location point i_C2 with the largest correlation coefficient with respect to the source window; if the correlation coefficient exceeds the threshold, regarding the location point i_C2 as a corresponding image point of i_A; processing the remaining target images in the same way;
if at least N corresponding image points are successfully matched on the other images, considering the candidate match point i_B2 verified, and taking it as the corresponding image point of the image point i_A on the first target image I_B, after which the corresponding object point is updated from all corresponding image points by multi-view forward intersection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810172144.3A CN108399631B (en) | 2018-03-01 | 2018-03-01 | Scale invariance oblique image multi-view dense matching method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810172144.3A CN108399631B (en) | 2018-03-01 | 2018-03-01 | Scale invariance oblique image multi-view dense matching method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399631A true CN108399631A (en) | 2018-08-14 |
CN108399631B CN108399631B (en) | 2022-02-11 |
Family
ID=63091464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810172144.3A Active CN108399631B (en) | 2018-03-01 | 2018-03-01 | Scale invariance oblique image multi-view dense matching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108399631B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102197649A (en) * | 2008-08-29 | 2011-09-21 | 皇家飞利浦电子股份有限公司 | Dynamic transfer of three-dimensional image data |
US20160125585A1 (en) * | 2014-11-03 | 2016-05-05 | Hanwha Techwin Co., Ltd. | Camera system and image registration method thereof |
CN104392457A (en) * | 2014-12-11 | 2015-03-04 | 中国测绘科学研究院 | Automatic matching method and device for connection points of slanted images |
CN104966281A (en) * | 2015-04-14 | 2015-10-07 | 中测新图(北京)遥感技术有限责任公司 | IMU/GNSS guiding matching method of multi-view images |
CN104778701A (en) * | 2015-04-15 | 2015-07-15 | 浙江大学 | Local image describing method based on RGB-D sensor |
CN104794490A (en) * | 2015-04-28 | 2015-07-22 | 中测新图(北京)遥感技术有限责任公司 | Slanted image homonymy point acquisition method and slanted image homonymy point acquisition device for aerial multi-view images |
CN105160702A (en) * | 2015-08-20 | 2015-12-16 | 武汉大学 | Stereoscopic image dense matching method and system based on LiDAR point cloud assistance |
CN105953777A (en) * | 2016-04-27 | 2016-09-21 | 武汉讯图科技有限公司 | Large-plotting-scale tilt image measuring method based on depth image |
CN106228609A (en) * | 2016-07-09 | 2016-12-14 | 武汉广图科技有限公司 | Oblique photography three-dimensional modeling method based on spatial feature information |
CN107194380A (en) * | 2017-07-03 | 2017-09-22 | 上海荷福人工智能科技(集团)有限公司 | Deep convolutional network and learning method for face recognition in complex scenes |
Non-Patent Citations (4)
Title |
---|
SUN LI et al.: "Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching" * |
张振超 (Zhang Zhenchao): "Research on Multi-View Oblique Aerial Image Matching Technology", China Master's Theses Full-text Database, Information Science and Technology * |
郑顺义 (Zheng Shunyi) et al.: "Hierarchical Matching Method Constrained by Triangulated Irregular Network", Journal of Computer-Aided Design & Computer Graphics * |
闫利 (Yan Li) et al.: "Dense Matching of High-Resolution Aerial Multi-View Images Using a Network Graph", Acta Geodaetica et Cartographica Sinica * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111126418A (en) * | 2018-10-30 | 2020-05-08 | 国家海洋局第一海洋研究所 | Oblique image matching method based on planar perspective projection |
CN109615646A (en) * | 2018-12-17 | 2019-04-12 | 石家庄爱赛科技有限公司 | Stereo matching system and stereo matching method |
CN111222586A (en) * | 2020-04-20 | 2020-06-02 | 广州都市圈网络科技有限公司 | Oblique image matching method and device based on the viewing angle of a three-dimensional oblique model |
CN111222586B (en) * | 2020-04-20 | 2020-09-18 | 广州都市圈网络科技有限公司 | Oblique image matching method and device based on the viewing angle of a three-dimensional oblique model |
CN112598740A (en) * | 2020-12-29 | 2021-04-02 | 中交第二公路勘察设计研究院有限公司 | Rapid and accurate matching method for large-range multi-view oblique image connection points |
CN113989250A (en) * | 2021-11-02 | 2022-01-28 | 中国测绘科学研究院 | Improved block dense matching method, system, terminal and medium based on depth map |
CN113989250B (en) * | 2021-11-02 | 2022-07-05 | 中国测绘科学研究院 | Improved block dense matching method, system, terminal and medium based on depth map |
Also Published As
Publication number | Publication date |
---|---|
CN108399631B (en) | 2022-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108399631A (en) | 2018-08-14 | Scale-invariant oblique image multi-view dense matching method | |
CN109631855B (en) | ORB-SLAM-based high-precision vehicle positioning method | |
CN110223383A (en) | 2019-09-10 | Plant three-dimensional reconstruction method and system based on depth map inpainting | |
CN110211043B (en) | Registration method based on grid optimization for panoramic image stitching | |
CN107862744A (en) | 2018-03-30 | Aerial image three-dimensional modeling method and related product | |
CN112927360A (en) | Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data | |
CN106228609A (en) | 2016-12-14 | Oblique photography three-dimensional modeling method based on spatial feature information | |
CN112598740B (en) | Rapid and accurate matching method for large-range multi-view oblique image connection points | |
CN106780712B (en) | Three-dimensional point cloud generation method combining laser scanning and image matching | |
CN106204443A (en) | 2016-12-07 | Panoramic unmanned aerial vehicle system based on multi-camera multiplexing | |
CN113160420B (en) | Three-dimensional point cloud reconstruction method and device, electronic equipment and storage medium | |
WO2013106920A1 (en) | Densifying and colorizing point cloud representation of physical surface using image data | |
CN110428501B (en) | Panoramic image generation method and device, electronic equipment and readable storage medium | |
CN110232738B (en) | Multi-view remote sensing image stereo reconstruction method based on disparity map and key points | |
CN114241125B (en) | Multi-view satellite image-based fine three-dimensional modeling method and system | |
CN109255808A (en) | 2019-01-22 | Building texture blending method and apparatus based on oblique images | |
CN113566793A (en) | True orthoimage generation method and device based on unmanned aerial vehicle oblique image | |
CN110889899A (en) | Method and device for generating digital earth surface model | |
CN113077552A (en) | DSM (digital communication system) generation method and device based on unmanned aerial vehicle image | |
CN112288637A (en) | Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method | |
CN117197388A (en) | Live-action three-dimensional virtual reality scene construction method and system based on generation of antagonistic neural network and oblique photography | |
CN109785421A (en) | 2019-05-21 | Texture mapping method and system based on combined air-ground images | |
CN113392879A (en) | Multi-view matching method for aerial image | |
CN112785686A (en) | Forest map construction method based on big data and readable storage medium | |
CN107784666B (en) | Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |