CN110490913A - Image matching method using feature description operators based on corner and single-line-segment grouping - Google Patents
- Publication number: CN110490913A (application CN201910660833.3A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
Abstract
The invention discloses an image matching method using feature description operators based on the grouping of corners and single line segments. First, line segments and Harris corners are extracted from the images, searched, and grouped: each detected corner and line segment are combined into a corner-single-line-segment texture descriptor that is invariant to scale, rotation, and illumination. The Harris corner contributes rotational invariance, while describing the line segment by only its stable half-width improves reliability under parallax-induced scene changes. A spatially weighted shortest-distance measure then yields local matching results. Finally, candidate matches are established for each line segment, a matching matrix is built, and the global matching result is solved by spectral analysis. The matching descriptor of the invention is invariant to scale, rotation, and illumination; by building an image pyramid for each image of the stereo pair and matching layer by layer on the pyramids, the influence of scale is eliminated, and the heavy computation and long running time of multi-line-segment grouping are avoided.
Description
Technical Field
The invention belongs to the technical fields of remote sensing image recognition and computer vision, and relates to an image matching method; in particular, it relates to an image matching method based on Harris corner and single-line-segment grouping feature description operators, combined with geometric constraints and a pyramid transfer strategy, in which feature points and feature lines are matched simultaneously.
Background
Image matching technology is widely applied to three-dimensional reconstruction, image retrieval, target tracking, military reconnaissance, and the like, and has important military, medical, and ecological-monitoring value. Although current feature matching algorithms have achieved remarkable results for point features, matching based on linear features still faces problems caused by illumination, noise, occlusion, and other factors: 1) line segment features are not salient, so salience-based operators such as SIFT deliberately avoid points lying on line segments; 2) line segment matching based on geometric constraints also has problems — for example, end points extracted when directly applying epipolar-constraint matching are not accurate enough, and small viewing-angle changes or a predicted geometric relationship between the images are required; 3) matching methods based on line-segment grouping generally involve the matching relations among many internal line segments and construct many possible feature groupings, so computation is complex and time-consuming; 4) typical line-segment feature descriptors build buffer areas on both sides of the segment and statistically describe the texture of the whole area, but because of camera viewing-angle changes the texture on only one side of the segment may be stable while the texture on the other side changes greatly, causing matching uncertainty.
Harris corners are used for edge and corner detection; they usually lie at intersections of edges, resist changes in shooting viewpoint, have rotational invariance, and — since only first derivatives of the image are used — are invariant to translations of the image gray level. These properties make them suitable as matching feature points. Moreover, Harris corners and line segments are close in planar position, and the end points of a complete, accurately located line segment are very often Harris corners. Therefore, a matching description operator that groups a Harris corner with a single line segment can effectively filter out unreasonable line-segment groups while overcoming rotation and gray-level change, and takes less time and performs better than matching algorithms based on multi-line-segment grouping.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image matching method that has high time and storage efficiency and matches corners with single-line-segment-grouped image matching description operators that are stable under image rotation, translation, and scale transformation.
The technical scheme adopted by the invention is as follows: a method for image matching based on feature description operators of corner and single line segment grouping is characterized by comprising the following steps:
Step 1: inputting a reference image and an image to be matched, constructing a multi-level Gaussian image pyramid, down-sampling the images layer by layer, performing steps 2-4 on the reference image and the image to be matched of each layer, and calculating an optimal matching scale;
step 2: respectively extracting a straight line segment and a Harris angular point from a reference image and an image to be matched;
searching Harris corners in a specified range for the end points of each extracted straight line segment, grouping the single straight line segments and the nearest corners in a correlated manner, and combining feature vectors of the corners and spectral information descriptors of the straight line segments to form corner-line segment half-width texture feature descriptors;
Step 3: matching point-line features of the reference image and the image to be matched according to the corner-line-segment half-width texture feature descriptors of the line segments to obtain a candidate matching set;
Step 4: calculating the similarity of candidate matches according to the geometric relation between the images, including the distance between image corners and the similarity of line-segment descriptors; screening candidate matches for the groupings of the reference image and the image to be matched, and establishing a candidate matching matrix M for each line segment i; solving the established matching matrix M through spectral analysis, and deciding whether each candidate match is accepted or rejected;
Step 5: outputting the stereo image corresponding to the optimal matching scale, obtaining the optimal scales of the reference image and the image to be matched and the corner-single-line-segment matching result corresponding to those scales.
The image matching technology provided by the invention has the following beneficial effects: (1) aiming at the abundant line features in remote sensing images, corners and single line segments are grouped and a corner-line-segment descriptor is constructed; the texture features of line segments and the geometric relation between line segments and their corresponding corners are fully utilized to screen candidate matches, which effectively reduces the time complexity of the algorithm and improves the reliability of the matching result; (2) an image pyramid is established and matching is performed layer by layer, which eliminates the influence of scale; (3) image rotation and scaling effects are eliminated in the matching process, so the whole algorithm has rotation and scale invariance, and since pixel gray information is not directly involved in matching, the method also has good invariance to image brightness changes.
Drawings
FIG. 1 is a flow chart of an embodiment of the invention;
FIG. 2 is a schematic diagram of association between a corner and a line segment according to an embodiment of the present invention;
FIG. 3 is a half-width line segment descriptor diagram of an embodiment of the present invention;
FIG. 4 is a schematic diagram of the Harris corners and line segments after grouping having rotational invariance, in accordance with an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the method for image matching based on feature description operators of corner and single line grouping provided by the present invention includes the following steps:
Step 1: inputting a reference image and an image to be matched, constructing a multi-level Gaussian image pyramid based on Gaussian down-sampling (other algorithms such as wavelet decomposition can also be adopted), down-sampling the images layer by layer, performing steps 2-4 on the reference image and the image to be matched of each layer, and calculating the optimal matching scale;
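As an illustration only (not the patent's own implementation), the Gaussian-pyramid step can be sketched with NumPy alone: blur with a separable normalized Gaussian kernel, then keep every second pixel, once per level. The kernel radius, `sigma`, and `levels` values below are assumptions for the sketch.

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Separable Gaussian blur with reflected borders (NumPy only)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                      # normalized 1-D Gaussian
    padded = np.pad(img, radius, mode="reflect")
    # Convolve rows, then columns (separability of the Gaussian).
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def gaussian_pyramid(img: np.ndarray, levels: int = 4) -> list:
    """Blur then take every second pixel, repeated once per pyramid level."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_blur(pyramid[-1])
        pyramid.append(blurred[::2, ::2])
    return pyramid
```

Each level halves both image dimensions, so matching can be repeated per layer as step 1 describes.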
step 2: respectively extracting a straight line segment and a Harris angular point from a reference image and an image to be matched;
referring to fig. 2, searching Harris corners in a specified range for end points of each extracted straight line segment, grouping the single straight line segment and the nearest corner, and combining feature vectors of the corners and spectral information descriptors of the straight line segments to form corner-segment half-width texture feature descriptors;
For a straight line segment, rectangular regions of the same length, extending perpendicular to the segment on both sides, are taken as the texture description region and divided into m sub-regions, giving a line-segment spectral feature description vector L of dimension 2m:

L = (M1, S1, M2, S2, ..., Mm, Sm)^T

where Mi and Si are, respectively, the independently normalized mean and standard deviation of the pixel gradients in the i-th sub-region.
Referring to fig. 3, to account for the differing texture stability on the two sides of a line segment, after the texture region description is established a variance is computed over the pixel values on each side of the segment; a large variance indicates that the texture on that side is unstable and most likely lies in a depth-change region. For a segment whose one-sided variance exceeds a threshold, a one-sided matching strategy is adopted: only the stable half of the description vector is used to construct the texture constraint, reducing missed matches. The spectral feature description vector L of the segment then becomes:

L = (M1, S1, M2, S2, ..., M_{m/2}, S_{m/2})^T
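A minimal sketch of the half-width strategy above, assuming each side of the segment has already been sampled into an array of sub-regions of gradient values; the sampling itself, the `var_threshold` value, and the use of whole-vector rather than per-element normalization are simplifying assumptions of this sketch.

```python
import numpy as np

def spectral_descriptor(subregions: np.ndarray) -> np.ndarray:
    """Per-subregion (mean, std) pairs, normalized to unit length.
    subregions: (m, k) array of gradient samples, one row per sub-region."""
    means = subregions.mean(axis=1)
    stds = subregions.std(axis=1)
    vec = np.empty(2 * len(means))
    vec[0::2], vec[1::2] = means, stds          # interleave M_i, S_i
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def half_width_descriptor(left, right, var_threshold=50.0):
    """Use only the stable side when the other side's pixel variance is large."""
    lv, rv = np.var(left), np.var(right)
    if max(lv, rv) > var_threshold:
        stable = left if lv <= rv else right    # keep the lower-variance side
        return spectral_descriptor(stable)      # half-width vector
    return np.concatenate([spectral_descriptor(left), spectral_descriptor(right)])
```

With one unstable side the descriptor halves in length, matching the half-width vector L above.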
When extracting Harris corners, the gradient values X and Y in the x and y directions within the local corner region are first computed, and a corner response function E_{x,y} is constructed:

E_{x,y} = A x^2 + 2C xy + B y^2

where A = X^2 ⊗ w, B = Y^2 ⊗ w, C = (XY) ⊗ w; w is the Gaussian weight of a point in the local region of the corner, I is the gray value in the local region of the corner (X and Y are its first derivatives), and ⊗ denotes convolution.

Writing E_{x,y} in matrix form, E(x, y) = [x y] H [x y]^T with H = [[A, C], [C, B]]. The two eigenvalues [α β] of H are computed, and the eigenvector [α1 α2] corresponding to the minimum eigenvalue α — the semi-major axis of the ellipse determined by the Harris corner — is taken as one of the description-vector factors of the corner. The eigenvalues [α β] represent the magnitude of the gradient change in the local region of the corner, and the eigenvector [α1 α2] represents the direction of the overall gradient change in that region (the direction of the ellipse semi-axis).
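The eigen-decomposition above can be sketched as follows, under stated assumptions: central-difference gradients for X and Y, and a fixed Gaussian `sigma` for the weights w. `numpy.linalg.eigh` returns eigenvalues in ascending order, so the first eigenvector is the direction associated with the minimum eigenvalue, i.e. the ellipse semi-major axis.

```python
import numpy as np

def harris_ellipse(patch: np.ndarray, sigma: float = 1.0):
    """Structure tensor H = [[A, C], [C, B]] of a grayscale patch and its
    eigen-decomposition; returns (eigenvalues, min-eigenvalue direction)."""
    # Central-difference gradients of the intensity I: axis 0 is y, axis 1 is x.
    Y, X = np.gradient(patch.astype(float))
    # Gaussian weights w over the local corner region.
    h, w_ = patch.shape
    yy, xx = np.mgrid[0:h, 0:w_]
    cy, cx = (h - 1) / 2, (w_ - 1) / 2
    w = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    A, B, C = (X * X * w).sum(), (Y * Y * w).sum(), (X * Y * w).sum()
    H = np.array([[A, C], [C, B]])
    vals, vecs = np.linalg.eigh(H)              # eigenvalues in ascending order
    return vals, vecs[:, 0]                     # ellipse major-axis direction

# A patch with a strong vertical edge: gradient energy concentrates in x,
# so the minimum-eigenvalue direction points along y.
patch = np.zeros((7, 7)); patch[:, 4:] = 1.0
vals, major_axis = harris_ellipse(patch)
```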
Step 2.1 is executed for the end points of each extracted line segment, and step 2.2 is executed if a Harris corner is found within the specified range. The rotational invariance of the Harris corner is used to handle image rotation; each single line segment is associated (grouped) with its corresponding corner, ensuring as far as possible that the corner is an end point of that segment. The feature vector of the corner and the spectral information descriptor of the segment are then combined into the corner-line-segment texture descriptor.
Step 2.1: according to the proximity between corners and line-segment end points, corners are searched for within a distance threshold t around the two end points of each extracted line segment of the reference image and the image to be matched; if one or more corners exist, step 2.2 is executed.
In this embodiment, two end points of each extracted line segment are used as centers for the reference image and the image to be matched, a circular window with a radius of 3 pixels is opened, and an angular point is searched in the window.
Step 2.2: referring to fig. 4, the corner closest to the end point is selected to form a group with the single line segment and the other corners are discarded; one corner may correspond to several line segments, while a single line segment corresponds to at most two corners. Using the rotational invariance of the Harris corner — that is, the directions of the major and minor axes of its characteristic ellipse — the included angle between the "major-axis direction" and the "line-segment direction" (fixed in the counterclockwise direction) is invariant, so the rotational deformation between the two images is eliminated. Let the start- and end-point coordinates of the straight line segment be [s_x s_y] and [e_x e_y]; the counterclockwise angle θ between the segment and the gradient direction of the Harris corner's semi-major axis can then be calculated as:

θ = (arctan2(e_y − s_y, e_x − s_x) − arctan2(α2, α1)) mod 2π
and finally combining the description vector [ alpha beta ] of the angular point, the included angle theta of the straight line and the angular point gradient direction and the texture description vector L of the line segment to form an angular point-line segment texture feature descriptor of the candidate grouping, wherein when only one angular point is matched, the angular point-line segment texture feature descriptor is as follows:
L_M = (M1, S1, M2, S2, ..., M_{m/2}, S_{m/2}, α, β, θ)^T
when there are two matched corners, the corner-line segment texture feature descriptor is:
L_M = (M1, S1, M2, S2, ..., M_{m/2}, S_{m/2}, α1, β1, θ1, α2, β2, θ2)^T
where α1, β1, θ1 and α2, β2, θ2 are, respectively, the two eigenvalues of each of the two Harris corners and the counterclockwise angle from each corner's major-axis direction to the line segment.
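The angle θ used above can be sketched as follows; fixing the angle counterclockwise makes it invariant when both the segment and the corner's axis rotate together, which is the rotational invariance the grouping relies on. The `arctan2`-based formula is a reconstruction consistent with the definitions above, not the patent's exact expression.

```python
import numpy as np

def ccw_angle(start, end, axis_vec):
    """Counterclockwise angle from the corner's ellipse-axis direction
    [alpha1, alpha2] to the segment direction, normalized to [0, 2*pi)."""
    seg_angle = np.arctan2(end[1] - start[1], end[0] - start[0])
    axis_angle = np.arctan2(axis_vec[1], axis_vec[0])
    return (seg_angle - axis_angle) % (2 * np.pi)

# Segment along +y, axis along +x: the counterclockwise angle is pi/2.
theta = ccw_angle((0, 0), (0, 1), (1, 0))
```

Rotating both directions by the same amount leaves θ unchanged, so the descriptor entry survives image rotation.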
Step 3: matching point-line features of the reference image and the image to be matched according to the corner-line-segment half-width texture feature descriptors of the line segments to obtain a candidate matching set;
Since line segments have different widths, the weight ratio between the line-segment part and the corner part of the descriptor also changes. For a half-width line-segment spectral feature description vector L of dimension m, each value in the corner-line-segment texture feature descriptor is weighted, the weights of the line-segment descriptor and of the Harris corner features being assigned as follows. LHD denotes the weighted value corresponding to the corner-line-segment texture feature descriptor, and its weights must satisfy a normalization constraint.

Corresponding weights are given according to the following rules:

(1) If the line segment matches two corners, the weights of the line-segment descriptor and the corner descriptor are set so that W_i^L is the weight of the statistics (standard deviation and mean) of the i-th line-segment neighborhood sub-region, W_H is the weight of the corner feature description vector, and w_g^i is the Gaussian weight of the i-th sub-region. The farther a sub-region is from the line segment, the lower its weight: the Gaussian weight decreases with the distance d from the i-th sub-region to the segment. The weight matrix W collects these weights.

(2) If the line segment matches only one corner, the weight of the line-segment descriptor is adjusted accordingly and the weight matrix W is expressed with the single corner's weight.

The final weighted corner-line-segment texture feature descriptor, taking the Gaussian weights into account, is L_M W^T.
Step 4: calculating the similarity of candidate matches according to the geometric relation between the images, including the distance between image corners and the similarity of line-segment descriptors; screening candidate matches for the groupings of the reference image and the image to be matched, and establishing a candidate matching matrix M for each line segment i; solving the established matching matrix M through spectral analysis, and deciding whether each candidate match is accepted or rejected.
During matching, for each line segment in the reference image, the Euclidean distances between its weighted corner-line-segment texture feature descriptor and those of all line segments in the image to be matched are computed and sorted. Let the shortest distance be s1 and the second shortest be s2; when s1 and s2 satisfy the conditions

s1 < t and s1/s2 < t_s,

the two line segments corresponding to s1 are combined into a candidate matching pair. Here t is a distance-limiting threshold: only when the shortest distance is below this threshold is the corresponding segment considered a candidate match. t_s is a threshold on the ratio of the shortest to the second-shortest distance: when the shortest distance is too close to the second shortest, no correctly matching segment is assumed to exist, since both s1 and s2 may be mismatches.
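The nearest/second-nearest screening above can be sketched as follows; the descriptors here are stand-in 2-D vectors and the threshold values `t` and `t_s` are illustrative assumptions, not the patent's tuned values.

```python
import numpy as np

def ratio_test_matches(desc_ref, desc_tgt, t=1.0, t_s=0.8):
    """Accept reference segment i only if the shortest Euclidean distance s1
    satisfies s1 < t and s1 < t_s * s2 (s2 = second-shortest distance).
    Assumes desc_tgt holds at least two descriptors."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_tgt - d, axis=1)
        order = np.argsort(dists)
        s1, s2 = dists[order[0]], dists[order[1]]
        if s1 < t and s1 < t_s * s2:
            matches.append((i, int(order[0])))
    return matches

# Two reference descriptors, three candidates in the image to be matched.
ref = np.array([[0.0, 0.0], [5.0, 5.0]])
tgt = np.array([[0.1, 0.0], [3.0, 4.0], [5.0, 5.1]])
pairs = ratio_test_matches(ref, tgt)
```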
Finally, all candidate matches between the two images are computed and the line-segment candidate matching set is established:

C = {(l_i, l_i')}, i = 1, ..., n

where l_i and l_i' are the line segments of the i-th candidate match in the reference image and the image to be matched, respectively, and n is the number of candidate matching pairs.
The texture and geometric consistency scores between every two candidate matches are calculated to construct an adjacency matrix M; M is regarded as an undirected graph and solved through spectral analysis, and the candidate matches are screened to obtain the final line-segment matching result.
For two candidate matches a and b with score M(a, b), where a = (l_a, l_a') and b = (l_b, l_b'), the following constraints are used to construct their similarity:

(1) the intersection ratio I_i, representing the degree of overlap of the two line segments in the horizontal direction;

(2) the projection ratio P_i, representing the distance between the two line segments in the vertical direction;

(3) the line-segment angle Θ, i.e., the angle between the two line segments, computed by the line-segment angle formula;

(4) the texture similarity V between the texture descriptors of the corresponding line segments in the reference image and the image to be matched.
Once texture and geometric constraints have been constructed for a pair of candidate matches (a, b), the texture and geometric consistency score M_ij between them can be calculated; if the geometric change or texture change between the two candidate matches is larger than a preset value, M_ij is set to 0. M_ij is the element in row i, column j of the adjacency matrix M. In the score, d_I, d_P, and d_Θ are the normalized changes of the corresponding constraints, and t_I, t_P, and t_Θ are thresholds limiting the geometric and texture variation between the two candidate matches; only when all of these quantities are less than 1 is the indicator Γ true, otherwise the consistency score M_ij is set to zero.
After the adjacency matrix M is constructed, the line-segment matching problem becomes finding a matching cluster C that maximizes the total consistency score. All candidate matches are represented by a vector x: for the i-th candidate match, x(i) = 1 if it belongs to the matching cluster C and x(i) = 0 otherwise. The best matching solution x* is then:

x* = argmax(x^T M x).
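Maximizing x^T M x over binary vectors is hard, so the spectral-analysis step is commonly approximated by taking the leading eigenvector of M and greedily binarizing it (in the style of Leordeanu and Hebert's spectral matching); the sketch below assumes this standard approximation rather than the patent's exact procedure.

```python
import numpy as np

def spectral_match(M, candidates, n_iter=200):
    """Approximate x* = argmax x^T M x over indicator vectors: power-iterate
    to the leading eigenvector of M, then greedily binarize it, keeping each
    reference/target segment in at most one accepted match."""
    n = M.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):                     # power iteration
        x = M @ x
        x /= np.linalg.norm(x)
    accepted, used_ref, used_tgt = [], set(), set()
    for i in np.argsort(-np.abs(x)):            # strongest components first
        if np.abs(x[i]) < 1e-6:
            break
        r, t = candidates[i]
        if r not in used_ref and t not in used_tgt:
            accepted.append(candidates[i])
            used_ref.add(r); used_tgt.add(t)
    return accepted

# Three candidates; the first two are mutually consistent, the third conflicts
# with both (it reuses reference segment 0 and target segment 1).
candidates = [(0, 0), (1, 1), (0, 1)]
M = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
result = spectral_match(M, candidates)
```

The mutually consistent pair dominates the leading eigenvector, so the conflicting candidate is rejected.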
Step 5: the stereo image corresponding to the optimal matching scale is output, giving the optimal scales of the reference image and the image to be matched and the corner-single-line-segment matching result corresponding to those scales.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. A method for image matching based on feature description operators of corner and single line segment grouping is characterized by comprising the following steps:
Step 1: inputting a reference image and an image to be matched, constructing a multi-level Gaussian image pyramid, down-sampling the images layer by layer, performing steps 2-4 on the reference image and the image to be matched of each layer, and calculating an optimal matching scale;
step 2: respectively extracting a straight line segment and a Harris angular point from a reference image and an image to be matched;
searching Harris corners in a specified range for the end points of each extracted straight line segment, grouping the single straight line segments and the nearest corners in a correlated manner, and combining feature vectors of the corners and spectral information descriptors of the straight line segments to form corner-line segment half-width texture feature descriptors;
Step 3: matching point-line features of the reference image and the image to be matched according to the corner-line-segment half-width texture feature descriptors of the line segments to obtain a candidate matching set;
Step 4: calculating the similarity of candidate matches according to the geometric relation between the images, including the distance between image corners and the similarity of line-segment descriptors; screening candidate matches for the groupings of the reference image and the image to be matched, and establishing a candidate matching matrix M for each line segment i; solving the established matching matrix M through spectral analysis, and deciding whether each candidate match is accepted or rejected;
Step 5: outputting the stereo image corresponding to the optimal matching scale, obtaining the optimal scales of the reference image and the image to be matched and the corner-single-line-segment matching result corresponding to those scales.
2. The method of claim 1, wherein the method comprises the following steps: in the step 1, a multi-level Gaussian image pyramid is constructed by adopting Gaussian down-sampling or wavelet decomposition.
3. The method of claim 1, wherein the method comprises the following steps: in step 2, for a straight line segment, rectangular regions of the same length, extending perpendicular to the segment on both sides, are taken as the texture description region and divided into m sub-regions, giving a line-segment spectral feature description vector L of dimension 2m:

L = (M1, S1, M2, S2, ..., Mm, Sm)^T;

where Mi and Si are, respectively, the independently normalized mean and standard deviation of the pixel gradients in the i-th sub-region;

for each line segment, after the texture region description is established, a variance is computed over the pixel values on each side of the segment, and a one-sided matching strategy is adopted for segments whose one-sided variance exceeds a threshold: only the stable half of the description vector is used to construct the texture constraint, reducing missed matches; the spectral feature description vector L of the segment then becomes:

L = (M1, S1, M2, S2, ..., M_{m/2}, S_{m/2})^T
when extracting Harris corners, the gradient values X and Y in the x and y directions within the local corner region are first computed, and a corner response function E_{x,y} is constructed:

E_{x,y} = A x^2 + 2C xy + B y^2

where A = X^2 ⊗ w, B = Y^2 ⊗ w, C = (XY) ⊗ w; w is the Gaussian weight of a point in the local region of the corner, I is the gray value in the local region of the corner, and ⊗ denotes convolution;

writing E_{x,y} in matrix form, E(x, y) = [x y] H [x y]^T with H = [[A, C], [C, B]]; the two eigenvalues [α β] of H are computed, and the eigenvector [α1 α2] corresponding to the minimum eigenvalue α — the semi-major axis of the ellipse determined by the Harris corner — is taken as one of the description-vector factors of the corner; the eigenvalues [α β] represent the magnitude of the gradient change in the local region of the corner, and the eigenvector [α1 α2] represents the direction of the overall gradient change in that region, i.e., the direction of the ellipse semi-axis.
4. The image matching method for the feature descriptors based on the corner point and single line segment grouping as claimed in claim 3, wherein the step 2 of combining the feature vectors of the corner points and the spectral information descriptors of the line segments to form the corner point-line segment half-width texture feature descriptors is to perform the following steps for each extracted straight line segment:
step 2.1: according to the proximity between corners and line-segment end points, searching for corners within a distance threshold t, taking the two end points of each extracted line segment as centers, for the reference image and the image to be matched respectively; if one or more corners exist, executing step 2.2;
step 2.2: sorting the corners by their distance to the end point, taking the closest corner to form a group with the line segment and discarding the others; one corner may correspond to several line segments, and each end point of a single line segment is associated with at most one corner;
letting the start- and end-point coordinates of the straight line segment be [s_x s_y] and [e_x e_y], the counterclockwise angle θ between the segment and the gradient direction of the Harris corner's semi-major axis is calculated as:

θ = (arctan2(e_y − s_y, e_x − s_x) − arctan2(α2, α1)) mod 2π
finally, combining the description vector [ alpha beta ] of the angular point, the included angle theta between the straight line segment and the gradient direction of the angular point and the texture description vector L of the straight line segment to form an angular point-straight line segment texture feature descriptor of the candidate grouping;
wherein: when only one matched corner is provided, the corner-line segment half-width texture feature descriptor is as follows:
L_M = (M1, S1, M2, S2, ..., M_{m/2}, S_{m/2}, α, β, θ)^T;
when there are two matched corners, the corner-line segment half-width texture feature descriptor is:
L_M = (M1, S1, M2, S2, ..., M_{m/2}, S_{m/2}, α1, β1, θ1, α2, β2, θ2)^T;
where α1, β1, θ1 and α2, β2, θ2 are, respectively, the two eigenvalues of each of the two Harris corners and the counterclockwise angle from each corner's major-axis direction to the line segment.
5. The method of claim 4, wherein the method comprises the following steps: in step 3, for a half-width line-segment spectral feature description vector L of dimension m, each value in the corner-line-segment texture feature descriptor is weighted, the weights of the line-segment descriptor and of the Harris corner features being assigned as follows; LHD denotes the weighted value corresponding to the corner-line-segment texture feature descriptor, and its weights must satisfy a normalization constraint;
the corresponding weights are assigned according to the following rules:
(1) if the line segment is grouped with two corners, the weights corresponding to the line segment descriptor and the corner descriptors are:
where the former is the weight of the statistic of the i-th line segment neighborhood sub-region and WH is the weight of the corner feature description vector; since a sub-region farther from the line segment receives a lower weight, the Gaussian weight is calculated as:
where d is the distance from the i-th line segment neighborhood sub-region to the line segment, satisfying:
the result being the Gaussian weight of the i-th line segment neighborhood sub-region;
the weight matrix W is now expressed as:
(2) if the line segment is grouped with only one corner, the weight of the line segment descriptor is:
the weight matrix W is now expressed as:
the final weighted corner-line segment texture feature descriptor, taking the Gaussian weights into account, is LM·W^T.
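A hedged sketch of the distance-decaying Gaussian weighting described above; here the sub-region distance d is approximated by each sub-region's index offset from the segment, an assumption made only for this illustration:

```python
import numpy as np

def gaussian_weights(num_subregions, sigma=1.0):
    """Gaussian weight per line-segment neighborhood sub-region:
    the weight decays with the sub-region's distance d from the
    segment, so sub-regions farther away contribute less."""
    # illustrative distance: index offset from the central sub-region
    d = np.abs(np.arange(num_subregions) - (num_subregions - 1) / 2.0)
    g = np.exp(-d ** 2 / (2 * sigma ** 2))
    return g / g.sum()   # normalize so the weights sum to 1

def apply_weights(descriptor, weights):
    """Elementwise weighting of the sub-region statistics, standing in
    for the LM.W^T product of the claim."""
    return descriptor * weights
```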
6. The method of claim 5, wherein: in step 4, the similarity of candidate matches is calculated according to the geometric relationship between the images to be matched; during matching, for the weighted corner-straight line segment texture feature descriptor of each straight line segment in the reference image, the Euclidean distances to the weighted descriptors of all line segments in the image to be matched are calculated and sorted; let the shortest distance be s1 and the next shortest be s2; when s1 and s2 satisfy the following condition, the two line segments corresponding to s1 are combined into a candidate matching pair:
where t denotes a distance threshold; only when the shortest distance is less than the threshold is the corresponding line segment regarded as a candidate matching line segment;
finally, all candidate matches in the two images are computed and a line segment candidate matching set is established:
where the i-th group of candidate matches consists of the corresponding line segments in the reference image and the image to be matched, and n is the number of candidate matching pairs;
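The nearest/next-nearest screening above resembles a standard distance-ratio test; a minimal sketch, in which the `ratio` parameter, the threshold default, and the descriptor arrays are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def candidate_matches(ref_desc, tgt_desc, ratio=0.8, t=1.0):
    """For each reference descriptor, find the two closest target
    descriptors by Euclidean distance (s1, s2); keep the pair as a
    candidate match only if s1 < t and s1 is sufficiently smaller
    than s2 (the ratio condition of the claim)."""
    pairs = []
    for i, d in enumerate(ref_desc):
        dists = np.linalg.norm(tgt_desc - d, axis=1)
        order = np.argsort(dists)
        s1, s2 = dists[order[0]], dists[order[1]]
        if s1 < t and s1 < ratio * s2:
            pairs.append((i, int(order[0])))   # (ref index, target index)
    return pairs
```

The returned index pairs form the candidate matching set that is subsequently screened by the adjacency matrix M.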
texture and geometric consistency scores between every two candidate matches are calculated to construct an adjacency matrix M; regarding M as an undirected graph, the graph is solved by spectral analysis and the candidate matches are screened to obtain the final line segment matching result;
for two pairs of candidate matches (a, b), the following constraints are used to construct their similarity:
(1) the cross ratio Ii, indicating the degree of overlap of the two line segments in the horizontal direction;
(2) the projection ratio Pi, indicating the distance between the two line segments in the vertical direction;
(3) the line segment angle Θ, i.e., the angle between the two line segments, calculated by the line segment angle formula;
(4) the texture similarity V; the texture descriptors of the line segments in the reference image and the image to be matched are respectively expressed as:
after the texture and geometric constraints are constructed for the candidate matching pair (a, b), the corresponding constraint values are obtained respectively;
after the texture and geometric constraints are obtained, the texture and geometric consistency score Mij between candidate matching pairs is calculated; if the geometric change or texture change between the two groups of candidate matches is larger than a preset value, Mij is set to 0; Mij is the element in row i, column j of the adjacency matrix M; once the adjacency matrix M is constructed, the line segment matching problem is converted into finding a matching cluster C such that the total consistency score is maximized;
all candidate matches are represented by a vector x; for the i-th group of candidate matches, x(i) = 1 if it belongs to the matching cluster C, and x(i) = 0 otherwise; the best matching solution x* is thus:
x* = argmax(x^T M x).
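Maximizing x^T M x over binary indicator vectors is NP-hard in general; a common relaxation, used here as an illustrative stand-in for the patent's spectral analysis step, takes the leading eigenvector of M and greedily discretizes it:

```python
import numpy as np

def spectral_match(M):
    """Approximate x* = argmax x^T M x over binary vectors: take the
    leading eigenvector of the symmetric adjacency matrix M, then
    greedily accept the candidate with the largest eigenvector entry
    and suppress candidates inconsistent with it (M_ij == 0)."""
    vals, vecs = np.linalg.eigh(M)
    v = np.abs(vecs[:, -1])      # leading eigenvector (eigh sorts ascending)
    x = np.zeros(len(v), dtype=int)
    while v.max() > 1e-9:
        i = int(np.argmax(v))
        x[i] = 1                 # accept this candidate match
        v[i] = 0.0
        v[M[i] == 0] = 0.0       # zero out conflicting candidates
    return x
```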
7. The method of claim 6, wherein: the texture and geometric consistency score Mij is calculated by:
where dI, dP, dΘ and the texture term are respectively:
where tI, tP, tΘ and the texture threshold limit the change in geometry and texture between the two groups of candidate matches; only when dI, dP, dΘ and the texture term are all less than 1 is Γ true; otherwise the texture and geometric consistency score Mij is set to zero.
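The gating logic of claim 7 can be sketched as follows; the way the surviving terms are combined into a score (here, the mean of the 1 − d/t terms) is an illustrative stand-in, since the patent's exact expression is not reproduced in this text:

```python
def consistency_score(dI, dP, dTheta, dV, tI, tP, tTheta, tV):
    """Texture/geometric consistency score between two candidate
    matches: the gate Γ holds only when every normalized change term
    d/t is below 1; otherwise the score M_ij is 0."""
    terms = [dI / tI, dP / tP, dTheta / tTheta, dV / tV]
    if any(term >= 1.0 for term in terms):
        return 0.0                 # Γ fails: geometry or texture changed too much
    # illustrative combination: larger score for smaller changes
    return sum(1.0 - term for term in terms) / len(terms)
```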
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910660833.3A CN110490913B (en) | 2019-07-22 | 2019-07-22 | Image matching method based on feature description operator of corner and single line segment grouping |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490913A true CN110490913A (en) | 2019-11-22 |
CN110490913B CN110490913B (en) | 2022-11-22 |
Family
ID=68547833
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910660833.3A Active CN110490913B (en) | 2019-07-22 | 2019-07-22 | Image matching method based on feature description operator of corner and single line segment grouping |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110490913B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111898646A (en) * | 2020-07-06 | 2020-11-06 | 武汉大学 | Cross-view image straight line feature matching method based on point line graph optimization solution |
CN112163622A (en) * | 2020-09-30 | 2021-01-01 | 山东建筑大学 | Overall situation and local fusion constrained line segment feature matching method for aviation wide-baseline stereopair |
CN113095384A (en) * | 2021-03-31 | 2021-07-09 | 安徽工业大学 | Remote sensing image matching method based on context characteristics of straight line segments |
CN113378507A (en) * | 2021-06-01 | 2021-09-10 | 中科晶源微电子技术(北京)有限公司 | Mask data cutting method and device, equipment and storage medium |
CN113538503A (en) * | 2021-08-21 | 2021-10-22 | 西北工业大学 | Solar panel defect detection method based on infrared image |
CN114012736A (en) * | 2021-12-08 | 2022-02-08 | 北京云迹科技有限公司 | Positioning object for assisting environment positioning and robot system |
CN114299312A (en) * | 2021-12-10 | 2022-04-08 | 中国科学技术大学 | Line segment matching method and matching system |
CN116309837A (en) * | 2023-03-16 | 2023-06-23 | 南京理工大学 | Method for identifying and positioning damaged element by combining characteristic points and contour points |
CN117114971A (en) * | 2023-08-01 | 2023-11-24 | 北京城建设计发展集团股份有限公司 | Pixel map-to-vector map conversion method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184801A (en) * | 2015-09-28 | 2015-12-23 | 武汉大学 | Optical and SAR image high-precision registration method based on multilevel strategy |
WO2017049994A1 (en) * | 2015-09-25 | 2017-03-30 | 深圳大学 | Hyperspectral image corner detection method and system |
CN106709870A (en) * | 2017-01-11 | 2017-05-24 | 辽宁工程技术大学 | Close-range image straight-line segment matching method |
Also Published As
Publication number | Publication date |
---|---|
CN110490913B (en) | 2022-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490913B (en) | Image matching method based on feature description operator of corner and single line segment grouping | |
CN110363122B (en) | Cross-domain target detection method based on multi-layer feature alignment | |
US20130089260A1 (en) | Systems, Methods, and Software Implementing Affine-Invariant Feature Detection Implementing Iterative Searching of an Affine Space | |
CN105809626A (en) | Self-adaption light compensation video image splicing method | |
CN109472770B (en) | Method for quickly matching image characteristic points in printed circuit board detection | |
CN106408597A (en) | Neighborhood entropy and consistency detection-based SAR (synthetic aperture radar) image registration method | |
CN111768447A (en) | Monocular camera object pose estimation method and system based on template matching | |
CN105809678B (en) | A kind of line segment feature global registration method between two views under short base line condition | |
CN108401565B (en) | Remote sensing image registration method based on improved KAZE features and Pseudo-RANSAC algorithms | |
CN111199245A (en) | Rape pest identification method | |
CN112883850A (en) | Multi-view aerospace remote sensing image matching method based on convolutional neural network | |
Li et al. | Image Matching Algorithm based on Feature-point and DAISY Descriptor. | |
CN108182705A (en) | A kind of three-dimensional coordinate localization method based on machine vision | |
CN112329662B (en) | Multi-view saliency estimation method based on unsupervised learning | |
CN114358166B (en) | Multi-target positioning method based on self-adaptive k-means clustering | |
CN111311657B (en) | Infrared image homologous registration method based on improved corner principal direction distribution | |
CN111144469B (en) | End-to-end multi-sequence text recognition method based on multi-dimensional associated time sequence classification neural network | |
CN112001954A (en) | Polar curve constraint-based underwater PCA-SIFT image matching method | |
CN104156952B (en) | A kind of image matching method for resisting deformation | |
CN116630662A (en) | Feature point mismatching eliminating method applied to visual SLAM | |
CN116128919A (en) | Multi-temporal image abnormal target detection method and system based on polar constraint | |
CN114964206A (en) | Monocular vision odometer target pose detection method | |
CN113705731A (en) | End-to-end image template matching method based on twin network | |
Tan et al. | An improved ORB-GMS image feature extraction and matching algorithm | |
CN105139428A (en) | Quaternion based speeded up robust features (SURF) description method and system for color image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||