
CN106991431B - Post-verification method for local feature point matching pairs - Google Patents

Post-verification method for local feature point matching pairs

Info

Publication number
CN106991431B
Authority
CN
China
Prior art keywords
diff
matching
image
matching pairs
value
Prior art date
Legal status
Active
Application number
CN201710123132.7A
Other languages
Chinese (zh)
Other versions
CN106991431A (en)
Inventor
Yao Jinliang (姚金良)
Current Assignee
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201710123132.7A
Publication of CN106991431A
Application granted
Publication of CN106991431B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a post-verification method for local feature point matching pairs. First, local feature points are extracted from the images and candidate local feature point matching pairs are obtained through visual vocabularies. Next, attribute change values are extracted for the candidate matching pairs: a main direction change value and an orientation change value. Whether two matching pairs are consistent is then verified from their attribute change values and the corresponding thresholds. Finally, a voting method confirms, according to the number of positive votes, whether a candidate local feature point matching pair is a correct matching pair. The post-verification method adapts to transformations such as image cropping, rotation and scaling, can be used in visual-vocabulary-based image retrieval, classification and similar tasks, and improves retrieval and recognition accuracy. It has a very good verification effect on feature point matching pairs in non-perspective-transformed images and can greatly improve the precision and recall of copy retrieval in visual-vocabulary-based image copy retrieval applications.

Description

Post-verification method for local feature point matching pairs
Technical Field
The invention belongs to the field of computer image processing and image retrieval, and relates to a post-verification method for matching pairs of local feature points in two images.
Background
With the extensive research and application of local feature points in images, image analysis, recognition and retrieval based on local feature points have become an important paradigm in image processing. Borrowing the bag-of-words model from document processing, an image can be represented as a set of local features, eliminating some of the redundant information in the image. In recent years, researchers have quantized the descriptors of local feature points into visual vocabularies and proposed the bag-of-visual-words model, which has become an important class of image recognition and retrieval methods. The combination of the visual vocabulary bag-of-words model with an inverted index is currently the most effective content-based image retrieval approach and is very robust: in image retrieval applications it can cope with a wide range of image editing operations and transformations, and the inverted index structure improves retrieval efficiency and enables real-time queries over large-scale image libraries. However, the visual vocabulary obtained by quantizing the feature vectors of local feature points has no clear meaning, unlike words in natural language; its distinguishing capability is weak and it cannot fully represent the content of the local image region. To guarantee the discriminative power of the visual vocabulary, a larger dictionary is preferable; but more visual words weaken the robustness to noise and increase the computation needed to quantize feature vectors into visual words. Conversely, reducing the number of visual words in the dictionary to suppress noise also reduces their distinguishing capability, leading to a higher rate of visual word mismatches. These mismatches make the subsequent image similarity calculation more difficult.
Researchers have proposed many methods to address the mismatching problem of visual vocabularies. These methods fall largely into two categories: one adds an additional descriptor to the visual vocabulary to improve its distinguishing capability; the other performs spatial consistency verification on the candidate local feature point matching pairs in the two images, thereby filtering out mismatched pairs. Existing additional-descriptor methods include the method proposed by Liang Zheng for embedding color information into local feature points (Liang Zheng, Shengjin Wang, Qi Tian, Coupled Binary Embedding for Large-Scale Image Retrieval, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 8, 2014), the method proposed by Yao that embeds the context information of the visual vocabulary as an additional descriptor, and the Hamming Embedding method proposed by H. Jégou. These methods require additional descriptors to be added to the index repository, increasing the storage consumption of the system; moreover, the robustness of the additional descriptors is itself a concern.
The spatial verification method based on local feature point matching pairs is a post-verification method: after candidate images or targets have been found during retrieval or classification, the spatial consistency of the matched local feature points between the query image and a candidate image is computed. Since image editing and slight perspective transformation do not change the relative spatial relationships of the local feature points on a given target in an image, spatial consistency between matching pairs of local feature points is widely used in post-verification methods to filter mismatched pairs. The earliest approach applies RANSAC to the set of local feature point matching pairs to obtain the transformation parameters of the image and treats matching pairs that do not fit the transformation model as mismatches. Because RANSAC is inefficient, researchers proposed weak geometric consistency methods, which determine the transformation parameters of the image from the differences in scale and main direction of the local feature points and filter mismatches with these parameters. Furthermore, Zhou proposed a Spatial Coding method that identifies correct matches through the consistency of the spatial positions (the relative ordering of the x and y coordinates) of the matched points (Zhou WG, Li HQ, Lu Y, et al.). Lingyang Chu built a graph model of main direction and location consistency; the graph model considers a strongly connected matching pair to be a correct one (Lingyang Chu, Shuqiang Jiang, et al., Robust Spatial Consistency Graph Model for Partial Duplicate Image Retrieval, IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 15, NO. 8, PP. 1982-1986, DECEMBER 2013). Wu groups visual vocabularies into bundles using maximally stable extremal regions, indexes the image by bundle, and measures similarity through the matching of visual vocabularies within a bundle.
To address the low matching accuracy caused by the weak distinguishing capability of local features once quantized into visual words, the method proposed by the invention uses the consistency of attribute changes between local feature point matching pairs, together with a voting method, to confirm correct matching pairs. The method extends existing post-verification methods based on the spatial consistency of local feature points; it is fast, handles a wide range of image editing operations, and can be applied to image recognition, retrieval and related applications.
Disclosure of Invention
The invention aims to provide a post-verification method for local feature point matching pairs in current visual-vocabulary-based image content retrieval applications, which confirms correct local feature point matching pairs and filters out wrong ones, thereby improving retrieval accuracy.
The method comprises the following specific steps:
Step (1) obtaining local feature point matching pairs in two images according to the visual vocabulary corresponding to each local feature point;
the visual vocabulary is a vocabulary ID obtained by quantizing the feature vector of a local feature point in the image;
the local feature points are obtained by a local feature point detection method (such as SIFT) and have the following associated attributes in the image: spatial position, scale, main direction and feature vector;
the local feature point matching pair is a pair of local feature points with consistent visual vocabularies in the two images. Suppose two images, Img and Img', in which the local feature points are respectively represented by ViAnd Vm' to; if ViAnd Vm' the visual words obtained by quantization are the same, then (V)i,Vm') is a matching pair of local feature points.
Step (2) calculating the attribute change values of the local feature point matching pairs.
The attribute change value is the difference between the attributes of the two matched local feature points; it reflects how a local feature point in the original image changes when it is transformed into the corresponding feature point in the result image. The method of the invention uses two attribute change values: a main direction change value and an orientation change value. Assume there are two matching pairs between images Img and Img′, M1: (V_i, V_m′) and M2: (V_j, V_n′). The main direction of V_i is denoted θ_i and its location attribute is (Px_i, Py_i); the main direction of V_m′ is denoted θ_m′ and its location attribute is (Px_m′, Py_m′). The main direction change value (Diff_Ori) is defined as the difference between the main directions of the local feature points in a matching pair, as shown in equation (1). Obtaining the orientation change value requires two matching pairs: the orientation between the two local feature points of the two matching pairs is first computed within the same image. The orientation of local features V_i and V_j in image Img is Direction(i, j), as shown in equation (2), where arctan2(Py_j - Py_i, Px_j - Px_i) is the angle, measured counter-clockwise, between the vector from V_i to V_j and the positive direction of the x axis; similarly, the orientation of V_m′ and V_n′ is obtained by equation (3).
Diff_Ori(i, m) = θ_m′ - θ_i   (1)
Direction(i, j) = arctan2(Py_j - Py_i, Px_j - Px_i)   (2)
Direction′(m, n) = arctan2(Py_n′ - Py_m′, Px_n′ - Px_m′)   (3)
Thus, the change value Diff_Dir(i, m) of the orientation attribute is:
Diff_Dir(i, m) = Direction(i, j) - Direction′(m, n)   (4)
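As a concrete reading of equations (1)-(4), here is a minimal Python sketch, assuming the same (word_id, Px, Py, theta) feature tuples as above and Python's math.atan2 for the arctan2 function; it illustrates the definitions and is not code taken from the patent.

import math
from typing import Tuple

Feature = Tuple[int, float, float, float]  # (word_id, Px, Py, theta)

def diff_ori(v_i: Feature, v_m: Feature) -> float:
    """Main direction change value, equation (1): theta_m' - theta_i."""
    return v_m[3] - v_i[3]

def direction(p: Feature, q: Feature) -> float:
    """Orientation Direction(i, j), equations (2)/(3): arctan2(Py_j - Py_i, Px_j - Px_i)."""
    return math.atan2(q[2] - p[2], q[1] - p[1])

def diff_dir(v_i: Feature, v_j: Feature, v_m: Feature, v_n: Feature) -> float:
    """Orientation change value, equation (4): Direction(i, j) - Direction'(m, n)."""
    return direction(v_i, v_j) - direction(v_m, v_n)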
Step (3) confirming whether local feature point matching pairs are consistent according to the consistency of the attribute change values between the matching pairs.
The consistency of the attribute change values is judged through threshold values: if all attribute change values meet the threshold requirements, the two local feature point matching pairs are considered consistent. The main direction change consistency of the two matching pairs M1 and M2 is obtained by equations (5) and (6), where TH_ORI is the main direction change consistency threshold.
Diff_Ori_M(M1, M2) = |Diff_Ori(i, m) - Diff_Ori(j, n)|   (5)
Ori_Cons(M1, M2) = 1 if Diff_Ori_M(M1, M2) < TH_ORI, and 0 otherwise   (6)
Wherein Diff_Ori_M(M1, M2) denotes the difference between the main direction change values of the two matching pairs M1 and M2; Diff_Ori(i, m) denotes the main direction change value of matching pair M1; Diff_Ori(j, n) denotes the main direction change value of matching pair M2; and Ori_Cons(M1, M2) denotes the consistency result value of the two matching pairs M1 and M2.
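A minimal sketch of this threshold test follows, taking the two main direction change values Diff_Ori(i, m) and Diff_Ori(j, n) as inputs (computed, for example, with diff_ori from the sketch above). Wrapping the difference into [-pi, pi] is an added assumption; the text does not spell out how wrap-around at 2π is handled.

import math

TH_ORI = 0.087  # main direction change consistency threshold (about 5 degrees)

def wrap_angle(a: float) -> float:
    """Wrap an angle difference into [-pi, pi] (an assumption, see lead-in)."""
    return (a + math.pi) % (2.0 * math.pi) - math.pi

def ori_cons(diff_ori_m1: float, diff_ori_m2: float) -> int:
    """Equations (5)-(6): 1 if the two main direction change values agree within TH_ORI."""
    diff_ori_m = abs(wrap_angle(diff_ori_m1 - diff_ori_m2))  # equation (5)
    return 1 if diff_ori_m < TH_ORI else 0                   # equation (6)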
The method eliminates the influence of image rotation on the orientation change value by subtracting the main direction change difference. Specifically, the orientation change consistency of the two matching pairs M1 and M2 is judged by equations (7), (8) and (9); equation (8) is used to eliminate the effect of image rotation, and TH_Dir is the orientation change consistency threshold.
Diff_Dir_M(M1, M2) = |Diff_Dir(i, m) - Diff_Dir(j, n)|   (7)
Diff_Dir(M1, M2) = |Diff_Dir_M(M1, M2) - Diff_Ori_M(M1, M2)|   (8)
Dir_Cons(M1, M2) = 1 if Diff_Dir(M1, M2) < TH_Dir, and 0 otherwise   (9)
Wherein Diff_Dir_M(M1, M2) denotes the difference between the orientation change values of the two matching pairs M1 and M2; Diff_Dir(i, m) denotes the orientation change value of matching pair M1; Diff_Dir(j, n) denotes the orientation change value of matching pair M2; Diff_Dir(M1, M2) denotes the difference between the main direction change difference and the orientation change difference of the two matching pairs M1 and M2; and Dir_Cons(M1, M2) denotes the orientation consistency result value of the two matching pairs M1 and M2.
Step (4) determining whether a matching pair is a correct matching pair by a voting method. That is, for a given matching pair M_i, the consistency of its attribute change values with those of the other matching pairs is verified, and each consistent pair casts a positive vote. If the ratio of the number of positive votes to the number of candidate matching pairs between the two images is greater than a given threshold (Th_Votes), the matching pair M_i is considered a correct match.
Compared with the prior art, the invention has the following beneficial effects:
the method is different from the method for separately and independently processing the consistency characteristics in the weak geometric constraint method, and provides the method for verifying the consistency between the matching pairs by combining the main direction change value and the azimuth change value of the local characteristic point pair, thereby improving the accuracy of consistency verification between the matching pairs.
The main direction change value and the orientation change value used by the method give very high accuracy when judging the consistency of matching pairs. Assuming the orientations between feature points are randomly distributed, the probability that an inconsistent matching pair is judged consistent through the orientation change value is TH_Dir/π; when TH_Dir is 0.087 (5 degrees), this probability is 2.8%. Similarly, when TH_ORI is 0.087 (5 degrees), the probability of misjudgment by the main direction change value is also 2.8%. Combining the two features, the probability of misjudging the consistency between matching pairs is about 0.00077 (0.028 × 0.028). The matching pair consistency verification proposed by the method therefore has very high accuracy.
The method of the invention treats the feature point matching problem as the matching of partial image contents in two images: for the same image content, the changes in orientation and main direction of the feature point matching pairs are consistent. In addition, since the consistency verification of feature point matching pairs has very high accuracy, the voting method can effectively handle partial image content matching. The method therefore has very good robustness against complex editing operations such as image cropping and object insertion.
The method eliminates the influence of image rotation on the orientation change difference by subtracting the main direction change difference, and the main direction of local feature points is itself rotation-invariant; the method is therefore robust to rotation.
The method has a good verification effect on feature point matching pairs in non-perspective-transformed images and can greatly improve the precision and recall of copy retrieval in visual-vocabulary-based image copy retrieval applications.
Drawings
FIG. 1 shows a flow chart of the method of the present invention;
FIG. 2 local feature point matching results based on visual vocabulary;
FIG. 3 is a schematic diagram of calculating a change value of an orientation attribute;
FIG. 4 shows the number of votes for a matching pair based on a voting method;
FIG. 5 illustrates the method of the present invention confirming the effect of a correct matching pair;
FIG. 6 is a schematic diagram showing the comparison result of the effectiveness of the post-verification method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings, and it should be noted that the described embodiments are only for the understanding of the present invention, and do not limit the present invention in any way.
The method comprises the following specific steps:
Step (1) obtaining local feature point matching pairs in the two images according to the visual vocabulary corresponding to each local feature point. There are many methods for extracting local feature points; the method adopts the widely used scale-invariant feature transform (SIFT) descriptor, which is robust to rotation, scale change and other transformations. After SIFT extraction, an image is represented as a set of local feature points {S_i}, where S_i is the descriptor of a local feature point with the following associated attributes in the image: feature vector (F_i), main direction (θ_i), scale (σ_i) and spatial position (Px_i, Py_i). To match local feature points, the feature vectors are quantized into visual vocabulary IDs, and matching pairs are then obtained from the consistency of these IDs. To quantize the feature vector of a local feature point into a visual vocabulary ID, this embodiment adopts a product quantization method and constructs the visual vocabulary dictionary by K-means clustering; this quantization method is very efficient. The embodiment uses 32-dimensional SIFT local feature descriptors, and the product quantization method divides each 32-dimensional feature vector into 4 groups of 8 dimensions. Each group of 8-dimensional feature vectors is quantized into 32 root words according to a sample library; combining the root words yields a dictionary of 2^20 visual words. Following these steps, suppose there are two images Img and Img′, in which local feature points are denoted V_i and V_m′ respectively; if the visual vocabulary IDs obtained by quantizing the feature vectors of V_i and V_m′ are the same, then (V_i, V_m′) is a local feature point matching pair. Fig. 2 shows a sample matching result of this embodiment. The line segments crossing the two images represent local feature point matching pairs, their two endpoints are the spatial positions of the local feature points in the images, and the arrowed lines represent the scale and main direction of the feature points. In Fig. 2, white line segments are correct local feature point matching pairs, whose corresponding image contents are consistent; black line segments are wrong matching pairs. It can also be seen from the figure that the local contents around the two feature points of a wrong matching pair have a certain similarity (both are edge points with slight curvature), but the contents are inconsistent at the level of the whole image. The goal of the method is to identify and filter out these wrong matching pairs.
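To make the product quantization step concrete, here is an illustrative Python/NumPy sketch that quantizes a 32-dimensional descriptor with 4 sub-codebooks of 32 centroids each and packs the sub-indices into a 20-bit word ID. The codebooks are assumed to have been learned offline by K-means on a sample library; the array shapes and the function name are assumptions made for this sketch only.

import numpy as np

def quantize_to_word_id(descriptor: np.ndarray, codebooks: np.ndarray) -> int:
    """Product-quantize a 32-dim descriptor into a visual word ID.

    descriptor: shape (32,); codebooks: shape (4, 32, 8), i.e. 4 sub-codebooks,
    each holding 32 centroids of 8 dimensions (learned offline by K-means).
    The word ID packs the 4 sub-indices (5 bits each) into a 20-bit integer,
    so the dictionary holds 2**20 visual words.
    """
    word_id = 0
    for g in range(4):
        sub = descriptor[g * 8:(g + 1) * 8]                 # 8-dim sub-vector
        dists = np.linalg.norm(codebooks[g] - sub, axis=1)  # distance to the 32 centroids
        word_id = (word_id << 5) | int(np.argmin(dists))    # append the 5-bit sub-index
    return word_id

# Example with random data (real codebooks would come from K-means on SIFT samples):
rng = np.random.default_rng(0)
codebooks = rng.random((4, 32, 8))
desc = rng.random(32)
print(quantize_to_word_id(desc, codebooks))  # an integer in [0, 2**20)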
Step (2) calculating the attribute change values of the local feature point matching pairs. The attribute change value is the difference between the attributes of the two matched local feature points; it reflects how a local feature point in the original image changes when it is transformed into the corresponding feature point in the result image. The method of the invention uses two attribute change values: a main direction change value and an orientation change value. Assume there are two matching pairs between images Img and Img′, M1: (V_i, V_m′) and M2: (V_j, V_n′). The main direction and location attributes of V_i and V_m′ are denoted (θ_i, Px_i, Py_i) and (θ_m′, Px_m′, Py_m′) respectively. The main direction change value (Diff_Ori) is defined as the difference between the main directions of the local feature points in a matching pair, as shown in equation (10). Obtaining the orientation change value requires two matching pairs: the orientation between the two local feature points of the two matching pairs is first computed within the same image. The orientation of local features V_i and V_j in image Img is Direction(i, j), as shown in equation (11), where arctan2(Py_j - Py_i, Px_j - Px_i) is the angle, measured counter-clockwise, between the vector from V_i to V_j and the positive direction of the x axis; similarly, the orientation of V_m′ and V_n′ is obtained by equation (12).
Diff_Ori(i, m) = θ_m′ - θ_i   (10)
Direction(i, j) = arctan2(Py_j - Py_i, Px_j - Px_i)   (11)
Direction′(m, n) = arctan2(Py_n′ - Py_m′, Px_n′ - Px_m′)   (12)
Thus, the change value Diff_Dir(i, m) of the orientation attribute is:
Diff_Dir(i, m) = Direction(i, j) - Direction′(m, n)   (13)
the orientation attribute variation value can be visually represented by fig. 3; the orientation attribute in the image is represented by the angle of the line connecting the two feature points with the x-axis. If the image is not subject to a rotational attack, the azimuthal properties of the two coincident matching pairs should be the same. The orientation attribute variation value expresses the stability of the orientations of the two feature points in the image. Also, the primary direction property change value is also robust to most operations other than rotation.
Step (3) confirming whether local feature point matching pairs are consistent according to the consistency of the attribute change values between the matching pairs. The consistency of the attribute change values is judged through threshold values: if all attribute change values meet the threshold requirements, the two local feature point matching pairs are considered consistent. The main direction change consistency of two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) is obtained by equations (14) and (15), where TH_ORI is the main direction change consistency threshold. The method performs confirmation by comparing two matching pairs: if the image is rotated, the corresponding matched feature points rotate accordingly, so the main direction change values of the two matching pairs are not affected by the image rotation. In this embodiment, related experiments on the test library led to setting TH_ORI to 0.087 (5 degrees).
Diff_Ori_M(M1, M2) = |Diff_Ori(i, m) - Diff_Ori(j, n)|   (14)
Ori_Cons(M1, M2) = 1 if Diff_Ori_M(M1, M2) < TH_ORI, and 0 otherwise   (15)
Wherein Diff_Ori_M(M1, M2) denotes the difference between the main direction change values of the two matching pairs M1 and M2; Diff_Ori(i, m) denotes the main direction change value of matching pair M1; Diff_Ori(j, n) denotes the main direction change value of matching pair M2; and Ori_Cons(M1, M2) denotes the consistency result value of the two matching pairs M1 and M2.
The method eliminates the influence of image rotation on the orientation change value by subtracting the main direction change difference. Specifically, the orientation change consistency of the two matching pairs M1 and M2 is judged by equations (16), (17) and (18); equation (17) is used to eliminate the effect of image rotation, and TH_Dir is the orientation change consistency threshold.
Diff_Dir_M(M1, M2) = |Diff_Dir(i, m) - Diff_Dir(j, n)|   (16)
Diff_Dir(M1, M2) = |Diff_Dir_M(M1, M2) - Diff_Ori_M(M1, M2)|   (17)
Dir_Cons(M1, M2) = 1 if Diff_Dir(M1, M2) < TH_Dir, and 0 otherwise   (18)
Wherein Diff_Dir_M(M1, M2) denotes the difference between the orientation change values of the two matching pairs M1 and M2; Diff_Dir(i, m) denotes the orientation change value of matching pair M1; Diff_Dir(j, n) denotes the orientation change value of matching pair M2; Diff_Dir(M1, M2) denotes the difference between the main direction change difference and the orientation change difference of the two matching pairs M1 and M2; and Dir_Cons(M1, M2) denotes the orientation consistency result value of the two matching pairs M1 and M2.
In this embodiment, related experiments on the test library led to setting TH_Dir to 0.087 (5 degrees) as well.
Computing the pairwise consistency between every two candidate matching pairs in the two images requires a large amount of computation. To improve verification efficiency, this embodiment adopts the following two strategies. The first strategy is to check main direction consistency first: if the two matching pairs are already inconsistent there, the subsequent orientation consistency check is skipped, reducing the amount of computation. The second strategy is that once a matching pair has been verified by other matching pairs, that is, it has already received enough positive votes, it is directly confirmed as a correct matching pair without further verification, improving efficiency. In addition, candidate matching pairs can be filtered using a high-frequency vocabulary list, further improving verification efficiency.
Step (4) determining whether a matching pair is a correct matching pair by a voting method. That is, for a given matching pair M_i, the consistency of its attribute change values with those of the other matching pairs is verified, and each consistent pair casts a positive vote. If the number of positive votes divided by the number of candidate matching pairs between the two images is greater than a given threshold (Th_Votes), the matching pair M_i is considered a correct match. In this embodiment, Th_Votes was set to 0.2 by experiment. The flow of the algorithm for calculating the number of correct matching pairs between two images according to the above steps is as follows:
(The algorithm listing appears only as a figure in the original publication.)
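Because the listing survives only as an image, the following Python sketch reconstructs the voting flow from steps (3) and (4) together with the early-exit strategy described above; the pairwise test is passed in as a callable implementing equations (14)-(18), and the names and data layout are assumptions for this sketch rather than the patent's own listing.

from typing import Callable, List, Sequence, Tuple

TH_VOTES = 0.2  # voting threshold used in this embodiment

def correct_matches(candidates: Sequence[Tuple],
                    consistent: Callable[[Tuple, Tuple], bool]) -> List[Tuple]:
    """Return the candidate matching pairs confirmed as correct by voting.

    For each matching pair M_i, every other pair that is attribute-change
    consistent with it casts a positive vote; M_i is accepted once its votes
    exceed TH_VOTES * len(candidates). The inner loop stops early as soon as
    enough votes have been collected (the second efficiency strategy).
    """
    n = len(candidates)
    needed = TH_VOTES * n
    accepted: List[Tuple] = []
    for i, m_i in enumerate(candidates):
        votes = 0
        for j, m_j in enumerate(candidates):
            if i == j:
                continue
            if consistent(m_i, m_j):   # attribute change consistency, equations (14)-(18)
                votes += 1
                if votes > needed:     # early exit once the threshold is passed
                    break
        if votes > needed:
            accepted.append(m_i)
    return accepted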
Fig. 4 shows a sample verification result for candidate matching pairs, where the number on each matching pair is the number of positive votes it obtained; the mismatched pairs obtain very few positive votes, mostly 0. The method can therefore effectively identify mismatched pairs.
To demonstrate the effectiveness of the method, relevant tests were performed in this embodiment, with copy image retrieval based on local feature point matching as the application scenario. The copy image detection standard test library Holidays and a Web image library were adopted as test libraries. The Holidays test library includes 157 original images, together with JPEG versions at various compression rates and various cropped copy image sets generated from them. The Web test library is a group of 32 copy images collected from the Web. The tests use the mAP value from the image retrieval field as the evaluation index. The test results are shown in Fig. 6. The Spatial Coding method is currently the best visual vocabulary post-verification method, and the Baseline method retrieves copy images directly from the number of candidate matching pairs without any post-verification. Fig. 6 confirms that the method has certain advantages on both test libraries.
Fig. 5 shows the application effect of the method in copy image retrieval, where the dark (black) lines indicate correct matching pairs and the white lines indicate mismatched pairs identified by the method.

Claims (3)

1. A post-verification method for matching pairs of local feature points is characterized by comprising the following steps:
step (1) obtaining local feature point matching pairs in two images according to the visual vocabulary corresponding to each local feature point;
the visual vocabulary is a vocabulary ID obtained by quantizing the feature vector of a local feature point in the image;
the local feature points are obtained by a local feature point detection method and have the following associated attributes in the image: spatial position, scale, main direction and feature vector;
the local feature point matching pair is a pair of local feature points with consistent visual vocabularies in the two images; given two images Img and Img′, in which local feature points are denoted V_i and V_m′ respectively, if V_i and V_m′ are quantized to the same visual word, then (V_i, V_m′) is a local feature point matching pair;
step (2) calculating the attribute change values of the local feature point matching pairs;
the attribute change value is the difference between the attributes of the two matched local feature points and reflects how a local feature point in the original image changes when it is transformed into the corresponding feature point in the result image; there are two attribute change values: a main direction change value and an orientation change value; assume there are two matching pairs between images Img and Img′, M1: (V_i, V_m′) and M2: (V_j, V_n′); the main direction of V_i is denoted θ_i and its location attribute is (Px_i, Py_i); the main direction of V_m′ is denoted θ_m′ and its location attribute is (Px_m′, Py_m′); the main direction change value (Diff_Ori) is defined as the difference between the main directions of the local feature points in a matching pair, as shown in equation (1); obtaining the orientation change value requires two matching pairs: the orientation between the two local feature points of the two matching pairs is first computed within the same image, where the orientation of local features V_i and V_j in image Img is Direction(i, j), as shown in equation (2), in which arctan2(Py_j - Py_i, Px_j - Px_i) is the angle, measured counter-clockwise, between the vector from V_i to V_j and the positive direction of the x axis; similarly, the orientation of V_m′ and V_n′ is obtained by equation (3);
Diff_Ori(i, m) = θ_m′ - θ_i   (1)
Direction(i, j) = arctan2(Py_j - Py_i, Px_j - Px_i)   (2)
Direction′(m, n) = arctan2(Py_n′ - Py_m′, Px_n′ - Px_m′)   (3)
thus, the change value Diff_Dir(i, m) of the orientation attribute is:
Diff_Dir(i, m) = Direction(i, j) - Direction′(m, n)   (4)
step (3) confirming whether the local feature point matching pairs are consistent according to the consistency of the attribute change values between the local feature point matching pairs;
step (4) determining whether a matching pair is a correct matching pair by adopting a voting method; that is, for a given matching pair M_i, the consistency of its attribute change values with those of the other matching pairs is verified, and each consistent pair casts a positive vote; if the ratio of the number of positive votes to the number of candidate matching pairs between the two images is greater than a given threshold Th_Votes, the matching pair M_i is considered a correct match.
2. The method for post-verification of matching pairs of local feature points according to claim 1, wherein the step 3 is implemented as follows:
the consistency of the attribute change values is judged through threshold values, and when all the attribute change values meet the threshold requirements, the two local feature point matching pairs are considered consistent;
the main direction change consistency of two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) is obtained by equations (5) and (6), where TH_ORI is the main direction change consistency threshold;
Diff_Ori_M(M1, M2) = |Diff_Ori(i, m) - Diff_Ori(j, n)|   (5)
Ori_Cons(M1, M2) = 1 if Diff_Ori_M(M1, M2) < TH_ORI, and 0 otherwise   (6)
wherein Diff_Ori_M(M1, M2) denotes the difference between the main direction change values of the two matching pairs M1 and M2; Diff_Ori(i, m) denotes the main direction change value of matching pair M1; Diff_Ori(j, n) denotes the main direction change value of matching pair M2; and Ori_Cons(M1, M2) denotes the consistency result value of the two matching pairs M1 and M2.
3. The post-verification method for matching pairs of local feature points according to claim 2, characterized in that:
the influence of image rotation on the orientation change value is eliminated by subtracting the main direction change difference; specifically:
the orientation change consistency of the two matching pairs M1 and M2 is judged by equations (7), (8) and (9); equation (8) is used to eliminate the effect of image rotation, where TH_Dir is the orientation change consistency threshold;
Diff_Dir_M(M1, M2) = |Diff_Dir(i, m) - Diff_Dir(j, n)|   (7)
Diff_Dir(M1, M2) = |Diff_Dir_M(M1, M2) - Diff_Ori_M(M1, M2)|   (8)
Dir_Cons(M1, M2) = 1 if Diff_Dir(M1, M2) < TH_Dir, and 0 otherwise   (9)
wherein Diff_Dir_M(M1, M2) denotes the difference between the orientation change values of the two matching pairs M1 and M2; Diff_Dir(i, m) denotes the orientation change value of matching pair M1; Diff_Dir(j, n) denotes the orientation change value of matching pair M2; Diff_Dir(M1, M2) denotes the difference between the main direction change difference and the orientation change difference of the two matching pairs M1 and M2; and Dir_Cons(M1, M2) denotes the orientation consistency result value of the two matching pairs M1 and M2.
CN201710123132.7A 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs Active CN106991431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710123132.7A CN106991431B (en) 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710123132.7A CN106991431B (en) 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs

Publications (2)

Publication Number Publication Date
CN106991431A CN106991431A (en) 2017-07-28
CN106991431B true CN106991431B (en) 2020-02-07

Family

ID=59412669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710123132.7A Active CN106991431B (en) 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs

Country Status (1)

Country Link
CN (1) CN106991431B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116910296B (en) * 2023-09-08 2023-12-08 上海任意门科技有限公司 Method, system, electronic device and medium for identifying transport content

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699905A (en) * 2013-12-27 2014-04-02 深圳市捷顺科技实业股份有限公司 Method and device for positioning license plate
CN104484671A (en) * 2014-11-06 2015-04-01 吉林大学 Target retrieval system applied to moving platform
CN104615642A (en) * 2014-12-17 2015-05-13 吉林大学 Space verification wrong matching detection method based on local neighborhood constrains

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169993A1 (en) * 2012-10-01 2015-06-18 Google Inc. Geometry-preserving visual phrases for image classification using local-descriptor-level weights

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699905A (en) * 2013-12-27 2014-04-02 深圳市捷顺科技实业股份有限公司 Method and device for positioning license plate
CN104484671A (en) * 2014-11-06 2015-04-01 吉林大学 Target retrieval system applied to moving platform
CN104615642A (en) * 2014-12-17 2015-05-13 吉林大学 Space verification wrong matching detection method based on local neighborhood constrains

Also Published As

Publication number Publication date
CN106991431A (en) 2017-07-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170728

Assignee: Hangzhou Zihong Technology Co., Ltd

Assignor: Hangzhou University of Electronic Science and Technology

Contract record no.: X2021330000654

Denomination of invention: A post verification method for local feature point matching

Granted publication date: 20200207

License type: Common License

Record date: 20211104
