CN116543144A - Automatic extraction method, device, equipment and medium for positioning kernel area of image matching - Google Patents

Automatic extraction method, device, equipment and medium for positioning kernel area of image matching

Info

Publication number
CN116543144A
Authority
CN
China
Prior art keywords
positioning
region
area
image
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310572055.9A
Other languages
Chinese (zh)
Inventor
王承峰
戴昌志
钱勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchuan Technology Suzhou Co ltd
Original Assignee
Changchuan Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchuan Technology Suzhou Co ltd filed Critical Changchuan Technology Suzhou Co ltd
Priority to CN202310572055.9A priority Critical patent/CN116543144A/en
Publication of CN116543144A publication Critical patent/CN116543144A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/20 Image preprocessing
              • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
              • G06V 10/24 Aligning, centring, orientation detection or correction of the image
                • G06V 10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
                • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
                • G06V 10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
            • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
              • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
              • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
                • G06V 10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, a device, computer equipment and a storage medium for automatically extracting positioning kernel regions for image matching. The method comprises the following steps: acquiring a set reference image and set image matching data; determining a candidate region of the reference image according to the image matching data; and extracting a positioning kernel region from the candidate region, which comprises: performing feature extraction on the candidate region to generate a set of feature points; determining a plurality of minimum axis-aligned circumscribed rectangles from the set of feature points to obtain a plurality of rectangular regions; performing uniqueness detection on the rectangular regions, and selecting the rectangular regions that pass the uniqueness detection; and performing positioning-accuracy detection on the rectangular regions that pass the uniqueness detection, and selecting a rectangular region that passes the positioning-accuracy detection as the positioning kernel region extracted from the candidate region. With the method and the device, positioning kernel regions can be extracted from the reference image automatically and with a good extraction effect.

Description

Automatic extraction method, device, equipment and medium for positioning kernel area of image matching
Technical Field
The application relates to the technical field of machine vision, and in particular to a method, a device, computer equipment and a storage medium for automatically extracting positioning kernel regions for image matching.
Background
Image matching is a machine-vision algorithm that automatically searches an image for a specified pattern, or for a specified part of an object. A positioning kernel region is first framed on a reference image to create a model, and the position and angle of that positioning kernel region are then searched for in a target image. Image matching methods are generally classified into gray-level matching and feature matching. The mainstream gray-level matching method is currently correlation-based matching, which evaluates the similarity between the template image (the image inside the positioning kernel region) and the target image with a normalized cross-correlation value; it is a simple and effective matching algorithm, but it is not suitable for matching under rotation. Feature matching extracts features (points, lines and the like) from the template image and the target image separately, describes those features with parameters, and then matches the features using the described parameters. Matching of point features (edge points, corner points, interest points and the like) is the most widely used; a feature-point model generally has scale and rotation invariance, so it is suitable for matching under scaling and rotation. Both matching methods are used in the same sequence of steps: select a reference image, extract a positioning kernel region on the reference image, create a positioning kernel, find the template in the target image, and use the search-result data.
At present, the step of extracting the positioning kernel region on the reference image requires an operator to select a suitable region according to the features in the image, combined with the operator's own experience. This cannot satisfy fully automatic image matching scenarios, and the quality of the positioning kernel region depends on the operator's experience: the choice is highly subjective, a less experienced operator may extract a poor positioning kernel region, the extraction effect is poor, and accurate positioning becomes impossible, which results in low image matching accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide an automatic extraction method, device, computer equipment and storage medium that can automatically extract positioning kernel regions for image matching with a good extraction effect, addressing the technical problem of low image matching accuracy.
An automatic extraction method of positioning kernel regions for image matching comprises the following steps:
acquiring a set reference image and set image matching data;
determining a candidate region of the reference image according to the image matching data;
extracting a positioning kernel region from the candidate region, which comprises:
performing feature extraction on the candidate region to generate a set of feature points;
determining a plurality of minimum axis-aligned circumscribed rectangles according to the set of feature points to obtain a plurality of rectangular regions; performing uniqueness detection on the rectangular regions, and selecting the rectangular regions that pass the uniqueness detection;
and performing positioning-accuracy detection on the rectangular regions that pass the uniqueness detection, and selecting a rectangular region that passes the positioning-accuracy detection as the positioning kernel region extracted from the candidate region.
In one embodiment, the image matching data includes a number of positioning kernel regions, and the determining a candidate region of the reference image according to the image matching data comprises:
gridding the reference image to obtain a plurality of grids;
selecting as many grids as the number of positioning kernel regions from all the grids to form a grid combination, and storing a plurality of such grid combinations in a combination set;
traversing the combination set, taking the region where each grid in the current grid combination is located as a candidate region, and performing the step of extracting a positioning kernel region from the candidate region;
after the step of extracting the positioning kernel region from the candidate region, the method further comprises:
if all the candidate regions in the current grid combination have yielded a positioning kernel region, exiting the traversal and outputting the extracted positioning kernel regions.
In one embodiment, the gridding the reference image to obtain a plurality of grids comprises:
gridding the reference image, according to the number of positioning kernel regions, into a plurality of grids with the same number of rows and columns, wherein the number of rows or columns satisfies n = ceil(sqrt(num)) + 1,
where n is the number of rows or columns, and num is the number of positioning kernel regions.
In one embodiment, the selecting as many grids as the number of positioning kernel regions from all the grids to form a grid combination, and storing a plurality of such grid combinations in a combination set, comprises:
numbering the grids;
selecting and combining as many grid numbers as the number of positioning kernel regions to obtain a plurality of grid combinations;
and calculating, for each grid combination, the sum of the distances between the grids corresponding to its numbers, screening out the grid combinations with the largest sum of distances, and storing them in the combination set.
In one embodiment, the image matching data comprises an image matching method, and the performing feature extraction on the candidate region to generate a set of feature points comprises:
if the image matching method is feature-point matching, calling a stored feature-point extractor interface to perform feature extraction on the candidate region, and storing the extracted feature points in the set of feature points;
if the image matching method is gray-level matching, sliding a window of a preset size over the candidate region with a preset stride, calculating the entropy of the image inside the window at each step, and recording the center-point coordinates of the window at each step;
determining the maximum entropy, and screening out the entropies larger than a preset multiple of the maximum entropy;
and storing the center-point coordinates of the windows corresponding to the screened entropies in the set of feature points as feature points.
In one embodiment, the determining a plurality of minimum axis-aligned circumscribed rectangles according to the set of feature points to obtain a plurality of rectangular regions comprises:
clustering the set of feature points with a density-based clustering algorithm to obtain a plurality of clusters;
and generating the minimum axis-aligned circumscribed rectangle of each cluster to obtain a plurality of rectangular regions.
In one embodiment, the image matching data includes parameters corresponding to the matching method, and the performing uniqueness detection on the rectangular regions and selecting the rectangular regions that pass the uniqueness detection comprises:
creating a positioning kernel using the reference image and the current rectangular region;
taking the reference image as the target, searching the target for matched positioning kernels using the parameters corresponding to the matching method and the created positioning kernel;
if the number of matched positioning kernels is 1, the current rectangular region passes the uniqueness detection and is stored in a uniqueness region set, and the positioning kernel corresponding to the current rectangular region is stored in a positioning kernel set;
and taking the next rectangular region as the current rectangular region, and returning to the step of creating a positioning kernel using the reference image and the current rectangular region.
In one embodiment, the performing positioning-accuracy detection on the rectangular regions that pass the uniqueness detection, and selecting a rectangular region that passes the positioning-accuracy detection as the positioning kernel region extracted from the candidate region, comprises:
acquiring the detection images generated by applying geometric transformations of a plurality of magnitudes to the reference image;
traversing the positioning kernel set, searching each detection image for a matched positioning kernel using the current positioning kernel and the parameters corresponding to the matching method, and taking the region center-point coordinates of the matched positioning kernel as the positioning coordinates for that detection image;
calculating the coordinates of the region center point of the current positioning kernel after the geometric transformation of each detection image, to obtain the transformation coordinates for that detection image;
calculating the offset between the positioning coordinates and the transformation coordinates of the same detection image;
and if the offsets of all the detection images are smaller than or equal to a preset error threshold, selecting the rectangular region corresponding to the current positioning kernel from the uniqueness region set as the positioning kernel region extracted from the candidate region, and exiting the traversal of the positioning kernel set.
In one embodiment, the acquiring the detection images generated by applying geometric transformations of a plurality of magnitudes to the reference image comprises:
adding Gaussian noise to the reference image to obtain a Gaussian-noise image;
combining a plurality of preset translation amounts with a plurality of preset rotation amounts to obtain a plurality of rigid-body transformation magnitudes, each comprising a translation amount and a rotation amount;
and performing a rigid-body transformation on the Gaussian-noise image at each rigid-body transformation magnitude to obtain a plurality of detection images.
An automatic extraction device of positioning kernel regions for image matching comprises:
a data acquisition module, configured to acquire a set reference image and set image matching data;
a region determining module, configured to determine a candidate region of the reference image according to the image matching data;
and a positioning kernel extraction module, configured to extract a positioning kernel region from the candidate region, the positioning kernel extraction module comprising:
a feature extraction unit, configured to perform feature extraction on the candidate region and generate a set of feature points;
a rectangle generating unit, configured to determine a plurality of minimum axis-aligned circumscribed rectangles according to the set of feature points to obtain a plurality of rectangular regions;
a uniqueness detection unit, configured to perform uniqueness detection on the rectangular regions and select the rectangular regions that pass the uniqueness detection;
and an accuracy detection unit, configured to perform positioning-accuracy detection on the rectangular regions that pass the uniqueness detection, and select a rectangular region that passes the positioning-accuracy detection as the positioning kernel region extracted from the candidate region.
A computer device comprises a memory storing a computer program and a processor that, when executing the computer program, performs the following steps:
acquiring a set reference image and set image matching data;
determining a candidate region of the reference image according to the image matching data;
extracting a positioning kernel region from the candidate region, which comprises:
performing feature extraction on the candidate region to generate a set of feature points;
determining a plurality of minimum axis-aligned circumscribed rectangles according to the set of feature points to obtain a plurality of rectangular regions; performing uniqueness detection on the rectangular regions, and selecting the rectangular regions that pass the uniqueness detection;
and performing positioning-accuracy detection on the rectangular regions that pass the uniqueness detection, and selecting a rectangular region that passes the positioning-accuracy detection as the positioning kernel region extracted from the candidate region.
A computer-readable storage medium stores a computer program that, when executed by a processor, performs the following steps:
acquiring a set reference image and set image matching data;
determining a candidate region of the reference image according to the image matching data;
extracting a positioning kernel region from the candidate region, which comprises:
performing feature extraction on the candidate region to generate a set of feature points;
determining a plurality of minimum axis-aligned circumscribed rectangles according to the set of feature points to obtain a plurality of rectangular regions; performing uniqueness detection on the rectangular regions, and selecting the rectangular regions that pass the uniqueness detection;
and performing positioning-accuracy detection on the rectangular regions that pass the uniqueness detection, and selecting a rectangular region that passes the positioning-accuracy detection as the positioning kernel region extracted from the candidate region.
According to the above automatic extraction method, device, computer equipment and computer-readable storage medium for positioning kernel regions of image matching, after the candidate region of the reference image is determined, feature extraction is performed on the candidate region, rectangular regions are generated, and uniqueness detection and positioning-accuracy detection are performed on the rectangular regions, so that a rectangular region that passes both detections is selected as the positioning kernel region. The positioning kernel region of the reference image is thus extracted automatically, which is applicable to fully automatic image matching scenarios. Moreover, the extracted positioning kernel region satisfies both uniqueness and positioning accuracy; compared with the existing manually drawn positioning kernel regions, this avoids extracting positioning kernel regions that do not satisfy the uniqueness or positioning accuracy of a positioning kernel. The extraction effect of the positioning kernel region is good, so the uniqueness and positioning accuracy of image matching are effectively improved, and the image matching accuracy is increased.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments or the conventional techniques of the present application, the drawings required for describing the embodiments or the conventional techniques are briefly introduced below. It is apparent that the drawings in the following description cover only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an automatic extraction method of positioning kernel regions for image matching in one embodiment;
FIG. 2 is a schematic flow chart of extracting a positioning kernel region from a candidate region in one embodiment;
FIG. 3 is a schematic flow chart of determining a candidate region of a reference image according to image matching data in one embodiment;
FIG. 4 is a schematic diagram of gridding in one embodiment;
FIG. 5 is a schematic diagram of selected grid combinations in one embodiment;
FIG. 6 is a schematic flow chart of an automatic extraction method of positioning kernel regions for image matching in another embodiment;
FIG. 7 is a schematic flow chart of extracting a positioning kernel region from a candidate region in another embodiment;
FIG. 8 is a schematic structural diagram of an automatic extraction device of positioning kernel regions for image matching in one embodiment;
FIG. 9 is a schematic structural diagram of a positioning kernel extraction module in one embodiment.
Detailed Description
In order to facilitate an understanding of the present application, a more complete description of the present application will now be provided with reference to the relevant figures. Examples of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," and/or the like, specify the presence of stated features, integers, steps, operations, elements, components, or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.
As described in the background, extracting the positioning kernel region from the reference image has hitherto required an operator to manually select a suitable region based on experience, which cannot satisfy fully automatic image matching scenarios, and a less experienced operator may extract a poor positioning kernel region with a poor extraction effect, for example: 1. drawing a region whose feature information is not rich (a solid-color region), a region with unstable features (a feature region strongly affected by illumination), or an edge region of an image with some distortion causes problems such as failure to create the model, low robustness of the matching model, and poor positioning accuracy. 2. The drawn feature region is not unique within the whole image, for example several regions share the same feature, which can make it impossible for the program to determine the exact matching position. On this basis, a scheme is provided that is applicable to image matching, extracts positioning kernel regions automatically, and has a good extraction effect.
In one embodiment, as shown in fig. 1, there is provided an automatic extraction method of positioning kernel regions for image matching, the method comprising:
S110: acquiring a set reference image and set image matching data.
The reference image is the image from which image matching extracts the positioning kernel; the image matching data are the operating parameters of image matching and may include, for example, at least one of a number of positioning kernel regions, a matching method, and parameters corresponding to the matching method. The matching method specifies the method used for image matching, the number of positioning kernel regions refers to the number of positioning kernel regions to be extracted, and the parameters corresponding to the matching method include the matching search score, the search range and the like. Specifically, the reference image and the image matching data may be configured by a user.
S130: an alternative region of the reference image is determined from the image matching data.
The candidate region is a selected partial region in the reference image. The number of the candidate regions may be one or more.
S150: a locating kernel region is extracted from the candidate region.
Referring to fig. 2, step S150 includes steps S151 to S157. For the case where there are a plurality of candidate areas, steps S151 to S157 are performed for each of the candidate areas.
S151: and extracting the characteristics of the alternative areas to generate a set of characteristic points.
Feature extraction is performed on one alternative region to obtain a plurality of feature points, and all the feature points form a feature point set.
S153: and determining a plurality of minimum circumscribed positive rectangles according to the set of the feature points to obtain a plurality of rectangular areas.
A minimum bounding rectangle may be determined from the plurality of feature points, and the plurality of minimum bounding rectangles may be determined based on all feature points in the set of feature points. The minimum bounding rectangle is a minimum bounding rectangle parallel to the coordinate axis, and can be determined according to a known minimum bounding rectangle generating method, for example, a rectangular boundary is determined by using a maximum abscissa, a minimum abscissa, a maximum ordinate and a minimum ordinate of vertices corresponding to a plurality of feature points. The area corresponding to the minimum circumscribed positive rectangle is the rectangular area.
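As a concrete illustration of this step, the sketch below computes the minimum axis-aligned circumscribed rectangle of a point set exactly as described above, from the extremal x and y coordinates; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def min_bounding_rect(points):
    """Minimum axis-aligned circumscribed rectangle of (x, y) points,
    returned as (x, y, w, h): extremal coordinates define the boundary."""
    pts = np.asarray(points, dtype=np.float64)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return float(x_min), float(y_min), float(x_max - x_min), float(y_max - y_min)

# Example: four feature points -> the rectangle spanning them.
print(min_bounding_rect([(10, 5), (40, 7), (12, 30), (38, 28)]))  # (10.0, 5.0, 30.0, 25.0)
```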
S155: and carrying out uniqueness detection on the rectangular area, and selecting the rectangular area passing through the uniqueness detection.
The uniqueness detection is to detect whether a rectangular area is a unique feature area in a reference image. For example, the uniqueness of a rectangular region can be detected by detecting whether or not a positioning kernel created based on the rectangular region can match to a unique region in a target image.
S157: and carrying out positioning accuracy detection on the rectangular area detected through the uniqueness, and selecting the rectangular area detected through the positioning accuracy as a positioning kernel area extracted from the candidate area.
The positioning accuracy detection is used for detecting the positioning accuracy of the positioning kernel extracted from the rectangular area. For example, whether the deviation size satisfies the set requirement can be determined by detecting the deviation between the rectangular region and the region to which the positioning kernel created based on the rectangular region is matched in the target image, thereby detecting whether the positioning accuracy is satisfied.
Specifically, for a plurality of rectangular areas passing through the uniqueness detection, positioning accuracy detection is sequentially carried out on each rectangular area passing through the uniqueness detection, when the currently detected rectangular area meets the requirement of passing through the positioning accuracy detection, the rectangular area is used as a positioning core area, the positioning accuracy detection is stopped, and one candidate area is ensured to extract one positioning core area.
According to the automatic extraction method for the positioning kernel region of the image matching, after the candidate region of the reference image is determined, the candidate region is subjected to feature extraction, a rectangular region is generated, and the rectangular region is subjected to uniqueness detection and positioning accuracy detection, so that the rectangular region detected through the uniqueness detection and the positioning accuracy is selected as the positioning kernel region, the automatic extraction of the positioning kernel region of the reference image is realized, and the automatic extraction method is applicable to full-automatic application scenes of the image matching; and the extracted positioning core region meets the uniqueness and the positioning accuracy, compared with the existing manual drawing positioning core region, the method can avoid extracting the positioning core region which does not meet the uniqueness and the positioning accuracy of the positioning core, and the extraction effect of the positioning core region is good, so that the uniqueness and the positioning accuracy of image matching are effectively improved, and the image matching accuracy is improved.
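Pulling steps S110 to S157 together, the whole method can be viewed as the pipeline sketched below. Every step function here is an injected placeholder with an assumed name; concrete sketches for the individual steps appear in the embodiments that follow.

```python
def extract_positioning_kernels(reference, matching_data,
                                determine_candidates, extract_features,
                                cluster_to_rectangles, uniqueness_filter,
                                passes_accuracy_for):
    """S110-S157 as a pipeline over injected step functions (all assumed)."""
    kernels = []
    for candidate in determine_candidates(reference, matching_data):    # S130
        points = extract_features(candidate, matching_data)             # S151
        rectangles = cluster_to_rectangles(points)                      # S153
        for rect in uniqueness_filter(reference, rectangles):           # S155
            if passes_accuracy_for(reference, rect, matching_data):     # S157
                kernels.append(rect)  # one kernel region per candidate region
                break
    return kernels
```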
In one embodiment, the image matching data includes a number of positioning kernel regions; specifically, the number of positioning kernel regions is at least two. Referring to fig. 3, step S130 includes steps S131 to S135.
S131: gridding the reference image to obtain a plurality of grids.
For example, the number of grids may be determined according to the number of positioning kernel regions, and the reference image is gridded accordingly. The region covered by each grid serves as one extraction unit for a positioning kernel region.
S133: selecting as many grids as the number of positioning kernel regions from all the grids to form a grid combination, and storing a plurality of such grid combinations in a combination set.
Taking num as the number of positioning kernel regions, num grids are selected from all the grids in every possible way, forming a plurality of grid combinations.
S135: traversing the combination set, and taking the areas where the grids in the current grid combination are located as alternative areas respectively.
Then, step S150 is performed. Namely, the extraction of the positioning kernel area is respectively carried out on the area where each grid is in the currently traversed grid combination. Correspondingly, after step S150, step S160 is further included.
S160: and judging whether all the candidate areas in the current grid combination are extracted to the positioning core area.
If not, step S171 is executed: and taking the areas of the grids in the next grid combination as the alternative areas respectively, and executing step S150.
If yes, step S173 is executed: and exiting the traversal and outputting the extracted positioning kernel region.
For example, the extracted location core region may be output to a display for display. And traversing the combination set, traversing each grid in the current grid combination set, and executing automatic extraction of the positioning core region in the region where the grid is located until all the regions where the grid is located in one grid combination are traversed to extract the positioning core region, and at the moment, exiting the traversing, and taking the extracted positioning core region in each grid region in the grid combination as an extraction result.
In image matching, there are a number of scenes that need to be extracted to locate the number of kernel regions. For example, in order to reduce the positioning error of a large-resolution image and improve the matching precision, a plurality of positioning kernel areas need to be extracted; for example, in the Correlation gray-scale Correlation matching method, for the rotation angle, it is necessary to use at least two positioning kernel regions to create a template and search for, and use at least two line segments of the found rectangular frame center line to calculate the rotation angle, and therefore, when the Correlation gray-scale Correlation matching method is used and the rotation angle needs to be output, it is necessary to extract at least two positioning kernel regions. In the prior art, when a scene of a plurality of positioning core areas is needed, an operator is required to manually draw, and the operator is required to understand the drawing rule of the corresponding scene, so that the drawing difficulty and the operation complexity are increased. In this embodiment, an automatic extraction scheme for the number of at least two positioning core areas is provided, which can be suitable for a scene where a plurality of positioning core areas need to be extracted, and manual drawing is not needed, so that the problems of high difficulty and complexity in operation under the requirement of multiple positioning core areas are solved.
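The traversal logic of steps S135 to S173 can be sketched as follows; extract_fn stands in for step S150 and is assumed to return a positioning kernel region, or None on failure, for a given grid region.

```python
def extract_from_combinations(combination_set, extract_fn):
    """Traverse grid combinations (S135); the first combination in which
    every grid region yields a positioning kernel region wins (S160/S173).
    Otherwise move on to the next combination (S171)."""
    for combination in combination_set:
        kernels = [extract_fn(grid_region) for grid_region in combination]
        if all(k is not None for k in kernels):
            return kernels  # exit the traversal, output the extracted regions
    return None  # no combination produced a kernel region in every grid
```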
In one embodiment, step S131 includes: gridding the reference image, according to the number of positioning kernel regions, into a plurality of grids with equal numbers of rows and columns, wherein the number of rows or columns satisfies the following equation 1:
n = ceil(sqrt(num)) + 1 (Equation 1)
where n is the number of rows or columns, num is the number of positioning kernel regions, sqrt denotes the square-root operation, and ceil denotes rounding up. In this embodiment, the numbers of rows and columns of the gridding are determined by the number of positioning kernel regions, and the whole reference image is gridded into n rows and n columns, so the resulting grids have an appropriate size and the division works well.
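A minimal sketch of the grid-size computation follows. Note that Equation 1 as written above is reconstructed from the worked example in this text (num = 3 giving a 3 x 3 grid) and the named sqrt/ceil operations, since the published formula image is not reproduced here; treat n = ceil(sqrt(num)) + 1 as an assumption.

```python
import math

def grid_dim(num):
    """Rows (= columns) of the grid, per the reconstructed Equation 1:
    n = ceil(sqrt(num)) + 1 (an assumption, see the note above)."""
    return math.ceil(math.sqrt(num)) + 1

print(grid_dim(3))  # 3 -> a 3 x 3 grid of 9 cells, matching the example below
```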
In one embodiment, step S133 includes steps (a1) to (a3).
Step (a1): numbering the grids.
The grids may be numbered by accumulating the number from 0, row by row from left to right and from top to bottom. For example, the numbering of a 3 x 3 grid is shown in fig. 4. It will be appreciated that other numbering schemes may be used in other embodiments, such as numbering each column from top to bottom and proceeding column by column from left to right.
Step (a2): selecting and combining as many grid numbers as the number of positioning kernel regions to obtain a plurality of grid combinations.
Grid numbers are selected according to the number of positioning kernel regions and combined (a combination in the C(n, k) sense). For example, when the number of positioning kernel regions is 3, equation 1 gives a 3 x 3 grid of 9 cells, that is, 3 grid numbers are selected from the 9 grid numbers and combined.
Step (a3): calculating, for each grid combination, the sum of the distances between the grids corresponding to its numbers, screening out the grid combinations with the largest sum of distances, and storing them in the combination set.
Here the distance is the distance between the center-point coordinates of the grids corresponding to the numbers. Among all the grid combinations, those with the largest sum of inter-grid distances are selected; for the grid of fig. 4, for example, the combinations with the largest sum of distances are { (0,2,6), (0,2,8), (2,6,8), (0,6,8) }, as shown in figs. 5(a) to 5(d).
For matching and positioning a large-field-of-view image, several positioning kernel regions are extracted mainly to reduce the influence of a single positioning kernel's error on the positioning result of the whole image, thereby improving the positioning precision. It has been found that the greater the distance between the positioning kernel regions, the more pronounced this error-reduction effect. For example, suppose positioning uses the kernel regions extracted from grid 0 and grid 8 in fig. 4, and the kernel of grid 8 has a fairly large positioning error and lands on grid 7: the positioning coordinates are then the center point of the line joining grid 0 and grid 7, and the positioning angle is the angle of that line. If the grid-8 kernel is instead replaced by a grid-4 kernel which errs onto grid 3, the positioning coordinates are the center point of the line joining grid 0 and grid 3, and the positioning angle is the angle of that line. Comparing the two, the deviation between line 0-3 and the true line 0-4 is much larger than the deviation between line 0-7 and the true line 0-8: the larger the distance between the grids, the smaller the positioning deviation. In this embodiment, the grid combinations with the largest sum of distances are screened into the combination set, and the positioning kernels are extracted based on that set, which reduces the joint positioning deviation of several positioning kernels and improves their positioning accuracy.
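Steps (a1) to (a3) can be sketched as below for an n x n grid numbered row by row from 0; all names are illustrative.

```python
import itertools
import numpy as np

def max_distance_combinations(n, num):
    """Enumerate every combination of `num` grid numbers out of n*n cells
    and keep those whose sum of pairwise center distances is maximal."""
    # Center of grid k under left-to-right, top-to-bottom numbering (a1).
    centers = np.array([(k % n + 0.5, k // n + 0.5) for k in range(n * n)])
    best, best_sum = [], -1.0
    for combo in itertools.combinations(range(n * n), num):  # (a2)
        pts = centers[list(combo)]
        d = sum(np.linalg.norm(a - b) for a, b in itertools.combinations(pts, 2))
        if d > best_sum + 1e-9:          # strictly better: restart the list (a3)
            best, best_sum = [combo], d
        elif abs(d - best_sum) <= 1e-9:  # tie: keep every maximal combination
            best.append(combo)
    return best

print(max_distance_combinations(3, 3))
# [(0, 2, 6), (0, 2, 8), (0, 6, 8), (2, 6, 8)] -- the four combinations of fig. 5
```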
As shown in fig. 6, in one detailed embodiment, a user-configured reference image and image matching data are acquired, the image matching data including the number of positioning kernel regions. Next, the reference image is gridded, the grids are numbered, and the combination set of grid combinations is generated from the grid numbers. Then each grid combination in the combination set is traversed in turn; for each grid combination, each grid is traversed, step S150 is performed with the grid as the candidate region so that automatic extraction of a positioning kernel region is carried out, and every grid region that yields a positioning kernel region is stored in a result region set. Whether every grid region of the currently traversed grid combination has yielded a positioning kernel region is then judged; if not, the result region set is emptied, the next grid combination is traversed, and automatic extraction is performed for each of its grids. This cycle continues until every grid of one grid combination has yielded a positioning kernel region; the positioning kernel regions stored in the result region set for the grid regions of that combination are taken as the final extraction result, and the traversal exits, completing the automatic extraction of several positioning kernel regions.
In one embodiment, the image matching data includes an image matching method. The image matching method may be feature-point matching or gray-level matching; for a visual inspection device, for example, the user may select Correlation gray-level matching or Feature Point matching. Specifically, step S151 includes step (b1), or steps (b2) to (b4).
Step (b1): if the image matching method is feature-point matching, calling a stored feature-point extractor interface to perform feature extraction on the candidate region, and storing the extracted feature points in the set of feature points.
The stored feature-point extractor may be the extractor provided by the current feature-point matching method; in other embodiments, feature extraction may use an extraction method with the same principle as the current feature-point matching method.
Taking the Feature Point matching of a visual inspection device as an example, Feature Point is the feature-point-based matching provided by the Open eVision vision library, and the feature-point extractor provided by that library may be used to perform feature extraction on the candidate region.
Step (b2): if the image matching method is gray-level matching, sliding a window of a preset size over the candidate region with a preset stride, calculating the entropy of the image inside the window at each step, and recording the center-point coordinates of the window at each step.
Step (b3): determining the maximum entropy, and screening out the entropies larger than a preset multiple of the maximum entropy.
The preset multiple can be set as needed; specifically it is a value less than 1, for example 0.9. The entropy of an image reflects its information content, so screening all entropies for those larger than the preset multiple of the maximum selects, as far as possible, the window images that carry the most information.
Step (b4): storing the center-point coordinates of the windows corresponding to the screened entropies in the set of feature points as feature points.
Taking the Correlation gray-level matching of a visual inspection device as an example, a window of preset size is slid stepwise over the candidate region, the entropy (image entropy) of the image inside each window is calculated and stored in an entropy set, and the center-point coordinates of the current window are stored correspondingly in a window point set. After the sliding ends, the center-point coordinates of the windows whose entropy exceeds 0.9 times the maximum entropy (the preset multiple is defined by the actual situation) are collected; such windows have high entropy and can serve as Correlation feature points, which are finally stored in the set of feature points.
In this embodiment, different feature extraction methods are used for different image matching methods, so the corresponding effective features can be extracted automatically.
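For the gray-level branch, steps (b2) to (b4) amount to a sliding-window entropy filter. The sketch below assumes an 8-bit single-channel region; the window size and stride are the unspecified presets of the text (values here are assumptions), and 0.9 is the example multiple given above.

```python
import numpy as np

def image_entropy(patch):
    """Shannon entropy (bits) of an 8-bit grayscale patch."""
    hist = np.bincount(patch.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_feature_points(region, win=32, stride=8, multiple=0.9):
    """Slide a win x win window over `region` with the given stride (b2),
    then keep the center of every window whose entropy exceeds
    `multiple` times the maximum entropy (b3, b4)."""
    centers, entropies = [], []
    h, w = region.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            entropies.append(image_entropy(region[y:y + win, x:x + win]))
            centers.append((x + win // 2, y + win // 2))
    entropies = np.asarray(entropies)
    threshold = multiple * entropies.max()
    return [c for c, e in zip(centers, entropies) if e > threshold]
```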
In one embodiment, step S153 includes: clustering the set of feature points with a density-based clustering algorithm to obtain a plurality of clusters, and generating the minimum axis-aligned circumscribed rectangle of each cluster to obtain a plurality of rectangular regions.
The density-based clustering algorithm may be the KANN-DBSCAN clustering method, a variant of density clustering (DBSCAN, Density-Based Spatial Clustering of Applications with Noise) that adaptively generates its density-threshold list from the K-average nearest neighbor algorithm (KANN) and mathematical expectation. It will be appreciated that in other embodiments, other suitable clustering methods may be selected according to the nature of the set of feature points, such as shape-based or size-based clustering.
In this embodiment, density-based clustering divides the set of feature points into different subsets, giving clusters, and the rectangular region of each cluster is generated as its minimum axis-aligned circumscribed rectangle; the parts where feature points are dense are thus extracted into regions that serve as positioning kernel region candidates, so positioning kernel regions rich in features can be extracted.
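A sketch of this embodiment using plain DBSCAN from scikit-learn as a stand-in for KANN-DBSCAN; KANN-DBSCAN derives its density thresholds adaptively, whereas eps and min_samples here are fixed, assumed values.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # stand-in for the adaptive KANN-DBSCAN

def cluster_to_rectangles(points, eps=15.0, min_samples=5):
    """Density-cluster the feature points, then return the minimum
    axis-aligned circumscribed rectangle (x, y, w, h) of each cluster."""
    pts = np.asarray(points, dtype=np.float64)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    rectangles = []
    for label in sorted(set(labels) - {-1}):  # label -1 marks noise points
        cluster = pts[labels == label]
        x_min, y_min = cluster.min(axis=0)
        x_max, y_max = cluster.max(axis=0)
        rectangles.append((x_min, y_min, x_max - x_min, y_max - y_min))
    return rectangles
```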
In one embodiment, the image matching data includes parameters corresponding to the matching method, and step S155 includes steps (c1) to (c4).
Step (c1): creating a positioning kernel using the reference image and the current rectangular region.
Specifically, the current rectangular region is taken as the positioning kernel region, and a positioning kernel is created from the reference image with a well-known positioning kernel creation method.
Step (c2): taking the reference image as the target, searching the target for matched positioning kernels using the parameters corresponding to the matching method and the created positioning kernel.
That is, with the reference image as the target image, matched positioning kernels are searched for in the target image using the parameters corresponding to the matching method and the created positioning kernel; this is the template-finding operation of image matching.
Step (c3): if the number of matched positioning kernels is 1, the current rectangular region passes the uniqueness detection and is stored in the uniqueness region set, and the positioning kernel corresponding to the current rectangular region is stored in the positioning kernel set.
A match count of 1 means the created positioning kernel matches a unique region in the target image, so the corresponding rectangular region is unique and passes the uniqueness detection.
Step (c4): taking the next rectangular region as the current rectangular region.
The flow then returns to step (c1), traversing every rectangular region in this way until the uniqueness detection of all rectangular regions is complete.
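For the gray-level case, the uniqueness test of steps (c1) to (c4) can be approximated with normalized cross-correlation template matching, used below as a stand-in for creating and searching a positioning kernel; the 0.9 score threshold is an assumed parameter.

```python
import cv2
import numpy as np

def count_matches(reference, rect, score=0.9):
    """Use the rectangle as a positioning kernel (c1) and count how many
    places in the reference image it matches (c2), via normalized
    cross-correlation."""
    x, y, w, h = rect
    template = reference[y:y + h, x:x + w]
    response = cv2.matchTemplate(reference, template, cv2.TM_CCOEFF_NORMED)
    # Keep only local maxima so one match is not counted once per pixel.
    local_max = cv2.dilate(response, np.ones((h, w), np.uint8))
    peaks = (response >= score) & (response >= local_max - 1e-6)
    return int(peaks.sum())

def uniqueness_filter(reference, rectangles):
    """Steps (c3)-(c4): keep a rectangle only if its kernel matches exactly
    one location when the reference image itself is the search target."""
    return [r for r in rectangles if count_matches(reference, r) == 1]
```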
In one embodiment, step S157 includes steps (d1) to (d5).
Step (d1): acquiring the detection images generated by applying geometric transformations of a plurality of magnitudes to the reference image.
Applying a geometric transformation of one magnitude to the reference image generates one detection image; applying geometric transformations of several magnitudes therefore yields several detection images. The geometric transformation may be a rigid-body transformation, an affine transformation, a rotation, or the like.
Step (d2): traversing the positioning kernel set, searching each detection image for a matched positioning kernel using the current positioning kernel and the parameters corresponding to the matching method, and taking the region center-point coordinates of the matched positioning kernel as the positioning coordinates for that detection image.
Step (d3): calculating the coordinates of the region center point of the current positioning kernel after the geometric transformation of each detection image, to obtain the transformation coordinates for that detection image.
For example, suppose the center-point coordinates of the current positioning kernel are A, and the first detection image was obtained by applying a geometric transformation of magnitude B to the reference image. Then applying the transformation of magnitude B to A gives a coordinate C, which is the transformation coordinate for the first detection image.
Step (d4): calculating the offset between the positioning coordinates and the transformation coordinates of the same detection image.
After steps (d2) and (d3), each detection image has, for the current positioning kernel, one set of positioning coordinates and one set of transformation coordinates, and the offset between them is calculated per image. For example, the offsets in the X and Y directions may be calculated; this offset represents the positioning error of the current positioning kernel.
Step (d5): if the offsets of all the detection images are smaller than or equal to the preset error threshold, selecting the rectangular region corresponding to the current positioning kernel from the uniqueness region set as the positioning kernel region extracted from the candidate region, and exiting the traversal of the positioning kernel set.
The preset error threshold can be set as needed. If every offset calculated in step (d4) is smaller than or equal to the preset error threshold, the positioning error of the current positioning kernel lies within the allowed range and the positioning-accuracy requirement is met, that is, the rectangular region corresponding to the current positioning kernel passes the positioning-accuracy detection. That rectangular region is then taken as the positioning kernel region extracted from the candidate region, and the traversal of the positioning kernel set exits, so that no accuracy detection is performed on further kernels and one candidate region yields exactly one positioning kernel region. If any offset calculated in step (d4) exceeds the preset error threshold, the next positioning kernel in the set becomes the new current positioning kernel and the flow returns to step (d2).
In this embodiment, after the uniqueness detection, the detection images serve as target images, and the deviation between a kernel created from a rectangular region and the kernel it matches in the target image is measured to check whether the positioning-accuracy requirement is met. This provides an automatic positioning-accuracy check, so the automatically extracted positioning kernel regions achieve high robustness, high positioning accuracy and uniqueness.
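The offset test of steps (d2) to (d5) reduces to comparing two coordinates per detection image, as sketched below; locate_center is a hypothetical matcher callback standing for "search the matched positioning kernel and return its region center", and the 1-pixel threshold is an assumed value.

```python
import numpy as np

def apply_affine(matrix, point):
    """Apply a 2x3 geometric-transform matrix to an (x, y) point."""
    x, y = point
    return matrix @ np.array([x, y, 1.0])

def passes_accuracy(kernel_center, detection_images, matrices,
                    locate_center, threshold=1.0):
    """(d2)-(d5): for every detection image, compare the located center
    (positioning coordinates) with the transformed center (transformation
    coordinates); pass only if every offset is within the threshold."""
    for image, matrix in zip(detection_images, matrices):
        expected = apply_affine(matrix, kernel_center)       # (d3)
        located = np.asarray(locate_center(image), float)    # (d2)
        offset = np.abs(located - expected)                  # (d4), X and Y
        if np.any(offset > threshold):
            return False                                     # try the next kernel
    return True                                              # (d5)
```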
In one embodiment, step (d1) includes: adding Gaussian noise to the reference image to obtain a Gaussian-noise image; combining a plurality of preset translation amounts with a plurality of preset rotation amounts to obtain a plurality of rigid-body transformation magnitudes, each comprising a translation amount and a rotation amount; and performing a rigid-body transformation on the Gaussian-noise image at each magnitude to obtain a plurality of detection images.
Adding Gaussian noise to the reference image means generating a gray image, of the same size as the reference image, whose values follow a Gaussian distribution, then adding it to the reference image to obtain the Gaussian-noise image. For example, a random Gaussian gray image may be generated with the RandGauss function of the Intel IPP library and added to the original reference image.
Taking 3 translation amounts and 3 rotation amounts as an example, let the translation list be [2, 5, 8] and the rotation list be [2.0, -3.5, 5.0]; the values can be customized to the actual situation, and each translation amount is applied to the X and Y directions alike. Picking one value from the translation list and one from the rotation list gives the set of rigid-body transformation magnitudes { [2,2.0], [2,-3.5], [2,5.0], [5,2.0], [5,-3.5], [5,5.0], [8,2.0], [8,-3.5], [8,5.0] }. This set is traversed; for each magnitude the rigid-body transformation matrix is computed from the rotation and translation amounts, and rigid-body transformation of the Gaussian-noise image produces one detection image.
In this embodiment, Gaussian noise is first added to the reference image and rigid-body transformations of different magnitudes are then applied, so several detection images are obtained from a single reference image for the positioning-accuracy detection; the accuracy detection is thereby possible even when only one reference image exists.
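A sketch of this embodiment with OpenCV; the translation and rotation lists mirror the example above, while the noise sigma and the random seed are assumptions.

```python
import cv2
import numpy as np

def make_detection_images(reference, shifts=(2, 5, 8),
                          angles=(2.0, -3.5, 5.0), sigma=5.0, seed=0):
    """Add Gaussian noise to the reference image, then apply one rigid
    transform per (shift, angle) pair; returns the detection images and
    their 2x3 matrices (needed later for the transformation coordinates)."""
    rng = np.random.default_rng(seed)
    noisy = reference.astype(np.float64) + rng.normal(0.0, sigma, reference.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    h, w = reference.shape[:2]
    images, matrices = [], []
    for shift in shifts:
        for angle in angles:
            m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
            m[:, 2] += shift  # common translation in the X and Y directions
            images.append(cv2.warpAffine(noisy, m, (w, h)))
            matrices.append(m)
    return images, matrices  # 9 detection images for the 3 x 3 magnitude set
```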
For example, as shown in fig. 7, in one detailed embodiment, the type of the image matching method is judged first. If it is the Feature Point matching method, the feature-point extractor provided by that method (or an extraction method with the same principle) is selected, and the feature points in the current grid are extracted to obtain the set of feature points. If it is the Correlation gray-level matching method, the sliding-window size and stride are defined, the window is slid stepwise across the grid, the entropy of the image inside each window is calculated, the maximum entropy is found, the entropy threshold (the preset multiple of the maximum entropy) is computed, and the center-point coordinates of the windows whose entropy exceeds the threshold are collected as feature points, giving the set of feature points. The feature points are then clustered, the minimum axis-aligned circumscribed rectangle of each cluster is generated and stored in a cluster region set; the cluster region set is traversed, each minimum circumscribed rectangle undergoes the uniqueness judgment, and then the detection images for the positioning-accuracy detection are acquired and the positioning-accuracy detection is performed based on them.
It should be understood that, although the steps in the flowcharts of figs. 1-3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps of figs. 1-3 may include several sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments, and whose order of execution is not necessarily sequential: they may be executed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, an image matching positioning kernel region automatic extraction apparatus is provided, which includes a data acquisition module 810, a region determination module 830, and a positioning kernel extraction module 850. Wherein:
the data acquisition module 810 is configured to acquire the set reference image and image matching data; the region determining module 830 is configured to determine an alternative region of the reference image according to the image matching data; the positioning core extraction module 850 is configured to extract a positioning core region in the candidate region.
Among them, referring to fig. 9, the positioning core extraction module 850 includes a feature extraction unit 851, a rectangle generation unit 853, a uniqueness detection unit 855, and a precision detection unit 857. The feature extraction unit 851 is configured to perform feature extraction on the candidate region, and generate a set of feature points; the rectangle generating unit 853 is configured to determine a plurality of minimum circumscribed positive rectangles according to the set of feature points, so as to obtain a plurality of rectangular areas; the uniqueness detection unit 855 is configured to perform uniqueness detection on the rectangular area, and select a rectangular area that passes through the uniqueness detection; the accuracy detecting unit 857 is for performing positioning accuracy detection on the rectangular region detected by the uniqueness, and selects the rectangular region detected by the positioning accuracy as a positioning kernel region extracted from the candidate region.
With the automatic extraction apparatus for the positioning kernel region of image matching described above, after the candidate region of the reference image is determined, feature extraction is performed on the candidate region, rectangular regions are generated, and uniqueness detection and positioning accuracy detection are applied to them, so that a rectangular region passing both checks is selected as the positioning kernel region. Extraction of the positioning kernel region of the reference image is thus automated, and the apparatus is suitable for fully automatic image matching scenarios. Moreover, because the extracted positioning kernel region satisfies both uniqueness and positioning accuracy, the apparatus avoids extracting regions that fail these requirements, unlike the existing practice of drawing the positioning kernel region manually; the extraction effect is good, so the uniqueness and positioning accuracy of image matching, and hence the image matching accuracy, are effectively improved.
For specific limitations of the automatic extraction apparatus for the positioning kernel region of image matching, reference may be made to the limitations of the automatic extraction method described above, which are not repeated here. All or part of the modules in the apparatus may be implemented in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in software form in a memory of the computer device so that the processor can call and execute the operations corresponding to each module. It should be noted that the division into modules in the embodiments of the present application is schematic and represents merely a division of logical functions; other division manners may be used in actual implementations.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein; the processor, when executing the computer program, implements the steps of the method embodiments described above.
Such a computer device can carry out the steps of the method embodiments above and thus realize automatic extraction of the positioning kernel region, making it suitable for fully automatic image matching scenarios; the extracted positioning kernel region satisfies both uniqueness and positioning accuracy, and the extraction effect is good, so the uniqueness and positioning accuracy of image matching, and hence the image matching accuracy, are effectively improved.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
Such a computer-readable storage medium can likewise carry out the steps of the method embodiments above and realize automatic extraction of the positioning kernel region, making it suitable for fully automatic image matching scenarios; the extracted positioning kernel region satisfies both uniqueness and positioning accuracy, and the extraction effect is good, so the uniqueness and positioning accuracy of image matching, and hence the image matching accuracy, are effectively improved.
Those skilled in the art will appreciate that all or part of the processes of the above methods may be implemented by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the processes of the method embodiments described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory and the like. Volatile memory may include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
In the description of the present specification, reference to the terms "some embodiments," "other embodiments," "desired embodiments," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic descriptions of the above terms do not necessarily refer to the same embodiment or example.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments merely represent several implementations of the present application; their description is relatively specific and detailed, but they are not therefore to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. An automatic extraction method for a positioning kernel area of image matching is characterized by comprising the following steps:
acquiring set reference images and image matching data;
determining a candidate region of the reference image according to the image matching data;
extracting a positioning kernel region in the candidate region, comprising:
performing feature extraction on the candidate region to generate a set of feature points;
determining a plurality of minimum circumscribed positive rectangles according to the set of the feature points to obtain a plurality of rectangular areas;
performing uniqueness detection on the rectangular areas, and selecting a rectangular area that passes the uniqueness detection;
and performing positioning accuracy detection on the rectangular area that passes the uniqueness detection, and selecting the rectangular area that passes the positioning accuracy detection as the positioning kernel region extracted from the candidate region.
2. The method of claim 1, wherein the image matching data includes the number of positioning kernel regions, and the determining a candidate region of the reference image according to the image matching data comprises:
gridding the reference image to obtain a plurality of grids;
selecting, from all the grids, as many grids as the number of positioning kernel regions to form a grid combination, and storing a plurality of grid combinations into a combination set;
traversing the combination set, taking the area where each grid in the current grid combination is located as a candidate region, and executing the step of extracting a positioning kernel region in the candidate region;
after the step of extracting a positioning kernel region in the candidate region, the method further comprises:
if every candidate region in the current grid combination yields a positioning kernel region, exiting the traversal and outputting the extracted positioning kernel regions.
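A minimal sketch of the combination-set traversal recited in claim 2, assuming per-region extraction is available as a callable that returns None on failure; all names are illustrative:

    from itertools import combinations

    def grid_combinations(grid_count: int, num_kernels: int):
        # Every way of choosing num_kernels grids out of the gridded image.
        return list(combinations(range(grid_count), num_kernels))

    def traverse(combo_set, grids, try_extract):
        # Walk the combination set; stop at the first combination whose grids
        # all yield a positioning kernel region. try_extract stands in for the
        # per-region extraction of claim 1 and returns None on failure.
        for combo in combo_set:
            kernels = [try_extract(grids[i]) for i in combo]
            if all(k is not None for k in kernels):
                return kernels      # exit the traversal and output the kernels
        return None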
3. The method according to claim 2, wherein the gridding the reference image to obtain a plurality of grids comprises:
gridding the reference image, according to the number of positioning kernel regions, into a plurality of grids with equal numbers of rows and columns, wherein the number of rows or columns satisfies:
[formula not reproduced in this text]
where n is the number of rows or columns, and num is the number of positioning kernel regions.
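Because the formula of claim 3 is not reproduced in this text, the following sketch encodes one plausible reading, namely the smallest n whose n x n gridding offers at least num cells to choose from; this is an assumption, not the published equation:

    import math

    def grid_rows_cols(num: int) -> int:
        # Assumed reading of claim 3's unreproduced formula: the smallest n
        # such that n * n >= num, so an n x n gridding offers enough cells.
        return math.ceil(math.sqrt(num))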
4. The method of claim 2, wherein the selecting, from all the grids, as many grids as the number of positioning kernel regions to form a grid combination, and storing a plurality of grid combinations into a combination set, comprises:
numbering the grids;
selecting, for each combination, as many grid numbers as the number of positioning kernel regions, to obtain a plurality of grid combinations;
and respectively calculating, for each grid combination, the sum of the distances between the grids corresponding to its numbers, screening the grid combinations with the largest distance sums, and storing them into the combination set.
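A sketch of the distance-sum screening recited in claim 4; each combination of grid numbers is scored by the sum of pairwise distances between the corresponding grid centres, with an illustrative keep parameter for how many top combinations are stored:

    import numpy as np
    from itertools import combinations

    def best_spread_combinations(centres, num_kernels, keep=10):
        # Score each combination of grid numbers by the sum of pairwise
        # distances between the grid centres; keep the most spread-out ones.
        # keep is an illustrative parameter, not a value from the patent.
        scored = []
        for combo in combinations(range(len(centres)), num_kernels):
            pts = np.asarray([centres[i] for i in combo], dtype=float)
            total = sum(np.linalg.norm(pts[i] - pts[j])
                        for i in range(len(pts)) for j in range(i + 1, len(pts)))
            scored.append((total, combo))
        scored.sort(reverse=True)                 # largest distance sums first
        return [combo for _, combo in scored[:keep]]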
5. The method of claim 1, wherein the image matching data comprises an image matching method, and the performing feature extraction on the candidate region to generate a set of feature points comprises:
if the image matching method is feature point matching, calling a stored feature point extractor interface to perform feature extraction on the candidate region, and storing the extracted feature points into the set of feature points;
if the image matching method is gray level matching, sliding a window of a preset size over the candidate region with a preset stride, calculating the entropy value of the image within the window at each step, and recording the center point coordinates of the window at each step;
determining a maximum entropy value, and screening entropy values larger than a preset multiple of the maximum entropy value;
and storing, as feature points, the center point coordinates of the windows corresponding to the screened entropy values into the set of feature points.
6. The method according to claim 1, wherein the determining a plurality of minimum circumscribed positive rectangles according to the set of feature points to obtain a plurality of rectangular regions comprises:
clustering the set of feature points by using a density-based clustering algorithm to obtain a plurality of clusters;
and respectively generating the minimum circumscribed positive rectangle of each cluster to obtain a plurality of rectangular areas.
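Claim 6 leaves the density-based clustering algorithm open; DBSCAN is one common choice. In the sketch below the eps and min_samples values are assumptions, and the axis-aligned minimum bounding rectangle stands in for the minimum circumscribed positive rectangle:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_rectangles(points, eps=15.0, min_samples=5):
        # Density-based clustering of the feature points, then the
        # axis-aligned minimum bounding rectangle of each cluster.
        # eps and min_samples are assumed values, not from the patent.
        pts = np.asarray(points, dtype=float)
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
        rects = []
        for lab in set(labels):
            if lab == -1:                        # noise points form no rectangle
                continue
            cluster = pts[labels == lab]
            x0, y0 = cluster.min(axis=0)
            x1, y1 = cluster.max(axis=0)
            rects.append((x0, y0, x1 - x0, y1 - y0))   # (x, y, w, h)
        return rects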
7. The method of claim 1, wherein the image matching data includes parameters corresponding to the matching method, and the performing uniqueness detection on the rectangular areas and selecting a rectangular area that passes the uniqueness detection comprises:
creating a positioning kernel using the reference image and the current rectangular region;
taking the reference image as the target, searching the target for matched positioning kernels by using the parameters corresponding to the matching method and the created positioning kernel;
if the number of matched positioning kernels is 1, the current rectangular area passes the uniqueness detection; storing the current rectangular area into a uniqueness area set, and storing the positioning kernel corresponding to the current rectangular area into a positioning kernel set;
and taking the next rectangular area as the current rectangular area, and returning to the step of creating a positioning kernel by using the reference image and the current rectangular area.
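A sketch of the uniqueness check recited in claim 7, with OpenCV template matching standing in for whichever matcher the disclosed method actually uses; the score threshold is an assumption:

    import cv2
    import numpy as np

    def is_unique(reference: np.ndarray, rect, score_thresh: float = 0.9) -> bool:
        # Build a kernel from the rectangle and search the whole reference
        # image for it; uniqueness means exactly one match. In practice,
        # neighbouring responses above the threshold would first be merged
        # by non-maximum suppression before counting.
        x, y, w, h = [int(v) for v in rect]      # assumes (x, y, w, h) pixels
        kernel = reference[y:y + h, x:x + w]
        response = cv2.matchTemplate(reference, kernel, cv2.TM_CCOEFF_NORMED)
        matches = np.argwhere(response >= score_thresh)
        return len(matches) == 1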
8. The method of claim 7, wherein the performing positioning accuracy detection on the rectangular area that passes the uniqueness detection, and selecting the rectangular area that passes the positioning accuracy detection as the positioning kernel region extracted from the candidate region, comprises:
obtaining detection images respectively generated after geometric transformations of a plurality of amplitudes are performed on the reference image;
traversing the positioning kernel set, searching each detection image for a matched positioning kernel by using the current positioning kernel and the parameters corresponding to the matching method, and taking the region center point coordinates of the matched positioning kernel as the positioning coordinates of the corresponding detection image;
calculating the coordinates of the region center point of the current positioning kernel after the geometric transformation of each detection image, to obtain the transformation coordinates of the corresponding detection image;
calculating the offset between the positioning coordinates and the transformation coordinates corresponding to the same detection image;
if the offsets corresponding to all the detection images are smaller than or equal to a preset error threshold, selecting the rectangular area corresponding to the current positioning kernel from the uniqueness area set as the positioning kernel region extracted from the candidate region, and exiting the traversal of the positioning kernel set.
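A sketch of the offset test recited in claim 8, assuming helper callables that apply each detection image's geometric transform and return the matcher's located centre; the names and error threshold are illustrative:

    import numpy as np

    def passes_accuracy(kernel_centre, detections, err_thresh: float = 1.0) -> bool:
        # detections is a list of (transform, locate) pairs: transform maps
        # reference coordinates into a detection image, locate returns the
        # matcher's located centre in that image. All names and err_thresh
        # are illustrative assumptions, not from the patent.
        for transform, locate in detections:
            expected = np.asarray(transform(kernel_centre), dtype=float)
            located = np.asarray(locate(), dtype=float)
            if np.linalg.norm(expected - located) > err_thresh:
                return False         # one failing detection image fails the kernel
        return True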
9. The method of claim 8, wherein the obtaining detection images respectively generated after geometric transformations of a plurality of amplitudes are performed on the reference image comprises:
adding Gaussian noise to the reference image to obtain a Gaussian noise image;
combining a plurality of preset translation amounts and a plurality of preset rotation amounts to obtain a plurality of rigid transformation amplitudes each comprising a translation amount and a rotation amount;
and respectively executing rigid transformation on the Gaussian noise image according to the amplitude of each rigid transformation to obtain a plurality of detection images.
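A sketch of the detection-image generation recited in claim 9; the noise sigma and the preset translation and rotation amounts below are illustrative, not values from the patent:

    import cv2
    import numpy as np

    def make_detection_images(reference: np.ndarray, shifts=(-5, 0, 5),
                              angles=(-2.0, 0.0, 2.0), sigma: float = 5.0):
        # Add Gaussian noise once, then apply every combination of preset
        # translation and rotation. shifts, angles and sigma are illustrative.
        noisy = reference.astype(np.float32) + np.random.normal(0.0, sigma, reference.shape)
        noisy = np.clip(noisy, 0, 255).astype(np.uint8)
        h, w = noisy.shape[:2]
        images = []
        for dx in shifts:
            for dy in shifts:
                for angle in angles:
                    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
                    m[0, 2] += dx     # fold the translation into the affine matrix
                    m[1, 2] += dy
                    images.append((cv2.warpAffine(noisy, m, (w, h)), (dx, dy, angle)))
        return images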
10. An automatic extraction device for a positioning kernel area of image matching, characterized by comprising:
the data acquisition module is used for acquiring the set reference image and the set image matching data;
the region determining module is used for determining a candidate region of the reference image according to the image matching data;
a positioning kernel extraction module, configured to extract a positioning kernel region in the candidate region; the positioning kernel extraction module comprises:
the feature extraction unit is used for performing feature extraction on the candidate region and generating a set of feature points;
the rectangle generation unit is used for determining a plurality of minimum circumscribed positive rectangles according to the set of feature points to obtain a plurality of rectangular areas;
the uniqueness detection unit is used for performing uniqueness detection on the rectangular areas and selecting a rectangular area that passes the uniqueness detection;
and the accuracy detection unit is used for performing positioning accuracy detection on the rectangular area that passes the uniqueness detection, and selecting the rectangular area that passes the positioning accuracy detection as the positioning kernel region extracted from the candidate region.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 9.
CN202310572055.9A 2023-05-19 2023-05-19 Automatic extraction method, device, equipment and medium for positioning kernel area of image matching Pending CN116543144A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310572055.9A CN116543144A (en) 2023-05-19 2023-05-19 Automatic extraction method, device, equipment and medium for positioning kernel area of image matching

Publications (1)

Publication Number Publication Date
CN116543144A 2023-08-04

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095001A (en) * 2023-10-19 2023-11-21 南昌工学院 Online product quality detection method, device and storage device

Similar Documents

Publication Publication Date Title
CN109360232B (en) Indoor scene layout estimation method and device based on condition generation countermeasure network
CN110032998B (en) Method, system, device and storage medium for detecting characters of natural scene picture
US9619691B2 (en) Multi-view 3D object recognition from a point cloud and change detection
CN105844669B (en) A kind of video object method for real time tracking based on local Hash feature
CN110110646B (en) Gesture image key frame extraction method based on deep learning
Barath et al. Learning to find good models in RANSAC
Marie et al. The delta medial axis: a fast and robust algorithm for filtered skeleton extraction
CN114529837A (en) Building outline extraction method, system, computer equipment and storage medium
CN116452644A (en) Three-dimensional point cloud registration method and device based on feature descriptors and storage medium
CN106936964A (en) A kind of mobile phone screen angular-point detection method based on Hough transformation template matches
CN108875504B (en) Image detection method and image detection device based on neural network
CN117576219A (en) Camera calibration equipment and calibration method for single shot image of large wide-angle fish-eye lens
CN111368573A (en) Positioning method based on geometric feature constraint
CN116543144A (en) Automatic extraction method, device, equipment and medium for positioning kernel area of image matching
CN111275616B (en) Low-altitude aerial image splicing method and device
CN114155285B (en) Image registration method based on gray histogram
CN109766943B (en) Template matching method and system based on global perception diversity measurement
CN115775220A (en) Method and system for detecting anomalies in images using multiple machine learning programs
CN108597589B (en) Model generation method, target detection method and medical imaging system
CN117830623A (en) Image positioning area selection method, device, equipment and storage medium
CN111553927B (en) Checkerboard corner detection method, detection system, computer device and storage medium
CN105809657A (en) Angular point detection method and device
WO2020197495A1 (en) Method and system for feature matching
US20100111420A1 (en) Registration and visualization of image structures based on confiners
CN117333518A (en) Laser scanning image matching method, system and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination