
CN117830623A - Image positioning area selection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117830623A
CN117830623A
Authority
CN
China
Prior art keywords: image, positioning, area, matching, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410065696.XA
Other languages
Chinese (zh)
Inventor
钱勇
王承峰
陈思乡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchuan Technology Suzhou Co ltd
Original Assignee
Changchuan Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchuan Technology Suzhou Co ltd filed Critical Changchuan Technology Suzhou Co ltd
Priority to CN202410065696.XA priority Critical patent/CN117830623A/en
Publication of CN117830623A publication Critical patent/CN117830623A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/7625Hierarchical techniques, i.e. dividing or merging patterns to obtain a tree-like representation; Dendograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image positioning area selection method, device, equipment, and storage medium. The image positioning area selection method comprises the following steps: acquiring a reference image to be selected and image matching parameters for image matching; performing blurred-region filtering on the reference image to generate a clear region image of the reference image; performing feature extraction based on the clear region image to obtain image feature points; and extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters. With this method and device, the sharpness of the positioning area can be improved, and with it the accuracy of image matching performed using the positioning area.

Description

Image positioning area selection method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for selecting an image positioning area.
Background
With the development of machine vision technology, image matching is used increasingly widely. In semiconductor wafer inspection, for example, image matching is used to register the image of the die under inspection with the die template image before subsequent inspection processing.
Image matching first frames a positioning area in a reference image to build a model, then searches for the position and angle of that positioning model in a target image. The clearer the image selected as the positioning area, the higher the image matching accuracy.
In practice, however, some reference images suffer from blurring. For example, when a die undergoes a production process such as bumping, the bump's imaging height differs from that of the die, so defocus blurring can occur and the edge regions of the bump appear blurred in the image. If a blurred image region is used as the positioning area, matching deviation results and image matching accuracy is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image positioning area selection method, apparatus, device, and storage medium that can improve image matching accuracy.
An image positioning area selection method comprises the following steps:
acquiring a reference image to be selected and image matching parameters for image matching;
performing blurred-region filtering on the reference image to generate a clear region image of the reference image;
performing feature extraction based on the clear region image to obtain image feature points;
and extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
An image positioning area selection device comprises:
an information acquisition module, configured to acquire a reference image to be selected and image matching parameters for image matching;
a blur filtering module, configured to perform blurred-region filtering on the reference image to generate a clear region image of the reference image;
a feature extraction module, configured to perform feature extraction based on the clear region image to obtain image feature points;
and a region selection module, configured to select a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
A computer device comprises a memory storing a computer program and a processor that, when executing the computer program, implements the following steps:
acquiring a reference image to be selected and image matching parameters for image matching;
performing blurred-region filtering on the reference image to generate a clear region image of the reference image;
performing feature extraction based on the clear region image to obtain image feature points;
and extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
A computer-readable storage medium stores a computer program that, when executed by a processor, implements the following steps:
acquiring a reference image to be selected and image matching parameters for image matching;
performing blurred-region filtering on the reference image to generate a clear region image of the reference image;
performing feature extraction based on the clear region image to obtain image feature points;
and extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
According to the image positioning area selection method, device, equipment, and storage medium, blurred-region filtering is applied to the reference image before feature extraction to generate a clear region image, and the image feature points are extracted based on the clear region image. The extracted image feature points therefore lie in the clear region of the reference image, so a positioning area within the clear region can be extracted from the reference image according to those feature points. This prevents a blurred region from being used as the positioning area, improves the sharpness of the positioning area, and thereby improves the accuracy of image matching performed using the positioning area.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application or of conventional techniques more clearly, the drawings required for describing them are briefly introduced below. The drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for selecting an image positioning area according to an embodiment;
FIG. 2 is a schematic flow chart of performing blurred-region filtering on a reference image to generate a clear region image of the reference image in one embodiment;
FIG. 3 is a reference image in one embodiment;
FIG. 4 is the clear region image obtained by applying blurred-region filtering to the reference image shown in FIG. 3;
FIG. 5 is a schematic diagram showing the distribution of image feature points in a reference image according to one embodiment;
FIG. 6 is a flowchart of extracting a positioning area for image matching from a reference image according to image feature points and image matching parameters in one embodiment;
FIG. 7 is a schematic diagram of the distribution of a plurality of rectangular regions in one embodiment;
FIG. 8 is a schematic diagram of the distribution of the rectangular regions from FIG. 7 that pass uniqueness detection;
FIG. 9 is a schematic view of the finally selected positioning area;
FIG. 10 is a block diagram of an image positioning area selection device according to an embodiment.
Detailed Description
In order to facilitate an understanding of the present application, the application is described more fully below with reference to the relevant drawings, in which examples of the application are shown. This application may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," and/or the like, specify the presence of stated features, integers, steps, operations, elements, components, or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.
In one embodiment, there is provided an image positioning area selection method, as shown in fig. 1, including the steps of:
s110: and acquiring a reference image to be selected and an image matching parameter for image matching.
The reference image to be selected is an image of a positioning area to be extracted; in image matching, a positioning area is extracted from a reference image, and a positioning kernel is created. The image matching parameters are working parameters of image matching, for example, the image matching parameters can comprise at least one of image matching methods and parameters corresponding to the matching methods; the image matching method is data for explaining a method used for image matching, and the corresponding parameters of the matching method comprise matching search scores, search ranges and the like. The reference image and the image matching parameters may be obtained by user configuration.
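As a concrete illustration, the acquired inputs might be grouped as follows. This is a minimal sketch in Python with OpenCV; the field names (method, search_score, search_range) and the example values are illustrative assumptions, not names or values fixed by the patent.

```python
from dataclasses import dataclass
from typing import Tuple

import cv2
import numpy as np


@dataclass
class MatchingParams:
    """Hypothetical container for the image matching parameters described above."""
    method: str                    # e.g. "feature_point" or "gray_level"
    search_score: float            # minimum matching search score
    search_range: Tuple[int, int]  # e.g. allowed rotation range, in degrees


def acquire_inputs(image_path: str) -> Tuple[np.ndarray, MatchingParams]:
    """Acquire the reference image to be selected and user-configured parameters."""
    reference = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    params = MatchingParams(method="feature_point",
                            search_score=0.7,
                            search_range=(-10, 10))
    return reference, params
```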
S130: performing blurred-region filtering on the reference image to generate a clear region image of the reference image.
The blurred regions in the reference image are filtered out to generate a clear region image.
S150: performing feature extraction based on the clear region image to obtain image feature points.
Because features are extracted based on the clear region image, the extracted image feature points are feature points within the clear region of the image. Specifically, feature extraction based on the clear region image yields a plurality of image feature points.
S170: extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
Specifically, a positioning area is extracted from the reference image by a positioning area extraction method according to the image feature points and the image matching parameters. Since the image feature points lie in the clear region of the image, the positioning area extracted based on them also lies in the clear region.
According to this image positioning area selection method, blurred-region filtering is applied to the reference image before feature extraction to generate a clear region image, and the image feature points are extracted based on the clear region image. The extracted image feature points therefore lie in the clear region of the reference image, so a positioning area within the clear region can be extracted according to those feature points. This prevents a blurred region from being used as the positioning area, improves the sharpness of the positioning area, and thereby improves the accuracy of image matching performed using the positioning area.
In one embodiment, referring to fig. 2, step S130 includes the following steps.
S131: the reference image is divided into a plurality of window areas according to a preset window size.
The preset window size can be set according to actual conditions. For example, a window may be slid pixel by pixel from the upper-left corner to the lower-right corner of the reference image, dividing the reference image into window regions of size p×p.
S133: calculating the no-reference structural sharpness of each window region separately.
S135: binarizing the window regions based on the no-reference structural sharpness of each window region to generate a clear region image in which the blurred regions are filtered out.
Binarizing the window regions according to their no-reference structural sharpness filters out the blurred regions of the reference image while retaining the clear regions. For example, the reference image shown in FIG. 3 is divided into window regions, the no-reference structural sharpness of each window region is calculated, and binarization based on this sharpness produces the clear region image shown in FIG. 4.
Image blurring attenuates or destroys high-frequency information, which appears in the spatial domain as smoothed edges: only the rough contours of objects remain distinguishable. In this embodiment, blurred regions are filtered using the no-reference structural sharpness, which, grounded in edge detection theory, filters blurred regions accurately.
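A minimal sketch of steps S131 to S135, under stated assumptions: the window size p = 32 and the sharpness threshold of 0.5 are illustrative values, non-overlapping tiling is used for brevity (the text slides the window pixel by pixel), and the nrss() helper is sketched in the next embodiment.

```python
import numpy as np


def clear_region_image(reference: np.ndarray, p: int = 32,
                       threshold: float = 0.5) -> np.ndarray:
    """Steps S131-S135: tile the reference image into p x p window regions,
    score each window's no-reference structural sharpness, and binarize so
    that blurred windows are zeroed out and sharp windows keep their pixels."""
    h, w = reference.shape
    mask = np.zeros_like(reference)
    for y in range(0, h - p + 1, p):        # non-overlapping tiling for brevity;
        for x in range(0, w - p + 1, p):    # the text slides pixel by pixel
            window = reference[y:y + p, x:x + p]
            if nrss(window) >= threshold:   # nrss() is sketched further below
                mask[y:y + p, x:x + p] = 1
    return reference * mask                 # blurred regions filtered to black
```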
In one embodiment, step S133 includes steps (a 1) to (a 4).
Step (a1): taking the image of the window region as the image to be evaluated, and filtering the image to be evaluated to obtain a contrast image.
For example, assuming the image to be evaluated is E, the contrast image is defined as E_blur = blur(E), where blur(·) may be a mean filter or a Gaussian filter.
Step (a2): extracting gradient information from the image to be evaluated and from the contrast image using an edge detection operator, generating a gradient image for each.
For example, the Sobel operator is used to extract the horizontal and vertical edge information of image E and image E_blur, yielding their gradient images G and G_blur, respectively.
Step (a3): finding the N image blocks with the richest gradient information in the gradient image of the image to be evaluated, together with their corresponding blocks in the gradient image of the contrast image.
For example, the gradient image G can be divided into 8×8 blocks with a step of 4, so that adjacent blocks overlap by 50% and important edges are not lost. The variance of each block is calculated; the larger the variance, the richer the gradient information. The N blocks with the largest variance are selected and denoted x_i, i = 1, 2, …, N; the corresponding blocks in G_blur are denoted y_i, i = 1, 2, …, N.
Step (a4): calculating the no-reference structural sharpness of the window region from the N image blocks in the gradient image of the image to be evaluated and the corresponding blocks in the gradient image of the contrast image.
Specifically, the structural similarity SSIM(x_i, y_i) of each pair x_i and y_i is calculated first. The no-reference structural sharpness (NRSS) of image E may then be defined as:
NRSS(E) = 1 − (1/N) · Σ SSIM(x_i, y_i), i = 1, 2, …, N.
The sharper the image E, the higher its NRSS score.
In one embodiment, the image matching parameters include the image matching method. The image matching method is any one of a feature point matching method and a gray-level matching method; for example, on a visual inspection device, a user can select gray-level correlation matching or feature point matching.
Specifically, step S150 includes: when the image matching method is a feature point matching method, if the image processing library provides a feature point extractor interface, that interface is called to extract features from the clear region of the reference image corresponding to the clear region image, obtaining image feature points; if the image processing library does not provide a feature point extractor interface, a feature extraction method corresponding to the feature point matching method is used on that region instead, obtaining image feature points.
The clear region of the reference image corresponding to the clear region image is the region of the reference image that corresponds to where the clear region image retains image content, i.e., the sharp region of the reference image. For example, feature extraction is performed in the region of the reference image of FIG. 3 corresponding to the unfiltered content of FIG. 4; the extracted feature points are marked with white lines in FIG. 5.
The feature extraction method corresponding to the feature point matching method is an extraction method based on the same principle as the matching method. Taking feature point matching in a visual inspection device as an example, where matching is the feature-point-based matching provided by a vision library (e.g., Open eVision), feature extraction may be performed using the feature point extractor interface provided by that library. For feature point matching, the extractor interface provided by the image processing library is used preferentially; only when no such interface is provided is an extraction method of the same principle used, which is convenient and fast. Specifically, after feature extraction is completed, the image feature points may be stored in a feature point set.
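A sketch of this masked feature extraction, using OpenCV's ORB detector as a stand-in for both branches; the patent's vision library and its extractor interface are not reproduced here, so the detector choice and the nfeatures value are assumptions. The clear region image doubles as a detection mask: pixels that survived blur filtering are nonzero.

```python
import cv2
import numpy as np


def extract_feature_points(reference: np.ndarray, clear: np.ndarray) -> np.ndarray:
    """Extract feature points only inside the clear region of the reference image."""
    mask = (clear > 0).astype(np.uint8) * 255      # clear region -> detection mask
    detector = cv2.ORB_create(nfeatures=2000)      # stand-in feature point extractor
    keypoints = detector.detect(reference, mask=mask)
    # Store the image feature points in a feature point set, as in the text
    return np.array([kp.pt for kp in keypoints], dtype=np.float32)
```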
In another embodiment, if the image matching method is a gray-level matching method, a window of fixed size may be slid, with a fixed step, over the clear region of the reference image corresponding to the clear region image; the entropy of the image inside the window is calculated at each step, and the center point coordinates of the window at each step are recorded. The maximum entropy is then determined, the windows whose entropy exceeds a preset multiple of the maximum entropy are screened out, and the center point coordinates of the screened windows are taken as the image feature points. The preset multiple is a value smaller than 1, for example 0.9. In this way, different feature extraction methods are used for different image matching methods, and the corresponding effective features can be extracted automatically.
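For the gray-level branch, a sketch of the entropy-window procedure; the window size and stride are assumed values, the 0.9 multiple is the example from the text, and entropy is plain Shannon entropy over a 256-bin histogram.

```python
import numpy as np


def entropy(window: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image window."""
    hist, _ = np.histogram(window, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def gray_level_feature_points(reference, clear, win=32, step=8, multiple=0.9):
    """Slide a fixed-size window over the clear region, keeping the center
    points of windows whose entropy exceeds `multiple` times the maximum."""
    h, w = reference.shape
    candidates = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            if (clear[y:y + win, x:x + win] > 0).all():  # window fully in clear region
                e = entropy(reference[y:y + win, x:x + win])
                candidates.append((e, (x + win // 2, y + win // 2)))
    if not candidates:
        return []
    e_max = max(e for e, _ in candidates)
    return [pt for e, pt in candidates if e > multiple * e_max]
```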
In one embodiment, referring to fig. 6, step S170 includes steps S171 to S177.
S171: clustering the image feature points to obtain a plurality of clusters.
S173: generating the minimum circumscribed upright rectangle of each cluster separately to obtain a plurality of rectangular regions.
The minimum circumscribed upright rectangle is the minimum bounding rectangle whose sides are parallel to the coordinate axes; it can be determined by a known minimum-bounding-rectangle method, for example by taking the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate over the cluster's feature points as the rectangle boundary. The region enclosed by this rectangle is a rectangular region, as shown in FIG. 7.
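Steps S171 and S173 reduce to a few lines once cluster labels are available; a sketch follows, with the clustering itself deferred to the HDBSCAN embodiment below.

```python
import numpy as np


def cluster_rectangles(points: np.ndarray, labels: np.ndarray):
    """Minimum axis-aligned bounding rectangle (x_min, y_min, x_max, y_max)
    of each cluster of image feature points; label -1 marks noise points."""
    rects = []
    for k in set(labels) - {-1}:
        cluster = points[labels == k]
        x_min, y_min = cluster.min(axis=0).astype(int)
        x_max, y_max = cluster.max(axis=0).astype(int)
        rects.append((x_min, y_min, x_max, y_max))
    return rects
```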
S175: performing uniqueness detection on the rectangular regions according to the image matching parameters, and selecting the rectangular regions passing uniqueness detection.
Uniqueness detection checks whether a rectangular region is a unique feature region in the reference image. For example, the uniqueness of a rectangular region can be checked by testing whether a positioning kernel created from it matches a unique region in the target image.
S177: performing positioning-accuracy detection on the rectangular regions passing uniqueness detection according to the image matching parameters, and framing the rectangular region passing positioning-accuracy detection in the reference image as the positioning area.
Positioning-accuracy detection checks the positioning accuracy of the positioning kernel extracted from a rectangular region. For example, the deviation between a rectangular region and the region its positioning kernel matches in the target image can be checked against a set requirement, thereby testing whether the positioning accuracy is satisfied. Specifically, for the rectangular regions passing uniqueness detection, positioning-accuracy detection is performed on each in turn; when the currently detected rectangular region passes, it is taken as the positioning area and positioning-accuracy detection stops, ensuring that exactly one positioning area is extracted.
Rectangular regions are generated from the image feature points and then subjected to uniqueness detection and positioning-accuracy detection, so the extracted positioning area meets both the uniqueness and the positioning-accuracy requirements. The extraction effect is therefore good: the uniqueness and positioning accuracy of image matching are effectively improved, and with them the image matching accuracy.
In one embodiment, step S171 includes: clustering the image feature points using an HDBSCAN clustering algorithm to obtain a plurality of clusters.
Clustering algorithms mainly include partitioning methods (K-MEANS, K-MEDOIDS, CLARANS), hierarchical methods (BIRCH, CURE, CHAMELEON), density-based methods (DBSCAN, OPTICS, DENCLUE), grid-based methods (STING, CLIQUE, WaveCluster), and model-based methods (statistical and neural-network models), among others.
HDBSCAN adopts the ideas of both density clustering and hierarchical clustering, combining the DBSCAN algorithm with hierarchical clustering. Compared with traditional distance-based clustering algorithms, such as K-means or plain hierarchical clustering, it can adaptively identify clusters of different densities and automatically determine the number of clusters; on data sets that are otherwise hard to handle, it can identify small clusters instead of discarding them as noise, as other clustering algorithms do, so the rectangular regions selected from its clusters are richer in features. It will be appreciated that, in other embodiments, other suitable clustering methods may be selected according to the target of the image feature point set, such as shape-based or size-based clustering.
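A minimal usage sketch with the third-party hdbscan package (scikit-learn ≥ 1.3 also ships sklearn.cluster.HDBSCAN); min_cluster_size is an assumed tuning value.

```python
import numpy as np
import hdbscan  # third-party package


def cluster_feature_points(points: np.ndarray) -> np.ndarray:
    """Cluster image feature points with HDBSCAN, which adapts to clusters of
    varying density and determines the number of clusters automatically."""
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10)  # assumed tuning value
    return clusterer.fit_predict(points)              # label -1 marks noise


# Combined with the rectangle helper sketched earlier:
# labels = cluster_feature_points(points)
# rects = cluster_rectangles(points, labels)
```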
In one embodiment, the image matching parameters include the parameters corresponding to the matching method, which comprise the matching search score, the search range, and the like.
Specifically, step S175 includes: creating a positioning kernel using the reference image and the current rectangular region; searching for a matched positioning kernel, using the parameters corresponding to the matching method and the created positioning kernel, with the reference image itself as the target; if the number of matched positioning kernels is 1 and the current rectangular region has no intersection with the blurred regions filtered out of the reference image, the current rectangular region passes uniqueness detection and is stored in a uniqueness region set, and the positioning kernel created from the current rectangular region is stored in a positioning kernel set; and taking the next rectangular region as the current rectangular region and returning to the step of creating a positioning kernel using the reference image and the current rectangular region, looping until all rectangular regions have been traversed.
Specifically, the positioning kernel is created from the reference image, with the current rectangular region as the positioning area, using a known positioning kernel creation method. If exactly one positioning kernel is found in the matching search, the created positioning kernel matches a unique region in the target image. The blurred regions filtered out of the reference image correspond to the content-free white-grid areas in FIG. 4; if a rectangular region has no intersection with the filtered blurred regions, it contains no blurred content. Using "exactly one matched kernel" and "no intersection with the blurred regions" as the uniqueness criteria ensures that a rectangular region passing detection contains no blurred image content and can be matched uniquely, so the finally extracted positioning area is uniquely locatable and highly accurate. The rectangular regions passing uniqueness detection are framed in FIG. 8.
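A sketch of the uniqueness loop follows. The patent's positioning kernel is library-specific, so normalized cross-correlation template matching stands in for "create a positioning kernel and search for matches", and the score threshold is an assumed stand-in for the matching search score parameter.

```python
import cv2
import numpy as np


def uniqueness_filter(reference, clear, rects, score=0.9):
    """Keep the rectangular regions that match exactly one location in the
    reference image and do not intersect any filtered (blurred) region."""
    unique_rects, kernels = [], []
    blurred = (clear == 0)                       # zero pixels were filtered out
    for (x0, y0, x1, y1) in rects:
        if blurred[y0:y1, x0:x1].any():          # intersects a blurred region
            continue
        kernel = reference[y0:y1, x0:x1]         # stand-in positioning kernel
        result = cv2.matchTemplate(reference, kernel, cv2.TM_CCOEFF_NORMED)
        # Coarse uniqueness test: count candidate locations above the score
        if int((result >= score).sum()) == 1:
            unique_rects.append((x0, y0, x1, y1))
            kernels.append(kernel)
    return unique_rects, kernels
```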
In one embodiment, step S177 includes steps (b1) to (b5).
Step (b1): acquiring the detection images generated by performing geometric transformations of a plurality of magnitudes on the reference image.
Applying a geometric transformation of one magnitude to the reference image generates one detection image; applying geometric transformations of a plurality of magnitudes to the reference image yields a plurality of detection images. The geometric transformation may be a rigid transformation, an affine transformation, a rotation, and so on.
Step (b2): traversing the positioning kernel set; for the current positioning kernel, a matched positioning kernel is searched for in each detection image using the parameters corresponding to the matching method, and the center point coordinates of the matched region are determined, giving the positioning coordinates for the corresponding detection image.
Step (b3): for each detection image, calculating the coordinates obtained by applying that image's geometric transformation to the center point coordinates of the current positioning kernel's region, giving the transformed coordinates for the corresponding detection image.
For example, assume the center point coordinate of the current positioning kernel's region is A, and the first detection image was obtained by applying a geometric transformation of magnitude B to the reference image. Applying the geometric transformation of magnitude B to A yields coordinate C, which is the transformed coordinate corresponding to the first detection image.
Step (b4): calculating the offset between the positioning coordinates and the transformed coordinates corresponding to the same detection image.
Based on the current positioning kernel, each detection image corresponds to one pair of positioning coordinates and transformed coordinates, and the offset between the two is calculated for each detection image. For example, the offsets in the X and Y directions between the positioning coordinates and the transformed coordinates may be calculated; this offset represents the positioning error of the current positioning kernel.
Step (b5): if the offsets corresponding to all detection images are less than or equal to a preset error threshold, framing the rectangular region in the uniqueness region set corresponding to the current positioning kernel in the reference image as the positioning area of the reference image, and exiting the traversal of the positioning kernel set.
If the offsets corresponding to all detection images are less than or equal to the preset error threshold, the positioning error of the current positioning kernel is within the allowed range and the positioning accuracy requirement is met; that is, the rectangular region corresponding to the current positioning kernel passes positioning-accuracy detection. That rectangular region is then taken as the positioning area and the traversal of the positioning kernel set is exited, so no further positioning kernels are checked, ensuring that exactly one positioning area is extracted. Conversely, if any offset among those corresponding to the detection images exceeds the preset error threshold, the next positioning kernel in the positioning kernel set is taken as the new current positioning kernel and the process returns to step (b2). After uniqueness detection, the detection images serve as target images, and the positional deviation between the region from which a positioning kernel was created and the region that kernel matches in the target image is checked, so that the automatically extracted positioning area is unique, accurate, and precisely locatable. The finally selected positioning area is shown in FIG. 9.
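A sketch of steps (b1) to (b5); the error threshold and the stand-in template matcher are assumptions, and the transforms and detection images come from the detection_images() helper sketched in the next embodiment.

```python
import cv2
import numpy as np


def accuracy_select(unique_rects, kernels, transforms, images, err_threshold=1.0):
    """Return the first rectangular region whose positioning error stays within
    the threshold on every detection image (steps b1-b5). `transforms` are 2x3
    rigid matrices; `images` are the transformed reference images."""
    for rect, kernel in zip(unique_rects, kernels):
        x0, y0, x1, y1 = rect
        center = np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0, 1.0])
        ok = True
        for M, img in zip(transforms, images):
            # (b2) locate the kernel in the detection image -> positioning coords
            res = cv2.matchTemplate(img, kernel, cv2.TM_CCOEFF_NORMED)
            _, _, _, top_left = cv2.minMaxLoc(res)
            located = np.array([top_left[0] + (x1 - x0) / 2.0,
                                top_left[1] + (y1 - y0) / 2.0])
            # (b3) transform the original center point -> transformed coords
            expected = M @ center
            # (b4) offset between the two coordinate estimates
            if np.abs(located - expected).max() > err_threshold:
                ok = False
                break
        if ok:
            return rect  # (b5) first region passing positioning-accuracy detection
    return None
```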
In one embodiment, step (b1) comprises: adding Gaussian noise to the reference image to obtain a Gaussian noise image; combining a plurality of preset translation amounts with a plurality of preset rotation amounts to obtain a plurality of rigid-transformation magnitudes, each comprising a translation amount and a rotation amount; and performing a rigid transformation on the Gaussian noise image at each magnitude to obtain a plurality of detection images.
Gaussian noise may be added by generating a gray-scale image, of the same size as the reference image, whose values follow a Gaussian distribution, and adding it to the reference image to obtain the Gaussian noise image. Taking 3 translation amounts and 3 rotation amounts as an example, one value is selected from the translation list and one from the rotation list to form one rigid-transformation magnitude; the combinations yield 9 magnitudes, forming a rigid-transformation magnitude set. The set is traversed; for the current magnitude, the rigid transformation matrix is computed from the rotation and translation amounts, and the rigid transformation is applied to the Gaussian noise image to obtain a detection image. In this embodiment, by first adding Gaussian noise to the reference image and then applying rigid transformations of different magnitudes, a plurality of detection images are obtained from a single reference image for positioning-accuracy detection, so that positioning-accuracy detection is achieved even when only one reference image is available.
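A sketch of this embodiment; the noise standard deviation and the translation/rotation lists are example values giving the nine-magnitude combination described above.

```python
import cv2
import numpy as np


def detection_images(reference, translations=(-2.0, 0.0, 2.0),
                     rotations=(-1.0, 0.0, 1.0), noise_sigma=5.0):
    """Add Gaussian noise to the reference image, then apply every combination
    of translation (pixels) and rotation (degrees) as a rigid transform."""
    h, w = reference.shape
    noise = np.random.normal(0.0, noise_sigma, reference.shape)
    noisy = np.clip(reference.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    transforms, images = [], []
    for t in translations:
        for r in rotations:                       # 3 x 3 = 9 rigid magnitudes
            M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), r, 1.0)
            M[:, 2] += t                          # translation (same t on both axes here)
            transforms.append(M)
            images.append(cv2.warpAffine(noisy, M, (w, h)))
    return transforms, images
```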
It should be understood that, although the steps in the flowcharts of FIGS. 1-2 and 6 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-2 and 6 may include sub-steps or stages that are not necessarily completed at the same time but may be performed at different times; these sub-steps or stages need not be performed sequentially, and may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 10, an image positioning area selection device is provided, comprising: an information acquisition module 110, a blur filtering module 130, a feature extraction module 150, and a region selection module 170.
The information acquisition module 110 is configured to acquire a reference image to be selected and image matching parameters for image matching.
The blur filtering module 130 is configured to perform blurred-region filtering on the reference image to generate a clear region image of the reference image.
The feature extraction module 150 is configured to perform feature extraction based on the clear region image to obtain image feature points.
The region selection module 170 is configured to select a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
According to this image positioning area selection device, blurred-region filtering is applied to the reference image before feature extraction to generate a clear region image, and the image feature points are extracted based on the clear region image. The extracted image feature points therefore lie in the clear region of the reference image, so a positioning area within the clear region can be selected from the reference image according to those feature points. This prevents a blurred region from being used as the positioning area, improves the sharpness of the positioning area, and thereby improves the accuracy of image matching performed using the positioning area.
For the specific definition of the image positioning area selection device, reference may be made to the definition of the image positioning area selection method above; details are not repeated here. The modules in the device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in a computer device, or stored as software in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the modules. It should be noted that the division into modules in the embodiments of the present application is schematic and is merely a logical function division; other division manners may be used in actual implementation.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program; the processor implements the steps of the method embodiments described above when executing the computer program.
Because the processor implements the steps of the method embodiments described above when executing the computer program, the computer device avoids using a blurred region as the positioning area and can improve the sharpness of the positioning area, thereby improving the accuracy of image matching performed using the positioning area.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored that, when executed by a processor, implements the steps of the method embodiments described above.
Because the stored computer program implements the steps of the method embodiments described above when executed by a processor, the computer-readable storage medium can improve the sharpness of the positioning area, thereby improving the accuracy of image matching performed using the positioning area.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
In the description of this specification, reference to the terms "some embodiments," "other embodiments," "ideal embodiments," and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic descriptions of these terms do not necessarily refer to the same embodiment or example.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments merely represent several implementations of the present application; their description is specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that a person of ordinary skill in the art may make modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image positioning area selection method, characterized by comprising the following steps:
acquiring a reference image to be selected and image matching parameters for image matching;
performing blurred-region filtering on the reference image to generate a clear region image of the reference image;
performing feature extraction based on the clear region image to obtain image feature points;
and extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
2. The method of claim 1, wherein the performing blurred-region filtering on the reference image to generate a clear region image of the reference image comprises:
dividing the reference image into a plurality of window regions according to a preset window size;
calculating the no-reference structural sharpness of each window region separately;
and binarizing the window regions based on the no-reference structural sharpness of each window region to generate a clear region image in which the blurred regions are filtered out.
3. The method of claim 1, wherein the image matching parameters comprise an image matching method, and the performing feature extraction based on the clear region image to obtain image feature points comprises:
when the image matching method is a feature point matching method, if an image processing library provides a feature point extractor interface, calling the feature point extractor interface to extract features from the clear region of the reference image corresponding to the clear region image, to obtain image feature points;
and if the image processing library does not provide a feature point extractor interface, extracting features from the clear region of the reference image corresponding to the clear region image using a feature extraction method corresponding to the feature point matching method, to obtain image feature points.
4. The method according to any one of claims 1-3, wherein the extracting a positioning area for image matching from the reference image according to the image feature points and the image matching parameters comprises:
clustering the image feature points to obtain a plurality of clusters;
generating the minimum circumscribed upright rectangle of each cluster separately to obtain a plurality of rectangular regions;
performing uniqueness detection on the rectangular regions according to the image matching parameters, and selecting the rectangular regions passing uniqueness detection;
and performing positioning-accuracy detection on the rectangular regions passing uniqueness detection according to the image matching parameters, and framing the rectangular region passing positioning-accuracy detection in the reference image as the positioning area.
5. The method according to claim 4, wherein the clustering the image feature points to obtain a plurality of clusters comprises:
clustering the image feature points using an HDBSCAN clustering algorithm to obtain a plurality of clusters.
6. The method according to claim 4, wherein the image matching parameters include parameters corresponding to the matching method, and the performing uniqueness detection on the rectangular regions according to the image matching parameters and selecting the rectangular regions passing uniqueness detection comprises:
creating a positioning kernel using the reference image and the current rectangular region;
searching for a matched positioning kernel, using the parameters corresponding to the matching method and the created positioning kernel, with the reference image as the target;
if the number of matched positioning kernels is 1 and the current rectangular region has no intersection with the blurred regions filtered out of the reference image, the current rectangular region passes uniqueness detection and is stored in a uniqueness region set, and the positioning kernel created from the current rectangular region is stored in a positioning kernel set;
and taking the next rectangular region as the current rectangular region and returning to the step of creating a positioning kernel using the reference image and the current rectangular region.
7. The method according to claim 6, wherein the performing positioning-accuracy detection on the rectangular regions passing uniqueness detection according to the image matching parameters, and framing the rectangular region passing positioning-accuracy detection in the reference image as the positioning area, comprises:
acquiring detection images generated by performing geometric transformations of a plurality of magnitudes on the reference image;
traversing the positioning kernel set, searching for a matched positioning kernel in each detection image using the current positioning kernel and the parameters corresponding to the matching method, and determining the region center point coordinates of the matched positioning kernel to obtain the positioning coordinates of the corresponding detection image;
calculating the coordinates obtained by applying each detection image's geometric transformation to the region center point coordinates of the current positioning kernel, to obtain the transformed coordinates of the corresponding detection image;
calculating the offset between the positioning coordinates and the transformed coordinates corresponding to the same detection image;
and if the offsets corresponding to all detection images are less than or equal to a preset error threshold, framing the rectangular region in the uniqueness region set corresponding to the current positioning kernel in the reference image as the positioning area of the reference image, and exiting the traversal of the positioning kernel set.
8. An image positioning area selection device, characterized by comprising:
an information acquisition module, configured to acquire a reference image to be selected and image matching parameters for image matching;
a blur filtering module, configured to perform blurred-region filtering on the reference image to generate a clear region image of the reference image;
a feature extraction module, configured to perform feature extraction based on the clear region image to obtain image feature points;
and a region selection module, configured to select a positioning area for image matching from the reference image according to the image feature points and the image matching parameters.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202410065696.XA 2024-01-17 2024-01-17 Image positioning area selection method, device, equipment and storage medium Pending CN117830623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410065696.XA CN117830623A (en) 2024-01-17 2024-01-17 Image positioning area selection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410065696.XA CN117830623A (en) 2024-01-17 2024-01-17 Image positioning area selection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117830623A (en) 2024-04-05

Family

ID=90520912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410065696.XA Pending CN117830623A (en) 2024-01-17 2024-01-17 Image positioning area selection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117830623A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118505802A (en) * 2024-05-28 2024-08-16 惠然微电子技术(无锡)有限公司 Method, equipment and storage medium for positioning wafer measurement area


Similar Documents

Publication Publication Date Title
CN109978839B (en) Method for detecting wafer low-texture defects
CN113781402B (en) Method and device for detecting scratch defects on chip surface and computer equipment
CN109522908B (en) Image significance detection method based on region label fusion
CN107543828B (en) Workpiece surface defect detection method and system
WO2018107939A1 (en) Edge completeness-based optimal identification method for image segmentation
CN107945111B (en) Image stitching method based on SURF (speeded up robust features) feature extraction and CS-LBP (local binary Pattern) descriptor
JP7508556B2 (en) Character segmentation method, device, and computer-readable storage medium
CN107092871B (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN108765465A (en) A kind of unsupervised SAR image change detection
CN107305691A (en) Foreground segmentation method and device based on images match
US20170178341A1 (en) Single Parameter Segmentation of Images
WO2017135120A1 (en) Computationally efficient frame rate conversion system
CN103632137A (en) Human iris image segmentation method
CN117830623A (en) Image positioning area selection method, device, equipment and storage medium
CN113538500B (en) Image segmentation method and device, electronic equipment and storage medium
CN106815851A (en) A kind of grid circle oil level indicator automatic reading method of view-based access control model measurement
CN114970590A (en) Bar code detection method
CN111553927B (en) Checkerboard corner detection method, detection system, computer device and storage medium
CN112052859A (en) License plate accurate positioning method and device in free scene
CN113284158B (en) Image edge extraction method and system based on structural constraint clustering
CN116543144A (en) Automatic extraction method, device, equipment and medium for positioning kernel area of image matching
CN116363097A (en) Defect detection method and system for photovoltaic panel
CN114723767A (en) Stain detection method and device, electronic equipment and floor sweeping robot system
Hoshi et al. Accurate and robust image correspondence for structure-from-motion and its application to multi-view stereo
CN114255253A (en) Edge detection method, edge detection device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination