CN117392044A - Image processing method, system, device and storage medium - Google Patents
- Publication number
- CN117392044A (Application CN202210765487.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- value
- point
- edge
- points
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30148—Semiconductor; IC; Wafer
Abstract
An image processing method, system, device and storage medium. The method includes: acquiring an image to be analyzed, wherein the image to be analyzed contains an image of an edge contour of an object to be detected; and performing edge extraction on the image to be analyzed to obtain the edge contour of the object to be detected. According to the invention, the image to be analyzed of the object to be detected is acquired first, so that edge extraction can be performed on the image to be analyzed in combination with its features, allowing the edge contour of the object to be detected to be extracted accurately; at the same time, because the contour extraction is achieved by image processing of the image to be analyzed of the object to be detected, the efficiency of extracting the edge contour of the object to be detected is also improved.
Description
Technical Field
The embodiment of the invention relates to the technical field of optical detection, in particular to an image processing method, an image processing system, image processing equipment and a storage medium.
Background
In the field of optical detection, extracting the edge contour of a regularly shaped object to be detected in order to determine its center position is an important step, and is often applied in scenarios such as target identification and accurate product positioning.
Taking accurate product positioning as an example, most production equipment today is automated, and the position of the product in the equipment is usually controlled mechanically. For example, the production equipment conveys the product with a robot arm and detects the product edge with a sensor: a linear-array CCD sensor acquires edge data while the product rotates, the product center is fitted from the acquired edge data, and the motion of the robot arm is controlled with the fed-back data, so that when the product is delivered to the chuck it sits at the preset position.
However, the accuracy of such edge extraction and mechanical positioning is limited, so the alignment of the product may not reach the desired accuracy range.
Disclosure of Invention
Embodiments of the invention provide an image processing method, an image processing system, an image processing device and a storage medium, which are beneficial to accurately extracting the edge contour of an object to be detected.
To solve the above problem, an embodiment of the present invention provides an image processing method, including: acquiring an image to be analyzed, wherein the image to be analyzed contains an image of an edge contour of an object to be detected; and performing edge extraction on the image to be analyzed to obtain the edge contour of the object to be detected.
Correspondingly, an embodiment of the invention also provides an image processing system for executing the image processing method of the embodiment of the invention, which includes: an image acquisition module, used for acquiring an image to be analyzed, wherein the image to be analyzed contains an image of the edge contour of the object to be detected; and an edge extraction module, used for performing edge extraction on the image to be analyzed to obtain the edge contour of the object to be detected.
Correspondingly, an embodiment of the invention also provides a device, which includes at least one memory and at least one processor, where the memory stores one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the image processing method.
Correspondingly, the embodiment of the invention also provides a storage medium, wherein one or more computer instructions are stored in the storage medium, and the one or more computer instructions are used for realizing the image processing method according to the embodiment of the invention.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following advantages:
according to the image processing method provided by the embodiment of the invention, the image to be analyzed of the object to be detected is acquired first, so that edge extraction can be performed on the image to be analyzed in combination with its features; this allows the edge contour of the object to be detected to be extracted accurately. At the same time, because the contour extraction is achieved by image processing of the image to be analyzed of the object to be detected, the efficiency of extracting the edge contour of the object to be detected is also improved.
In an alternative scheme, the image to be analyzed is a weighted map. An image to be detected of the object to be detected at an edge position is acquired, in which the edge contour of the object to be detected extends along a preset direction range; multiple target feature maps of the image to be detected are then acquired and weighted to obtain a weighted map representing the correspondence between each first point position of the image to be detected and a weighted value. The weighting is used to make the gradient amplitude of the weighted value of the first point at the edge of the object to be detected larger than the gradient amplitude of the weighted value of the first points adjacent to it. Weighting in this way helps suppress noise (for example, it reduces the probability that patterns inside the object to be detected, background patterns or other noise patterns are misjudged as edges of the object to be detected) and allows the edge extraction on the weighted map to take each feature corresponding to the edge into account; meanwhile, because the gradient amplitude of the weighted value of the first point at the edge is larger than that of the adjacent first points, the edge contour of the object to be detected can be extracted accurately.
Drawings
FIG. 1 is a flow chart of an embodiment of an image processing method of the present invention;
FIG. 2 is a schematic diagram of an embodiment of the sample in step S1 of FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of each step in step S1 of FIG. 1;
FIG. 4 is a schematic diagram of an embodiment of the image to be measured in the step S10 of FIG. 3;
FIG. 5 is a schematic diagram of the image to be measured after rotation in step S11 of FIG. 3;
FIG. 6 is a schematic diagram of a continuous path taken from a start line to an end line;
FIG. 7 is a functional block diagram of one embodiment of an image processing system of the present invention;
FIG. 8 is a functional block diagram of one embodiment of an image acquisition module of FIG. 7;
fig. 9 is a hardware configuration diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
As known from the background art, the accuracy of the current edge extraction method needs to be improved.
Research shows that in current edge extraction, different illumination systems combined with different product materials and patterns tend to produce unevenly illuminated pictures; influenced by complex patterns inside the product, dirt on its surface, or low reflectivity of the product, the edge contrast is easily degraded, leading to incorrect extraction of the edge contour.
In order to solve the technical problem, an embodiment of the invention provides an image processing method. Referring to fig. 1, a flowchart of an embodiment of an image processing method of the present invention is shown. The image processing method of the present embodiment includes the following basic steps:
step S1: acquiring an image to be analyzed, wherein the image to be analyzed contains an image of the edge contour of an object to be detected;
step S2: and carrying out edge extraction on the image to be analyzed to obtain the edge contour of the object to be analyzed.
The image to be analyzed of the object to be measured is acquired first, so that edge extraction can be performed on the image to be analyzed in combination with its features, which allows the edge contour of the object to be measured to be extracted accurately; at the same time, because the contour extraction is achieved by image processing of the image to be analyzed of the object to be measured, the efficiency of extracting the edge contour of the object to be measured is also improved.
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
Referring to fig. 1 to 5 in combination, step S1 is performed to acquire an image to be analyzed 300 (as shown in fig. 6), which contains an image of the edge contour of the object to be measured 100.
The edge profile of the object to be measured 100 is then obtained by processing the image to be analyzed.
Specifically, edge extraction can be performed on the image to be analyzed in combination with its features, so that the edge contour of the object to be measured can be extracted accurately; at the same time, because the contour extraction is achieved by image processing of the image to be analyzed 300 of the object to be measured, the efficiency of extracting the edge contour of the object to be measured 100 can also be improved.
In this embodiment, in the image to be analyzed 300, the second attribute value of the first point at the edge of the object to be measured 100 has a minimum value or a maximum value.
Because the first point at the edge of the object to be measured 100 has an extreme value in the image to be analyzed 300, the first point at the edge is more prominent, so that when edge extraction is subsequently performed on the image to be analyzed 300, the edge contour of the object to be measured 100 can be obtained by extracting the path of the image to be analyzed 300 whose sum of second attribute values is the third extreme value.
In this embodiment, the shape of the object 100 is a circle. In other embodiments, the object may have other shapes, such as square, according to the specific type of the object. As an example, the object 100 is a wafer. In other embodiments, the object to be tested may be other products that need to extract the edge profile thereof.
In this embodiment, acquiring the image to be analyzed 300 includes: images to be analyzed of the object 100 at a plurality of different edge positions are acquired, and the number of the edge positions is greater than or equal to 3.
Since at least 3 points are generally required to determine a center, in order to improve the accuracy of the subsequent determination of the actual center of the object to be measured 100, the number of edge positions 100L is greater than or equal to 3.
In this embodiment, the plurality of edge positions 100L are uniformly distributed along the edge of the object to be measured 100, so that the accuracy of center determination is further improved. For example, the object 100 has a predetermined initial center S1, and taking the number of edge positions 100L as 3 as an example, the included angles β between the connecting lines of the adjacent edge positions 100L and the predetermined initial center S1 of the object 100 are all 120 °. In other embodiments, when the number of edge positions is 4, each edge position forms an included angle of 90 ° with the connecting line of the preset initial center of the object to be measured.
Taking the shape of the object to be measured 100 as a circle as an example, the preset initial center is the preset initial circle center. Specifically, in this embodiment, the position of the preset initial center S1 of the object to be measured 100 is the preset position used for mechanical positioning.
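As an illustration of how such uniformly distributed edge positions might be chosen, the sketch below samples n positions around the preset initial center at the preset radius; the helper name and the stage-coordinate convention are assumptions, not part of the embodiment.

    import numpy as np

    def sample_edge_positions(center, radius, n=3):
        """Sample n edge positions uniformly spaced around a circular object,
        given its preset initial center S1 and preset radius R."""
        cx, cy = center
        angles = np.arange(n) * (2.0 * np.pi / n)  # adjacent positions are 360/n degrees apart
        return [(cx + radius * np.cos(a), cy + radius * np.sin(a)) for a in angles]

    # Three edge positions 120 degrees apart around a preset center S1
    print(sample_edge_positions((0.0, 0.0), 150.0, n=3))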
As an example, the image 300 to be analyzed is a weighted graph, where the weighted graph is obtained by weighting multiple target feature graphs corresponding to the image to be measured of the object 100, so that each feature corresponding to the first point at the edge of the object 100 can be comprehensively considered, which is further beneficial to further improving the accuracy of extracting the edge contour of the object 100.
In one embodiment, the image processing method is used to determine the center of the object 100 by using the extracted edge profile of the object 100, so as to ensure the positioning accuracy of the object 100 in the apparatus. For example, in the semiconductor manufacturing and measuring process, the determination of the center of the wafer is an important step for precisely aligning each part of the wafer, so as to ensure that the positioning accuracy is controlled within a certain range when the position of the pattern to be measured is further determined according to the known relative position vector relationship between the center of the wafer and the pattern to be measured.
The steps of acquiring the image to be analyzed are described in detail below with reference to fig. 3 to 5.
Referring to fig. 3 to 5 in combination, step S10 is performed to obtain an image 200 of the object 100 at the edge position 100L, where the image 200 includes an image of an edge contour of the object 100, and the edge contour of the object 100 extends along a predetermined direction range (not shown) in the image 200.
The image to be measured 200 contains an image of the edge contour of the object to be measured 100, so that the edge contour of the object to be measured 100 is extracted by performing image processing on the image to be measured 200 later.
Performing image processing on the image to be measured 200 of the object to be measured 100 to achieve the contour extraction is also beneficial to improving the efficiency of extracting the edge contour of the object to be measured 100, which correspondingly helps meet the requirements of large-scale automated processing on the machine and effectively increases the machine throughput.
In this embodiment, acquiring the image 200 to be measured of the object 100 to be measured at the edge position 100L includes: selecting an edge position 100L according to a preset initial center S1 and a preset radius R of the object 100 to be detected, wherein the distance between the edge position 100L and the preset initial center S1 is the preset radius R; an image 200 of the object 100 to be measured at the edge position 100L is acquired.
Thus, acquiring the image 200 of the object 100 to be measured at the edge position 100L includes: an image 200 of the test object 100 at a plurality of different edge positions 100L is acquired. In this embodiment, the number of edge positions 100L is greater than or equal to 3.
As an example, the shape of the image to be measured 200 is rectangular, and the image to be measured 200 is a dark field image. In other embodiments, the image to be measured may be a bright field image.
In this embodiment, the image to be analyzed is acquired by using the detection system. Specifically, the image 200 to be measured of the object 100 to be measured at the edge position is acquired with the detection system.
The detection system comprises an image acquisition module and a motion platform, wherein the motion platform is used for supporting the object to be detected 100 and realizing relative movement between the image acquisition module and the object to be detected 100. Correspondingly, after the image 200 of the object 100 to be measured at the edge position 100L is obtained, the coordinates of each pixel point of the image 200 to be measured at the edge position 100L under the motion platform coordinate system can be obtained.
In this embodiment, before the target feature map of the image to be measured 200 is acquired, the image processing method further includes: executing step S12, in which an initial feature map is acquired from the image to be measured 200, where the initial feature map includes at least one of a gradient map, a gradient magnitude map, a gray map, a color map, and the image to be measured 200 itself, and the gradient map includes the gray gradient value of each first point.
If the target feature map obtained later includes a binary map, binarization processing is carried out using at least one of the initial feature maps to obtain the corresponding binary map, with the threshold condition on the first attribute value of the selected initial feature map as the criterion during the binarization.
The binary map is used to distinguish the first point at the edge of the object to be measured 100 from the adjacent first points, so that the points at the edge of the object to be measured 100 are more prominent after the target feature maps are subsequently weighted.
Specifically, an initial feature map that can make points at the edge of the object 100 more prominent may be selected according to the actual characteristics (e.g., pattern or color) of the object 100.
It will be appreciated that the first attribute value of the initial feature map is related to the pixel location of the image 200 under test.
In a specific embodiment, the initial feature map comprises a gradient map. Correspondingly, the gradient map is subjected to binarization processing.
Since the gray gradient value of the first point at the edge of the object 100 is generally larger, obtaining the binary image based on the gradient image is also beneficial to distinguishing the first point at the edge of the object 100 from the adjacent first point, and makes the first point at the edge of the object 100 more prominent.
In this embodiment, the sobel operator is used to calculate the pixel gradient of the image 200 to be measured, so as to obtain the gray gradient value. The gradient amplitude map and the gradient angle map of the image 200 to be measured can also be obtained by using the sobel operator.
The gradient amplitude information and the gradient angle information can be calculated simultaneously through the sobel operators in the X direction and the Y direction, and the method can be realized by only determining the size of the convolution kernel without additional parameters, thereby being beneficial to quickly and conveniently obtaining the gradient amplitude map and the gradient angle map.
Correspondingly, the convolution kernels of the sobel operator in the X direction and in the Y direction both have size n×n. Increasing the kernel size improves noise reduction: the larger the kernel, the better the smoothing, noise from isolated pixels is filtered out, sensitivity to noise points is reduced, and wider edges are found. However, if n is too large, the edge of the object to be measured 100 in the image to be measured 200 is easily filtered out as noise, and the thinner the edge, the higher the probability that it is filtered out. Therefore, in this embodiment, n is an integer from 7 to 9.
In the image to be measured 200, for any pixel point, the gray gradient value in the X direction is Gx and the gray gradient value in the Y direction is Gy; the gradient amplitude of the pixel point is √(Gx² + Gy²), and the gradient angle value of the pixel point is arctan(Gx/Gy), where the X direction is orthogonal to the Y direction.
As an example, the gradient angle is defined in the range of-180 ° to 180 °. In other embodiments, other definitions may be used, for example, from 0 ° to 360 °.
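A minimal sketch of this gradient computation, assuming OpenCV and NumPy are available; the function name is illustrative, and since cv2.Sobel only accepts kernel sizes up to 7, the n = 9 case mentioned above would need a custom kernel.

    import cv2
    import numpy as np

    def gradient_maps(gray, n=7):
        """Compute gray gradients, gradient amplitude and gradient angle with an
        n x n sobel kernel (cv2.Sobel supports odd sizes 1, 3, 5, 7)."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=n)  # gray gradient value in the X direction
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=n)  # gray gradient value in the Y direction
        magnitude = np.sqrt(gx ** 2 + gy ** 2)           # gradient amplitude
        angle = np.degrees(np.arctan2(gx, gy))           # arctan(Gx/Gy), defined in -180 to 180 degrees
        return magnitude, angle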
It should be noted that, in other embodiments, the binary image may be obtained based on other types of initial feature images.
For example, in another embodiment, when the image to be measured is a color chart and the color of the edge contour of the object to be measured is a specific color (for example, red), the initial feature chart may be selected from the color chart, and accordingly, when the binarization is performed subsequently, whether the first point is the first point related to the specific color is used as the screening criterion.
In other embodiments, when the edge profile of the object to be measured has a specific gray value, the initial feature map may select the original map (i.e. the image to be measured 200 itself), and correspondingly, the image to be measured is binarized by using the gray threshold value later to obtain a binary map of the image to be measured.
With continued reference to fig. 1, step S13 is executed to obtain, according to the image to be detected 200, a plurality of target feature maps, where each target feature map includes a correspondence between a position of each first point and a first attribute value, where the positions of the first points of the target feature maps are in one-to-one correspondence with positions of pixels of the image to be detected, and the types of the first attribute values of the plurality of target feature maps are different, and the first attribute values represent attributes of the target feature maps.
And weighting the target feature map to obtain weighted values of all the first attribute values of each first point so as to obtain a weighted map related to the first point position of the image to be detected, so that the weighted map is subjected to edge extraction.
In this embodiment, multiple kinds of target feature maps are used. By selecting multiple target feature maps, more weighting terms are added, so that more kinds of features can be taken into account, which further improves the accuracy of the subsequent edge extraction.
In this embodiment, the first attribute values of the plurality of target feature maps include any of a gradient magnitude, a gradient angle value, a pixel gray value, a binarized value, and a color characterization value.
The image processing method can extract the edge contour of the object 100 to be measured, so that a suitable target feature map is selected according to the actual characteristics (such as pattern or color) of the object 100 to be measured, so that the point at the edge of the object 100 to be measured is more prominent after weighting.
Specifically, the target feature map includes any of a gradient magnitude map, a gradient angle map, a gradient difference map, a gray scale map, a binary map, and a color map, wherein the gradient difference map includes gray scale gradient difference absolute values of the first points in a first gradient direction and a second gradient direction, and the first gradient direction and the second gradient direction are different.
Specifically, the absolute value of the gray gradient difference between the first gradient direction and the second gradient direction refers to: absolute value of difference between the gray gradient value in the first gradient direction and the gray gradient value in the second gradient direction.
In this embodiment, the first gradient direction is perpendicular to the extending direction of the edge of the object 100.
It should be further noted that, the image processing method can extract the edge contour of the object 100 to be measured, and the gradient amplitude of the first point at the edge of the object 100 to be measured is generally larger, so that the gradient amplitude map or the gradient difference map is used as the target feature map, which is beneficial to making the first point at the edge of the object 100 to be measured more prominent.
Similarly, since the gradient angle corresponding to the first point at the edge of the object 100 is relatively uniform and is generally located within a specific angle range, the gradient angle diagram is also beneficial to making the first point at the edge of the object 100 more protruding.
Similarly, other target feature maps, such as gray level maps, binary maps, and color maps, that are useful for distinguishing the first point at the edge of the object 100 from the remaining first points may be selected according to the actual characteristics of the image 200. For example, since the image to be measured is a dark field image or a bright field image, if the edge portion has a specific gray scale range, a gray scale map may also be selected as the target feature map.
It should be noted that, the gray scale image herein is an original gray scale image of the image to be measured.
In this embodiment, the target feature map includes a first target feature map and a second target feature map, and the first attribute values of the first points at the edges of the object to be measured 100 in the first target feature map and the second target feature map are both maximum values or both minimum values. The first attribute values of the first points at the edges of the object 100 in the first target feature map and the second target feature map have extreme values, so that the first points at the edges are more prominent.
Specifically, the first target feature map includes one or both of a gradient angle map and a binary map, the second target feature map is a gradient magnitude map or the image to be measured 200 itself, the gradient angle map includes gradient angle values of gray gradients of the first points, and the gradient angle values at the edge of the object to be measured 100 have a maximum value or a minimum value, and the gradient magnitude map includes gradient magnitudes of the gray gradients of the first points.
The gradient angles corresponding to the first point at the edge of the object to be measured 100 are uniform and generally lie in a specific angle range, so that the gradient angle value at the edge of the object to be measured 100 has a maximum value or a minimum value, and correspondingly, the gradient angle map is used for weighting, so that the gradient amplitude of the weighted value of the first point at the edge of the object to be measured 100 is larger than the gradient amplitude of the weighted value of the first point adjacent to the first point at the edge. Correspondingly, when the weighted graph is subjected to edge extraction in the follow-up process, the direction characteristics at the edge can be considered, so that the accuracy, the universality and the robustness of extracting the edge profile of the object to be detected 100 can be further improved.
The binary image is also a way to highlight the first point at the edge of the object 100, so the binary image can be used to weight the first point at the edge of the object 100 and the adjacent first point. The binary image generally includes two types of first points, wherein the first attribute value of one type of first point is 1, and the first attribute value of the other type of first point is 0, so that the binary image is used for weighting, so that the contribution value of the binary value of a specific first point to the weighted value is 0, and the edge point of the object 100 is more prominent in the weighted image.
As one example, the second target feature map is a gradient magnitude map. As can be seen from the foregoing description, the gradient magnitude of the first point at the edge of the object 100 is generally larger, so the weighting is performed by using the gradient magnitude graph, which is beneficial to distinguishing the first point at the edge of the object 100 from the adjacent first point. Correspondingly, when the weighted graph is subjected to edge extraction in the follow-up process, the amplitude characteristics at the edge can be considered, so that the accuracy, the universality and the robustness of the edge profile of the object to be detected can be further improved.
In this embodiment, taking the example that the target feature map includes a binary map, the binary map includes a first class of points with a first attribute value being a first class of attribute values, and a second class of points with a first attribute value being a second class of attribute values, and the edge of the object to be measured 100 has the second class of points.
That is, the second type points serve as candidate points for edge processing of the object to be measured, the first type points serve as non-candidate points, and the binary image is used to distinguish between the candidate points and the remaining first points of the edge profile of the object to be measured 100.
In a specific embodiment, the edge extraction is performed on the weighted graph to obtain the edge profile of the object to be measured, and the weighted value sum of the first points passing by the edge profile is the smallest, so that the binary graph includes the first class point with the first attribute value of 1 and the second class point with the first attribute value of 0, and the edge of the object to be measured 100 is provided with the second class point.
It should be noted that when the first attribute value of the candidate points of the edge contour is set to 0, the first attribute value of the first point at the edge of the object to be measured 100 in the first target feature map tends to be the minimum value; thus, after the target feature maps are weighted to obtain the weighted map, making the weight of the first target feature map larger than the weight of the second target feature map easily increases the probability that the second-class points are selected during the subsequent edge extraction on the weighted map.
Therefore, by adopting the binary image, the corresponding weighted value of the first class points in the weighted image is easy to be larger, so that most of the first points which do not meet the requirements can be easily screened out.
In this embodiment, the first target feature map is a binary map, and the first attribute values of the first points at the edges of the object to be measured 100 in the first target feature map and the second target feature map are all minimum values, so that the edge extraction can be performed by using a shortest path algorithm.
In this embodiment, acquiring the binary image according to the initial feature image includes: and taking the threshold condition of the first attribute value corresponding to the initial feature map as a standard, and performing binarization processing on the initial feature map.
As an example, an initial feature map used for obtaining the binary image is a gradient map, the gradient map includes gray gradient values of each first point, and correspondingly, the gradient map is converted into the binary image by taking a threshold condition of the gray gradient values as a screening standard.
In other embodiments, when the image to be measured is a color chart and the color of the edge contour of the object to be measured is a specific color (for example, red), the initial feature chart may be a color chart, and accordingly, when the binarization processing is performed, whether the first point is the first point related to the specific color is used as a screening criterion, the first attribute value of the first point related to the specific color is set to 0, and the first attribute values of the remaining first points are set to 1.
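The following sketch shows one possible binarization of the gradient map under the convention of this embodiment (candidate second-class points take the value 0); the threshold value itself is an assumed parameter.

    import numpy as np

    def binarize_gradient(gradient, threshold):
        """First points whose gray gradient value satisfies the threshold condition
        become second-class points (value 0, edge candidates); the rest become
        first-class points (value 1)."""
        return np.where(gradient >= threshold, 0, 1).astype(np.uint8)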
Because the first attribute values of the first points at the edge of the object to be measured 100 in the first target feature map and the second target feature map are both extreme values, in this embodiment, the second target feature map is an inverse value amplitude map, the first attribute values of the first points in the inverse value amplitude map are inversely related to the gradient amplitudes of the corresponding pixel points of the image to be measured 200, the first target feature map is a binary map, the binary map includes first class points with first attribute values being first class attribute values and second class points with first attribute values being second class attribute values, and the edge of the object to be measured 100 has second class points with second class attribute values smaller than the first class attribute values.
In the to-be-measured image 200, the gradient amplitude of the first point at the edge of the to-be-measured object 100 is generally larger, so that the inverse value amplitude graph is obtained, so that the first attribute value corresponding to the first point at the edge of the to-be-measured object 100 is smaller in the inverse value amplitude graph, so that the weighted value corresponding to the first point at the edge is smaller in the weighted graph, and the second attribute value is smaller than the first attribute value, so that the first attribute values of the first point at the edge of the to-be-measured object 100 in the first target feature graph and the second target feature graph are both minimum values, so that the second attribute value of the first point at the edge of the to-be-measured object 100 in the to-be-analyzed image has the minimum value, and the subsequent shortest path-based method is convenient to obtain the first point of the edge contour of the to be-measured object 100.
Specifically, acquiring the target feature map from the image to be measured 200 includes: acquiring the gradient amplitude of each pixel point of the image to be measured 200 to obtain a positive-value amplitude map; and performing inverse processing on the positive-value amplitude map to obtain the inverse-value amplitude map, where the inverse processing includes: subtracting the gradient amplitude of each first point in the positive-value amplitude map from a preset value to obtain the inverse amplitude value of each first point, thereby obtaining the inverse-value amplitude map; the preset value is greater than or equal to the maximum value of the gradient amplitudes of the first points.
In this embodiment, taking the image to be measured 200 as an image with 8-bit width (gray scale range from 0 to 255) as an example, the preset value is greater than or equal to 200. The preset value is close to the gray maximum value of an 8-bit-wide image (i.e. the gray value 255), and subtracting the gradient amplitude of each first point in the positive-value amplitude map from the preset value yields a smaller inverse amplitude value for first points with larger gradient amplitudes.
In one embodiment, the preset value is equal to the gray maximum value of an 8-bit wide image, i.e., the preset value is equal to 255.
It should be noted that, in other embodiments, the preset value may be smaller than the maximum value, or the preset value may be zero, according to practical situations.
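A minimal sketch of the inverse processing of this embodiment, assuming the positive-value amplitude map has already been scaled into the 8-bit gray range so that 255 can serve as the preset value.

    import numpy as np

    def inverse_amplitude_map(magnitude, preset=255.0):
        """Subtract each gradient amplitude from the preset value (>= the maximum
        amplitude), so first points with larger amplitudes get smaller inverse values."""
        assert preset >= float(magnitude.max()), "preset value must cover the largest gradient amplitude"
        return preset - magnitude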
Based on the above mechanism, in other embodiments, it may also be: the second target feature map is a positive value amplitude map, and the first attribute value of each first point in the positive value amplitude map is positively correlated with the gradient amplitude of the corresponding first point of the image to be detected; the first target feature graph is a binary graph, the binary graph comprises a first class point with a first attribute value being a first class attribute value and a second class point with the first attribute value being a second class attribute value, the edge of the object to be detected is provided with the second class point, and the second class attribute value is larger than the first class attribute value.
Referring to fig. 5 in combination, in this embodiment, before the multiple target feature maps are acquired from the image to be measured 200, acquiring the image to be analyzed 300 further includes: executing step S11, in which the image to be measured 200 is rotated so that the edge contour image of the object to be measured 100 extends along the preset direction range.
Having the edge contour image of the object to be measured 100 extend along a preset direction range helps balance the gray levels of the patterns on the two sides of the edge of the object to be measured 100 and reduces the probability of noise in redundant directions.
In addition, when edge extraction is performed on the image to be analyzed 300, a start line and an end line are usually determined first, with the direction from the start line to the end line taken as the reference direction; rotating the image to be measured 200 therefore allows the edge extraction to always be performed along the same reference direction.
It should be noted that, when the first target feature map includes a gradient angle map, the edge profile image of the object 100 is extended along the preset direction range, so that the same criteria are also used for screening.
In this embodiment, the image to be measured 200 includes a plurality of image edges, only one of which is a background image edge (not labeled) that is completely covered by the background image; rotating the image to be measured 200 so that the edge contour image of the object to be measured 100 extends along the preset direction range includes: acquiring the background image edge; and rotating the image so that the background image edge faces a preset direction.
As shown in fig. 5, fig. 5 (a) is a schematic diagram of fig. 4 (a) after rotation, fig. 5 (b) is a schematic diagram of fig. 4 (b) after rotation, and fig. 5 (c) is a schematic diagram of fig. 4 (c) after rotation, the image to be measured 200 is rotated, so that the edge profile image of the object to be measured 100 extends along the vertical direction range.
By making the edge of the background image face to the preset direction, the region of the object 100 to be measured in the image 200 to be measured is located at a fixed side, so that the position of the image of the object 100 to be measured in the region of the image 200 to be measured is uniform, and the influence of the inconsistency of the position of the region on each first attribute value in the target feature map is reduced.
For example, in the present embodiment, after the rotation process, the edge profile image of the object 100 is extended along the vertical direction, so that the edge of the background image faces the left side in the X direction, and the pattern area of the object 100 in the image 200 is located on the right side of the image 200 in the X direction.
In other embodiments, after the image to be measured is rotated, the edge contour image of the object to be measured is extended along the vertical direction range, and the edge of the background image is directed to the right side in the X direction. In other embodiments, after the image to be measured is rotated, the edge profile image of the object to be measured is extended along the horizontal direction range, so that the edge of the background image faces to the upper side or the lower side in the Y direction.
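As an illustration of the rotation step, the sketch below turns the image in 90-degree steps until the background image edge faces the preset direction (the left side in the X direction, as in this embodiment); it assumes the background image edge has already been identified, for example with the statistic described next.

    import numpy as np

    # Quarter-turn index per image edge; np.rot90 rotates counter-clockwise by 90*k degrees
    EDGE_INDEX = {"left": 0, "top": 1, "right": 2, "bottom": 3}

    def rotate_background_edge_to(img, background_edge, target="left"):
        """Rotate the image to be measured so that the background image edge
        faces the preset direction given by `target`."""
        k = (EDGE_INDEX[background_edge] - EDGE_INDEX[target]) % 4
        return np.rot90(img, k)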
Specifically, acquiring the background image edge includes: acquiring a gray statistic value for each image edge of the image to be measured 200, where the gray statistic value of an image edge includes: the average gray value of all pixel points on the image edge, or the sum of the average gray value of all pixel points on the image edge and a first extreme value; and taking the image edge whose gray statistic value has a second extreme value as the background image edge. When the image to be measured 200 is a dark-field image, the first extreme value is the maximum value and the second extreme value is the minimum value; when the image to be measured 200 is a bright-field image, the first extreme value is the minimum value and the second extreme value is the maximum value.
The background image is usually located at the border region of the image to be measured 200; therefore, the gray statistics are calculated over the first points within a preset-width region along each image edge.
When the image 200 to be measured is a dark field image, the background image is very dark, and therefore, the position corresponding to the minimum value in the gray statistic value is selected as the position of the background image.
Because the gray statistic value uses the average gray value of the first points, the robustness of this statistical approach is improved; for example, the influence of single-pixel noise on the accuracy of the gray statistic value is reduced, which improves its accuracy.
Moreover, when the image to be measured 200 is a dark-field image and the gray statistic value includes the sum of the average gray value of the first points on the image edge and the first extreme value, the first extreme value is the maximum value; that is, the gray value of the brightest first point on each image edge is also counted. If the gray statistic value of one image edge is still the smallest, that image edge is darker than the other image edges, which helps improve the accuracy of determining the position of the background image.
Similarly, in other embodiments, when the image to be measured is a bright field image, the background image is bright, and therefore, the position corresponding to the maximum value in the gray statistics is selected as the background image position. And if the gray level statistical value of one image edge is still maximum, the image edge is brighter than other image edges, and the accuracy of determining the position of the background image is improved.
It should be noted that the preset width of the image edge should not be too small or too large. If the preset width of the image edge is too small, the influence of noise on the gray statistics value is easy to become large, so that the accuracy of the gray statistics value is not improved; if the preset width of the image edge is too large, the pixels of the pattern of the object to be measured 100 are easily counted, so that the accuracy of the gray statistics value is easily reduced. For this reason, in the present embodiment, the preset width of the image edge is the sum of the widths of 5 to 10 pixels.
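A hedged sketch of this background-edge search; the border width of 8 pixels and the strip layout are assumptions within the 5-to-10-pixel range stated above.

    import numpy as np

    def find_background_edge(gray, border=8, dark_field=True):
        """Summarize each image edge over a preset-width strip by the mean gray value
        plus the brightest (dark field) or darkest (bright field) pixel, then pick the
        edge whose statistic is smallest (dark field) or largest (bright field)."""
        h, w = gray.shape
        strips = {
            "left":   gray[:, :border],
            "right":  gray[:, w - border:],
            "top":    gray[:border, :],
            "bottom": gray[h - border:, :],
        }
        stats = {name: float(s.mean()) + float(s.max() if dark_field else s.min())
                 for name, s in strips.items()}
        return (min if dark_field else max)(stats, key=stats.get)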
With continued reference to fig. 3, after obtaining the binary image, before weighting the target feature image, the method further includes: step S14 is executed to perform an expansion process on the binary image, the expansion process including: and traversing all the second class points, and setting the first attribute value of the second class points as the first attribute value when the first class points exist in the first points adjacent to the second class points along the expansion direction, wherein the included angle between the expansion direction and the extending direction of the edge profile is smaller than 20 degrees.
By performing the expansion processing, the accuracy of dividing the first point at the edge of the object to be measured 100 from the remaining first points is improved, and the interference of the internal pattern of the object to be measured 100 is reduced, so that the probability of selecting the first point inside the object to be measured 100 according to the weight is reduced when the edge profile of the shortest path algorithm is based subsequently.
Moreover, since the image processing method is used to extract the edge profile of the object 100, and the edge profile of the object 100 extends along the predetermined direction range in the image 200 to be measured, accordingly, the expansion process along the extending direction range of the edge profile is more important for reducing the effect of the disturbance, and therefore, the angle between the expansion direction and the extending direction of the edge profile is less than 20 °.
Specifically, the expansion directions of the second class points are the same; the expansion direction of the second class of points may be parallel to the tangential direction of the edge profile at any position of the edge profile, or the expansion direction of the second class of points may be parallel to the tangential direction of the edge profile at the position of the second class of points.
In this embodiment, the shape of the convolution kernel used in the expansion process is a rectangle, and the size of the convolution kernel is m1×m2, that is, any vertex angle of the rectangle is taken as an origin, two sides passing through the origin and perpendicular to each other are respectively a first side and a second side, and the lengths of the first side and the second side are respectively m1 pixel size and m2 pixel size.
Correspondingly, taking any vertex angle of the rectangle as an origin, taking a vector corresponding to a first side as a first vector, wherein the first vector has a size of m1 pixel sizes, the direction of the first vector is parallel to the first side, the vector corresponding to a second side as a second vector, the second vector has a size of m2 pixel sizes, the direction of the second vector is parallel to the second side, and the expansion direction is the vector sum direction of the first vector and the second vector.
In a specific embodiment, the value of m1 may be 11 and the value of m2 may be 7.
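A minimal sketch of this expansion step with OpenCV: dilating the first-class (value 1) region with an m1 × m2 rectangular structuring element resets second-class points that touch first-class points to 1. Orienting the long side of the kernel along the extension direction of the edge contour (vertical after the rotation above) is an assumption about how the angle constraint is realized.

    import cv2
    import numpy as np

    def expand_binary_map(binary, m1=11, m2=7):
        """Grow the first-class region (value 1) of the binary map with a rectangular
        kernel; cv2.getStructuringElement takes (width, height), so the long side m1
        is placed along the vertical edge direction here."""
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (m2, m1))  # width m2, height m1
        return cv2.dilate(binary.astype(np.uint8), kernel)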
It should be noted that, for the first point at the true edge, the probability that the first type point exists around it is low, and therefore, even if the expansion process is performed, the influence on the first point at the edge is small.
It should be noted that, in other embodiments, according to the self-characteristics of the image to be measured, a binary image may be used instead of weighting other types of target feature images.
For example, if the edge contour of the image to be measured is brighter and the first point of the rest positions is darker, the target feature map may select a gray scale map of the image to be measured, and correspondingly, only the gray scale map is weighted later, so as to give a smaller weighted value or a larger weighted value to the brighter first point, so that the edge point at the edge of the object to be measured is more prominent in the weighted map.
Or, the target feature map may select a gradient angle map of the image to be measured, and only weight the gradient angle map later to give a smaller weight value or a larger weight value to the first point meeting the angle requirement corresponding to the edge, so that the edge point at the edge of the object to be measured is more prominent in the weight map.
Or, the target feature map may select a gradient amplitude map of the image to be measured, and only weight the gradient amplitude map later to give a smaller weight value or a larger weight value to the first point meeting the gradient amplitude requirement corresponding to the edge, so that the edge point at the edge of the object to be measured is more prominent in the weight map.
Or when the image to be measured is a color image, the target feature image can also select the color image of the image to be measured, so that different weighted values are given to the first points with different colors, and smaller weighted values or larger weighted values are given to the first points meeting the color requirements of the edge contour, so that the edge points at the edge of the object to be measured are more prominent in the weighted image.
Referring to fig. 1, step S15 is executed to weight the target feature map to obtain weighted values of all the first attribute values of each first point, so as to obtain a weighted map representing a correspondence between the pixel positions and the weighted values of the image to be measured 200, where the weighting is used to make the gradient amplitude of the weighted value of the first point at the edge of the object to be measured in the first gradient direction larger than the gradient amplitude of the weighted value of the first point adjacent to the first point at the edge in the first gradient direction, where the first gradient direction is perpendicular to the extending direction of the edge of the object to be measured 100.
The weighted graph is used as the image 300 to be analyzed, and the weighted graph is subsequently subjected to edge extraction to obtain the edge contour of the object 100 to be detected.
The method adopts a mode of weighting the target feature map to comprehensively consider various features, which is favorable for noise reduction, so that when the weighted map is subjected to edge extraction, all the features corresponding to the edges can be considered at the same time, and the edge contour of the object to be detected can be extracted accurately.
Specifically, by weighting the target feature map, the method is beneficial to playing a role in noise reduction (for example, is beneficial to reducing the probability of misjudging patterns, background patterns or other noise patterns in the object to be measured 100 as edges of the object to be measured), and is beneficial to extracting edges of the weighted map while considering each feature corresponding to the edges; meanwhile, since the weighting is used for making the gradient amplitude of the weighted value of the first point at the edge of the object to be measured 100 larger than the gradient amplitude of the weighted value of the first point adjacent to the first point at the edge, the edge point of the object to be measured is more prominent, thereby being beneficial to accurately extracting the edge contour of the object to be measured.
In this embodiment, the image to be analyzed includes the correspondence between each second point of the image to be analyzed and a second attribute value; in the image to be analyzed, the second attribute value of the second point at the edge of the object to be measured 100 has a minimum value or a maximum value, so that when edge extraction is subsequently performed on the weighted map, the edge contour of the object to be measured 100 can be obtained by extracting the path whose sum of weighted values is the third extreme value.
It should be noted that, for any target feature map, the position of the first point in the target feature map has a one-to-one correspondence with the position of the second point in the image to be analyzed.
In this embodiment, weighting the target feature map includes: and weighting the first target feature map and the second target feature map, wherein the weighting is used for enabling the gradient amplitude of the weighted value at the edge of the object to be detected 100 in the weighted map to be larger than the gradient amplitude of the first attribute value of the pixel at the edge of the object to be detected in the first target feature map.
The weighting is used for enabling the gradient amplitude of the weighted value at the edge of the object to be measured 100 in the weighted graph to be larger than the gradient amplitude of the first attribute value of the pixel at the edge of the object to be measured in the first target feature graph, so that the highlighting degree of the edge of the object to be measured 100 in the weighted graph is enhanced, and further the subsequent edge extraction of the weighted graph is facilitated.
In a specific embodiment, the first attribute values of the edge pixels of the object 100 to be measured in the first target feature map and the second target feature map are all minimum values, and the first target feature map is a binary map, so that in the process of weighting the target feature maps, the weight of the first target feature map is greater than that of the second target feature map.
Specifically, when the first target feature map and the second target feature map are weighted, the first target feature map and the expanded binary map are weighted.
Because the binary image includes the first class points with the first class attribute values and the second class points with the second class attribute values, the first class attribute values are 1, the second class attribute values are 0, and the second class points are arranged at the edge of the object to be measured 100, the first target feature image is given a larger weight, the first attribute values of the first class points have a larger contribution to the weighted value, and the first attribute values of the second class points have a contribution to the weighted value of 0, so that the edge points of the object to be measured 100 are more prominent in the weighted image.
As an example, the first target feature map has weights of 180 to 245 and the second target feature map has weights of 0.1 to 0.3.
The weight of the first target feature map is far greater than the weight of the second target feature map, so that the screening effect of the binary map is strengthened and most of the first points are screened out.
Specifically, according to the formula below: when the first attribute value of the first point in the second target feature map is small and the first attribute value of the first point in the binary map is 0, the weighted value of the first point in the weighted map is small, and the probability that the first point is a first point at the edge is high; when the first attribute value in the second target feature map is large and the binary value is 1, the weighted value is large; when the first attribute value in the second target feature map is large and the binary value is 0, the weighted value is larger than in the first case but still relatively small, because only the small-weight term contributes; when the first attribute value in the second target feature map is small and the binary value is 1, the weighted value is still large, because the large-weight term dominates.
It should be noted that, in other embodiments, the weight of the first target feature map may be changed, and the weight of the second target feature map may be changed in equal proportion.
As one example, weighting the first target feature map and the second target feature map includes: calculating the weighted value of each first point by using the formula w(x, y) = w1 × (255 - mag(x, y)) + w2 × binary_point(x, y), where w(x, y) is the weighted value corresponding to any pixel point, 255 - mag(x, y) is the first attribute value corresponding to the pixel point in the inverse value amplitude map, mag(x, y) is the gradient amplitude corresponding to the pixel point in the gradient amplitude map, binary_point(x, y) is the first attribute value corresponding to the pixel point in the binary map, w1 is 0.1 to 0.3, and w2 is 180 to 245.
Therefore, in the present embodiment, the weighted value of the first point at the edge of the object 100 has a minimum value.
In other embodiments, the weighted value of the first point at the edge of the object may also have a maximum value according to the different weighting modes and/or the type of the selected target feature map.
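The weighting step can be illustrated with a short sketch. The snippet below is a minimal, non-authoritative illustration of the formula w(x, y) = w1 × (255 - mag(x, y)) + w2 × binary_point(x, y); the array names and the particular choices of w1 and w2 within the stated ranges are assumptions made only for this example.

import numpy as np

def weighted_map(mag, binary_point, w1=0.2, w2=200.0):
    # mag: 2-D array of gradient amplitudes (0 to 255) of the image to be measured
    # binary_point: 2-D array of 0/1 values; 0 marks the second class points at the edge
    # w1, w2: example weights taken from the ranges 0.1-0.3 and 180-245 given above
    inverse_mag = 255.0 - mag.astype(np.float64)      # inverse value amplitude map
    return w1 * inverse_mag + w2 * binary_point.astype(np.float64)

# At an edge point the gradient amplitude is large (so 255 - mag is small) and the
# binary value is 0, so the weighted value there has the minimum value, as stated above.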
Referring to fig. 1, step S2 is performed to perform edge extraction on the image to be analyzed 300, so as to obtain the edge contour of the object to be measured 100.
Specifically, performing edge extraction on the image 300 to be analyzed includes: determining a starting line and an ending line in the image 300 to be analyzed, wherein the direction from the starting line to the ending line is a reference direction; acquiring a second point in each line from the starting line to the ending line along the reference direction as a target point, wherein the sum of second attribute values of the target point has a third maximum value, and the target point forms an edge point at the edge of the object to be detected; in the image to be analyzed, the third maximum value is the minimum value under the condition that the second attribute value of the second point at the edge of the object to be analyzed has the minimum value; in the image to be analyzed, the third maximum value is the maximum value under the condition that the second attribute value of the second point at the edge of the object to be analyzed has the maximum value.
Since the second attribute value of the second point at the edge of the object to be measured 100 in the image to be analyzed 300 has the minimum value or the maximum value, by obtaining a continuous path from the start line to the end line along the reference direction, and the sum of the second attribute values of the second points through which the continuous path passes has the third maximum value, the edge point of the edge profile of the object to be measured 100 is extracted.
It should be noted that, before determining the start line and the end line in the image to be analyzed 300, performing edge extraction on the image to be analyzed 300 may further include: a region to be extracted is acquired in the image to be analyzed 300, the region to be extracted comprising a plurality of rows of pixels, each row of pixels comprising a plurality of second points. And the region to be extracted is acquired to determine the region needing edge extraction, so that the speed of edge extraction is improved.
In this embodiment, the region to be extracted is the whole image to be analyzed. In other embodiments, the region to be extracted may be a local region of the image to be analyzed according to practical situations.
It should be noted that, in the present embodiment, according to the extending direction of the edge profile of the object 100, the region to be extracted includes a plurality of rows of pixels, where the row direction is the X direction (as shown in fig. 5), and the arrangement direction of the plurality of rows of pixels is the Y direction (as shown in fig. 5). In other embodiments, the row direction may also be the Y direction.
In addition, after the edge profile of the object to be measured 100 is obtained, the plurality of edge profiles obtained by edge extraction are further used to determine the center of the object to be measured 100. Correspondingly, each image to be analyzed is subjected to edge extraction to obtain a plurality of edge profiles.
In this embodiment, acquiring a second point as the target point in each of the rows from the start row to the end row includes: respectively taking the second attribute values of the second points of the initial row as accumulated values of the corresponding second points; traversing each line from the second line to the ending line in turn, and carrying out path searching processing on the current line to obtain a position pointer table and accumulated values of all second points of the ending line, wherein the path searching processing comprises: repeating the accumulating relation obtaining process for each second point of the current line, wherein the accumulating relation obtaining process comprises the following steps: acquiring each second point in a search range of a previous row of the current second point as a point to be selected, wherein the search range covers a plurality of points to be selected which are closest to the current second point in the previous row of the second points; acquiring a third maximum value in the second attribute value and the value of each point to be selected and the current second point respectively, and obtaining a maximum accumulated value; taking the most value as the accumulated value of the current second point, and recording the position relation between the to-be-selected point corresponding to the most value and the current second point as a position pointer of the current second point; repeating accumulation relation acquisition processing on all second points of the current line to obtain accumulation values and position relations of the second points, wherein the corresponding relations between the second points and the position relations form a position pointer table; acquiring a second point with a third maximum value of the accumulated value in the ending line as an ending position; and acquiring the target point according to the position pointer table and the end position, wherein the target point passes through the end position, and the sum of the second attribute values of the target point is equal to the accumulated value of the end position.
In this embodiment, in the image to be analyzed, the second attribute value of the second point at the edge of the object to be measured 100 has the minimum value, and therefore, the third maximum value is the minimum value.
In other embodiments, in the image to be analyzed, the third maximum value is the maximum value if the second attribute value of the second point at the edge of the object to be analyzed has the maximum value.
The starting line and the ending line are determined in the image to be analyzed, so that the direction of the path finding process is determined.
By determining the start line, the second attribute values of the second points of the start line are respectively used as accumulated values of the corresponding second points, so that preparation is made for subsequent updating of the accumulated values.
As one example, edge extraction is performed using a shortest path algorithm such as Dijkstra's algorithm. Dijkstra's algorithm yields a unique, single-pixel-wide edge and a globally optimal solution that is not easily affected by local noise, which is beneficial to improving the accuracy of edge contour extraction.
In this embodiment, in the process of repeating the accumulation relationship acquiring process for each second point of the current row, the candidate points covered by the search range of the i-th current second point include the i-2 th to i+2-th second points in the second points of the previous row, where i represents the positions of the current second points in the arrangement direction, and the arrangement direction is perpendicular to the reference direction.
For any second point, the search range covers not only the one second point belonging to its four-neighborhood and the two second points belonging to its diagonal neighborhoods, but also the second point adjacent to each of the diagonal-neighborhood second points, that is, 5 second points in total; this enlarges the search range and is beneficial to obtaining the path whose sum of weighted values has the third maximum value. Meanwhile, for any second point, the points to be selected covered by the search range are all located in the adjacent pixel row along the reference direction and number only 5, so that the search range of any second point does not cover too many points to be selected, which is beneficial to improving the calculation efficiency.
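A compact dynamic-programming sketch of the path-finding processing described above is given below, assuming the weighted map is a two-dimensional numpy array, the reference direction runs from the first row to the last row, and the search range spans the i-2 th to i+2 th second points of the previous row; the function and variable names are illustrative and are not taken from the embodiment.

import numpy as np

def extract_edge_points(weighted, radius=2):
    # Returns one column index per row such that the accumulated sum of traversed
    # values is extremal (here minimal, since the edge points have the minimum value).
    rows, cols = weighted.shape
    accum = weighted[0].astype(np.float64).copy()       # accumulated values of the start line
    pointer = np.zeros((rows, cols), dtype=np.int64)    # position pointer table

    for r in range(1, rows):                            # traverse the second line to the end line
        new_accum = np.empty(cols)
        for i in range(cols):
            lo, hi = max(0, i - radius), min(cols, i + radius + 1)
            best = lo + int(np.argmin(accum[lo:hi]))    # point to be selected with the extremal sum
            new_accum[i] = weighted[r, i] + accum[best]
            pointer[r, i] = best                        # record the positional relation
        accum = new_accum

    path = [int(np.argmin(accum))]                      # end position in the end line
    for r in range(rows - 1, 0, -1):                    # trace the pointer table backwards
        path.append(int(pointer[r, path[-1]]))
    return path[::-1]                                   # one edge point per row, start line first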
The steps for acquiring the continuous path are described in detail below in conjunction with fig. 6. Fig. 6 (a) shows a schematic diagram of an embodiment of an image to be analyzed, fig. 6 (b) shows a schematic diagram of a procedure of an embodiment of an accumulation relation acquisition process, fig. 6 (c) shows a schematic diagram of an embodiment of a position pointer table, and fig. 6 (d) shows a schematic diagram of an embodiment of acquiring an edge according to a position pointer table and an end position.
It should be noted that, in this embodiment, the image 300 to be analyzed is a weighted graph.
As shown in fig. 6 (a), the second attribute values of the second points of the start line are 24, 35, 255, and 230, respectively, and then the second attribute values of the second points of the start line are taken as the accumulated values of the corresponding second points, respectively.
In the process of obtaining the continuous path from the starting line to the ending line, each line from the second line to the ending line is traversed in turn, and the path-finding processing is performed on the current line, wherein the path-finding processing includes repeating the accumulation relation acquisition processing for each second point of the current line; therefore, the second line is the current line of the first path-finding processing.
As shown in fig. 6 (b) and 6 (c), the points to be selected covered by the search range of the i-th current second point include the i-2 th to i+2 th second points in the second points of the previous row. Taking the second second point of the current row as the current second point, the second attribute value of the current second point is 12, and the second attribute values of its points to be selected are 24, 35, 255 and 230 respectively, so that the sums of the second attribute values of each point to be selected and the current second point are 36, 47, 267 and 242 respectively; the most-valued accumulated value is 36, and 36 is taken as the accumulated value of the current second point. Similarly, when the third second point of the current row is taken as the current second point, the second attribute value of the current second point is 26, and the second attribute values of its points to be selected are 24, 35, 255 and 230 respectively, so that the sums are 50, 61, 281 and 256 respectively; the most-valued accumulated value is 50, and 50 is taken as the accumulated value of the current second point. When the fourth second point of the current row is taken as the current second point, the second attribute value of the current second point is 255, and the second attribute values of its points to be selected are 35, 255 and 230 respectively, so that the sums are 290, 510 and 485 respectively, and the most-valued accumulated value is 290.
And traversing each line from the second line to the ending line in turn, carrying out path searching processing on the current line, repeating accumulation relation acquisition processing on each second point of the current line in the path searching processing process, recording the position relation between the to-be-selected point corresponding to the most accumulated value and the current second point as a position pointer of the current second point until the second point with the third most accumulated value in the ending line is acquired as the ending position, thereby acquiring a target point according to the position pointer table and the ending position, wherein the target point passes through the ending position and the sum of second attribute values of the target points is equal to the accumulated value of the ending position.
Wherein the search range of the ith current second point covers the candidate points including the (i-2) th to (i+2) th second points in the previous row of second points, i representing the positions of the current second points in the arrangement direction, and thus, the numbers in fig. 6 (c) represent the relative positional relationship between the candidate points corresponding to the most accumulated value and the current second points, and the numbers 0, 1 and 2 represent the relative positional deviation amounts.
With continued reference to fig. 1, the image processing method further includes: step S3 is performed to determine the center of the object 100 using the plurality of edge profiles obtained by the edge extraction.
The center of the object 100 to be measured is determined so as to precisely control the position of the object 100 to be measured in the apparatus.
Specifically, determining the center of the object 100 to be measured using the plurality of edge profiles obtained by the edge extraction includes: acquiring coordinates of a plurality of pixel points of the edge contour according to the coordinate information of the pixel points of the image 200 to be detected; and fitting coordinates of a plurality of pixel points of the edge profile to obtain the center coordinates of the object to be detected 100.
Specifically, acquiring coordinates of pixel points of an edge contour includes: acquiring first coordinates of each pixel point corresponding to the edge contour in an image coordinate system; and converting the first coordinate into a second coordinate in a coordinate system of the motion platform, wherein the second coordinate is used as the coordinate of the pixel point of the edge contour. Correspondingly, based on the second coordinates, the center coordinates of the object to be measured 100 are fitted.
The first point positions of the target feature map are in one-to-one correspondence with the pixel point positions of the image to be detected, so that after the edge points of the edge contour are obtained, the coordinates of the pixel points of the edge contour can be obtained.
After the image 200 of the object 100 to be measured at the edge position 100L is obtained, the coordinates of each pixel point of the image 200 to be measured at the edge position 100L in the motion platform coordinate system are known, and the coordinates in the image coordinate system have a corresponding relationship with the coordinates in the motion platform coordinate system, so that the first coordinates of each pixel point corresponding to the edge contour in the image coordinate system can be converted into the second coordinates in the motion platform coordinate system. The object 100 is located on a motion platform, and the motion platform coordinate system is a world coordinate system of the object 100, which can represent the real position of the object 100, so the first coordinate needs to be converted into the second coordinate.
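The exact mapping between the image coordinate system and the motion platform coordinate system depends on the calibration of the detection system; purely as an illustration, the sketch below assumes a known physical pixel size and a known stage position of the image origin, both of which are assumptions introduced for this example.

def image_to_stage(u, v, pixel_size, stage_x0, stage_y0):
    # Convert a first coordinate (u, v) in the image coordinate system into a second
    # coordinate in the motion platform coordinate system, assuming a pure
    # scale-and-offset relation between the two coordinate systems.
    return stage_x0 + u * pixel_size, stage_y0 + v * pixel_size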
In this embodiment, the shape of the object 100 to be measured is a circle, and the second coordinates of all the pixel points are substituted into the general equation of a circle: x^2 + y^2 + a1·x + a2·y + a3 = 0, and the values of a1, a2 and a3 are solved by least squares, thereby fitting the center coordinates of the object to be measured.
It should be noted that, by the general equation of the circle, not only the center coordinates but also the radius can be obtained, and therefore, the radius obtained by the general equation of the circle can be compared with the preset radius R of the object to be measured 100, thereby playing a role in evaluating the accuracy of edge extraction.
In particular, the general equation of a circle can be converted into the standard equation of a circle, namely (x + a1/2)^2 + (y + a2/2)^2 = (a1^2 + a2^2 - 4·a3)/4, so that once the values of a1, a2 and a3 are obtained, the corresponding center coordinates and the radius of the circle are obtained.
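A minimal least-squares sketch of this circle fit is shown below; it assumes the edge-contour coordinates are available as one-dimensional arrays xs and ys already expressed in the motion platform coordinate system, and the function name is illustrative.

import numpy as np

def fit_circle(xs, ys):
    # Fit x^2 + y^2 + a1*x + a2*y + a3 = 0 to the points in the least-squares sense.
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (a1, a2, a3), *_ = np.linalg.lstsq(A, b, rcond=None)
    center_x, center_y = -a1 / 2.0, -a2 / 2.0
    radius = np.sqrt(a1 ** 2 + a2 ** 2 - 4.0 * a3) / 2.0   # from the standard equation
    return center_x, center_y, radius

# The fitted radius can be compared with the preset radius R to evaluate the accuracy
# of the edge extraction, as described above.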
Correspondingly, the embodiment of the invention also provides an image processing system. Referring to fig. 7, a functional block diagram of one embodiment of an image processing system of the present invention is shown. The image processing system of the present embodiment will be described with reference to fig. 2 to 6.
The image processing system according to the present embodiment is used in the image processing method of the foregoing embodiment, and includes: an image acquisition module 10, configured to acquire an image to be analyzed 300 (as shown in fig. 6), where the image to be analyzed contains an image of the edge contour of the object to be measured 100; and an edge extraction module 20, configured to perform edge extraction on the image to be analyzed and obtain the edge contour of the object to be measured 100.
In this embodiment, in the image to be analyzed, the second attribute value of the second point at the edge of the object to be measured 100 has the minimum value or the maximum value, which makes the second point at the edge of the object to be measured 100 more prominent, so that when edge extraction is subsequently performed on the image to be analyzed 300, the edge profile of the object to be measured 100 can be obtained by extracting the path whose sum of second attribute values is an extremum.
The specific description of the object to be measured 100 may be combined with the corresponding description of the previous embodiments, and is not repeated here.
As an example, the image 300 to be analyzed is a weighted graph, where the weighted graph is obtained by weighting multiple target feature graphs corresponding to the image to be measured of the object 100, so that each feature corresponding to the first point at the edge of the object 100 can be comprehensively considered, which is further beneficial to further improving the accuracy of extracting the edge profile of the object to be measured.
In one embodiment, the image processing method is used to determine the center of the object 100 by using the edge profile of the object 100, so as to ensure the positioning accuracy of the object 100 in the device. For example, in the semiconductor manufacturing and measuring process, the determination of the center of the wafer is an important step for precisely aligning each part of the wafer, so as to ensure that the positioning accuracy is controlled within a certain range when the position of the pattern to be measured is further determined according to the known relative position vector relationship between the center of the wafer and the pattern to be measured.
In this embodiment, the image acquisition module 10 acquires an image to be analyzed using a detection system.
The detection system comprises an image acquisition module and a motion platform, wherein the motion platform is used for supporting the object to be detected 100 and realizing relative movement between the image acquisition module and the object to be detected 100. Correspondingly, after the image 200 of the object 100 to be measured at the edge position 100L is obtained, the coordinates of each pixel point of the image 200 to be measured at the edge position 100L under the motion platform coordinate system can be obtained.
The image acquisition module 10 is described in detail below with reference to fig. 3 to 5.
Referring to fig. 3-5 in combination with fig. 8, fig. 8 is a functional block diagram of one embodiment of an image acquisition module 10, the image acquisition module 10 comprising: the image to be measured acquisition unit 10 is configured to acquire an image to be measured 200 of the object to be measured 100 at an edge position 100L, where the image to be measured 200 contains an image of an edge contour of the object to be measured 100, and the edge contour of the object to be measured 100 extends along a preset direction range in the image to be measured 200.
In this embodiment, the number of edge positions 100L is greater than or equal to 3.
As an example, the shape of the image to be measured 200 is rectangular, and the image to be measured 200 is a dark field image. In other embodiments, the image to be measured may be a bright field image.
Specifically, the image 200 to be measured of the object 100 to be measured at the edge position is acquired with the detection system.
In this embodiment, the image acquisition module 10 further includes: the initial feature map obtaining unit 12 is configured to obtain an initial feature map according to the image to be measured 200 before obtaining the target feature map of the image to be measured 200, where the initial feature map includes at least one of a gradient map, a gradient magnitude map, a gray map, a color map, and an image to be measured of the image to be measured 200, and the gradient map includes gray gradient values of each first point.
And performing binarization processing by using at least one of the initial feature images to obtain a corresponding binary image, wherein the threshold condition of the first attribute value corresponding to the selected initial feature image is used as a standard when performing the binarization processing.
Specifically, an initial feature map that can make the first point at the edge of the object 100 more prominent may be selected according to the actual characteristics (e.g., pattern or color) of the object 100.
In a specific embodiment, the initial feature map comprises a gradient map. Correspondingly, the gradient map is subjected to binarization processing.
Since the gray gradient value of the first point at the edge of the object 100 is generally larger, obtaining the binary image based on the gradient image is also beneficial to distinguishing the first point at the edge of the object 100 from the adjacent first point, and makes the first point at the edge of the object 100 more prominent.
In this embodiment, the sobel operator is used to calculate the pixel gradient of the image 200 to be measured, so as to obtain the gray gradient value. The gradient amplitude map and the gradient angle map of the image 200 to be measured can also be obtained by using the sobel operator.
In this embodiment, the sizes of the convolution kernel of the sobel operator in the X direction and the convolution kernel in the Y direction are n×n, where n is an integer from 7 to 9.
In the image 200 to be measured, for any pixel point, the gray gradient value in the X direction is Gx and the gray gradient value in the Y direction is Gy; the gradient amplitude of the pixel point is √(Gx^2 + Gy^2), and the gradient angle value of the pixel point is arctan(Gx/Gy), wherein the X direction is orthogonal to the Y direction.
As an example, the gradient angle is defined in the range of-180 ° to 180 °. In other embodiments, other definitions may be used, for example, from 0 ° to 360 °.
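For illustration, a short sketch of computing the gradient amplitude map and gradient angle map with a Sobel operator is given below, using OpenCV for the convolutions; the kernel size 7 is one value from the 7 to 9 range mentioned above, and the arctan(Gx/Gy) convention with a -180° to 180° range follows this embodiment. The function name is an assumption.

import cv2
import numpy as np

def gradient_maps(image_gray, ksize=7):
    gx = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0, ksize=ksize)   # gray gradient in the X direction
    gy = cv2.Sobel(image_gray, cv2.CV_64F, 0, 1, ksize=ksize)   # gray gradient in the Y direction
    magnitude = np.sqrt(gx ** 2 + gy ** 2)                      # gradient amplitude map
    angle = np.degrees(np.arctan2(gx, gy))                      # gradient angle map, -180 to 180 degrees
    return magnitude, angle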
It should be noted that, in other embodiments, the binary image may be obtained based on other types of initial feature images. For example, in another embodiment, when the image to be measured is a color chart and the color of the edge contour of the object to be measured is a specific color (for example, red), the initial feature chart may be selected from the color chart, and accordingly, when the binarization is performed subsequently, whether the first point is the first point related to the specific color is used as the screening criterion. In other embodiments, when the edge profile of the object to be measured has a specific gray value, the initial feature map may select the original map (i.e. the image to be measured 200 itself), and correspondingly, the image to be measured is binarized by using the gray threshold value later to obtain a binary map of the image to be measured.
The image acquisition module 10 further includes: the target feature map obtaining unit 13 is configured to obtain, according to the image to be detected 200, multiple target feature maps, where each target feature map includes a correspondence between a position of each first point and a first attribute value, and the positions of the first points of the target feature maps are in one-to-one correspondence with positions of pixels of the image to be detected, and the first attribute values of the multiple target feature maps are different in kind, and the first attribute values represent attributes of the target feature map.
And weighting the target feature map to obtain weighted values of all the first attribute values of each first point so as to obtain a weighted map related to the first point position of the image to be detected, so that the weighted map is subjected to edge extraction.
In this embodiment, the kinds of the target feature map are plural. Through selecting multiple target feature graphs, the weighting item is added, so that more types of features can be considered, and further the accuracy of subsequent edge extraction is improved.
In this embodiment, the first attribute values of the plurality of target feature maps include any of a gradient magnitude, a gradient angle value, a pixel gray value, a binarized value, and a color characterization value.
According to the actual characteristics (such as pattern or color) of the object 100, a suitable target feature map is selected, so that after weighting, the edge points at the edge of the object 100 are more prominent.
The target feature map comprises any of a gradient amplitude map, a gradient angle map, a gradient difference map, a gray scale map, a binary map and a color map, wherein the gradient difference map comprises gray scale gradient difference absolute values of the first points in a first gradient direction and a second gradient direction, and the first gradient direction and the second gradient direction are different. In this embodiment, the first gradient direction is perpendicular to the extending direction of the edge of the object 100.
Specifically, the absolute value of the gray gradient difference between the first gradient direction and the second gradient direction refers to: absolute value of difference between the gray gradient value in the first gradient direction and the gray gradient value in the second gradient direction.
In this embodiment, the target feature map includes a first target feature map and a second target feature map, and the first attribute values of the first points at the edges of the object to be measured 100 in the first target feature map and the second target feature map are both maximum values or both minimum values. The first attribute values of the first points at the edges of the object 100 in the first target feature map and the second target feature map have extreme values, so that the first points at the edges are more prominent.
Specifically, the first target feature map includes one or both of a gradient angle map and a binary map, the second target feature map is a gradient magnitude map or the image to be measured 200 itself, the gradient angle map includes gradient angle values of gray gradients of the first points, and the gradient angle values at the edge of the object to be measured 100 have a maximum value or a minimum value, and the gradient magnitude map includes gradient magnitudes of the gray gradients of the first points.
As one example, the second target feature map is a gradient magnitude map.
In this embodiment, taking the example that the target feature map includes a binary map, the binary map includes a first class of points with a first attribute value being a first class of attribute values, and a second class of points with a first attribute value being a second class of attribute values, and the edge of the object to be measured 100 has the second class of points.
In one embodiment, the edge extraction is performed on the weighted graph to obtain the edge profile of the object 100, and the weighted value sum of the first points passing by the edge profile is the smallest, so that the binary graph includes the first class point with the first attribute value of 1 and the second class point with the first attribute value of 0, and the edge of the object 100 has the second class point.
In this embodiment, the first target feature map is a binary map, and the first attribute values of the first points at the edges of the object to be measured 100 in the first target feature map and the second target feature map are all minimum values, so that the edge extraction can be performed by using a shortest path algorithm.
In this embodiment, the target feature map obtaining unit 13 obtains a binary map according to an initial feature map, where the binary process is performed on the initial feature map with a threshold condition of a first attribute value corresponding to the initial feature map as a standard.
As an example, an initial feature map used for obtaining the binary image is a gradient map, the gradient map includes gray gradient values of each first point, and correspondingly, the gradient map is converted into the binary image by taking a threshold condition of the gray gradient values as a screening standard.
In other embodiments, when the image to be measured is a color chart and the color of the edge contour of the object to be measured is a specific color (for example, red), the initial feature chart may be a color chart, and accordingly, when the binarization processing is performed, whether the first point is the first point related to the specific color is used as a screening criterion, the first attribute value of the first point related to the specific color is set to 0, and the first attribute values of the remaining first points are set to 1.
Because the first attribute values of the first points at the edge of the object to be measured 100 in the first target feature map and the second target feature map are both extreme values, in this embodiment, the second target feature map is an inverse value amplitude map, the first attribute values of the first points in the inverse value amplitude map are inversely related to the gradient amplitudes of the corresponding pixel points of the image to be measured 200, the first target feature map is a binary map, the binary map includes first class points with first attribute values being first class attribute values and second class points with first attribute values being second class attribute values, and the edge of the object to be measured 100 has second class points with second class attribute values smaller than the first class attribute values.
Specifically, the target feature map acquisition unit 13 includes: a positive value amplitude value graph obtaining subunit (not shown) configured to obtain gradient amplitude values of each pixel point of the image 200 to be measured, so as to obtain a positive value amplitude value graph; an inverting processing subunit (not shown) configured to perform an inverting process on the positive value amplitude chart to obtain an inverted value amplitude chart, where the inverting process includes: subtracting the gradient amplitude of each first point in the positive value amplitude graph by using a preset value to obtain the amplitude inverse value of each first point, so as to obtain an inverse value amplitude graph; the preset value is greater than or equal to the maximum value of the gradient magnitudes of the first points.
In this embodiment, taking an image 200 to be measured as an 8-bit wide image (the gray scale range is from 0 to 255) as an example, the preset value is greater than or equal to 200. In one particular embodiment, the preset value is equal to 255.
In other embodiments, the preset value may also be less than the maximum value, or the preset value may also be zero.
In other embodiments, it may also be: the second target feature map is a positive value amplitude map, and the first attribute value of each first point in the positive value amplitude map is positively correlated with the gradient amplitude of the corresponding first point of the image to be detected; the first target feature graph is a binary graph, the binary graph comprises a first class point with a first attribute value being a first class attribute value and a second class point with the first attribute value being a second class attribute value, the edge of the object to be detected is provided with the second class point, and the second class attribute value is larger than the first class attribute value.
The image acquisition module 10 further includes: the rotation processing unit 11 is configured to perform rotation processing on the image to be measured 200 before acquiring multiple target feature images according to the image to be measured 200, so that the edge profile image of the object to be measured 100 extends along a preset direction range.
As shown in fig. 5, fig. 5 (a) is a schematic diagram of fig. 4 (a) after rotation, fig. 5 (b) is a schematic diagram of fig. 4 (b) after rotation, and fig. 5 (c) is a schematic diagram of fig. 4 (c) after rotation, the image to be measured 200 is rotated, so that the edge profile image of the object to be measured 100 extends along the vertical direction range.
In this embodiment, the image to be measured 200 includes a plurality of image edges, and only one background image edge (not labeled) is included in the plurality of image edges, and the background image edge is completely covered by the background image; the rotation processing unit 11 includes: an edge acquisition subunit (not shown) for acquiring a background image edge; a rotation subunit (not shown) for rotating the background image edge to orient the background image edge in a predetermined direction.
By making the edge of the background image face to the preset direction, the region of the object 100 to be measured in the image 200 to be measured is located at a fixed side, so that the position of the image of the object 100 to be measured in the region of the image 200 to be measured is uniform, and the influence of the inconsistency of the position of the region on each first attribute value in the target feature map is reduced.
Specifically, the edge acquisition subunit is configured to obtain a gray statistic value of each image edge of the image to be measured 200 and obtain the image edge whose gray statistic value has the second maximum value, so as to obtain the background image edge. The gray statistic value of an image edge includes: the average value of the gray values of the pixel points on the image edge, or the sum of the average value of the gray values of the pixel points on the image edge and the first maximum value; when the image to be measured 200 is a dark field image, the first maximum value is the maximum value and the second maximum value is the minimum value, and when the image to be measured 200 is a bright field image, the first maximum value is the minimum value and the second maximum value is the maximum value.
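A hedged sketch of this rotation step follows; it uses the mean gray value of each image border as the gray statistic, treats the darkest border as the background image edge for a dark-field image (the brightest for a bright-field image), and assumes rotation in 90-degree steps so that the background image edge ends up as the top edge. The function name and the 90-degree-step assumption are illustrative.

import numpy as np

def rotate_background_edge_to_top(image, dark_field=True):
    borders = {0: image[0, :], 1: image[:, -1], 2: image[-1, :], 3: image[:, 0]}  # top, right, bottom, left
    stats = {k: float(v.mean()) for k, v in borders.items()}                      # mean gray of each image edge
    pick = min(stats, key=stats.get) if dark_field else max(stats, key=stats.get)
    return np.rot90(image, k=pick)   # counter-clockwise 90-degree rotations bring that edge to the top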
The image acquisition module 10 further includes: an expansion processing unit 14, configured to perform expansion processing on the binary image before the target feature maps are weighted, where the expansion processing includes: traversing all the second class points, and setting the first attribute value of a second class point to the first class attribute value when a first class point exists among the first points adjacent to that second class point along the expansion direction, wherein the included angle between the expansion direction and the extending direction of the edge profile is smaller than 20 degrees.
By performing the expansion processing, the accuracy of dividing the first point at the edge of the object to be measured 100 from the remaining first points is improved, and the interference of the internal pattern of the object to be measured 100 is reduced, so that the probability of selecting the first point inside the object to be measured 100 according to the weight is reduced when the edge profile of the shortest path algorithm is based subsequently.
Specifically, the expansion directions of the second class points are the same; the expansion direction of the second class of points may be parallel to the tangential direction of the edge profile at any position of the edge profile, or the expansion direction of the second class of points may be parallel to the tangential direction of the edge profile at the position of the second class of points.
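The following sketch implements a literal reading of this expansion processing: every second class point (value 0) that has a first class point (value 1) directly adjacent to it along the expansion direction is set to the first class attribute value. A vertical expansion direction is assumed here, matching an edge profile that extends along the vertical direction after rotation; this direction and the function name are assumptions made for illustration only.

import numpy as np

def expand_binary(binary):
    out = binary.copy()
    rows, cols = binary.shape
    zero_rows, zero_cols = np.where(binary == 0)           # second class points
    for r, c in zip(zero_rows, zero_cols):
        above = binary[r - 1, c] if r > 0 else None        # neighbours along the assumed
        below = binary[r + 1, c] if r + 1 < rows else None # (vertical) expansion direction
        if above == 1 or below == 1:                       # a first class point is adjacent
            out[r, c] = 1                                  # set to the first class attribute value
    return out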
It should be noted that, in other embodiments, according to the self-characteristics of the image to be measured, a binary image may be used instead of weighting other types of target feature images.
The image acquisition module 10 further includes: the weighting unit 15 is configured to weight the target feature map to obtain weighted values of all the first attribute values of each first point, so as to obtain a weighted map representing a correspondence between positions of pixels of the image to be measured 200 and the weighted values, where the weighting is configured to make a gradient amplitude of the weighted value of the first point at the edge of the object to be measured in a first gradient direction greater than a gradient amplitude of the weighted value of the first point adjacent to the first point at the edge in the first gradient direction, where the first gradient direction is perpendicular to an extending direction of the edge of the object to be measured 100.
The weighted graph is used as an image to be analyzed, and the weighted graph is subsequently subjected to edge extraction to obtain the edge contour of the object 100 to be detected. The method adopts a mode of weighting the target feature map to comprehensively consider various features, which is favorable for noise reduction, so that when the weighted map is subjected to edge extraction, all the features corresponding to the edges can be considered at the same time, and the edge contour of the object to be detected can be extracted accurately.
In this embodiment, the weighted value of the first point at the edge of the object to be measured 100 has the minimum value or the maximum value, so that when edge extraction is subsequently performed on the weighted map, the edge contour of the object to be measured 100 can be obtained by extracting the path whose sum of weighted values has the third maximum value.
In this embodiment, the weighting unit weights the first target feature map and the second target feature map, where the weighting unit is configured to make the gradient amplitude of the weighted value at the edge of the object to be measured 100 in the weighted map larger than the gradient amplitude of the first attribute value of the pixel at the edge of the object to be measured in the first target feature map.
In one embodiment, the first attribute values of the edge pixels of the object 100 to be measured in the first target feature map and the second target feature map are both minimum values, and the first target feature map is a binary map, so that the weight of the first target feature map is greater than that of the second target feature map.
Specifically, the expanded binary map is used as the first target feature map in the weighting.
Because the binary image includes the first class points with the first class attribute values and the second class points with the second class attribute values, the first class attribute values are 1, the second class attribute values are 0, and the second class points are arranged at the edge of the object to be measured 100, the first target feature image is given a larger weight, the first attribute values of the first class points have a larger contribution to the weighted value, and the first attribute values of the second class points have a contribution to the weighted value of 0, so that the edge points of the object to be measured 100 are more prominent in the weighted image.
As an example, the first target feature map has weights of 180 to 245 and the second target feature map has weights of 0.1 to 0.3.
It should be noted that, in other embodiments, the weight of the first target feature map may be changed, and the weight of the second target feature map may be changed in equal proportion.
As an example, the weighted value of each first point is calculated by using the formula w(x, y) = w1 × (255 - mag(x, y)) + w2 × binary_point(x, y), where w(x, y) is the weighted value corresponding to any pixel point, 255 - mag(x, y) is the first attribute value corresponding to the pixel point in the inverse value amplitude map, mag(x, y) is the gradient amplitude corresponding to the pixel point in the gradient amplitude map, binary_point(x, y) is the first attribute value corresponding to the pixel point in the binary map, w1 is 0.1 to 0.3, and w2 is 180 to 245.
Therefore, in the present embodiment, the weighted value of the first point at the edge of the object 100 has a minimum value.
In other embodiments, the weighted value of the first point at the edge of the object may also have a maximum value according to the different weighting modes and/or the type of the selected target feature map.
In this embodiment, the edge extraction module 20 includes: a direction defining unit for determining a start line and an end line in the image 300 to be analyzed, wherein the direction from the start line to the end line is a reference direction; a target point obtaining unit, configured to obtain, along the reference direction, a second point in each of the lines from the start line to the end line as a target point, where the sum of the second attribute values of the target points has a third maximum value, and the target points constitute edge points at the edge of the object to be measured; wherein, in the image to be analyzed 300, when the second attribute value of the second point at the edge of the object to be measured 100 has the minimum value, the third maximum value is the minimum value; in the image to be analyzed 300, in the case where the second attribute value of the second point at the edge of the object to be measured 100 has the maximum value, the third maximum value is the maximum value.
Since the second attribute value of the second point at the edge of the object to be measured 100 in the image to be analyzed 300 has the minimum value or the maximum value, by obtaining a continuous path from the start line to the end line along the reference direction, and the sum of the second attribute values of the second points through which the continuous path passes has the third maximum value, the edge point of the edge profile of the object to be measured 100 is extracted.
The edge extraction module 20 may further include: the region definition unit is configured to acquire a region to be extracted in the image to be analyzed, where the region to be extracted includes a plurality of rows of pixels, and each row of pixels includes a plurality of second points, before determining a start line and an end line in the image to be analyzed 300. And the region to be extracted is acquired to determine the region needing edge extraction, so that the speed of edge extraction is improved.
In this embodiment, the region to be extracted is the whole image 300 to be analyzed. In other embodiments, the region to be extracted may be a local region of the image to be analyzed according to practical situations.
The edge profile of the object 100 is obtained, and is further used for determining the center of the object by using a plurality of edge profiles obtained by edge extraction. Correspondingly, each image to be analyzed is subjected to edge processing to obtain a plurality of edge profiles.
In this embodiment, the target point acquisition unit includes: a preset subunit, configured to respectively use second attribute values of the second points in the initial row as accumulated values of the corresponding second points; the searching subunit is configured to traverse each line from the second line to the end line in sequence, and perform a path searching process on the current line, to obtain an accumulated value of each second point of the position pointer table and the end line, where the path searching process includes: repeating the accumulating relation obtaining process for each second point of the current line, wherein the accumulating relation obtaining process comprises the following steps: acquiring each second point in a search range of a previous row of the current second point as a point to be selected, wherein the search range covers a plurality of points to be selected which are closest to the current second point in the previous row of the second points; acquiring a third maximum value in the second attribute value and the value of each point to be selected and the current second point respectively, and obtaining a maximum accumulated value; taking the most value as the accumulated value of the current second point, and recording the position relation between the to-be-selected point corresponding to the most value and the current second point as a position pointer of the current second point; repeating accumulation relation acquisition processing on all second points of the current line to obtain accumulation values and position relations of the second points, wherein the corresponding relations between the second points and the position relations form a position pointer table; an end position acquisition subunit configured to acquire, as an end position, a second point in the end row where the accumulated value has a third maximum value; and the extraction subunit is used for acquiring the target point according to the position pointer table and the end position, wherein the target point passes through the end position, and the sum of the second attribute values of the target point is equal to the accumulated value of the end position.
In the present embodiment, in the image to be analyzed 300, the second attribute value of the second point at the edge of the object to be measured 100 has the minimum value, and therefore, the third maximum value is the minimum value. In other embodiments, in the image to be analyzed, the third maximum value is the maximum value if the second attribute value of the second point at the edge of the object to be measured has the maximum value.
As an example, the target point acquisition unit performs edge extraction using a shortest path algorithm such as Dijkstra's algorithm.
In this embodiment, in the process of repeating the accumulation relationship acquiring process for each second point of the current row, the candidate points covered by the search range of the i-th current second point include the i-2 th to i+2-th second points in the second points of the previous row, where i represents the positions of the current second points in the arrangement direction, and the arrangement direction is perpendicular to the reference direction.
In this embodiment, an image processing system includes: the center extraction module 30 is configured to determine the center of the object 100 by using a plurality of edge profiles obtained by edge extraction.
The center of the object 100 to be measured is determined so as to precisely control the position of the object 100 to be measured in the apparatus.
Specifically, the center extraction module 30 includes: a coordinate acquiring unit, configured to acquire coordinates of a plurality of pixels of the edge contour according to coordinate information of the pixels of the image 200 to be measured; and the fitting unit is used for fitting the coordinates of a plurality of pixel points of the edge contour to obtain the center coordinates of the object to be measured 100.
The fitting center requires a plurality of points, and thus coordinates of a plurality of pixel points of the edge contour need to be acquired in advance.
Specifically, the coordinate acquisition unit includes: the first coordinate acquisition unit is used for acquiring first coordinates of each pixel point corresponding to the edge contour in the image coordinate system; and the second coordinate acquisition unit is used for converting the first coordinate into a second coordinate in the coordinate system of the motion platform, wherein the second coordinate is used as the coordinate of the pixel point of the edge contour. Correspondingly, the fitting unit is configured to fit the center coordinate of the object to be measured 100 based on the second coordinate.
After the image 200 of the object 100 to be measured at the edge position 100L is obtained, the coordinates of each pixel point of the image 200 to be measured at the edge position 100L in the motion platform coordinate system are known, and the coordinates in the image coordinate system have a corresponding relationship with the coordinates in the motion platform coordinate system, so that the first coordinates of each pixel point corresponding to the edge contour in the image coordinate system can be converted into the second coordinates in the motion platform coordinate system. The object 100 is located on a motion platform, and the motion platform coordinate system is a world coordinate system of the object 100, which can represent the real position of the object 100, so the first coordinate needs to be converted into the second coordinate.
In this embodiment, the shape of the object 100 to be measured is a circle, and the second coordinates of all the pixel points are substituted into the general equation of a circle: x^2 + y^2 + a1·x + a2·y + a3 = 0, and the values of a1, a2 and a3 are solved by least squares, thereby fitting the center coordinates of the object to be measured.
It should be noted that, by the general equation of the circle, not only the center coordinates but also the radius can be obtained, and therefore, the radius obtained by the general equation of the circle can be compared with the preset radius R of the object to be measured 100, thereby playing a role in evaluating the accuracy of edge extraction.
In particular, the general equation of a circle can be converted into the standard equation of a circle, namely (x + a1/2)^2 + (y + a2/2)^2 = (a1^2 + a2^2 - 4·a3)/4, so that once the values of a1, a2 and a3 are obtained, the corresponding center coordinates and the radius of the circle are obtained.
The embodiment of the invention also provides equipment which can realize the image processing method provided by the embodiment of the invention through loading the image processing method in a program form.
Referring to fig. 9, a hardware configuration diagram of an apparatus according to an embodiment of the present invention is shown. The device of the embodiment comprises: at least one processor 01, at least one communication interface 02, at least one memory 03 and at least one communication bus 04.
In this embodiment, the number of the processor 01, the communication interface 02, the memory 03 and the communication bus 04 is at least one, and the processor 01, the communication interface 02 and the memory 03 communicate with each other through the communication bus 04.
The communication interface 02 may be an interface of a communication module for performing network communication, for example, an interface of a GSM module.
The processor 01 may be a central processing unit CPU, or a specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the image processing method of the present embodiment.
The memory 03 may comprise a high-speed RAM memory or may further comprise a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. The memory 03 stores one or more computer instructions that are executed by the processor 01 to implement the image processing method provided in the foregoing embodiment.
It should be noted that, the implementation terminal device may further include other devices (not shown) that may not be necessary for the disclosure of the embodiment of the present invention; embodiments of the present invention will not be described in detail herein, as such other devices may not be necessary to an understanding of the present disclosure.
The embodiment of the invention also provides a storage medium, which stores one or more computer instructions for implementing the image processing method provided in the previous embodiment.
According to the image processing method provided by the embodiment of the invention, the image to be analyzed of the object to be detected is obtained first, so that edge extraction can be performed on the image to be analyzed in combination with its characteristics, which is beneficial to accurately extracting the edge contour of the object to be detected; meanwhile, performing image processing on the image to be analyzed of the object to be detected not only achieves contour extraction but also improves the efficiency of extracting the edge contour of the object to be detected.
The embodiments of the invention described above are combinations of elements and features of the invention. Elements or features may be considered optional unless mentioned otherwise. Each element or feature may be practiced without combining with other elements or features. In addition, embodiments of the invention may be constructed by combining some of the elements and/or features. The order of operations described in embodiments of the invention may be rearranged. Some configurations of any embodiment may be included in another embodiment and may be replaced with corresponding configurations of another embodiment. It will be obvious to those skilled in the art that claims which are not explicitly cited in each other in the appended claims may be combined into embodiments of the present invention or may be included as new claims in a modification after the filing of this application.
Embodiments of the invention may be implemented by various means, such as hardware, firmware, software or combinations thereof. In a hardware configuration, the method according to the exemplary embodiments of the present invention may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, etc.
In a firmware or software configuration, embodiments of the present invention may be implemented in the form of modules, procedures, functions, and so on. The software codes may be stored in memory units and executed by processors. The memory unit may be located inside or outside the processor and may send and receive data to and from the processor via various known means.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should therefore be determined by the appended claims.
Claims (22)
1. An image processing method, comprising:
acquiring an image to be analyzed, wherein the image to be analyzed contains an image of an edge contour of an object to be detected;
and performing edge extraction on the image to be analyzed to obtain the edge contour of the object to be detected.
2. The image processing method according to claim 1, wherein the image to be analyzed is a weighted map, and acquiring the image to be analyzed comprises:
acquiring an image to be detected of an object to be detected at an edge position, wherein the image to be detected contains an image of an edge contour of the object to be detected, and the edge contour of the object to be detected extends along a preset direction range in the image to be detected;
acquiring a plurality of target feature maps according to the image to be detected, wherein each target feature map comprises a correspondence between the position of each first point and a first attribute value, the positions of the first points of the target feature maps correspond one-to-one to the positions of the pixel points of the image to be detected, the types of the first attribute values of the plurality of target feature maps are different from one another, and the first attribute value characterizes an attribute of the target feature map;
and weighting the target feature maps to obtain a weighted value of all the first attribute values of each first point, so as to obtain a weighted map representing the correspondence between the pixel point positions of the image to be detected and the weighted values, wherein the weighting is used for making the gradient magnitude, in a first gradient direction, of the weighted value of a first point at the edge of the object to be detected larger than the gradient magnitude, in the first gradient direction, of the weighted value of a first point adjacent to that point at the edge, and the first gradient direction is perpendicular to the extending direction of the edge of the object to be detected.
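As an editorial illustration of the weighting step in claim 2, the sketch below combines same-shaped target feature maps into a single weighted map by a per-point weighted sum. NumPy, the function name weight_feature_maps and the example weights are assumptions for illustration only; the claim constrains the effect of the weighting on gradient magnitudes at the edge, not a specific formula.

```python
import numpy as np

def weight_feature_maps(feature_maps, weights):
    """Per-point weighted sum of same-shaped target feature maps."""
    maps = [np.asarray(m, dtype=np.float64) for m in feature_maps]
    weighted = np.zeros_like(maps[0])
    for fmap, w in zip(maps, weights):
        weighted += w * fmap          # accumulate weighted first attribute values
    return weighted

# Hypothetical usage with a binary map (edge points = 0) and an inverse
# magnitude map; the weights 200 and 0.2 fall inside the ranges recited in
# claim 9 and are illustrative only.
binary_map = np.random.randint(0, 2, size=(8, 8))
inverse_magnitude_map = 255.0 - np.random.rand(8, 8) * 255.0
weighted_map = weight_feature_maps([binary_map, inverse_magnitude_map], [200.0, 0.2])
```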
3. The image processing method of claim 2, wherein the first attribute values of the plurality of target feature maps comprise any of a gradient magnitude, a gradient angle value, a pixel gray value, a binarized value, and a color characterization value;
and acquiring a plurality of target feature maps according to the image to be detected comprises: acquiring any two or more of a gradient magnitude map, a gradient angle map, a gradient difference map, a gray map, a binary map and a color map, wherein the gradient difference map comprises, for each first point, the absolute value of the difference between the gray gradients in a first gradient direction and in a second gradient direction, and the first gradient direction and the second gradient direction are different.
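One plausible reading of the gradient difference map in claim 3 is sketched below: the absolute value of the difference between the gray gradients taken along two different directions. The choice of the vertical and horizontal axes as the two directions, and the use of NumPy finite differences, are assumptions.

```python
import numpy as np

def gradient_difference_map(gray):
    """Absolute difference of the gray gradients along two directions."""
    gray = np.asarray(gray, dtype=np.float64)
    grad_dir1, grad_dir2 = np.gradient(gray)   # gradients along axis 0 and axis 1
    return np.abs(grad_dir1 - grad_dir2)
```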
4. The image processing method according to claim 2, wherein, among the target feature maps acquired according to the image to be detected, the plurality of target feature maps comprise a first target feature map and a second target feature map, the first attribute values of the first points at the edge of the object to be detected in the first target feature map and the second target feature map are maximum values or minimum values, the first target feature map comprises one or both of a gradient angle map and a binary map, and the second target feature map is a gradient magnitude map or the image to be detected, wherein the gradient angle map comprises the gradient angle values of the gray gradients of the first points, the gradient angle value at the edge of the object to be detected has a maximum value or a minimum value, and the gradient magnitude map comprises the gradient magnitudes of the gray gradients of the first points;
weighting the target feature maps comprises: weighting the first target feature map and the second target feature map, wherein the weighting is used for making the gradient magnitude of the weighted value at the edge of the object to be detected in the weighted map larger than the gradient magnitude of the first attribute value of the pixel point at the edge of the object to be detected in the first target feature map.
5. The image processing method according to claim 3 or 4, wherein the target feature map includes a binary map;
Before the target feature map of the image to be detected is acquired, the method further comprises: acquiring an initial feature map according to the image to be detected, wherein the initial feature map comprises at least one of a gradient map of the image to be detected, a gradient magnitude map, a gray map, a color map and the image to be detected itself, and the gradient map comprises the gray gradient values of the first points;
acquiring the binary map according to the initial feature map comprises: performing binarization processing on the initial feature map by taking a threshold condition on the first attribute value corresponding to the initial feature map as the criterion.
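A plain threshold test is one way to realize the binarization step of claim 5. In the sketch below, points satisfying the threshold condition are treated as edge candidates and receive the value 0, matching the 0/1 convention recited in claim 9; the threshold itself and this encoding are assumptions.

```python
import numpy as np

def binarize(initial_map, threshold, above_is_edge=True):
    """Binarize an initial feature map against a threshold condition.

    Points meeting the condition get the second-class value 0 (edge
    candidates); all other points get the first-class value 1.
    """
    initial_map = np.asarray(initial_map, dtype=np.float64)
    condition = initial_map >= threshold if above_is_edge else initial_map < threshold
    return np.where(condition, 0, 1)
```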
6. The image processing method according to claim 4, wherein the second target feature map is a positive value magnitude map, and the first attribute value of each first point in the positive value magnitude map is positively correlated with the gradient magnitude of the corresponding pixel point of the image to be detected; the first target feature map is a binary map, the binary map comprises first-class points having a first-class attribute value and second-class points having a second-class attribute value, the second-class points are located at the edge of the object to be detected, and the second-class attribute value is larger than the first-class attribute value;
or,
the second target feature map is an inverse value magnitude map, and the first attribute value of each first point in the inverse value magnitude map is inversely correlated with the gradient magnitude of the corresponding pixel point of the image to be detected; the first target feature map is a binary map, the binary map comprises first-class points having a first-class attribute value and second-class points having a second-class attribute value, the second-class points are located at the edge of the object to be detected, and the second-class attribute value is smaller than the first-class attribute value.
7. The image processing method according to claim 4, wherein the second target feature map is an inverse magnitude map, and a first attribute value of each first point in the inverse magnitude map is inversely related to a gradient magnitude of a corresponding pixel point of the image to be detected;
the step of acquiring the target feature maps according to the image to be detected comprises: acquiring the gradient magnitudes of all the pixel points of the image to be detected to obtain a positive value magnitude map; and performing inverse processing on the positive value magnitude map to obtain the inverse magnitude map, wherein the inverse processing comprises: subtracting the gradient magnitude of each first point in the positive value magnitude map from a preset value to obtain a magnitude inverse value of each first point, so as to obtain the inverse magnitude map; and the preset value is greater than or equal to the maximum value of the gradient magnitudes of the first points.
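The inverse processing of claim 7 amounts to subtracting each gradient magnitude from a preset value that is at least the maximum magnitude, so strong edges map to small values. A minimal sketch, assuming NumPy finite differences for the positive value magnitude map:

```python
import numpy as np

def inverse_magnitude_map(gray, preset=None):
    """Positive value magnitude map followed by the inverse processing."""
    gray = np.asarray(gray, dtype=np.float64)
    gy, gx = np.gradient(gray)
    positive = np.hypot(gx, gy)          # positive value magnitude map
    if preset is None:
        preset = positive.max()          # preset value >= maximum gradient magnitude
    return preset - positive             # magnitude inverse value of each first point
```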
8. The image processing method according to claim 4 or 7, wherein the first attribute values of the first points at the edges of the object to be detected in the first target feature map and the second target feature map are both minimum values; the first target feature map is a binary map;
and in the process of weighting the target feature maps, the weight of the first target feature map is larger than the weight of the second target feature map.
9. The image processing method according to claim 7, wherein the first target feature map is a binary map, the binary map includes a first class of points having a first attribute value of 1 and a second class of points having a first attribute value of 0, the second class of points are provided at the edge of the object to be detected, the weight of the first target feature map is 180 to 245, the weight of the second target feature map is 0.1 to 0.3, and the preset value is greater than or equal to 200 in the case that the second target feature map is an inverse magnitude map.
10. The image processing method according to claim 3 or 4, wherein the target feature map comprises a binary map, the binary map comprises first-class points having a first-class attribute value and second-class points having a second-class attribute value, and the second-class points are located at the edge of the object to be detected;
before weighting the target feature maps, the method further comprises: performing expansion processing on the binary map, wherein the expansion processing comprises: traversing all the second-class points and, when a first-class point exists among the first points adjacent to a second-class point along an expansion direction, setting the first attribute value of that second-class point to the first-class attribute value, wherein the included angle between the expansion direction and the extending direction of the edge contour is smaller than 20 degrees.
11. The image processing method according to claim 10, wherein the expansion directions of the second-class points are the same; the expansion direction of the second-class points is parallel to the tangential direction of the edge contour at any one position, or the expansion direction of each second-class point is parallel to the tangential direction of the edge contour at the position of that second-class point.
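The sketch below follows the literal wording of claims 10 and 11: every second-class point that has a first-class neighbor along the expansion direction is relabeled with the first-class value, where the direction is a (dy, dx) step roughly parallel to the edge contour. The function name and the NumPy representation are assumptions, and other readings of the expansion step are possible.

```python
import numpy as np

def expand_binary_map(binary_map, direction, first_value=1, second_value=0):
    """Expansion processing of a binary map along a given direction."""
    src = np.asarray(binary_map)
    out = src.copy()
    dy, dx = direction                            # step roughly parallel to the contour
    rows, cols = src.shape
    ys, xs = np.nonzero(src == second_value)      # traverse all second-class points
    for y, x in zip(ys, xs):
        for sign in (1, -1):                      # both neighbors along the direction
            ny, nx = y + sign * dy, x + sign * dx
            if 0 <= ny < rows and 0 <= nx < cols and src[ny, nx] == first_value:
                out[y, x] = first_value           # relabel with the first-class value
                break
    return out
```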
12. The image processing method according to claim 2, wherein acquiring the image to be analyzed further comprises, before acquiring the plurality of target feature maps according to the image to be detected: performing rotation processing on the image to be detected so that the edge contour image of the object to be detected extends along the preset direction range.
13. The image processing method according to claim 12, wherein the image to be detected comprises a plurality of image edges, and only one of the image edges, namely a background image edge, is completely covered by the background image;
performing rotation processing on the image to be detected so that the edge contour image of the object to be detected extends along the preset direction range comprises: acquiring the background image edge; and rotating the image to be detected so that the background image edge faces a preset direction.
14. The image processing method of claim 13, wherein acquiring the background image edge comprises: acquiring a gray level statistical value of each image edge of the image to be detected, wherein the gray level statistical value of an image edge comprises: the average value of the gray levels of all the pixel points on the image edge, or the sum of the average value of the gray levels of all the pixel points on the image edge and a first extreme value;
acquiring the image edge whose gray level statistical value has a second extreme value as the background image edge;
wherein, in the case that the image to be detected is a dark-field image, the first extreme value is a maximum value and the second extreme value is a minimum value; and in the case that the image to be detected is a bright-field image, the first extreme value is a minimum value and the second extreme value is a maximum value.
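A compact way to realize claims 13 and 14 is to score each image border by its mean gray level, pick the extreme one as the background image edge, and rotate by a multiple of 90 degrees so that it faces the chosen direction. The "top" target orientation and the use of the plain mean (rather than the mean plus an extreme value) are illustrative assumptions.

```python
import numpy as np

def orient_background_edge(image, dark_field=True):
    """Find the border fully covered by background and rotate it to the top."""
    image = np.asarray(image, dtype=np.float64)
    borders = {
        0: image[0, :],      # top edge
        1: image[:, -1],     # right edge
        2: image[-1, :],     # bottom edge
        3: image[:, 0],      # left edge
    }
    stats = {k: v.mean() for k, v in borders.items()}
    # Dark-field: background border has the smallest mean gray; bright-field: the largest.
    pick = min(stats, key=stats.get) if dark_field else max(stats, key=stats.get)
    return np.rot90(image, k=pick)   # counter-clockwise quarter turns bring it to the top
```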
15. The image processing method according to claim 1, wherein the image to be analyzed comprises a correspondence between each second point of the image to be analyzed and a second attribute value, and the second attribute value of a second point at the edge of the object to be detected has a minimum value or a maximum value;
performing edge extraction on the image to be analyzed comprises: determining a starting line and an ending line in the image to be analyzed, wherein the direction pointing from the starting line to the ending line is a reference direction;
acquiring, in each line from the starting line to the ending line along the reference direction, one second point as a target point, wherein the sum of the second attribute values of the target points has a third extreme value, and the target points form edge points at the edge of the object to be detected;
wherein, in the case that the second attribute value of a second point at the edge of the object to be detected in the image to be analyzed has a minimum value, the third extreme value is a minimum value; and in the case that the second attribute value of a second point at the edge of the object to be detected in the image to be analyzed has a maximum value, the third extreme value is a maximum value.
16. The image processing method according to claim 15, wherein acquiring one second point as the target point in each of the lines from the starting line to the ending line comprises:
taking the second attribute value of each second point of the starting line as the accumulated value of the corresponding second point;
traversing each line from the second line to the ending line in turn, and carrying out path searching processing on the current line to obtain a position pointer table and accumulated values of all second points of the ending line, wherein the path searching processing comprises: repeating the accumulation relation acquisition processing for each second point of the current line;
wherein the accumulation relation acquisition processing comprises: acquiring each second point within a search range in the line preceding the current second point as a candidate point, wherein the search range covers the several second points of the preceding line that are closest to the current second point; respectively acquiring the sum of the accumulated value of each candidate point and the second attribute value of the current second point, and taking the sum having the third extreme value as the extreme accumulated value; and taking the extreme accumulated value as the accumulated value of the current second point, and recording the positional relation between the candidate point corresponding to the extreme accumulated value and the current second point as the position pointer of the current second point;
repeating the accumulation relation acquisition processing for all the second points of the current line to obtain the accumulated value and the positional relation of each second point, wherein the correspondence between each second point and its positional relation forms the position pointer table;
acquiring, in the ending line, the second point whose accumulated value has the third extreme value as the end position;
and acquiring the target points according to the position pointer table and the end position, wherein the target points pass through the end position, and the sum of the second attribute values of the target points is equal to the accumulated value of the end position.
17. The image processing method according to claim 16, wherein, in repeating the accumulation relation acquisition processing for each second point of the current line, the candidate points covered by the search range of the i-th current second point comprise the (i-2)-th to (i+2)-th second points of the preceding line, wherein i represents the position of the current second point along an arrangement direction, and the arrangement direction is perpendicular to the reference direction.
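Claims 15 to 17 describe a dynamic-programming style path search: each point in a line accumulates the best sum reachable from nearby points in the previous line, position pointers are recorded, and the edge points are recovered by tracing back from the best end position. The following sketch assumes a cost map where the edge has minimal values (the minimum reading of the third extreme value); the names and the NumPy representation are illustrative.

```python
import numpy as np

def extract_edge_path(weighted_map, minimize=True, search_radius=2):
    """Row-by-row path search: one target point (column index) per line."""
    cost = np.asarray(weighted_map, dtype=np.float64)
    rows, cols = cost.shape
    acc = cost[0].copy()                          # accumulated values of the starting line
    pointers = np.zeros((rows, cols), dtype=np.int64)
    arg_best = np.argmin if minimize else np.argmax

    for r in range(1, rows):
        new_acc = np.empty(cols)
        for i in range(cols):
            lo = max(0, i - search_radius)        # candidate points: (i-2)..(i+2) of the
            hi = min(cols, i + search_radius + 1)  # preceding line when search_radius == 2
            j = lo + int(arg_best(acc[lo:hi]))    # best candidate in the preceding line
            pointers[r, i] = j                    # position pointer of the current point
            new_acc[i] = acc[j] + cost[r, i]      # accumulated value of the current point
        acc = new_acc

    path = [int(arg_best(acc))]                   # end position in the ending line
    for r in range(rows - 1, 0, -1):              # trace back through the pointer table
        path.append(int(pointers[r, path[-1]]))
    path.reverse()
    return path                                   # edge point column index for each line
```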
18. The image processing method according to claim 1, wherein acquiring the image to be analyzed includes: acquiring images to be analyzed of an object to be detected at a plurality of different edge positions, wherein the number of the edge positions is more than or equal to 3;
the image processing method further comprises: determining the center of the object to be detected by using the plurality of edge contours obtained by the edge extraction.
19. The image processing method according to claim 18, wherein determining the center of the object to be detected by using the plurality of edge contours obtained by the edge extraction comprises: acquiring the coordinates of a plurality of first points of each edge contour according to the coordinate information of the first points of the image to be detected; and fitting the coordinates of the first points of the edge contours to obtain the center coordinates of the object to be detected.
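For claims 18 and 19, once edge-contour points have been collected from three or more edge positions, the center can be obtained by fitting a circle to their coordinates. The algebraic least-squares (Kasa) fit below is one common choice; the claim only requires fitting the coordinates, so the specific method is an assumption.

```python
import numpy as np

def fit_circle_center(points):
    """Least-squares circle fit; returns the fitted center (cx, cy).

    points: (N, 2) array of (x, y) edge-contour coordinates, N >= 3.
    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense.
    """
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return -D / 2.0, -E / 2.0
```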
20. An image processing system for performing the image processing method of any one of claims 1 to 19, the image processing system comprising:
the image acquisition module is used for acquiring an image to be analyzed, wherein the image to be analyzed contains an image of the edge contour of the object to be detected;
and the edge extraction module is used for performing edge extraction on the image to be analyzed to obtain the edge contour of the object to be detected.
21. An apparatus comprising at least one memory and at least one processor, the memory storing one or more computer instructions, wherein the one or more computer instructions are executable by the processor to implement the image processing method of any of claims 1-19.
22. A storage medium storing one or more computer instructions for implementing the image processing method of any one of claims 1 to 19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210765487.7A CN117392044A (en) | 2022-07-01 | 2022-07-01 | Image processing method, system, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210765487.7A CN117392044A (en) | 2022-07-01 | 2022-07-01 | Image processing method, system, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117392044A true CN117392044A (en) | 2024-01-12 |
Family
ID=89436082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210765487.7A Pending CN117392044A (en) | 2022-07-01 | 2022-07-01 | Image processing method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117392044A (en) |
2022
- 2022-07-01 CN CN202210765487.7A patent/CN117392044A/en active Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||