CN109509200B - Checkerboard corner detection method based on contour extraction and computer readable storage medium - Google Patents
Checkerboard corner detection method based on contour extraction and computer readable storage medium
- Publication number
- CN109509200B (application CN201811601937.9A)
- Authority
- CN
- China
- Prior art keywords
- pixel point
- points
- checkerboard
- corner
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/13: Image analysis; Segmentation; Edge detection (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
- G06T7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
- G06T2207/10004: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Still image; Photographic image
- G06T2207/20164: Indexing scheme for image analysis or image enhancement; Special algorithmic details; Image segmentation details; Salient point detection; Corner detection
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The application relates to a checkerboard corner detection method based on contour extraction, a checkerboard corner detection device and a computer readable storage medium. The method comprises the following steps: performing edge detection processing on an original image to obtain edge points in the original image; extracting the outline of the checkerboard in the original image according to the edge points; carrying out corner recognition on the extracted outline of the checkerboard; and screening out the internal corner points of the checkerboard according to the identified corner points. By extracting the checkerboard outline, a large number of non-corner pixels are removed, which on the one hand greatly reduces the amount of computation needed for corner recognition and on the other hand allows the checkerboard corners to be recognized accurately. The method combines the advantages of edge-based and gray-based corner detection algorithms while overcoming their respective drawbacks: it greatly reduces the amount of data to be processed, improves the processing speed and efficiency, and effectively improves the anti-interference capability and accuracy.
Description
Technical Field
The application relates to the field of image processing, in particular to a checkerboard corner detection method based on contour extraction and a computer readable storage medium.
Background
Camera calibration and pose measurement are basic and popular problems in the field of machine vision. The processing generally establishes a mapping between the world coordinate system and the pixel coordinate system through feature points on a target, so that the internal parameters of the camera and the pose parameters of the target are obtained by solving a PnP problem. Feature point coordinate extraction is therefore a very important step; corner points are the most common type of feature point, and checkerboard corner points, as a special kind of corner point, are widely used.
Current checkerboard corner detection algorithms can be divided into two main classes: edge-based corner detection algorithms and gray-based corner detection algorithms. An edge-based corner detection algorithm first performs segmentation and edge extraction on the image, and then detects corner points using the fact that corner points are inflection points or intersection points of edges. A gray-based corner detection algorithm treats a corner as a point where the gray level and gradient of the image change most strongly within a local range, and therefore detects corners mainly by calculating curvature and gradient.
An edge-based corner detection algorithm must first extract the edges of the image, and all subsequent steps detect corners on that basis from the characteristics of inflection points or intersection points; when an image edge is interrupted, the corners cannot be extracted well, so the quality requirement on edge detection is high, the processing steps are complex, the amount of calculation is large, and the processing is time-consuming. A gray-based corner detection algorithm detects corners mainly by calculating curvature and gradient; common examples are the Harris operator and the Susan operator. In the case of a complex background, please refer to fig. 1 to 3: fig. 1 is an original image with a resolution of 1280 × 1024, fig. 2 is the recognition result of the Harris operator, and fig. 3 is the recognition result of the Susan operator; it can be seen that many non-corner pixels are identified in fig. 2 and fig. 3. Therefore, although the principle of a gray-based corner detection algorithm is simple, its stability is high and it is easy to implement, its accuracy is low, it is prone to false detection, and its anti-interference capability is weak.
Disclosure of Invention
The application provides a checkerboard corner detection method based on contour extraction, a checkerboard corner detection device and a computer readable storage medium, which can solve the problems of existing checkerboard corner detection algorithms: the complex processing steps, large amount of calculation and long processing time of edge-based corner detection algorithms, and the low accuracy, susceptibility to false detection and weak anti-interference capability of gray-based corner detection algorithms.
According to a first aspect of the present application, the present application provides a method for detecting corner points of a checkerboard based on contour extraction, the method comprising: performing edge detection processing on the original image to obtain edge points in the original image; extracting the outline of the checkerboard in the original image according to the edge points; carrying out corner recognition on the extracted outline of the checkerboard; and screening out the internal corner points of the checkerboard according to the identified corner points.
Preferably, the step of performing edge detection processing on the original image to acquire edge points in the original image includes: performing plane convolution of a transverse convolution factor Sx and a longitudinal convolution factor Sy with the original image to obtain a transverse gray-level difference approximation Gx and a longitudinal gray-level difference approximation Gy for each pixel point of the original image, wherein the original image is denoted A, the transverse convolution factor is Sx = [-1 0 +1; -2 0 +2; -1 0 +1], the longitudinal convolution factor is Sy = [+1 +2 +1; 0 0 0; -1 -2 -1], and according to the formulas the transverse gray-level difference approximation is Gx = Sx * A and the longitudinal gray-level difference approximation is Gy = Sy * A; obtaining the gray weighting difference G of each pixel point in the original image A from its Gx and Gy according to formula (1) or formula (2), namely G = sqrt(Gx² + Gy²) or its approximation G = |Gx| + |Gy|; when the gray weighting difference G of a pixel point is larger than a first set threshold, the pixel point is an edge point, otherwise the pixel point is marked as 0.
Preferably, the step of extracting the outline of the checkerboard in the original image from the edge points includes: scanning pixel points of an original image according to a set sequence, when the pixel points in the original image are scanned to be effective points, giving the set mark values to the effective points in the original image according to a first set rule, wherein the mark values of the effective points of the same connected domain are represented by an equivalent chain, the equivalent chain comprises mark values with an equivalent relationship, the equivalent chain is stored in an equivalent array in an equivalent pair mode, the mark values of the same equivalent chain in the equivalent array are updated to be uniform mark values according to a second set rule, and the pixel points with the same mark values are pixel points of the same connected domain; judging whether the pixel points of the connected domain meet the set conditions, and if the parameters of the pixel points of the connected domain meet the set conditions, extracting the connected domain as the outline of the checkerboard.
Preferably, the step of assigning the set mark value to the valid point in the original image according to the first setting rule includes the steps of: judging whether the marking values of the pixel points in the neighborhood of the current effective point are all 0, wherein the pixel points in the neighborhood comprise a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, the first pixel point is the pixel point adjacent to the left side of the current effective point, the second pixel point is the pixel point adjacent to the upper side of the first pixel point, the third pixel point is the pixel point adjacent to the left side of the second pixel point, the fourth pixel point is the pixel point adjacent to the right side of the second pixel point, when the marking values of the first pixel point, the second pixel point, the third pixel point and the fourth pixel point are all 0, marking values of the effective point which are different from the marked effective point before are given, the marking values of the current effective point are stored in an equivalent pair array, and otherwise, marking values of the effective point which are not 0 are selected from the four pixel points according to the sequence of the first pixel point, the second pixel point, the third pixel point and the fourth pixel point; when the marking values of the first pixel point and the fourth pixel point are further judged to be not 0 and are not equal, the marking values of the first pixel point and the fourth pixel point are stored in an equivalent array as equivalent pairs, otherwise, whether the marking values of the third pixel point and the fourth pixel point are not 0 and are not equal is further judged, and if the marking values of the third pixel point and the fourth pixel point are judged to be not 0 and are not equal, the third pixel point and the fourth pixel point are stored in the equivalent array as equivalent pairs.
Preferably, the step of updating the tag values of the same equivalent chain in the equivalent array to uniform tag values according to a second set rule includes: and updating the mark value of the pixel point corresponding to each mark value in the equivalent chain in the equivalent array to the end value of the equivalent chain so that the updated mark values of the same connected domain are the same.
Preferably, the step of determining whether the pixel points of a connected domain satisfy the set condition includes: counting the number of pixel points of the connected domain, the connected domain being the outline of the checkerboard when the number of pixel points exceeds a set number level; or alternatively, recording the maximum value Xmax and minimum value Xmin of the abscissa and the maximum value Ymax and minimum value Ymin of the ordinate of the pixel points in the same connected domain and, according to the formula G = (Xmax - Xmin) × (Ymax - Ymin), identifying the connected domain with the largest product G as the outline of the checkerboard.
Preferably, the step of performing corner recognition on the extracted outline of the checkerboard includes: calculating the gradients Ix and Iy of each pixel point I(x, y) of the outline of the checkerboard in the X direction and the Y direction with a horizontal difference operator and a vertical difference operator, wherein Ix = I ⊗ [-1 0 1] and Iy = I ⊗ [-1 0 1]T; calculating the three products of the gradients in the two directions to obtain a matrix m, m = [Ix² IxIy; IxIy Iy²], wherein Ix² = Ix·Ix, Iy² = Iy·Iy and IxIy = Ix·Iy; performing Gaussian smoothing filtering on the four elements of the matrix m to obtain a new matrix M, M = [A B; B C], wherein A = w ⊗ Ix², B = w ⊗ IxIy, C = w ⊗ Iy² and w is the Gaussian window; calculating the Harris response value R of each pixel point according to the formula R = detM - α(traceM)², wherein detM = λ1λ2 = AC - B², traceM = λ1 + λ2 = A + C, and α = 0.1; and performing non-maximum suppression of the R value in the 3×3 neighborhood of each pixel point, the maximum value points obtained being the corner points.
Preferably, the step of screening out the internal corner points of the checkerboard according to the identified corner points includes: traversing the corner points of the checkerboard from a set corner point, calculating the distances between the currently selected corner point and the other corner points, and finding the four corner points closest to the current corner point by bubble sorting; and calculating the variance of the distance values between these four corner points and the current corner point, the current corner point being an internal corner point of the checkerboard if the variance is smaller than a second set threshold.
According to a second aspect of the present application, there is provided a checkerboard corner detection device based on contour extraction, the device comprising: the edge point acquisition module is used for carrying out edge detection processing on the original image so as to acquire edge points in the original image; the contour extraction module is used for extracting the contours of the checkerboard in the original image according to the edge points; the corner recognition module is used for recognizing corners of the extracted outline of the checkerboard; and the internal corner screening module is used for screening the internal corners of the checkerboard according to the identified corners.
According to a third aspect of the present application, there is provided a terminal comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: performing edge detection processing on the original image to obtain edge points in the original image; extracting the outline of the checkerboard in the original image according to the edge points; carrying out corner recognition on the extracted outline of the checkerboard; and screening out the internal corner points of the checkerboard according to the identified corner points.
According to a fourth aspect of the present application there is provided a computer readable storage medium storing a computer program which when executed by a processor performs the steps of the method as described above.
The application has the following beneficial effects: edge detection processing is performed on the original image to obtain the edge points in the original image, the outline of the checkerboard in the original image is extracted according to the edge points, corner recognition is then performed on the extracted outline of the checkerboard, and the internal corner points of the checkerboard are screened out from the recognized corner points; because a large number of non-corner pixels are removed when the checkerboard outline is extracted, the amount of data to be processed is greatly reduced, the processing speed and efficiency are improved, and the anti-interference capability and accuracy are effectively improved.
Drawings
FIG. 1 is an original image of the present application;
FIG. 2 is the recognition result of the gray-based corner detection algorithm after corner detection of FIG. 1 using the Harris operator;
FIG. 3 is the recognition result of the gray-based corner detection algorithm after corner detection of FIG. 1 using the Susan operator;
FIG. 4 is a flow chart of a checkerboard corner detection method based on contour extraction of the present application;
FIG. 5 is an effect diagram of the edge detection process of the original image by the Sobel operator according to the present application;
FIG. 6 is a schematic diagram of the pixel points in the 3×3 neighborhood of the current effective point according to the present application;
fig. 7 is a flowchart of the operation of step S1021 of the present application;
fig. 8 is an effect diagram of the present application for extracting the outline of the checkerboard in the original image by step S102;
fig. 9 is an effect diagram of the present application on the identification of corner points in the outline of a checkerboard by step S103;
fig. 10 is an effect diagram of screening out internal corner points of the identified corner points of the checkerboard by step S104; and
fig. 11 is a schematic diagram of a checkerboard corner detection device based on contour extraction according to the present application.
Reference numerals: edge point acquisition module 111; contour extraction module 112; corner recognition module 113; internal corner screening module 114.
Detailed Description
The application will be described in further detail below with reference to the drawings by means of specific embodiments.
The conception of the application is as follows: by extracting the checkerboard outline and removing a large number of non-corner pixels, the data processing capacity can be greatly reduced, the processing speed and the processing efficiency can be improved, and the anti-interference capability and the accuracy can be effectively improved.
Referring to fig. 1 to 10, the present application provides a checkerboard corner detection method based on contour extraction, which includes:
step S101: and carrying out edge detection processing on the original image to obtain edge points in the original image.
In this embodiment, the edge detection processing is performed on the original image by the Sobel operator. The Sobel operator detects an edge according to the gray weighting difference of the adjacent points in the 8-neighborhood of a pixel point; an edge is detected where this difference reaches an extreme value.
Thus, in step S101, it includes:
step S1011: by a transversal convolution factorS x And longitudinal convolution factor S y Carrying out plane convolution with the original image to obtain a transverse gray level difference approximation value G of the pixel point of the original image x Longitudinal gray level difference approximation G y Wherein the original image is set as A, the lateral convolution factorLongitudinal convolution factor- >According to the formula: the lateral gray level difference approximation value is G x =S x aA, longitudinal gray scale difference approximation is G y =S y *A。
Step S1012: differential approximation value G of transverse gray scale of each pixel point in original image A x Longitudinal gray scale difference approximation G y The gray weighting difference G of the pixel point is obtained through a formula, wherein the formula is as follows: g= |g x |+|G y |。
Step S1013: when the gray weighting difference G of the pixel point is larger than a first set threshold value, the pixel point is an edge point, otherwise, the pixel point is marked as 0.
In this embodiment, the first setting threshold may be set according to an actual usage scenario, which is not limited herein.
Referring to fig. 5, fig. 5 is an effect diagram of performing edge detection processing on the original image with the Sobel operator. The Sobel operator is simple in principle and small in computation, and is a commonly used edge detection method when the accuracy requirement is not very high. Since the corner points of the checkerboard are inflection points or intersection points of edges, the application selects the Sobel operator for edge detection; most non-corner points are thereby removed, and the amount of data for subsequent processing is greatly reduced.
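For illustration only, the following Python/NumPy sketch mirrors steps S1011 to S1013 above. The kernel values are those of the standard Sobel operator named in this embodiment; the threshold of 128 is a placeholder assumption, since the application leaves the first set threshold to the actual usage scenario.

```python
import numpy as np

# Standard Sobel convolution factors: transverse Sx and longitudinal Sy (assumed values).
SX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float64)
SY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=np.float64)

def plane_convolve3x3(image, kernel):
    """Slide a 3x3 kernel over a grayscale image (edge-replicated border)."""
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def sobel_edge_points(image, threshold=128.0):
    """Steps S1011-S1013: gray weighting difference G = |Gx| + |Gy|, thresholded.
    Returns 1 at edge points and 0 elsewhere."""
    gx = plane_convolve3x3(image, SX)   # transverse gray-level difference approximation Gx
    gy = plane_convolve3x3(image, SY)   # longitudinal gray-level difference approximation Gy
    g = np.abs(gx) + np.abs(gy)         # gray weighting difference G
    return (g > threshold).astype(np.uint8)
```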
Step S102: and extracting the outline of the checkerboard in the original image according to the edge points.
This step mainly extracts the outline of the checkerboard from the edge points obtained in step S101. The idea is to use a connected domain marking method to group the contours according to their connectivity, and then to extract the checkerboard outline by combining this with the features of the checkerboard contour.
In step S102, the method specifically includes the following steps:
step S1021: when the pixel points in the original image are scanned to be effective points, the set marking values are given to the effective points in the original image according to a first setting rule, the marking values of the effective points in the same connected domain are represented by means of equivalent chains, the equivalent chains comprise marking values with equivalent relations, and the equivalent chains are stored in an equivalent array in the form of equivalent pairs. And updating the marking value of the same equivalent chain in the equivalent array into a uniform marking value according to a second setting rule, wherein the pixels with the same marking value are the pixels of the same connected domain.
In this embodiment, first, the content related to the connected domain is defined.
Equivalent pair: A and B are marking values, and an equivalent pair (A, B) indicates that the pixel points with marking values A and B belong to the same connected domain, i.e. the marking values A and B have an equivalent relationship. Marking values with an equivalent relationship are stored in the equivalent array (also called the labelpair array), so that labelpair[A] = B.
Equivalent chain: the marking values in an equivalent chain all belong to the same connected domain; the chain is stored in the form of equivalent pairs. For example, the equivalent chain (2, 3, 4, 5) indicates that the pixel points with marking values 2, 3, 4 and 5 are pixel points of the same connected domain; it is stored as the equivalent pairs (2, 3), (3, 4), (4, 5), (5, 5), where (5, 5) represents the tail of the equivalent chain and 5 is the end value.
Equivalent array: the equivalent array, also called the labelpair array, is a one-dimensional array storing equivalent pairs; for example, the equivalent pair (2, 3) is stored as labelpair[2] = 3. The equivalent array can store several equivalent chains, which are distinguished by different address areas. For example, if labelpair[2] = 3, labelpair[3] = 4 and labelpair[4] = 5, the equivalent pairs (2, 3), (3, 4), (4, 5), (5, 5) belong to the same equivalent chain, namely (2, 3, 4, 5); if labelpair[1] = 6, the equivalent pair (1, 6) belongs to a different equivalent chain, namely (1, 6). Two different equivalent chains in one array thus represent two different connected domains.
In this two-step method for marking connected domains, the equivalent array is realized by a chain structure: the marking values belonging to the same connected domain are expressed by an equivalent chain, and only a one-dimensional array is needed for storage.
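As a small illustration of the equivalent array and equivalent chain defined above, the following Python fragment stores the two example chains (2, 3, 4, 5) and (1, 6) and resolves any marking value to the end value of its chain; the function name end_value is only an illustrative choice.

```python
# Equivalent array (labelpair): index = marking value, entry = next value in the chain.
# Chain (2, 3, 4, 5) with end value 5, and chain (1, 6) with end value 6.
labelpair = [0, 6, 3, 4, 5, 5, 6]

def end_value(labelpair, k):
    """Follow the equivalent chain until labelpair[k] == k (the chain tail)."""
    while labelpair[k] != k:
        k = labelpair[k]
    return k

assert end_value(labelpair, 2) == end_value(labelpair, 4) == 5   # same connected domain
assert end_value(labelpair, 1) == 6                              # a different connected domain
```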
In this embodiment, the scanning is performed row by row, starting from the upper left corner of the image.
Accordingly, referring to fig. 6 and 7, in the step of assigning the set mark value to the effective point in the original image according to the first setting rule, it includes:
step S1021a: and judging whether the marking values of the pixel points in the neighborhood of the current effective point are all 0. Referring to fig. 6, a pixel point in the neighborhood of the current effective point 3×3 includes a first pixel point a1, a second pixel point a2, a third pixel point a3 and a fourth pixel point a4, where the first pixel point a1 is a pixel point adjacent to the left side of the current effective point, the second pixel point a2 is a pixel point adjacent to the upper side of the first pixel point a1, the third pixel point a3 is a pixel point adjacent to the left side of the second pixel point, and the fourth pixel point a4 is a pixel point adjacent to the right side of the second pixel point. When the marking values of the first pixel point a1, the second pixel point a2, the third pixel point a3 and the fourth pixel point a4 are all 0, the step S1021b is skipped, otherwise the step S1021c is skipped.
Step S1021b: the significant point is assigned a marking value that is different from the previously marked significant point and the marking value for the current significant point is stored in an equivalent pair array. For example, if a previous pixel has been assigned a flag value of 1, 2, or 3, a new flag value, e.g., 4, is assigned to that pixel and stored in the equivalent array, i.e., labelpair [4] =4.
Step S1021c: a marking value which is not 0 is selected from the four pixel points in the order of the first pixel point a1, the second pixel point a2, the third pixel point a3 and the fourth pixel point a4, and is given to the effective point.
Step S1021d: it is further determined whether the marking values of the first pixel point a1 and the fourth pixel point a4 are both non-zero and not equal; if so, jump to step S1021e, otherwise jump to step S1021f.
Step S1021e: the marking values of the first pixel point a1 and the fourth pixel point a4 are stored in an equivalent array as equivalent pairs. That is, labelpair [ a1] =a4.
Step S1021f: it is further determined whether the marking values of the third pixel point a3 and the fourth pixel point a4 are both non-zero and not equal; if so, jump to step S1021g, otherwise keep the marking values of the third pixel point a3 and the fourth pixel point a4 unchanged.
Step S1021g: the third pixel point a3 and the fourth pixel point a4 are stored in an equivalent array as equivalent pairs. That is, labelpair [ a4] =a3.
In this embodiment, the second setting rule is to update the marking value of every pixel point whose marking value belongs to an equivalent chain in the equivalent array to the end value of that equivalent chain, so that after updating, the marking values of the same connected domain are all the same. For example, the equivalent chain (2, 3, 4, 5) is stored in the array as the equivalent pairs labelpair[2] = 3, labelpair[3] = 4, labelpair[4] = 5 and labelpair[5] = 5; since the end value of the equivalent chain is 5, the marking values of the pixel points whose marking values are 2, 3, 4 or 5 are all updated to 5. Therefore, pixel points having the same marking value belong to the same connected domain.
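Putting steps S1021a to S1021g and the update rule together, a compact two-step (two-pass) sketch in Python/NumPy might look as follows. It uses the standard causal neighbour set (left, upper-left, upper and upper-right of the current effective point), which corresponds to the a1 to a4 neighbourhood of fig. 6 up to ordering, and its equivalence bookkeeping is one possible realisation of the equivalent array rather than the exact patented procedure.

```python
import numpy as np

def label_connected_domains(edge_map):
    """Two-pass connected-domain marking of a binary edge map.
    Effective points are the non-zero pixels; pixels of the same connected
    domain end up with the same marking value."""
    h, w = edge_map.shape
    labels = np.zeros((h, w), dtype=np.int32)
    labelpair = [0]                 # equivalent array; labelpair[k] == k marks a chain end
    next_label = 1

    def resolve(k):                 # follow an equivalent chain to its end value
        while labelpair[k] != k:
            k = labelpair[k]
        return k

    for y in range(h):              # scan row by row from the upper-left corner
        for x in range(w):
            if edge_map[y, x] == 0:
                continue
            neigh = []              # previously scanned neighbours: left, up-left, up, up-right
            if x > 0:               neigh.append(labels[y, x - 1])
            if y > 0 and x > 0:     neigh.append(labels[y - 1, x - 1])
            if y > 0:               neigh.append(labels[y - 1, x])
            if y > 0 and x < w - 1: neigh.append(labels[y - 1, x + 1])
            neigh = [n for n in neigh if n != 0]
            if not neigh:                     # all neighbour marks are 0: give a new marking value
                labels[y, x] = next_label
                labelpair.append(next_label)  # new one-element chain: labelpair[L] = L
                next_label += 1
            else:
                labels[y, x] = neigh[0]
                root = resolve(neigh[0])      # record equivalence pairs between differing marks
                for n in neigh[1:]:
                    r = resolve(n)
                    if r != root:
                        labelpair[min(r, root)] = max(r, root)
                        root = max(r, root)

    for y in range(h):              # second pass: update every mark to the end value of its chain
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = resolve(labels[y, x])
    return labels
```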
Step S1022: judging whether the pixel points of the connected domain meet the set conditions, and if the parameters of the pixel points of the connected domain meet the set conditions, extracting the connected domain as the outline of the checkerboard.
Since the number of pixel points in the connected domain of the checkerboard clearly differs from that of interference points and stray line segments, and since, in the intended use scene, the checkerboard occupies most of the image area, the outline of the checkerboard can be identified by checking the number of pixel points of a connected domain or the range the connected domain occupies in the image.
In this embodiment, in step S1022, whether a connected domain is the checkerboard outline is determined by counting the number of its pixel points: when the number of pixel points exceeds a set number level, the connected domain is the checkerboard outline. For example, if the number of pixel points in the connected domain exceeds 10000, the connected domain is considered to be the checkerboard outline.
In other embodiments, whether a connected domain is the checkerboard outline can also be determined from the range it occupies in the image: the maximum value Xmax and minimum value Xmin of the abscissa and the maximum value Ymax and minimum value Ymin of the ordinate of the pixel points in the same connected domain are recorded, and according to the formula G = (Xmax - Xmin) × (Ymax - Ymin), the connected domain with the largest product G is taken as the outline of the checkerboard.
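A short sketch of the two alternative selection criteria just described (pixel count with the example level of 10000, or the bounding-box product G); choosing between them, or combining them, is left open here just as in the text.

```python
import numpy as np

def extract_checkerboard_contour(labels, min_pixels=10000, use_bbox=False):
    """Select the connected domain taken as the checkerboard outline.
    Returns a boolean mask of that domain (empty mask if none qualifies)."""
    ids = [k for k in np.unique(labels) if k != 0]
    if not ids:
        return np.zeros(labels.shape, dtype=bool)
    best_id, best_score = ids[0], -1
    for k in ids:
        ys, xs = np.nonzero(labels == k)
        if use_bbox:
            score = (xs.max() - xs.min()) * (ys.max() - ys.min())  # G = (Xmax-Xmin)*(Ymax-Ymin)
        else:
            score = xs.size                                        # number of pixel points
        if score > best_score:
            best_id, best_score = k, score
    if not use_bbox and best_score < min_pixels:                   # set number level not reached
        return np.zeros(labels.shape, dtype=bool)
    return labels == best_id
```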
Referring to fig. 8, fig. 8 is an effect diagram of extracting the outline of the checkerboard in the original image in step S102.
Step S103: and carrying out corner recognition on the extracted outline of the checkerboard.
In this embodiment, corner recognition is performed on the outline of the checkerboard by using Harris operator.
Thus, in step S103, it includes:
step S1031: calculating gradients I of pixel points I (X, Y) of the outline of the checkerboard in the X direction and the Y direction by using a horizontal difference operator and a vertical difference operator x 、I y, wherein ,
step S1032: calculating three products of two directional gradients of the pixel points to obtain a matrix m: wherein ,/>I x I y =I x ·I y 。
Step S1033: performing Gaussian smoothing filtering on four elements of the matrix M to obtain a new matrix M: wherein ,/>
Step S1034: calculating the Harris response value R of each pixel point according to the formula: r=detm- α (traceM) 2 Wherein detm=λ 1 λ 2 =AC-B 2 ,traceM=λ 1 +λ 2 =A+C,α=0.1。
Step S1035: and (3) performing R value non-maximum suppression in the 3X 3 neighborhood of the pixel point to obtain a maximum value point which is the corner point.
Compared with the Susan operator, corner recognition of the checkerboard outline with the Harris operator is simple in principle, good in stability and more widely applicable.
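The Harris steps S1031 to S1035 can be sketched as follows in Python/NumPy, restricted to the contour pixels extracted in step S102. The [-1, 0, 1] difference operators, the Gaussian window width and the check that R is positive are common practical choices assumed here; only α = 0.1 and the 3×3 non-maximum suppression are taken directly from the text.

```python
import numpy as np

def harris_corners_on_contour(image, contour_mask, alpha=0.1, sigma=1.0):
    """Harris response on contour pixels, followed by 3x3 non-maximum suppression.
    Returns a list of (row, col) corner coordinates."""
    img = image.astype(np.float64)

    # gradients Ix, Iy with horizontal / vertical difference operators [-1, 0, 1]
    ix = np.zeros_like(img)
    iy = np.zeros_like(img)
    ix[:, 1:-1] = img[:, 2:] - img[:, :-2]
    iy[1:-1, :] = img[2:, :] - img[:-2, :]
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy   # the three products of the two gradients

    # Gaussian smoothing of the products (separable 1-D window w)
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    g = np.exp(-(t ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    def smooth(m):
        m = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, m)
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, m)
    A, C, B = smooth(ixx), smooth(iyy), smooth(ixy)

    R = (A * C - B * B) - alpha * (A + C) ** 2   # R = detM - alpha * (traceM)^2
    corners = []
    h, w = R.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (contour_mask[y, x] and R[y, x] > 0
                    and R[y, x] == R[y - 1:y + 2, x - 1:x + 2].max()):
                corners.append((y, x))           # 3x3 non-maximum suppression
    return corners
```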
With continued reference to fig. 9, fig. 9 is an effect diagram of the identification of the corner points in the outline of the checkerboard by step S103. It can be seen that in the figure the corner points of the squares of the checkerboard are identified.
Step S104: and screening out the internal corner points of the checkerboard according to the identified corner points.
The internal corners of the checkerboard have the following feature: above, below, to the left and to the right of an internal corner there are four nearest corner points whose distances to it are almost equal, that is, the variance of the four distance values is small; a corner point that is not an internal corner point cannot satisfy this condition. Therefore, the internal corner points can be found by checking whether a corner point has four corner points at almost equal distances in these four directions.
Therefore, in the present embodiment, in step S104, specifically, it includes:
step S1041: traversing the corners of the checkerboard from the set corners, calculating the distances between the currently selected corners and other corners, and finding out four corners closest to the current corners by adopting a bubbling sequencing method;
step S1042: and calculating the variance of the distance values between the four corner points and the current corner point, and if the variance is smaller than a second set threshold value, the current corner point is the internal corner point of the checkerboard.
Referring to fig. 10, fig. 10 is an effect diagram of screening out internal corner points from the identified corner points of the checkerboard by step S104. It can be seen that in the figure, the inner corners of the checkered blocks are identified.
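Steps S1041 and S1042 can be sketched as follows; np.sort is used in place of the bubble sort named in the text (the result, the four smallest distances, is the same), and the variance threshold of 4.0 square pixels is a placeholder, since the second set threshold is not fixed by the application.

```python
import numpy as np

def screen_internal_corners(corners, variance_threshold=4.0):
    """Keep the corner points whose four nearest neighbouring corners lie at
    almost equal distances, i.e. the internal corner points of the checkerboard."""
    pts = np.asarray(corners, dtype=np.float64)
    if len(pts) < 5:                        # need at least four other corners
        return []
    internal = []
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        d[i] = np.inf                       # ignore the current corner itself
        nearest4 = np.sort(d)[:4]           # four closest corners (bubble sort in the text)
        if np.var(nearest4) < variance_threshold:
            internal.append((int(round(p[0])), int(round(p[1]))))
    return internal
```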
The application can be compared with a corner detection algorithm that directly identifies corners with the Harris operator. For running speed, the comparison was verified by simulation in matlab; both use the same Harris code and were tested in the same environment, and four sample images were chosen at random. The simulation times of the corner detection algorithm that directly uses the Harris operator and of the present application are listed in the following table; the running speed of the present application is far higher than that of the algorithm that directly identifies corners with the Harris operator.
| Unit: seconds | Harris | Present application |
| --- | --- | --- |
| Sample FIG. 1 | 4.3554 | 0.3136 |
| Sample FIG. 2 | 4.3190 | 0.3464 |
| Sample FIG. 3 | 4.3316 | 0.3319 |
| Sample FIG. 4 | 4.4103 | 0.3348 |

Table 1: Comparison of recognition speed between the Harris-operator corner detection algorithm and the present application
In terms of accuracy, the recognition result of the corner detection algorithm that directly identifies corners with the Harris operator is shown in fig. 2, and the result of corner recognition with the algorithm of the application is shown in fig. 9; the recognition result of the application is obviously better, while many points that are not corners appear in the recognition result of fig. 2.
In conclusion, compared with the corner detection algorithm for directly identifying the corners by using the Harris operator, the method greatly improves the identification speed and accuracy.
Correspondingly, according to the functional modularized thinking of computer software, the application also provides a checkerboard corner detection device based on contour extraction, which corresponds to the embodiment of the checkerboard corner detection method based on contour extraction shown in fig. 4 and 7. Referring to fig. 11, the following specifically discloses the modules included in the apparatus and specific functions implemented by each module.
The checkerboard corner detection device based on contour extraction comprises: an edge point obtaining module 111, configured to perform edge detection processing on an original image to obtain edge points in the original image; a contour extraction module 112, configured to extract a contour of a checkerboard in an original image according to edge points; the corner recognition module 113 is used for performing corner recognition on the extracted outline of the checkerboard; and the internal corner screening module 114 is configured to screen internal corners of the checkerboard according to the identified corners.
The edge point acquisition module 111 is further configured to: perform plane convolution of a transverse convolution factor Sx and a longitudinal convolution factor Sy with the original image to obtain a transverse gray-level difference approximation Gx and a longitudinal gray-level difference approximation Gy for each pixel point of the original image, wherein the original image is denoted A, the transverse convolution factor is Sx = [-1 0 +1; -2 0 +2; -1 0 +1], the longitudinal convolution factor is Sy = [+1 +2 +1; 0 0 0; -1 -2 -1], and according to the formulas the transverse gray-level difference approximation is Gx = Sx * A and the longitudinal gray-level difference approximation is Gy = Sy * A; obtain the gray weighting difference G of each pixel point in the original image A from its Gx and Gy according to the formula G = |Gx| + |Gy|; and, when the gray weighting difference G of a pixel point is larger than the first set threshold, take the pixel point as an edge point, otherwise mark the pixel point as 0.
The contour extraction module 112 includes:
the connected domain marking unit is used for scanning the pixel points of the original image in a set order; when a pixel point in the original image is scanned as an effective point, a set marking value is given to the effective point according to the first setting rule, the marking values of the effective points of the same connected domain are represented by an equivalent chain, the equivalent chain comprises marking values having an equivalent relationship and is stored in the equivalent array in the form of equivalent pairs, the marking values of the same equivalent chain in the equivalent array are updated to a uniform marking value according to the second setting rule, and the pixel points with the same marking value are then the pixel points of the same connected domain;
and the extraction unit is used for judging whether the pixel points of the connected domain meet the set conditions, and extracting the connected domain as the outline of the checkerboard if the parameters of the pixel points of the connected domain meet the set conditions.
The connected domain labeling unit is further configured to: judging whether the marking values of the pixel points in the neighborhood of the current effective point are all 0, wherein the pixel points in the neighborhood comprise a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, the first pixel point is the pixel point adjacent to the left side of the current effective point, the second pixel point is the pixel point adjacent to the upper side of the first pixel point, the third pixel point is the pixel point adjacent to the left side of the second pixel point, the fourth pixel point is the pixel point adjacent to the right side of the second pixel point, when the marking values of the first pixel point, the second pixel point, the third pixel point and the fourth pixel point are all 0, marking values of the effective point which are different from the marked effective point before are given, the marking values of the current effective point are stored in an equivalent pair array, and otherwise, marking values of the effective point which are not 0 are selected from the four pixel points according to the sequence of the first pixel point, the second pixel point, the third pixel point and the fourth pixel point; when the marking values of the first pixel point and the fourth pixel point are further judged to be not 0 and are not equal, the marking values of the first pixel point and the fourth pixel point are stored in an equivalent array as equivalent pairs, otherwise, whether the marking values of the third pixel point and the fourth pixel point are not 0 and are not equal is further judged, and if the marking values of the third pixel point and the fourth pixel point are judged to be not 0 and are not equal, the third pixel point and the fourth pixel point are stored in the equivalent array as equivalent pairs.
The connected domain labeling unit is further configured to: update the marking value of the pixel point corresponding to each marking value in an equivalent chain in the equivalent array to the end value of the equivalent chain, so that the updated marking values of the same connected domain are the same.
The extraction unit is further configured to: count the number of pixel points of a connected domain, the connected domain being the outline of the checkerboard when the number of pixel points exceeds the set number level; or alternatively, record the maximum value Xmax and minimum value Xmin of the abscissa and the maximum value Ymax and minimum value Ymin of the ordinate of the pixel points in the same connected domain and, according to the formula G = (Xmax - Xmin) × (Ymax - Ymin), identify the connected domain with the largest product G as the outline of the checkerboard.
The corner recognition module 113 is further configured to: calculate the gradients Ix and Iy of each pixel point I(x, y) of the outline of the checkerboard in the X direction and the Y direction with a horizontal difference operator and a vertical difference operator, wherein Ix = I ⊗ [-1 0 1] and Iy = I ⊗ [-1 0 1]T; calculate the three products of the gradients in the two directions to obtain a matrix m, m = [Ix² IxIy; IxIy Iy²], wherein Ix² = Ix·Ix, Iy² = Iy·Iy and IxIy = Ix·Iy; perform Gaussian smoothing filtering on the four elements of the matrix m to obtain a new matrix M, M = [A B; B C], wherein A = w ⊗ Ix², B = w ⊗ IxIy, C = w ⊗ Iy² and w is the Gaussian window; calculate the Harris response value R of each pixel point according to the formula R = detM - α(traceM)², wherein detM = λ1λ2 = AC - B², traceM = λ1 + λ2 = A + C, and α = 0.1; and perform non-maximum suppression of the R value in the 3×3 neighborhood of each pixel point, the maximum value points obtained being the corner points.
The internal corner screening module 114 is further configured to: traverse the corner points of the checkerboard from a set corner point, calculate the distances between the currently selected corner point and the other corner points, and find the four corner points closest to the current corner point by bubble sorting; and calculate the variance of the distance values between these four corner points and the current corner point, the current corner point being an internal corner point of the checkerboard if the variance is smaller than the second set threshold.
The working principle of the contour extraction-based checkerboard corner detection method of the present application will be described with reference to fig. 1 to 11.
Firstly, edge detection is carried out on an original image through a Sobel operator to obtain edge points in the original image.
Then, connected domain marking is performed with the edge points as effective points. Specifically: scanning is performed row by row from the upper left corner of the image, and it is determined whether the marking values of the pixel points in the 3×3 neighborhood of the current effective point are all 0. If so, a new marking value is given to the effective point and stored as a new equivalent chain; otherwise, a non-zero marking value is selected from the four neighboring pixel points in the order of the first pixel point a1, the second pixel point a2, the third pixel point a3 and the fourth pixel point a4 and is given to the effective point. When it is determined that the marking values of the first pixel point a1 and the fourth pixel point a4 are both non-zero and not equal, the marking values of the first pixel point a1 and the fourth pixel point a4 are stored in the equivalent array as an equivalent pair; otherwise it is determined whether the marking values of the third pixel point a3 and the fourth pixel point a4 are both non-zero and not equal, and if so, the third pixel point a3 and the fourth pixel point a4 are stored in the equivalent array as an equivalent pair.
After the scanning is completed, all the marking values in each equivalent chain are updated to the end value of that chain, so that pixel points with the same marking value belong to the same connected region. Since the marking values of one equivalent chain all belong to the same connected region, several equivalent chains may exist after the scanning is completed, indicating several connected regions.
According to the characteristics of the connected region, if the number of pixel points of a connected region exceeds a set number, for example 10000, the connected region is considered to be the checkerboard outline. Corner recognition is then performed on the outline of the checkerboard, and the internal corner points are screened out from the identified corner points.
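For completeness, the illustrative sketches given earlier can be chained into a single call sequence mirroring steps S101 to S104; the function names are those of the sketches above, not an API defined by the application, and the threshold values remain placeholders.

```python
def detect_checkerboard_internal_corners(gray_image):
    """Illustrative pipeline: edge detection -> contour extraction ->
    corner recognition -> internal-corner screening."""
    edges = sobel_edge_points(gray_image, threshold=128.0)              # step S101
    labels = label_connected_domains(edges)                             # step S102 (marking)
    contour = extract_checkerboard_contour(labels, min_pixels=10000)    # step S102 (selection)
    corners = harris_corners_on_contour(gray_image, contour)            # step S103
    return screen_internal_corners(corners)                             # step S104
```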
The present disclosure also proposes a terminal, including: a processor; a memory for storing processor-executable instructions; the processor is configured to perform edge detection processing on the original image to obtain edge points in the original image; extracting the outline of the checkerboard in the original image according to the edge points; carrying out corner recognition on the extracted outline of the checkerboard; and screening out the internal corner points of the checkerboard according to the identified corner points.
The steps of the various method embodiments described above, such as those shown in fig. 4, are implemented when the processor executes the computer program. Alternatively, the processor may implement the functions of the modules/units in the above-described device embodiments, such as the functional modules of fig. 11, when executing the computer program.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program in the contour-extraction-based checkerboard corner detection device. For example, the computer program may be divided into the modules shown in fig. 11, each module having the specific functions described above.
The contour-extraction-based checkerboard corner detection device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server, or a shooting device such as a driving recorder or an action camera. The contour-extraction-based checkerboard corner detection device/terminal equipment may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a contour-extraction-based checkerboard corner detection device and does not constitute a limitation of the device, which may include more or fewer components than illustrated, may combine some components, or may have different components; for example, the contour-extraction-based checkerboard corner detection device may further include an input-output device, a network access device, a bus, etc.
The processor may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. The general purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the contour-extraction-based checkerboard corner detection device, and connects the parts of the device using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the contour-extraction-based checkerboard corner detection device by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to use (such as audio data, a phonebook, etc.). In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash card, at least one disk storage device, a flash memory device, or another non-volatile solid state storage device.
The present disclosure proposes a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the checkerboard corner detection method based on contour extraction as described above.
If the modules/units of the contour-extraction-based checkerboard corner detection device are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be adjusted as appropriate according to the requirements of legislation and patent practice in the relevant jurisdictions; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The application has the following beneficial effects: edge detection processing is performed on the original image to obtain the edge points in the original image, the outline of the checkerboard in the original image is extracted according to the edge points, corner recognition is then performed on the extracted outline of the checkerboard, and the internal corner points of the checkerboard are screened out from the recognized corner points; because a large number of non-corner pixels are removed when the checkerboard outline is extracted, the amount of data to be processed is greatly reduced, the processing speed and efficiency are improved, and the anti-interference capability and accuracy are effectively improved.
The foregoing is a further detailed description of the application in connection with specific embodiments, and it is not intended that the application be limited to such description. It will be apparent to those skilled in the art that several simple deductions or substitutions can be made without departing from the inventive concept.
Claims (4)
1. The checkerboard corner detection method based on contour extraction is characterized by comprising the following steps:
performing edge detection processing on an original image to obtain edge points in the original image;
extracting the outline of the checkerboard in the original image according to the edge points;
carrying out corner recognition on the extracted outline of the checkerboard;
screening out the internal corner points of the checkerboard according to the identified corner points;
the step of performing edge detection processing on the original image to obtain edge points in the original image includes:
performing plane convolution of a transverse convolution factor Sx and a longitudinal convolution factor Sy with the original image to obtain a transverse gray-level difference approximation Gx and a longitudinal gray-level difference approximation Gy for each pixel point of the original image, wherein the original image is denoted A, the transverse convolution factor is Sx = [-1 0 +1; -2 0 +2; -1 0 +1], the longitudinal convolution factor is Sy = [+1 +2 +1; 0 0 0; -1 -2 -1], and according to the formulas the transverse gray-level difference approximation is Gx = Sx * A and the longitudinal gray-level difference approximation is Gy = Sy * A;
obtaining the gray weighting difference G of each pixel point in the original image A from its transverse gray-level difference approximation Gx and longitudinal gray-level difference approximation Gy according to the formula G = |Gx| + |Gy|;
When the gray weighting difference G of the pixel point is larger than a first set threshold value, the pixel point is the edge point, otherwise, the pixel point is marked as 0;
the step of extracting the outline of the checkerboard in the original image according to the edge points comprises the following steps:
scanning pixel points of the original image according to a set sequence, when the pixel points in the original image are scanned to be effective points, namely, when the pixel points are scanned to be edge points, setting a set mark value to the effective points in the original image according to a first set rule, wherein the mark value of the effective points of the same connected domain is represented by an equivalent chain, the equivalent chain comprises mark values with the equivalent relation, the equivalent chain is stored in an equivalent array in an equivalent pair mode, the mark value of the same equivalent chain in the equivalent array is updated to be the uniform mark value according to a second set rule, and the pixel points with the same mark value are the pixel points of the same connected domain;
judging whether the pixel points of the connected domain meet the set conditions, and if the parameters of the pixel points of the connected domain meet the set conditions, extracting the connected domain as the outline of the checkerboard;
The step of assigning the set marking value to the valid point in the original image according to the first setting rule includes the steps of:
judging whether the marking values of the pixel points in the neighborhood of the current effective point are all 0, wherein the pixel points in the neighborhood comprise a first pixel point, a second pixel point, a third pixel point and a fourth pixel point, the first pixel point is the pixel point adjacent to the left side of the current effective point, the second pixel point is the pixel point adjacent to the upper side of the first pixel point, the third pixel point is the pixel point adjacent to the left side of the second pixel point, and the fourth pixel point is the pixel point adjacent to the right side of the second pixel point; when the marking values of the first pixel point, the second pixel point, the third pixel point and the fourth pixel point are all 0, a marking value different from those of the previously marked effective points is given to the effective point and the marking value of the current effective point is stored in the equivalent array; otherwise, a marking value which is not 0 is selected from the four pixel points in the order of the first pixel point, the second pixel point, the third pixel point and the fourth pixel point and is given to the effective point;
further judging whether the mark values of the first pixel point and the fourth pixel point are both non-zero and unequal; if so, storing the mark values of the first pixel point and the fourth pixel point in the equivalent array as an equivalent pair; otherwise, further judging whether the mark values of the third pixel point and the fourth pixel point are both non-zero and unequal, and if so, storing the mark values of the third pixel point and the fourth pixel point in the equivalent array as an equivalent pair; the step of updating the mark values of the same equivalent chain in the equivalent array to a uniform mark value according to the second set rule comprises the following steps:
updating the mark value of the pixel point corresponding to each mark value in the equivalent chain in the equivalent array to the end value of the equivalent chain so that the updated mark values of the same connected domain are the same;
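For illustration only, the following is a minimal Python sketch of this two-pass labeling scheme with equivalence pairs. It assumes the four examined neighbours are the left, upper-left, upper and upper-right pixels, and it uses a union-find table and the minimum neighbour label in place of the claim's ordering and equivalent-chain end-value update; the function name and the binary `edge_map` input are assumptions.

```python
import numpy as np

def label_connected_domains(edge_map: np.ndarray) -> np.ndarray:
    """Two-pass labeling of edge points (value 1) using equivalence pairs."""
    h, w = edge_map.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                      # parent[i] is the representative of label i
    next_label = 1

    def find(x: int) -> int:
        while parent[x] != x:
            x = parent[x]
        return x

    def union(a: int, b: int) -> None:   # record an equivalence pair
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    # First pass: assign provisional labels and record equivalence pairs.
    for y in range(h):
        for x in range(w):
            if edge_map[y, x] == 0:
                continue
            neighbours = []
            if x > 0:                  neighbours.append(labels[y, x - 1])      # left
            if y > 0 and x > 0:        neighbours.append(labels[y - 1, x - 1])  # upper-left
            if y > 0:                  neighbours.append(labels[y - 1, x])      # upper
            if y > 0 and x < w - 1:    neighbours.append(labels[y - 1, x + 1])  # upper-right
            nonzero = [n for n in neighbours if n != 0]
            if not nonzero:
                parent.append(next_label)       # new label, equivalent only to itself
                labels[y, x] = next_label
                next_label += 1
            else:
                labels[y, x] = min(nonzero)
                for n in nonzero[1:]:
                    union(nonzero[0], n)

    # Second pass: replace every provisional label by its representative,
    # so pixels of the same connected domain share one mark value.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```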
the step of screening the internal corner points of the checkerboard according to the identified corner points comprises the following steps:
traversing the corner points of the checkerboard starting from a set corner point, calculating the distances between the currently selected corner point and the other corner points, and finding the four corner points closest to the current corner point by a bubble sorting method;
calculating the variance of the distances between the four corner points and the current corner point; if the variance is smaller than a second set threshold value, the current corner point is an internal corner point of the checkerboard.
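For illustration only, a minimal Python sketch of this internal-corner screening, assuming an (N, 2) array of detected corner coordinates; NumPy sorting is used instead of the claim's bubble sort, which does not change which four distances are selected.

```python
import numpy as np

def screen_inner_corners(corners: np.ndarray, var_threshold: float) -> np.ndarray:
    """Keep corners whose four nearest neighbours lie at nearly equal distances."""
    keep = []
    for idx, p in enumerate(corners):
        d = np.linalg.norm(corners - p, axis=1)
        d = np.delete(d, idx)                 # distances to all other corner points
        if len(d) < 4:
            continue
        nearest4 = np.sort(d)[:4]             # four closest corner points
        if np.var(nearest4) < var_threshold:  # small variance -> internal corner
            keep.append(idx)
    return corners[keep]
```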
2. The checkerboard corner detection method according to claim 1, wherein the step of judging whether or not the pixel points of the connected domain satisfy a set condition includes:
counting the number of the pixel points of the connected domain, and when the number of the pixel points exceeds a set number level, the connected domain is the outline of the checkerboard; or,
and recording the maximum value Xmax and the minimum value Xmin of the abscissa and the maximum value Ymax and the minimum value Ymin of the ordinate of the pixel points in the same connected domain, and, according to the formula G = (Xmax - Xmin) × (Ymax - Ymin), determining the connected domain with the maximum product G as the outline of the checkerboard.
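For illustration only, a minimal Python sketch of both screening criteria of this claim, assuming a labeled image produced by the contour-extraction step; combining the point-count threshold and the largest bounding-box product in a single helper, and the parameter `min_points`, are assumptions made for the example.

```python
import numpy as np

def pick_checkerboard_contour(labels: np.ndarray, min_points: int) -> int:
    """Return the label of the connected domain taken as the checkerboard outline."""
    best_label, best_score = 0, -1
    for lab in np.unique(labels):
        if lab == 0:
            continue
        ys, xs = np.nonzero(labels == lab)
        if len(xs) < min_points:          # criterion 1: enough pixel points
            continue
        score = (xs.max() - xs.min()) * (ys.max() - ys.min())  # criterion 2: product G
        if score > best_score:
            best_label, best_score = lab, score
    return best_label
```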
3. The method for detecting corner points of a checkerboard according to claim 1, wherein said step of identifying corner points of said extracted outline of said checkerboard includes:
calculating the gradients I_x and I_y of each pixel point I(x, y) of the outline of the checkerboard in the X direction and the Y direction by using a horizontal difference operator and a vertical difference operator;
calculating the three products of the gradients of the pixel point in the two directions to obtain a matrix m = [[I_x², I_xI_y], [I_xI_y, I_y²]], wherein I_xI_y = I_x · I_y;
performing Gaussian smoothing filtering on the four elements of the matrix m to obtain a new matrix M = [[A, B], [B, C]], wherein A, B and C are the Gaussian-filtered values of I_x², I_xI_y and I_y², respectively;
calculating the Harris response value R of each pixel point according to the formula R = detM − α·(traceM)², wherein detM = λ1·λ2 = A·C − B², traceM = λ1 + λ2 = A + C, and α = 0.1;
performing non-maximum suppression of the R value in the 3 × 3 neighborhood of each pixel point; the obtained maximum points are the corner points.
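For illustration only, a minimal Python sketch of this Harris step, assuming [-1, 0, 1] difference operators and a Gaussian sigma of 1.0, with α = 0.1 as stated in the claim; keeping only positive local maxima of R is an added assumption.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, maximum_filter

def harris_corners(image: np.ndarray, alpha: float = 0.1, sigma: float = 1.0):
    """Harris response with 3x3 non-maximum suppression."""
    i = image.astype(np.float64)
    dx = np.array([[-1.0, 0.0, 1.0]])        # horizontal difference operator (assumed)
    dy = dx.T                                # vertical difference operator (assumed)
    i_x = convolve(i, dx)
    i_y = convolve(i, dy)
    # Gaussian smoothing of the three gradient products gives A, B, C of matrix M.
    a = gaussian_filter(i_x * i_x, sigma)
    b = gaussian_filter(i_x * i_y, sigma)
    c = gaussian_filter(i_y * i_y, sigma)
    r = (a * c - b * b) - alpha * (a + c) ** 2   # R = detM - alpha * (traceM)^2
    # Keep points that are the maximum of their 3x3 neighbourhood.
    local_max = (r == maximum_filter(r, size=3))
    corners = np.argwhere(local_max & (r > 0))   # (row, col) of corner candidates
    return corners, r
```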
4. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811601937.9A CN109509200B (en) | 2018-12-26 | 2018-12-26 | Checkerboard corner detection method based on contour extraction and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109509200A CN109509200A (en) | 2019-03-22 |
CN109509200B true CN109509200B (en) | 2023-09-29 |
Family
ID=65755323
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811601937.9A Active CN109509200B (en) | 2018-12-26 | 2018-12-26 | Checkerboard corner detection method based on contour extraction and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109509200B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110348263B (en) * | 2019-06-24 | 2022-09-30 | 西安理工大学 | Two-dimensional random code image identification and extraction method based on image identification |
CN110648368B (en) * | 2019-08-30 | 2022-05-17 | 广东奥普特科技股份有限公司 | Calibration board corner point discrimination method based on edge features |
CN113160320A (en) * | 2020-01-20 | 2021-07-23 | 北京芯海视界三维科技有限公司 | Chessboard angular point detection method and device for camera parameter calibration |
CN111553927B (en) * | 2020-04-24 | 2023-05-16 | 厦门云感科技有限公司 | Checkerboard corner detection method, detection system, computer device and storage medium |
CN113744177B (en) * | 2020-05-28 | 2024-08-23 | 中科寒武纪科技股份有限公司 | Corner detection method, device and storage medium of image |
CN111681284A (en) * | 2020-06-09 | 2020-09-18 | 商汤集团有限公司 | Corner point detection method and device, electronic equipment and storage medium |
CN112017218B (en) * | 2020-09-09 | 2024-08-02 | 杭州海康威视数字技术股份有限公司 | Image registration method and device, electronic equipment and storage medium |
CN113283416A (en) * | 2020-12-29 | 2021-08-20 | 深圳怡化电脑股份有限公司 | Character outline recognition method and device, electronic equipment and machine readable medium |
CN115830049B (en) * | 2022-07-18 | 2024-08-09 | 宁德时代新能源科技股份有限公司 | Corner detection method and device |
CN116030450B (en) * | 2023-03-23 | 2023-12-19 | 摩尔线程智能科技(北京)有限责任公司 | Checkerboard corner recognition method, device, equipment and medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150145862A1 (en) * | 2013-11-27 | 2015-05-28 | Adobe Systems Incorporated | Texture Modeling of Image Data |
CN106023171B (en) * | 2016-05-12 | 2019-05-14 | 惠州学院 | A kind of image angular-point detection method based on turning radius |
Patent Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009021089A (en) * | 2007-07-11 | 2009-01-29 | Nippon Zeon Co Ltd | Organic electroluminescent element and its manufacturing method |
CN101667288A (en) * | 2008-09-02 | 2010-03-10 | 新奥特(北京)视频技术有限公司 | Method for detecting corner points of communicated regions in binary symbol images |
CN101667287A (en) * | 2008-09-02 | 2010-03-10 | 新奥特(北京)视频技术有限公司 | Method for detecting corner points of outermost frames of symbols in symbol images |
CN103093451A (en) * | 2011-11-03 | 2013-05-08 | 北京理工大学 | Checkerboard intersection recognition algorithm |
WO2013188980A1 (en) * | 2012-06-20 | 2013-12-27 | Cfs Concrete Forming Systems Inc. | Formwork apparatus having resilient standoff braces and methods related thereto |
CN103177439A (en) * | 2012-11-26 | 2013-06-26 | 惠州华阳通用电子有限公司 | Automatically calibration method based on black and white grid corner matching |
CN103345755A (en) * | 2013-07-11 | 2013-10-09 | 北京理工大学 | Chessboard angular point sub-pixel extraction method based on Harris operator |
CN103489192A (en) * | 2013-09-30 | 2014-01-01 | 北京林业大学 | Method for detecting number of Arabidopsis leaves and distance between cusp and center of mass of each leaf |
CN103593840A (en) * | 2013-09-30 | 2014-02-19 | 北京林业大学 | Method for detecting phenotype of Arabidopsis |
CN103606127A (en) * | 2013-12-03 | 2014-02-26 | 中国科学院大学 | Anti-copying image watermarking method based on optical microstructure |
CN103996191A (en) * | 2014-05-09 | 2014-08-20 | 东北大学 | Detection method for black and white checkerboard image corners based on least square optimization |
CN104122271A (en) * | 2014-07-09 | 2014-10-29 | 宁波摩视光电科技有限公司 | Automated optical inspection (AOI)-based bullet apparent defect detection method |
CN104866856A (en) * | 2015-05-17 | 2015-08-26 | 西南石油大学 | Imaging log image solution cave information picking method based on connected domain equivalence pair processing |
CN105006022A (en) * | 2015-08-11 | 2015-10-28 | 中山大学 | Simplified method and device for edge collapse of 3D geometry graphics |
JP2017135587A (en) * | 2016-01-28 | 2017-08-03 | 東芝メディカルシステムズ株式会社 | Image processing device, image processing method, medical equipment and test method |
CN106097281A (en) * | 2016-06-27 | 2016-11-09 | 安徽慧视金瞳科技有限公司 | A kind of calibration maps for projecting interactive system and demarcate detection method |
CN106204570A (en) * | 2016-07-05 | 2016-12-07 | 安徽工业大学 | A kind of angular-point detection method based on non-causal fractional order gradient operator |
CN106340010A (en) * | 2016-08-22 | 2017-01-18 | 电子科技大学 | Corner detection method based on second-order contour difference |
CN106378514A (en) * | 2016-11-22 | 2017-02-08 | 上海大学 | Stainless steel non-uniform tiny multi-weld-joint visual inspection system and method based on machine vision |
CN106780537A (en) * | 2017-01-11 | 2017-05-31 | 山东农业大学 | A kind of paper cocooning frame silk cocoon screening plant and method based on image procossing |
CN107123146A (en) * | 2017-03-20 | 2017-09-01 | 深圳市华汉伟业科技有限公司 | The mark localization method and system of a kind of scaling board image |
CN107749071A (en) * | 2017-09-12 | 2018-03-02 | 深圳市易成自动驾驶技术有限公司 | Big distortion gridiron pattern image angular-point detection method and device |
CN108062759A (en) * | 2018-01-25 | 2018-05-22 | 华中科技大学 | A kind of more pixel-parallel labeling methods and system for being used to mark bianry image |
CN108447095A (en) * | 2018-01-31 | 2018-08-24 | 潍坊歌尔电子有限公司 | A kind of fisheye camera scaling method and device |
CN108734743A (en) * | 2018-04-13 | 2018-11-02 | 深圳市商汤科技有限公司 | Method, apparatus, medium and electronic equipment for demarcating photographic device |
CN108895959A (en) * | 2018-04-27 | 2018-11-27 | 电子科技大学 | A kind of camera calibration plate angle point calculating method based on sub-pix |
CN108986721A (en) * | 2018-06-14 | 2018-12-11 | 武汉精测电子集团股份有限公司 | A kind of test pattern generation method for display panel detection |
CN108898147A (en) * | 2018-06-27 | 2018-11-27 | 清华大学 | A kind of two dimensional image edge straightened method, apparatus based on Corner Detection |
CN109035320A (en) * | 2018-08-12 | 2018-12-18 | 浙江农林大学 | Depth extraction method based on monocular vision |
Non-Patent Citations (4)
Title |
---|
An automatic camera calibration method based on checkerboard; Qilin Bi et al.; Traitement du Signal; Vol. 34, No. 3-4; pp. 209-226 *
Sub-pixel checkerboard corner detection for camera calibration; Luo Jun; Journal of Chongqing University; Vol. 31, No. 6; pp. 615-618 *
Pan Wenming, Yi Wenbing. Learn FPGA Design Step by Step: A Minimalist Design Method. Beijing: Beihang University Press, 2017, pp. 237-238. *
Zhao Xiaochuan. MATLAB Digital Image Processing: From Simulation to Automatic C/C++ Code Generation. Beijing: Beihang University Press, 2015, p. 277. *
Also Published As
Publication number | Publication date |
---|---|
CN109509200A (en) | 2019-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109509200B (en) | Checkerboard corner detection method based on contour extraction and computer readable storage medium | |
CN110766736B (en) | Defect detection method, defect detection device, electronic equipment and storage medium | |
CN110766679B (en) | Lens contamination detection method and device and terminal equipment | |
CN108896278B (en) | Optical filter silk-screen defect detection method and device and terminal equipment | |
CN111612781A (en) | Screen defect detection method and device and head-mounted display equipment | |
CN111161222B (en) | Printing roller defect detection method based on visual saliency | |
CN111080661A (en) | Image-based line detection method and device and electronic equipment | |
CN109300104B (en) | Angular point detection method and device | |
CN115205223B (en) | Visual inspection method and device for transparent object, computer equipment and medium | |
CN113781406B (en) | Scratch detection method and device for electronic component and computer equipment | |
CN111539238B (en) | Two-dimensional code image restoration method and device, computer equipment and storage medium | |
CN112634301A (en) | Equipment area image extraction method and device | |
CN107909554B (en) | Image noise reduction method and device, terminal equipment and medium | |
CN117094975A (en) | Method and device for detecting surface defects of steel and electronic equipment | |
CN113283439B (en) | Intelligent counting method, device and system based on image recognition | |
CN108960012A (en) | Feature point detecting method, device and electronic equipment | |
CN108960247B (en) | Image significance detection method and device and electronic equipment | |
CN110298835B (en) | Leather surface damage detection method, system and related device | |
CN110751156A (en) | Method, system, device and medium for table line bulk interference removal | |
CN111126248A (en) | Method and device for identifying shielded vehicle | |
CN109784328A (en) | Position method, terminal and the computer readable storage medium of bar code | |
CN111524153B (en) | Image analysis force determination method and device and computer storage medium | |
CN111340040B (en) | Paper character recognition method and device, electronic equipment and storage medium | |
CN113744200B (en) | Camera dirt detection method, device and equipment | |
CN114463352A (en) | Slide scanning image target segmentation and extraction method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |