WO2023077404A1 - Defect detection method, device and system - Google Patents
- Publication number: WO2023077404A1 (PCT application PCT/CN2021/128893)
- Authority: WIPO (PCT)
- Prior art keywords: value, image, gray value, mapping, defect
Classifications
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06T7/0004 — Industrial image inspection
- G06N20/00 — Machine learning
- G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/90 — Dynamic range modification of images or parts thereof
- G06T7/0008 — Industrial image inspection checking presence/absence
- G06T7/11 — Region-based segmentation
- G06T7/187 — Segmentation or edge detection involving region growing, region merging or connected component labelling
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06N3/08 — Learning methods
- G06T2207/20021 — Dividing image into blocks, subimages or windows
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30108 — Industrial image inspection
- G06T2207/30164 — Workpiece; Machine component
- Y02E60/10 — Energy storage using batteries
Definitions
- the present application relates to the technical field of defect detection, in particular to a defect detection method, device and system.
- defects in an object degrade its performance and ultimately cause the object to fail quality requirements. Therefore, it is necessary to detect defects in the object.
- the embodiments of the present application provide a defect detection method, device and system, which can increase the defect detection speed.
- the present application provides a defect detection method, including: obtaining the average gray value of the image of the object to be detected; constructing a mapping table, where the elements of the mapping table include the mapping value corresponding to each gray value within the gray value range of the image, the mapping value corresponding to a gray value greater than or equal to a reference value is the first value, the mapping value corresponding to a gray value smaller than the reference value is the second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; looking up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; segmenting, according to the mapping value corresponding to the gray value of each pixel, at least one suspected defect sub-image from the image, where the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and inputting the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
- a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table without performing mathematical calculations, which improves the speed of defect detection.
- using the suspected defect sub-images, rather than the entire image of the object to be detected, as the input to the machine learning model can also help improve the speed of defect detection.
- obtaining the average gray value of the image of the object to be detected includes: obtaining the original gray value range of the image; and performing contrast stretching on the image to expand the original gray value range to the gray value range; the average gray value is then the average gray value of the image after contrast stretching.
- contrast stretching enlarges the gray level difference between the defect area and the non-defect area. In this way, the robustness and accuracy of suspected defect sub-image segmentation are improved, thereby improving the robustness and accuracy of defect detection while increasing the defect detection speed.
- performing contrast stretching on the image includes: converting the original gray value I1(x, y) of each pixel in the image into a gray value I2(x, y) according to the following formula:
- I2(x, y) = (I1(x, y) − a) × (d − c) / (b − a) + c
- where a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range, and d is the upper limit of the gray value range.
- the first value is d and the second value is c. In this way, the success rate of suspected defect sub-image segmentation can be improved.
- the machine learning model includes a residual neural network model, and the total number of convolutional layers and fully connected layers in the residual neural network model is 14. In this way, defect detection accuracy can be improved.
- the difference between the maximum original gray value and the minimum original gray value of the non-defect area of the image (the area other than defects) ranges from 35 to 50. In this way, the defect detection accuracy can be further improved.
- the difference between the maximum original gray value and the minimum original gray value is 40. In this way, the defect detection accuracy can be further improved.
- the maximum original grayscale value is 105, and the minimum original grayscale value is 75. In this way, the defect detection accuracy can be further improved.
- segmenting at least one suspected defect sub-image from the image includes: segmenting, according to the mapping value corresponding to the gray value of each pixel, a plurality of connected regions from the image, where the mapping value corresponding to the gray value of each pixel in each connected region is the first value; and, when two adjacent connected regions meet a preset condition, merging the two connected regions into one suspected defect sub-image, where the areas of the two connected regions are respectively a first area and a second area less than or equal to the first area, and the area of the overlapping region of the two connected regions is a third area;
- the preset condition includes that the ratio of the third area to the first area is greater than a preset ratio; when the two connected regions do not meet the preset condition, the two connected regions are determined as two suspected defect sub-images.
- in other words, if two adjacent connected regions overlap sufficiently, the two connected regions are merged into one suspected defect sub-image; otherwise, the two connected regions are regarded as two suspected defect sub-images. In this way, the number of suspected defect sub-images can be reduced, and the speed at which the machine learning model obtains defect detection results can be increased, thereby further improving the defect detection speed.
- the preset ratio is greater than 0.5 and less than 1. In this way, the number of suspected defect sub-images can be reduced while the accuracy of the suspected defect sub-images is still taken into account.
- the preset ratio is 0.8. In this way, the number of suspected defect sub-images can be reduced while the accuracy of the suspected defect sub-images is well balanced.
- the data type of the elements in the mapping table is unsigned byte.
- the defect detection result includes a defect type. In this way, the defect detection result is more accurate.
- the object to be detected includes a pole piece of a battery.
- the defect detection speed of the pole piece of the battery can be improved by using the above defect detection scheme.
- the battery includes a lithium battery.
- the detection speed of a defect of a pole piece of a lithium battery can be improved by using the above defect detection solution.
- the present application provides a defect detection device, including: an acquisition module configured to acquire the average gray value of an image of an object to be detected; a construction module configured to construct a mapping table, where the elements of the mapping table include the mapping value corresponding to each gray value within the gray value range of the image, the mapping value corresponding to a gray value greater than or equal to a reference value is the first value, the mapping value corresponding to a gray value smaller than the reference value is the second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; a search module configured to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; a segmentation module configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, where the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and an input module configured to input the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
- a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table without performing mathematical calculations, which improves the speed of defect detection.
- using the suspected defect sub-images, rather than the entire image of the object to be detected, as the input to the machine learning model can also help improve the speed of defect detection.
- the present application provides a defect detection device, including: a memory; and a processor coupled to the memory, configured to execute, based on instructions stored in the memory, the defect detection method described in any one of the above embodiments.
- the present application provides a defect detection system, comprising: the defect detection device described in any one of the above embodiments; and an imaging device configured to scan the object to be detected to obtain the image.
- the present application provides a computer-readable storage medium, including computer program instructions, wherein when the computer program instructions are executed by a processor, the defect detection method described in any one of the above-mentioned embodiments is implemented.
- the present application provides a computer program product, including a computer program, wherein when the computer program is executed by a processor, the defect detection method described in any one of the above embodiments is implemented.
- FIG. 1 is a schematic flow chart of a defect detection method according to an embodiment of the present application
- FIG. 2 is a schematic flow chart of a defect detection method according to another embodiment of the present application.
- Fig. 3 is a schematic flow chart of a defect detection method according to another embodiment of the present application.
- Fig. 4 is a schematic diagram of a residual neural network model of an embodiment of the present application.
- FIG. 5 is a schematic diagram of a defect detection device according to an embodiment of the present application.
- Fig. 6 is a schematic diagram of a defect detection device according to another embodiment of the present application.
- Fig. 7 is a schematic diagram of a defect detection system according to an embodiment of the present application.
- the inventor found that in the preprocessing process, for the gray value of each pixel of the image, complex mathematical calculations (such as subtraction, taking an absolute value, comparison, etc.) are required to determine whether the pixel is a suspected defect, which causes the image preprocessing to take longer and results in a lower defect detection speed. When the image size is large, the preprocessing takes even longer.
- the embodiment of the present application proposes the following technical solutions to improve the defect detection speed.
- FIG. 1 is a schematic flowchart of a defect detection method according to an embodiment of the present application. As shown in FIG. 1 , the defect detection method includes step 102 to step 110 .
- step 102 the average gray value of the image of the object to be detected is acquired.
- the image may be an image of the surface of the object to be detected.
- the object to be detected includes a pole piece of a battery, for example, a pole piece of a lithium battery.
- the image may be an image of the surface of the pole piece.
- the embodiment of the present application is not limited thereto, and the object to be detected may also be other workpieces.
- the average grayscale value of the image is the average grayscale value of the original grayscale of the image.
- the average gray value of the image is the average gray value of the gray values obtained by performing contrast stretching on the original gray, which will be described in detail in conjunction with some embodiments later.
- step 104 a mapping table is constructed, and the elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
- the absolute value of the difference between the average grayscale value of the image and the preset grayscale value is referred to as a reference value.
- the mapping value corresponding to the gray value greater than or equal to the reference value is the first value
- the mapping value corresponding to the gray value smaller than the reference value is the second value.
- an image has gray values ranging from 0 to 255.
- each gray value within this range is assigned a corresponding mapping value in the mapping table, that is, there are 256 mapping values corresponding to gray values in the mapping table.
- for example, if the reference value is 100, the mapping value corresponding to each gray value from 100 to 255 is the first value, and the mapping value corresponding to each gray value from 0 to 99 is the second value. It should be understood that the second value is different from the first value.
- the above-mentioned preset grayscale value is a benchmark for the degree to which the grayscale value of each pixel of the image deviates from the average grayscale value, and its specific value can be set according to actual conditions.
- the data type of the elements in the mapping table is unsigned char.
- step 106 the mapping value corresponding to the gray value of each pixel in the image is searched from the mapping table.
- for example, if the gray value of a certain pixel is smaller than the reference value, the corresponding mapping value found from the mapping table is the second value (for example, 0); if the gray value of a certain pixel is 120 and the reference value is 100, the corresponding mapping value found from the mapping table is the first value (for example, 255).
- the mapping value corresponding to the gray value of each pixel can be directly found from the mapping table.
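The table construction and lookup of steps 104 and 106 can be sketched with NumPy as follows. This is a hedged illustration, not the application's reference implementation; the average and preset gray values below are made-up numbers chosen so the reference value works out to 100, matching the example above.

```python
import numpy as np

def build_lut(avg_gray: float, preset_gray: float,
              first_value: int = 255, second_value: int = 0) -> np.ndarray:
    """Build the 256-entry mapping table described above.

    Gray values >= the reference value map to first_value; gray values
    below it map to second_value. The reference value is the absolute
    difference between the average gray value and the preset gray value.
    """
    reference = abs(avg_gray - preset_gray)
    lut = np.full(256, second_value, dtype=np.uint8)  # unsigned char elements
    lut[int(np.ceil(reference)):] = first_value
    return lut

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    # One fancy-indexing lookup per image replaces per-pixel arithmetic
    # (subtraction, absolute value, comparison) at detection time.
    return lut[image]

# Tiny usage example; avg_gray/preset_gray are illustrative, giving reference = 100:
img = np.array([[80, 120], [99, 100]], dtype=np.uint8)
binary = apply_lut(img, build_lut(avg_gray=150.0, preset_gray=50.0))
```

Because the table is built once per image while the lookup runs per pixel, all the arithmetic is paid for up front, which is the speed-up the application claims.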
- step 108 at least one suspected defect sub-image is segmented from the image according to the mapping value corresponding to the gray value of each pixel.
- the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value.
- the absolute value of the difference between the gray value of each pixel in each suspected defect sub-image and the average gray value of the entire image is greater than the preset gray value.
- step 110 at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
- the at least one suspected defect sub-image here is obtained in step 108 .
- the machine learning model can obtain defect detection results based on the suspected defect sub-images.
- a sample defect image is used as an input, and the defect type of the sample defect image is used as an output, to train the machine learning model; a sample non-defect image is used as an input, and the non-defect result is used as an output, to train the machine learning model.
- machine learning models include, but are not limited to, residual neural network models.
- in some embodiments, the defect detection result is no defect; in other embodiments, the defect detection result is defective. In some embodiments, where the defect detection result is defective, the defect detection result further includes a defect type. Taking battery pole pieces as an example, the defect types may include, but are not limited to: metal leakage, cracks, dark spots, bubbles, pits, unknown, etc.
- a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table without performing mathematical calculations, which improves the speed of defect detection.
- using the suspected defect sub-images, rather than the entire image of the object to be detected, as the input to the machine learning model can also help improve the speed of defect detection.
- Fig. 2 is a schematic flowchart of a defect detection method according to another embodiment of the present application.
- the defect detection method includes step 102 to step 110 , and step 102 includes step 1021 and step 1022 .
- the implementation process of some steps (for example, step 1021 and step 1022) is mainly introduced below; for other steps, refer to the description of the embodiment shown in FIG. 1 .
- step 1021 the original gray value range of the image is obtained.
- the image of the object to be detected can be obtained by scanning and imaging the surface of the object to be detected, and the gray value range of the image is the original gray value range.
- the original gray value range of the image of the pole piece surface can be obtained. It can be understood that the normal area on the surface of the pole piece is smooth, with consistent surface texture and color; if the entire pole piece surface is evenly illuminated, the gray values of the normal area of the obtained image are similar.
- step 1022 contrast stretching is performed on the image to expand the original gray value range to a gray value range.
- the average gray value of the image is the average gray value of the image after contrast stretching.
- the image can be contrast stretched according to the following formula:
- I2(x, y) = (I1(x, y) − a) × (d − c) / (b − a) + c
- where a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range after contrast stretching, and d is the upper limit of the gray value range after contrast stretching.
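A minimal sketch of this linear stretch, assuming the standard formula I2 = (I1 − a)(d − c)/(b − a) + c and taking a and b from the image itself (the application does not say how a and b are obtained, so this is one plausible choice):

```python
import numpy as np

def contrast_stretch(image: np.ndarray, c: int = 0, d: int = 255) -> np.ndarray:
    """Linearly stretch the image's original gray range [a, b] to [c, d]."""
    a = int(image.min())
    b = int(image.max())
    if b == a:
        # Flat image: no contrast to stretch; map everything to the lower limit.
        return np.full_like(image, c)
    stretched = (image.astype(np.float64) - a) * (d - c) / (b - a) + c
    return np.clip(np.rint(stretched), c, d).astype(np.uint8)

# e.g. a pole-piece-like patch whose original gray range is 75..105,
# matching the 75/105 example given elsewhere in the description:
img = np.array([[75, 90], [105, 75]], dtype=np.uint8)
out = contrast_stretch(img)  # range expanded to 0..255
```

Stretching first makes the subsequent thresholding via the mapping table less sensitive to overall illumination, which is the robustness gain described above.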
- step 104 a mapping table is constructed, and elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
- the mapping value corresponding to the gray value greater than or equal to the reference value is the first value
- the mapping value corresponding to the gray value smaller than the reference value is the second value.
- the first value is the upper limit d of the gray value range after the contrast stretching, such as 255; the second value is the lower limit c of the gray value range after the contrast stretching, such as 0. In this way, the success rate of subsequent suspected defect sub-image segmentation can be improved.
- step 106 the mapping value corresponding to the gray value of each pixel in the image is searched from the mapping table.
- step 108 at least one suspected defect sub-image is segmented from the image according to the mapping value corresponding to the gray value of each pixel.
- step 110 at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
- the gray level difference between the defect area and the non-defect area is enlarged by contrast stretching. In this way, the robustness and accuracy of suspected defect sub-image segmentation are improved, thereby improving the robustness and accuracy of defect detection while increasing the defect detection speed.
- Fig. 3 is a schematic flowchart of a defect detection method according to another embodiment of the present application.
- the defect detection method includes step 102 to step 110 , and step 108 includes step 1081 to step 1083 .
- the implementation process of some steps (for example, step 1081 to step 1083) is mainly introduced below, and other steps may refer to the description of the embodiment shown in FIG. 1 .
- step 102 the average gray value of the image of the object to be detected is acquired.
- step 102 may include step 1021 and step 1022 shown in FIG. 2 .
- step 104 a mapping table is constructed, and elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
- the mapping value corresponding to the gray value greater than or equal to the reference value is the first value
- the mapping value corresponding to the gray value smaller than the reference value is the second value.
- step 106 the mapping value corresponding to the gray value of each pixel in the image is searched from the mapping table.
- step 1081 according to the mapping value corresponding to the gray value of each pixel, a plurality of connected regions are segmented from the image, and the mapping value corresponding to the gray value of each pixel in each connected region is the first value.
- in each connected region, the absolute value of the difference between the gray value of each pixel and the average gray value of the image is greater than the preset gray value.
- each connected region may therefore be a defect region.
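The application does not name a labelling algorithm for step 1081, so the sketch below uses a generic BFS over 4-connected neighbours; in practice a library routine such as `cv2.connectedComponents` or `scipy.ndimage.label` would typically be used instead.

```python
from collections import deque
import numpy as np

def connected_regions(binary: np.ndarray, first_value: int = 255):
    """Label 4-connected regions whose mapping value equals first_value.

    Returns a label image (0 = background) and the number of regions.
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] == first_value and labels[sy, sx] == 0:
                count += 1
                labels[sy, sx] = count
                queue = deque([(sy, sx)])
                while queue:  # flood-fill this region
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] == first_value
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two separate suspected-defect regions in a tiny mapped image:
binary = np.array([[255, 255, 0],
                   [0,   0,   0],
                   [0,   0, 255]], dtype=np.uint8)
labels, count = connected_regions(binary)
```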
- step 1082 when two adjacent connected regions satisfy the preset condition, the two connected regions are merged into one suspected defect sub-image.
- the areas of two adjacent connected regions are respectively referred to as the first area and the second area, and the area of the overlapping region of the two adjacent connected regions is referred to as the third area.
- the second area is smaller than or equal to the first area, that is, the areas of two adjacent connected regions may be equal or unequal.
- the aforementioned preset condition includes that the ratio of the third area to the first area is greater than a preset ratio.
- step 1083 if the two adjacent connected regions do not satisfy the preset condition, determine the two connected regions as two suspected defective sub-images.
- the preset ratio is greater than 0.5 and less than 1, for example, the preset ratio is 0.8. In this way, connected regions whose overlap ratio is less than or equal to 0.5 will not be merged, so the number of suspected defect sub-images can be reduced while their accuracy is still taken into account.
- the connected regions that meet the preset condition are merged into one suspected defect sub-image, and each connected region that does not meet the preset condition is a suspected defect sub-image on its own. In this way, at least one suspected defect sub-image is obtained.
- step 110 at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
- that is, when two adjacent connected regions satisfy the preset condition, the two connected regions are merged into one suspected defect sub-image; otherwise, the two connected regions are regarded as two suspected defect sub-images. In this way, the number of suspected defect sub-images can be reduced, and the speed at which the machine learning model obtains defect detection results can be increased, thereby further improving the defect detection speed.
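The merge decision of steps 1082/1083 reduces to a single ratio test. A minimal sketch, where the function name is ours and the "overlapping region" is read as the overlap of the regions' bounding boxes (the application does not spell out how the overlap area is measured):

```python
def should_merge(area1: float, area2: float, overlap_area: float,
                 preset_ratio: float = 0.8) -> bool:
    """Return True if two adjacent connected regions should be merged.

    first_area is the larger of the two region areas; third_area is the
    area of their overlapping region (e.g. of their bounding boxes).
    Merge when third_area / first_area exceeds the preset ratio.
    """
    first_area = max(area1, area2)
    third_area = overlap_area
    return third_area / first_area > preset_ratio

# With the preset ratio of 0.8 mentioned above:
merged = should_merge(100, 95, 85)      # 85/100 = 0.85 > 0.8 -> merge
separate = should_merge(100, 60, 50)    # 50/100 = 0.50 <= 0.8 -> keep separate
```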
- the preprocessing time of the steps before step 110 is not greater than 80ms.
- the inventors try to find a solution to improve the defect detection accuracy while increasing the defect detection speed.
- the total number of convolutional layers and fully connected layers in the residual neural network model is 14. In this way, both the defect detection speed and the defect detection accuracy can be improved.
- the inventor also noticed that when the gray value of the non-defect area in the image of the object to be detected varies over different ranges, the accuracy of the defect detection results obtained by the residual neural network model with a total of 14 convolutional layers and fully connected layers also varies.
- the difference between the maximum original gray value and the minimum original gray value of the non-defect area in the image of the object to be detected is in the range of 35 to 50.
- the defect detection result is more accurate.
- the difference between the maximum original grayscale value of the non-defective area and the minimum original grayscale value of the non-defective area is 40.
- the grayscale value of the non-defective area ranges from 75 to 105, that is, the maximum original grayscale value of the non-defective area is 105, and the minimum original grayscale value of the non-defective area is 75.
- the defect detection result is further more accurate.
- Fig. 4 is a schematic diagram of a residual neural network model according to an embodiment of the present application.
- the residual neural network model includes three residual network units (ResNet Units) between the max pooling layer and the average pooling layer; each residual network unit includes two residual blocks, and each residual block includes two convolutional layers.
- the residual neural network model also includes the first convolutional layer before the max pooling layer and the fully connected layer after the average pooling layer.
- the size of the convolution kernel of the first convolution layer is 7×7, the number of convolution kernels is 64, and the size of the image becomes 1/2 of the original size after passing through the first convolution layer. In some embodiments, the size of the image becomes 1/2 of the original size after the max pooling layer. In some embodiments, the size of the convolution kernel of each convolution layer in each residual network unit is 3×3, the number of convolution kernels is 256, and the size of the image remains unchanged after passing through each convolution layer in the residual network units.
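The layer count behind the "ResNet14" name, and the size halving just described, are simple arithmetic. A sketch assuming a 224×224 input (the input size is not stated in the application):

```python
# Counting the convolutional and fully connected layers of the Fig. 4 model:
first_conv = 1          # 7x7 convolution, 64 kernels
units = 3               # residual network units
blocks_per_unit = 2     # residual blocks per unit
convs_per_block = 2     # 3x3 convolutions per residual block
fully_connected = 1     # final FC layer after average pooling

total = first_conv + units * blocks_per_unit * convs_per_block + fully_connected
print(total)  # 14 -> hence "ResNet14"

# Spatial size bookkeeping for an assumed 224x224 input:
size = 224
size //= 2   # halved by the first 7x7 convolution
size //= 2   # halved again by the max pooling layer
# size stays unchanged through the residual units' 3x3 convolutions
```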
- the residual neural network model utilizes the following loss function during training:
- focal loss = -α(1-y′)^γ · log(y′)
- focal loss is the loss function
- y' is the probability of a certain category
- α is the weight of the category
- γ is the modulation factor
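Assuming the standard focal loss form FL = -α(1-y′)^γ·log(y′) that these symbols describe, a minimal sketch for a single predicted probability is (the α and γ values below are illustrative defaults, not values from the document):

```python
import math

def focal_loss(y_prime: float, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """focal loss = -alpha * (1 - y')**gamma * log(y')."""
    return -alpha * (1.0 - y_prime) ** gamma * math.log(y_prime)

# The (1 - y')**gamma factor down-weights confident correct predictions,
# so an uncertain prediction incurs a much larger loss:
print(focal_loss(0.9))  # small loss for y' = 0.9
print(focal_loss(0.1))  # much larger loss for y' = 0.1
```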
- the residual neural network model shown in Figure 4 can be obtained by removing one residual network unit from the ResNet18 model.
- the residual neural network model shown in Figure 4 can also be called a ResNet14 model.
- compared with the ResNet18 model, the volume of the ResNet14 model is reduced by 75%, the defect detection speed is increased by 25%, and the defect detection accuracy is increased by 5%.
- the inference time of the ResNet14 model is no greater than 20ms.
- using the ResNet14 model helps to classify defects that occur with a low probability (0.1%), reducing the possibility of missed detection.
- each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts, the embodiments may be referred to one another.
- the description of the device embodiments is relatively brief; for related parts, refer to the description of the method embodiments.
- Fig. 5 is a schematic diagram of a defect detection device according to an embodiment of the present application.
- the defect detection device includes an acquisition module 501 , a construction module 502 , a search module 503 , a segmentation module 504 and an input module 505 .
- the obtaining module 501 is configured to obtain the average gray value of the image of the object to be detected.
- the construction module 502 is configured to construct a mapping table, and the elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
- the absolute value of the difference between the average grayscale value and the preset grayscale value is a reference value; the mapping value corresponding to a grayscale value greater than or equal to the reference value is the first value, and the mapping value corresponding to a grayscale value smaller than the reference value is the second value.
- the lookup module 503 is configured to look up the mapping value corresponding to the gray value of each pixel in the image from the mapping table.
- the segmentation module 504 is configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel.
- the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value.
- the input module 505 is configured to input at least one suspected defect sub-image into the machine learning model to obtain a defect detection result.
- a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, only the mapping value corresponding to the gray value of each pixel needs to be looked up from the mapping table, without any mathematical calculation, which greatly improves the speed of defect detection.
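As an illustrative sketch only (not the patent's implementation), the lookup-table thresholding described above might look like the following in NumPy; the 256-level gray range, the preset gray value, and the 0/255 first and second values are assumptions drawn from examples elsewhere in the document:

```python
import numpy as np

def build_mapping_table(avg_gray: float, preset_gray: float,
                        first_value: int = 255, second_value: int = 0) -> np.ndarray:
    """Build a 256-entry lookup table: gray values greater than or equal to
    the reference value map to first_value, smaller ones to second_value.
    The reference value is |average gray value - preset gray value|."""
    reference = abs(avg_gray - preset_gray)
    gray_levels = np.arange(256)
    # Unsigned byte elements, matching the data type named in the claims.
    return np.where(gray_levels >= reference, first_value, second_value).astype(np.uint8)

# Tiny example image; the pixel values are made up for illustration.
image = np.array([[10, 200], [90, 30]], dtype=np.uint8)
table = build_mapping_table(avg_gray=image.mean(), preset_gray=20)
mapped = table[image]  # a pure lookup per pixel, no per-pixel arithmetic
print(mapped)
```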
- the acquisition module 501 is configured to acquire the average gray value of the image of the object to be detected in the manner described above.
- the segmentation module 504 is configured to segment at least one suspected defect sub-image from the image in the manner described above.
- Fig. 6 is a schematic diagram of a defect detection device according to another embodiment of the present application.
- the defect detection device 600 includes a memory 601 and a processor 602 coupled to the memory 601 , and the processor 602 is configured to execute the method of any one of the foregoing embodiments based on instructions stored in the memory 601 .
- the memory 601 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
- the system memory may store an operating system, an application program, a boot loader (Boot Loader) and other programs, for example.
- the defect detection device 600 may also include an input and output interface 603, a network interface 604, a storage interface 605, and the like. These interfaces 603, 604, and 605, as well as the memory 601 and the processor 602, may be connected via a bus 606, for example.
- the input and output interface 603 provides a connection interface for input and output devices such as a display, a mouse, a keyboard, and a touch screen.
- the network interface 604 provides connection interfaces for various networked devices.
- the storage interface 605 provides connection interfaces for external storage devices such as SD cards and U disks.
- the defect detection device is further configured to upload the defect detection result to the data platform and/or upload the suspected defect sub-image whose defect detection result is defective to the defect image library.
- the images in the defect image library can be used as training samples, thereby improving the accuracy of the machine learning model in subsequent defect detection.
- Fig. 7 is a schematic diagram of a defect detection system according to an embodiment of the present application.
- the defect detection system includes a defect detection device 701 and an imaging device 702 according to any one of the above embodiments.
- the imaging device 702 is configured to scan the object to be detected to obtain an image of the object to be detected.
- imaging device 702 is a line scan camera.
- the defect detection device 701 acquires the image of the object to be detected from the imaging device 702 and performs defect detection in the manner described above. After the defect detection result is obtained, a marking machine can be used to mark the defects on the object to be detected.
- An embodiment of the present application also provides a computer-readable storage medium, including computer program instructions, and when the computer program instructions are executed by a processor, the method of any one of the above-mentioned embodiments is implemented.
- An embodiment of the present application further provides a computer program product, including a computer program, and when the computer program is executed by a processor, the method of any one of the foregoing embodiments is implemented.
- when the size of the image is large, the effect of the above embodiments in improving the defect detection speed is more pronounced.
- the size of the image is 16K.
- the processing time of the defect detection process using the defect detection method of the above embodiments is not greater than 100 ms, and no missed detection occurs.
- the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Image Analysis (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
Abstract
Description
Claims (21)
- A defect detection method, comprising: acquiring the average gray value of an image of an object to be detected; constructing a mapping table, the elements of which include a mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; looking up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and inputting the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
- The method according to claim 1, wherein acquiring the average gray value of the image of the object to be detected comprises: acquiring the original gray value range of the image; and performing contrast stretching on the image to expand the original gray value range into the gray value range; wherein the average gray value is the average gray value of the image after the contrast stretching.
- The method according to claim 3, wherein the first value is d and the second value is c.
- The method according to claim 3 or 4, wherein c=0 and d=255.
- The method according to any one of claims 1-5, wherein the machine learning model comprises a residual neural network model, and the total number of convolutional layers and fully connected layers in the residual neural network model is 14.
- The method according to claim 6, wherein the difference between the maximum original gray value and the minimum original gray value of the non-defect area, other than defects, in the image is in the range of 35 to 50.
- The method according to claim 7, wherein the difference between the maximum original gray value and the minimum original gray value is 40.
- The method according to claim 8, wherein the maximum original gray value is 105 and the minimum original gray value is 75.
- The method according to any one of claims 1-9, wherein segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel comprises: segmenting a plurality of connected regions from the image according to the mapping value corresponding to the gray value of each pixel, the mapping value corresponding to the gray value of each pixel in each connected region being the first value; in the case where two adjacent connected regions satisfy a preset condition, merging the two connected regions into one suspected defect sub-image, wherein the areas of the two connected regions are respectively a first area and a second area smaller than or equal to the first area, the area of the overlapping region of the two connected regions is a third area, and the preset condition includes that the ratio of the third area to the first area is greater than a preset ratio; and in the case where the two connected regions do not satisfy the preset condition, determining the two connected regions as two suspected defect sub-images.
- The method according to claim 10, wherein the preset ratio is greater than 0.5 and less than 1.
- The method according to claim 11, wherein the preset ratio is 0.8.
- The method according to any one of claims 1-12, wherein the data type of the elements in the mapping table is unsigned byte.
- The method according to any one of claims 1-13, wherein the defect detection result includes a defect type.
- The method according to any one of claims 1-14, wherein the object to be detected includes an electrode sheet of a battery.
- The method according to claim 15, wherein the battery includes a lithium battery.
- A defect detection device, comprising: an acquisition module configured to acquire the average gray value of an image of an object to be detected; a construction module configured to construct a mapping table, the elements of which include a mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; a lookup module configured to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; a segmentation module configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and an input module configured to input the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
- A defect detection device, comprising: a memory; and a processor coupled to the memory, configured to execute the defect detection method according to any one of claims 1-16 based on instructions stored in the memory.
- A defect detection system, comprising: the defect detection device according to claim 17 or 18; and an imaging device configured to scan the object to be detected to obtain the image.
- A computer-readable storage medium comprising computer program instructions, wherein the defect detection method according to any one of claims 1-16 is implemented when the computer program instructions are executed by a processor.
- A computer program product comprising a computer program, wherein the defect detection method according to any one of claims 1-16 is implemented when the computer program is executed by a processor.
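As a toy illustration of the connected-region merging condition described in claims 10-12 (not part of the claims themselves; the region areas below are made up for illustration):

```python
def should_merge(area1: float, area2: float, overlap: float,
                 preset_ratio: float = 0.8) -> bool:
    """Merge two adjacent connected regions when the overlap area divided
    by the larger region's area exceeds the preset ratio; 0.8 is the
    example value given in claim 12."""
    first = max(area1, area2)  # the first area: the larger of the two
    return overlap / first > preset_ratio

print(should_merge(100, 95, 90))  # 90/100 > 0.8: merge into one sub-image
print(should_merge(100, 60, 50))  # 50/100 <= 0.8: keep as two sub-images
```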
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21962919.3A EP4280153A4 (en) | 2021-11-05 | 2021-11-05 | METHOD, DEVICE AND SYSTEM FOR ERROR DETECTION |
JP2023552290A JP7569479B2 (ja) | 2021-11-05 | 2021-11-05 | 欠陥検出方法、装置及びシステム |
CN202180053074.XA CN116420159A (zh) | 2021-11-05 | 2021-11-05 | 缺陷检测方法、装置和系统 |
PCT/CN2021/128893 WO2023077404A1 (zh) | 2021-11-05 | 2021-11-05 | 缺陷检测方法、装置和系统 |
KR1020237025378A KR20230124713A (ko) | 2021-11-05 | 2021-11-05 | 결함 검출 방법, 장치 및 시스템 |
US18/465,557 US20230419472A1 (en) | 2021-11-05 | 2023-09-12 | Defect detection method, device and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/128893 WO2023077404A1 (zh) | 2021-11-05 | 2021-11-05 | 缺陷检测方法、装置和系统 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/465,557 Continuation US20230419472A1 (en) | 2021-11-05 | 2023-09-12 | Defect detection method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023077404A1 true WO2023077404A1 (zh) | 2023-05-11 |
Family
ID=86240371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/128893 WO2023077404A1 (zh) | 2021-11-05 | 2021-11-05 | 缺陷检测方法、装置和系统 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230419472A1 (zh) |
EP (1) | EP4280153A4 (zh) |
JP (1) | JP7569479B2 (zh) |
KR (1) | KR20230124713A (zh) |
CN (1) | CN116420159A (zh) |
WO (1) | WO2023077404A1 (zh) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116342589A (zh) * | 2023-05-23 | 2023-06-27 | 之江实验室 | 一种跨视场划痕缺陷连续性检测方法和系统 |
CN116402827A (zh) * | 2023-06-09 | 2023-07-07 | 山东华禹威达机电科技有限公司 | 基于图像处理的采煤机用电缆夹板缺陷检测方法及装置 |
CN116703890A (zh) * | 2023-07-28 | 2023-09-05 | 上海瑞浦青创新能源有限公司 | 极耳缺陷的检测方法和系统 |
CN116721106A (zh) * | 2023-08-11 | 2023-09-08 | 山东明达圣昌铝业集团有限公司 | 一种基于图像处理的型材瑕疵视觉检测方法 |
CN116843678A (zh) * | 2023-08-28 | 2023-10-03 | 青岛冠宝林活性炭有限公司 | 一种硬碳电极生产质量检测方法 |
CN116883408A (zh) * | 2023-09-08 | 2023-10-13 | 威海坤科流量仪表股份有限公司 | 基于人工智能的积算仪壳体缺陷检测方法 |
CN116984628A (zh) * | 2023-09-28 | 2023-11-03 | 西安空天机电智能制造有限公司 | 一种基于激光特征融合成像的铺粉缺陷检测方法 |
CN117078666A (zh) * | 2023-10-13 | 2023-11-17 | 东声(苏州)智能科技有限公司 | 二维和三维结合的缺陷检测方法、装置、介质和设备 |
CN117078667A (zh) * | 2023-10-13 | 2023-11-17 | 山东克莱蒙特新材料科技有限公司 | 基于机器视觉的矿物铸件检测方法 |
CN117095009A (zh) * | 2023-10-20 | 2023-11-21 | 山东绿康装饰材料有限公司 | 一种基于图像处理的pvc装饰板缺陷检测方法 |
CN117115153A (zh) * | 2023-10-23 | 2023-11-24 | 威海坤科流量仪表股份有限公司 | 基于视觉辅助的印制线路板质量智能检测方法 |
CN117152180A (zh) * | 2023-10-31 | 2023-12-01 | 山东克莱蒙特新材料科技有限公司 | 基于人工智能的矿物铸件缺陷检测方法 |
CN117197141A (zh) * | 2023-11-07 | 2023-12-08 | 山东远盾网络技术股份有限公司 | 一种汽车零部件表面缺陷检测方法 |
CN117291937A (zh) * | 2023-11-27 | 2023-12-26 | 山东嘉达装配式建筑科技有限责任公司 | 基于图像特征分析的自动抹灰效果视觉检测系统 |
CN117474913A (zh) * | 2023-12-27 | 2024-01-30 | 江西省兆驰光电有限公司 | 一种针痕检测机台判定方法、系统、存储介质及计算机 |
CN117649412A (zh) * | 2024-01-30 | 2024-03-05 | 山东海天七彩建材有限公司 | 一种铝材表面质量的检测方法 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116609493B (zh) * | 2023-07-21 | 2023-09-22 | 宁德时代新能源科技股份有限公司 | 压痕检测方法、叠片电芯制造方法、装置和电子设备 |
CN117237442B (zh) * | 2023-11-16 | 2024-04-09 | 宁德时代新能源科技股份有限公司 | 连通域定位方法、图形处理器、设备和生产线 |
CN117876367B (zh) * | 2024-03-11 | 2024-06-07 | 惠州威尔高电子有限公司 | 一种用于电路板印刷的曝光优化方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001077165A (ja) * | 1999-09-06 | 2001-03-23 | Hitachi Ltd | 欠陥検査方法及びその装置並びに欠陥解析方法及びその装置 |
CN103499585A (zh) * | 2013-10-22 | 2014-01-08 | 常州工学院 | 基于机器视觉的非连续性锂电池薄膜缺陷检测方法及其装置 |
CN110288566A (zh) * | 2019-05-23 | 2019-09-27 | 北京中科晶上科技股份有限公司 | 一种目标缺陷提取方法 |
CN113538603A (zh) * | 2021-09-16 | 2021-10-22 | 深圳市光明顶照明科技有限公司 | 一种基于阵列产品的光学检测方法、系统和可读存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7447374B1 (en) * | 2003-01-06 | 2008-11-04 | Apple Inc. | Method and apparatus for an intuitive digital image processing system that enhances digital images |
SG139602A1 (en) * | 2006-08-08 | 2008-02-29 | St Microelectronics Asia | Automatic contrast enhancement |
CN109472783B (zh) * | 2018-10-31 | 2021-10-01 | 湘潭大学 | 一种泡沫镍表面缺陷提取及分类方法 |
JP2020187657A (ja) | 2019-05-16 | 2020-11-19 | 株式会社キーエンス | 画像検査装置 |
-
2021
- 2021-11-05 CN CN202180053074.XA patent/CN116420159A/zh active Pending
- 2021-11-05 EP EP21962919.3A patent/EP4280153A4/en active Pending
- 2021-11-05 JP JP2023552290A patent/JP7569479B2/ja active Active
- 2021-11-05 WO PCT/CN2021/128893 patent/WO2023077404A1/zh active Application Filing
- 2021-11-05 KR KR1020237025378A patent/KR20230124713A/ko unknown
-
2023
- 2023-09-12 US US18/465,557 patent/US20230419472A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001077165A (ja) * | 1999-09-06 | 2001-03-23 | Hitachi Ltd | 欠陥検査方法及びその装置並びに欠陥解析方法及びその装置 |
CN103499585A (zh) * | 2013-10-22 | 2014-01-08 | 常州工学院 | 基于机器视觉的非连续性锂电池薄膜缺陷检测方法及其装置 |
CN110288566A (zh) * | 2019-05-23 | 2019-09-27 | 北京中科晶上科技股份有限公司 | 一种目标缺陷提取方法 |
CN113538603A (zh) * | 2021-09-16 | 2021-10-22 | 深圳市光明顶照明科技有限公司 | 一种基于阵列产品的光学检测方法、系统和可读存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4280153A4 * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116342589B (zh) * | 2023-05-23 | 2023-08-22 | 之江实验室 | 一种跨视场划痕缺陷连续性检测方法和系统 |
CN116342589A (zh) * | 2023-05-23 | 2023-06-27 | 之江实验室 | 一种跨视场划痕缺陷连续性检测方法和系统 |
CN116402827A (zh) * | 2023-06-09 | 2023-07-07 | 山东华禹威达机电科技有限公司 | 基于图像处理的采煤机用电缆夹板缺陷检测方法及装置 |
CN116402827B (zh) * | 2023-06-09 | 2023-08-11 | 山东华禹威达机电科技有限公司 | 基于图像处理的采煤机用电缆夹板缺陷检测方法及装置 |
CN116703890A (zh) * | 2023-07-28 | 2023-09-05 | 上海瑞浦青创新能源有限公司 | 极耳缺陷的检测方法和系统 |
CN116703890B (zh) * | 2023-07-28 | 2023-12-19 | 上海瑞浦青创新能源有限公司 | 极耳缺陷的检测方法和系统 |
CN116721106A (zh) * | 2023-08-11 | 2023-09-08 | 山东明达圣昌铝业集团有限公司 | 一种基于图像处理的型材瑕疵视觉检测方法 |
CN116721106B (zh) * | 2023-08-11 | 2023-10-20 | 山东明达圣昌铝业集团有限公司 | 一种基于图像处理的型材瑕疵视觉检测方法 |
CN116843678B (zh) * | 2023-08-28 | 2023-11-21 | 青岛冠宝林活性炭有限公司 | 一种硬碳电极生产质量检测方法 |
CN116843678A (zh) * | 2023-08-28 | 2023-10-03 | 青岛冠宝林活性炭有限公司 | 一种硬碳电极生产质量检测方法 |
CN116883408A (zh) * | 2023-09-08 | 2023-10-13 | 威海坤科流量仪表股份有限公司 | 基于人工智能的积算仪壳体缺陷检测方法 |
CN116883408B (zh) * | 2023-09-08 | 2023-11-07 | 威海坤科流量仪表股份有限公司 | 基于人工智能的积算仪壳体缺陷检测方法 |
CN116984628B (zh) * | 2023-09-28 | 2023-12-29 | 西安空天机电智能制造有限公司 | 一种基于激光特征融合成像的铺粉缺陷检测方法 |
CN116984628A (zh) * | 2023-09-28 | 2023-11-03 | 西安空天机电智能制造有限公司 | 一种基于激光特征融合成像的铺粉缺陷检测方法 |
CN117078667A (zh) * | 2023-10-13 | 2023-11-17 | 山东克莱蒙特新材料科技有限公司 | 基于机器视觉的矿物铸件检测方法 |
CN117078666B (zh) * | 2023-10-13 | 2024-04-09 | 东声(苏州)智能科技有限公司 | 二维和三维结合的缺陷检测方法、装置、介质和设备 |
CN117078666A (zh) * | 2023-10-13 | 2023-11-17 | 东声(苏州)智能科技有限公司 | 二维和三维结合的缺陷检测方法、装置、介质和设备 |
CN117078667B (zh) * | 2023-10-13 | 2024-01-09 | 山东克莱蒙特新材料科技有限公司 | 基于机器视觉的矿物铸件检测方法 |
CN117095009A (zh) * | 2023-10-20 | 2023-11-21 | 山东绿康装饰材料有限公司 | 一种基于图像处理的pvc装饰板缺陷检测方法 |
CN117095009B (zh) * | 2023-10-20 | 2024-01-12 | 山东绿康装饰材料有限公司 | 一种基于图像处理的pvc装饰板缺陷检测方法 |
CN117115153B (zh) * | 2023-10-23 | 2024-02-02 | 威海坤科流量仪表股份有限公司 | 基于视觉辅助的印制线路板质量智能检测方法 |
CN117115153A (zh) * | 2023-10-23 | 2023-11-24 | 威海坤科流量仪表股份有限公司 | 基于视觉辅助的印制线路板质量智能检测方法 |
CN117152180B (zh) * | 2023-10-31 | 2024-01-26 | 山东克莱蒙特新材料科技有限公司 | 基于人工智能的矿物铸件缺陷检测方法 |
CN117152180A (zh) * | 2023-10-31 | 2023-12-01 | 山东克莱蒙特新材料科技有限公司 | 基于人工智能的矿物铸件缺陷检测方法 |
CN117197141A (zh) * | 2023-11-07 | 2023-12-08 | 山东远盾网络技术股份有限公司 | 一种汽车零部件表面缺陷检测方法 |
CN117197141B (zh) * | 2023-11-07 | 2024-01-26 | 山东远盾网络技术股份有限公司 | 一种汽车零部件表面缺陷检测方法 |
CN117291937A (zh) * | 2023-11-27 | 2023-12-26 | 山东嘉达装配式建筑科技有限责任公司 | 基于图像特征分析的自动抹灰效果视觉检测系统 |
CN117291937B (zh) * | 2023-11-27 | 2024-03-05 | 山东嘉达装配式建筑科技有限责任公司 | 基于图像特征分析的自动抹灰效果视觉检测系统 |
CN117474913A (zh) * | 2023-12-27 | 2024-01-30 | 江西省兆驰光电有限公司 | 一种针痕检测机台判定方法、系统、存储介质及计算机 |
CN117649412A (zh) * | 2024-01-30 | 2024-03-05 | 山东海天七彩建材有限公司 | 一种铝材表面质量的检测方法 |
CN117649412B (zh) * | 2024-01-30 | 2024-04-09 | 山东海天七彩建材有限公司 | 一种铝材表面质量的检测方法 |
Also Published As
Publication number | Publication date |
---|---|
EP4280153A1 (en) | 2023-11-22 |
CN116420159A (zh) | 2023-07-11 |
JP7569479B2 (ja) | 2024-10-18 |
JP2024509411A (ja) | 2024-03-01 |
EP4280153A4 (en) | 2024-04-24 |
KR20230124713A (ko) | 2023-08-25 |
US20230419472A1 (en) | 2023-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023077404A1 (zh) | 缺陷检测方法、装置和系统 | |
CN106875381B (zh) | 一种基于深度学习的手机外壳缺陷检测方法 | |
CN110148130B (zh) | 用于检测零件缺陷的方法和装置 | |
KR102166458B1 (ko) | 인공신경망 기반의 영상 분할을 이용한 불량 검출 방법 및 불량 검출 장치 | |
CN113592845A (zh) | 一种电池涂布的缺陷检测方法及装置、存储介质 | |
JP2017049974A (ja) | 識別器生成装置、良否判定方法、およびプログラム | |
JP2011214903A (ja) | 外観検査装置、外観検査用識別器の生成装置及び外観検査用識別器生成方法ならびに外観検査用識別器生成用コンピュータプログラム | |
WO2024002187A1 (zh) | 缺陷检测方法、缺陷检测设备及存储介质 | |
Xu et al. | Deep learning algorithm for real-time automatic crack detection, segmentation, qualification | |
Peng et al. | Non-uniform illumination image enhancement for surface damage detection of wind turbine blades | |
US12079310B2 (en) | Defect classification apparatus, method and program | |
TW201512649A (zh) | 偵測晶片影像瑕疵方法及其系統與電腦程式產品 | |
CN111369523A (zh) | 显微图像中细胞堆叠的检测方法、系统、设备及介质 | |
CN115775236A (zh) | 基于多尺度特征融合的表面微小缺陷视觉检测方法及系统 | |
CN109584206B (zh) | 零件表面瑕疵检测中神经网络的训练样本的合成方法 | |
CN113609984A (zh) | 一种指针式仪表读数识别方法、装置及电子设备 | |
Fang et al. | Automatic zipper tape defect detection using two-stage multi-scale convolutional networks | |
JP2021143884A (ja) | 検査装置、検査方法、プログラム、学習装置、学習方法、および学習済みデータセット | |
Huang et al. | The detection of defects in ceramic cell phone backplane with embedded system | |
IZUMI et al. | Low-cost training data creation for crack detection using an attention mechanism in deep learning models | |
CN114841992A (zh) | 基于循环生成对抗网络和结构相似性的缺陷检测方法 | |
Zhao et al. | MSC-AD: A Multiscene Unsupervised Anomaly Detection Dataset for Small Defect Detection of Casting Surface | |
CN117433966A (zh) | 一种粉磨颗粒粒径非接触测量方法及系统 | |
JP2021064215A (ja) | 表面性状検査装置及び表面性状検査方法 | |
CN117173154A (zh) | 玻璃瓶的在线图像检测系统及其方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21962919 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20237025378 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021962919 Country of ref document: EP Effective date: 20230816 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023552290 Country of ref document: JP |