
WO2023077404A1 - Defect detection method, device and system - Google Patents

Defect detection method, device and system

Info

Publication number
WO2023077404A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
image
gray value
mapping
defect
Prior art date
Application number
PCT/CN2021/128893
Other languages
English (en)
French (fr)
Inventor
牛茂龙
黄强威
谢金潭
刘永法
Original Assignee
宁德时代新能源科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宁德时代新能源科技股份有限公司 filed Critical 宁德时代新能源科技股份有限公司
Priority to EP21962919.3A priority Critical patent/EP4280153A4/en
Priority to JP2023552290A priority patent/JP7569479B2/ja
Priority to CN202180053074.XA priority patent/CN116420159A/zh
Priority to PCT/CN2021/128893 priority patent/WO2023077404A1/zh
Priority to KR1020237025378A priority patent/KR20230124713A/ko
Publication of WO2023077404A1 publication Critical patent/WO2023077404A1/zh
Priority to US18/465,557 priority patent/US20230419472A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/10Energy storage using batteries

Definitions

  • the present application relates to the technical field of defect detection, in particular to a defect detection method, device and system.
  • the defects of the object will affect the performance of the object, and finally lead to the unqualified quality of the object. Therefore, it is necessary to detect the defects of the object.
  • the embodiments of the present application provide a defect detection method, device or system, which can increase the defect detection speed.
  • the present application provides a defect detection method, including: obtaining the average gray value of the image of the object to be detected; constructing a mapping table, the elements of the mapping table include each The mapping value corresponding to the gray value, wherein the mapping value corresponding to the gray value greater than or equal to the reference value is the first value, the mapping value corresponding to the gray value smaller than the reference value is the second value, and the reference value is the absolute value of the difference between the average gray value and the preset gray value; look up the mapping value corresponding to the gray value of each pixel in the image from the mapping table; according to the gray value of each pixel A mapping value corresponding to the gray value, segmenting at least one suspected defect sub-image from the image, wherein the mapping value corresponding to the gray value of each pixel in each suspicious defect sub-image is the first value; The at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
  • a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table without performing mathematical calculations, which improves the speed of defect detection.
  • using submaps of suspected defects as input to the machine learning model, rather than the entire image of the object to be processed, can also help improve the speed of defect detection.
  • obtaining the average gray value of the image of the object to be detected includes: obtaining the original gray value range of the image; performing contrast stretching on the image to expand the original gray value range to the gray value range; wherein, the average gray value is the average gray value of the image after the contrast stretching.
  • through contrast stretching, the gray level difference between the defect area and the non-defect area is enlarged. In this way, the robustness and accuracy of the suspected defect sub-region segmentation are improved, thereby improving the robustness and accuracy of defect detection while increasing the defect detection speed.
  • performing contrast stretching on the image includes: converting the original gray value I1(x, y) of each pixel in the image into a gray value I2(x, y) according to the following formula: I2(x, y) = (I1(x, y) - a) × (d - c) / (b - a) + c, where a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range, and d is the upper limit of the gray value range.
  • the first value is d and the second value is c. In this way, the success rate of subgraph segmentation of suspected defects can be improved.
  • the machine learning model includes a residual neural network model, and the total number of convolutional layers and fully connected layers in the residual neural network model is 14. In this way, defect detection accuracy can be improved.
  • the difference between the maximum original gray value and the minimum original gray value of the non-defect area in the image except for the defect ranges from 35 to 50. In this way, the defect detection accuracy can be further improved.
  • the difference between the maximum original gray value and the minimum original gray value is 40. In this way, the defect detection accuracy can be further improved.
  • the maximum original grayscale value is 105, and the minimum original grayscale value is 75. In this way, the defect detection accuracy can be further improved.
  • segmenting at least one suspected defect sub-image from the image includes: segmenting a plurality of connected regions from the image according to the mapping value corresponding to the gray value of each pixel, the mapping value corresponding to the gray value of each pixel in each connected region being the first value; when two adjacent connected regions meet a preset condition, merging the two connected regions into one suspected defect sub-image, wherein the areas of the two connected regions are respectively a first area and a second area less than or equal to the first area, the area of the overlapping region of the two connected regions is a third area, and the preset condition includes that the ratio of the third area to the first area is greater than a preset ratio; when the two connected regions do not meet the preset condition, determining the two connected regions as two suspected defect sub-images.
  • in these embodiments, during the segmentation of the suspected defect sub-images, when the ratio of the area of the overlapping region of two adjacent connected regions to the area of the relatively larger connected region is greater than the preset ratio, the two connected regions are merged into one suspected defect sub-image; otherwise, the two connected regions are regarded as two suspected defect sub-images. In this way, the number of suspected defect sub-images can be reduced, and the speed at which the machine learning model obtains defect detection results can be increased, thereby further improving the defect detection speed.
  • the preset ratio is greater than 0.5 and less than 1. In this way, the number of suspicious defect subgraphs can be reduced, and the accuracy of the suspicious defect subgraph can also be taken into account.
  • the preset ratio is 0.8. In this way, the number of suspicious defect subgraphs can be reduced, and the accuracy of the suspicious defect subgraph can be better considered.
  • the data type of the elements in the mapping table is unsigned byte.
  • the defect detection result includes a defect type. In this way, the defect detection result is more accurate.
  • the object to be detected includes a pole piece of a battery.
  • the defect detection speed of the pole piece of the battery can be improved by using the above defect detection scheme.
  • the battery includes a lithium battery.
  • the detection speed of a defect of a pole piece of a lithium battery can be improved by using the above defect detection solution.
  • the present application provides a defect detection device, including: an acquisition module configured to acquire the average gray value of an image of an object to be detected; a construction module configured to construct a mapping table, the elements of the mapping table including the mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; a lookup module configured to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; a segmentation module configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and an input module configured to input the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
  • a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table without performing mathematical calculations, which improves the speed of defect detection.
  • using submaps of suspected defects as input to the machine learning model, rather than the entire image of the object to be processed, can also help improve the speed of defect detection.
  • the present application provides a defect detection device, including: a memory; and a processor coupled to the memory and configured to execute, based on instructions stored in the memory, the defect detection method described in any one of the above-mentioned embodiments.
  • the present application provides a defect detection system, comprising: the defect detection device described in any one of the above embodiments; and an imaging device configured to scan the object to be detected to obtain the image.
  • the present application provides a computer-readable storage medium, including computer program instructions, wherein when the computer program instructions are executed by a processor, the defect detection method described in any one of the above-mentioned embodiments is implemented.
  • the present application provides a computer program product, including a computer program, wherein when the computer program is executed by a processor, the defect detection method described in any one of the above embodiments is implemented.
  • FIG. 1 is a schematic flow chart of a defect detection method according to an embodiment of the present application
  • FIG. 2 is a schematic flow chart of a defect detection method according to another embodiment of the present application.
  • Fig. 3 is a schematic flow chart of a defect detection method according to another embodiment of the present application.
  • Fig. 4 is a schematic diagram of a residual neural network model of an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a defect detection device according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a defect detection device according to another embodiment of the present application.
  • Fig. 7 is a schematic diagram of a defect detection system according to an embodiment of the present application.
  • the inventor found that, in the preprocessing process, complex mathematical calculations (such as subtraction, absolute value, comparison, etc.) are required for the gray value of each pixel of the image in order to determine whether the pixel belongs to a suspected defect, which causes the image preprocessing process to take a longer time and results in a lower defect detection speed. In the case of a large image size, the preprocessing process takes even longer.
  • the embodiment of the present application proposes the following technical solutions to improve the defect detection speed.
  • FIG. 1 is a schematic flowchart of a defect detection method according to an embodiment of the present application. As shown in FIG. 1 , the defect detection method includes step 102 to step 110 .
  • step 102 the average gray value of the image of the object to be detected is acquired.
  • the image may be an image of the surface of the object to be detected.
  • the object to be detected includes a pole piece of a battery, for example, a pole piece of a lithium battery.
  • the image may be an image of the surface of the pole piece.
  • the embodiment of the present application is not limited thereto, and the object to be detected may also be other workpieces.
  • the average grayscale value of the image is the average grayscale value of the original grayscale of the image.
  • the average gray value of the image is the average gray value of the gray values obtained by performing contrast stretching on the original gray, which will be described in detail in conjunction with some embodiments later.
  • mapping table is constructed, and elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
  • the absolute value of the difference between the average grayscale value of the image and the preset grayscale value is referred to as a reference value.
  • the mapping value corresponding to the gray value greater than or equal to the reference value is the first value
  • the mapping value corresponding to the gray value smaller than the reference value is the second value.
  • an image has grayscale values ranging from 0 to 255.
  • for each grayscale value from 0 to 255, a corresponding mapping value is assigned in the mapping table, that is, there are 256 mapping values corresponding to gray values in the mapping table.
  • assuming the reference value is 100, the mapping value corresponding to each grayscale value from 100 to 255 is the first value
  • the mapping value corresponding to each grayscale value from 0 to 99 is the second value. It should be understood that the second value is different from the first value.
  • the above-mentioned preset grayscale value is a benchmark for the degree to which the grayscale value of each pixel of the image deviates from the average grayscale value, and its specific value can be set according to actual conditions.
  • the data type of the elements in the mapping table is unsigned char.
  • step 106 the mapping value corresponding to the gray value of each pixel in the image is searched from the mapping table.
  • still taking the reference value of 100 as an example, if the grayscale value of a certain pixel is 80, the corresponding mapping value found from the mapping table is the second value (for example, 0); if the grayscale value of a certain pixel is 120, the corresponding mapping value found from the mapping table is the first value (for example, 255).
  • the mapping value corresponding to the gray value of each pixel can be directly found from the mapping table.
  • step 108 at least one suspected defect sub-image is segmented from the image according to the mapping value corresponding to the gray value of each pixel.
  • the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value.
  • the absolute value of the difference between the gray value of each pixel in each suspected defect sub-image and the average gray value of the entire image is greater than the preset gray value.
  • step 110 at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
  • the at least one suspected defect sub-image here is obtained in step 108 .
  • the machine learning model can obtain defect detection results based on the suspected defect sub-images.
  • a sample defect image is used as an input, and the defect type of the sample defect image is used as an output, to train a machine learning model;
  • a sample non-defect image is used as an input, and the non-defect result is used as an output, to train the machine learning model.
  • machine learning models include, but are not limited to, residual neural network models.
  • the defect detection result is no defect. In other embodiments, the defect detection result is defective. In some embodiments, where the defect detection result is defective, the defect detection result further includes a defect type. Taking battery pole pieces as an example, the types of defects may include, but are not limited to: metal leakage, cracks, dark spots, bubbles, pits, unknown, etc.
  • a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table without performing mathematical calculations, which improves the speed of defect detection.
  • using submaps of suspected defects as input to the machine learning model, rather than the entire image of the object to be processed, can also help improve the speed of defect detection.
  • Fig. 2 is a schematic flowchart of a defect detection method according to another embodiment of the present application.
  • the defect detection method includes step 102 to step 110 , and step 102 includes step 1021 and step 1022 .
  • the implementation process of some steps (for example, step 1021 and step 1022) is mainly introduced below, and other steps may refer to the description of the embodiment shown in FIG. 1.
  • step 1021 the original gray value range of the image is obtained.
  • the image of the object to be detected can be obtained by scanning and imaging the surface of the object to be detected, and the gray value range of the image is the original gray value range.
  • taking the pole piece of a lithium battery as an example, by keeping the illumination intensity constant, the range of the original gray values of the image of the pole piece surface can be obtained. It can be understood that the normal area on the surface of the pole piece is smooth, and the surface texture and color are consistent. If the surface of the entire pole piece is evenly illuminated, the gray values of the normal area of the obtained image are similar.
  • step 1022 contrast stretching is performed on the image to expand the original gray value range to a gray value range.
  • the average gray value of the image is the average gray value of the image after contrast stretching.
  • the image can be contrast stretched in the following manner: the original gray value I1(x, y) of each pixel in the image is converted into a gray value I2(x, y) according to the formula I2(x, y) = (I1(x, y) - a) × (d - c) / (b - a) + c, where a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range after contrast stretching, and d is the upper limit of the gray value range after contrast stretching.
  • a mapping table is constructed, and elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
  • the mapping value corresponding to the gray value greater than or equal to the reference value is the first value
  • the mapping value corresponding to the gray value smaller than the reference value is the second value.
  • the first value is the upper limit d of the gray value range after the contrast stretching, such as 255; the second value is the lower limit c of the gray value range after the contrast stretching, such as 0. In this way, the success rate of subsequent suspicious defect subgraph segmentation can be improved.
  • step 106 the mapping value corresponding to the gray value of each pixel in the image is searched from the mapping table.
  • step 108 at least one suspected defect sub-image is segmented from the image according to the mapping value corresponding to the gray value of each pixel.
  • step 110 at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
  • the gray level difference between the defect area and the non-defect area is enlarged by contrast stretching. In this way, the robustness and accuracy of the suspicious defect sub-region segmentation are improved, thereby improving the robustness and accuracy of defect detection while increasing the defect detection speed.
  • Fig. 3 is a schematic flowchart of a defect detection method according to another embodiment of the present application.
  • the defect detection method includes step 102 to step 110 , and step 108 includes step 1081 to step 1083 .
  • the implementation process of some steps (for example, step 1081 to step 1083) is mainly introduced below, and other steps may refer to the description of the embodiment shown in FIG. 1 .
  • step 102 the average gray value of the image of the object to be detected is acquired.
  • step 102 may include step 1021 and step 1022 shown in FIG. 2 .
  • a mapping table is constructed, and elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
  • the mapping value corresponding to the gray value greater than or equal to the reference value is the first value
  • the mapping value corresponding to the gray value smaller than the reference value is the second value.
  • step 106 the mapping value corresponding to the gray value of each pixel in the image is searched from the mapping table.
  • step 1081 according to the mapping value corresponding to the gray value of each pixel, a plurality of connected regions are segmented from the image, and the mapping value corresponding to the gray value of each pixel in each connected region is the first value.
  • in each connected region, the absolute value of the difference between the gray value of each pixel and the average gray value of the image is greater than the preset gray value.
  • every connected region may be a defect region.
  • step 1082 when two adjacent connected regions satisfy the preset condition, the two connected regions are merged into one suspected defect sub-image.
  • the areas of two adjacent connected regions are respectively referred to as the first area and the second area, and the area of the overlapping region of the two adjacent connected regions is referred to as the third area.
  • the second area is smaller than or equal to the first area, that is, the areas of two adjacent connected regions may be equal or unequal.
  • the aforementioned preset condition includes that the ratio of the third area to the first area is greater than a preset ratio.
  • step 1083 if the two adjacent connected regions do not satisfy the preset condition, determine the two connected regions as two suspected defective sub-images.
  • the predetermined ratio is greater than 0.5 and less than 1, for example, the predetermined ratio is 0.8. In this way, the connected regions whose overlap ratio is less than or equal to 0.5 will not be merged, so that the number of suspected defect subgraphs can be reduced, and the accuracy of the suspected defect subgraph can also be taken into account.
  • the connected regions that meet the preset conditions are merged into a suspected defect subgraph, and each connected region that does not meet the preset conditions is a suspected defect subimage. In this way, at least one suspected defect sub-image is obtained.
  • step 110 at least one suspected defect sub-image is input to a machine learning model to obtain a defect detection result.
  • in the above embodiments, during the segmentation of the suspected defect sub-images, when the ratio of the area of the overlapping region of two adjacent connected regions to the area of the relatively larger connected region is greater than the preset ratio, the two connected regions are merged into one suspected defect sub-image; otherwise, the two connected regions are regarded as two suspected defect sub-images. In this way, the number of suspected defect sub-images can be reduced, and the speed at which the machine learning model obtains defect detection results can be increased, thereby further improving the defect detection speed.
  • the preprocessing time of the steps before step 110 is not greater than 80ms.
  • the inventors try to find a solution to improve the defect detection accuracy while increasing the defect detection speed.
  • the total number of convolutional layers and fully connected layers in the residual neural network model is 14. In this way, both the defect detection speed and the defect detection accuracy can be improved.
  • the inventor also noticed that, when the gray value of the non-defective area in the image of the object to be detected varies within different ranges, the accuracy of the defect detection results obtained by detecting defects with the residual neural network model having a total of 14 convolutional layers and fully connected layers differs.
  • the difference between the maximum original gray value and the minimum original gray value of the non-defect area in the image of the object to be detected is in the range of 35 to 50.
  • the defect detection result is more accurate.
  • the difference between the maximum original grayscale value of the non-defective area and the minimum original grayscale value of the non-defective area is 40.
  • the grayscale value of the non-defective area ranges from 75 to 105, that is, the maximum original grayscale value of the non-defective area is 105, and the minimum original grayscale value of the non-defective area is 75.
  • the defect detection result is further more accurate.
  • Fig. 4 is a schematic diagram of a residual neural network model according to an embodiment of the present application.
  • the residual neural network model includes three residual network units (ResNet Units) between the maximum pooling layer and the average pooling layer, each residual network unit includes two residual blocks, and each residual block consists of two convolutional layers.
  • the residual neural network model also includes the first convolutional layer before the max pooling layer and the fully connected layer after the average pooling layer.
  • the size of the convolution kernel of the first convolution layer is 7*7, the number of convolution kernels is 64, and the size of the image becomes 1/2 of the original size after passing through the first convolution layer. In some embodiments, the size of the image becomes 1/2 of the original size after the maximum pooling layer. In some embodiments, the size of the convolution kernel of each convolution layer in each residual network unit is 3*3, the number of convolution kernels is 256, and the size of the image remains unchanged after passing through each convolutional layer in the residual network units.
  • the residual neural network model utilizes the following loss function during training: focal loss = -α(1 - y′)^γ · log(y′), where focal loss is the loss function, y′ is the probability of a certain category, α is the weight of the category, and γ is the modulation factor.
  • the residual neural network model shown in Figure 4 can be realized by reducing a residual network unit on the basis of the ResNet18 model.
  • the residual neural network model shown in Figure 4 can also be called a ResNet14 model.
  • the volume of the ResNet14 model is reduced by 75%, the defect detection speed is increased by 25%, and the defect detection accuracy is increased by 5%.
  • the inference time of the ResNet14 model is no greater than 20ms.
  • using the ResNet14 model helps to classify defects with low probability (0.1%), reducing the possibility of missed detection.
  • each embodiment in this specification is described in a progressive manner, each embodiment focuses on the difference from other embodiments, and the same or similar parts of each embodiment can be referred to each other.
  • the description is relatively simple, and for the related parts, please refer to the part of the description of the method embodiment.
  • Fig. 5 is a schematic diagram of a defect detection device according to an embodiment of the present application.
  • the defect detection device includes an acquisition module 501 , a construction module 502 , a search module 503 , a segmentation module 504 and an input module 505 .
  • the obtaining module 501 is configured to obtain the average gray value of the image of the object to be detected.
  • the construction module 502 is configured to construct a mapping table, and the elements of the mapping table include mapping values corresponding to each gray value within the gray value range of the image.
  • the absolute value of the difference between the average grayscale value and the preset grayscale value is a reference value
  • the mapping value corresponding to the grayscale value greater than or equal to the reference value is the first value
  • the mapping value corresponding to the grayscale value smaller than the reference value is the second value.
  • the lookup module 503 is configured to look up the mapping value corresponding to the gray value of each pixel in the image from the mapping table.
  • the segmentation module 504 is configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel.
  • the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value.
  • the input module 505 is configured to input at least one suspected defect sub-image into the machine learning model to obtain a defect detection result.
  • a mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequent only need to look up the mapping value corresponding to the gray value of each pixel of the image from the mapping table, without the need for mathematical calculations, which greatly improves the speed of defect detection.
  • the acquisition module 501 is configured to acquire the average gray value of the image of the object to be detected in the manner described above.
  • the segmentation module 504 is configured to segment at least one suspected defect sub-image from the image in the manner described above.
  • Fig. 6 is a schematic diagram of a defect detection device according to another embodiment of the present application.
  • the defect detection device 600 includes a memory 601 and a processor 602 coupled to the memory 601 , and the processor 602 is configured to execute the method of any one of the foregoing embodiments based on instructions stored in the memory 601 .
  • the memory 601 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory may store an operating system, an application program, a boot loader (Boot Loader) and other programs, for example.
  • the defect detection device 600 may also include an input/output interface 603, a network interface 604, a storage interface 605, and the like. These interfaces 603, 604 and 605, as well as the memory 601 and the processor 602, may be connected via a bus 606, for example.
  • the input and output interface 603 provides a connection interface for input and output devices such as a display, a mouse, a keyboard, and a touch screen.
  • the network interface 604 provides connection interfaces for various networked devices.
  • the storage interface 605 provides connection interfaces for external storage devices such as SD cards and U disks.
  • the defect detection device is further configured to upload the defect detection result to the data platform and/or upload the suspected defect sub-image whose defect detection result is defective to the defect image library.
  • the images in the image library can be used as training samples, thereby improving the accuracy of the machine learning model in subsequently detecting defects.
  • Fig. 7 is a schematic diagram of a defect detection system according to an embodiment of the present application.
  • the defect detection system includes a defect detection device 701 and an imaging device 702 according to any one of the above embodiments.
  • the imaging device 702 is configured to scan the object to be detected to obtain an image of the object to be detected.
  • imaging device 702 is a line scan camera.
  • the defect detection device 701 acquires the image of the object to be detected from the imaging device 702, and performs defect detection in the manner described above. After the defect detection result is obtained, the marking machine can be used to mark the defect of the object to be detected.
  • An embodiment of the present application also provides a computer-readable storage medium, including computer program instructions, and when the computer program instructions are executed by a processor, the method of any one of the above-mentioned embodiments is implemented.
  • An embodiment of the present application further provides a computer program product, including a computer program, and when the computer program is executed by a processor, the method of any one of the foregoing embodiments is implemented.
  • when the size of the image is large, the effect of the above embodiments on improving the defect detection speed is more obvious.
  • the size of the image is 16K.
  • the processing time of the defect detection process using the defect detection method of the above embodiments is not greater than 100 ms, and no missed detection occurred.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, this application may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the instruction means implementing the function specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

Embodiments of the present application provide a defect detection method, device and system, relating to the technical field of defect detection. The method includes: obtaining the average gray value of an image of an object to be detected; constructing a mapping table whose elements include a mapping value corresponding to each gray value within the gray value range of the image, where the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; looking up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, where the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and inputting the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.

Description

Defect detection method, device and system
Technical Field
The present application relates to the technical field of defect detection, and in particular to a defect detection method, device and system.
Background Art
In industrial production, defects of an object affect the performance of the object and ultimately result in unqualified quality of the object. Therefore, it is necessary to detect defects of the object.
Summary of the Invention
The inventors have noticed that the defect detection speed in the related art is relatively low.
Embodiments of the present application provide a defect detection method, device or system, which can increase the defect detection speed.
In a first aspect, the present application provides a defect detection method, including: obtaining the average gray value of an image of an object to be detected; constructing a mapping table, the elements of the mapping table including a mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; looking up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and inputting the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
In the technical solution of the embodiments of the present application, on the one hand, after the average gray value of the image is obtained, the mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel of the image, without performing mathematical calculations, which increases the defect detection speed. On the other hand, using the suspected defect sub-images as the input of the machine learning model, rather than the entire image of the object to be detected, also helps to increase the defect detection speed.
In some embodiments, obtaining the average gray value of the image of the object to be detected includes: obtaining the original gray value range of the image; and performing contrast stretching on the image to expand the original gray value range to the gray value range; wherein the average gray value is the average gray value of the image after the contrast stretching. The contrast stretching enlarges the gray difference between defect areas and non-defect areas. In this way, the robustness and accuracy of the segmentation of suspected defect sub-regions are improved, so that the robustness and accuracy of defect detection are improved while the defect detection speed is increased.
In some embodiments, performing contrast stretching on the image includes: converting the original gray value I1(x, y) of each pixel in the image into a gray value I2(x, y) according to the following formula:
I2(x, y) = (I1(x, y) - a) × (d - c) / (b - a) + c
where a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range, and d is the upper limit of the gray value range.
In some embodiments, the first value is d and the second value is c. In this way, the success rate of segmenting the suspected defect sub-images can be improved.
In some embodiments, c = 0 and d = 255. In this way, the success rate of segmenting the suspected defect sub-images can be further improved.
In some embodiments, the machine learning model includes a residual neural network model, and the total number of convolutional layers and fully connected layers in the residual neural network model is 14. In this way, the defect detection accuracy can be improved.
In some embodiments, the difference between the maximum original gray value and the minimum original gray value of the non-defect areas, other than defects, in the image ranges from 35 to 50. In this way, the defect detection accuracy can be further improved.
In some embodiments, the difference between the maximum original gray value and the minimum original gray value is 40. In this way, the defect detection accuracy can be still further improved.
In some embodiments, the maximum original gray value is 105 and the minimum original gray value is 75. In this way, the defect detection accuracy can be still further improved.
In some embodiments, segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel includes: segmenting a plurality of connected regions from the image according to the mapping value corresponding to the gray value of each pixel, the mapping value corresponding to the gray value of each pixel in each connected region being the first value; when two adjacent connected regions satisfy a preset condition, merging the two connected regions into one suspected defect sub-image, wherein the areas of the two connected regions are respectively a first area and a second area smaller than or equal to the first area, the area of the overlapping region of the two connected regions is a third area, and the preset condition includes that the ratio of the third area to the first area is greater than a preset ratio; and when the two connected regions do not satisfy the preset condition, determining the two connected regions as two suspected defect sub-images. In these embodiments, during the segmentation of the suspected defect sub-images, when the ratio of the area of the overlapping region of two adjacent connected regions to the area of the relatively larger connected region is greater than the preset ratio, the two connected regions are merged into one suspected defect sub-image; otherwise, the two connected regions are taken as two suspected defect sub-images. In this way, the number of suspected defect sub-images can be reduced and the speed at which the machine learning model obtains the defect detection result can be increased, thereby further increasing the defect detection speed.
In some embodiments, the preset ratio is greater than 0.5 and less than 1. In this way, the number of suspected defect sub-images can be reduced while the accuracy of the suspected defect sub-images is also taken into account.
In some embodiments, the preset ratio is 0.8. In this way, the number of suspected defect sub-images can be reduced while the accuracy of the suspected defect sub-images is better taken into account.
In some embodiments, the data type of the elements in the mapping table is unsigned byte.
In some embodiments, the defect detection result includes a defect type. In this way, the defect detection result is more precise.
In some embodiments, the object to be detected includes a pole piece of a battery. When the object to be detected is a pole piece of a battery, the above defect detection solution can increase the detection speed of defects of the pole piece of the battery.
In some embodiments, the battery includes a lithium battery. When the object to be detected is a pole piece of a lithium battery, the above defect detection solution can increase the detection speed of defects of the pole piece of the lithium battery.
In a second aspect, the present application provides a defect detection device, including: an acquisition module configured to obtain the average gray value of an image of an object to be detected; a construction module configured to construct a mapping table, the elements of the mapping table including a mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value; a lookup module configured to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image; a segmentation module configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and an input module configured to input the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result. On the one hand, after the average gray value of the image is obtained, the mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel of the image, without performing mathematical calculations, which increases the defect detection speed. On the other hand, using the suspected defect sub-images as the input of the machine learning model, rather than the entire image of the object to be detected, also helps to increase the defect detection speed.
In a third aspect, the present application provides a defect detection device, including: a memory; and a processor coupled to the memory and configured to execute, based on instructions stored in the memory, the defect detection method described in any one of the above embodiments.
In a fourth aspect, the present application provides a defect detection system, including: the defect detection device described in any one of the above embodiments; and an imaging device configured to scan the object to be detected to obtain the image.
In a fifth aspect, the present application provides a computer-readable storage medium including computer program instructions, wherein the defect detection method described in any one of the above embodiments is implemented when the computer program instructions are executed by a processor.
In a sixth aspect, the present application provides a computer program product including a computer program, wherein the defect detection method described in any one of the above embodiments is implemented when the computer program is executed by a processor.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments of the present application are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a defect detection method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a defect detection method according to another embodiment of the present application;
Fig. 3 is a schematic flowchart of a defect detection method according to yet another embodiment of the present application;
Fig. 4 is a schematic diagram of a residual neural network model according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a defect detection device according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a defect detection device according to another embodiment of the present application;
Fig. 7 is a schematic diagram of a defect detection system according to an embodiment of the present application.
Detailed Description of the Embodiments
The embodiments of the present application are described in further detail below with reference to the drawings and the embodiments. The following detailed description of the embodiments and the drawings are used to illustrate the principles of the present application by way of example, but cannot be used to limit the scope of the present application; that is, the present application is not limited to the described embodiments.
In the description of the present application, it should be noted that, unless otherwise specified, "a plurality of" means two or more. In addition, the terms "first", "second", "third" and the like are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
Unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods and devices should be regarded as part of the granted specification.
In all examples shown and discussed herein, any specific value should be interpreted as merely exemplary and not as a limitation. Therefore, other examples of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further discussed in subsequent drawings.
With regard to the problem of low defect detection speed, the inventors found through analysis that, in the manner of the related art, the preprocessing of the image of the object takes a relatively long time.
After further analysis, the inventors found that, during preprocessing, a complex mathematical calculation process (such as subtraction, absolute value, comparison, etc.) is required for the gray value of each pixel of the image to determine whether the pixel belongs to a suspected defect, which causes the image preprocessing to take a relatively long time and thus results in a relatively low defect detection speed. When the size of the image is large, the preprocessing takes even longer.
In view of this, the embodiments of the present application propose the following technical solutions to increase the defect detection speed.
Fig. 1 is a schematic flowchart of a defect detection method according to an embodiment of the present application. As shown in Fig. 1, the defect detection method includes step 102 to step 110.
In step 102, the average gray value of an image of an object to be detected is obtained.
Here, the image may be an image of the surface of the object to be detected. In some embodiments, the object to be detected includes a pole piece of a battery, for example, a pole piece of a lithium battery. The image may be an image of the surface of the pole piece. However, it should be understood that the embodiments of the present application are not limited thereto, and the object to be detected may also be another workpiece.
In some implementations, the average gray value of the image is the average of the original gray values of the image. In other implementations, the average gray value of the image is the average of the gray values obtained by performing contrast stretching on the original gray values, which will be described in detail later with reference to some embodiments.
In step 104, a mapping table is constructed, the elements of which include a mapping value corresponding to each gray value within the gray value range of the image.
For convenience of description, the absolute value of the difference between the average gray value of the image and a preset gray value is referred to as a reference value. In the mapping table, the mapping value corresponding to a gray value greater than or equal to the reference value is a first value, and the mapping value corresponding to a gray value smaller than the reference value is a second value.
For example, the gray value range of the image is 0 to 255. For each gray value from 0 to 255, a corresponding mapping value is assigned in the mapping table, that is, the mapping table contains 256 mapping values corresponding to the gray values. Assuming the reference value is 100, the mapping value corresponding to each gray value from 100 to 255 is the first value, and the mapping value corresponding to each gray value from 0 to 99 is the second value. It should be understood that the second value is different from the first value.
It can be understood that the above preset gray value is a benchmark for the degree to which the gray value of each pixel of the image deviates from the average gray value, and its specific value can be set according to the actual situation.
In some embodiments, the data type of the elements in the mapping table is unsigned byte (unsigned char).
In step 106, the mapping value corresponding to the gray value of each pixel in the image is looked up from the mapping table.
Still taking a reference value of 100 as an example, if the gray value of a certain pixel is 80, the corresponding mapping value found from the mapping table is the second value (for example, 0); if the gray value of a certain pixel is 120, the corresponding mapping value found from the mapping table is the first value (for example, 255). By traversing every pixel in the image, the mapping value corresponding to the gray value of each pixel can be looked up directly from the mapping table.
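As a non-authoritative sketch of steps 104 and 106, the 256-entry mapping table can be built once and then applied with a plain table lookup, so that no subtraction, absolute value or comparison is performed per pixel. The preset gray value of 28 and the input file name below are illustrative assumptions, not values taken from this disclosure.
```python
import cv2
import numpy as np

def build_lut(avg_gray: float, preset_gray: float,
              first_value: int = 255, second_value: int = 0) -> np.ndarray:
    """256-entry table: gray values >= reference map to first_value, others to second_value."""
    reference = abs(avg_gray - preset_gray)           # the reference value described above
    lut = np.full(256, second_value, dtype=np.uint8)  # unsigned byte elements
    lut[int(np.ceil(reference)):] = first_value
    return lut

gray = cv2.imread("pole_piece.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
lut = build_lut(avg_gray=float(gray.mean()), preset_gray=28.0)
mapped = cv2.LUT(gray, lut)   # one table lookup per pixel, no per-pixel arithmetic
```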
In step 108, at least one suspected defect sub-image is segmented from the image according to the mapping value corresponding to the gray value of each pixel.
Here, the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value. In other words, the absolute value of the difference between the gray value of each pixel in each suspected defect sub-image and the average gray value of the entire image is greater than the preset gray value.
In step 110, the at least one suspected defect sub-image is input into a machine learning model to obtain a defect detection result.
It should be understood that the at least one suspected defect sub-image here is obtained in step 108. When only one suspected defect sub-image is obtained in step 108, this suspected defect sub-image is input into the machine learning model; when a plurality of suspected defect sub-images are obtained in step 108, the obtained plurality of suspected defect sub-images are input into the machine learning model.
It should also be understood that, by training the machine learning model with different sample images as inputs and different detection results as outputs, the machine learning model can obtain defect detection results from the suspected defect sub-images. For example, the machine learning model is trained with a sample defect image as the input and the defect type of the sample defect image as the output; for another example, the machine learning model is trained with a sample non-defect image as the input and the non-defect result as the output.
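Such supervised training on labelled sample sub-images could, purely as an illustrative sketch, look like the following; the optimizer, learning rate, epoch count and the use of cross-entropy (instead of the focal loss described further below) are assumptions, not details of this disclosure.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model: nn.Module, loader: DataLoader, epochs: int = 10, lr: float = 1e-3) -> None:
    """Train on (sub-image, label) pairs; label 0 = no defect, 1..N = defect types."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()   # the focal loss sketched later could be used instead
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```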
In some embodiments, the machine learning model includes, but is not limited to, a residual neural network model.
In some embodiments, the defect detection result is "no defect". In other embodiments, the defect detection result is "defective". In some embodiments, when the defect detection result is "defective", the defect detection result further includes a defect type. Taking a battery pole piece as an example, the defect types may include, but are not limited to: metal leakage, cracks, dark spots, bubbles, pits, unknown, and the like.
In the above embodiments, on the one hand, after the average gray value of the image is obtained, the mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel of the image, without performing mathematical calculations, which increases the defect detection speed. On the other hand, using the suspected defect sub-images as the input of the machine learning model, rather than the entire image of the object to be detected, also helps to increase the defect detection speed.
Fig. 2 is a schematic flowchart of a defect detection method according to another embodiment of the present application.
As shown in Fig. 2, the defect detection method includes step 102 to step 110, and step 102 includes step 1021 and step 1022. Only the implementation of some steps (for example, step 1021 and step 1022) is described in detail below; for the other steps, reference may be made to the description of the embodiment shown in Fig. 1.
In step 1021, the original gray value range of the image is obtained.
For example, the image of the object to be detected can be obtained by scanning and imaging the surface of the object to be detected, and the gray value range of this image is the original gray value range.
Taking the pole piece of a lithium battery as an example, by keeping the illumination intensity constant, the range of the original gray values of the image of the pole piece surface can be obtained. It can be understood that the normal area of the pole piece surface is smooth, and its surface texture and color are uniform; if the surface of the entire pole piece is evenly illuminated, the gray values of the normal area of the obtained image are similar.
In step 1022, contrast stretching is performed on the image to expand the original gray value range to the gray value range. Here, the average gray value of the image is the average gray value of the image after the contrast stretching.
In some embodiments, contrast stretching can be performed on the image in the following manner:
The original gray value I1(x, y) of each pixel in the image is converted into a gray value I2(x, y) according to the following formula:
I2(x, y) = (I1(x, y) - a) × (d - c) / (b - a) + c
In the above formula, a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range after contrast stretching, and d is the upper limit of the gray value range after contrast stretching. In some embodiments, c = 0 and d = 255, so that the contrast of the image can be increased as much as possible.
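Assuming the linear stretch given above, a minimal NumPy sketch of step 1022 is shown below; the clipping of values falling outside [a, b] is an added safeguard not stated in this disclosure.
```python
import numpy as np

def contrast_stretch(img: np.ndarray, a: float, b: float,
                     c: float = 0.0, d: float = 255.0) -> np.ndarray:
    """Linearly map the original gray range [a, b] onto [c, d]."""
    stretched = (img.astype(np.float32) - a) * (d - c) / (b - a) + c
    return np.clip(stretched, c, d).astype(np.uint8)   # clip is an extra safeguard

# e.g. a pole-piece image whose normal area spans roughly 75..105:
# stretched = contrast_stretch(gray, a=75, b=105)
```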
In step 104, a mapping table is constructed, the elements of which include a mapping value corresponding to each gray value within the gray value range of the image. In the mapping table, the mapping value corresponding to a gray value greater than or equal to the reference value is the first value, and the mapping value corresponding to a gray value smaller than the reference value is the second value.
In some embodiments, the first value is the upper limit d of the gray value range after contrast stretching, for example 255, and the second value is the lower limit c of the gray value range after contrast stretching, for example 0. In this way, the success rate of the subsequent segmentation of the suspected defect sub-images can be improved.
In step 106, the mapping value corresponding to the gray value of each pixel in the image is looked up from the mapping table.
In step 108, at least one suspected defect sub-image is segmented from the image according to the mapping value corresponding to the gray value of each pixel.
In step 110, the at least one suspected defect sub-image is input into a machine learning model to obtain a defect detection result.
In the above embodiments, the contrast stretching enlarges the gray difference between defect areas and non-defect areas. In this way, the robustness and accuracy of the segmentation of suspected defect sub-regions are improved, so that the robustness and accuracy of defect detection are improved while the defect detection speed is increased.
Fig. 3 is a schematic flowchart of a defect detection method according to yet another embodiment of the present application.
As shown in Fig. 3, the defect detection method includes step 102 to step 110, and step 108 includes step 1081 to step 1083. Only the implementation of some steps (for example, step 1081 to step 1083) is described in detail below; for the other steps, reference may be made to the description of the embodiment shown in Fig. 1.
In step 102, the average gray value of the image of the object to be detected is obtained.
In some implementations, step 102 may include step 1021 and step 1022 shown in Fig. 2.
In step 104, a mapping table is constructed, the elements of which include a mapping value corresponding to each gray value within the gray value range of the image. In the mapping table, the mapping value corresponding to a gray value greater than or equal to the reference value is the first value, and the mapping value corresponding to a gray value smaller than the reference value is the second value.
In step 106, the mapping value corresponding to the gray value of each pixel in the image is looked up from the mapping table.
In step 1081, a plurality of connected regions are segmented from the image according to the mapping value corresponding to the gray value of each pixel, the mapping value corresponding to the gray value of each pixel in each connected region being the first value.
For example, through connected-component analysis, a plurality of rectangular connected regions can be segmented from the image. In each connected region, the absolute value of the difference between the gray value of each pixel and the average gray value of the image is greater than the preset gray value. In other words, each connected region may be a defect region.
In step 1082, when two adjacent connected regions satisfy a preset condition, the two connected regions are merged into one suspected defect sub-image.
For convenience of description, the areas of the two adjacent connected regions are referred to as a first area and a second area respectively, and the area of the overlapping region of the two adjacent connected regions is referred to as a third area. The second area is smaller than or equal to the first area, that is, the areas of the two adjacent connected regions may or may not be equal. The above preset condition includes that the ratio of the third area to the first area is greater than a preset ratio.
In step 1083, when the two adjacent connected regions do not satisfy the preset condition, the two connected regions are determined as two suspected defect sub-images.
In some embodiments, the preset ratio is greater than 0.5 and less than 1; for example, the preset ratio is 0.8. In this way, connected regions whose overlap ratio is less than or equal to 0.5 will not be merged, so that the number of suspected defect sub-images can be reduced while the accuracy of the suspected defect sub-images is also taken into account.
Through step 1082 and step 1083, the connected regions that satisfy the preset condition are merged into one suspected defect sub-image, and each connected region that does not satisfy the preset condition is a suspected defect sub-image on its own. In this way, at least one suspected defect sub-image is obtained.
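A minimal sketch of steps 1081 to 1083 using OpenCV connected-component analysis is given below. The greedy single-pass merging and the use of the union bounding rectangle as the merged sub-image are illustrative assumptions; this disclosure only specifies the overlap-to-larger-area condition.
```python
import cv2
import numpy as np

def suspect_boxes(mapped: np.ndarray, first_value: int = 255,
                  preset_ratio: float = 0.8) -> list:
    """Return (x, y, w, h) rectangles of suspected defect sub-images."""
    mask = (mapped == first_value).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = [tuple(int(v) for v in stats[i, :4]) for i in range(1, num)]  # skip background

    def overlap_area(b1, b2):
        iw = max(0, min(b1[0] + b1[2], b2[0] + b2[2]) - max(b1[0], b2[0]))
        ih = max(0, min(b1[1] + b1[3], b2[1] + b2[3]) - max(b1[1], b2[1]))
        return iw * ih

    merged = []
    for box in sorted(boxes, key=lambda b: b[2] * b[3], reverse=True):  # larger boxes first
        for i, kept in enumerate(merged):
            larger_area = max(kept[2] * kept[3], box[2] * box[3])       # the "first area"
            if overlap_area(kept, box) / larger_area > preset_ratio:
                x1 = min(kept[0], box[0]); y1 = min(kept[1], box[1])
                x2 = max(kept[0] + kept[2], box[0] + box[2])
                y2 = max(kept[1] + kept[3], box[1] + box[3])
                merged[i] = (x1, y1, x2 - x1, y2 - y1)                   # union bounding rectangle
                break
        else:
            merged.append(box)
    return merged
```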
In step 110, the at least one suspected defect sub-image is input into a machine learning model to obtain a defect detection result.
In the above embodiments, during the segmentation of the suspected defect sub-images, when the ratio of the area of the overlapping region of two adjacent connected regions to the area of the relatively larger connected region is greater than the preset ratio, the two connected regions are merged into one suspected defect sub-image; otherwise, the two connected regions are taken as two suspected defect sub-images. In this way, the number of suspected defect sub-images can be reduced and the speed at which the machine learning model obtains the defect detection result can be increased, thereby further increasing the defect detection speed.
In some embodiments, the preprocessing time of the steps before step 110 is not greater than 80 ms.
When defects are detected in the manner of the above embodiments, the inventors tried to find a solution that improves the defect detection accuracy while increasing the defect detection speed. The inventors noticed that, when the machine learning model includes a residual neural network model, the defect detection accuracy changes as the total number of convolutional layers and fully connected layers in the residual neural network model is adjusted.
In some embodiments, the total number of convolutional layers and fully connected layers in the residual neural network model is 14. In this way, both the defect detection speed and the defect detection accuracy can be improved.
The inventors also noticed that, when the gray values of the non-defect areas, other than defects, in the image of the object to be detected vary within different ranges, the accuracy of the defect detection results obtained by detecting defects with the residual neural network model having a total of 14 convolutional layers and fully connected layers differs.
In some embodiments, the difference between the maximum original gray value and the minimum original gray value of the non-defect areas, other than defects, in the image of the object to be detected ranges from 35 to 50. In this case, the defect detection results obtained by detecting defects with the residual neural network model having a total of 14 convolutional layers and fully connected layers are more accurate.
In some embodiments, the difference between the maximum original gray value of the non-defect areas and the minimum original gray value of the non-defect areas is 40. For example, the gray value range of the non-defect areas is 75 to 105, that is, the maximum original gray value of the non-defect areas is 105 and the minimum original gray value of the non-defect areas is 75. In this case, the defect detection results obtained by detecting defects with the residual neural network model having a total of 14 convolutional layers and fully connected layers are still more accurate.
Fig. 4 is a schematic diagram of a residual neural network model according to an embodiment of the present application.
As shown in Fig. 4, the residual neural network model includes three residual network units (ResNet Units) located between a max pooling layer and an average pooling layer, each residual network unit includes two residual blocks, and each residual block includes two convolutional layers. In addition, the residual neural network model further includes a first convolutional layer before the max pooling layer and a fully connected layer after the average pooling layer.
In some embodiments, the size of the convolution kernels of the first convolutional layer is 7*7, the number of convolution kernels is 64, and the size of the image becomes 1/2 of the original after passing through the first convolutional layer. In some embodiments, the size of the image becomes 1/2 of the original after the max pooling layer. In some embodiments, the size of the convolution kernels of each convolutional layer in each residual network unit is 3*3, the number of convolution kernels is 256, and the size of the image remains unchanged after passing through each convolutional layer in the residual network units.
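A PyTorch sketch consistent with this description is given below; it is not the disclosed implementation. The batch normalization layers, the 1x1 projection where the channel count changes from 64 to 256 (neither counted among the 14 convolutional and fully connected layers), the single-channel input and the choice of 7 output classes are all assumptions.
```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a shortcut (1x1 projection only when channels change)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1, bias=False) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class ResNet14(nn.Module):
    """1 stem conv + 3 units x 2 blocks x 2 convs + 1 fully connected layer = 14 layers."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False),  # 7x7, 64 kernels, halves size
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))                   # halves size again
        self.units = nn.Sequential(
            ResidualBlock(64, 256),
            *[ResidualBlock(256, 256) for _ in range(5)])            # 3 units x 2 blocks in total
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, num_classes))

    def forward(self, x):
        return self.head(self.units(self.stem(x)))
```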
In some embodiments, the residual neural network model uses the following loss function during training:
focal loss = -α(1 - y′)^γ · log(y′)
In the above formula, focal loss is the loss function, y′ is the probability of a certain category, α is the weight of the category, and γ is the modulation factor.
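For reference, a small PyTorch sketch of this loss is shown below; γ = 2, the mean reduction and supplying the per-class weights α as a tensor are illustrative assumptions.
```python
import torch

def focal_loss(probs: torch.Tensor, targets: torch.Tensor,
               alpha: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """focal loss = -alpha * (1 - y')**gamma * log(y'), y' = probability of the true class."""
    y_prime = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # (N,) true-class probabilities
    weight = alpha[targets]                                      # (N,) per-class weights
    loss = -weight * (1.0 - y_prime) ** gamma * torch.log(y_prime.clamp_min(1e-8))
    return loss.mean()
```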
The residual neural network model shown in Fig. 4 can be realized by removing one residual network unit from the ResNet18 model. The residual neural network model shown in Fig. 4 may also be referred to as a ResNet14 model.
In some embodiments, compared with the ResNet18 model, the ResNet14 model is 75% smaller in size, 25% faster in defect detection and 5% more accurate in defect detection. In some embodiments, the inference time of the ResNet14 model is not greater than 20 ms. In addition, using the ResNet14 model helps to classify low-probability (0.1%) defects, reducing the possibility of missed detection.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts of the embodiments, reference may be made to each other. Since the device embodiments substantially correspond to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiments.
Fig. 5 is a schematic diagram of a defect detection device according to an embodiment of the present application.
As shown in Fig. 5, the defect detection device includes an acquisition module 501, a construction module 502, a lookup module 503, a segmentation module 504 and an input module 505.
The acquisition module 501 is configured to obtain the average gray value of an image of an object to be detected.
The construction module 502 is configured to construct a mapping table, the elements of which include a mapping value corresponding to each gray value within the gray value range of the image. Here, the absolute value of the difference between the average gray value and a preset gray value is a reference value, the mapping value corresponding to a gray value greater than or equal to the reference value is a first value, and the mapping value corresponding to a gray value smaller than the reference value is a second value.
The lookup module 503 is configured to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image.
The segmentation module 504 is configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel. Here, the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value.
The input module 505 is configured to input the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
In the above embodiments, after the average gray value of the image is obtained, the mapping table is constructed according to the average gray value of the image and the gray value range of the image. Subsequently, it is only necessary to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel of the image, without performing mathematical calculations, which greatly increases the defect detection speed.
In some embodiments, the acquisition module 501 is configured to obtain the average gray value of the image of the object to be detected in the manner described above. In some embodiments, the segmentation module 504 is configured to segment at least one suspected defect sub-image from the image in the manner described above.
Fig. 6 is a schematic diagram of a defect detection device according to another embodiment of the present application.
As shown in Fig. 6, the defect detection device 600 includes a memory 601 and a processor 602 coupled to the memory 601; the processor 602 is configured to execute, based on instructions stored in the memory 601, the method of any one of the foregoing embodiments.
The memory 601 may include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory may store, for example, an operating system, application programs, a boot loader (Boot Loader) and other programs.
The defect detection device 600 may further include an input/output interface 603, a network interface 604, a storage interface 605, and the like. These interfaces 603, 604 and 605, as well as the memory 601 and the processor 602, may be connected, for example, via a bus 606. The input/output interface 603 provides a connection interface for input/output devices such as a display, a mouse, a keyboard and a touch screen. The network interface 604 provides a connection interface for various networked devices. The storage interface 605 provides a connection interface for external storage devices such as SD cards and USB flash drives.
In some embodiments, the defect detection device is further configured to upload the defect detection result to a data platform and/or to upload suspected defect sub-images whose defect detection result is defective to a defect image library. When the machine learning model is subsequently trained, the images in the image library can be used as training samples, thereby improving the accuracy with which the machine learning model subsequently detects defects.
Fig. 7 is a schematic diagram of a defect detection system according to an embodiment of the present application.
As shown in Fig. 7, the defect detection system includes the defect detection device 701 of any one of the above embodiments and an imaging device 702.
The imaging device 702 is configured to scan the object to be detected to obtain an image of the object to be detected. In some embodiments, the imaging device 702 is a line-scan camera. The defect detection device 701 acquires the image of the object to be detected from the imaging device 702 and performs defect detection in the manner described above. After the defect detection result is obtained, a marking machine can be used to mark the defects of the object to be detected.
An embodiment of the present application further provides a computer-readable storage medium including computer program instructions, wherein the method of any one of the above embodiments is implemented when the computer program instructions are executed by a processor.
An embodiment of the present application further provides a computer program product including a computer program, wherein the method of any one of the above embodiments is implemented when the computer program is executed by a processor.
It can be understood that, when the size of the image is large, the effect of the above embodiments in increasing the defect detection speed is more obvious. For example, the size of the image is 16K.
In mass-production tests on a hardware platform (for example, a 16-core CPU i9-9900K and an NVIDIA RTX5000 GPU), for 16K images, the processing time of the defect detection process using the defect detection method of the above embodiments is not greater than 100 ms, and no missed detection occurred.
The embodiments of the present application have thus been described in detail. Some details well known in the art are not described in order to avoid obscuring the concept of the present application. Based on the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed herein.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the present application has been described with reference to preferred embodiments, various improvements can be made thereto and components thereof can be replaced with equivalents without departing from the scope of the present application. In particular, as long as there is no structural conflict, the technical features mentioned in the respective embodiments can be combined in any manner. The present application is not limited to the specific embodiments disclosed herein, but includes all technical solutions falling within the scope of the claims.

Claims (21)

  1. A defect detection method, comprising:
    obtaining the average gray value of an image of an object to be detected;
    constructing a mapping table, the elements of the mapping table comprising a mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value;
    looking up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image;
    segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and
    inputting the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
  2. The method according to claim 1, wherein obtaining the average gray value of the image of the object to be detected comprises:
    obtaining the original gray value range of the image; and
    performing contrast stretching on the image to expand the original gray value range to the gray value range;
    wherein the average gray value is the average gray value of the image after the contrast stretching.
  3. The method according to claim 2, wherein performing contrast stretching on the image comprises:
    converting the original gray value I1(x, y) of each pixel in the image into a gray value I2(x, y) according to the following formula:
    I2(x, y) = (I1(x, y) - a) × (d - c) / (b - a) + c
    wherein a is the lower limit of the original gray value range, b is the upper limit of the original gray value range, c is the lower limit of the gray value range, and d is the upper limit of the gray value range.
  4. The method according to claim 3, wherein the first value is d and the second value is c.
  5. The method according to claim 3 or 4, wherein c = 0 and d = 255.
  6. The method according to any one of claims 1-5, wherein the machine learning model comprises a residual neural network model, and the total number of convolutional layers and fully connected layers in the residual neural network model is 14.
  7. The method according to claim 6, wherein the difference between the maximum original gray value and the minimum original gray value of the non-defect areas, other than defects, in the image ranges from 35 to 50.
  8. The method according to claim 7, wherein the difference between the maximum original gray value and the minimum original gray value is 40.
  9. The method according to claim 8, wherein the maximum original gray value is 105 and the minimum original gray value is 75.
  10. The method according to any one of claims 1-9, wherein segmenting at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel comprises:
    segmenting a plurality of connected regions from the image according to the mapping value corresponding to the gray value of each pixel, the mapping value corresponding to the gray value of each pixel in each connected region being the first value;
    when two adjacent connected regions satisfy a preset condition, merging the two connected regions into one suspected defect sub-image, wherein the areas of the two connected regions are respectively a first area and a second area smaller than or equal to the first area, the area of the overlapping region of the two connected regions is a third area, and the preset condition comprises that the ratio of the third area to the first area is greater than a preset ratio; and
    when the two connected regions do not satisfy the preset condition, determining the two connected regions as two suspected defect sub-images.
  11. The method according to claim 10, wherein the preset ratio is greater than 0.5 and less than 1.
  12. The method according to claim 11, wherein the preset ratio is 0.8.
  13. The method according to any one of claims 1-12, wherein the data type of the elements in the mapping table is unsigned byte.
  14. The method according to any one of claims 1-13, wherein the defect detection result comprises a defect type.
  15. The method according to any one of claims 1-14, wherein the object to be detected comprises a pole piece of a battery.
  16. The method according to claim 15, wherein the battery comprises a lithium battery.
  17. A defect detection device, comprising:
    an acquisition module configured to obtain the average gray value of an image of an object to be detected;
    a construction module configured to construct a mapping table, the elements of the mapping table comprising a mapping value corresponding to each gray value within the gray value range of the image, wherein the mapping value corresponding to a gray value greater than or equal to a reference value is a first value, the mapping value corresponding to a gray value smaller than the reference value is a second value, and the reference value is the absolute value of the difference between the average gray value and a preset gray value;
    a lookup module configured to look up, from the mapping table, the mapping value corresponding to the gray value of each pixel in the image;
    a segmentation module configured to segment at least one suspected defect sub-image from the image according to the mapping value corresponding to the gray value of each pixel, wherein the mapping value corresponding to the gray value of each pixel in each suspected defect sub-image is the first value; and
    an input module configured to input the at least one suspected defect sub-image into a machine learning model to obtain a defect detection result.
  18. A defect detection device, comprising:
    a memory; and
    a processor coupled to the memory and configured to execute, based on instructions stored in the memory, the defect detection method according to any one of claims 1-16.
  19. A defect detection system, comprising:
    the defect detection device according to claim 17 or 18; and
    an imaging device configured to scan the object to be detected to obtain the image.
  20. A computer-readable storage medium comprising computer program instructions, wherein the defect detection method according to any one of claims 1-16 is implemented when the computer program instructions are executed by a processor.
  21. A computer program product comprising a computer program, wherein the defect detection method according to any one of claims 1-16 is implemented when the computer program is executed by a processor.
PCT/CN2021/128893 2021-11-05 2021-11-05 缺陷检测方法、装置和系统 WO2023077404A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP21962919.3A EP4280153A4 (en) 2021-11-05 2021-11-05 METHOD, DEVICE AND SYSTEM FOR ERROR DETECTION
JP2023552290A JP7569479B2 (ja) 2021-11-05 2021-11-05 欠陥検出方法、装置及びシステム
CN202180053074.XA CN116420159A (zh) 2021-11-05 2021-11-05 缺陷检测方法、装置和系统
PCT/CN2021/128893 WO2023077404A1 (zh) 2021-11-05 2021-11-05 缺陷检测方法、装置和系统
KR1020237025378A KR20230124713A (ko) 2021-11-05 2021-11-05 결함 검출 방법, 장치 및 시스템
US18/465,557 US20230419472A1 (en) 2021-11-05 2023-09-12 Defect detection method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/128893 WO2023077404A1 (zh) 2021-11-05 2021-11-05 缺陷检测方法、装置和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/465,557 Continuation US20230419472A1 (en) 2021-11-05 2023-09-12 Defect detection method, device and system

Publications (1)

Publication Number Publication Date
WO2023077404A1 true WO2023077404A1 (zh) 2023-05-11

Family

ID=86240371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128893 WO2023077404A1 (zh) 2021-11-05 2021-11-05 缺陷检测方法、装置和系统

Country Status (6)

Country Link
US (1) US20230419472A1 (zh)
EP (1) EP4280153A4 (zh)
JP (1) JP7569479B2 (zh)
KR (1) KR20230124713A (zh)
CN (1) CN116420159A (zh)
WO (1) WO2023077404A1 (zh)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342589A (zh) * 2023-05-23 2023-06-27 之江实验室 一种跨视场划痕缺陷连续性检测方法和系统
CN116402827A (zh) * 2023-06-09 2023-07-07 山东华禹威达机电科技有限公司 基于图像处理的采煤机用电缆夹板缺陷检测方法及装置
CN116703890A (zh) * 2023-07-28 2023-09-05 上海瑞浦青创新能源有限公司 极耳缺陷的检测方法和系统
CN116721106A (zh) * 2023-08-11 2023-09-08 山东明达圣昌铝业集团有限公司 一种基于图像处理的型材瑕疵视觉检测方法
CN116843678A (zh) * 2023-08-28 2023-10-03 青岛冠宝林活性炭有限公司 一种硬碳电极生产质量检测方法
CN116883408A (zh) * 2023-09-08 2023-10-13 威海坤科流量仪表股份有限公司 基于人工智能的积算仪壳体缺陷检测方法
CN116984628A (zh) * 2023-09-28 2023-11-03 西安空天机电智能制造有限公司 一种基于激光特征融合成像的铺粉缺陷检测方法
CN117078666A (zh) * 2023-10-13 2023-11-17 东声(苏州)智能科技有限公司 二维和三维结合的缺陷检测方法、装置、介质和设备
CN117078667A (zh) * 2023-10-13 2023-11-17 山东克莱蒙特新材料科技有限公司 基于机器视觉的矿物铸件检测方法
CN117095009A (zh) * 2023-10-20 2023-11-21 山东绿康装饰材料有限公司 一种基于图像处理的pvc装饰板缺陷检测方法
CN117115153A (zh) * 2023-10-23 2023-11-24 威海坤科流量仪表股份有限公司 基于视觉辅助的印制线路板质量智能检测方法
CN117152180A (zh) * 2023-10-31 2023-12-01 山东克莱蒙特新材料科技有限公司 基于人工智能的矿物铸件缺陷检测方法
CN117197141A (zh) * 2023-11-07 2023-12-08 山东远盾网络技术股份有限公司 一种汽车零部件表面缺陷检测方法
CN117291937A (zh) * 2023-11-27 2023-12-26 山东嘉达装配式建筑科技有限责任公司 基于图像特征分析的自动抹灰效果视觉检测系统
CN117474913A (zh) * 2023-12-27 2024-01-30 江西省兆驰光电有限公司 一种针痕检测机台判定方法、系统、存储介质及计算机
CN117649412A (zh) * 2024-01-30 2024-03-05 山东海天七彩建材有限公司 一种铝材表面质量的检测方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116609493B (zh) * 2023-07-21 2023-09-22 宁德时代新能源科技股份有限公司 压痕检测方法、叠片电芯制造方法、装置和电子设备
CN117237442B (zh) * 2023-11-16 2024-04-09 宁德时代新能源科技股份有限公司 连通域定位方法、图形处理器、设备和生产线
CN117876367B (zh) * 2024-03-11 2024-06-07 惠州威尔高电子有限公司 一种用于电路板印刷的曝光优化方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001077165A (ja) * 1999-09-06 2001-03-23 Hitachi Ltd 欠陥検査方法及びその装置並びに欠陥解析方法及びその装置
CN103499585A (zh) * 2013-10-22 2014-01-08 常州工学院 基于机器视觉的非连续性锂电池薄膜缺陷检测方法及其装置
CN110288566A (zh) * 2019-05-23 2019-09-27 北京中科晶上科技股份有限公司 一种目标缺陷提取方法
CN113538603A (zh) * 2021-09-16 2021-10-22 深圳市光明顶照明科技有限公司 一种基于阵列产品的光学检测方法、系统和可读存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7447374B1 (en) * 2003-01-06 2008-11-04 Apple Inc. Method and apparatus for an intuitive digital image processing system that enhances digital images
SG139602A1 (en) * 2006-08-08 2008-02-29 St Microelectronics Asia Automatic contrast enhancement
CN109472783B (zh) * 2018-10-31 2021-10-01 湘潭大学 一种泡沫镍表面缺陷提取及分类方法
JP2020187657A (ja) 2019-05-16 2020-11-19 株式会社キーエンス 画像検査装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001077165A (ja) * 1999-09-06 2001-03-23 Hitachi Ltd 欠陥検査方法及びその装置並びに欠陥解析方法及びその装置
CN103499585A (zh) * 2013-10-22 2014-01-08 常州工学院 基于机器视觉的非连续性锂电池薄膜缺陷检测方法及其装置
CN110288566A (zh) * 2019-05-23 2019-09-27 北京中科晶上科技股份有限公司 一种目标缺陷提取方法
CN113538603A (zh) * 2021-09-16 2021-10-22 深圳市光明顶照明科技有限公司 一种基于阵列产品的光学检测方法、系统和可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4280153A4 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342589B (zh) * 2023-05-23 2023-08-22 之江实验室 一种跨视场划痕缺陷连续性检测方法和系统
CN116342589A (zh) * 2023-05-23 2023-06-27 之江实验室 一种跨视场划痕缺陷连续性检测方法和系统
CN116402827A (zh) * 2023-06-09 2023-07-07 山东华禹威达机电科技有限公司 基于图像处理的采煤机用电缆夹板缺陷检测方法及装置
CN116402827B (zh) * 2023-06-09 2023-08-11 山东华禹威达机电科技有限公司 基于图像处理的采煤机用电缆夹板缺陷检测方法及装置
CN116703890A (zh) * 2023-07-28 2023-09-05 上海瑞浦青创新能源有限公司 极耳缺陷的检测方法和系统
CN116703890B (zh) * 2023-07-28 2023-12-19 上海瑞浦青创新能源有限公司 极耳缺陷的检测方法和系统
CN116721106A (zh) * 2023-08-11 2023-09-08 山东明达圣昌铝业集团有限公司 一种基于图像处理的型材瑕疵视觉检测方法
CN116721106B (zh) * 2023-08-11 2023-10-20 山东明达圣昌铝业集团有限公司 一种基于图像处理的型材瑕疵视觉检测方法
CN116843678B (zh) * 2023-08-28 2023-11-21 青岛冠宝林活性炭有限公司 一种硬碳电极生产质量检测方法
CN116843678A (zh) * 2023-08-28 2023-10-03 青岛冠宝林活性炭有限公司 一种硬碳电极生产质量检测方法
CN116883408A (zh) * 2023-09-08 2023-10-13 威海坤科流量仪表股份有限公司 基于人工智能的积算仪壳体缺陷检测方法
CN116883408B (zh) * 2023-09-08 2023-11-07 威海坤科流量仪表股份有限公司 基于人工智能的积算仪壳体缺陷检测方法
CN116984628B (zh) * 2023-09-28 2023-12-29 西安空天机电智能制造有限公司 一种基于激光特征融合成像的铺粉缺陷检测方法
CN116984628A (zh) * 2023-09-28 2023-11-03 西安空天机电智能制造有限公司 一种基于激光特征融合成像的铺粉缺陷检测方法
CN117078667A (zh) * 2023-10-13 2023-11-17 山东克莱蒙特新材料科技有限公司 基于机器视觉的矿物铸件检测方法
CN117078666B (zh) * 2023-10-13 2024-04-09 东声(苏州)智能科技有限公司 二维和三维结合的缺陷检测方法、装置、介质和设备
CN117078666A (zh) * 2023-10-13 2023-11-17 东声(苏州)智能科技有限公司 二维和三维结合的缺陷检测方法、装置、介质和设备
CN117078667B (zh) * 2023-10-13 2024-01-09 山东克莱蒙特新材料科技有限公司 基于机器视觉的矿物铸件检测方法
CN117095009A (zh) * 2023-10-20 2023-11-21 山东绿康装饰材料有限公司 一种基于图像处理的pvc装饰板缺陷检测方法
CN117095009B (zh) * 2023-10-20 2024-01-12 山东绿康装饰材料有限公司 一种基于图像处理的pvc装饰板缺陷检测方法
CN117115153B (zh) * 2023-10-23 2024-02-02 威海坤科流量仪表股份有限公司 基于视觉辅助的印制线路板质量智能检测方法
CN117115153A (zh) * 2023-10-23 2023-11-24 威海坤科流量仪表股份有限公司 基于视觉辅助的印制线路板质量智能检测方法
CN117152180B (zh) * 2023-10-31 2024-01-26 山东克莱蒙特新材料科技有限公司 基于人工智能的矿物铸件缺陷检测方法
CN117152180A (zh) * 2023-10-31 2023-12-01 山东克莱蒙特新材料科技有限公司 基于人工智能的矿物铸件缺陷检测方法
CN117197141A (zh) * 2023-11-07 2023-12-08 山东远盾网络技术股份有限公司 一种汽车零部件表面缺陷检测方法
CN117197141B (zh) * 2023-11-07 2024-01-26 山东远盾网络技术股份有限公司 一种汽车零部件表面缺陷检测方法
CN117291937A (zh) * 2023-11-27 2023-12-26 山东嘉达装配式建筑科技有限责任公司 基于图像特征分析的自动抹灰效果视觉检测系统
CN117291937B (zh) * 2023-11-27 2024-03-05 山东嘉达装配式建筑科技有限责任公司 基于图像特征分析的自动抹灰效果视觉检测系统
CN117474913A (zh) * 2023-12-27 2024-01-30 江西省兆驰光电有限公司 一种针痕检测机台判定方法、系统、存储介质及计算机
CN117649412A (zh) * 2024-01-30 2024-03-05 山东海天七彩建材有限公司 一种铝材表面质量的检测方法
CN117649412B (zh) * 2024-01-30 2024-04-09 山东海天七彩建材有限公司 一种铝材表面质量的检测方法

Also Published As

Publication number Publication date
EP4280153A1 (en) 2023-11-22
CN116420159A (zh) 2023-07-11
JP7569479B2 (ja) 2024-10-18
JP2024509411A (ja) 2024-03-01
EP4280153A4 (en) 2024-04-24
KR20230124713A (ko) 2023-08-25
US20230419472A1 (en) 2023-12-28

Similar Documents

Publication Publication Date Title
WO2023077404A1 (zh) 缺陷检测方法、装置和系统
CN106875381B (zh) 一种基于深度学习的手机外壳缺陷检测方法
CN110148130B (zh) 用于检测零件缺陷的方法和装置
KR102166458B1 (ko) 인공신경망 기반의 영상 분할을 이용한 불량 검출 방법 및 불량 검출 장치
CN113592845A (zh) 一种电池涂布的缺陷检测方法及装置、存储介质
JP2017049974A (ja) 識別器生成装置、良否判定方法、およびプログラム
JP2011214903A (ja) 外観検査装置、外観検査用識別器の生成装置及び外観検査用識別器生成方法ならびに外観検査用識別器生成用コンピュータプログラム
WO2024002187A1 (zh) 缺陷检测方法、缺陷检测设备及存储介质
Xu et al. Deep learning algorithm for real-time automatic crack detection, segmentation, qualification
Peng et al. Non-uniform illumination image enhancement for surface damage detection of wind turbine blades
US12079310B2 (en) Defect classification apparatus, method and program
TW201512649A (zh) 偵測晶片影像瑕疵方法及其系統與電腦程式產品
CN111369523A (zh) 显微图像中细胞堆叠的检测方法、系统、设备及介质
CN115775236A (zh) 基于多尺度特征融合的表面微小缺陷视觉检测方法及系统
CN109584206B (zh) 零件表面瑕疵检测中神经网络的训练样本的合成方法
CN113609984A (zh) 一种指针式仪表读数识别方法、装置及电子设备
Fang et al. Automatic zipper tape defect detection using two-stage multi-scale convolutional networks
JP2021143884A (ja) 検査装置、検査方法、プログラム、学習装置、学習方法、および学習済みデータセット
Huang et al. The detection of defects in ceramic cell phone backplane with embedded system
IZUMI et al. Low-cost training data creation for crack detection using an attention mechanism in deep learning models
CN114841992A (zh) 基于循环生成对抗网络和结构相似性的缺陷检测方法
Zhao et al. MSC-AD: A Multiscene Unsupervised Anomaly Detection Dataset for Small Defect Detection of Casting Surface
CN117433966A (zh) 一种粉磨颗粒粒径非接触测量方法及系统
JP2021064215A (ja) 表面性状検査装置及び表面性状検査方法
CN117173154A (zh) 玻璃瓶的在线图像检测系统及其方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21962919

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20237025378

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021962919

Country of ref document: EP

Effective date: 20230816

WWE Wipo information: entry into national phase

Ref document number: 2023552290

Country of ref document: JP