CN118521870A - Defect detection model construction method, defect detection method, device and medium - Google Patents
- Publication number
- CN118521870A (application CN202410823941.9A)
- Authority
- CN
- China
- Prior art keywords
- defect detection
- detection model
- defect
- sample set
- image sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30141—Printed circuit board [PCB]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses a defect detection model construction method, a defect detection method, a device and a medium, belonging to the technical field of defect detection and comprising the following steps: acquiring an image sample set containing defect information about a PCB, and expanding the image sample set to obtain an expanded image sample set; embedding an SEAttention attention mechanism in the YOLOv8n Backbone network, and replacing some of the C2f modules in the Neck network and Backbone network of YOLOv8n with C3 modules to generate an improved YOLOv8n defect detection model; and taking the expanded image sample set as the input of the defect detection model and training with a Wise-IoU loss function to obtain a target defect detection model. This solves the problem of poor detection performance for small-size defects on PCB surfaces.
Description
Technical Field
The present invention relates to the field of defect detection technologies, and in particular, to a defect detection model construction method, a defect detection method, an apparatus, and a medium.
Background
A PCB (Printed Circuit Board) is a core component of electronic devices and directly affects the overall performance and reliability of the electronic product. Early detection of PCB surface defects relied mainly on manual visual inspection and traditional computer vision algorithms. However, these methods are inefficient, limited by human subjectivity and experience, difficult to keep accurate, and ill-suited to the detection demands of mass production and micro defects. At the same time, relying on specific fixtures and programming also increases the cost and complexity of testing.
In recent years, with the rapid development of deep learning technology, remarkable results have been achieved, particularly in image recognition and object detection. Deep learning models can automatically extract useful information by learning features from large amounts of data, enabling accurate identification of targets. Applying deep learning to PCB surface defect detection therefore promises to overcome the problems of traditional methods and to improve detection accuracy and efficiency. However, PCB surface features are complex, most defects are small, and small defects are not clearly separated from the background, so existing methods detect small-size defects poorly.
Disclosure of Invention
The invention aims to provide a defect detection model construction method, a defect detection method, a device and a medium that improve the detection of small-size defects on PCBs.
To achieve the above object, in a first aspect, an embodiment of the present invention provides a method for constructing a defect detection model, including:
acquiring an image sample set containing defect information about a PCB, and expanding the image sample set to obtain an expanded image sample set;
embedding an SEAttention attention mechanism in the YOLOv8n Backbone network, and replacing some of the C2f modules in the Neck network and Backbone network of YOLOv8n with C3 modules to generate an improved YOLOv8n defect detection model;
and taking the expanded image sample set as the input of the defect detection model and training with a Wise-IoU loss function to obtain a target defect detection model.
In an embodiment, embedding the SEAttention attention mechanism in the YOLOv8n Backbone network includes:
embedding the SEAttention attention mechanism after the SPPF module in the YOLOv8n Backbone network.
In an embodiment, replacing some of the C2f modules in the Neck network and Backbone network of YOLOv8n with C3 modules includes:
replacing the C2f modules of the fifth and ninth layers in the Backbone network with C3 modules;
replacing the top-level C2f module in the FPN part of the Neck network with a C3 module;
and replacing the bottom-level C2f module in the PAN part of the Neck network with a C3 module.
In an embodiment, before generating the improved YOLOv8n defect detection model, the construction method further includes:
replacing the NMS algorithm employed in YOLOv8n with the Soft-NMS algorithm.
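As a reference point, Soft-NMS decays the scores of overlapping boxes instead of discarding them outright, which helps retain closely spaced small defects. A minimal Gaussian Soft-NMS sketch in numpy (the `sigma` and score-threshold defaults are illustrative, not taken from the patent):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay overlapping scores instead of dropping boxes."""
    scores = scores.astype(float).copy()
    keep, idxs = [], list(range(len(boxes)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:   # all remaining scores are below threshold
            break
        keep.append(best)
        idxs.remove(best)
        if idxs:
            rest = np.array(idxs)
            overlaps = iou(boxes[best].astype(float), boxes[rest].astype(float))
            scores[rest] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian penalty
    return keep
```

With hard NMS, the second of two heavily overlapping boxes would be suppressed entirely; here its score is merely reduced, so it can still survive at a lower rank.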
In an embodiment, expanding the image sample set to obtain an expanded image sample set includes:
performing image rotation, flipping, scaling, brightness and contrast adjustment, color change, image segmentation, noise injection, and image synthesis on the image samples in the image sample set using data enhancement techniques to obtain the expanded image sample set.
In an embodiment, before taking the expanded image sample set as the input of the defect detection model, the construction method further includes:
labeling the image samples in the expanded image sample set using the LabelImg tool to generate a ground-truth bounding box for each defect position and the defect category corresponding to that bounding box; wherein the defect categories include short circuits, open circuits, missing holes, excess copper, or pins.
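For reference, LabelImg in YOLO mode writes one line per ground-truth box in the form `class x_center y_center width height`, with all coordinates normalized by the image size. A minimal converter from corner coordinates (the helper name and formatting are illustrative):

```python
def to_yolo_label(class_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a corner-format ground-truth box to a normalized YOLO label line."""
    xc = (x1 + x2) / 2.0 / img_w   # normalized box center x
    yc = (y1 + y2) / 2.0 / img_h   # normalized box center y
    w = (x2 - x1) / img_w          # normalized width
    h = (y2 - y1) / img_h          # normalized height
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

The class index would map to the defect categories listed above (e.g. 0 = short circuit, 1 = open circuit, ...), though the patent does not fix a particular ordering.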
In an embodiment, taking the expanded image sample set as the input of the defect detection model and training with a Wise-IoU loss function to obtain a target defect detection model includes:
taking the expanded image sample set as the input of the defect detection model to generate predicted bounding boxes;
generating a similarity between the ground-truth bounding box and the predicted bounding box according to the Wise-IoU loss function;
and generating gradient values of the defect detection model according to the similarity, and updating and training the defect detection model according to the gradient values to obtain the target defect detection model.
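As a rough sketch of the similarity computation described in these steps: plain IoU combined with a WIoU-v1-style distance-based focusing factor. This is a simplified stand-in, not the full Wise-IoU formulation (which, among other refinements, detaches the enclosing-box terms from the gradient):

```python
import math

def iou_xyxy(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] corner format."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def wise_iou_v1(pred, gt):
    """Simplified Wise-IoU-v1-style loss: distance focusing factor times IoU loss."""
    l_iou = 1.0 - iou_xyxy(pred, gt)
    # center distance between prediction and ground truth
    dx = (pred[0] + pred[2]) / 2 - (gt[0] + gt[2]) / 2
    dy = (pred[1] + pred[3]) / 2 - (gt[1] + gt[3]) / 2
    # smallest enclosing box dimensions (treated as constants in the original method)
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r = math.exp((dx * dx + dy * dy) / (wg * wg + hg * hg + 1e-9))
    return r * l_iou
```

A perfectly matched prediction gives (near) zero loss, and the loss grows as the predicted box drifts from the ground truth, which is the gradient signal the training step above relies on.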
In a second aspect, an embodiment of the present invention provides a defect detection method, where the detection method includes:
acquiring a PCB image to be detected;
inputting the PCB image to be detected into a target defect detection model obtained by the above defect detection model construction method, to obtain the defect positions, defect categories, and defect confidences of the PCB image to be detected.
In a third aspect, an embodiment of the present invention provides an electronic device, including
a memory,
a processor, and
a computer program stored in the memory and executable on the processor, the processor implementing the above defect detection model construction method and/or the above defect detection method when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the above-described defect detection model construction method and/or the above-described defect detection method.
Compared with the prior art, the defect detection model construction method, defect detection method, device, and medium of the embodiments of the invention have the following beneficial effects:
According to the embodiments of the invention, the improved YOLOv8n defect detection model can dynamically adjust the importance of each channel in the feature map, enhancing the feature map's representational capability, improving the model's extraction and representation of complex features, and strengthening accurate defect detection. In addition, the model expands the network's ability to capture spatial features, so that complex and fine defect structures in the image are better understood and analyzed, and various types of defects, particularly small-size defects, are located and identified more effectively.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a schematic flow chart of a method for constructing a defect detection model according to an embodiment of the present invention;
FIG. 2 is a diagram of the YOLOv8n base architecture;
FIG. 3 is a schematic diagram of the improved YOLOv8n defect detection model;
FIG. 4 is a graph of F1-confidence curve verification effects of a target defect detection model corresponding to a specific application scenario;
FIG. 5 is a recall confidence curve verification effect diagram of a target defect detection model corresponding to a specific application scenario;
FIG. 6 is a graph of accuracy-recall verification effects for a target defect detection model corresponding to a particular application scenario;
FIG. 7 is a confusion matrix effect diagram of a target defect detection model corresponding to a specific application scenario;
fig. 8 is a schematic diagram of a detection result of a PCB image to be detected corresponding to a specific application scenario;
FIG. 9 is a schematic flow chart of a defect detection method according to an embodiment of the present invention;
fig. 10 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
It is apparent that the drawings in the following description are only some examples or embodiments of the present invention, and those of ordinary skill in the art may apply the present invention to other similar situations according to these drawings without inventive effort. Moreover, while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and should not be construed as inventive effort.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the invention can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. The terms "a," "an," "the," and similar referents in the context of the invention are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present invention are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
Early detection of PCB surface defects relied mainly on manual visual inspection and traditional computer vision algorithms; however, these methods are inefficient, limited by human subjectivity and experience, difficult to keep accurate, and ill-suited to the detection demands of large-scale production and micro defects. At the same time, relying on specific fixtures and programming also increases the cost and complexity of testing. More efficient, accurate, and intelligent detection methods are therefore needed. In the field of object detection, neural networks are widely used for their excellent feature learning and classification capabilities. Currently, the main object detection algorithms include the two-stage R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN, the one-stage SSD and YOLO, and the Transformer-based DETR. These algorithms each have advantages and disadvantages in different scenarios: two-stage algorithms generally achieve higher detection accuracy, while one-stage algorithms are faster and particularly suitable for real-time detection. By combining industrial cameras with computer technology, real-time monitoring of PCBs can be achieved.
In recent years, with the rapid development of deep learning technology, remarkable results have been achieved in fields such as image recognition and object detection. Deep learning models can automatically extract useful information by learning features from large amounts of data, enabling accurate identification of targets. Applying deep learning to PCB surface defect detection therefore promises to overcome the problems of traditional methods and to improve detection accuracy and efficiency. Some deep-learning-based PCB defect detection methods have been proposed in the prior art; these generally use a convolutional neural network (CNN) or another model to extract and classify features from PCB images.
However, PCB surface features are complex, most defects are small, and small defects are not clearly separated from the background, so existing deep learning approaches still detect small-size defects poorly.
Based on the above situation, the embodiment of the invention provides a defect detection model construction method, a defect detection device and a medium.
In a first aspect, an embodiment of the present invention provides a method for constructing a defect detection model, and fig. 1 is a schematic flow chart of the method for constructing a defect detection model, as shown in fig. 1, where the method includes the following steps:
Step S101, an image sample set containing defect information about the PCB is obtained, and the image sample set is expanded to obtain an expanded image sample set.
In this embodiment, an image sample set containing PCB defect information is acquired and used as the base data for training the subsequent defect detection model. Expanding this set increases its diversity so that the defect detection model generalizes better, improving the accuracy and robustness of defect detection.
In one embodiment, data enhancement techniques are used to perform image rotation, flipping, scaling, brightness and contrast adjustment, color change, image segmentation, noise injection, and image synthesis on the image samples in the image sample set to obtain the expanded image sample set.
Specifically, to improve the defect detection model's ability to recognize PCB defects, this embodiment adopts multiple image data enhancement techniques. For example, rotating an image sample at different angles yields views from multiple angles, increasing the model's perception of defects in different orientations. Flipping includes horizontal and vertical flips, which generate left-right or top-bottom mirrored versions of the image sample and help the model learn more feature variations. Scaling adapts the image samples to defects of different sizes. Adjusting brightness and contrast helps the defect detection model perform stably under various illumination conditions. Adjusting color balance and hue ensures the model can accurately identify defects on circuit boards of different colors. Dividing an image sample into several parts, adding random noise, or synthesizing images further enriches the training data and strengthens the robustness and generalization of the model. Using these data enhancement techniques effectively improves the defect detection performance of the model in complex circuit board environments.
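A few of these enhancements can be sketched in numpy on a single H x W x 3 uint8 sample (the variant names, brightness shift, and noise magnitude are illustrative choices, not parameters from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return simple augmented variants of one image sample (H x W x 3, uint8)."""
    out = {}
    out["hflip"] = img[:, ::-1]                       # horizontal flip
    out["vflip"] = img[::-1, :]                       # vertical flip
    bright = img.astype(np.int16) + 40                # brightness shift
    out["bright"] = np.clip(bright, 0, 255).astype(np.uint8)
    noise = rng.normal(0, 10, img.shape)              # Gaussian noise injection
    out["noisy"] = np.clip(img + noise, 0, 255).astype(np.uint8)
    out["scaled"] = img[::2, ::2]                     # crude 2x downscale
    return out
```

In practice a library such as Albumentations or the augmentation hooks built into a training framework would cover rotation, hue shifts, and mosaic-style synthesis as well; the point here is only that each variant preserves the defect content while varying its appearance.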
Step S102, embedding an SEAttention attention mechanism in the YOLOv8n Backbone network, and replacing some of the C2f modules in the Neck network and Backbone network of YOLOv8n with C3 modules to generate an improved YOLOv8n defect detection model.
In this embodiment, to improve the detection of small-size PCB defects, an SEAttention attention mechanism is embedded in the YOLOv8n Backbone network. The SEAttention mechanism can dynamically adjust the importance of each channel according to the global information of the feature map, enhancing the feature map's representational capability, improving the model's extraction and representation of complex features, and facilitating accurate defect detection. In addition, replacing some of the C2f modules in the Neck network and Backbone network of YOLOv8n with C3 modules expands the network's capture of spatial features; the C3 module generally has stronger feature extraction capability and can better understand and analyze complex, fine defect structures in images. The resulting improved YOLOv8n defect detection model can therefore locate and identify various types of defects, particularly small-size defects, more effectively.
Referring to fig. 2-3, fig. 2 is the YOLOv8n base architecture diagram, and fig. 3 is the improved YOLOv8n defect detection model architecture diagram. Here Input is the input, Output is the output, Conv is a convolution, Upsample is the upsampling operation (increasing the spatial dimensions of the feature map), and Detect is the detection head (identifying the locations and classes of objects in the image, output as a set of bounding boxes around the objects with a class label and confidence score for each).
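The Upsample operation can be illustrated with nearest-neighbour interpolation on a (C, H, W) feature map (a generic numpy sketch of the operation, not the actual framework layer):

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbour upsampling of a (C, H, W) feature map: each spatial
    position is repeated `factor` times along height and width."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)
```

Doubling the spatial resolution this way lets the Neck concatenate a deep, low-resolution feature map with a shallower, higher-resolution one.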
C2f includes a Split operation (splitting a feature map with C channels into two feature maps with C/2 channels each), Conv convolutions, n Bottleneck blocks (each containing two convolutions), and a Concat operation (concatenating different feature maps along the channel dimension).
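The channel routing just described can be sketched shape-only in numpy, with identity operations standing in for the learned convolutions and Bottleneck blocks, so only the Split/Concat bookkeeping is real:

```python
import numpy as np

def c2f_sketch(x, n=2):
    """Channel-routing sketch of a C2f block on a (C, H, W) feature map:
    split into two halves, run n Bottleneck stand-ins on the second half,
    keep every intermediate result, and concatenate along the channel axis."""
    c = x.shape[0] // 2
    y1, y2 = x[:c], x[c:]                # Split along the channel dimension
    outs = [y1, y2]
    for _ in range(n):
        y2 = y2 + 0.0                    # Bottleneck stand-in (residual identity)
        outs.append(y2)
    return np.concatenate(outs, axis=0)  # Concat: (2 + n) * C/2 channels
```

The real block follows the Concat with a 1x1 convolution to mix the (2 + n) * C/2 channels back down; that learned step is omitted here.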
In one embodiment, embedding the SEAttention attention mechanism in the YOLOv8n Backbone network comprises:
embedding the SEAttention attention mechanism after the SPPF module in the YOLOv8n Backbone network.
Specifically, the SPPF module is the Spatial Pyramid Pooling Fast module. The SPPF module performs spatial pooling at different scales, so the pooled feature maps carry richer spatial information; applying the SEAttention attention mechanism afterwards allows the importance of the feature maps to be adjusted across multiple scales while integrating that richer spatial information. Compared with embedding the SEAttention attention mechanism after a convolutional layer, the importance of the feature map can be adjusted in a global scope rather than being limited to local convolutional-layer feature maps, and the Backbone network's ability to process multi-scale information and adapt to complex tasks is enhanced.
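The squeeze-and-excitation computation behind SEAttention can be sketched in a few lines of NumPy; the weights w1/w2 are random stand-ins for the two learned fully connected layers, and the reduction ratio r is an assumed hyperparameter (the patent does not specify one):

```python
import numpy as np

def se_attention(x, w1, w2):
    """x: feature map of shape (C, H, W). Returns x rescaled per channel."""
    # Squeeze: global average pooling summarizes each channel by one scalar.
    z = x.mean(axis=(1, 2))                      # (C,)
    # Excitation: FC -> ReLU -> FC -> sigmoid yields per-channel weights.
    s = np.maximum(z @ w1, 0.0)                  # (C/r,)
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))          # (C,), each in (0, 1)
    # Scale: reweight every channel by its learned importance.
    return x * s[:, None, None]

rng = np.random.default_rng(0)
c, r = 64, 16
x = rng.standard_normal((c, 8, 8))
w1 = rng.standard_normal((c, c // r))
w2 = rng.standard_normal((c // r, c))
y = se_attention(x, w1, w2)
print(y.shape)   # same shape as x; channels rescaled by attention weights
```

Because the squeeze step pools over the whole spatial extent, the per-channel weights are computed from global information, which is what lets the mechanism reweight feature maps beyond any single convolution's receptive field.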
With continued reference to fig. 2-3, in one embodiment, replacing a part of the C2f modules in the Neck network and Backbone network of YOLOv8n with C3 modules includes:
replacing the C2f modules of the fifth layer and the ninth layer in the Backbone network with C3 modules;
replacing the top-level C2f module in the FPN module of the Neck network with a C3 module;
replacing the bottom-level C2f module in the PAN module of the Neck network with a C3 module.
Specifically, the C3 module uses Bottleneck as its basic building unit and, drawing on the idea of CSPNet (Cross Stage Partial Network), performs cross-stage connection of features between different network stages. It allows stacking different numbers of Bottleneck blocks to accommodate models of different sizes and computing requirements, extracts features by splitting and fusing, and combines residual connections to enhance the characterization capability of the network. Compared with the C2f module, the C3 module may have greater network depth or more complex feature transformation capability, providing richer and more abstract feature representations, better handling details and small targets in complex scenes, and helping to improve the accuracy of the defect detection model on target defects.
In one embodiment, prior to generating the improved YOLOv8n PCB surface defect model, the construction method further includes: replacing the NMS algorithm employed in YOLOv8 with the Soft-NMS algorithm.
In this embodiment, the original non-maximum suppression algorithm (NMS) ensures the uniqueness of the final detection result by discarding bounding boxes that overlap heavily with the highest-scoring box, but in dense occlusion scenes this easily causes targets to be missed. The Soft-NMS algorithm instead introduces a softened weight into the overlap calculation, gradually reducing the confidence of non-maximum boxes rather than abruptly setting it to zero. The score change of the bounding boxes is therefore smoother, bounding boxes no longer vanish suddenly, some low-confidence bounding boxes are retained, multiple overlapping bounding boxes are kept more reasonably, and adaptability to dense occlusion scenes is improved.
Specifically, the defect detection model outputs a probability distribution for each predicted bounding box, where the highest probability value represents the confidence that the predicted bounding box belongs to a particular defect class. To screen out valid predicted bounding boxes, redundant boxes are eliminated by the Soft-NMS algorithm, and a predicted bounding box is considered a valid target detection result only if its confidence score is higher than a preset threshold.
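The decay-and-threshold behavior described above can be sketched as follows; the Gaussian penalty, sigma, and the near-zero score threshold are common choices from the Soft-NMS literature (Bodla et al.), assumed here rather than taken from the patent:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box against many; boxes are (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    idx = np.arange(len(scores))
    while idx.size:
        m = idx[np.argmax(scores[idx])]          # current highest score
        keep.append(m)
        idx = idx[idx != m]
        # Decay (rather than discard) the scores of overlapping boxes.
        ov = iou(boxes[m], boxes[idx])
        scores[idx] *= np.exp(-(ov ** 2) / sigma)
        idx = idx[scores[idx] > score_thresh]    # drop only near-zero scores
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
order = soft_nms(boxes, scores)
print(order)
```

Note how the second box, heavily overlapped by the first, keeps a decayed score rather than being removed outright; classic NMS with a typical IoU threshold would have discarded it, which is the missed-target failure mode in dense scenes.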
Step S103, taking the extended image sample set as the input of the defect detection model, and training with the Wise-IoU loss function to obtain the target defect detection model.
In this embodiment, detecting small-size defects or highly overlapping defects requires the overlap and occlusion relationships to be handled effectively. The Wise-IoU loss considers the relative position and scale difference between objects, so compared with the traditional IoU calculation it improves the detection of small-size defects in particular, further raising the detection accuracy of the model in complex scenes and on small-size defects.
In one embodiment, before taking the extended image sample set as input to the defect detection model, the method further comprises: labeling the image samples in the extended image sample set using the LabelImg tool, and generating a real bounding box for each defect position and a defect category corresponding to the real bounding box; wherein the defect categories include short circuits, open circuits, missing holes, excess copper, or pins.
Specifically, for each image sample, a real bounding box is drawn using the LabelImg rectangle tool to mark the location of each defect. When labeling real bounding boxes, each real bounding box is assigned a corresponding defect category, including but not limited to short circuits, open circuits, missing holes, excess copper, pins, missing components, or incorrect locations.
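A LabelImg-drawn rectangle is typically exported for YOLO training as a normalized label line; the sketch below shows that conversion. The class-index mapping is an assumed ordering, since the patent names the categories but not their indices:

```python
# Assumed class ordering for illustration only; the patent does not fix one.
CLASSES = ["short", "open", "missing_hole", "excess_copper", "pin"]

def to_yolo_line(cls_name, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to the normalized
    'class x_center y_center width height' line YOLO expects."""
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{CLASSES.index(cls_name)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A missing-hole defect boxed at (100, 200)-(140, 260) in a 640x640 image:
line = to_yolo_line("missing_hole", 100, 200, 140, 260, 640, 640)
print(line)
```

One such line is written per defect, one label file per image, which is the format the later training step consumes alongside the dataset configuration file.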
In one embodiment, taking the extended image sample set as the input of the defect detection model and training with the Wise-IoU loss function to obtain the target defect detection model includes:
taking the extended image sample set as the input of the defect detection model to generate a predicted bounding box;
generating a similarity between the real bounding box and the predicted bounding box according to the Wise-IoU loss function;
and generating a gradient value of the defect detection model according to the similarity, and updating and training the defect detection model according to the gradient value to obtain the target defect detection model.
Specifically, the extended image sample set is input into the defect detection model, which generates a predicted bounding box for each image sample. The Wise-IoU loss function measures the similarity between the real bounding box and the predicted bounding box and outputs a loss value representing the degree of difference between them. In the back-propagation stage, the loss value calculated by the Wise-IoU loss function is used to compute the gradients of the model, and the model parameters are then updated by gradient descent or similar methods so that the model predicts bounding boxes more accurately. Finally, whether training is complete is judged from indicators such as accuracy, recall, and average precision, until the target defect detection model is obtained.
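The similarity measure can be sketched from the published Wise-IoU v1 formulation (Tong et al.), in which the plain IoU loss is scaled by a distance-based focusing factor computed from the box centers and the smallest enclosing box. This is an illustration of that published formulation, not code from the patent:

```python
import numpy as np

def wise_iou_loss(pred, gt):
    """pred, gt: boxes as (x1, y1, x2, y2). Returns a scalar loss."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(pred) + area(gt) - inter)

    # Centers of the two boxes and size of the smallest enclosing box.
    cxp, cyp = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cxg, cyg = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])

    # Focusing factor grows with the center distance (the original paper
    # treats this factor as a constant with respect to gradients).
    r = np.exp(((cxp - cxg) ** 2 + (cyp - cyg) ** 2) / (wg ** 2 + hg ** 2))
    return r * (1 - iou)

perfect = wise_iou_loss((0, 0, 10, 10), (0, 0, 10, 10))   # identical boxes
shifted = wise_iou_loss((2, 2, 12, 12), (0, 0, 10, 10))   # offset prediction
print(perfect, shifted)
```

An identical prediction gives zero loss, while a shifted one is penalized more than plain 1 - IoU because of the focusing factor, which is what sharpens the gradient signal on mislocalized small boxes.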
The following describes a specific implementation of the present invention in connection with a specific application scenario. In this specific application scenario, this embodiment is implemented by the following steps:
Step 1001: embedding the SEAttention attention mechanism after the SPPF module in the YOLOv8n Backbone network; replacing the C2f modules of the fifth layer and the ninth layer in the Backbone network with C3 modules; replacing the top-level C2f module in the FPN module of the Neck network with a C3 module; replacing the bottom-level C2f module in the PAN module of the Neck network with a C3 module; and replacing the NMS algorithm employed in YOLOv8 with the Soft-NMS algorithm, to generate the improved YOLOv8n defect detection model.
Step 1002: an image sample set containing defect information about a PCB circuit board is obtained (described by a mydata.yaml dataset configuration file), and data enhancement technology is used to apply image rotation, flipping, scaling, brightness and contrast adjustment, image color change, image segmentation, noise injection, and image synthesis to the image samples, yielding an extended image sample set. The image samples in the extended image sample set are then normalized, which helps accelerate model convergence and improves the training effect.
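The data-enhancement step above can be sketched with NumPy alone; real pipelines usually rely on a dedicated augmentation library, and the parameter values here (brightness offset, noise range) are illustrative, not taken from the patent:

```python
import numpy as np

def augment(img, rng):
    """img: uint8 array of shape (H, W, 3). Yields augmented copies."""
    yield img[:, ::-1]                              # horizontal flip
    yield np.rot90(img)                             # 90-degree rotation
    bright = np.clip(img.astype(np.int16) + 40, 0, 255).astype(np.uint8)
    yield bright                                    # brightness shift
    noise = rng.integers(-10, 11, img.shape, dtype=np.int16)
    noisy = np.clip(img.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    yield noisy                                     # noise injection

rng = np.random.default_rng(42)
img = rng.integers(0, 256, (32, 48, 3), dtype=np.uint8)
expanded = [img] + list(augment(img, rng))
print(len(expanded))   # original plus four augmented variants
```

Each original sample thus contributes several variants to the extended set; the clipping to [0, 255] keeps the augmented images valid uint8 data before the subsequent normalization.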
Step 1003: labeling the image samples in the extended image sample set using the LabelImg tool, and generating a real bounding box for each defect position and the defect category corresponding to the real bounding box.
Step 1004: using the pre-trained yolov8n.pt weights and the yolov8n.yaml configuration as starting points, a custom defect_yolov8n.yaml configuration file (covering input size, learning rate, data enhancement method, etc.) is created to better match the task requirements. The extended image sample set is input to the defect detection model to generate predicted bounding boxes; a similarity between the real bounding box and the predicted bounding box is generated according to the Wise-IoU loss function; and a gradient value of the defect detection model is generated according to the similarity, the model being updated and trained according to the gradient value to obtain the target defect detection model.
Referring to fig. 4-7, after steps 1001-1004 are completed, the performance of the target defect detection model is analyzed and tested against a number of key evaluation indicators, such as the PR curve, accuracy, recall-confidence curve, confusion matrix, and F1 value. These indicators show that the target defect detection model has a good detection effect.
Step 1005: the PCB image to be detected is obtained and input into the target defect detection model; referring to FIG. 8, the defect positions, defect types, and defect confidences of the PCB image to be detected are obtained.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
In a second aspect, an embodiment of the present invention provides a defect detection method, and fig. 9 is a schematic flow chart of the defect detection method provided in the embodiment of the present invention, as shown in fig. 9, where the method includes:
Step S201, a PCB image to be detected is acquired.
Step S202, inputting the PCB image to be detected into a target defect detection model obtained by the defect detection model construction method, and obtaining the defect position, the defect type and the defect confidence of the PCB image to be detected.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the above-mentioned defect detection model building method and/or defect detection method when executing the computer program.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In a fourth aspect, in combination with the method for constructing a defect detection model and/or the method for detecting a defect in the foregoing embodiments, embodiments of the present invention may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the defect detection model construction methods and/or defect detection methods of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a defect detection model construction method and/or a defect detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 10 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present invention, as shown in fig. 10, and an electronic device, which may be a server, and an internal structure diagram of which may be shown in fig. 10, is provided. The electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, computer programs, and a database. The processor is used for providing computing and control capabilities, the network interface is used for communicating with an external terminal through a network connection, the internal memory is used for providing an environment for the operation of an operating system and a computer program, the computer program is executed by the processor to realize a defect detection model construction method and/or a defect detection method, and the database is used for storing data.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present invention, and these modifications and substitutions should also be considered as being within the scope of the present invention.
Claims (10)
1. A method for constructing a defect detection model, the method comprising:
acquiring an image sample set containing defect information about a PCB, and expanding the image sample set to obtain an expanded image sample set;
embedding the SEAttention attention mechanism in a YOLOv8n Backbone network, and replacing a part of the C2f modules in the YOLOv8n Neck network and Backbone network with C3 modules to generate an improved YOLOv8n defect detection model;
and taking the extended image sample set as the input of the defect detection model, and training by adopting a Wise-IoU loss function to obtain a target defect detection model.
2. The method of claim 1, wherein the embedding the SEAttention attention mechanism in a YOLOv8n Backbone network comprises:
embedding the SEAttention attention mechanism after the SPPF module in the YOLOv8n Backbone network.
3. The method of claim 1, wherein the replacing a part of the C2f modules in the YOLOv8n Neck network and Backbone network with C3 modules comprises:
replacing the C2f modules of a fifth layer and a ninth layer in the Backbone network with C3 modules;
Replacing a top-level C2f module in the FPN module of the Neck network with a C3 module;
and replacing the C2f module at the bottom layer in the PAN module of the Neck network with a C3 module.
4. The method of claim 1, wherein prior to the generating the improved YOLOv8n PCB surface defect model, the method further comprises:
replacing the NMS algorithm employed in YOLOv8 with a Soft-NMS algorithm.
5. The method of claim 1, wherein expanding the image sample set to obtain an expanded image sample set comprises:
performing image rotation, flipping, scaling, brightness and contrast adjustment, image color change, image segmentation, noise injection, and image synthesis on the image samples in the image sample set by using a data enhancement technology, to obtain the extended image sample set.
6. The method of constructing of claim 1, wherein prior to said taking the extended image sample set as input to the defect detection model, the method further comprises:
labeling the image samples in the extended image sample set by using a LabelImg tool, and generating a real bounding box of a defect position and a defect category corresponding to the real bounding box; wherein the defect categories include short circuits, open circuits, missing holes, excess copper, or pins.
7. The method according to claim 6, wherein the training the extended image sample set as the input of the defect detection model with a Wise-IoU loss function to obtain a target defect detection model includes:
taking the extended image sample set as the input of the defect detection model to generate a predicted bounding box;
generating a similarity between the real bounding box and the predicted bounding box according to the Wise-IoU loss function;
and generating a gradient value of the defect detection model according to the similarity, and updating and training the defect detection model according to the gradient value to obtain the target defect detection model.
8. A defect detection method, the detection method comprising:
acquiring a PCB image to be detected;
Inputting the PCB image to be detected into a target defect detection model obtained by the defect detection model construction method according to any one of claims 1-7, and obtaining the defect position, defect type and defect confidence of the PCB image to be detected.
9. An electronic device, comprising:
a memory,
a processor, and
a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the defect detection model construction method according to any one of claims 1 to 7 and/or the defect detection method according to claim 8.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the defect detection model construction method according to any one of claims 1 to 7 and/or the defect detection method according to claim 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410823941.9A CN118521870A (en) | 2024-06-25 | 2024-06-25 | Defect detection model construction method, defect detection method, device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118521870A true CN118521870A (en) | 2024-08-20 |
Family
ID=92282649
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||