CN114820453A - Method for detecting surface flaws of packaged filament based on deep learning - Google Patents
Method for detecting surface flaws of packaged filament based on deep learning
- Publication number
- CN114820453A (application number CN202210326953.1A)
- Authority
- CN
- China
- Prior art keywords
- defect
- filament
- deep learning
- filaments
- picture library
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 31
- 238000013135 deep learning Methods 0.000 title claims abstract description 20
- 230000007547 defect Effects 0.000 claims abstract description 66
- 238000001514 detection method Methods 0.000 claims abstract description 21
- 238000000605 extraction Methods 0.000 claims abstract description 8
- 230000011218 segmentation Effects 0.000 claims abstract description 5
- 238000007781 pre-processing Methods 0.000 claims abstract description 4
- 238000001914 filtration Methods 0.000 claims description 6
- 238000012549 training Methods 0.000 claims description 6
- 238000012545 processing Methods 0.000 claims description 5
- 238000004804 winding Methods 0.000 claims description 4
- 206010017577 Gait disturbance Diseases 0.000 claims description 3
- 230000009471 action Effects 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 3
- 230000006870 function Effects 0.000 claims description 3
- 238000005286 illumination Methods 0.000 claims description 3
- 238000011176 pooling Methods 0.000 claims description 3
- 238000012360 testing method Methods 0.000 claims description 3
- 238000005516 engineering process Methods 0.000 abstract description 6
- 230000008901 benefit Effects 0.000 abstract description 3
- 230000000007 visual effect Effects 0.000 abstract description 2
- 230000008569 process Effects 0.000 description 5
- 239000000126 substance Substances 0.000 description 5
- 239000004744 fabric Substances 0.000 description 4
- 239000000835 fiber Substances 0.000 description 4
- 229920000728 polyester Polymers 0.000 description 4
- 239000004753 textile Substances 0.000 description 4
- 238000004519 manufacturing process Methods 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 210000004027 cell Anatomy 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 210000002569 neuron Anatomy 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 238000007380 fibre production Methods 0.000 description 1
- 230000009191 jumping Effects 0.000 description 1
- 230000000737 periodic effect Effects 0.000 description 1
- 238000000275 quality assurance Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000004611 spectroscopical analysis Methods 0.000 description 1
- 238000009987 spinning Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000011179 visual inspection Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a method for detecting surface flaws of packaged filament based on deep learning. Images of the packaged filament are collected by an industrial camera, and a packaged-filament surface defect picture library containing a number of different defect types and defect levels is established. The images are preprocessed, and the deep-learning backbone network VGGNet is then used as a basic feature extractor to perform multi-scale feature extraction on the picture library, extracting the corresponding defect category features and level features. Finally, an object detection module segments and locates the detected picture and outputs the defect category number and a prediction box. The advantages of the method are that various defects of wound filaments can be detected and classified through deep learning, higher-level pixel and region information can be captured, the semantic context around a defect region can be represented, the method is more universal, accuracy and efficiency are greatly improved, and it is of great significance to the field of industrial visual inspection.
Description
Technical Field
The invention relates to the field of industrial visual inspection, in particular to a method for detecting surface flaws of a wound filament yarn through deep learning.
Background
With the continued rapid development of China's chemical fiber industry, it has become an important component of the strategic emerging industries. On a chemical fiber production line, excessively high processing speed, uneven tension, insufficient false twisting, a poor spinning state of the spinning assembly, paper-tube jumping during winding and other causes lead to finished packages with broken filaments, stumbled filaments, stiff filaments, poor forming and similar problems, which seriously affect product quality and enterprise profit. Defect detection is therefore an important link in quality assurance during the production of chemical fiber filaments.
The traditional approach of manual defect inspection suffers from low efficiency, high labor intensity, poor real-time performance and low accuracy, while a detection method based on deep learning can overcome these shortcomings to a great extent. Such a method not only identifies various defects accurately but also meets the real-time requirements of the detection process, so it can greatly improve the production efficiency of chemical fiber packages and reduce enterprise production costs. With the development of computer vision technology, textile inspection methods based on automated industrial vision are widely used. Defect detection algorithms can be divided into two categories: traditional algorithms and learning-based algorithms. In general, a fabric image can be viewed as composed of similar, periodic texture elements that can be removed by well-designed filters, so traditional algorithms typically rely on hand-crafted features. There are many precedents for detecting textile defects with traditional algorithms: Raheja et al. (Fabric defect detection based on GLCM and Gabor filter: A comparison [J]. Optik, 2013, 124(23): 6469-) compared GLCM-based and Gabor-filter-based features for fabric defect detection. Among spectral (frequency-domain) methods, the local Fourier transform and the Gabor transform are widely used in industry. Jin et al. (Fabric defect detection using Gabor filters and defect classification based on LBP and Tamura method [J]. Journal of the Textile Institute, 2013, 104(1): 18-27) proposed locating defect regions by thresholding the Gabor response map of the input image and then training classifiers on texture descriptors such as the Tamura method and LBP to classify fabric defects. Chinese patent application CN201910368648.7, entitled "An appearance detection method of polyester filament", establishes a grade model of polyester filament surface wear through training and learning and stores the characteristic parameters in a database as comparison data for the measured characteristic parameters of polyester filament, thereby predicting the surface-wear grade of the measured polyester filament. Chinese patent application CN201810975946.8, entitled "A method for detecting broken filament defect of package filament", determines the geometric line parameters of short line segments in sub-images of the target image and compares them with a set threshold to judge whether the package filament has a broken filament defect.
In summary, existing methods for detecting defects of wound filament have strong limitations: traditional algorithms capture only low-level pixel and region information, which is insufficient to represent the semantic context around a defect region, and the extracted features can detect defects only for a specific type of fabric. The defect types of filament packages, however, are numerous, and designing a separate detection method for every defect type is too time-consuming and labor-intensive. Therefore, a wound-filament surface defect detection method based on deep learning is proposed, combining deep learning with defect detection.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a surface defect detection method for wound filament, based on deep learning, that addresses a variety of different defect characteristics.
The invention comprises the following steps:
a method for detecting surface flaws in a wound filament by deep learning, comprising the steps of:
Step 1: A wound-filament surface defect picture library is created by collecting and marking wound-filament surface defect pictures. The defect picture library comprises a number of marked and classified wound-filament surface defect pictures; it covers twelve defect categories: broken filaments, winding diameter, stumbled filaments, net filaments, dirty filaments, poor forming, no tail, multiple tails, stiff filaments, damaged paper tubes, broken ends and tail filaments;
Step 2: Preprocess the packaged filament image, specifically: 2.1) gray the color wound-filament image using a weighted average method; 2.2) apply homomorphic filtering to images with uneven illumination; 2.3) denoise the image with adaptive median filtering, i.e., take the median value within the filter window as the value of the window's central pixel;
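As an illustration of step 2, the following sketch chains the three preprocessing operations with OpenCV and NumPy. The grayscale weights, the homomorphic filter parameters (gamma_l, gamma_h, c, d0) and the maximum window size are assumptions for illustration rather than values fixed by the invention, and the window-growing rule is one common adaptive-median variant.

```python
import cv2
import numpy as np

def preprocess(bgr_image):
    """Weighted graying -> homomorphic filtering -> adaptive median filtering (sketch)."""
    # 1) Weighted-average graying (standard luminance weights, assumed here; OpenCV is BGR).
    gray = (0.114 * bgr_image[:, :, 0] +
            0.587 * bgr_image[:, :, 1] +
            0.299 * bgr_image[:, :, 2]).astype(np.float32)

    # 2) Homomorphic filtering: log -> FFT -> high-frequency emphasis -> inverse FFT -> exp.
    log_img = np.log1p(gray)
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = gray.shape
    u, v = np.meshgrid(np.arange(cols) - cols // 2, np.arange(rows) - rows // 2)
    d2 = u ** 2 + v ** 2
    gamma_l, gamma_h, c, d0 = 0.5, 1.5, 1.0, 30.0   # assumed filter parameters
    h = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / (d0 ** 2))) + gamma_l
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
    homo = np.expm1(filtered)
    homo = cv2.normalize(homo, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # 3) Adaptive median filtering: grow the window until its median is not an
    #    extreme value, then replace the centre pixel with that median.
    return adaptive_median(homo, max_window=7)

def adaptive_median(img, max_window=7):
    # Readable reference implementation; a production version would be vectorized.
    out = img.copy()
    pad = max_window // 2
    padded = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_REFLECT)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            for w in range(3, max_window + 1, 2):
                r = w // 2
                win = padded[y + pad - r:y + pad + r + 1, x + pad - r:x + pad + r + 1]
                med, lo, hi = np.median(win), win.min(), win.max()
                if lo < med < hi:
                    out[y, x] = med
                    break
    return out
```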
Step 3: Perform multi-scale feature extraction on the picture library using the backbone network VGGNet as the basic feature extractor, extracting the corresponding defect category features of the picture library. The network uses convolution kernels of different sizes to form convolution groups and 2 × 2 max pooling, and replaces the fully connected layers with fully convolutional layers (a fully-convolutional net); the parameters learned during training are reused so that the resulting fully convolutional network can accept inputs of arbitrary width and height at test time and perform multi-scale feature extraction. Because the convolutional layer and the fully connected layer have the same functional form, they can be equivalently interchanged in the original VGGNet, so that more complex features can be extracted from each convolutional layer and the recognition rate of filament surface defect detection is improved. Fc8 is removed from the original VGGNet, fc6 and fc7 are replaced with convolutional layers (Conv6, Conv7), two additional convolutional layers (Conv8_1, Conv8_2) are appended after VGGNet, and all convolutional layers between Conv1 and Conv7 use the ReLU activation.
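A minimal sketch of the modified backbone described in step 3, assuming PyTorch/torchvision; the channel widths and strides of Conv6, Conv7, Conv8_1 and Conv8_2 are illustrative assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn
import torchvision

class FilamentBackbone(nn.Module):
    """VGG16 conv stages + Conv6/Conv7 (former fc6/fc7) + extra Conv8_1/Conv8_2."""
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16()
        self.features = vgg.features                                  # Conv1_1 ... Conv5_3 + pools
        self.conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=1)   # replaces fc6
        self.conv7 = nn.Conv2d(1024, 1024, kernel_size=1)             # replaces fc7 (fc8 dropped)
        self.conv8_1 = nn.Conv2d(1024, 256, kernel_size=1)
        self.conv8_2 = nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        c5 = self.features(x)                                         # fully convolutional: any H x W
        c6 = self.relu(self.conv6(c5))
        c7 = self.relu(self.conv7(c6))
        c8 = self.relu(self.conv8_2(self.relu(self.conv8_1(c7))))
        return [c5, c7, c8]                                           # multi-scale feature maps
```

Because every layer is convolutional, the same module accepts inputs of arbitrary width and height, e.g. `FilamentBackbone()(torch.randn(1, 3, 512, 384))`.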
Step 4: During training, unlabeled samples are classified according to the feature maps and the labeled samples to obtain pseudo labels. For any unlabeled sample x_i whose pseudo label is c, its similarity to all n_c labeled samples of class c is calculated as shown in formula (1), and the credibility of the pseudo label is used as a weight to adjust the network model parameters;
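Formula (1) is not reproduced in this text, so the sketch below only illustrates the general idea of step 4 under an assumed cosine-similarity form: the similarity between an unlabeled feature and the labeled features of its pseudo class is averaged into a credibility weight that scales the loss on the pseudo label.

```python
import torch
import torch.nn.functional as F

def pseudo_label_weight(feat_u, labeled_feats, labels, pseudo_class):
    """Credibility of pseudo label c for one unlabeled feature (assumed cosine form).

    feat_u:        (D,)   feature of the unlabeled sample x_i
    labeled_feats: (N, D) features of all labeled samples
    labels:        (N,)   class indices of the labeled samples
    pseudo_class:  int, the pseudo label c assigned to x_i
    """
    class_feats = labeled_feats[labels == pseudo_class]           # the n_c samples of class c
    sims = F.cosine_similarity(feat_u.unsqueeze(0), class_feats)  # similarity to each of them
    return sims.mean().clamp(min=0.0)                             # averaged, used as a loss weight

def weighted_pseudo_loss(logits_u, pseudo_class, weight):
    """Cross-entropy on the pseudo label, scaled by its credibility weight."""
    target = torch.tensor([pseudo_class])
    return weight * F.cross_entropy(logits_u.unsqueeze(0), target)
```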
Step 5: The object detection module (ODM) is used to segment and locate the image to be detected, outputting the defect category number C for the multi-classification task and the four coordinate values of the regression-task prediction box: x_min, y_min, width and height, where (x_min, y_min) is the offset of the prediction box relative to the cell in which it is located, and width and height are the width and height of the predicted bounding box; the ODM consists of four convolutional layers.
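A hedged sketch of the detection head in step 5. The channel counts and the single-anchor setting are assumptions; only the layer count (four convolutional layers) and the outputs (C class scores plus x_min, y_min, width, height per cell) follow the description above.

```python
import torch
import torch.nn as nn

class ODM(nn.Module):
    """Object detection module: a 2-conv trunk plus 2 conv heads = 4 convolutional layers."""
    def __init__(self, in_channels=512, num_classes=12, num_anchors=1):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Per spatial cell: class scores and (x_min, y_min, width, height).
        self.cls_head = nn.Conv2d(256, num_anchors * num_classes, 3, padding=1)
        self.box_head = nn.Conv2d(256, num_anchors * 4, 3, padding=1)

    def forward(self, feature_map):
        t = self.trunk(feature_map)
        return self.cls_head(t), self.box_head(t)   # shapes (N, C, H, W) and (N, 4, H, W)
```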
The invention has the advantages that various defects of wound filaments can be detected and classified through deep learning; higher-level pixel and region information can be captured and the semantic context around a defect region can be represented; the method is more universal, greatly improves accuracy and efficiency, and is of great significance to the field of industrial visual inspection.
Drawings
FIG. 1 is a front and top view of a conventional package filament of the present invention;
FIG. 2 is a photograph of surface defects of a wound filament after pretreatment in accordance with the present invention;
FIG. 3 is a flow framework of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The method first establishes a package-filament surface defect picture library, then performs multi-scale feature extraction on the corresponding images with a deep learning network, and finally performs classification. The specific steps of the disclosed method for detecting surface flaws of a wound filament based on deep learning are as follows:
Step 1: A conventional filament package is shown in Fig. 1; defect regions appear at its end and top. A wound-filament surface defect picture library is created by collecting and marking wound-filament surface defect pictures. The defect picture library comprises a number of marked and classified wound-filament surface defect pictures; it covers twelve defect categories: broken filaments, winding diameter, stumbled filaments, net filaments, dirty filaments, poor forming, no tail, multiple tails, stiff filaments, damaged paper tubes, broken ends and tail filaments;
Step 2: Preprocess the packaged filament image; the processing result is shown in Fig. 2. The specific steps are: (1) gray the color wound-filament image using a weighted average method; (2) apply homomorphic filtering to images with uneven illumination; (3) denoise the image with adaptive median filtering, i.e., take the median value within the filter window as the value of the window's central pixel;
Step 3: Perform multi-scale feature extraction on the picture library using the backbone network VGGNet as the basic feature extractor, extracting the corresponding defect category features of the picture library. The network uses convolution kernels of different sizes to form convolution groups and 2 × 2 max pooling, and replaces the fully connected layers with fully convolutional layers (a fully-convolutional net); the parameters learned during training are reused so that the resulting fully convolutional network can accept inputs of arbitrary width and height at test time and perform multi-scale feature extraction. The linear kernel in each convolutional layer is changed into a small network composed of a series of fully connected units, so that more complex features can be extracted in each convolutional layer and the recognition rate of filament surface defect detection is improved. In the original VGGNet the neurons of a convolutional layer are locally connected to the input, and because the convolutional layer and the fully connected layer have the same functional form (both are dot-product operations), a fully connected layer can equivalently replace a convolutional layer, i.e., richer neurons substitute for the original linear kernel of the convolutional layer to extract more complex features. Fc8 is removed from the original VGGNet, fc6 and fc7 are replaced with convolutional layers (Conv6 and Conv7), two additional convolutional layers (Conv8_1 and Conv8_2) are appended after VGGNet, and all convolutional layers between Conv1 and Conv7 use the ReLU activation function.
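The equivalence invoked in this paragraph (a fully connected layer is a dot product, so it can be rewritten as a convolution) can be made concrete by reshaping VGG16's fc6 weights into a 7 × 7 convolution kernel. The torchvision layer indices below are assumptions based on the standard VGG16 definition; this is a sketch of the general trick, not the invention's exact layer sizes.

```python
import torch
import torch.nn as nn
import torchvision

vgg = torchvision.models.vgg16()
fc6 = vgg.classifier[0]                     # Linear(512*7*7 -> 4096), acting on the flattened pool5 map

# Re-express fc6 as a convolution over the 7x7x512 pool5 map.
conv6 = nn.Conv2d(512, 4096, kernel_size=7)
with torch.no_grad():
    conv6.weight.copy_(fc6.weight.view(4096, 512, 7, 7))
    conv6.bias.copy_(fc6.bias)

# On a 7x7 pool5 map both forms give the same 4096-dim response; on larger inputs
# the convolutional form slides, which is what allows arbitrary H x W images.
pool5 = torch.randn(1, 512, 7, 7)
print(torch.allclose(fc6(pool5.flatten(1)), conv6(pool5).flatten(1), atol=1e-4))  # True up to float tolerance
```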
Step 4: During training, unlabeled samples are classified according to the feature maps and the labeled samples to obtain pseudo labels. For any unlabeled sample x_i whose pseudo label is c, its similarity to all n_c labeled samples of class c is calculated as shown in formula (1), and the credibility of the pseudo label is used as a weight to adjust the network model parameters;
Step 5: The object detection module (ODM) is used to segment and locate the image to be detected, outputting the defect category number C for the multi-classification task and the four coordinate values of the regression-task prediction box: x_min, y_min, width and height, where (x_min, y_min) is the offset of the prediction box relative to the cell in which it is located, and width and height are the width and height of the predicted bounding box; the ODM consists of four convolutional layers. The overall flow framework is shown in Fig. 3.
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.
Claims (6)
1. A method for detecting surface flaws of a wound filament based on deep learning is characterized by comprising the following steps:
1) collecting coiled filament pictures through an industrial camera, and simultaneously establishing a coiled filament surface defect picture library containing a plurality of different defect types and different defect levels;
2) preprocessing a picture of the packaged filament;
3) performing multi-scale feature extraction on the picture library by using a backbone network VGGNet in deep learning as a basic feature extractor, and respectively extracting corresponding defect category features and level features of the picture library;
4) classifying the unlabeled samples according to the characteristic pictures and the labeled samples to obtain pseudo labels;
5) and finally, a target detection module is adopted to carry out segmentation and positioning on the tested image, and the defect category number and the prediction frame are output.
2. A method for detecting surface flaws on a wound filament based on deep learning as claimed in claim 1, wherein the step 1) is specifically as follows: establishing a coiled filament surface defect picture library by collecting and marking coiled filament surface defect pictures; the defect picture library comprises a plurality of marked and classified packaged filament surface defect pictures of the packaged filament; in addition, the picture library of surface defects of the package filaments covers twelve defect categories of broken filaments, winding diameter, stumbling filaments, net filaments, dirty filaments, poor forming, no tail, multiple tails, stiff filaments, damaged paper tubes, broken ends and tail filaments.
3. A method for detecting surface flaws on a wound filament based on deep learning as claimed in claim 1, wherein the step 2) is specifically as follows:
2.1) graying the color coiled filament image by adopting a weighted average value method;
2.2) homomorphic filtering is carried out on the image with uneven illumination;
2.3) adopting self-adaptive median filtering to carry out denoising processing on the image, namely taking the median value in the filter action region as the value of the central pixel of the region.
4. A method according to claim 1, wherein the step 3) is specifically as follows: the network uses convolution kernels of different sizes to form convolution groups, 2 × 2 max pooling is used, and fully convolutional layers (a fully-convolutional net) replace the fully connected layers; the parameters learned during training are reused so that the resulting fully convolutional network can accept inputs of arbitrary width and height at test time and perform multi-scale feature extraction; in the original VGGNet the fully connected layers are used for equivalent replacement of the convolutional layers, fc8 is removed from the original VGGNet, fc6 and fc7 are replaced with convolutional layers Conv6 and Conv7, two additional convolutional layers Conv8_1 and Conv8_2 are added after VGGNet, and the convolutional layers between Conv1 and Conv7 have the ReLU activation function.
5. A method for detecting surface flaws on a wound filament based on deep learning as claimed in claim 1, wherein the step 4) is specifically as follows:
for any unlabeled sample x_i whose pseudo label is c, the similarity between x_i and all n_c labeled samples X_C = {x_c1, x_c2, …, x_cnc} of class c is calculated according to formula (1), and the network model parameters are adjusted with the credibility of the pseudo label as the weight;
6. A method of detecting surface flaws on a wound filament based on deep learning as claimed in claim 1, wherein the step 5) is specifically as follows:
the object detection module ODM is used to segment and locate the image to be detected, and outputs the defect category number C of the multi-classification task and the four coordinate values of the regression-task prediction box: x_min, y_min, width and height, where (x_min, y_min) is the offset of the prediction box relative to the cell in which it is located, and width and height are respectively the width and height of the predicted bounding box; the ODM consists of four convolutional layers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210326953.1A CN114820453A (en) | 2022-03-30 | 2022-03-30 | Method for detecting surface flaws of packaged filament based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210326953.1A CN114820453A (en) | 2022-03-30 | 2022-03-30 | Method for detecting surface flaws of packaged filament based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114820453A true CN114820453A (en) | 2022-07-29 |
Family
ID=82531753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210326953.1A Pending CN114820453A (en) | 2022-03-30 | 2022-03-30 | Method for detecting surface flaws of packaged filament based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114820453A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563276A (en) * | 2023-07-05 | 2023-08-08 | 菲特(天津)检测技术有限公司 | Chemical fiber filament online defect detection method and detection system |
- 2022-03-30: CN application CN202210326953.1A filed; published as CN114820453A; legal status: active, Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116563276A (en) * | 2023-07-05 | 2023-08-08 | 菲特(天津)检测技术有限公司 | Chemical fiber filament online defect detection method and detection system |
CN116563276B (en) * | 2023-07-05 | 2023-09-01 | 菲特(天津)检测技术有限公司 | Chemical fiber filament online defect detection method and detection system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Rasheed et al. | Fabric defect detection using computer vision techniques: a comprehensive review | |
CN109454006B (en) | Detection and classification method based on device for online detection and classification of chemical fiber spindle tripping defects | |
CN108765412B (en) | Strip steel surface defect classification method | |
CN114757900B (en) | Artificial intelligence-based textile defect type identification method | |
CN111145165A (en) | Rubber seal ring surface defect detection method based on machine vision | |
CN109685766B (en) | Cloth flaw detection method based on region fusion characteristics | |
CN109550712A (en) | A kind of chemical fiber wire tailfiber open defect detection system and method | |
CN107328787A (en) | A kind of metal plate and belt surface defects detection system based on depth convolutional neural networks | |
CN113658131B (en) | Machine vision-based tour ring spinning broken yarn detection method | |
CN107369155A (en) | A kind of cloth surface defect detection method and its system based on machine vision | |
CN111861990B (en) | Method, system and storage medium for detecting bad appearance of product | |
CN111415343A (en) | Artificial intelligence-based six-side appearance detection method for chip multilayer ceramic capacitor | |
CN109145985A (en) | A kind of detection and classification method of Fabric Defects Inspection | |
CN114820626A (en) | Intelligent detection method for automobile front part configuration | |
CN115731228B (en) | Gold-plated chip defect detection system and method | |
CN115266732B (en) | Carbon fiber tow defect detection method based on machine vision | |
CN114820453A (en) | Method for detecting surface flaws of packaged filament based on deep learning | |
Liu et al. | A computer vision system for automatic steel surface inspection | |
CN111402225A (en) | Cloth folding false detection defect discrimination method | |
CN115082449B (en) | Electronic component defect detection method | |
CN115240144B (en) | Method and system for intelligently identifying flaws in spinning twisting | |
CN116664540A (en) | Rubber sealing ring surface defect detection method based on Gaussian line detection | |
CN114283319A (en) | Locomotive wheel set tread stripping identification method | |
Geze et al. | Detection and classification of fabric defects using deep learning algorithms | |
CN111899221A (en) | Appearance defect detection-oriented self-migration learning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |