CN113362347B - Image defect region segmentation method and system based on super-pixel feature enhancement
- Publication number: CN113362347B
- Application number: CN202110801975.4A
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- G06T7/11—Region-based segmentation
- G06N3/045—Neural networks; Combinations of networks
- G06N3/08—Neural networks; Learning methods
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/0002—Image analysis; Inspection of images, e.g. flaw detection
- G06T2207/20081—Training; Learning
- G06T2207/20221—Image fusion; Image merging
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides an image defect region segmentation method based on multi-scale super-pixel feature enhancement, which comprises the following steps: S1: acquiring an image containing a workpiece surface defect; S2: preprocessing the image; S3: extracting features from the preprocessed image; S4: inputting the extracted features into an S2pNet network, which outputs super-pixel neighborhood association maps at different scales, the association maps representing the relationship between pixels inside and outside each super-pixel; S5: fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network used to segment the image defect region; S6: the segmentation network outputs the segmented defect regions. By extracting prior knowledge of the super-pixels at different scales and fusing it at multiple scales with the encoding features of the segmentation network, the invention enriches the feature information, so that the segmentation network outputs finer predicted segmentation regions.
Description
Technical Field
The invention relates to the field of machine-vision deep learning, and in particular to an image defect region segmentation method and system based on super-pixel feature enhancement.
Background
Most current deep-learning surface defect detection relies on supervised representation learning. Because its goals coincide with those of standard computer vision tasks, feature-learning-based defect detection can be regarded as an application of the relevant classical networks to the industrial field.

In industry, surface inspection of products is a key step in determining product quality. Owing to the influence of the processing environment and the manufacturing process, defects are irregular in shape, vary in size, and appear at unpredictable, random positions, and some defects are inconspicuous (i.e., similar to the background). Traditional vision algorithms therefore struggle to cover multiple defect types, and inconspicuous defects in particular suffer a high miss rate.

Data-driven deep learning can effectively improve the generalization ability of the detection model, and convolutional neural networks can effectively extract defect features, which is the key to detecting defects. However, most existing feature extraction methods use a downsampling-upsampling network structure that extracts defect features through convolution and pooling; the downsampling process inevitably loses feature information, and ordinary convolution and pooling cannot extract the feature information fully. For low-contrast, inconspicuous defects, extracting the defect features is an even greater challenge for deep learning.

Chinese patent CN111445471A, published on 2020-07-24, discloses a product surface defect detection method and device based on deep learning and machine vision. Its technical scheme is as follows: acquire a surface image of the product to be inspected with a line-scan industrial camera; preprocess the acquired image for defect features in real time to quickly determine whether defects exist in the surface image; use a trained deep convolutional neural network model to grade defect severity and classify defect types, the model being obtained by transfer learning, modification, and training of the classical Inception-v3 neural network. Because its extraction of feature information is likewise insufficient, that patent also struggles to detect low-contrast, inconspicuous defects.
Disclosure of Invention
The primary aim of the invention is to provide an image defect region segmentation method based on super-pixel feature enhancement that is suited to detecting low-contrast, multi-scale defects.
It is a further object of the present invention to provide an image defect region segmentation system based on super-pixel feature enhancement.
In order to solve the technical problems, the technical scheme of the invention is as follows:
An image defect region segmentation method based on super-pixel feature enhancement comprises the following steps:

S1: acquiring an image containing a workpiece surface defect;

S2: preprocessing the image;

S3: extracting features from the preprocessed image;

S4: inputting the extracted features into an S2pNet network, which outputs super-pixel neighborhood association maps at different scales, the association maps representing the relationship between pixels inside and outside each super-pixel;

S5: fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network used to segment the image defect region;

S6: the segmentation network outputs the segmented defect regions.
Preferably, the preprocessing of the image in step S2 is specifically:

acquiring the image, preliminarily setting the image detection area, and cropping.
Preferably, the feature extraction performed on the preprocessed image in step S3 is specifically:

for each pixel of the preprocessed image, extracting the relative coordinates (x, y) of the pixel, its three-channel value (l, a, b) in the LAB color space, its gradient h, and the label t of the current image, and taking the tuple (x, y, l, a, b, h, t) as the feature describing the pixel.
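As an illustration, a minimal sketch of this per-pixel feature construction is given below, assuming OpenCV and NumPy; the exact gradient operator and coordinate normalization are not specified in the text, so the Sobel magnitude and [0, 1] normalization used here are illustrative assumptions.

```python
import cv2
import numpy as np

def pixel_features(img_bgr: np.ndarray, label: int) -> np.ndarray:
    """Return a (7, H, W) map of (x, y, l, a, b, h, t) features per pixel."""
    h, w = img_bgr.shape[:2]
    # Relative coordinates (x, y), normalized to [0, 1] (an assumption).
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    xs /= max(w - 1, 1)
    ys /= max(h - 1, 1)
    # Three-channel value (l, a, b) in LAB color space.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    l, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    # Gradient h, computed here as the Sobel magnitude of the L channel.
    gx = cv2.Sobel(l, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(l, cv2.CV_32F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    # Image-level label t, broadcast to every pixel.
    t = np.full((h, w), float(label), dtype=np.float32)
    return np.stack([xs, ys, l, a, b, grad, t], axis=0)
```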
Preferably, before the extracted features are input into the S2pNet network in step S4, they are processed as follows:

the extracted features are mean-pooled at different scales to obtain features f_α, f_β, f_η at downsampling multiples α, β, η, with dimensions 7×(H/α)×(W/α), 7×(H/β)×(W/β), and 7×(H/η)×(W/η), respectively, where H and W are the height and width of the image.
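A sketch of this multi-scale mean pooling follows, assuming PyTorch; the downsampling multiples α = 4, β = 8, η = 16 are example values, not fixed by the text.

```python
import torch
import torch.nn.functional as F

feats = torch.randn(1, 7, 256, 256)            # (N, 7, H, W) pixel features
f_alpha = F.avg_pool2d(feats, kernel_size=4)   # (1, 7, H/4,  W/4)
f_beta  = F.avg_pool2d(feats, kernel_size=8)   # (1, 7, H/8,  W/8)
f_eta   = F.avg_pool2d(feats, kernel_size=16)  # (1, 7, H/16, W/16)
```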
Preferably, the super-pixel neighborhood association maps at the different scales in step S4 have dimensions 9×(H/α)×(W/α), 9×(H/β)×(W/β), and 9×(H/η)×(W/η). The first dimension is fixed at 9 and encodes the nine positions of the current pixel's neighborhood: upper-left, up, upper-right, left, center, right, lower-left, down, lower-right. Each entry of the association map represents the correlation between the current super-pixel and the nine super-pixels in its neighborhood.
Preferably, the S2pNet network uses the extracted features to train a super-pixel neighborhood association model, which outputs the super-pixel neighborhood association maps at the different scales. The training process of the model is as follows:

for the feature with downsampling multiple α, first initialize an s_m matrix of size 9×(H/α)×(W/α), and perform a weighted dot-product between s_m and the 3×3 neighborhood of the feature at every position to obtain the aggregated feature f_0 of size 7×(H/α)×(W/α):

f_0(p) = Σ_{k=1..9} s_m_k(p) · f(N_k(p))

where H and W are the height and width of the image, s_m is the super-pixel neighborhood association map to be learned, f is the extracted image feature, α is the downsampling multiple, and N_k(p) denotes the k-th neighbor of pixel p;

the similarity between the reconstructed feature f_rc and the original feature is taken as the measure of how well the super-pixel neighborhood association model has learned, and the objective function Loss_s_m is defined as:

Loss_s_m = |f_0 - f_rc|²

update the model by back-propagation according to the objective function until the objective falls below a threshold, at which point training is complete;

the features with downsampling multiples β and η are trained by the same method as above.
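A minimal training sketch for one scale follows, assuming PyTorch. The text does not give the weighted dot-product or the reconstruction f_rc explicitly, so here f_0 aggregates each pixel's 9-neighborhood features with softmax-normalized weights and f_rc is taken to be the pooled feature itself; both choices are interpretations, not the patent's verbatim formulation.

```python
import torch
import torch.nn.functional as F

def aggregate(f: torch.Tensor, s_m: torch.Tensor) -> torch.Tensor:
    """f: (N, C, H', W'), s_m: (N, 9, H', W') -> f_0: (N, C, H', W')."""
    n, c, hp, wp = f.shape
    # Gather the 9 neighbors of every pixel (zero padding at the borders).
    nbrs = F.unfold(f, kernel_size=3, padding=1)   # (N, C*9, H'*W')
    nbrs = nbrs.view(n, c, 9, hp, wp)
    w = torch.softmax(s_m, dim=1).unsqueeze(1)     # (N, 1, 9, H', W')
    return (nbrs * w).sum(dim=2)                   # weighted dot-product

f = torch.randn(1, 7, 64, 64)    # stands in for the pooled feature f_alpha
f_rc = f                         # reconstruction target (an interpretation)
s_m = torch.zeros(1, 9, 64, 64, requires_grad=True)
opt = torch.optim.Adam([s_m], lr=1e-2)
for step in range(200):
    f_0 = aggregate(f, s_m)
    loss = ((f_0 - f_rc) ** 2).mean()   # Loss_s_m = |f_0 - f_rc|^2
    opt.zero_grad()
    loss.backward()
    opt.step()
    if loss.item() < 1e-4:              # stop once below a threshold
        break
```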
Preferably, in step S5 the super-pixel neighborhood association maps at different scales are fused and spliced with the feature layers of corresponding scale in the segmentation network for segmenting the image defect region; fusion and splicing are two separate operations, and the fusion is specifically:

f_si = λ·s_m_i·f_i + (1-λ)·f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusing the i-th super-pixel association matrix, and λ is a hyper-parameter that balances the importance of the aggregated feature map against the original feature map.
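A sketch of the fusion step follows, assuming PyTorch. Since s_m_i has 9 channels and f_i has C channels, the product s_m_i·f_i is read here as the 9-neighborhood aggregation described above; this reading is an assumption.

```python
import torch
import torch.nn.functional as F

def fuse(f_i: torch.Tensor, s_m_i: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """f_si = lam * (s_m_i applied to f_i) + (1 - lam) * f_i."""
    n, c, h, w = f_i.shape
    nbrs = F.unfold(f_i, kernel_size=3, padding=1).view(n, c, 9, h, w)
    agg = (nbrs * s_m_i.unsqueeze(1)).sum(dim=2)   # aggregated feature map
    return lam * agg + (1.0 - lam) * f_i
```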
Preferably, the splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes the upsampling process and + denotes the splicing of feature maps.
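A sketch of this splicing follows, assuming PyTorch, that adjacent scales differ by a factor of 2, and that the "+" splicing is channel-wise concatenation; all three are assumptions consistent with, but not fixed by, the text.

```python
import torch
import torch.nn.functional as F

def up(x: torch.Tensor) -> torch.Tensor:
    # 2x bilinear upsampling (the factor and mode are assumptions).
    return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

def splice(f_s1: torch.Tensor, f_s2: torch.Tensor, f_s3: torch.Tensor) -> torch.Tensor:
    # f_out = up(up(up(f_s3) + f_s2) + f_s1), with "+" as concatenation.
    x = torch.cat([up(f_s3), f_s2], dim=1)
    x = torch.cat([up(x), f_s1], dim=1)
    return up(x)
```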
Preferably, training the segmentation network requires preprocessing the workpiece images, in which the program drives the stage motion while the camera synchronously acquires images, and the input images undergo random flipping, contrast stretching, and random center cropping.
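An illustrative preprocessing pipeline is sketched below, assuming torchvision; the flip probability, contrast range, and crop sizes are example values, and in practice the geometric transforms would need to be applied jointly to image and mask for segmentation training.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),        # random flipping
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(contrast=(0.8, 1.2)),   # contrast stretching
    transforms.CenterCrop(512),                    # crop around the center...
    transforms.RandomCrop(448),                    # ...with a random offset
    transforms.ToTensor(),
])
```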
An image defect region segmentation system based on super-pixel feature enhancement, comprising:
the image acquisition module is used for acquiring an image comprising the surface defects of the workpiece;
the preprocessing module is used for preprocessing the image;
the feature extraction module is used for extracting features of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network; the S2pNet network outputs super-pixel neighborhood association maps at different scales, which represent the relationship between pixels inside and outside each super-pixel;

the fusion-splicing module is used for fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network for segmenting the image defect region;

and the output module is used for outputting the segmented defect regions by means of the segmentation network.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:

The invention uses the S2pNet network to learn super-pixel prior knowledge of the image, extracts this prior knowledge at different scales, and fuses it at multiple scales with the encoding features of the segmentation network. This makes the feature-layer information more compact, lets the feature points within a layer influence one another, and enriches the feature information, compensating for the insufficient supervision information of a weakly supervised network and enabling the weakly supervised segmentation network to extract first-order and even higher-order information about the image pixels, so that the segmentation network finally outputs finer predicted segmentation regions.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a system module according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides an image defect region segmentation method based on super-pixel feature enhancement, as shown in FIG. 1, comprising the following steps:
S1: acquiring an image containing a workpiece surface defect;

S2: preprocessing the image;

S3: extracting features from the preprocessed image;

S4: inputting the extracted features into an S2pNet network, which outputs super-pixel neighborhood association maps at different scales, the association maps representing the relationship between pixels inside and outside each super-pixel;

S5: fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network used to segment the image defect region;

S6: the segmentation network outputs the segmented defect regions.
In step S2, the image is preprocessed, specifically:

acquiring the image, preliminarily setting the image detection area, and cropping.
In step S3, feature extraction is performed on the preprocessed image, specifically:

for each pixel of the preprocessed image, extracting the relative coordinates (x, y) of the pixel, its three-channel value (l, a, b) in the LAB color space, its gradient h, and the label t of the current image, and taking the tuple (x, y, l, a, b, h, t) as the feature describing the pixel.
In step S4, before the extracted features are input into the S2pNet network, they are processed as follows:

the extracted features are mean-pooled at different scales to obtain features f_α, f_β, f_η at downsampling multiples α, β, η, with dimensions 7×(H/α)×(W/α), 7×(H/β)×(W/β), and 7×(H/η)×(W/η), respectively.
The super-pixel neighborhood association maps in step S4 have dimensions 9×(H/α)×(W/α), 9×(H/β)×(W/β), and 9×(H/η)×(W/η) at the different scales; the multi-scale neighborhood association model can extract pixel relationships both vertically and horizontally. The first dimension is fixed at 9 and encodes the nine positions of the current pixel's neighborhood: upper-left, up, upper-right, left, center, right, lower-left, down, lower-right. Each entry of the association map represents the correlation between the current super-pixel and the nine super-pixels in its neighborhood: the larger the weight, the higher the probability that the two super-pixels belong to the same category; the smaller the weight, the weaker the correlation between the two super-pixels and the more likely they are to be assigned different category labels.
The S2pNet network is a convolutional neural network composed of several convolutional layers. It is the core of the algorithm and is mainly responsible for training the neighborhood association model between pixels inside and outside each super-pixel, covering the definition of the network model's output and the design of the training strategy for the super-pixel neighborhood association model. S2pNet is an encoding/inverse-coding (encoder-decoder) network structure; its input and output differ in channel count but are identical in spatial size.
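A minimal sketch of such an encoder-decoder with equal input/output spatial size and a 9-channel output follows, assuming PyTorch; the layer count and channel widths are illustrative choices, not the patent's architecture.

```python
import torch.nn as nn

class S2pNet(nn.Module):
    """Encoder-decoder mapping a 7-channel feature map to a 9-channel
    neighborhood association map of the same spatial size."""
    def __init__(self, in_ch: int = 7, out_ch: int = 9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```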
The S2pNet network uses the extracted features to train a super-pixel neighborhood association model, which outputs the super-pixel neighborhood association maps at the different scales. The training process of the model is as follows:

for the feature with downsampling multiple α, first initialize an s_m matrix of size 9×(H/α)×(W/α), and perform a weighted dot-product between s_m and the 3×3 neighborhood of the feature at every position to obtain the aggregated feature f_0 of size 7×(H/α)×(W/α):

f_0(p) = Σ_{k=1..9} s_m_k(p) · f(N_k(p))

where H and W are the height and width of the image, s_m is the super-pixel neighborhood association map to be learned, f is the extracted image feature, α is the downsampling multiple, and N_k(p) denotes the k-th neighbor of pixel p;

the similarity between the reconstructed feature f_rc and the original feature is taken as the measure of how well the super-pixel neighborhood association model has learned, and the objective function Loss_s_m is defined as:

Loss_s_m = |f_0 - f_rc|²

update the model by back-propagation according to the objective function until the objective falls below a threshold, at which point training is complete;

the features with downsampling multiples β and η are trained by the same method as above.
In step S5, the super-pixel neighborhood association maps at different scales are fused and spliced with the feature layers of corresponding scale in the segmentation network for segmenting the image defect region; fusion and splicing are two separate operations, and the fusion is specifically:

f_si = λ·s_m_i·f_i + (1-λ)·f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusing the i-th super-pixel association matrix, and λ is a hyper-parameter that balances the importance of the aggregated feature map against the original feature map.
The splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes the upsampling process and + denotes the splicing of feature maps.
Training the segmentation network requires preprocessing the workpiece images, in which the program drives the stage motion while the camera synchronously acquires images, and the input images undergo random flipping, contrast stretching, and random center cropping.
Example 2
The present embodiment provides an image defect region segmentation system based on super-pixel feature enhancement, as shown in FIG. 2, comprising:
the image acquisition module is used for acquiring an image comprising the surface defects of the workpiece;
the preprocessing module is used for preprocessing the image;
the feature extraction module is used for extracting features of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network; the S2pNet network outputs super-pixel neighborhood association maps at different scales, which represent the relationship between pixels inside and outside each super-pixel;

the fusion-splicing module is used for fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network for segmenting the image defect region;

and the output module is used for outputting the segmented defect regions by means of the segmentation network.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent;
it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.
Claims (5)
1. An image defect region segmentation method based on super-pixel feature enhancement, characterized by comprising the following steps:

S1: acquiring an image containing a workpiece surface defect;

S2: preprocessing the image;

S3: extracting features from the preprocessed image;

S4: inputting the extracted features into an S2pNet network, which outputs super-pixel neighborhood association maps at different scales, the association maps representing the relationship between pixels inside and outside each super-pixel;

S5: fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network used to segment the image defect region;

S6: the segmentation network outputs the segmented defect regions;

wherein the preprocessing of the image in step S2 is specifically: acquiring the image, and preliminarily determining the image detection area as the cropping area;

the feature extraction performed on the preprocessed image in step S3 is specifically: for each pixel of the preprocessed image, extracting the relative coordinates (x, y) of the pixel, its three-channel value (l, a, b) in the LAB color space, its gradient h, and the label t of the current image, and taking the tuple (x, y, l, a, b, h, t) as the feature describing the pixel;

before the extracted features are input into the S2pNet network in step S4, they are processed as follows: the extracted features are mean-pooled at different scales to obtain features f_α, f_β, f_η at downsampling multiples α, β, η, with dimensions 7×(H/α)×(W/α), 7×(H/β)×(W/β), and 7×(H/η)×(W/η), respectively;

the super-pixel neighborhood association maps at the different scales in step S4 have dimensions 9×(H/α)×(W/α), 9×(H/β)×(W/β), and 9×(H/η)×(W/η); the first dimension is fixed at 9 and encodes the nine positions of the current pixel's neighborhood, namely upper-left, up, upper-right, left, center, right, lower-left, down, lower-right; each entry of the association map represents the correlation between the current super-pixel and the nine super-pixels in its neighborhood;

the S2pNet network uses the extracted features to train a super-pixel neighborhood association model, which outputs the super-pixel neighborhood association maps at the different scales; the training process of the model is as follows:

for the feature with downsampling multiple α, first initialize an s_m matrix of size 9×(H/α)×(W/α), and perform a weighted dot-product between s_m and the 3×3 neighborhood of the feature at every position to obtain the aggregated feature f_0 of size 7×(H/α)×(W/α):

f_0(p) = Σ_{k=1..9} s_m_k(p) · f(N_k(p))

where H and W are the height and width of the image, s_m is the super-pixel neighborhood association map to be learned, f is the extracted image feature, α is the downsampling multiple, and N_k(p) denotes the k-th neighbor of pixel p;

the similarity between the reconstructed feature f_rc and the original feature is taken as the measure of how well the super-pixel neighborhood association model has learned, and the objective function Loss_s_m is defined as:

Loss_s_m = |f_0 - f_rc|²

the model is updated by back-propagation according to the objective function until the objective falls below a threshold, at which point training is complete;

the features with downsampling multiples β and η are trained by the same method as the training process of the feature with downsampling multiple α.
2. The image defect region segmentation method based on super-pixel feature enhancement according to claim 1, characterized in that in step S5 the super-pixel neighborhood association maps at different scales are fused and spliced with the feature layers of corresponding scale in the segmentation network for segmenting the image defect region; fusion and splicing are two separate operations, and the fusion is specifically:

f_si = λ·s_m_i·f_i + (1-λ)·f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusing the i-th super-pixel association matrix, and λ is a hyper-parameter that balances the importance of the aggregated feature map against the original feature map.
3. The image defect region segmentation method based on super-pixel feature enhancement according to claim 2, characterized in that the splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes the upsampling process and + denotes the splicing of feature maps.
4. The image defect region segmentation method based on super-pixel feature enhancement according to claim 3, characterized in that training the segmentation network requires preprocessing the workpiece images, in which the program drives the stage motion while the camera synchronously acquires images, and the input images undergo random flipping, contrast stretching, and random center cropping.
5. An image defect region segmentation system based on super-pixel feature enhancement, characterized by comprising:

the image acquisition module, which is used for acquiring an image containing a workpiece surface defect;

the preprocessing module, which is used for preprocessing the image;

the feature extraction module, which is used for extracting features from the preprocessed image;

the S2pNet module, which is used for inputting the extracted features into an S2pNet network; the S2pNet network outputs super-pixel neighborhood association maps at different scales, the association maps representing the relationship between pixels inside and outside each super-pixel;

the fusion-splicing module, which is used for fusing and splicing the super-pixel neighborhood association maps at different scales with the feature layers of corresponding scale in the segmentation network for segmenting the image defect region;

and the output module, which outputs the segmented defect regions using the segmentation network;

wherein the preprocessing module preprocesses the image specifically by: acquiring the image, and preliminarily determining the image detection area as the cropping area;

the feature extraction module performs feature extraction on the preprocessed image specifically by: for each pixel of the preprocessed image, extracting the relative coordinates (x, y) of the pixel, its three-channel value (l, a, b) in the LAB color space, its gradient h, and the label t of the current image, and taking the tuple (x, y, l, a, b, h, t) as the feature describing the pixel;

before the S2pNet module inputs the extracted features into the S2pNet network, the extracted features are processed as follows: the extracted features are mean-pooled at different scales to obtain features f_α, f_β, f_η at downsampling multiples α, β, η, with dimensions 7×(H/α)×(W/α), 7×(H/β)×(W/β), and 7×(H/η)×(W/η), respectively;

the super-pixel neighborhood association maps at the different scales in the S2pNet module have dimensions 9×(H/α)×(W/α), 9×(H/β)×(W/β), and 9×(H/η)×(W/η); the first dimension is fixed at 9 and encodes the nine positions of the current pixel's neighborhood, namely upper-left, up, upper-right, left, center, right, lower-left, down, lower-right; each entry of the association map represents the correlation between the current super-pixel and the nine super-pixels in its neighborhood;

the S2pNet network uses the extracted features to train a super-pixel neighborhood association model, which outputs the super-pixel neighborhood association maps at the different scales; the training process of the model is as follows:

for the feature with downsampling multiple α, first initialize an s_m matrix of size 9×(H/α)×(W/α), and perform a weighted dot-product between s_m and the 3×3 neighborhood of the feature at every position to obtain the aggregated feature f_0 of size 7×(H/α)×(W/α):

f_0(p) = Σ_{k=1..9} s_m_k(p) · f(N_k(p))

where H and W are the height and width of the image, s_m is the super-pixel neighborhood association map to be learned, f is the extracted image feature, α is the downsampling multiple, and N_k(p) denotes the k-th neighbor of pixel p;

the similarity between the reconstructed feature f_rc and the original feature is taken as the measure of how well the super-pixel neighborhood association model has learned, and the objective function Loss_s_m is defined as:

Loss_s_m = |f_0 - f_rc|²

the model is updated by back-propagation according to the objective function until the objective falls below a threshold, at which point training is complete;

the features with downsampling multiples β and η are trained by the same method as the training process of the feature with downsampling multiple α.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110801975.4A | 2021-07-15 | 2021-07-15 | Image defect region segmentation method and system based on super-pixel feature enhancement |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN113362347A | 2021-09-07 |
| CN113362347B | 2023-05-26 |
Family

- ID=77539672
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2023-05-12 | TA01 | Transfer of patent application right | Applicant after: GUANGDONG University OF TECHNOLOGY; Guangzhou Deshidi Intelligent Technology Co.,Ltd. Applicant before: GUANGDONG University OF TECHNOLOGY. Address (before and after): 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province |