CN116612076B - Cabin micro scratch detection method based on combined twin neural network - Google Patents
Cabin micro scratch detection method based on combined twin neural network
- Publication number
- CN116612076B (Application CN202310477398.7A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- data
- residual
- loss
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/0895—Weakly supervised learning, e.g. semi-supervised or self-supervised learning
- G06V10/40—Extraction of image or video features
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06T2207/10056—Microscopic image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a cabin micro scratch detection method based on a combined twin neural network, which comprises the following steps: preprocessing data; constructing a combined twin network; and training with the preprocessed data to obtain a network model with good performance. The method has strong detection capability for micro scratches. At the data-acquisition and data-preprocessing levels it solves the problem that a higher-performance model cannot be obtained because of insufficient data, and it helps the neural network cope with the data imbalance caused by too few micro-scratch samples. The combined twin neural network proposed as the structural design of the network markedly improves the quality of feature extraction, and the objective function designed for the training process further addresses the imbalance between positive and negative samples in the data set through a weighted cross-entropy loss and a regularization term, improving the classification performance of the model on classes with fewer samples.
Description
Technical Field
The invention relates to the technical field of cabin micro-scratch detection, in particular to a cabin micro-scratch detection method based on a combined twin neural network.
Background
Cabin micro scratch detection is one of the important measures for guaranteeing the safety and performance of an aircraft. To assess fine scratches and the degree of abrasion on the fuselage surface, airlines and maintenance personnel can use a variety of technologies and methods, including optical microscopy, laser interferometry, infrared thermal imaging, nondestructive testing and artificial intelligence; each has its advantages and disadvantages, but used together they can improve detection efficiency and accuracy, preserve the integrity and performance of the aircraft surface, and ultimately safeguard flight safety. However, existing inspection that relies on intensive manual labour has high manpower cost, low detection efficiency and poor detection quality. With the development of artificial intelligence and deep learning, many airlines have begun to use machine learning and computer vision to detect micro scratches and abrasion; by training neural networks, such systems can automatically identify and classify surface defects, improving detection efficiency and accuracy. Artificial intelligence is an excellent technology, and applying and popularizing it in the field of cabin micro scratch detection can create great value.
In general, solving the cabin micro scratch detection problem with deep learning faces the following difficulties: (1) insufficient data: deep learning requires a large amount of training data, but cabin micro-scratch data is difficult to acquire, so the available data is too small to train a highly accurate model; (2) data imbalance: cabin micro scratches come in many types with very different numbers of samples per type, which affects the training and performance of a model; (3) difficult feature extraction: a cabin micro scratch is a tiny defect, so high-quality features must be extracted from the image for accurate recognition, which places higher demands on model design and optimization. A new method is therefore highly desirable that can accommodate the complex background interference and the problems present in the data, so as to achieve accurate detection of cabin micro scratches.
Disclosure of Invention
The invention aims to solve the technical problems in the existing field of cabin micro scratch detection that a higher-performance model cannot be obtained because of insufficient data, that the data is unbalanced because micro-scratch samples are too few, and that the features corresponding to cabin micro scratches cannot be extracted effectively.
In order to solve the technical problems, the invention is realized by the following steps:
the cabin micro scratch detection method based on the combined twin neural network comprises the following steps:
s1, preprocessing data;
s2, constructing a combined twin neural network;
and S3, training by utilizing the data obtained by preprocessing to obtain a neural network model with good performance.
Further, the step S1 of data preprocessing specifically includes the following steps:
s11, acquiring a data set in an actual scene through an optical microscope, and increasing the proportion of micro-scratch samples in the acquired data set by copy expansion so as to balance the positive and negative samples on the data side;
s12, resizing the images of the data set acquired in the step S11 to 512 × 512 by Lanczos interpolation;
and S13, rotating, translating, scaling and denoising the expanded data set to increase the number and diversity of samples and improve the generalization capability of the model.
Further, the overall structure of the neural network in the step S2 is a combined twin neural network for cabin micro scratch detection, which specifically comprises a residual structure module RM, a combined twin attention module and a multi-view self-attention mechanism module; the residual structure modules and the combined twin attention module are stacked together to form feature extractors and classifiers at different levels, each residual structure module contains a plurality of identical residual units, each residual unit is composed of stacked convolution layers and an activation function and is used for extracting and strengthening features, and each residual unit is connected with the previous residual unit to form a residual connection in the deep-learning sense; the combined twin attention module takes two residual structure modules as its inputs and uses an attention mechanism to focus finely on the critical areas of the image.
Preferably, the residual structure module RM comprises an input layer for collecting local features and converting channels, a U-shaped structure layer for multi-scale coding analysis, and a fusion output layer; the left half of the U-shaped structure layer is an encoding structure that obtains multi-scale features through convolution and enlarges the receptive field by down-sampling, the right half is a decoding structure that restores the features to a high-resolution feature map through up-sampling, and the encoding and decoding structures are cascaded through a skip structure in the middle of the U-shaped structure layer;
the combined twin attention module applies an up-sampling channel-spatial attention mechanism to the residual structure module RM in the decoding stage, enhancing the model's ability to process the features of interest; the multi-view self-attention mechanism module applies self-attention to features of different scales at the output and fuses the features of the multiple view layers so as to make fuller use of the extracted features.
Further, the step S3 specifically includes the following steps:
s31, selecting N=32 samples from the preprocessed data set as a batch;
s32, cutting, scaling and splicing the data in each batch in the step S31 to construct batch data suitable for neural network training;
s33, setting the learning rate in self-supervised learning to 5 × 10⁻⁴ and using Adam as the neural-network parameter update algorithm;
s34, the loss of the objective function in the training process comprises a pixel value loss and a weighted cross-entropy loss, and the calculation formula of the objective function loss is as follows:
L = L_wce + α · L_pix
where L denotes the objective function loss of the model, L_wce denotes the weighted cross-entropy loss, L_pix denotes the pixel value loss, and α is a hyper-parameter set to 1.5;
the calculation formula of the pixel value loss is as follows:
L_pix = ‖F(x) − x_GT‖²
where F(x) denotes the output of the combined twin neural network for the input image x, and x_GT denotes the carefully labelled binarized micro-scratch mask;
the calculation formula of the weighted cross-entropy loss is as follows:
L_wce = −(1/N) · Σ_{i=1..N} Σ_{c=1..C} w_c · y_{i,c} · log(p_{i,c})
where N denotes the total number of samples, C the number of classes, w_c the weight factor of class c, y_{i,c} the label indicating whether the i-th sample belongs to class c, and p_{i,c} the predicted probability that the i-th sample belongs to class c;
s35, the parameters of the neural network are regularized with an L1 penalty during training, and the calculation formula is as follows:
L_reg = λ · Σ_i |θ_i|
where λ denotes the proportion of the L1 regularization term in the total loss and is set to 0.15, and θ_i denotes the continually optimized parameters of the neural network;
and S36, the neural network is trained over multiple iterations, and the final algorithm performance is evaluated using the accuracy and recall of micro-scratch detection.
Compared with the prior art, the invention has the beneficial effects that:
the method has stronger detection capability for the micro scratches, effectively solves the problem that a model with higher performance cannot be obtained due to insufficient data volume in a data set acquisition layer and a data preprocessing layer, and is beneficial to solving the problem of unbalanced data caused by too few micro scratch samples in the data by a neural network. The combined twin neural network provided on the structural design of the neural network has obvious improvement on the quality of feature extraction; and the objective function designed in the training process further solves the problem of unbalanced number of positive and negative samples in the data set through weight cross entropy loss and regularization term processing, and improves the classification performance of the model on the class with fewer samples.
Drawings
Fig. 1 is a flowchart of a method for detecting micro scratches according to the present invention:
fig. 2 is a diagram showing a neural network structure of a micro scratch detection method according to the present invention:
fig. 3 is a diagram of the residual structure module RM according to the present invention.
Detailed Description
The following describes the embodiments of the present invention in further detail with reference to the drawings and specific examples.
As shown in fig. 1 to 3, the cabin micro scratch detection method based on the combined twin neural network comprises the following steps:
s1, data preprocessing, which specifically comprises the following steps:
s11, acquiring a data set in an actual scene through an optical microscope, and increasing the proportion of micro-scratch samples in the acquired data set by copy expansion so as to balance the positive and negative samples on the data side;
s12, resizing the images of the data set acquired in the step S11 to 512 × 512 by Lanczos interpolation, where Lanczos interpolation convolves the source image with a Lanczos kernel function and estimates the value of each target pixel by sampling the convolution result;
and S13, rotating, translating, scaling and denoising the data set obtained in the expansion mode so as to increase the number and diversity of samples and improve the generalization capability of the model.
In the target detection task, a negative sample is an image region that does not contain the target object to be detected. During training, the model must learn both to detect the target and to exclude non-target regions, so the negative samples serve as background images that teach the model to classify target regions correctly. In general, the number of negative samples exceeds the number of positive samples, and a large number of negative samples improves the recognition accuracy of the model; the proportion of positive samples (images containing micro scratches) is therefore expanded by copy expansion so that the positive and negative samples reach an approximately balanced ratio.
When the sizes of the sample images fed to the neural network for training are inconsistent, the network is prone to instability; resizing the images by Lanczos interpolation improves the stability of the neural network, reduces noise and irrelevant information in the images, and improves the model's ability to learn the real information. This series of processing steps on the training set improves the adaptability of the neural-network algorithm and is expected to overcome the unsatisfactory results that existing methods obtain in real scenes; a minimal preprocessing sketch along these lines is given below.
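As an illustration only, the following Python sketch mirrors steps S11-S13 using PIL, which the patent does not prescribe; the directory layout, the augmentation ranges and the oversampling factor are assumptions made for the example.

```python
import random
from pathlib import Path

from PIL import Image, ImageFilter

TARGET_SIZE = (512, 512)  # step S12: fixed input resolution

def load_and_resize(path: Path) -> Image.Image:
    """Resize a cabin-surface image to 512x512 with Lanczos interpolation (S12)."""
    return Image.open(path).convert("RGB").resize(TARGET_SIZE, Image.LANCZOS)

def augment(img: Image.Image) -> Image.Image:
    """Rotation, translation, scaling and light denoising used to expand the data set (S13)."""
    img = img.rotate(random.uniform(-15, 15), resample=Image.BILINEAR)      # rotation
    dx = int(random.uniform(-0.05, 0.05) * img.width)                       # translation
    dy = int(random.uniform(-0.05, 0.05) * img.height)
    img = img.transform(img.size, Image.AFFINE, (1, 0, dx, 0, 1, dy))
    scale = random.uniform(0.9, 1.1)                                        # scaling
    img = img.resize((int(img.width * scale), int(img.height * scale)), Image.LANCZOS)
    img = img.resize(TARGET_SIZE, Image.LANCZOS)
    return img.filter(ImageFilter.MedianFilter(size=3))                     # light denoising

def build_dataset(scratch_dir: Path, clean_dir: Path, oversample: int = 4):
    """S11/S13: copy-expand the positive (micro-scratch) images so that positives and
    negatives reach an approximately balanced ratio, then mix in the negative images."""
    samples = []
    for p in sorted(scratch_dir.glob("*.png")):
        base = load_and_resize(p)
        samples.append((base, 1))
        samples.extend((augment(base), 1) for _ in range(oversample - 1))
    for p in sorted(clean_dir.glob("*.png")):
        samples.append((load_and_resize(p), 0))
    random.shuffle(samples)
    return samples
```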
S2, constructing a combined twin neural network;
As shown in fig. 2 to 3, the overall structure of the neural network is a combined twin neural network for cabin micro scratch detection, and specifically comprises a residual structure module RM, a combined twin attention module and a multi-view self-attention mechanism module. The residual structure modules and the combined twin attention module are stacked together to form feature extractors and classifiers at different levels. Specifically, each residual structure module contains a plurality of identical residual units; each residual unit is formed by stacked convolution layers and activation functions and is used for extracting and strengthening features, and each residual unit is connected with the previous residual unit to form a residual connection in the deep-learning sense, which effectively avoids the vanishing-gradient problem caused by excessive network depth. Every residual structure module has the same structure and extracts information from the input features from multiple angles. The combined twin attention module takes two residual structure modules as its inputs; the two branches are highly similar but have different weights, so that different information is obtained from two perspectives, and an attention mechanism is used to focus finely on the critical areas of the image.
The residual structure module RM comprises an input layer for collecting local features and converting channels, a U-shaped structure layer for multi-scale coding analysis, and a fusion output layer. The left half of the U-shaped structure layer is an encoding structure that obtains multi-scale features through convolution and enlarges the receptive field by down-sampling; the right half is a decoding structure that restores the features to a high-resolution feature map through up-sampling; and the encoding and decoding structures are cascaded through a skip structure in the middle of the U-shaped structure layer.
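To make this layout concrete, below is a minimal PyTorch sketch of one possible residual structure module RM; the channel widths, the two-level depth of the U-shaped layer and the BatchNorm/ReLU choices are assumptions, since the text only fixes the overall arrangement (input layer, U-shaped encoder-decoder with a skip structure, fusion output, residual shortcut).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Stacked convolution + activation, the basic residual-unit building block."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class ResidualModule(nn.Module):
    """U-shaped residual structure module (RM): input layer, encoder (left half),
    decoder (right half) with a skip connection, fusion output, residual shortcut."""
    def __init__(self, in_ch: int, mid_ch: int = 32, out_ch: int = 64):
        super().__init__()
        self.stem = ConvBlock(in_ch, out_ch)            # input layer: local features + channel conversion
        self.enc1 = ConvBlock(out_ch, mid_ch)           # encoding structure
        self.enc2 = ConvBlock(mid_ch, mid_ch)
        self.dec2 = ConvBlock(mid_ch * 2, mid_ch)       # decoding structure
        self.fuse = ConvBlock(mid_ch + out_ch, out_ch)  # fusion output layer
    def forward(self, x):
        s = self.stem(x)
        e1 = self.enc1(s)
        e2 = self.enc2(F.max_pool2d(e1, 2))             # down-sampling enlarges the receptive field
        up = F.interpolate(e2, scale_factor=2, mode="bilinear", align_corners=False)
        d = self.dec2(torch.cat([up, e1], dim=1))       # skip structure bridges encoder and decoder
        out = self.fuse(torch.cat([d, s], dim=1))       # restore a high-resolution feature map
        return out + s                                  # residual connection around the whole module
```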
the combined twin attention module performs an up-sampling channel space attention mechanism on a residual structure module RM in a decoding stage, so that the processing capacity of the model on the interested features is enhanced; the multi-view self-attention mechanism module fuses the features of the multiple view layers for self-attention adopted by the features of different scales during output so as to more fully utilize the extracted features.
The residual structure module RM is introduced with cross-layer connection, so that the gradient is transferred more smoothly, and the problems of gradient disappearance and gradient explosion are relieved; the use of the combined twin attention module reduces the attention of the model to irrelevant information in input, improves the generalization capability of the model, helps the model learn the features with universality, and further improves the performance of the model on new data; the multi-view self-attention mechanism module can weight the input at different positions, so that important information in the input is highlighted, the neural network is helped to better capture key characteristics of the input, and the performance of the network is improved; the good model structural design is beneficial to the problem of extracting the characteristics corresponding to the micro scratches of the engine room.
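The attention components could be sketched as follows, reusing the imports and the ResidualModule class from the previous sketch; this is an illustrative channel-plus-spatial attention over two twin RM branches and a simple multi-view self-attention fusion rather than the patent's exact design, and the channel counts and head count are assumptions.

```python
class ChannelSpatialAttention(nn.Module):
    """Up-sampling channel/spatial attention applied to decoder-stage RM features."""
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(               # channel attention (squeeze-and-excitation style)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(               # spatial attention over the key image regions
            nn.Conv2d(ch, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )
    def forward(self, x):
        x = x * self.channel(x)
        return x * self.spatial(x)

class JointTwinAttention(nn.Module):
    """Combined twin attention: two structurally identical RM branches with separate
    weights look at the input from two perspectives; attention then focuses the key regions."""
    def __init__(self, in_ch: int, ch: int = 64):
        super().__init__()
        self.branch_a = ResidualModule(in_ch, out_ch=ch)
        self.branch_b = ResidualModule(in_ch, out_ch=ch)   # same structure, different weights
        self.attn = ChannelSpatialAttention(ch * 2)
        self.fuse = nn.Conv2d(ch * 2, ch, kernel_size=1)
    def forward(self, x):
        joint = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.fuse(self.attn(joint))

class MultiViewSelfAttention(nn.Module):
    """Fuses feature maps from several view layers with self-attention before the output head."""
    def __init__(self, ch: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.head = nn.Conv2d(ch, 1, kernel_size=1)          # per-pixel micro-scratch score
    def forward(self, feats):
        # feats: list of (B, C, Hi, Wi) maps taken from different scales/views
        base = feats[0].shape[-2:]
        stacked = torch.stack([F.interpolate(f, size=base, mode="bilinear", align_corners=False)
                               for f in feats], dim=1)       # (B, V, C, H, W)
        b, v, c, h, w = stacked.shape
        tokens = stacked.permute(0, 3, 4, 1, 2).reshape(b * h * w, v, c)
        fused, _ = self.attn(tokens, tokens, tokens)         # self-attention across the V views
        fused = fused.mean(dim=1).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return torch.sigmoid(self.head(fused))
```

In practice the multi-view fusion would be applied to down-sampled feature maps, since treating every pixel of a 512 × 512 map as a token is memory-prohibitive; the sketch only illustrates the mechanism.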
S3, training by utilizing the data obtained by preprocessing to obtain a neural network model with good performance, wherein the method specifically comprises the following steps of:
s31, selecting N=32 samples from the preprocessed data set as a batch;
s32, cutting, scaling and splicing the data in each batch in the step S31 to construct batch data suitable for neural network training;
s33, setting the learning rate in self-supervised learning to 5 × 10⁻⁴ and using Adam as the neural-network parameter update algorithm;
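Steps S31-S33 map onto a few lines of standard PyTorch setup; in the sketch below only the batch size of 32, the learning rate of 5 × 10⁻⁴ and the choice of Adam come from the text, while the stand-in tensors and the use of the JointTwinAttention sketch above as the model are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in tensors; in the real pipeline these come from the preprocessed data set of S11-S13.
images = torch.randn(32, 3, 512, 512)
masks = torch.randint(0, 2, (32, 1, 512, 512)).float()
dataset = TensorDataset(images, masks)

# S31: N = 32 samples per batch; S32's cropping, scaling and splicing would normally be
# applied inside the Dataset or a custom collate_fn rather than on raw tensors as here.
loader = DataLoader(dataset, batch_size=32, shuffle=True, drop_last=True)

# S33: learning rate 5e-4 with Adam as the parameter-update algorithm.
model = JointTwinAttention(in_ch=3)  # illustrative model from the sketches above
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
```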
s34, the loss of the objective function in the training process comprises a pixel value loss and a weighted cross-entropy loss, and the calculation formula of the objective function loss is as follows:
L = L_wce + α · L_pix
where L denotes the objective function loss of the model, L_wce denotes the weighted cross-entropy loss, L_pix denotes the pixel value loss, and α is a hyper-parameter set to 1.5;
the calculation formula of the pixel value loss is as follows:
L_pix = ‖F(x) − x_GT‖²
where F(·) denotes the combined twin neural network, x is the input image, F(x) is the network output for that image, and x_GT denotes the carefully labelled binarized micro-scratch mask;
the calculation formula of the weighted cross-entropy loss is as follows:
L_wce = −(1/N) · Σ_{i=1..N} Σ_{c=1..C} w_c · y_{i,c} · log(p_{i,c})
where N denotes the total number of samples, C the number of classes, w_c the weight factor of class c, y_{i,c} the label indicating whether the i-th sample belongs to class c (0 for a negative sample, 1 for a positive sample), and p_{i,c} the predicted probability that the i-th sample belongs to class c;
s35, the parameters of the neural network are regularized with an L1 penalty during training, and the calculation formula is as follows:
L_reg = λ · Σ_i |θ_i|
where λ denotes the proportion of the L1 regularization term in the total loss and is set to 0.15, θ denotes the set of all neural-network parameters, and θ_i denotes the continually optimized individual parameters;
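Taken together, steps S34 and S35 amount to minimizing L_wce + α · L_pix plus an L1 penalty on the weights. The sketch below implements that combination and assumes the torch/F imports from the earlier sketches; the mean-squared form of the pixel term and the way the per-class weights w_c are supplied are assumptions about details the text leaves open.

```python
def weighted_cross_entropy(probs, onehot, class_weights):
    """L_wce = -(1/N) * sum_i sum_c w_c * y_ic * log(p_ic); probs and onehot are (N, C),
    class_weights is (C,) and up-weights the under-represented (micro-scratch) class."""
    eps = 1e-7
    return -(class_weights * onehot * torch.log(probs.clamp(min=eps))).sum(dim=1).mean()

def pixel_loss(pred_mask, gt_mask):
    """Pixel value loss between the network output F(x) and the binarized mask x_GT
    (a mean-squared pixel distance is assumed here)."""
    return F.mse_loss(pred_mask, gt_mask)

def l1_penalty(model, lam: float = 0.15):
    """S35: L1 regularization over all continually optimized parameters theta_i."""
    return lam * sum(p.abs().sum() for p in model.parameters())

def objective(pred_mask, gt_mask, probs, onehot, class_weights, model, alpha: float = 1.5):
    """S34 + S35: total loss = L_wce + alpha * L_pix + the L1 term."""
    return (weighted_cross_entropy(probs, onehot, class_weights)
            + alpha * pixel_loss(pred_mask, gt_mask)
            + l1_penalty(model))
```

Within a training loop this objective would be evaluated on each training batch, followed by loss.backward() and optimizer.step().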
and S36, the neural network is trained over multiple iterations, and the final algorithm performance is evaluated using the accuracy and recall of micro-scratch detection.
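For step S36, the accuracy and recall mentioned here can be computed from binary predictions against ground-truth labels; a minimal sketch is shown below, treating the micro-scratch class as the positive class and assuming the network output has already been thresholded into 0/1 predictions.

```python
def accuracy_and_recall(pred: torch.Tensor, target: torch.Tensor):
    """Accuracy and recall of micro-scratch detection; pred and target are binary tensors
    of the same shape (0 = no scratch, 1 = scratch)."""
    pred, target = pred.bool(), target.bool()
    tp = (pred & target).sum().item()          # scratches correctly detected
    fn = (~pred & target).sum().item()         # scratches missed
    accuracy = (pred == target).float().mean().item()
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return accuracy, recall
```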
The foregoing is merely illustrative of the embodiments of this invention and it will be appreciated by those skilled in the art that variations may be made without departing from the principles of the invention, and such modifications are intended to be within the scope of the invention as defined in the claims.
Claims (2)
1. A cabin micro scratch detection method based on a combined twin neural network, characterized in that the method comprises the following steps:
s1, preprocessing data;
s2, constructing a combined twin neural network;
s3, training by utilizing the data obtained by preprocessing to obtain a neural network model with good performance;
the overall structure of the neural network in the step S2 is a combined twin neural network for cabin micro scratch detection, and specifically comprises a residual structure module RM, a combined twin attention module and a multi-view self-attention mechanism module; the residual structure modules and the combined twin attention module are stacked together to form feature extractors and classifiers at different levels, each residual structure module contains a plurality of identical residual units, each residual unit is composed of stacked convolution layers and an activation function and is used for extracting and strengthening features, and each residual unit is connected with the previous residual unit to form a residual connection in the deep-learning sense; the combined twin attention module takes two residual structure modules as its inputs and uses an attention mechanism to focus finely on the critical areas of the image;
the residual structure module RM comprises an input layer for collecting local features and converting channels, a U-shaped structure layer for multi-scale coding analysis, and a fusion output layer; the left half of the U-shaped structure layer is an encoding structure that obtains multi-scale features through convolution and enlarges the receptive field by down-sampling, the right half is a decoding structure that restores the features to a high-resolution feature map through up-sampling, and the encoding and decoding structures are cascaded through a skip structure in the middle of the U-shaped structure layer;
the combined twin attention module applies an up-sampling channel-spatial attention mechanism to the residual structure module RM in the decoding stage, enhancing the model's ability to process the features of interest; the multi-view self-attention mechanism module applies self-attention to features of different scales at the output and fuses the features of the multiple view layers so as to make fuller use of the extracted features;
the step S3 specifically comprises the following steps:
s31, selecting N=32 samples from the preprocessed data set as a batch;
s32, cutting, scaling and splicing the data in each batch in the step S31 to construct batch data suitable for neural network training;
s33, setting the learning rate in self-supervised learning to 5 × 10⁻⁴ and using Adam as the neural-network parameter update algorithm;
s34, the loss of the objective function in the training process comprises a pixel value loss and a weighted cross-entropy loss, and the calculation formula of the objective function loss is as follows:
L = L_wce + α · L_pix
where L denotes the objective function loss of the model, L_wce denotes the weighted cross-entropy loss, L_pix denotes the pixel value loss, and α is a hyper-parameter set to 1.5;
the calculation formula of the pixel value loss is as follows:
L_pix = ‖F(x) − x_GT‖²
where F(x) denotes the output of the combined twin neural network for the input image x, and x_GT denotes the carefully labelled binarized micro-scratch mask;
the calculation formula of the weighted cross-entropy loss is as follows:
L_wce = −(1/N) · Σ_{i=1..N} Σ_{c=1..C} w_c · y_{i,c} · log(p_{i,c})
where N denotes the total number of samples, C the number of classes, w_c the weight factor of class c, y_{i,c} the label indicating whether the i-th sample belongs to class c, and p_{i,c} the predicted probability that the i-th sample belongs to class c;
s35, the parameters of the neural network are regularized with an L1 penalty during training, and the calculation formula is as follows:
L_reg = λ · Σ_i |θ_i|
where λ denotes the proportion of the L1 regularization term in the total loss and is set to 0.15, and θ_i denotes the continually optimized parameters of the neural network;
and S36, the neural network is trained over multiple iterations, and the final algorithm performance is evaluated using the accuracy and recall of micro-scratch detection.
2. The cabin micro scratch detection method based on the combined twin neural network as claimed in claim 1, wherein:
the step S1 of data preprocessing specifically comprises the following steps:
s11, acquiring a data set in an actual scene through an optical microscope, and artificially increasing the proportion of micro-scratch samples in the acquired data set so as to balance the positive and negative samples on the data side;
s12, resizing the images of the data set acquired in the step S11 to 512 × 512 by Lanczos interpolation;
and S13, rotating, translating, scaling and denoising the data set obtained in the expansion mode, increasing the number and diversity of samples, and improving the generalization capability of the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310477398.7A CN116612076B (en) | 2023-04-28 | 2023-04-28 | Cabin micro scratch detection method based on combined twin neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310477398.7A CN116612076B (en) | 2023-04-28 | 2023-04-28 | Cabin micro scratch detection method based on combined twin neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116612076A CN116612076A (en) | 2023-08-18 |
CN116612076B true CN116612076B (en) | 2024-01-30 |
Family
ID=87673812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310477398.7A Active CN116612076B (en) | 2023-04-28 | 2023-04-28 | Cabin micro scratch detection method based on combined twin neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116612076B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112017182A (en) * | 2020-10-22 | 2020-12-01 | 北京中鼎高科自动化技术有限公司 | Industrial-grade intelligent surface defect detection method |
WO2022099600A1 (en) * | 2020-11-13 | 2022-05-19 | Intel Corporation | Method and system of image hashing object detection for image processing |
CN112465790A (en) * | 2020-12-03 | 2021-03-09 | 天津大学 | Surface defect detection method based on multi-scale convolution and trilinear global attention |
CN112598658A (en) * | 2020-12-29 | 2021-04-02 | 哈尔滨工业大学芜湖机器人产业技术研究院 | Disease identification method based on lightweight twin convolutional neural network |
WO2022154471A1 (en) * | 2021-01-12 | 2022-07-21 | Samsung Electronics Co., Ltd. | Image processing method, image processing apparatus, electronic device and computer-readable storage medium |
CN112819762A (en) * | 2021-01-22 | 2021-05-18 | 南京邮电大学 | Pavement crack detection method based on pseudo-twin dense connection attention mechanism |
CN113065645A (en) * | 2021-04-30 | 2021-07-02 | 华为技术有限公司 | Twin attention network, image processing method and device |
CN113420662A (en) * | 2021-06-23 | 2021-09-21 | 西安电子科技大学 | Remote sensing image change detection method based on twin multi-scale difference feature fusion |
CN114418956A (en) * | 2021-12-24 | 2022-04-29 | 国网陕西省电力公司电力科学研究院 | Method and system for detecting change of key electrical equipment of transformer substation |
CN114708496A (en) * | 2022-03-10 | 2022-07-05 | 三峡大学 | Remote sensing change detection method based on improved spatial pooling pyramid |
CN115797694A (en) * | 2022-12-06 | 2023-03-14 | 哈尔滨工业大学 | Display panel microdefect classification method based on multi-scale twin neural network |
Non-Patent Citations (4)
Title |
---|
A Method for Classification of Surface Defect on Metal Workpieces Based on Twin Attention Mechanism Generative Adversarial Network; Jinghua Hu et al.; IEEE Sensors Journal; Vol. 21, No. 12; pp. 13430-13441 *
Research on Representation-Optimized Self-Supervised Medical Image Segmentation Based on Siamese Networks; Zhong Ying; China Master's Theses Full-text Database, Medicine and Health Sciences series, No. 1; pp. E060-62 *
A Classifier Based on an Improved Deep Siamese Network and Its Application; Shen Yan et al.; Computer Engineering and Applications, No. 10; pp. 24-30 *
Transformer Fault Diagnosis Based on Channel Attention and Residual Convolutional Neural Networks; Wang Chen'en et al.; Heilongjiang Electric Power, Vol. 44, No. 1; pp. 68-74 *
Also Published As
Publication number | Publication date |
---|---|
CN116612076A (en) | 2023-08-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||