
CN118939986A - Structural damage identification method based on two-dimensional vibration data and multi-class augmentation - Google Patents

Structural damage identification method based on two-dimensional vibration data and multi-class augmentation

Info

Publication number
CN118939986A
CN118939986A (application CN202411433194.4A)
Authority
CN
China
Prior art keywords
sample
network
augmentation
samples
category
Prior art date
Legal status
Granted
Application number
CN202411433194.4A
Other languages
Chinese (zh)
Other versions
CN118939986B (en)
Inventor
王宪
彭兆鑫
赵前程
刘楚梁
栗天帅
Current Assignee
Hunan University of Science and Technology
Original Assignee
Hunan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Hunan University of Science and Technology filed Critical Hunan University of Science and Technology
Priority to CN202411433194.4A priority Critical patent/CN118939986B/en
Publication of CN118939986A publication Critical patent/CN118939986A/en
Application granted granted Critical
Publication of CN118939986B publication Critical patent/CN118939986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract


The invention discloses a structural damage identification method based on two-dimensionalization and multi-category augmentation of vibration data, and relates to the technical field of damage identification algorithms. The identification method comprises the following steps: firstly, in order to make full use of the temporal and spatial correlation information between sensors, the signals of multiple sensors are constructed into a two-dimensional multi-channel time-frequency spectrum diagram as the input of a multi-scale feature extraction network GoogLeNet; then, in view of the problem that it is difficult to obtain a sufficient number of damage samples for training a deep learning network in actual engineering, an auxiliary discriminator generative adversarial network ADC-GAN is used to realize multi-category sample data augmentation; then, the proposed hybrid loss function is used to train the multi-scale feature extraction network GoogLeNet to improve the training effect of the backbone network; finally, GoogLeNet is used to classify the real-time input samples to obtain the evaluation result of the structural state.

Description

Structural damage identification method based on vibration data two-dimension and multi-category augmentation
Technical Field
The invention relates to the technical field of damage identification algorithms, in particular to a structural damage identification method based on vibration data two-dimension and multi-category augmentation.
Background
Structural damage identification is defined as accurately making qualitative and quantitative judgments about changes in the static and dynamic characteristics that are closely related to a structure's mechanical properties. During the service of large equipment and structures, the coupled effects of factors such as loading, impact, environmental erosion, and material aging inevitably lead to the accumulation of damage in the structure. Once the damage accumulates to a certain extent, if it is not recognized and handled in time, it will spread rapidly and result in a major accident.
Currently, structural damage identification methods are mainly based on strain signals, temperature signals, or vibration signals. Vibration-signal-based methods have stronger global and local damage identification capability and are the most widely applied. They can be divided into two main categories: signal-processing-based and machine-learning-based. Signal-processing-based methods, which include stochastic subspace identification, wavelet transformation, and blind modal identification, identify structural damage by comparing the dynamic characteristic parameters of the structure in different states, and were the main early approach. Machine-learning-based methods first extract damage-related features from the vibration signal and then feed those features to a classifier to realize damage identification. Common feature extraction techniques include prediction-residual and coefficient analysis of autoregressive models, principal component analysis, and wavelet transform analysis; classifiers used include Gaussian processes, support vector machines, and Bayesian learning. Machine-learning-based approaches offer advantages in many applications.
In recent years, deep learning has been increasingly used in the field of structural damage recognition. Unlike traditional machine learning, deep learning methods do not rely on manually selected damage features. Luo Yongpeng and Sharma used one-dimensional convolutional neural networks to extract damage features from acceleration signals and classify them; the damage-sample recognition accuracy reached 100% and more than 88.9%, respectively, on a small truss and a steel frame, showing good damage recognition performance. In engineering applications, multiple sensors are typically employed to collect vibration data so as to enable comprehensive structural condition monitoring. However, the features extracted by a one-dimensional convolutional neural network cannot fully characterize the information association between different sensors at the same time instant, so multi-signal information is under-utilized.
In order to use the information correlation between sensors, methods that construct one-dimensional vibration signals into images and perform damage recognition on those images have been developed. Liang Tao et al. converted multiple vibration signals into a multi-channel image using Markov transition fields, then extracted features and classified them with a two-dimensional convolutional neural network. In the simulated damage identification task on the IASC-ASCE steel frame, the overall sample identification accuracy of this method reaches 94.4%. However, the two-dimensional convolutional neural network structure and the loss function used in that study still leave room for improvement.
In engineering applications, structural damage sample data are relatively scarce, so convolutional neural networks often face the problem of insufficient training, which affects the final recognition accuracy to a certain extent. To address this, Luleci et al. used a Wasserstein generative adversarial network (WGAN) to generate damage samples and expand the sample library. The method was tested on the QUGS vibration test dataset published by Qatar University, whose research object is a small steel-frame grandstand structure. After training with the augmented sample library, the recognition accuracy of the convolutional neural network on the whole sample set reached 97%. However, WGAN is an unsupervised deep learning model that requires a separate model to be built for each class of generated samples, which makes training and sample generation relatively inefficient.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a structural damage identification method based on vibration data two-dimension and multi-category augmentation, which makes full use of the information of multiple sensors and augments scarce damage samples, thereby achieving high recognition accuracy with good practical value.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
The method first constructs the signals of multiple sensors into a two-dimensional multichannel time-frequency spectrogram according to the temporal and spatial correlation information between the sensors, and uses it as the input of a multi-scale feature extraction network; then, an auxiliary-discriminative-classifier generative adversarial network is used to augment multi-category sample data; then, the multi-scale feature extraction network is trained with a mixed loss function; finally, the multi-scale feature extraction network classifies the real-time input samples to obtain the evaluation result of the structural state.
The technical scheme is further improved as follows:
Preferably, the signals of the plurality of sensors are constructed as a two-dimensional multichannel time-frequency spectrogram, comprising the following steps:
S1-1: for the multiple vibration signals x_i (i = 1, 2, …, c) collected by c sensors, first apply a continuous wavelet transform to the vibration signal x_i to obtain the F × L time-frequency coefficient matrix T_i:

T_i(a, b) = |W_i(a, b)|    (1)

where W_i(a, b) is the wavelet transform coefficient of x_i, F is the number of frequency scales, L is the sequence length of x_i, and |·| denotes the modulus operation; W_i(a, b) is computed as

W_i(a, b) = (1/√a) ∫ x_i(t) ψ*((t − b)/a) dt    (2)

ψ(t) = π^(−1/4) exp(jω₀t) exp(−t²/2)    (3)

where ψ*(·) is the conjugate of the Gabor mother wavelet ψ(·), a is the scale factor of ψ, and b is the translation factor of ψ.

S1-2: map the numerical range of T_i to the gray-value interval [0, 255] and rescale the size to 224×224 pixels to obtain the time-frequency grayscale image G_i; then apply gamma correction to G_i to obtain G_i^γ:

G_i^γ = 255 · [(G_i − min(G_i)) / (max(G_i) − min(G_i))]^(γ_i)    (4)

where max(·) and min(·) denote the maximum and minimum operations and γ_i is the gamma coefficient of G_i, computed as

γ_i = γ₀ / E_i    (5)

where γ₀ is an adjustable gamma coefficient and E_i is the average of the image entropy and gray gradient of G_i.

S1-3: concatenate the gamma-corrected time-frequency grayscale images G_i^γ to obtain the multichannel time-frequency spectrogram; by adding a spatial (channel) dimension, the spatial information of the multiple sensor signals is correlated.
Preferably, the multi-category sample data augmentation with the auxiliary-discriminative-classifier generative adversarial network is specifically as follows:

a conditional generator G_c and a discriminative classifier D_c are introduced into the generative adversarial network to form the ADC-GAN; the optimization objective function V_ADC(G, D) of the ADC-GAN is

V_ADC(G, D) = V(G, D) + λ ( E_{(z,y)∼p(z,y)}[log D_c(y₊ | z)] + E_{(z,y)∼q(z,y)}[log D_c(y₋ | z)] )    (8)

where V(G, D) is the standard GAN objective, p(z, y) is the joint distribution of the real data and the label information, q(z, y) is the joint distribution of the generated data and the label information, and λ is an adjustable hyperparameter; D_c(y₊ | z) is the classification probability that the input sample z of the discriminative classifier belongs to the real-sample class y₊, and D_c(y₋ | z) is the classification probability that z belongs to the generated-sample class y₋; they are computed respectively as

D_c(y₊ | z) = exp(w_{y₊}ᵀ φ(z)) / Σ_{y′} exp(w_{y′}ᵀ φ(z))    (9)

D_c(y₋ | z) = exp(w_{y₋}ᵀ φ(z)) / Σ_{y′} exp(w_{y′}ᵀ φ(z))    (10)

where φ(·) is the feature extraction layer shared by the discriminative classifier and the discriminator, w_{y₊} and w_{y₋} are the fully connected layers that map the real-sample labels and the generated-sample labels, respectively, into learnable feature vectors, and the sum runs over all real- and generated-sample classes y′.
Preferably, the mixed loss function consists of three parts: the Softmax loss L_S, the center loss L_C, and the exclusive regularization loss L_ER. L_S guides the Softmax layer of the backbone network to form the classification decision boundary; the center loss L_C measures the distance of the high-dimensional embedded features and reduces the distance between intra-class sample features extracted by the backbone network; the exclusive regularization loss L_ER measures the distance between high-dimensional embedded feature clusters and increases the distance between inter-class sample features extracted by the backbone network.

Preferably, the mixed loss function L is computed as

L = L_S + λ₁ L_C + λ₂ L_ER    (11)

where x_i ∈ ℝ^d and y_i denote the input feature vector and the class of the i-th sample at the last fully connected layer of the backbone network of the structural state recognition model, and d is the feature dimension; W_j denotes the j-th column of the weight matrix W of that last fully connected layer and W_{y_i} its y_i-th column; n and m denote the number of classes and the number of samples in a mini-batch of training samples; c_{y_i} is the learnable class-center feature vector of the y_i-th class; ‖·‖₂ denotes the L2 norm and ‖·‖₁ the L1 norm; λ₁ and λ₂ are adjustable hyperparameters that balance L_S, L_C, and L_ER; k is the training round and N is an adjustable annealing hyperparameter used to anneal the exclusive regularization term over training.
Preferably, for training the multi-scale feature extraction network with the mixed loss function, the ADC-GAN is first trained with real damage samples, and high-quality pseudo samples are generated with the trained ADC-GAN and histogram specification; the pseudo samples are mixed with the real samples to build an augmented sample library; then, the structural state recognition model is trained with the augmented sample library and the mixed loss function.
Compared with the prior art, the structural damage identification method based on vibration data two-dimension and multi-category augmentation provided by the invention has the following advantages:
With the structural damage identification method based on vibration data two-dimension and multi-category augmentation, the signals of multiple sensors are first constructed into a two-dimensional multichannel time-frequency spectrogram and used as the input of the multi-scale feature extraction network GoogLeNet, in order to fully utilize the temporal and spatial correlation information between the sensors; then, to address the difficulty of obtaining enough damage samples to train a deep learning network in actual engineering, an auxiliary-discriminative-classifier generative adversarial network (ADC-GAN) is used to realize multi-class sample data augmentation; then, GoogLeNet is trained with the proposed mixed loss function, improving the training effect of the backbone network; finally, GoogLeNet classifies the real-time input samples to obtain the evaluation result of the structural state. The components of the method, namely the data augmentation model ADC-GAN, the structural state recognition model GoogLeNet, and the mixed loss function, all contribute to improving the sample recognition accuracy. Compared with a representative signal processing method and a recent deep learning method, the disclosed method achieves higher accuracy with a lower requirement on damage samples, and therefore has good practical value.
Drawings
Fig. 1 is an overall frame diagram in the present invention.
Fig. 2 is an overall structural diagram of a standard GAN.
FIG. 3 is an overall block diagram of the ADC-GAN of the present invention.
Fig. 4 is a training and monitoring flow chart in the present invention.
Fig. 5 shows the sensor placement and structural damage locations of the research object corresponding to the experimental data in the experimental verification of the present invention.
Fig. 6 (a) shows the natural frequencies of the truss tower from September 2020 to June 2021 in the experimental verification of the present invention.
Fig. 6 (b) shows the natural frequencies of the truss tower from May 2021 to July 2021 in the experimental verification of the present invention.
Detailed Description
The following describes specific embodiments of the present invention in detail. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
According to the structural damage identification method based on vibration data two-dimension and multi-category augmentation, the signals of multiple sensors are first constructed into a two-dimensional multichannel time-frequency spectrogram and used as the input of the multi-scale feature extraction network GoogLeNet, in order to fully utilize the temporal and spatial correlation information between the sensors; then, to address the difficulty of obtaining enough damage samples to train a deep learning network in actual engineering, an auxiliary-discriminative-classifier generative adversarial network (ADC-GAN) is used to realize multi-class sample data augmentation; then, GoogLeNet is trained with the proposed mixed loss function, improving the training effect of the backbone network; finally, GoogLeNet classifies the real-time input samples to obtain the evaluation result of the structural state.
The method for identifying the structural damage based on vibration data two-dimension and multi-category augmentation comprises two stages of online and offline, as shown in figure 1. In the off-line stage, firstly, a continuous wavelet transformation is adopted to convert the vibration signal of the sensor into a two-dimensional time-frequency spectrogram, gamma brightness correction is carried out on the time-frequency spectrogram to eliminate brightness difference caused by space position, and a multi-channel time-frequency spectrogram is constructed by splicing the time-frequency spectrograms and is used as the input of a structural state identification model. Then, training the data augmentation model and generating a structural damage pseudo-sample, and performing histogram specification on the pseudo-sample to improve the image quality of the pseudo-sample, thereby realizing multi-category sample data augmentation. And finally, adopting the proposed mixed loss function and the augmented sample library to train the structural state recognition model together, and improving the training effect of the structural state recognition model.
In the online stage, a multichannel time-frequency spectrogram is constructed through online monitoring data, and real-time input is classified by using a structural state recognition model trained in the offline stage, so that a structural state evaluation result is obtained.
The invention discloses a structural damage identification method based on vibration data two-dimension and multi-category augmentation, which comprises the following steps:
Step S1, constructing a multichannel time-frequency spectrogram
In order to fully utilize the temporal and spatial correlation information among the sensors, the one-dimensional time-series signals of multiple sensors are converted into a multichannel time-frequency spectrogram, which is used as the input of the subsequent structural state recognition model. The continuous wavelet transform can effectively capture the time-frequency characteristics of non-stationary, noisy vibration signals, so this embodiment adopts the continuous wavelet transform to obtain the time-frequency grayscale map of each vibration signal.
S1-1: for the multiple vibration signals x_i (i = 1, 2, …, c) collected by the c sensors, first apply a continuous wavelet transform to the vibration signal x_i to obtain the F × L time-frequency coefficient matrix T_i:

T_i(a, b) = |W_i(a, b)|    (1)

where W_i(a, b) is the wavelet transform coefficient of x_i, F is the number of frequency scales, L is the sequence length of x_i, and |·| denotes the modulus operation. W_i(a, b) is computed as

W_i(a, b) = (1/√a) ∫ x_i(t) ψ*((t − b)/a) dt    (2)

ψ(t) = π^(−1/4) exp(jω₀t) exp(−t²/2)    (3)

where ψ*(·) is the conjugate of the Gabor mother wavelet ψ(·), a is the scale factor of ψ, and b is the translation factor of ψ.

S1-2: to fit the input format of the structural state recognition model, the time-frequency coefficient matrix T_i is post-processed. The numerical range of T_i is mapped to the gray-value interval [0, 255] and the size is rescaled to 224×224 pixels, yielding the time-frequency grayscale image G_i. Then, to eliminate the brightness difference of the time-frequency spectrum caused by the spatial position of the sensor, gamma correction is applied to G_i to obtain G_i^γ:

G_i^γ = 255 · [(G_i − min(G_i)) / (max(G_i) − min(G_i))]^(γ_i)    (4)

where max(·) and min(·) denote the maximum and minimum operations and γ_i is the gamma coefficient of G_i, computed as

γ_i = γ₀ / E_i    (5)

where γ₀ is an adjustable gamma coefficient and E_i is the average of the image entropy and gray gradient of G_i.

In the present embodiment, γ₀ is set to 0.5, and E_i is computed as

E_i = (1/2) [ −Σ_k p_k log₂ p_k + (1/(H·W)) Σ_{n=1..H} Σ_{m=1..W} √(G_x(n, m)² + G_y(n, m)²) ]    (6)

where p_k is the probability of gray level k among the pixels of the image, H and W are the height and width of the image, and G_x(n, m) and G_y(n, m) are the horizontal and vertical gradients of the image at pixel coordinates (n, m).

S1-3: the gamma-corrected time-frequency grayscale images G_i^γ are concatenated to obtain the multichannel time-frequency spectrogram. By adding a spatial (channel) dimension, the spatial information of the multiple sensor signals is effectively correlated.
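As an illustration of steps S1-1 to S1-3, the following Python sketch builds a multichannel time-frequency spectrogram from several sensor signals: a continuous wavelet transform per channel, min-max mapping to [0, 255] with resizing to 224×224, entropy/gradient-based gamma correction, and channel stacking. It is a minimal sketch under stated assumptions, not the patented implementation: the PyWavelets complex-Morlet ("cmor", Gabor-type) wavelet, the skimage resize, the exact form of E_i, and all function names are illustrative choices.

```python
import numpy as np
import pywt
from skimage.transform import resize  # illustrative choice for the 224x224 resizing

def to_gray_spectrogram(signal, scales, wavelet="cmor1.5-1.0", fs=1651.0):
    """CWT magnitude mapped to a [0, 255] grayscale image of size 224x224 (S1-1, S1-2)."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    mag = np.abs(coeffs)                                   # |W_i(a, b)|, Eq. (1)
    gray = 255.0 * (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
    return resize(gray, (224, 224), preserve_range=True)

def gamma_correct(gray, gamma0=0.5):
    """Gamma correction of Eqs. (4)-(5); E_i taken as the mean of entropy and gradient (assumed)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    gx, gy = np.gradient(gray)
    mean_grad = np.mean(np.sqrt(gx ** 2 + gy ** 2))
    e_i = 0.5 * (entropy + mean_grad)
    gamma = gamma0 / max(e_i, 1e-12)                       # Eq. (5)
    norm = (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)
    return 255.0 * norm ** gamma                           # Eq. (4)

def build_multichannel_spectrogram(signals, scales):
    """Stack the per-sensor corrected spectrograms along a channel axis (S1-3)."""
    return np.stack([gamma_correct(to_gray_spectrogram(x, scales)) for x in signals], axis=0)

# Example: 9 sensors, 2048-sample window, 64 frequency scales -> array of shape (9, 224, 224)
# spec = build_multichannel_spectrogram(np.random.randn(9, 2048), np.arange(1, 65))
```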
Step S2, data augmentation
In practical engineering, the number of structural damage samples is usually limited, so the structural state recognition model cannot be sufficiently trained. To address this problem, the proposed method employs an improved generative adversarial network (GAN) to achieve multi-class sample data augmentation before the structural state recognition model is trained.
The standard GAN contains two neural networks, a generator G and a discriminator D; its overall structure is shown in Fig. 2. The generator's role is to generate realistic samples that fool the discriminator, while the discriminator's role is to accurately distinguish real samples from pseudo samples. The generator and the discriminator play a zero-sum game, and the generator performs best when the two reach a Nash equilibrium. The optimization objective function V(G, D) of the GAN is:
V(G, D) = E_{z∼p_r(z)}[log D(z)] + E_{z∼p_g(z)}[log(1 − D(z))]    (7)

where z is the input sample of the discriminator D, p_r(z) and p_g(z) are the marginal distributions of the real samples and the generated samples respectively, E_{z∼p_r(z)}[·] denotes the expectation over z under the real data distribution p_r(z), E_{z∼p_g(z)}[·] denotes the expectation over z under the generated data distribution p_g(z), and D(z) is the probability predicted by the discriminator D that z is real.
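For reference, Eq. (7) is the usual two-term GAN value function; the short PyTorch sketch below shows how the discriminator loss and the commonly used non-saturating generator loss corresponding to it are computed for one batch. D and G are assumed to be arbitrary discriminator and generator modules.

```python
import torch

def gan_losses(D, G, real, noise):
    """Discriminator and generator losses corresponding to the value function in Eq. (7)."""
    fake = G(noise)
    eps = 1e-8
    # Discriminator maximizes E[log D(z_real)] + E[log(1 - D(z_fake))] -> minimize the negative
    d_loss = -(torch.log(D(real) + eps).mean()
               + torch.log(1.0 - D(fake.detach()) + eps).mean())
    # Generator: non-saturating form commonly used in place of minimizing E[log(1 - D(G(z)))]
    g_loss = -torch.log(D(fake) + eps).mean()
    return d_loss, g_loss
```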
The standard GAN is an unsupervised deep learning model and suffers from problems such as unstable training and mode collapse in practice. Moreover, because it lacks supervision from class-label information, a standard GAN can only generate pseudo samples of a single class. To solve these problems, the ADC-GAN model is selected in this embodiment. The ADC-GAN improves the standard GAN by introducing a conditional generator G_c and a discriminative classifier D_c. The conditional generator G_c embeds the condition information into the random-noise input, enabling conditional generation of samples of a specified class and improving the controllability of the generated samples. The discriminative classifier D_c introduces a classification loss over both real and generated samples, so that the training of the generator is supervised by class labels and the diversity of the generated samples is improved.
The overall structure of the ADC-GAN is shown in Fig. 3. Compared with the optimization objective function of the standard GAN, the optimization objective function V_ADC(G, D) of the ADC-GAN adds the classification loss of D_c:

V_ADC(G, D) = V(G, D) + λ ( E_{(z,y)∼p(z,y)}[log D_c(y₊ | z)] + E_{(z,y)∼q(z,y)}[log D_c(y₋ | z)] )    (8)

where p(z, y) is the joint distribution of the real data and the label information, q(z, y) is the joint distribution of the generated data and the label information, and λ is an adjustable hyperparameter. D_c(y₊ | z) is the classification probability that the input sample z of the discriminative classifier belongs to the real-sample class y₊, and D_c(y₋ | z) is the classification probability that z belongs to the generated-sample class y₋; they are computed respectively as

D_c(y₊ | z) = exp(w_{y₊}ᵀ φ(z)) / Σ_{y′} exp(w_{y′}ᵀ φ(z))    (9)

D_c(y₋ | z) = exp(w_{y₋}ᵀ φ(z)) / Σ_{y′} exp(w_{y′}ᵀ φ(z))    (10)

where φ(·) is the feature extraction layer shared by the discriminative classifier and the discriminator, w_{y₊} and w_{y₋} are the fully connected layers that map the real-sample labels and the generated-sample labels, respectively, into learnable feature vectors, and the sum runs over all real- and generated-sample classes y′.
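Equations (9) and (10) amount to a single softmax over 2K label embeddings (K real-sample classes y₊ and K generated-sample classes y₋) computed on features shared with the discriminator. The PyTorch sketch below is an assumed minimal rendering of such a head and of the classification term added in Eq. (8); the module and argument names are illustrative, not the patent's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADCHead(nn.Module):
    """Discriminator D and discriminative classifier D_c sharing a feature extractor phi."""
    def __init__(self, shared_backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.phi = shared_backbone                      # shared feature extraction layer phi(z)
        self.adv = nn.Linear(feat_dim, 1)               # adversarial head D(z)
        # 2K label embeddings: indices 0..K-1 for real classes y+, K..2K-1 for generated classes y-
        self.cls = nn.Linear(feat_dim, 2 * num_classes, bias=False)
        self.num_classes = num_classes

    def forward(self, z):
        f = self.phi(z)
        return torch.sigmoid(self.adv(f)), F.log_softmax(self.cls(f), dim=1)

def adc_classification_loss(head: ADCHead, real, y_real, fake, y_fake):
    """Negative of the classification term added in Eq. (8), used as a loss to minimize."""
    _, logp_real = head(real)
    _, logp_fake = head(fake)
    loss_real = F.nll_loss(logp_real, y_real)                      # -log D_c(y+ | z), Eq. (9)
    loss_fake = F.nll_loss(logp_fake, y_fake + head.num_classes)   # -log D_c(y- | z), Eq. (10)
    return loss_real + loss_fake
```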
Step S3, structural state identification model
In order to fully mine the structural vibration information carried by the multichannel time-frequency spectrogram, this embodiment adopts a two-dimensional deep convolutional neural network to extract time-frequency and spatial feature information and thereby realize structural state identification.
The structural state recognition model consists of a backbone network and a Softmax classification layer. The backbone network progressively extracts low-level and high-level features from the input data, and this process directly determines the performance of structural state identification. It is therefore important to select an appropriate backbone network for the recognition model. In general, increasing the number of layers of the backbone network may improve its performance, but it also brings new problems such as overfitting, vanishing gradients, and exploding gradients. GoogLeNet is a backbone network developed by Google; by adopting the Inception architecture it introduces parallel processing, which reduces the number of parameters in the network and improves its training speed and accuracy. Compared with earlier backbone networks such as AlexNet and VGGNet, it has fewer parameters and faster inference; compared with the later ResNet, it better extracts multi-scale feature information from the input samples. This embodiment therefore adopts GoogLeNet as the backbone network of the structural state recognition model.
Softmax is a widely used loss function for training deep learning models. In the method provided by this embodiment, the structural state recognition model is a multi-class classification model, and the differences between damage-class samples are small. Under these conditions, a model trained only with the Softmax loss is susceptible to noise interference.
To improve the performance of the structural state recognition model, this embodiment proposes a hybrid loss function L, built on the Softmax function, as the loss function of the training process. L consists of three parts: the Softmax loss L_S, the center loss L_C, and the exclusive regularization loss L_ER. L_S is the traditional Softmax loss, responsible for guiding the Softmax layer of the backbone network to form the classification decision boundary. To be compatible with L_S, this embodiment sets the bias of the last fully connected layer of the backbone network to 0, so L_S does not take that bias into account. The center loss L_C measures the distance of the high-dimensional embedded features and reduces the distance between intra-class sample features extracted by the backbone network; the exclusive regularization loss L_ER measures the distance between high-dimensional embedded feature clusters and increases the distance between inter-class sample features extracted by the backbone network.

The calculation formula of L is:

L = L_S + λ₁ L_C + λ₂ L_ER    (11)

where x_i ∈ ℝ^d and y_i denote the input feature vector and the class of the i-th sample at the last fully connected layer of the backbone network of the structural state recognition model, and d is the feature dimension; W_j denotes the j-th column of the weight matrix W of that last fully connected layer and W_{y_i} its y_i-th column; n and m denote the number of classes and the number of samples in a mini-batch of training samples; c_{y_i} is the learnable class-center feature vector of the y_i-th class; ‖·‖₂ denotes the L2 norm and ‖·‖₁ the L1 norm; λ₁ and λ₂ are adjustable hyperparameters that balance L_S, L_C, and L_ER; k is the training round and N is an adjustable annealing hyperparameter used to anneal the exclusive regularization term over training.
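A possible PyTorch realization of the mixed loss in Eq. (11) is sketched below. The Softmax and center-loss terms follow their standard published forms; the exclusive-regularization term (pairwise L1 separation of the learnable class centers) and the annealing factor min(k/N, 1) are illustrative assumptions, since the text only states that L_ER separates feature clusters using an L1 norm and is annealed via the hyperparameter N over training rounds k. Defaults mirror the reported settings (λ₁ = 1, λ₂ = 2, N = 1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedLoss(nn.Module):
    """L = L_S + lambda1 * L_C + lambda2 * L_ER, Eq. (11); L_ER form and annealing are assumed."""
    def __init__(self, num_classes: int, feat_dim: int,
                 lambda1: float = 1.0, lambda2: float = 2.0, anneal_n: int = 1):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))  # class centers c_j
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))   # last FC weight W, bias fixed to 0
        self.lambda1, self.lambda2, self.anneal_n = lambda1, lambda2, anneal_n

    def forward(self, feats: torch.Tensor, labels: torch.Tensor, round_k: int) -> torch.Tensor:
        # L_S: Softmax cross-entropy over the bias-free last fully connected layer
        l_s = F.cross_entropy(feats @ self.weight.t(), labels)
        # L_C: center loss pulls each feature toward its class center (intra-class compactness)
        l_c = 0.5 * ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        # L_ER (assumed form): mean pairwise L1 distance between class centers, negated so that
        # minimizing the loss pushes the centers apart (inter-class separation)
        diff = self.centers.unsqueeze(0) - self.centers.unsqueeze(1)
        n = self.centers.shape[0]
        l_er = -diff.abs().sum() / (n * (n - 1))
        anneal = min(round_k / self.anneal_n, 1.0)      # assumed annealing over training rounds k
        return l_s + self.lambda1 * l_c + self.lambda2 * anneal * l_er
```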
Step S4, training and monitoring flow
The overall training flow of this embodiment is shown in Fig. 4. The multichannel time-frequency spectrograms are split in a ratio of 8:2 and used for the simulated offline stage and online stage, respectively.
In the offline simulation stage, the real damage samples are used to train the data augmentation model, and high-quality pseudo samples are generated with the trained model and histogram specification. The pseudo samples are mixed with the real samples to build an augmented sample library. Next, the structural state recognition model is trained with the augmented sample library and the mixed loss function. In the online stage, the input samples are classified by the trained structural state recognition model to complete structural state monitoring.
When training the data augmentation model, the ADC-GAN is used as the data augmentation model and equation (8) is used as its loss function. The parameter λ in equation (8) is set to 1. The generator, the discriminator, and the discriminative classifier of the ADC-GAN are all optimized with the Adam optimizer, with learning rates of 5×10⁻⁵, 2×10⁻⁴, and 2×10⁻⁴, respectively. The maximum number of training rounds for the ADC-GAN is set to 50, and the mini-batch size of each round is 8.
When training the structural state recognition model, GoogLeNet is used as the recognition model and equation (11) is used as its loss function. The parameters λ₁, λ₂, and N in equation (11) are set to 1, 2, and 1, respectively. GoogLeNet is optimized with the Adam optimizer with a learning rate of 2×10⁻⁴; the maximum number of training rounds is set to 50, and the mini-batch size of each round is 52.
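The hyperparameters above can be wired up roughly as follows (a sketch only: torchvision's GoogLeNet stands in for the backbone, the class count of 7 assumes the 6 damage categories plus the healthy state, and the ADC-GAN networks are placeholders).

```python
import torch
from torch import optim
from torchvision.models import googlenet

# Structural state recognition model: GoogLeNet backbone, Adam with lr = 2e-4,
# 50 training rounds, mini-batch 52; mixed-loss hyperparameters lambda1 = 1, lambda2 = 2, N = 1.
backbone = googlenet(weights=None, num_classes=7, aux_logits=False, init_weights=True)
opt_backbone = optim.Adam(backbone.parameters(), lr=2e-4)

# ADC-GAN data augmentation model: separate Adam optimizers for the generator,
# the discriminator and the discriminative classifier with the stated learning rates;
# 50 training rounds, mini-batch 8. The three modules themselves are not specified here.
def adc_gan_optimizers(generator: torch.nn.Module,
                       discriminator: torch.nn.Module,
                       classifier: torch.nn.Module):
    return (optim.Adam(generator.parameters(), lr=5e-5),
            optim.Adam(discriminator.parameters(), lr=2e-4),
            optim.Adam(classifier.parameters(), lr=2e-4))
```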
Experiment verification
(1) Data source and generation of multi-channel time-frequency spectrogram
The experimental data come from a publicly available structural vibration test dataset. The research object is a truss tower 9 m high; the accelerometer layout and the structural damage locations of the truss tower are shown in Fig. 5. Accelerometers ML1 to ML9 correspond to channel numbers 01x to 09x, and the sampling frequency of the accelerometers is 1651 Hz; DAM1 to DAM6 correspond to the structural damage locations.
The published data record the structural vibration responses of the truss tower under natural excitation from August 1, 2020 to July 31, 2021. These signals include vibration signals in the healthy state and in 6 different damage states. The experiment uses selected portions of the data collected from the accelerometers numbered 01x to 09x; see Table 1 for details of these data.
Table 1 experimental data details:
The one-dimensional vibration signals acquired by the multiple sensors are jointly processed and converted into multichannel time-frequency spectrograms, which serve as the input of the structural state recognition model. The specific process is as follows: (1) the selected experimental data (accelerometer channels 01x to 09x) are sliced without overlap using a sliding window of length 2048 (see the sketch after Table 2) to obtain signal samples x_n, where n = 1, …, 11592; (2) each signal sample is converted into a multichannel time-frequency spectrogram. The details of the multichannel time-frequency spectrograms are given in Table 2; there are 6 structural damage categories in total: DAM6 (S), DAM4 (S), DAM3 (S), DAM6 (M), DAM4 (M), and DAM3 (M).
Table 2 multichannel time-frequency spectrum details:
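The non-overlapping slicing into 2048-sample windows described in step (1) above could look like the following NumPy sketch; the (num_sensors, num_samples) array layout and the function name are assumptions.

```python
import numpy as np

def slice_windows(signals: np.ndarray, window: int = 2048) -> np.ndarray:
    """Non-overlapping slicing of multi-sensor records.

    signals: array of shape (num_sensors, num_samples), e.g. the 9 channels 01x-09x.
    Returns an array of shape (num_windows, num_sensors, window).
    """
    num_sensors, num_samples = signals.shape
    num_windows = num_samples // window                    # incomplete tail is discarded
    trimmed = signals[:, : num_windows * window]
    return trimmed.reshape(num_sensors, num_windows, window).transpose(1, 0, 2)

# Each (num_sensors, 2048) window is then converted into a multichannel time-frequency
# spectrogram, e.g. with build_multichannel_spectrogram() from the earlier sketch.
```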
(2) Experimental platform and model training details
The overall training flow is shown in fig. 4. At each cross-validation, 11592 samples were used in total, of which 9274 samples were used for the offline phase of the simulation and 2318 samples were used for the online phase of the simulation. The ratio of each damaged sample to the healthy state sample in the samples used in the two stages is kept consistent and is 1:6.
In the offline stage, the data augmentation model ADC-GAN is first trained using the real samples of all damage classes (4626 samples), and damage-class pseudo samples are generated. The pseudo samples and the real samples are mixed to form an augmented sample library (13900 samples), in which the ratio of each damage class to the healthy class is 2:6. The backbone of the data augmentation model ADC-GAN is implemented with the BigGAN backbone, and the hyperparameter λ of the optimization objective function is set to 1. In the ADC-GAN, the generator G_c, the discriminator D, and the discriminative classifier D_c are optimized with the Adam optimizer, with learning rates of 5×10⁻⁵, 2×10⁻⁴, and 2×10⁻⁴, respectively. The maximum number of training iterations of the ADC-GAN is set to 50, and the mini-batch size of each round is set to 8.
The structural state recognition model is then trained with the augmented sample library (13900 samples). It is trained with the mixed loss function, whose hyperparameters λ₁, λ₂, and N are set to 1, 2, and 1, respectively. The optimizer of the structural state recognition model is Adam with a learning rate of 2×10⁻⁴; the maximum number of training iterations is set to 50, and the mini-batch size of each round is set to 52.
(3) Overall performance assessment
The proposed method is compared with a representative signal processing method and a deep learning method using the evaluation indices accuracy, precision, recall, F1-score, false positive rate (FPR), and false negative rate (FNR), so as to assess its overall performance.
Five-fold cross-validation is used to evaluate the online sample recognition results; the accuracy of identifying whether the structure is damaged is shown in Table 3. As can be seen from Table 3, the accuracy and F1-score for identifying whether the structure is damaged are 98.10% and 97.91%, respectively.
Table 3 the method of the present invention was used to identify the accuracy (%) of the whole sample:
For the public dataset used in this experiment, a signal processing method represented by covariance-driven stochastic subspace identification is adopted to analyze the natural-frequency differences of the structure in different states; the results are shown in Fig. 6. As shown in Fig. 6 (a), the natural frequency under severe damage DAM6 (S) (gray) differs clearly from the natural frequencies in the healthy state (cyan and blue), so these two states can be effectively distinguished using the natural frequency as a criterion; however, the natural frequencies under severe damage DAM3 (S) and DAM4 (S) show no obvious difference from the healthy-state natural frequencies (blue, red, and green), and these three states are difficult to distinguish using the natural frequency as a criterion. The results for the data of the other period, shown in Fig. 6 (b), indicate that the natural frequencies of the three mild damage cases DAM6 (M), DAM4 (M), and DAM3 (M) (gray) are very close to the healthy-state natural frequencies (blue, red, and green) and cannot be effectively distinguished. In contrast, the proposed method can effectively identify both mild and severe structural damage.
A 3DS-CNN network that processes fused multi-vibration-signal features to realize structural damage identification is taken as a representative recent deep-learning-based structural damage identification method. In that experiment the ratio of each damage class to the healthy class is 1:1, and the overall recognition accuracy on the validation set is 81.43%, which allows effective structural state identification. The method provided by the invention, however, improves the overall recognition accuracy markedly, by 16.67 percentage points.
In summary, the method of the present invention exhibits superior performance compared with both the signal processing method based on covariance-driven stochastic subspace identification and the 3DS-CNN deep learning method. It not only effectively identifies multiple structural states but also achieves higher overall recognition accuracy. In addition, the experimental data sources of the proposed method and of the comparison methods are the same: vibration response signals of a complex structure under natural excitation. The proposed method therefore has practical value in engineering applications for handling the noisy and disturbed vibration signals produced by natural excitation.
The above embodiments are merely preferred embodiments of the present invention, and are not intended to limit the present invention in any way. While the invention has been described with reference to preferred embodiments, it is not intended to be limiting. Therefore, any simple modification, equivalent variation and modification of the above embodiments according to the technical substance of the present invention shall fall within the scope of the technical solution of the present invention.

Claims (5)

1. A structural damage identification method based on two-dimensionalization and multi-category augmentation of vibration data, characterized in that the identification method first constructs the signals of multiple sensors into a two-dimensional multichannel time-frequency spectrogram according to the temporal and spatial correlation information between the sensors, as the input of a multi-scale feature extraction network; then performs multi-category sample data augmentation with an auxiliary-discriminative-classifier generative adversarial network; then trains the multi-scale feature extraction network with a mixed loss function; and finally classifies the real-time input samples with the multi-scale feature extraction network to obtain the evaluation result of the structural state;
the signals of the multiple sensors are constructed into the two-dimensional multichannel time-frequency spectrogram by the following steps:
S1-1, for the multiple vibration signals x_i (i = 1, 2, …, c) collected by c sensors, first apply a continuous wavelet transform to the vibration signal x_i to obtain the F × L time-frequency coefficient matrix T_i:
T_i(a, b) = |W_i(a, b)|    (1)
where W_i(a, b) is the wavelet transform coefficient of x_i, F is the number of frequency scales, L is the sequence length of x_i, and |·| denotes the modulus operation; W_i(a, b) is computed as
W_i(a, b) = (1/√a) ∫ x_i(t) ψ*((t − b)/a) dt    (2)
ψ(t) = π^(−1/4) exp(jω₀t) exp(−t²/2)    (3)
where ψ*(·) is the conjugate of the Gabor mother wavelet ψ(·), a is the scale factor of ψ, and b is the translation factor of ψ;
S1-2, map the numerical range of T_i to the gray-value interval [0, 255] and rescale the size to 224×224 pixels to obtain the time-frequency grayscale image G_i; then apply gamma correction to G_i to obtain G_i^γ:
G_i^γ = 255 · [(G_i − min(G_i)) / (max(G_i) − min(G_i))]^(γ_i)    (4)
where max(·) and min(·) denote the maximum and minimum operations and γ_i is the gamma coefficient of G_i, computed as
γ_i = γ₀ / E_i    (5)
where γ₀ is an adjustable gamma coefficient and E_i is the average of the image entropy and gray gradient of G_i;
S1-3, concatenate the gamma-corrected time-frequency grayscale images G_i^γ to obtain the multichannel time-frequency spectrogram; by adding a spatial dimension, the spatial information of the multiple sensor signals is correlated.
2. The structural damage identification method based on two-dimensionalization and multi-category augmentation of vibration data according to claim 1, characterized in that the multi-category sample data augmentation with the auxiliary-discriminative-classifier generative adversarial network is specifically: a conditional generator G_c and a discriminative classifier D_c are introduced into the generative adversarial network to form the ADC-GAN; the optimization objective function V_ADC(G, D) of the ADC-GAN is:
V_ADC(G, D) = V(G, D) + λ ( E_{(z,y)∼p(z,y)}[log D_c(y₊ | z)] + E_{(z,y)∼q(z,y)}[log D_c(y₋ | z)] )    (8)
where V(G, D) is the standard GAN objective, p(z, y) is the joint distribution of the real data and the label information, q(z, y) is the joint distribution of the generated data and the label information, and λ is an adjustable hyperparameter; D_c(y₊ | z) is the classification probability that the input sample z of the discriminative classifier belongs to the real-sample class y₊, and D_c(y₋ | z) is the classification probability that z belongs to the generated-sample class y₋, computed respectively as
D_c(y₊ | z) = exp(w_{y₊}ᵀ φ(z)) / Σ_{y′} exp(w_{y′}ᵀ φ(z))    (9)
D_c(y₋ | z) = exp(w_{y₋}ᵀ φ(z)) / Σ_{y′} exp(w_{y′}ᵀ φ(z))    (10)
where φ(·) is the feature extraction layer shared by the discriminative classifier and the discriminator, w_{y₊} and w_{y₋} are the fully connected layers that map the real-sample labels and the generated-sample labels, respectively, into learnable feature vectors, and the sum runs over all real- and generated-sample classes y′.
3. The structural damage identification method based on two-dimensionalization and multi-category augmentation of vibration data according to claim 2, characterized in that the mixed loss function consists of three parts: the Softmax loss L_S, the center loss L_C, and the exclusive regularization loss L_ER; L_S guides the Softmax layer of the backbone network to form the classification decision boundary; the center loss L_C measures the distance of the high-dimensional embedded features and reduces the distance between intra-class sample features extracted by the backbone network; the exclusive regularization loss L_ER measures the distance between high-dimensional embedded feature clusters and increases the distance between inter-class sample features extracted by the backbone network.
4. The structural damage identification method based on two-dimensionalization and multi-category augmentation of vibration data according to claim 3, characterized in that the mixed loss function L is computed as:
L = L_S + λ₁ L_C + λ₂ L_ER    (11)
where x_i ∈ ℝ^d and y_i denote the input feature vector and the class of the i-th sample at the last fully connected layer of the backbone network of the structural state recognition model, and d is the feature dimension; W_j denotes the j-th column of the weight matrix W of that last fully connected layer and W_{y_i} its y_i-th column; n and m denote the number of classes and the number of samples in a mini-batch of training samples; c_{y_i} is the learnable class-center feature vector of the y_i-th class; ‖·‖₂ denotes the L2 norm and ‖·‖₁ the L1 norm; λ₁ and λ₂ are adjustable hyperparameters that balance L_S, L_C, and L_ER; k is the training round and N is an adjustable annealing hyperparameter used to anneal the exclusive regularization term over training.
5. The structural damage identification method based on two-dimensionalization and multi-category augmentation of vibration data according to claim 4, characterized in that, to train the multi-scale feature extraction network with the mixed loss function, the ADC-GAN is first trained with real damage-class samples, and high-quality pseudo samples are generated with the trained ADC-GAN and histogram specification; the pseudo samples are mixed with the real samples to build an augmented sample library; then, the structural state recognition model is trained with the augmented sample library and the mixed loss function.
CN202411433194.4A 2024-10-15 2024-10-15 Structural damage identification method based on vibration data two-dimension and multi-category augmentation Active CN118939986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411433194.4A CN118939986B (en) 2024-10-15 2024-10-15 Structural damage identification method based on vibration data two-dimension and multi-category augmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411433194.4A CN118939986B (en) 2024-10-15 2024-10-15 Structural damage identification method based on vibration data two-dimension and multi-category augmentation

Publications (2)

Publication Number Publication Date
CN118939986A true CN118939986A (en) 2024-11-12
CN118939986B CN118939986B (en) 2024-12-13

Family

ID=93352230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411433194.4A Active CN118939986B (en) 2024-10-15 2024-10-15 Structural damage identification method based on vibration data two-dimension and multi-category augmentation

Country Status (1)

Country Link
CN (1) CN118939986B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB202105706D0 (en) * 2021-04-21 2021-06-02 Wang Yanghua Method of, and apparatus for, geophysical investigation using seismic signal decomposition
CN115581467A (en) * 2022-02-28 2023-01-10 燧人(上海)医疗科技有限公司 A recognition method of SSVEP based on time, frequency and time-frequency domain analysis and deep learning
CN117290771A (en) * 2023-09-22 2023-12-26 南京航空航天大学 Rotary machine fault diagnosis method for generating countermeasure network based on improved auxiliary classification
CN118364280A (en) * 2024-04-02 2024-07-19 南京航空航天大学 Enhanced diagnosis method based on feature fusion convolutional neural network and maximum-minimum elimination algorithm

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB202105706D0 (en) * 2021-04-21 2021-06-02 Wang Yanghua Method of, and apparatus for, geophysical investigation using seismic signal decomposition
CN115581467A (en) * 2022-02-28 2023-01-10 燧人(上海)医疗科技有限公司 A recognition method of SSVEP based on time, frequency and time-frequency domain analysis and deep learning
CN117290771A (en) * 2023-09-22 2023-12-26 南京航空航天大学 Rotary machine fault diagnosis method for generating countermeasure network based on improved auxiliary classification
CN118364280A (en) * 2024-04-02 2024-07-19 南京航空航天大学 Enhanced diagnosis method based on feature fusion convolutional neural network and maximum-minimum elimination algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
许子非; 岳敏楠; 李春: "Fault diagnosis of rotating machinery based on manifold learning and neural networks" (基于流形学习与神经网络的旋转机械故障诊断), Journal of Engineering for Thermal Energy and Power (热能动力工程), no. 06, 23 July 2020 (2020-07-23) *
高狄; 段坤; 李星辰; 王宪: "Analysis of the process characteristics of an automated casting production line" (自动化铸造生产线工艺特点分析), Heavy Casting and Forging (大型铸锻件), no. 01, 25 January 2024 (2024-01-25) *

Also Published As

Publication number Publication date
CN118939986B (en) 2024-12-13

Similar Documents

Publication Publication Date Title
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
CN107977932B (en) A face image super-resolution reconstruction method based on discriminative attribute-constrained generative adversarial networks
CN111986142B (en) An unsupervised data enhancement method for hot-rolled coil surface defect images
CN113642634A (en) A shadow detection method based on mixed attention
CN112580590A (en) Finger vein identification method based on multi-semantic feature fusion network
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN104504366A (en) System and method for smiling face recognition based on optical flow features
CN111505705B (en) Microseism P wave first arrival pickup method and system based on capsule neural network
CN114463759A (en) Lightweight character detection method and device based on anchor-frame-free algorithm
CN115908842A (en) Transformer partial discharge data enhancement and identification method
CN114842238A (en) Embedded mammary gland ultrasonic image identification method
CN114821155A (en) Multi-label classification method and system based on deformable NTS-NET neural network
CN112560668B (en) Human behavior recognition method based on scene priori knowledge
CN116883393A (en) Metal surface defect detection method based on anchor frame-free target detection algorithm
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN118939986B (en) Structural damage identification method based on vibration data two-dimension and multi-category augmentation
CN113963421A (en) Dynamic sequence unconstrained expression recognition method based on hybrid feature augmentation network
CN117672202A (en) An environmental sound classification method based on deep convolutional generative adversarial network
CN117516939A (en) Bearing cross-working condition fault detection method and system based on improved EfficientNetV2
CN116958662A (en) Steel belt defect classification method based on convolutional neural network
CN113011370A (en) Multi-state face recognition method based on deep learning
CN115965883A (en) Smoke detection algorithm based on Transformer
CN116385732B (en) Target detection method based on improved spark R-CNN
CN118941833B (en) Skin disease identification method based on hierarchical feature memory learning
CN117611548B (en) Image quality evaluation method and system based on distortion information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant