CN111184512B - Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient - Google Patents
- Publication number
- CN111184512B CN201911394850.3A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1124—Determining motor skills
- A61B5/1125—Grasping motions of hands
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/389—Electromyography [EMG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4836—Diagnosis combined with treatment in closed-loop systems or methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Abstract
The invention discloses a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients. A non-negative matrix factorization model performs blind source separation on the electromyographic signal data, removing non-stationary muscle activation information and yielding a stable time-varying blind source separation result; applying the decomposed time-varying result to further pattern recognition improves recognition stability and accuracy. A CNN-RNN model lets the learned features retain both temporal and spatial characteristics. The CNN-RNN model processes the data directly, without manual feature extraction and screening; it automatically extracts features and completes classification, realizing end-to-end recognition of rehabilitation training actions. Combined with an attention layer, attention weighting is applied to the hidden states of the second of the two bidirectional GRU layers, so that data with a high contribution receive a larger weight and play a greater role, further improving classification accuracy.
Description
Technical Field
The invention belongs to the field of machine learning, and particularly relates to an upper limb and hand rehabilitation training action recognition method for a stroke patient.
Background
Rehabilitation training is an important treatment in rehabilitation medicine: targeted exercise training improves the motor dysfunction of the affected limb so that the patient's motor function recovers as far as possible. Among patients with limb dysfunction caused by stroke, 80% suffer upper-limb dysfunction; of these, only 30% regain upper-limb function and only 12% recover good hand function. Functional rehabilitation of the upper limbs and hands therefore has a profound influence on the quality of life and social participation of stroke patients. Recognition of rehabilitation training actions has wide application: clinically, the recognition result can serve as a control signal for assistive training equipment such as mechanical exoskeletons and prostheses; it can also serve as an input signal for serious-game rehabilitation training, such as virtual-reality rehabilitation training; and, in community or home rehabilitation, it can enable interactive training or help physicians monitor training remotely. Surface electromyography (sEMG) is non-invasive, easy to record, and rich in motor-control information, and is therefore commonly used to identify rehabilitation actions.
At present, machine-learning methods that recognize rehabilitation training actions from surface electromyographic signals comprise three main steps: signal preprocessing, feature extraction, and classification. First, the acquired raw electromyographic signal is preprocessed: noise is removed by notch filtering, band-pass filtering, full-wave rectification, and the like, and the electromyographic time series is segmented by a threshold method or manually to isolate the signal segment corresponding to each training action. In the feature-extraction step, a set of features must be selected manually and then extracted from the preprocessed signal. The features fall mainly into two types, time-domain and frequency-domain: time-domain features include the peak value, mean value, root-mean-square value, kurtosis, autoregressive coefficients, and so on, while frequency-domain features include the power spectrum, median frequency, centroid frequency, root-mean-square frequency, and so on. Finally, the extracted feature set and the corresponding class labels are used as input to train a classification model with a machine-learning algorithm. Classifiers commonly used for rehabilitation-action recognition include decision trees, Support Vector Machines (SVM), Linear Discriminant Classifiers (LDC), naive Bayes (NB) classifiers, and Gaussian Mixture Models (GMM). After model training, newly acquired electromyographic signals of training actions can be recognized after the same preprocessing and feature extraction.
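The hand-crafted features listed above can be sketched in a few lines. The helper names below are hypothetical, and only a few representative time-domain and frequency-domain features are shown:

```python
import numpy as np

def time_domain_features(x):
    """Illustrative time-domain features for one sEMG window (hypothetical helper)."""
    return {
        "mean_abs": np.mean(np.abs(x)),   # mean absolute value
        "rms": np.sqrt(np.mean(x ** 2)),  # root-mean-square value
        "peak": np.max(np.abs(x)),        # peak value
    }

def median_frequency(x, fs):
    """Frequency at which the cumulative power spectrum reaches half its total."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(spectrum)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

rng = np.random.default_rng(0)
window = rng.standard_normal(2000)        # one 1-second window at 2 kHz
feats = time_domain_features(window)
mf = median_frequency(window, fs=2000)
```

Each window would yield one such feature vector per channel, concatenated across channels before classification.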
At present, the mainstream recognition approach for rehabilitation training actions is manual feature design and extraction followed by classification with a machine-learning method.
Such models have limitations. The raw electromyographic signal is inherently non-stationary, and differences in patients' physical signs, in stroke injury, and in how completely actions are performed cause large differences in the electromyographic data, which affects recognition; this non-stationarity is difficult to eliminate with basic preprocessing such as filtering and rectification. Such feature engineering also usually requires expertise in specific domains, further increasing the cost of preprocessing. Recognition performance of existing models depends strongly on feature selection, since different features affect classification performance differently; manually designed and extracted features may be correlated, causing information redundancy; and for physiological time-series data, whose time-varying information is also important, manual feature extraction loses that information. Moreover, the machine-learning classifiers currently in use (SVM, LDC, GMM, etc.) distinguish poorly between the impaired upper-limb and hand motions of stroke patients and between similar motions, such as different finger movements.
Disclosure of Invention
To address these shortcomings of the prior art, the present method for recognizing upper-limb and hand rehabilitation training actions of stroke patients solves the information redundancy and loss of time-varying information caused by manual feature design and extraction, as well as the poor discrimination of machine-learning classifiers.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a stroke patient upper limb and hand rehabilitation training action recognition method comprises the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
s2, preprocessing electromyographic signal data;
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
s4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
and S5, repeating the steps S1-S3 on the newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into the trained CNN-RNN model to obtain the rehabilitation training action recognition category.
Further: step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
and S32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes.
Further: the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
s42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
s43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
S44, calculating the distance Loss_m between the probability values of the predicted action category and the real action category through cross entropy;
S45, judging whether the difference between the m-th value Loss_m and the (m-1)-th value Loss_{m-1} is smaller than a threshold: if so, the trained CNN-RNN model is obtained; otherwise, the weight and offset parameters of the CNN network model and the weight parameters of the RNN network model are updated by the batch stochastic gradient descent method, the iteration number m is increased by 1, and the procedure jumps to step S42.
Further: the CNN network model in step S41 includes: three convolutional layers, three pooling layers and three active layers.
Further: the input and output of the convolutional layer are calculated by the formula:
wherein,is the data of the ith input channel of the l-1 th convolutional layer,data of the jth output channel of the first convolutional layer, Ml-1The number of input channels of the l-1 th convolutional layer,for the l-th layer of the convolution kernel weights,the first layer of convolution layer is offset, l is more than or equal to 1 and less than or equal to 3; the data of the ith input channel of the 1 st convolutional layer is a blind source separation result matrix Hr×nThe ith row of data.
Further: the RNN network model in step S41 includes: two layers of bidirectional GRU layers, an attention layer and a full connection layer, wherein each layer of bidirectional GRU layer comprises a T′A GRU unit;
the GRU unit comprises an update gate and a reset gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the fully connected layer.
Further: the state update equation for the GRU unit is as follows:
wherein, the [ alpha ], [ beta ]]Representing the connection of two vectors,. representing the inner product of the vectors,. sigma.rTo reset the weight matrix of the gate, WzIn order to update the weight matrix for the gate,as a candidate setWeight matrix of xtIs a feature vector, htIs an implicit state at time t, ht-1Is an implicit state at time t-1.
Further: implicit state h of the second layer of the two bidirectional GRU layerstThe input into the attention layer for processing comprises the following steps:
a1 hidden state h of the second layer of two-layer bidirectional GRU layertInput into the attention layer;
a2 weight W of initial attention layerwAnd bias bw;
A3, weight W according to attention levelwAnd bias bwObtaining the hidden state h by tanh hyperbolic tangent activation functiontHidden layer representation of ut;
A4, randomly initializing a weight vector uwFor the hidden layer to represent utPerforming softmax standardization to obtain the attention weight alphat;
A5, implicit State htBy attention weight αtWeighting to obtain hidden state htIs expressed as a weighted attention representation qt。
Further: attention weighted representation qtThe process of inputting the full connection layer for processing comprises the following steps:
b1, representing attention by weighting qtInputting the full connection layer, and performing discrete processing to obtain attention weighting ok,C is the number of the neurons of the full connection layer;
b2 weighting attention okAnd carrying out random inactivation operation and classification operation by using softmax to obtain the probability value of the predicted action category.
Further: the probability value of the predicted action category in step B2 is calculated as:
wherein s iskIs the probability value of the predicted kth action.
The beneficial effects of the invention are as follows. The method for recognizing upper-limb and hand rehabilitation training actions of stroke patients uses a non-negative matrix factorization model to perform blind source separation on the electromyographic signal data, removing non-stationary muscle activation information and yielding a stable time-varying blind source separation result; applying the decomposed time-varying result to further pattern recognition improves recognition stability and accuracy. The CNN network model preserves the spatial characteristics of the blind-source-separation data, while the RNN network model fuses the feature data and supplies time-dimension information that aids discrimination of the current data; through the CNN-RNN model, the learned features retain both temporal and spatial characteristics. The CNN-RNN model processes the data directly, without manual feature extraction and screening; it automatically extracts features and completes classification, realizing end-to-end recognition of rehabilitation training actions. Combined with the attention layer, attention weighting is applied to the hidden states of the second of the two bidirectional GRU layers, so that data with a high contribution receive a larger weight and play a greater role, further improving classification accuracy.
Drawings
FIG. 1 is a flow chart of a method for recognizing rehabilitation training actions of upper limbs and hands of a stroke patient;
FIG. 2 is a schematic diagram of a CNN network model structure;
FIG. 3 is a diagram of a GRU unit architecture;
fig. 4 is a network model structure diagram of a part of the RNN network model.
Detailed Description
The following description of the embodiments is provided to facilitate understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of the embodiments: to those skilled in the art, various changes are apparent within the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
As shown in fig. 1, a method for recognizing rehabilitation training actions of upper limbs and hands of a stroke patient includes the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
Electrodes are placed on the subject's extensor muscle group, flexor muscle group, biceps brachii, triceps brachii, deltoid, thenar muscles, and hypothenar muscles, and eight channels of surface electromyography (sEMG) are collected at a sampling frequency of 2 kHz. The electrode placements are shown in Table 1.
TABLE 1 EMG ELECTRODE POSITION DESIGN
In this embodiment, 25 functional movements covering rehabilitation training of the upper arm, forearm, and hand are designed, as shown in Table 2. The subject sits relaxed in an armchair. For hand, wrist, and elbow motions, both arms rest in a fixed position on a table top; following video or voice instructions, the subject moves from rest to contraction and holds the posture for a total of 5 seconds, repeating each motion 6 times with 5 seconds of rest between motions. For shoulder movements, the subject sits upright on a chair with no obstruction in front and, following visual or audio instructions, moves from rest through muscle contraction to complete the movement and holds the position for 5 seconds, again repeating each movement 6 times with 5 seconds of rest in between. During the experiment, videos of each movement performed by a healthy person served as demonstrations guiding the subject to perform (or intend to perform) each movement.
TABLE 2 Experimental action design
S2, preprocessing the electromyographic signal data: power-line interference is removed with a 50 Hz notch filter, 20-450 Hz band-pass filtering eliminates motion artifacts (<20 Hz) and high-frequency noise (>450 Hz), and full-wave rectification is applied;
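A minimal sketch of this preprocessing chain using SciPy's standard filter-design routines. The patent does not specify filter orders or notch quality factor; the fourth-order Butterworth and Q = 30 below are assumptions:

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 2000  # sampling frequency in Hz, as in the acquisition setup

def preprocess_emg(raw):
    """50 Hz notch, 20-450 Hz band-pass, then full-wave rectification."""
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=FS)            # power-line interference
    x = filtfilt(b_n, a_n, raw)
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=FS)
    x = filtfilt(b_bp, a_bp, x)                             # artifacts and HF noise
    return np.abs(x)                                        # full-wave rectification

rng = np.random.default_rng(1)
raw = rng.standard_normal(4 * FS)     # 4 s of synthetic single-channel data
clean = preprocess_emg(raw)
```

Zero-phase `filtfilt` is used so the filtered signal stays aligned with the action segments; each of the eight channels would be processed the same way.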
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
and S32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes.
The non-negative matrix factorization model in step S32 is:
X_{m×n} = W_{m×r} × H_{r×n}

where X_{m×n} is the m×n electromyographic signal data matrix, m is the number of electrodes, n is the number of measured values, W_{m×r} is the m×r muscle activity matrix, and H_{r×n} is the r×n blind source separation result matrix.
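A minimal illustration of the factorization, using Lee-Seung multiplicative updates. The patent does not fix the solver, so this particular update rule is an assumption:

```python
import numpy as np

def nmf(X, r, iters=500, eps=1e-9, seed=0):
    """Factor non-negative X (m x n) into W (m x r) and H (r x n)
    with Lee-Seung multiplicative updates (one possible NMF solver)."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update sources, W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update activations, H fixed
    return W, H

# 8 electrode channels, 200 samples, decomposed into r = 4 sources
rng = np.random.default_rng(2)
X = np.abs(rng.standard_normal((8, 200)))      # rectified sEMG is non-negative
W, H = nmf(X, r=4)
```

The rows of H play the role of the blind source separation result fed to the CNN-RNN model; the rank r is a design choice.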
S4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
the CNN network model in step S41 includes: three convolutional layers, three pooling layers, and three active layers, as shown in figure 2.
The input and output of the convolutional layer are calculated by the formula:

x_j^l = f\left( \sum_{i=1}^{M_{l-1}} x_i^{l-1} * k_{ij}^{l} + b_j^{l} \right)

where x_i^{l-1} is the data of the i-th input channel of convolutional layer l-1, x_j^l is the data of the j-th output channel of convolutional layer l, M_{l-1} is the number of input channels, k_{ij}^l are the layer-l convolution kernel weights, b_j^l is the layer-l bias, f is the activation function, * denotes convolution, and 1 ≤ l ≤ 3; the data of the i-th input channel of the first convolutional layer is the i-th row of the blind source separation result matrix H_{r×n}.
S42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
Nonlinearity is introduced into the CNN network model through the activation layers, improving the fitting capability of the neural network.
The pooling layers reduce the dimensionality of the input data and the number of parameters or weights to be trained, thereby reducing computational cost and controlling overfitting.
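The convolution-plus-pooling step can be illustrated directly from the layer formula. The kernel sizes, channel counts, and ReLU choice below are assumptions for illustration:

```python
import numpy as np

def conv_layer(x_in, kernels, bias):
    """One 1-D convolutional layer matching the formula above.
    x_in: (M_in, n), kernels: (M_in, M_out, k), bias: (M_out,)."""
    M_in, M_out, k = kernels.shape
    n_out = x_in.shape[1] - k + 1                  # 'valid' convolution length
    out = np.zeros((M_out, n_out))
    for j in range(M_out):
        acc = np.zeros(n_out)
        for i in range(M_in):
            # reverse the kernel so np.convolve computes cross-correlation,
            # the usual CNN convention
            acc += np.convolve(x_in[i], kernels[i, j][::-1], mode="valid")
        out[j] = np.maximum(acc + bias[j], 0.0)    # f = ReLU activation
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling along the time axis."""
    n = (x.shape[1] // size) * size
    return x[:, :n].reshape(x.shape[0], n // size, size).max(axis=2)

rng = np.random.default_rng(3)
H = np.abs(rng.standard_normal((4, 100)))          # blind-source matrix (r=4, n=100)
k1 = rng.standard_normal((4, 8, 5)) * 0.1          # 4 -> 8 channels, kernel width 5
y = max_pool(conv_layer(H, k1, np.zeros(8)))
```

Stacking three such conv/activation/pool blocks and flattening the result yields the feature vectors passed to the RNN part.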
S43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
The RNN network model includes: two bidirectional GRU layers, an attention layer, and a fully connected layer, where each bidirectional GRU layer comprises T′ GRU units; the GRU unit comprises an update gate and a reset gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the fully connected layer, as shown in FIG. 3.
The state update equations of the GRU unit are as follows:

r_t = \sigma(W_r \cdot [h_{t-1}, x_t])
z_t = \sigma(W_z \cdot [h_{t-1}, x_t])
\tilde{h}_t = \tanh(W_{\tilde{h}} \cdot [r_t \odot h_{t-1}, x_t])
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t

where [·,·] denotes the concatenation of two vectors, · the matrix-vector product, ⊙ the element-wise product, σ the sigmoid function, W_r the weight matrix of the reset gate r_t, W_z the weight matrix of the update gate z_t, W_{\tilde{h}} the weight matrix of the candidate state \tilde{h}_t, x_t the feature vector at time t, h_t the hidden state at time t, and h_{t-1} the hidden state at time t-1.
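A single GRU update following these equations can be written out in NumPy. Bias terms and the bidirectional stacking are omitted for brevity, and the dimensions are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wz, Wr, Wh):
    """One GRU update; each weight matrix acts on the concatenation [h_{t-1}, x_t]."""
    hx = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ hx)                                       # update gate z_t
    r = sigmoid(Wr @ hx)                                       # reset gate r_t
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]))   # candidate state
    return (1.0 - z) * h_prev + z * h_cand                     # new hidden state h_t

d_x, d_h = 6, 8
rng = np.random.default_rng(4)
Wz, Wr, Wh = (rng.standard_normal((d_h, d_h + d_x)) * 0.1 for _ in range(3))
h = np.zeros(d_h)
for _ in range(5):                                             # unroll 5 time steps
    h = gru_step(rng.standard_normal(d_x), h, Wz, Wr, Wh)
```

A bidirectional layer would run a second GRU over the reversed sequence and concatenate the two hidden states at each step.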
As shown in fig. 4, inputting the hidden state h_t of the second of the two bidirectional GRU layers into the attention layer for processing comprises the following steps:
A1, inputting the hidden state h_t of the second bidirectional GRU layer into the attention layer;
A2, initializing the weight W_w and bias b_w of the attention layer;
A3, obtaining the hidden-layer representation u_t of the hidden state h_t from the weight W_w and bias b_w of the attention layer through the tanh hyperbolic tangent activation function;
A4, randomly initializing a weight vector u_w and performing softmax normalization on the hidden-layer representation u_t to obtain the attention weight α_t;
A5, weighting the hidden state h_t by the attention weight α_t to obtain the attention-weighted representation q_t of the hidden state h_t.
The attention-weighted representation q_t in step A5 is calculated as:

u_t = \tanh(W_w h_t + b_w)
\alpha_t = \frac{\exp(u_t^{\top} u_w)}{\sum_{t'=1}^{T'} \exp(u_{t'}^{\top} u_w)}
q_t = \alpha_t h_t

where the softmax normalization over the T′ time steps yields the attention weight α_t.
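The attention computation of steps A1-A5 can be sketched as follows; the matrix shapes are illustrative assumptions:

```python
import numpy as np

def attention_pool(H_states, W_w, b_w, u_w):
    """Attention weighting over GRU hidden states.
    H_states: (T, d) hidden states h_t of the second bidirectional GRU layer."""
    U = np.tanh(H_states @ W_w.T + b_w)        # hidden representations u_t
    scores = U @ u_w                            # relevance score of each time step
    alpha = np.exp(scores - scores.max())       # max-shift for numerical stability
    alpha /= alpha.sum()                        # softmax-normalized weights alpha_t
    Q = alpha[:, None] * H_states               # weighted representations q_t
    return Q, alpha

T, d = 10, 8
rng = np.random.default_rng(5)
Q, alpha = attention_pool(rng.standard_normal((T, d)),
                          rng.standard_normal((d, d)) * 0.1,
                          np.zeros(d),
                          rng.standard_normal(d))
```

Time steps whose hidden representation aligns with the learned context vector u_w receive larger α_t, so their data contribute more to the classification.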
Inputting the attention-weighted representation q_t into the fully connected layer for processing comprises the following steps:
B1, inputting the attention-weighted representation q_t into the fully connected layer and performing discretization to obtain the attention weights o_k, k = 1, …, C, where C is the number of neurons of the fully connected layer;
B2, performing a random inactivation (dropout) operation on the attention weights o_k and a classification operation with softmax to obtain the probability value of the predicted action category.
The probability value of the predicted action category in step B2 is calculated by the softmax function:

s_k = \frac{\exp(o_k)}{\sum_{j=1}^{C} \exp(o_j)}

where s_k is the probability value of the predicted k-th action.
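The fully connected layer plus softmax of step B2 reduces to a few lines; dropout is omitted, and C = 25 matches the number of designed actions while the input dimension is an assumption:

```python
import numpy as np

def classify(q_flat, W_fc, b_fc):
    """Fully connected layer followed by softmax: s_k = exp(o_k) / sum_j exp(o_j)."""
    o = W_fc @ q_flat + b_fc                  # attention-weighted features -> C logits
    e = np.exp(o - o.max())                   # max-shift for numerical stability
    return e / e.sum()                        # probability of each action category

C, d = 25, 80                                 # 25 rehabilitation actions
rng = np.random.default_rng(6)
s = classify(rng.standard_normal(d),
             rng.standard_normal((C, d)) * 0.1,
             np.zeros(C))
```

The predicted action category is simply the index of the largest s_k.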
S44, calculating the distance Loss_m between the probability values of the predicted action category and the real action category through cross entropy;
The distance Loss_m in step S44 is calculated as:

\mathrm{Loss}_m = -\frac{1}{\mathrm{Batch}} \sum_{n=1}^{\mathrm{Batch}} \sum_{k=1}^{C} y_{n,k} \log s_{n,k}

where Batch is the number of samples in the batch, n indexes the n-th piece of data, O_n = {s_1, s_2, …, s_C} is the predicted action-category probability distribution of the n-th piece of data, and y_n is the true action-category probability value (one-hot label) of the n-th piece of data.
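The batch-averaged cross-entropy can be checked numerically. Encoding the true label y_n as a one-hot vector is an assumption consistent with the formula:

```python
import numpy as np

def cross_entropy(pred, true_onehot, eps=1e-12):
    """Batch-averaged cross-entropy between predicted distributions O_n and
    one-hot true labels y_n; both arrays have shape (Batch, C)."""
    return -np.mean(np.sum(true_onehot * np.log(pred + eps), axis=1))

batch, C = 4, 25
rng = np.random.default_rng(7)
logits = rng.standard_normal((batch, C))
pred = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax rows
y = np.eye(C)[rng.integers(0, C, size=batch)]   # one-hot true action categories
loss = cross_entropy(pred, y)
```

The loss is zero only when every predicted distribution puts all its mass on the true category.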
S45, judging whether the difference between the m-th value Loss_m and the (m-1)-th value Loss_{m-1} is smaller than a threshold: if so, the trained CNN-RNN model is obtained; otherwise, the weight and offset parameters of the CNN network model and the weight parameters of the RNN network model are updated by the batch stochastic gradient descent method, the iteration number m is increased by 1, and the procedure jumps to step S42.
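The stopping rule of step S45 (iterate until successive losses differ by less than a threshold) can be demonstrated on a toy quadratic objective standing in for the network loss; everything here is a sketch, not the patent's actual network:

```python
import numpy as np

def train(loss_and_grad, theta, lr=0.1, threshold=1e-6, max_iter=10000):
    """Gradient descent with the S45-style stop: |Loss_m - Loss_{m-1}| < threshold."""
    prev = np.inf
    for m in range(max_iter):
        loss, grad = loss_and_grad(theta)
        if abs(prev - loss) < threshold:       # convergence check of step S45
            break
        theta = theta - lr * grad              # parameter update
        prev = loss
    return theta, loss, m

# toy objective: Loss(theta) = ||theta - target||^2
target = np.array([1.0, -2.0, 0.5])
theta, loss, m = train(lambda t: (np.sum((t - target) ** 2), 2 * (t - target)),
                       np.zeros(3))
```

In the patent's setting, `loss_and_grad` would be the cross-entropy of a mini-batch and the back-propagated gradients of the CNN-RNN parameters.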
And S5, repeating the steps S1-S3 on the newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into the trained CNN-RNN model to obtain the rehabilitation training action recognition category.
The invention has the following beneficial effects. The method for recognizing upper limb and hand rehabilitation training actions of stroke patients adopts a non-negative matrix decomposition model to perform blind source separation on electromyographic signal data, removing non-stationary muscle activation information and obtaining a stable time-varying blind source separation result. Applying the decomposed time-varying blind source separation result data to further pattern recognition improves recognition stability and accuracy. The CNN network model preserves the spatial characteristics of the blind source separation result data, while the RNN network model fuses the feature data and supplies time-dimension information that aids discrimination of the current data; through the CNN-RNN model, the learned features retain both temporal and spatial characteristics. The CNN-RNN model can process data directly, without manual feature extraction and screening: it automatically extracts features and completes classification, enabling end-to-end recognition and analysis of rehabilitation training actions. In combination with the attention layer, attention weighting is applied to the hidden state of the second of the two bidirectional GRU layers, so that data with a high degree of contribution receive larger weights and play a greater role, further improving classification accuracy.
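The blind source separation by non-negative matrix decomposition described above can be sketched with the classic Lee-Seung multiplicative updates; the specific update rule, iteration count, and toy EMG-like data below are assumptions of this sketch rather than details from the patent:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0, eps=1e-9):
    """Non-negative matrix factorisation V ≈ W H via multiplicative
    updates on the Frobenius objective (Lee-Seung style).
    V : (m, n) non-negative EMG data segment; r : number of sources."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update source matrix H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing matrix W
    return W, H

# Toy stand-in for a rectified multi-channel EMG segment (8 ch x 100 samples).
V = np.abs(np.random.default_rng(1).standard_normal((8, 100)))
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Here the rows of H play the role of the blind source separation result matrix H_{r×n} fed to the CNN in step S4.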
Claims (5)
1. A stroke patient upper limb and hand rehabilitation training action recognition method is characterized by comprising the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
s2, preprocessing electromyographic signal data;
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
s4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
s5, repeating the steps S1-S3 on newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into a trained CNN-RNN model to obtain rehabilitation training action recognition categories;
step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
s32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
the RNN network model in step S41 includes: two bidirectional GRU layers, an attention layer and a full connection layer, wherein each bidirectional GRU layer comprises T' GRU units;
the GRU unit comprises an updating gate and a resetting gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the full connection layer;
the state update equations of the GRU unit are as follows:
r_t = σ(W_r · [h_{t-1}, x_t])
z_t = σ(W_z · [h_{t-1}, x_t])
h̃_t = tanh(W_h̃ · [r_t ⊙ h_{t-1}, x_t])
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t
wherein [ , ] represents the concatenation of two vectors, ⊙ represents the element-wise product of vectors, σ is the sigmoid activation function, tanh is the hyperbolic tangent activation function, W_r is the weight matrix of the reset gate, W_z is the weight matrix of the update gate, W_h̃ is the weight matrix of the candidate state h̃_t, x_t is the feature vector, h_t is the hidden state at time t, and h_{t-1} is the hidden state at time t-1;
the CNN network model in step S41 includes: three convolutional layers, three pooling layers and three activation layers; each convolutional layer is connected to an activation layer and then to a pooling layer to form one group of network units, and the three groups of network units are connected in sequence to form the CNN network model;
s42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
s43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
S44, calculating the distance Loss_m between the probability values of the predicted action category and the real action category through cross entropy;
S45, judging whether the difference between the m-th loss value Loss_m and the (m-1)-th loss value Loss_{m-1} is smaller than a threshold; if so, the trained CNN-RNN model is obtained; otherwise, updating the weight parameters and bias parameters of the CNN network model and the weight parameters of the RNN network model by a mini-batch stochastic gradient descent method, adding 1 to the iteration number m, and jumping to step S42.
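For illustration, one GRU state update of the kind used in claim 1 can be sketched in NumPy; bias terms are omitted and the dimensions are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, W_r, W_z, W_h):
    """One GRU state update.
    h_prev : (d,) hidden state h_{t-1}; x_t : (k,) feature vector;
    each W_* has shape (d, d + k) and acts on the concatenation."""
    hx = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    r = sigmoid(W_r @ hx)                       # reset gate r_t
    z = sigmoid(W_z @ hx)                       # update gate z_t
    h_cand = np.tanh(W_h @ np.concatenate([r * h_prev, x_t]))  # candidate
    return (1.0 - z) * h_prev + z * h_cand      # h_t

d, k = 4, 3
rng = np.random.default_rng(2)
h = gru_step(np.zeros(d), rng.standard_normal(k),
             rng.standard_normal((d, d + k)),
             rng.standard_normal((d, d + k)),
             rng.standard_normal((d, d + k)))
```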
2. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein the input-output calculation formula of the convolutional layer is as follows:
x_j^l = Σ_{i=1}^{M_{l-1}} x_i^{l-1} * k_{i,j}^l + b_j^l
wherein x_i^{l-1} is the data of the i-th input channel of the (l-1)-th convolutional layer, x_j^l is the data of the j-th output channel of the l-th convolutional layer, M_{l-1} is the number of input channels of the (l-1)-th convolutional layer, k_{i,j}^l are the convolution kernel weights of the l-th layer, b_j^l is the bias of the l-th convolutional layer, and 1 ≤ l ≤ 3; the data of the i-th input channel of the 1st convolutional layer is the i-th row of data of the blind source separation result matrix H_{r×n}.
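A sketch of this multi-channel convolution in NumPy; the kernel width, channel counts, and 'valid' padding are illustrative assumptions, not parameters stated in the claim:

```python
import numpy as np

def conv_layer(X, K, b):
    """Multi-channel convolution per x_j^l = Σ_i x_i^{l-1} * k_{i,j}^l + b_j^l.
    X : (M, n)    M input channels of length n (rows of H_{r×n} for layer 1)
    K : (M, J, w) kernels for J output channels of width w
    b : (J,)      per-output-channel biases."""
    M, n = X.shape
    _, J, w = K.shape
    out = np.zeros((J, n - w + 1))
    for j in range(J):
        for i in range(M):
            out[j] += np.convolve(X[i], K[i, j], mode="valid")  # sum over i
        out[j] += b[j]
    return out

X = np.random.default_rng(3).random((4, 20))    # toy 4-channel input
K = np.random.default_rng(4).random((4, 2, 3))  # 2 output channels, width 3
Y = conv_layer(X, K, np.zeros(2))
```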
3. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein inputting the hidden state h_t of the second of the two bidirectional GRU layers into the attention layer for processing comprises the following steps:
A1, inputting the hidden state h_t of the second of the two bidirectional GRU layers into the attention layer;
A2, initializing the weight W_w and bias b_w of the attention layer;
A3, obtaining the hidden layer representation u_t of the hidden state h_t through the tanh hyperbolic tangent activation function according to the attention layer weight W_w and bias b_w;
A4, randomly initializing a weight vector u_w, and performing softmax normalization on the hidden layer representation u_t to obtain the attention weight α_t;
A5, weighting the hidden state h_t by the attention weight α_t to obtain the attention-weighted representation q_t of the hidden state h_t.
4. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein the process of inputting the attention-weighted representation q_t into the fully connected layer for processing comprises the following steps:
B1, inputting the attention-weighted representation q_t into the fully connected layer and performing discretization to obtain the attention weights o_k, k = 1, 2, ..., C, where C is the number of neurons of the fully connected layer;
B2, performing a random inactivation (dropout) operation on the attention weights o_k and a classification operation with softmax to obtain the probability values of the predicted action categories.
5. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 4, wherein the probability value of the predicted action category in step B2 is calculated by the formula:
s_k = exp(o_k) / Σ_{j=1}^{C} exp(o_j)
wherein s_k is the probability value of the predicted k-th action.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911394850.3A CN111184512B (en) | 2019-12-30 | 2019-12-30 | Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111184512A CN111184512A (en) | 2020-05-22 |
CN111184512B true CN111184512B (en) | 2021-06-01 |
Family
ID=70684400
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111184512B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111938660B (en) * | 2020-08-13 | 2022-04-12 | 电子科技大学 | Stroke patient hand rehabilitation training action recognition method based on array myoelectricity |
CN111950460B (en) * | 2020-08-13 | 2022-09-20 | 电子科技大学 | Muscle strength self-adaptive stroke patient hand rehabilitation training action recognition method |
CN112043269B (en) * | 2020-09-27 | 2021-10-19 | 中国科学技术大学 | Muscle space activation mode extraction and recognition method in gesture motion process |
CN114081513B (en) * | 2021-12-13 | 2023-04-07 | 苏州大学 | Electromyographic signal-based abnormal driving behavior detection method and system |
CN114649079A (en) * | 2022-03-25 | 2022-06-21 | 南京信息工程大学无锡研究院 | Prediction method of codec facing GCN and bidirectional GRU |
CN115281902A (en) * | 2022-07-05 | 2022-11-04 | 北京工业大学 | Myoelectric artificial limb control method based on fusion network |
CN116013548B (en) * | 2022-12-08 | 2024-04-09 | 广州视声健康科技有限公司 | Intelligent ward monitoring method and device based on computer vision |
CN115831368B (en) * | 2022-12-28 | 2023-06-16 | 东南大学附属中大医院 | Rehabilitation analysis management system based on cerebral imaging stroke patient data |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018119316A1 (en) * | 2016-12-21 | 2018-06-28 | Emory University | Methods and systems for determining abnormal cardiac activity |
CN108388348A (en) * | 2018-03-19 | 2018-08-10 | 浙江大学 | A kind of electromyography signal gesture identification method based on deep learning and attention mechanism |
CN109480838A (en) * | 2018-10-18 | 2019-03-19 | 北京理工大学 | A kind of continuous compound movement Intention Anticipation method of human body based on surface layer electromyography signal |
CN109924977A (en) * | 2019-03-21 | 2019-06-25 | 西安交通大学 | A kind of surface electromyogram signal classification method based on CNN and LSTM |
CN110337269A (en) * | 2016-07-25 | 2019-10-15 | 开创拉布斯公司 | Method and apparatus for inferring user intent based on neuromuscular signals |
CN110399846A (en) * | 2019-07-03 | 2019-11-01 | 北京航空航天大学 | A kind of gesture identification method based on multichannel electromyography signal correlation |
CN110610172A (en) * | 2019-09-25 | 2019-12-24 | 南京邮电大学 | Myoelectric gesture recognition method based on RNN-CNN architecture |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10878807B2 (en) * | 2015-12-01 | 2020-12-29 | Fluent.Ai Inc. | System and method for implementing a vocal user interface by combining a speech to text system and a speech to intent system |
KR101785500B1 (en) * | 2016-02-15 | 2017-10-16 | 인하대학교산학협력단 | A monophthong recognition method based on facial surface EMG signals by optimizing muscle mixing |
US10709390B2 (en) * | 2017-03-02 | 2020-07-14 | Logos Care, Inc. | Deep learning algorithms for heartbeats detection |
EP3697297A4 (en) * | 2017-10-19 | 2020-12-16 | Facebook Technologies, Inc. | Systems and methods for identifying biological structures associated with neuromuscular source signals |
CN109308459B (en) * | 2018-09-05 | 2022-06-24 | 南京大学 | Gesture estimation method based on finger attention model and key point topology model |
CN109359619A (en) * | 2018-10-31 | 2019-02-19 | 浙江工业大学之江学院 | A kind of high density surface EMG Signal Decomposition Based method based on convolution blind source separating |
CN109820525A (en) * | 2019-01-23 | 2019-05-31 | 五邑大学 | A kind of driving fatigue recognition methods based on CNN-LSTM deep learning model |
Non-Patent Citations (1)
Title |
---|
Research on sEMG Gesture Recognition Based on Deep Neural Networks; Zhang Longjiao et al.; Computer Engineering and Applications (《计算机工程与应用》); 2019-06-05 (No. 23); pp. 113-119 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111184512B (en) | Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient | |
Chen et al. | Hand gesture recognition based on surface electromyography using convolutional neural network with transfer learning method | |
CN110765920B (en) | Motor imagery classification method based on convolutional neural network | |
CN110238863B (en) | Lower limb rehabilitation robot control method and system based on electroencephalogram-electromyogram signals | |
CN107736894A (en) | A kind of electrocardiosignal Emotion identification method based on deep learning | |
CN103440498A (en) | Surface electromyogram signal identification method based on LDA algorithm | |
CN112120697A (en) | Muscle fatigue advanced prediction and classification method based on surface electromyographic signals | |
US12106204B2 (en) | Adaptive brain-computer interface decoding method based on multi-model dynamic integration | |
CN110013248A (en) | Brain electricity tensor mode identification technology and brain-machine interaction rehabilitation system | |
CN109498370B (en) | Lower limb joint angle prediction method based on electromyographic wavelet correlation dimension | |
Zheng et al. | Concurrent prediction of finger forces based on source separation and classification of neuron discharge information | |
CN113111831A (en) | Gesture recognition technology based on multi-mode information fusion | |
CN112043473A (en) | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb | |
Fan et al. | Robust neural decoding for dexterous control of robotic hand kinematics | |
CN114569143A (en) | Myoelectric gesture recognition method based on attention mechanism and multi-feature fusion | |
CN114548165A (en) | Electromyographic mode classification method capable of crossing users | |
CN110738093B (en) | Classification method based on improved small world echo state network electromyography | |
CN112998725A (en) | Rehabilitation method and system of brain-computer interface technology based on motion observation | |
CN110321856B (en) | Time-frequency multi-scale divergence CSP brain-computer interface method and device | |
CN115985463B (en) | Real-time muscle fatigue prediction method and system based on wearable equipment | |
CN110464517B (en) | Electromyographic signal identification method based on wavelet weighted arrangement entropy | |
Sun et al. | Classification of sEMG signals using integrated neural network with small sized training data | |
Zhou et al. | MI-EEG temporal information learning based on one-dimensional convolutional neural network | |
Murugan et al. | EMG signal classification using ANN and ANFIS for neuro-muscular disorders | |
CN115137375B (en) | Surface electromyographic signal classification method based on double-branch network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||