CN113855048A - Electroencephalogram signal visualization distinguishing method and system for autism spectrum disorder - Google Patents
- Publication number
- CN113855048A (application CN202111230491.5A)
- Authority
- CN
- China
- Prior art keywords
- electroencephalogram
- feature map
- map
- neural network
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- A—HUMAN NECESSITIES; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION; A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/369—Electroencephalography [EEG]
- A61B5/4076—Diagnosing or monitoring particular conditions of the nervous system
- A61B5/7203—Signal processing for noise prevention, reduction or removal
- A61B5/725—Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
- A61B5/7257—Details of waveform analysis characterised by using Fourier transforms
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data involving training the classification device
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06F2218/04—Denoising
- G06F2218/08—Feature extraction
- G06F2218/12—Classification; Matching
Abstract
The invention discloses an electroencephalogram signal visualization discrimination method and system for autism spectrum disorder, relating to the field of autism spectrum disorder discrimination. The method comprises: inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model; locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map output by each channel of the interception layer; calculating the value at each position of each feature map, and from these values calculating each feature map's importance weight for the discrimination result; linearly combining all feature maps weighted by their importance weights and inputting the result to a rectified linear (ReLU) activation function to obtain a class activation map; and normalizing the class activation map, upsampling it, and superimposing it on the electroencephalogram signal sample to form a three-dimensional image. The invention visualizes the discrimination result and improves its soundness.
Description
Technical Field
The invention relates to the field of discrimination of autism spectrum disorder, in particular to an electroencephalogram signal visual discrimination method and system for autism spectrum disorder.
Background
Autism spectrum disorder is a broad definition of autism that extends the core symptoms of typical autism; its symptoms include deficits in social interaction, impaired verbal communication, and repetitive, stereotyped behaviors.

China has a large number of patients with autism spectrum disorder (hereinafter, autism), so early screening and discrimination are extremely important. However, the current assessment and diagnosis of autism in China relies on expert interviews and behavioral scales, and remains subjective, coarse, and inefficient.

A traditional diagnostic approach is discrimination based on electroencephalogram signals. An electroencephalogram (EEG) is a record of the electrophysiological activity of the cerebral cortex; it is non-invasive, easy to acquire, and has high temporal resolution, and combined with feature engineering and traditional machine learning it can support automatic discrimination of autistic individuals. The advantages of this approach are low training complexity and relative independence from large data sets and computing power; the disadvantage is a strong reliance on prior knowledge and domain expertise.

With the continuous accumulation of big data and the continuous growth of computing power, deep learning techniques represented by convolutional and recurrent neural networks have in recent years been applied widely in computer vision, natural language processing and other fields, sometimes even surpassing human-level performance. Combining deep learning with electroencephalogram signals to achieve accurate, high-performance discrimination of autism has therefore become a hot topic in the brain health field. However, end-to-end deep neural network models usually have complex, nonlinear internal structures, so their discrimination results lack interpretability, which severely limits the practical application of the technique.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an electroencephalogram signal visualization discrimination method and system for autism spectrum disorder, which visualize the discrimination result and improve its soundness by computing a class activation map inside the deep learning model.

To achieve the above object, in a first aspect, an embodiment of the present invention provides an electroencephalogram signal visualization discrimination method for autism spectrum disorder, comprising:
inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, and denoting the class of the discrimination result output by the model as C;

locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map output by each channel of the interception layer, where the feature maps have size a × b and the number of channels is n;

calculating the value at each position of each feature map, and from these values calculating each feature map's importance weight α_k^C for class C;

linearly combining all feature maps weighted by their importance weights and inputting the result to a rectified linear (ReLU) activation function to obtain a class activation map;

and normalizing the class activation map, upsampling it to size a × b, and superimposing it on the electroencephalogram signal sample to form a three-dimensional image.
As an alternative embodiment, calculating the value at each position of each feature map, and from these values calculating the importance weight α_k^C of each feature map for class C, comprises:

denoting by A^k_(x,y) the value of the feature map of the k-th channel at position (x, y), and by y^C the score of the discrimination result class C, the importance weight α_k^C is:

α_k^C = (1/Z) · Σ_x Σ_y ∂y^C / ∂A^k_(x,y)

where Z is the number of points in the feature map.
As an optional embodiment, after all feature maps are linearly combined, each weighted by its importance weight, the result is input to a rectified linear (ReLU) activation function, and the class activation map is obtained by the formula:

L^C = ReLU( Σ_k α_k^C · A^k )
as an alternative embodiment, before the inputting of the sample data stream of the brain electrical signal to the pre-trained deep convolutional neural network model, the method comprises:
sampling an electroencephalogram signal of an eye opening in a resting state to obtain an electroencephalogram signal, and performing noise reduction through a down-sampling, band-pass filtering and artifact removing algorithm;
and cutting the electroencephalogram signals into electroencephalogram segments at preset time intervals, and obtaining sampling data of the electroencephalogram signals through discrete Fourier transform processing.
As an optional embodiment, the electroencephalogram segments are divided into a training set, a validation set and a test set in a set proportion, and the deep convolutional neural network model is trained on them.
As an optional embodiment, sampling the resting-state, eyes-open electroencephalogram signal and denoising it with downsampling, band-pass filtering and artifact-removal algorithms comprises:

sampling the electroencephalogram signal at 128 Hz;

filtering out non-EEG frequency bands with a band-pass filtering algorithm, retaining the 0.5-45 Hz band;

and removing artifacts, including ocular and electromyographic artifacts, from the electroencephalogram data.
As an alternative embodiment, the hidden layers of the deep convolutional neural network model include convolutional layers, average pooling layers, dropout layers, a flattening layer, and fully connected layers;

at the end of the network, the deep convolutional neural network model flattens the three-dimensional sample into one dimension and connects it to the fully connected layers.
As an alternative embodiment, the output layer of the deep convolutional neural network model is a fully connected layer with two nodes, and the probabilities that the sample belongs to an autistic or a normal individual are output through a softmax activation function.

As an alternative embodiment, the format of the input data of the deep convolutional neural network model is [1 × 8 × 182], where 1 is the model's channel dimension, 8 corresponds to the 8 EEG channels, and 182 is the data length of the sampled electroencephalogram data.
In a second aspect, an embodiment of the present invention provides an electroencephalogram signal visualization discrimination system for autism spectrum disorder, comprising:
a discrimination module, for inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, the class of the discrimination result output by the model being C;

an interception module, for locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map output by each channel of the interception layer, where the feature maps have size a × b and the number of channels is n;

a calculation module, for calculating the value at each position of each feature map, and from these values calculating each feature map's importance weight α_k^C for class C;

an activation module, for linearly combining all feature maps weighted by their importance weights and inputting the result to a rectified linear (ReLU) activation function to obtain a class activation map;

and an imaging module, for normalizing the class activation map, upsampling it to size a × b, and superimposing it on the electroencephalogram signal sample to form a three-dimensional image.
Compared with the prior art, the invention has the following advantages:

First, the feature information in the EEG data is displayed graphically, which addresses the lack of interpretability and decision basis when an end-to-end model such as a deep network is used for discrimination. In addition, by comparing feature contribution maps among different individuals, individual differences among autistic patients can be further explored.

Second, since only the feature maps inside the neural network need to be extracted, the class activation map can be obtained from an existing deep neural network discrimination model by combining the final feature maps with the score of the target class, without changing the model structure or retraining. The feature contribution map is thus generated together with the sample, and the process does not depend on extra computing resources or expert knowledge.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the steps of an electroencephalogram signal visualization discrimination method for autism spectrum disorders of the present invention;
FIG. 2 is an overall structure diagram of an electroencephalogram signal visualization discrimination method for autism spectrum disorders in accordance with the present invention;
FIG. 3 is a three-dimensional view of the result of the electroencephalogram signal visualization discrimination method for autism spectrum disorders of the present invention.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the invention provides an electroencephalogram signal visualization discrimination method and system for autism spectrum disorder, in which feature maps inside a deep neural network model are intercepted and linked to the output result to obtain each feature map's contribution to that result, forming a visualization graph. This visualizes the discrimination result and improves its soundness.
In order to achieve the technical effects, the general idea of the application is as follows:
inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, and denoting the class of the discrimination result output by the model as C;

locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map output by each channel of the interception layer, where the feature maps have size a × b and the number of channels is n;

calculating the value at each position of each feature map, and from these values calculating each feature map's importance weight α_k^C for class C;

linearly combining all feature maps weighted by their importance weights and inputting the result to a rectified linear (ReLU) activation function to obtain a class activation map;

and normalizing the class activation map, upsampling it to size a × b, and superimposing it on the electroencephalogram signal sample to form a three-dimensional image.
As described above, the conventional EEG discrimination pipeline is:

(1) Preprocessing: filter the raw acquired signal to retain the components of specific frequency bands and remove artifacts;

(2) Feature extraction: extract time-domain, frequency-domain, time-frequency and entropy features from the signal through signal processing and nonlinear analysis, forming a feature set;

(3) Feature selection: evaluate the importance of the features with filter- or wrapper-based feature selection algorithms, and screen out an effective feature subset;

(4) Autism discrimination: select a classifier or classifier ensemble and optimize its parameters using the labeled sample set; then use the optimized model to discriminate new unlabeled samples.
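To make step (2) of this classical pipeline concrete, the sketch below computes a simple set of per-channel frequency-band power features with numpy. The band boundaries, the periodogram estimate, and the 128 Hz rate of a 4-second segment are illustrative assumptions, not details taken from this passage:

```python
import numpy as np

FS = 128  # sampling rate, per the preprocessing described later
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # assumed band boundaries

def band_power_features(segment):
    """Per-channel band powers for one EEG segment.

    segment: array of shape (channels, samples).
    Returns a flat feature vector of length channels * len(BANDS).
    """
    n = segment.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(segment, axis=1)) ** 2 / n  # crude periodogram
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=1))  # band power per channel
    return np.concatenate(feats)

seg = np.random.default_rng(0).standard_normal((8, 512))  # 8 channels, 4 s at 128 Hz
features = band_power_features(seg)
print(features.shape)  # 8 channels x 5 bands
```

Such a feature vector would then feed the feature-selection and classification stages.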
However, such a solution does not tell the user what the decision is based on or how it is made, leading to a lack of interpretability and decision basis.

In this application, feature maps are captured from the deep neural network, an internal class activation map is formed by computing each feature map's contribution to the result, and a three-dimensional image is then displayed directly to the user, improving interpretability and the decision basis and facilitating further research.
In order to better understand the technical solution, the following detailed description is made with reference to specific embodiments.
The embodiment of the invention provides an electroencephalogram signal visualization discrimination method for autism spectrum disorder, comprising the following steps:

S1: inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, and denoting the class of the discrimination result output by the model as C;
the invention uses a deep convolution neural network model which can be an existing model or a newly trained model.
Optionally, the deep convolutional neural network model is constructed empirically. After a suitable optimizer, learning rate and number of training epochs are set, the model is trained with the training and validation sets, and its structure and hyperparameters are adjusted iteratively according to the validation accuracy until the performance reaches the expected level; the final performance is then measured on the test data set.

For example, all samples are randomly shuffled and divided into a training set, a validation set and a test set in a 3:1:1 ratio, the first two being used to train and tune the model and the last to test it.
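The random 3:1:1 split just described can be sketched as follows; the sample count, array shapes and fixed seed are illustrative:

```python
import numpy as np

def split_3_1_1(samples, labels, seed=42):
    """Randomly shuffle the samples and split them 3:1:1 into train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(len(idx) * 3 / 5)
    n_val = int(len(idx) * 1 / 5)
    train_i, val_i, test_i = np.split(idx, [n_train, n_train + n_val])
    return ((samples[train_i], labels[train_i]),
            (samples[val_i], labels[val_i]),
            (samples[test_i], labels[test_i]))

# 100 hypothetical segments in the [1 x 8 x 182] input format described below
X = np.random.default_rng(0).standard_normal((100, 1, 8, 182))
y = np.random.default_rng(1).integers(0, 2, size=100)
train, val, test = split_3_1_1(X, y)
print(train[0].shape, val[0].shape, test[0].shape)
```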
Further, the input of the deep convolutional neural network model is a three-dimensional sample, and the output is the probability that the sample belongs to an autistic or a normal individual. The hidden layers of the model include convolutional layers, average pooling layers, max pooling layers, dropout layers, a flattening layer, fully connected layers, and the like.

Optionally, the input of the deep convolutional neural network model is three-dimensional tensor EEG data of size [1 × 8 × 182], where 1 is the model's channel dimension, 8 corresponds to the 8 EEG channels, and 182 corresponds to the data length of the 0.5-45 Hz part after the discrete Fourier transform. The convolutional, max pooling and average pooling layers all use a two-dimensional structure (corresponding to [8 × 182]), and the kernel size and stride of the first dimension (corresponding to 8) are fixed to 1, so that the feature maps finally produced by the model still have size [8 × y] (y being the reduced size of the second dimension after convolution and pooling). At the end of the network, the three-dimensional sample is flattened into one dimension and connected to several fully connected layers. Different activation functions and dropout layers may be inserted between layers.

Further, the output layer of the model is a fully connected layer with two nodes followed by a softmax activation function, so that the model outputs the probabilities (summing to 100%) that the sample belongs to an autistic or a normal individual.
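The claim that the final feature map keeps the shape [8 × y] can be checked with a small size calculation. The kernel sizes and strides below are hypothetical, chosen only to show how the 182-point frequency dimension shrinks while the 8-channel dimension, processed with kernel 1 and stride 1, is preserved:

```python
def out_len(n_in, kernel, stride, padding=0):
    """Output length of a 1-D convolution or pooling: floor((n + 2p - k)/s) + 1."""
    return (n_in + 2 * padding - kernel) // stride + 1

# Hypothetical layer chain over the length-182 dimension; the 8-channel
# dimension uses kernel 1 / stride 1 throughout, so it stays 8.
length = 182
for name, k, s in [("conv1", 5, 1), ("pool1", 2, 2),
                   ("conv2", 5, 1), ("pool2", 2, 2)]:
    length = out_len(length, k, s)
    print(name, "->", length)

print("final feature map size:", (8, length))  # [8 x y]
```

With these illustrative settings y works out to 42; the real value depends on the actual layer configuration.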
S2: locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map output by each channel of the interception layer, where the feature maps have size a × b and the number of channels is n;

for example, for the current sample, find the last layer (a convolutional or pooling layer) before flattening in the neural network model, and obtain the feature maps [n × a × b] output by this layer (n being the number of feature maps, and a and b the size of each feature map).

S3: calculating the value at each position of each feature map, and from these values calculating each feature map's importance weight α_k^C for class C;
specifically, calculating the value at each position of each feature map, and from these values calculating the importance weight α_k^C of each feature map for class C, comprises:

denoting by A^k_(x,y) the value of the feature map of the k-th channel at position (x, y), and by y^C the score of the discrimination result class C, the importance weight α_k^C is:

α_k^C = (1/Z) · Σ_x Σ_y ∂y^C / ∂A^k_(x,y)

For example, for class C there is a class score y^C before the softmax layer. Letting A^k_(x,y) be the value of the k-th feature map A^k at position (x, y), the importance weight α_k^C of that feature map for C is computed by the formula above, where Z is the number of points in the feature map.
S4: linearly combining all feature maps weighted by their importance weights and inputting the result to a rectified linear (ReLU) activation function to obtain a class activation map;

that is, after all feature maps are linearly combined, each weighted by its importance weight, the combination is input to a rectified linear activation function, and the class activation map is obtained by:

L^C = ReLU( Σ_k α_k^C · A^k )

S5: normalizing the class activation map, upsampling it to size a × b, and superimposing it on the electroencephalogram signal sample to form a three-dimensional image.

For example, the class activation map is normalized and upsampled to size [a × b], then superimposed on the EEG sample as a color map (ColorMap) for display and analysis, forming a three-dimensional image.
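Steps S3-S5 can be sketched in numpy. To keep the example self-contained, a hypothetical linear classifier head stands in for the real network, so the gradient ∂y^C/∂A^k is available in closed form (it is simply row C of the weight matrix reshaped); in a trained model it would be obtained by backpropagation. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n, a, b = 4, 8, 42                           # n feature maps of size a x b (illustrative)
A = rng.standard_normal((n, a, b))           # intercepted feature maps
W = rng.standard_normal((2, n * a * b))      # hypothetical linear head -> 2 class scores

scores = W @ A.reshape(-1)                   # class scores y^0, y^1 before softmax
C = int(scores.argmax())                     # class of interest

# With y = W @ A.flatten(), dy^C/dA is row C of W reshaped to A's shape.
grad = W[C].reshape(n, a, b)

Z = a * b
alpha = grad.sum(axis=(1, 2)) / Z            # alpha_k^C = (1/Z) sum_xy dy^C/dA^k_(x,y)

# ReLU(sum_k alpha_k^C * A^k): contract alpha (n,) against A (n, a, b)
cam = np.maximum(0.0, np.tensordot(alpha, A, axes=1))

# Normalize to [0, 1] and upsample the second axis to the input length (182)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
t_in, t_out = np.linspace(0, 1, b), np.linspace(0, 1, 182)
heatmap = np.stack([np.interp(t_out, t_in, row) for row in cam])

print(heatmap.shape)  # (8, 182): ready to overlay on the EEG sample as a color map
```

Linear interpolation stands in here for whatever upsampling the real implementation uses; the resulting heatmap is what gets superimposed on the sample.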
By computing the class activation map inside the deep learning model, the method of the invention displays the contribution of each region of the EEG signal to the discrimination result, and the weight obtained for each region of the sample serves as the explanation and basis of that result. This addresses the lack of interpretability and decision basis when an end-to-end model such as a deep network is used for discrimination, and by comparing feature contribution maps among different individuals, individual differences among autistic patients can be further explored. Moreover, without changing the structure or retraining, the class activation map is obtained from the existing deep neural network discrimination model by combining the final feature maps with the score of the target class, so the feature contribution map is generated together with the sample, and the process does not rely on extra computing resources or expert knowledge.
Before inputting the sampled data stream of the electroencephalogram signal into the pre-trained deep convolutional neural network model, the method includes:

sampling a resting-state, eyes-open electroencephalogram signal, and denoising it with downsampling, band-pass filtering and artifact-removal algorithms;

and cutting the electroencephalogram signal into segments of a preset duration, and obtaining the sampled data of the electroencephalogram signal through discrete Fourier transform processing.
Preferably, the electroencephalogram segments are divided into a training set, a validation set and a test set in a set proportion, and the deep convolutional neural network model is trained on them.
Specifically, the data acquired by the EEG acquisition equipment is a raw signal of 3-5 minutes, which is processed as follows:

(1) downsampling, band-pass filtering and artifact-removal algorithms are used to reduce the influence of noise such as electrooculogram and electromyogram on subsequent analysis;

(2) the raw signal is divided into multiple time segments with a fixed time window; these segments serve as the minimum unit of autism discrimination, and the individual-level result is then generated by voting;

(3) a discrete Fourier transform is applied to each EEG segment, making the frequency-domain characteristics more prominent;

(4) the data are divided into a training set, a validation set and a test set in a certain proportion.
Optionally, sampling the resting-state, eyes-open electroencephalogram signal and denoising it with downsampling, band-pass filtering and artifact-removal algorithms comprises:

sampling the electroencephalogram signal at 128 Hz;

filtering out non-EEG frequency bands with a band-pass filtering algorithm, retaining the 0.5-45 Hz band;

and removing artifacts, including ocular and electromyographic artifacts, from the electroencephalogram data.
For example, the above-mentioned brain wave denoising process includes the following steps:
t1 electroencephalogram signal acquisition is usually carried out in a professional electroencephalogram acquisition room, data of a person in a resting or task state are recorded by attaching a sensor to the surface of the scalp of the brain of the person, and the data are transmitted to a computer in real time for storage.
T2 The subject is asked to remove glasses and earrings and to wear a head cap made of a special material.
T3 The electrodes corresponding to the eight channels, together with a reference electrode and a ground electrode, are passed through the head cap so that they adhere closely to the scalp, and a special conductive paste is applied at the contact points to reduce resistance.
T4 The subject is asked to remain still with eyes open and breathing steady for 3-5 minutes, and the electroencephalogram signals generated during this period are collected and stored together with the subject's information.
T5 The acquired electroencephalogram data are down-sampled to 128 Hz to improve noise robustness; non-electroencephalogram frequency bands are then filtered out with a band-pass filtering algorithm, retaining only the 0.5-45 Hz portion; finally, ocular, electromyographic, and similar artifacts are removed from the data.
T6 Each subject's complete electroencephalogram recording is divided into 4-second segments, which serve as the units of subsequent processing; in the later discrimination, the result obtained on more than half of an individual's segments is taken as that individual's result.
T7 A discrete Fourier transform is applied to each electroencephalogram segment, converting it from the time domain to the frequency domain so that its features become more prominent, and only the 0.5-45 Hz portion is again retained.
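Steps T6-T7 (fixed 4-second windowing, per-segment DFT restricted to 0.5-45 Hz, and majority voting) can be sketched as below. The function names are ours; note that with a 4 s window at 128 Hz the frequency resolution is 0.25 Hz, so the exact number of retained bins depends on how the 0.5 Hz and 45 Hz edges are rounded — the embodiment later states 182 points, which implies a slightly different edge convention than the inclusive one used here.

```python
import numpy as np

FS = 128          # sampling rate after down-sampling (Hz)
SEG_SECONDS = 4   # fixed time window


def segment_and_transform(eeg, fs=FS, seconds=SEG_SECONDS, band=(0.5, 45.0)):
    """Split a (channels x samples) recording into 4 s segments, apply a DFT
    to each segment, and keep only the 0.5-45 Hz magnitude bins.
    Trailing samples shorter than one window are dropped (see U1)."""
    win = fs * seconds                                  # 512 samples per segment
    n_seg = eeg.shape[1] // win
    segs = (eeg[:, : n_seg * win]
            .reshape(eeg.shape[0], n_seg, win)
            .transpose(1, 0, 2))                        # (n_seg, channels, win)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)            # 0.25 Hz resolution
    keep = (freqs >= band[0]) & (freqs <= band[1])
    return np.abs(np.fft.rfft(segs, axis=-1))[..., keep]


def vote(segment_predictions):
    """Individual-level result: the class assigned to more than half of
    the individual's segments (1 = ASD, 0 = typical, by convention here)."""
    return int(np.mean(segment_predictions) > 0.5)
```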
On the basis of the above embodiment, the present application also provides a specific embodiment, which includes the following steps:
u1 data acquisition:
Electroencephalogram acquisition is usually carried out in a dedicated acquisition room to ensure a quiet environment and a stable supply voltage, improving the quality of the acquired signal.
It should also be noted that subjects with excessively thick or stiff hair should be avoided where possible, and that head accessories should be removed and the head and hair cleaned before the cap is worn. The electrodes are attached to the subject's scalp at their designated positions, and a special conductive paste is applied at the contact points to reduce the contact resistance, so that the resistance at each position is below 100 kΩ (below 30 kΩ is optimal). During acquisition, the subject should keep the eyes open, remain still and silent, breathe steadily, and relax as much as possible, while avoiding the signal drift caused by sweating. Finally, the 3-5 minutes of collected data are stored together with the subject's information.
The acquired electroencephalogram data default to a 1000 Hz sampling rate, a precision far higher than the data resolution required by a typical deep neural network; to improve the network's noise robustness, the data are down-sampled to 128 Hz. A band-pass filtering algorithm is then used to remove the components below 0.5 Hz and above 45 Hz. Each subject's complete recording is divided into 4-second segments, which serve as the units of subsequent processing; any trailing portion shorter than 4 s is discarded. In the subsequent discrimination, the result obtained on more than half of an individual's segments is taken as that individual's result.
Further, segments containing noise or drift are identified and discarded. A discrete Fourier transform is applied to each electroencephalogram segment, converting it from the time domain to the frequency domain so that its features become more prominent, and again only the 0.5-45 Hz portion is retained. For convenience of subsequent processing, each sample is raised from two dimensions to three, giving a final sample size of [1 × 8 × 182], where 8 corresponds to the eight electroencephalogram channels, 182 to the length of the 0.5-45 Hz portion, and 1 to the channel dimension of the model, which does not correspond to the channel dimension of the electroencephalogram data. The 8 × 182 part is the two-dimensional sample seen by the model, i.e. 8 channels of data with 182 points each; the convolutional neural network is designed so that only the model's channel dimension changes in size while the data's channel dimension remains 8, so the two are deliberately kept distinct. Finally, all samples are randomly shuffled together with their labels (each label is [0, 1] or [1, 0], corresponding respectively to an autistic individual and a typically developing individual) and divided into training, validation, and test sets in a 3:1:1 ratio, the first two being used to train and tune the model and the last to test it.
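The sample assembly just described — adding the model channel dimension to reach [1 × 8 × 182], one-hot encoding the labels as [0, 1]/[1, 0], shuffling, and splitting 3:1:1 — can be sketched as follows (the function name and the integer label convention 1 = ASD, 0 = typical are our assumptions):

```python
import numpy as np


def build_dataset(samples, labels, seed=0):
    """samples: (N, 8, 182) per-segment spectra; labels: (N,) integers,
    1 = autistic individual, 0 = typically developing individual.
    Returns (train, val, test) pairs split 3:1:1 after shuffling."""
    x = samples[:, None, :, :]            # add model channel dim -> (N, 1, 8, 182)
    y = np.eye(2)[labels]                 # 1 -> [0, 1] (ASD), 0 -> [1, 0] (typical)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))         # random shuffle of samples and labels
    x, y = x[idx], y[idx]
    n = len(x)
    a, b = 3 * n // 5, 4 * n // 5         # 3:1:1 split boundaries
    return (x[:a], y[:a]), (x[a:b], y[a:b]), (x[b:], y[b:])
```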
U2 Use, or construction and training, of the model.
As shown in FIG. 2, the invention can use a trained deep convolutional neural network or rebuild and retrain the model. Preferably, the model takes [n × 1 × 8 × 182] samples and [n × 2] labels as input and outputs an array of length 2, each value between 0 and 1 and the two summing to 1, representing the probabilities of autism and of typical development respectively.
The specific model structure can be adjusted and tuned according to experience and validation accuracy, but preferably satisfies the following conditions:
(1) the main part (front and middle) of the model consists of alternating convolutional layers, maximum pooling layers, and average pooling layers, with a depth of 5 to 10 layers;
(2) based on the data size, each convolutional layer is configured with 8-64 convolution kernels; to ensure that the final feature maps still retain 8 channels, the window sizes of the convolutional and pooling layers are 1 × 2 to 1 × 5, the convolution stride is 1 × 1, and the pooling stride equals the pooling window size;
(3) after each convolutional/pooling layer, an activation function and a dropout layer are added; the former is preferably a ReLU activation, and the latter preferably uses a drop probability of 0.2-0.4;
(4) at the end of the front-middle part, a flattening layer reduces the data from three dimensions to one, so that fully connected layers can follow;
(5) the fully connected part is 2-3 layers deep, with the number of units decreasing layer by layer; the last layer has 2 units, matching the size of the result, and uses a softmax activation, while the remaining layers preferably use ReLU.
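For illustration, the alternating 1 × k convolution/pooling topology above can be sketched in plain NumPy. The channel counts, depth, and window sizes below are assumptions chosen within the preferred ranges; dropout, softmax-in-loss details, and training are omitted, and the weights are untrained. Note how every operation acts only along the frequency axis, so the height of 8 (the EEG channel axis) survives to the final feature maps, as condition (2) requires.

```python
import numpy as np


def conv1x3(x, w, b):
    """Valid convolution with 1x3 kernels along the frequency axis, plus ReLU.
    x: (n, c_in, 8, L); w: (c_out, c_in, 1, 3); b: (c_out,)."""
    L = x.shape[-1]
    windows = np.stack([x[..., i:L - 2 + i] for i in range(3)], axis=-1)
    y = np.einsum("nihlk,oik->nohl", windows, w[:, :, 0, :])
    return np.maximum(y + b[None, :, None, None], 0.0)


def pool1x2(x, kind="max"):
    """1x2 pooling with stride equal to the window size (condition (2))."""
    n, c, h, L = x.shape
    x = x[..., : L - L % 2].reshape(n, c, h, L // 2, 2)
    return x.max(-1) if kind == "max" else x.mean(-1)


def forward(x, params):
    """x: (n, 1, 8, 182). Returns the final feature maps (used later for the
    class activation map) and the softmax class probabilities."""
    x = conv1x3(x, *params["c1"]); x = pool1x2(x, "max")     # -> (n, 16, 8, 90)
    x = conv1x3(x, *params["c2"]); x = pool1x2(x, "avg")     # -> (n, 32, 8, 44)
    feats = conv1x3(x, *params["c3"])                        # -> (n, 32, 8, 42)
    flat = feats.reshape(feats.shape[0], -1)                 # flattening layer
    logits = flat @ params["w1"] + params["b1"]              # 2-unit output layer
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return feats, e / e.sum(-1, keepdims=True)               # softmax
```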
Further, the training parameters preferably use the following conditions:
(1) training in mini-batches is suggested to speed up training, with a batch size between 128 and 512 depending on the data set size;
(2) the loss function is categorical cross-entropy;
(3) the suggested optimizer is Adam, which dynamically adjusts the learning rate on its own;
(4) the suggested number of training epochs is 100, with early stopping triggered automatically by the validation error and the best parameters saved. After training and tuning are complete, the final performance of the model can be evaluated on the test data set.
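The suggested training regime — up to 100 epochs of mini-batch updates, early stopping on the validation error, and checkpointing of the best parameters — can be sketched framework-agnostically. `train_step` and `validate` are hypothetical callables standing in for one epoch of mini-batch Adam updates and a validation pass; the categorical cross-entropy loss is included for completeness.

```python
import numpy as np


def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over one-hot labels (condition (2))."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1))


def train(train_step, validate, epochs=100, patience=10):
    """Run up to `epochs` epochs; stop early when the validation loss has not
    improved for `patience` epochs, and return the best checkpoint."""
    best_loss, best_params, wait = np.inf, None, 0
    for _ in range(epochs):
        params = train_step()              # one epoch of mini-batch Adam updates
        val_loss = validate(params)
        if val_loss < best_loss - 1e-4:    # improvement: save checkpoint
            best_loss, best_params, wait = val_loss, params, 0
        else:
            wait += 1
            if wait >= patience:           # no improvement: stop early
                break
    return best_params, best_loss
```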
U3 Calculating the class activation map.
Find the layer immediately before the model's flattening layer — it may be a convolutional layer, a maximum pooling layer, or an average pooling layer — identify its name (automatically generated by the deep-learning framework), and take its output feature maps, of size [n × a × b], where n is the number of feature maps and a and b are the size of each map; here a is fixed at 8, while n and b are determined by the network structure and the hyper-parameters set above.
Find the class score y^c corresponding to the sample's predicted class c, and calculate the importance weight α_k^c of each feature map A^k with respect to c:

α_k^c = (1/Z) Σ_x Σ_y ∂y^c / ∂A^k_(x,y)

where A^k_(x,y) is the value of the k-th feature map A^k at position (x, y) and Z is the number of positions in the map. Then weight each feature map by its importance weight, combine them linearly, and pass the result through a ReLU activation function to obtain the class activation map:

L^c = ReLU( Σ_k α_k^c A^k )
The class activation map has the same size as a single feature map, [a × b].
To overlay it on the original sample, the class activation map should be up-sampled to the size of a single sample, [8 × 182], preferably by interpolation.
Each point in the resulting class activation map is the contribution of the feature at the corresponding position of the sample to the discrimination result c (range 0-1, unitless; higher values indicate a larger contribution).
As shown in FIG. 3, the two can be displayed together using a color map or the like.
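The U3 computation can be sketched as below. This assumes the deep-learning framework has already supplied the intercepted layer's feature maps A^k and the gradients ∂y^c/∂A^k (obtaining those gradients is framework-specific and omitted); the function name is ours.

```python
import numpy as np


def class_activation_map(feature_maps, grads, out_len=182):
    """Grad-CAM-style class activation map.
    feature_maps, grads: arrays of shape (k, 8, b) holding A^k and dy^c/dA^k.
    Returns an (8, out_len) map normalized to [0, 1]."""
    # alpha_k^c: global average of the gradients over each map's positions (1/Z sum).
    alpha = grads.mean(axis=(1, 2))
    # ReLU( sum_k alpha_k^c * A^k ) -> (8, b)
    cam = np.maximum(np.tensordot(alpha, feature_maps, axes=1), 0.0)
    # Normalize to the 0-1 contribution range described above.
    cam = (cam - cam.min()) / (np.ptp(cam) + 1e-12)
    # Up-sample along the frequency axis to the sample length by interpolation.
    b = cam.shape[1]
    xs = np.linspace(0, b - 1, out_len)
    return np.stack([np.interp(xs, np.arange(b), row) for row in cam])
```

The resulting (8 × 182) map aligns point-for-point with a sample and can be superimposed on it as a color overlay, as in FIG. 3.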
Based on the same inventive concept, the present application provides an electroencephalogram signal visualization distinguishing device for autism spectrum disorder, comprising:
a discrimination module for inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, the type of the discrimination result output by the model being denoted C;
an interception module for locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map of each channel output by that layer, the feature maps being of size a × b with n channels;
a calculation module for calculating the value at each position of each feature map and, from these values, the importance weight α_C of each feature map with respect to class C;
an activation module for weighting each feature map by its importance weight, combining them linearly, and inputting the result into a linear rectification (ReLU) activation function to obtain a class activation map;
and an imaging module for normalizing the class activation map, sampling it to size a × b, and superimposing it on the electroencephalogram sample to form a three-dimensional image.
The modifications and specific examples of the method in the foregoing embodiments are equally applicable to the device of this embodiment; given the detailed description of the method above, those skilled in the art will understand how the device is implemented, so the details are not repeated here for brevity.
Based on the same inventive concept, the present application provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the electroencephalogram signal visualization distinguishing method for autism spectrum disorder described above.
computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Based on the same inventive concept, the present application provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program running on the processor, and the processor executes the computer program to implement all or part of the method steps in the first embodiment.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it serves as the control center of the computer device and connects the parts of the whole device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the programs and/or modules stored in the memory and invoking the data stored there. The memory may mainly include a program storage area and a data storage area: the former may store the operating system and the application programs required by at least one function (such as a sound-playing or image-playing function), while the latter may store data created during use of the device (such as audio and video data). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
In general, the electroencephalogram signal visualization distinguishing method and system for autism spectrum disorder provided by the embodiments of the invention intercept feature maps from the deep neural network, form an internal class activation map by calculating the contribution of each feature map to the result, and directly present the resulting stereoscopic image to the user. Compared with conventional techniques, this addresses the lack of interpretability and of a discrimination basis when an end-to-end model such as a deep learning network is used for discrimination. In addition, individual differences among autistic patients can be further explored by comparing the feature contribution maps of different individuals.
Secondly, because only the feature maps in the neural network need to be extracted, the class activation map can be obtained in an existing deep neural network discrimination model by combining the final feature maps with the score of the target class, without additionally changing the structure or retraining; the feature contribution map is thus generated together with the sample, and the process depends on neither heavy computing resources nor expert knowledge.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An electroencephalogram signal visualization distinguishing method for autism spectrum disorder is characterized by comprising the following steps:
inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, the type of the discrimination result output by the deep convolutional neural network model being denoted C;
locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map of each channel output by that layer, the feature maps being of size a × b with n channels;
calculating the value at each position of each feature map and, from these values, the importance weight α_C of each feature map with respect to class C;
weighting each feature map by its importance weight, combining them linearly, and inputting the result into a linear rectification activation function to obtain a class activation map;
and normalizing the class activation map, sampling it to size a × b, and superimposing it on the electroencephalogram sample to form a three-dimensional image.
2. The electroencephalogram signal visualization distinguishing method for autism spectrum disorder according to claim 1, wherein calculating the value at each position of each feature map and, from these values, the importance weight α_C of each feature map with respect to class C comprises:
denoting by A^k_(x,y) the value of the feature map of the k-th channel at position (x, y), and by y^C the score of discrimination result class C, the importance weight α^C_k is:

α^C_k = (1/Z) Σ_x Σ_y ∂y^C / ∂A^k_(x,y)

wherein Z is the number of pixel points in the feature map.
3. The electroencephalogram signal visualization distinguishing method for autism spectrum disorder according to claim 2, wherein the feature maps weighted by their importance weights are linearly combined and input into a linear rectification activation function, the class activation map being obtained by the specific formula:

L^C = ReLU( Σ_k α^C_k A^k )
4. The electroencephalogram signal visualization distinguishing method for autism spectrum disorder according to claim 1, wherein before inputting the sampled data stream of the electroencephalogram signal into the pre-trained deep convolutional neural network model, the method comprises:
sampling the resting-state eyes-open electroencephalogram to obtain an electroencephalogram signal, and denoising it through down-sampling, band-pass filtering, and artifact-removal algorithms;
and cutting the electroencephalogram signal into electroencephalogram segments at preset time intervals, and obtaining the sampled data of the electroencephalogram signal through discrete Fourier transform processing.
5. The electroencephalogram signal visualization distinguishing method for autism spectrum disorder according to claim 4, wherein the electroencephalogram segments are divided into a training set, a validation set, and a test set in a set proportion for training the deep convolutional neural network model.
6. The electroencephalogram signal visualization distinguishing method for autism spectrum disorder according to claim 4, wherein the electroencephalogram signal obtained by sampling the resting-state eyes-open electroencephalogram is denoised through down-sampling, band-pass filtering, and artifact-removal algorithms, comprising:
down-sampling the electroencephalogram signal to 128 Hz;
filtering out non-electroencephalogram frequency bands with a band-pass filtering algorithm, retaining the 0.5-45 Hz band;
and removing artifacts from the electroencephalogram data, the artifacts including ocular and electromyographic artifacts.
7. The method for visually distinguishing the electroencephalogram signal of the autism spectrum disorder as set forth in claim 1, wherein:
the hidden layers of the deep convolutional neural network model comprise convolutional layers, average pooling layers, dropout layers, a flattening layer, and fully connected layers;
the deep convolutional neural network model flattens the three-dimensional sample into one dimension at the end of the network and connects it to the fully connected layers.
8. The method for visually distinguishing the electroencephalogram signal of the autism spectrum disorder as set forth in claim 1, wherein:
the output layer of the deep convolutional neural network model is a fully connected layer with 2 nodes, which outputs the probabilities that the sample belongs to an autistic or a typically developing individual through a softmax activation function.
9. The method for visually distinguishing the electroencephalogram signal of the autism spectrum disorder as set forth in claim 1, wherein:
the input data format of the deep convolutional neural network model is [1 × 8 × 182], where 1 is the channel dimension of the model, 8 corresponds to the 8 electroencephalogram channels, and 182 is the data length of the sampled electroencephalogram signal.
10. An electroencephalogram signal visualization distinguishing device for autism spectrum disorder, characterized by comprising:
a discrimination module for inputting a sampled data stream of an electroencephalogram signal into a pre-trained deep convolutional neural network model, the type of the discrimination result output by the model being denoted C;
an interception module for locating the last layer before flattening in the deep convolutional neural network model, setting it as the interception layer, and obtaining the feature map of each channel output by that layer, the feature maps being of size a × b with n channels;
a calculation module for calculating the value at each position of each feature map and, from these values, the importance weight α_C of each feature map with respect to class C;
an activation module for weighting each feature map by its importance weight, combining them linearly, and inputting the result into a linear rectification activation function to obtain a class activation map;
and an imaging module for normalizing the class activation map, sampling it to size a × b, and superimposing it on the electroencephalogram sample to form a three-dimensional image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111230491.5A CN113855048A (en) | 2021-10-22 | 2021-10-22 | Electroencephalogram signal visualization distinguishing method and system for autism spectrum disorder |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113855048A true CN113855048A (en) | 2021-12-31 |
Family
ID=78997099
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111230491.5A Withdrawn CN113855048A (en) | 2021-10-22 | 2021-10-22 | Electroencephalogram signal visualization distinguishing method and system for autism spectrum disorder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113855048A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018104751A1 (en) * | 2016-12-08 | 2018-06-14 | Lancaster University | Method and system for determining the presence of an autism spectrum disorder |
CN110009679A (en) * | 2019-02-28 | 2019-07-12 | 江南大学 | A kind of object localization method based on Analysis On Multi-scale Features convolutional neural networks |
CN112043473A (en) * | 2020-09-01 | 2020-12-08 | 西安交通大学 | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb |
CN113317797A (en) * | 2021-04-05 | 2021-08-31 | 宁波工程学院 | Interpretable arrhythmia diagnosis method combining medical field knowledge |
2021-10-22 — Application CN202111230491.5A filed; publication CN113855048A (en); status: not active (Withdrawn)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018104751A1 (en) * | 2016-12-08 | 2018-06-14 | Lancaster University | Method and system for determining the presence of an autism spectrum disorder |
CN110009679A (en) * | 2019-02-28 | 2019-07-12 | 江南大学 | A kind of object localization method based on Analysis On Multi-scale Features convolutional neural networks |
CN112043473A (en) * | 2020-09-01 | 2020-12-08 | 西安交通大学 | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb |
CN113317797A (en) * | 2021-04-05 | 2021-08-31 | 宁波工程学院 | Interpretable arrhythmia diagnosis method combining medical field knowledge |
Non-Patent Citations (1)
Title |
---|
HEYOU DONG et al.: "Subject sensitive EEG discrimination with fast reconstructable CNN driven by reinforcement learning: A case study of ASD evaluation", Neurocomputing * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107495962B (en) | | Automatic sleep staging method for single-lead electroencephalogram |
CN110353702A (en) | | Emotion recognition method and system based on a shallow convolutional neural network |
US20200237246A1 (en) | | Automatic recognition and classification method for electrocardiogram heartbeat based on artificial intelligence |
CN110353673B (en) | | Electroencephalogram channel selection method based on standard mutual information |
Hsu | | Single-trial motor imagery classification using asymmetry ratio, phase relation, wavelet-based fractal, and their selected combination |
CN114041795B (en) | | Emotion recognition method and system based on multi-modal physiological information and deep learning |
CN113768519B (en) | | Method for analyzing patient consciousness level based on deep learning and resting-state electroencephalogram data |
CN111920420A (en) | | Multi-modal patient behavior analysis and prediction system based on statistical learning |
CN114190944B (en) | | Robust emotion recognition method based on electroencephalogram signals |
CN113143293B (en) | | Continuous speech envelope neural entrainment extraction method based on electroencephalogram source imaging |
CN115281685A (en) | | Sleep stage identification method and device based on anomaly detection, and computer-readable storage medium |
CN117918863A (en) | | Method and system for real-time artifact removal and feature extraction from electroencephalogram signals |
CN112450949A (en) | | Electroencephalogram signal processing method and system for cognitive rehabilitation training |
CN113576498B (en) | | Visual and auditory aesthetic evaluation method and system based on electroencephalogram signals |
CN115969369A (en) | | Brain task load identification method, application and equipment |
Dang et al. | | Motor imagery EEG recognition based on generative and discriminative adversarial learning framework and hybrid scale convolutional neural network |
CN113855048A (en) | | Electroencephalogram signal visualization distinguishing method and system for autism spectrum disorder |
Sutharsan et al. | | Electroencephalogram signal processing with independent component analysis and cognitive stress classification using convolutional neural networks |
CN111671421A (en) | | Electroencephalogram-based method for sensing children's needs |
CN116421200A (en) | | Electroencephalogram emotion analysis method using a multi-task hybrid model based on parallel training |
CN116386845A (en) | | Schizophrenia diagnosis system based on convolutional neural network and facial dynamic video |
CN112220482B (en) | | Method and electronic device for detecting and eliminating magnetoencephalogram eye-movement artifacts based on a neural network |
CN115670480A (en) | | Biological characteristic analysis method and device, electronic equipment and storage medium |
CN114795247A (en) | | Electroencephalogram signal analysis method and device, electronic equipment and storage medium |
CN114970641A (en) | | Emotion category identification method and device, processor and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20211231 ||