CN117503065A - Method and system for evaluating anesthesia depth - Google Patents
- Publication number: CN117503065A (application CN202311703314.3A)
- Authority
- CN
- China
- Legal status: Pending (the status is an assumption by Google Patents, not a legal conclusion)
Classifications
- A61B5/4821 — Determining level or depth of anaesthesia
- A61B5/369 — Electroencephalography [EEG]
- A61B5/372 — Analysis of electroencephalograms
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The invention discloses a method and system for evaluating anesthesia depth. The method comprises the following steps: establishing an anesthesia electroencephalogram (EEG) data set and training a convolutional neural network model for anesthesia depth recognition with deep learning; collecting the EEG signal of the patient to be monitored and converting the one-dimensional EEG waveform into a two-dimensional EEG time-frequency image; and inputting the image into the anesthesia state recognition model to obtain a judgment of the anesthesia state together with the corresponding prediction probability. The patient's anesthesia state over the whole intraoperative period is visualized in the form of an "anesthesia depth staging chart". Because the invention is based on deep learning over a large sample, the anesthesia depth prediction is more accurate. Compared with the various existing 0-100 numerical anesthesia depth monitors, the proposed anesthesia depth staging chart displays the patient's anesthesia state more stably. The invention further divides the awake state into two subtypes, awake without blinking and awake with blinking, which avoids the interference of blink-generated electrooculogram artifacts with the EEG signal and the spurious large fluctuations of the anesthesia depth index that such interference causes.
Description
Technical Field
The invention relates to monitoring of anesthesia depth. Its improvements are: 1. Compared with the traditional approach of extracting EEG features and performing statistical regression, anesthesia depth prediction based on large-sample deep-learning image recognition is more accurate. 2. Compared with the various existing 0-100 numerical anesthesia depth monitors, the proposed anesthesia depth staging chart displays the patient's anesthesia state more stably. 3. The awake state is divided into two subtypes, awake without blinking and awake with blinking, which avoids the interference of blink-generated electrooculogram artifacts with the EEG signal and the spurious large fluctuations of the anesthesia depth index that such interference causes.
Background
Anesthesia is a prerequisite of modern surgery. Evaluating a patient's depth of anesthesia during the operation and dynamically adjusting the anesthetic infusion rate so that the patient remains at a proper depth throughout the procedure is challenging for the anesthesiologist, chiefly because individual responses to anesthetics vary widely. Anesthesia that is too shallow increases the patient's stress and inflammatory responses, worsens prognosis, and may even produce intraoperative awareness (the patient suddenly regains consciousness during surgery and retains a clear memory afterwards), causing pain and lasting psychological trauma. Anesthesia that is too deep is more common and its effects are more far-reaching: it can delay postoperative awakening (prolonged failure to wake), cause postoperative delirium (confusion after surgery), and even produce postoperative cognitive dysfunction (dementia).
Anesthetics act on the brain and change its state, altering the firing patterns of neurons in the cerebral cortex. These changes in cortical electrical activity can be recorded non-invasively by electrodes placed on the scalp; the recorded signal is the electroencephalogram (EEG). As the anesthetic concentration rises and anesthesia deepens, the scalp EEG shows several characteristic patterns, each waveform type corresponding to an anesthetic state, so the patient's anesthesia depth can in principle be judged from the EEG. Interpreting these waveforms directly, however, is difficult for anesthesiologists. The main function of an anesthesia depth monitor is to convert the waveforms into a number in the range 0-100 that anesthesiologists can easily understand, called the "anesthesia depth index". Different numerical ranges correspond to different anesthesia depths, and the exact ranges vary somewhat between manufacturers. Typically, 80-100 corresponds to wakefulness, 60-80 to light-to-moderate sedation, 40-60 to a depth suitable for surgery, and below 40 indicates anesthesia that is too deep.
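The conventional index bands just described can be expressed as a small lookup, purely for illustration (the exact cut-offs vary by manufacturer, as noted above):

```python
def index_to_state(idx: float) -> str:
    """Map a 0-100 anesthesia depth index to the conventional bands
    described above; cut-offs are illustrative, not manufacturer-exact."""
    if idx >= 80:
        return "awake"
    if idx >= 60:
        return "light-to-moderate sedation"
    if idx >= 40:
        return "surgical anesthesia"
    return "too deep"

print(index_to_state(92))  # a value in 80-100 maps to "awake"
print(index_to_state(50))  # a value in 40-60 maps to "surgical anesthesia"
```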
The existing anesthesia depth monitors, chiefly the BIS monitor (United States), the Narcotrend monitor (Germany), and the CSI monitor (Denmark), were developed in the 1990s. Limited by the computing power and signal-analysis techniques of the time, all of them first compute EEG features and then fit a statistical regression model that outputs a number between 0 and 100; in essence they are simple hand-crafted rules, and anesthesiologists often complain that they are inaccurate. During actual anesthesia the patient's EEG is influenced not only by the type of anesthetic but also by its concentration, individual differences between patients, the intensity of surgical stimulation, and other factors, so the signal differs greatly from patient to patient and clear classification rules are hard to formulate. By analogy with the classic "cat vs. dog" recognition problem, there are two distinct technical paths: one manually defines and computes image features, such as the aspect ratio of pixel contours, to distinguish cats from dogs; the other feeds a large number of human-labeled cat and dog photographs directly into a deep learning model, which "learns" autonomously from the data and then recognizes cats and dogs in new images. Numerous examples from image recognition, speech recognition, and machine translation have shown that, for classification problems that are hard to describe with explicit rules, traditional statistical methods based on hand-crafted rules are generally of theoretical interest only and rarely reach a practically usable level.
At present, all successfully commercialized applications of image recognition, speech recognition, machine translation, and large language models rely on deep learning over large samples. The invention takes a deep learning model trained for everyday image classification (cats, dogs, common objects, pedestrians, vehicles) and, via transfer learning, retrains it to automatically judge two-dimensional color maps of anesthesia EEG, thereby monitoring the patient's anesthesia depth.
A search shows that most anesthesia-depth patents follow the traditional path of first extracting EEG features and then performing statistics, for example CN101449974, CN110267590, CN104994782, CN104644166, CN104545949, and CN103637798. The present invention follows a deep learning path and computes no EEG features at all, an essential difference from the patents above.
A search also shows that patent CN109645989 proposes judging the anesthetic state with a convolutional neural network, without manually extracted EEG features. The present invention differs from it in two respects. First, CN109645989 builds its own convolutional neural network and trains it from scratch; notably, that network has only three layers. The present invention instead uses transfer learning: a general-purpose deep image-recognition convolutional neural network trained by an Internet technology company or research institute (such networks typically have hundreds of layers) is retrained on independently collected anesthesia EEG data. By common knowledge of convolutional neural networks, an extremely shallow network is of essentially theoretical interest only; the practically usable image-recognition networks are typically hundreds of layers deep, which is the origin of the term "deep learning". Second, CN109645989 divides the patient's EEG into awake, anesthetized, and recovery phases, a division by surgical progress rather than by the patient's actual state. In particular, the anesthetized phase is not further subdivided, so the method cannot detect over-deep anesthesia that may occur during the operation, which misses the most central function of anesthesia depth monitoring.
A search further shows that patent CN11376847 also proposes judging the anesthetic state with a convolutional neural network without manually extracted EEG features. The present invention differs in two respects: 1. the images used in CN11376847 are formed from multichannel EEG data, whereas the present invention uses time-frequency analysis to convert single-channel forehead EEG into a two-dimensional color map; 2. CN11376847 uses anesthesia EEG data from only 2 macaques, a sample size far too small to meet the general data requirements of deep learning, so no generally applicable model can be trained; even if an anesthesia depth model could be trained, it would apply only to monkeys, not to humans, because of species differences.
Patents CN111616681 and CN113974557 also propose judging anesthesia depth with deep neural networks. The main differences from the present invention are that neither adopts the end-to-end approach common in deep learning, instead first computing various EEG features, and that both build their own convolutional neural networks. In addition, CN111616681 forms its images from multichannel EEG data, whereas the present invention uses time-frequency analysis to convert single-channel forehead EEG into a two-dimensional color map.
A search also finds patent CN108717535 relating to anesthesia depth judgment. It differs from the present invention in that CN108717535 uses a long short-term memory network, an RNN (recurrent neural network) architecture suited to one-dimensional time series, whereas the present invention uses a CNN (convolutional neural network) architecture suited to two-dimensional images.
Disclosure of Invention
To realize automatic anesthesia depth recognition within a deep learning framework, the invention adopts the following technical scheme, consisting of a method for training an anesthesia depth classification model on a large volume of historical patient anesthesia-monitoring data and a method for dynamically judging the anesthesia depth of an individual patient in the operating room, as shown in fig. 1.
The method for training the anesthesia depth classification model on a large volume of historical patient anesthesia-monitoring data comprises the following steps:
S1: Collect anesthesia monitoring data from a large number of surgical patients, establish an anesthesia EEG data set, and screen out suitable patient EEG data for deep learning model training.
S2: Cut each patient's continuous EEG signal into fragments of a fixed duration, and convert the one-dimensional EEG fragments into two-dimensional color images by time-frequency analysis; these images serve as the input of the deep learning model.
S3: Have a human EEG expert, together with an anesthesiologist and with reference to the patient's anesthetic record, clinical state, and other information, pick out typical pictures of the following 5 states from tens of thousands of EEG two-dimensional color images: awake without blinking, awake with blinking, moderate-to-light sedation, appropriate general anesthesia, and over-deep anesthesia. The EEG expert repeatedly reviews many two-dimensional color maps of these 5 states until the EEG characteristics corresponding to each state are mastered. Fig. 2 shows typical EEG time-frequency images of the 5 states selected from a large number of pictures, where A to E are respectively: awake without blinking, awake with blinking, moderate-to-light sedation, appropriate general anesthesia, and over-deep anesthesia.
S4: Have the human EEG expert label a large number of EEG time-frequency images by visual judgment, marking each image as one of the 5 states; these labels serve as the output of the deep learning model.
S5: Select one of the existing mainstream image-recognition neural networks in the computer vision field as a pre-training model; use the EEG time-frequency images as the model input and the expert-labeled anesthesia states as the model output. Set the learning parameters and train by adjusting the weights of the convolutional neural network with the back-propagation algorithm. The result is a trained convolutional neural network anesthesia depth classification model, whose performance is then measured on a test data set.
The method for judging the anesthesia depth of an individual patient in real time in the operating room comprises the following steps:
S6: Collect the EEG signal of the patient to be monitored, acquiring single-channel forehead EEG at a sampling rate of no less than 100 Hz.
S7: Set a time window and a step length, and divide the EEG signal into short segments of fixed duration in a sliding manner. The time window is typically a few seconds to tens of seconds, and the step is typically a few seconds.
S8: Convert the one-dimensional EEG waveform into a two-dimensional image with one of the common time-frequency analysis methods.
S9: Input the two-dimensional EEG image into the trained anesthesia depth classification model to obtain a prediction of the anesthesia depth state and the corresponding probability.
S10: Visualize the anesthesia depth over the patient's whole operation: the horizontal axis is time, and the vertical axis, from top to bottom, is the two awake subtypes (without and with blinking), moderate-to-light sedation, appropriate general anesthesia, and over-deep anesthesia. This stepped pattern, similar to the sleep-stage sequence chart (hypnogram) of sleep medicine, is named the "anesthesia depth staging chart" or "anesthesia depth phase sequence chart", with the English name "ansthestargram". Unlike the traditional hypnogram, different colors represent the patient's different anesthesia states, and the colors carry psychological connotations: a reassuring color is used when general anesthesia is appropriate, and a warning color when anesthesia is too deep or too light. Within each color, the shade encodes the reliability of the corresponding prediction (probability 0-1): the darker the color, the more confident the prediction; the lighter the color, the less confident. This is implemented by multiplying the RGB values of the state's color by the probability value.
The above methods can be implemented as corresponding software modules constituting an anesthesia depth evaluation system, as shown in fig. 3.
Drawings
FIG. 1 is the overall flow diagram of the anesthesia depth assessment method of the present invention.
Fig. 2 is a typical electroencephalogram time-frequency diagram of 5 anesthetic states proposed by the present invention.
FIG. 3 is a schematic diagram of the model composition of the anesthesia depth assessment system of the present invention.
Fig. 4 shows EEG time-frequency images of some patients in the awake, non-blinking state, provided in an embodiment of the present invention.
Fig. 5 shows EEG time-frequency images of some patients in the awake, blinking state, provided in an embodiment of the present invention.
Fig. 6 shows EEG time-frequency images of some patients in the moderate-to-light sedation state, provided in an embodiment of the present invention.
Fig. 7 shows EEG time-frequency images of some patients under appropriate general anesthesia, provided in an embodiment of the present invention.
Fig. 8 shows EEG time-frequency images of some patients in the over-deep anesthesia state, provided in an embodiment of the present invention.
FIG. 9 is a schematic diagram of an anesthesia depth model training procedure provided in an embodiment of the invention.
FIG. 10 is a graph of a confusion matrix reflecting the performance of the anesthesia depth model on the validation set, provided in an embodiment of the invention.
Fig. 11 shows the anesthesia depth staging chart of an anesthetized patient according to an embodiment of the present invention.
Detailed Description
EEG monitoring data and other related intraoperative monitoring data were collected from 6388 anesthetized patients to establish an anesthesia EEG database. Besides the EEG data, the following were also recorded: anesthesia mode (general anesthesia, sedation, local anesthesia, etc.), the patient's brain-disease and psychiatric history, drinking, smoking, and psychotropic medication use, ASA physical status classification, type of operation, and so on. Whether EEG-based anesthesia depth monitoring had been performed was then checked for 6043 of these patients; in total, 5867 had undergone such monitoring.
The whole-operation EEG monitoring data of the 5867 patients were cut into short segments of fixed duration, suitable EEG fragments were screened out, and the one-dimensional waveforms were converted into two-dimensional images. Extensive testing showed that segment lengths of 10-30 seconds give good results.
A time-frequency analysis method, such as the wavelet transform, is selected to convert the one-dimensional EEG waveform into a two-dimensional time-frequency image. Testing showed that other common time-frequency analysis methods can also be used without affecting the subsequent deep learning.
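As one concrete possibility for the wavelet route, a complex-Morlet continuous wavelet transform can be hand-rolled in a few lines of NumPy. This is an illustrative sketch under assumed parameters (Morlet parameter w, 100 Hz test signal), not the embodiment's actual implementation.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w=6.0):
    """|CWT| of a 1-D signal with complex Morlet wavelets: returns a
    (len(freqs), len(x)) time-frequency image suitable for rendering
    as a 2-D color map."""
    img = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        s = w * fs / (2.0 * np.pi * f)            # Gaussian width in samples
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        wavelet = np.exp(2j * np.pi * f * t / fs) * np.exp(-t**2 / (2 * s**2))
        wavelet /= np.sqrt(s)                      # rough scale normalization
        img[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return img

fs = 100
x = np.sin(2 * np.pi * 10 * np.arange(4 * fs) / fs)  # 4 s of a 10 Hz tone
img = morlet_cwt(x, fs, freqs=[5.0, 10.0, 20.0])
print(img.shape)                         # (3, 400): one row per analysis frequency
print(img.mean(axis=1).argmax())         # row 1: the 10 Hz band dominates
```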
Following the 5 typical-state EEG time-frequency images shown in fig. 2, the human EEG expert labels the EEG images and places them into corresponding folders on a computer, each folder named with a symbol for one of the 5 states. Figs. 4 to 8 show expert-labeled time-frequency images of some patients in the typical states, in order: awake without blinking, awake with blinking, moderate-to-light sedation, appropriate general anesthesia, and over-deep anesthesia.
One of the existing mainstream image-recognition neural networks in the computer vision field is selected as the pre-training model. In this embodiment, the SqueezeNet convolutional neural network is selected as the pre-training model for transfer learning.
The EEG images are resized to 227 × 227 to match the input requirement of the SqueezeNet network. The time-frequency images are then split at random, with 20% held out as a validation set and the remainder used as the training set.
For the specific application of anesthesia depth recognition, the structure of the SqueezeNet network is adjusted as follows: 1. The last convolutional layer of SqueezeNet and the classification output layer are replaced so that the outputs are the 5 classes C, C-blink, S, A, and D, denoting Consciousness (awake without blinking), Consciousness with eye-blink (awake with blinking), moderate-to-light Sedation, appropriate general Anesthesia, and over-Deep anesthesia. 2. The training parameters are set; for example, the learning rate can be set to 0.0003, the batch size to 10, and training run for 8 epochs.
The model training process is shown in fig. 9. After training is completed, the anesthesia depth classification model is obtained and then verified with the data in the validation set. As shown by the confusion matrix in fig. 10, the classification accuracy of the model reaches 88.98%.
EEG data from patients outside the training and validation sets are then selected; with a time window of 10 seconds and a step of 1 second, the EEG is divided into 10-second segments in a sliding manner, converted into two-dimensional color maps by wavelet transform, and fed sequentially into the trained model, yielding an anesthesia state classification, updated once per second, together with the corresponding prediction probability.
The patient's anesthesia state is visualized in the form of an "anesthesia depth staging chart", as shown in fig. 11. The upper part of fig. 11 is the EEG time-frequency image of the patient's whole operation; in general, anesthesiologists cannot interpret such images, and only specially trained specialists with a combined EEG and anesthesia background can. The lower part of fig. 11 is the anesthesia depth staging chart produced by the model: the horizontal axis is time, and the vertical axis, from top to bottom, is the two awake subtypes (without and with blinking), moderate-to-light sedation, appropriate general anesthesia, and over-deep anesthesia, forming a stepped pattern. The states are colored with psychologically suggestive colors: appropriate general anesthesia in green, moderate-to-light sedation in cyan, the two awake states (blinking and non-blinking) in their own color, and over-deep anesthesia in red. Within each color, the shade is modulated by the prediction probability (0-1) output by the model and corresponds to the reliability of the prediction: the darker the color, the more confident the prediction; the lighter the color, the less confident.
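Rendering the stepped pattern just described amounts to collapsing the once-per-second label sequence into horizontal runs, one block per run. A small illustrative helper (the state names here are shorthand, not the patent's labels):

```python
def runs(labels):
    """Collapse a per-second label sequence into (label, start, length)
    runs - the horizontal blocks of the stepped staging chart."""
    out = []
    for i, lab in enumerate(labels):
        if out and out[-1][0] == lab:
            out[-1] = (lab, out[-1][1], out[-1][2] + 1)  # extend current run
        else:
            out.append((lab, i, 1))                      # start a new run
    return out

seq = ["awake", "awake", "sedation", "general", "general", "general", "deep"]
print(runs(seq))  # [('awake', 0, 2), ('sedation', 2, 1), ('general', 3, 3), ('deep', 6, 1)]
```

Each run is then drawn as a block at that state's vertical level, in that state's color shaded by the prediction probability.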
To specialists with a combined electroencephalography and anesthesiology background, the electroencephalogram time-frequency features of the patient's whole surgical procedure in fig. 11 match the anesthesia depth staging chart produced by the model closely; having "learned" from data annotated by human electroencephalogram experts, the model has reached the level of those experts.
The anesthesia depth staging chart can be generated in real time to guide the anesthesiologist in judging the patient's anesthesia depth and adjusting the type and dose of anesthetic, so that the patient is maintained at a suitable anesthesia depth and is not left for long in the undesirable state of too-shallow or too-deep anesthesia. The chart can also be produced by post-processing, allowing the anesthesiologist to review the quality of the anesthesia after the fact.
While the invention has been described in detail in connection with the drawings and embodiments, it should be understood that the foregoing description is not intended to limit the invention in any way. Modifications and variations of the invention may be made as desired by those skilled in the art without departing from the true spirit and scope of the invention, and such modifications and variations fall within the scope of the invention.
Claims (8)
1. An electroencephalogram signal-based anesthesia depth assessment method, which is characterized by comprising the following steps:
s1: collecting anesthesia monitoring data of a large number of patients subjected to surgery, establishing an anesthesia electroencephalogram data set, screening out appropriate anesthesia patient electroencephalogram data, and training a model for deep learning;
s2: cutting the continuous brain wave signal of each patient into segments of a fixed duration, converting the one-dimensional segments into two-dimensional color maps by a time-frequency analysis method, and using these maps as input to the deep learning model;
s3: combining the human electroencephalogram expert with an anesthesiologist, combining the anesthesiology information of a patient, the clinical state of the patient and the like, and finding out typical pictures of the following 5 states from tens of thousands of electroencephalogram two-dimensional color pictures: consciousness, wakefulness and over-depth in blinking, moderate to shallow sedation, general anesthesia and anesthesia. Repeatedly watching a plurality of electroencephalogram two-dimensional color maps in the 5 different states by a human electroencephalogram expert until grasping the electroencephalogram characteristics corresponding to each state;
s4: marking a large number of electroencephalogram time-frequency images by a human electroencephalogram expert in a naked eye judging mode, marking each image as one of the 5 states, and outputting the images as a deep learning model;
s5: and randomly dividing the noted electroencephalogram two-dimensional color map into a training set and a verification set. One of the existing mainstream image recognition neural networks in the field of computer vision is selected as a pre-training model, the electroencephalogram time-frequency diagram is input into the model, and the anesthesia state marked by a human electroencephalogram expert is used as the output of the model. And setting learning parameters, adjusting the weight of the convolutional neural network model by adopting a back propagation algorithm, and performing model training by using training set data. Obtaining a trained convolutional neural network anesthesia depth classification model, and testing the performance of the model on a verification data set;
s6: collecting brain wave signals of a patient to be detected;
s7: setting a certain time window and a certain step length, and dividing the electroencephalogram signal into small sections with a certain duration by adopting a sliding mode;
s8: converting one-dimensional brain wave waveforms into two-dimensional images by adopting a time-frequency analysis method;
s9: inputting the diagram into the anesthesia depth classification model obtained by training to obtain the prediction of the anesthesia depth state and the corresponding probability;
s10: the anesthesia depth of the whole operation process of the patient is visually displayed by using different colors and stepped patterns with different depths of each color.
2. A method according to claim 1, characterized in that:
the patient is classified into 5 states: awake and not blinking, awake and blinking, moderate-shallow sedation, suitable general anesthesia, and over-deep anesthesia. The electroencephalogram time-frequency features corresponding to each state are established by human electroencephalogram experts after reading a large number of time-frequency maps. These features are not described deterministically, whether in language or by signal-analysis methods; rather, after reading a large number of representative electroencephalogram time-frequency maps, the human expert forms a kind of tacit knowledge that cannot be put into words.
3. The method according to claim 2, characterized in that:
instead of treating "wakefulness" as a single state, as is usual, wakefulness is further subdivided into two distinct subtypes according to whether the patient blinks: one is wakefulness in the ordinary sense, i.e., awake and not blinking; the other is awake and blinking.
4. The method according to claim 1, characterized in that:
instead of showing the patient's depth of anesthesia as a typical number from 0 to 100, the patient's anesthetic state over the whole surgical procedure is visualized in a form similar to the sleep stage sequence diagram (hypnogram) used in sleep medicine. It is named the anesthesia depth staging chart, or anesthesia depth phase sequence diagram; its English name is: Anesthestargram.
5. The method according to claim 4, wherein:
unlike the traditional sleep stage sequence diagram, different colors are used to represent the patient's different anesthesia states.
6. The method according to claim 5, wherein:
these colors carry psychological implications: a reassuring color is used for the suitable general anesthesia state, and a warning color is used when anesthesia is too deep or too light.
7. The method according to claim 6, wherein:
each color is additionally shaded. For a given color, the shade corresponds to the reliability of the corresponding prediction (a probability between 0 and 1): the darker the color, the more confident the prediction; the lighter the color, the less confident the prediction.
8. An electroencephalogram signal based anesthesia depth assessment system, the system comprising:
the electroencephalogram acquisition module is used for acquiring electroencephalogram signals of a patient;
the electroencephalogram time-frequency analysis module is used for converting one-dimensional electroencephalogram waveform data into a two-dimensional color chart;
the anesthesia depth evaluation module, which contains a deep learning image classification model trained on a large amount of patient data and which, from an input electroencephalogram two-dimensional color map, outputs an estimate of the consciousness level among the 5 levels of awake and not blinking, awake and blinking, moderate-shallow sedation, general anesthesia, and over-deep anesthesia, together with its probability;
and the anesthesia depth display module is used for displaying the anesthesia state of the patient.
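The four modules of claim 8 form a simple pipeline: acquisition, time-frequency conversion, classification, display. The sketch below wires them together with stub implementations; every class name, the short-time FFT stand-in for the wavelet transform, and the placeholder decision rule are assumptions for illustration, not the patented implementation:

```python
import numpy as np

class AcquisitionModule:
    """Electroencephalogram acquisition module (stubbed with synthetic noise)."""
    def read_segment(self, fs=128, seconds=10):
        return np.random.randn(fs * seconds)

class TimeFrequencyModule:
    """Converts a 1-D waveform to a 2-D map (stub: magnitude of a short-time FFT)."""
    def transform(self, segment, win=128):
        frames = segment[: len(segment) // win * win].reshape(-1, win)
        return np.abs(np.fft.rfft(frames, axis=1)).T   # frequency x time

class DepthEvaluationModule:
    """Stands in for the trained CNN: returns one of the 5 states and a probability."""
    STATES = ["awake_not_blinking", "awake_blinking", "moderate_shallow_sedation",
              "general_anesthesia", "over_deep_anesthesia"]
    def classify(self, tf_map):
        i = int(tf_map.mean() * 1000) % len(self.STATES)  # placeholder decision rule
        return self.STATES[i], 0.9

class DisplayModule:
    """Formats the current prediction for the display."""
    def show(self, state, prob):
        return f"{state} (p={prob:.2f})"

# wire the four modules together
acq, tfm, ev, disp = AcquisitionModule(), TimeFrequencyModule(), DepthEvaluationModule(), DisplayModule()
seg = acq.read_segment()
state, prob = ev.classify(tfm.transform(seg))
print(disp.show(state, prob))
```

In the real system the display module would render the colored anesthesia depth staging chart rather than a text line.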
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311703314.3A CN117503065A (en) | 2023-12-12 | 2023-12-12 | Method and system for evaluating anesthesia depth |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117503065A true CN117503065A (en) | 2024-02-06 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118452838A (en) * | 2024-07-09 | 2024-08-09 | 上海岩思类脑人工智能研究院有限公司 | Anesthesia state detection method and device based on nerve spike coding |
CN118452838B (en) * | 2024-07-09 | 2024-10-15 | 上海岩思类脑人工智能研究院有限公司 | Anesthesia state detection method and device based on nerve spike coding |
CN118512156A (en) * | 2024-07-23 | 2024-08-20 | 北京大学第三医院(北京大学第三临床医学院) | Prediction method of intraoperative anesthesia depth and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||