CN110009604B - Method and device for extracting respiratory signal of contrast image sequence - Google Patents
Method and device for extracting respiratory signal of contrast image sequence
- Publication number
- CN110009604B (grant) · CN201910213523.7A (application)
- Authority
- CN
- China
- Prior art keywords
- image sequence
- sequence
- neural network
- background image
- respiratory signal
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The embodiment of the invention provides a method and a device for extracting a respiratory signal from a contrast image sequence, wherein the method comprises: acquiring a contrast image sequence, inputting the contrast image sequence into a pre-trained neural network, and outputting a respiratory signal sequence. The neural network comprises: a self-encoder layer for inputting the contrast image sequence and outputting a background image of each frame in the sequence, forming a background image sequence; an optical flow network layer for inputting the background image sequence and outputting the motion information of the background image sequence; and a recurrent neural network layer for inputting the motion information of the background image sequence and outputting the respiratory signal sequence. The embodiment of the invention can automatically and accurately extract the respiratory signal sequence.
Description
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a method and a device for extracting a respiratory signal of a contrast image sequence.
Background
Angiography is an interventional technique in which a contrast agent is injected into a blood vessel; because X-rays cannot penetrate the contrast agent, the contrast agent visible under X-rays is taken to delineate the blood vessel. At present, contrast imaging is the most common imaging modality in thoracoabdominal vascular interventional procedures. In thoracoabdominal surgery, the patient's breathing displaces the tissues and organs, so the position of the contrast agent in the image also changes, which can make localization difficult for the physician. To compensate for the respiration-induced displacement of tissues and organs in the image, the motion correlation between the tissues and organs and the respiratory state must be analyzed, and automatically extracting the respiratory signal from the images is a crucial step.
Current methods that extract respiratory signals directly from contrast images fall roughly into two categories: methods based on the gray level or displacement of a region of interest in the image, and methods that apply manifold learning to the complete image. Both have limitations. In the first category, the region of interest is the diaphragm. Because the diaphragm moves with respiration, some approaches obtain the respiratory signal by analyzing the gray-level change of the diaphragm region, while others obtain it by tracking the vertical displacement of a point on the diaphragm. However, during surgery, especially thoracic cardiovascular interventions, the diaphragm may not appear in contrast images at certain view angles, and it may be poorly visible in intraoperative contrast images of obese patients. These issues limit the clinical utility of the first category. In the second category, a manifold learning method such as principal component analysis or isometric feature mapping is applied to the complete image sequence to reduce the high-dimensional image data to a one-dimensional feature, which is the respiratory signal corresponding to the sequence. In particular, in thoracic cardiovascular applications, a morphological closing operation is usually first applied to the images to suppress interference from heartbeat motion. The limitation of this category is that the dimensionality-reduction modeling requires the complete image sequence as input, which hinders real-time intraoperative use.
Disclosure of Invention
Embodiments of the present invention provide a method and apparatus for extracting a respiratory signal of a contrast image sequence that overcome or at least partially solve the above-mentioned problems.
In a first aspect, an embodiment of the present invention provides a method for extracting a respiratory signal of a contrast image sequence, including:
acquiring a contrast image sequence, inputting the contrast image sequence into a pre-trained neural network, and outputting a respiratory signal sequence;
wherein the neural network comprises:
the self-encoder layer is used for inputting a contrast image sequence and outputting a background image of each frame image in the contrast image sequence to form the background image sequence;
the optical flow network layer is used for inputting the background image sequence and outputting the motion information of the background image sequence;
the recurrent neural network layer is used for inputting the motion information of the background image sequence and outputting the respiratory signal sequence;
the self-encoder layer is trained from a sample contrast image sequence and background image labels; the recurrent neural network layer is trained from the motion information of a sample background image sequence and respiratory signal labels.
In a second aspect, an embodiment of the present invention provides a respiratory signal extraction apparatus for a contrast image sequence, including:
the image sequence acquisition module is used for acquiring a contrast image sequence;
the neural network module is used for inputting the contrast image sequence into a pre-trained neural network and outputting a respiratory signal sequence;
wherein the neural network comprises:
the self-encoder layer is used for inputting a contrast image sequence and outputting a background image of each frame image in the contrast image sequence to form the background image sequence;
the optical flow network layer is used for inputting the background image sequence and outputting the motion information of the background image sequence;
the recurrent neural network layer is used for inputting the motion information of the background image sequence and outputting the respiratory signal sequence;
the self-encoder layer is trained from a sample contrast image sequence and background image labels; the recurrent neural network layer is trained from the motion information of a sample background image sequence and respiratory signal labels.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method provided in the first aspect when executing the program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method as provided in the first aspect.
According to the method and device for extracting the respiratory signal of a contrast image sequence provided by the embodiments of the invention, first, the background of each frame is extracted, suppressing the influence of heartbeat motion in chest contrast images; obtaining the background image by a deep learning method allows the background texture to be recovered clearly and effectively. Second, dense motion information in the background images is obtained with an optical flow network, which is particularly suited to the large differences in physiological motion state among the tissues and organs in a contrast image. Finally, the respiratory signal sequence is accurately obtained by exploiting the strength of the recurrent neural network in analyzing sequence data of unequal lengths.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flowchart illustrating a method for extracting a respiratory signal from a contrast image sequence according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a neural network according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a respiratory signal extraction apparatus for a contrast image sequence according to an embodiment of the present invention;
fig. 4 is a schematic physical structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a respiratory signal extraction method of a contrast image sequence according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s101, acquiring a contrast image sequence. The contrast image sequence of the embodiment of the invention is formed by combining a plurality of frames of contrast images which are arranged according to a time sequence.
And S102, inputting the contrast image sequence into a pre-trained neural network, and outputting a respiratory signal sequence.
The embodiment of the invention obtains the respiratory signal in the contrast images with a deep learning method. The neural network used is trained with a sample contrast image sequence and respiratory signal labels: the respiratory signal of each frame in the sample contrast image sequence is known in advance, and the respiratory signal labels are generated from these signals.
Fig. 2 is a schematic structural diagram of a neural network according to an embodiment of the present invention, and as shown in fig. 2, the neural network includes:
the self-encoder layer 201 is configured to input a contrast image sequence, and output a background image of each frame image in the contrast image sequence to form the background image sequence. The self-encoder layer is trained according to the sample radiography image sequence and the background image label.
It should be noted that the embodiment of the present invention suppresses the influence of heartbeat motion in chest contrast images by extracting the background of each frame. The conventional approach of applying a morphological closing operation to each frame to suppress tubular structures has the drawback that it blurs the background texture while suppressing the tubular foreground. The embodiment of the invention obtains the background image by a deep learning method, which recovers the background texture clearly and effectively.
And the optical flow network layer 202 is used for inputting the background image sequence and outputting the motion information of the background image sequence.
Optical flow is a method of computing the motion of objects between adjacent frames by using the temporal changes of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame. To realize an end-to-end respiratory signal extraction method based on deep learning, the embodiment of the invention uses an optical flow network to compute the motion information between consecutive frames of the sequence. Motion information here refers to the motion state of each point across two adjacent frames.
It should be noted that other existing methods for obtaining motion information include matching feature points to obtain corresponding point pairs and then computing the displacement of the corresponding points. However, a characteristic of contrast images is that very few feature points can be detected in them. Even when a feature point is found in a vessel region or in a background tissue region, it is difficult to find an exactly matching feature point in another frame; if an accurately matched pair cannot be found, the displacement of the point cannot be computed and the motion information cannot be obtained. The optical flow method instead constructs an energy function based on the brightness-constancy assumption and minimizes the energy by iterative optimization, thereby obtaining the motion information of every point. The inventors verified experimentally that on contrast images this performs far better than methods based on feature-point matching.
The background image sequence consists of background images ordered in time: if the original contrast image sequence has n frames, the background feature extraction and reconstruction process yields n background images, forming a background image sequence. The optical flow network computes, over these n background images, the motion information from each frame to its preceding frame; the motion information of the first frame is initialized to 0, giving n pieces of motion information, i.e., n optical flow fields.
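The patent discloses no code; purely as an illustrative sketch of the per-frame motion computation described above, the following NumPy snippet substitutes a windowed Lucas-Kanade solve under the brightness-constancy assumption for the trained optical flow network. The function names and the window size are assumptions, not part of the disclosed embodiment.

```python
import numpy as np

def dense_flow(prev, curr, win=5):
    """Dense optical-flow field between two frames via a windowed
    Lucas-Kanade solve under brightness constancy.
    Returns an (H, W, 2) array of per-pixel (dx, dy) displacements."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    Ix = np.gradient(prev, axis=1)   # spatial gradient, x direction
    Iy = np.gradient(prev, axis=0)   # spatial gradient, y direction
    It = curr - prev                 # temporal gradient
    r = win // 2
    H, W = prev.shape
    flow = np.zeros((H, W, 2))
    for y in range(r, H - r):
        for x in range(r, W - r):
            ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([ix, iy], axis=1)       # (win*win, 2)
            ATA = A.T @ A
            if np.linalg.det(ATA) > 1e-6:        # solve only where well-posed
                flow[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return flow

def sequence_motion(backgrounds):
    """Flow from each frame to its predecessor; frame 0 initialized to 0."""
    flows = [np.zeros(backgrounds[0].shape + (2,))]
    for k in range(1, len(backgrounds)):
        flows.append(dense_flow(backgrounds[k - 1], backgrounds[k]))
    return flows
```

As in the description, `sequence_motion` returns n flow fields for n background images, with the first frame's motion initialized to 0.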
The recurrent neural network layer 203 is configured to input the motion information of the background image sequence and output the respiratory signal sequence. The recurrent neural network layer is trained on the motion information of a sample background image sequence and respiratory signal labels.
It should be noted that a recurrent neural network (RNN) is a class of neural networks that takes sequence data as input, recurses along the evolution direction of the sequence, and whose nodes (recurrent units) are connected in a chain. One advantage of such networks is that they can analyze sequence data of different lengths and derive the correlation characteristics of the data along the time dimension. Bidirectional recurrent neural networks (Bi-RNN) and long short-term memory networks (LSTM) are common recurrent neural networks.
The recurrent neural network of the embodiment of the invention uses a sequence-to-sequence (many-to-many) input-output mode: the background image sequence is input into the pre-trained recurrent neural network, and a sequence of values is output, each corresponding to the respiratory signal of one background image in the input sequence. A conventional procedure for training the recurrent neural network may be: input the motion information of the sample background image sequence into the preset recurrent neural network and output a predicted respiratory signal value for each frame of the sample background image sequence; compute a loss value from the predicted signal and the respiratory signal label with a preset loss function; when the loss value converges to a preset threshold, training of the preset recurrent neural network is complete.
In the embodiment of the present invention, the recurrent neural network ends with a regression layer, so its output is a continuous real number (the respiratory signal) rather than a probability value between 0 and 1.
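As a hedged illustration of the many-to-many regression described above (the patent does not specify network dimensions or a framework), a minimal NumPy LSTM with a linear regression head can be sketched as follows; `in_dim`, `hidden`, and the initialization scheme are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMRegressor:
    """Minimal many-to-many LSTM with a linear regression head: one
    real-valued output (the respiration value) per time step."""
    def __init__(self, in_dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Weights for input, forget, candidate and output gates, stacked.
        self.W = rng.standard_normal((4 * hidden, in_dim + hidden)) * 0.1
        self.b = np.zeros(4 * hidden)
        self.w_out = rng.standard_normal(hidden) * 0.1  # regression head
        self.b_out = 0.0
        self.hidden = hidden

    def forward(self, xs):
        """xs: (T, in_dim) motion features; returns (T,) respiration values."""
        H = self.hidden
        h = np.zeros(H)
        c = np.zeros(H)
        out = []
        for x in xs:
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[0:H])          # input gate
            f = sigmoid(z[H:2 * H])      # forget gate
            g = np.tanh(z[2 * H:3 * H])  # candidate cell state
            o = sigmoid(z[3 * H:4 * H])  # output gate
            c = f * c + i * g            # hidden state carries past features
            h = o * np.tanh(c)
            out.append(self.w_out @ h + self.b_out)  # continuous real output
        return np.array(out)
```

Because the head is linear rather than sigmoid-terminated, each per-step output is an unbounded real value, matching the regression-layer behavior described above; the same model accepts sequences of any length T.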
According to the method for extracting the respiratory signal from contrast images of the embodiment of the invention, first, the background of each frame is extracted, suppressing the influence of heartbeat motion in chest contrast images, and obtaining the background image by a deep learning method recovers the background texture clearly and effectively; second, motion information in the background images is obtained with an optical flow algorithm, which is particularly suited to the large differences in physiological motion state among tissues and organs in contrast images, and to realize an end-to-end deep-learning-based extraction method, an optical flow network computes the motion information between consecutive frames of the sequence; finally, the respiratory signal is accurately obtained by exploiting the strength of the recurrent neural network in analyzing sequence data of different lengths.
On the basis of the above embodiments, as an optional embodiment, the neural network is trained by pre-training the sub-modules and then training the complete network jointly. Specifically:
the self-encoder layer and the recurrent neural network layer are pre-trained separately;
then, taking the pre-trained parameters of the self-encoder layer and the recurrent neural network layer, together with the parameters of the optical flow network layer, as initial values of the neural network, a sample contrast image sequence and respiratory signal labels are input into the neural network for training.
Training each module separately allows targeted supervision labels to be provided. For example, the self-encoder layer is trained with explicit contrast images and their background images as samples and labels, so that it better learns the image features of interest. Using the pre-trained weights as initial parameters of the whole network, and then training the complete network with the contrast image sequence and the respiratory signal label of each frame, makes the complete network converge faster.
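The two-stage scheme above (targeted pre-training of the sub-modules, then joint fine-tuning starting from the pre-trained weights) can be illustrated with a deliberately tiny toy model; the scalar "modules" below stand in for the self-encoder and recurrent layers and are purely a conceptual sketch, not the disclosed networks:

```python
import numpy as np

def pretrain(x, target, steps=100, lr=0.1):
    """Stage 1: fit one scalar module against its own supervision label."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2 * (w * x - target) * x)  # MSE gradient
        w -= lr * grad
    return w

def finetune(x, y, w1, w2, steps=100, lr=0.05):
    """Stage 2: joint training of the composed model y = w2 * (w1 * x),
    initialized from the pre-trained weights."""
    for _ in range(steps):
        pred = w2 * (w1 * x)
        g2 = np.mean(2 * (pred - y) * (w1 * x))
        g1 = np.mean(2 * (pred - y) * (w2 * x))
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2
```

The point of the sketch is only the schedule: each module first sees its own targeted labels, and the composite model then converges quickly because it starts from those weights rather than from scratch.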
On the basis of the above embodiments, as an optional embodiment, the self-encoder layer includes:
an encoder layer for inputting a sequence of contrast images and outputting low dimensional features representing a background image for each image in the sequence of contrast images; it is understood that the background image is the background in the image.
And the decoder layer is used for carrying out deconvolution operation on the low-dimensional features and restoring to obtain a background image sequence of the image.
It should be noted that, because the encoder and decoder are essentially convolutional neural networks, they are trained by inputting the contrast image sequence together with the background image label of each contrast image, outputting a predicted background image, and computing a loss value from the predicted background image and the background image label with a preset loss function; when the loss value converges to the preset threshold, training of the encoder and decoder is complete.
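For illustration only, the training loop just described (predict a background image, compute a preset loss against the background label, stop when the loss converges below a threshold) can be sketched with a linear stand-in for the convolutional encoder/decoder; the dimensions, learning rate, and threshold are assumptions:

```python
import numpy as np

def train_background_autoencoder(frames, labels, code_dim=16, lr=0.1,
                                 steps=300, tol=1e-3):
    """Linear stand-in for the self-encoder layer: an encoder maps each
    flattened contrast frame to a low-dimensional code, a decoder
    reconstructs the background, and gradient descent minimizes the MSE
    against the background-image labels until it falls below `tol`."""
    rng = np.random.default_rng(0)
    n, d = frames.shape
    E = rng.standard_normal((code_dim, d)) * 0.1   # encoder weights
    D = rng.standard_normal((d, code_dim)) * 0.1   # decoder weights
    losses = []
    for _ in range(steps):
        code = frames @ E.T               # (n, code_dim) low-dim features
        recon = code @ D.T                # (n, d) predicted background
        err = recon - labels
        loss = float(np.mean(err ** 2))   # preset loss function (MSE)
        losses.append(loss)
        if loss < tol:                    # converged below preset threshold
            break
        gD = (2.0 / (n * d)) * err.T @ code           # dL/dD
        gE = (2.0 / (n * d)) * (err @ D).T @ frames   # dL/dE
        D -= lr * gD
        E -= lr * gE
    return E, D, losses
```

The real embodiment uses convolution and deconvolution layers rather than dense matrices; the sketch only mirrors the supervision scheme (contrast frame in, background label out, loss thresholded for convergence).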
On the basis of the above embodiments, as an alternative embodiment, the optical flow network layer calculates dense motion information of the background image sequence through a convolutional neural network.
It should be noted that optical flow divides into sparse and dense optical flow: sparse optical flow computes the flow only at specific feature points of the image, whereas dense optical flow computes the flow for every pixel. Because the background of a contrast image contains several anatomical tissues and organs whose physiological motion states differ, computing dense optical flow yields more accurate motion information. Although dense optical flow is computationally heavy, state-of-the-art optical flow networks can reach speeds of 140 fps.
The optical flow network is itself essentially a convolutional neural network, but since optical flow networks are a relatively mature technology, the embodiment of the invention can directly use existing network parameters as the initial weights of the optical flow network layer.
To learn long-term dependence characteristics, on the basis of the above embodiments, the recurrent neural network is a long short-term memory (LSTM) network.
It can be understood that the long short-term memory network comprises a number of long short-term memory units; at each time step, a unit extracts the features of the current time, and the features of the previous time step are passed to it through the hidden state, so that the features obtained at the current time take into account the correlation information along the time dimension.
On the basis of the above embodiments, the self-encoder layer of the embodiment of the present invention is an adversarial autoencoder. The embodiment of the invention adopts the adversarial idea: the code distribution produced by the encoder is compared with the true background distribution, so that the encoder's codes approximate the true distribution of the target more closely, and the decoded background image is therefore more realistic.
Fig. 3 is a schematic structural diagram of a respiratory signal extraction apparatus for a contrast image sequence according to an embodiment of the present invention, as shown in fig. 3, the apparatus includes: an image sequence acquisition module 301 and a neural network module 302, wherein:
an image sequence acquisition module 301 for acquiring a sequence of contrast images. The contrast image sequence of the embodiment of the invention is formed by combining a plurality of frames of contrast images which are arranged according to a time sequence.
And a neural network module 302, configured to input the contrast image sequence into a pre-trained neural network, and output a respiratory signal sequence.
The embodiment of the invention obtains the respiratory signal in the contrast images with a deep learning method. The neural network used is trained with a sample contrast image sequence and respiratory signal labels: the respiratory signal of each frame in the sample contrast image sequence is known in advance, and the respiratory signal labels are generated from these signals.
Wherein the neural network comprises:
the self-encoder layer is used for inputting a contrast image sequence and outputting a background image of each frame image in the contrast image sequence to form the background image sequence;
and the optical flow network layer is used for inputting the background image sequence and outputting the motion information of the background image sequence.
It should be noted that the existing method of applying a morphological closing operation to each frame to obtain a background image that suppresses tubular structures has the drawback of blurring the background texture while suppressing the tubular foreground. The embodiment of the invention obtains the background image by a deep learning method, which recovers the background texture clearly and effectively.
And the recurrent neural network layer is used for inputting the motion information of the background image sequence and outputting the respiratory signal sequence.
Optical flow is a method of computing the motion of objects between adjacent frames by using the temporal changes of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame. To realize an end-to-end respiratory signal extraction method based on deep learning, the embodiment of the invention uses an optical flow network to compute the motion information between consecutive frames of the sequence. Motion information here refers to the motion state of each point across two adjacent frames.
It should be noted that other existing methods for obtaining motion information include matching feature points to obtain corresponding point pairs and then computing the displacement of the corresponding points. However, a characteristic of contrast images is that very few feature points can be detected in them. Even when a feature point is found in a vessel region or in a background tissue region, it is difficult to find an exactly matching feature point in another frame; if an accurately matched pair cannot be found, the displacement of the point cannot be computed and the motion information cannot be obtained. The optical flow method instead constructs an energy function based on the brightness-constancy assumption and minimizes the energy by iterative optimization, thereby obtaining the motion information of every point. The inventors verified experimentally that on contrast images this performs far better than methods based on feature-point matching.
The background image sequence consists of background images ordered in time: if the original contrast image sequence has n frames, the background feature extraction and reconstruction process yields n background images, forming a background image sequence. The optical flow network computes, over these n background images, the motion information from each frame to its preceding frame; the motion information of the first frame is initialized to 0, giving n pieces of motion information, i.e., n optical flow fields.
The self-encoder layer is trained on a sample contrast image sequence and background image labels; the recurrent neural network layer is trained on the motion information of a sample background image sequence and respiratory signal labels.
It should be noted that a recurrent neural network (RNN) is a class of neural networks that takes sequence data as input, recurses along the evolution direction of the sequence, and connects all nodes (recurrent units) in a chain. The bidirectional recurrent neural network (Bi-RNN) and the long short-term memory network (LSTM) are common recurrent neural networks.
The recurrent neural network of the embodiment of the invention adopts a sequence-to-sequence (many-to-many) input-output mode: a background image sequence is input into the pre-trained recurrent neural network and a sequence of values is output, where each value corresponds to the respiratory signal of one background image in the input sequence. A conventional procedure for training the recurrent neural network may be: inputting the motion information of the sample background image sequence into the preset recurrent neural network and outputting a predicted respiratory-signal value for each frame of the sample background image sequence; calculating a loss value from the predicted signal and the respiratory signal label using a preset loss function; and, if the loss value converges to a preset threshold, finishing the training of the preset recurrent neural network.
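The many-to-many mode can be sketched with a minimal forward pass. A plain tanh RNN cell is used here for brevity (an assumption: the embodiment favors LSTM units, whose gating replaces the single tanh update), and all weight shapes are illustrative:

```python
import numpy as np

def rnn_many_to_many(x_seq, Wxh, Whh, Why, bh, by):
    """Emit one predicted respiratory-signal value per motion-feature vector:
    a sequence goes in, an equally long sequence of values comes out."""
    h = np.zeros(Whh.shape[0])
    outputs = []
    for x in x_seq:
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # recurrent state update
        outputs.append(float(Why @ h + by))   # per-frame scalar prediction
    return outputs
```

Training would wrap this forward pass in the loop described above: compute a loss between `outputs` and the respiratory signal labels, backpropagate, and stop once the loss falls below the preset threshold.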
The apparatus for extracting a respiratory signal from a contrast image sequence according to an embodiment of the present invention executes the flow of the respiratory signal extraction method described above; for details, please refer to the description of that method, which is not repeated here. In this apparatus, the background of each frame is first extracted, which suppresses the influence of heartbeat motion in chest contrast images; because the background image is obtained by a deep learning method, background textures can be recovered effectively and clearly. Second, motion information in the background images is obtained with an optical flow algorithm, which is particularly suited to the large differences in the physiological motion states of tissues and organs present in contrast images; to realize an end-to-end deep-learning respiratory signal extraction method, an optical flow network is adopted to calculate the motion information between consecutive frames of the sequence. Finally, the respiratory signal is accurately obtained by exploiting the strength of the recurrent neural network in analyzing sequence data of varying lengths.
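The three-stage flow recapped above can be sketched as a simple composition. Here `encoder`, `flow_net`, and `rnn` are hypothetical stand-ins for the trained self-encoder, optical-flow, and recurrent sub-networks:

```python
import numpy as np

def extract_respiratory_signal(contrast_frames, encoder, flow_net, rnn):
    """Contrast frames -> background frames -> motion fields -> respiratory
    signal, mirroring the self-encoder / optical-flow / recurrent layers."""
    backgrounds = [encoder(f) for f in contrast_frames]
    motions = [np.zeros(backgrounds[0].shape + (2,))]   # frame 0: zero motion
    motions += [flow_net(backgrounds[i], backgrounds[i - 1])
                for i in range(1, len(backgrounds))]
    return rnn(motions)   # one respiratory value per input frame
```

Swapping any stub for its trained counterpart leaves the interface unchanged, which is what makes the unified end-to-end training described earlier possible.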
Fig. 4 is a schematic diagram of the physical structure of an electronic device according to an embodiment of the present invention. As shown in Fig. 4, the electronic device may include: a processor 410, a communication interface 420, a memory 430 and a communication bus 440, wherein the processor 410, the communication interface 420 and the memory 430 communicate with each other via the communication bus 440. The processor 410 may invoke a computer program stored in the memory 430 and executable on the processor 410 to perform the method of extracting a respiratory signal from a contrast image sequence provided by the embodiments described above, including, for example: acquiring a contrast image sequence, inputting the contrast image sequence into a pre-trained neural network, and outputting a respiratory signal sequence; wherein the neural network comprises: a self-encoder layer for inputting the contrast image sequence and outputting a background image of each frame in the contrast image sequence to form a background image sequence; an optical flow network layer for inputting the background image sequence and outputting the motion information of the background image sequence; and a recurrent neural network layer for inputting the motion information of the background image sequence and outputting the respiratory signal sequence; the self-encoder layer is trained on a sample contrast image sequence and background image labels; the recurrent neural network layer is trained on the motion information of a sample background image sequence and respiratory signal labels.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the respiratory signal extraction method for a contrast image sequence provided in the foregoing embodiments, for example: acquiring a contrast image sequence, inputting the contrast image sequence into a pre-trained neural network, and outputting a respiratory signal sequence; wherein the neural network comprises: a self-encoder layer for inputting the contrast image sequence and outputting a background image of each frame in the contrast image sequence to form a background image sequence; an optical flow network layer for inputting the background image sequence and outputting the motion information of the background image sequence; and a recurrent neural network layer for inputting the motion information of the background image sequence and outputting the respiratory signal sequence; the self-encoder layer is trained on a sample contrast image sequence and background image labels; the recurrent neural network layer is trained on the motion information of a sample background image sequence and respiratory signal labels.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (9)
1. A method of extracting a respiratory signal from a sequence of contrast images, comprising:
acquiring a contrast image sequence, inputting the contrast image sequence into a pre-trained neural network, and outputting a respiratory signal sequence;
wherein the neural network comprises:
the self-encoder layer is used for inputting a contrast image sequence and outputting a background image of each frame in the contrast image sequence to form a background image sequence;
the optical flow network layer is used for inputting the background image sequence and outputting the motion information of the background image sequence;
the recurrent neural network layer is used for inputting the motion information of the background image sequence and outputting the respiratory signal sequence;
the self-encoder layer is trained on a sample contrast image sequence and background image labels; the recurrent neural network layer is trained on the motion information of a sample background image sequence and respiratory signal labels.
2. The method according to claim 1, wherein the neural network is trained by pre-training individual modules and then training the complete network as a whole, specifically comprising:
respectively training the self-encoder layer and the recurrent neural network layer;
and inputting a sample contrast image sequence and a respiratory signal label into the neural network for training, with the pre-trained parameters of the self-encoder layer and the recurrent neural network layer, together with the parameters of the optical flow network layer, as initial values of the neural network.
3. The method of claim 1, wherein the self-encoder layer comprises:
an encoder layer for inputting a contrast image sequence and outputting low-dimensional features representing the background image of each image in the contrast image sequence;
and a decoder layer for performing a deconvolution operation on the low-dimensional features to restore the background image sequence.
4. The method of claim 1, wherein the optical flow network layer computes dense motion information for a sequence of background images through a convolutional neural network.
5. The method of claim 1, wherein the recurrent neural network layer employs a long short-term memory (LSTM) neural network.
6. The method of claim 1, wherein the self-encoder layer is implemented using an adversarial autoencoder.
7. A respiratory signal extraction apparatus for a sequence of contrast images, comprising:
the image sequence acquisition module is used for acquiring a contrast image sequence;
the neural network module is used for inputting the contrast image sequence into a pre-trained neural network and outputting a respiratory signal sequence;
wherein the neural network comprises:
the self-encoder layer is used for inputting a contrast image sequence and outputting a background image of each frame in the contrast image sequence to form a background image sequence;
the optical flow network layer is used for inputting the background image sequence and outputting the motion information of the background image sequence;
the recurrent neural network layer is used for inputting the motion information of the background image sequence and outputting the respiratory signal sequence;
the self-encoder layer is trained on a sample contrast image sequence and background image labels; the recurrent neural network layer is trained on the motion information of a sample background image sequence and respiratory signal labels.
8. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to enable execution of a method of respiratory signal extraction of a contrast image sequence as claimed in any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method of extracting a respiratory signal of a contrast image sequence according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910213523.7A CN110009604B (en) | 2019-03-20 | 2019-03-20 | Method and device for extracting respiratory signal of contrast image sequence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110009604A CN110009604A (en) | 2019-07-12 |
CN110009604B true CN110009604B (en) | 2021-05-14 |
Family
ID=67167506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910213523.7A Active CN110009604B (en) | 2019-03-20 | 2019-03-20 | Method and device for extracting respiratory signal of contrast image sequence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110009604B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117934866B (en) * | 2024-03-18 | 2024-05-28 | 板石智能科技(深圳)有限公司 | Method and device for extracting effective interference image of white light interferometer |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101422368A (en) * | 2008-10-09 | 2009-05-06 | 李宝生 | Respiration signal extracting and respiration movement guiding device |
CN103126705A (en) * | 2011-11-30 | 2013-06-05 | Ge医疗系统环球技术有限公司 | Method, apparatus and device for identifying slice breathing phases and constructing computed tomography (CT) three-dimensional images |
CN104739510A (en) * | 2014-11-24 | 2015-07-01 | 中国科学院苏州生物医学工程技术研究所 | New method for establishing corresponding relation between sequence images and respiratory signals |
CN106580324A (en) * | 2016-11-07 | 2017-04-26 | 广州视源电子科技股份有限公司 | Respiratory signal extraction method and device |
CN108294768A (en) * | 2017-12-29 | 2018-07-20 | 华中科技大学 | The X-ray angiocardiography of sequence image multi-parameter registration subtracts image method and system |
CN108830155A (en) * | 2018-05-10 | 2018-11-16 | 北京红云智胜科技有限公司 | A kind of heart coronary artery segmentation and knowledge method for distinguishing based on deep learning |
CN109146842A (en) * | 2018-07-09 | 2019-01-04 | 南方医科大学 | A kind of breath Motion Estimation method in chest digit synthesis X-ray Tomography |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI458464B (en) * | 2012-02-01 | 2014-11-01 | Nat Univ Tsing Hua | Medical ventilator capable of early detecting and recognizing types of pneumonia, gas recognition chip, and method for recognizing gas thereof |
CN106022258A (en) * | 2016-05-18 | 2016-10-12 | 成都济森科技有限公司 | Digital stethoscope and method for filtering heart sounds and extracting lung sounds |
CN109009125B (en) * | 2018-06-07 | 2019-05-28 | 上海交通大学 | Driver's fine granularity monitoring of respiration method and system based on audio frequency of mobile terminal |
2019-03-20 CN CN201910213523.7A patent/CN110009604B/en active Active
Non-Patent Citations (1)
Title |
---|
Respiration Monitoring through Thoraco-abdominal Video with an LSTM; Vidyadhar Upadhya; IEEE; 2016-12-19; Abstract, Sections 1-3 *
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||