CN110599499B - MRI image heart structure segmentation method based on multipath convolutional neural network - Google Patents
- Publication number: CN110599499B (application CN201910780248.7A)
- Authority: CN (China)
- Legal status: Expired - Fee Related (assumed status, not a legal conclusion)
Classifications
- G06N3/045: Combinations of networks (neural network architectures)
- G06N3/08: Neural network learning methods
- G06T7/11: Region-based segmentation
- G06T7/174: Segmentation or edge detection involving the use of two or more images
- G06T2207/10088: Image acquisition modality: magnetic resonance imaging [MRI]
- G06T2207/20076: Probabilistic image processing
- G06T2207/20221: Image fusion; image merging
- G06T2207/30048: Subject of image: heart; cardiac
Abstract
The invention relates to an MRI image cardiac structure segmentation method based on a multi-path convolutional neural network. Cardiac cine MRI training data are collected from normal subjects and cardiac patients, and experienced doctors manually annotate the cardiac structures in the training data to serve as the cardiac segmentation reference labels. A cardiac region extraction model is trained on these data so that it can accurately extract the cardiac region, and a cardiac segmentation network is then trained on the regions extracted from the training data to segment the various structures of the heart; the reference segmentation labels are used as the standard for measuring the segmentation performance of the constructed network. Because the heart is extracted by a cardiac region extraction model based on a generative adversarial network, the accuracy of cardiac region extraction is improved; at the same time, the multi-path convolutional neural network exploits context information between adjacent layers, improving segmentation precision and accuracy.
Description
Technical Field
The invention relates to the field of medical image processing, in particular to an MRI image heart structure segmentation method based on a multipath convolutional neural network.
Background
According to the World Health Organization, cardiovascular diseases are the most fatal diseases worldwide; about 19.7 million people died of cardiovascular disease in 2016. In clinical settings, cardiac function analysis plays an important role in cardiac disease diagnosis, risk assessment, patient management, and treatment decisions. It usually takes the form of a quantitative analysis of global or local cardiac function, performed with the aid of digital images of the heart by evaluating a series of clinical indices such as ventricular volume, ejection fraction, stroke volume, and myocardial mass. Because of its good soft-tissue contrast, cine MRI has become the gold standard for evaluating left and right ventricular ejection fraction, stroke volume, left ventricular mass, and myocardial thickness; evaluating these quantitative indices requires accurate segmentation of the left ventricular endocardium and epicardium, and of the right ventricular endocardium, at both the end-diastolic and end-systolic phases. In clinical practice, manual segmentation by doctors is time-consuming and labor-intensive, and the results of two segmentations by different doctors, or even by the same doctor, vary greatly with the doctor's experience. Therefore, an accurate automatic segmentation method is urgently needed.
At present, cardiac structures in cardiac cine MRI images are segmented by manual methods, traditional automatic methods, and deep-learning-based methods. In manual segmentation, a doctor traces the two-dimensional images layer by layer and then analyzes and diagnoses from the traced contours. However, the manual labeling workload is large, time-consuming, labor-intensive, and poorly reproducible, so the labeling capacity of doctors cannot meet the needs of the large number of potential patients; moreover, because annotators differ greatly in expertise and experience, manual segmentation results vary widely and their quality cannot be guaranteed.
Among traditional automatic segmentation methods, image-based or deformable-model-based methods complete the segmentation interactively with the user, and the segmentation results must be confirmed and the labels adjusted manually. Model-based methods such as active shape models and atlas models can automate the segmentation and reduce user interaction by building a general model from large amounts of data. However, image-based and deformable-model-based cardiac segmentation typically requires user interaction, is less robust, and has low segmentation accuracy. And although model-based methods such as active shape models and atlas models reduce user interaction, heart shape and dynamics vary widely across individuals (both normal subjects and patients with heart disease), so building a general model covering all possible chamber shapes is difficult; these methods therefore suffer from poor universality and generalization.
With the development of deep learning in recent years, deep-learning-based segmentation has been introduced into cardiac MRI image segmentation. Such methods automatically extract features from the original cardiac image to complete the segmentation, generally without user interaction. Deep learning can produce fairly accurate fully automatic segmentation results, but most existing deep-learning cardiac segmentation methods are 2D and do not consider inter-layer context information, which is very valuable for accurate segmentation and improved performance. Ignoring inter-layer context also does not match the clinician's actual workflow. Meanwhile, because cardiac cine MRI is acquired with thick slices and large inter-slice spacing, directly exploiting inter-layer context with a 3D segmentation method is computationally expensive and may not bring a performance gain.
Therefore, how to improve the accuracy of automatic segmentation of cardiac cine MRI images and improve the segmentation performance becomes an urgent problem to be solved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an MRI image heart structure segmentation method based on a multipath convolutional neural network, which comprises the following steps:
step 1: collecting cardiac cine MRI training data, including normal cardiac cine MRI images of normal persons and abnormal cardiac cine MRI images of cardiac patients, the cardiac cine MRI training data including cine MRI images of diastolic and systolic phases;
step 2: an experienced doctor manually labels the cardiac structures in the cardiac cine MRI training data layer by layer, and the labeling result is taken as the cardiac segmentation standard result;
step 3: cardiac region extraction and generative adversarial network training: design and establish a cardiac region extraction model based on a generative adversarial network, and use it to extract cardiac region images from the collected cardiac cine MRI training data, wherein each extracted cardiac region image is formed by stacking a plurality of cardiac MRI slice images;
step 4: train a cardiac segmentation network based on a multi-path convolutional neural network, designing and establishing a deep convolutional segmentation network that combines inter-layer context information, comprising the following steps:
step 41: segmenting the heart structure of the heart region extracted in the step 3, and taking MRI slice image information of an adjacent layer and the existing segmentation result information of an adjacent upper layer as interlayer context information in an iterative mode;
step 42: respectively inputting the interlayer context information into respective corresponding feature extraction branches, wherein each feature extraction branch adopts an independent parallel structure, namely each interlayer context feature extraction branch adopts the same network structure but independently processes the corresponding interlayer context information;
step 43: each branch extracts high-level abstract features of the image by stacking multiple convolution and pooling operations, and the features are fused by a feature fusion module;
step 44: the feature fusion module first concatenates the high-level abstract features extracted by each feature extraction branch, then further fuses them through an ASPP module; through the decoding module's up-sampling, local detail compensation, and convolution operations, the fused features are restored to the size of the image input to the segmentation network, yielding an end-to-end dense probability map for simultaneous multi-structure segmentation, from which the class of each pixel is determined to obtain the final segmentation result;
step 5: compare the segmentation result obtained by the cardiac structure segmentation network in step 4 with the cardiac standard segmentation result obtained in step 2, and quantitatively evaluate the segmentation through performance evaluation indices;
step 6: collect the cardiac cine MRI data to be segmented, extract the cardiac region with the cardiac region extraction model trained in step 3 and record the extraction position information, input the MRI slice images of the extracted cardiac region into the cardiac segmentation network trained in step 4, iterate sequentially from the cardiac base to the apex to complete segmentation of the cardiac region volume data, and eliminate any discontinuous scattered regions of each cardiac structure in the segmentation result to obtain an initial segmentation result of the cardiac structures;
step 7: restore the initial segmentation result of the cardiac structures to the original image size according to the recorded extraction position information to obtain the final segmentation result.
According to a preferred embodiment, the cardiac region extraction model in step 3 comprises a generator and a discriminator,
the generator takes a cardiac MRI slice image as input and adopts an encoder-decoder structure, i.e., features are extracted through convolution, pooling, and down-sampling, and a pseudo cardiac contour image of the same size as the input cardiac MRI slice image is then generated through up-sampling, local detail compensation, and convolution operations;
the discriminator takes as input either a cardiac MRI slice image paired with its corresponding real cardiac region contour image, or a cardiac MRI slice image paired with a pseudo cardiac contour image produced by the generator; it extracts features through convolution and pooling operations and discriminates whether the input cardiac contour image is real or generated by the generator;
after the generative adversarial network is trained, accurate cardiac contour images corresponding to all cardiac MRI slice images are produced by the generator, and the position of the heart on each image is located, so that the three-dimensional cardiac region image is extracted.
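As an illustrative sketch only (the patent does not specify its loss functions), the adversarial objective described above can be written with the standard binary cross-entropy losses: the discriminator is trained to score real (slice, contour) pairs high and generated pairs low, while the generator is trained to fool it. The NumPy formulation and function names below are assumptions for illustration.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of predicted probabilities p against target (0 or 1)."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def discriminator_loss(d_real, d_fake):
    # The discriminator should score real (slice, contour) pairs near 1
    # and generator-produced pairs near 0.
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # The generator is rewarded when the discriminator mistakes its
    # pseudo contour images for real ones.
    return bce(d_fake, 1.0)

# Toy check: a discriminator that scores correctly has a low loss, while the
# generator is left with a high loss to drive its updates.
d_real = np.array([0.95, 0.9])   # discriminator scores on real pairs
d_fake = np.array([0.05, 0.1])   # discriminator scores on generated pairs
print(discriminator_loss(d_real, d_fake) < generator_loss(d_fake))  # True
```

In a full implementation these losses would be minimized alternately over the two networks' parameters; here only the scalar objectives are shown.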
The invention has the beneficial effects that:
1. the invention adopts a cardiac region extraction model based on a generative adversarial network, which can automatically and accurately extract the cardiac region and record the extraction position without manual interaction, and has better universality and generalization capability.
2. According to the technical scheme, in the training of the neural network for segmenting the heart structure, the precision and the accuracy of heart structure segmentation are improved by using the interlayer context information, namely the spatial correlation information between adjacent layers, and the trained neural network can complete automatic segmentation of the heart structure.
3. Directly exploiting inter-layer context information through a 3D segmentation method incurs high computation cost and limited performance on a limited data set; the invention instead processes the corresponding inter-layer context information with independent parallel branches under a 2D segmentation framework, effectively improving segmentation performance.
Drawings
FIG. 1 is a flow chart of a method for automatic segmentation of cardiac structures in accordance with the present invention;
FIG. 2 is a working-principle diagram of extracting the cardiac region based on a generative adversarial network according to the present invention; and
fig. 3 is a diagram of the working principle of the heart segmentation network based on the multi-path convolution neural network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
The cardiac cine MRI images in the present invention refer to images acquired by cardiac cine MRI; they are one type of cardiac MRI image.
The ASPP module in the present invention refers to atrous spatial pyramid pooling: a pyramid pooling module built from dilated (atrous) convolutions.
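As a hedged toy sketch of what such a module computes (not the patent's implementation), the following NumPy code applies parallel 3x3 convolutions at several dilation rates to a single feature map and stacks the results channel-wise; the averaging kernel and the rates (1, 2, 4) are assumptions for illustration.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded dilated convolution of a single-channel image (stride 1)."""
    k = kernel.shape[0]               # kernel size (assumed square, odd)
    pad = rate * (k // 2)             # padding for the dilated receptive field
    xp = np.pad(x, pad, mode="constant")
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * rate : i * rate + x.shape[0],
                                     j * rate : j * rate + x.shape[1]]
    return out

def aspp(x, rates=(1, 2, 4)):
    """Apply parallel dilated convolutions and stack the outputs channel-wise."""
    kernel = np.full((3, 3), 1.0 / 9.0)   # toy averaging kernel, shared by all rates
    return np.stack([dilated_conv2d(x, kernel, r) for r in rates], axis=0)

x = np.random.rand(16, 16)
feats = aspp(x)
print(feats.shape)  # (3, 16, 16)
```

Larger dilation rates enlarge the receptive field without extra parameters, which is the point of the pyramid: the stacked channels see the same location at several spatial scales.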
Cardiac magnetic resonance cine imaging is a common cardiac MRI technique that rapidly acquires multiple images at specific phases of the cardiac cycle and displays them in the form of a movie. It can be used to assess not only ventricular function and wall abnormalities, but also the morphology and function of the heart valves.
The original image in the present invention refers to the image obtained by the original magnetic resonance scan, i.e., the collected cardiac cine MRI image.
Aiming at the defects in the prior art, the invention provides an MRI image heart structure segmentation method based on a multipath convolutional neural network, and as shown in figure 1, the method comprises the following steps:
step 1: cardiac cine MRI training data are collected, including normal cardiac cine MRI images of normal subjects and abnormal cardiac cine MRI images of cardiac patients. The cardiac cine MRI images are acquired at the diastolic and systolic phases of the cardiac cycle. The cardiac cine MRI data obtained by scanning are volume data, i.e., composed of multiple MRI slices.
Heart data of both normal subjects and cardiac patients are used to ensure the generalization capability of the technical scheme, i.e., the automatic segmentation method of the invention is suitable for segmenting not only normal but also abnormal cardiac structures.
Data of the diastolic and systolic phases are used in the training stage because, on the one hand, these phases are labeled and, more importantly, segmentation of the cardiac structures in these two phases is usually the one of clinical interest.
Step 2: an experienced doctor manually labels the cardiac structures in the cardiac cine MRI training data layer by layer, and the labeling result is taken as the cardiac segmentation standard result; the structures to be segmented include both normal and abnormal cardiac structures. The labeled structures mainly comprise the left ventricle, the right ventricle, and the myocardium; in practical applications the left ventricular endocardium, epicardium, and so on may also be labeled. Specifically, layer by layer refers to the MRI slices from the cardiac base to the apex, and all training data are labeled.
The cardiac cine MRI image data collected in practice cover a large scanning range around the heart, so the cardiac region occupies a relatively small proportion of the image. For computational efficiency, and to mitigate the class-imbalance problem to some extent, the cardiac region is extracted before cardiac structure segmentation so that the extracted region occupies a large proportion of the image; this also reduces the computation of the subsequent segmentation processing.
Therefore, before segmentation, the technical scheme of the invention designs and establishes a cardiac region extraction model based on a generative adversarial network and trains it on the cardiac cine MRI training data to extract the cardiac region, as described in step 3.
Step 3: cardiac region extraction and generative adversarial network training: design and establish a cardiac region extraction model based on a generative adversarial network, and extract the cardiac image from the collected cardiac cine MRI training data; the extracted cardiac region is a three-dimensional image formed by stacking multiple MRI slice images.
A generative adversarial network is a deep learning model comprising a generator and a discriminator; its working principle is shown in fig. 2. Cardiac region extraction may be performed manually or automatically. Since generative adversarial networks have achieved strong performance in image processing in recent years, the invention applies one to the cardiac region extraction model to realize automatic extraction with excellent extraction performance.
The generator takes a cardiac MRI slice image from the training data as input and adopts an encoder-decoder structure: convolution and pooling operations first down-sample the image and extract features, and up-sampling, local detail compensation, and convolution operations then generate a pseudo cardiac contour image of the same size as the input slice. In the training stage of the cardiac region extraction model, the generator produces pseudo contour images, while the doctor-labeled images are the real cardiac region contour images. The goal of the adversarial training is for the generator to produce realistic contour images, i.e., after training is complete the generator ideally produces contour images consistent with the doctor's labels.
Extracting the cardiac region based on a generative adversarial network has not been adopted in the prior art; the technical scheme of the invention mainly uses the generative adversarial network to realize automatic cardiac region extraction without manual interaction or hand-designed features.
The discriminator takes as input either a cardiac MRI slice image paired with its corresponding real cardiac region contour image, or a cardiac MRI slice image paired with a generator-produced pseudo contour image. It extracts features through convolution and pooling operations to determine whether the contour image input to it is real or generated by the generator.
With the optimization target that the generator produce segmentation results approximating the real cardiac contour images while the discriminator accurately distinguishes real contour images from the generator's pseudo contour images, the cardiac region extraction generative adversarial network is trained so that the generator finally produces highly accurate cardiac contour images. After training is complete, the generator is applied to produce accurate contour images for all cardiac MRI slices, locating the position of the heart on each image, so that the three-dimensional cardiac region is extracted. The invention treats cardiac region extraction as a pixel-level binary classification, i.e., an image segmentation problem, draws on methods that apply generative adversarial networks to image segmentation, and uses image pairs as the discriminator's input.
In a preferred embodiment, whether training of the generative adversarial network is finished can be judged by specifying the number of training iterations and examining the training loss curve.
The generative adversarial network is used to extract the cardiac region in two steps: first, determine where the heart is in the image; second, extract the corresponding region. For the first step, the heart position is determined by the generative adversarial network: after training, the generator can produce an accurate cardiac contour image, i.e., it is known which position on the image is the heart. In the second step, the located cardiac region is extracted.
After the generative adversarial network is trained, the cardiac region is extracted based on the located heart position as follows: for cardiac cine MRI data whose cardiac region is to be extracted, the trained generator first produces the cardiac contour images corresponding to all MRI slices; rectangular regions enclosing the heart are then generated from these contours, the largest rectangle over all slices is selected, and its length and width are each expanded by 0.3 times on that basis to ensure that the complete cardiac region is covered; the expanded rectangle is taken as the region to extract. Finally, this region is extracted from all slices and the slices are stacked together, yielding a three-dimensional cardiac region image for training the subsequent cardiac segmentation network.
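The extraction rule above can be sketched as follows, assuming the generator's contours are available as binary masks. Splitting the 0.3 expansion evenly between the two sides of each axis, and the NumPy helper names, are assumptions for illustration.

```python
import numpy as np

def contour_bbox(mask):
    """Bounding box (rmin, rmax, cmin, cmax) of the nonzero region of a mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return rmin, rmax, cmin, cmax

def roi_from_slices(masks, expand=0.3):
    """Union of per-slice boxes, expanded by `expand`, clipped to the image."""
    h, w = masks[0].shape
    boxes = [contour_bbox(m) for m in masks if m.any()]
    rmin = min(b[0] for b in boxes); rmax = max(b[1] for b in boxes)
    cmin = min(b[2] for b in boxes); cmax = max(b[3] for b in boxes)
    dr = int(round((rmax - rmin + 1) * expand / 2))   # split the expansion
    dc = int(round((cmax - cmin + 1) * expand / 2))   # evenly on both sides
    return (max(0, rmin - dr), min(h - 1, rmax + dr),
            max(0, cmin - dc), min(w - 1, cmax + dc))

masks = [np.zeros((100, 100), dtype=bool) for _ in range(3)]
masks[1][40:60, 30:70] = True          # pretend contour on the middle slice
print(roi_from_slices(masks))
```

The same rectangle would then be cropped from every slice and the crops stacked into the three-dimensional cardiac region image.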
Step 4: train the cardiac segmentation network based on the multi-path convolutional neural network, designing and establishing a deep convolutional segmentation network that combines inter-layer context information. The working principle of cardiac structure segmentation based on the multi-path convolutional neural network is shown in fig. 3.
Step 41: segment the cardiac structures of the cardiac region extracted in step 3, taking the MRI slice image information of the adjacent layers and the existing segmentation result of the adjacent upper layer as inter-layer context information in an iterative manner.
In a preferred embodiment, the MRI slice images of the two adjacent layers (the layer above and the layer below) and the existing segmentation result of the adjacent upper layer are used as the inter-layer context information.
Step 42: the inter-layer context information is input into the respective feature extraction branches, each of which adopts an independent parallel structure, i.e., every inter-layer context feature extraction branch uses the same network structure but independently processes one corresponding piece of inter-layer context information. Each context feature extraction branch performs feature extraction through convolution and pooling operations.
Specifically, the number of inter-layer context feature extraction branches is determined by the number of pieces of inter-layer context information to be processed. In a preferred embodiment, if the MRI slice images of the adjacent upper and lower layers and the segmentation result of 1 adjacent upper layer are taken as the inter-layer context information, there are 3 inter-layer context feature extraction branches; adding the branch for the MRI slice currently being segmented gives 4 feature extraction branches in total. Fig. 3 is a schematic diagram of cardiac structure segmentation with the 4 feature extraction branches of this embodiment: input M[i+1] denotes the segmentation result of the adjacent upper MRI slice, inputs S[i-1] and S[i+1] denote the adjacent lower and upper MRI slice images, and input S[i] denotes the MRI slice image currently to be segmented.
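A minimal sketch of assembling the four branch inputs S[i-1], S[i], S[i+1], and M[i+1] for slice i, under the assumption (not stated in the patent) that missing neighbors at the volume boundary are zero-filled:

```python
import numpy as np

def branch_inputs(slices, prev_masks, i):
    """Gather the four branch inputs for slice i: S[i-1], S[i], S[i+1], M[i+1].

    `prev_masks[j]` holds the already-computed segmentation of slice j, or None
    if that slice has not been segmented yet; boundary neighbors are zero-filled
    (an assumption for this sketch).
    """
    n = len(slices)
    zeros = np.zeros_like(slices[i])
    s_below = slices[i - 1] if i > 0 else zeros
    s_above = slices[i + 1] if i < n - 1 else zeros
    m_above = (prev_masks[i + 1]
               if i < n - 1 and prev_masks[i + 1] is not None else zeros)
    return {"S[i-1]": s_below, "S[i]": slices[i],
            "S[i+1]": s_above, "M[i+1]": m_above}
```

Each entry of the returned dictionary would be fed to its own parallel feature extraction branch, matching the independent-branch design described above.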
Step 43: each branch extracts the high-level abstract features of the image by stacking multiple convolution and pooling operations, and the extracted features are combined by a feature fusion module.
Step 44: the feature fusion module first concatenates the high-level abstract features extracted by each branch, then further fuses them through a designed ASPP module. Through the up-sampling, local detail information compensation and convolution operations of a decoding module, the fused features are restored to the size of the image input into the heart structure segmentation network, yielding an end-to-end dense multi-structure joint segmentation probability map. For each pixel of the MRI slice image to be segmented, this map gives the probability that the pixel belongs to each class; the class of each pixel is determined from the probability map, producing the final segmentation result.
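As a hedged illustration of the multi-rate idea behind an ASPP (atrous spatial pyramid pooling) module, and not the patent's own implementation, the following numpy sketch applies the same 3×3 kernel at several dilation rates and concatenates the responses; `dilated_conv2d` is a hypothetical helper written for clarity rather than speed.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2-D convolution of a single-channel image with a
    dilated (atrous) kernel; dilation inserts rate-1 zeros between taps,
    enlarging the receptive field without extra parameters."""
    kh, kw = kernel.shape
    pad = rate * (kh // 2)
    xp = np.pad(x, pad)
    out = np.zeros(x.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * rate : i * rate + x.shape[0],
                                     j * rate : j * rate + x.shape[1]]
    return out

x = np.arange(64, dtype=float).reshape(8, 8)
k = np.full((3, 3), 1.0 / 9.0)       # a simple averaging tap for illustration
# Parallel atrous branches at different rates, then concatenation,
# mirrors the multi-rate structure of an ASPP module.
aspp = np.stack([dilated_conv2d(x, k, r) for r in (1, 2, 4)])
```

In a real network each rate would use its own learned kernel; the point here is only that the same-sized output maps from several dilation rates can be stacked and fused.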
Step 5: the segmentation result produced by the heart structure segmentation network is compared with the heart standard segmentation result obtained in step 2, and the segmentation quality is quantitatively evaluated through performance evaluation indices. The method adopts the Dice coefficient and/or the ASSD (average symmetric surface distance) index for this quantitative evaluation.
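The Dice coefficient named in step 5 can be computed for a pair of binary masks as below; this is a generic sketch rather than the patent's code. The ASSD would additionally require extracting surface points and computing distance transforms (e.g. with `scipy.ndimage.distance_transform_edt`), which is omitted here.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2 * |P ∩ G| / (|P| + |G|) for a pair of binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 4x4 example: the prediction covers 4 pixels, the ground truth 6,
# and they overlap on 4 pixels, so Dice = 2*4 / (4+6) = 0.8.
pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1
gt = np.zeros((4, 4), int); gt[1:3, 1:4] = 1
```

A Dice value of 1 indicates perfect overlap with the standard segmentation, and 0 indicates no overlap.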
Step 6: cardiac cine MRI data to be segmented are collected, the cardiac region is extracted by the cardiac region extraction network trained in step 3, and the extraction position information is recorded. The extracted heart region is three-dimensional volume data formed by stacking multiple MRI slice images. The MRI slice images of the extracted heart region are input into the deep convolutional segmentation network trained in step 4 and iterated in order from the heart base to the heart apex to complete the segmentation of the cardiac region volume data; discontinuous scattered regions that may exist in each structure of the segmentation result are then eliminated to obtain an initial segmentation result of the heart structure.
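The elimination of discontinuous scattered regions can be illustrated, under the assumption that only the largest 4-connected component of a structure's mask is kept, by the following sketch. The breadth-first labeling and the name `keep_largest_component` are illustrative; a practical implementation would typically use `scipy.ndimage.label` instead.

```python
from collections import deque
import numpy as np

def keep_largest_component(mask):
    """Keep only the largest 4-connected component of a binary mask,
    discarding small disconnected scattered regions."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    best_size = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                # Flood-fill one connected component with BFS.
                comp, q = [], deque([(i, j)])
                visited[i, j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros_like(mask, dtype=bool)
                    for y, x in comp:
                        best[y, x] = True
    return best

mask = np.zeros((6, 6), int)
mask[1:4, 1:4] = 1   # main 9-pixel region
mask[5, 5] = 1       # isolated scattered pixel, to be removed
cleaned = keep_largest_component(mask)
```

Applied per cardiac structure label, this enforces the anatomical expectation that each structure forms one contiguous region in a slice.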
Step 6 applies the method proposed by the present invention to perform the segmentation of the cardiac structure.
Step 7: the initial segmentation result of the heart structure is restored to the original image size according to the recorded extraction position information, giving the final segmentation result. In practical applications the segmentation result can be inspected without restoring it to the original image size; the restoration only makes the result's size match the original image and does not affect the segmentation itself.
For example, suppose the original image is 400 × 400 and the extracted heart region is 20 × 20, located at the center of the original image. The area outside the 20 × 20 region is background by default, i.e., it is segmented as non-cardiac structure. The initial segmentation of the heart structure is a segmentation of the 20 × 20 image, while the final segmentation should be the corresponding segmentation of the 400 × 400 image. Therefore, the heart extraction position must be recorded during segmentation so that the location of the heart segmentation result within the original image is known, for example the center, the lower left corner, or the lower right corner.
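The restoration in this example can be sketched as a paste-back operation; the function name `restore_to_original` and the `(row, column)` top-left convention are assumptions for illustration, not the patent's API.

```python
import numpy as np

def restore_to_original(seg, original_shape, top_left):
    """Paste a cropped segmentation back at the recorded extraction
    position; everything outside the crop defaults to background (0),
    i.e., non-cardiac structure."""
    full = np.zeros(original_shape, dtype=seg.dtype)
    y0, x0 = top_left
    h, w = seg.shape
    full[y0:y0 + h, x0:x0 + w] = seg
    return full

# 400x400 original, 20x20 crop at the image center -> top-left (190, 190).
seg = np.ones((20, 20), np.uint8)
full = restore_to_original(seg, (400, 400), (190, 190))
```

Recording only the top-left corner of the extraction window is enough to place the result for any of the positions mentioned above (center, lower left corner, lower right corner).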
It should be noted that the above-mentioned embodiments are exemplary; those skilled in the art, having the benefit of the present disclosure, may devise various solutions that fall within the scope of the present disclosure. It should be understood that the specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.
Claims (2)
1. A method for segmenting a cardiac structure of an MRI image based on a multi-path convolutional neural network, the method comprising:
step 1: collecting cardiac cine MRI training data, including normal cardiac cine MRI images of healthy subjects and abnormal cardiac cine MRI images of cardiac patients, the cardiac cine MRI training data including cine MRI images of both the diastolic and systolic phases;
step 2: manually labeling, layer by layer, the heart structure in the cardiac cine MRI training data by an experienced doctor, and taking the labeling result as the heart segmentation standard result;
step 3: cardiac region extraction based on generative adversarial network training: designing and establishing a cardiac region extraction model based on a generative adversarial network, and extracting cardiac region images from the collected cardiac cine MRI training data, wherein the extracted cardiac region images are formed by stacking a plurality of cardiac MRI slice images;
the cardiac region extraction model includes a generator and a discriminator:
the generator takes a cardiac MRI slice image as input and adopts an encoder-decoder structure, i.e., features are extracted through convolution, pooling and down-sampling operations, and a pseudo cardiac contour image of the same size as the input cardiac MRI slice image is then generated through up-sampling, local detail compensation and convolution operations;
the discriminator takes as input either a cardiac MRI slice image paired with its corresponding real cardiac region contour image, or a cardiac MRI slice image paired with a pseudo cardiac contour image produced by the generator; it extracts features through convolution and pooling operations and discriminates whether the input cardiac contour image is real or generated by the generator;
step 4: training a heart segmentation network based on a multipath convolutional neural network: designing and establishing a deep convolutional segmentation network that incorporates interlayer context information, comprising the following steps:
step 41: segmenting the heart structure of the heart region extracted in step 3, and taking the MRI slice image information of the adjacent layers and the existing segmentation result information of the adjacent upper layer as interlayer context information in an iterative manner;
step 42: respectively inputting the interlayer context information into respective corresponding feature extraction branches, wherein each feature extraction branch adopts an independent parallel structure, namely each interlayer context feature extraction branch adopts the same network structure but independently processes the corresponding interlayer context information;
step 43: each branch extracts high-level abstract features of the image by superposing a plurality of convolution and pooling operations and is fused by a feature fusion module;
step 44: the feature fusion module first concatenates the high-level abstract features extracted by each feature extraction branch, and then further fuses the high-level features through an ASPP module; through the up-sampling, local detail information compensation and convolution operations of a decoding module, the fused features are restored to the size of the image input into the segmentation network, an end-to-end dense multi-structure simultaneous segmentation probability map is obtained, and the class of each pixel is determined from the probability map to obtain a final segmentation result;
step 5: comparing the segmentation result obtained by the heart structure segmentation network in step 4 with the heart standard segmentation result obtained in step 2, and quantitatively evaluating the segmentation result through performance evaluation indices;
step 6: collecting cardiac cine MRI data to be segmented, extracting the cardiac region using the cardiac region extraction model trained in step 3, recording the extraction position information, inputting the MRI slice images of the extracted cardiac region into the cardiac segmentation network trained in step 4, iterating in order from the heart base to the heart apex to complete the segmentation of the cardiac region volume data, and eliminating discontinuous scattered regions that may exist in each cardiac structure of the segmentation result to obtain an initial segmentation result of the cardiac structure;
step 7: restoring the initial segmentation result of the cardiac structure to the original image size according to the recorded extraction position information to obtain a final segmentation result.
2. The method for segmenting the cardiac structure of an MRI image according to claim 1, wherein after the generative adversarial network is trained, the generator generates an accurate cardiac contour image corresponding to each cardiac MRI slice image, locates the heart in the image, and extracts the three-dimensional cardiac region image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910780248.7A CN110599499B (en) | 2019-08-22 | 2019-08-22 | MRI image heart structure segmentation method based on multipath convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110599499A CN110599499A (en) | 2019-12-20 |
CN110599499B true CN110599499B (en) | 2022-04-19 |
Family
ID=68855269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910780248.7A Expired - Fee Related CN110599499B (en) | 2019-08-22 | 2019-08-22 | MRI image heart structure segmentation method based on multipath convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110599499B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111598838B (en) * | 2020-04-22 | 2023-04-07 | 中南民族大学 | Automatic heart MR image segmentation method and device, electronic equipment and storage medium |
CN112116625B (en) * | 2020-08-25 | 2024-10-15 | 澳门科技大学 | Automatic cardiac CT image segmentation method, device and medium based on contradiction labeling method |
EP4208851A4 (en) * | 2020-09-02 | 2024-10-16 | Singapore Health Serv Pte Ltd | Image segmentation system and method |
CN112561921B (en) * | 2020-11-10 | 2024-07-26 | 联想(北京)有限公司 | Image segmentation method and device |
CN112949470A (en) * | 2021-02-26 | 2021-06-11 | 上海商汤智能科技有限公司 | Method, device and equipment for identifying lane-changing steering lamp of vehicle and storage medium |
CN113222996A (en) * | 2021-03-03 | 2021-08-06 | 中南民族大学 | Heart segmentation quality evaluation method, device, equipment and storage medium |
CN113362345B (en) * | 2021-06-30 | 2023-05-30 | 武汉中科医疗科技工业技术研究院有限公司 | Image segmentation method, device, computer equipment and storage medium |
CN113781343A (en) * | 2021-09-13 | 2021-12-10 | 叠境数字科技(上海)有限公司 | Super-resolution image quality improvement method |
CN114066863B (en) * | 2021-11-22 | 2024-09-13 | 沈阳东软智能医疗科技研究院有限公司 | Method and device for training ventricle area segmentation model and determining template image |
CN116228802B (en) * | 2023-05-05 | 2023-07-04 | 济南科汛智能科技有限公司 | Cardiac MRI auxiliary imaging control method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106600621A (en) * | 2016-12-08 | 2017-04-26 | 温州医科大学 | Space-time cooperation segmentation method based on infant brain tumor multi-modal MRI graph |
CN106683104A (en) * | 2017-01-06 | 2017-05-17 | 西北工业大学 | Prostate magnetic resonance image segmentation method based on integrated depth convolution neural network |
CN109377520A (en) * | 2018-08-27 | 2019-02-22 | 西安电子科技大学 | Cardiac image registration arrangement and method based on semi-supervised circulation GAN |
CN109584254A (en) * | 2019-01-07 | 2019-04-05 | 浙江大学 | A kind of heart left ventricle's dividing method based on the full convolutional neural networks of deep layer |
CN110120051A (en) * | 2019-05-10 | 2019-08-13 | 上海理工大学 | A kind of right ventricle automatic division method based on deep learning |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096632A (en) * | 2016-06-02 | 2016-11-09 | 哈尔滨工业大学 | Based on degree of depth study and the ventricular function index prediction method of MRI image |
CN110475505B (en) * | 2017-01-27 | 2022-04-05 | 阿特瑞斯公司 | Automatic segmentation using full convolution network |
US10595727B2 (en) * | 2018-01-25 | 2020-03-24 | Siemens Healthcare Gmbh | Machine learning-based segmentation for cardiac medical imaging |
Non-Patent Citations (3)
Title |
---|
A deep learning network for right ventricle segmentation in short-axis MRI; Gongning Luo et al.; 2016 Computing in Cardiology Conference; 20170302; 1-5 * |
Research on cardiac image segmentation methods based on deep learning; Chen Jun; China Masters' Theses Full-text Database (Information Science and Technology); 20190715 (No. 7); I138-1137 * |
Ultrasound cardiac image segmentation based on self-organizing neural networks; Wang Tianfu et al.; Chinese Journal of Biomedical Engineering; 20000929; Vol. 19, No. 3; 356-360 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20220419 |