CN111583204B - Organ localization method for two-dimensional serial magnetic resonance images based on a network model - Google Patents
- Publication number: CN111583204B (application CN202010344910.7A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/0012 - Biomedical image inspection
- G06N3/045 - Combinations of networks
- G06N3/084 - Backpropagation, e.g. using gradient descent
- G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10088 - Magnetic resonance imaging [MRI]
- G06T2207/30081 - Prostate
Description
Technical Field
The invention relates to methods for locating organs in magnetic resonance images, and in particular to a network-model-based organ localization method for two-dimensional serial magnetic resonance images.
Background
Owing to its superior spatial resolution and tissue contrast, magnetic resonance (MR) imaging has become the primary imaging modality for assisted prostate diagnosis. Compared with transrectal ultrasound (TRUS), it facilitates targeted biopsy and therapy, which is important for localizing prostate tumor lesions, assessing gland volume, and staging prostate cancer. At present, however, prostate MR examination relies on a radiologist's visual inspection of every slice image, making it a time-consuming, tedious, and somewhat subjective task.
Organ localization is important for many medical image processing tasks, such as image registration, organ segmentation, and lesion detection. A reliable estimate of an organ's initial position can greatly improve the performance of subsequent processing. In organ segmentation, for example, an initial localization of the organ focuses the segmentation task on the region of interest, which speeds up segmentation, reduces memory usage, and lowers the risk of false-positive segmentations.
A number of semi-automatic and fully automatic methods have been proposed for segmenting and detecting organs and tissues in medical images for computer-aided diagnosis. Prostate segmentation and detection nevertheless remain very challenging: differences in scanners and scanning protocols introduce heterogeneity in image brightness, imaging artifacts, and signal intensity around the endorectal coil, while the gland itself varies in size and shape, shows low contrast against the surrounding tissue structures, and often lacks strong boundaries.
In recent years, region-based two-stage object detection algorithms, led by Faster R-CNN, have been widely adopted in medical image processing for their excellent detection accuracy and good detection efficiency. However, because these networks were originally designed for object detection in natural images and still have limitations in detecting small objects, they cannot by themselves produce a unique, accurate localization of an organ in magnetic resonance images.
Summary of the Invention
To overcome the shortcomings of the prior art, the present invention provides an organ localization method for two-dimensional serial magnetic resonance images based on an improved Faster R-CNN, which fully exploits the similarities and differences between natural and medical image features to optimize organ localization accuracy and detection success rate.
To this end, the present invention adopts the following technical solution:
A network-model-based organ localization method for two-dimensional serial magnetic resonance images, comprising the following steps:
S1, dataset preparation:
Collect multiple two-dimensional serial magnetic resonance images of a given organ, annotate the organ region in each image with a rectangular target box, and split the images into a training set and a validation set.
S2, dataset preprocessing: apply, in order, min-max pixel intensity normalization, center cropping, and image size normalization to each of the two-dimensional serial MR images.
S3, apply data augmentation to the training images from step S2 and the rectangular target box annotations of the corresponding organs.
S4, build the improved Faster R-CNN-based organ localization network model, comprising the following steps:
1) Build the improved Faster R-CNN detection architecture, replacing the VGG16 backbone of the classic Faster R-CNN with a ResNet-50 carrying a spatial attention mechanism, and train the ResNet-50 on the large-scale ImageNet natural image dataset to obtain the network's initial weight parameters.
2) Feed the augmented training images from step S3 and their rectangular target box annotations into the network of step 1), iteratively adjust the parameters of the whole architecture with a multi-task loss composed of a classification loss and a regression loss, complete training, and generate the preliminary organ localization network model.
S5, feed the preprocessed validation images from step S2 and their corresponding rectangular target box annotations into the preliminary organ localization model generated in S4, optimize and tune the model according to its outputs, and obtain the organ localization network model.
S6, use the model built in S5 to perform preliminary organ localization on the preprocessed validation images from step S2, obtaining several target detection boxes with confidence scores for each two-dimensional slice image, and keep only the candidate boxes in each image whose confidence exceeds a chosen threshold.
S7, build a spatial curve fitting model based on sequence-association processing:
1) For the two-dimensional slice images with preliminary detections from S6, arrange the axial slices belonging to the same sequence in order.
2) For each ordered sequence, extract the key points of the target box from the slices containing a single reliable candidate box, fit a spatial curve along the sequence direction through these key points, and determine the best fitting scheme to be least-squares quartic (fourth-order) polynomial fitting.
S8, preprocess the two-dimensional serial magnetic resonance images of the organ to be localized as in step S2 and feed them into the organ localization network model obtained in step S5, producing preliminary localization results.
S9, from the preliminary results output by step S8, obtain several target detection boxes with confidence scores for each two-dimensional slice image, and keep only the candidate boxes in each image whose confidence exceeds a chosen threshold.
S10, perform the final organ localization based on sequence-association processing:
1) Arrange the two-dimensional slice images with preliminary detections from S9 in sequence order.
2) From the ordered sequence images, extract the key points of the target box in the slices containing a single reliable candidate box, and fit a spatial curve along the sequence direction through these key points using the least-squares quartic polynomial scheme obtained in S7.
3) Using the fitted spatial curve and the minimum spatial Euclidean distance, screen the two-dimensional images that contain multiple predicted boxes and select the box closest to the fitted position as the organ's final localization box, updating the curve fitting parameters after each screening step.
4) Use the final updated spatial curve to fill in the detections missing at the first and last slices of each sequence, completing the final localization of the organ.
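The sequence-association steps S7 and S10 can be sketched numerically: fit least-squares quartic polynomials x(z) and y(z) through the box key points of slices with a single reliable detection, then, on slices with several candidate boxes, keep the box whose key point is nearest (in Euclidean distance) to the fitted position. The sketch below uses NumPy polynomial fitting; taking box centers as the key points, and the function names, are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def fit_center_curve(slice_idx, centers, degree=4):
    """Least-squares polynomial fit of box centers along the slice axis z.
    Returns coefficient arrays for x(z) and y(z)."""
    z = np.asarray(slice_idx, dtype=float)
    c = np.asarray(centers, dtype=float)   # shape (n, 2): (x, y) per slice
    coef_x = np.polyfit(z, c[:, 0], degree)
    coef_y = np.polyfit(z, c[:, 1], degree)
    return coef_x, coef_y

def select_box(z, candidate_centers, coef_x, coef_y):
    """Pick the candidate whose center is closest (Euclidean distance)
    to the position predicted by the fitted curve at slice z."""
    pred = np.array([np.polyval(coef_x, z), np.polyval(coef_y, z)])
    d = np.linalg.norm(np.asarray(candidate_centers, dtype=float) - pred, axis=1)
    return int(np.argmin(d))
```

After each selection, the chosen center can be appended to the key points and the fit recomputed, which matches the curve-update loop of step S10 3).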
In step S2, pixel intensities are normalized as follows: the pixel value x_i of pixel i becomes norm_i = (x_i - Pmin)/(Pmax - Pmin) × 255, norm_i ∈ [0, 255], where Pmax and Pmin are the maximum and minimum pixel values of the slice image containing pixel i.
In step S2, center cropping reduces an original image of size W×H to αW×αH, where α is a scale coefficient; preferably, α = 2/3.
Preferably, in step S2 the image size is normalized to 600×600.
In step S3, data augmentation is performed by horizontal flipping, small horizontal or vertical translation, small-angle random rotation, and elastic deformation. Preferably, the small-angle random rotation ranges from -5° to 5°.
In step S6, the threshold is 0.80 to 0.90.
Inspired by the excellent performance of Faster R-CNN on object detection in natural images, the present invention builds on that model and further refines it by exploiting the prior that the organ lies near the image center and the smooth evolution between consecutive slices of a sequence, thereby achieving accurate localization of the prostate in magnetic resonance images and laying a solid foundation for subsequent medical image tasks such as organ registration and segmentation. The improved two-dimensional serial magnetic resonance organ localization method based on Faster R-CNN fully exploits the similarities and differences between natural and medical image features to optimize localization accuracy and detection success rate.
Experiments show that, without elaborate image preprocessing, the method achieves accurate organ localization with good generalization performance: its recall of the organ area reaches 96.91%, and its organ localization success rate reaches 99.39%.
Description of the Drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 shows example axial prostate magnetic resonance images from the different medical centers used in the present invention.
Fig. 3 shows the structure of several network blocks used in the invention: (a) convolutional attention block; (b) identity block; (c) spatial attention module.
Fig. 4 shows some preliminary detection results of the improved Faster R-CNN: (a) a slice image with multiple detections; (b) a slice image with no detection.
Fig. 5 illustrates spatial curve fitting and updating based on sequence association: (1) projections of the key points of discarded target boxes onto the X-Z and X-Y planes; (2) drawing and adjustment of the fitted spatial curve; (3) projection of the fitted curve onto the X-Z plane; (4) projection of the fitted curve onto the X-Y plane.
Fig. 6 shows some detection results of the method on the validation set: red marks the true contour of the prostate, and the yellow rectangle is the algorithm's localization result.
Fig. 7 shows axial slice test results of the method on a real case: (a) slice 5; (b) slice 11; (c) slice 15; (d) slice 18.
Fig. 8 is a flow chart of the training and testing stages of the model.
Detailed Description
Since the initial localization of an organ is very important for medical image processing tasks such as organ segmentation, image registration, and lesion detection, a reliable estimate of the organ's initial position helps remove most of the distracting information in the image background and thereby improves subsequent processing. Building on the two-stage detection algorithm Faster R-CNN, the invention introduces a spatial attention module that attends to organ position features and a sequence-association step that improves detection consistency across slices, and it achieves excellent detection results on a highly heterogeneous test dataset, laying a good foundation for the subsequent processing of medical images.
The method of the present invention is described in detail below with reference to specific embodiments.
Embodiment 1: A network-model-based organ localization method for two-dimensional serial magnetic resonance images
This embodiment takes a small, highly heterogeneous prostate magnetic resonance dataset as an example to describe the prostate localization method in detail. As shown in Fig. 8, the method comprises the following steps:
S1, dataset preparation:
The prostate magnetic resonance images used in this embodiment come from the PROMISE12 prostate segmentation challenge dataset released at the MICCAI 2012 conference. The dataset contains 80 T2-weighted magnetic resonance cases from four medical centers, 50 of which carry expert manual segmentation masks; each center used different acquisition equipment and acquisition protocols. Fig. 2 shows sample images from the four centers, where 1.5T and 3T denote the magnetic field strength and "with/without ERC" denotes whether an endorectal coil was used during image acquisition.
In this embodiment, the 50 prostate MR cases with expert manual segmentation masks are split case-wise into a training set and a validation set at a ratio of 4:1.
S2, dataset image preprocessing:
(1) Preprocessing of the training and validation images:
The training and validation images obtained above undergo four basic preprocessing steps. (a) Because, in the original three-dimensional images, the inter-slice resolution along the axial direction differs from the in-plane resolution, the invention unfolds the three-dimensional images along the axial direction to form two-dimensional serial MR images. (b) Then, because image intensities from different acquisition centers differ considerably, min-max intensity normalization is applied to each two-dimensional slice image: the pixel value x_i of pixel i becomes norm_i = (x_i - Pmin)/(Pmax - Pmin) × 255, norm_i ∈ [0, 255], where Pmax and Pmin are the maximum and minimum pixel values of the slice image containing pixel i. (c) In addition, given the prior that the organ usually lies near the image center, and to strengthen detection of small targets, the image is center-cropped (an original image of size W×H is cropped to αW×αH, where α is a scale coefficient, taken as α = 2/3 in this embodiment). (d) Finally, the image size is normalized uniformly, to 600×600 in this embodiment.
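A minimal sketch of the preprocessing chain (b)-(d), assuming single-channel slices stored as NumPy arrays; nearest-neighbour resampling stands in here for whatever interpolation the actual pipeline uses:

```python
import numpy as np

def preprocess_slice(img, alpha=2 / 3, out_size=600):
    """Min-max normalize to [0, 255], center-crop to alpha*H x alpha*W,
    then resize to out_size x out_size (nearest neighbour for brevity)."""
    img = img.astype(np.float64)
    pmin, pmax = img.min(), img.max()
    norm = (img - pmin) / (pmax - pmin) * 255.0          # step (b)
    h, w = norm.shape
    ch, cw = int(round(alpha * h)), int(round(alpha * w))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = norm[top:top + ch, left:left + cw]            # step (c)
    rows = np.arange(out_size) * ch // out_size          # step (d)
    cols = np.arange(out_size) * cw // out_size
    return crop[rows][:, cols]
```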
(2) Generating target box labels for the training and validation sets
To train and validate the deep learning network on labeled data, the segmentation masks (i.e. pixel-level annotations) of the training and validation images are used to generate an image-level annotation, i.e. a rectangular target box, for each training and validation image. In this method, after applying the center cropping and size normalization of steps (c)-(d) in (1) above to each segmentation mask image, the bounding rectangle of the true organ contour is taken as the organ's position label, and the training and validation images together with the corresponding position-labeled label images serve as input for subsequent network training and validation.
For datasets without segmentation masks, the organ regions in the images can instead be annotated directly with rectangular target boxes.
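Deriving the rectangular label from a segmentation mask amounts to taking the bounding rectangle of its nonzero pixels; a minimal sketch (the (x_min, y_min, x_max, y_max) box format is an assumption):

```python
import numpy as np

def mask_to_box(mask):
    """Tight bounding box (x_min, y_min, x_max, y_max) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```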
S3, data augmentation:
Since training a deep learning model generally requires a large-scale dataset for the network to learn generalizable image features, and medical images are often hard to obtain for reasons such as patient privacy, the invention augments the training images and their corresponding segmentation mask images in four ways: horizontal flipping, small horizontal or vertical translation, small-angle random rotation (-5° to 5°), and elastic deformation. After augmentation, the number of images is 16 times the original.
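Two of the four augmentations, horizontal flipping and small translation, can be sketched together with the required target box update (the (x_min, y_min, x_max, y_max) box format is an assumption; rotation and elastic deformation are omitted for brevity):

```python
import numpy as np

def hflip(img, box):
    """Horizontal flip of image and box (x_min, y_min, x_max, y_max)."""
    w = img.shape[1]
    x0, y0, x1, y1 = box
    return img[:, ::-1], (w - 1 - x1, y0, w - 1 - x0, y1)

def translate(img, box, dx):
    """Small horizontal shift by dx pixels, zero-padding the border."""
    out = np.zeros_like(img)
    if dx >= 0:
        out[:, dx:] = img[:, :img.shape[1] - dx]
    else:
        out[:, :dx] = img[:, -dx:]
    x0, y0, x1, y1 = box
    return out, (x0 + dx, y0, x1 + dx, y1)
```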
S4, building the improved Faster R-CNN-based preliminary organ localization network model:
(1) Network architecture:
The invention adopts a network structure similar to Faster R-CNN, except that a ResNet-50 architecture with a spatial attention mechanism replaces the VGG16 of the original Faster R-CNN to extract features from the input image and to classify the targets in it, as shown in Fig. 1. The structure of the convolutional attention block, and the way the spatial attention module is embedded in it, is shown in Fig. 3(a); the structures of the identity block and the spatial attention module are shown in Figs. 3(b) and (c). The spatial attention module is added to better exploit the organ's central position in the image and to increase the network's sensitivity to positional features. With an intermediate feature map F ∈ R^(C×H×W) as input, the attention map produced by the spatial attention module is computed as:
Attention(F) = sigmoid(f^(7×7)([F_avg, F_max]))
F_avg(F) = Global_avgpool(F), F_max(F) = Global_maxpool(F)
where F_avg, F_max ∈ R^(1×H×W), [·, ·] denotes channel-wise concatenation, and f^(7×7) denotes convolution with a 7×7 kernel.
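A NumPy sketch of the attention computation above: the average and maximum are taken across the channel axis (consistent with the stated 1×H×W shapes), their concatenation is convolved with a 7×7 kernel, and a sigmoid is applied. The explicit loop stands in for a framework convolution; zero padding and the absence of a bias term are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(F, kernel):
    """Spatial attention map for feature map F of shape (C, H, W);
    kernel has shape (2, 7, 7) for the stacked [avg, max] maps.
    Returns an (H, W) attention map."""
    favg = F.mean(axis=0)              # channel-wise average -> (H, W)
    fmax = F.max(axis=0)               # channel-wise maximum -> (H, W)
    stacked = np.stack([favg, fmax])   # (2, H, W)
    k = kernel.shape[-1]
    pad = k // 2
    p = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = favg.shape
    out = np.zeros((H, W))
    for i in range(H):                 # naive 7x7 convolution
        for j in range(W):
            out[i, j] = np.sum(p[:, i:i + k, j:j + k] * kernel)
    return sigmoid(out)
```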
(2) Training strategy and loss function
1) Training strategy
Given the limited size of medical datasets, the invention applies transfer learning: the network is pretrained on the large-scale ImageNet natural image dataset, the pretrained weight parameters serve as the initial weights of the method, and the network is then trained further on the annotated prostate dataset. Training uses the Adam optimizer with β1 = 0.9, β2 = 0.999, and epsilon = 10^(-8), and a fixed learning rate of 2×10^(-5).
2) Loss function
The Faster R-CNN network comprises a regression task and a classification task: the regressor performs regression prediction on the candidate boxes, and the classifier classifies the objects inside them. Network training therefore uses a multi-task loss function, computed as follows:
Following the standard Faster R-CNN formulation, L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*), where L_cls and L_reg denote the classification loss and the regression loss respectively, defined as follows:
i) Classification loss: a binary cross-entropy loss is used, L_cls(p_i, p_i*) = -[p_i* log p_i + (1 - p_i*) log(1 - p_i)], where p_i is the predicted probability that anchor[i] contains the target and p_i* is the ground-truth label of anchor[i], i.e. p_i* = 1 for a positive anchor and p_i* = 0 for a negative one.
ii) Regression loss: L_reg(t_i, t_i*) = smooth_L1(t_i - t_i*), the smooth-L1 loss standard in Faster R-CNN, where t_i = {t_x, t_y, t_w, t_h} is the vector of parameterized coordinates of the predicted candidate box anchor[i] and t_i* is the coordinate vector of the corresponding ground-truth target box.
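The two loss terms can be sketched as follows; the per-coordinate smooth-L1 form is the usual Faster R-CNN convention and is assumed here:

```python
import numpy as np

def cls_loss(p, p_star, eps=1e-12):
    """Binary cross-entropy over anchors: p is the predicted object
    probability, p_star is 1 for positive anchors and 0 for negative."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(p_star * np.log(p) + (1 - p_star) * np.log(1 - p))))

def smooth_l1(x):
    """0.5*x^2 for |x| < 1, |x| - 0.5 otherwise (elementwise)."""
    ax = np.abs(x)
    return np.where(ax < 1, 0.5 * x * x, ax - 0.5)

def reg_loss(t, t_star, p_star):
    """Smooth-L1 box regression, counted only for positive anchors;
    t, t_star have shape (N, 4) of parameterized coordinates."""
    per_anchor = smooth_l1(t - t_star).sum(axis=1)
    return float(np.sum(p_star * per_anchor) / max(p_star.sum(), 1))
```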
(3) Network training
1) Feature extraction
The augmented training images from S3 are fed into the feature extraction network. As the resolution of the convolutional layers decreases, the network learns the global (generalizable) features of the image better, and the residual structure further improves the feature extraction process. As shown in Fig. 1, the image undergoes four stride-2 downsampling operations in stages 1-4, so the final convolutional feature map is about 1/16 of the original input size along each dimension (for a 600×600 input, the output feature map is 38×38).
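The 600 → 38 figure follows from the four stride-2 stages assuming "same"-style padding, i.e. repeated ceiling division by 2:

```python
size = 600
for _ in range(4):           # four stride-2 downsampling stages
    size = -(-size // 2)     # ceiling division: 600 -> 300 -> 150 -> 75 -> 38
print(size)                  # 38
```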
2) Prediction of target candidate boxes
(a) Based on the convolutional feature map obtained in step 1), the RPN first applies a 3×3 convolution, and then, centered at each pixel of the convolutional map, forms a number of anchors of different sizes and aspect ratios; these anchors are mapped onto the feature map to obtain a set of target candidate box proposals. In this embodiment, considering the shape and size of the organ itself and the proportion of the image it occupies, anchors with sizes {64, 128, 256, 512} and aspect ratios {0.5, 1, 2} are selected, giving 4×3 = 12 anchors at each pixel.
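Anchor generation at one location can be sketched as follows. The parameterization (equal-area boxes with ratio = h/w) is a common convention and an assumption here, not stated in the patent:

```python
import math

def make_anchors(cx, cy, sizes=(64, 128, 256, 512), ratios=(0.5, 1, 2)):
    """Return (x1, y1, x2, y2) anchors centered at (cx, cy); each anchor
    keeps area = size**2 while the aspect ratio h/w varies."""
    anchors = []
    for s in sizes:
        for r in ratios:
            w = s / math.sqrt(r)
            h = s * math.sqrt(r)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

boxes = make_anchors(300, 300)   # 4 sizes x 3 ratios = 12 anchors
```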
(b) Based on the convolutional feature map obtained in 1) and the candidate box proposals of different sizes obtained in (a), an ROI pooling operation is performed to obtain candidate-box feature maps of identical size;
(c) Stage 5 shown in Fig. 1 applies further convolution operations to the candidate-box feature maps obtained in (b), after which fully connected layers produce, for each candidate box, a class-probability prediction and a position-offset prediction;
(d) Non-maximum suppression (NMS) with a selected threshold is performed to remove sets of candidate boxes of the same class with high mutual overlap.
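Greedy IoU-based NMS, as used in step (d), can be sketched as (an illustrative implementation, not the patent's code):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh):
    """Keep highest-scoring boxes, dropping any box whose IoU with an
    already-kept box exceeds thresh; returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```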
(e) The label images corresponding to the training images are used to evaluate the loss function on the classification and position predictions of the candidate boxes screened in (d); the parameters of the deep convolutional neural network are then iteratively updated by gradient-descent backpropagation. The network parameters obtained after the maximum set number of iterations are taken as the optimal parameters, training is complete, and the preliminary organ-localization network model is generated.
S5. Apply the network model constructed above to preliminary localization on the validation set:
The preprocessed validation-set images described in S2 (without target-box annotations) are input into the network model constructed above, which outputs, for each two-dimensional slice image in the validation set, a number of target detection boxes with corresponding confidence scores. In each image, the candidate boxes whose confidence exceeds a threshold (0.80 in this method) are retained.
S6. Final localization of the organ based on sequence-association processing:
The preliminary detection results of step S4 exhibit two problems: multiple detection boxes in individual slice images, as shown in Fig. 4(a), and missing detection boxes, as shown in Fig. 4(b). This is because per-slice detection ignores the characteristics of medical images themselves. For organ localization, the position of the organ varies little between the two-dimensional sequential MR images of the same subject, and the evolution of the organ's surface contour follows a certain regularity, so it can be described by a mathematical model. The present method therefore introduces sequence-association processing to alleviate these problems, in the following steps:
(1) Two-dimensional image serialization:
For each two-dimensional slice image with preliminary detection results from S4, the axial slices belonging to the same sequence are arranged in their original order, in preparation for further sequence-association processing.
(2) Final determination of the target box:
Taking the detection of the prostate as an example, every image contains the prostate and the organ is unique. Exploiting the uniqueness of the target and the small slice-to-slice variation of the organ position within a sequence, first, for each sequence in the validation set obtained in (1), a three-dimensional space curve is fitted to the nine key points of the target box (the center point, the four vertices and the midpoints of the four sides of the rectangle) in those slice images of the sequence that have a single reliable candidate box; the fitting uses least-squares quartic polynomials.
The two problems in Fig. 4(a) and (b) above are handled as follows:
I) For the case where individual two-dimensional images in a sequence contain two or more target boxes (Fig. 4(a)): the fitted space curve and the minimum spatial Euclidean distance are used to screen the boxes, the target box closest to the fitted position being taken as the final organ position in images with multiple predicted boxes, and the curve-fitting parameters are updated after each screening. Fig. 5 illustrates this process; the mathematical criterion for box screening is:
b* = argmin_b Σ_{j=1..9} [(ŷ_j − y_j^b)² + (ẑ_j − z_j^b)²], where b* denotes the organ prediction box finally selected for the slice image, ŷ_j and ẑ_j denote the fitted predictions in the y and z directions at the j-th key point, and y_j^b and z_j^b denote the y- and z-coordinates at the j-th key point of a candidate box b predicted by the network.
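The fit-and-select step can be sketched as follows (a sketch assuming the slice index is the free variable of the curve, using NumPy's polynomial fit; names are illustrative):

```python
import numpy as np

def fit_keypoint_curves(slice_idx, keypoints, degree=4):
    """keypoints: array (n_slices, 9, 2) of y/z coordinates of the nine
    box key points taken from slices with a single reliable box.
    Returns a (9, 2) nested list of polynomial coefficient arrays."""
    keypoints = np.asarray(keypoints, dtype=float)
    return [[np.polyfit(slice_idx, keypoints[:, j, a], degree)
             for a in range(2)] for j in range(9)]

def predict_keypoints(coeffs, t):
    """Evaluate the fitted curves at slice index t -> (9, 2) array."""
    return np.array([[np.polyval(coeffs[j][a], t) for a in range(2)]
                     for j in range(9)])

def select_box(coeffs, t, candidates):
    """candidates: list of (9, 2) key-point arrays, one per predicted box.
    Returns the index of the box whose nine key points are closest
    (summed squared distance) to the fitted curve prediction."""
    pred = predict_keypoints(coeffs, t)
    d = [np.sum((np.asarray(c, dtype=float) - pred) ** 2) for c in candidates]
    return int(np.argmin(d))
```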
II) For the prostate base and bottom regions at the first and last slices of each sequence, detection boxes may be missing because the targets are small (Fig. 4(b)):
Owing to the limitations of the detection network on small targets, the present method uses the final updated space curve of each sequence to obtain, by fitting, the target location of the organ in the two-dimensional images of that sequence whose detection is missing. This completes the final organ localization on the validation-set images.
Evaluation of localization performance on the validation set
Combining the localization results on the validation-set images obtained in S6 with the ground-truth organ annotations of the validation-set images in S2, the detection performance of the method is evaluated both qualitatively and quantitatively:
Qualitatively, Fig. 6(a)-(d) shows detection results obtained on the validation set, where the red line marks the true contour of the organ and the yellow rectangle marks the detection result of the present method for the prostate. The results show that the method achieves fairly accurate and unique organ localization while reliably avoiding organs such as the bladder and rectum whose shape and size are close to those of the target organ.
Quantitatively, repeated experiments show that the method reaches a recall of 96.91% of the true organ area; taking an area recall of 50% as the criterion for successful localization, the method attains an organ-localization success rate of 99.39% on the validation set.
In terms of efficiency, organ-localization detection on one two-dimensional slice image takes only 0.3 s (Intel Core i7, 3 GHz processor), which satisfies the practical requirements of medical tasks well.
Preferably, the preprocessed validation set is input into the network model formed in step S4 to further optimize and fine-tune the trained network, yielding an optimized network model, after which step S5 is executed.
Although the foregoing description takes prostate localization as an example, it will be understood that the method is equally applicable to the detection and localization of other tissues and organs such as the kidney and pancreas.
Embodiment 2
A method of localizing a prostate to be localized using the above method comprises the following steps:
(1) Image preprocessing:
For the three-dimensional magnetic-resonance prostate image of a given case, two-dimensional serialization is performed first. Next, min-max intensity normalization maps the pixel values of the two-dimensional slice images into [0, 255]. Finally, the images are cropped appropriately according to the position and proportion of the organ in the image, and the image size is normalized to 600×600.
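The min-max normalization step can be sketched per slice in pure Python (cropping and resizing would use an image library and are omitted here):

```python
def minmax_normalize(pixels, out_max=255.0):
    """Linearly map a flat list of pixel intensities onto [0, out_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # constant slice: avoid divide-by-zero
        return [0.0 for _ in pixels]
    return [(p - lo) * out_max / (hi - lo) for p in pixels]
```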
(2) Preliminary organ localization based on the improved Faster R-CNN organ-localization network model:
1) Feature extraction: the two-dimensional slice images obtained in (1) are input into the convolutional neural network constructed in S4 to obtain convolutional feature maps of the images;
2) Generation of candidate-box proposals: the convolutional feature maps obtained in 1) are input into the RPN to generate target candidate box proposals;
3) ROI pooling (i.e. region-of-interest pooling): taking the convolutional feature map output in 1) and the candidate-box proposals output in 2) as input, the proposals are mapped to the corresponding positions of the feature map, and the mapped feature maps are pooled to obtain target-region feature maps of uniform size;
4) Classification prediction: the target-region feature maps obtained in 3) are classified to distinguish whether the object contained in each candidate box is the organ or background, and a confidence score for that category is obtained;
5) Bounding-box regression: the target-region feature maps obtained in 3) are used for bounding-box regression to refine the boxes, finally yielding the precise position of each detection box;
6) Based on the organ target boxes and their positions obtained in 4) and 5), non-maximum suppression (NMS) with a threshold N_t (a fixed value) is performed to remove sets of boxes with high overlap, and in each two-dimensional slice image the candidate boxes whose confidence exceeds a fixed threshold (a value between 0.80 and 0.90 is recommended) are retained. This yields the positions of a number of detection boxes in each slice image together with their confidence scores.
(3) Final organ localization based on sequence-association processing:
1) The above slice images are arranged in the order of the original sequence;
2) A least-squares quartic-polynomial space curve is fitted to the nine key points of the target box (the center point, the four vertices and the midpoints of the four sides of the rectangle) in the sequence images that have a single reliable candidate box. The fitted space curve and the minimum spatial Euclidean distance are then used to screen the boxes: in two-dimensional images with multiple predicted boxes, the box closest to the fitted position is taken as the final localization box of the organ, and the curve-fitting parameters are updated after each screening.
3) For the organ detections at the first and last slices of the sequence, where boxes may be missing because the targets are small, the final updated space curve is used to obtain, by fitting, the target location of the organ in the two-dimensional images whose detection is missing, thereby achieving organ-localization detection over the entire two-dimensional image sequence.
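Filling a missing slice from the fitted curves can be sketched as follows (illustrative: the polynomial coefficients are assumed to come from the fit in step 2), and only the two opposite corner key points are used to rebuild the box):

```python
def polyval(coeffs, t):
    """Horner evaluation of a polynomial; coefficients highest-degree first."""
    v = 0.0
    for c in coeffs:
        v = v * t + c
    return v

def box_from_curves(tl_curves, br_curves, t):
    """tl_curves / br_curves: (y_coeffs, z_coeffs) pairs for the top-left
    and bottom-right corner key points. Returns (y1, z1, y2, z2) at slice t."""
    y1, z1 = polyval(tl_curves[0], t), polyval(tl_curves[1], t)
    y2, z2 = polyval(br_curves[0], t), polyval(br_curves[1], t)
    return (y1, z1, y2, z2)
```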
In this embodiment, the prostate-localization results in the axial slice images are shown in Fig. 7, where (a)-(d) show the test results for the 5th, 11th, 15th and 18th slices of the axial sequence. Comparing the localization of the prostate across the four images shows that the method localizes the prostate very well throughout the sequence of slices, in particular at the first and last slices of the sequence, which other algorithms rarely handle or cannot match. Moreover, along the sequence the method achieves fairly accurate prostate localization while reliably excluding regions such as the bladder and rectum whose shape and size resemble the prostate, indicating that the method learns the features of the target organ well.
In summary, the organ localization and identification method for two-dimensional sequential magnetic-resonance images according to the present invention has the following advantages: a general organ-localization method is built from a small, highly heterogeneous magnetic-resonance data set, avoiding the subjectivity of traditional slice-based visual inspection. Furthermore, the present invention achieves preliminary organ detection and localization by improving Faster R-CNN, an object-detection algorithm used in natural-image processing, and then exploits the correlation between medical sequence slice images (i.e. the inter-slice evolution of the organ's overall position, shape and size) to greatly improve organ localization at the two ends of the sequence. The method has good recognition speed and accuracy and can satisfy the real-time requirements of medical image processing.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010344910.7A CN111583204B (en) | 2020-04-27 | 2020-04-27 | Organ localization method based on two-dimensional serial magnetic resonance images based on network model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111583204A CN111583204A (en) | 2020-08-25 |
CN111583204B true CN111583204B (en) | 2022-10-14 |
Family
ID=72125433
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010344910.7A Active CN111583204B (en) | 2020-04-27 | 2020-04-27 | Organ localization method based on two-dimensional serial magnetic resonance images based on network model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111583204B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112164028A (en) * | 2020-09-02 | 2021-01-01 | 陈燕铭 | Pituitary adenoma magnetic resonance image positioning diagnosis method and device based on artificial intelligence |
CN112053342A (en) * | 2020-09-02 | 2020-12-08 | 陈燕铭 | Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence |
CN112598634B (en) * | 2020-12-18 | 2022-11-25 | 燕山大学 | CT image organ positioning method based on 3D CNN and iterative search |
CN113160144B (en) * | 2021-03-25 | 2023-05-26 | 平安科技(深圳)有限公司 | Target object detection method, target object detection device, electronic equipment and storage medium |
CN113436139A (en) * | 2021-05-10 | 2021-09-24 | 上海大学 | Small intestine nuclear magnetic resonance image identification and physiological information extraction system and method based on deep learning |
CN113256574B (en) * | 2021-05-13 | 2022-10-25 | 中国科学院长春光学精密机械与物理研究所 | Three-dimensional target detection method |
CN113674248B (en) * | 2021-08-23 | 2022-08-12 | 广州市番禺区中心医院(广州市番禺区人民医院、广州市番禺区心血管疾病研究所) | Magnetic resonance amide proton transfer imaging magnetization transfer rate detection method and related equipment |
CN113538435B (en) * | 2021-09-17 | 2022-02-18 | 北京航空航天大学 | Pancreatic cancer pathological image classification method and system based on deep learning |
CN114037714B (en) * | 2021-11-02 | 2024-05-24 | 大连理工大学人工智能大连研究院 | 3D MR and TRUS image segmentation method for prostate system puncture |
CN114820584B (en) * | 2022-05-27 | 2023-02-21 | 北京安德医智科技有限公司 | Lung focus positioner |
CN116777935B (en) * | 2023-08-16 | 2023-11-10 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Method and system for automatically segmenting the entire prostate gland based on deep learning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101301207A (en) * | 2008-05-28 | 2008-11-12 | 华中科技大学 | Angiographic 3D Reconstruction Method Guided by Dynamic Model |
CN109727240A (en) * | 2018-12-27 | 2019-05-07 | 深圳开立生物医疗科技股份有限公司 | A kind of three-dimensional ultrasound pattern blocks tissue stripping means and relevant apparatus |
CN110009628A (en) * | 2019-04-12 | 2019-07-12 | 南京大学 | An automatic detection method for polymorphic targets in continuous two-dimensional images |
CN110211097A (en) * | 2019-05-14 | 2019-09-06 | 河海大学 | Crack image detection method based on fast R-CNN parameter migration |
CN110503112A (en) * | 2019-08-27 | 2019-11-26 | 电子科技大学 | A Small Target Detection and Recognition Method Based on Enhanced Feature Learning |
CN110610210A (en) * | 2019-09-18 | 2019-12-24 | 电子科技大学 | A Multi-target Detection Method |
CN111027547A (en) * | 2019-12-06 | 2020-04-17 | 南京大学 | Automatic detection method for multi-scale polymorphic target in two-dimensional image |
Non-Patent Citations (2)
Title |
---|
CBAM: Convolutional Block Attention Module; Sanghyun Woo, Jongchan Park, Joon-Young Lee, In So Kweon; ECCV 2018; 2018-07-18; pp. 4-7 *
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks; Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun; CVPR 2016; 2016-01-06; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||