CN114558251A - A deep learning-based automatic positioning method, device and radiotherapy equipment - Google Patents
A deep learning-based automatic positioning method, device and radiotherapy equipment
- Publication number
- CN114558251A (application CN202210099697.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- automatic positioning
- drr
- images
- net model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- All classifications fall under A61N5/10 (X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy), within A61N (electrotherapy; magnetotherapy; radiation therapy; ultrasound therapy), class A61 (medical or veterinary science; hygiene), section A (human necessities):
- A61N5/1048—Monitoring, verifying, controlling systems and methods
- A61N5/1064—Monitoring, verifying, controlling systems and methods for adjusting radiation treatment in response to monitoring
- A61N5/1069—Target adjustment, e.g. moving the patient support
- A61N5/1075—Monitoring, verifying, controlling systems and methods for testing, calibrating, or quality assurance of the radiation treatment apparatus
- A61N5/1049—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
- A61N2005/1061—Verifying the position of the patient using an x-ray imaging system having a separate imaging source
- A61N2005/1062—Verifying the position of the patient using virtual X-ray images, e.g. digitally reconstructed radiographs [DRR]
- A61N2005/1092—Details
- A61N2005/1097—Means for immobilizing the patient
Abstract
Description
Technical Field
The invention belongs to the technical field of radiotherapy, and in particular relates to a deep learning-based automatic positioning method, a device, and radiotherapy equipment.
Background
At present, tumor radiotherapy is generally delivered in fractions. In each treatment session, the patient is re-immobilized with thermoplastic masks, vacuum bags, and similar devices, aligned with the aid of laser lights, so as to reproduce the body position established during the positioning CT scan and verified during simulation. However, for various reasons a certain setup deviation remains, mostly between a few millimeters and one centimeter, and occasionally up to several centimeters.
To realize image-guided radiation therapy, the existing techniques mainly comprise: selecting an ROI (Region of Interest) according to the diagnostic CT and the treatment plan and performing 3D/3D registration against the CBCT or other tomographic images scanned at treatment time; or acquiring MV/kV images and performing 2D/2D or 2D/3D registration.
MV image registration is the most difficult case: the image contrast is low and the modality gap is large, so traditional registration algorithms based on image gray-value intensity fail. In addition, iterative intensity-based registration is usually very time-consuming, giving a poor user experience. For feature-based registration, feature selection and extraction are difficult and must be performed manually by a technician, which is time-consuming, labor-intensive, and prone to human error.
SUMMARY OF THE INVENTION
To solve the above technical problems, the present invention proposes a deep learning-based automatic positioning method, a device, and radiotherapy equipment.
To achieve the above object, the technical scheme of the present invention is as follows:
In one aspect, the present invention discloses a deep learning-based automatic positioning method, comprising the following steps:
S1: Immobilize the patient on the treatment couch and acquire a DR image of the relevant body part;
S2: Input the CT images acquired at planning time into the trained U-Net model to obtain a segmentation of the specified part, and reconstruct it into a DRR image;
S3: Input the DR image from S1 and the DRR image from S2 into the CycleGAN model to obtain a DR image containing only the specified part;
S4: Register the specified-part-only DR image from S3 against the DRR image to be registered, obtaining the setup deviation;
S5: Move the treatment couch according to the setup deviation obtained in S4, thereby realizing automatic positioning.
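The five steps above can be sketched as a simple orchestration loop. This is only an illustrative outline: the function names are hypothetical, and the U-Net, CycleGAN, and registration steps are stubbed out, since the patent does not specify an API.

```python
# Hypothetical pipeline sketch of steps S1-S5; model inference is stubbed.

def acquire_dr_image():                      # S1: DR image of the immobilized patient
    return [[0.2, 0.8], [0.9, 0.1]]

def segment_and_reconstruct_drr(ct_volume):  # S2: U-Net segmentation + DRR reconstruction (stub)
    return [[0.3, 0.7], [0.8, 0.2]]

def cyclegan_filter(dr, drr):                # S3: keep only the specified part (stub)
    return dr

def register(dr_roi, drr):                   # S4: 2D-2D registration -> couch shift (stub:
    # here just the summed intensity difference stands in for an optimizer)
    dx = sum(a - b for row_a, row_b in zip(dr_roi, drr) for a, b in zip(row_a, row_b))
    return {"x": dx, "y": 0.0, "z": 0.0}

def move_couch(shift):                       # S5: apply the setup deviation
    return f"couch moved by {shift}"

dr = acquire_dr_image()
drr = segment_and_reconstruct_drr(ct_volume=None)
dr_roi = cyclegan_filter(dr, drr)
shift = register(dr_roi, drr)
print(move_couch(shift))
```

In a real system each stub would call the trained models and the couch control interface; the sketch only fixes the data flow between the five steps.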
On the basis of the above technical solution, the following improvements can further be made:
As a preferred scheme, the U-Net model is trained as follows:
T1: Collect clinically used CT images of the relevant body parts;
T2: Divide the CT images into a training set, a test set, and a validation set at a specified ratio;
T3: Select the U-Net model, with CT images as input and the segmentation of the specified part as output, and train the deep learning model on the training set;
T4: Use the validation set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust model is obtained.
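The data division in T2 can be sketched as a case-level split (the 6:3:1 ratio used later in the embodiment is taken as the default; the function name and seed handling are illustrative, not from the patent — the U-Net training itself would sit on top of a deep learning framework and is not shown):

```python
import random

def split_dataset(cases, ratios=(0.6, 0.3, 0.1), seed=0):
    """Shuffle the cases reproducibly and split them into
    training, test, and validation sets (step T2)."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)
    n = len(cases)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    return cases[:n_train], cases[n_train:n_train + n_test], cases[n_train + n_test:]

train, test, val = split_dataset(range(100))
print(len(train), len(test), len(val))  # 60 30 10
```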
As a preferred scheme, the following step is included between T1 and T2: preprocessing the collected CT images.
As a preferred scheme, the preprocessing comprises one or more of the following operations: reading the CT image slices and saving them in PNG format, normalizing the pixel size, and scaling the images.
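A minimal sketch of such preprocessing, assuming a fixed intensity window for normalization and nearest-neighbour resizing (the window values, function name, and output size are illustrative assumptions; writing the PNG file is an extra step, omitted here):

```python
import numpy as np

def preprocess_slice(ct_slice, window=(-1000.0, 2000.0), out_size=(256, 256)):
    """Normalize a CT slice's intensities to [0, 1] and rescale it
    with nearest-neighbour sampling (hypothetical parameter choices)."""
    lo, hi = window
    img = np.clip(ct_slice, lo, hi)
    img = (img - lo) / (hi - lo)                    # intensity normalization
    rows = np.arange(out_size[0]) * ct_slice.shape[0] // out_size[0]
    cols = np.arange(out_size[1]) * ct_slice.shape[1] // out_size[1]
    return img[np.ix_(rows, cols)]                  # nearest-neighbour resize

slice_ = np.random.uniform(-1000, 2000, size=(512, 512))
out = preprocess_slice(slice_)
print(out.shape)  # (256, 256)
```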
As a preferred scheme, in S2, reconstructing the DRR image specifically comprises:
Using a DRR reconstruction algorithm, a simulated X-ray source is cast through the CT voxels and, after attenuation and absorption, projected onto the detector plane to form a new DRR image.
In another aspect, the present invention discloses a deep learning-based automatic positioning device, comprising:
a DR image acquisition module, for acquiring a DR image of the relevant body part after the patient has been immobilized on the treatment couch;
a reconstructed-DRR generation module, for inputting the CT images acquired at planning time into the trained U-Net model, obtaining a segmentation of the specified part, and reconstructing it into a DRR image;
a DR image generation module, for inputting the DR image from the acquisition module and the DRR image from the reconstructed-DRR generation module into the CycleGAN model to obtain a DR image containing only the specified part;
a setup deviation generation module, for registering the specified-part-only DR image against the DRR image to be registered, obtaining the setup deviation;
an automatic positioning module, for moving the treatment couch according to the setup deviation generated by the setup deviation generation module, thereby realizing automatic positioning.
As a preferred scheme, the automatic positioning device further comprises a U-Net model training module, which trains the U-Net model with the following steps:
T1: Collect clinically used CT images of the relevant body parts;
T2: Divide the CT images into a training set, a test set, and a validation set at a specified ratio;
T3: Select the U-Net model, with CT images as input and the segmentation of the specified part as output, and train the deep learning model on the training set;
T4: Use the validation set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust model is obtained.
As a preferred scheme, the following step is included between T1 and T2: preprocessing the collected CT images.
As a preferred scheme, the preprocessing comprises one or more of the following operations: reading the CT image slices and saving them in PNG format, normalizing the pixel size, and scaling the images.
In addition, in another aspect, the present invention also discloses radiotherapy equipment that realizes automatic positioning using any one of the above automatic positioning methods, or that comprises any one of the above automatic positioning devices.
The invention discloses a deep learning-based automatic positioning method, device, and radiotherapy equipment. By using the U-Net and CycleGAN models, the quality of the images to be registered and the accuracy of the registration are effectively improved, and the prediction time during positioning is reduced by means of deep learning, finally realizing automatic positioning, avoiding manual errors, and improving positioning efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should therefore not be regarded as limiting its scope; those of ordinary skill in the art can derive other related drawings from them without creative effort.
FIG. 1 is a flowchart of the automatic positioning method provided by an embodiment of the present invention.
FIG. 2 is a flowchart of training the U-Net model provided by an embodiment of the present invention.
FIG. 3 is a schematic diagram of DRR image generation provided by an embodiment of the present invention.
In the figures: 1 denotes the simulated light source, 2 a CT voxel, and 3 the DRR plane.
DETAILED DESCRIPTION
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The expression "comprising" an element is an open-ended expression: it merely indicates the presence of the corresponding parts or steps and should not be interpreted as excluding additional parts or steps.
To achieve the purpose of the present invention, in some embodiments of the deep learning-based automatic positioning method, device, and radiotherapy equipment, registration is performed based on the spine, since the spine presents distinctive features in the neck, chest, and abdomen.
As shown in FIG. 1, the automatic positioning method includes the following steps:
S1: Immobilize the neck, chest, and abdomen of the patient on the treatment couch, and acquire DR images of the corresponding parts;
S2: Input the CT images acquired at planning time into the trained U-Net model to obtain a spine segmentation, and reconstruct it into a DRR image;
S3: Input the DR image from S1 and the DRR image from S2 into the CycleGAN model to obtain a DR image containing only the spine;
S4: Register the spine-only DR image from S3 against the DRR image to be registered, obtaining the setup deviation;
S5: Move the treatment couch according to the setup deviation obtained in S4, thereby realizing automatic positioning.
To further optimize the effect of the present invention, in some other embodiments the remaining features are the same, the difference being that, as shown in FIG. 2, the U-Net model is trained as follows:
T1: Collect clinically used CT images (axial slices) of the neck, chest, and abdomen, in nii (NIfTI) format;
T2: Data division: divide the CT images into a training set, a test set, and a validation set at a ratio of 6:3:1;
T3: Select the U-Net model, with CT images as input and the spine segmentation as output, and train the deep learning model on the training set;
T4: Use the validation set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust model is obtained.
Further, on the basis of the above embodiment, the following step is included between T1 and T2: preprocessing the collected CT images.
The preprocessing comprises one or more of the following operations: reading the CT image slices and saving them in PNG format, normalizing the pixel size, and scaling the images.
To further optimize the effect of the present invention, in some other embodiments the remaining features are the same, the difference being that, in S2, reconstructing the DRR image specifically comprises:
Using a DRR reconstruction algorithm, a simulated X-ray source is cast through the CT voxels and, after attenuation and absorption, projected onto the detector plane to form a new DRR image.
It is worth noting that when performing image registration, the two entities to be registered should have the same dimensionality, i.e., either 3D-3D or 2D-2D registration; for 2D-3D registration, the 3D image must first be reduced to two dimensions, after which 2D-2D registration is performed. The DRR reconstruction algorithm above is precisely such a reduction of a 3D model to 2D. As shown in FIG. 3, the simulated X-ray source is cast through the 3D voxels (the 3D CT voxels) and projected onto the DRR plane to generate the DRR image. The whole process is one of optical attenuation and follows the optical absorption model; the attenuation can be described by the following expression:
I(s) = I0 · exp( −∫[0, s] x(t) dt )
where: s is the length parameter along the optical projection direction;
I(s) is the optical intensity at distance s;
x(t) is the attenuation coefficient of the optical intensity;
I0 is the optical intensity entering the CT voxels.
As can be seen from FIG. 3 and the above expression, DRR generation is a process in which simulated rays emitted by the simulated light source 1 pass through the CT voxels 2 and, after attenuation and absorption, are projected onto the imaging plane and accumulated. Specifically:
1. Build the 3D voxel matrix of the CT image series, composed of a number of CT voxels;
2. Cast a number of rays from the virtual light source through the CT image series, the number of rays being equal to the number of pixels of the DRR plane 3;
3. For each ray, obtain its intersection points with the CT voxels and accumulate the electron density values at these points;
4. Compute the finite length of each projection line through the voxel matrix;
5. Multiply the accumulated electron density by the ray length; the resulting values, displayed as gray values, constitute the DRR image.
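Steps 1-5 can be sketched with a parallel-beam simplification. Note that FIG. 3 shows a diverging point source, which would require casting one ray per DRR pixel through the volume; a parallel beam keeps the sketch short, and all names here are illustrative assumptions:

```python
import numpy as np

def drr_parallel(ct_volume, voxel_size_mm=1.0):
    """Toy DRR: project a density volume along one axis (parallel beam).
    Accumulates density per ray (step 3), multiplies by the ray length
    (steps 4-5), and rescales the result to 8-bit gray values."""
    accumulated = ct_volume.sum(axis=0)                 # step 3: sum along each ray
    ray_length = ct_volume.shape[0] * voxel_size_mm     # step 4: length through the matrix
    raw = accumulated * ray_length                      # step 5: density x length
    lo, hi = raw.min(), raw.max()
    if hi == lo:                                        # flat volume -> blank image
        return np.zeros_like(raw)
    return (raw - lo) / (hi - lo) * 255.0               # display as gray values

vol = np.random.rand(64, 32, 32)   # step 1: voxel matrix (z, y, x)
drr = drr_parallel(vol)
print(drr.shape)                   # (32, 32): one ray per DRR pixel (step 2)
```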
An embodiment of the present invention discloses a deep learning-based automatic positioning device, comprising:
a DR image acquisition module, for acquiring DR images of the corresponding parts after the neck, chest, and abdomen of the patient have been immobilized on the treatment couch;
a reconstructed-DRR generation module, for inputting the CT images acquired at planning time into the trained U-Net model, obtaining the spine segmentation, and reconstructing it into a DRR image;
a DR image generation module, for inputting the DR image from the acquisition module and the DRR image from the reconstructed-DRR generation module into the CycleGAN model to obtain a DR image containing only the spine;
a setup deviation generation module, for registering the spine-only DR image against the DRR image to be registered, obtaining the setup deviation;
an automatic positioning module, for moving the treatment couch according to the setup deviation generated by the setup deviation generation module, thereby realizing automatic positioning.
Further, on the basis of the above embodiment, the automatic positioning device further comprises a U-Net model training module, which trains the U-Net model with the following steps:
T1: Collect clinically used CT images (axial slices) of the neck, chest, and abdomen, in nii (NIfTI) format;
T2: Data division: divide the CT images into a training set, a test set, and a validation set at a ratio of 6:3:1;
T3: Select the U-Net model, with CT images as input and the spine segmentation as output, and train the deep learning model on the training set;
T4: Use the validation set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust model is obtained.
Further, on the basis of the above embodiment, the following step is included between T1 and T2: preprocessing the collected CT images.
The preprocessing comprises one or more of the following operations: reading the CT image slices and saving them in PNG format, normalizing the pixel size, and scaling the images.
In addition, an embodiment of the present invention also discloses radiotherapy equipment that realizes automatic positioning using the automatic positioning method disclosed in any of the above embodiments, or that comprises the automatic positioning device disclosed in any of the above embodiments.
The present invention involves two deep learning models. The first is a U-Net model, trained so that it can automatically segment and extract the spine region from the CT images, from which the DRR image used for registration is obtained via the DRR reconstruction algorithm. The second is a CycleGAN model, which removes the redundant information from the DR image while preserving the spine, thereby reducing the interference of other image content with the registration result. Extracting the ROI in this way helps improve positioning accuracy and reduce positioning time.
The deep learning-based automatic positioning method, device, and radiotherapy equipment of the present invention have the following beneficial effects:
First, the U-Net deep learning model automatically extracts the spine region and CycleGAN removes interfering content from the images while preserving the spine, so that registration is performed specifically on the ROI, improving the quality of the images to be registered and the accuracy of the registration;
Second, the demands on the technician are low and the system is simple to use;
Third, positioning is fast: excluding the DRR reconstruction step, positioning takes less than 5 s, whereas manual positioning takes about 5 minutes;
Fourth, the system can be iterated quickly: each upgrade can be delivered to hospitals online, without lengthy training.
The present invention can effectively improve the quality of the images to be registered and the accuracy of the registration, reduce the prediction time during positioning by means of deep learning, and finally realize automatic positioning, avoiding manual errors and improving positioning efficiency.
It should be understood that the various techniques described herein can be implemented in hardware or software, or a combination thereof. Thus, the method and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in a tangible medium, such as a floppy disk, CD-ROM, hard drive, or any other machine-readable storage medium, wherein, when the program is loaded into a machine such as a computer and executed by it, the machine becomes an apparatus for practicing the invention.
The above embodiments are intended only to illustrate the technical concept and features of the present invention, so that those of ordinary skill in the art can understand and implement its content; they do not limit the protection scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention shall be covered by the protection scope of the present invention.
Claims (10)
Priority Applications (1)
- CN202210099697.7A (granted as CN114558251B) | Priority date 2022-01-27 | Filing date 2022-01-27 | Automatic positioning method, device and radiotherapy equipment based on deep learning
Applications Claiming Priority (1)
- CN202210099697.7A (granted as CN114558251B) | Priority date 2022-01-27 | Filing date 2022-01-27 | Automatic positioning method, device and radiotherapy equipment based on deep learning
Publications (2)
- CN114558251A | published 2022-05-31
- CN114558251B | published 2025-01-07
Family
ID=81714356
Family Applications (1)
- CN202210099697.7A | granted as CN114558251B | Active | Priority date 2022-01-27 | Filing date 2022-01-27
Country Status (1)
- CN: CN114558251B granted
Cited By (3)
- CN115300811A | priority 2022-08-08 | published 2022-11-08 | Sun Yat-sen University Cancer Center (Affiliated Cancer Hospital and Cancer Institute of Sun Yat-sen University) | A method and device for determining dose distribution based on machine learning
- CN118403299A | priority 2024-04-29 | published 2024-07-30 | Eye & ENT Hospital of Fudan University | Positioning adjustment method, system, medium and program product based on multiple factors and nonlinearity
- CN118403298A | priority 2024-04-29 | published 2024-07-30 | Eye & ENT Hospital of Fudan University | Head and neck tumor radiotherapy positioning adjustment method, system, medium, product and equipment
Citations (10)
- CN110582328A | priority 2019-07-22 | published 2019-12-17 | A radiation therapy exit beam monitoring method and system
- CN111325749A | priority 2020-02-17 | published 2020-06-23 | Method for generating fundus blood vessel images with hemorrhage, based on a generative adversarial network
- WO2020132958A1 | priority 2018-12-26 | published 2020-07-02 | Positioning method and apparatus, and radiotherapy system
- CN112348857A | priority 2020-11-06 | published 2021-02-09 | Radiotherapy positioning offset calculation method and system based on deep learning
- CN112771581A | priority 2018-07-30 | published 2021-05-07 | Multi-modal, multi-resolution deep learning neural networks for segmentation, outcome prediction and longitudinal response monitoring for immunotherapy and radiotherapy
- CN113041516A | priority 2021-03-25 | published 2021-06-29 | Method, system and storage medium for guiding positioning of three-dimensional images
- CN113077471A | priority 2021-03-26 | published 2021-07-06 | Medical image segmentation method based on a U-shaped network
- US11077320B1 | priority 2020-02-07 | published 2021-08-03 | Elekta, Inc. | Adversarial prediction of radiotherapy treatment plans
- CN113706409A | priority 2021-08-18 | published 2021-11-26 | CBCT image enhancement method and device based on artificial intelligence, and storage medium
- CN113850169A | priority 2021-09-17 | published 2021-12-28 | A face attribute transfer method based on image segmentation and generative adversarial networks
-
2022
- 2022-01-27 CN CN202210099697.7A patent/CN114558251B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112771581A (en) * | 2018-07-30 | 2021-05-07 | 纪念斯隆凯特琳癌症中心 | Multi-modal, multi-resolution deep learning neural network for segmentation, outcome prediction and longitudinal response monitoring for immunotherapy and radiotherapy |
WO2020132958A1 (en) * | 2018-12-26 | 2020-07-02 | 西安大医集团股份有限公司 | Positioning method and apparatus, and radiotherapy system |
CN110582328A (en) * | 2019-07-22 | 2019-12-17 | 北京市肿瘤防治研究所 | A radiation therapy exit beam monitoring method and system |
US11077320B1 (en) * | 2020-02-07 | 2021-08-03 | Elekta, Inc. | Adversarial prediction of radiotherapy treatment plans |
CN111325749A (en) * | 2020-02-17 | 2020-06-23 | 东北林业大学 | Method for generating fundus blood vessel images with hemorrhagic lesions based on a generative adversarial network |
CN112348857A (en) * | 2020-11-06 | 2021-02-09 | 苏州雷泰医疗科技有限公司 | Radiotherapy positioning offset calculation method and system based on deep learning |
CN113041516A (en) * | 2021-03-25 | 2021-06-29 | 中国科学院近代物理研究所 | Method, system and storage medium for three-dimensional image-guided positioning |
CN113077471A (en) * | 2021-03-26 | 2021-07-06 | 南京邮电大学 | Medical image segmentation method based on U-shaped network |
CN113706409A (en) * | 2021-08-18 | 2021-11-26 | 苏州雷泰医疗科技有限公司 | CBCT image enhancement method and device based on artificial intelligence and storage medium |
CN113850169A (en) * | 2021-09-17 | 2021-12-28 | 西北工业大学 | A Face Attribute Transfer Method Based on Image Segmentation and Generative Adversarial Networks |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115300811A (en) * | 2022-08-08 | 2022-11-08 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | A method and device for determining dose distribution based on machine learning |
CN115300811B (en) * | 2022-08-08 | 2024-01-05 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Dose distribution determining method and device based on machine learning |
CN118403299A (en) * | 2024-04-29 | 2024-07-30 | 复旦大学附属眼耳鼻喉科医院 | Positioning adjustment method, system, medium and program product based on multiple factors and nonlinearity |
CN118403298A (en) * | 2024-04-29 | 2024-07-30 | 复旦大学附属眼耳鼻喉科医院 | Head and neck tumor radiotherapy positioning adjustment method, system, medium, product and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114558251B (en) | 2025-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7203852B2 (en) | Estimation of full-dose PET images from low-dose PET imaging using deep learning | |
CN114558251B (en) | Automatic positioning method, device and radiotherapy equipment based on deep learning | |
JP4271941B2 (en) | Method for enhancing a tomographic projection image of a patient | |
JP5491174B2 (en) | Deformable registration of images for image-guided radiation therapy | |
CN108815721B (en) | Method and system for determining radiation dose | |
US10149987B2 (en) | Method and system for generating synthetic electron density information for dose calculations based on MRI | |
US11727610B2 (en) | System and method for image processing | |
US8588498B2 (en) | System and method for segmenting bones on MR images | |
CN112348857B (en) | Radiotherapy positioning offset calculation method and system based on deep learning | |
CN104644200A (en) | Method and device for reducing artifacts in computed tomography image reconstruction | |
CN111862021A (en) | Automatic delineation of head and neck lymph nodes and drainage areas based on deep learning | |
Rossi et al. | Image‐based shading correction for narrow‐FOV truncated pelvic CBCT with deep convolutional neural networks and transfer learning | |
CN110444276A (en) | Method for generating image data, CT apparatus, program product and data medium |
US20230065196A1 (en) | Patient-specific organ dose quantification and inverse optimization for ct | |
Lei et al. | Deep learning‐based fast volumetric imaging using kV and MV projection images for lung cancer radiotherapy: a feasibility study | |
KR20200057450A (en) | Method and system for generating virtual CT(Computed Tomography) image and attenuation-corrected PET(Positron Emission Tomography) image based on PET image | |
US11887301B2 (en) | System and method for automatic delineation of scanned images | |
CN116630427B (en) | Method and device for automatically positioning hip bone and femur in CT image | |
US20230169668A1 (en) | Systems and methods for image registration | |
US20230281842A1 (en) | Generation of 3d models of anatomical structures from 2d radiographs | |
Xie et al. | New technique and application of truncated CBCT processing in adaptive radiotherapy for breast cancer | |
Létourneau et al. | Semiautomatic vertebrae visualization, detection, and identification for online palliative radiotherapy of bone metastases of the spine |
EP4386680A1 (en) | Cbct simulation for training ai-based ct-to-cbct registration and cbct segmentation | |
Akintonde | Surrogate driven respiratory motion model derived from CBCT projection data | |
Lau et al. | Faster and lower dose imaging: evaluating adaptive, constant gantry velocity and angular separation in fast low‐dose 4D cone beam CT imaging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||