
CN116168097A - Method, device, equipment and medium for constructing a CBCT delineation model and delineating CBCT images - Google Patents


Info

Publication number
CN116168097A
CN116168097A
Authority
CN
China
Prior art keywords: image, model, training, CBCT, delineation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211364596.4A
Other languages
Chinese (zh)
Inventor
杨碧凝
刘宇翔
门阔
戴建荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cancer Hospital and Institute of CAMS and PUMC
Original Assignee
Cancer Hospital and Institute of CAMS and PUMC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cancer Hospital and Institute of CAMS and PUMC filed Critical Cancer Hospital and Institute of CAMS and PUMC
Priority to CN202211364596.4A
Publication of CN116168097A
Legal status: Pending

Classifications

    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 7/11 Region-based segmentation (under G06T 7/10 Segmentation; Edge detection)
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30096 Tumor; Lesion
    • G06T 2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure relates to a method, apparatus, device and medium for constructing a CBCT delineation model and delineating CBCT images. The method comprises: acquiring training data and training labels, wherein the training data comprise CBCT images and CT images of a plurality of training subjects, and the training labels comprise the reference image quality of the CT images and reference delineation results of target-area delineation on the CT images; inputting the CBCT images into a first model for training, the output of the first model being pseudo-CT images, where training ends when the difference between the image quality of the pseudo-CT images and the reference image quality is smaller than a first set threshold, the trained first model being an image generation model; inputting the CT images into a second model for training, the output of the second model being predicted delineation results, where training ends when the difference between the predicted delineation results and the reference delineation results is smaller than a second set threshold, the trained second model being a segmentation model; and generating a CBCT delineation model from the image generation model and the segmentation model.

Description

Method, device, equipment and medium for constructing a CBCT delineation model and delineating CBCT images

Technical Field

The present disclosure relates to the fields of image processing, medical technology and computer technology, and in particular to a method, apparatus, device and medium for constructing a CBCT delineation model and delineating CBCT images.

Background

Cone-beam computed tomography (CBCT) is a widely used image-guidance technology. It can be used for patient set-up correction before radiotherapy, to quantify the impact of factors such as tumor motion and organ motion within treatment fractions; it also enables online adaptive radiotherapy, which plays a major role in improving the accuracy of radiotherapy.

In each fraction of adaptive radiotherapy, physicians are required to delineate regions of interest, such as tumor target volumes and organs at risk, on the CBCT image.

Summary of the Invention

The embodiments of the present disclosure provide a method, apparatus, device and medium for constructing a CBCT delineation model and delineating CBCT images, in order to solve, or at least partially solve, the following technical problems: manually delineating regions of interest on CBCT images is very time-consuming, greatly increases the total duration of adaptive radiotherapy, prolongs patient waiting time, and reduces clinical efficiency; moreover, CBCT images contain many artifacts and have poor image quality, so delineating directly on CBCT images depends largely on the experience and skill of the physician, and subjective differences between physicians lead to large variability in the delineation results.

In a first aspect, the embodiments of the present disclosure provide a method for constructing a CBCT delineation model. The method comprises: acquiring training data and training labels, the training data comprising CBCT images and CT images of a plurality of training subjects, and the training labels comprising the reference image quality of the CT images and reference delineation results of target-area delineation on the CT images; inputting the CBCT images into a first model for training, the output of the first model being pseudo-CT images, where training ends when the difference between the image quality of the pseudo-CT images and the reference image quality is smaller than a first set threshold, the trained first model being an image generation model; inputting the CT images into a second model for training, the output of the second model being predicted delineation results, where training ends when the difference between the predicted delineation results and the reference delineation results is smaller than a second set threshold, the trained second model being a segmentation model; and generating a CBCT delineation model from the image generation model and the segmentation model.
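The two-stage training scheme described above can be sketched as follows. This is a hypothetical illustration only: the disclosure does not fix the model architectures, losses, or threshold values, so toy stand-ins are used here just to show the stop rule (each model is trained until its monitored gap falls below the corresponding set threshold).

```python
def train_until(step_fn, gap_fn, threshold, max_iters=1000):
    """Run optimisation steps until the monitored gap falls below the
    set threshold -- the stop rule used for both models above."""
    for i in range(max_iters):
        gap = gap_fn()
        if gap < threshold:
            return i, gap  # training ends: gap below the set threshold
        step_fn()
    return max_iters, gap_fn()

# Stage 1: first model (CBCT -> pseudo-CT); the gap is the image-quality
# difference to the reference CT. A toy "step" just shrinks the gap.
q = {"gap": 1.0}
gen_steps, gen_gap = train_until(
    step_fn=lambda: q.update(gap=q["gap"] * 0.5),
    gap_fn=lambda: q["gap"],
    threshold=0.01,  # first set threshold (illustrative value)
)

# Stage 2: second model (CT -> delineation); the gap is the difference
# between predicted and reference delineation results.
d = {"gap": 1.0}
seg_steps, seg_gap = train_until(
    step_fn=lambda: d.update(gap=d["gap"] * 0.8),
    gap_fn=lambda: d["gap"],
    threshold=0.05,  # second set threshold (illustrative value)
)
```

In a real implementation, `step_fn` would be one optimiser step on the respective network and `gap_fn` the validation loss; only the threshold-based stop condition is taken from the text above.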

According to an embodiment of the present disclosure, generating the CBCT delineation model from the image generation model and the segmentation model comprises: using the output of the image generation model as the input of the segmentation model, to obtain a population CBCT delineation model comprising the image generation model and the segmentation model; fine-tuning the parameters of the segmentation model according to a CT image of a first subject and a reference delineation result for that CT image, to obtain a personalized segmentation model adapted to the first subject; and using the output of the image generation model as the input of the personalized segmentation model, to obtain a personalized CBCT delineation model comprising the image generation model and the personalized segmentation model.

According to an embodiment of the present disclosure, fine-tuning the parameters of the segmentation model according to the CT image of the first subject and the reference delineation result for that CT image, to obtain a personalized segmentation model adapted to the first subject, comprises: inputting the CT image of the first subject into the segmentation model and outputting a predicted delineation result for that CT image; and fine-tuning the parameters of the segmentation model so that the difference between the reference delineation result and the predicted delineation result for the CT image of the first subject is smaller than the second set threshold, the segmentation model after parameter fine-tuning being the personalized segmentation model adapted to the first subject.
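A toy numeric sketch of this patient-specific fine-tuning, under strong simplifying assumptions: the population "segmentation model" is reduced to a single intensity threshold, the delineation difference is measured as 1 − Dice, and the update rule and numeric values are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population "segmentation model" reduced to one intensity threshold
# (a hypothetical stand-in for real network weights).
population_param = 0.5

# The first subject's CT image and its reference delineation mask.
patient_ct = rng.random((32, 32))
reference_mask = patient_ct > 0.35   # this subject's anatomy differs

def predict(param, ct):
    # Predicted delineation result of the (toy) segmentation model.
    return ct > param

def dice_gap(pred, ref):
    # 1 - Dice: the delineation difference driven below the second threshold.
    inter = np.logical_and(pred, ref).sum()
    return 1.0 - 2.0 * inter / (pred.sum() + ref.sum())

# Fine-tune: start from the population parameter and nudge it until the
# gap between predicted and reference delineation is below the threshold.
param = population_param
for _ in range(100):
    if dice_gap(predict(param, patient_ct), reference_mask) < 0.01:
        break
    param -= 0.01                    # crude, illustrative update rule
personalized_param = param
```

A real implementation would instead run a few gradient steps on the network weights starting from the population checkpoint, using the same threshold-based stop condition.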

According to an embodiment of the present disclosure, the difference in image quality is analyzed along four dimensions: noise level, artifacts, tissue-boundary sharpness, and gray value. The first set threshold accordingly comprises a noise error threshold, an image (artifact) error threshold, a sharpness error threshold, and a gray-value error threshold. When the gap between the image quality of the pseudo-CT image and the reference image quality is smaller than the noise error threshold with respect to noise level, smaller than the image error threshold with respect to artifacts, smaller than the sharpness error threshold with respect to tissue-boundary sharpness, and smaller than the gray-value error threshold with respect to gray value, the image quality of the pseudo-CT image is considered consistent with the reference image quality, and the training-end condition is met.
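This four-dimension stop condition is a simple conjunction of per-dimension checks. The dimension names come from the text above; the metric definitions and the numeric threshold values below are illustrative assumptions, since the disclosure does not specify them.

```python
# Hypothetical per-dimension error thresholds (illustrative values only).
THRESHOLDS = {
    "noise": 0.05,      # noise error threshold
    "artifact": 0.10,   # image (artifact) error threshold
    "sharpness": 0.08,  # tissue-boundary sharpness error threshold
    "gray": 5.0,        # gray-value error threshold (e.g. mean HU difference)
}

def quality_converged(gaps, thresholds=THRESHOLDS):
    """Training stops only when every one of the four gaps between the
    pseudo-CT image and the reference CT is below its own threshold."""
    return all(gaps[k] < thresholds[k] for k in thresholds)
```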

In a second aspect, the embodiments of the present disclosure provide a method for delineating a CBCT image. The method comprises: acquiring a CBCT image of a target subject to be delineated; inputting the CBCT image to be delineated into a pre-trained target image generation model and outputting a pseudo-CT image; and inputting the pseudo-CT image into a pre-trained target segmentation model and outputting a CBCT delineation result for the target subject. The target image generation model contains first network parameters mapping CBCT images to pseudo-CT images; the target segmentation model contains second network parameters mapping CT images to delineation results of target areas in the CT images. In the training stage of the target image generation model, the input is a CBCT image of a training subject and the output is a pseudo-CT image of that subject; the image quality of the pseudo-CT image of the training subject is consistent with the image quality of the CT image of the training subject.
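The claimed two-step inference can be sketched as a simple composition. The two "models" here are toy closures standing in for the trained networks, purely to show the data flow CBCT → pseudo-CT → delineation result; the operations inside them are not the real learned mappings.

```python
import numpy as np

def image_generation_model(cbct):
    # First network parameters: map a CBCT image to a pseudo-CT image
    # with CT-like quality (clipping stands in for learned artifact removal).
    return np.clip(cbct, 0.0, 1.0)

def segmentation_model(pseudo_ct, level=0.6):
    # Second network parameters: map a CT-quality image to a target-area mask.
    return pseudo_ct > level

def delineate_cbct(cbct):
    """Two-step inference of the method: CBCT -> pseudo-CT -> delineation."""
    return segmentation_model(image_generation_model(cbct))

cbct = np.array([[1.4, 0.7], [0.2, -0.3]])
mask = delineate_cbct(cbct)   # delineation result for the target subject
```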

According to an embodiment of the present disclosure, the first network parameters of the target image generation model are obtained as follows: the CBCT images of a plurality of training subjects serve as the training data of the first model, and the reference image quality of the CT images of those training subjects serves as the training label of the first model; the trained first model is the target image generation model, and the trained parameters of the first model are the first network parameters. The second network parameters of the target segmentation model are obtained in one of the following ways: the CT images of the plurality of training subjects serve as the training data of the second model, and the reference delineation results of target-area delineation on those CT images serve as the training labels of the second model; the trained second model is the target segmentation model, and the trained parameters of the second model are the second network parameters. Alternatively, the CT images of the plurality of training subjects serve as the training data of the second model, the reference delineation results of target-area delineation on those CT images serve as the training labels of the second model, and the parameters obtained from this training serve as intermediate parameters of the second model; the intermediate parameters of the second model are then fine-tuned according to the CT image of the target subject and the reference delineation result for that CT image; the fine-tuned second model is the target segmentation model, and the fine-tuned parameters are the second network parameters.

In a third aspect, the embodiments of the present disclosure provide an apparatus for constructing a CBCT delineation model. The apparatus comprises a training data and label acquisition module, a first training module, a second training module, and a delineation model generation module. The training data and label acquisition module is configured to acquire training data and training labels, the training data comprising CBCT images and CT images of a plurality of training subjects, and the training labels comprising the reference image quality of the CT images and reference delineation results of target-area delineation on the CT images. The first training module is configured to input the CBCT images into a first model for training, the output of the first model being pseudo-CT images; training ends when the difference between the image quality of the pseudo-CT images and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model. The second training module is configured to input the CT images into a second model for training, the output of the second model being predicted delineation results; training ends when the difference between the predicted delineation results and the reference delineation results is smaller than a second set threshold, and the trained second model is a segmentation model. The delineation model generation module is configured to generate a CBCT delineation model from the image generation model and the segmentation model.

In a fourth aspect, the embodiments of the present disclosure provide an apparatus for delineating a CBCT image. The apparatus comprises a data acquisition module, a first processing module, and a second processing module. The data acquisition module is configured to acquire a CBCT image of a target subject to be delineated. The first processing module is configured to input the CBCT image to be delineated into a pre-trained target image generation model and output a pseudo-CT image. The second processing module is configured to input the pseudo-CT image into a pre-trained target segmentation model and output a CBCT delineation result for the target subject. The target image generation model contains first network parameters mapping CBCT images to pseudo-CT images; the target segmentation model contains second network parameters mapping CT images to delineation results of target areas in the CT images. In the training stage of the target image generation model, the input is a CBCT image of a training subject and the output is a pseudo-CT image of that subject; the image quality of the pseudo-CT image of the training subject is consistent with the image quality of the CT image of the training subject.

In a fifth aspect, the embodiments of the present disclosure provide an electronic device. The electronic device comprises a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is used to store a computer program; and the processor, when executing the program stored in the memory, implements the method for constructing a CBCT delineation model or the method for delineating a CBCT image described above.

In a sixth aspect, the embodiments of the present disclosure provide a computer-readable storage medium. A computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, it implements the method for constructing a CBCT delineation model or the method for delineating a CBCT image described above.

The above technical solutions provided by the embodiments of the present disclosure have at least some or all of the following advantages:

Because CBCT images and CT images are acquired at different times, the two present similar image structures but differ in image quality; moreover, since CBCT images contain many artifacts, using the delineation results of CT images to supervise the training of CBCT-image delineation would cause problems such as delineation deformation and inaccurate delineation. In the method for constructing a CBCT delineation model and the method for delineating a CBCT image provided by the embodiments of the present disclosure, the first and second models are trained to obtain an image generation model representing the mapping from CBCT images to pseudo-CT images and a segmentation model representing the mapping from CT images to predicted delineation results, and the CBCT delineation model is generated from the image generation model and the segmentation model. Because the real image quality of the CT images serves as the reference image quality of the training labels in the supervised training of the first model, the image quality of the pseudo-CT image that the image generation model produces from a CBCT image is consistent with the reference image quality of the real CT image. Since the pseudo-CT and real CT images have consistent image quality, the delineation results of CT images can be applied to pseudo-CT images; and since the image structure of a CBCT image and its pseudo-CT image is consistent, delineation on the pseudo-CT image amounts to matching and delineating the corresponding tissue structures of the CBCT image. Overall, this achieves relatively accurate and efficient delineation of CBCT images based on real-CT delineation results.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.

In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the prior art, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 schematically shows a flowchart of the method for constructing a CBCT delineation model provided by an embodiment of the present disclosure;

Fig. 2 schematically shows the training process of the image generation model according to an embodiment of the present disclosure;

Fig. 3 schematically shows the training process of the segmentation model according to an embodiment of the present disclosure;

Fig. 4 schematically shows the process of generating the CBCT delineation model according to an embodiment of the present disclosure;

Fig. 5 schematically shows a comparison of clinical target volume (CTV) delineation on CBCT images of nasopharyngeal carcinoma patients: (a) the CBCT image of patient X, with (a1) the ground-truth CTV delineation, (a2) the CTV delineation by the population CBCT delineation model, and (a3) the CTV delineation by the personalized CBCT delineation model; (b) the CBCT image of patient Y, with (b1) the ground-truth CTV delineation, (b2) the CTV delineation by the population CBCT delineation model, and (b3) the CTV delineation by the personalized CBCT delineation model;

Fig. 6 schematically shows a comparison of nasopharyngeal gross tumor volume (GTVnx) delineation on CBCT images of nasopharyngeal carcinoma patients: (a) the CBCT image of patient X, with (a1) the ground-truth GTVnx delineation, (a2) the GTVnx delineation by the population CBCT delineation model, and (a3) the GTVnx delineation by the personalized CBCT delineation model; (b) the CBCT image of patient Y, with (b1) the ground-truth GTVnx delineation, (b2) the GTVnx delineation by the population CBCT delineation model, and (b3) the GTVnx delineation by the personalized CBCT delineation model;

Fig. 7 schematically shows a flowchart of the method for delineating a CBCT image according to an embodiment of the present disclosure;

Fig. 8 schematically shows a structural block diagram of the apparatus for constructing a CBCT delineation model provided by an embodiment of the present disclosure;

Fig. 9 schematically shows a structural block diagram of the apparatus for delineating a CBCT image provided by an embodiment of the present disclosure; and

Fig. 10 schematically shows a structural block diagram of the electronic device provided by an embodiment of the present disclosure.

Detailed Description

In order to make the purpose, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

The first exemplary embodiment of the present disclosure provides a method for constructing a CBCT delineation model. The method can be executed by an electronic device with computing capability.

Fig. 1 schematically shows a flowchart of the method for constructing a CBCT delineation model provided by an embodiment of the present disclosure.

Referring to Fig. 1, the method for constructing a CBCT delineation model provided by an embodiment of the present disclosure includes the following steps: S110, S120, S130, and S140.

In step S110, training data and training labels are acquired. The training data include CBCT images and CT images of a plurality of training subjects. The training labels include the reference image quality of the CT images and the reference delineation results obtained by delineating target regions on the CT images.

The training subjects are the subjects from which the training set is collected. The training data are, for example, CBCT images and CT images of patients in a medical-system database, where the same body part, organ, or tissue region of the same patient has both a CBCT image and a CT image.

For example, for the head, stomach, thoracic, or abdominal region of a cancer patient P1, both a CBCT image and a CT image have been acquired; the CBCT images and CT images of all or some patients with the same type of cancer (e.g., nasopharyngeal carcinoma or lung cancer) in the medical-system database may serve as training data. The CBCT images and CT images are image data with the same pixel size that are registered to each other (i.e., have a mapping/matching relationship).

The target region is, for example, a targeted region or targeted position in the CBCT image and CT image of a training subject.

For each CT image, the true image quality of the CT image can be obtained. To distinguish it, in the description, from the image quality of the pseudo-CT image later output by the first model, and to indicate its role as a label, the true image quality of the CT image is referred to as the reference image quality.

For each CT image, the delineation result obtained by delineating the target region on that CT image can be acquired; the delineation result refers to a boundary line or contour line enclosing the target region. To distinguish it from the predicted delineation result later output by the second model, and to indicate its role as a label, the delineation result obtained by delineating the target region on the CT image is referred to as the reference delineation result.

Because CT imaging has fast scan times and produces clear images, medical-system databases store abundant CT images together with accurate reference delineations; it is therefore natural to use the reference delineations of CT images to guide the delineation of CBCT images. However, because CBCT and CT images are acquired at different times, the two modalities present similar image structures but differ in image quality, and CBCT images contain many artifacts; if the delineation results of CT images were used directly for supervised training of CBCT delineation, problems such as contour deformation and inaccurate delineation would arise. Therefore, in the embodiments of the present disclosure, a training process involving two models is constructed, in which the output of the first model and the input of the second model are linked through consistency of image quality. By training the first model and the second model, an image generation model representing the mapping from CBCT images to pseudo-CT images and a segmentation model representing the mapping from CT images to predicted delineation results are obtained. Since the generated pseudo-CT image shares a consistent anatomical structure (also describable as image structure) with the CBCT image, while the pseudo-CT image and the CT image share consistent image quality, accurate delineation of the CBCT image is achieved through the conversion performed by the image generation model followed by the delineation performed by the segmentation model.

In step S120, the CBCT images are input to the first model for training. The output of the first model is a pseudo-CT image. Training ends when the gap between the image quality of the pseudo-CT image and the reference image quality is smaller than a first set threshold, and the trained first model is the image generation model.

Fig. 2 schematically shows the training process of the image generation model according to an embodiment of the present disclosure.

Referring to Fig. 2, the image generation model in the training phase is described as the first model; the trained first model is the image generation model. In some embodiments, the first model may be a neural network model; for example, the image generation model may be trained using a deep-learning method.

The input of the first model is a CBCT image from the training data, and the output is a pseudo-CT image. A pseudo-CT image is a CT image generated by simulation on a computing device or by a neural network model, as distinguished from a real CT image acquired by a scanner.

In an embodiment, training of the image generation model can be implemented based on part of the structure of a CycleGAN network. A CycleGAN network includes two generators and two discriminators, described here as generator A, generator B, discriminator C, and discriminator D. The data flow is: real CBCT image → generator A → pseudo-CT image → generator B → reconstructed CBCT image. The training process also involves discriminator C and discriminator D: discriminator C judges whether the image quality of the generated pseudo-CT image is consistent with that of real CT images, and discriminator D judges whether the reconstructed CBCT image is consistent with the real CBCT image. The sub-network corresponding to generator A and discriminator C can be selected and trained to obtain the image generation model.
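The CycleGAN-style data flow described above can be sketched as follows. This is a minimal illustration only: the "generators" are hypothetical stand-ins (simple invertible intensity mappings), not trained networks, and the discriminator is replaced by a crude mean/std proxy for image-quality comparison; only the assembly of the cycle-consistency and quality terms follows the text.

```python
import numpy as np

def generator_a(cbct):
    # stand-in for generator A: CBCT -> pseudo-CT (illustrative linear map)
    return 0.9 * cbct + 0.05

def generator_b(pseudo_ct):
    # stand-in for generator B: pseudo-CT -> reconstructed CBCT (exact inverse here)
    return (pseudo_ct - 0.05) / 0.9

def cycle_step(real_cbct, real_ct):
    pseudo_ct = generator_a(real_cbct)          # real CBCT -> generator A -> pseudo-CT
    recon_cbct = generator_b(pseudo_ct)         # pseudo-CT -> generator B -> reconstructed CBCT
    # cycle-consistency term: reconstructed CBCT should match the input CBCT
    cycle_loss = float(np.mean(np.abs(recon_cbct - real_cbct)))
    # discriminator-C surrogate: compare pseudo-CT statistics against real-CT statistics
    quality_gap = abs(float(pseudo_ct.mean()) - float(real_ct.mean())) \
        + abs(float(pseudo_ct.std()) - float(real_ct.std()))
    return pseudo_ct, cycle_loss, quality_gap

rng = np.random.default_rng(0)
cbct = rng.uniform(-1.0, 1.0, size=(4, 4))
ct = rng.uniform(-1.0, 1.0, size=(4, 4))
pseudo, cyc, gap = cycle_step(cbct, ct)
```

Because the stand-in generator B exactly inverts generator A, the cycle-consistency loss here is essentially zero; in a real CycleGAN both generators are learned and this term is minimized during training.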

During training, the training ends when the gap between the image quality of the pseudo-CT image and the reference image quality is smaller than the first set threshold.

In an embodiment, the image-quality gap is analyzed along four dimensions: noise level, artifacts, tissue-boundary sharpness, and gray value. Accordingly, the first set threshold includes a noise error threshold, an image error threshold, a sharpness error threshold, and a gray-level error threshold. When the gap between the image quality of the pseudo-CT image and the reference image quality is smaller than the noise error threshold with respect to noise level, smaller than the image error threshold with respect to artifacts, smaller than the sharpness error threshold with respect to tissue-boundary sharpness, and smaller than the gray-level error threshold with respect to gray value, the image quality of the pseudo-CT image is regarded as consistent with the reference image quality, and the condition for ending training is met.
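The four-dimensional stopping check described above can be sketched as a simple predicate. The dimension names and threshold values below are illustrative assumptions; in practice each gap would be computed by a dedicated image-quality metric over the pseudo-CT and reference CT images.

```python
def quality_gap_small_enough(gaps, thresholds):
    """Return True only when every per-dimension gap is below its threshold."""
    keys = ("noise", "artifact", "sharpness", "gray_value")
    return all(gaps[k] < thresholds[k] for k in keys)

# illustrative threshold values (assumed, not from the source)
thresholds = {"noise": 0.05, "artifact": 0.10, "sharpness": 0.08, "gray_value": 0.03}

gaps_ok = {"noise": 0.01, "artifact": 0.02, "sharpness": 0.02, "gray_value": 0.01}
gaps_bad = {"noise": 0.01, "artifact": 0.20, "sharpness": 0.02, "gray_value": 0.01}

stop_training = quality_gap_small_enough(gaps_ok, thresholds)   # all four gaps below threshold
keep_training = quality_gap_small_enough(gaps_bad, thresholds)  # artifact gap too large
```

Note that the condition is conjunctive: a single dimension exceeding its threshold keeps the training running.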

In step S130, the CT images are input to the second model for training. The output of the second model is a predicted delineation result. Training ends when the gap between the predicted delineation result and the reference delineation result is smaller than a second set threshold, and the trained second model is the segmentation model.

Fig. 3 schematically shows the training process of the segmentation model according to an embodiment of the present disclosure.

Referring to Fig. 3, the segmentation model in the training phase is described as the second model; the trained second model is the segmentation model. In some embodiments, the second model may be a neural network model; for example, the segmentation model may be trained using a deep-learning method.

The input of the second model is a CT image from the training data, and the output is a predicted delineation result; the training label is the reference delineation result obtained by delineating the target region on that CT image.

In some embodiments, the second model may be, but is not limited to, a DeepLab V3 network model.

In step S140, a CBCT delineation model is generated from the image generation model and the segmentation model.

Fig. 4 schematically shows the process of generating a CBCT delineation model according to an embodiment of the present disclosure.

According to an embodiment of the present disclosure, referring to Fig. 4, generating the CBCT delineation model from the image generation model and the segmentation model in step S140 includes:

using the output of the image generation model as the input of the segmentation model to obtain a population-level CBCT delineation model 410 comprising the image generation model and the segmentation model;

fine-tuning the parameters of the segmentation model according to a CT image of a first subject and the reference delineation result for that CT image, to obtain a personalized segmentation model adapted to the first subject; and

using the output of the image generation model as the input of the personalized segmentation model to obtain a personalized CBCT delineation model 420 comprising the image generation model and the personalized segmentation model.

In some embodiments, the image generation model and the segmentation model can be combined into the population-level CBCT delineation model, with the output of the image generation model serving as the input of the segmentation model.

Consider that the segmentation model obtained by training the second model is a generalized, population-level model: although the lesion regions of all patients are similar in gross structure, the details of the target structures differ between individual patients. That is, a population-level segmentation model trained on a limited amount of data can hardly predict accurate contours of the region of interest (target region) for individuals with large anatomical variations or unusual body shapes. Therefore, in other embodiments, the CT image of the first subject (also describable as the target subject) whose CBCT images are to be delineated is used as personalized input to fine-tune the parameters of the segmentation model, and the fine-tuned segmentation model is combined with the image generation model to form the personalized CBCT delineation model, with the output of the image generation model serving as the input of the fine-tuned segmentation model. By building a personalized segmentation model, individual region-of-interest structures can be segmented on CBCT images quickly, objectively, and with high accuracy.

According to an embodiment of the present disclosure, fine-tuning the parameters of the segmentation model according to the CT image of the first subject and the reference delineation result for that CT image to obtain the personalized segmentation model adapted to the first subject includes: inputting the CT image of the first subject into the segmentation model and outputting a predicted delineation result for that CT image; and fine-tuning the parameters of the segmentation model so that the gap between the reference delineation result and the predicted delineation result for the CT image of the first subject is smaller than the second set threshold, the segmentation model after parameter fine-tuning being the personalized segmentation model adapted to the first subject.
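The fine-tuning loop described above can be sketched as follows. The "segmentation model" here is a hypothetical per-pixel linear scorer (weight w, bias b) rather than a deep network, and the gap metric is a simple mean absolute difference; only the loop structure — update parameters on the first subject's CT image until the gap to the reference delineation falls below the second set threshold — follows the text.

```python
import numpy as np

def predict_mask(ct, w, b):
    # per-pixel sigmoid score as a stand-in for the segmentation output
    return 1.0 / (1.0 + np.exp(-(w * ct + b)))

def fine_tune(ct, reference_mask, w, b, threshold=0.2, lr=0.5, max_steps=2000):
    gap = float("inf")
    for _ in range(max_steps):
        pred = predict_mask(ct, w, b)
        gap = float(np.mean(np.abs(pred - reference_mask)))
        if gap < threshold:                 # training-end condition from the text
            break
        grad = pred - reference_mask        # gradient of mean BCE w.r.t. the logit
        w -= lr * float(np.mean(grad * ct))
        b -= lr * float(np.mean(grad))
    return w, b, gap

rng = np.random.default_rng(1)
ct = rng.uniform(-1.0, 1.0, size=(8, 8))
reference = (ct > 0.2).astype(float)        # stand-in reference delineation mask
w, b, gap = fine_tune(ct, reference, w=1.0, b=0.0)
```

In the embodiment itself, the same loop shape would wrap a forward/backward pass of the trained segmentation network, starting from the population-level parameters rather than from scratch.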

In application scenarios, either the population-level CBCT delineation model or the personalized CBCT delineation model can be used to automatically delineate CBCT images.

In the embodiment comprising steps S110 to S140, by training the first model and the second model, an image generation model representing the mapping from CBCT images to pseudo-CT images and a segmentation model representing the mapping from CT images to predicted delineation results are obtained, and the CBCT delineation model is generated from the image generation model and the segmentation model. Because the true image quality of the CT images serves as the reference image quality of the training labels during the supervised training of the first model, the image quality of the pseudo-CT image produced from a CBCT image by the image generation model is consistent with the reference image quality of real CT images. Since the pseudo-CT image and the real CT image have consistent image quality, delineation results for CT images can be applied to pseudo-CT images; and since the CBCT image and the pseudo-CT image have consistent image structure, delineating on the pseudo-CT image is equivalent to delineating the corresponding tissue structure in the CBCT image. Overall, this achieves relatively accurate and efficient delineation of CBCT images based on real CT delineation results.

For example, when building the image generation model from CBCT images to pseudo-CT images, patients' planning CT images and CBCT images acquired during treatment are selected to form the training data set. The patients' CT images were acquired on a Philips Brilliance CT Big Bore CT scanner or a Siemens SOMATOM Definition AS CT simulator. Medical image processing software (e.g., MIM) is used to register (which can also be understood as mapping/pairing) the CT images to the CBCT images, producing CT-CBCT paired data with the same pixel size as the CBCT images. The MIM software is then used to automatically delineate the outer contour of the target region, and the portion within the outer contour is extracted for deep-learning training and testing. Before being fed into the neural network, the CT and CBCT images may also be normalized into the value range [-1, 1]. A deep-learning network is then used to learn the mapping from CBCT images to pseudo-CT images: the task is to take a single-channel CBCT image as input and output a single-channel pseudo-CT image, trained in a supervised manner with the image quality of the CT images as the training label.
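The [-1, 1] normalization step can be sketched as a linear rescaling. The intensity clipping window below is an illustrative assumption; the source only states that the images are scaled into [-1, 1] before entering the network.

```python
import numpy as np

def normalize_to_unit_range(img, lo=-1000.0, hi=1000.0):
    """Clip to a fixed intensity window [lo, hi], then scale linearly into [-1, 1]."""
    img = np.clip(np.asarray(img, dtype=np.float64), lo, hi)
    return 2.0 * (img - lo) / (hi - lo) - 1.0

# illustrative CT values (in HU-like units); -2000 and 3000 are clipped
ct = np.array([[-2000.0, -1000.0], [0.0, 3000.0]])
norm = normalize_to_unit_range(ct)
```

The same function would be applied to both the CT and CBCT volumes so that the network sees inputs and targets on a common scale.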

In applying the method of the embodiments of the present disclosure, the CBCT delineation model is constructed by means of deep learning, and the planning CT image acquired for each patient before radiotherapy, together with its delineation result, is used to personalize the population-level automatic CBCT delineation model through fine-tuning. No manually delineated CBCT data need to be collected during training, and the automatic delineation accuracy of the region of interest is effectively improved.

For example, when a new patient is admitted, the physician first performs a CT scan of the target region to obtain a CT image and delineates the region of interest on it. Before the patient's first CBCT scan, this planning CT image and its delineation are input into the segmentation model, which is fine-tuned to obtain a personalized segmentation model (also describable as an individualized segmentation model). After the patient's CBCT scan on a given day, the CBCT image is input into the image generation model to produce the corresponding pseudo-CT image, and that pseudo-CT image is input into the fine-tuned personalized segmentation model to obtain a high-accuracy automatic delineation of the region-of-interest structure. Based on this quickly delineated structure, physicians and physicists can observe the patient's anatomical changes or perform adaptive radiotherapy. Not only is the accuracy higher than that of the population-level model, but compared with manual delineation, the deep-learning method also greatly improves efficiency, reduces patient waiting time, and improves clinical throughput.

Fig. 5 schematically shows a comparison of delineation results for the clinical target volume (CTV) on CBCT images of nasopharyngeal carcinoma patients, illustrating: (a) the CBCT image of nasopharyngeal carcinoma patient X, with (a1) the ground-truth CTV delineation, (a2) the CTV delineation produced by the population-level CBCT delineation model, and (a3) the CTV delineation produced by the personalized CBCT delineation model; and (b) the CBCT image of nasopharyngeal carcinoma patient Y, with (b1) the ground-truth CTV delineation, (b2) the CTV delineation produced by the population-level CBCT delineation model, and (b3) the CTV delineation produced by the personalized CBCT delineation model.

Take images of nasopharyngeal carcinoma patients and the delineation of the clinical target volume (CTV) as an example. Parts (a) and (b) of Fig. 5 show the CBCT images of two nasopharyngeal carcinoma patients X and Y, respectively. Comparing (a1) with (a2), or (b1) with (b2), in Fig. 5 shows that the CTV contours produced by the population-level CBCT delineation model essentially capture the core of the true clinical target volume with acceptable accuracy. Automatic delineation based on the population-level CBCT delineation model saves the labor and time of manual CBCT delineation and requires no prior delineation or annotation of CBCT data; it only needs the large number of CT images and corresponding CT delineation annotations (serving as reference delineation results) in the medical-system database. Comparing (a1) with (a3), or (b1) with (b3), in Fig. 5 shows that the CTV contours produced by the personalized CBCT delineation model are very close to the true delineations; in addition to the advantages of the population-level CBCT delineation model, the accuracy of the delineation results is further improved.

Fig. 6 schematically shows a comparison of delineation results for the nasopharyngeal gross tumor volume (GTVnx) on CBCT images of nasopharyngeal carcinoma patients, illustrating: (a) the CBCT image of nasopharyngeal carcinoma patient X, with (a1) the ground-truth GTVnx delineation, (a2) the GTVnx delineation produced by the population-level CBCT delineation model, and (a3) the GTVnx delineation produced by the personalized CBCT delineation model; and (b) the CBCT image of nasopharyngeal carcinoma patient Y, with (b1) the ground-truth GTVnx delineation, (b2) the GTVnx delineation produced by the population-level CBCT delineation model, and (b3) the GTVnx delineation produced by the personalized CBCT delineation model.

Take images of nasopharyngeal carcinoma patients and the delineation of the nasopharyngeal gross tumor volume (GTVnx) as an example. Parts (a) and (b) of Fig. 6 show the CBCT images of two nasopharyngeal carcinoma patients X and Y, respectively. Comparing (a1) with (a2), or (b1) with (b2), in Fig. 6 shows that the GTVnx contours produced by the population-level CBCT delineation model essentially capture the core of the true nasopharyngeal tumor target volume with acceptable accuracy. Automatic delineation based on the population-level CBCT delineation model saves the labor and time of manual CBCT delineation and requires no prior delineation or annotation of CBCT data; it only needs the large number of CT images and corresponding CT delineation annotations (serving as reference delineation results) in the medical-system database. Comparing (a1) with (a3), or (b1) with (b3), in Fig. 6 shows that the GTVnx contours produced by the personalized CBCT delineation model are very close to the true delineations; in addition to the advantages of the population-level CBCT delineation model, the accuracy of the delineation results is further improved.

A second exemplary embodiment of the present disclosure provides a method for delineating a CBCT image. The method may be performed by an electronic device with computing capability.

In some embodiments, this embodiment can directly use the population-level CBCT delineation model or the personalized CBCT delineation model obtained in the first embodiment for data processing: a CBCT image to be delineated of a target subject is input into the population-level or personalized CBCT delineation model, which outputs the CBCT delineation result corresponding to the target subject.

Fig. 7 schematically shows a flowchart of the method for delineating a CBCT image according to an embodiment of the present disclosure.

Referring to Fig. 7, the method for delineating a CBCT image provided by an embodiment of the present disclosure includes the following steps: S710, S720, and S730.

In step S710, a CBCT image to be delineated of a target subject is acquired.

In step S720, the CBCT image to be delineated is input into a pre-trained target image generation model, which outputs a pseudo-CT image.

In step S730, the pseudo-CT image is input into a pre-trained target segmentation model, which outputs the CBCT delineation result of the target subject.

Here, the target image generation model contains first network parameters mapping a CBCT image to a pseudo-CT image, and the target segmentation model contains second network parameters mapping a CT image to the delineation result of the target region of that CT image. In the training phase of the target image generation model, the input is a CBCT image of a training subject and the output is a pseudo-CT image of that training subject; the image quality of the pseudo-CT image of the training subject is consistent with the image quality of the CT image of the training subject.
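The two-stage inference pipeline of steps S710 to S730 — CBCT image → target image generation model → pseudo-CT image → target segmentation model → delineation result — can be sketched as a simple composition of two functions. Both models below are hypothetical stand-ins (a smooth intensity mapping and a fixed-threshold segmenter); only the way the two stages are chained follows the text.

```python
import numpy as np

def target_image_generation_model(cbct):
    # stand-in for the trained generator: CBCT -> pseudo-CT
    return np.tanh(cbct)

def target_segmentation_model(pseudo_ct, level=0.0):
    # stand-in for the trained segmenter: pseudo-CT -> binary delineation mask
    return (pseudo_ct > level).astype(np.uint8)

def delineate_cbct(cbct):
    pseudo_ct = target_image_generation_model(cbct)   # step S720
    return target_segmentation_model(pseudo_ct)       # step S730

cbct = np.array([[-0.8, 0.3], [0.6, -0.1]])           # step S710: acquired CBCT (toy values)
mask = delineate_cbct(cbct)
```

The population-level and personalized delineation models share this exact composition; they differ only in which parameters the second stage carries.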

According to an embodiment of the present disclosure, the first network parameters of the target image generation model are obtained as follows: the CBCT images of a plurality of training subjects serve as training data for the first model, the reference image quality of the CT images of the plurality of training subjects serves as the training label of the first model, the trained first model is the target image generation model, and the trained parameters of the first model are the first network parameters.

In some embodiments, the second network parameters of the target segmentation model are obtained in one of the following ways:

the CT images of the plurality of training subjects serve as training data for the second model, the reference delineation results obtained by delineating target regions on those CT images serve as the training labels of the second model, the trained second model is the target segmentation model, and the trained parameters of the second model are the second network parameters; or

the CT images of the plurality of training subjects serve as training data for the second model, the reference delineation results obtained by delineating target regions on those CT images serve as the training labels of the second model, and the resulting trained parameters serve as intermediate parameters of the second model; the intermediate parameters of the second model are then fine-tuned according to the CT image of the target subject and the reference delineation result for that CT image, the fine-tuned second model is the target segmentation model, and the fine-tuned parameters are the second network parameters.

In an embodiment, generating a CBCT delineation model from the image generation model and the segmentation model includes: using the output of the image generation model as the input of the segmentation model to obtain a population-level CBCT delineation model comprising the image generation model and the segmentation model; fine-tuning the parameters of the segmentation model according to the CT image of a first subject and the reference delineation result for that CT image to obtain a personalized segmentation model adapted to the first subject; and using the output of the image generation model as the input of the personalized segmentation model to obtain a personalized CBCT delineation model comprising the image generation model and the personalized segmentation model. Reference may be made to the detailed description of Fig. 4 in the first embodiment.

For further details of the second embodiment, reference may be made to the related description of the first embodiment, which is not repeated here.

A third exemplary embodiment of the present disclosure provides an apparatus for constructing a CBCT delineation model.

FIG. 8 schematically shows a structural block diagram of the apparatus for constructing a CBCT delineation model provided by an embodiment of the present disclosure.

Referring to FIG. 8, the apparatus 800 for constructing a CBCT delineation model provided by an embodiment of the present disclosure includes: a training data and label acquisition module 801, a first training module 802, a second training module 803, and a delineation model generation module 804.

The training data and label acquisition module 801 is configured to acquire training data and training labels. The training data includes CBCT images and CT images of a plurality of training subjects; the training labels include the reference image quality of the CT images and the reference delineation results of target-area delineation performed on the CT images.

The first training module 802 is configured to input the CBCT images into a first model for training. The output of the first model is a pseudo-CT image; training ends when the difference between the image quality of the pseudo-CT image and the reference image quality is smaller than a first set threshold, and the trained first model is the image generation model.
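The stopping rule used by both training modules — iterate until the gap to the reference drops below a set threshold — can be sketched generically. The quality metric here is a toy scalar; the patent's actual quality measure spans several dimensions (noise, artifacts, sharpness, gray values), and all names below are illustrative:

```python
def train_until_threshold(quality_gap, step, threshold, max_epochs=10000):
    """Run training steps until the model's gap to the reference is below threshold.

    quality_gap: callable returning the current gap (e.g. pseudo-CT vs reference CT).
    step: callable performing one training update.
    """
    for epoch in range(max_epochs):
        if quality_gap() < threshold:
            return epoch  # training ends: gap is below the set threshold
        step()
    raise RuntimeError("did not converge within max_epochs")

# Toy model: the "gap" halves on each step, standing in for a shrinking loss.
state = {"gap": 1.0}
epochs = train_until_threshold(
    quality_gap=lambda: state["gap"],
    step=lambda: state.__setitem__("gap", state["gap"] * 0.5),
    threshold=0.01,
)
print(epochs)  # 7: 1.0 halves seven times to 0.0078125 < 0.01
```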

The second training module 803 is configured to input the CT images into a second model for training. The output of the second model is a predicted delineation result; training ends when the difference between the predicted delineation result and the reference delineation result is smaller than a second set threshold, and the trained second model is the segmentation model.

The delineation model generation module 804 is configured to generate a CBCT delineation model from the image generation model and the segmentation model.

For implementation details of the functional modules of this embodiment, reference may be made to the related description of the first embodiment, which is not repeated here.

A fourth exemplary embodiment of the present disclosure provides an apparatus for delineating a CBCT image.

FIG. 9 schematically shows a structural block diagram of the apparatus for delineating a CBCT image provided by an embodiment of the present disclosure.

Referring to FIG. 9, the apparatus 900 for delineating a CBCT image provided by an embodiment of the present disclosure includes: a data acquisition module 901, a first processing module 902, and a second processing module 903.

The data acquisition module 901 is configured to acquire a to-be-delineated CBCT image of a target subject.

The first processing module 902 is configured to input the to-be-delineated CBCT image into a pre-trained target image generation model and output a pseudo-CT image. In one embodiment, the first processing module includes the target image generation model; in another embodiment, the first processing module communicates with a device storing the target image generation model and invokes that model to process the to-be-delineated CBCT image.

The second processing module 903 is configured to input the pseudo-CT image into a pre-trained target segmentation model and output the CBCT delineation result of the target subject. In one embodiment, the second processing module includes the target segmentation model; in another embodiment, the second processing module communicates with a device storing the target segmentation model and invokes that model to process the pseudo-CT image. In some embodiments, the target image generation model and the target segmentation model are integrated in the same device as a single CBCT delineation model; in other embodiments, the two models may be distributed across different devices, being linked to each other during training through the quality of the CT images.
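The flow through modules 901–903 is a two-stage inference: acquire the CBCT image, map it to a pseudo-CT, then segment. A sketch using plain lists as stand-ins for image volumes; the intensity offset and the threshold are hypothetical placeholders for the trained models, not values from the patent:

```python
# Stage 1 stand-in: the "target image generation model" maps CBCT intensities
# toward CT-like values (here a fixed offset instead of a trained network).
def generate_pseudo_ct(cbct_voxels):
    return [v + 100 for v in cbct_voxels]

# Stage 2 stand-in: the "target segmentation model" labels each voxel
# (here a simple intensity threshold instead of a trained network).
def segment(pseudo_ct_voxels, threshold=150):
    return [1 if v > threshold else 0 for v in pseudo_ct_voxels]

def delineate_cbct(cbct_voxels):
    """Apparatus 900 flow: acquisition -> module 902 -> module 903."""
    pseudo_ct = generate_pseudo_ct(cbct_voxels)   # first processing module
    return segment(pseudo_ct)                     # second processing module

print(delineate_cbct([10, 60, 120]))  # [0, 1, 1]
```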

The target image generation model contains first network parameters that map a CBCT image to a pseudo-CT image; the target segmentation model contains second network parameters that map a CT image to the delineation result of the target area of that CT image. In the training stage of the target image generation model, the input is a CBCT image of a training subject and the output is a pseudo-CT image of that subject; the image quality of the pseudo-CT image of the training subject is consistent with the image quality of the CT image of the training subject.
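The "image quality consistent" condition above can be read as a multi-threshold check over the four quality dimensions named in claim 4: each dimension's gap must fall below its own error threshold. A hedged sketch; the metric names and threshold values are illustrative only:

```python
def quality_gaps(pseudo_ct_q, reference_q):
    """Per-dimension absolute gaps between pseudo-CT and reference CT quality."""
    return {dim: abs(pseudo_ct_q[dim] - reference_q[dim]) for dim in reference_q}

def quality_consistent(pseudo_ct_q, reference_q, thresholds):
    """True only when every dimension's gap is below its own threshold."""
    gaps = quality_gaps(pseudo_ct_q, reference_q)
    return all(gaps[dim] < thresholds[dim] for dim in thresholds)

# Hypothetical quality measurements and thresholds (not from the patent).
reference = {"noise": 0.10, "artifact": 0.05, "sharpness": 0.80, "gray": 40.0}
pseudo    = {"noise": 0.12, "artifact": 0.06, "sharpness": 0.78, "gray": 42.0}
thresholds = {"noise": 0.05, "artifact": 0.02, "sharpness": 0.05, "gray": 5.0}

print(quality_consistent(pseudo, reference, thresholds))  # True
```

Because the check is a conjunction, failing any single dimension (e.g. excessive noise) keeps training going even if the other three dimensions are within tolerance.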

Any number of the functional modules included in the apparatus 800 or the apparatus 900 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. At least one of the functional modules included in the apparatus 800 or the apparatus 900 may be implemented, at least in part, as a hardware circuit, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system in package, or an application-specific integrated circuit (ASIC), or by hardware or firmware in any other reasonable manner of integrating or packaging circuits, or by any one of, or an appropriate combination of, the three implementation manners of software, hardware, and firmware. Alternatively, at least one of the functional modules included in the apparatus 800 or the apparatus 900 may be implemented, at least in part, as a computer program module that performs the corresponding function when executed.

A fifth exemplary embodiment of the present disclosure provides an electronic device.

FIG. 10 schematically shows a structural block diagram of the electronic device provided by an embodiment of the present disclosure.

Referring to FIG. 10, the electronic device 1000 provided by an embodiment of the present disclosure includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, wherein the processor 1001, the communication interface 1002, and the memory 1003 communicate with one another through the communication bus 1004. The memory 1003 is configured to store a computer program; the processor 1001 is configured to implement, when executing the program stored in the memory, the method for constructing a CBCT delineation model or the method for delineating a CBCT image as described above.

A sixth exemplary embodiment of the present disclosure further provides a computer-readable storage medium. A computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for constructing a CBCT delineation model or the method for delineating a CBCT image as described above is implemented.

The computer-readable storage medium may be included in the device/apparatus described in the above embodiments, or it may exist separately without being assembled into that device/apparatus. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to the embodiments of the present disclosure.

According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, but is not limited to: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.

It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

The above are only specific implementations of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features claimed herein.

Claims (10)

1. A method of constructing a CBCT delineation model, comprising:
acquiring training data and training labels, wherein the training data comprises: CBCT images and CT images of a plurality of training subjects, and the training labels comprise: a reference image quality of the CT images, and a reference delineation result of target-area delineation performed on the CT images;
inputting the CBCT images into a first model for training, wherein the output of the first model is a pseudo-CT image, training ends when the difference between the image quality of the pseudo-CT image and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model;
inputting the CT images into a second model for training, wherein the output of the second model is a predicted delineation result, training ends when the difference between the predicted delineation result and the reference delineation result is smaller than a second set threshold, and the trained second model is a segmentation model; and
generating a CBCT delineation model from the image generation model and the segmentation model.
2. The method of claim 1, wherein generating a CBCT delineation model from the image generation model and the segmentation model comprises:
taking the output of the image generation model as the input of the segmentation model, to obtain a population-level CBCT delineation model comprising the image generation model and the segmentation model;
fine-tuning the parameters of the segmentation model according to a CT image of a first subject and a reference delineation result for the CT image of the first subject, to obtain a personalized segmentation model adapted to the first subject; and
taking the output of the image generation model as the input of the personalized segmentation model, to obtain a personalized CBCT delineation model comprising the image generation model and the personalized segmentation model.
3. The method of claim 2, wherein fine-tuning the parameters of the segmentation model according to the CT image of the first subject and the reference delineation result for the CT image of the first subject, to obtain a personalized segmentation model adapted to the first subject, comprises:
inputting the CT image of the first subject into the segmentation model, and outputting a predicted delineation result for the CT image of the first subject; and
fine-tuning the parameters of the segmentation model so that the difference between the reference delineation result and the predicted delineation result for the CT image of the first subject is smaller than the second set threshold, the fine-tuned segmentation model being the personalized segmentation model adapted to the first subject.
4. The method of claim 1, wherein the difference in image quality is analyzed in four dimensions: noise level, artifacts, tissue-boundary sharpness, and gray values; and the first set threshold comprises: a noise error threshold, an artifact error threshold, a sharpness error threshold, and a gray-level error threshold;
wherein training ends when the noise-level difference between the image quality of the pseudo-CT image and the reference image quality is smaller than the noise error threshold, the artifact difference is smaller than the artifact error threshold, the tissue-boundary-sharpness difference is smaller than the sharpness error threshold, and the gray-value difference is smaller than the gray-level error threshold.
5. A method of delineating a CBCT image, comprising:
acquiring a to-be-delineated CBCT image of a target subject;
inputting the to-be-delineated CBCT image into a pre-trained target image generation model, and outputting a pseudo-CT image; and
inputting the pseudo-CT image into a pre-trained target segmentation model, and outputting a CBCT delineation result of the target subject;
wherein the target image generation model comprises first network parameters mapping from a CBCT image to a pseudo-CT image; the target segmentation model comprises second network parameters mapping from a CT image to a delineation result of a target area of the CT image; in a training stage of the target image generation model, a CBCT image of a training subject is input, and a pseudo-CT image of the training subject is output; and the image quality of the pseudo-CT image of the training subject is consistent with the image quality of the CT image of the training subject.
6. The method of claim 5, wherein:
the first network parameters of the target image generation model are obtained by: taking CBCT images of a plurality of training subjects as training data of the first model, and taking the reference image quality of the CT images of the plurality of training subjects as training labels of the first model, the trained first model being the target image generation model and the trained parameters of the first model being the first network parameters; and
the second network parameters of the target segmentation model are obtained in one of the following ways:
taking the CT images of the plurality of training subjects as training data of a second model, and taking the reference delineation results of target-area delineation performed on the CT images of the plurality of training subjects as training labels of the second model, the trained second model being the target segmentation model and the trained parameters of the second model being the second network parameters; or
taking the CT images of the plurality of training subjects as training data of the second model, taking the reference delineation results of target-area delineation performed on the CT images of the plurality of training subjects as training labels of the second model, and taking the parameters obtained from training as intermediate parameters of the second model; and fine-tuning the intermediate parameters of the second model according to the CT image of the target subject and a reference delineation result for the CT image of the target subject, the fine-tuned second model being the target segmentation model and the fine-tuned parameters being the second network parameters.
7. An apparatus for constructing a CBCT delineation model, comprising:
a training data and label acquisition module, configured to acquire training data and training labels, the training data comprising: CBCT images and CT images of a plurality of training subjects, and the training labels comprising: a reference image quality of the CT images, and a reference delineation result of target-area delineation performed on the CT images;
a first training module, configured to input the CBCT images into a first model for training, wherein the output of the first model is a pseudo-CT image, training ends when the difference between the image quality of the pseudo-CT image and the reference image quality is smaller than a first set threshold, and the trained first model is an image generation model;
a second training module, configured to input the CT images into a second model for training, wherein the output of the second model is a predicted delineation result, training ends when the difference between the predicted delineation result and the reference delineation result is smaller than a second set threshold, and the trained second model is a segmentation model; and
a delineation model generation module, configured to generate a CBCT delineation model from the image generation model and the segmentation model.
8. An apparatus for delineating CBCT images, comprising:
a data acquisition module, configured to acquire a to-be-delineated CBCT image of a target subject;
a first processing module, configured to input the to-be-delineated CBCT image into a pre-trained target image generation model and output a pseudo-CT image; and
a second processing module, configured to input the pseudo-CT image into a pre-trained target segmentation model and output a CBCT delineation result of the target subject;
wherein the target image generation model comprises first network parameters mapping from a CBCT image to a pseudo-CT image; the target segmentation model comprises second network parameters mapping from a CT image to a delineation result of a target area of the CT image; in a training stage of the target image generation model, a CBCT image of a training subject is input, and a pseudo-CT image of the training subject is output; and the image quality of the pseudo-CT image of the training subject is consistent with the image quality of the CT image of the training subject.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory being configured to store a computer program; and
the processor being configured to implement the method of any one of claims 1-6 when executing the program stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1-6.
CN202211364596.4A 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT delineation model and delineation CBCT image Pending CN116168097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211364596.4A CN116168097A (en) 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT delineation model and delineation CBCT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211364596.4A CN116168097A (en) 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT delineation model and delineation CBCT image

Publications (1)

Publication Number Publication Date
CN116168097A true CN116168097A (en) 2023-05-26

Family

ID=86420748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211364596.4A Pending CN116168097A (en) 2022-11-02 2022-11-02 Method, device, equipment and medium for constructing CBCT delineation model and delineation CBCT image

Country Status (1)

Country Link
CN (1) CN116168097A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117476219A (en) * 2023-12-27 2024-01-30 四川省肿瘤医院 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis
CN117476219B (en) * 2023-12-27 2024-03-12 四川省肿瘤医院 Auxiliary method and auxiliary system for locating CT tomographic images based on big data analysis
CN118941585A (en) * 2024-10-12 2024-11-12 四川大学 A 3D oral hard palate image segmentation method based on multi-directional state space model
CN118941585B (en) * 2024-10-12 2025-01-24 四川大学 A 3D oral hard palate image segmentation method based on multi-directional state space model

Similar Documents

Publication Publication Date Title
CN111862249B (en) Systems and methods for generating canonical imaging data for medical image processing using deep learning
JP6567179B2 (en) Pseudo CT generation from MR data using feature regression model
Oghli et al. Automatic fetal biometry prediction using a novel deep convolutional network architecture
EP2916738B1 (en) Lung, lobe, and fissure imaging systems and methods
US8358819B2 (en) System and methods for image segmentation in N-dimensional space
CN107909622B (en) Model generation method, medical imaging scanning planning method and medical imaging system
JP6626344B2 (en) Image processing apparatus, control method for image processing apparatus, and program
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
WO2018120644A1 (en) Blood vessel extraction method and system
US9098912B2 (en) Method, system and computer readable medium for automatic segmentation of a medical image
JP2018535732A (en) Pseudo CT generation from MR data using tissue parameter estimation
CN111862044A (en) Ultrasound image processing method, apparatus, computer equipment and storage medium
CN108062749B (en) Recognition method, device and electronic device for levator hiatus
CN116168097A (en) Method, device, equipment and medium for constructing CBCT delineation model and delineation CBCT image
CN106462974B (en) Parameter optimization for segmenting images
CN116071401A (en) Method and device for generating virtual CT images based on deep learning
Alam et al. Evaluation of medical image registration techniques based on nature and domain of the transformation
CN116012526B (en) Three-dimensional CT image focus reconstruction method based on two-dimensional image
CN116433976A (en) Image processing method, device, equipment and storage medium
CN115830163A (en) Progressive medical image cross-mode generation method and device based on deterministic guidance of deep learning
CN118537699B (en) A method for fusion and processing of multimodal oral image data
US20190304145A1 (en) Pseudo-ct generation with multi-variable regression of multiple mri scans
CN114881848A (en) Method for converting multi-sequence MR into CT
CN111612762A (en) MRI brain tumor image generation method and system
CN118037615A (en) A tumor segmentation-guided magnetic resonance image translation method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination