CN114558251A - Automatic positioning method and device based on deep learning and radiotherapy equipment
- Publication number: CN114558251A (application CN202210099697.7A)
- Authority: CN (China)
- Prior art keywords: image, automatic positioning, DRR, U-Net model, training
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/1048—Monitoring, verifying, controlling systems and methods
- A61N5/1064—Monitoring, verifying, controlling systems and methods for adjusting radiation treatment in response to monitoring
- A61N5/1069—Target adjustment, e.g. moving the patient support
- A61N5/1075—Monitoring, verifying, controlling systems and methods for testing, calibrating, or quality assurance of the radiation treatment apparatus
- A61N5/1049—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
- A61N2005/1061—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using an x-ray imaging system having a separate imaging source
- A61N2005/1062—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using virtual X-ray images, e.g. digitally reconstructed radiographs [DRR]
- A61N2005/1092—Details
- A61N2005/1097—Means for immobilizing the patient
Abstract
The invention discloses a deep-learning-based automatic positioning method and device, and radiotherapy equipment. The automatic positioning method comprises the following steps. S1: fixing a patient to be treated with radiotherapy on a treatment couch, and acquiring DR images of the corresponding part; S2: inputting the CT image acquired during planning into a trained U-Net model to obtain a segmentation result of the designated part, and reconstructing it to generate a DRR image; S3: inputting the DR image obtained in step S1 and the DRR image obtained in step S2 into a CycleGAN model to obtain a DR image containing only the designated part; S4: registering the DR image containing only the designated part obtained in step S3 with the DRR image to be registered to obtain the positioning deviation; S5: controlling the treatment couch to move according to the positioning deviation obtained in step S4, thereby realizing automatic positioning. The method effectively improves the quality of the images to be registered and the registration accuracy, shortens the prediction time during positioning by means of deep learning, and finally realizes automatic positioning, avoiding manual errors and improving positioning efficiency.
Description
Technical Field
The invention belongs to the technical field of radiotherapy, and particularly relates to an automatic positioning method and device based on deep learning and radiotherapy equipment.
Background
At present, tumor radiotherapy is generally delivered in fractions. At each treatment session, the patient is re-immobilized with thermoplastic masks, vacuum bags and similar devices, aligned with laser lamps, so as to reproduce the posture fixed during the planning CT scan and verified during simulation. However, for various reasons, a certain setup deviation remains, typically ranging from several millimeters to one centimeter, and sometimes reaching several centimeters.
To implement image-guided radiation therapy, the prior art mainly proceeds in two ways: a region of interest (ROI) is selected according to the diagnostic CT and the treatment plan, and 3D/3D image registration is performed against CBCT or other tomographic images scanned during treatment; or MV/kV images are acquired to perform 2D/2D or 2D/3D image registration.
MV image registration is the most difficult case: the image contrast is low and the modality difference is large, so traditional registration algorithms based on image gray-value density fail. In addition, iterative gray-value-based registration is often very time-consuming and gives a poor user experience. Feature-based registration algorithms, in turn, make feature selection and extraction difficult, require manual work by a technician, are time- and labor-consuming, and are prone to human error.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an automatic positioning method and device based on deep learning, and radiotherapy equipment.
To achieve this purpose, the technical solution of the invention is as follows:
In one aspect, the invention discloses an automatic positioning method based on deep learning, comprising the following steps:
S1: fixing a patient to be treated with radiotherapy on a treatment couch, and acquiring DR images of the corresponding part;
S2: inputting the CT image acquired during planning into a trained U-Net model to obtain a segmentation result of the designated part, and reconstructing it to generate a DRR image;
S3: inputting the DR image obtained in step S1 and the DRR image obtained in step S2 into a CycleGAN model to obtain a DR image containing only the designated part;
S4: registering the DR image containing only the designated part obtained in step S3 with the DRR image to be registered to obtain the positioning deviation;
S5: controlling the treatment couch to move according to the positioning deviation obtained in step S4, thereby realizing automatic positioning.
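The five steps can be read as a single pipeline. The following Python sketch shows one possible composition of the stages; all function names and interfaces (segment, make_drr, strip, register) are illustrative assumptions, not APIs defined by the invention.

```python
from typing import Callable
import numpy as np

def auto_position(
    dr: np.ndarray,        # S1: DR image of the patient fixed on the couch
    ct: np.ndarray,        # planning CT volume
    segment: Callable[[np.ndarray], np.ndarray],    # trained U-Net (assumed interface)
    make_drr: Callable[[np.ndarray], np.ndarray],   # DRR reconstruction (see Fig. 3 sketch)
    strip: Callable[[np.ndarray, np.ndarray], np.ndarray],     # CycleGAN generator (assumed)
    register: Callable[[np.ndarray, np.ndarray], np.ndarray],  # 2D/2D registration (assumed)
) -> np.ndarray:
    """Return the positioning deviation that the couch must compensate (S5)."""
    mask = segment(ct)             # S2: segmentation of the designated part
    drr = make_drr(ct * mask)      # S2: DRR reconstructed from the segmented CT
    dr_part = strip(dr, drr)       # S3: DR image containing only the designated part
    return register(dr_part, drr)  # S4: deviation between DR and DRR
```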
On the basis of the above technical solution, the following improvements can further be made:
Preferably, the training steps of the U-Net model are as follows:
T1: collecting CT images of all parts used clinically;
T2: dividing the CT images into a training set, a test set and a verification set according to a specified proportion;
T3: selecting a U-Net model that takes a CT image as input and outputs a segmentation result of the designated part, and training the deep learning model with the data in the training set;
T4: using the verification set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust U-Net model is trained.
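A minimal PyTorch training loop consistent with T1-T4 might look as follows. The loss, optimizer and hyperparameters are illustrative assumptions (the patent does not prescribe them), and `unet` is any torch.nn.Module mapping a 1-channel CT slice to a 1-channel segmentation logit map.

```python
import torch
from torch.utils.data import DataLoader

def train_unet(unet, train_set, val_set, epochs=50, lr=1e-4, device="cuda"):
    unet = unet.to(device)
    opt = torch.optim.Adam(unet.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()   # a Dice loss is a common alternative
    train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=8)
    for epoch in range(epochs):
        unet.train()
        for x, y in train_loader:            # T3: fit on the training set
            opt.zero_grad()
            loss = loss_fn(unet(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
        unet.eval()                          # T4: monitor robustness on held-out data
        with torch.no_grad():
            val = sum(loss_fn(unet(x.to(device)), y.to(device)).item()
                      for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch}: val_loss={val:.4f}")
    return unet
```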
Preferably, the following step is included between T1 and T2: preprocessing the collected CT images.
Preferably, the preprocessing comprises one or more of the following operations: reading the CT image slices and storing them in png format, normalizing the pixel size, and scaling the image.
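As a sketch of these preprocessing operations, assuming volumes stored in nii format (read here with nibabel) and an illustrative intensity window and output size, neither of which is fixed by the patent:

```python
import numpy as np
import nibabel as nib   # for .nii volumes; pydicom would be the DICOM analogue
from PIL import Image

def nii_to_png(nii_path: str, out_prefix: str, size=(512, 512),
               window=(-1000.0, 1000.0)) -> None:
    """Read a .nii CT volume, normalize, rescale, and store each slice as png."""
    vol = nib.load(nii_path).get_fdata()              # shape (H, W, num_slices)
    lo, hi = window
    vol = np.clip((vol - lo) / (hi - lo), 0.0, 1.0)   # normalize HU to [0, 1]
    for k in range(vol.shape[-1]):
        img = Image.fromarray((vol[..., k] * 255).astype(np.uint8))
        img = img.resize(size)                        # scale to a common pixel grid
        img.save(f"{out_prefix}_{k:04d}.png")
```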
Preferably, in S2, reconstructing and generating the DRR image specifically comprises the following:
a DRR reconstruction algorithm simulates an X-ray source penetrating the CT voxels; after attenuation and absorption, the rays are projected onto a detector plane for imaging, yielding a new DRR image.
In another aspect, the invention discloses an automatic positioning device based on deep learning, comprising:
a DR image acquisition module, used for acquiring DR images of the corresponding part after the patient to be treated with radiotherapy is fixed on the treatment couch;
a reconstructed-DRR image generation module, used for inputting the CT image acquired during planning into a trained U-Net model to obtain a segmentation result of the designated part and reconstructing it to generate a DRR image;
a DR image generation module, used for inputting the DR image acquired by the DR image acquisition module and the DRR image generated by the reconstructed-DRR image generation module into a CycleGAN model to obtain a DR image containing only the designated part;
a positioning deviation generation module, used for registering the DR image containing only the designated part generated by the DR image generation module with the DRR image to be registered to obtain the positioning deviation;
an automatic positioning module, used for controlling the treatment couch to move according to the positioning deviation generated by the positioning deviation generation module, thereby realizing automatic positioning.
Preferably, the automatic positioning device further comprises a U-Net model training module, used for training the U-Net model with the following steps:
T1: collecting CT images of all parts used clinically;
T2: dividing the CT images into a training set, a test set and a verification set according to a specified proportion;
T3: selecting a U-Net model that takes a CT image as input and outputs a segmentation result of the designated part, and training the deep learning model with the data in the training set;
T4: using the verification set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust U-Net model is trained.
Preferably, the following step is included between T1 and T2: preprocessing the collected CT images.
Preferably, the preprocessing comprises one or more of the following operations: reading the CT image slices and storing them in png format, normalizing the pixel size, and scaling the image.
In addition, in a further aspect, the invention also discloses radiotherapy equipment which realizes automatic positioning using any one of the above automatic positioning methods, or which comprises any one of the above automatic positioning devices.
In summary, the invention discloses a deep-learning-based automatic positioning method and device, and radiotherapy equipment using them.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a flowchart of an automatic positioning method according to an embodiment of the present invention.
Fig. 2 is a flowchart of training the U-Net model according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of generating a DRR image according to an embodiment of the present invention.
Wherein: 1-simulated light source, 2-CT voxel, 3-DRR plane.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The expression "comprising" an element is an "open" expression which merely means that a corresponding component or step is present and should not be interpreted as excluding additional components or steps.
In some embodiments of the deep-learning-based automatic positioning method, automatic positioning device and radiotherapy equipment, the disclosed method is based on spine registration, because the spine presents distinctive features in the neck, chest and abdomen.
As shown in Fig. 1, the automatic positioning method comprises the following steps:
S1: fixing the neck, chest and abdomen of a patient to be treated with radiotherapy on a treatment couch, and acquiring DR images of the corresponding parts;
S2: inputting the CT image acquired during planning into a trained U-Net model to obtain a spine segmentation result, and reconstructing it to generate a DRR image;
S3: inputting the DR image obtained in step S1 and the DRR image obtained in step S2 into a CycleGAN model to obtain a DR image containing only the spine;
S4: registering the DR image containing only the spine obtained in step S3 with the DRR image to be registered to obtain the positioning deviation;
S5: controlling the treatment couch to move according to the positioning deviation obtained in step S4, thereby realizing automatic positioning.
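Step S4 is a 2D/2D registration between the spine-only DR image and the DRR. One way to realize it is rigid registration with SimpleITK, as sketched below; the mutual-information metric, the optimizer settings and the Euler2DTransform are illustrative choices, not settings mandated by the embodiment.

```python
import SimpleITK as sitk

def positioning_deviation(dr_spine: sitk.Image, drr: sitk.Image):
    """Rigidly register the spine-only DR to the DRR; return (angle, tx, ty)."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(drr, dr_spine, sitk.Euler2DTransform()),
        inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    tx = reg.Execute(sitk.Cast(drr, sitk.sitkFloat32),
                     sitk.Cast(dr_spine, sitk.sitkFloat32))
    # The recovered parameters are the deviation the couch must undo in S5.
    return tx.GetParameters()
```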
To further optimize the implementation effect of the invention, in other embodiments the remaining features are the same, except that, as shown in Fig. 2, the training steps of the U-Net model are as follows:
T1: collecting CT images (cross sections) of the neck, chest and abdomen in nii format;
T2: dividing the data, namely dividing the CT images into a training set, a test set and a verification set at a ratio of 6:3:1;
T3: selecting a U-Net model that takes a CT image as input and outputs a segmentation result of the spine, and training the deep learning model with the data in the training set;
T4: using the verification set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust U-Net model is trained.
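A minimal sketch of the 6:3:1 split in T2, written under the assumption (not spelled out in the patent) that splitting is done at the scan level, so that slices from one scan never leak across sets:

```python
import random

def split_6_3_1(case_ids, seed=42):
    """Shuffle scan identifiers and split 6:3:1 into train / test / verification."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)           # fixed seed for reproducibility
    n_train = round(0.6 * len(ids))
    n_test = round(0.3 * len(ids))
    return (ids[:n_train],                     # training set
            ids[n_train:n_train + n_test],     # test set
            ids[n_train + n_test:])            # verification set
```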
Further, on the basis of the above embodiment, the following step is included between T1 and T2: preprocessing the collected CT images.
The preprocessing comprises one or more of the following operations: reading the CT image slices and storing them in png format, normalizing the pixel size, and scaling the image.
To further optimize the implementation effect of the invention, in other embodiments the remaining features are the same, except that in S2 reconstructing and generating the DRR image specifically comprises the following:
a DRR reconstruction algorithm simulates an X-ray source penetrating the CT voxels; after attenuation and absorption, the rays are projected onto a detector plane for imaging, yielding a new DRR image.
It should be noted that when image registration is performed, the two registered entities should have the same dimensionality, i.e. either 3D-3D registration or 2D-2D registration; for 2D-3D registration, the 3D image first needs to be reduced to two dimensions, after which 2D-2D registration is performed. The DRR reconstruction algorithm is exactly such a reduction of a 3D model to 2D: as shown in Fig. 3, the simulated X-ray source penetrates the 3D-CT voxels and is projected onto the DRR plane to generate the DRR image. The whole process is an optical attenuation process conforming to the optical absorption model, and can be described by the following expression (reconstructed here from the variable definitions, i.e. the standard exponential attenuation law):

$I(s) = I_0 \exp\left(-\int_0^s x(t)\,dt\right)$

wherein: s is the length parameter along the optical projection direction;
I(s) is the optical intensity at distance s;
x(t) is the attenuation coefficient of the optical intensity;
I_0 is the optical intensity entering the CT voxel.
As can be seen from Fig. 3 and the above formula, DRR generation is a process in which simulated rays emitted from a simulated light source 1 pass through the CT voxels 2, are attenuated and absorbed, and are then projected onto the imaging plane and accumulated. Specifically:
1. establish the 3-dimensional voxel matrix of the CT image series, composed of the individual CT voxels;
2. cast a number of rays from the virtual light source through the CT image series, the number of rays matching the number of pixels of the DRR plane 3;
3. find the intersection points of each ray with the CT voxels, and accumulate the electron density values at these points;
4. compute the finite length of each projection line passing through the voxel matrix;
5. multiply the accumulated electron density by the ray length, and display the resulting value as a gray value to obtain the DRR image.
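The five steps can be condensed into a small numerical sketch: nearest-neighbour ray marching from a simulated point source through the CT voxel matrix onto the DRR plane. The geometry (source position, plane location, sampling count) is illustrative only, and a production implementation would use a proper ray-voxel intersection method such as Siddon's algorithm.

```python
import numpy as np

def simple_drr(ct: np.ndarray, src=(256.0, 256.0, -500.0),
               plane_z=200.0, plane_px=256, n_steps=512) -> np.ndarray:
    """Toy DRR: ct is a (Z, Y, X) volume of electron-density values."""
    Z, Y, X = ct.shape                      # step 1: 3D voxel matrix from the CT series
    src = np.asarray(src, dtype=float)      # simulated light source (Fig. 3, item 1)
    drr = np.zeros((plane_px, plane_px))
    for i, y in enumerate(np.linspace(0, Y - 1, plane_px)):   # step 2: one ray per pixel
        for j, x in enumerate(np.linspace(0, X - 1, plane_px)):
            dst = np.array([x, y, plane_z])               # pixel on the DRR plane (item 3)
            ts = np.linspace(0.0, 1.0, n_steps)
            pts = src + ts[:, None] * (dst - src)         # sample points along the ray
            idx = np.rint(pts).astype(int)
            ok = ((0 <= idx[:, 0]) & (idx[:, 0] < X) &
                  (0 <= idx[:, 1]) & (idx[:, 1] < Y) &
                  (0 <= idx[:, 2]) & (idx[:, 2] < Z))
            acc = ct[idx[ok, 2], idx[ok, 1], idx[ok, 0]].sum()  # step 3: accumulate density
            step = np.linalg.norm(dst - src) / n_steps          # step 4: length per sample
            drr[i, j] = acc * step                              # step 5: density x length
    return (255.0 * drr / max(drr.max(), 1e-9)).astype(np.uint8)  # gray-value display
```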
The embodiment of the invention discloses an automatic positioning device based on deep learning, comprising:
a DR image acquisition module, used for acquiring DR images of the corresponding parts after the neck, chest and abdomen of the patient to be treated with radiotherapy are fixed on the treatment couch;
a reconstructed-DRR image generation module, used for inputting the CT image acquired during planning into a trained U-Net model to obtain a spine segmentation result and reconstructing it to generate a DRR image;
a DR image generation module, used for inputting the DR image acquired by the DR image acquisition module and the DRR image generated by the reconstructed-DRR image generation module into a CycleGAN model to obtain a DR image containing only the spine;
a positioning deviation generation module, used for registering the DR image containing only the spine generated by the DR image generation module with the DRR image to be registered to obtain the positioning deviation;
an automatic positioning module, used for controlling the treatment couch to move according to the positioning deviation generated by the positioning deviation generation module, thereby realizing automatic positioning.
Further, on the basis of the above embodiment, the automatic positioning device further comprises a U-Net model training module, used for training the U-Net model with the following steps:
T1: collecting CT images (cross sections) of the neck, chest and abdomen in nii format;
T2: dividing the data, namely dividing the CT images into a training set, a test set and a verification set at a ratio of 6:3:1;
T3: selecting a U-Net model that takes a CT image as input and outputs a segmentation result of the spine, and training the deep learning model with the data in the training set;
T4: using the verification set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust U-Net model is trained.
Further, on the basis of the above embodiment, the following step is included between T1 and T2: preprocessing the collected CT images.
The preprocessing comprises one or more of the following operations: reading the CT image slices and storing them in png format, normalizing the pixel size, and scaling the image.
In addition, the embodiment of the invention also discloses radiotherapy equipment which realizes automatic positioning using the automatic positioning method disclosed in any of the above embodiments, or which comprises the automatic positioning device disclosed in any of the above embodiments.
The invention involves two deep learning models. The first is a U-Net model, trained to automatically segment and extract the vertebral region in the CT image, from which the DRR image to be registered is obtained through the DRR reconstruction algorithm. The second is a CycleGAN model, which removes other redundant information from the DR image while retaining the spine, thereby reducing the interference of other factors in the image with the registration result. Extracting the ROI in this way improves positioning accuracy and reduces positioning time.
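At treatment time the CycleGAN is only used in the forward direction. A minimal inference sketch follows, assuming `g_dr2spine` is the trained DR-to-spine-only generator (a torch.nn.Module) and the usual CycleGAN [-1, 1] input normalization; both are assumptions, since the patent does not fix these details.

```python
import torch

@torch.no_grad()
def strip_to_spine(g_dr2spine: torch.nn.Module, dr: torch.Tensor) -> torch.Tensor:
    """Run the trained generator on one DR image (H, W); return the spine-only image."""
    g_dr2spine.eval()
    x = dr.float().unsqueeze(0).unsqueeze(0)      # (1, 1, H, W) batch for the generator
    x = x / x.max().clamp(min=1e-9) * 2.0 - 1.0   # scale to [-1, 1] (assumed convention)
    y = g_dr2spine(x)
    return (y.squeeze() + 1.0) / 2.0              # back to [0, 1] for registration in S4
```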
The deep-learning-based automatic positioning method and device and the radiotherapy equipment of the invention have the following beneficial effects:
firstly, the U-Net deep learning model automatically extracts the spine region, and the CycleGAN removes the interfering factors from the image while retaining the spine, so that registration can target the ROI specifically, improving the quality of the images to be registered and the registration accuracy;
secondly, the requirements on technicians are low, and the system is simple to use;
thirdly, positioning is fast: with the DRR reconstruction step removed from the online workflow, positioning takes within 5 s, whereas manual positioning requires about 5 minutes;
fourthly, the method can be iterated quickly: hospitals can receive each upgrade online, without long-time training.
This method effectively improves the quality of the images to be registered and the registration accuracy, shortens the prediction time during positioning by means of deep learning, and finally realizes automatic positioning, avoiding manual errors and improving positioning efficiency.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
The above embodiments are merely illustrative of the technical concept and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the content of the present invention and implement the present invention, and not to limit the scope of the present invention, and all equivalent changes or modifications made according to the spirit of the present invention should be covered in the scope of the present invention.
Claims (10)
1. An automatic positioning method based on deep learning, characterized by comprising the following steps:
S1: fixing a patient to be treated with radiotherapy on a treatment couch, and acquiring DR images of the corresponding part;
S2: inputting the CT image acquired during planning into a trained U-Net model to obtain a segmentation result of the designated part, and reconstructing it to generate a DRR image;
S3: inputting the DR image obtained in step S1 and the DRR image obtained in step S2 into a CycleGAN model to obtain a DR image containing only the designated part;
S4: registering the DR image containing only the designated part obtained in step S3 with the DRR image to be registered to obtain the positioning deviation;
S5: controlling the treatment couch to move according to the positioning deviation obtained in step S4, thereby realizing automatic positioning.
2. The automatic positioning method according to claim 1, characterized in that the training steps of the U-Net model are as follows:
T1: collecting CT images of all parts used clinically;
T2: dividing the CT images into a training set, a test set and a verification set according to a specified proportion;
T3: selecting a U-Net model that takes a CT image as input and outputs a segmentation result of the designated part, and training the deep learning model with the data in the training set;
T4: using the verification set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust U-Net model is trained.
3. The automatic positioning method according to claim 2, characterized in that the following step is included between T1 and T2: preprocessing the collected CT images.
4. The automatic positioning method according to claim 3, characterized in that the preprocessing comprises one or more of the following operations: reading the CT image slices and storing them in png format, normalizing the pixel size, and scaling the image.
5. The automatic positioning method according to any one of claims 1 to 4, characterized in that in S2, reconstructing and generating the DRR image specifically comprises the following:
a DRR reconstruction algorithm simulates an X-ray source penetrating the CT voxels; after attenuation and absorption, the rays are projected onto a detector plane for imaging, yielding a new DRR image.
6. An automatic positioning device based on deep learning, characterized by comprising:
a DR image acquisition module, used for acquiring DR images of the corresponding part after the patient to be treated with radiotherapy is fixed on the treatment couch;
a reconstructed-DRR image generation module, used for inputting the CT image acquired during planning into a trained U-Net model to obtain a segmentation result of the designated part and reconstructing it to generate a DRR image;
a DR image generation module, used for inputting the DR image acquired by the DR image acquisition module and the DRR image generated by the reconstructed-DRR image generation module into a CycleGAN model to obtain a DR image containing only the designated part;
a positioning deviation generation module, used for registering the DR image containing only the designated part generated by the DR image generation module with the DRR image to be registered to obtain the positioning deviation;
an automatic positioning module, used for controlling the treatment couch to move according to the positioning deviation generated by the positioning deviation generation module, thereby realizing automatic positioning.
7. The automatic positioning device according to claim 6, characterized by further comprising a U-Net model training module, used for training the U-Net model with the following steps:
T1: collecting CT images of all parts used clinically;
T2: dividing the CT images into a training set, a test set and a verification set according to a specified proportion;
T3: selecting a U-Net model that takes a CT image as input and outputs a segmentation result of the designated part, and training the deep learning model with the data in the training set;
T4: using the verification set and the test set to check the robustness of the U-Net model at different stages, until a sufficiently robust U-Net model is trained.
8. The automatic positioning device according to claim 7, characterized in that the following step is included between T1 and T2: preprocessing the collected CT images.
9. The automatic positioning device according to claim 8, characterized in that the preprocessing comprises one or more of the following operations: reading the CT image slices and storing them in png format, normalizing the pixel size, and scaling the image.
10. Radiotherapy equipment, characterized in that it realizes automatic positioning by the automatic positioning method according to any one of claims 1-5, or in that it comprises the automatic positioning device according to any one of claims 6-9.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210099697.7A | 2022-01-27 | 2022-01-27 | Automatic positioning method and device based on deep learning and radiotherapy equipment
Publications (1)

Publication Number | Publication Date
---|---
CN114558251A (en) | 2022-05-31

Family ID: 81714356
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210099697.7A | Automatic positioning method and device based on deep learning and radiotherapy equipment (pending) | 2022-01-27 | 2022-01-27

Country Status (1)

Country | Link
---|---
CN | CN114558251A (en)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112771581A (en) * | 2018-07-30 | 2021-05-07 | 纪念斯隆凯特琳癌症中心 | Multi-modal, multi-resolution deep learning neural network for segmentation, outcome prediction and longitudinal response monitoring for immunotherapy and radiotherapy |
WO2020132958A1 (en) * | 2018-12-26 | 2020-07-02 | 西安大医集团股份有限公司 | Positioning method and apparatus, and radiotherapy system |
CN110582328A (en) * | 2019-07-22 | 2019-12-17 | 北京市肿瘤防治研究所 | Radiotherapy emergent beam monitoring method and system |
US11077320B1 (en) * | 2020-02-07 | 2021-08-03 | Elekta, Inc. | Adversarial prediction of radiotherapy treatment plans |
CN111325749A (en) * | 2020-02-17 | 2020-06-23 | 东北林业大学 | Fundus blood vessel image generation method with hemorrhage disease based on generation countermeasure network |
CN112348857A (en) * | 2020-11-06 | 2021-02-09 | 苏州雷泰医疗科技有限公司 | Radiotherapy positioning offset calculation method and system based on deep learning |
CN113041516A (en) * | 2021-03-25 | 2021-06-29 | 中国科学院近代物理研究所 | Method, system and storage medium for guiding positioning of three-dimensional image |
CN113077471A (en) * | 2021-03-26 | 2021-07-06 | 南京邮电大学 | Medical image segmentation method based on U-shaped network |
CN113706409A (en) * | 2021-08-18 | 2021-11-26 | 苏州雷泰医疗科技有限公司 | CBCT image enhancement method and device based on artificial intelligence and storage medium |
CN113850169A (en) * | 2021-09-17 | 2021-12-28 | 西北工业大学 | Face attribute migration method based on image segmentation and generation of confrontation network |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115300811A (en) * | 2022-08-08 | 2022-11-08 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Dose distribution determination method and device based on machine learning |
CN115300811B (en) * | 2022-08-08 | 2024-01-05 | 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) | Dose distribution determining method and device based on machine learning |
CN118403298A (en) * | 2024-04-29 | 2024-07-30 | 复旦大学附属眼耳鼻喉科医院 | Head and neck tumor radiotherapy positioning adjustment method, system, medium, product and equipment |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination