
CN110675357A - Training and image optimization method, device and equipment of image optimization network - Google Patents


Info

Publication number
CN110675357A
CN110675357A
Authority
CN
China
Prior art keywords
image
network
subnetwork
quality level
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910940690.1A
Other languages
Chinese (zh)
Inventor
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Zhihe Medical Technology Co ltd
Original Assignee
Neusoft Medical Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Medical Systems Co Ltd filed Critical Neusoft Medical Systems Co Ltd
Priority to CN201910940690.1A priority Critical patent/CN110675357A/en
Publication of CN110675357A publication Critical patent/CN110675357A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Nuclear Medicine (AREA)

Abstract

The disclosure relates to a training method, an image optimization method, an apparatus, and a device for an image optimization network. The training method of the image optimization network comprises: inputting a sample image of a first quality level from a pre-obtained pair of comparison sample images into a generation sub-network; transforming the sample image of the first quality level with the generation sub-network to generate a transformed image; randomly inputting either the transformed image output by the generation sub-network or the corresponding sample image of the second quality level into a discrimination sub-network, which judges the source of the input image and outputs a source discrimination result; and adjusting the parameters of the optimization network by minimizing the network loss of the generation sub-network and the network loss of the discrimination sub-network. The generation sub-network in the trained image optimization network can convert an image of the first quality level into an image whose quality level is close to the second quality level.

Description

Training and image optimization method, device and equipment of image optimization network
Technical Field
The present disclosure relates to the technical field of medical devices, and in particular, to a method, an apparatus, and a device for training an image optimization network and optimizing an image.
Background
PET/CT is an imaging device that combines PET and CT imaging. With the environment of the PET/CT equipment held constant, the quality of a PET/CT image, and in particular of the PET image, is positively correlated with the scanning time and with the dose of drug injected into the patient: the longer the scanning time and the larger the injected dose, the higher the quality of the scanned PET image. However, long scans and high injected doses cause inconvenience and a certain degree of harm to patients.
How to obtain high-quality PET/CT images with a small injected dose and a fast scan is therefore a pressing problem.
Disclosure of Invention
The present disclosure provides a training scheme and an image optimization scheme for an image optimization network. Specifically, the present disclosure is realized by the following technical solutions:
In a first aspect, a method for training an image optimization network is provided, where the image optimization network includes a generation sub-network and a discrimination sub-network, and the method includes: inputting a sample image of a first quality level from a pre-obtained set of comparison sample images into the generation sub-network, where the comparison sample images comprise two sample images obtained by performing PET/CT scans on the same part of the same subject, and the quality classification information corresponding to the two sample images is the first quality level and a second quality level, respectively; transforming the sample image of the first quality level with the generation sub-network to generate a transformed image; randomly inputting the transformed image output by the generation sub-network or the corresponding sample image of the second quality level into the discrimination sub-network, which judges the source of the input image and outputs a source discrimination result; and adjusting the parameters of the optimization network by minimizing the network loss of the generation sub-network and the network loss of the discrimination sub-network, where the network loss of the generation sub-network includes the loss of the transformed image being correctly discriminated by the discrimination sub-network and the difference between the transformed image and the corresponding sample image of the second quality level.
Optionally, the image optimization network further comprises a classification sub-network pre-trained with the comparison images; the method further comprises: obtaining features of the transformed image and of the corresponding sample image of the second quality level using the classification sub-network; and the minimizing of the network loss of the generation sub-network and the network loss of the discrimination sub-network further comprises: minimizing the difference between the features of the transformed image and the features of the corresponding sample image of the second quality level.
Optionally, the method further comprises: augmenting the acquired sets of comparison sample images, including performing rotation and/or symmetry operations on the comparison sample images; and normalizing the augmented comparison sample images.
Optionally, acquiring the sets of comparison sample images comprises: obtaining two sample images corresponding to the first quality level and the second quality level, respectively, by scanning at a first dose and a second dose, respectively; and/or obtaining two sample images corresponding to the first quality level and the second quality level, respectively, by scanning for a first set time and a second set time, respectively.
Optionally, the method further comprises: transforming the sample image of the first quality level in the comparison sample images with the generation sub-network, comparing the generated transformed image with the corresponding sample image of the second quality level, and stopping the training of the image optimization network when the difference is smaller than a set threshold.
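The optional stopping criterion above can be sketched as follows. This is only a minimal illustration: the patent does not specify the difference measure, so mean squared difference is assumed, and the function name is hypothetical.

```python
import numpy as np

def should_stop(transformed, reference, threshold=1e-3):
    """Hypothetical stopping check: stop training once the generated
    transformed image differs from the corresponding sample image of the
    second quality level by less than a set threshold (assumed here to be
    measured as the mean squared difference)."""
    diff = np.mean((np.asarray(transformed) - np.asarray(reference)) ** 2)
    return bool(diff < threshold)
```

In practice this check would be evaluated on a held-out pair of comparison sample images rather than on the training data itself.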
In a second aspect, there is provided an image optimization method, the method comprising: acquiring a PET/CT scanning image; inputting the scanned image into an image optimization network to obtain an optimized scanned image, wherein the image optimization network is obtained by training through the image optimization network training method according to any embodiment of the disclosure.
In a third aspect, a training apparatus for an image optimization network is provided, the image optimization network including a generation sub-network and a discrimination sub-network, the apparatus including: an input unit, configured to input a sample image of a first quality level from a pre-obtained set of comparison sample images into the generation sub-network, where the comparison sample images include two sample images obtained by performing PET/CT scans on the same part of the same subject, and the quality classification information corresponding to the two sample images is the first quality level and a second quality level, respectively; a transformation unit, configured to transform the sample image of the first quality level using the generation sub-network to generate a transformed image; a discrimination unit, configured to randomly input the transformed image output by the generation sub-network or the corresponding sample image of the second quality level into the discrimination sub-network, which judges the source of the input image and outputs a source discrimination result; and an adjusting unit, configured to adjust the parameters of the optimization network by minimizing the network loss of the generation sub-network and the network loss of the discrimination sub-network, where the network loss of the generation sub-network includes the loss of the transformed image being correctly discriminated by the discrimination sub-network and the difference between the transformed image and the corresponding sample image of the second quality level.
In a fourth aspect, an image optimization apparatus is provided, the apparatus including: an acquisition unit, configured to acquire a PET/CT scan image; and an optimization unit, configured to input the scan image into an image optimization network to obtain an optimized scan image, where the image optimization network is trained using the training method of the image optimization network according to any embodiment of the present disclosure.
In a fifth aspect, there is provided a training apparatus for an image optimization network, the apparatus comprising a memory for storing computer instructions executable on a processor, and the processor for implementing a training method for an image optimization network according to any embodiment of the present disclosure when executing the computer instructions.
In a sixth aspect, an image optimization apparatus is provided, the apparatus comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the image optimization method according to any one of the embodiments of the present disclosure when executing the computer instructions.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In the embodiments of the present disclosure, a PET/CT device is used to acquire real comparison sample images for training an image optimization network, where the two sample images in each set correspond to a first quality level and a second quality level, respectively. By optimizing the network loss of the generation sub-network, which transforms the sample images of the first quality level, together with the network loss of the discrimination sub-network, which discriminates the source of the input image, the generation sub-network can convert images of the first quality level into images whose quality level is close to the second quality level.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments consistent with the disclosure and together with the disclosure, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of an application scenario of a PET/CT apparatus according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating a method of training an image optimization network according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an architecture of an image optimization network according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a structure for generating subnetworks, according to an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a discrimination sub-network according to an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating another image optimization network according to an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram of a classification subnetwork as shown in an exemplary embodiment of the present disclosure;
FIGS. 8A, 8B, and 8C show, respectively, a sample image of a first quality level, the corresponding transformed image, and the corresponding sample image of a second quality level used to test the image optimization network;
FIG. 9 is a schematic diagram of a training apparatus for an image optimization network according to an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a training device for an image optimization network according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 shows a schematic view of an application scenario of a PET/CT device. It will be appreciated by those skilled in the art that the following description of a PET device and a CT device in a PET/CT device also applies to discrete PET devices and CT devices.
The PET/CT apparatus shown in fig. 1 includes a CT imaging apparatus 11 and a PET imaging apparatus 12, and can simultaneously perform a CT scan and a PET scan on the same object to be examined, or can perform the CT scan and the PET scan separately.
The CT imaging apparatus 11 includes a gantry 13, and an X-ray source 15 and a detector array 17 disposed opposite to the X-ray source 15 are disposed on the gantry 13. The X-ray source 15 may emit X-rays toward the object 18 to be inspected. The detector array 17 detects the attenuated X-rays that have passed through the object 18 to be examined and generates an electrical signal that represents the intensity of the detected X-rays. The CT imaging apparatus 11 converts the electrical signals into projection data representing the X-ray attenuation, and reconstructs a CT tomographic image from the projection data. During a scan, the gantry 13 and the components mounted thereon, such as the X-ray source 15 and the detector array 17, rotate about a center of rotation. The stage 14 moves at least a portion of the object 18 into the gantry opening 16.
The PET imaging device 12 includes PET detectors (not shown) for detecting gamma photons and converting the light signals into electrical signals. Positrons emitted by the radionuclide annihilate inside the object 18 to be examined, producing pairs of gamma photons traveling in nearly opposite directions. By processing these events, the distribution of the tracer in the body can be inferred, which reflects physiological, pathological, biochemical, and metabolic changes of human tissue at the molecular level; PET is therefore particularly suited to studying the physiological functions of the human body.
The CT imaging apparatus 11 exploits the fact that different tissues of the human body absorb X-rays differently: it performs reconstruction calculations on the X-rays attenuated by the body to obtain an image matrix. Because CT imaging has high density resolution for tissue, it is well suited to depicting organ structure.
With the environment of the PET/CT equipment held constant, the quality of the PET/CT image, and in particular of the PET image, is positively correlated with the scanning time and the injected drug dose: the longer the scanning time and the larger the injected dose, the higher the quality of the scanned PET/CT image. Meanwhile, PET/CT requires the patient to hold a static position during scanning; most patients undergoing PET/CT scans are tumor or brain-disease patients, for whom it is difficult to maintain a static position for a long time (usually 10 to 20 minutes).
To obtain high-quality PET/CT images with a small injected dose and a fast scan, the present disclosure provides a training method for an image optimization network. As shown in FIG. 2, the method includes steps 201-204.
In step 201, a sample image of a first quality level from a pre-obtained set of control sample images is input to the generation sub-network. The control sample images comprise two sample images obtained by performing PET/CT scans on the same part of the same subject, and the quality classification information corresponding to the two sample images is the first quality level and a second quality level, respectively.
in embodiments of the present disclosure, an image optimization network may be trained using a sample image set acquired by a PET/CT device. The sample image set includes a plurality of sets of control sample images, and each set of control sample images includes two sample images corresponding to a first quality level and a second quality level, where the quality levels may refer to, for example, sharpness, resolution, and the like of the PET/CT images.
In some embodiments, the first quality level may be defined to be lower than the second quality level, and the sample image of the first quality level may be referred to as a low quality sample image and the sample image of the second quality level may be referred to as a high quality sample image.
During scanning with a PET/CT device, a scan image obtained with a low injected drug dose may be defined as a low-quality image, and correspondingly, a scan image obtained with a high injected dose as a high-quality image. The specific values of the low and high doses may be set according to the requirements on image quality. Likewise, a scan image obtained with a fast scan may be defined as a low-quality image and one obtained with a slow scan as a high-quality image; similarly, the specific scanning speeds corresponding to fast and slow may be set according to the requirements on image quality.
In some embodiments, two sample images corresponding to a first quality level (low quality) and a second quality level (high quality), respectively, may be obtained by scanning at a first dose (low dose) and a second dose (high dose), respectively; and/or by scanning at a first set time (corresponding to fast) and a second set time (corresponding to slow), respectively, obtaining two sample images corresponding to a first quality level (low quality) and a second quality level (high quality), respectively.
In one example, 200 sets of low-dose scan images and 200 sets of corresponding high-dose scan images, together with 800 sets of fast-scan images and 800 sets of corresponding slow-scan images, may be prepared to form the sample image set. The sample image set then includes 1000 sets of control sample images, i.e., 1000 high-quality sample images and 1000 corresponding low-quality sample images.
In some embodiments, the sets of control sample images may be augmented. Data augmentation is applied with the same operation to the sample image of the first quality level and the corresponding sample image of the second quality level in each set of control sample images. The augmentation operations include, for example, rotation and/or symmetry operations, which may be applied randomly.
Taking the above 1000 groups of control sample images as an example, after the augmentation, 3000 groups of control sample images can be obtained, each group also including a first quality level sample image and a second quality level sample image.
Meanwhile, the augmented control sample images may be normalized.
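The paired augmentation and normalization steps above can be sketched as follows. This is a minimal numpy illustration assuming single-channel 2-D images; the function names are hypothetical, and the key point is that the identical operation is applied to both images of a control pair so they remain aligned.

```python
import numpy as np

def augment_pair(low, high, k, flip):
    """Apply the SAME rotation (k quarter-turns) and optional mirror
    symmetry to both images of a control pair, keeping them aligned."""
    low, high = np.rot90(low, k), np.rot90(high, k)
    if flip:
        low, high = np.fliplr(low), np.fliplr(high)
    return low, high

def normalize(img):
    """Scale pixel values to the range [0, 1] (one common normalization;
    the patent does not specify the exact scheme)."""
    img = np.asarray(img, dtype=np.float64)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
```

Augmenting each of the 1000 pairs twice in this way would yield the 3000 pairs mentioned below.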
In step 202, the sample image of the first quality level is transformed with the generation sub-network to generate a transformed image.
In the embodiments of the present disclosure, the generation sub-network transforms the lower-quality sample image of the input control sample images to generate a high-quality transformed image. For example, for an input pair of sample images of a first and a second quality level, where the first quality level is lower than the second, the generation sub-network transforms the sample image of the first quality level to generate a transformed image of improved quality; an input sample image of the second quality level is not transformed but is output directly. That is, each transformed image output by the generation sub-network is generated from the sample image of the first quality level in a set of control sample images, and the sample image of the second quality level in that set corresponds to the transformed image. Thus, for every transformed image output by the generation sub-network, the corresponding sample image of the second quality level can be determined.
In step 203, the transformed image output by the generation sub-network and the corresponding sample image of the second quality level are randomly input to the discrimination sub-network, and the discrimination sub-network discriminates the source of the input image and outputs a source discrimination result.
In the training process of the image optimization network, either the transformed image generated by the generation sub-network or the corresponding sample image of the second quality level (i.e., the real sample image) is randomly passed to the discrimination sub-network, which discriminates the source of the input image, i.e., determines whether the input was generated by the generation sub-network or is a real sample. If the input image was generated by the generation sub-network, the discrimination sub-network outputs false (0); if the input image is a real sample, it outputs true (1).
In step 204, parameters of the optimized network are adjusted by minimizing network losses of the generating subnetwork and the discriminating subnetwork.
The network loss of the generation subnetwork comprises the loss of the generated transformed image correctly discriminated by the discrimination subnetwork, and the loss between the transformed image generated by the generation subnetwork and the corresponding sample image of the second quality class; the network loss of the discrimination sub-network includes a loss of discrimination of the source of the input image.
Let the input to the generation sub-network G be z and its output be G(z). The input to the discrimination sub-network D is either G(z) or a real sample x_real. The discrimination sub-network D is trained so that D(G(z)) is 0 and D(x_real) is 1 as far as possible, while the generation sub-network G is trained so that D(G(z)) is 1 as far as possible. That is, the training goal of the generation sub-network is to prevent the discrimination sub-network from correctly determining whether the input image comes from the generation sub-network or from a real sample image, while the training goal of the discrimination sub-network is to correctly determine the source of the input image. The parameters of the optimization network are adjusted by minimizing the network loss of the generation sub-network and the network loss of the discrimination sub-network. By adjusting the parameters of the generation sub-network, the transformed images it generates move ever closer to the corresponding sample images of the second quality level (the real sample images), making it increasingly difficult for the discrimination sub-network to judge the source of the input image; meanwhile, adjusting the parameters of the discrimination sub-network makes its ability to discriminate the source of the input image stronger and stronger. When the number of iterations reaches a set value or the algorithm converges, the transformed images generated by the generation sub-network have the same data distribution as the real sample images; at that point the image optimization network can transform a sample image of the first quality level into an output image of the second quality level, i.e., transform a low-quality image into a high-quality one.
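The alternating objectives just described can be sketched with the standard cross-entropy losses commonly used for this kind of generative-adversarial setup. The formulas below are the usual textbook choice, not ones stated in the patent, and the function names are hypothetical.

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-12):
    """Discrimination sub-network loss: push D(x_real) toward 1 and
    D(G(z)) toward 0 (standard binary cross-entropy form)."""
    d_real, d_fake = np.asarray(d_real), np.asarray(d_fake)
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

def g_adv_loss(d_fake, eps=1e-12):
    """Adversarial part of the generation sub-network loss: push D(G(z))
    toward 1, i.e. make transformed images pass for real samples."""
    return float(-np.mean(np.log(np.asarray(d_fake) + eps)))
```

Training alternates between the two: one step lowers `d_loss` by updating the discrimination sub-network, the next lowers `g_adv_loss` (plus the image and feature losses described later) by updating the generation sub-network.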
In one example, a batch training method may be used, with 16 sample images per batch, an initial learning rate of 10^-4, and 2000 training rounds.
In the embodiments of the present disclosure, a PET/CT device is used to acquire real comparison sample images for training an image optimization network, where the two sample images in each set correspond to a first quality level and a second quality level, respectively. By simultaneously optimizing the network loss of the generation sub-network, which transforms the sample images of the first quality level, and the network loss of the discrimination sub-network, which discriminates the source of the input image, the generation sub-network can transform an image of the first quality level into an image whose quality level is close to the second quality level.
Fig. 3 shows a schematic structural diagram of an image optimization network according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the image optimization network includes a generation subnetwork 31 and a discrimination subnetwork 32.
During the training of the image optimization network shown in FIG. 3, the sample image of the first quality level (the low-quality sample image) in each set of control sample images is input to the generation sub-network 31 and transformed to generate a transformed image. The transformed image and the sample image of the second quality level (the high-quality sample image) of the same set are randomly input to the discrimination sub-network 32, which discriminates the source of the input image, as described in step 203.
In embodiments of the present disclosure, an image optimization network may be built based on a TensorFlow framework.
The structure of the generation sub-network is shown in FIG. 4. The generation sub-network uses a stack of residual modules 42 as its backbone; for example, 8 residual modules may be used. To speed up model convergence, a batch normalization (BN) layer is added after each convolutional layer. Because the network contains no fully connected module, it can in principle accept images of any resolution as input; and because it contains no up-sampling or down-sampling layers, the resolution of the output is exactly the same as that of the input. To keep the data size constant through each convolutional layer, SAME padding is applied to every convolutional layer, and the activation layers use the rectified linear unit (ReLU). The input to the generation sub-network is an image of the first quality level, i.e., a low-quality image, and the output is the corresponding image of the second quality level, i.e., a high-quality image. Ideally, the discrimination sub-network should have no way to correctly determine whether the data comes from the generation sub-network or from a real image.
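A minimal numpy sketch of one residual module with SAME padding and ReLU, as described above. Batch normalization is omitted for brevity, single-channel 2-D images are assumed, and the helper names are hypothetical.

```python
import numpy as np

def conv2d_same(x, kernel):
    """3x3 convolution with SAME padding, so the output resolution
    equals the input resolution (no up- or down-sampling)."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def residual_block(x, k1, k2):
    """Conv -> ReLU -> Conv plus the skip connection; batch
    normalization omitted. Resolution is unchanged throughout."""
    h = np.maximum(conv2d_same(x, k1), 0.0)  # ReLU activation
    return x + conv2d_same(h, k2)            # residual (skip) connection
```

In the actual network, about 8 such modules would be stacked, each convolution having many channels; in practice this would be built with a deep-learning framework (the patent mentions TensorFlow) rather than hand-written loops.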
The structure of the discrimination sub-network is shown in FIG. 5. It includes several modules 52 consisting of convolutional layers and batch normalization layers, followed by fully connected layers. The last layer of the network may be a sigmoid activation function, whose output is a number in the interval (0, 1) representing the probability that the input is real data. The input to the network is a high-quality image, which may be either a transformed image generated by the generation sub-network or a real image, and the network outputs the classification result. Ideally, the discrimination sub-network should correctly judge the source of any input image.
In some embodiments, the image optimization network further includes a classification sub-network that is pre-trained with the control images. The classification sub-network is used to obtain the features of the transformed image and of the corresponding sample image of the second quality level. When the parameters of the image optimization network are adjusted, the optimization is further driven by minimizing the difference between the features of the transformed image and the features of the corresponding sample image of the second quality level. Note that, because the classification sub-network is trained in advance, its parameters are not adjusted during the optimization of the image optimization network; only the parameters of the generation sub-network and the discrimination sub-network are adjusted.
Fig. 6 shows a schematic structural diagram of an image optimization network according to an exemplary embodiment of the present disclosure. As shown in fig. 6, the image optimization network includes a generation subnetwork 61, a discrimination subnetwork 62, and a classification subnetwork 63.
The classification sub-network 63 is trained in advance using the same sample image set as the image optimization network; for example, the 3000 sets of control sample images described above (3000 high-quality sample images and 3000 low-quality sample images) are used as input, and the classification sub-network 63 determines the quality level of each input image and outputs a result of 0 or 1, indicating a low-quality or high-quality image. Through this training, the classification sub-network 63 learns the features of input PET/CT images, including the input transformed image and the corresponding sample image of the second quality level (i.e., the real high-quality sample image). During the training of the image optimization network shown in FIG. 6, the sample image of the first quality level (the low-quality sample image) in each set of control sample images is input to the generation sub-network 61 and transformed to generate a transformed image. The transformed image and the sample image of the second quality level (the high-quality sample image) of the same set are randomly input to the discrimination sub-network 62, which discriminates the source of the input image, as described in step 203. Meanwhile, the transformed image and the corresponding sample image of the second quality level are input to the pre-trained classification sub-network 63, which outputs the features of the transformed image and the features of the corresponding sample image of the second quality level.
The network parameters of the generation subnetwork 61 and the discrimination subnetwork 62 are adjusted by simultaneously minimizing the loss incurred when the transformed image generated by the generation subnetwork 61 is correctly discriminated by the discrimination subnetwork 62, the difference between the transformed image and the corresponding sample image of the second quality level, and the network loss of the discrimination subnetwork 62. When the number of iterations reaches a set number or the algorithm converges, the generation subnetwork 61 can transform a sample image of the first quality level into a sample image of the second quality level, i.e., transform a low-quality sample image into a high-quality sample image; moreover, because a feature loss is added to the loss function, the transformation retains more of the texture features of the image. In PET/CT diagnosis, the size of a lesion site can be determined from image texture information, so texture information is important for disease diagnosis.
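The stopping rule above (a set iteration count or convergence) can be illustrated with a toy alternating update. The quadratic "losses" below are placeholders, not the actual GAN losses; only the schedule is the point of the sketch.

```python
def train(g_param, d_param, lr=0.1, max_iters=1000, tol=1e-8):
    """Toy sketch: adjust generator and discriminator parameters by
    gradient descent until the iteration cap is reached or both
    gradients fall below a convergence tolerance."""
    iters = 0
    for iters in range(1, max_iters + 1):
        g_grad = 2.0 * (g_param - 3.0)  # placeholder generator-loss gradient
        d_grad = 2.0 * (d_param - 1.0)  # placeholder discriminator-loss gradient
        g_param -= lr * g_grad
        d_param -= lr * d_grad
        if abs(g_grad) < tol and abs(d_grad) < tol:
            break  # algorithm has converged before the iteration cap
    return g_param, d_param, iters
```

In a real GAN the two gradients would come from the generator and discriminator losses computed on each batch of control sample images, but the "stop on iteration count or convergence" logic is the same.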
The structure of the classification subnetwork is shown in fig. 7. It contains several convolutional layers, for example 16 convolutional layers, with one max-pooling layer after every two convolutional layers. The output of the network is a binary result indicating a high-quality or a low-quality image. Because the network is trained in advance, it can learn the features of PET/CT images, and the learned features can be used to calculate the network loss and thereby participate in the optimization of the network parameters of the image optimization network.
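The spatial dimensions implied by that structure can be traced without any deep-learning framework. The sketch below assumes 3x3 same-padded convolutions and 2x2 max pooling, which are common choices but are not stated in the disclosure.

```python
def output_shape(h, w, n_conv=16, kernel=3, pad=1, pool_every=2):
    """Trace spatial dimensions through n_conv same-padded convolutional
    layers with a 2x2 max-pooling layer after every two of them."""
    for layer in range(1, n_conv + 1):
        h = h + 2 * pad - kernel + 1  # same-padded conv: size unchanged
        w = w + 2 * pad - kernel + 1
        if layer % pool_every == 0:
            h, w = h // 2, w // 2     # 2x2 max pooling halves each dim
    return h, w
```

With 16 convolutional layers there are 8 pooling stages, so under these assumptions a 256x256 input collapses to a 1x1 map before the final binary decision.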
In an embodiment of the present disclosure, the loss function g_loss of the generation subnetwork is:

g_loss = mse_loss + g_gen_loss + feature_loss    (1)

wherein mse_loss is the mean square error (MSE) between the transformed image output by the generation subnetwork and the corresponding image of the second quality level; minimizing mse_loss yields an image with a high peak signal-to-noise ratio (PSNR). g_gen_loss is the loss incurred when the transformed image generated by the generation subnetwork is correctly judged by the discrimination subnetwork. feature_loss is the difference between the features of the transformed image generated by the generation subnetwork and the features of the corresponding image of the second quality level.
The specific formulas of the three loss terms are as follows:

mse_loss = (1 / (W·H)) · Σ_{x=1..W} Σ_{y=1..H} (HQ_{x,y} − G_θG(LQ)_{x,y})²    (2)

wherein W and H respectively represent the width and height of the input image, HQ represents the high-quality image, LQ represents the low-quality image, and G_θG represents the generation subnetwork.

g_gen_loss = −log D_θD(G_θG(LQ))    (3)

wherein D_θD represents the discrimination subnetwork.

feature_loss = (1 / (W_{i,j}·H_{i,j})) · Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} (φ_{i,j}(HQ)_{x,y} − φ_{i,j}(G_θG(LQ))_{x,y})²    (4)

wherein φ_{i,j} represents the feature map after the j-th convolutional layer and before the i-th max-pooling layer, and W_{i,j}, H_{i,j} represent the dimensions of that feature map.
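The three loss terms can be combined as in g_loss above. The sketch below assumes the adversarial term is the usual −log of the discriminator's probability on the generated image; the discriminator probability and the feature maps are passed in as precomputed values rather than computed by real subnetworks.

```python
import numpy as np

def g_loss(hq, transformed, d_prob_fake, feats_hq, feats_fake, eps=1e-12):
    """Sketch of g_loss = mse_loss + g_gen_loss + feature_loss."""
    mse_loss = float(np.mean((hq - transformed) ** 2))       # pixel MSE term
    g_gen_loss = float(-np.log(d_prob_fake + eps))           # assumed adversarial form
    feature_loss = float(np.mean((feats_hq - feats_fake) ** 2))  # classifier-feature term
    return mse_loss + g_gen_loss + feature_loss
```

A perfect reconstruction that also fools the discriminator (probability 1 of being real) drives this loss to essentially zero.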
In some embodiments, the image optimization network may also be tested as follows: transform the sample image of the first quality level in a control sample image with the generation subnetwork, compare the generated transformed image with the corresponding sample image of the second quality level, and stop training the image optimization network when the difference is smaller than a set threshold. The specific value of the set threshold can be set according to the requirement on image quality.
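That testing criterion can be sketched directly. Mean absolute difference is assumed here as the difference measure, since the disclosure does not fix one.

```python
import numpy as np

def should_stop(transformed, hq_reference, threshold):
    """Stop training once the difference between the generated transformed
    image and the corresponding second-quality-level sample image is
    smaller than the set threshold."""
    diff = float(np.mean(np.abs(transformed - hq_reference)))
    return diff < threshold
```

The threshold would be chosen according to the image-quality requirement, as noted above.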
FIG. 8A shows a sample image of a first quality level in a control sample image; FIG. 8B shows the transformed image generated by transforming the image of FIG. 8A using the generation subnetwork of the trained image optimization network; FIG. 8C shows the corresponding sample image of the second quality level. As can be seen from figs. 8A to 8C, the high-quality image generated by the trained image optimization network achieves clarity and resolution similar to those of the high-quality images in the sample image set.
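The similarity in clarity observed in figs. 8A to 8C can be quantified with the peak signal-to-noise ratio (PSNR) mentioned earlier. A minimal sketch, assuming an 8-bit image range:

```python
import numpy as np

def psnr(img, ref, max_val=255.0):
    """Peak signal-to-noise ratio; higher means the generated image is
    closer to the reference second-quality-level image."""
    mse = float(np.mean((np.asarray(img, dtype=np.float64)
                         - np.asarray(ref, dtype=np.float64)) ** 2))
    if mse == 0.0:
        return float("inf")        # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Minimizing mse_loss during training directly raises this value, which is why the generated and reference images appear comparably sharp.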
The present disclosure also provides an image optimization method, including: acquiring a PET/CT scan image; and inputting the scan image into an image optimization network to obtain an optimized scan image, wherein the image optimization network is obtained by training according to the training method of the image optimization network in any embodiment of the disclosure.
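The inference path of this image optimization method can be sketched as follows. `generator` is any callable standing in for the trained generation subnetwork (a hypothetical interface), and min-max normalization is an assumption; the disclosure only states that a normalization operation is applied to the sample images.

```python
import numpy as np

def optimize_scan(scan, generator):
    """Normalize a PET/CT scan image, run it through the trained
    generation subnetwork, and map the result back to the original range."""
    lo, hi = float(scan.min()), float(scan.max())
    if hi == lo:
        return scan.astype(float).copy()   # flat image: nothing to normalize
    norm = (scan - lo) / (hi - lo)         # assumed min-max normalization
    out = generator(norm)
    return out * (hi - lo) + lo            # restore original intensity range
```

With an identity generator the scan passes through unchanged, which makes the normalization round trip easy to verify.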
The execution order of the steps in the illustrated flows above is not limited to the order shown in the flowcharts. Furthermore, each step may be implemented in software, hardware, or a combination thereof; for example, a person skilled in the art may implement a step in the form of software code, as computer-executable instructions capable of realizing the corresponding logical function of the step. When implemented in software, the executable instructions may be stored in a memory and executed by a processor in the system.
Fig. 9 is a schematic diagram illustrating a training apparatus of an image optimization network according to at least one embodiment of the present disclosure. As shown in fig. 9, the apparatus includes: an input unit 901, configured to input a sample image of a first quality level in a comparison sample image obtained in advance into the generation subnetwork, where the comparison sample image comprises two sample images obtained by performing a PET/CT scan on the same part of the same subject, and the quality classification information corresponding to the two sample images is a first quality level and a second quality level, respectively; a transforming unit 902, configured to transform the sample image of the first quality level using the generation subnetwork to generate a transformed image; a determining unit 903, configured to randomly input the transformed image output by the generation subnetwork and the corresponding sample image of the second quality level into the discrimination subnetwork, which determines the source of the input image and outputs a source determination result; and an adjusting unit 904, configured to adjust parameters of the image optimization network by minimizing a network loss of the generation subnetwork and a network loss of the discrimination subnetwork, where the network loss of the generation subnetwork includes a loss of the transformed image being correctly discriminated by the discrimination subnetwork and a difference between the transformed image and the corresponding sample image of the second quality level.
At least one embodiment of the present disclosure also provides an image optimization apparatus, including: the acquisition unit is used for acquiring a PET/CT scanning image; and the optimization unit is used for inputting the scanning image into an image optimization network to obtain an optimized scanning image, wherein the image optimization network is obtained by training by using the training method of the image optimization network according to any embodiment of the disclosure.
Referring to fig. 10, a training apparatus for an image optimization network provided for at least one embodiment of the present disclosure includes a memory for storing computer instructions executable on a processor, and the processor is configured to implement a training method for an image optimization network according to any one embodiment of the present disclosure when executing the computer instructions.
At least one embodiment of the present disclosure also provides an image optimization apparatus, which includes a memory for storing computer instructions executable on a processor, and the processor for implementing the image optimization method according to any one of the embodiments of the present disclosure when executing the computer instructions.
In the disclosed embodiments, the computer-readable storage medium may take many forms, for example: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disc (e.g., an optical disc, a DVD, etc.), a similar storage medium, or a combination thereof. In particular, the computer-readable medium may even be paper or another suitable medium upon which the program is printed; the program can then be electronically captured (e.g., by optical scanning), compiled, interpreted, and otherwise processed in a suitable manner, and stored in a computer medium.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (10)

1. A method of training an image optimization network, the image optimization network comprising a generation subnetwork and a discrimination subnetwork, the method comprising:
inputting a sample image of a first quality level in a comparison sample image obtained in advance into the generation subnetwork, wherein the comparison sample image comprises two sample images obtained by performing PET/CT scanning on the same part of the same detected body, and quality classification information corresponding to the two sample images is respectively of the first quality level and a second quality level;
transforming the sample image of the first quality class with the generation subnetwork to generate a transformed image;
randomly inputting the transformed image output by the generation subnetwork and the corresponding sample image of the second quality level into the discrimination subnetwork, the discrimination subnetwork discriminating the source of the input image and outputting a source discrimination result;
adjusting parameters of the image optimization network by minimizing a network loss of the generation subnetwork and a network loss of the discrimination subnetwork, wherein the network loss of the generation subnetwork includes a loss of the transformed image being correctly discriminated by the discrimination subnetwork and a difference between the transformed image and the corresponding sample image of the second quality level.
2. The method of claim 1, wherein the image optimization network further comprises a classification sub-network, the classification sub-network being pre-trained with the control images; the method further comprises the following steps:
obtaining features of the transformed image and an image of a corresponding sample image of a second quality level using the classification sub-network;
said minimizing the network loss of the generating subnetwork and the network loss of the discriminating subnetwork further comprises:
minimizing a difference between features of the transformed image and features of a corresponding sample image of a second quality level.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
augmenting the acquired multiple groups of control sample images, including performing a rotation operation and/or a symmetry operation on the control sample images; and
performing a normalization operation on the augmented control sample images.
4. The method of claim 1, wherein said acquiring a plurality of sets of control sample images comprises:
obtaining two sample images corresponding to the first quality level and the second quality level, respectively, by scanning at a first dose and a second dose, respectively; and/or
obtaining two sample images corresponding to the first quality level and the second quality level, respectively, by scanning for a first set time and a second set time, respectively.
5. The method of claim 1, further comprising:
transforming the sample image of the first quality level in the comparison sample image using the generation subnetwork, comparing the generated transformed image with the corresponding sample image of the second quality level, and stopping training the image optimization network when the difference is smaller than a set threshold.
6. An image optimization method, comprising:
acquiring a PET/CT scanning image;
inputting the scanned image into an image optimization network to obtain an optimized scanned image, wherein the image optimization network is obtained by training with the training method of the image optimization network according to any one of claims 1 to 5.
7. An apparatus for training an image optimization network, the image optimization network including a generation subnetwork and a discrimination subnetwork, the apparatus comprising:
an input unit, configured to input a sample image of a first quality level in a comparison sample image obtained in advance to the generation subnetwork, where the comparison sample image includes two sample images obtained by performing PET/CT scanning on a same part of a same subject, and quality classification information corresponding to the two sample images is the first quality level and the second quality level, respectively;
a transformation unit configured to transform the sample image of the first quality class using the generation sub-network to generate a transformed image;
a judging unit, configured to randomly input the transformed image output by the generation subnetwork and the corresponding sample image of the second quality level into the discrimination subnetwork, which judges the source of the input image and outputs a source judging result;
an adjusting unit, configured to adjust parameters of the image optimization network by minimizing a network loss of the generation subnetwork and a network loss of the discrimination subnetwork, wherein the network loss of the generation subnetwork includes a loss of the transformed image being correctly discriminated by the discrimination subnetwork and a difference between the transformed image and the corresponding sample image of the second quality level.
8. An image optimization apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring a PET/CT scanning image;
an optimization unit, configured to input the scan image into an image optimization network to obtain an optimized scan image, where the image optimization network is obtained by training using the training method of the image optimization network according to any one of claims 1 to 5.
9. Training device for an image optimization network, characterized in that the device comprises a memory for storing computer instructions executable on a processor for implementing the method of any of claims 1 to 5 when executing the computer instructions.
10. An image optimization device, comprising a memory for storing computer instructions executable on a processor, the processor for implementing the method of claim 6 when executing the computer instructions.
CN201910940690.1A 2019-09-30 2019-09-30 Training and image optimization method, device and equipment of image optimization network Pending CN110675357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910940690.1A CN110675357A (en) 2019-09-30 2019-09-30 Training and image optimization method, device and equipment of image optimization network


Publications (1)

Publication Number Publication Date
CN110675357A true CN110675357A (en) 2020-01-10

Family

ID=69080517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910940690.1A Pending CN110675357A (en) 2019-09-30 2019-09-30 Training and image optimization method, device and equipment of image optimization network

Country Status (1)

Country Link
CN (1) CN110675357A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127371A (en) * 2020-03-30 2020-05-08 南京安科医疗科技有限公司 Image enhancement parameter automatic optimization method, storage medium and X-ray scanning device
CN114255268A (en) * 2020-09-24 2022-03-29 武汉Tcl集团工业研究院有限公司 Disparity map processing and deep learning model training method and related equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255769A (en) * 2018-10-25 2019-01-22 厦门美图之家科技有限公司 The training method and training pattern and image enchancing method of image enhancement network
US20190035118A1 (en) * 2017-07-28 2019-01-31 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image conversion
US20190209867A1 (en) * 2017-11-08 2019-07-11 Shanghai United Imaging Healthcare Co., Ltd. System and method for diagnostic and treatment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JELMER M. WOLTERINK ET AL.: "Generative Adversarial Networks for Noise Reduction in Low-Dose CT", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 *
QINGSONG YANG ET AL.: "CT Image Denoising with Perceptive Deep Neural Network", 《ARXIV:1702.07019V1》 *
QINGSONG YANG ET AL.: "Low Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss", 《ARXIV:1708.00961V2》 *


Similar Documents

Publication Publication Date Title
CN109938764B (en) Self-adaptive multi-part scanning imaging method and system based on deep learning
KR102210474B1 (en) Positron emission tomography system and imgae reconstruction method using the same
JP2020036877A (en) Iterative image reconstruction framework
US11941786B2 (en) Image noise reduction method and device
EP3338636B1 (en) An apparatus and associated method for imaging
US11756161B2 (en) Method and system for generating multi-task learning-type generative adversarial network for low-dose PET reconstruction
JP2021013725A (en) Medical apparatus
US20210225491A1 (en) Diagnostic image converting apparatus, diagnostic image converting module generating apparatus, diagnostic image recording apparatus, diagnostic image converting method, diagnostic image converting module generating method, diagnostic image recording method, and computer recordable recording medium
JP2021018109A (en) Medical image processing apparatus, medical image diagnostic apparatus, and nuclear medicine diagnostic apparatus
CN110675357A (en) Training and image optimization method, device and equipment of image optimization network
CN114387236A (en) Low-dose Sinogram denoising and PET image reconstruction method based on convolutional neural network
Ozaki et al. Fast statistical iterative reconstruction for mega-voltage computed tomography
JP2020044162A (en) Medical information processing device and medical information processing system
CN117078693A (en) Lymphoma image reconstruction and segmentation device based on generation countermeasure network
CN111670461B (en) Low radiation dose Computed Tomography Perfusion (CTP) with improved quantitative analysis
US11810228B2 (en) Network determination of limited-angle reconstruction
CN117152365B (en) Method, system and device for oral cavity CBCT ultra-low dose imaging
US11663756B2 (en) Scatter correction for X-ray imaging
US11672498B2 (en) Information processing method, medical image diagnostic apparatus, and information processing system
CN115423892A (en) Attenuation-free correction PET reconstruction method based on maximum expectation network
JP2022159648A (en) Image processing device, image processing method and tomographic image acquisition system
US20220099770A1 (en) Attenuation map estimation of rf coils
WO2022026661A1 (en) Systems and methods for image denoising via adversarial learning
KR20200057463A (en) Diagnostic Image Converting Apparatus, Diagnostic Image Converting Module Generating Apparatus, Diagnostic Image Recording Apparatus, Diagnostic Image Converting Method, Diagnostic Image Converting Module Generating Method, Diagnostic Image Recording Method, and Computer Recordable Recording Medium
CN117115046B (en) Method, system and device for enhancing sparse sampling image of radiotherapy CBCT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230411

Address after: Room 308, No. 177-2 Chuangxin Road, Hunnan District, Shenyang City, Liaoning Province, 110167

Applicant after: Shenyang Zhihe Medical Technology Co.,Ltd.

Address before: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province

Applicant before: Shenyang Neusoft Medical Systems Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200110
