CN114445515B - Image artifact removal method and device, electronic equipment and storage medium
- Publication number: CN114445515B
- Application number: CN202210135312A
- Authority
- CN
- China
- Prior art keywords: image, artifact, model, loss, free
- Legal status: Active
Classifications
- G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction (G06T11/00 2D image generation; G06T11/003 Reconstruction from projections, e.g. tomography)
- G06T7/11 Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
- G06T2207/10132 Ultrasound image (G06T2207/10 Image acquisition modality)
- G06T2207/30101 Blood vessel; Artery; Vein; Vascular (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)
Abstract
The embodiment of the invention discloses an image artifact removal method and device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring an image to be processed from which image artifact information is to be removed, and converting the image coordinates of the image to be processed from Cartesian coordinates to polar coordinates to obtain a target processing image; and inputting the target processing image into a pre-trained artifact removal model to obtain a target generation image corresponding to the image to be processed. The artifact removal model is obtained by training a pre-established image processing model according to a sample processing image and the artifact-free image corresponding to the sample processing image, and the image processing model comprises an image generator, a first discriminator for discriminating whether an input image to be discriminated is an artifact-free image, and a second discriminator for discriminating whether the input image to be discriminated is image data generated by the image generator. According to the technical scheme provided by the embodiment of the invention, image artifacts in the image to be processed can be removed simply, quickly and effectively.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image artifact removing method, an image artifact removing device, an electronic device, and a storage medium.
Background
In intravascular ultrasound (IVUS) imaging, the catheter is usually guided by a guide wire. The presence of the guide wire, however, obstructs the transmission and collection of the ultrasound signal, so that artifacts appear in the generated image, which affects visual analysis of the vessel cross-section and also degrades later data processing algorithms (such as vessel membrane segmentation).
In the field of noise reduction, relatively mature solutions already exist for random noise such as Gaussian and impulse noise, ranging from traditional approaches such as early geometric flow diffusion to the now highly optimized curvature filtering.
However, the noise caused by a guide-wire ghost is quite different from impulse-type noise. The image artifact caused by the guide wire generally covers a larger area, often 50 to 300 pixels or more, whereas the above algorithms act on much finer regions and are therefore suited to high-frequency noise; they cannot effectively handle this kind of large-area, low-frequency artifact. In addition, there are methods that replace the artifact region by sampling the same position two or more times, but their drawback is that the time cost of sampling increases greatly.
Disclosure of Invention
The embodiment of the invention provides an image artifact removal method, an image artifact removal device, electronic equipment and a storage medium, which are used for solving the problem that artifacts in an image cannot be removed rapidly and effectively.
According to an aspect of the present invention, there is provided an image artifact removal method, the method comprising:
Acquiring an image to be processed of which the image artifact information is to be removed, and converting the image coordinates of the image to be processed from Cartesian coordinates to polar coordinates to obtain a target processing image;
Inputting the target processing image into an artifact removal model which is trained in advance to obtain an artifact removal image corresponding to the target processing image;
Converting the image coordinates of the artifact removed image from a polar coordinate system to Cartesian coordinates to obtain a target output image;
The artifact removal model is obtained by training a pre-established image processing model according to an artifact-free image in the polar coordinate system and an artifact-containing image containing image artifact information, and the image processing model comprises an image generator, a first discriminator and a second discriminator, wherein the image generator is used for removing the image artifact information to generate an artifact-free image, the first discriminator is used for discriminating whether an input image to be discriminated is an artifact-free image, and the second discriminator is used for discriminating whether the input image to be discriminated is image data generated by the image generator.
Optionally, the artifact removal model is trained by:
inputting a sample processing image into the image generator to obtain a model generation image, wherein the sample processing image comprises an artifact-free image and an artifact-containing image containing image artifact information;
calculating a model generation loss of the image generator from the artifact-free image and the model generation image corresponding to the artifact-free image, and calculating a first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-containing image and a second discrimination loss of the second discriminator on the model generation images;
And adjusting the image generator according to the model generation loss, the first discrimination loss and the second discrimination loss to obtain an artifact removal model.
Optionally, the first discriminator is trained by:
inputting a first training sample image into the first discriminator to obtain a first model discrimination result corresponding to the first training sample image, wherein the first training sample image comprises the model generation image corresponding to the artifact-containing image and the artifact-free image;
and adjusting model parameters of the first discriminator according to the loss between the first model discrimination result and the first expected discrimination result so as to optimize the first discriminator.
Optionally, the second discriminator is trained by:
inputting a second training sample image into the second discriminator to obtain a second model discrimination result corresponding to the second training sample image, wherein the second training sample image comprises the artifact-free image, the model generation image corresponding to the artifact-free image and the model generation image corresponding to the artifact-containing image;
and adjusting model parameters of the second discriminator according to the loss between the second model discrimination result and the second expected discrimination result so as to optimize the second discriminator.
Optionally, the method further comprises:
Obtaining an original image containing image artifact information, carrying out image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing the image artifact information, and taking the artifact-free image and the artifact-containing image as sample processing images.
Optionally, performing image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing image artifact information includes:
converting the image coordinates of the original image from Cartesian coordinates to polar coordinates to obtain a polar coordinate image;
Obtaining artifact coordinates, which are marked in the polar coordinate image and correspond to image artifact information, and determining an image dividing line according to the artifact coordinates, wherein the image dividing line is positioned in an area except for an area where the image artifact information is positioned in the original image;
And dividing the polar coordinate image into an artifact-free image and an artifact-containing image with image artifact information according to the image dividing line.
Optionally, the method further comprises:
determining rolling information of the polar coordinate image according to image artifact information in the polar coordinate image, wherein the rolling information comprises a rolling direction and a rolling range;
and rolling the polar coordinate image according to the rolling information, returning to execute the operation of acquiring the artifact coordinates marked in the polar coordinate image and corresponding to the image artifact information, and determining an image dividing line according to the artifact coordinates.
According to another aspect of the present invention, there is provided an image artifact removal apparatus comprising:
The target processing image acquisition module is used for acquiring an image to be processed of which the image artifact information is to be removed, and converting the image coordinates of the image to be processed into polar coordinates from Cartesian coordinates to obtain a target processing image;
The artifact removal image generation module is used for inputting the target processing image into an artifact removal model which is trained in advance to obtain an artifact removal image corresponding to the target processing image;
The target image output module is used for converting the image coordinates of the artifact removed image from a polar coordinate system to Cartesian coordinates to obtain a target output image;
The artifact removal model is obtained by training a pre-established image processing model according to an artifact-free image in the polar coordinate system and an artifact-containing image containing image artifact information, and the image processing model comprises an image generator, a first discriminator and a second discriminator, wherein the image generator is used for removing the image artifact information to generate an artifact-free image, the first discriminator is used for discriminating whether an input image to be discriminated is an artifact-free image, and the second discriminator is used for discriminating whether the input image to be discriminated is image data generated by the image generator.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image artifact removal method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the image artifact removal method according to any of the embodiments of the present invention when executed.
According to the technical scheme, the image to be processed of the image artifact information to be removed is obtained, the image coordinates of the image to be processed are converted from Cartesian coordinates to polar coordinates, a target processing image is obtained, and then the target processing image is input into an artifact removal model which is trained in advance, and an artifact-free target generation image corresponding to the image to be processed can be obtained. The initial model corresponding to the artifact removal model, namely the image processing model, comprises an image generator, a first discriminator and a second discriminator, and can discriminate whether the input image to be discriminated is an artifact-free image or not through the first discriminator, and discriminate whether the input image to be discriminated is image data generated by the image generator or not through the second discriminator, so that the artifact removal effect of the image generator on the image to be processed is ensured, the technical problems that the image artifacts cannot be effectively removed, the time consumption is too long and the like in the related technology are solved, and the intelligent removal of the image artifacts in the image to be processed can be simply, rapidly and effectively realized.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image artifact removal method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of an image artifact removal method according to a second embodiment of the present invention;
fig. 3 is an effect comparison chart of preprocessing an image according to a third embodiment of the present invention;
FIG. 4 is an effect comparison chart of the conversion from Cartesian coordinates to polar coordinates of an original image according to a third embodiment of the present invention;
Fig. 5 is a schematic illustration of labeling effect of artifact coordinates of image artifacts in an original image according to a third embodiment of the present invention;
fig. 6 is an effect comparison chart before and after scrolling of an original image according to the third embodiment of the present invention;
fig. 7 is a schematic view of effects before and after splitting an original image according to the third embodiment of the present invention;
fig. 8 is an effect comparison chart before and after image flipping according to the third embodiment of the present invention;
FIG. 9 is a graph showing the comparison of effects of an artifact image before and after an artifact removal model is input according to a third embodiment of the present invention;
FIG. 10 is a graph showing the comparison of effects before and after an artifact-free image is input into an artifact removal model according to a third embodiment of the present invention;
fig. 11 is an effect comparison diagram of an image to be processed and a target output image provided according to a third embodiment of the present invention;
Fig. 12 is a schematic structural diagram of an image artifact removing apparatus according to a fourth embodiment of the present invention;
Fig. 13 is a schematic structural diagram of an electronic device implementing an image artifact removal method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of an image artifact removal method according to a first embodiment of the present invention, where the method is applicable to a situation of automatically removing artifacts in an image, and is particularly applicable to a situation of removing image artifacts in a medical image, and the method may be performed by an image artifact removal device, where the image artifact removal device may be implemented in a form of hardware and/or software, and the image artifact removal device may be configured in a server and/or a terminal. As shown in fig. 1, the method includes:
s110, obtaining an image to be processed of which the image artifact information is to be removed, and converting the image coordinates of the image to be processed into polar coordinates from Cartesian coordinates to obtain a target processing image.
The image to be processed may be understood as an image on which image artifact removal processing is to be performed, and the image to be processed may or may not contain image artifact information. The target processing image can be understood as the image to be processed expressed in a polar coordinate system. Cartesian coordinates are understood to mean coordinates in a rectangular or oblique coordinate system.
Typically, the acquired image to be processed is an image in Cartesian coordinates. Actually acquired images often contain image artifacts, which is why much research has been devoted to removing them. Image artifact removal based on artificial intelligence techniques often requires a large number of artifact-containing and artifact-free images as model training samples. Because obtaining artifact-free images in the Cartesian coordinate system involves considerable practical difficulty, the embodiment of the invention trains the pre-established image processing model with an artifact-free image in the polar coordinate system and an artifact-containing image containing image artifact information to obtain the artifact removal model, which guarantees the model training effect and further improves the image artifact removal effect. In a specific application, after the image coordinates of the image to be processed are converted from Cartesian coordinates to polar coordinates, the image in the polar coordinate system, namely the target processing image, is subjected to artifact removal processing by the artifact removal model.
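Purely as an illustrative sketch and not part of the patent text, the forward and inverse coordinate transforms could be written with OpenCV's warpPolar; the function names, the use of OpenCV and the choice of the inscribed-circle radius are assumptions.

```python
import cv2
import numpy as np

def to_polar(img: np.ndarray) -> np.ndarray:
    """Map a Cartesian IVUS frame to polar coordinates (rows = angle, columns = radius)."""
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)                          # radius of the inscribed circle
    return cv2.warpPolar(img, (w, h), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)

def to_cartesian(polar_img: np.ndarray) -> np.ndarray:
    """Inverse mapping back to Cartesian coordinates, used on the artifact-removed image."""
    h, w = polar_img.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)
    return cv2.warpPolar(polar_img, (w, h), center, max_radius,
                         cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR + cv2.WARP_INVERSE_MAP)
```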
And S120, inputting the target processing image into an artifact removal model which is trained in advance, and obtaining an artifact removal image corresponding to the target processing image.
The artifact removal model is obtained by training a pre-established image processing model according to an artifact-free image in the polar coordinate system and an artifact-containing image containing image artifact information, and the image processing model comprises an image generator, a first discriminator and a second discriminator, wherein the image generator is used for removing the image artifact information to generate an artifact-free image, the first discriminator is used for discriminating whether an input image to be discriminated is an artifact-free image, and the second discriminator is used for discriminating whether the input image to be discriminated is image data generated by the image generator.
An important feature of the polar coordinate system is that any point in Cartesian coordinates has an infinite number of representations in polar coordinates. Therefore, training the pre-established image processing model with the artifact-free image and the artifact-containing image in the polar coordinate system has the advantage that the training samples of the artifact removal model can be processed more conveniently.
The image generator may be a convolution-based image generation model. Unlike conventional discrimination and regression models, an image generation model is mainly used for generating data; a convolutional generation model in particular is dedicated to generating two-dimensional data. Taking a typical neural network such as ResNet as an example, the convolutions in such a network extract features by sliding convolution kernels over the image, whereas a convolutional generation model performs the opposite operation and generates an image from a feature vector and convolution kernels; this process is called deconvolution. In the embodiment of the invention, the image generator is trained with the artifact-free image in the polar coordinate system and the artifact-containing image containing image artifact information as training samples. In order to improve the resolution of the image generated by the image generation model, a residual module can be added on the basis of convolution, and at the same time the restriction on the size of the input image can be reduced.
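As an illustrative sketch only (not part of the patent text), a residual, convolution plus deconvolution generator of this kind could look as follows in PyTorch; the layer counts, channel widths and normalization choices are assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection preserves fine detail

class Generator(nn.Module):
    """Encoder, residual blocks, then a deconvolution decoder, in the spirit of the
    convolutional generation model described above (all sizes are assumptions)."""
    def __init__(self, in_ch: int = 1, base: int = 64, n_res: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # downsample
            *[ResidualBlock(base * 2) for _ in range(n_res)],
            nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),  # deconvolution
            nn.ReLU(inplace=True),
            nn.Conv2d(base, in_ch, 7, padding=3),
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)
```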
In the process of implementing the invention, the inventor found that the training of the image generator is more favourable when the structure of the discriminator is relatively simple. Optionally, the first discriminator and the second discriminator may employ a convolutional network of relatively simple structure, such as a 3-layer convolutional network. The first discriminator may be trained with the output image generated by the image generator from the artifact-containing image and the artifact-free image as training samples, and is used for discriminating whether the input image to be discriminated is an artifact-free image, so as to judge whether the output image of the image generator is artifact-free and thereby check the artifact removal effect of the image generator. For example, the second discriminator may be trained with the artifact-free image, the model generation image corresponding to the artifact-free image and the model generation image corresponding to the artifact-containing image as training samples, and is used for discriminating whether the input image to be discriminated is image data generated by the image generator, so as to drive the image data generated by the image generator closer to real data.
S130, converting the image coordinates of the artifact removed image from a polar coordinate system to Cartesian coordinates to obtain a target output image.
Because the target processing image input into the artifact removal model is an image in the polar coordinate system, the artifact-removed image output by the artifact removal model is also a polar coordinate image. Converting the image coordinates of the artifact-removed image from the polar coordinate system back to Cartesian coordinates ensures that the actually obtained target output image and the acquired image to be processed have the same presentation mode, so that the removal effect of the image artifact can be intuitively observed and later image viewing is facilitated.
According to the technical scheme, the image to be processed of the image artifact information to be removed is obtained, the image coordinates of the image to be processed are converted from Cartesian coordinates to polar coordinates, a target processing image is obtained, and then the target processing image is input into an artifact removal model which is trained in advance, and an artifact-free target generation image corresponding to the image to be processed can be obtained. The initial model corresponding to the artifact removal model, namely the image processing model, comprises an image generator, a first discriminator and a second discriminator, and can discriminate whether the input image to be discriminated is an artifact-free image or not through the first discriminator, and discriminate whether the input image to be discriminated is image data generated by the image generator or not through the second discriminator, so that the artifact removal effect of the image generator on the image to be processed is ensured, the technical problems that the image artifacts cannot be effectively removed, the time consumption is too long and the like in the related technology are solved, and the intelligent removal of the image artifacts in the image to be processed can be simply, rapidly and effectively realized.
Example 2
Fig. 2 is a flowchart of an image artifact removal method according to a second embodiment of the present invention, in which the training process of the artifact removal model is further refined on the basis of the foregoing embodiment. Optionally, the artifact removal model is trained by: inputting a sample processing image into the image generator to obtain a model generation image, wherein the sample processing image comprises an artifact-free image and an artifact-containing image containing image artifact information; calculating a model generation loss of the image generator from the artifact-free image and the model generation image corresponding to the artifact-free image, and calculating a first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-containing image and a second discrimination loss of the second discriminator on the model generation images; and adjusting the image generator according to the model generation loss, the first discrimination loss and the second discrimination loss to obtain the artifact removal model. Technical terms and features that are the same as those of the foregoing embodiments are not repeated herein.
As shown in fig. 2, the method includes:
S210, inputting a sample processing image into the image generator to obtain a model generation image, wherein the sample processing image comprises an artifact-free image and an artifact-containing image containing image artifact information.
The sample processing image can be understood as a training sample image for training the image generator, the first discriminator and the second discriminator.
It can be understood that, before the sample processing image is input into the image generator, the method further comprises: acquiring the sample processing image. In the embodiment of the present invention, the sample processing image may be obtained in various ways; for example, it may be collected by an image collecting device, a historical image meeting the requirements may be obtained from a database as the sample processing image, or the sample processing image may be obtained after image processing is performed on an available original image.
Illustratively, acquiring the sample processing image may specifically include: obtaining an original image containing image artifact information, carrying out image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing the image artifact information, and taking the artifact-free image and the artifact-containing image as sample processing images. In practical applications, the original images that can be directly used as model training samples are often limited, especially valuable medical images, where the sample size of artifact-free medical images is even smaller. However, model training often requires a large number of images. In the embodiment of the invention, an artifact-free image can be segmented out of an artifact-containing image by performing image segmentation on the original image, so that the sample size of artifact-free images can be effectively increased and the training effect of the model is guaranteed.
Specifically, performing image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing image artifact information includes: converting the image coordinates of the original image from Cartesian coordinates to polar coordinates to obtain a polar coordinate image; obtaining artifact coordinates marked in the polar coordinate image and corresponding to the image artifact information, and determining an image dividing line according to the artifact coordinates, wherein the image dividing line is located outside the area where the image artifact information is located in the original image; and dividing the polar coordinate image into an artifact-free image and an artifact-containing image containing image artifact information according to the image dividing line.
As described above, the image coordinates of the original image are generally Cartesian coordinates, so the image coordinates of the original image can first be converted from Cartesian coordinates to polar coordinates. In the polar coordinate image, the image can be regarded as wrapping around end-to-end. Since samples of the same size are usually adopted in the model training process, the image can be divided into two parts along an image dividing line that lies outside the area where the image artifact information is located, so that an artifact-free image and an artifact-containing image of the same size are obtained.
As previously mentioned, model training often requires a large number of images. Thus, the number of sample processing images can be further increased by means of image processing. For example, the polar coordinate image may be segmented again after the position of the image artifact in the polar coordinate image is adjusted, so as to segment out new artifact-free and artifact-containing images.
Optionally, determining rolling information of the polar coordinate image according to image artifact information in the polar coordinate image, wherein the rolling information comprises a rolling direction and a rolling range; and rolling the polar coordinate image according to the rolling information, returning to execute the operation of acquiring the artifact coordinates marked in the polar coordinate image and corresponding to the image artifact information, and determining an image dividing line according to the artifact coordinates.
The image artifact information may include at least one of morphological information, boundary information, position information and size information of the image artifact. The rolling range is used to ensure that, after the image is divided in two, the image artifact information is contained entirely in one of the images and absent from the other; for example, the upper limit of the rolling range may be a value not greater than half of the height or length of the image.
Specifically, after the polar image is scrolled according to the scroll information, the polar image may be segmented based on the same image segmentation line to increase the sample processed image by changing the position of the image artifact in the image. It will be appreciated that more sample processing images may be added by scrolling the polar image multiple times. The image rolling step length corresponding to each rolling of the polar coordinate image may be set according to actual requirements under the condition that the difference value of the two end values of the rolling range is not exceeded, which is not limited herein.
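A minimal sketch of the rolling and segmentation described above (NumPy is an assumption; the function name and the choice of cutting at half the image height follow the description of the embodiments but are illustrative only):

```python
import numpy as np

def roll_and_split(polar_img, step):
    """Roll the polar image along the angular axis by `step` rows and cut it in two along the
    horizontal mid line. `step` must be chosen from the labelled artifact coordinates so that
    the artifact stays entirely inside one half after rolling."""
    h = polar_img.shape[0]
    rolled = np.roll(polar_img, step, axis=0)   # rows wrap around: the polar image is end-to-end
    artifact_free = rolled[: h // 2]            # half without the artifact -> artifact-free sample
    artifact_image = rolled[h // 2:]            # half containing the artifact -> artifact-containing sample
    return artifact_free, artifact_image

# Repeating the roll with different step values inside the allowed rolling range yields
# several sample pairs from a single labelled frame, e.g.:
#     pairs = [roll_and_split(polar_img, s) for s in range(0, max_step, stride)]
```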
S220, calculating a model generation loss of the image generator from the artifact-free image and the model generation image corresponding to the artifact-free image, and calculating a first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-containing image and a second discrimination loss of the second discriminator on the model generation images.
The model generation loss of the image generator is calculated from the artifact-free image and the model generation image corresponding to the artifact-free image. Through the model generation loss, the image generator is trained to keep artifact-free regions as original as possible, i.e. the model generation loss is driven as small as possible, so that after the image generator processes an artifact-free image, the generated image is consistent with the input artifact-free image.
Since one of the purposes of the image generator is to convert an artifact-containing image into an artifact-free image, the first discriminator discriminates whether the input image to be discriminated is an artifact-containing image or an artifact-free image. By calculating the first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-containing image, and adjusting the image generator by means of the first discrimination loss, the image generator can achieve a good image artifact removal effect.
Optionally, the first discriminator is trained by:
inputting a first training sample image into the first discriminator to obtain a first model discrimination result corresponding to the first training sample image, wherein the first training sample image comprises the model generation image corresponding to the artifact-containing image and the artifact-free image;
and adjusting model parameters of the first discriminator according to the loss between the first model discrimination result and the first expected discrimination result so as to optimize the first discriminator.
The second discriminator is used for discriminating whether the input image to be discriminated is image data generated by the image generator. By calculating the second discrimination loss of the second discriminator on the model generation image corresponding to the artifact-containing image, and then adjusting the image generator by means of the second discrimination loss, the images generated by the image generator become closer to real artifact-free images, and at the same time the distribution of the generated images stays similar to the distribution of the input images.
Similarly, the second discriminator is trained by:
inputting a second training sample image into the second discriminator to obtain a second model discrimination result corresponding to the second training sample image, wherein the second training sample image comprises the artifact-free image, the model generation image corresponding to the artifact-free image and the model generation image corresponding to the artifact-containing image;
and adjusting model parameters of the second discriminator according to the loss between the second model discrimination result and the second expected discrimination result so as to optimize the second discriminator.
And S230, adjusting the image generator according to the model generation loss, the first discrimination loss and the second discrimination loss to obtain an artifact removal model.
Specifically, a model total loss may be calculated from the model generation loss, the first discrimination loss and the second discrimination loss, and the image generator may be adjusted based on the model total loss. For example, the model total loss may be calculated by weighting the model generation loss, the first discrimination loss and the second discrimination loss and then summing them.
Specifically, the model total loss for adjusting the image generator can be calculated as follows:

L_G = α·L_DB + β·L_DT + γ·L_cons

where L_G is the model total loss of the image generator, L_DB is the first discrimination loss of the first discriminator, L_DT is the second discrimination loss of the second discriminator, L_cons is the model generation loss, A represents the artifact-containing image, B represents the artifact-free image, G represents the image generator, D_B represents the first discriminator for discriminating whether an input image to be discriminated is an artifact-free image, D_T represents the second discriminator for discriminating whether an input image to be discriminated is image data generated by the image generator, and α, β, γ are hyper-parameters that adjust the weights of the respective losses in the model total loss. The values of α, β and γ may be the same or different, and the specific values can be determined according to actual conditions.
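Purely as an illustrative sketch (PyTorch is an assumption, as are the placeholder weight values and the pairing of weights with losses), the weighted summation above can be written as:

```python
import torch

def total_generator_loss(loss_db: torch.Tensor, loss_dt: torch.Tensor,
                         loss_cons: torch.Tensor,
                         alpha: float = 1.0, beta: float = 1.0, gamma: float = 10.0) -> torch.Tensor:
    """Weighted sum of the first discrimination loss, the second discrimination loss and the
    model generation (consistency) loss; weight values here are placeholders only."""
    return alpha * loss_db + beta * loss_dt + gamma * loss_cons
```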
S240, obtaining an image to be processed of which the image artifact information is to be removed, and converting the image coordinates of the image to be processed into polar coordinates from Cartesian coordinates to obtain a target processing image.
S250, inputting the target processing image into an artifact removal model which is trained in advance, and obtaining an artifact removal image corresponding to the target processing image.
And S260, converting the image coordinates of the artifact removed image from a polar coordinate system to Cartesian coordinates to obtain a target output image.
According to the technical scheme of this embodiment, when the image generator is adjusted, the model generation loss between the artifact-free image and its model generation image keeps the image information of the artifact-free regions of the output image consistent with that of the input image; the first discrimination loss, obtained when the first discriminator judges whether the input image to be discriminated is an artifact-free image, ensures that the adjusted artifact removal model has a better image artifact removal capability; and the second discrimination loss, obtained when the second discriminator judges whether the input image to be discriminated is image data generated by the image generator, makes the target generation image produced after artifact removal fit the real artifact-free image as closely as possible. The performance of the artifact removal model is thus comprehensively optimized, and the artifact removal model also has better robustness.
Example 3
As an optional example of the embodiment of the present invention, a deep generative adversarial model is trained so that, after an image to be processed containing an image artifact is input, an image with the artifact removed can be output. The specific operation steps may be:
1. converting the image to be processed from Cartesian coordinates to polar coordinates to obtain a target processing image;
2. positioning the image artifact of the target processing image, and then rolling the target processing image in the target direction so that the image artifact keeps a certain distance from the edges of the target processing image;
3. inputting the rolled target processing image into an artifact removal model to obtain an artifact removal image;
4. performing, on the artifact-removed image, a rolling operation that reverses the rolling of step 2;
5. and finally, converting the artifact removed image after the reset processing from polar coordinates to Cartesian coordinates to obtain a target output image, namely an image after the artifact is removed from the image to be processed.
It should be noted that the processes of steps 2 and 4 are not necessarily performed, and whether or not they are performed may be determined according to the actual situation; a minimal sketch of the overall pipeline is given below.
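The following is an illustrative sketch of steps 1 to 5 and is not part of the patent text; it assumes the to_polar/to_cartesian helpers sketched in Example 1 and treats the trained generator as a plain callable (tensor conversion is omitted for brevity).

```python
import numpy as np

def remove_artifact(img_cartesian, model, roll_step=0):
    """End-to-end inference following steps 1-5 above."""
    polar = to_polar(img_cartesian)                  # step 1: Cartesian -> polar
    rolled = np.roll(polar, roll_step, axis=0)       # step 2 (optional): move the artifact away from the border
    cleaned = model(rolled)                          # step 3: artifact removal model
    restored = np.roll(cleaned, -roll_step, axis=0)  # step 4 (optional): undo the roll
    return to_cartesian(restored)                    # step 5: polar -> Cartesian target output image
```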
The specific training process of the artifact removal model is described in detail below, and is divided into five parts of model principle, model architecture, loss function setting, data preparation and training method.
A first part: principle of model
By modifying a convolutional image generation model and a generative adversarial network, a model is trained to convert an image in the artifact-containing domain A into an image in the artifact-free domain B, while the consistency of pixels outside the artifact region is maintained during generation.
A second part: model architecture
In the model framework of the image generator, a residual module is added on the basis of convolution, so that the generated image is clearer and the restriction on the size of the input image is smaller. The discriminator uses a simple 3-layer convolutional network (i.e., PatchGAN) to divide the input into 70 x 70 patches and then performs the discrimination on each patch.
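A minimal sketch of such a 3-layer patch discriminator (again in PyTorch, with assumed channel widths and kernel sizes; not the patent's exact network) is:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Simple 3-layer convolutional discriminator in the spirit of PatchGAN: the output is a
    grid of logits, each judging one receptive-field patch of the input."""
    def __init__(self, in_ch: int = 1, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),   # one logit per patch
        )

    def forward(self, x):
        return self.net(x)
```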
Third section: loss function design
The model training method of the embodiment of the invention differs from a conventional generative adversarial network and mainly aims at the following:
Converting the image with the artifacts into an image without the artifacts;
generating a distribution of images that approximates the distribution of the input image;
if the artifact-free image is processed, the generated image is consistent with the input image.
Three losses are defined around the above three purposes:
L_DB(G, D_B) = E_B[log D_B(B)] + E_A[log(1 - D_B(G(A)))]    (1)
L_DT(G, D_T) = E_B[log D_T(B)] + E_A[log(1 - D_T(G(A)))] + E_B[log(1 - D_T(G(B)))]    (2)
L_cons(G) = E_B[|| G(B) - B ||_1]    (3)
where A denotes an artifact-containing image, B denotes an artifact-free image, G denotes the image generator, D_B denotes the first discriminator for discriminating whether an input image to be discriminated is an artifact-free image, and D_T denotes the second discriminator for discriminating whether an input image to be discriminated is image data generated by the image generator.
Loss (1) is used to train the generator G to generate artifact-free data and to train the discriminator D_B to distinguish the domain of the input image to be discriminated, i.e. artifact-containing or artifact-free; loss (2) is used to train the image generator G to generate data that is as realistic as possible and to train the discriminator D_T to distinguish whether the input image to be discriminated is real; loss (3) is used to train the image generator G to keep the artifact-free domain as original as possible. The final total loss function is:
L_total = α·L_DB + β·L_DT + γ·L_cons

where α, β and γ are hyper-parameters that adjust the weight of each sub-loss.
Fourth part: data preparation
The training data requires two groups of images, with and without artifacts. The existing data generally contains artifacts, but the artifacts usually occupy only a small angular range, so one image can be cut to obtain two images, one containing artifacts and one artifact-free. The specific flow is as follows:
1. Preprocessing an original image to remove information affecting training such as watermarks in the original image, as shown in fig. 3;
2. converting the original image processed in step 1 from Cartesian coordinates to polar coordinates, as shown in FIG. 4;
3. Labeling the artifact coordinates of the image artifact, as shown in fig. 5, the artifact coordinates are denoted by "+";
4. As shown in fig. 6, the original image after labeling is rolled by a random length along the arrow direction, but the image artifact is always kept at the lower half part of the image, and an image dividing line is determined;
5. Image cutting is carried out on the rolled original image along an image dividing line positioned at a half of the height of the rolled original image, so that two images with the same size are obtained, namely an artifact-containing image and an artifact-free image without image artifacts, as shown in fig. 7, wherein a white dotted line represents the image dividing line:
6. The segmented artifact-containing and/or artifact-free images are randomly flipped horizontally and/or vertically to augment the dataset of the training model, as shown in fig. 8 (a minimal sketch of this flipping step is given below).
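The flipping augmentation of step 6 could, for instance, be sketched as follows (NumPy is an assumption; the flip probability of 0.5 is a placeholder):

```python
import random
import numpy as np

def random_flip(img: np.ndarray) -> np.ndarray:
    """Randomly flip a segmented sample horizontally and/or vertically to enlarge the data set."""
    if random.random() < 0.5:
        img = np.flip(img, axis=1)   # horizontal flip
    if random.random() < 0.5:
        img = np.flip(img, axis=0)   # vertical flip
    return img.copy()
```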
Fifth part: training process
The training process may be divided into two or more training cycles, each training cycle including training of the image generator and training of the discriminators D_B and D_T. The training steps of the respective networks in each cycle are as follows (a combined sketch is given after these steps):
1. training of an image generator:
respectively inputting an artifact-containing image A and an artifact-free image B into the image generator to obtain two generated images A' and B';
letting the discriminator D_B discriminate which of A' and B contains an artifact, to obtain the discrimination loss L_DB;
letting the discriminator D_T discriminate which of A' and B' are generated images, to obtain the discrimination loss L_DT;
taking the L1 distance between B' and B to obtain the consistency loss L_cons;
multiplying L_DB, L_DT and L_cons by their corresponding weights and summing them to obtain the total loss L_G; back propagation is then used to compute the gradient with respect to the generator parameters, and this total loss optimizes the generator parameters but not the discriminators.
2. Training of a discriminator:
letting the discriminator D_B discriminate which of A' and B contains an artifact, to obtain the discrimination loss L_DB, and then optimizing only the parameters of the discriminator D_B;
letting the discriminator D_T discriminate which of A', B' and B are generated images, to obtain the discrimination loss L_DT, and then optimizing only the parameters of the discriminator D_T.
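As an illustrative sketch of one such training cycle (PyTorch and binary cross-entropy adversarial losses are assumptions, as are the weight values and helper names), the alternating generator/discriminator updates could be written as:

```python
import torch
import torch.nn.functional as F

def bce(pred, real: bool):
    """Binary cross-entropy against an all-ones (real) or all-zeros (generated) target."""
    target = torch.ones_like(pred) if real else torch.zeros_like(pred)
    return F.binary_cross_entropy_with_logits(pred, target)

def train_cycle(G, D_B, D_T, opt_G, opt_D, a, b, alpha=1.0, beta=1.0, gamma=10.0):
    """One cycle: update the generator first, then the two discriminators.
    a: artifact-containing batch, b: artifact-free batch (both in polar coordinates)."""
    # 1. generator step: only opt_G holds generator parameters, so D_B/D_T are not updated here
    fake_b, recon_b = G(a), G(b)                                                # A' and B' in the text above
    loss_G = (alpha * bce(D_B(fake_b), True)                                    # L_DB: output should look artifact-free
              + beta * (bce(D_T(fake_b), True) + bce(D_T(recon_b), True))       # L_DT: output should look real
              + gamma * F.l1_loss(recon_b, b))                                  # L_cons: keep B unchanged
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # 2. discriminator step: generator outputs are detached so G is not updated
    fake_b, recon_b = fake_b.detach(), recon_b.detach()
    loss_DB = bce(D_B(b), True) + bce(D_B(fake_b), False)
    loss_DT = bce(D_T(b), True) + bce(D_T(fake_b), False) + bce(D_T(recon_b), False)
    opt_D.zero_grad(); (loss_DB + loss_DT).backward(); opt_D.step()
    return loss_G.item(), loss_DB.item(), loss_DT.item()
```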
By adopting the above technical scheme, the effect of inputting the target processing image in the polar coordinate system into the artifact removal model to obtain the artifact-removed image can be seen in figs. 9 and 10. As shown in fig. 9, after the artifact-containing image is processed, the artifact is removed well compared with the image before processing, while the information of the artifact-free regions is preserved (the left image is the artifact-containing input, the right image is the processed output). As shown in fig. 10, there is no significant difference before and after processing an artifact-free image (the left image is the artifact-free input, the right image is its processed output). The comparison of the image to be processed with the target output image, obtained after the artifact-removed image output by the artifact removal model is transformed from polar coordinates back to Cartesian coordinates, is shown in fig. 11. Thus, for an artifact-containing input, the artifact removal model removes the artifact and fills the region behind the artifact with appropriate brightness, while for an artifact-free input the model output is basically consistent with the original picture.
Example 4
Fig. 12 is a schematic structural diagram of an image artifact removing apparatus according to a fourth embodiment of the present invention. As shown in fig. 12, the apparatus includes: a target processing image acquisition module 1210, a de-artifact image generation module 1220, and a target image output module 1230.
The target processing image acquisition module 1210 is configured to acquire an image to be processed from which image artifact information is to be removed, and to convert the image coordinates of the image to be processed from Cartesian coordinates to polar coordinates to obtain a target processing image; the artifact removal image generation module 1220 is configured to input the target processing image into a pre-trained artifact removal model to obtain an artifact-removed image corresponding to the target processing image, where the artifact removal model is obtained by training a pre-established image processing model according to an artifact-free image in the polar coordinate system and an artifact-containing image containing image artifact information, and the image processing model comprises an image generator, a first discriminator and a second discriminator, where the image generator is configured to remove the image artifact information to generate an artifact-free image, the first discriminator is configured to discriminate whether the input image to be discriminated is an artifact-free image, and the second discriminator is configured to discriminate whether the input image to be discriminated is image data generated by the image generator; the target image output module 1230 is configured to convert the image coordinates of the artifact-removed image from the polar coordinate system to Cartesian coordinates to obtain a target output image.
According to the technical scheme, the image to be processed of the image artifact information to be removed is obtained, the image coordinates of the image to be processed are converted from Cartesian coordinates to polar coordinates, a target processing image is obtained, and then the target processing image is input into an artifact removal model which is trained in advance, and an artifact-free target generation image corresponding to the image to be processed can be obtained. The initial model corresponding to the artifact removal model, namely the image processing model, comprises an image generator, a first discriminator and a second discriminator, and can discriminate whether the input image to be discriminated is an artifact-free image or not through the first discriminator, and discriminate whether the input image to be discriminated is image data generated by the image generator or not through the second discriminator, so that the artifact removal effect of the image generator on the image to be processed is ensured, the technical problems that the image artifacts cannot be effectively removed, the time consumption is too long and the like in the related technology are solved, and the intelligent removal of the image artifacts in the image to be processed can be simply, rapidly and effectively realized.
Optionally, the artifact removal model is obtained through training of a model training device. Wherein, the model training device includes:
The model generation image determining module is used for inputting a sample processing image into the image generator to obtain a model generation image, wherein the sample processing image comprises an artifact-free image and an artifact-containing image containing image artifact information;
A model loss calculation module, configured to calculate a model generation loss of the image generator from the artifact-free image and the model generation image corresponding to the artifact-free image, and to calculate a first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-containing image and a second discrimination loss of the second discriminator on the model generation images;
and the model adjustment module is used for adjusting the image generator according to the model generation loss, the first discrimination loss and the second discrimination loss so as to obtain an artifact removal model.
Optionally, the first discriminator is trained by the following modules of the model training apparatus:
The first judging result output module is used for inputting a first training sample image into the first discriminator to obtain a first model discrimination result corresponding to the first training sample image, wherein the first training sample image comprises the model generation image corresponding to the artifact-containing image and the artifact-free image;
And the first judging parameter optimizing module is used for adjusting the model parameters of the first judging device according to the loss between the first model judging result and the first expected judging result so as to optimize the first judging device.
Optionally, the second discriminator is trained by the following modules of the model training apparatus:
The second discrimination result output module is configured to input a second training sample image into the second discriminator to obtain a second model discrimination result corresponding to the second training sample image, where the second training sample image includes the artifact-free image, the model generation image corresponding to the artifact-free image, and the model generation image corresponding to the artifact-containing image;
The second discrimination parameter optimization module is configured to adjust the model parameters of the second discriminator according to the loss between the second model discrimination result and a second expected discrimination result, so as to optimize the second discriminator.
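Analogously, a minimal sketch of a second-discriminator update is given below. Here the real artifact-free image and the two kinds of model generation images are fed in with assumed labels (1 = image data generated by the generator, 0 = not generated); the loss form and labels are illustrative assumptions.

```python
# Sketch of one optimisation step for the second discriminator, trained to
# tell whether its input is image data generated by the image generator.
# Labels and loss are illustrative assumptions.
import torch
import torch.nn.functional as F

def second_discriminator_step(d_second, opt_d2, artifact_free, gen_free, gen_art):
    samples = [(artifact_free, 0.0),       # real image: not generated
               (gen_free.detach(), 1.0),   # G(A): generated
               (gen_art.detach(), 1.0)]    # G(B): generated
    loss = 0.0
    for x, label in samples:
        pred = d_second(x)
        loss = loss + F.binary_cross_entropy_with_logits(
            pred, torch.full_like(pred, label))
    opt_d2.zero_grad()
    loss.backward()
    opt_d2.step()
    return loss.item()
```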
Optionally, the model training apparatus further comprises:
the sample image acquisition module is used for acquiring an original image containing image artifact information, carrying out image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing the image artifact information, and taking the artifact-free image and the artifact-containing image as sample processing images.
Optionally, the sample image acquisition module includes:
the image coordinate conversion unit is used for converting the image coordinates of the original image from Cartesian coordinates to polar coordinates to obtain a polar coordinate image;
The image dividing line determining unit is configured to obtain the artifact coordinates, labelled in the polar coordinate image, that correspond to the image artifact information, and to determine an image dividing line according to the artifact coordinates, where the image dividing line is located in a region of the original image other than the region where the image artifact information is located;
The sample image obtaining unit is configured to divide the polar coordinate image along the image dividing line into an artifact-free image and an artifact-containing image containing the image artifact information.
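A small sketch of this segmentation step is given below: once the labelled artifact rows of the polar image are known, a dividing line is chosen outside the artifact region and the image is split along it. The row-wise representation and the assumption that the artifact rows are contiguous (after any rolling step) are illustrative, not the patented procedure.

```python
# Sketch: split a polar image (theta rows x radius columns) along a dividing
# line that lies outside the labelled artifact region. Assumes the artifact
# rows are contiguous.
import numpy as np

def split_polar_image(polar_img, artifact_rows):
    n_theta = polar_img.shape[0]
    first, last = min(artifact_rows), max(artifact_rows)
    divide = (last + 1) % n_theta                 # first artifact-free row after the block
    rolled = np.roll(polar_img, -divide, axis=0)  # dividing line moved to row 0
    n_artifact = last - first + 1
    artifact_free_image = rolled[:n_theta - n_artifact]
    artifact_containing_image = rolled[n_theta - n_artifact:]
    return artifact_free_image, artifact_containing_image
```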
Optionally, the sample image acquisition module further comprises:
The rolling information determining unit is configured to determine rolling information of the polar coordinate image according to the image artifact information in the polar coordinate image, where the rolling information includes a rolling direction and a rolling range;
The image rolling unit is configured to roll the polar coordinate image according to the rolling information, and to return to the operation of obtaining the artifact coordinates labelled in the polar coordinate image that correspond to the image artifact information and determining an image dividing line according to the artifact coordinates.
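For illustration, the rolling operation could be as simple as a cyclic shift along the theta axis that moves the labelled artifact region away from the image boundary before the dividing line is re-determined; the centring heuristic below is an assumption rather than the patented rule for choosing the rolling direction and range.

```python
# Sketch of the rolling step: cyclically shift the polar image along the
# theta axis so the labelled artifact region sits near the centre rows,
# then re-acquire the artifact coordinates on the rolled image.
import numpy as np

def roll_polar_image(polar_img, artifact_rows):
    n_theta = polar_img.shape[0]
    centre = int(round(np.mean(artifact_rows)))   # crude centre of the artifact
    shift = n_theta // 2 - centre                 # rolling direction and range
    rolled = np.roll(polar_img, shift, axis=0)
    rolled_rows = [(r + shift) % n_theta for r in artifact_rows]
    return rolled, rolled_rows
```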
The image artifact removal device provided by the embodiments of the present invention can execute the image artifact removal method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example five
Fig. 13 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 13, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller or microcontroller. The processor 11 performs the various methods and processes described above, such as the image artifact removal method.
In some embodiments, the image artifact removal method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the image artifact removal method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the image artifact removal method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of high management difficulty and weak service scalability of traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (9)
1. An image artifact removal method, comprising:
Acquiring an image to be processed from which image artifact information is to be removed, and converting the image coordinates of the image to be processed from Cartesian coordinates to polar coordinates to obtain a target processing image;
Inputting the target processing image into an artifact removal model which is trained in advance to obtain an artifact removal image corresponding to the target processing image;
Converting the image coordinates of the artifact removed image from a polar coordinate system to Cartesian coordinates to obtain a target output image;
The artifact removal model is obtained by training a pre-established image processing model according to an artifact-free image in a polar coordinate system and an artifact-containing image containing image artifact information, and the image processing model comprises an image generator, a first discriminator and a second discriminator, wherein the image generator is used for removing the image artifact information to generate an artifact-free image, the first discriminator is used for discriminating whether an input image to be discriminated is an artifact-free image, and the second discriminator is used for discriminating whether the input image to be discriminated is image data generated by the image generator;
the artifact removal model is obtained through training in the following mode:
inputting a sample processing image into the image generator to obtain a model generation image, wherein the sample processing image comprises an artifact-free image and an artifact-containing image containing image artifact information;
Calculating a model generation loss of the image generator from the model generation image corresponding to the artifact-free image and the artifact-free image, and calculating a first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-free image and a second discrimination loss of the second discriminator on the model generation image corresponding to the artifact-free image;
adjusting the image generator according to the model generation loss, the first discrimination loss and the second discrimination loss to obtain an artifact removal model;
Wherein said calculating a model total loss from said model generation loss, said first discrimination loss and said second discrimination loss comprises: calculating the model total loss by respectively weighting the model generation loss, the first discrimination loss and the second discrimination loss and then summing them; the model total loss used for adjusting the image generator is calculated as follows:

L_total(G, D_B, D_T) = α · L_DB + β · L_DT + γ · L_G

wherein L_total(G, D_B, D_T) is the model total loss of the image generator, L_DB is the first discrimination loss of the first discriminator, L_DT is the second discrimination loss of the second discriminator, L_G is the model generation loss of the image generator, A represents an artifact-free image, B represents an artifact-containing image, G represents the image generator, D_B represents the first discriminator used to discriminate whether an input image to be discriminated is an artifact-free image, D_T represents the second discriminator used to discriminate whether an input image to be discriminated is image data generated by the image generator, and α, β, γ are hyper-parameters to be adjusted, serving respectively as the weights of the first discrimination loss, the second discrimination loss and the model generation loss.
2. The method of claim 1, wherein the first discriminator is trained by:
Inputting a first training sample image into the first discriminator to obtain a first model discrimination result corresponding to the first training sample image, wherein the first training sample image comprises the model generation image corresponding to the artifact-free image and the artifact-free image;
and adjusting model parameters of the first discriminator according to the loss between the first model discrimination result and the first expected discrimination result so as to optimize the first discriminator.
3. The method of claim 1, wherein the second discriminator is trained by:
inputting a second training sample image into the second discriminator to obtain a second model discrimination result corresponding to the second training sample image, wherein the second training sample image comprises the artifact-free image, the model generation image corresponding to the artifact-free image, and the model generation image corresponding to the artifact-containing image;
and adjusting model parameters of the second discriminator according to the loss between the second model discrimination result and the second expected discrimination result so as to optimize the second discriminator.
4. The method as recited in claim 1, further comprising:
Acquiring an original image containing image artifact information, performing image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing the image artifact information, and taking the artifact-free image and the artifact-containing image as sample processing images.
5. The method of claim 4, wherein said performing image segmentation on the original image to obtain an artifact-free image and an artifact-containing image containing image artifact information comprises:
converting the image coordinates of the original image from Cartesian coordinates to polar coordinates to obtain a polar coordinate image;
Obtaining artifact coordinates, which are labelled in the polar coordinate image and correspond to the image artifact information, and determining an image dividing line according to the artifact coordinates, wherein the image dividing line is located in a region of the original image other than the region where the image artifact information is located;
And dividing the polar coordinate image into an artifact-free image and an artifact-containing image with image artifact information according to the image dividing line.
6. The method as recited in claim 5, further comprising:
determining rolling information of the polar coordinate image according to image artifact information in the polar coordinate image, wherein the rolling information comprises a rolling direction and a rolling range;
and rolling the polar coordinate image according to the rolling information, returning to execute the operation of acquiring the artifact coordinates marked in the polar coordinate image and corresponding to the image artifact information, and determining an image dividing line according to the artifact coordinates.
7. An image artifact removal device, comprising:
The target processing image acquisition module is used for acquiring an image to be processed of which the image artifact information is to be removed, and converting the image coordinates of the image to be processed into polar coordinates from Cartesian coordinates to obtain a target processing image;
The artifact removal image generation module is used for inputting the target processing image into an artifact removal model which is trained in advance to obtain an artifact removal image corresponding to the target processing image;
The target image output module is used for converting the image coordinates of the artifact removed image from a polar coordinate system to Cartesian coordinates to obtain a target output image;
The artifact removal model is obtained by training a pre-established image processing model according to an artifact-free image in a polar coordinate system and an artifact-containing image containing image artifact information, and the image processing model comprises an image generator, a first discriminator and a second discriminator, wherein the image generator is used for removing the image artifact information to generate an artifact-free image, the first discriminator is used for discriminating whether an input image to be discriminated is an artifact-free image, and the second discriminator is used for discriminating whether the input image to be discriminated is image data generated by the image generator;
the artifact removal model is obtained through training of a model training device; the model training apparatus includes:
The model generation image determining module is used for inputting a sample processing image into the image generator to obtain a model generation image, wherein the sample processing image comprises an artifact-free image and an artifact-containing image containing image artifact information;
The model loss calculation module is configured to calculate a model generation loss of the image generator from the model generation image corresponding to the artifact-free image and the artifact-free image itself, and to calculate a first discrimination loss of the first discriminator on the model generation image corresponding to the artifact-free image and a second discrimination loss of the second discriminator on the model generation image corresponding to the artifact-free image;
The model adjustment module is used for adjusting the image generator according to the model generation loss, the first discrimination loss and the second discrimination loss so as to obtain an artifact removal model;
Wherein said calculating a model total loss from said model generation loss, said first discrimination loss and said second discrimination loss comprises: calculating the model total loss by respectively weighting the model generation loss, the first discrimination loss and the second discrimination loss and then summing them; the model total loss used for adjusting the image generator is calculated as follows:

L_total(G, D_B, D_T) = α · L_DB + β · L_DT + γ · L_G

wherein L_total(G, D_B, D_T) is the model total loss of the image generator, L_DB is the first discrimination loss of the first discriminator, L_DT is the second discrimination loss of the second discriminator, L_G is the model generation loss of the image generator, A represents an artifact-free image, B represents an artifact-containing image, G represents the image generator, D_B represents the first discriminator used to discriminate whether an input image to be discriminated is an artifact-free image, D_T represents the second discriminator used to discriminate whether an input image to be discriminated is image data generated by the image generator, and α, β, γ are hyper-parameters to be adjusted, serving respectively as the weights of the first discrimination loss, the second discrimination loss and the model generation loss.
8. An electronic device, the electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image artifact removal method of any of claims 1-6.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions, the computer instructions for causing a processor to perform the image artifact removal method of any of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210135312.8A CN114445515B (en) | 2022-02-14 | 2022-02-14 | Image artifact removal method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210135312.8A CN114445515B (en) | 2022-02-14 | 2022-02-14 | Image artifact removal method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114445515A CN114445515A (en) | 2022-05-06 |
CN114445515B true CN114445515B (en) | 2024-10-18 |
Family
ID=81374378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210135312.8A Active CN114445515B (en) | 2022-02-14 | 2022-02-14 | Image artifact removal method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114445515B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107133959A (en) * | 2017-06-12 | 2017-09-05 | 上海交通大学 | A kind of quick vessel borders three-dimensional dividing method and system |
CN110378982A (en) * | 2019-07-23 | 2019-10-25 | 上海联影医疗科技有限公司 | Reconstruction image processing method, device, equipment and storage medium |
CN112132172A (en) * | 2020-08-04 | 2020-12-25 | 绍兴埃瓦科技有限公司 | Model training method, device, equipment and medium based on image processing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111667420B (en) * | 2020-05-21 | 2023-10-24 | 维沃移动通信有限公司 | Image processing method and device |
CN112446873B (en) * | 2020-12-11 | 2024-09-24 | 深圳高性能医疗器械国家研究院有限公司 | Method for removing image artifacts |
CN113870178A (en) * | 2021-08-13 | 2021-12-31 | 首都医科大学附属北京安贞医院 | Plaque artifact correction and component analysis method and device based on artificial intelligence |
2022
- 2022-02-14 CN CN202210135312.8A patent/CN114445515B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN114445515A (en) | 2022-05-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||