CN115439361B - Underwater image enhancement method based on a self-adversarial generative adversarial network - Google Patents
Underwater image enhancement method based on a self-adversarial generative adversarial network
- Publication number
- CN115439361B CN115439361B CN202211072112.9A CN202211072112A CN115439361B CN 115439361 B CN115439361 B CN 115439361B CN 202211072112 A CN202211072112 A CN 202211072112A CN 115439361 B CN115439361 B CN 115439361B
- Authority
- CN
- China
- Prior art keywords
- image
- adversarial
- self
- discriminator
- generator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses an underwater image enhancement method based on a self-adversarial generative adversarial network, which improves underwater image quality by means of a self-adversarial mode and a dual-input discriminator. The self-adversarial mode adds a new constraint to the enhancement process: the generator is constrained so that the second generated image is superior to the first generated image. The dual-input discriminator strengthens the guidance the discriminator gives the generator, further improving the quality of the enhanced image.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an underwater image enhancement method based on a self-adversarial generative adversarial network.
Background
Because underwater scenes lack reference images, the paired underwater images in underwater image enhancement datasets are usually obtained by synthetically degrading underwater images or by enhancing them with existing methods; when such pairs are used to train a model, the quality of the enhanced image can only approach the quality of the original underwater image or the quality achieved by those other enhancement methods. Training on unpaired data, that is, on underwater images together with natural images, raises the upper limit of post-enhancement image quality, but because the two sets of images come from different domains, it may produce unnatural enhancement results and degrade image quality.
Disclosure of Invention
The invention provides an underwater image enhancement method based on a self-adversarial generative adversarial network. The method is trained on an ordinary natural image quality database, requires no underwater reference images, and transfers the learned quality improvement to low-quality underwater images to achieve underwater image enhancement.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
The underwater image enhancement method based on the self-adversarial generative adversarial network adopts a self-adversarial mode, whose overall structure is shown in Fig. 1. The self-adversarial mode has no corresponding reference image as the final training target. An input image x passes through the generator G to obtain an output image G(x); G(x) is then input into the generator G again to obtain the output image G(G(x)). The two output images are simultaneously input into the discriminator D for discrimination, which constrains the generator so that the image generated the second time is better than the image generated the first time.
The overall optimization objective of the self-adversarial mode is:

$\min_G \max_D V(D, G) = \mathbb{E}_x[\log D(G(G(x)))] + \mathbb{E}_x[\log(1 - D(G(x)))]$ (4-1)

The self-adversarial mode first obtains the output image G(x) from the generator G, then inputs G(x) into the generator to obtain G(G(x)). At this point the generator G is fixed and the discriminator D is trained:

$L_D = \mathbb{E}_x[\log D(G(G(x)))] + \mathbb{E}_x[\log(1 - D(G(x)))]$ (4-2)

In Equation (4-2), the second output image G(G(x)) from the generator is expected to be discriminated as a positive sample by D, and the first output image G(x) as a negative sample. The generator G is then trained:

$L_G = \mathbb{E}_x[\log D(G(x))] + \mathbb{E}_x[\log(1 - D(G(G(x))))]$ (4-3)

In Equation (4-3), the first output image G(x) is expected to be judged superior to the second output image G(G(x)), which forms an adversarial game between the generator and the discriminator. Unlike ordinary adversarial training, however, this process has no set of real positive samples to oppose the negative samples; the generator's outputs are in an adversarial state with themselves.
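For concreteness, the following is a minimal PyTorch sketch of how the objectives (4-1) to (4-3) can be written with the standard binary cross-entropy GAN loss. It treats the discriminator as scoring a single image for clarity; the dual-input comparison form described later simply takes both images at once. The helper names (bce, d_step_loss, g_step_loss) are illustrative, not from the patent.

```python
import torch
import torch.nn.functional as F

def bce(logits, target_is_real):
    # binary cross-entropy against an all-ones or all-zeros target
    target = torch.ones_like(logits) if target_is_real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

def d_step_loss(D_s, g1, g2):
    # Eq. (4-2): with G fixed, the discriminator learns to judge the
    # second-pass image g2 = G(G(x)) positive and the first-pass image
    # g1 = G(x) negative; detach() keeps generator gradients out.
    return bce(D_s(g2.detach()), True) + bce(D_s(g1.detach()), False)

def g_step_loss(D_s, g1, g2):
    # Eq. (4-3): with the discriminator fixed, the generator tries to make
    # the first-pass image be judged superior to the second-pass image,
    # so its own two outputs are in an adversarial state with each other.
    return bce(D_s(g1), True) + bce(D_s(g2), False)
```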
Underwater image enhancement with a generative adversarial network based on the self-adversarial mode:
The proposed method consists of one generator and two discriminators; the overall flow is shown in Fig. 2, where x is the low-quality image, y is the high-quality image, G is the generator, D is the discriminator for adversarial training, and D_s is the self-adversarial-mode discriminator. Both discriminators use a dual-input image quality comparison structure rather than the traditional single-input binary classifier that judges whether an image is real or fake, which gives a better image quality discrimination effect. G(x) is the image after the first enhancement, and G(G(x)) is the image after the second enhancement. The discriminator D_s together with the generator G forms the self-adversarial mode. The original image x is enhanced by the generator G, and the result is input into the discriminator D for discrimination against the high-quality image y; it is then input into the generator G again for a second enhancement, and the self-adversarial discriminator D_s discriminates between the two enhanced images. The discriminator D_s constrains the quality improvement of the generation process, and images of better quality are obtained through repeated iterative enhancement. In addition, the discriminator D has a Patch-GAN structure while the discriminator D_s has a binary classification discriminator structure; the combination of the two allows discrimination from both global and detail perspectives.
Compared with existing underwater image processing technology, the technical scheme of the invention has the following beneficial effects:
the method is based on a self-countermeasure mode and a dual-input type discriminator to realize the improvement of the quality of the underwater image, a new constraint is added to the enhancement process through the self-countermeasure mode, namely, the constraint generator enables the image generated for the second time to be better than the image generated for the first time, the guiding function of the discriminator on the generator is enhanced through the dual-input type discriminator, and the quality of the enhanced image is further improved.
Meanwhile, unpaired natural images are used for training, and the learned quality improvement is then transferred to underwater images. This alleviates, to a certain extent, the shortage of paired underwater images, and also avoids the problems that arise when training on paired underwater images produced by the manual methods used in existing underwater image enhancement datasets. Experiments show that the method effectively improves underwater image quality, and the generated images are more visually pleasing.
Drawings
Fig. 1 is a structure diagram of the self-adversarial mode.
Fig. 2 is a flow chart of the method.
Fig. 3 is a structure diagram of the generator.
Fig. 4 is a structure diagram of the discriminator D.
Fig. 5 is a structure diagram of the discriminator D_s.
Fig. 6 shows some distorted images from the KADID-10k dataset.
Fig. 7 shows some high-quality images from the KonIQ-10k dataset.
Fig. 8 compares the enhancement results of the method of the present invention with the enhancement results of the 8 methods included in the U45 dataset.
Fig. 9 compares the method of the present invention with 5 other more advanced underwater image enhancement methods on the U45 dataset.
Detailed Description
The invention is further described with reference to the accompanying drawings:
the underwater image enhancement method based on the self-countermeasure generation countermeasure network is characterized by comprising the following steps of: the method adopts a self-countermeasure mode, and the self-countermeasure mode is structured that an input image x passes through a generator G to obtain an output imageOutput image +.>Input again into generator G to get the output image +.>Two output images are input into a discriminator D at the same time to discriminate, and a constraint generator D makes the output images +.>Is better than the output image +.>;
The overall optimization objective for the self-countermeasure mode is:
(4-1)
the self-countermeasure mode first obtains an output image from the generator G: then will->Input to the generator to get G (x)), at this time, the fixed generator G trains the arbiter D:
(4-2)
in equation 4-2, the second output image G (G (x)) from the generator is desirably discriminated as a positive sample by D, and the first output imageDesirably, the negative sample is determined, and the training generator G:
(4-3)
in equation 4-3, the first output image is desiredIs superior to the second output image G (G (x)) to form a countermeasure between the generator and the arbiter, but unlike countermeasure training, there is no real set of points in the countermeasure processThe positive and negative samples form a challenge, but the negative sample is in a challenge state with itself.
The self-adversarial mode is applied to the underwater image enhancement method specifically as follows:
S1, the original input image x is enhanced by the generator G to obtain the output image G(x), which is input into the discriminator D for discrimination against the high-quality image y; the discriminator D is updated and then fixed, and the generator G is updated;
S2, the output image G(x) is input into the generator G again for enhancement to obtain the output image G(G(x)); the output image G(G(x)) and the high-quality image y are input into the discriminator D for discrimination, and the discriminator D is updated;
S3, the self-adversarial discriminator D_s discriminates between the enhanced output image G(G(x)) and the output image G(x); the discriminator D_s constrains the quality improvement of the generation process, and images of better quality are obtained through repeated iterative enhancement.
The generator G is an encoder-decoder network, whose overall structure is shown in Fig. 3. The encoding part consists of cascaded convolutions with 3×3 kernels, and a residual block is added after each convolution to increase network depth and feature extraction capability. Reflection padding is used before each convolution so that the feature map is exactly halved in size by each downsampling. After each convolution layer, a Batch-Norm layer and a LeakyReLU activation function are used to increase the robustness and nonlinearity of the network. The decoding part consists of several cascaded upsampling stages; after each upsampling, a residual block is added to enhance the image reconstruction capability of the decoder. Skip connections between the encoder and decoder supplement the image information lost during downsampling. Finally, a Tanh activation function is adopted to avoid the gradient vanishing problem.
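The following PyTorch sketch illustrates one way to assemble such an encoder-decoder generator. The elements named in the text (3×3 convolutions, a residual block after each convolution, reflection padding, Batch-Norm plus LeakyReLU, an upsampling decoder with residual blocks, skip connections, a final Tanh) are reproduced; the channel widths, the number of stages and the bilinear upsampling mode are assumptions, since the patent does not specify them. Bilinear interpolation is at least consistent with the interpolation-based upsampling mentioned in the evaluation section.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.BatchNorm2d(ch), nn.LeakyReLU(0.2, inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.BatchNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

class Down(nn.Module):
    # reflection padding + 3x3 stride-2 conv halves the feature map,
    # followed by Batch-Norm, LeakyReLU and a residual block
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(cin, cout, 3, stride=2),
            nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, inplace=True),
            ResidualBlock(cout))
    def forward(self, x):
        return self.body(x)

class Up(nn.Module):
    # upsampling stage with a residual block; the encoder skip connection
    # is concatenated before the convolution
    def __init__(self, cin, cout):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(cin, cout, 3),
            nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, inplace=True),
            ResidualBlock(cout))
    def forward(self, x, skip):
        return self.body(torch.cat([self.up(x), skip], dim=1))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.inc = nn.Sequential(nn.ReflectionPad2d(1), nn.Conv2d(3, 32, 3),
                                 nn.BatchNorm2d(32), nn.LeakyReLU(0.2, inplace=True))
        self.d1, self.d2, self.d3 = Down(32, 64), Down(64, 128), Down(128, 256)
        self.u1 = Up(256 + 128, 128)   # channel counts are assumptions
        self.u2 = Up(128 + 64, 64)
        self.u3 = Up(64 + 32, 32)
        self.out = nn.Sequential(nn.ReflectionPad2d(1), nn.Conv2d(32, 3, 3), nn.Tanh())
    def forward(self, x):
        e0 = self.inc(x)
        e1, e2, e3 = self.d1(e0), None, None
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        d = self.u1(e3, e2)
        d = self.u2(d, e1)
        d = self.u3(d, e0)
        return self.out(d)
```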
The discriminators implement the self-adversarial mode and supervised learning with a dual-discriminator design, whose structures are shown in Figs. 4 and 5. The discriminator D has a Patch-GAN structure, while the discriminator D_s is a traditional binary classification network; the structure of D is shown in Fig. 4 and that of D_s in Fig. 5. Both discriminators take two input images of size 512×512. The two inputs pass through two feature extraction modules with identical structure, each consisting of three convolution layers with 3×3 kernels and stride 2. After feature extraction, the two feature maps are concatenated and input into an Inception module containing 3 convolution layers of different sizes and 1 pooling layer; the convolution kernels are 5×5, 3×3 and 1×1, and the pooling layer is 3×3. The feature maps produced by these differently sized convolution and pooling layers are then concatenated and input into a Reduction module for downsampling. The Reduction module has two branches: one is a single convolution layer with a 3×3 kernel and stride 2; the other is a stack of three convolution layers whose kernel sizes are 1×1, 3×3 and 3×3 from top to bottom, with strides 1, 1 and 2 respectively. The downsampled feature maps from the two branches of the Reduction module are concatenated, so that the resulting feature map combines networks of different depths and carries richer feature information. Finally, two convolution layers with 3×3 kernels and stride 2 produce a 7×7 feature map. The discriminator D directly outputs the 7×7 feature map for discrimination, while the discriminator D_s passes the feature map through an adaptive average pooling layer followed by two fully connected layers.
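A sketch of the dual-input discriminator described above, again in PyTorch. The feature-extraction, Inception and Reduction modules follow the kernel sizes and strides given in the text; the channel counts, the use of max pooling in the Inception branch, and the hidden width of the fully connected head are assumptions. With 512×512 inputs, the spatial sizes work out to 512 → 64 → 32 → 15 → 7 when the last two convolutions use no padding, matching the 7×7 feature map in the text.

```python
import torch
import torch.nn as nn

def conv_bn_lrelu(cin, cout, k, s, p):
    return nn.Sequential(nn.Conv2d(cin, cout, k, s, p),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, inplace=True))

class FeatureExtractor(nn.Module):
    # three 3x3 stride-2 convolutions: 512 -> 256 -> 128 -> 64
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(conv_bn_lrelu(3, ch, 3, 2, 1),
                                  conv_bn_lrelu(ch, ch, 3, 2, 1),
                                  conv_bn_lrelu(ch, ch, 3, 2, 1))
    def forward(self, x):
        return self.body(x)

class Inception(nn.Module):
    # parallel 5x5 / 3x3 / 1x1 convolutions and a 3x3 pooling branch,
    # outputs concatenated along the channel axis
    def __init__(self, cin, ch):
        super().__init__()
        self.b5 = conv_bn_lrelu(cin, ch, 5, 1, 2)
        self.b3 = conv_bn_lrelu(cin, ch, 3, 1, 1)
        self.b1 = conv_bn_lrelu(cin, ch, 1, 1, 0)
        self.bp = nn.MaxPool2d(3, stride=1, padding=1)
    def forward(self, x):
        return torch.cat([self.b5(x), self.b3(x), self.b1(x), self.bp(x)], 1)

class Reduction(nn.Module):
    # branch A: one 3x3 stride-2 conv; branch B: 1x1 (s1) -> 3x3 (s1) -> 3x3 (s2)
    def __init__(self, cin, ch):
        super().__init__()
        self.a = conv_bn_lrelu(cin, ch, 3, 2, 1)
        self.b = nn.Sequential(conv_bn_lrelu(cin, ch, 1, 1, 0),
                               conv_bn_lrelu(ch, ch, 3, 1, 1),
                               conv_bn_lrelu(ch, ch, 3, 2, 1))
    def forward(self, x):
        return torch.cat([self.a(x), self.b(x)], 1)

class DualInputDiscriminator(nn.Module):
    # patch_gan=True gives D (7x7 patch-map output); patch_gan=False gives
    # D_s (adaptive average pooling + two fully connected layers)
    def __init__(self, patch_gan=True):
        super().__init__()
        self.patch_gan = patch_gan
        self.f_a, self.f_b = FeatureExtractor(), FeatureExtractor()
        self.inception = Inception(128, 64)    # 2x64 ch in -> 3*64 + 128 = 320 ch
        self.reduction = Reduction(320, 128)   # -> 256 ch, spatial 64 -> 32
        self.tail = nn.Sequential(             # 32 -> 15 -> 7 (no padding)
            conv_bn_lrelu(256, 128, 3, 2, 0),
            conv_bn_lrelu(128, 128, 3, 2, 0))
        self.patch_out = nn.Conv2d(128, 1, 1)  # 1-channel 7x7 patch logits
        self.fc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(128, 64), nn.LeakyReLU(0.2, inplace=True),
                                nn.Linear(64, 1))
    def forward(self, img_a, img_b):
        f = torch.cat([self.f_a(img_a), self.f_b(img_b)], 1)
        f = self.tail(self.reduction(self.inception(f)))
        return self.patch_out(f) if self.patch_gan else self.fc(f)
```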
The specific training method is as follows:
each parameter update comprises two steps. In the first step, a low-quality image x is input into a generator G to obtain an enhanced image G (x), then the obtained enhanced image G (x) and a high-quality image y are input into a discriminator D to be discriminated, while the parameters of the discriminator D are updated, and thereafter, the parameters of the discriminator D are fixed to update the parameters of the generator G. In the second step, the enhanced image G (x) obtained in the first step is input into a generator G to obtain a re-enhanced image G (x)), then the second enhanced image G (x) and a high-quality image y are input into a discriminator D to discriminate, and parameters of the discriminator D are updated, and at the same time, the second enhanced image G (x) and the first enhanced image G (x) are input into a self-countermeasure discriminatorAnd updating the self-countermeasure discriminator +.>After which the parameters of the generator G are updated again.
The model is trained with the Adam optimizer, the learning rate is set to 0.001, the batch size is set to 2, and training runs for 50 epochs in total. Training is performed with the PyTorch framework on a computer equipped with an i7 processor, an Nvidia Titan XP GPU and 64 GB of memory.
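A sketch of one training iteration wiring the pieces together, under the stated settings (Adam, learning rate 0.001). It assumes the Generator and DualInputDiscriminator sketches above, the bce helper from the self-adversarial sketch, and the content_loss helper sketched after Equation (4-7) below. The convention that D(a, b) is the logit that a is of higher quality than b is ours; the patent does not fix the input order.

```python
import torch

G   = Generator()
D   = DualInputDiscriminator(patch_gan=True)    # adversarial discriminator D
D_s = DualInputDiscriminator(patch_gan=False)   # self-adversarial discriminator D_s

opt_g  = torch.optim.Adam(G.parameters(),   lr=1e-3)   # Adam, lr 0.001,
opt_d  = torch.optim.Adam(D.parameters(),   lr=1e-3)   # batch size 2,
opt_ds = torch.optim.Adam(D_s.parameters(), lr=1e-3)   # 50 epochs (from the text)

def train_step(x, y):
    # ---- step 1: first enhancement; update D, then G with D fixed ----
    g1 = G(x)
    opt_d.zero_grad()
    (bce(D(y, g1.detach()), True) + bce(D(g1.detach(), y), False)).backward()
    opt_d.step()
    opt_g.zero_grad()
    (bce(D(g1, y), True) + content_loss(g1, x)).backward()
    opt_g.step()

    # ---- step 2: second enhancement; update D and D_s, then G again ----
    g1 = G(x)          # recomputed with the freshly updated generator
    g2 = G(g1)
    opt_d.zero_grad()
    (bce(D(y, g2.detach()), True) + bce(D(g2.detach(), y), False)).backward()
    opt_d.step()
    opt_ds.zero_grad()
    (bce(D_s(g2.detach(), g1.detach()), True)       # Eq. (4-2): g2 positive
     + bce(D_s(g1.detach(), g2.detach()), False)).backward()
    opt_ds.step()
    opt_g.zero_grad()
    (bce(D(g2, y), True)            # pull the second-pass image toward y
     + bce(D_s(g1, g2), True)       # Eq. (4-3): first-pass image "wins"
     + content_loss(g2, x)).backward()
    opt_g.step()
```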
The loss functions adopted in training are as follows:
The entire training process involves 3 adversarial losses, 2 perceptual losses and 2 L1 norm losses. In the first step, the adversarial loss constrains the optimization process of the generator and the discriminator, and a weighted sum of the perceptual loss and the L1 norm loss serves as the content loss to ensure that the image retains its content information during enhancement. The total loss function is shown in Equation (4-4).
$L = L_{adv} + \lambda_1 L_{1} + \lambda_2 L_{per}$ (4-4)

where $\lambda_1$ and $\lambda_2$ are the weights of the L1 norm loss and the perceptual loss respectively; determined experimentally, both are set to 0.1 here. $L_{adv}$, $L_{1}$ and $L_{per}$ are given by Equations (4-5) to (4-7):

$L_{adv} = \mathbb{E}_y[\log D(y)] + \mathbb{E}_x[\log(1 - D(G(x)))]$ (4-5)

$L_{1} = \mathbb{E}_x[\lVert G(x) - x \rVert_1]$ (4-6)

$L_{per} = \mathbb{E}_x[\lVert \phi_j(G(x)) - \phi_j(x) \rVert_1]$ (4-7)

where D is the discriminator, G is the generator, x is the input image, G(x) is the enhanced image obtained in the first step, and $\phi_j$ is the j-th convolutional layer of a VGG-19 network pre-trained on the ImageNet dataset.
In the second step, 2 different adversarial losses are used: one, similar to the adversarial loss in the first step, guides the generator to generate images whose quality is closer to that of the reference images; the other serves the self-adversarial mode, achieving a further improvement of the enhancement results by discriminating between the two enhanced images obtained in the two steps. At the same time, to preserve image content, a weighted sum of the perceptual loss and the L1 norm loss acts as a constraint. The total loss function is shown in Equation (4-8).
$L = L_{adv2} + L_{self} + \lambda_1 L_{1} + \lambda_2 L_{per}$ (4-8)

where $\lambda_1$ and $\lambda_2$ are again the weights of the L1 norm loss and the perceptual loss, kept the same as in the first step. $L_{adv2}$, $L_{self}$, $L_{1}$ and $L_{per}$ are given by Equations (4-9) to (4-12):

$L_{adv2} = \mathbb{E}_y[\log D(y)] + \mathbb{E}_x[\log(1 - D(G(G(x))))]$ (4-9)

$L_{self} = \mathbb{E}_x[\log D_s(G(G(x)))] + \mathbb{E}_x[\log(1 - D_s(G(x)))]$ (4-10)

$L_{1} = \mathbb{E}_x[\lVert G(G(x)) - x \rVert_1]$ (4-11)

$L_{per} = \mathbb{E}_x[\lVert \phi_j(G(G(x))) - \phi_j(x) \rVert_1]$ (4-12)

where $D_s$ is the discriminator of the self-adversarial mode and G(G(x)) is the enhanced image obtained in the second step.
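Building on the earlier sketches (bce, content_loss), the step-two generator loss of Equation (4-8) can be assembled as follows; the self-adversarial term follows the generator-side convention of Equation (4-3).

```python
def g_loss_step2(D, D_s, g1, g2, x, y):
    # Eq. (4-8) sketch: an adversarial term against the high-quality image y
    # (Eq. 4-9), the self-adversarial term between the two enhanced images
    # (Eq. 4-10, generator side), and the weighted L1 + perceptual content
    # terms (Eqs. 4-11, 4-12) computed on the second-pass image G(G(x)).
    return (bce(D(g2, y), True)
            + bce(D_s(g1, g2), True)
            + content_loss(g2, x))
```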
Selecting a data set:
In view of the degradation problems in underwater images, such as low contrast, color cast, low illumination and blurring, distorted images and high-quality images whose distortion types resemble underwater degradation are selected from natural image datasets to train the proposed underwater image enhancement method.
KADID-10k is a natural image quality evaluation dataset containing 81 reference images, each subjected to 25 distortion types at 5 severity levels, for a total of 10125 distorted images. From the distorted images with distortion level 5 in the KADID-10k dataset, those whose distortion types also occur in underwater images, including color distortion, noise and low contrast, are selected, giving 1620 distorted images in total; some of them are shown in Fig. 6.
KonIQ-10k is a large IQA dataset containing 10073 non-artificially distorted natural images whose quality is scored by crowd voting. Since the KADID-10k dataset contains only 81 reference images, far from enough for training the model, 1539 high-quality images from the KonIQ-10k dataset were chosen to supplement the high-quality image set; some of them are shown in Fig. 7.
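A hypothetical sketch of the KADID-10k filtering step. KADID-10k files are commonly named IXX_YY_ZZ.png (reference image XX, distortion type YY, level ZZ); the level-5 filter follows the text, while the distortion-type codes below are placeholders, since the patent does not list the exact type numbers.

```python
from pathlib import Path

# placeholder codes standing in for the color-distortion, noise and
# low-contrast distortion types mentioned above
UNDERWATER_LIKE_TYPES = {"07", "08", "16", "17"}

def select_low_quality(kadid_dir):
    files = []
    for f in Path(kadid_dir).glob("I*_*_05.png"):   # level-5 distortions only
        dist_type = f.stem.split("_")[1]
        if dist_type in UNDERWATER_LIKE_TYPES:
            files.append(f)
    return files
```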
The 45 low-quality underwater images of the U45 dataset are used for testing. The enhancement results of the method are compared with those of the 8 underwater image enhancement methods included in the U45 dataset, and the method is also compared with several other state-of-the-art underwater image enhancement methods:
comparison of experimental results on the U45 dataset:
the U45 real underwater image data set is a data set containing 45 underwater images with different chromatic aberration, low-contrast underwater images and foggy underwater images, and the result of the enhancement of the underwater images by 8 enhancement methods. The U45 dataset was used as the test set and the enhancement results of the method of the invention were compared to all enhancement results contained therein as shown in fig. 8.
In Fig. 8, which shows the enhancement results of the method of the present invention alongside the enhancement results of the 8 methods in the U45 dataset, the left-most column (a) contains the original images, and the remaining columns are, from left to right, (b) CycleGAN, (c) FE, (d) DewaterNet, (e) RB, (f) RED, (g) UDCP, (h) UIBLA, (i) WSCT and (j) the method of the present invention.
From the image comparison:
(b) CycleGAN achieves correction of the color cast, but the enhancement effect is not good enough, and an obvious checkerboard texture appears in the enhancement results, as shown in Fig. 8(b).
(c) The enhancement of FE introduces a pronounced red color cast, as shown in Fig. 8(c).
(d) The enhancement of DewaterNet makes the overall color darker, as shown in Fig. 8(d).
(e) Some darker images also appear in the RB enhancement results, as shown in Fig. 8(e).
(f) RED does not correct the color sufficiently, as shown in Fig. 8(f).
(g) UDCP, (h) UIBLA and (i) WSCT cannot completely correct the color cast, and in some enhanced results the color cast becomes even more serious, as shown in Fig. 8(g)-(i).
The method of the invention corrects the various color cast problems well, and the enhanced images have a better visual effect, as shown in Fig. 8(j).
To further demonstrate the performance of the method of the present invention, it is also compared with 5 other more advanced underwater image enhancement methods on the U45 dataset, as shown in Fig. 9.
The method proposed by Yang et al. does not fully correct the color of greenish images, as shown in Fig. 9(b).
Deep SESR does not fully achieve color correction and introduces a red color cast in some images, as shown in Fig. 9(c).
UWCNN introduces a more severe red color cast on brighter images, as shown in Fig. 9(d).
The color correction of greenish underwater images by FUnIE-GAN leaves the images yellowish, as shown in Fig. 9(e).
The images enhanced by HybridDetectionGAN deviate from the original colors, and an obvious checkerboard texture appears, as shown in Fig. 9(f).
The method of the invention achieves color correction while preserving the original colors of the image, and the visual effect is more pleasing, as shown in Fig. 9(g).
Two common underwater image quality evaluation metrics, UCIQE and UIQM, are used to evaluate the proposed method objectively; for both, a larger value indicates better underwater image quality. The evaluation results are shown in the table above, where the top three of each metric are marked in bold. The table shows that the method of the invention reaches third place on UCIQE, but its UIQM results are less ideal. This is because the method uses interpolation in the upsampling process to keep the output image the same size as the input image, which lowers its score on the sharpness component of the underwater image metric, so the overall evaluation score is not high enough.
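For reference, a sketch of the UCIQE metric in its common formulation (a weighted sum of chroma standard deviation, luminance contrast and mean saturation in CIELab). The coefficients follow the original UCIQE paper; the normalization and the saturation variant below are implementation assumptions, so scores are only comparable within one implementation.

```python
import numpy as np
from skimage import color

def uciqe(rgb):
    # rgb: float array in [0, 1]; converted to CIELab and roughly normalized
    lab = color.rgb2lab(rgb)
    L = lab[..., 0] / 100.0
    a = lab[..., 1] / 128.0
    b = lab[..., 2] / 128.0
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                       # chroma standard deviation
    lo, hi = np.percentile(L, [1, 99])
    con_l = hi - lo                              # top/bottom-1% luminance contrast
    mu_s = (chroma / np.sqrt(chroma ** 2 + L ** 2 + 1e-12)).mean()  # one common variant
    return 0.4680 * sigma_c + 0.2745 * con_l + 0.2576 * mu_s
```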
The method improves underwater image quality based on a self-adversarial mode and a dual-input discriminator. The self-adversarial mode adds a new constraint to the enhancement process, namely the generator is constrained so that the second generated image is better than the first; the dual-input discriminator strengthens the guidance the discriminator gives the generator and further improves the quality of the enhanced image. Meanwhile, unpaired natural images are used for training and the learned quality improvement is transferred to underwater images, which alleviates the shortage of paired underwater images to a certain extent and avoids the problems of training on paired underwater images produced by the manual methods used in existing underwater image enhancement datasets. Experiments show that the method effectively improves the quality of underwater images, and the generated images are more visually pleasing.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. An underwater image enhancement method based on a self-adversarial generative adversarial network, characterized in that: the method adopts a self-adversarial mode structured as follows: an input image x passes through a generator G to obtain an output image G(x); the output image G(x) is input into the generator G again to obtain an output image G(G(x)); the two output images are simultaneously input into a discriminator D for discrimination, and the generator G is constrained so that the output image G(G(x)) is better than the output image G(x); the self-adversarial mode is applied to the underwater image enhancement method as follows:
S1, the original input image x is enhanced by the generator G to obtain the output image G(x), which is input into the discriminator D for discrimination against a high-quality image y; the discriminator D is updated and then fixed, and the generator G is updated;
S2, the output image G(x) is input into the generator G again for enhancement to obtain the output image G(G(x)); the output image G(G(x)) and the high-quality image y are input into the discriminator D for discrimination, and the discriminator D is updated;
S3, a self-adversarial discriminator D_s discriminates between the enhanced output image G(G(x)) and the output image G(x); the discriminator D_s constrains the quality improvement of the generation process, and images of better quality are obtained through repeated iterative enhancement;
the overall optimization objective of the self-adversarial mode is:
$\min_G \max_D V(D, G) = \mathbb{E}_x[\log D(G(G(x)))] + \mathbb{E}_x[\log(1 - D(G(x)))]$ (4-1)
the self-adversarial mode first obtains the output image G(x) from the generator G, then inputs G(x) into the generator G to obtain G(G(x)); at this point the generator G is fixed and the discriminator D is trained:
$L_D = \mathbb{E}_x[\log D(G(G(x)))] + \mathbb{E}_x[\log(1 - D(G(x)))]$ (4-2)
in Equation (4-2), the second output image G(G(x)) from the generator G is expected to be discriminated as a positive sample by the discriminator D, and the first output image G(x) as a negative sample; the generator G is then trained:
$L_G = \mathbb{E}_x[\log D(G(x))] + \mathbb{E}_x[\log(1 - D(G(G(x))))]$ (4-3)
in Equation (4-3), it is expected that the first output image G(x) is judged superior to the second output image G(G(x)), thereby forming an adversarial game between the generator and the discriminator.
2. The underwater image enhancement method based on a self-adversarial generative adversarial network according to claim 1, wherein: the generator G is an encoder-decoder network; the encoding part consists of cascaded convolutions with 3×3 kernels, and a residual block is added after each convolution to increase the network depth and the feature extraction capability; reflection padding is used before each convolution so that the feature map is halved in size by each downsampling; after each convolution layer, a Batch-Norm layer and a LeakyReLU activation function are used to increase the robustness and nonlinearity of the network; the decoding part consists of several cascaded upsampling stages; after each upsampling, a residual block is added to enhance the image reconstruction capability of the decoder; skip connections are adopted between encoding and decoding, and a Tanh activation function is adopted to avoid the gradient vanishing problem.
3. The underwater image enhancement method based on a self-adversarial generative adversarial network according to claim 1, wherein: the discriminator D has a Patch-GAN structure, and the discriminator D_s has a binary classification network structure.
4. The underwater image enhancement method based on a self-adversarial generative adversarial network according to claim 1, wherein: in step S1, when the output image G(x) is obtained and input into the discriminator D for discrimination against the high-quality image y, the parameters of the discriminator D are updated at the same time, after which the parameters of the discriminator D are fixed to update the parameters of the generator G.
5. The underwater image enhancement method based on a self-adversarial generative adversarial network according to claim 1, wherein: an adversarial loss is used to constrain the optimization process of the generator and the discriminator, and a weighted sum of a perceptual loss and an L1 norm loss serves as the content loss to ensure that the image retains its content information during enhancement.
6. The underwater image enhancement method based on a self-adversarial generative adversarial network according to claim 1 or 4, wherein: in S2 and S3, the second enhanced image G(G(x)) and the high-quality image y are input into the discriminator D for discrimination and the parameters of the discriminator D are updated; at the same time, the second enhanced image G(G(x)) and the first enhanced image G(x) are input into the self-adversarial discriminator D_s and the parameters of the discriminator D_s are updated, after which the parameters of the generator G are updated again.
7. The underwater image enhancement method based on a self-adversarial generative adversarial network according to claim 6, wherein: an adversarial loss in the self-adversarial mode, which discriminates between the two enhanced images obtained in the two steps, achieves a further improvement of the enhancement results; at the same time, to preserve image content, a weighted sum of a perceptual loss and an L1 norm loss acts as a constraint.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211072112.9A CN115439361B (en) | 2022-09-02 | 2022-09-02 | Underwater image enhancement method based on self-countermeasure generation countermeasure network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211072112.9A CN115439361B (en) | 2022-09-02 | 2022-09-02 | Underwater image enhancement method based on self-countermeasure generation countermeasure network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115439361A CN115439361A (en) | 2022-12-06 |
CN115439361B true CN115439361B (en) | 2024-02-20 |
Family
ID=84246088
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211072112.9A Active CN115439361B (en) | 2022-09-02 | 2022-09-02 | Underwater image enhancement method based on self-countermeasure generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115439361B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118469888B (en) * | 2024-07-15 | 2024-09-17 | 中国空气动力研究与发展中心高速空气动力研究所 | Temperature-sensitive paint image contrast enhancement method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111833268A (en) * | 2020-07-10 | 2020-10-27 | 中国海洋大学 | Underwater image enhancement method for generating countermeasure network based on conditions |
CN112541865A (en) * | 2020-10-15 | 2021-03-23 | 天津大学 | Underwater image enhancement method based on generation countermeasure network |
CN113362299A (en) * | 2021-06-03 | 2021-09-07 | 南通大学 | X-ray security check image detection method based on improved YOLOv4 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11024009B2 (en) * | 2016-09-15 | 2021-06-01 | Twitter, Inc. | Super resolution using a generative adversarial network |
-
2022
- 2022-09-02 CN CN202211072112.9A patent/CN115439361B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111833268A (en) * | 2020-07-10 | 2020-10-27 | 中国海洋大学 | Underwater image enhancement method for generating countermeasure network based on conditions |
CN112541865A (en) * | 2020-10-15 | 2021-03-23 | 天津大学 | Underwater image enhancement method based on generation countermeasure network |
CN113362299A (en) * | 2021-06-03 | 2021-09-07 | 南通大学 | X-ray security check image detection method based on improved YOLOv4 |
Non-Patent Citations (1)
Title |
---|
Underwater image enhancement with a multi-input fusion adversarial network; Lin Sen; Liu Shiben; Tang Yandong; Infrared and Laser Engineering (05); full text *
Also Published As
Publication number | Publication date |
---|---|
CN115439361A (en) | 2022-12-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||