
CN112884673A - Method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function - Google Patents

Method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function

Info

Publication number
CN112884673A
Authority
CN
China
Prior art keywords
mural
generator
loss
discriminator
blocks
Prior art date
Legal status
Pending
Application number
CN202110267441.8A
Other languages
Chinese (zh)
Inventor
吴萌
任义
王姣
高怡宁
Current Assignee
Xian University of Architecture and Technology
Original Assignee
Xian University of Architecture and Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Architecture and Technology filed Critical Xian University of Architecture and Technology
Priority to CN202110267441.8A priority Critical patent/CN112884673A/en
Publication of CN112884673A publication Critical patent/CN112884673A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for reconstructing the missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function, comprising the following steps: take any mural block image I_r from the mural set and down-sample it to {I_r0, I_r1, I_r2, I_r3, I_r4}, where I_m4 is the masked mural down-sampled 4 times; input I_m4 to the bottom layer of the generator, where generator G_4 produces an outward-extended mural G_4(I_m4); input G_4(I_m4) and I_r4 to discriminator D_4 for comparison and discrimination, and update the weight parameters of this layer according to the loss function; obtain the weight parameters of each layer of the generator in the same way; train the generator with the training set and the discriminator until the generator and the discriminator reach Nash equilibrium, and then reconstruct the missing information between the tomb-chamber mural blocks with the trained generator.

Description

Method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function
Technical Field
The invention belongs to the field of digital image restoration and relates to a method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function.
Background
Tomb-chamber murals are doubly important for both the ancient history and the art history of China: they faithfully reflect the living conditions, social fashions and artistic tastes of the ancient Chinese, and at the same time record religious beliefs, funeral culture and other aspects of life. Unlike palace murals and cave-temple murals, tomb-chamber murals were buried underground for thousands of years in a completely closed space before excavation, so the surviving information is highly reliable. Tomb-chamber murals are also large in extent; taking the polo mural from the tomb of Crown Prince Zhanghuai as an example, the painting is 7 meters long and 3 meters high. Under the archaeological excavation conditions of the 1970s and 1980s, such murals could only be detached and removed in blocks. As a result, a large number of precious tomb-chamber murals are preserved in blocks, and information between blocks was lost during the detachment process, which affects the continuity and integrity of the whole mural.
In conventional computer-aided digital restoration of ancient murals, digital image inpainting completes the reconstruction of mural information by locating the edge of the missing region, diffusing the remaining image information inward from that edge and filling it into the missing area, filling the information hole layer by layer like peeling an onion. This is done in two main technical directions. One is the PDE model based on pixel diffusion, which computes high-order partial derivatives at the filling front and completes the information with different diffusion equations; its drawback is that blurring appears when a large amount of picture information is missing. The other is the exemplar-based texture synthesis model, which completes the information in a certain filling order by comparing the correlation between the remaining samples and the filling front; its drawback is that filling with many similar samples can create a mosaic effect. Both approaches share the same limitations: when the missing region is large they cannot restore the tomb-chamber mural well, they search for usable information only in the existing residual content, and for block murals they can only fill holes inward, so they cannot meet the need to reconstruct extensional information between mural blocks.
Disclosure of Invention
The present invention aims to overcome the above shortcomings of the prior art and to provide a method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function, which can reconstruct the extensional information between mural blocks.
To achieve the above object, the method of the present invention for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function comprises the following steps:
1) acquiring images of the individual blocks of a mural excavated in blocks from a tomb passage to obtain mural block images, adding masks to the mural block images, applying min-max normalization, constructing an image set from the normalized mural block images, and using the image set as the training set with which the SinGAN generates the extensional information between the mural blocks;
2) constructing a SinGAN-based generative network containing generators and discriminators at 5 scales, and adding the reconstruction loss L_rec, the pixel reconstruction loss and the texture loss L_texture as loss functions;
3) taking any mural block image I_r from the mural set and down-sampling it to {I_r0, I_r1, I_r2, I_r3, I_r4}, where I_m4 is the masked mural down-sampled 4 times;
4) inputting I_m4 to the bottom layer of the generator, where generator G_4 produces an outward-extended mural G_4(I_m4); inputting G_4(I_m4) and I_r4 to discriminator D_4 for comparison and discrimination, and updating the weight parameters of this layer according to the loss function;
5) repeating step 4) to obtain the weight parameters of each layer of the generator;
6) training the generator with the training set and the discriminator until the generator and the discriminator reach Nash equilibrium, and then reconstructing the missing information between the tomb-chamber mural blocks with the trained generator.
The pixel reconstruction loss is used to capture the overall pixel information around the mural generation area:

L_2 = || M ⊙ (G_n(I_mn) - I_rn) ||_2    (2)

where M is the binary mask, I_rn is the down-sampled real mural, G_n(I_mn) is the mural generated by the network, and ⊙ denotes element-wise multiplication over the mural pixels;
the texture loss is designed with Gram matrices:

L_texture = || Gram(l_n) - Gram(g(I_n)) ||_2    (3)

where Gram(·) denotes the Gram matrix, l_n is the output of the real mural image I_rn through the nth layer of the multi-scale generator, I_n is the mural generated by the multi-scale generator G_n, and g denotes the generating network;
the reconstruction loss is designed as:

L_rec = || G_n(noise + I_mn) - I_rn ||_2    (4)

where G_n is the nth layer of the multi-scale generator, (noise + I_mn) is the superposition of the noise at each layer of the multi-scale generator with the mural output by the previous layer, and I_rn is the real mural.
The reconstruction loss L_rec, the pixel reconstruction loss and the texture loss L_texture are combined into the following loss functions:

L_2+texture = L_2 + L_texture    (5)
L_rec = || G_n(noise + I_mn) - I_rn ||    (6)
L_G = χ·L_rec + L_2+texture    (7)

where L_2+texture is the sum of the pixel reconstruction loss and the texture loss of the mural, L_D (equation (8)) is the loss function of the discriminator and is used to update the discriminator weights, χ is a tunable parameter, and L_2+texture is adjusted by adjusting the loss function of the generator.
The structure of the multi-scale generators and discriminators is designed as follows. Let {I_r0, I_r1, I_r2, ..., I_rn} be a group of mural block images obtained by down-sampling the real mural, and let I_mn be the masked mural block image down-sampled n times. I_mn is input to the lowest layer of the multi-scale generator; generator G_n generates the mural G_n(I_mn), which is then input to discriminator D_n together with I_rn for comparison and discrimination, and the weight parameters of this layer are updated. G_n(I_mn) is then up-sampled and noise of the same size is added to form the input of generator G_(n-1); this cycle continues until the original image size is reached.
The SSIM is used to measure the fidelity and similarity between two murals:

SSIM(I_r, I_m) = ((2·μ_Ir·μ_Im + b_1)(2·σ_IrIm + b_2)) / ((μ_Ir² + μ_Im² + b_1)(σ_Ir² + σ_Im² + b_2))

where μ_Ir is the mean of the original mural I_r, μ_Im is the mean of the extended mural I_m, σ_Ir² is the variance of the original mural I_r, σ_Im² is the variance of the extended mural I_m, σ_IrIm is the covariance of I_r and I_m, and b_1 and b_2 are small constants that keep the SSIM denominator from being zero. The SSIM lies in [0, 1]; the larger the SSIM, the more similar the content of the two images.
The invention has the following beneficial effects:
the invention relates to a reconstruction method for missing information among graveyard mural blocks of an improved loss function SinGAN, which is characterized in that during specific operation, the SinGAN after the loss function is improved is adopted to generate extensional information at two sides of the mural blocks, the reconstruction of the missing part among the mural blocks excavated out of earth by an underground graveyard is realized, and compared with the traditional digital image restoration method for filling the information of the graveyard murals by adopting an image inpainting method, the reconstruction method is more suitable for generating the picture information by the outward extensibility of the mural blocks.
Drawings
FIG. 1 is a flow chart of generation and discrimination in the present invention;
FIG. 2 is a diagram of a mural generated after training in the present invention;
FIG. 3 is a mural image of the present invention;
FIG. 4 is a diagram of the SSIM value calculation results in the present invention;
FIG. 5 is a diagram showing the mural generation results of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
Referring to FIG. 1, the method of the present invention for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function comprises the following steps:
1) High-definition images of the tomb-chamber murals are collected, masks are added to the collected images, and min-max normalization is applied; 8000 images of size 256 × 256 are then constructed from the normalized murals, and this mural set is used as the training set. The high-definition tomb-chamber murals are collected by shot segmentation, and the normalized murals are selected so as to maximize the removal of redundant information while keeping the effective information of the shot segmentation.
The preprocessing is as follows: given a training mural I_r, the mural is normalized so that I_r ∈ [0,1]^(256×256×3). The formula for min-max normalization is:

x' = (x - Min) / (Max - Min)    (1)

where x is the value of a sample point, Min is the minimum of the sample data, and Max is the maximum of the sample data.
Subtracting the minimum from each sample value and dividing by the difference between the maximum and the minimum rescales the data and improves the behaviour of the neural network.
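As an illustration of this preprocessing step, the following minimal NumPy sketch applies the min-max formula of equation (1) to a mural image array; the function name, array layout and dtype handling are illustrative assumptions, not details taken from the patent.

    import numpy as np

    def min_max_normalize(image: np.ndarray) -> np.ndarray:
        """Scale the pixel values of a mural image into [0, 1] following equation (1)."""
        x_min = float(image.min())
        x_max = float(image.max())
        # Guard against a constant image, where Max - Min would be zero.
        if x_max == x_min:
            return np.zeros_like(image, dtype=np.float32)
        return ((image - x_min) / (x_max - x_min)).astype(np.float32)

    # Example: a 256 x 256 RGB mural block with 8-bit pixel values.
    mural = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
    normalized = min_max_normalize(mural)  # values now lie in [0, 1]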
During training, I_r is the original mural, and a mask M ∈ {0,1}^(256×256×3) is defined so that M_ij masks the central part of the image. The mask M is a binary mask that uses only the two values 0 and 1: a value of 1 indicates that the corresponding part of the image is kept, and a value of 0 indicates that the corresponding part of the image needs to be completed.
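The following sketch shows one way such a binary mask could be built and applied for two mural blocks separated by a missing central strip; the strip width, its orientation and the element-wise application are illustrative assumptions rather than the patent's exact masking scheme.

    import numpy as np

    def make_center_strip_mask(height=256, width=256, channels=3, gap=64):
        """Binary mask M: 1 marks surviving block pixels, 0 marks the strip to be completed."""
        mask = np.ones((height, width, channels), dtype=np.float32)
        start = (width - gap) // 2
        mask[:, start:start + gap, :] = 0.0  # central strip between the two blocks
        return mask

    M = make_center_strip_mask()
    # Applying the mask keeps only the surviving block pixels of a normalized mural I_r:
    # I_m = M * I_r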
2) The 5 multi-scale generators and discriminators of the SinGAN are constructed, and the loss functions that drive the iterative optimization are added.
Five generators and discriminators are designed at different scales. Each generator and discriminator has 6 convolutional layers with 4 × 4 kernels, and two dilated convolutional layers are used for completing the mural image; each discriminator consists of 6 fully convolutional layers. The generator at the bottom (coarsest) scale produces the overall information of the mural, and each higher layer generates more detail, up to the generator G_0 of the last layer, whose output size is 256 × 256 and whose mural content and style become more and more realistic.
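The sketch below gives one plausible PyTorch reading of a single-scale generator and discriminator with 6 convolutional layers, 4 × 4 kernels and two dilated convolutions in the generator; the channel counts, padding, normalization and activation choices are illustrative assumptions, not the patent's exact architecture.

    import torch
    import torch.nn as nn

    class ScaleGenerator(nn.Module):
        """One scale G_n: 6 convolutional layers with 4x4 kernels, two of them dilated."""
        def __init__(self, channels=32):
            super().__init__()
            def block(in_c, out_c, dilation=1):
                pad = (4 - 1) * dilation // 2  # rough 'same'-style padding for a 4x4 kernel
                return nn.Sequential(
                    nn.Conv2d(in_c, out_c, kernel_size=4, padding=pad, dilation=dilation),
                    nn.BatchNorm2d(out_c),
                    nn.LeakyReLU(0.2, inplace=True),
                )
            self.body = nn.Sequential(
                block(3, channels),
                block(channels, channels),
                block(channels, channels, dilation=2),  # dilated conv enlarges the context
                block(channels, channels, dilation=2),  # second dilated conv
                block(channels, channels),
                nn.Conv2d(channels, 3, kernel_size=4, padding=1),
            )
        def forward(self, x):
            return torch.tanh(self.body(x))

    class ScaleDiscriminator(nn.Module):
        """One scale D_n: 6 fully convolutional layers producing a patch-wise score map."""
        def __init__(self, channels=32):
            super().__init__()
            layers = [nn.Conv2d(3, channels, 4, padding=1), nn.LeakyReLU(0.2, inplace=True)]
            for _ in range(4):
                layers += [nn.Conv2d(channels, channels, 4, padding=1),
                           nn.LeakyReLU(0.2, inplace=True)]
            layers += [nn.Conv2d(channels, 1, 4, padding=1)]
            self.body = nn.Sequential(*layers)
        def forward(self, x):
            return self.body(x)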
The pixel reconstruction loss is adopted to capture the overall pixel information around the mural generation area; its expression is:

L_2 = || M ⊙ (G_n(I_mn) - I_rn) ||_2    (2)

where M is the binary mask, I_rn is a down-sampled mural of the real data, G_n(I_mn) is the mural generated by the network, and ⊙ denotes element-wise multiplication over the mural pixels.
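A minimal sketch of a masked pixel reconstruction loss in the spirit of equation (2), assuming PyTorch tensors in NCHW layout; the tensor shapes and the unreduced L2 norm are illustrative assumptions.

    import torch

    def pixel_reconstruction_loss(generated, real, mask):
        """L2 norm of the masked difference between generated and real murals (cf. eq. (2))."""
        # mask is 1 on surviving block pixels and 0 on the region being generated, so the
        # loss constrains the generated mural to match the real one where pixels are known.
        return torch.norm(mask * (generated - real), p=2)

    # Example with random tensors standing in for G_n(I_mn), I_rn and M.
    g = torch.rand(1, 3, 64, 64)
    r = torch.rand(1, 3, 64, 64)
    m = torch.ones(1, 3, 64, 64)
    loss_pix = pixel_reconstruction_loss(g, r, m)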
The texture loss is designed with the Gram matrix:

L_texture = || Gram(l_n) - Gram(g(I_n)) ||_2    (3)

where Gram(·) denotes the Gram matrix, l_n is the output of the real mural image I_rn through the nth layer of the network, I_n is the mural generated by the multi-scale generator G_n, and g denotes the generating network. The difference of the texture features at each layer is extracted so that the extended mural has the same local texture as the global texture, which enhances the realism of the extended mural.
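A sketch of a Gram-matrix texture loss in the spirit of equation (3); which feature maps are compared and how the Gram matrices are normalized are illustrative assumptions.

    import torch

    def gram_matrix(features):
        """Gram matrix of a feature map of shape (batch, channels, height, width)."""
        b, c, h, w = features.shape
        flat = features.view(b, c, h * w)
        gram = torch.bmm(flat, flat.transpose(1, 2))  # channel-to-channel correlations
        return gram / (c * h * w)

    def texture_loss(real_features, generated_features):
        """Difference between Gram matrices of real and generated mural features."""
        return torch.sum((gram_matrix(real_features) - gram_matrix(generated_features)) ** 2)

    # Example: l_n (features of the real mural) versus features of the generated mural.
    l_n = torch.rand(1, 32, 64, 64)
    g_feat = torch.rand(1, 32, 64, 64)
    loss_tex = texture_loss(l_n, g_feat)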
The reconstruction loss is designed as:

L_rec = || G_n(noise + I_mn) - I_rn ||_2    (4)

where G_n is the nth layer of the multi-scale generator, (noise + I_mn) is the noise at each layer superimposed with the mural output by the previous layer, and I_rn is the real mural.
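A sketch of the reconstruction loss of equation (4): the generator receives a noise map added to the masked, down-sampled mural and is pushed to reproduce the real mural at that scale; the noise amplitude and the stand-in generator in the example are illustrative assumptions.

    import torch

    def reconstruction_loss(generator, masked_mural, real_mural, noise_amp=0.1):
        """L_rec = || G_n(noise + I_mn) - I_rn ||_2, as in equation (4)."""
        noise = noise_amp * torch.randn_like(masked_mural)
        reconstructed = generator(noise + masked_mural)
        return torch.norm(reconstructed - real_mural, p=2)

    # Example with a shape-preserving stand-in generator.
    identity = lambda x: x
    loss_rec = reconstruction_loss(identity, torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))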
3) A mural I_r is taken from the mural set and down-sampled to {I_r0, I_r1, I_r2, I_r3, I_r4}; I_m4 is the masked mural down-sampled 4 times. I_m4 is input to the lowest layer of the multi-scale generator, and generator G_4 produces the mural G_4(I_m4);
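The down-sampled pyramid {I_r0, ..., I_r4} can be built as in the following sketch, assuming PyTorch bilinear interpolation and a fixed per-level scale factor; the scale factor itself is an assumption, since the patent does not state it.

    import torch
    import torch.nn.functional as F

    def build_pyramid(mural, levels=5, scale=0.75):
        """Return [I_r0, I_r1, ..., I_r4], from full resolution down to the coarsest scale."""
        pyramid = [mural]
        for _ in range(levels - 1):
            h = max(1, int(pyramid[-1].shape[2] * scale))
            w = max(1, int(pyramid[-1].shape[3] * scale))
            pyramid.append(F.interpolate(pyramid[-1], size=(h, w),
                                         mode='bilinear', align_corners=False))
        return pyramid

    mural = torch.rand(1, 3, 256, 256)  # I_r, the full-resolution mural block
    pyramid = build_pyramid(mural)      # pyramid[4] corresponds to I_r4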
4) G_4(I_m4) and I_r4 are input to discriminator D_4 for comparison and discrimination, and the weight parameters of this layer are updated according to the trained loss function; this is repeated up to the G_0 layer. Because the weight parameters are updated at every layer, the generated mural becomes more and more realistic.
The generator extracts mural features to generate the mural. The training-set mural I_r is down-sampled 4 times to obtain {I_r0, I_r1, I_r2, I_r3, I_r4}, and the masked mural I_m is down-sampled 4 times to obtain I_m4. First, the mural I_m4 is input to generator G_4, which generates the overall information of the mural. The result G_4(I_m4) and I_r4 are input to discriminator D_4 for discrimination, and the weight parameters of this layer are updated according to the loss function. G_4(I_m4) is then up-sampled to obtain I_m3 and superimposed with noise of the same size; the superimposed mural is input to generator G_3 to obtain G_3(I_m3), which together with I_r3 is input to discriminator D_3 for discrimination, and the loss is computed to update the parameters. This is repeated up to generator G_0, at which point the details and content of the generated mural are richest and the quality of the generated mural is greatly improved. The generation and discrimination flow is shown in FIG. 1.
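The coarse-to-fine pass described above can be sketched as follows, assuming per-scale generators that preserve spatial size and pyramids built as in the previous sketch; the noise amplitude and the upsampling mode are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def coarse_to_fine(generators, masked_pyramid, noise_amp=0.1):
        """Run G_4 ... G_0 from the coarsest masked mural up to the full resolution.

        generators[n] is G_n and masked_pyramid[n] is I_mn; index 4 is the coarsest scale.
        """
        # Coarsest scale: G_4 produces the overall layout of the extended mural.
        output = generators[4](masked_pyramid[4])
        for n in range(3, -1, -1):
            target_size = masked_pyramid[n].shape[2:]
            upsampled = F.interpolate(output, size=target_size,
                                      mode='bilinear', align_corners=False)
            noise = noise_amp * torch.randn_like(upsampled)
            # Each finer generator G_n refines the upsampled result plus same-sized noise.
            output = generators[n](upsampled + noise)
        return output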
5) After 200 training epochs, the generator and the discriminator reach Nash equilibrium and the training of the model is complete; the trained model is used to reconstruct the missing information between the tomb-chamber mural blocks.
Both the generator and the discriminator are trained with loss functions:

L_2+texture = L_2 + L_texture    (5)
L_rec = || G_n(noise + I_mn) - I_rn ||    (6)
L_G = χ·L_rec + L_2+texture    (7)

L_2+texture is the sum of the mural pixel reconstruction loss and the texture loss and is used to update the generator weights when training the generator; L_D (equation (8)) is the loss function of the discriminator, and the discriminator weights are updated according to L_D. Finally, the discriminator and the generator are trained adversarially. The adjustable parameter χ is set to 0.96, and L_2+texture is adjusted by adjusting the generator loss L_G. After the training of each layer is finished, its weight parameters are fixed and the next layer is trained.
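One training step at a single scale might look like the sketch below. The generator loss follows equation (7), with the texture term omitted for brevity, and the discriminator update uses a plain WGAN-style critic loss, which is an assumption about equation (8) (the original SinGAN trains its discriminators adversarially in this spirit). Optimizers, learning rates and the noise amplitude are likewise illustrative.

    import torch

    def train_step(G_n, D_n, opt_g, opt_d, masked, real, mask, chi=0.96, noise_amp=0.1):
        """One adversarial update at scale n; the loss terms follow the earlier sketches."""
        noise = noise_amp * torch.randn_like(masked)
        fake = G_n(noise + masked)

        # Discriminator update (WGAN-style critic loss, an assumption for equation (8)).
        opt_d.zero_grad()
        loss_d = D_n(fake.detach()).mean() - D_n(real).mean()
        loss_d.backward()
        opt_d.step()

        # Generator update: L_G = chi * L_rec + L_{2+texture}, equation (7).
        # (The texture term of equation (3) is omitted here for brevity; an adversarial
        # generator term, as in the original SinGAN, could also be added.)
        opt_g.zero_grad()
        l_rec = torch.norm(fake - real, p=2)           # equation (4), reusing the same noise
        l_pix = torch.norm(mask * (fake - real), p=2)  # equation (2)
        loss_g = chi * l_rec + l_pix
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()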
After training, the generated pictures are evaluated with the SSIM method; the value lies in [0, 1], and the larger the value, the higher the similarity to the original picture. The SSIM measures the fidelity and similarity between two murals: it not only senses errors but also reflects the degree of distortion in the perceptually relevant parts of the visual system, and it can evaluate the similarity between two murals of different sizes. The formula is:

SSIM(I_r, I_m) = ((2·μ_Ir·μ_Im + b_1)(2·σ_IrIm + b_2)) / ((μ_Ir² + μ_Im² + b_1)(σ_Ir² + σ_Im² + b_2))

where μ_Ir is the mean of the original mural I_r, μ_Im is the mean of the extended mural I_m, σ_Ir² is the variance of the original mural I_r, σ_Im² is the variance of the extended mural I_m, σ_IrIm is the covariance of I_r and I_m, and b_1 and b_2 are small constants that keep the SSIM denominator from being zero. The SSIM score lies in [0, 1]; when the content of the two images is exactly the same, the SSIM score equals 1.
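A sketch of a global SSIM computation matching the formula above, assuming NumPy arrays scaled to [0, 1]; practical SSIM implementations average the measure over local windows, so the single-window simplification and the constants b1 and b2 used here are illustrative assumptions.

    import numpy as np

    def global_ssim(img_r, img_m, b1=1e-4, b2=9e-4):
        """Single-window SSIM between the original mural I_r and the extended mural I_m."""
        mu_r, mu_m = img_r.mean(), img_m.mean()
        var_r, var_m = img_r.var(), img_m.var()
        cov = ((img_r - mu_r) * (img_m - mu_m)).mean()
        numerator = (2 * mu_r * mu_m + b1) * (2 * cov + b2)
        denominator = (mu_r ** 2 + mu_m ** 2 + b1) * (var_r + var_m + b2)
        return numerator / denominator

    img_r = np.random.rand(256, 256, 3)
    img_m = np.random.rand(256, 256, 3)
    score = global_ssim(img_r, img_m)  # closer to 1 means more similar content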
A mural generated by the trained model is compared with the original mural to obtain the SSIM value shown in FIG. 4; the SSIM result shows that the content and texture of the murals generated by the model are rich. The trained weights are saved, and inputting a mural picture yields the generation result shown in FIG. 5.

Claims (5)

1. A method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function, characterized by comprising the following steps:
1) acquiring images of the individual blocks of a mural excavated in blocks from a tomb passage to obtain mural block images, adding masks to the mural block images, applying min-max normalization, constructing an image set from the normalized mural block images, and using the image set as the training set with which the SinGAN generates the extensional information between the mural blocks;
2) constructing a SinGAN-based generative network containing generators and discriminators at 5 scales, and adding the reconstruction loss L_rec, the pixel reconstruction loss and the texture loss L_texture as loss functions;
3) taking any mural block image I_r from the mural set and down-sampling it to {I_r0, I_r1, I_r2, I_r3, I_r4}, where I_m4 is the masked mural down-sampled 4 times;
4) inputting I_m4 to the bottom layer of the generator, where generator G_4 produces an outward-extended mural G_4(I_m4); inputting G_4(I_m4) and I_r4 to discriminator D_4 for comparison and discrimination, and updating the weight parameters of this layer according to the loss function;
5) repeating step 4) to obtain the weight parameters of each layer of the generator;
6) training the generator with the training set and the discriminator until the generator and the discriminator reach Nash equilibrium, and then reconstructing the missing information between the tomb-chamber mural blocks with the trained generator.
2. The method as claimed in claim 1, wherein a pixel reconstruction loss is used to capture the overall pixel information around the mural generation area:

L_2 = || M ⊙ (G_n(I_mn) - I_rn) ||_2    (2)

where M is the binary mask, I_rn is the down-sampled real mural, G_n(I_mn) is the mural generated by the network, and ⊙ denotes element-wise multiplication over the mural pixels;
the texture loss is designed with Gram matrices:

L_texture = || Gram(l_n) - Gram(g(I_n)) ||_2    (3)

where Gram(·) denotes the Gram matrix, l_n is the output of the real mural image I_rn through the nth layer of the multi-scale generator, I_n is the mural generated by the multi-scale generator G_n, and g denotes the generating network;
the reconstruction loss is designed as:

L_rec = || G_n(noise + I_mn) - I_rn ||_2    (4)

where G_n is the nth layer of the multi-scale generator, (noise + I_mn) is the superposition of the noise at each layer of the multi-scale generator with the mural output by the previous layer, and I_rn is the real mural.
3. The method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function as claimed in claim 1, wherein the reconstruction loss L_rec, the pixel reconstruction loss and the texture loss L_texture are combined into the following loss functions:

L_2+texture = L_2 + L_texture    (5)
L_rec = || G_n(noise + I_mn) - I_rn ||    (6)
L_G = χ·L_rec + L_2+texture    (7)

where L_2+texture is the sum of the pixel reconstruction loss and the texture loss of the mural, L_D (equation (8)) is the loss function of the discriminator and is used to update the discriminator weights, χ is a tunable parameter, and L_2+texture is adjusted by adjusting the loss function of the generator.
4. The method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function as claimed in claim 1, wherein the structure of the multi-scale generators and discriminators is designed as follows: {I_r0, I_r1, I_r2, ..., I_rn} is a group of mural block images obtained by down-sampling the real mural, and I_mn is the masked mural block image down-sampled n times; I_mn is input to the lowest layer of the multi-scale generator, generator G_n generates the mural G_n(I_mn), which is then input to discriminator D_n together with I_rn for comparison and discrimination, and the weight parameters of this layer are updated; G_n(I_mn) is up-sampled and noise of the same size is added to form the input of generator G_(n-1), and this cycle continues until the original image size is reached.
5. The method for reconstructing missing information between tomb-chamber mural blocks using a SinGAN with an improved loss function as claimed in claim 1, wherein the SSIM is used to measure the fidelity and similarity between two murals:

SSIM(I_r, I_m) = ((2·μ_Ir·μ_Im + b_1)(2·σ_IrIm + b_2)) / ((μ_Ir² + μ_Im² + b_1)(σ_Ir² + σ_Im² + b_2))

where μ_Ir is the mean of the original mural I_r, μ_Im is the mean of the extended mural I_m, σ_Ir² is the variance of the original mural I_r, σ_Im² is the variance of the extended mural I_m, σ_IrIm is the covariance of I_r and I_m, and b_1 and b_2 are small constants that keep the SSIM denominator from being zero; the SSIM lies in [0, 1], and the larger the SSIM, the more similar the content of the two images.
CN202110267441.8A 2021-03-11 2021-03-11 Reconstruction method for missing information between coffin chamber mural blocks of improved loss function SinGAN Pending CN112884673A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110267441.8A CN112884673A (en) 2021-03-11 2021-03-11 Reconstruction method for missing information between coffin chamber mural blocks of improved loss function SinGAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110267441.8A CN112884673A (en) 2021-03-11 2021-03-11 Reconstruction method for missing information between coffin chamber mural blocks of improved loss function SinGAN

Publications (1)

Publication Number Publication Date
CN112884673A 2021-06-01

Family

ID=76042465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110267441.8A Pending CN112884673A (en) 2021-03-11 2021-03-11 Reconstruction method for missing information between coffin chamber mural blocks of improved loss function SinGAN

Country Status (1)

Country Link
CN (1) CN112884673A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
US20200285959A1 (en) * 2018-09-30 2020-09-10 Boe Technology Group Co., Ltd. Training method for generative adversarial network, image processing method, device and storage medium
US20210183022A1 (en) * 2018-11-29 2021-06-17 Tencent Technology (Shenzhen) Company Limited Image inpainting method and apparatus, computer device, and storage medium
CN110009576A (en) * 2019-02-28 2019-07-12 西北大学 A kind of mural painting inpainting model is established and restorative procedure
CN111292265A (en) * 2020-01-22 2020-06-16 东华大学 Image restoration method based on generating type antagonistic neural network
CN112184582A (en) * 2020-09-28 2021-01-05 中科人工智能创新技术研究院(青岛)有限公司 Attention mechanism-based image completion method and device
CN112270651A (en) * 2020-10-15 2021-01-26 西安工程大学 Image restoration method for generating countermeasure network based on multi-scale discrimination

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
米恒 (Mi Heng); 贾振堂 (Jia Zhentang): "Image super-resolution reconstruction based on an improved generative adversarial network" (基于改进生成式对抗网络的图像超分辨率重建), 计算机应用与软件 (Computer Applications and Software), no. 09, 10 September 2020 (2020-09-10) *

Similar Documents

Publication Publication Date Title
CN112288647B (en) Remote sensing image cloud and shadow restoration method based on gating convolution
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN111539316B (en) High-resolution remote sensing image change detection method based on dual-attention twin network
CN110363215B (en) Method for converting SAR image into optical image based on generating type countermeasure network
CN111242864B (en) Finger vein image restoration method based on Gabor texture constraint
CN108460746B (en) Image restoration method based on structure and texture layered prediction
CN111242841B (en) Image background style migration method based on semantic segmentation and deep learning
CN108230278B (en) Image raindrop removing method based on generation countermeasure network
CN103208001B (en) In conjunction with shape-adaptive neighborhood and the remote sensing image processing method of texture feature extraction
CN113240613A (en) Image restoration method based on edge information reconstruction
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN104408458B (en) SAR image segmentation method based on ray completion administrative division map and feature learning
CN109872278B (en) Image cloud layer removing method based on U-shaped network and generation countermeasure network
CN110544253A (en) fabric flaw detection method based on image pyramid and column template
CN116819615A (en) Seismic data reconstruction method
CN112001293A (en) Remote sensing image ground object classification method combining multi-scale information and coding and decoding network
CN116051407A (en) Image restoration method
CN113744134B (en) Hyperspectral image super-resolution method based on spectrum unmixed convolution neural network
CN110309763A (en) A kind of building classification method based on satellite remote sensing images
CN106991652A (en) Degree of rarefication constrains the coloured image restorative procedure with dictionary atom size adaptation
CN116862252B (en) Urban building loss emergency assessment method based on composite convolution operator
Fu et al. Line-drawing enhanced interactive mural restoration for Dunhuang Mogao Grottoes
CN114155171A (en) Image restoration method and system based on intensive multi-scale fusion
CN112884673A (en) Reconstruction method for missing information between coffin chamber mural blocks of improved loss function SinGAN
CN113674160A (en) Convolution network image defogging method applied to intelligent traffic system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination