
AU2018100325A4 - A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks - Google Patents


Info

Publication number
AU2018100325A4
Authority
AU
Australia
Prior art keywords
image
network
images
generator
coloring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2018100325A
Inventor
Xilai Nian
Mengyu Sun
Shinan Wang
Jiafan Xue
Ye Zhao
Jiatong Zhu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wang Shinan Miss
Zhao Ye Miss
Original Assignee
Wang Shinan Miss
Zhao Ye Miss
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wang Shinan Miss, Zhao Ye Miss filed Critical Wang Shinan Miss
Priority to AU2018100325A priority Critical patent/AU2018100325A4/en
Application granted granted Critical
Publication of AU2018100325A4 publication Critical patent/AU2018100325A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

This invention presents a method for coloring black-and-white films and photos in real time by using a conditional Generative Adversarial Network (cGAN). Training in the YUV color space instead of RGB makes the training more effective. The cGAN, which combines a "U-Net"-based architecture for the generator with a convolutional "PatchGAN" classifier as the discriminator, avoids the shortcomings of traditional encoder-decoder models. [Figure 1: flow diagram of the coloring pipeline, from movie clips and black-and-white input, through the Python software and the epoch loop, to the colored output.]

Description

DESCRIPTION
TITLE A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks
FIELD OF THE INVENTION
The present invention relates to a technique that colorizes black-and-white films and photos. In particular, this invention can color movies and photos automatically and at high speed, which makes video processing practical. The method gives full expression to high-level semantics by using conditional Generative Adversarial Networks (cGANs).
BACKGROUND OF THE INVENTION
Before the middle of the 20th century, constrained by manufacturing cost and technology, films and photos were almost all black and white. The foundation of film colorization is image colorization. Initially, image coloring was done entirely by hand, which cost considerable effort and time. As computer technology matured, films were digitized and colored with the help of computers. However, it still took a large amount of manpower to fine-tune details and ensure quality. With the growth of computing performance, Deep Learning brought the encoder-decoder model, which uses regression to predict the color of each pixel; it is widely used in image colorization and is much more advanced than the previous methods.
Nevertheless, the only link between encoding and decoding is a fixed-length semantic vector. In other words, the encoder compresses the information of the entire input into a fixed-length vector. This semantic vector cannot express all of the features, so the decoder does not receive enough information about the input sequence from the start, and the performance in extracting high-level semantic features is disappointing. As a consequence, decoding accuracy drops and the resulting colors are dull. The "semantic gap" between low-level visual features and high-level semantic features is therefore not bridged by the traditional encoder-decoder model.
Therefore, a basic premise is to find an independent model which has a complete mapping from the input image to the output image. The model should learn high-level semantic features and ensure consistency between input and output.
The related references are: [1] Zhang R, Isola P, Efros A A. Colorful Image Colorization[J]. 2016: 649-666.
[2] Iizuka S, Simo-Serra E, Ishikawa H. Let there be color!: Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification[M]. ACM, 2016.
[3] Bahdanau D, Cho K, Bengio Y. Neural Machine Translation by Jointly Learning to Align and Translate[J]. Computer Science, 2014.
SUMMARY OF THE INVENTION
In order to avoid the shortcomings of the traditional encoder-decoder model, this invention applies GANs in the conditional setting. A generative adversarial network is a kind of learning model composed of two parts, a generator and a discriminator. The goal of the generator is to learn the relationship between the random noise vector z and the output image y and to generate output that can deceive the discriminator; the task of the discriminator is to distinguish true images from generated ones. The two parts are antagonistic to each other and are trained together. The loss function of the cGAN is more complex than that of the traditional GAN: it learns the mapping from the observed image x and the random noise vector z to the output image y, and this joint conditioning makes the network more precise. The cGAN generator uses the U-Net model, and the discriminator uses the PatchGAN network. We use a "U-Net"-based architecture for our generator to make the setup simpler, and a convolutional "PatchGAN" classifier for the discriminator, which only penalizes structure at the scale of image patches. The U-Net network is composed of an encoder and a decoder, and the fake pictures are generated by convolution and deconvolution. In the convolution process, the size of the filter and the stride determine the degree of reduction; deconvolution is the opposite of convolution and is used to expand the dimensions. The network structure of the PatchGAN discriminator is slightly different from that of the generator: it is composed only of encoders, that is, a convolutional network. We define the picture generated by the generator as fake B and the color image originally used for training as real B; we feed fake B and real B in at the same time and, from the discriminator's verdict, we know whether the generated fake B can pass as true.
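For illustration only, the adversarial relationship described above can be sketched in PyTorch. This is a minimal sketch, not the patented implementation; G and D stand for the U-Net generator and PatchGAN discriminator sketched in the embodiments below, and the function name is hypothetical:

```python
import torch

def adversarial_step(G, D, gray, real_color):
    bce = torch.nn.BCELoss()
    fake_color = G(gray)                      # generator proposes a colorization
    # The conditional discriminator judges (input, output) pairs.
    pred_real = D(torch.cat([gray, real_color], 1))
    pred_fake = D(torch.cat([gray, fake_color.detach()], 1))  # no grad into G
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    # The generator is rewarded when the discriminator is fooled.
    pred_fool = D(torch.cat([gray, fake_color], 1))
    g_loss = bce(pred_fool, torch.ones_like(pred_fool))
    return g_loss, d_loss
```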
DESCRIPTION OF DRAWING
The following drawings are only for the purpose of description and explanation, not for limitation, wherein:
Fig. 1 is the flow diagram of the core algorithm of our invention, which shows the specific steps of our work one by one;
Fig. 2 shows two types of generator architecture. The "U-Net" is an encoder-decoder with skip connections between mirrored layers in the encoder and decoder stacks;
Fig. 3 shows the architecture of the U-Net model in detail. We input a grayscale image of size 512*256. The left leg consists of eight convolution-layer blocks, and the input of each block is the output of the previous block. The right leg is composed of eight deconvolution-layer blocks whose inputs consist of two parts: the output of the former block and the output of the symmetric convolution-layer block;
Fig. 4 shows the process of the test. We input an image to the procedure, and the procedure transforms it into a tensor. After regularization, the tensor goes through the model in the forward direction. Finally, the procedure outputs the image;
Fig. 5 is the process flow diagram of our website, which provides a fully visual and interactive user interface to users. With the guidance of this figure, users can easily transform black-and-white images and GIFs into colorful images and GIFs through the website. Users can choose to input images or GIFs on the index page and then select a file to color. After that, users can go back to the index from this page;
Fig. 6 is the process flow diagram of our EXE file, which provides another fully visual and interactive user interface. With the guidance of this figure, users can easily transform black-and-white images and GIFs into colorful images and GIFs through the EXE file. Users can choose to input images or GIFs in the index and then select a file to color. After that, users can download the color images and GIFs, empty the interface, or go back to the index;
Fig. 7 shows the results of our invention. The left column lists the original images we used in our model, while the right column shows the colorful images.
DESCRIPTION OF PREFERRED EMBODIMENTS
The invention adopts the cGAN, which is more convenient and faster than traditional coloring methods. In order to make the invention easier to understand, an embodiment of the invention is described in detail below; the overall process is shown in Fig. 1.
Step 1:
In order to train our network, we need to collect a training set for the network to adjust its parameters. The training set should be large enough, usually thousands of pictures, to be capable of training a decent network. To get these images, we intercepted some movies and selected appropriate sampling frames. To prevent the network from over-fitting, we take 1 frame every 50 frames instead of all of the frames for the training process. Each image in the training set is supposed to be the combination of a gray image and an RGB image of the same picture (required by the network structure). The proportion of the width and height of the combined image must be 2:1, for example, 1920:960. In our demo, the training set contains about 6500 images.
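As a rough sketch of this collection step, assuming OpenCV; the file names, the 960*960 per-half size, and the gray|color side-by-side layout are illustrative, not prescribed by the embodiment:

```python
import os
import cv2

def sample_frames(video_path, out_dir, every=50, half=960):
    """Keep 1 frame in every 50 and save a gray|color pair whose combined
    width:height ratio is 2:1 (e.g. 1920:960), as the network requires."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every == 0:                      # subsample against over-fitting
            color = cv2.resize(frame, (half, half))
            gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
            gray = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # 3 channels for concat
            pair = cv2.hconcat([gray, color])     # combined image is 2:1
            cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.png"), pair)
            saved += 1
        idx += 1
    cap.release()
```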
Step 2:
After obtaining the training set, we use the training images to train the network. To begin with, we create a data loader class to load the images in the training set and convert the data to tensors. Next, we divide each input image in half, named A and B respectively. Then we convert the A image, an RGB image, to a gray image, and keep B unchanged. When using the YUV channels for output, we convert B to the YUV scale; to convert the color space of the images, we multiply the value of each channel by a certain factor. Lastly, we randomly take a part of the A image of size 512*256 and cut the corresponding part of the B image. By randomly cropping the images, we can reuse the same training image several times.
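A minimal data-loader sketch in PyTorch, assuming torchvision and PIL; the class name is hypothetical, and PIL's YCbCr conversion stands in for the multiply-by-a-factor YUV conversion described above:

```python
import random
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class PairDataset(Dataset):
    def __init__(self, paths, crop=(256, 512)):    # (height, width) = 256x512
        self.paths, self.crop = paths, crop

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        ab = Image.open(self.paths[i]).convert("RGB")
        w, h = ab.size
        a = ab.crop((0, 0, w // 2, h))             # left half -> input A
        b = ab.crop((w // 2, 0, w, h))             # right half -> target B
        a = a.convert("L")                         # A becomes grayscale
        b = b.convert("YCbCr")                     # B in a YUV-like space
        # Random crops let us reuse the same training image several times.
        ch, cw = self.crop
        x = random.randint(0, w // 2 - cw)
        y = random.randint(0, h - ch)
        a = TF.to_tensor(TF.crop(a, y, x, ch, cw))
        b = TF.to_tensor(TF.crop(b, y, x, ch, cw))
        return a, b
```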
After the data processing, we create a pix-to-pix network model. Firstly, we initialize a generator network. The generator network contains 16 blocks, 8 for convolution layers and 8 for deconvolution layers. Each convolution layer convolves the input with a kernel matrix that stores the weight parameters of the pixels around the pixel being calculated, in order to extract features from the image. After the convolution, a ReLU activation function is applied to the values. To lower the dimensions of each layer's output, we set the convolution stride to 2, so the height and width of the convolution result are half those of the input. We add skip connections between the corresponding convolution and deconvolution layers to shuttle the original image directly into the deconvolution process. After we input a gray image into this network, it passes through the 8 convolution blocks and is then restored to the original size by passing through the 8 deconvolution blocks.
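The 16-block generator described above might look like the following in PyTorch. This is a sketch under common pix2pix conventions; the channel widths, kernel sizes, and the sigmoid output are illustrative assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn

def down(cin, cout):   # stride-2 convolution halves height and width
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):     # stride-2 deconvolution doubles height and width
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.BatchNorm2d(cout), nn.ReLU())

class UNetGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=3):
        super().__init__()
        widths = [64, 128, 256, 512, 512, 512, 512, 512]
        self.downs = nn.ModuleList(
            [down(in_ch, widths[0])] +
            [down(widths[i], widths[i + 1]) for i in range(7)])
        # Each up block takes the previous output plus the mirrored skip.
        self.ups = nn.ModuleList(
            [up(widths[7], widths[6])] +
            [up(widths[6 - i] * 2, widths[5 - i]) for i in range(6)])
        self.last = nn.ConvTranspose2d(widths[0] * 2, out_ch, 4, 2, 1)

    def forward(self, x):
        skips = []
        for d in self.downs:
            x = d(x)
            skips.append(x)
        skips = skips[:-1][::-1]            # mirror order, drop the bottleneck
        for i, u in enumerate(self.ups):
            x = torch.cat([u(x), skips[i]], dim=1)
        # Sigmoid keeps outputs in [0, 1], matching to_tensor scaling above.
        return torch.sigmoid(self.last(x))
```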
Then we initialize a discriminator network to identify the fake images produced by the generator. We use a network structure similar to PatchGAN. The discriminator first takes an image and convolves it with a convolution matrix (stride 2). After 3 convolution layers, we get 256 feature maps of size 64*32. Then we convolve them with a convolution matrix (stride 1) and get 512 feature maps of size 63*31. Lastly, these output feature maps are fed into a full-connection layer that squeezes the image to 1*30*30. The discriminator uses this final output to distinguish real from fake.
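A corresponding discriminator sketch, assuming PyTorch; the feature-map sizes in the comments assume the 512*256 crops from Step 2, and the final 1-channel patch map stands in for the 1*30*30 squeeze described in the text:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=4):          # 1 gray channel + 3 color channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            # After three stride-2 stages: 256 maps of 64x32 for a 512x256 crop.
            nn.Conv2d(256, 512, 4, 1, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2),
            # Stride-1 stage: 512 maps of 63x31, matching the description.
            nn.Conv2d(512, 1, 4, 1, 1),   # 1-channel patch verdict map
            nn.Sigmoid())

    def forward(self, pair):              # pair = concat(gray, color) on dim 1
        return self.net(pair)
```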
Step 3:
After creating the network model and initializing the generator and discriminator networks, we start the training process, whose main task is to find the best parameters of the convolution matrices. For each input image A, the generator network generates a fake image after A passes through all layers of the network. Then we input both the fake image and the real image that we hope the generator could produce into the discriminator network. The discriminator answers whether the image is real or not, while the expected answer is "fake". If the outcome value of the discriminator is not totally "fake", for example 0.8 fake, it adjusts the parameters of each convolution layer with the output error. The parameter adjustment method is similar to back-propagation: for the incoming error, multiply the error by its corresponding gradient, take the outcome as the new input error, and pass it on to the next node. After the parameter adjustment, the discriminator network is more capable of identifying the fake images that the generator produces.
As for the generator, if it gets a "fake" answer from the discriminator, it also adjusts its parameters with the back-propagation process. After the adjustment, the generator is supposed to generate images that can fool the discriminator into giving the "real" answer. As a result, the discriminator is optimized whenever the generator fools it, and the generator is optimized whenever the discriminator identifies a fake image. Through this training process the network is optimized into a decent one. The objective (loss) function of the conventional cGAN can be expressed by:

L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G(x, z)))]
When training, the generator network tries to minimize the objective function while the discriminator tries to maximize it. To compare the differences better, we defined a new loss function that can be expressed by:

L = λ_p · L_p + λ_a · L_GAN

where L_p is the pixel loss, which refers to the standard deviation of the pixels between the generated image and the authentic image, and L_GAN is the adversarial loss coming from the discriminator; λ_p and λ_a are pre-defined weights for the pixel loss and the adversarial loss.
The L_p can be expressed by:

L_p = (1 / (C·W·H)) Σ_{c=1..C} Σ_{w=1..W} Σ_{h=1..H} (G(x)_{c,w,h} - (y_h)_{c,w,h})²

where {x, y_h} is the image pair we input, with C channels, width W and height H; x is the input image, y_h is the corresponding ground truth, and G(x) is the generator output.
As for the discriminator, the binary cross-entropy loss for the discriminator D is defined as:

L_D = -(1/N) Σ_{i=1..N} [ l_i log D(x_i) + (1 - l_i) log(1 - D(x_i)) ]

where N is the number of input images (ground truth or generated results) and l_i are the corresponding labels (1 for "real", 0 for "fake").
During the training process, the generator and the discriminator use the value of L as the reference to optimize their parameters. After the training is over, the network can produce a colored image similar to the authentic image.
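Putting the pieces together, the Step 3 loop might be sketched as follows in PyTorch; the lambda weights, learning rate, Adam betas, and epoch count are assumptions, not values from the disclosure:

```python
import torch
import torch.nn.functional as F

def train(G, D, loader, epochs=200, lam_p=100.0, lam_a=1.0, lr=2e-4):
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    for epoch in range(epochs):
        for gray, real in loader:
            fake = G(gray)
            # --- discriminator: real pairs -> 1, fake pairs -> 0 ---
            pred_real = D(torch.cat([gray, real], 1))
            pred_fake = D(torch.cat([gray, fake.detach()], 1))
            d_loss = F.binary_cross_entropy(pred_real, torch.ones_like(pred_real)) \
                   + F.binary_cross_entropy(pred_fake, torch.zeros_like(pred_fake))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # --- generator: adversarial term plus per-pixel term ---
            pred_fake = D(torch.cat([gray, fake], 1))
            adv = F.binary_cross_entropy(pred_fake, torch.ones_like(pred_fake))
            pix = F.mse_loss(fake, real)          # L_p over C, W, H
            g_loss = lam_a * adv + lam_p * pix    # L = lam_p*L_p + lam_a*L_GAN
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```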
Step 4:
After all functions have been completed, our network demo invokes them to connect the website to the background, which is written in Python. First, our website is rendered by Flask in our background, and then we can log on to the website. Second, our website loads the image uploaded by our users and passes its filename as a parameter into the function "transformIMG()". Last, the website invokes that function to make the picture colorful through the model we have trained; our users can see the colorful image on the website immediately and find it in the folder later.
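A minimal sketch of this glue code with Flask; only the function name transformIMG() comes from the text, while the route names, templates, upload folder, and the function's signature are illustrative assumptions:

```python
import os
from flask import Flask, request, render_template

from transfer import transformIMG   # colorizes a file with the trained model

app = Flask(__name__)
UPLOADS = "static/uploads"

@app.route("/", methods=["GET"])
def index():
    return render_template("index.html")

@app.route("/color", methods=["POST"])
def color():
    f = request.files["image"]
    path = os.path.join(UPLOADS, f.filename)
    f.save(path)                       # keep a copy in the folder for later
    result = transformIMG(path)        # returns the colored file's path
    return render_template("result.html", image=result)
```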
Step 5:
We create an executable file to complete the interaction by using C#, a method that differs from the website in Step 4. Firstly, to guarantee that Python files can be invoked from C#, we configure the environment by downloading IronPython, PyTorch and CUDA 8.0. Secondly, we design the process shown in Fig. 6: clicking the "Color" button makes the program invoke transfer.py, which contains the function "transformIMG()". At last, we encapsulate the program as an executable file which users can download and execute on their own computers. The result is that you choose an image locally, click the "Color" button, and a colorful image is shown, which can be downloaded. The colored result in the "Video" part is exhibited in GIF mode.
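For completeness, transfer.py's transformIMG(), the entry point shared by the website and the C# executable, might look like this. The file and function names come from the text; everything else, including the weights path, the hypothetical networks module, and the resize to the 512*256 training resolution, is an assumption:

```python
import torch
from PIL import Image
import torchvision.transforms.functional as TF

from networks import UNetGenerator    # hypothetical module holding the model

def transformIMG(path, weights="generator.pth"):
    g = UNetGenerator()
    g.load_state_dict(torch.load(weights, map_location="cpu"))
    g.eval()
    gray = Image.open(path).convert("L")
    gray = gray.resize((512, 256))               # training resolution (a simplification)
    x = TF.to_tensor(gray).unsqueeze(0)          # 1 x 1 x 256 x 512 tensor
    with torch.no_grad():
        y = g(x)[0].clamp(0, 1)                  # forward pass, as in Fig. 4
    out_path = path.rsplit(".", 1)[0] + "_color.png"
    TF.to_pil_image(y).save(out_path)
    return out_path
```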

Claims (2)

  1. A method for fast images and videos coloring by using conditional generative adversarial networks, which applies GANs in the conditional setting, wherein said generative adversarial network is a kind of learning model composed of two parts, a generator and a discriminator; the goal of the GAN is to learn the relationship between the random noise vector Z and the output image Y, the generator's goal being to generate output that can deceive the discriminator, and the discriminator's task being to distinguish the true image.
  2. A method for fast images and videos coloring by using conditional generative adversarial networks as claimed in claim 1, wherein said two parts are antagonistic to each other and are trained together.
AU2018100325A 2018-03-15 2018-03-15 A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks Ceased AU2018100325A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2018100325A AU2018100325A4 (en) 2018-03-15 2018-03-15 A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2018100325A AU2018100325A4 (en) 2018-03-15 2018-03-15 A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks

Publications (1)

Publication Number Publication Date
AU2018100325A4 true AU2018100325A4 (en) 2018-04-26

Family

ID=61973044

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2018100325A Ceased AU2018100325A4 (en) 2018-03-15 2018-03-15 A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks

Country Status (1)

Country Link
AU (1) AU2018100325A4 (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830912A (en) * 2018-05-04 2018-11-16 北京航空航天大学 A kind of interactive grayscale image color method of depth characteristic confrontation type study
CN109166126A (en) * 2018-08-13 2019-01-08 苏州比格威医疗科技有限公司 A method of paint crackle is divided on ICGA image based on condition production confrontation network
CN109166126B (en) * 2018-08-13 2022-02-18 苏州比格威医疗科技有限公司 Method for segmenting paint cracks on ICGA image based on condition generation type countermeasure network
CN109190524A (en) * 2018-08-17 2019-01-11 南通大学 A kind of human motion recognition method based on generation confrontation network
CN109190524B (en) * 2018-08-17 2021-08-13 南通大学 Human body action recognition method based on generation of confrontation network
CN109191472A (en) * 2018-08-28 2019-01-11 杭州电子科技大学 Based on the thymocyte image partition method for improving U-Net network
CN109190620A (en) * 2018-09-03 2019-01-11 苏州科达科技股份有限公司 License plate sample generating method, system, equipment and storage medium
CN109495744B (en) * 2018-10-29 2019-12-24 西安电子科技大学 Large-magnification remote sensing image compression method based on joint generation countermeasure network
CN109495744A (en) * 2018-10-29 2019-03-19 西安电子科技大学 The big multiplying power remote sensing image compression method of confrontation network is generated based on joint
CN109670510A (en) * 2018-12-21 2019-04-23 万达信息股份有限公司 A kind of gastroscopic biopsy pathological data screening system and method based on deep learning
CN109670510B (en) * 2018-12-21 2023-05-26 万达信息股份有限公司 Deep learning-based gastroscope biopsy pathological data screening system
CN109793491A (en) * 2018-12-29 2019-05-24 维沃移动通信有限公司 A kind of colour blindness detection method and terminal device
CN110007341A (en) * 2019-02-28 2019-07-12 长江大学 A kind of recognition methods and system of the microseism useful signal based on IfnoGAN and SSD model
CN109813542A (en) * 2019-03-15 2019-05-28 中国计量大学 The method for diagnosing faults of air-treatment unit based on production confrontation network
WO2020233709A1 (en) * 2019-05-22 2020-11-26 华为技术有限公司 Model compression method, and device
CN110298844B (en) * 2019-06-17 2021-06-29 艾瑞迈迪科技石家庄有限公司 X-ray radiography image blood vessel segmentation and identification method and device
CN110298844A (en) * 2019-06-17 2019-10-01 艾瑞迈迪科技石家庄有限公司 X-ray contrastographic picture blood vessel segmentation and recognition methods and device
US10628931B1 (en) 2019-09-05 2020-04-21 International Business Machines Corporation Enhancing digital facial image using artificial intelligence enabled digital facial image generation
CN110866455B (en) * 2019-10-25 2022-09-13 南京理工大学 Pavement water body detection method
CN110866455A (en) * 2019-10-25 2020-03-06 南京理工大学 Pavement water body detection method
CN111062880A (en) * 2019-11-15 2020-04-24 南京工程学院 Underwater image real-time enhancement method based on condition generation countermeasure network
CN111145290A (en) * 2019-12-31 2020-05-12 云南大学 Image colorization method, system and computer readable storage medium
CN111145290B (en) * 2019-12-31 2022-09-20 云南大学 Image colorization method, system and computer readable storage medium
US11068749B1 (en) 2020-02-24 2021-07-20 Ford Global Technologies, Llc RCCC to RGB domain translation with deep neural networks
CN111627080A (en) * 2020-05-20 2020-09-04 广西师范大学 Gray level image coloring method based on convolution nerve and condition generation antagonistic network
CN111862253B (en) * 2020-07-14 2023-09-15 华中师范大学 Sketch coloring method and system for generating countermeasure network based on deep convolution
CN111862253A (en) * 2020-07-14 2020-10-30 华中师范大学 Sketch coloring method and system for generating confrontation network based on deep convolution
WO2022028313A1 (en) * 2020-08-04 2022-02-10 Ping An Technology (Shenzhen) Co., Ltd. Method and device for image generation and colorization
CN112102323A (en) * 2020-09-17 2020-12-18 陕西师范大学 Adherent nucleus segmentation method based on generation of countermeasure network and Caps-Unet network
CN112102323B (en) * 2020-09-17 2023-07-07 陕西师范大学 Adhesion cell nucleus segmentation method based on generation of countermeasure network and Caps-Unet network
CN112183507B (en) * 2020-11-30 2021-03-19 北京沃东天骏信息技术有限公司 Image segmentation method, device, equipment and storage medium
CN112183507A (en) * 2020-11-30 2021-01-05 北京沃东天骏信息技术有限公司 Image segmentation method, device, equipment and storage medium
CN112488130A (en) * 2020-12-17 2021-03-12 苏州聚悦信息科技有限公司 AI micro-pore wall detection algorithm
CN112488130B (en) * 2020-12-17 2023-08-15 苏州聚悦信息科技有限公司 AI micro hole wall detection method
CN112884866B (en) * 2021-01-08 2023-06-06 北京奇艺世纪科技有限公司 Coloring method, device, equipment and storage medium for black-and-white video
CN112884866A (en) * 2021-01-08 2021-06-01 北京奇艺世纪科技有限公司 Coloring method, device, equipment and storage medium for black and white video
CN113420870A (en) * 2021-07-04 2021-09-21 西北工业大学 U-Net structure generation countermeasure network and method for underwater acoustic target recognition
CN113420870B (en) * 2021-07-04 2023-12-22 西北工业大学 U-Net structure generation countermeasure network and method for underwater sound target recognition
CN114997175A (en) * 2022-05-16 2022-09-02 电子科技大学 Emotion analysis method based on field confrontation training

Similar Documents

Publication Publication Date Title
AU2018100325A4 (en) A New Method For Fast Images And Videos Coloring By Using Conditional Generative Adversarial Networks
Zhao et al. Pixelated semantic colorization
Kim et al. Global and local enhancement networks for paired and unpaired image enhancement
Kim et al. Bigcolor: Colorization using a generative color prior for natural images
CN113763296B (en) Image processing method, device and medium
US8508546B2 (en) Image mask generation
Armas Vega et al. Copy-move forgery detection technique based on discrete cosine transform blocks features
Montulet et al. Deep learning for robust end-to-end tone mapping
Wang et al. PalGAN: Image colorization with palette generative adversarial networks
Salmona et al. Deoldify: A review and implementation of an automatic colorization method
Blanch et al. End-to-end conditional gan-based architectures for image colourisation
Shen et al. Color correction for image-based modeling in the large
US11887277B2 (en) Removing compression artifacts from digital images and videos utilizing generative machine-learning models
Mejjati et al. Look here! a parametric learning based approach to redirect visual attention
Kim et al. A multi-purpose convolutional neural network for simultaneous super-resolution and high dynamic range image reconstruction
Al Sobbahi et al. Low-light image enhancement using image-to-frequency filter learning
Mazumdar et al. Two-stream encoder–decoder network for localizing image forgeries
KR102430743B1 (en) Apparatus and method for developing object analysis model based on data augmentation
Liang et al. Method for reconstructing a high dynamic range image based on a single-shot filtered low dynamic range image
US12118647B2 (en) Generating colorized digital images utilizing a re-colorization neural network with local hints
Górriz et al. End-to-end conditional GAN-based architectures for image colourisation
KR102430742B1 (en) Apparatus and method for developing space analysis model based on data augmentation
Wu et al. Edge missing image inpainting with compression–decompression network in low similarity images
CN115457015A (en) Image no-reference quality evaluation method and device based on visual interactive perception double-flow network
CN114299105A (en) Image processing method, image processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry