
CN112215788A - Multi-focus image fusion algorithm based on improved generation countermeasure network - Google Patents


Info

Publication number
CN112215788A
CN112215788A
Authority
CN
China
Prior art keywords
image
network
generator
discriminator
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010966366.XA
Other languages
Chinese (zh)
Inventor
王娟
柯聪
袁旭亮
丁畅
何宇
刘远远
张鑫午
刘敏
刘聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Technology
Original Assignee
Hubei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Technology filed Critical Hubei University of Technology
Priority to CN202010966366.XA priority Critical patent/CN112215788A/en
Publication of CN112215788A publication Critical patent/CN112215788A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-focus image fusion algorithm based on an improved generative adversarial network (GAN), applied to target images captured by focusing at different positions in the same scene. First, the generator and discriminator networks are designed; the pooling layers are removed from the network structure to avoid the information loss images suffer while propagating through the model, and image features are extracted with stacked convolutions. Second, the loss function of the GAN is constructed and the network parameters are optimized to obtain the optimal network model. Finally, the captured target images are input into the trained model to obtain the fused image. During fusion, the generator of the GAN produces a fused image, and the generated image and the source images are input into the discriminator; if the discriminator cannot tell them apart, the generated image is the best fused image.

Description

Multi-focus image fusion algorithm based on an improved generative adversarial network
Technical Field
The invention relates to the technical field of image processing, and in particular to a multi-focus image fusion algorithm based on an improved generative adversarial network.
Background
With the rapid development of technologies such as computers and sensors, new devices have brought great convenience to people's lives. Digital images, a product of these technologies, have gradually permeated daily life and play an important role in communication. As the amount of image information people obtain grows, processing that information becomes important. Because an optical lens is focused within a certain range, only objects inside the depth of field appear sharp in a picture, while other objects may appear blurred. A common technique for acquiring a fully focused image is to fuse several images of the same scene taken under different focal settings, i.e. multi-focus image fusion. This technique fuses images focused at different depths, and the fused image retains the detail features of the source images to the greatest extent, providing richer information for practical fields such as military reconnaissance, medical diagnosis, and target recognition.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. An object of the present invention is therefore to provide a multi-focus image fusion algorithm based on an improved generative adversarial network, which processes target images captured by focusing at different positions in the same scene to obtain a fused image containing rich detail information.
1. According to an embodiment of the invention, the multi-focus image fusion algorithm based on an improved generative adversarial network is applied to target images captured by focusing at different positions in the same scene, and comprises the following steps:
S1: design the network structures of the generator and the discriminator in the generative adversarial network, remove the pooling layers from the network structure, and extract image features with stacked convolutions;
S2: construct the objective function of the network model from the structure of the generative adversarial network;
S3: train on a training set to obtain the optimal generative adversarial network model;
S4: apply the generator model of the generative adversarial network obtained in step S3, input the source images into the generator to obtain the generated image, and update the target according to the discriminator's judgment of the generated image against the source images.
Preferably, the generator in step S1 is a 5-layer convolutional neural network: the first and second layers use 5x5 convolution kernels, the third and fourth layers use 3x3 kernels, and the last layer uses a 1x1 kernel; the stride of every kernel is set to 1. The input of the generator is the concatenation of the two multi-focus images, i.e. the input has 2 channels.
The discriminator in step S1 is an 8-layer convolutional neural network in which every layer uses 3x3 convolution kernels; the stride is set to 2 in the second, third, and fourth layers and to 1 in the remaining layers.
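As a sketch only, the layer layout described in step S1 can be written out in PyTorch. The kernel sizes, strides, 2-channel input, and the 76x76 to 64x64 size change are taken from the text; the channel widths, activations, the use of unpadded convolutions in the generator (which happens to reproduce the stated 76-to-64 shrink), and the discriminator's scoring head are assumptions the patent does not specify.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """5 conv layers: 5x5, 5x5, 3x3, 3x3, 1x1, all stride 1, no pooling.
    Unpadded convolutions shrink a 76x76 patch to the stated 64x64 output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 5), nn.ReLU(),   # two concatenated source images
            nn.Conv2d(64, 64, 5), nn.ReLU(),
            nn.Conv2d(64, 32, 3), nn.ReLU(),
            nn.Conv2d(32, 16, 3), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Tanh(),   # single-channel fused image
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """8 conv layers, all 3x3; layers 2-4 use stride 2, the rest stride 1."""
    def __init__(self):
        super().__init__()
        strides = [1, 2, 2, 2, 1, 1, 1, 1]
        chans = [1, 16, 32, 64, 64, 64, 64, 64, 1]
        layers = []
        for i, s in enumerate(strides):
            layers += [nn.Conv2d(chans[i], chans[i + 1], 3, stride=s, padding=1),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)
    def forward(self, x):
        # Average the final 1-channel map into a scalar realness score per image
        # (an assumed head; the patent only says the discriminator classifies).
        return self.net(x).mean(dim=(1, 2, 3))

g, d = Generator(), Discriminator()
fused = g(torch.rand(1, 2, 76, 76))   # padded 76x76 input pair -> (1, 1, 64, 64)
score = d(fused)                      # -> (1,)
```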
Preferably, the objective function of the generative adversarial network model built in step S2 comprises the objective function of the generator network and the objective function of the discriminator network:
L_GAN = {min(L_G), min(L_D)};
The loss function of the generator comprises two parts: one part is the adversarial loss between the generator and the discriminator, denoted V; the other part is the content loss of the image detail information during generation, denoted L_content:
[Equations defining V and L_content appear only as images in the original patent and are not reproduced here.]
L_G can then be expressed as
L_G = V + αL_content
The loss function of the discriminator is denoted L_D:
[The equation defining L_D appears only as an image in the original patent and is not reproduced here.]
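The equations for V, L_content, and L_D are rendered only as images in this text, so their exact form is unknown. Purely as an illustration, a least-squares instantiation consistent with the surrounding description (an adversarial term V, a content term L_content, and L_G = V + αL_content) might look like:

```python
import numpy as np

# Hypothetical instantiation of the losses named above. The patent gives V,
# L_content, and L_D only as equation images, so the least-squares forms and
# the weight alpha below are assumptions, not the patent's actual formulas.

def adversarial_loss_V(d_fused):
    """Generator's adversarial term: push D's scores on fused images toward 1."""
    return float(np.mean((d_fused - 1.0) ** 2))

def content_loss(fused, src1, src2):
    """Penalize the fused image for losing detail from either source."""
    return float(np.mean((fused - src1) ** 2) + np.mean((fused - src2) ** 2))

def generator_loss(d_fused, fused, src1, src2, alpha=0.5):
    """L_G = V + alpha * L_content, as stated in the text."""
    return adversarial_loss_V(d_fused) + alpha * content_loss(fused, src1, src2)

def discriminator_loss(d_real, d_fused):
    """L_D: score real source patches toward 1 and generated patches toward 0."""
    return float(np.mean((d_real - 1.0) ** 2) + np.mean(d_fused ** 2))

# A fused image identical to both sources, scored 1 by D, gives L_G = 0.
img = np.ones((8, 8))
print(generator_loss(np.ones(4), img, img, img))  # 0.0
```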
preferably, in step S3, an optimal generative confrontation network model is obtained through training of a training set, 50 pairs of multi-focus images are used as the training set of the experiment, each pair of multi-focus images is divided into sub-blocks by a sliding window with a step size of 14 and a size of 64x64, the sub-blocks are expanded to a size of 76x76 in a filling manner and are used as the input of the generator, the size of the fused image output by the generator is still 64x64, and the generated fused image is used as the input of the discriminator and the Adam optimization algorithm is used until the maximum training times are reached.
Preferably, in step S4 the fused image is obtained by the generator and updated through the discriminator. The two input source images I_1 and I_2 are passed through the generator G to obtain the fused image I_f. The discriminator D judges, from the features extracted from the fused image I_f and the source images I_1 and I_2, whether I_f contains the detail information of I_1 and I_2. If the discriminator can tell them apart, the fused image I_f continues to be updated; if it cannot, the image generated by the generator is the best fused image.
The invention provides a multi-focus image fusion algorithm based on an improved generative adversarial network, which uses the GAN to extract image information at different focus positions and generates a fused image containing rich detail information. First, the generator and discriminator networks are designed; the pooling layers are removed from the network structure to avoid the information loss images suffer while propagating through the model, and image features are extracted with stacked convolutions. Second, the loss function of the GAN is constructed and the network parameters are optimized to obtain the optimal network model. Finally, the captured target images are input into the trained model to obtain the fused image.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block diagram of the multi-focus image fusion algorithm based on an improved generative adversarial network according to the present invention;
FIG. 2 is a diagram of the generator network structure in the multi-focus image fusion algorithm based on an improved generative adversarial network according to the present invention;
FIG. 3 is a diagram of the discriminator network structure in the multi-focus image fusion algorithm based on an improved generative adversarial network according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Example 1:
As shown in FIGS. 1-3, an image fusion algorithm based on an improved generative adversarial network. First, the generator and discriminator networks are designed; the pooling layers are removed from the network structure to avoid the information loss images suffer while propagating through the model, and image features are extracted with stacked convolutions. Second, the loss function of the GAN is constructed and the network parameters are optimized to obtain the optimal network model. Finally, the captured target images are input into the trained model to obtain the fused image.
The image fusion algorithm based on an improved generative adversarial network specifically comprises the following steps:
S1: design the network structures of the generator and the discriminator in the generative adversarial network, remove the pooling layers from the network structure, and extract image features with stacked convolutions;
S2: construct the objective function of the network model from the structure of the generative adversarial network;
S3: train on a training set to obtain the optimal generative adversarial network model;
S4: apply the generator model of the generative adversarial network obtained in step S3, input the source images into the generator to obtain the generated image, and update the target according to the discriminator's judgment of the generated image against the source images.
Example 2:
As shown in FIGS. 1-3, following the steps of Embodiment 1, the network structures of the generator and the discriminator in the generative adversarial network are designed in S1. The goal of the generator is to extract more detail information from the source images and generate a fused image rich in detail. The generator is a 5-layer convolutional neural network: the first and second layers use 5x5 convolution kernels, the third and fourth layers use 3x3 kernels, and the last layer uses a 1x1 kernel; the stride of every kernel is set to 1. The input of the generator is the concatenation of the two multi-focus images, i.e. the input has 2 channels. The purpose of the discriminator is to judge whether the target image is an image generated by the generator or a real image, classifying it by the features extracted from it. The discriminator is an 8-layer convolutional neural network in which every layer uses 3x3 convolution kernels; the stride is set to 2 in the second, third, and fourth layers and to 1 in the remaining layers.
Example 3:
As shown in FIGS. 1-3, following the steps of Embodiment 1, the objective function of the generative adversarial network model built in S2 comprises the objective function of the generator network and the objective function of the discriminator network:
L_GAN = {min(L_G), min(L_D)};
The generator loss function comprises two parts: one part is the adversarial loss between the generator and the discriminator, denoted V; the other part is the content loss of the image detail information during generation, denoted L_content:
[Equations defining V and L_content appear only as images in the original patent and are not reproduced here.]
L_G can then be expressed as
L_G = V + αL_content
In order to generate a better fused image, a discriminator is introduced. The loss function of the discriminator is denoted L_D:
[The equation defining L_D appears only as an image in the original patent and is not reproduced here.]
example 4:
As shown in FIGS. 1-3, following the steps of Embodiment 1, the optimal generative adversarial network model is obtained in S3 by training on a training set. 50 pairs of multi-focus images are used as the training set of the experiment. To train the model better, each pair of multi-focus images is divided into 64x64 sub-blocks by a sliding window with a stride of 14, and these sub-blocks are expanded to 76x76 by padding and used as the input of the generator. The fused image output by the generator is still 64x64. The generated fused image serves as the input of the discriminator, and the Adam optimization algorithm is used until the maximum number of training iterations is reached.
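A minimal sketch of one alternating training iteration of this embodiment follows. The tiny stand-in networks and the least-squares losses are assumptions (the patent's loss equations appear only as images); the Adam optimizer, the 76x76 padded inputs, the 64x64 fused output, and the absence of pooling layers are as stated in the text.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in networks, far smaller than the patent's; no pooling layers.
G = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 1))                 # toy generator
D = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(8, 1, 3, padding=1))      # toy discriminator

def d_score(x):
    # Collapse the map into one realness score per image (assumed head).
    return D(x).mean(dim=(1, 2, 3))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)     # Adam, per the patent
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
alpha = 0.5                                           # content weight (assumed)

src = torch.rand(4, 2, 76, 76)                        # padded 76x76 patch pairs
real = src[:, :1, 6:-6, 6:-6]                         # 64x64 "real" reference

for _ in range(2):
    # Discriminator update: real patches toward 1, generated toward 0.
    fused = G(src)[:, :, 6:-6, 6:-6]                  # crop back to 64x64
    loss_D = ((d_score(real) - 1) ** 2).mean() + (d_score(fused.detach()) ** 2).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: fool D (adversarial term V) plus content loss toward
    # both source images, combined as L_G = V + alpha * L_content.
    fused = G(src)[:, :, 6:-6, 6:-6]
    V = ((d_score(fused) - 1) ** 2).mean()
    L_content = ((fused - src[:, :1, 6:-6, 6:-6]) ** 2).mean() \
              + ((fused - src[:, 1:, 6:-6, 6:-6]) ** 2).mean()
    loss_G = V + alpha * L_content
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```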
Example 5:
As shown in FIGS. 1-3, following the steps of Embodiment 1, the fused image is obtained in S4 by the generator and updated through the discriminator. The two input source images I_1 and I_2 are passed through the generator G to obtain the fused image I_f. The discriminator D judges, from the features extracted from the fused image I_f and the source images I_1 and I_2, whether I_f contains the detail information of I_1 and I_2. If the discriminator can tell them apart, the fused image I_f continues to be updated; if it cannot, the image generated by the generator is the best fused image.
In summary, the multi-focus image fusion algorithm based on an improved generative adversarial network achieves end-to-end adaptive fusion and avoids designing complicated fusion rules. First, through the design of the generator and discriminator networks, image features are extracted with stacked convolutions. Second, the loss function of the GAN is constructed and the network parameters are optimized to obtain the optimal network model. Finally, the captured target images are input into the trained model to obtain the fused image. The algorithm extracts the detail information and edge features of the two source images well and achieves a better fusion effect.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above description covers only preferred embodiments of the present invention, but the scope of the present invention is not limited thereto; any equivalent replacement or modification of the technical solutions and inventive concept of the present invention that a person skilled in the art could readily conceive within the technical scope disclosed herein shall fall within the scope of the present invention.

Claims (5)

1. A multi-focus image fusion algorithm based on an improved generative adversarial network, applied to target images captured by focusing at different positions in the same scene, characterized in that the algorithm comprises:
S1: design the network structures of the generator and the discriminator in the generative adversarial network, remove the pooling layers from the network structure, and extract image features with stacked convolutions;
S2: construct the objective function of the network model from the structure of the generative adversarial network;
S3: train on a training set to obtain the optimal generative adversarial network model;
S4: apply the generator model of the generative adversarial network obtained in step S3, input the source images into the generator to obtain the generated image, and update the target according to the discriminator's judgment of the generated image against the source images.
2. The multi-focus image fusion algorithm based on an improved generative adversarial network according to claim 1, characterized in that: the generator in step S1 is a 5-layer convolutional neural network; the first and second layers use 5x5 convolution kernels, the third and fourth layers use 3x3 kernels, and the last layer uses a 1x1 kernel; the stride of every kernel is set to 1, and the input of the generator is the concatenation of two multi-focus images, i.e. the input has 2 channels;
the discriminator in step S1 is an 8-layer convolutional neural network in which every layer uses 3x3 convolution kernels; the stride is set to 2 in the second, third, and fourth layers and to 1 in the remaining layers.
3. The multi-focus image fusion algorithm based on an improved generative adversarial network according to claim 1, characterized in that: the objective function of the generative adversarial network model built in step S2 comprises the objective function of the generator network and the objective function of the discriminator network:
L_GAN = {min(L_G), min(L_D)};
the generator loss function comprises two parts: one part is the adversarial loss between the generator and the discriminator, denoted V; the other part is the content loss of the image detail information during generation, denoted L_content:
[Equations defining V and L_content appear only as images in the original patent and are not reproduced here.]
L_G can then be expressed as
L_G = V + αL_content
and the loss function of the discriminator is denoted L_D:
[The equation defining L_D appears only as an image in the original patent and is not reproduced here.]
4. The multi-focus image fusion algorithm based on an improved generative adversarial network according to claim 1, characterized in that: in step S3 the optimal generative adversarial network model is obtained by training on a training set; 50 pairs of multi-focus images are used as the training set of the experiment; each pair of multi-focus images is divided into 64x64 sub-blocks by a sliding window with a stride of 14; these sub-blocks are expanded to 76x76 by padding and used as the input of the generator; the fused image output by the generator is still 64x64; the generated fused image serves as the input of the discriminator, and the Adam optimization algorithm is used until the maximum number of training iterations is reached.
5. The multi-focus image fusion algorithm based on an improved generative adversarial network according to claim 1, characterized in that: in step S4 the fused image is obtained by the generator and updated through the discriminator; the two input source images I_1 and I_2 are passed through the generator G to obtain the fused image I_f; the discriminator D judges, from the features extracted from the fused image I_f and the source images I_1 and I_2, whether I_f contains the detail information of I_1 and I_2; if the discriminator can tell them apart, the fused image I_f continues to be updated; if it cannot, the image generated by the generator is the best fused image.
CN202010966366.XA 2020-09-15 2020-09-15 Multi-focus image fusion algorithm based on improved generation countermeasure network Pending CN112215788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010966366.XA CN112215788A (en) 2020-09-15 2020-09-15 Multi-focus image fusion algorithm based on improved generation countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010966366.XA CN112215788A (en) 2020-09-15 2020-09-15 Multi-focus image fusion algorithm based on improved generation countermeasure network

Publications (1)

Publication Number Publication Date
CN112215788A true CN112215788A (en) 2021-01-12

Family

ID=74049550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010966366.XA Pending CN112215788A (en) 2020-09-15 2020-09-15 Multi-focus image fusion algorithm based on improved generation countermeasure network

Country Status (1)

Country Link
CN (1) CN112215788A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325931A (en) * 2018-08-22 2019-02-12 中北大学 Multimodal Image Fusion Method Based on Generative Adversarial Network and Super-Resolution Network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王娟 et al.: "Multi-focus Image Fusion Based on an Improved Generative Adversarial Network" (基于改进生成对抗网络的多聚焦图像融合), Science Technology and Engineering (《科学技术与工程》) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112439A (en) * 2021-04-14 2021-07-13 展讯半导体(南京)有限公司 Image fusion method, training method, device and equipment of image fusion model
CN113723470A (en) * 2021-08-09 2021-11-30 北京工业大学 Pollen image synthesis method and device fusing multilayer information and electronic equipment
CN113723470B (en) * 2021-08-09 2024-08-27 北京工业大学 Pollen image synthesis method and device integrating multilayer information and electronic equipment
CN113610732A (en) * 2021-08-10 2021-11-05 大连理工大学 Full-focus image generation method based on interactive counterstudy
CN113610732B (en) * 2021-08-10 2024-02-09 大连理工大学 Full-focus image generation method based on interactive countermeasure learning
CN114708368A (en) * 2022-02-17 2022-07-05 北京深睿博联科技有限责任公司 A 3D image generation method and device based on generative adversarial network
CN114782297B (en) * 2022-04-15 2023-12-26 电子科技大学 Image fusion method based on motion-friendly multi-focus fusion network
CN116597268A (en) * 2023-07-17 2023-08-15 中国海洋大学 An efficient multi-focus image fusion method and its model building method
CN116597268B (en) * 2023-07-17 2023-09-22 中国海洋大学 An efficient multi-focus image fusion method and its model construction method

Similar Documents

Publication Publication Date Title
CN112215788A (en) Multi-focus image fusion algorithm based on improved generation countermeasure network
JP6011862B2 (en) 3D image capturing apparatus and 3D image capturing method
CN107122796B (en) An Optical Remote Sensing Image Classification Method Based on Multi-branch Network Fusion Model
US20200145642A1 (en) Method and apparatus for estimating depth of field information
JP5760727B2 (en) Image processing apparatus and image processing method
EP3236391B1 (en) Object detection and recognition under out of focus conditions
Ma et al. Defocus image deblurring network with defocus map estimation as auxiliary task
Jung et al. Active confocal imaging for visual prostheses
JP5370542B1 (en) Image processing apparatus, imaging apparatus, image processing method, and program
JP2013005091A (en) Imaging apparatus and distance information acquisition method
KR20090087670A (en) Shooting information automatic extraction system and method
Qian et al. Bggan: Bokeh-glass generative adversarial network for rendering realistic bokeh
CN107707809A (en) A kind of method, mobile device and the storage device of image virtualization
KR102543306B1 (en) Apparatus for detecting objects of interest based on 3d gaze point information and providing metadata reflecting user's perspective and perception
CN103177432A (en) Method for obtaining panorama by using code aperture camera
US20210134050A1 (en) Method and apparatus for image conversion
CN113112439B (en) Image fusion method, training method, device and equipment of image fusion model
CN105812649A (en) Photographing method and device
CN108028893A (en) Multiple camera auto-focusings are synchronous
KR102091643B1 (en) Apparatus for processing image using artificial neural network, method thereof and computer recordable medium storing program to perform the method
CN112241940A (en) Method and device for fusing multiple multi-focus images
CN112351196A (en) Image definition determining method, image focusing method and device
JP2013042375A (en) Image pickup device and distance information acquisition method
WO2023149135A1 (en) Image processing device, image processing method, and program
CN109657702B (en) 3D depth semantic perception method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210112