
CN115689960A - Illumination self-adaptive infrared and visible light image fusion method in night scene - Google Patents


Info

Publication number
CN115689960A
CN115689960A
Authority
CN
China
Prior art keywords
network
image
fusion
illumination
visible light
Prior art date
Legal status
Pending
Application number
CN202211325390.0A
Other languages
Chinese (zh)
Inventor
詹伟达
王佳乐
郝子强
曹可亮
刘晟佐
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology
Priority to CN202211325390.0A
Publication of CN115689960A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an illumination-adaptive infrared and visible light image fusion method for night scenes, which relates to the technical field of image processing and machine vision and comprises the following steps: collecting infrared and visible light images in a night scene, respectively carrying out registration preprocessing, and constructing a training data set; constructing a network model and designing a loss function; inputting the training data set into the network model for model training until the model converges, and storing the parameters of the network model to obtain a trained fusion model; and fusing the infrared and visible light images of the night scene based on the trained fusion model to obtain the final fused image. The invention can obtain a better fused image under uneven night illumination, and the local target edges in the fused image are clearer.

Description

Illumination self-adaptive infrared and visible light image fusion method in night scene
Technical Field
The invention relates to the technical field of image processing and machine vision, and in particular to an illumination-adaptive infrared and visible light image fusion method for night scenes.
Background
Image fusion integrates source images from different sensors, and the fusion result makes up for the information limitations of any single sensor. In a night scene, lighting typically changes unpredictably and light intensity is uneven, so visible light images captured in such an environment suffer from uneven illumination, low contrast, unclear targets, and poor visual quality. The special imaging principle of the infrared sensor makes the infrared image insensitive to light changes and severe weather. Therefore, fusing the infrared and visible light images yields a fused image that both retains rich texture details and highlights salient targets, and the fusion result can be applied to fields such as target recognition, security monitoring, and target tracking. However, existing image fusion methods for night scenes have the following problems: the fusion result is strongly affected by lighting, and local target edges are blurred.
Chinese patent publication No. CN107481214A discloses a method for fusing a low-light-level image and an infrared image, which first denoises the collected low-light-level and infrared images separately, then registers the denoised images using an image registration method based on edge features, performs image fusion on the registered images using the dual-tree complex wavelet transform, and finally enhances the result to obtain the final fused image. However, this method fuses poorly on source images with uneven night illumination, and local target edges in its fused images are blurred.
Therefore, how to obtain a better fused image under uneven night illumination, with clearer local target edges, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides an illumination-adaptive infrared and visible light image fusion method for night scenes, which can obtain a better fused image under uneven night illumination and makes local target edges in the fused image clearer.
In order to achieve the above purpose, the invention provides the following technical solution:
an infrared and visible light image fusion method with self-adaptive illumination in night scenes comprises the following steps:
collecting infrared and visible light images under a night scene, respectively carrying out registration pretreatment, and constructing a training data set;
constructing a network model and designing a loss function;
inputting the training data set into the network model for model training until the model converges, and storing parameters of the network model to obtain a trained fusion model;
and fusing the infrared and visible light images in the night scene based on the trained fusion model to obtain a final fusion image.
The technical effect achieved by the above technical solution is that a better fused image can be obtained under uneven night illumination, and local target edges in the fused image are clearer.
Optionally, the constructing a training data set specifically includes:
selecting the night road data set MRSR and the data set M3FD, carrying out registration preprocessing on the images in the two data sets to obtain images of the same size, and constructing the training data set.
Optionally, the network model includes a backbone fusion network and a local illuminance adaptive branch network; wherein:
the backbone fusion network comprises a first convolution block, a second convolution block, a third convolution block, a fourth convolution block, and a fifth convolution block; the first convolution block is used for extracting shallow features of the source image, the second and third convolution blocks are used for extracting deep features of the source image, and the fourth and fifth convolution blocks are used for carrying out feature fusion and image reconstruction on the feature maps of the two branches to obtain the fused image;
the local illuminance adaptive branch network comprises four convolution layers and two fully connected layers, wherein the four convolution layers are used for extracting features of the input source image, and the two fully connected layers are used for outputting the high-illuminance and low-illuminance probabilities of the source image.
Optionally, the first, second, third, fourth, and fifth convolution blocks each comprise convolution layers, activation functions, a skip connection, and a concatenation operation; wherein: the activation functions in the first, second, third, and fourth convolution blocks are all linear rectification functions, and the activation function in the fifth convolution block is a hyperbolic tangent function;
the activation functions of the local illuminance adaptive branch network are all linear rectification functions, and the convolution kernels of all convolution layers are n × n.
The technical effect achieved by the above technical solution is that it discloses the main structure of the network model: the skip connection introduced into the convolution blocks of the backbone fusion network alleviates the loss of feature-map information after multiple convolutions and retains the information of the source image, while the local illuminance adaptive branch network estimates the illuminance condition of local areas of the visible light source image, drives the illumination-adaptive fusion of the backbone fusion network, and solves the problem of poor fusion quality for unevenly illuminated images.
Optionally, the designing a loss function specifically includes:
constructing the loss function of the local illuminance adaptive branch network based on the illuminance probabilities output by the branch network and the probability label;
and constructing the loss function of the backbone fusion network based on the fused image output by the backbone fusion network, the image labels, and the illuminance probabilities output by the local illuminance adaptive branch network.
Optionally, the loss function of the local illuminance adaptive branch network is a probability classification loss function, and the illuminance of the visible light source image is used as the probability label; the loss functions of the backbone fusion network comprise a pixel loss function, a gradient loss function, and an edge enhancement loss function, and the infrared and visible light source images are used as image labels.
The technical effect achieved by the above technical solution is that it discloses the main process of designing the loss functions: the target edge enhancement loss function not only makes salient targets more prominent but also makes local target edges clearer.
Optionally, the obtaining of the trained fusion model specifically includes the following steps:
training the backbone fusion network and the local illuminance adaptive branch network, minimizing the value of the loss function during training until the number of training iterations reaches a preset threshold or the value of the loss function stabilizes within a preset range, finishing the training of the network model, and storing the parameters of the network model.
Optionally, the training of the backbone fusion network and the local illuminance adaptive branch network specifically includes:
in the training process of the local illuminance adaptive branch network, using the visible light images in the training data set as input;
and in the training process of the backbone fusion network, using the infrared and visible light images in the training data set as input.
According to the above technical solutions, compared with the prior art, the invention discloses an illumination-adaptive infrared and visible light image fusion method for night scenes. The skip connection introduced into the convolution blocks of the backbone fusion network alleviates the loss of feature-map information after multiple convolutions and retains the information of the source image; the local illuminance adaptive branch network estimates the illuminance condition of local areas of the visible light source image, drives the illumination-adaptive fusion of the backbone fusion network, solves the problem of poor fusion quality for unevenly illuminated images, and obtains a better fused image under uneven night illumination with clearer local target edges; in addition, the target edge enhancement loss function not only makes salient targets more prominent but also makes local target edges clearer.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the illumination-adaptive infrared and visible light image fusion method in a night scene;
FIG. 2 is a schematic diagram of the network model;
FIG. 3 is a detailed diagram of the first, second, third, and fourth convolution blocks;
FIG. 4 is a schematic structural diagram of the local illuminance adaptive branch network.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses an illumination self-adaptive infrared and visible light image fusion method in a night scene, which comprises the following steps of:
1. Collecting infrared and visible light images in a night scene, respectively carrying out registration preprocessing, and constructing a training data set. Specifically, the night road data set MRSR and the data set M3FD are selected, the images in the data sets are registered and preprocessed to obtain images of the same size, and the training data set is constructed.
In this embodiment, the data set MRSR comprises 742 pairs of infrared and visible light images, and the data set M3FD comprises 560 pairs; each source image is cropped into 9 sub-images of size 480 × 480, for a total of 11718 sets of training images.
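The cropping step can be illustrated with a short Python sketch. It assumes the source images are already registered, single-channel NumPy arrays at least 480 × 480 in size; the evenly spaced 3 × 3 crop grid and the function name are illustrative, not taken from the patent.

```python
import numpy as np

def crop_into_subimages(img: np.ndarray, size: int = 480, grid: int = 3) -> list:
    """Crop one registered source image into grid x grid sub-images of size x size."""
    h, w = img.shape[:2]
    # Evenly spaced top-left corners so the 9 crops span the whole image.
    ys = np.linspace(0, h - size, grid).astype(int)
    xs = np.linspace(0, w - size, grid).astype(int)
    return [img[y:y + size, x:x + size] for y in ys for x in xs]

# (742 + 560) image pairs x 9 crops = 11718 training sets, matching the embodiment.
```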
2. Constructing a network model. As shown in fig. 2, the network model comprises a backbone fusion network and a local illuminance adaptive branch network, wherein:
the backbone fusion network comprises a first convolution block, a second convolution block, a third convolution block, a fourth convolution block, and a fifth convolution block; the first convolution block extracts shallow features of the source image, the second and third convolution blocks extract deep features of the source image, and the fourth and fifth convolution blocks carry out feature fusion and image reconstruction on the feature maps of the two branches to obtain the fused image; the local illuminance adaptive branch network comprises four convolution layers and two fully connected layers, wherein the four convolution layers extract features of the input source image and the two fully connected layers output the high-illuminance and low-illuminance probabilities of the source image.
Specifically, the first, second, third, fourth, and fifth convolution blocks each comprise convolution layers, activation functions, a skip connection, and a concatenation operation; wherein: the activation functions in the first, second, third, and fourth convolution blocks are all linear rectification functions, the activation function in the fifth convolution block is a hyperbolic tangent function, the activation functions of the local illuminance adaptive branch network are all linear rectification functions, and the convolution kernels of all convolution layers are n × n.
In this embodiment, the first to fourth convolution blocks have the same structure. As shown in fig. 3, each such convolution block comprises 3 convolution layers and 3 activation functions, where the convolution kernel sizes of the 3 convolution layers are 1 × 1, 3 × 3, and 3 × 3, respectively, the strides are all 1, and the 3 activation functions are all linear rectification functions. In addition, a skip connection is introduced between the input and the output of the convolution block to alleviate the loss of feature-map information after multiple convolutions: the input passes through a 1 × 1 convolution layer and is then concatenated with the result of the third activation function. The fifth convolution block comprises only 1 convolution layer with a kernel size of 1 × 1 and 1 activation function using the hyperbolic tangent function.
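A minimal PyTorch sketch of one such convolution block, following the description above (1 × 1, 3 × 3, 3 × 3 convolutions with stride 1 and ReLU, plus a 1 × 1 skip path concatenated with the main path); the channel widths are assumptions, since the patent does not state them.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One of the first four backbone convolution blocks (fig. 3)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        )
        # Skip path: the block input goes through a 1 x 1 convolution and is
        # then concatenated with the result of the third activation function.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.main(x), self.skip(x)], dim=1)
```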
Referring to fig. 4, the local illuminance adaptive branch network comprises four convolution layers and two fully connected layers; the convolution kernel sizes of the 4 convolution layers are all 3 × 3, and all activation functions are linear rectification functions.
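A corresponding sketch of the branch network, again with assumed channel widths; the strides and the global average pooling between the convolutional and fully connected parts are also assumptions, since the patent only fixes the 3 × 3 kernels and ReLU activations.

```python
import torch
import torch.nn as nn

class IlluminanceBranch(nn.Module):
    """Local illuminance adaptive branch network (fig. 4)."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        chs = [in_ch, 16, 32, 64, 128]  # assumed channel progression
        convs = []
        for i in range(4):
            convs += [nn.Conv2d(chs[i], chs[i + 1], kernel_size=3, stride=2, padding=1),
                      nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*convs)
        self.pool = nn.AdaptiveAvgPool2d(1)  # assumed pooling before the FC head
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(inplace=True),
                                nn.Linear(64, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.pool(self.features(x)).flatten(1)
        return torch.softmax(self.fc(z), dim=1)  # {P_High, P_Low} in [0, 1]
```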
Specifically, the linear rectification function and the hyperbolic tangent function are defined as follows:
ReLU(x) = max(0, x);
tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)).
3. Designing a loss function
The loss function of the local illuminance adaptive branch network is constructed based on the illuminance probabilities output by the branch network and the probability label; the loss function of the backbone fusion network is constructed based on the fused image output by the backbone fusion network, the image labels, and the illuminance probabilities output by the local illuminance adaptive branch network.
Specifically, the loss function of the local illuminance adaptive branch network is a probability classification loss function, and the illuminance of the visible light source image is used as the probability label; the probability classification loss function is calculated as follows:
L_1 = −z · log σ({P_High, P_Low});
where z denotes the illuminance label of the input image, σ(·) denotes the softmax function that normalizes the illuminance probabilities to [0, 1], and P_High and P_Low respectively denote the high-illuminance and low-illuminance probabilities. For each visible light sub-image, the probabilities are computed by the branch network:
{P_High^N, P_Low^N} = F_IA(I_vi^N);
where F_IA denotes the branch network, I_vi^N denotes the N-th visible light sub-image, and {P_High^N, P_Low^N} denote its high-illuminance and low-illuminance probabilities.
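A sketch of L_1 in PyTorch, assuming the branch network returns softmax-normalized probabilities as above and z is a class index (0 = high illuminance, 1 = low illuminance, an assumed convention); this is ordinary two-class cross-entropy.

```python
import torch
import torch.nn.functional as F

def illuminance_loss(probs: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """probs: (N, 2) softmax outputs {P_High, P_Low}; z: (N,) illuminance labels."""
    return F.nll_loss(torch.log(probs + 1e-8), z)  # L1 = -z * log(sigma(...))
```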
The loss functions of the backbone fusion network comprise a pixel loss function, a gradient loss function, and an edge enhancement loss function, and the infrared and visible light source images are used as image labels.
The calculation formula of the pixel loss function is as follows:
L_2 = (1 / (H·W)) · (α_ir · ||I_f − I_ir||_F + α_vi · ||I_f − I_vi||_F);
where H and W respectively denote the height and width of the image, ||·||_F denotes the Frobenius norm, and I_f(x, y), I_ir(x, y), and I_vi(x, y) denote the pixel values of the fused image, the infrared image, and the visible light image at (x, y), respectively. The balance weight parameters α_ir and α_vi are calculated as follows:
α_vi = P_High × 10;
α_ir = P_Low × 10;
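A sketch of the pixel loss under the reconstruction above, assuming single-channel (N, 1, H, W) tensors; the Frobenius norm is taken per image.

```python
import torch

def pixel_loss(fused, ir, vi, p_high, p_low):
    """fused, ir, vi: (N, 1, H, W); p_high, p_low: (N,) branch probabilities."""
    h, w = fused.shape[-2:]
    a_vi, a_ir = p_high * 10, p_low * 10                      # balance weights
    d_ir = torch.linalg.matrix_norm(fused - ir).squeeze(1)    # per-image F-norm
    d_vi = torch.linalg.matrix_norm(fused - vi).squeeze(1)
    return ((a_ir * d_ir + a_vi * d_vi) / (h * w)).mean()
```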
the gradient loss function is calculated as follows:
L_3 = (1 / (H·W)) · (β_ir · ||∇I_f − ∇I_ir||_F + β_vi · ||∇I_f − ∇I_vi||_F);
where ∇I_f(x, y), ∇I_ir(x, y), and ∇I_vi(x, y) respectively denote the gradient values of the fused image, the infrared image, and the visible light image at (x, y), and β_ir and β_vi are balance weight parameters, set in practical applications to β_ir = β_vi = 5.
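A sketch of the gradient loss under the reconstruction above; the patent does not name the gradient operator, so a Sobel filter is assumed here.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Gradient magnitude of a (N, 1, H, W) image via Sobel filters (assumed operator)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(-1, -2)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_loss(fused, ir, vi, beta_ir: float = 5.0, beta_vi: float = 5.0):
    h, w = fused.shape[-2:]
    d_ir = torch.linalg.matrix_norm(sobel_gradient(fused) - sobel_gradient(ir)).squeeze(1)
    d_vi = torch.linalg.matrix_norm(sobel_gradient(fused) - sobel_gradient(vi)).squeeze(1)
    return ((beta_ir * d_ir + beta_vi * d_vi) / (h * w)).mean()
```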
The formula for the edge enhancement loss function is as follows:
L_4 = (1 / (H·W)) · (γ_ir · ||C ⊙ (I_f − I_ir)||_F + γ_vi · ||C ⊙ (I_f − I_vi)||_F);
where C is the target edge map obtained by the Canny edge detection operator, ⊙ denotes element-wise multiplication, and γ_ir and γ_vi are balance weight parameters, set in practical applications to γ_ir = γ_vi = 5. The Canny edge detection operator first converts the image to grayscale, then applies Gaussian filtering to it, computes the gradient magnitude and gradient direction angle, performs non-maximum suppression on the gradient magnitude according to the gradient direction angle, and finally detects and connects edges using a dual-threshold algorithm.
The gradient magnitude and gradient direction angle are calculated as follows:
G(x, y) = √(G_x(x, y)² + G_y(x, y)²);
θ(x, y) = arctan(G_y(x, y) / G_x(x, y));
where G_x(x, y) and G_y(x, y) are the gradients of the image in the horizontal and vertical directions at point (x, y), respectively.
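A sketch of the edge enhancement loss under the reconstruction above, with OpenCV's Canny detector supplying the edge map C. Which source image C is computed from, and the Gaussian kernel and dual thresholds, are assumptions; the patent specifies only the Canny pipeline itself.

```python
import cv2
import numpy as np
import torch

def canny_edge_mask(img01: torch.Tensor) -> torch.Tensor:
    """img01: (1, H, W) grayscale tensor in [0, 1]; returns a binary edge map C."""
    gray = (img01.squeeze(0).cpu().numpy() * 255).astype(np.uint8)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)   # Gaussian filtering
    edges = cv2.Canny(blurred, 100, 200)            # dual-threshold edge detection
    return torch.from_numpy(edges.astype(np.float32) / 255.0).unsqueeze(0)

def edge_loss(fused, ir, vi, gamma_ir: float = 5.0, gamma_vi: float = 5.0):
    """fused, ir, vi: (N, 1, H, W); C is taken from the visible image here (assumed)."""
    h, w = fused.shape[-2:]
    c = torch.stack([canny_edge_mask(v) for v in vi]).to(fused.device)
    d_ir = torch.linalg.matrix_norm(c * (fused - ir)).squeeze(1)
    d_vi = torch.linalg.matrix_norm(c * (fused - vi)).squeeze(1)
    return ((gamma_ir * d_ir + gamma_vi * d_vi) / (h * w)).mean()
```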
The total loss function of the backbone fusion network is defined as: L = L_2 + L_3 + L_4.
4. Model training
The training data set is input into the network model for model training until the model converges, and the parameters of the network model are stored to obtain the trained fusion model. Specifically, the backbone fusion network and the local illuminance adaptive branch network are trained, and the value of the loss function is minimized during training; the training of the network model is complete, and its parameters are saved, when the number of training iterations reaches a preset threshold or the value of the loss function stabilizes within a preset range. During training of the local illuminance adaptive branch network, the visible light images in the training data set are used as input; during training of the backbone fusion network, the infrared and visible light images in the training data set are used as input.
In the branch network training process of this embodiment, each visible light sub-image is used as the input of the branch network, the number of training iterations is set to 100, the learning rate is set to 10^(−3), and the loss function threshold is set to 0.0002; the branch network training is considered complete when the value of the loss function is less than 0.0002. In the backbone network training process, the infrared and visible light image pairs are used as the input of the backbone network, the batch size of the input images is set to 16, the number of training iterations is set to 100, the learning rate is set to 10^(−3), and the loss function threshold is set to 0.0003; the backbone network training is considered complete when the value of the loss function is less than 0.0003.
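A condensed sketch of this two-stage schedule, tying together the earlier sketches; the optimizer (Adam) and the dataset loaders are assumptions not stated in the patent.

```python
import torch

def train_branch(branch, loader, epochs=100, lr=1e-3, threshold=2e-4):
    """Stage 1: train the branch on visible sub-images with illuminance labels."""
    opt = torch.optim.Adam(branch.parameters(), lr=lr)  # assumed optimizer
    for _ in range(epochs):
        for vi_patch, z in loader:
            loss = illuminance_loss(branch(vi_patch), z)
            opt.zero_grad(); loss.backward(); opt.step()
        if loss.item() < threshold:   # stop once the loss falls below 0.0002
            return

def train_backbone(fusion_net, branch, loader, epochs=100, lr=1e-3, threshold=3e-4):
    """Stage 2: train the backbone on infrared/visible pairs (batch size 16)."""
    opt = torch.optim.Adam(fusion_net.parameters(), lr=lr)
    branch.eval()                                     # branch only supplies probabilities
    for _ in range(epochs):
        for ir, vi in loader:
            with torch.no_grad():
                probs = branch(vi)                    # {P_High, P_Low}
            fused = fusion_net(ir, vi)
            loss = (pixel_loss(fused, ir, vi, probs[:, 0], probs[:, 1])
                    + gradient_loss(fused, ir, vi)
                    + edge_loss(fused, ir, vi))       # L = L2 + L3 + L4
            opt.zero_grad(); loss.backward(); opt.step()
        if loss.item() < threshold:   # stop once the loss falls below 0.0003
            return
```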
5. Determining a fusion model
The parameters of the backbone network are fixed to determine the fusion model; the registered infrared and visible light images are input into the fusion model, and the fused image is finally output. The method obtains a better fused image under uneven night illumination, and the local target edges in the fused image are clearer.
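A final sketch of the inference step; the checkpoint filename and the two-argument forward signature follow the earlier sketches and are assumptions.

```python
import torch

def fuse_pair(fusion_net: torch.nn.Module, ir_image: torch.Tensor,
              vi_image: torch.Tensor, ckpt: str = "fusion_model.pth") -> torch.Tensor:
    """Load fixed backbone parameters and fuse one registered IR/visible pair."""
    fusion_net.load_state_dict(torch.load(ckpt))  # hypothetical checkpoint path
    fusion_net.eval()
    with torch.no_grad():
        return fusion_net(ir_image, vi_image)     # (1, 1, H, W) registered tensors
```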
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An illumination-adaptive infrared and visible light image fusion method in a night scene, characterized by comprising the following steps:
collecting infrared and visible light images in a night scene, respectively carrying out registration preprocessing, and constructing a training data set;
constructing a network model and designing a loss function;
inputting the training data set into the network model for model training until the model converges, and storing the parameters of the network model to obtain a trained fusion model;
and fusing the infrared and visible light images of the night scene based on the trained fusion model to obtain the final fused image.
2. The method for fusing infrared and visible light images with adaptive illuminance in a night scene according to claim 1, wherein the constructing of the training data set specifically comprises:
selecting the night road data set MRSR and the data set M3FD, carrying out registration preprocessing on the images in the two data sets to obtain images of the same size, and constructing the training data set.
3. The method for fusing infrared and visible light images with adaptive illuminance in a night scene according to claim 1, wherein the network model comprises a backbone fusion network and a local illuminance adaptive branch network; wherein:
the backbone fusion network comprises a first convolution block, a second convolution block, a third convolution block, a fourth convolution block, and a fifth convolution block; the first convolution block is used for extracting shallow features of the source image, the second and third convolution blocks are used for extracting deep features of the source image, and the fourth and fifth convolution blocks are used for carrying out feature fusion and image reconstruction on the feature maps of the two branches to obtain the fused image;
the local illuminance adaptive branch network comprises four convolution layers and two fully connected layers, wherein the four convolution layers are used for extracting features of the input source image, and the two fully connected layers are used for outputting the high-illuminance and low-illuminance probabilities of the source image.
4. The method of claim 3, wherein the first, second, third, fourth, and fifth convolution blocks each comprise convolution layers, activation functions, a skip connection, and a concatenation operation; wherein: the activation functions in the first, second, third, and fourth convolution blocks are all linear rectification functions, and the activation function in the fifth convolution block is a hyperbolic tangent function;
the activation functions of the local illuminance adaptive branch network are all linear rectification functions, and the convolution kernels of all convolution layers are n × n.
5. The method according to claim 3, wherein the designing a loss function specifically includes:
constructing the loss function of the local illuminance adaptive branch network based on the illuminance probabilities output by the branch network and the probability label;
and constructing the loss function of the backbone fusion network based on the fused image output by the backbone fusion network, the image labels, and the illuminance probabilities output by the local illuminance adaptive branch network.
6. The method according to claim 5, wherein the loss function of the local illuminance adaptive branch network is a probability classification loss function, and the illuminance of the visible light source image is used as the probability label; the loss functions of the backbone fusion network comprise a pixel loss function, a gradient loss function, and an edge enhancement loss function, and the infrared and visible light source images are used as image labels.
7. The method according to claim 3, wherein obtaining the trained fusion model specifically comprises the following steps:
training the backbone fusion network and the local illuminance adaptive branch network, minimizing the value of the loss function during training until the number of training iterations reaches a preset threshold or the value of the loss function stabilizes within a preset range, finishing the training of the network model, and storing the parameters of the network model.
8. The method according to claim 7, wherein the training of the backbone fusion network and the local illuminance adaptive branch network specifically comprises:
in the training process of the local illuminance adaptive branch network, using the visible light images in the training data set as input;
and in the training process of the backbone fusion network, using the infrared and visible light images in the training data set as input.
CN202211325390.0A, filed 2022-10-27 (priority 2022-10-27): Illumination self-adaptive infrared and visible light image fusion method in night scene; status: Pending; published as CN115689960A (en)

Priority Applications (1)

Application Number: CN202211325390.0A (CN115689960A (en)); Priority Date: 2022-10-27; Filing Date: 2022-10-27; Title: Illumination self-adaptive infrared and visible light image fusion method in night scene


Publications (1)

Publication Number Publication Date
CN115689960A (en) 2023-02-03

Family

ID=85099455


Country Status (1)

CN: CN115689960A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481214A (en) * 2017-08-29 2017-12-15 北京华易明新科技有限公司 A kind of twilight image and infrared image fusion method
CN113298744A (en) * 2021-06-07 2021-08-24 长春理工大学 End-to-end infrared and visible light image fusion method
CN114693712A (en) * 2022-04-08 2022-07-01 重庆邮电大学 Dark vision and low-illumination image edge detection method based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DEPENG ZHU ET AL: "IPLF: A Novel Image Pair Learning Fusion Network for Infrared and Visible Image", IEEE Sensors Journal, vol. 22, no. 9, 1 May 2022, pages 8808-8817, XP011907049, DOI: 10.1109/JSEN.2022.3161733 *
JIAYI MA ET AL: "Infrared and visible image fusion via detail preserving adversarial learning", Information Fusion, 22 July 2019, pages 85-98, XP085828974, DOI: 10.1016/j.inffus.2019.07.005 *
LINFENG TANG ET AL: "PIAFusion: A progressive infrared and visible image fusion network based on illumination aware", Information Fusion, 29 March 2022, pages 79-92, XP087031519, DOI: 10.1016/j.inffus.2022.03.007 *
DENG MINGHUI: "New-Generation Information Hiding Technology: Research on Robust Digital Image Watermarking", 31 May 2010, pages 167-169 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116033279A (en) * 2023-03-23 2023-04-28 长春理工大学 Near infrared image colorization method, system and equipment for night monitoring camera
CN116823686A (en) * 2023-04-28 2023-09-29 长春理工大学重庆研究院 Night infrared and visible light image fusion method based on image enhancement
CN116823686B (en) * 2023-04-28 2024-03-08 长春理工大学重庆研究院 Night infrared and visible light image fusion method based on image enhancement
CN116363036A (en) * 2023-05-12 2023-06-30 齐鲁工业大学(山东省科学院) Infrared and visible light image fusion method based on visual enhancement
CN116363036B (en) * 2023-05-12 2023-10-10 齐鲁工业大学(山东省科学院) Infrared and visible light image fusion method based on visual enhancement
CN117853962A (en) * 2024-03-07 2024-04-09 国网江西省电力有限公司电力科学研究院 Single-double neighborhood edge detection-based porcelain insulator micro-light infrared fusion sensing method

Similar Documents

Publication Publication Date Title
CN115689960A (en) Illumination self-adaptive infrared and visible light image fusion method in night scene
Ma et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization
Lu et al. Underwater image super-resolution by descattering and fusion
CN116823686B (en) Night infrared and visible light image fusion method based on image enhancement
CN112184604B (en) Color image enhancement method based on image fusion
WO2021164234A1 (en) Image processing method and image processing device
CN109993091A (en) A kind of monitor video object detection method eliminated based on background
CN112184646B (en) Image fusion method based on gradient domain oriented filtering and improved PCNN
CN111582074A (en) Monitoring video leaf occlusion detection method based on scene depth information perception
CN111311503A (en) Night low-brightness image enhancement system
Chen et al. Visual depth guided image rain streaks removal via sparse coding
CN114862707B (en) Multi-scale feature restoration image enhancement method, device and storage medium
CN114187210B (en) Multi-mode dense fog removing method based on visible light-far infrared image
CN113506230B (en) Photovoltaic power station aerial image dodging processing method based on machine vision
Bhattacharya et al. D2bgan: A dark to bright image conversion model for quality enhancement and analysis tasks without paired supervision
CN107301625B (en) Image defogging method based on brightness fusion network
Lai et al. Single image dehazing with optimal transmission map
CN105303544A (en) Video splicing method based on minimum boundary distance
CN113421210A (en) Surface point cloud reconstruction method based on binocular stereo vision
CN117391987A (en) Dim light image processing method based on multi-stage joint enhancement mechanism
CN114821239B (en) Method for detecting plant diseases and insect pests in foggy environment
Huang et al. An end-to-end dehazing network with transitional convolution layer
Cui et al. Single image haze removal based on luminance weight prior
Hsu et al. Structure-transferring edge-enhanced grid dehazing network
CN114092369A (en) Image fusion method based on visual saliency mapping and least square optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination