CN115034999B - Image rain removing method based on rain and fog separation processing and multi-scale network - Google Patents
- Publication number
- CN115034999B CN115034999B CN202210796789.0A CN202210796789A CN115034999B CN 115034999 B CN115034999 B CN 115034999B CN 202210796789 A CN202210796789 A CN 202210796789A CN 115034999 B CN115034999 B CN 115034999B
- Authority
- CN
- China
- Prior art keywords
- rain
- image
- convolution layer
- convolution
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T—Image data processing or generation, in general (G—Physics; G06—Computing; calculating or counting)
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/73—Deblurring; Sharpening
- G06T2207/10004—Still image; Photographic image
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides an image rain removing method based on rain and fog separation processing and a multi-scale network, comprising the following steps: S1: acquiring the image to be de-rained; S2: constructing a physical model of the rain and fog image; S3: filtering the image with a guided filter to obtain a high-frequency rainy image and a low-frequency foggy image; S4: designing a rain removal module according to the physical model, inputting the high-frequency rainy image into the rain removal module, and outputting the rain-removed high-frequency image from the rain removal module; S5: designing a defogging module according to the physical model, inputting the low-frequency foggy image into the defogging module, and outputting the defogged low-frequency image from the defogging module. Test results on a plurality of data sets show that the method removes rain streaks and rain fog from images well while preserving image detail and color; compared with mainstream rain removal algorithms of recent years, SSIM improves by about 0.02-0.08 and PSNR by about 0.2-3.5 dB.
Description
Technical Field
The invention relates to the field of image rain removal, and in particular to an image rain removing method based on rain and fog separation processing and a multi-scale network.
Background
Outdoor intelligent imaging systems are now ubiquitous: electronic eyes for traffic monitoring, outdoor shop surveillance systems, cameras for automatic driving, and the like. In rainy weather the fields of view of these devices are blocked by dense rain streaks and by the fog formed by accumulated raindrops, which seriously degrades their imaging performance. In recent years many effective single-image rain removal methods have been proposed and their results have steadily improved. In 2015 a discriminative sparse coding method was proposed to separate the rain layer from the background layer, estimating the rain-streak and background matrices with dictionary learning to de-rain a single image. In 2016 a Gaussian mixture model was proposed to remove multi-scale, multi-directional rain streaks; the model handles streaks of different densities and directions, but an accurate Gaussian model is difficult to construct under varying conditions.
With the continuous improvement of deep learning frameworks and computer hardware, deep learning began to be applied widely to single-image rain removal after 2016. In 2017 a method was proposed that combines classical image processing with a convolutional neural network: rain streaks are extracted into a high-frequency layer by high-frequency filtering, and the network directly learns the mapping from the high-frequency layer of the rainy image to the high-frequency layer of the clear image. This method preserves background detail, but because its network structure is relatively simple, raindrops remain when processing images with heavy rain and fog. In 2018 a density-aware, multi-stream densely connected convolutional neural network was proposed; it predicts the clear image from the density and scale information of the rain streaks and adapts to streaks of all scales. These algorithms mostly target rain streaks alone and do not account for the rain fog present in real rainy scenes when synthesizing their data sets, so the networks perform well on synthetic images but poorly on real rain and fog images.
To remove rain streaks and rain fog simultaneously, work in 2017 modeled the rain in an image as the product of a rain-streak matrix and a binary mask, providing the network with more positional information about the streaks, and used a context dilation network to de-rain the image. In 2019, treating rain fog as the occlusion formed by rain streaks and the streak accumulation effect, a neural network combining a streak accumulation model with a GAN was used to estimate the parameters of an atmospheric scattering model and recover the clear image. These methods work well on synthetic rain and fog images but often leave rain traces and blurring when processing real rainy images. Researchers have since tried to adapt de-raining networks to real scenes through semi-supervised learning, transfer learning and the like. In 2020 a semi-supervised framework based on Gaussian processes was proposed for image rain removal; this non-parametric model has some effect on real rain streaks and fog. In 2021 a memory-oriented transfer learning framework was proposed that trains an encoder-decoder de-raining network using a self-supervised memory module and a self-training mechanism.
Semi-supervised and unsupervised methods can train a network with partly unlabeled rainy images, giving the model better adaptability to real scenes, but such models are difficult to train and prone to overfitting and even collapse.
Therefore, it is necessary to provide an image rain removing method based on a rain and fog separation process and a multi-scale network to solve the above technical problems.
Disclosure of Invention
The invention provides an image rain removing method based on rain and fog separation processing and a multi-scale network, which addresses the above problems: the rain streaks in the high-frequency layer and the fog in the low-frequency layer of a rain and fog image are removed separately by multi-scale convolutional neural networks, and the de-rained high-frequency layer and the defogged low-frequency layer are then superimposed to recover a clear image.
In order to solve the technical problems, the invention provides an image rain removing method based on rain and fog separation processing and a multi-scale network, which comprises the following steps:
s1: acquiring an image to be rain removed;
s2: constructing a physical model of the rain and fog image;
s3: filtering the image by using a guide filter to obtain a high-frequency rainy image and a low-frequency foggy image;
s4: designing a rain removal module according to the physical model, inputting the high-frequency rainy image into the rain removal module, and outputting the rain-removed high-frequency image from the rain removal module;
s5: designing a defogging module according to the physical model, inputting the low-frequency foggy image into the defogging module, and outputting the defogged low-frequency image from the defogging module;
s6: fusing the rain-removed high-frequency image and the defogged low-frequency image to obtain a clear image with rain and fog removed.
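Steps S3 and S6 above admit a compact sketch: a guided filter (in the spirit of He et al.'s guided filter) smooths the image into a low-frequency foggy layer, the residual is the high-frequency rainy layer, and the final fusion is their sum. This is a minimal self-guided, single-channel illustration; the radius `r`, regularizer `eps` and helper names are assumptions, not values given in the patent.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded to keep the shape."""
    xp = np.pad(x, r, mode="edge")
    return sliding_window_view(xp, (2 * r + 1, 2 * r + 1)).mean(axis=(-2, -1))

def guided_filter(I, p, r=8, eps=1e-2):
    """Classic guided filter; self-guided here (guide I equals input p)."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)  # edge-preserving smoothing

def split_high_low(img, r=8, eps=1e-2):
    """S3: low-frequency foggy layer plus high-frequency rainy residual."""
    low = guided_filter(img, img, r, eps)
    return img - low, low

# S6: after each branch processes its layer, fusion is simple addition:
# clear = derained_high + defogged_low
```

By construction the two layers sum back to the input, so the fusion in S6 exactly reconstructs the image wherever neither branch alters its layer.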
Preferably, the specific steps of S2 are as follows:
The rain and fog image I is modeled as the sum of a high-frequency image I_H and a low-frequency image I_L:
I = I_H + I_L
where
I_H = (J_H + S) · T
I_L = J_L · T + A · (1 - T)
Here S denotes the visible rain streaks, J the real scene image, J_H the high-frequency detail of the scene, J_L the low-frequency background of the scene, T the transmissivity of the fog layer between the camera system and the scene, and A the global atmospheric light. The real rain and fog image is therefore modeled as I = (J_H + S) · T + [J_L · T + A · (1 - T)].
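The model above can be exercised numerically. The tensor shapes and constants below (image size, streak density, T = 0.7, A = 0.9) are illustrative assumptions; the sketch only verifies that the layered form I_H + I_L equals the full model term by term.

```python
import numpy as np

# Hypothetical toy tensors, shape H x W x 3, values roughly in [0, 1].
H, W = 64, 64
rng = np.random.default_rng(0)
J_H = rng.uniform(-0.05, 0.05, (H, W, 3))    # high-frequency scene detail
J_L = rng.uniform(0.2, 0.8, (H, W, 3))       # low-frequency scene background
S = (rng.random((H, W, 3)) > 0.98) * 0.6     # sparse visible rain streaks
T = np.full((H, W, 1), 0.7)                  # fog-layer transmissivity
A = 0.9                                      # global atmospheric light

I_H = (J_H + S) * T                # high-frequency rainy layer
I_L = J_L * T + A * (1.0 - T)      # low-frequency foggy layer
I = I_H + I_L                      # observed rain and fog image

# The full model I = (J_H + S)·T + [J_L·T + A·(1 - T)] gives the same image:
I_full = (J_H + S) * T + (J_L * T + A * (1.0 - T))
assert np.allclose(I, I_full)
```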
Preferably, the specific steps of S4 are as follows:
the rain removal module performs rain removal on the high-frequency rainy image obtained in S3 and outputs the rain-removed high-frequency image;
the rain removal module comprises a convolution layer conv1, a denseblock module 1, a denseblock module 2, a denseblock module 3, a convolution layer conv2, a convolution layer conv3 and a convolution layer conv4, connected as follows: the output of the convolution layer conv1 serves as the input of the denseblock module 1, the denseblock module 2 and the denseblock module 3 respectively; the concat connection of the outputs of the denseblock module 1, the denseblock module 2 and the denseblock module 3 serves as the input of the convolution layer conv2; the convolution layer conv2, the convolution layer conv3 and the convolution layer conv4 are then connected in sequence;
the convolution kernel size of the convolution layer conv1 is 3×3, the number of input channels is 3, the number of output channels is 128, the stride is 1×1, and a ReLU activation function follows;
the denseblock module 1, the denseblock module 2 and the denseblock module 3 share the same structure: each takes the output of conv1 as input and comprises four convolution layers convd1, convd2, convd3 and convd4 connected in sequence;
the input of each of the four convolution layers convd1, convd2, convd3 and convd4 is the concat connection of the block input with the outputs of all preceding convolution layers;
the numbers of input channels of the four convolution layers convd1, convd2, convd3 and convd4 are 128, 256, 384 and 512 in sequence;
the numbers of output channels of the four convolution layers convd1, convd2, convd3 and convd4 are all 128, and the sliding stride is 1×1;
each of the four convolution layers convd1, convd2, convd3 and convd4 is followed by a ReLU activation function.
The sizes of four convolution kernels in the denseblock module 1 are all 3×3;
the four convolution kernels in the denseblock module 2 are all 5×5 in size;
the four convolution kernels in the denseblock module 3 are all 7×7 in size;
the convolution layer conv2 has 384 input channels, 64 output channels and a 3×3 convolution kernel, followed by a ReLU activation function;
the convolution layer conv3 has 64 input channels, 64 output channels and a 3×3 convolution kernel, followed by a ReLU activation function;
the convolution layer conv4 has 64 input channels, 3 output channels and a 3×3 convolution kernel.
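The rain removal module described above can be sketched in PyTorch. Two details are assumptions not stated in the text: "same" padding (k//2) so spatial size is preserved, and that the concat feeding conv2 takes the three denseblock outputs (3 × 128 = 384 channels, which matches conv2's stated input width).

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Four convs of kernel size k; conv i sees the concat of the block input
    and all earlier conv outputs (input widths 128/256/384/512, outputs 128)."""
    def __init__(self, k):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(128 * (i + 1), 128, k, stride=1, padding=k // 2)
            for i in range(4))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.relu(conv(torch.cat(feats, dim=1))))
        return feats[-1]            # 128-channel output of convd4

class DerainModule(nn.Module):
    """conv1 -> three denseblocks (kernels 3/5/7) -> concat -> conv2..conv4."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 128, 3, padding=1)
        self.blocks = nn.ModuleList(DenseBlock(k) for k in (3, 5, 7))
        self.conv2 = nn.Conv2d(384, 64, 3, padding=1)
        self.conv3 = nn.Conv2d(64, 64, 3, padding=1)
        self.conv4 = nn.Conv2d(64, 3, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f = self.relu(self.conv1(x))
        y = torch.cat([b(f) for b in self.blocks], dim=1)  # 3 x 128 = 384 ch
        y = self.relu(self.conv2(y))
        y = self.relu(self.conv3(y))
        return self.conv4(y)        # rain-removed high-frequency image
```

The three kernel sizes give the module its multi-scale receptive fields while every branch keeps the same channel width, which is what makes the 384-channel concat line up with conv2.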
Preferably, the specific steps of S5 are as follows:
the defogging module processes the low-frequency foggy image obtained in S3 and outputs the defogged low-frequency image;
the defogging module comprises a first convolution layer, a second convolution layer, a first fusion layer, a third convolution layer, a fourth convolution layer, a second fusion layer, a fifth convolution layer, a sixth convolution layer, a third fusion layer and a seventh convolution layer connected in sequence, and each convolution layer is followed by a ReLU activation function.
The convolution kernel size of the first convolution layer is 1×1, with 3 input channels and 3 output channels;
the convolution kernel size of the second convolution layer is 3×3, with 3 input channels and 3 output channels;
the first fusion layer fuses the outputs of the first and second convolution layers by concat, giving 6 output channels;
the convolution kernel size of the third convolution layer is 3×3, with 6 input channels and 3 output channels;
the convolution kernel size of the fourth convolution layer is 5×5, with 3 input channels and 3 output channels;
the second fusion layer fuses the outputs of the second, third and fourth convolution layers by concat, giving 9 output channels;
the convolution kernel size of the fifth convolution layer is 5×5, with 9 input channels and 3 output channels;
the convolution kernel size of the sixth convolution layer is 7×7, with 3 input channels and 3 output channels;
the third fusion layer fuses the outputs of the first through sixth convolution layers by concat, giving 18 output channels;
the convolution kernel size of the seventh convolution layer is 3×3, with 18 input channels and 3 output channels.
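The defogging module above can likewise be sketched in PyTorch. The text says the layers are "sequentially connected"; the chain below feeds each convolution the output of the previous convolution (or fusion layer), with "same" padding assumed so the channel counts stated for each fusion layer work out.

```python
import torch
import torch.nn as nn

class DefogModule(nn.Module):
    """Seven small convolutions with three concat fusion layers; every
    convolution is followed by a ReLU, as specified for the module."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(3, 3, 1)              # 1x1, 3 -> 3
        self.c2 = nn.Conv2d(3, 3, 3, padding=1)   # 3x3, 3 -> 3
        self.c3 = nn.Conv2d(6, 3, 3, padding=1)   # 3x3, 6 -> 3
        self.c4 = nn.Conv2d(3, 3, 5, padding=2)   # 5x5, 3 -> 3
        self.c5 = nn.Conv2d(9, 3, 5, padding=2)   # 5x5, 9 -> 3
        self.c6 = nn.Conv2d(3, 3, 7, padding=3)   # 7x7, 3 -> 3
        self.c7 = nn.Conv2d(18, 3, 3, padding=1)  # 3x3, 18 -> 3
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y1 = self.relu(self.c1(x))
        y2 = self.relu(self.c2(y1))
        f1 = torch.cat([y1, y2], dim=1)                   # fusion 1: 6 ch
        y3 = self.relu(self.c3(f1))
        y4 = self.relu(self.c4(y3))
        f2 = torch.cat([y2, y3, y4], dim=1)               # fusion 2: 9 ch
        y5 = self.relu(self.c5(f2))
        y6 = self.relu(self.c6(y5))
        f3 = torch.cat([y1, y2, y3, y4, y5, y6], dim=1)   # fusion 3: 18 ch
        return self.relu(self.c7(f3))   # defogged low-frequency image
```

The growing kernel sizes (1, 3, 5, 7) mirror the multi-scale design of the rain removal branch while keeping the per-layer channel width at 3, so the module stays very light.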
Compared with the related art, the image rain removing method based on the rain and fog separation processing and the multi-scale network has the following beneficial effects:
the invention provides an image rain removing method based on rain and fog separation processing and a multi-scale network, and test results on a plurality of data sets show that the method can well remove rain stripes and rain and fog in images and retain details and colors of the images, and compared with a mainstream rain removing algorithm in recent years, SSIM is improved by about 0.02-0.08, and PSNR is improved by about 0.2-3.5 dB.
Drawings
FIG. 1 is a schematic diagram of the overall network architecture of the method of the present invention;
FIG. 2 is a schematic diagram of the structure of a rain removal network according to the method of the present invention;
FIG. 3 is a schematic diagram of the defogging network of the method of the present invention;
FIG. 4 is a schematic diagram of the denseblock module structure of the method of the present invention;
FIG. 5 shows results of the present invention on synthetic rain and fog images, where (a), (c) and (e) are rainy images and (b), (d) and (f) are the corresponding results.
FIG. 6 shows results of the present invention on real rain and fog images, where (a), (c) and (e) are rainy images and (b), (d) and (f) are the corresponding results.
Detailed Description
The invention will be further described with reference to the drawings and embodiments.
First embodiment
Referring to FIGS. 1-6 in combination: FIG. 1 is a schematic diagram of the overall network structure of the method of the present invention; FIG. 2 is a schematic diagram of the rain removal network; FIG. 3 is a schematic diagram of the defogging network; FIG. 4 is a schematic diagram of the denseblock module structure; FIG. 5 shows results on synthetic rain and fog images, where (a), (c) and (e) are rainy images and (b), (d) and (f) are the corresponding results; FIG. 6 shows results on real rain and fog images, with panels labeled likewise. An image rain removing method based on rain and fog separation processing and a multi-scale network comprises the following steps:
s1: acquiring an image to be rain removed;
s2: constructing a physical model of the rain and fog image;
s3: filtering the image by using a guide filter to obtain a high-frequency rainy image and a low-frequency foggy image;
s4: designing a rain removal module according to the physical model, inputting the high-frequency rainy image into the rain removal module, and outputting the rain-removed high-frequency image from the rain removal module;
s5: designing a defogging module according to the physical model, inputting the low-frequency foggy image into the defogging module, and outputting the defogged low-frequency image from the defogging module;
s6: fusing the rain-removed high-frequency image and the defogged low-frequency image to obtain a clear image with rain and fog removed.
Second embodiment
The image rain removing scheme of this embodiment, based on the first embodiment, further defines the specific steps of S2 as follows:
The rain and fog image I is modeled as the sum of a high-frequency image I_H and a low-frequency image I_L:
I = I_H + I_L
where
I_H = (J_H + S) · T
I_L = J_L · T + A · (1 - T)
Here S denotes the visible rain streaks, J the real scene image, J_H the high-frequency detail of the scene, J_L the low-frequency background of the scene, T the transmissivity of the fog layer between the camera system and the scene, and A the global atmospheric light. The real rain and fog image is therefore modeled as I = (J_H + S) · T + [J_L · T + A · (1 - T)].
Third embodiment
The image rain removing scheme of this embodiment, based on the first embodiment, further defines the specific steps of S4 as follows:
the rain removal module performs rain removal on the high-frequency rainy image obtained in S3 and outputs the rain-removed high-frequency image;
the rain removal module comprises a convolution layer conv1, a denseblock module 1, a denseblock module 2, a denseblock module 3, a convolution layer conv2, a convolution layer conv3 and a convolution layer conv4, connected as follows: the output of the convolution layer conv1 serves as the input of the denseblock module 1, the denseblock module 2 and the denseblock module 3 respectively; the concat connection of the outputs of the denseblock module 1, the denseblock module 2 and the denseblock module 3 serves as the input of the convolution layer conv2; the convolution layer conv2, the convolution layer conv3 and the convolution layer conv4 are then connected in sequence;
the convolution kernel size of the convolution layer conv1 is 3×3, the number of input channels is 3, the number of output channels is 128, the stride is 1×1, and a ReLU activation function follows;
the denseblock module 1, the denseblock module 2 and the denseblock module 3 share the same structure: each takes the output of conv1 as input and comprises four convolution layers convd1, convd2, convd3 and convd4 connected in sequence;
the input of each of the four convolution layers convd1, convd2, convd3 and convd4 is the concat connection of the block input with the outputs of all preceding convolution layers;
the numbers of input channels of the four convolution layers convd1, convd2, convd3 and convd4 are 128, 256, 384 and 512 in sequence;
the numbers of output channels of the four convolution layers convd1, convd2, convd3 and convd4 are all 128, and the sliding stride is 1×1;
each of the four convolution layers convd1, convd2, convd3 and convd4 is followed by a ReLU activation function.
The sizes of four convolution kernels in the denseblock module 1 are all 3×3;
the four convolution kernels in the denseblock module 2 are all 5×5 in size;
the four convolution kernels in the denseblock module 3 are all 7×7 in size;
the convolution layer conv2 has 384 input channels, 64 output channels and a 3×3 convolution kernel, followed by a ReLU activation function;
the convolution layer conv3 has 64 input channels, 64 output channels and a 3×3 convolution kernel, followed by a ReLU activation function;
the convolution layer conv4 has 64 input channels, 3 output channels and a 3×3 convolution kernel.
Fourth embodiment
The image rain removing scheme of this embodiment, based on the first embodiment, further defines the specific steps of S5 as follows:
the defogging module processes the low-frequency foggy image obtained in S3 and outputs the defogged low-frequency image;
the defogging module comprises a first convolution layer, a second convolution layer, a first fusion layer, a third convolution layer, a fourth convolution layer, a second fusion layer, a fifth convolution layer, a sixth convolution layer, a third fusion layer and a seventh convolution layer connected in sequence, and each convolution layer is followed by a ReLU activation function.
The convolution kernel size of the first convolution layer is 1×1, with 3 input channels and 3 output channels;
the convolution kernel size of the second convolution layer is 3×3, with 3 input channels and 3 output channels;
the first fusion layer fuses the outputs of the first and second convolution layers by concat, giving 6 output channels;
the convolution kernel size of the third convolution layer is 3×3, with 6 input channels and 3 output channels;
the convolution kernel size of the fourth convolution layer is 5×5, with 3 input channels and 3 output channels;
the second fusion layer fuses the outputs of the second, third and fourth convolution layers by concat, giving 9 output channels;
the convolution kernel size of the fifth convolution layer is 5×5, with 9 input channels and 3 output channels;
the convolution kernel size of the sixth convolution layer is 7×7, with 3 input channels and 3 output channels;
the third fusion layer fuses the outputs of the first through sixth convolution layers by concat, giving 18 output channels;
the convolution kernel size of the seventh convolution layer is 3×3, with 18 input channels and 3 output channels.
Compared with the related art, the image rain removing method based on the rain and fog separation processing and the multi-scale network has the following beneficial effects:
Test results on a plurality of data sets show that the method removes rain streaks and rain fog from images well while preserving image detail and color; compared with mainstream rain removal algorithms of recent years, SSIM improves by about 0.02-0.08 and PSNR by about 0.2-3.5 dB.
The foregoing description is only illustrative of the present invention and is not intended to limit its scope; all equivalent structures or equivalent processes derived therefrom, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.
Claims (2)
1. An image rain removing method based on rain and fog separation processing and a multi-scale network, characterized by comprising the following steps:
s1: acquiring an image to be rain removed;
s2: constructing a physical model of the rain and fog image;
s3: filtering the image by using a guide filter to obtain a high-frequency rainy image and a low-frequency foggy image;
s4: according to the physical model design rain removing module, input the rain removing module with the high frequency rain image, then output the high frequency rain removing image by the rain removing module, the specific step of S4 is as follows:
a rain removing module is used for carrying out rain removing processing on the high-frequency rain-containing image obtained in the step S3 and outputting the high-frequency rain-removing image;
the rain removing module comprises a convolution layer conv1, a dense block module 2, a dense block module 3, a convolution layer conv2, a convolution layer conv3 and a convolution layer conv4; the connection mode is as follows: the output of the convolution layer cov 1 is used as the input of the denseblock module 1, the denseblock module 2 and the denseblock module 3 respectively; the concat of the inputs of the denseblock module 1, the denseblock module 2 and the denseblock module 3 are connected as the input of the convolution layer conv 2; then, a convolution layer conv2, a convolution layer conv3 and a convolution layer conv4 are connected in sequence;
the convolution kernel size of the convolution layer conv1 is 3×3, the number of input channels is 3, the number of output channels is 128, the stride is 1×1, and a ReLU activation function follows;
the denseblock module 1, the denseblock module 2 and the denseblock module 3 share the same structure: each takes the output of conv1 as input and comprises four convolution layers convd1, convd2, convd3 and convd4 connected in sequence;
the input of each of the four convolution layers convd1, convd2, convd3 and convd4 is the concat connection of the block input with the outputs of all preceding convolution layers;
the numbers of input channels of the four convolution layers convd1, convd2, convd3 and convd4 are 128, 256, 384 and 512 in sequence;
the numbers of output channels of the four convolution layers convd1, convd2, convd3 and convd4 are all 128, and the sliding stride is 1×1;
each of the four convolution layers convd1, convd2, convd3 and convd4 is followed by a ReLU activation function;
the sizes of four convolution kernels in the denseblock module 1 are all 3×3;
the four convolution kernels in the denseblock module 2 are all 5×5 in size;
the four convolution kernels in the denseblock module 3 are all 7×7 in size;
the convolution layer conv2 has 384 input channels, 64 output channels and a 3×3 convolution kernel, followed by a ReLU activation function;
the convolution layer conv3 has 64 input channels, 64 output channels and a 3×3 convolution kernel, followed by a ReLU activation function;
the convolution layer conv4 has 64 input channels, 3 output channels and a 3×3 convolution kernel;
s5: designing a defogging module according to the physical model, inputting the low-frequency foggy image into the defogging module, and outputting the defogged low-frequency image from the defogging module, the specific steps of S5 being as follows:
the defogging module processes the low-frequency foggy image obtained in S3 and outputs the defogged low-frequency image;
the defogging module comprises a first convolution layer, a second convolution layer, a first fusion layer, a third convolution layer, a fourth convolution layer, a second fusion layer, a fifth convolution layer, a sixth convolution layer, a third fusion layer and a seventh convolution layer connected in sequence, each convolution layer being followed by a ReLU activation function,
the first convolution layer has a 1×1 convolution kernel, 3 input channels and 3 output channels;
the second convolution layer has a 3×3 convolution kernel, 3 input channels and 3 output channels;
the first fusion layer fuses the outputs of the first convolution layer and the second convolution layer in concat mode, giving 6 output channels;
the third convolution layer has a 3×3 convolution kernel, 6 input channels and 3 output channels;
the fourth convolution layer has a 5×5 convolution kernel, 3 input channels and 3 output channels;
the second fusion layer fuses the outputs of the second, third and fourth convolution layers in concat mode, giving 9 output channels;
the fifth convolution layer has a 5×5 convolution kernel, 9 input channels and 3 output channels;
the sixth convolution layer has a 7×7 convolution kernel, 3 input channels and 3 output channels;
the third fusion layer fuses the outputs of the first through sixth convolution layers in concat mode, giving 18 output channels;
the seventh convolution layer has a 3×3 convolution kernel, 18 input channels and 3 output channels;
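The defogging module's wiring can likewise be sketched. A minimal PyTorch sketch, assuming "sequentially connected" means each convolution feeds the next layer while the fusion layers concatenate the listed earlier outputs (the stated channel counts — 6, 9, 18 — are consistent with this reading), with "same" padding assumed; the names are ours:

```python
import torch
import torch.nn as nn

class DefogModule(nn.Module):
    """Seven convolutions with three concat skip fusions; channel counts
    (6, 9, 18) follow the claim text."""
    def __init__(self):
        super().__init__()
        def conv(cin, cout, k):
            # every convolution is followed by a ReLU, per the claim
            return nn.Sequential(nn.Conv2d(cin, cout, k, 1, k // 2), nn.ReLU())
        self.c1 = conv(3, 3, 1)
        self.c2 = conv(3, 3, 3)
        self.c3 = conv(6, 3, 3)
        self.c4 = conv(3, 3, 5)
        self.c5 = conv(9, 3, 5)
        self.c6 = conv(3, 3, 7)
        self.c7 = conv(18, 3, 3)

    def forward(self, x):
        o1 = self.c1(x)
        o2 = self.c2(o1)
        f1 = torch.cat([o1, o2], dim=1)                   # 6 channels
        o3 = self.c3(f1)
        o4 = self.c4(o3)
        f2 = torch.cat([o2, o3, o4], dim=1)               # 9 channels
        o5 = self.c5(f2)
        o6 = self.c6(o5)
        f3 = torch.cat([o1, o2, o3, o4, o5, o6], dim=1)   # 18 channels
        return self.c7(f3)
```

The module maps a 3-channel low-frequency image to a 3-channel defogged image of the same spatial size.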
S6: fusing the rain-removed high-frequency image and the defogged low-frequency image to obtain a clear image with rain and fog removed.
2. The image rain removing method based on rain and fog separation processing and a multi-scale network according to claim 1, wherein the specific steps of S2 are as follows:
modeling the rain and fog image as the sum of a high-frequency image I_H and a low-frequency image I_L:

I = I_H + I_L

where

I_H = (J_H + S)·T

I_L = J_L·T + A·(1 - T)

in which I denotes the rain and fog image, S denotes the rain streaks, J denotes the clear scene image, J_H denotes its high-frequency details, J_L denotes its low-frequency background, T denotes the transmissivity of the fog between the camera system and the scene, and A denotes the global atmospheric light; the real rain and fog image is therefore modeled as

I = (J_H + S)·T + [J_L·T + A·(1 - T)].
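The composed model can be checked with a few lines of Python. A hypothetical per-pixel helper (the name and the [0, 1] value range are our assumptions, not part of the claim):

```python
import math

def rain_fog_pixel(j_h, j_l, s, t, a):
    """Compose one pixel of the rain-fog model I = I_H + I_L.

    j_h: high-frequency detail of the clean scene (J_H)
    j_l: low-frequency background of the clean scene (J_L)
    s:   rain-streak intensity (S)
    t:   fog transmissivity between camera and scene (T)
    a:   global atmospheric light (A)
    """
    i_h = (j_h + s) * t            # rain streaks sit in the high-frequency part
    i_l = j_l * t + a * (1.0 - t)  # fog attenuates the background and adds airlight
    return i_h + i_l

# Sanity check: with no rain (S = 0) and full transmissivity (T = 1)
# the model reduces to the clean scene J = J_H + J_L.
assert math.isclose(rain_fog_pixel(0.3, 0.5, 0.0, 1.0, 0.9), 0.3 + 0.5)
```

This separation is what motivates the pipeline: rain streaks only enter the high-frequency term I_H, while fog only enters the low-frequency term I_L, so the two can be removed by different modules and the results summed.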
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210796789.0A CN115034999B (en) | 2022-07-06 | 2022-07-06 | Image rain removing method based on rain and fog separation processing and multi-scale network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115034999A (en) | 2022-09-09
CN115034999B (en) | 2024-03-19
Family
ID=83128485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210796789.0A Active CN115034999B (en) | 2022-07-06 | 2022-07-06 | Image rain removing method based on rain and fog separation processing and multi-scale network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115034999B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109064419A (en) * | 2018-07-12 | 2018-12-21 | 四川大学 | Single-image rain removal method based on WLS filtering and multi-scale sparse representation |
CN109360169A (en) * | 2018-10-24 | 2019-02-19 | 西南交通大学 | Signal processing method for single-image rain removal and defogging |
CN109447918A (en) * | 2018-11-02 | 2019-03-08 | 北京交通大学 | Single-image rain removal method based on attention mechanism |
CN110378849A (en) * | 2019-07-09 | 2019-10-25 | 闽江学院 | Image defogging and rain removal method based on a deep residual network |
CN110866879A (en) * | 2019-11-13 | 2020-03-06 | 江西师范大学 | Image rain removal method based on multi-density rain streak perception |
CN111275627A (en) * | 2019-02-27 | 2020-06-12 | 中国科学院沈阳自动化研究所 | Image snow removal algorithm fusing a snow model and deep learning |
CN111815526A (en) * | 2020-06-16 | 2020-10-23 | 中国地质大学(武汉) | Rain streak removal method and system for rain images based on image filtering and CNN |
CN113628133A (en) * | 2021-07-28 | 2021-11-09 | 武汉三江中电科技有限责任公司 | Rain and fog removal method and device based on video images |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022067653A1 (en) * | 2020-09-30 | 2022-04-07 | 京东方科技集团股份有限公司 | Image processing method and apparatus, device, video processing method, and storage medium |
Non-Patent Citations (4)
Title |
---|
Efficient rain-fog model for rain detection and removal; Fu, Fangfa et al.; Journal of Electronic Imaging; 2020-03-01; 1-23 *
Frequency-Based Haze and Rain Removal Network (FHRR-Net) with Deep Convolutional Encoder-Decoder; Dong Hwan Kim et al.; MDPI; 2021-03-23; 1-18 *
Single image rain removal method using a multi-scale dense temporal convolutional network; Zhao Jiaxing; Wang Xiali; Wang Lihong; Cao Chenjie; Computer Technology and Development; 2020-01-10 (05); full text *
Wei Hao; Li Hongru; Deng Guoliang; Zhou Shouhuan; Image rain removal method based on rain and fog separation processing and multi-scale network; Application Research of Computers; 2022; 1-5 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||