
CN114936972B - Remote sensing image thin cloud removing method based on multipath perception gradient - Google Patents


Info

Publication number: CN114936972B (China)
Application number: CN202210357921.8A
Other versions: CN114936972A (Chinese)
Prior art keywords: remote sensing image, cloud, thin cloud, feature
Legal status: Active (application granted)
Inventors: 王晓宇 (Wang Xiaoyu), 刘宇航 (Liu Yuhang), 张严 (Zhang Yan), 佘玉成 (She Yucheng)
Assignee (original and current): Aerospace Dongfanghong Satellite Co Ltd


Classifications

    • G06T5/77 Image enhancement or restoration: retouching; inpainting; scratch removal
    • G06T5/73 Image enhancement or restoration: deblurring; sharpening
    • G06T7/13 Image analysis: edge detection
    • G06N3/045 Neural network architectures: combinations of networks
    • G06N3/048 Neural network architectures: activation functions
    • G06N3/08 Neural networks: learning methods
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V10/82 Image or video recognition using neural networks
    • G06T2207/10032 Image acquisition modality: satellite or aerial image; remote sensing
    • G06T2207/20081 Special algorithmic details: training; learning
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a remote sensing image thin cloud removal method based on multi-path perceptual gradients, which comprises the following steps: establishing a remote sensing image thin cloud removal data set and dividing it proportionally into a training set, a validation set and a test set; constructing a perceptual gradient extraction module for extracting image thin-cloud features; constructing a cloud layer thickness estimation module for adaptively estimating cloud layer thickness; building a remote sensing image thin cloud removal network for converting a single thin-cloud remote sensing image into a clear remote sensing image; training the network on the data set with loss functions comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function; and importing the model parameters obtained after training into the network and inputting a single thin-cloud remote sensing image to realize thin cloud removal.

Description

Remote sensing image thin cloud removal method based on multi-path perceptual gradient
Technical Field
The invention relates to the technical field of image processing and deep learning, and in particular to a remote sensing image thin cloud removal method based on multi-path perceptual gradient.
Background
Optical remote sensing images captured by remote sensing satellites are often affected by cloud layers, which occlude key content, destroy detail information and distort color. This greatly reduces the utilization efficiency of optical remote sensing images and seriously hinders their interpretation, so that many remote sensing applications cannot proceed smoothly. Images affected by thick cloud have little usable value, but thin-cloud remote sensing images can be processed with suitable techniques to remove the influence of the thin cloud, which facilitates subsequent image processing and application.
Traditional remote sensing image thin cloud removal methods rely on image filtering, statistical priors and the like: the cloud influence in an image is removed by filtering, or the differences between cloudy and cloud-free images are analyzed statistically to provide prior information for the thin cloud removal task. Such methods have obvious limitations and cannot adapt to complex and changeable conditions.
With the rapid development of deep neural networks, remote sensing image cloud and haze removal methods built on deep convolutional neural networks have attracted wide attention. A convolutional neural network can extract image features and reconstruct image content to remove cloud and haze from remote sensing images; the difficulty lies in designing a network and modules that extract features suited to cloud removal while keeping the restored image real and natural. The network structure proposed here adaptively learns the mapping from thin-cloud remote sensing images to clear remote sensing images and thereby removes the thin cloud.
Disclosure of Invention
The invention solves the following technical problem: overcoming the defects of the prior art, it provides a remote sensing image thin cloud removal method based on multi-path perceptual gradients that requires no complex assumptions or priors, can directly recover a haze-free image from a hazy image, and is simple and easy to implement.
The technical scheme of the invention is as follows: a remote sensing image thin cloud removal method based on multi-path perceptual gradient, comprising the following steps:
1) Establishing a remote sensing image thin cloud removal data set comprising thin-cloud remote sensing images, clear remote sensing images and cloud layer thickness images, divided proportionally into a training set, a validation set and a test set;
2) Constructing a perceptual gradient extraction module for extracting image thin-cloud features;
3) Constructing a cloud layer thickness estimation module for adaptively estimating the cloud layer thickness;
4) Based on the perceptual gradient extraction module obtained in step 2) and the cloud layer thickness estimation module obtained in step 3), building a remote sensing image thin cloud removal network for converting a single thin-cloud remote sensing image into a clear remote sensing image;
5) Training the remote sensing image thin cloud removal network on the data set obtained in step 1), with loss functions comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function;
6) Importing the model parameters obtained after training into the remote sensing image thin cloud removal network and inputting a single thin-cloud remote sensing image to realize thin cloud removal.
In step 1), the remote sensing image thin cloud removal data set is constructed as follows:
11) Select n clear remote sensing images R and obtain thin-cloud remote sensing images C and cloud layer thickness images T by generating simulated thin cloud; crop the remote sensing images to size N×N, and form the remote sensing image thin cloud removal data set from corresponding triples of clear image R, thin-cloud image C and thickness image T, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, m is the number of images, and i and m are positive integers;
12) Divide the remote sensing image thin cloud removal data set into a training set, a validation set and a test set in the ratio p_1 : p_2 : p_3, where p_1, p_2 and p_3 are positive integers and p_1 > p_2, p_1 > p_3.
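For illustration, the split in step 12) can be realized with a minimal Python sketch; the file-naming scheme and the default ratio 6:2:2 (taken from embodiment 2 below) are assumptions:

```python
import random

def split_dataset(samples, p1=6, p2=2, p3=2, seed=0):
    """Split (R_i, C_i, T_i) triples into train/val/test by the ratio p1:p2:p3."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)          # fixed seed for a reproducible split
    total = p1 + p2 + p3
    n_train = len(samples) * p1 // total
    n_val = len(samples) * p2 // total
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

# Hypothetical file layout: one (clear, thin-cloud, thickness) triple per index.
triples = [(f"R_{i}.png", f"C_{i}.png", f"T_{i}.png") for i in range(1, 4001)]
train, val, test = split_dataset(triples)         # 2400 / 800 / 800 with 6:2:2
```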
In step 2), the perceptual gradient extraction module comprises a perceptual feature extraction unit, a gradient information extraction unit, a residual feature extraction unit and a residual connection.
The perceptual feature extraction unit extracts image features with a VGG19 network, imitating the way the human visual system extracts features at the perceptual level of an image; the n_2-th output of the n_1-th layer of the VGG19 network is taken as the perceptual feature information for the subsequent thin cloud removal task, where n_1 and n_2 are positive integers.
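A minimal PyTorch sketch of such a frozen VGG19 feature extractor follows; the use of torchvision's pretrained weights and the single truncation index standing in for (n_1, n_2) are assumptions, since the patent leaves these as parameters:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class PerceptualFeatureExtractor(nn.Module):
    """Frozen VGG19 prefix used as a fixed perceptual feature extractor."""
    def __init__(self, cut: int = 8):   # `cut` stands in for the (n1, n2) layer choice
        super().__init__()
        self.features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:cut]
        self.features.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)      # perceptual features are not trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, H, W), ImageNet-normalized; returns the perceptual feature map
        return self.features(x)
```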
The gradient information extraction unit applies a Sobel operator filter to the feature map as a convolution with stride d_1 to extract image gradient information, which contains cloud-related features; d_1 is a positive integer.
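The Sobel filtering can be implemented as a fixed depthwise convolution, as in the sketch below; combining the two directional responses into a gradient magnitude and the use of padding are assumptions, and `stride` plays the role of d_1:

```python
import torch
import torch.nn.functional as F

def sobel_gradient(feat: torch.Tensor, stride: int = 1) -> torch.Tensor:
    """Per-channel Sobel gradient magnitude of a (B, C, H, W) feature map."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=feat.device)
    ky = kx.t()                                     # vertical-gradient kernel
    c = feat.shape[1]
    wx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)     # one fixed kernel per channel
    wy = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(feat, wx, stride=stride, padding=1, groups=c)
    gy = F.conv2d(feat, wy, stride=stride, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)     # gradient magnitude
```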
The residual feature extraction unit consists of e residual units; each residual unit comprises s_1 convolution + ReLU activation function combinations, 1 feature calibration unit and 1 residual connection, with convolution kernels of size f×f and stride d_2, where e, s_1, f and d_2 are all positive integers.
The feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out.
Branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination. Its output is α_s, the convolution kernel size is z×z and the stride is x; branch 1 changes neither the feature map size nor the number of channels, and g, z and x are all positive integers.
Branch 2 performs no operation; its output is still the input α_in of the feature calibration unit.
Branch 3 assigns the same weight to the pixels within each channel of the feature map, realizing channel-level feature calibration; it consists of average pooling, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling takes the mean pixel value of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit expands the feature map from 1×1×C back to W×H×C, i.e. from 1×1 values to W×H identical values, so the input and output feature map sizes and channel numbers of branch 3 remain unchanged. The output of branch 3 is α_c, the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers.
The output α_out of the feature calibration unit is the product of the 3 branch output feature maps at corresponding pixels, as follows:
α_out = α_s ⊙ α_in ⊙ α_c
where α_out is the output of the feature calibration unit, α_s is the output of branch 1, α_in is the output of branch 2, α_c is the output of branch 3, and ⊙ denotes element-wise multiplication.
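A PyTorch sketch of the feature calibration unit and of the residual unit that wraps it is given below; the default depths g, v and s_1, the 3×3 kernels, and the 1×1 convolutions applied after pooling in branch 3 are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FeatureCalibrationUnit(nn.Module):
    """Three-branch feature calibration: α_out = α_s ⊙ α_in ⊙ α_c."""
    def __init__(self, channels: int, g: int = 2, v: int = 2, z: int = 3):
        super().__init__()
        pad = z // 2
        layers = []
        for _ in range(g):                          # branch 1: g conv + ReLU combinations
            layers += [nn.Conv2d(channels, channels, z, padding=pad), nn.ReLU(True)]
        layers += [nn.Conv2d(channels, channels, z, padding=pad), nn.Sigmoid()]
        self.pixel = nn.Sequential(*layers)         # per-pixel weights α_s
        layers = [nn.AdaptiveAvgPool2d(1)]          # branch 3: mean pooling to 1×1×C
        for _ in range(v):                          # v conv + ReLU combinations on 1×1 maps
            layers += [nn.Conv2d(channels, channels, 1), nn.ReLU(True)]
        layers += [nn.Conv2d(channels, channels, 1), nn.Sigmoid()]
        self.channel = nn.Sequential(*layers)       # per-channel weights α_c

    def forward(self, a_in: torch.Tensor) -> torch.Tensor:
        a_s = self.pixel(a_in)                      # branch 1: pixel-level calibration
        a_c = self.channel(a_in).expand_as(a_in)    # branch 3: expanded back to W×H×C
        return a_s * a_in * a_c                     # branch 2 is the identity

class ResidualUnit(nn.Module):
    """s_1 conv + ReLU layers, one calibration unit, one residual connection."""
    def __init__(self, channels: int, s1: int = 2, f: int = 3):
        super().__init__()
        body = []
        for _ in range(s1):
            body += [nn.Conv2d(channels, channels, f, padding=f // 2), nn.ReLU(True)]
        body.append(FeatureCalibrationUnit(channels))
        self.body = nn.Sequential(*body)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)                     # residual learning
```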
In step 3), the cloud layer thickness estimation module comprises an edge feature extraction part and a feature calibration part and is used to adaptively estimate the cloud layer thickness; its input is the thin-cloud remote sensing image C and its outputs are the predicted cloud layer thickness T̂ and a feature map φ_out.
The edge feature extraction part comprises w branches of identical structure, each consisting of the gradient information extraction unit and a residual unit from step 2), with the convolution kernels of the branches gradually increasing in size. For the r-th branch, the result of passing the input through the gradient information extraction unit and the residual unit is denoted φ_r, and the convolution kernel size is (2r+1)×(2r+1), where r ∈ (1, …, w) and r and w are positive integers. The outputs of each pair of adjacent branches are summed pixel by pixel, giving w−1 summed results; the j-th summed result is denoted δ_j, as follows:
δ_j = φ_j ⊕ φ_{j+1}
where j ∈ (1, …, w−1), j and w are positive integers, and ⊕ denotes summation of corresponding feature map elements;
The feature calibration part consists of w−1 feature calibration units; for the i-th branch of the feature calibration part, the output of the feature calibration unit is denoted π_i, as follows:
π_i = FC(δ_i);
where i ∈ (1, …, w−1) and FC(·) denotes the output of the feature calibration unit;
The outputs of the feature calibration part FC are concatenated along the channel dimension; the result, denoted φ_out, is the 1st output of the cloud layer thickness estimation module, as follows:
φ_out = concat(π_1, …, π_{w−1});
where concat(·) denotes concatenation of feature maps along the channel dimension;
The feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, the 2nd output of the cloud layer thickness estimation module, as follows:
T̂ = ReLU(conv(φ_out))
where conv(·) denotes a convolution with kernel size l×l and stride d_3, ReLU(·) denotes the ReLU activation function, and l and d_3 are positive integers.
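A simplified PyTorch sketch of this module's data flow follows; the stem convolution, the channel width, and the reduced gradient/residual stand-ins are assumptions, and the per-branch feature calibration is elided for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeBranch(nn.Module):
    """One edge-feature branch: gradient stand-in plus a residual block with a (2r+1)×(2r+1) kernel."""
    def __init__(self, channels: int, r: int):
        super().__init__()
        k = 2 * r + 1
        self.grad = nn.Conv2d(channels, channels, 3, padding=1, bias=False)  # GIE stand-in
        self.res = nn.Sequential(
            nn.Conv2d(channels, channels, k, padding=k // 2), nn.ReLU(True),
            nn.Conv2d(channels, channels, k, padding=k // 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.grad(x)
        return g + self.res(g)                       # residual learning

class CloudThicknessEstimator(nn.Module):
    def __init__(self, channels: int = 32, w: int = 6):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)   # lift image C to features
        self.branches = nn.ModuleList(EdgeBranch(channels, r) for r in range(1, w + 1))
        self.head = nn.Conv2d(channels * (w - 1), 1, 3, padding=1)

    def forward(self, c: torch.Tensor):
        x = self.stem(c)
        phi = [b(x) for b in self.branches]                         # φ_1 … φ_w
        delta = [phi[j] + phi[j + 1] for j in range(len(phi) - 1)]  # δ_j = φ_j ⊕ φ_{j+1}
        phi_out = torch.cat(delta, dim=1)                           # concat on channels
        t_hat = F.relu(self.head(phi_out))                          # T̂ = ReLU(conv(φ_out))
        return t_hat, phi_out
```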
In step 4), the remote sensing image thin cloud removal network is built as follows:
Using the perceptual gradient extraction module from step 2), the cloud layer thickness estimation module from step 3), the residual feature extraction unit and a Tanh activation function, a remote sensing image thin cloud removal network based on multi-path perceptual gradients is constructed to convert a single thin-cloud remote sensing image into a clear remote sensing image. The input of the network is a thin-cloud remote sensing image C, and its outputs are a predicted clear remote sensing image R̂ and a predicted cloud thickness image T̂.
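For orientation, a top-level wiring sketch in PyTorch is given below; the internal widths, the fusion by channel concatenation, and the simplified stand-in submodules are assumptions, since fig. 6 is not reproduced here:

```python
import torch
import torch.nn as nn

class ThinCloudRemovalNet(nn.Module):
    """Wiring sketch: thin-cloud image C -> (predicted clear image R̂, thickness T̂)."""
    def __init__(self, pg_channels: int = 32, thick_channels: int = 160):
        super().__init__()
        # stand-in for the perceptual gradient extraction module
        self.pg = nn.Sequential(nn.Conv2d(3, pg_channels, 3, padding=1), nn.ReLU(True))
        # stand-in for the cloud layer thickness estimation module
        self.thick_feat = nn.Sequential(nn.Conv2d(3, thick_channels, 3, padding=1), nn.ReLU(True))
        self.thick_head = nn.Conv2d(thick_channels, 1, 3, padding=1)
        # reconstruction: residual feature extraction stand-in + Tanh output
        self.recon = nn.Sequential(
            nn.Conv2d(pg_channels + thick_channels, 64, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, c: torch.Tensor):
        phi_out = self.thick_feat(c)
        t_hat = torch.relu(self.thick_head(phi_out))    # predicted cloud thickness T̂
        feats = torch.cat([self.pg(c), phi_out], dim=1)
        r_hat = self.recon(feats)                       # predicted clear image R̂
        return r_hat, t_hat
```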
In step 5), the feature loss function L_F is defined over VGG19 feature maps of the clear and predicted images, where θ(·) denotes an output feature map of the VGG19 network, u indexes the convolution layers of VGG19, q, t and y index the feature map length, width and channels, O is the number of VGG19 layers used, W, H and C are the feature map length, width and channel sizes, R denotes the clear remote sensing image and R̂ the predicted clear remote sensing image; u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers.
In step 5), the gradient loss function L_G is defined over image gradients, where ∇(·) denotes the image gradient extracted with the Prewitt operator, q, t and y index the feature map length, width and channels, W, H and C are the feature map length, width and channel sizes; q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers.
In step 5), the cloud layer thickness loss function L_R is defined analogously, where R denotes the clear remote sensing image and R̂ the predicted clear remote sensing image.
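The explicit formulas for these three losses did not survive extraction. A plausible reconstruction consistent with the symbol definitions above is sketched below; the choice of the L1 norm, the normalization constants, and the use of the thickness maps T and T̂ in L_R (its name suggests a thickness comparison, although the surviving text only names R and R̂) are assumptions:

```latex
L_F = \frac{1}{O W H C} \sum_{u=1}^{O} \sum_{q=1}^{W} \sum_{t=1}^{H} \sum_{y=1}^{C}
      \left| \theta_u(R)_{q,t,y} - \theta_u(\hat{R})_{q,t,y} \right|

L_G = \frac{1}{W H C} \sum_{q=1}^{W} \sum_{t=1}^{H} \sum_{y=1}^{C}
      \left| \nabla(R)_{q,t,y} - \nabla(\hat{R})_{q,t,y} \right|

L_R = \frac{1}{W H} \sum_{q=1}^{W} \sum_{t=1}^{H} \left| T_{q,t} - \hat{T}_{q,t} \right|
```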
In step 6), the thin cloud removal is carried out as follows: train the remote sensing image thin cloud removal network on the remote sensing image thin cloud removal data set, using a loss comprising the feature loss function, the gradient loss function and the cloud layer thickness loss function, for b epochs in total; then import the model parameters obtained after training into the remote sensing image thin cloud removal network and input a single thin-cloud remote sensing image to complete remote sensing image thin cloud removal.
The technical scheme provided by the invention has the beneficial effects that:
1. Traditional remote sensing image thin cloud removal methods rely on filtering or assumed priors and physical model derivation, and their limitations are obvious. The invention provides a remote sensing image thin cloud removal method based on multi-path perceptual gradients: a perceptual gradient extraction module and a cloud layer thickness estimation module are constructed to extract image thin-cloud features and adaptively estimate the cloud layer thickness, and a clear remote sensing image is reconstructed; the thin cloud is removed more thoroughly, without a specific physical model or prior knowledge;
2. The method removes thin cloud from a single remote sensing image without requiring other images as references; the remote sensing image thin cloud removal network is trained with a feature loss function, a gradient loss function and a cloud layer thickness loss function, completing high-quality thin cloud removal of a single remote sensing image, so the method is simple, feasible and efficient;
3. The method is highly robust and can be deployed in embedded equipment as an image preprocessing application to achieve real-time remote sensing image thin cloud removal; its application range is wide, and the cloud-removal effect is real and natural.
Drawings
FIG. 1 is a flow chart of the remote sensing image thin cloud removal method based on multi-path perceptual gradient;
FIG. 2 is a schematic diagram of a perceptual gradient extraction module;
FIG. 3 is a schematic diagram of a residual feature extraction unit;
FIG. 4 is a schematic diagram of a feature calibration unit;
FIG. 5 is a schematic diagram of a cloud layer thickness estimation module;
FIG. 6 is a schematic diagram of the structure of the remote sensing image thin cloud removal network based on multi-path perceptual gradient.
Detailed Description
The method comprises the following steps:
1) Establish a remote sensing image thin cloud removal data set comprising thin-cloud remote sensing images, clear remote sensing images and cloud layer thickness images, divided proportionally into a training set, a validation set and a test set;
2) Build a perceptual gradient extraction module comprising a perceptual feature extraction unit, a gradient information extraction unit and a residual feature extraction unit, for extracting image thin-cloud features;
3) Build a cloud layer thickness estimation module comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness;
4) Based on the perceptual gradient extraction module of step 2) and the cloud layer thickness estimation module of step 3), build a remote sensing image thin cloud removal network for converting a single thin-cloud remote sensing image into a clear remote sensing image;
5) Train the remote sensing image thin cloud removal network on the remote sensing image thin cloud removal data set of step 1), with loss functions comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function;
6) Import the model parameters obtained after training into the remote sensing image thin cloud removal network and input a single thin-cloud remote sensing image to realize thin cloud removal.
The image thin cloud removal data set in step 1) is constructed as follows:
11) Select n clear remote sensing images R and obtain thin-cloud remote sensing images C and cloud layer thickness images T by generating simulated thin cloud. Because remote sensing images are large, they are cropped to size N×N; corresponding triples of clear image R, thin-cloud image C and thickness image T form the remote sensing image thin cloud removal data set, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, m is the number of images, and i and m are positive integers;
12) Divide the remote sensing image thin cloud removal data set into a training set, a validation set and a test set in the ratio p_1 : p_2 : p_3 for training, validation and testing of the method, where p_1, p_2 and p_3 are positive integers and p_1 > p_2, p_1 > p_3.
The perceptual gradient extraction module in step 2) is specifically as follows:
As shown in fig. 2, the perceptual gradient extraction module comprises a perceptual feature extraction unit (Perceptual Feature Extraction unit, PFE), a gradient information extraction unit (Gradient Information Extraction unit, GIE), a residual feature extraction unit (Residual Feature Extraction unit, RFE) and a residual connection, and is used to extract image thin-cloud features.
The perceptual feature extraction unit PFE extracts image features with a VGG19 network, imitating the way the human visual system extracts features at the perceptual level of an image; the n_2-th output of the n_1-th layer of the VGG19 network is taken as the perceptual feature information for the subsequent thin cloud removal task, where n_1 and n_2 are positive integers.
The gradient information extraction unit GIE applies a Sobel operator filter to the feature map as a convolution with stride d_1 to extract image gradient information; this gradient information contains many cloud-related features, which benefits thin cloud removal, and d_1 is a positive integer.
The residual feature extraction unit is specifically as follows:
As shown in fig. 3, the residual feature extraction unit RFE consists of e residual units (Residual Unit, RU); each residual unit comprises s_1 convolution + ReLU activation function combinations, 1 feature calibration unit (Feature Calibration Unit, FC) and 1 residual connection, with convolution kernels of size f×f and stride d_2, where e, s_1, f and d_2 are all positive integers.
The feature calibration unit is specifically as follows:
21) As shown in fig. 4, the feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out;
22) Branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination. Its output is α_s, the convolution kernel size is z×z and the stride is x; branch 1 changes neither the feature map size nor the number of channels, and g, z and x are all positive integers;
23) Branch 2 performs no operation; its output is still the input α_in of the feature calibration unit FC;
24) Branch 3 assigns the same weight to the pixels within each channel of the feature map, realizing channel-level feature calibration; it consists of average pooling, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling takes the mean pixel value of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit expands the feature map from 1×1×C back to W×H×C, i.e. from 1×1 values to W×H identical values, so the input and output feature map sizes and channel numbers of branch 3 remain unchanged. The output of branch 3 is α_c, the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers. The output of the feature calibration unit FC is the product of the 3 branch output feature maps at corresponding pixels, as follows:
α_out = α_s ⊙ α_in ⊙ α_c
where α_out is the output of the feature calibration unit FC, α_s is the output of branch 1, α_in is the output of branch 2, α_c is the output of branch 3, and ⊙ denotes element-wise multiplication.
In step 3), the cloud layer thickness estimation module is specifically as follows:
31) As shown in fig. 5, the cloud layer thickness estimation module comprises an edge feature extraction part and a feature calibration part and is used to adaptively estimate the cloud layer thickness; its input is the thin-cloud remote sensing image C and its outputs are the predicted cloud layer thickness T̂ and a feature map φ_out;
32) The edge feature extraction part comprises w branches of identical structure, each consisting of the gradient information extraction unit GIE and a residual unit RU from step 2), with the convolution kernels of the branches gradually increasing in size. For the r-th branch, the result of passing the input through GIE and RU is denoted φ_r, and the convolution kernel size is (2r+1)×(2r+1), where r ∈ (1, …, w) and r and w are positive integers. The outputs of each pair of adjacent branches are summed pixel by pixel, giving w−1 summed results; the j-th summed result is denoted δ_j, as follows:
δ_j = φ_j ⊕ φ_{j+1}
where j ∈ (1, …, w−1), j and w are positive integers, and ⊕ denotes summation of corresponding feature map elements;
33) The feature calibration part consists of w−1 feature calibration units FC; for the i-th branch of the feature calibration part, the output of the feature calibration unit FC is denoted π_i, as follows:
π_i = FC(δ_i)
where i ∈ (1, …, w−1) and FC(·) denotes the output of the feature calibration unit FC;
34) The outputs of the feature calibration part FC are concatenated along the channel dimension; the result, denoted φ_out, is the 1st output of the cloud layer thickness estimation module, as follows:
φ_out = concat(π_1, …, π_{w−1})
where concat(·) denotes concatenation of feature maps along the channel dimension;
The feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, the 2nd output of the cloud layer thickness estimation module, as follows:
T̂ = ReLU(conv(φ_out))
where conv(·) denotes a convolution with kernel size l×l and stride d_3, ReLU(·) denotes the ReLU activation function, and l and d_3 are positive integers.
In step 4), the remote sensing image thin cloud removal network is specifically as follows:
41) As shown in fig. 6, a remote sensing image thin cloud removal network based on multi-path perceptual gradients is built from the perceptual gradient extraction module of step 2), the cloud layer thickness estimation module of step 3), the residual feature extraction unit RFE and a Tanh activation function, and is used to convert a single thin-cloud remote sensing image into a clear remote sensing image;
42) The input of the remote sensing image thin cloud removal network based on multi-path perceptual gradients is a thin-cloud remote sensing image C, and its outputs are a predicted clear remote sensing image R̂ and a predicted cloud thickness image T̂.
In step 5), the feature loss function L_F is defined over VGG19 feature maps of the clear and predicted images, where θ(·) denotes an output feature map of the VGG19 network, u indexes the convolution layers of VGG19, q, t and y index the feature map length, width and channels, O is the number of VGG19 layers used, W, H and C are the feature map length, width and channel sizes, R denotes the clear remote sensing image and R̂ the predicted clear remote sensing image; u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers.
In step 5), the gradient loss function L_G is defined over image gradients, where ∇(·) denotes the image gradient extracted with the Prewitt operator, q, t and y index the feature map length, width and channels, W, H and C are the feature map length, width and channel sizes; q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers.
In step 5), the cloud layer thickness loss function L_R is defined analogously.
In step 6), the thin cloud removal is carried out as follows:
Import the model parameters obtained after b epochs of training into the remote sensing image thin cloud removal network, and input a single thin-cloud remote sensing image to realize thin cloud removal.
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Example 1
The embodiment of the invention provides a remote sensing image thin cloud removal method based on multi-path perceptual gradient, described in detail below with reference to fig. 1:
101: Establish a remote sensing image thin cloud removal data set comprising thin-cloud remote sensing images, clear remote sensing images and cloud layer thickness images, divided proportionally into a training set, a validation set and a test set;
102: Build a perceptual gradient extraction module comprising a perceptual feature extraction unit, a gradient information extraction unit and a residual feature extraction unit, for extracting image thin-cloud features;
103: Build a cloud layer thickness estimation module comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness;
104: Based on the perceptual gradient extraction module of step 102 and the cloud layer thickness estimation module of step 103, build a remote sensing image thin cloud removal network for converting a single thin-cloud remote sensing image into a clear remote sensing image;
105: Train the remote sensing image thin cloud removal network on the remote sensing image thin cloud removal data set of step 101, with loss functions comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function;
106: Import the model parameters obtained after training into the remote sensing image thin cloud removal network and input a single thin-cloud remote sensing image to realize thin cloud removal.
The specific steps in step 101 are as follows:
1) Select n clear remote sensing images R and obtain thin-cloud remote sensing images C and cloud layer thickness images T by generating simulated thin cloud. Because remote sensing images are large, they are cropped to size N×N; corresponding triples of clear image R, thin-cloud image C and thickness image T form the remote sensing image thin cloud removal data set, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, m is the number of images, and i and m are positive integers;
2) Divide the remote sensing image thin cloud removal data set into a training set, a validation set and a test set in the ratio p_1 : p_2 : p_3 for training, validation and testing of the method, where p_1, p_2 and p_3 are positive integers and p_1 > p_2, p_1 > p_3.
The specific steps in step 102 are as follows:
1) As shown in fig. 2, build a perceptual gradient extraction module comprising a perceptual feature extraction unit (Perceptual Feature Extraction unit, PFE), a gradient information extraction unit (Gradient Information Extraction unit, GIE), a residual feature extraction unit (Residual Feature Extraction unit, RFE) and a residual connection, for extracting image thin-cloud features;
2) The perceptual feature extraction unit PFE extracts image features with a VGG19 network, imitating the way the human visual system extracts features at the perceptual level of an image; the n_2-th output of the n_1-th layer of the VGG19 network is taken as the perceptual feature information for the subsequent thin cloud removal task, where n_1 and n_2 are positive integers;
3) The gradient information extraction unit GIE applies a Sobel operator filter to the feature map as a convolution with stride d_1 to extract image gradient information; this gradient information contains many cloud-related features, which benefits thin cloud removal, and d_1 is a positive integer;
4) As shown in fig. 3, the residual feature extraction unit RFE consists of e residual units (Residual Unit, RU); each residual unit comprises s_1 convolution + ReLU activation function combinations, 1 feature calibration unit (Feature Calibration Unit, FC) and 1 residual connection, with convolution kernels of size f×f and stride d_2, where e, s_1, f and d_2 are all positive integers;
5) As shown in fig. 4, the feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out.
Branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it consists of g convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination. Its output is α_s, the convolution kernel size is z×z and the stride is x; branch 1 changes neither the feature map size nor the number of channels, and g, z and x are all positive integers.
Branch 2 performs no operation; its output is still the input α_in of the feature calibration unit FC.
Branch 3 assigns the same weight to the pixels within each channel of the feature map, realizing channel-level feature calibration; it consists of average pooling, v convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling takes the mean pixel value of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit expands the feature map from 1×1×C back to W×H×C, i.e. from 1×1 values to W×H identical values, so the input and output feature map sizes and channel numbers of branch 3 remain unchanged. The output of branch 3 is α_c, the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers.
The output of the feature calibration unit FC is the product of the 3 branch output feature maps at corresponding pixels, as follows:
α_out = α_s ⊙ α_in ⊙ α_c (1)
where α_out is the output of the feature calibration unit FC, α_s is the output of branch 1, α_in is the output of branch 2, α_c is the output of branch 3, and ⊙ denotes element-wise multiplication.
The specific steps in step 103 are as follows:
1) As shown in fig. 5, build a cloud layer thickness estimation module comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness; its input is the thin-cloud remote sensing image C and its outputs are the predicted cloud layer thickness T̂ and a feature map φ_out;
2) As shown in fig. 5, the edge feature extraction part comprises w branches of identical structure, each consisting of the gradient information extraction unit GIE and a residual unit RU from step 102, with the convolution kernels of the branches gradually increasing in size. For the r-th branch, the result of passing the input through GIE and RU is denoted φ_r, and the convolution kernel size is (2r+1)×(2r+1), where r ∈ (1, …, w) and r and w are positive integers. The outputs of each pair of adjacent branches are summed pixel by pixel, giving w−1 summed results; the j-th summed result is denoted δ_j, as follows:
δ_j = φ_j ⊕ φ_{j+1} (2)
where j ∈ (1, …, w−1), j and w are positive integers, and ⊕ denotes summation of corresponding feature map elements;
3) As shown in fig. 5, the feature calibration part consists of w−1 feature calibration units FC; for the i-th branch of the feature calibration part, the output of the feature calibration unit FC is denoted π_i, as follows:
π_i = FC(δ_i) (3)
where i ∈ (1, …, w−1) and FC(·) denotes the output of the feature calibration unit FC;
4) The outputs of the feature calibration part FC are concatenated along the channel dimension; the result, denoted φ_out, is the 1st output of the cloud layer thickness estimation module, as follows:
φ_out = concat(π_1, …, π_{w−1}) (4)
where concat(·) denotes concatenation of feature maps along the channel dimension;
The feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, the 2nd output of the cloud layer thickness estimation module, as follows:
T̂ = ReLU(conv(φ_out)) (5)
where conv(·) denotes a convolution with kernel size l×l and stride d_3, ReLU(·) denotes the ReLU activation function, and l and d_3 are positive integers.
The specific steps in step 104 are as follows:
1) As shown in fig. 6, build a remote sensing image thin cloud removal network based on multi-path perceptual gradients from the perceptual gradient extraction module of step 102, the cloud layer thickness estimation module of step 103, the residual feature extraction unit RFE and a Tanh activation function, for converting a single thin-cloud remote sensing image into a clear remote sensing image;
2) The input of the remote sensing image thin cloud removal network based on multi-path perceptual gradients is a thin-cloud remote sensing image C, and its outputs are a predicted clear remote sensing image R̂ and a predicted cloud thickness image T̂.
The specific steps in step 105 are as follows:
1) Train the remote sensing image thin cloud removal network on the data set of step 101; the loss functions used in training comprise a feature loss function, a gradient loss function and a cloud layer thickness loss function, and training runs for b epochs in total. The specific forms of the functions are as follows;
2) The feature loss function L_F (formula (6)) is defined over VGG19 feature maps of the clear and predicted images, where θ(·) denotes an output feature map of the VGG19 network, u indexes the convolution layers of VGG19, q, t and y index the feature map length, width and channels, O is the number of VGG19 layers used, W, H and C are the feature map length, width and channel sizes, R denotes the clear remote sensing image and R̂ the predicted clear remote sensing image; u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers;
3) The gradient loss function L_G (formula (7)) is defined over image gradients, where ∇(·) denotes the image gradient extracted with the Prewitt operator, q, t and y index the feature map length, width and channels, W, H and C are the feature map length, width and channel sizes; q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers;
4) The cloud layer thickness loss function L_R (formula (8)) is defined analogously;
5) The overall loss function L is the weighted sum of the above loss functions, specifically:
L = L_F + σL_G + λL_R (9)
where σ and λ are weight coefficients.
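A minimal PyTorch training step under these definitions is sketched below; the L1 form of each loss term, the finite-difference stand-in for the Prewitt gradient, and the use of the thickness maps in L_R are assumptions:

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, optimizer, perceptual, sigma=10.0, lam=5.0):
    """One step of L = L_F + σ·L_G + λ·L_R (σ, λ follow embodiment 2)."""
    c, r, t = batch                              # cloudy input, clear target, thickness target
    r_hat, t_hat = model(c)
    l_f = F.l1_loss(perceptual(r_hat), perceptual(r))   # feature loss L_F
    grad = lambda x: x[..., :, 1:] - x[..., :, :-1]     # crude stand-in for the Prewitt gradient
    l_g = F.l1_loss(grad(r_hat), grad(r))               # gradient loss L_G
    l_r = F.l1_loss(t_hat, t)                           # cloud thickness loss L_R
    loss = l_f + sigma * l_g + lam * l_r
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `perceptual` is a frozen VGG19 feature extractor, as sketched earlier for the perceptual feature extraction unit.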
The specific steps in step 106 are as follows: import the model parameters obtained after b epochs of training into the remote sensing image thin cloud removal network, and input a single thin-cloud remote sensing image to realize thin cloud removal.
Example 2
The embodiment of the invention provides a remote sensing image thin cloud removal method based on multi-path perceptual gradient, described in detail below with reference to fig. 1:
201: Establish a remote sensing image thin cloud removal data set comprising thin-cloud remote sensing images, clear remote sensing images and cloud layer thickness images, divided proportionally into a training set, a validation set and a test set;
202: Build a perceptual gradient extraction module comprising a perceptual feature extraction unit, a gradient information extraction unit and a residual feature extraction unit, for extracting image thin-cloud features;
203: Build a cloud layer thickness estimation module comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness;
204: Based on the perceptual gradient extraction module of step 202 and the cloud layer thickness estimation module of step 203, build a remote sensing image thin cloud removal network for converting a single thin-cloud remote sensing image into a clear remote sensing image;
205: Train the remote sensing image thin cloud removal network on the remote sensing image thin cloud removal data set of step 201, with loss functions comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function;
206: Import the model parameters obtained after training into the remote sensing image thin cloud removal network and input a single thin-cloud remote sensing image to realize thin cloud removal.
The specific steps in step 201 are as follows:
1) Select 200 clear remote sensing images R and obtain thin-cloud remote sensing images C and cloud layer thickness images T by generating simulated thin cloud. Because remote sensing images are large, they are cropped to size 256×256; corresponding triples of clear image R, thin-cloud image C and thickness image T form the remote sensing image thin cloud removal data set, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, the number of images m = 4000, and i is a positive integer;
2) Divide the remote sensing image thin cloud removal data set into a training set, a validation set and a test set in the ratio p_1 : p_2 : p_3 for training, validation and testing of the method, where p_1 : p_2 : p_3 is set to 6 : 2 : 2.
The specific steps in step 202 are as follows:
1) As shown in fig. 2, build a perceptual gradient extraction module comprising a perceptual feature extraction unit (Perceptual Feature Extraction unit, PFE), a gradient information extraction unit (Gradient Information Extraction unit, GIE), a residual feature extraction unit (Residual Feature Extraction unit, RFE) and a residual connection, for extracting image thin-cloud features;
2) The perceptual feature extraction unit PFE extracts image features with a VGG19 network, imitating the way the human visual system extracts features at the perceptual level of an image; the 2nd output of the 3rd layer of the VGG19 network is taken as the perceptual feature information for the subsequent thin cloud removal task;
3) The gradient information extraction unit GIE applies a Sobel operator filter to the feature map as a convolution with stride 1 to extract image gradient information; this gradient information contains many cloud-related features, which benefits thin cloud removal;
4) As shown in fig. 3, the residual feature extraction unit RFE consists of 6 residual units (Residual Unit, RU); each residual unit comprises 6 convolution + ReLU activation function combinations, 1 feature calibration unit (Feature Calibration Unit, FC) and 1 residual connection, with convolution kernels of size 3×3 and stride 1;
5) As shown in fig. 4, the feature calibration unit consists of 3 branches and performs the image feature calibration task; its input is α_in and its output is α_out.
Branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it consists of 5 convolution + ReLU activation function combinations and 1 convolution + Sigmoid activation function combination. Its output is α_s, the convolution kernel size is 3×3 and the stride is 1; branch 1 changes neither the feature map size nor the number of channels.
Branch 2 performs no operation; its output is still the input α_in of the feature calibration unit FC.
Branch 3 assigns the same weight to the pixels within each channel of the feature map, realizing channel-level feature calibration; it consists of average pooling, 5 convolution + ReLU activation function combinations, 1 convolution + Sigmoid activation function combination and 1 feature size expansion unit. Average pooling takes the mean pixel value of each channel of the feature map as its result, changing the feature map size from W×H×C to 1×1×C; the feature size expansion unit expands the feature map from 1×1×C back to W×H×C, i.e. from 1×1 values to W×H identical values, so the input and output feature map sizes and channel numbers of branch 3 remain unchanged. The output of branch 3 is α_c, the convolution kernel size is 3×3 and the stride is 1.
The output of the feature calibration unit FC is the product of the 3 branch output feature maps at corresponding pixels, as follows:
α_out = α_s ⊙ α_in ⊙ α_c (1)
where α_out is the output of the feature calibration unit FC, α_s is the output of branch 1, α_in is the output of branch 2, α_c is the output of branch 3, and ⊙ denotes element-wise multiplication.
The specific steps in step 203 are as follows:
1) As shown in fig. 5, build a cloud layer thickness estimation module comprising an edge feature extraction part and a feature calibration part, for adaptively estimating the cloud layer thickness; its input is the thin-cloud remote sensing image C and its outputs are the predicted cloud layer thickness T̂ and a feature map φ_out;
2) As shown in fig. 5, the edge feature extraction part comprises 6 branches of identical structure, each consisting of the gradient information extraction unit GIE and a residual unit RU from step 202, with the convolution kernels of the branches gradually increasing in size. For the r-th branch, the result of passing the input through GIE and RU is denoted φ_r, and the convolution kernel size is (2r+1)×(2r+1), where r ∈ (1, …, 6). The outputs of each pair of adjacent branches are summed pixel by pixel, giving 5 summed results; the j-th summed result is denoted δ_j, as follows:
δ_j = φ_j ⊕ φ_{j+1} (2)
where j ∈ (1, …, 5), j is a positive integer, and ⊕ denotes summation of corresponding feature map elements;
3) As shown in fig. 5, the feature calibration part consists of 5 feature calibration units FC; for the i-th branch of the feature calibration part, the output of the feature calibration unit FC is denoted π_i, as follows:
π_i = FC(δ_i) (3)
where i ∈ (1, …, 5) and FC(·) denotes the output of the feature calibration unit FC;
4) The outputs of the feature calibration part FC are concatenated along the channel dimension; the result, denoted φ_out, is the 1st output of the cloud layer thickness estimation module, as follows:
φ_out = concat(π_1, …, π_5) (4)
where concat(·) denotes concatenation of feature maps along the channel dimension;
The feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂, the 2nd output of the cloud layer thickness estimation module, as follows:
T̂ = ReLU(conv(φ_out)) (5)
where conv(·) denotes a convolution with kernel size 3×3 and stride 1, and ReLU(·) denotes the ReLU activation function.
The specific steps in step 204 are as follows:
1) As shown in fig. 6, build a remote sensing image thin cloud removal network based on multi-path perceptual gradients from the perceptual gradient extraction module of step 202, the cloud layer thickness estimation module of step 203, the residual feature extraction unit RFE and a Tanh activation function, for converting a single thin-cloud remote sensing image into a clear remote sensing image;
2) The input of the remote sensing image thin cloud removal network based on multi-path perceptual gradients is a thin-cloud remote sensing image C, and its outputs are a predicted clear remote sensing image R̂ and a predicted cloud thickness image T̂.
The specific steps in step 205 are as follows:
1) Train the remote sensing image thin cloud removal network on the data set of step 201; the loss functions used in training comprise a feature loss function, a gradient loss function and a cloud layer thickness loss function, and training runs for 200 epochs in total. The specific forms of the functions are as follows;
2) The feature loss function L_F is given by formula (6): θ(·) denotes an output feature map of the VGG19 network, u indexes the convolution layers of VGG19, and q, t and y index the feature map length, width and channels; the feature maps of the first 10 convolution layers of VGG19 are used, and the feature map length, width and channel sizes are all 128; R denotes the clear remote sensing image and R̂ the predicted clear remote sensing image; u ∈ (1, …, 10), q ∈ (1, …, 128), t ∈ (1, …, 128), y ∈ (1, …, 128). The gradient loss function L_G is given by formula (7), where ∇(·) denotes the image gradient extracted with the Prewitt operator, q, t and y index the feature map length, width and channels, and the feature map length, width and channel sizes are all 128; q ∈ (1, …, 128), t ∈ (1, …, 128), y ∈ (1, …, 128). The cloud layer thickness loss function L_R is given by formula (8). The overall loss function L is the weighted sum of the above loss functions as in formula (9), with σ = 10.0 and λ = 5.0.
The specific steps in step 206 are as follows: import the model parameters obtained after 200 epochs of training into the remote sensing image thin cloud removal network, and input a single thin-cloud remote sensing image to realize thin cloud removal.
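For illustration, a minimal inference sketch of step 206 follows; the file names, the module housing ThinCloudRemovalNet, and the mapping of the Tanh output back to [0, 1] are assumptions:

```python
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

from thin_cloud_net import ThinCloudRemovalNet  # hypothetical module holding the network sketch

model = ThinCloudRemovalNet()
model.load_state_dict(torch.load("thin_cloud_epoch200.pth", map_location="cpu"))
model.eval()

cloudy = to_tensor(Image.open("cloudy_scene.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    r_hat, t_hat = model(cloudy)                  # predicted clear image and thickness map
r_hat = ((r_hat.squeeze(0) + 1) / 2).clamp(0, 1)  # Tanh output assumed mapped from [-1, 1]
to_pil_image(r_hat).save("decloud_scene.png")
```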
Those skilled in the art will appreciate that the drawings are schematic representations of a preferred embodiment only, and that the embodiment numbers above are for description only and do not indicate relative merit. The foregoing description of preferred embodiments is not intended to limit the invention to the precise form disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.

Claims (5)

1. A remote sensing image thin cloud removal method based on multi-path perceptual gradient, characterized by comprising the following steps:
1) Establishing a remote sensing image thin cloud removal dataset, wherein the dataset comprises thin cloud remote sensing images, clear remote sensing images and cloud layer thickness images, and a training set, a validation set and a test set are formed in proportion;
2) Constructing a perception gradient extraction module for extracting image thin cloud features;
3) Constructing a cloud layer thickness estimation module for adaptively estimating the cloud layer thickness;
4) Based on the perception gradient extraction module obtained in the step 2) and the cloud layer thickness estimation module obtained in the step 3), building a remote sensing image thin cloud removal network for converting a single thin cloud remote sensing image into a clear remote sensing image;
5) Training the remote sensing image thin cloud removal network with the dataset obtained in the step 1), wherein the loss functions used comprise a feature loss function, a gradient loss function and a cloud layer thickness loss function;
6) Importing model parameters obtained after training into a remote sensing image thin cloud removal network, and inputting a single thin cloud remote sensing image to realize thin cloud removal;
in the step 1), the remote sensing image thin cloud removal dataset specifically includes:
11) n clear remote sensing images R are selected, and thin cloud remote sensing images C and cloud layer thickness images T are obtained by generating simulated thin clouds; the remote sensing images are cropped to a size of N×N, and the clear remote sensing images R, thin cloud remote sensing images C and cloud layer thickness images T having a one-to-one correspondence form the remote sensing image thin cloud removal dataset, denoted {R_i, C_i, T_i | i ∈ (1, …, m)}, where i is the image index, m is the number of images, and i and m are positive integers;
12) The remote sensing image thin cloud removal dataset is divided into a training set, a validation set and a test set in the ratio p_1 : p_2 : p_3, where p_1, p_2 and p_3 are positive integers and p_1 > p_2, p_1 > p_3;
in the step 2), the constructed perception gradient extraction module specifically comprises a perception feature extraction unit, a gradient information extraction unit, a residual feature extraction unit and a residual connection;
the perception feature extraction unit uses a VGG19 network to extract image features, simulating the way the human visual system extracts perceptual features of an image, and takes the n_2-th output result of the n_1-th layer of the VGG19 network as perceptual feature information for the subsequent thin cloud removal task, where n_1 and n_2 are positive integers;
the gradient information extraction unit uses a Sobel operator filter to perform a convolution with stride d_1 on the feature map to extract image gradient information, the gradient information containing cloud-layer-related features, where d_1 is a positive integer;
the residual feature extraction unit consists of e residual units; each residual unit comprises s_1 convolution+ReLU activation function combinations, 1 feature calibration unit and 1 residual connection, the convolution kernel size is f×f and the stride is d_2, where e, s_1, f and d_2 are all positive integers;
the feature calibration unit consists of 3 branches and performs the image feature calibration task; the input of the feature calibration unit is α_in and its output is α_out;
branch 1 assigns a weight to each pixel of the feature map to realize pixel-level feature calibration; it consists of g convolution+ReLU activation function combinations and 1 convolution+Sigmoid activation function combination, the output of branch 1 is α_s, the convolution kernel size is z×z and the stride is x; branch 1 changes neither the feature map size nor the channel number, and g, z and x are all positive integers;
branch 2 performs no operation, and its output is still the input α_in of the feature calibration unit;
branch 3 assigns the same weight to the pixels within each channel of the feature map to realize channel-level feature calibration; it consists of average pooling, v convolution+ReLU activation function combinations, 1 convolution+Sigmoid activation function combination and 1 feature size expansion unit; the average pooling takes the mean pixel value of each channel of the feature map as its result, changing the feature map size from w×h×c to 1×1×c; the feature size expansion unit expands the feature map from 1×1×c back to w×h×c, i.e. from 1×1 values to w×h identical values, so that the input and output feature map size and channel number of branch 3 remain unchanged; the output of branch 3 is α_c, the convolution kernel size is a×a and the stride is k, where v, a and k are positive integers;
the output α_out of the feature calibration unit is the element-wise product of the 3 branch output feature maps at corresponding pixels (a code sketch of this unit follows claim 1), as follows:
α_out = α_s ⊗ α_in ⊗ α_c;
wherein α_out is the output of the feature calibration unit, α_s is the output result of branch 1, α_in is the output result of branch 2, α_c is the output result of branch 3, and ⊗ denotes multiplying corresponding position elements of the feature maps;
in the step 3), the cloud layer thickness estimation module comprises an edge feature extraction part and a feature calibration part and is used for adaptively estimating the cloud layer thickness; the cloud layer thickness estimation module takes the thin cloud remote sensing image C as input and outputs a predicted cloud layer thickness T̂ and a feature map φ_out;
the edge feature extraction part comprises w branches of identical structure, each consisting of the gradient information extraction part and a residual unit from the step 2), with the convolution kernel size adopted by the branches increasing gradually; for the r-th branch, the result of passing the input through the gradient information extraction part and the residual unit is denoted φ_r, and the convolution kernel size is (2r+1)×(2r+1), where r ∈ (1, …, w) and r and w are positive integers; the outputs of two adjacent branches are summed at corresponding pixels, there are w−1 such sums, and the result of the j-th summation is denoted δ_j, as follows:
δ_j = φ_j ⊕ φ_{j+1};
wherein j ∈ (1, …, w−1), j and w are positive integers, and ⊕ denotes summing corresponding position elements of the feature maps;
the feature calibration part consists of w−1 feature calibration units; for the i-th branch of the feature calibration part, the output of its feature calibration unit is denoted π_i, as follows:
π_i = FC(δ_i);
where i ∈ (1, …, w−1) and FC(·) denotes the output of the feature calibration unit;
the outputs of the feature calibration part are concatenated along the channel dimension, and the result, denoted φ_out, serves as the 1st output of the cloud layer thickness estimation module, as follows:
φ_out = concat(π_1, …, π_{w−1});
where concat(·) denotes concatenating feature maps along the channel dimension;
the feature map φ_out is fed into a convolution and a ReLU activation function to obtain the predicted cloud layer thickness T̂ as the 2nd output of the cloud layer thickness estimation module, as follows:
T̂ = ReLU(conv(φ_out));
where conv(·) denotes a convolution with kernel size l×l and stride d_3, ReLU(·) denotes the ReLU activation function, and l and d_3 are positive integers;
in the step 4), the building of the remote sensing image thin cloud removal network specifically includes:
the remote sensing image thin cloud removal network based on multipath perception gradients is built by adopting the perception gradient extraction module from the step 2), the cloud layer thickness estimation module from the step 3), a residual feature extraction unit and a Tanh activation function, and converts a single thin cloud remote sensing image into a clear remote sensing image; the input of the network is a thin cloud remote sensing image C, and the outputs are a predicted clear remote sensing image R̂ and a predicted cloud layer thickness image T̂.
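For illustration only, a minimal PyTorch sketch of the 3-branch feature calibration unit recited in claim 1 follows. The choices g = v = 1 inner conv+ReLU combinations, 3×3 kernels with stride 1 in branch 1 (z = 3, x = 1), 1×1 kernels with stride 1 in branch 3 (a = 1, k = 1) and 64 channels are assumptions; the claim only requires these to be positive integers.

```python
import torch
import torch.nn as nn

class FeatureCalibration(nn.Module):
    """Sketch of the 3-branch feature calibration unit of claim 1.

    Assumed hyperparameters: g = v = 1 inner conv+ReLU combinations,
    3x3 / stride-1 kernels in branch 1, 1x1 / stride-1 kernels in
    branch 3, and 64 feature channels.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        # Branch 1: a weight per pixel (pixel-level calibration), ending in Sigmoid.
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid(),
        )
        # Branch 3: a weight per channel (channel-level calibration); average
        # pooling reduces w x h x c to 1 x 1 x c.
        self.branch3 = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, a_in: torch.Tensor) -> torch.Tensor:
        a_s = self.branch1(a_in)  # branch 1 output (alpha_s)
        a_c = self.branch3(a_in)  # branch 3 output (alpha_c); broadcasting plays
                                  # the role of the feature size expansion unit
        return a_s * a_in * a_c   # alpha_out: element-wise product of 3 branches
```

Branch 2 is the identity, so it appears only as the bare `a_in` factor in the product; PyTorch broadcasting expands the 1×1 channel weights of branch 3 across the full w×h spatial extent, which is exactly what the claimed feature size expansion unit does explicitly.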
2. The remote sensing image thin cloud removing method based on multipath perception gradient according to claim 1, wherein in the step 5), the feature loss function L_F is specifically:
wherein θ(·) represents an output feature map of the VGG19 network, u represents the index of a convolutional layer of the VGG19 network, q, t and y represent the indices of the feature map length, width and channel, O represents the number of VGG19 layers used, W, H and C represent the feature map length, width and channel sizes, R represents the clear remote sensing image, R̂ represents the predicted clear remote sensing image, u ∈ (1, …, O), q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and u, q, t, y, O, W, H and C are positive integers.
3. The remote sensing image thin cloud removing method based on multipath perception gradient according to claim 2, wherein in the step 5), the gradient loss function L_G is specifically:
wherein ∇(·) represents the image gradient extracted using the Prewitt operator, q, t and y represent the indices of the feature map length, width and channel, W, H and C represent the feature map length, width and channel sizes, q ∈ (1, …, W), t ∈ (1, …, H), y ∈ (1, …, C), and q, t, y, W, H and C are positive integers.
4. The remote sensing image thin cloud removing method based on multipath perception gradient according to claim 3, wherein in the step 5), the cloud layer thickness loss function L_R is specifically:
wherein R represents the clear remote sensing image and R̂ represents the predicted clear remote sensing image.
5. The remote sensing image thin cloud removing method based on multipath perception gradient according to claim 4, wherein in the step 6), the thin cloud removal specifically comprises: training the remote sensing image thin cloud removal network with the remote sensing image thin cloud removal dataset according to claim 2, the loss functions used for training comprising a feature loss function, a gradient loss function and a cloud layer thickness loss function, with b epochs of training in total; the model parameters obtained after training are imported into the remote sensing image thin cloud removal network, and a single thin cloud remote sensing image is input to complete remote sensing image thin cloud removal.
CN202210357921.8A 2022-04-06 2022-04-06 Remote sensing image thin cloud removing method based on multipath perception gradient Active CN114936972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210357921.8A CN114936972B (en) 2022-04-06 2022-04-06 Remote sensing image thin cloud removing method based on multipath perception gradient

Publications (2)

Publication Number Publication Date
CN114936972A CN114936972A (en) 2022-08-23
CN114936972B (en) 2024-07-02

Family

ID=82861747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210357921.8A Active CN114936972B (en) 2022-04-06 2022-04-06 Remote sensing image thin cloud removing method based on multipath perception gradient

Country Status (1)

Country Link
CN (1) CN114936972B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460739A (en) * 2018-03-02 2018-08-28 北京航空航天大学 A kind of thin cloud in remote sensing image minimizing technology based on generation confrontation network
CN108931825A (en) * 2018-05-18 2018-12-04 北京航空航天大学 A kind of remote sensing image clouds thickness detecting method based on atural object clarity

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191400A (en) * 2018-08-30 2019-01-11 中国科学院遥感与数字地球研究所 A method of network, which is generated, using confrontation type removes thin cloud in remote sensing image
WO2020102988A1 (en) * 2018-11-20 2020-05-28 西安电子科技大学 Feature fusion and dense connection based infrared plane target detection method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant