CN117036987B - Remote sensing image space-time fusion method and system based on wavelet domain cross pairing - Google Patents
- Publication number: CN117036987B (application number CN202311304694.3A)
- Authority: CN (China)
- Prior art keywords: remote sensing image; resolution; network; data set
- Prior art date: 2023-10-10
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/13—Satellite images (G06V20/00 Scenes; G06V20/10 Terrestrial scenes)
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks (G06N3/04 Architecture, e.g. interconnection topology; G06N3/045 Combinations of networks)
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/08—Learning methods (G06N3/02 Neural networks)
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
- G06V10/764—Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
- G06V10/82—Recognition or understanding using neural networks
- Y02T10/40—Engine management systems (Y02T Climate change mitigation technologies related to transportation)
Abstract
The invention discloses a remote sensing image space-time fusion method and system based on wavelet domain cross pairing. Data to be fused are screened, the preprocessed high- and low-resolution remote sensing images are grouped, and the data are divided into a training data set, a test data set, and a verification data set. A wavelet domain cross-paired remote sensing image space-time fusion network is constructed, a compound loss function is built, network parameters are optimized and updated with the Adam optimization algorithm, and the network converges after multiple rounds of training. The test data set is then predicted and evaluated: if the evaluation indexes reach the accuracy of mainstream models and the visual effect is good, the network model is shown to be stable in the data set region, and space-time fusion can be performed there. The invention reaches the precision and effect of mainstream fusion models with only two input images, solving the problem that most current space-time fusion methods need at least three input images while still guaranteeing fusion precision, and therefore has strong practicability and promising prospects.
Description
Technical Field
The invention belongs to the technical field of remote sensing image processing, and particularly relates to a remote sensing image space-time fusion method and system based on wavelet domain cross pairing.
Background
Remote sensing image time-series analysis is essential in many applications, such as crop monitoring and assessment, evapotranspiration estimation, atmospheric monitoring, land cover change detection, and ecosystem monitoring. Dense time series of remote sensing images help capture ground changes, while higher spatial resolution captures details such as the texture and structure of the earth's surface. Earth observation with high spatial and temporal resolution is therefore critical for remote sensing applications. However, the temporal and spatial resolutions of a sensor constrain each other: high-spatial-resolution remote sensing images have a smaller swath, a longer revisit period, and thus a lower temporal resolution, whereas low-spatial-resolution images have a larger swath, a shorter revisit period, and a higher temporal resolution. Owing to these technical limitations, no single sensor can provide a temporally dense image sequence at high spatial resolution, which makes the analysis of multi-temporal remote sensing image sequences difficult.
Remote sensing image space-time fusion is an effective, convenient, and low-cost software-side means of resolving the contradiction between the spatial and temporal resolution of satellite sensors, and it helps unlock the application potential of multi-source remote sensing data in many fields. Image space-time fusion aims to use a known temporally dense low-spatial-resolution image sequence, together with a known temporally sparse high-spatial-resolution image sequence at corresponding time points, to generate a temporally dense high-spatial-resolution image sequence corresponding to the low-resolution sequence, i.e., an image sequence with both the highest temporal resolution and the highest spatial resolution.
Among existing fusion algorithms, traditional methods often have limitations: they may not apply to highly heterogeneous areas or to areas where the surface cover type changes, and the algorithms can be unstable. Deep-learning-based models offer higher fusion precision and stability and broader application prospects. However, most existing space-time fusion networks need at least three input images (a high/low-resolution image pair on a reference date plus a low-resolution image on the prediction date), and some need five (high/low-resolution pairs on two reference dates plus a low-resolution image on the prediction date) to obtain good predictions, ignoring the fact that suitable image pairs are often hard to collect because of bad weather or missing data. Obtaining good predictions from fewer input images remains a challenge in the space-time fusion field.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a remote sensing image space-time fusion method and system based on wavelet domain cross pairing, which reach the precision and effect of mainstream fusion models with only two input images and solve the problem that most current space-time fusion methods need at least three.
In order to achieve the above purpose, the invention provides a remote sensing image space-time fusion method based on wavelet domain cross pairing, which comprises the following steps:
step 1, preprocessing high and low resolution remote sensing image data to obtain high and low resolution remote sensing images to be fused;
step 2, grouping the preprocessed high-resolution remote sensing images and the preprocessed low-resolution remote sensing images, and dividing the data into a training data set, a test data set and a verification data set;
step 3, constructing a wavelet domain cross pairing remote sensing image space-time fusion network;
step 4, training the space-time fusion network built in the step 3 by using a training data set, optimizing and updating network parameters by using an Adam optimization algorithm, verifying by using a verification sample data set, and realizing network convergence after multiple times of training to obtain a trained network model;
step 5, inputting the test data set into the network model trained in the step 4, obtaining a predicted high-resolution remote sensing image of the test data, and testing the availability of the network model in the data set region by quantitatively evaluating and qualitatively evaluating the predicted high-resolution remote sensing image;
and step 6, using the network model verified as available in step 5 to obtain high-resolution remote sensing images of the data set region on multiple prediction dates, thereby forming a temporally dense high-resolution remote sensing image sequence.
In step 1, high- and low-resolution remote sensing images of the same area are selected to construct a data set. Images whose cloud coverage or proportion of empty pixels exceeds a set ratio are removed, and radiometric calibration, geometric correction, and atmospheric correction are performed. The blank areas at the four corners of the corrected high-resolution remote sensing image are cropped off, the low-resolution remote sensing image is reprojected into the coordinate system of the high-resolution image and the area overlapping the high-resolution image is cropped out, and the cropped low-resolution data are then resampled to the same size as the cropped high-resolution image.
In step 2, the preprocessed high- and low-resolution remote sensing images are grouped. A group of data comprises a high-resolution image on a reference date and a high/low-resolution image pair on a prediction date, where the reference and prediction dates are adjacent time phases after unqualified data have been removed. The reference-date high-resolution image and the prediction-date low-resolution image serve as the model input, the prediction-date high-resolution image serves as the model constraint, and the data set is divided into a training data set, a test data set, and a verification data set.
In step 3, the wavelet domain cross-paired remote sensing image space-time fusion network comprises a wavelet transformation layer, a reference feature extraction network, a change feature extraction network, an inverse wavelet transformation layer, and a reconstruction network. The wavelet transformation layer applies the Haar wavelet transform separately to the reference-date high-resolution image and to the difference between the reference-date high-resolution image and the prediction-date low-resolution image; the four characteristic coefficients of the reference-date image and of the difference image are then fed into the reference feature extraction network and the change feature extraction network, respectively, for feature extraction. The two feature extraction networks take different inputs but share the same structure: each comprises three layers, and each layer consists of a convolution layer and a rectified linear unit layer. The inverse wavelet transformation layer adds the corresponding characteristic coefficients output by the reference and change feature extraction networks, performs the inverse wavelet transform to obtain the predicted feature, and feeds it into the reconstruction network, which finally yields the predicted high-resolution image of the prediction date. The reconstruction network also comprises three layers, each consisting of a convolution layer and a rectified linear unit layer.
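For concreteness, the following is a minimal sketch of a single-level 2D Haar decomposition and its inverse, i.e., the operations performed by the wavelet transformation and inverse wavelet transformation layers described above. It assumes PyTorch tensors of shape (batch, channels, H, W) with even H and W; the function names are illustrative, not taken from the patent.

```python
import torch

def haar_dwt2(x: torch.Tensor):
    """Single-level 2D Haar decomposition of a (B, C, H, W) tensor.

    Returns the four coefficient maps (LL, LH, HL, HH), each of spatial
    size (H/2, W/2). H and W are assumed to be even.
    """
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2   # approximation (low-low)
    lh = (-a - b + c + d) / 2  # detail along one direction
    hl = (-a + b - c + d) / 2  # detail along the other direction
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: reassemble the (B, C, H, W) image."""
    a = (ll - lh - hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll + lh + hl + hh) / 2
    bsz, ch, h2, w2 = ll.shape
    x = ll.new_zeros(bsz, ch, h2 * 2, w2 * 2)
    x[:, :, 0::2, 0::2] = a
    x[:, :, 0::2, 1::2] = b
    x[:, :, 1::2, 0::2] = c
    x[:, :, 1::2, 1::2] = d
    return x
```

Because this transform is orthonormal, `haar_idwt2(*haar_dwt2(x))` reconstructs `x` exactly (up to floating-point error), which is what lets the network operate on coefficient maps and still return losslessly to pixel space.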
In step 4, the reference-date high-resolution image and the prediction-date low-resolution image in the training data set are input into the space-time fusion network constructed in step 3 to obtain the predicted high-resolution image of the prediction date, and a loss function between the real and the predicted high-resolution image of the prediction date is calculated. The loss function consists of three parts, a wavelet loss $L_{wavelet}$, a feature loss $L_{feature}$, and a vision loss $L_{vision}$:

$$L = L_{wavelet} + L_{feature} + L_{vision}$$

wherein:

The wavelet loss $L_{wavelet}$ is the mean absolute error between the wavelet coefficients of the predicted high-resolution image and those of the real high-resolution image:

$$L_{wavelet} = \frac{1}{N}\sum\left(\left|LL_p - LL_r\right| + \left|LH_p - LH_r\right| + \left|HL_p - HL_r\right| + \left|HH_p - HH_r\right|\right)$$

where $LL_p$, $LH_p$, $HL_p$, and $HH_p$ denote the four coefficients of the predicted high-resolution image after wavelet decomposition, $LL_r$, $LH_r$, $HL_r$, and $HH_r$ the corresponding coefficients of the real high-resolution image, $N$ the total number of pixels of a coefficient after wavelet decomposition, and $\left|\cdot\right|$ the absolute value.

The feature loss $L_{feature}$ is calculated with a pre-trained model of encoding-decoding structure, whose encoder extracts features from a high-resolution image and whose decoder restores the features to pixel space. Using this pre-trained model,

$$L_{feature} = \frac{1}{N}\sum\left|F_p - F_r\right|$$

where $F_p$ and $F_r$ respectively denote the predicted and the corresponding real high-resolution image features extracted by the pre-trained model, $N$ the total number of pixels, and $\left|\cdot\right|$ the absolute value.

The vision loss $L_{vision}$ is calculated from the multi-scale structural similarity:

$$L_{vision} = 1 - \operatorname{MS\text{-}SSIM}(P, R)$$

where $\operatorname{MS\text{-}SSIM}(P, R)$ denotes the average structural similarity between the predicted image $P$ and the reference image $R$.
Network parameters are optimized and updated with the Adam optimization algorithm. Model training and accuracy verification are carried out together: the verification sample data set is used to evaluate the accuracy of the model after each round of training, the network model parameters are adjusted according to the verification accuracy, the prediction accuracy of each round is recorded, and the network parameter model with the best prediction accuracy is selected.
In step 5, the predicted high-resolution image of the prediction date is evaluated quantitatively: indexes such as root mean square error (RMSE), structural similarity (SSIM), linear correlation coefficient (CC), and spectral angle mapper (SAM) can be used to comprehensively assess the texture-detail retention and spectral retention of the fused image. The predicted image is also evaluated qualitatively, by enlarging it to inspect the visual effect, or by operations such as land cover classification or NDVI calculation to assess the application value of the fusion result. If both the quantitative and the qualitative evaluation of the test-set predictions reach the effect of mainstream models, the model is shown to be applicable and stable in the region, and space-time fusion can be performed in the data set region.
The invention also provides a remote sensing image space-time fusion system based on wavelet domain cross pairing, which is used for realizing the remote sensing image space-time fusion method based on wavelet domain cross pairing.
In one form, the system comprises a processor and a memory, the memory storing program instructions and the processor calling the program instructions in the memory to execute the above remote sensing image space-time fusion method based on wavelet domain cross pairing.

In another form, the system comprises a readable storage medium storing a computer program which, when executed, implements the above remote sensing image space-time fusion method based on wavelet domain cross pairing.
Compared with the prior art, the invention has the following advantages:
The invention introduces the wavelet transform into the space-time fusion field and constructs a wavelet domain cross-paired remote sensing image space-time fusion network. By training features of different levels separately, the network better extracts the detailed textures and global information of the images, and adding a wavelet loss to the loss function promotes the preservation of spatial detail. As a result, the precision and effect of mainstream fusion models are reached with only two input images, solving the problem that most current space-time fusion methods need at least three and achieving a better fusion effect with fewer inputs.
Drawings
Fig. 1 is a flowchart of a wavelet domain cross-paired remote sensing image space-time fusion method according to an embodiment of the present invention.
Fig. 2 is a structural framework diagram of a wavelet domain cross-paired remote sensing image space-time fusion network according to an embodiment of the present invention.
Fig. 3 compares fusion result graphs obtained by the method of the present invention at different time phases with the corresponding real high-resolution remote sensing images.
Detailed Description
The invention provides a remote sensing image space-time fusion method and a remote sensing image space-time fusion system based on wavelet domain cross pairing, and the technical scheme of the invention is further described below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, the embodiment of the invention provides a remote sensing image space-time fusion method based on wavelet domain cross pairing, which comprises the following steps:
and step 1, screening data to be fused, and eliminating data unsuitable for space-time fusion.
The high-resolution remote sensing image and the low-resolution remote sensing image of the same area are selected to construct a data set, and in the embodiment, the images with the cloud coverage rate of more than 5% and the images with the pixel quantity of no data of more than 1% are discarded for both the high-resolution data source and the low-resolution data source.
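A screening pass under these thresholds might look like the following sketch; the boolean cloud mask is assumed to come from the product's QA band, and the no-data convention (all bands equal to zero) is an assumption.

```python
import numpy as np

def usable(image: np.ndarray, cloud_mask: np.ndarray,
           nodata_value: float = 0.0,
           max_cloud: float = 0.05, max_nodata: float = 0.01) -> bool:
    """Screen one (bands, H, W) scene: keep it only if cloud cover is
    at most 5% and no-data pixels are at most 1% of the image."""
    cloud_frac = float(cloud_mask.mean())
    nodata_frac = float(np.mean(np.all(image == nodata_value, axis=0)))
    return cloud_frac <= max_cloud and nodata_frac <= max_nodata
```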
Step 2: preprocess the high- and low-resolution remote sensing image data, including radiometric calibration, geometric correction, atmospheric correction, reprojection, resampling, and cropping, to obtain the high- and low-resolution images to be fused.
Radiometric calibration, geometric correction, and atmospheric correction are performed on the screened high- and low-resolution image data. If a data source is a level-2 product, these corrections were already completed when the data were distributed and can be omitted. The blank areas at the four corners of the corrected high-resolution image are cropped off, the low-resolution image is reprojected into the coordinate system of the high-resolution image and the overlapping area is cropped out, and the cropped low-resolution data are resampled to the same size as the cropped high-resolution image.
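As an illustration of the reprojection and resampling step, the following sketch uses rasterio to warp a low-resolution scene onto the cropped high-resolution grid; bilinear resampling and float32 output are assumptions.

```python
import numpy as np
import rasterio
from rasterio.warp import reproject, Resampling

def match_low_to_high(lr_path: str, hr_path: str) -> np.ndarray:
    """Reproject the LR scene into the HR scene's CRS and resample it
    onto the HR grid, so both rasters cover the same cropped extent."""
    with rasterio.open(hr_path) as hr, rasterio.open(lr_path) as lr:
        out = np.zeros((lr.count, hr.height, hr.width), dtype=np.float32)
        for band in range(1, lr.count + 1):
            reproject(
                source=rasterio.band(lr, band),   # carries src CRS/transform
                destination=out[band - 1],
                dst_transform=hr.transform,        # HR grid defines the output
                dst_crs=hr.crs,
                resampling=Resampling.bilinear,
            )
    return out
```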
Step 3: group the preprocessed high- and low-resolution remote sensing images and divide the data into a training data set, a test data set, and a verification data set.
The preprocessed high- and low-resolution images are grouped. A group of data comprises a high-resolution image on a reference date and a high/low-resolution image pair on a prediction date, where the reference and prediction dates are adjacent time phases after unqualified data have been removed; the reference-date high-resolution image and the prediction-date low-resolution image serve as the model input, and the prediction-date high-resolution image serves as the model constraint. In this example the data set is divided into a 60% training sample data set, a 20% test sample data set, and a 20% verification sample data set.
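The grouping and split could be realized as in the sketch below, where `dates` is the chronologically sorted list of dates that survived screening and `hr`/`lr` map each date to its image; the random shuffle before splitting is an assumption.

```python
import random

def make_groups(dates, hr, lr, seed=0):
    """Pair adjacent post-screening dates into samples of the form
    (reference HR, prediction LR, prediction HR ground truth), then
    split 60/20/20 into train/test/validation."""
    groups = [
        (hr[dates[i]], lr[dates[i + 1]], hr[dates[i + 1]])
        for i in range(len(dates) - 1)
    ]
    random.Random(seed).shuffle(groups)
    n = len(groups)
    n_train, n_test = int(0.6 * n), int(0.2 * n)
    train = groups[:n_train]
    test = groups[n_train:n_train + n_test]
    val = groups[n_train + n_test:]
    return train, test, val
```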
Step 4: construct the wavelet domain cross-paired remote sensing image space-time fusion network.
As shown in fig. 2, the wavelet domain cross-paired remote sensing image space-time fusion network comprises a wavelet transformation layer, a reference feature extraction network, a change feature extraction network, an inverse wavelet transformation layer, and a reconstruction network. The wavelet transformation layer applies the Haar wavelet transform separately to the reference-date high-resolution image and to the difference between the reference-date high-resolution image and the prediction-date low-resolution image, and the four characteristic coefficients obtained from each transform are fed into the reference feature extraction network and the change feature extraction network, respectively, for feature extraction. The two feature extraction networks take different inputs but share the same structure: each comprises three layers, and each layer consists of a convolution layer (stride 1) and a rectified linear unit layer. The inverse wavelet transformation layer adds, coefficient by coefficient, the four characteristic coefficients of the reference-date high-resolution image from the reference feature extraction network and the four characteristic coefficients of the change information from the change feature extraction network, performs the inverse wavelet transform to obtain the predicted feature, and feeds it into the reconstruction network, which finally yields the high-resolution image of the prediction date. The reconstruction network likewise comprises three layers, each consisting of a convolution layer and a rectified linear unit layer; in this embodiment the final convolution layer realizes a linear transformation, and all strides are 1. A sketch of this architecture follows.
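The sketch below shows how the pieces described above could be wired together, reusing the `haar_dwt2`/`haar_idwt2` helpers from the earlier sketch. The 3x3 kernels, the 64-channel width, the 4-band input, and the final 1x1 convolution are assumptions for illustration; the embodiment's exact kernel sizes did not survive extraction.

```python
import torch
import torch.nn as nn

def branch(in_ch: int, out_ch: int, width: int = 64) -> nn.Sequential:
    """Three conv + ReLU layers, stride 1 (3x3 kernels are an assumption)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class WaveletCrossPairFusion(nn.Module):
    """Hypothetical wiring of the wavelet-domain cross-pairing network."""

    def __init__(self, bands: int = 4):
        super().__init__()
        # Each branch sees the four Haar coefficient maps stacked on channels.
        self.ref_branch = branch(4 * bands, 4 * bands)
        self.chg_branch = branch(4 * bands, 4 * bands)
        # Reconstruction head: conv + ReLU twice, then a 1x1 linear conv (assumed).
        self.reconstruct = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, bands, 1),
        )

    def forward(self, hr_ref: torch.Tensor, lr_pred: torch.Tensor):
        # Wavelet layer: decompose the reference HR image and the change
        # image (difference between prediction-date LR and reference HR).
        ref_coeffs = torch.cat(haar_dwt2(hr_ref), dim=1)
        chg_coeffs = torch.cat(haar_dwt2(lr_pred - hr_ref), dim=1)
        # Structurally identical branches applied to different inputs.
        fused = self.ref_branch(ref_coeffs) + self.chg_branch(chg_coeffs)
        # Inverse wavelet transform of the summed coefficients, then reconstruction.
        ll, lh, hl, hh = torch.chunk(fused, 4, dim=1)
        return self.reconstruct(haar_idwt2(ll, lh, hl, hh))
```

With 4-band inputs of even spatial size, `WaveletCrossPairFusion()(hr_ref, lr_pred)` returns a prediction of the same shape; only the two input images are required, which is the point of the design.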
Step 5: input the training data set in batches into the space-time fusion network built in step 4, with the reference-date high-resolution image and the prediction-date low-resolution image as input and the prediction-date high-resolution image as guidance; optimize and update the network parameters with the Adam optimization algorithm, verify with the verification sample data set, and reach network convergence after multiple rounds of training to obtain the trained network model.
The training sample data are input into the space-time fusion network built in step 4 to obtain the predicted high-resolution remote sensing image of the prediction date. A loss function between the real and the predicted high-resolution image of the prediction date is calculated; it consists of a wavelet loss $L_{wavelet}$, a feature loss $L_{feature}$, and a vision loss $L_{vision}$:

$$L = L_{wavelet} + L_{feature} + L_{vision}$$

wherein:

The wavelet loss $L_{wavelet}$ is the mean absolute error between the wavelet coefficients of the predicted high-resolution image and those of the real high-resolution image:

$$L_{wavelet} = \frac{1}{N}\sum\left(\left|LL_p - LL_r\right| + \left|LH_p - LH_r\right| + \left|HL_p - HL_r\right| + \left|HH_p - HH_r\right|\right)$$

where $LL_p$, $LH_p$, $HL_p$, and $HH_p$ denote the four coefficients of the predicted high-resolution image after wavelet decomposition, $LL_r$, $LH_r$, $HL_r$, and $HH_r$ the corresponding coefficients of the real high-resolution image, $N$ the total number of pixels of a coefficient after wavelet decomposition, and $\left|\cdot\right|$ the absolute value.

The feature loss $L_{feature}$ is calculated with a pre-trained model of encoding-decoding structure, whose encoder extracts features from a high-resolution image and whose decoder restores the features to pixel space. Using this pre-trained model,

$$L_{feature} = \frac{1}{N}\sum\left|F_p - F_r\right|$$

where $F_p$ and $F_r$ respectively denote the predicted and the corresponding real high-resolution image features extracted by the pre-trained model, $N$ the total number of pixels, and $\left|\cdot\right|$ the absolute value.

The vision loss $L_{vision}$ is calculated from the multi-scale structural similarity:

$$L_{vision} = 1 - \operatorname{MS\text{-}SSIM}(P, R)$$

where $\operatorname{MS\text{-}SSIM}(P, R)$ denotes the average structural similarity between the predicted image $P$ and the reference image $R$.
The learnable parameters such as weights and biases are optimized and updated with the Adam optimization algorithm. Model training and accuracy verification proceed together: the verification sample data set evaluates the accuracy of the model after each round of training, the network model parameters are adjusted according to the verification accuracy, the prediction accuracy of each round is recorded, and the network parameter model with the best prediction accuracy is selected.
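A minimal training loop under these rules might look as follows; the learning rate, epoch count, and use of validation MSE as the accuracy measure are placeholders rather than values from the patent.

```python
import copy
import torch

def train(model, train_loader, val_loader, encoder, epochs=200, lr=1e-3):
    """Adam training with per-epoch validation; keeps the best weights."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_err, best_state = float("inf"), None
    for epoch in range(epochs):
        model.train()
        for hr_ref, lr_pred, hr_true in train_loader:
            opt.zero_grad()
            loss = fusion_loss(model(hr_ref, lr_pred), hr_true, encoder)
            loss.backward()
            opt.step()
        # Validation: track prediction accuracy and remember the best model.
        model.eval()
        with torch.no_grad():
            val_err = sum(
                torch.mean((model(h, l) - t) ** 2).item()
                for h, l, t in val_loader
            ) / len(val_loader)
        if val_err < best_err:
            best_err = val_err
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```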
Step 6: input the test data set into the network model trained in step 5, obtain the predicted high-resolution remote sensing image of the test data, and test the usability of the network model in the data set region by quantitative and qualitative evaluation of the predicted image.
Based on the network model trained in step 5, the test data set is input to obtain the predicted high-resolution image of the prediction date of the test data, and the predicted image is evaluated quantitatively and qualitatively. At present there is no internationally accepted standard that uniquely measures the quality of a fused image; different indexes have limitations, and each reveals only certain aspects of quality. Indexes such as root mean square error (RMSE), structural similarity (SSIM), linear correlation coefficient (CC), and spectral angle mapper (SAM) can be used to comprehensively evaluate the texture-detail retention and spectral retention of the fused image. Qualitative evaluation inspects the visual effect of the enlarged image, or assesses the application value of the fusion result through operations such as land cover classification or NDVI calculation. If both the quantitative and the qualitative evaluation of the test-set predictions reach the effect of mainstream models, the model is shown to be applicable and stable in the region, and space-time fusion can be performed in the data set region.
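For the quantitative side, RMSE, CC, and SAM are simple to compute directly (SSIM is available, for example, as structural_similarity in scikit-image); the following NumPy sketch assumes arrays of shape (bands, H, W) with reflectance-like values.

```python
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    """Root mean square error over all bands and pixels."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def cc(pred: np.ndarray, ref: np.ndarray) -> float:
    """Linear (Pearson) correlation coefficient over all pixels."""
    return float(np.corrcoef(pred.ravel(), ref.ravel())[0, 1])

def sam(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-12) -> float:
    """Mean spectral angle (radians) between per-pixel spectra."""
    p = pred.reshape(pred.shape[0], -1)
    r = ref.reshape(ref.shape[0], -1)
    cos = (p * r).sum(axis=0) / (
        np.linalg.norm(p, axis=0) * np.linalg.norm(r, axis=0) + eps
    )
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```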
Step 7: use the network model verified as available in step 6 to obtain high-resolution remote sensing images of the data set region on multiple prediction dates, forming a temporally dense high-resolution remote sensing image sequence.
A comparison experiment on the classical CIA space-time fusion data set against four methods (STARFM, FSDAF, EDCSTFN, and GANSTFM) verifies the effect of the proposed method. Accuracy is evaluated with four indexes: root mean square error (RMSE), structural similarity (SSIM), linear correlation coefficient (CC), and spectral angle mapper (SAM). As shown in Table 1, the fusion results of the proposed method exceed those of GANSTFM, the only existing space-time fusion method that needs just two input images, and reach the accuracy of the mainstream methods that use three input images (STARFM, FSDAF, and EDCSTFN).
Table 1 comparison of the accuracy of the proposed method with the other four methods
Fig. 3 compares the fusion results of several different time phases obtained with the proposed method against the real high-resolution remote sensing images. As can be seen from fig. 3, the fusion results are very close to the real high-resolution images.
Example 2
Based on the same conception, the invention also provides a remote sensing image space-time fusion system based on wavelet domain cross pairing, which comprises a processor and a memory, wherein the memory is used for storing program instructions, and the processor is used for calling the program instructions in the memory to execute the remote sensing image space-time fusion method based on wavelet domain cross pairing.
Example 3
Based on the same inventive concept, the invention also provides a remote sensing image space-time fusion system based on wavelet domain cross pairing, which comprises a readable storage medium, wherein the readable storage medium is stored with a computer program, and the remote sensing image space-time fusion method based on wavelet domain cross pairing is realized when the computer program is executed.
In particular, the method of the technical solution may be implemented by those skilled in the art as an automated workflow using computer software technology; a system or apparatus implementing the method, such as a computer-readable storage medium storing the corresponding computer program, or a computer device running that program, should also fall within the protection scope of the present invention.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.
Claims (7)
1. A remote sensing image space-time fusion method based on wavelet domain cross pairing is characterized by comprising the following steps:
step 1, preprocessing high and low resolution remote sensing image data to obtain high and low resolution remote sensing images to be fused;
step 2, grouping the preprocessed high-resolution remote sensing images and the preprocessed low-resolution remote sensing images, and dividing the data into a training data set, a test data set and a verification data set;
step 3, constructing a wavelet domain cross pairing remote sensing image space-time fusion network;
the wavelet domain cross-paired remote sensing image space-time fusion network comprises a wavelet transformation layer, a reference feature extraction network, a change feature extraction network, an inverse wavelet transformation layer, and a reconstruction network; the wavelet transformation layer applies the Haar wavelet transform separately to the reference-date high-resolution remote sensing image and to the difference between the reference-date high-resolution remote sensing image and the prediction-date low-resolution remote sensing image, and the four characteristic coefficients obtained from each transform are respectively input into the reference feature extraction network and the change feature extraction network for feature extraction; the inverse wavelet transformation layer correspondingly adds the four characteristic coefficients output by the reference feature extraction network and the change feature extraction network, performs the inverse wavelet transform to obtain the predicted feature, and inputs it into the reconstruction network, finally obtaining the prediction-date high-resolution remote sensing image;
step 4, training the space-time fusion network built in the step 3 by using a training data set, optimizing and updating network parameters by using an Adam optimization algorithm, verifying by using a verification sample data set, and realizing network convergence after multiple times of training to obtain a trained network model;
inputting a reference-date high-resolution remote sensing image and a prediction-date low-resolution remote sensing image from the training data set into the space-time fusion network built in step 3 to obtain the predicted high-resolution remote sensing image of the prediction date, and calculating a loss function between the real and the predicted high-resolution image of the prediction date; network parameters are optimized and updated with the Adam optimization algorithm, model training and accuracy verification proceed together, the verification sample data set evaluates the accuracy of each round of training, the network model parameters are adjusted according to the verification accuracy, the prediction accuracy of each round is recorded, and the network parameter model with the best prediction accuracy is selected;
the loss function consists of a wavelet loss $L_{wavelet}$, a feature loss $L_{feature}$, and a vision loss $L_{vision}$:

$$L = L_{wavelet} + L_{feature} + L_{vision}$$

wherein:

the wavelet loss $L_{wavelet}$ is the mean absolute error between the wavelet coefficients of the predicted high-resolution remote sensing image and those of the real high-resolution remote sensing image:

$$L_{wavelet} = \frac{1}{N}\sum\left(\left|LL_p - LL_r\right| + \left|LH_p - LH_r\right| + \left|HL_p - HL_r\right| + \left|HH_p - HH_r\right|\right)$$

where $LL_p$, $LH_p$, $HL_p$, and $HH_p$ denote the four coefficients of the predicted high-resolution remote sensing image after wavelet decomposition, $LL_r$, $LH_r$, $HL_r$, and $HH_r$ the corresponding coefficients of the real high-resolution remote sensing image, $N$ the total number of pixels of a coefficient after wavelet decomposition, and $\left|\cdot\right|$ the absolute value;

the feature loss $L_{feature}$ is calculated with a pre-trained model of encoding-decoding structure, whose encoder extracts features from the high-resolution remote sensing image and whose decoder restores the features to pixel space; using this pre-trained model,

$$L_{feature} = \frac{1}{N}\sum\left|F_p - F_r\right|$$

where $F_p$ and $F_r$ respectively denote the predicted and the corresponding real high-resolution remote sensing image features extracted by the pre-trained model, $N$ the total number of pixels, and $\left|\cdot\right|$ the absolute value;

the vision loss $L_{vision}$ is calculated from the multi-scale structural similarity:

$$L_{vision} = 1 - \operatorname{MS\text{-}SSIM}(P, R)$$

where $\operatorname{MS\text{-}SSIM}(P, R)$ denotes the average structural similarity between the predicted image and the reference image;
step 5, inputting the test data set into the network model trained in the step 4, obtaining a predicted high-resolution remote sensing image of the test data, and testing the availability of the network model in the data set region by quantitatively evaluating and qualitatively evaluating the predicted high-resolution remote sensing image;
and 6, verifying the available network model by using the step 5 to obtain a plurality of high-resolution remote sensing images of the data set region on the predicted date, thereby forming a high-resolution time-intensive remote sensing image sequence.
2. The remote sensing image space-time fusion method based on wavelet domain cross pairing as claimed in claim 1, wherein: in step 1, high- and low-resolution remote sensing images of the same area are selected to construct a data set; images whose cloud coverage exceeds a threshold $T_1$ or whose number of empty pixels exceeds a threshold $T_2$ are removed, where $T_1$ and $T_2$ are set thresholds; radiometric calibration, geometric correction, and atmospheric correction are performed; the blank areas at the four corners of the corrected high-resolution remote sensing image are cropped off, the low-resolution remote sensing image is reprojected into the coordinate system of the high-resolution remote sensing image and the area overlapping the high-resolution image is cropped out, and the cropped low-resolution remote sensing image data are resampled to the same size as the cropped high-resolution remote sensing image.
3. The remote sensing image space-time fusion method based on wavelet domain cross pairing as claimed in claim 1, wherein: in step 2, the preprocessed high- and low-resolution remote sensing images are grouped; a group of data comprises a high-resolution remote sensing image on a reference date and a high/low-resolution remote sensing image pair on a prediction date, the reference and prediction dates being adjacent time phases after unqualified data are removed; the reference-date high-resolution remote sensing image and the prediction-date low-resolution remote sensing image are used as the input of the model, the prediction-date high-resolution remote sensing image is used as the constraint of the model, and the data set is divided into a training data set, a test data set, and a verification data set.
4. The remote sensing image space-time fusion method based on wavelet domain cross pairing as claimed in claim 1, wherein: in step 3, the reference feature extraction network and the change feature extraction network take different inputs but have the same structure, each comprising three layers, each layer consisting of a convolution layer and a rectified linear unit layer; the reconstruction network comprises three layers, each layer consisting of a convolution layer and a rectified linear unit layer.
5. The remote sensing image space-time fusion method based on wavelet domain cross pairing as claimed in claim 1, wherein: in step 5, quantitative evaluation of the obtained predicted high-resolution remote sensing image of the prediction date means comprehensively evaluating the texture-detail retention and spectral retention of the fused image with root mean square error (RMSE), structural similarity (SSIM), linear correlation coefficient (CC), and spectral angle mapper (SAM); qualitative evaluation of the predicted image means inspecting the visual effect of the enlarged image, or performing land cover classification on the image, or calculating NDVI to assess the application value of the fusion result; if both the quantitative and the qualitative evaluation of the test data set predictions reach the effect of mainstream models, the model is shown to be applicable and stable in the region, and space-time fusion in the data set region can be realized.
6. A remote sensing image space-time fusion system based on wavelet domain cross pairing, which comprises a processor and a memory, wherein the memory is used for storing program instructions, and the processor is used for calling the program instructions in the memory to execute the remote sensing image space-time fusion method based on wavelet domain cross pairing according to any one of claims 1-5.
7. A remote sensing image space-time fusion system based on wavelet domain cross pairing, comprising a readable storage medium, wherein the readable storage medium is stored with a computer program, and the computer program realizes the remote sensing image space-time fusion method based on wavelet domain cross pairing according to any one of claims 1-5 when executed.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311304694.3A (CN117036987B) | 2023-10-10 | 2023-10-10 | Remote sensing image space-time fusion method and system based on wavelet domain cross pairing |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311304694.3A (CN117036987B) | 2023-10-10 | 2023-10-10 | Remote sensing image space-time fusion method and system based on wavelet domain cross pairing |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN117036987A | 2023-11-10 |
| CN117036987B | 2023-12-08 |
Family
- ID: 88643464

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311304694.3A (granted as CN117036987B, active) | Remote sensing image space-time fusion method and system based on wavelet domain cross pairing | 2023-10-10 | 2023-10-10 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN117036987B (en) |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7120305B2 * | 2002-04-16 | 2006-10-10 | Ricoh, Co., Ltd. | Adaptive nonlinear image enlargement using wavelet transform coefficients |

- 2023-10-10: CN application CN202311304694.3A filed; granted as CN117036987B (status: Active)
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000013423A1 (en) * | 1998-08-28 | 2000-03-09 | Sarnoff Corporation | Method and apparatus for synthesizing high-resolution imagery using one high-resolution camera and a lower resolution camera |
CN1828668A (en) * | 2006-04-10 | 2006-09-06 | 天津大学 | Typhoon center positioning method based on embedded type concealed Markov model and cross entropy |
CN105528619A (en) * | 2015-12-10 | 2016-04-27 | 河海大学 | SAR remote sensing image change detection method based on wavelet transform and SVM |
KR20190110320A (en) * | 2018-03-20 | 2019-09-30 | 영남대학교 산학협력단 | Method for restoration of image, apparatus and system for executing the method |
CN108830814A (en) * | 2018-06-15 | 2018-11-16 | 武汉大学 | A kind of relative radiometric correction method of remote sensing image |
CN109472743A (en) * | 2018-10-25 | 2019-03-15 | 中国科学院电子学研究所 | The super resolution ratio reconstruction method of remote sensing images |
CN109636716A (en) * | 2018-10-29 | 2019-04-16 | 昆明理工大学 | A kind of image super-resolution rebuilding method based on wavelet coefficient study |
CN111640059A (en) * | 2020-04-30 | 2020-09-08 | 南京理工大学 | Multi-dictionary image super-resolution method based on Gaussian mixture model |
CN114022356A (en) * | 2021-10-29 | 2022-02-08 | 长视科技股份有限公司 | River course flow water level remote sensing image super-resolution method and system based on wavelet domain |
CN113902646A (en) * | 2021-11-19 | 2022-01-07 | 电子科技大学 | Remote sensing image pan-sharpening method based on depth layer feature weighted fusion network |
KR20230102134A (en) * | 2021-12-30 | 2023-07-07 | 인천대학교 산학협력단 | Real-time image fusion apparatus and method for remote sensing based on deep learning |
CN116091936A (en) * | 2022-11-28 | 2023-05-09 | 中国农业大学 | Agricultural condition parameter inversion method for fusing point-land block-area scale data |
CN116563103A (en) * | 2023-04-19 | 2023-08-08 | 浙江大学 | Remote sensing image space-time fusion method based on self-adaptive neural network |
Non-Patent Citations (10)
| Title |
|---|
| Tan, Z.; "A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network"; IEEE * |
| Wen Ma; "Achieving Super-Resolution Remote Sensing Images via the Wavelet Transform Combined With the Recursive Res-Net"; IEEE * |
| Li Xinghua; "Monitoring vegetation dynamics (2010-2020) in Shennongjia Forestry District with cloud-removed MODIS NDVI series by a spatio-temporal reconstruction method"; Elsevier * |
| Jian Cheng; "Remote sensing image fusion via wavelet transform and sparse representation"; Elsevier * |
| Shengke Xue; "Wavelet-based residual attention network for image super-resolution"; Elsevier * |
| Xue Jian, Yu Shenglin, Wang Hongping; "An image fusion method based on lifting wavelet transform and IHS transform"; Journal of Image and Graphics, No. 2 * |
| Wang Qian, Liu Yang, Jia Yonghong; "An improved à trous wavelet fusion method"; Bulletin of Surveying and Mapping, No. 8 * |
| Yang Chao et al.; "Remote sensing image fusion based on super-resolution processing of multispectral images"; Laser & Optoelectronics Progress * |
| Sun Chao, Kou Kunhu, Lv Junwei, Ye Songsong, Liu Hao, Zhou Ling, Zhao Li; "Research on image super-resolution methods based on wavelet deep networks"; Application Research of Computers, No. S1 * |
| Zhang Yongmei, Hua Ruimin, Ma Jianzhe, Hu Lei; "A remote sensing high spatio-temporal fusion method based on deep learning and super-resolution reconstruction"; Computer Engineering & Science, No. 9 * |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN117036987A (en) | 2023-11-10 |
Similar Documents

| Publication | Title |
|---|---|
| Cao et al. | Thick cloud removal in Landsat images based on autoregression of Landsat time-series data |
| Halme et al. | Utility of hyperspectral compared to multispectral remote sensing data in estimating forest biomass and structure variables in Finnish boreal forest |
| CN102800074B | Synthetic aperture radar (SAR) image change detection difference chart generation method based on contourlet transform |
| CN109685108B | Method for generating high-space-time resolution NDVI long-time sequence |
| CN117992757B | Homeland ecological environment remote sensing data analysis method based on multidimensional data |
| CN114120101A | Soil moisture multi-scale comprehensive sensing method |
| CN117422619A | Training method of image reconstruction model, image reconstruction method, device and equipment |
| Cresson et al. | Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images |
| CN116563728A | Optical remote sensing image cloud and fog removing method and system based on generation countermeasure network |
| CN116310802A | Method and device for monitoring change of residence based on multi-scale fusion model |
| CN106599548B | The spatial and temporal scales matching process and device of land evapotranspiration remote sensing appraising |
| Sustika et al. | Generative adversarial network with residual dense generator for remote sensing image super resolution |
| Sihvonen et al. | Spectral profile partial least-squares (SP-PLS): Local multivariate pansharpening on spectral profiles |
| CN117975297B | Urban ground surface deformation risk fine identification method assisted by combination of multi-source data |
| CN117036987B | Remote sensing image space-time fusion method and system based on wavelet domain cross pairing |
| CN107358625B | SAR image change detection method based on SPP Net and region-of-interest detection |
| CN109359264A | A kind of chlorophyll product NO emissions reduction method and device based on MODIS |
| Huber et al. | Deep Interpolation of Remote Sensing Land Surface Temperature Data with Partial Convolutions |
| CN116091936A | Agricultural condition parameter inversion method for fusing point-land block-area scale data |
| CN117115671A | Soil quality analysis method and device based on remote sensing and electronic equipment |
| Vancutsem et al. | An assessment of three candidate compositing methods for global MERIS time series |
| CN115147726A | City form map generation method and device, electronic equipment and readable storage medium |
| CN115187463A | Landslide remote sensing image set super-resolution reconstruction method and system |
| CN115063332B | Method for constructing high-spatial-resolution time sequence remote sensing data |
| CN114596505B | Fruit tree statistical method based on deep learning technology |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |