
CN118379631A - Self-guiding unsupervised heterogeneous remote sensing image change detection method and system - Google Patents

Self-guiding unsupervised heterogeneous remote sensing image change detection method and system

Info

Publication number
CN118379631A
CN118379631A
Authority
CN
China
Prior art keywords
pseudo
unsupervised
initial
network
change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410522591.2A
Other languages
Chinese (zh)
Inventor
侍佼
吴天成
王方博
雷雨
杨宏
王大宇
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Northwestern Polytechnical University filed Critical Shenzhen Institute of Northwestern Polytechnical University
Priority to CN202410522591.2A priority Critical patent/CN118379631A/en
Publication of CN118379631A publication Critical patent/CN118379631A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a self-guided unsupervised heterogeneous remote sensing image change detection method and system, and relates to the technical field of image processing. The self-guided unsupervised heterogeneous remote sensing image change detection method comprises the following steps: acquiring a pair of heterogeneous remote sensing images to be detected; inputting a pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample; performing difference analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data; inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times; performing differential analysis on the secondary fusion pseudo-variation sample to obtain secondary pseudo-tag data; and inputting the secondary pseudo tag data, the secondary fusion pseudo change sample and the pair of heterogeneous remote sensing images to be detected into a final self-guiding network to obtain a change detection result of the heterogeneous remote sensing images.

Description

Self-guiding unsupervised heterogeneous remote sensing image change detection method and system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a self-guided unsupervised heterogeneous remote sensing image change detection method and system.
Background
Remote sensing image change detection is a technology for identifying changed areas between images of the same region acquired at different times. In general, change detection tasks can be classified into homogeneous image change detection and heterogeneous image change detection. Homogeneous images are images from the same sensor under the same geometry, seasonal conditions, and acquisition parameters; images are heterogeneous when any of these conditions, such as environmental conditions, acquisition geometry, sensor settings, sensor mode, or sensor type, differ.
Most existing algorithms based on deep neural networks are designed for homogeneous remote sensing images and typically provide no additional handling of the differences between heterogeneous images. For example, optical sensors are severely affected by illumination, while synthetic aperture radar (SAR) sensors are mainly affected by the reflective properties of objects. Even remote sensing images of the same area can differ greatly because of non-uniform interference. As remote sensing image types have become more abundant, the analysis of heterogeneous data has become a trend. Image properties vary widely owing to the effects of different noise and different sensor parameters.
Currently, a common approach to change detection in heterogeneous remote sensing images is to perform task alignment by designing deep nonlinear transformations or translations between different domains, so as to map the original data from different sources into a common domain; this alignment between images can be regarded as an auxiliary strategy outside the main change detection task. Although such auxiliary strategies have proven very effective for image change detection, they introduce many extra tasks, leading to bulky models, training difficulties, and difficulty balancing the loss function across multiple tasks.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a self-guided unsupervised heterogeneous remote sensing image change detection method and system.
The technical problems to be solved by the invention are realized by the following technical scheme:
In a first aspect, the present invention provides a self-guided unsupervised heterogeneous remote sensing image change detection method, including:
acquiring a pair of heterogeneous remote sensing images to be detected;
inputting a pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample;
Performing difference analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data;
Inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times;
Performing differential analysis on the secondary fusion pseudo-variation sample to obtain secondary pseudo-tag data;
Inputting the secondary pseudo tag data, the secondary fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a final self-guiding network to obtain a change detection result of the heterogeneous remote sensing images;
Wherein the first iterative structure comprises: an unsupervised network and a supervised network; the second iterative structure includes: bootstrap networks and supervised networks.
Optionally, the unsupervised network comprises a first automatic encoder network and a second automatic encoder network, each comprising: an encoder and a decoder.
Optionally, inputting a pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample, including:
Inputting a pair of heterogeneous remote sensing images to be detected into an unsupervised network to obtain a coding result of the unsupervised network; the coding result of the unsupervised network is a first coding result output by the first automatic coder network and a second coding result output by the second automatic coder network;
dividing the coding result of the unsupervised network by using a maximum inter-class variance method to obtain an initial unsupervised change diagram;
Inputting the initial unsupervised change graph and the coding result of the unsupervised network into the supervised network to obtain an initial pseudo change graph;
and carrying out fusion processing on the initial unsupervised change graph and the initial pseudo change graph to obtain an initial fusion pseudo change sample.
Optionally, the self-guided unsupervised heterogeneous remote sensing image change detection method further comprises:
consistency judgment is carried out on the initial unsupervised change map through neighborhood criterion processing, and an initial unsupervised verification result is obtained;
Inputting the initial unsupervised change map and the coding result of the unsupervised network into the supervised network to obtain an initial pseudo change map, wherein the method comprises the following steps:
Inputting the initial unsupervised verification result and the coding result of the unsupervised network into the supervised network to obtain an initial pseudo-variation graph.
Optionally, fusing the initial unsupervised change map and the initial pseudo change map to obtain an initial fused pseudo change sample, including:
And carrying out intersection processing on the initial unsupervised change graph and the initial pseudo change graph to obtain an initial fusion pseudo change sample.
Optionally, the initial fusing of the pseudo-variant samples includes: the first initial sub-fusion pseudo-variation sample to be tested and the second initial sub-fusion pseudo-variation sample to be tested;
performing differential analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data, wherein the differential analysis comprises the following steps:
Calculating the Euclidean distance between the first to-be-measured initial sub-fusion pseudo-variation sample and the second to-be-measured initial sub-fusion pseudo-variation sample;
And performing difference analysis on the first to-be-detected initial sub-fusion pseudo-change sample and the second to-be-detected initial sub-fusion pseudo-change sample through the mean square error of the Euclidean distance to obtain initial pseudo-tag data.
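One plausible reading of this difference analysis, sketched in NumPy: compute the per-pixel Euclidean distance between the two sub-samples and threshold it by the root of the mean squared distance. The function name and the exact thresholding rule are assumptions, not taken from the patent.

```python
import numpy as np

def difference_analysis(sample_x, sample_y):
    """Hypothetical sketch: derive pseudo-label data from the two
    sub-fusion pseudo-change samples via per-pixel Euclidean distance.

    sample_x, sample_y: (M, N, C) arrays.
    Returns a binary (M, N) pseudo-label map: 1 = changed, 0 = unchanged.
    """
    # Per-pixel Euclidean distance across the channel axis.
    dist = np.sqrt(((sample_x - sample_y) ** 2).sum(axis=-1))
    # Threshold by the root of the mean squared distance -- one reading of
    # "the mean square error of the Euclidean distance" in the claim.
    threshold = np.mean(dist ** 2) ** 0.5
    return (dist > threshold).astype(np.uint8)
```

Pixels whose distance exceeds the global threshold are marked as changed; all others are treated as unchanged in the pseudo-labels.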
Optionally, inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times, wherein the method comprises the following steps:
S1, inputting initial pseudo tag data, an initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a bootstrap network to obtain a bootstrap coding result of a current iteration;
s2, dividing the bootstrap coding result of the current iteration time by a maximum inter-class variance method to obtain a bootstrap variation diagram of the current iteration time;
s3, consistency judgment is carried out on the bootstrap change graph of the current iteration, and a bootstrap verification result of the current iteration is obtained;
s4, inputting the bootstrap verification result of the current iteration and the bootstrap coding result of the current iteration into a supervised network to obtain a pseudo-change diagram of the current iteration;
s5, carrying out fusion processing on the pseudo-change graph of the current iteration and the self-guiding coding result of the current iteration to obtain a fusion pseudo-change sample of the current iteration;
s6, performing difference analysis on the fusion pseudo-variation sample of the current iteration to obtain pseudo-tag data of the current iteration;
S7, adding 1 to the number of the current iteration times to serve as the current iteration times, taking the result of S5 as an initial fusion pseudo-change sample, and taking the result of S6 as initial pseudo-label data;
S8, circularly executing the steps S1-S7 until the preset iteration times are reached, and finally obtaining a secondary fusion pseudo-change sample and a trained final bootstrap network.
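The S1-S8 loop above can be sketched as follows; every callable passed in is a hypothetical stand-in for the corresponding component in the text (bootstrap network, supervised network, Otsu segmentation, neighborhood verification, fusion, and difference analysis), not an implementation from the patent.

```python
import numpy as np

def run_second_iteration(images, fused_sample, pseudo_labels, n_iters,
                         bootstrap_net, supervised_net, otsu, verify,
                         fuse, diff_analyze):
    """Sketch of steps S1-S8 of the second iterative structure."""
    for _ in range(n_iters):
        coding = bootstrap_net(images, fused_sample, pseudo_labels)   # S1
        change_map = otsu(coding)                                     # S2
        verified = verify(change_map)                                 # S3
        pseudo_map = supervised_net(verified, coding)                 # S4
        fused_sample = fuse(pseudo_map, coding)                       # S5
        pseudo_labels = diff_analyze(fused_sample)                    # S6
        # S7/S8: the updated sample and labels feed the next pass.
    return fused_sample, bootstrap_net
```

After the preset number of passes, the returned pair plays the role of the secondary fusion pseudo-change sample and the final bootstrap network.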
Optionally, the self-guided unsupervised heterogeneous remote sensing image change detection method further comprises:
Constraining the first encoding result and the second encoding result by using a loss function of the unsupervised network;
The loss function of the unsupervised network, minL_AE(θ), is expressed as:
minL_AE(θ) = L_r(θ) + α·L_h(θ);
L_r(θ) = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} [ (X_(i,j) − X̂_(i,j))² + (Y_(i,j) − Ŷ_(i,j))² ];
L_h(θ) = huber_loss(E_X(X), E_Y(Y));
where L_r(θ) denotes the mean square reconstruction error, L_h(θ) denotes the Huber loss, X denotes the first image of the pair of heterogeneous remote sensing images to be detected, Y denotes the second image of the pair, E_X(X) denotes the result of inputting X into the unsupervised network encoder, E_Y(Y) denotes the result of inputting Y into the unsupervised network encoder, X̂ = N_X(X) denotes the reconstruction result of inputting X into the unsupervised network, Ŷ = N_Y(Y) denotes the reconstruction result of inputting Y into the unsupervised network, N_X(·) denotes the unsupervised network processing operation on X, N_Y(·) denotes the unsupervised network processing operation on Y, i denotes the horizontal pixel index of the sub-image to be detected, j denotes the vertical pixel index, X_(i,j) denotes the pixel in row i and column j of X, Y_(i,j) denotes the pixel in row i and column j of Y, X̂_(i,j) denotes the reconstruction result of inputting X_(i,j) into the unsupervised network, Ŷ_(i,j) denotes the reconstruction result of inputting Y_(i,j) into the unsupervised network, α denotes the weighting coefficient of the unsupervised network mapping loss, and θ denotes the unsupervised network weight parameters.
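A minimal NumPy rendering of this loss, assuming the standard elementwise Huber definition and an illustrative value of α; the function names and the delta parameter are hypothetical, not from the patent.

```python
import numpy as np

def huber_loss(a, b, delta=1.0):
    """Standard elementwise Huber loss, averaged over all elements."""
    err = np.abs(a - b)
    quad = 0.5 * err ** 2
    lin = delta * (err - 0.5 * delta)
    return np.mean(np.where(err <= delta, quad, lin))

def unsupervised_loss(x, y, x_rec, y_rec, ex, ey, alpha=0.1):
    """Sketch of minL_AE(θ) = L_r(θ) + α·L_h(θ).

    x, y         : input images; x_rec, y_rec: decoder reconstructions
    ex, ey       : encoder features E_X(X), E_Y(Y)
    alpha        : mapping-loss weight (value here is illustrative)
    """
    # L_r: mean square reconstruction error of both branches.
    l_r = np.mean((x - x_rec) ** 2) + np.mean((y - y_rec) ** 2)
    # L_h: Huber loss aligning the two encoders' feature spaces.
    l_h = huber_loss(ex, ey)
    return l_r + alpha * l_h
```

The Huber term pulls the two encoders' feature spaces together while the reconstruction term keeps each autoencoder faithful to its own input.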
In a second aspect, the present invention provides a self-guided unsupervised heterogeneous remote sensing image change detection system, comprising: a processor, a storage medium, and a bus, wherein the storage medium stores machine-readable instructions executable by the processor; when the self-guided unsupervised heterogeneous remote sensing image change detection system operates, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of the method according to the first aspect.
In a third aspect, the present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect described above.
The invention provides a self-guided unsupervised heterogeneous remote sensing image change detection method and system. The self-guided unsupervised heterogeneous remote sensing image change detection method comprises the following steps: acquiring a pair of heterogeneous remote sensing images to be detected; inputting a pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample; performing difference analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data; inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times; performing differential analysis on the secondary fusion pseudo-variation sample to obtain secondary pseudo-tag data; inputting the secondary pseudo tag data, the secondary fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a final self-guiding network to obtain a change detection result of the heterogeneous remote sensing images; wherein the first iterative structure comprises: an unsupervised network and a supervised network; the second iterative structure includes: bootstrap networks and supervised networks. 
According to the invention, a first iteration structure is used for acquiring an initial fusion pseudo-change sample, a label is added to obtain initial pseudo-label data, then the initial pseudo-label data, the initial fusion pseudo-change sample and a pair of heterogeneous remote sensing images to be tested are used for continuously carrying out data iteration on a second iteration structure, so that the output result of a self-guiding network in the second iteration structure and the output result of a supervision network in the second iteration structure are continuously approximated towards the direction of difference reduction, a secondary fusion pseudo-change sample and a final self-guiding network are obtained, finally, the trained final self-guiding network is used for obtaining the change detection result of the heterogeneous remote sensing images, the problems of conversion and task alignment processing between data fields and higher model complexity in the prior art are avoided, the change detection efficiency of the heterogeneous remote sensing images is improved, and the performance requirements on hardware equipment are reduced.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flow chart of a self-guided method for detecting changes in an unsupervised heterogeneous remote sensing image according to an embodiment of the present invention;
fig. 2 is a flowchart of an overall self-guiding method for detecting changes in an unsupervised heterogeneous remote sensing image according to an embodiment of the present invention;
fig. 3 is a data processing flow of an unsupervised network according to an embodiment of the present invention;
FIG. 4 is a data processing flow of a supervised network according to an embodiment of the present invention;
FIG. 5 is a flowchart of overall data processing of a bootstrap network according to an embodiment of the present invention;
FIG. 6 is a first set of simulated sample patterns provided by an embodiment of the present invention;
FIG. 7 is a second set of simulated sample patterns provided by an embodiment of the present invention;
FIG. 8 is a simulation result diagram corresponding to FIG. 6 provided by an embodiment of the present invention;
FIG. 9 is a diagram of simulation results corresponding to FIG. 7 provided in an embodiment of the present invention;
Fig. 10 is a schematic diagram of a self-guiding system for detecting changes in an unsupervised heterogeneous remote sensing image according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
In order to reduce processing tasks and model complexity, the embodiment of the invention provides a self-guided unsupervised heterogeneous remote sensing image change detection method. Fig. 1 is a flowchart of the self-guided unsupervised heterogeneous remote sensing image change detection method; the method includes:
s101, acquiring a pair of heterogeneous remote sensing images to be detected.
It should be noted that, in the embodiment of the present invention, the pair of heterogeneous remote sensing images to be detected are both unlabeled data. Denote the pair of heterogeneous remote sensing images to be detected as D = {I_1, I_2}, with I_1 defined as X and I_2 defined as Y, so that I_1 = X ∈ R^(M×N×p) and I_2 = Y ∈ R^(M×N×q). In the data processed by the present invention, X and Y have the same spatial size, but the numbers of spectral channels p and q may differ; M and N denote the total number of horizontal and vertical pixels of X and Y.
S102, inputting a pair of heterogeneous remote sensing images to be detected into a first iteration structure to obtain an initial fusion pseudo-change sample.
Optionally, inputting a pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample, including:
Inputting a pair of heterogeneous remote sensing images to be detected into an unsupervised network to obtain a coding result of the unsupervised network; the coding result of the unsupervised network is a first coding result output by the first automatic coder network and a second coding result output by the second automatic coder network;
dividing the coding result of the unsupervised network by using a maximum inter-class variance method to obtain an initial unsupervised change diagram;
Inputting the initial unsupervised change graph and the coding result of the unsupervised network into the supervised network to obtain an initial pseudo change graph;
and carrying out fusion processing on the initial unsupervised change graph and the initial pseudo change graph to obtain an initial fusion pseudo change sample.
In the embodiment of the invention, the pair of heterogeneous remote sensing images to be detected are X_(i,j) and Y_(i,j), where i denotes a horizontal pixel of X and Y, j denotes a vertical pixel of X and Y, 1 ≤ i ≤ M, and 1 ≤ j ≤ N. An unsupervised network (Unsupervised AE) is first trained, and its output is divided into two classes, changed and unchanged, using the maximum between-class variance method (Otsu) to obtain an initial unsupervised change map CM_U. In one possible implementation, the initial change detection result CM_U may be subjected to neighborhood criterion processing and screened to obtain a more reliable initial unsupervised verification result Z_label, and the supervised network (Supervised Net) is trained with Z_label to obtain an initial pseudo-change map CM_S; in another possible implementation, the initial pseudo-change map CM_S may also be obtained by training the supervised network directly with CM_U. Finally, CM_U and CM_S are fused and voted to obtain the initial fusion pseudo-change sample. At this point, the training of the unsupervised network and the supervised network in the preprocessing stage is complete.
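The Otsu (maximum between-class variance) split used above can be sketched in plain NumPy; the function names are illustrative, and the histogram-based formulation below is the standard Otsu procedure rather than anything patent-specific.

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Plain-NumPy Otsu: pick the threshold maximising the
    between-class variance over a histogram of `values`."""
    hist, edges = np.histogram(values, bins=n_bins)
    prob = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(prob)              # weight of class 0 at each cut
    mu = np.cumsum(prob * centers)    # cumulative mean up to each cut
    mu_t = mu[-1]                     # global mean
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    var_between = np.zeros(n_bins)
    var_between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(var_between)]

def initial_change_map(coding):
    """Binarise the unsupervised coding result: 1 = changed, 0 = unchanged."""
    t = otsu_threshold(coding)
    return (coding > t).astype(np.uint8)
```

Applied to the encoder output, this yields the binary map CM_U without any labelled training data.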
It should be noted that performing neighborhood criterion processing on CM_U and screening out a more reliable initial unsupervised verification result Z_label makes the obtained sample data more accurate and can effectively improve the processing performance of the supervised network.
S103, performing difference analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data.
S104, inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times.
It should be noted that, in the embodiment of the present invention, the supervised networks in the first iteration structure and the second iteration structure are the same network.
S105, performing difference analysis on the secondary fusion pseudo-change sample to obtain secondary pseudo-tag data.
S106, inputting the secondary pseudo tag data, the secondary fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a final self-guiding network to obtain a change detection result of the heterogeneous remote sensing images.
Wherein the first iterative structure comprises: an unsupervised network and a supervised network; the second iterative structure includes: bootstrap networks and supervised networks.
The embodiment of the invention provides a self-guided unsupervised heterogeneous remote sensing image change detection method, which comprises the following steps: acquiring a pair of heterogeneous remote sensing images to be detected; inputting a pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample; performing difference analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data; inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times; performing differential analysis on the secondary fusion pseudo-variation sample to obtain secondary pseudo-tag data; inputting the secondary pseudo tag data, the secondary fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a final self-guiding network to obtain a change detection result of the heterogeneous remote sensing images; wherein the first iterative structure comprises: an unsupervised network and a supervised network; the second iterative structure includes: bootstrap networks and supervised networks. 
According to the invention, a first iteration structure is used for acquiring an initial fusion pseudo-change sample, a label is added to obtain initial pseudo-label data, then the initial pseudo-label data, the initial fusion pseudo-change sample and a pair of heterogeneous remote sensing images to be tested are used for continuously carrying out data iteration on a second iteration structure, so that the output result of a self-guiding network in the second iteration structure and the output result of a supervision network in the second iteration structure are continuously approximated towards the direction of difference reduction, a secondary fusion pseudo-change sample and a final self-guiding network are obtained, finally, the trained final self-guiding network is used for obtaining the change detection result of the heterogeneous remote sensing images, the problems of conversion and task alignment processing between data fields and higher model complexity in the prior art are avoided, the change detection efficiency of the heterogeneous remote sensing images is improved, and the performance requirements on hardware equipment are reduced.
Optionally, the unsupervised network comprises a first automatic encoder network and a second automatic encoder network, each comprising: an encoder and a decoder.
It should be noted that the unsupervised network (Unsupervised AE) is composed of two denoising convolutional autoencoder networks (dCAE); that is, the unsupervised network includes a first autoencoder network and a second autoencoder network. The operation of a dCAE network N(·) typically comprises two steps, an encoder E(·) and a decoder D(·), with features extracted as the output of the encoder E(·). Thus, the encoder data mappings are denoted E_X(·): X → Z_X and E_Y(·): Y → Z_Y, where Z_X and Z_Y are the features extracted by the encoder E(·).
The first layer in the encoder E(·) is a convolution layer with kernel size 3×3 and stride 1, which outputs a 20-dimensional feature vector to complete the up-sampling process. It is followed by two convolution coupling layers with kernel size 1×1 and stride 1, which leave the output dimension unchanged. Finally, a convolution coupling layer with output dimension 1 completes the down-sampling process, and the down-sampled result is fed into the supervised network. The decoding layer D(·) consists of two convolution coupling layers and one deconvolution layer, forming a structure symmetric to the encoder E(·); the output size of its first layer is 20, and the output size of the last layer equals the number of image channels. The E(·) and D(·) of the two dCAEs are trained simultaneously and independently of each other.
In the embodiment of the invention, the hidden characteristic representation can be obtained by setting an unsupervised network and utilizing the encoder E (-) to complete the characteristic extraction function, so that the task alignment processing in the prior art is avoided, and the processing efficiency is improved.
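As a rough sketch, the layer sizes described above could be assembled in PyTorch as follows; the padding, activation functions, and the class name `DCAE` are assumptions not specified in the patent.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    """Sketch of one dCAE branch following the layer sizes in the text."""
    def __init__(self, in_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            # 3x3 conv, stride 1, 20-dimensional feature output.
            nn.Conv2d(in_channels, 20, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            # Two 1x1 coupling layers, output dimension unchanged.
            nn.Conv2d(20, 20, kernel_size=1, stride=1), nn.ReLU(),
            nn.Conv2d(20, 20, kernel_size=1, stride=1), nn.ReLU(),
            # Final coupling layer reduces the output dimension to 1.
            nn.Conv2d(20, 1, kernel_size=1, stride=1),
        )
        self.decoder = nn.Sequential(
            # Symmetric to the encoder: first layer outputs size 20 ...
            nn.Conv2d(1, 20, kernel_size=1, stride=1), nn.ReLU(),
            nn.Conv2d(20, 20, kernel_size=1, stride=1), nn.ReLU(),
            # ... and the deconvolution restores the image channel count.
            nn.ConvTranspose2d(20, in_channels, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # coding result fed to the supervised network
        return z, self.decoder(z)     # (features, reconstruction)
```

Two such branches, one per input image, would be trained simultaneously and independently as described above.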
Optionally, the self-guided unsupervised heterogeneous remote sensing image change detection method further comprises:
and carrying out consistency judgment on the initial unsupervised change map through neighborhood criterion processing to obtain an initial unsupervised verification result.
It should be noted that, in the embodiment of the present invention, the neighborhood criterion is treated as a voting (highest-consistency) mechanism based on the highest threshold T_v = N_ne and is mainly used to evaluate the confidence of a sample, where N_ne is the number of neighbors of inputs I_1 and I_2. In the neighborhood criterion processing, unlabeled samples whose initial unsupervised change map label is consistent with the labels of all N_ne surrounding neighbor samples are selected and added to the available candidate sample set, yielding the initial unsupervised verification result.
The initial unsupervised verification result Z_label is calculated as:

Z_label = { (x_i, y_i) | Σ_{t=1}^{N_ne} 1(y_t = y_i) = T_v }

where i denotes the i-th pixel, x_i is the i-th unlabeled pixel sample, y_i is the class label receiving the highest vote when the N_ne neighborhood samples are consistency-voted, t denotes the t-th neighborhood sample with label y_t, and T_v is the vote threshold, so that only unlabeled samples receiving T_v = N_ne consistent votes are retained.
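This screening can be sketched in plain numpy. The sketch below assumes an 8-connected neighborhood (N_ne = 8) and marks pixels that fail the unanimity test as unreliable (-1); the function and variable names are illustrative, not the patent's implementation:

```python
import numpy as np

def neighborhood_screen(change_map, n_ne=8):
    """Keep only pixels whose label agrees with all n_ne (8-connected)
    neighbours; all other pixels are marked unreliable (-1)."""
    h, w = change_map.shape
    out = np.full((h, w), -1, dtype=int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = change_map[i-1:i+2, j-1:j+2]
            votes = np.sum(patch == change_map[i, j]) - 1  # exclude centre
            if votes == n_ne:                              # T_v = N_ne
                out[i, j] = change_map[i, j]
    return out
```

Border pixels, which lack a full neighborhood, are simply left out of the candidate set here.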
Inputting the initial unsupervised change map and the encoding result of the unsupervised network into the supervised network to obtain an initial pseudo-change map comprises: inputting the initial unsupervised verification result and the encoding result of the unsupervised network into the supervised network to obtain the initial pseudo-change map.
In an embodiment of the invention, the loss of the supervised network employs a cross-entropy loss function. After data processing by the supervised network, the label 0 in the initial pseudo-change map is extended to 01, and the label 1 is extended to 10.
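This label extension is a one-hot encoding of the two classes; a minimal sketch (names illustrative):

```python
import numpy as np

def extend_labels(labels):
    """Extend scalar labels as described: 0 -> [0, 1], 1 -> [1, 0]."""
    mapping = np.array([[0, 1], [1, 0]])
    return mapping[labels]
```

The two-channel form matches the cross-entropy loss, which expects one output per class.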
It should be noted that, in the embodiment of the present invention, after the segmentation operation is performed by the maximum inter-class variance method, the value of each changed pixel position is set to 1 and each unchanged pixel position to 0, finally yielding the initial unsupervised change map CM_U.
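The maximum inter-class variance (Otsu) segmentation described above can be sketched as follows, assuming the difference image is first scaled to the 0-255 range; all names are illustrative:

```python
import numpy as np

def otsu_threshold(diff):
    """Return the grey level maximizing between-class variance."""
    d = np.round(255 * (diff - diff.min()) / (np.ptp(diff) + 1e-12)).astype(int)
    hist = np.bincount(d.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0    # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def to_change_map(diff):
    """Binarize: 1 = changed pixel, 0 = unchanged pixel (CM_U-style)."""
    t = otsu_threshold(diff)
    d = np.round(255 * (diff - diff.min()) / (np.ptp(diff) + 1e-12)).astype(int)
    return (d >= t).astype(int)
```

For a strongly bimodal difference image this recovers the two classes exactly.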
In addition, in the embodiment of the invention, neighborhood criterion processing is applied to the initial unsupervised change map CM_U, and screening yields the more reliable initial unsupervised verification result Z_label, making the acquired data more accurate. On this basis, the supervised network is trained with Z_label and the encoding results of the unsupervised network, which effectively improves the data processing performance of the supervised network.
Optionally, fusing the initial unsupervised change map and the initial pseudo change map to obtain an initial fused pseudo change sample, including:
And carrying out intersection processing on the initial unsupervised change graph and the initial pseudo change graph to obtain an initial fusion pseudo change sample.
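Under the plausible reading that "intersection processing" keeps a pixel as changed only when both maps agree, the fusion is a logical AND (names illustrative):

```python
import numpy as np

def fuse_intersection(cm_u, cm_s):
    """Intersection of two binary change maps: 1 only where both say 'changed'."""
    return ((cm_u == 1) & (cm_s == 1)).astype(int)
```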
Optionally, the initial fusion pseudo-change sample comprises: a first to-be-tested initial sub-fusion pseudo-change sample and a second to-be-tested initial sub-fusion pseudo-change sample;
performing difference analysis on the initial fusion pseudo-change sample to obtain initial pseudo-tag data comprises:
calculating the Euclidean distance between the first to-be-tested initial sub-fusion pseudo-change sample and the second to-be-tested initial sub-fusion pseudo-change sample;
and performing difference analysis on the two samples through the mean square error of the Euclidean distance to obtain the initial pseudo-tag data.
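One plausible reading of this criterion is to compute the per-pixel Euclidean distance between the two sub-samples and threshold the squared distances at their mean; the sketch below follows that reading (all names are illustrative, not the patent's implementation):

```python
import numpy as np

def difference_analysis(sample_a, sample_b):
    """Per-pixel squared Euclidean distance over the channel axis,
    thresholded at its mean; 1 = pseudo 'changed' label."""
    sq_dist = ((sample_a - sample_b) ** 2).sum(axis=-1)
    mse = sq_dist.mean()
    return (sq_dist > mse).astype(int)
```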
Optionally, inputting the initial pseudo tag data, the initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times, wherein the method comprises the following steps:
S1, inputting initial pseudo tag data, an initial fusion pseudo change sample and a pair of heterogeneous remote sensing images to be detected into a bootstrap network to obtain a bootstrap coding result of a current iteration;
s2, dividing the bootstrap coding result of the current iteration time by a maximum inter-class variance method to obtain a bootstrap variation diagram of the current iteration time;
s3, consistency judgment is carried out on the bootstrap change graph of the current iteration, and a bootstrap verification result of the current iteration is obtained;
s4, inputting the bootstrap verification result of the current iteration and the bootstrap coding result of the current iteration into a supervised network to obtain a pseudo-change diagram of the current iteration;
s5, carrying out fusion processing on the pseudo-change graph of the current iteration and the self-guiding coding result of the current iteration to obtain a fusion pseudo-change sample of the current iteration;
s6, performing difference analysis on the fusion pseudo-variation sample of the current iteration to obtain pseudo-tag data of the current iteration;
S7, incrementing the current iteration count by 1 to obtain the new current iteration count, taking the result of S5 as the initial fusion pseudo-change sample and the result of S6 as the initial pseudo-tag data;
S8, circularly executing the steps S1-S7 until the preset number of iterations is reached, finally obtaining the secondary fusion pseudo-change sample and the trained final bootstrap network.
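The steps S1-S8 above can be sketched as a plain control loop. Every callable below is a placeholder standing in for a component described in the text (the self-guided and supervised networks, Otsu segmentation, neighborhood screening, fusion, and difference analysis); none of them is the patent's actual implementation:

```python
def second_stage(pseudo_labels, fused, images, encode, segment, screen,
                 supervise, fuse, diff, n_iters):
    """Skeleton of the second iteration structure (steps S1-S8)."""
    for _ in range(n_iters):                        # S8: repeat preset times
        enc = encode(pseudo_labels, fused, images)  # S1: self-guided encoding
        cm_sg = segment(enc)                        # S2: Otsu segmentation
        z_v = screen(cm_sg)                         # S3: consistency check
        cm_s = supervise(z_v, enc)                  # S4: supervised pseudo map
        fused = fuse(cm_s, enc)                     # S5: fusion processing
        pseudo_labels = diff(fused)                 # S6/S7: new pseudo labels
    return fused, pseudo_labels                     # secondary fusion sample
```

With identity stand-ins, the loop simply threads each iteration's outputs into the next iteration's inputs, which is the essential data flow of S1-S8.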
In this embodiment, the number of iterations is set to N. In the first iteration, the data sets X_(i,j), Y_(i,j) and Z_label are used to train the bootstrap network (Self-Guided Net), and the maximum inter-class variance method (Otsu) divides the bootstrap network output into two classes, changed and unchanged; at the V-th iteration this yields the bootstrap change map (CM_SG)_V of the current iteration. Neighborhood criterion screening of (CM_SG)_V gives the bootstrap verification result (Z_label)_V of the current iteration; the supervised network (Supervised Net) is trained with the pseudo-labeled (Z_label)_V to obtain the pseudo-change map (CM_S)_V of the current iteration; and fusion voting of (CM_SG)_V and (CM_S)_V yields the fusion pseudo-change sample of the current iteration.
Optionally, the method for detecting the change of the self-guided unsupervised heterogeneous remote sensing image further comprises the following steps of:
Constraining the first encoding result and the second encoding result by using a loss function of the unsupervised network;
the loss function of the unsupervised network, min L_AE(θ), is expressed as:

min L_AE(θ) = L_r(θ) + α·L_h(θ);

L_r(θ) = mean_(i,j) [ (X_(i,j) − X̂_(i,j))² + (Y_(i,j) − Ŷ_(i,j))² ];

L_h(θ) = huber_loss(E_X(X), E_Y(Y));

where L_r(θ) represents the mean square reconstruction error and L_h(θ) the Huber loss; X and Y denote the first and second images of the pair of heterogeneous remote sensing images to be measured; E_X(X) and E_Y(Y) denote the results of inputting X and Y into the unsupervised network encoders; X̂ = N_X(X) and Ŷ = N_Y(Y) denote the reconstruction results of inputting X and Y into the unsupervised networks, with N_X(·) and N_Y(·) denoting the unsupervised network processing operations for X and Y; i and j index the horizontal and vertical pixels of the sub-image to be detected, X_(i,j) and Y_(i,j) denote the pixel in row i and column j of X and Y, and X̂_(i,j) and Ŷ_(i,j) the corresponding reconstructions; α represents the weighting coefficient of the unsupervised network mapping loss, and θ represents the unsupervised network weights.
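A numpy sketch of min L_AE(θ) = L_r(θ) + α·L_h(θ) follows, assuming an elementwise Huber loss with δ = 1 and mean-squared reconstruction error; α and all names are illustrative:

```python
import numpy as np

def huber_loss(a, b, delta=1.0):
    """Elementwise Huber loss between two arrays, averaged."""
    r = np.abs(a - b)
    return np.where(r <= delta, 0.5 * r ** 2,
                    delta * (r - 0.5 * delta)).mean()

def l_ae(x, y, x_rec, y_rec, enc_x, enc_y, alpha=0.1):
    """L_r: reconstruction MSE of both images; L_h: Huber distance
    pulling the two latent codes towards each other."""
    l_r = np.mean((x - x_rec) ** 2) + np.mean((y - y_rec) ** 2)
    l_h = huber_loss(enc_x, enc_y)
    return l_r + alpha * l_h
```

The Huber term is what couples the two otherwise independent autoencoders: it penalizes disagreement between the latent codes of the co-registered image pair while staying robust to outlier activations.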
In addition, in the embodiment of the present invention, the loss function of the bootstrap network (Self-Guided Net) is:

min L_SGAE(θ) = L_r(θ) + α·L_h(θ) + β·L_n(θ);

where min L_SGAE(θ) is the loss function of the bootstrap network; L_r(θ) and L_h(θ) are defined as for the unsupervised network, with the bootstrap network as the subject, and are not repeated here. L_n(θ) represents the mean square error between the bootstrap network output and the pseudo-label data Z_label, where Z denotes the heterogeneous remote sensing images to be detected, comprising I_1 and I_2, and β is a weighting coefficient.
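The extra supervised term can be sketched on top of already-computed L_r and L_h values, with L_n as a plain MSE against the pseudo-labels (β and all names are illustrative):

```python
import numpy as np

def l_n(output, pseudo_labels):
    """MSE between the self-guided network output and pseudo-label data."""
    return np.mean((output - pseudo_labels) ** 2)

def l_sgae(l_r, l_h, output, pseudo_labels, alpha=0.1, beta=0.1):
    """min L_SGAE = L_r + alpha*L_h + beta*L_n, with L_r and L_h computed
    as in the unsupervised loss (coefficients illustrative)."""
    return l_r + alpha * l_h + beta * l_n(output, pseudo_labels)
```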
In this embodiment, two heterogeneous images of different phases (a pair of heterogeneous remote sensing images to be measured) are input into the final bootstrap network, and classification into the changed and unchanged categories is performed according to the values of the last two dimensions of the final bootstrap network output, specifically:
if the values of the last two dimensions of the final bootstrap network output are 0 and 1 respectively, the pixel belongs to the changed class; if they are 1 and 0 respectively, the pixel belongs to the unchanged class.
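Since the last two channels read [0, 1] for "changed" and [1, 0] for "unchanged", the decision reduces to an argmax over that channel pair (names illustrative):

```python
import numpy as np

def classify(net_output):
    """net_output: (H, W, C). Argmax of the last two channels:
    1 = changed ([0, 1] pattern), 0 = unchanged ([1, 0] pattern)."""
    last_two = net_output[..., -2:]
    return np.argmax(last_two, axis=-1)
```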
To illustrate the data processing flow: Fig. 2 is an overall flowchart of the self-guided unsupervised heterogeneous remote sensing image change detection method. As shown in Fig. 2, the first iteration structure comprises the unsupervised network and the supervised network; its output is fed into the second iteration structure (comprising the self-guided network and the supervised network); after N iterations, the second iteration structure's output and the trained final self-guided network are obtained, and the final self-guided network then outputs the change detection result. Fig. 3 shows the data processing flow of the unsupervised network according to an embodiment of the present invention, whose final output is Change Map U (CM_U). Fig. 4 shows the data processing flow of the supervised network, whose final output is Change Map S (CM_S). Fig. 5 is an overall data processing flowchart of the self-guided network; as seen in Fig. 5, in addition to the input data X and Y, pseudo-tag data is also input, network processing and difference analysis are performed on these three inputs to obtain two loss functions, and weighting and back-propagation yield the final change result map.
In the embodiment of the invention, in order to verify the accuracy of a self-guided unsupervised heterogeneous remote sensing image change detection method, a simulation experiment is also performed.
Simulation conditions: Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (1.19 GHz boost) with 16 GB of RAM; Windows 10, Python 3.6.10, TensorFlow 1.4.0 environment.
Evaluation index:
the algorithm performance is evaluated by using qualitative and quantitative analysis, and main evaluation indexes used by the quantitative analysis are as follows:
① Error detection number FP: comparing the change detection results obtained by using different methods with a change reference graph, wherein the number of pixels belonging to an unchanged class in the change reference graph but belonging to a changed class in a simulation experiment result graph is called an error detection number;
② Leak detection number FN: comparing the change detection results obtained by using different methods with a change reference graph, wherein the number of pixels belonging to a change class in the change reference graph but belonging to an unchanged class in a simulation experiment result graph is called a missing detection number;
③ Accuracy PCC: the proportion of correctly detected pixels among all pixels;
④ Kappa coefficient, measuring the consistency between the simulation experiment result map and the change reference map:

Kappa = (PCC − PRE) / (1 − PRE)

where PCC denotes the probability of correctly classifying a pixel, and PRE denotes the expected agreement ratio.
⑤ F1 index:

F1 = 2·TP / (2·TP + FP + FN)

where TP represents the number of pixels belonging to the changed class in both the simulation experiment result map and the change reference map, FP represents the error detection number, and FN represents the missed detection number.
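The five indexes can be computed together from binary maps; the sketch below takes PRE as the usual chance-agreement term in the Kappa definition (names illustrative):

```python
import numpy as np

def evaluate(pred, ref):
    """FP, FN, PCC, Kappa, F1 for binary change maps (1 = changed)."""
    tp = int(np.sum((pred == 1) & (ref == 1)))
    tn = int(np.sum((pred == 0) & (ref == 0)))
    fp = int(np.sum((pred == 1) & (ref == 0)))  # error detections
    fn = int(np.sum((pred == 0) & (ref == 1)))  # missed detections
    n = tp + tn + fp + fn
    pcc = (tp + tn) / n
    pre = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (pcc - pre) / (1 - pre)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return {"FP": fp, "FN": fn, "PCC": pcc, "Kappa": kappa, "F1": f1}
```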
Simulation experiment contents:
Simulation experiments were performed on different heterogeneous remote sensing image data sets using existing methods and the method provided by the embodiment of the invention. The compared methods comprise supervised methods, namely the random forest method RFR, the residual network method ResNet, and the fully connected method MLP, and unsupervised methods, namely the CAN, SCCN, ACE-Net, and X-Net networks.
FIG. 6 is the first set of simulation sample images provided by an embodiment of the present invention, showing the flood-affected region of the state of California. Image (a) of Fig. 6 was taken on January 5, 2017 and consists of 11 channels acquired by Landsat 8. Image (b) of Fig. 6 was taken on February 18, 2017 and consists of 3 channels acquired by Sentinel-1A. Image (c) of Fig. 6 is a manually annotated ground-truth map, used as the reference map.
FIG. 7 is the second set of simulation sample images provided by an embodiment of the present invention, showing lake flooding that occurred from September 1995 to July 1996. Images (a) and (b) of Fig. 7 were taken by Landsat 5 at 412×300 pixels, and their channels do not overlap: image (a) has one near-infrared (NIR) band channel, while image (b) has the red, green, and blue (RGB) band channels of the same area. Image (c) of Fig. 7 is a manually annotated ground-truth map, used as the reference map.
Simulation 1. The first set of sample images shown in fig. 6 were subjected to a change detection simulation using the present invention and the prior art method, and the detection results are shown in fig. 8. Wherein, fig. 8 (a) shows the result of detection by RFR method, fig. 8 (b) shows the result of detection by Resnet method, fig. 8 (c) shows the result of detection by MLP method, fig. 8 (d) shows the result of detection by CAN method, fig. 8 (e) shows the result of detection by SCCN method, fig. 8 (f) shows the result of detection by ACE-Net method, fig. 8 (g) shows the result of detection by X-Net method, fig. 8 (h) shows the result of detection by the method of the present invention, and fig. 8 (i) shows the reference diagram.
As can be seen from Fig. 8, the supervised methods RFR, ResNet and MLP all suffer from severe overfitting, causing many unchanged regions to be falsely detected as changed regions (marked in green). In the unsupervised methods, large areas where changes go undetected are marked in red. SCCN falsely detects a large portion of the lower-right corner of the original image as a changed area, although this occurs only on the data set of Fig. 6. The large size of the California data set also makes change detection difficult. In addition, at the boundary between changed and unchanged regions there are fine, erroneous pixels (the green and red pixel points in the figure), which hampers the detection of ACE-Net and X-Net. These fine, erroneous pixels are ubiquitous in Fig. 8 because the down-sampling operation removes some spatial and spectral information, aggravating such false detections.
The test indexes corresponding to the change detection results in fig. 8 are shown in table 1.
Table 1 change detection results quantitative evaluation of experimental results corresponding to fig. 8
As can be seen from Table 1, the method of the invention has the lowest FP value, and its PCC, Kappa and F1 perform best, demonstrating the effectiveness of the method of the invention.
Simulation 2. The second set of images shown in fig. 7 was subjected to a change detection simulation using the present invention and the prior art method, the detection results being shown in fig. 9. Wherein, fig. 9 (a) shows the result of detection by RFR method, fig. 9 (b) shows the result of detection by Resnet method, fig. 9 (c) shows the result of detection by MLP method, fig. 9 (d) shows the result of detection by CAN method, fig. 9 (e) shows the result of detection by SCCN method, fig. 9 (f) shows the result of detection by ACE-Net method, fig. 9 (g) shows the result of detection by X-Net method, fig. 9 (h) shows the result of detection by the method of the present invention, and fig. 9 (i) shows the reference diagram.
As can be seen from Fig. 9, the change detection of Fig. 7 presents two main difficulties. First, the lower half of the changed area is very slim, so that almost all methods fail to detect parts of the changed area (marked in red). In addition, ResNet and MLP in the supervised methods show strong overfitting in detecting the changed area, so their change maps contain almost no undetected (red) changed pixels, which is reflected in their low FN values in the table. Another problem with ResNet and MLP is the effect of noise: as can be seen in Fig. 9, while they avoid missed detections, they are severely disturbed in the land relief area covered by green plants (seen on the land topography of the left half of the image), owing to the different imaging mechanisms of the heterogeneous remote sensing images to be detected.
The test indexes corresponding to the change detection results in fig. 9 are shown in table 2.
Table 2 change detection results quantitative evaluation of experimental results corresponding to fig. 9
As can be seen from Table 2, although the FN value of the method of the invention is not the lowest, the method obtains the best PCC, Kappa and other coefficients, demonstrating its effectiveness and superiority.
As can be seen from the two simulation experiments, the method has better classification performance and higher change detection precision for the heterogeneous image change detection problem, and is superior to the existing method.
The method provided by the embodiment of the invention can be applied to electronic equipment. Specifically, the electronic device may be: desktop computers, portable computers, intelligent mobile terminals, servers, etc. Any electronic device capable of implementing the present invention is not limited herein, and falls within the scope of the present invention.
Based on the same inventive concept, the embodiment of the invention also provides a self-guided unsupervised heterogeneous remote sensing image change detection system. Fig. 10 is a schematic diagram of such a system according to an embodiment of the present invention, comprising: a processor 710, a storage medium 720 and a bus 730. The storage medium 720 stores machine-readable instructions executable by the processor 710; when the system is in operation, the processor 710 communicates with the storage medium 720 via the bus 730 and executes the machine-readable instructions to perform the steps of the above method embodiments. The specific implementation and technical effects are similar and are not repeated here.
The storage medium may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the storage medium may be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the invention also provides a storage medium, on which a computer program is stored, which when being executed by a processor performs the steps of the method of the embodiment described above.
It should be noted that the terms "first," "second," and the like are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the disclosed embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with aspects of the present disclosure.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Further, one skilled in the art can engage and combine the different embodiments or examples described in this specification.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings and the disclosure. In the description of the present invention, the word "comprising" does not exclude other elements or steps, and "a" or "an" does not exclude a plurality; "a plurality" means two or more, unless specifically defined otherwise. Moreover, some measures are described in mutually different embodiments, but this does not mean that these measures cannot be combined to produce a good effect.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (10)

1. The self-guided unsupervised heterogeneous remote sensing image change detection method is characterized by comprising the following steps of:
acquiring a pair of heterogeneous remote sensing images to be detected;
Inputting the pair of heterogeneous remote sensing images to be detected into a first iterative structure to obtain an initial fusion pseudo-change sample;
performing differential analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data;
inputting the initial pseudo tag data, the initial fusion pseudo change sample and the pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guiding network through preset iteration times;
Performing differential analysis on the secondary fusion pseudo-variation sample to obtain secondary pseudo-tag data;
Inputting the secondary pseudo tag data, the secondary fusion pseudo change sample and the pair of heterogeneous remote sensing images to be detected into the final bootstrap network to obtain a change detection result of the heterogeneous remote sensing images;
wherein the first iterative structure comprises: an unsupervised network and a supervised network; the second iterative structure includes: a bootstrap network and the supervised network.
2. The self-guided, unsupervised heterogeneous remote sensing image change detection method according to claim 1, characterized in that,
The unsupervised network includes a first automatic encoder network and a second automatic encoder network, each of the first automatic encoder network and the second automatic encoder network including: an encoder and a decoder.
3. The method for detecting changes in self-guided non-supervised heterogeneous remote sensing images according to claim 2, wherein the inputting the pair of heterogeneous remote sensing images to be detected into the first iterative structure, to obtain the initial fusion pseudo-change sample, comprises:
inputting the pair of heterogeneous remote sensing images to be detected into the unsupervised network to obtain a coding result of the unsupervised network; the coding result of the unsupervised network is a first coding result output by the first automatic encoder network and a second coding result output by the second automatic encoder network;
Dividing the coding result of the unsupervised network by using a maximum inter-class variance method to obtain an initial unsupervised change map;
inputting the initial unsupervised change graph and the coding result of the unsupervised network into the supervised network to obtain an initial pseudo change graph;
And carrying out fusion processing on the initial unsupervised change map and the initial pseudo change map to obtain the initial fusion pseudo change sample.
4. The method for detecting changes in an unsupervised heterogeneous remote sensing image according to claim 3, wherein the method for detecting changes in an unsupervised heterogeneous remote sensing image after obtaining an initial unsupervised change map by dividing the encoding result of the unsupervised network by a maximum inter-class variance method further comprises:
Consistency judgment is carried out on the initial unsupervised change map through neighborhood criterion processing, and an initial unsupervised verification result is obtained;
inputting the initial unsupervised change map and the encoding result of the unsupervised network into the supervised network to obtain an initial pseudo change map, including:
inputting the initial unsupervised verification result and the coding result of the unsupervised network into the supervised network to obtain an initial pseudo-variation graph.
5. The method for detecting changes in self-guided non-supervised heterogeneous remote sensing images according to claim 3, wherein the fusing the initial non-supervised change map and the initial pseudo-change map to obtain the initial fused pseudo-change sample comprises:
And carrying out intersection processing on the initial unsupervised change map and the initial pseudo change map to obtain the initial fusion pseudo change sample.
6. The self-guided, unsupervised heterogeneous remote sensing image change detection method according to claim 1, wherein the initial fusion of pseudo-change samples comprises: the first initial sub-fusion pseudo-variation sample to be tested and the second initial sub-fusion pseudo-variation sample to be tested;
Performing differential analysis on the initial fusion pseudo-variation sample to obtain initial pseudo-tag data, wherein the differential analysis comprises the following steps:
Calculating the Euclidean distance between the first initial sub-fusion pseudo-variation sample to be tested and the second initial sub-fusion pseudo-variation sample to be tested;
And performing difference analysis on the first to-be-detected initial sub-fusion pseudo-variation sample and the second to-be-detected initial sub-fusion pseudo-variation sample through the mean square error of the Euclidean distance to obtain initial pseudo-tag data.
7. The method for detecting changes in self-guided unsupervised heterogeneous remote sensing images according to claim 2, wherein inputting the initial pseudo tag data, the initial fusion pseudo change sample and the pair of heterogeneous remote sensing images to be detected into a second iteration structure, and obtaining a secondary fusion pseudo change sample and a final self-guided network through a preset number of iterations, comprises the steps of:
S1, inputting the initial pseudo tag data, the initial fusion pseudo change sample and the pair of heterogeneous remote sensing images to be detected into the bootstrap network to obtain a bootstrap coding result of the current iteration;
s2, dividing the bootstrap coding result of the current iteration time by a maximum inter-class variance method to obtain a bootstrap variation graph of the current iteration time;
s3, consistency judgment is carried out on the bootstrap variation graph of the current iteration, and a bootstrap verification result of the current iteration is obtained;
S4, inputting the bootstrap verification result of the current iteration and the bootstrap coding result of the current iteration into the supervised network to obtain a pseudo-variation graph of the current iteration;
S5, carrying out fusion processing on the pseudo-change graph of the current iteration and the self-guiding coding result of the current iteration to obtain a fusion pseudo-change sample of the current iteration;
s6, performing difference analysis on the fusion pseudo-variation sample of the current iteration to obtain pseudo-tag data of the current iteration;
S7, adding 1 to the number of the current iteration times to serve as the current iteration times, taking the result of the S5 as the initial fusion pseudo-change sample, and taking the result of the S6 as the initial pseudo-tag data;
S8, circularly executing the steps S1-S7 until the preset number of iterations is reached, finally obtaining the secondary fusion pseudo-change sample and the trained final bootstrap network.
8. The method for detecting changes in an unsupervised heterogeneous remote sensing image according to claim 3, wherein the method for detecting changes in an unsupervised heterogeneous remote sensing image after obtaining an initial unsupervised change map by dividing the encoding result of the unsupervised network by a maximum inter-class variance method further comprises:
constraining the first encoding result and the second encoding result by using a loss function of an unsupervised network;
The loss function of the unsupervised network uses the formula min L_AE(θ), which is expressed as:

min L_AE(θ) = L_r(θ) + α·L_h(θ);

L_r(θ) = mean_(i,j) [ (X_(i,j) − X̂_(i,j))² + (Y_(i,j) − Ŷ_(i,j))² ];

L_h(θ) = huber_loss(E_X(X), E_Y(Y));

wherein L_r(θ) represents the mean square error, L_h(θ) represents the Huber loss, X represents the first image of the pair of heterogeneous remote sensing images to be measured, Y represents the second image of the pair of heterogeneous remote sensing images to be measured, E_X(X) and E_Y(Y) represent the results of inputting X and Y into the unsupervised network encoders, X̂ = N_X(X) and Ŷ = N_Y(Y) represent the reconstruction results of inputting X and Y into the unsupervised networks, N_X(·) and N_Y(·) represent the unsupervised network processing operations for X and Y, i and j represent the horizontal and vertical pixels of the sub-image to be detected, X_(i,j) and Y_(i,j) represent the pixel in row i and column j of X and Y, X̂_(i,j) and Ŷ_(i,j) represent the corresponding reconstruction results, α represents the weighting coefficient of the unsupervised network mapping loss, and θ represents the unsupervised network weighting coefficients.
9. A self-guided unsupervised heterogeneous remote sensing image change detection system, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor; when the self-guided unsupervised heterogeneous remote sensing image change detection system is in operation, the processor communicates with the storage medium via the bus, and the processor executes the machine-readable instructions to perform the steps of the method of any one of claims 1-8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1-8.
CN202410522591.2A 2024-04-28 2024-04-28 Self-guiding unsupervised heterogeneous remote sensing image change detection method and system Pending CN118379631A (en)

Priority Applications (1)

Application Number: CN202410522591.2A; Priority/Filing Date: 2024-04-28; Title: Self-guiding unsupervised heterogeneous remote sensing image change detection method and system

Publications (1)

Publication Number: CN118379631A; Publication Date: 2024-07-23

Family ID: 91901282


Similar Documents

Publication Publication Date Title
Sun et al. Nonlocal patch similarity based heterogeneous remote sensing change detection
CN109446992B (en) Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment
CN112733725B (en) Hyperspectral image change detection method based on multistage cyclic convolution self-coding network
Shi et al. Learning multiscale temporal–spatial–spectral features via a multipath convolutional LSTM neural network for change detection with hyperspectral images
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
Grana et al. Two lattice computing approaches for the unsupervised segmentation of hyperspectral images
Xia et al. A deep Siamese postclassification fusion network for semantic change detection
CN110969088A (en) Remote sensing image change detection method based on significance detection and depth twin neural network
CN114187450B (en) Remote sensing image semantic segmentation method based on deep learning
CN114926511B (en) High-resolution remote sensing image change detection method based on self-supervised learning
CN113901900A (en) Unsupervised change detection method and system for homologous or heterologous remote sensing image
CN108229551B (en) Hyperspectral remote sensing image classification method based on compact dictionary sparse representation
CN112036249B (en) Method, system, medium and terminal for end-to-end pedestrian detection and attribute identification
CN115205590A (en) Hyperspectral image classification method based on complementary integration Transformer network
Dhaya Hybrid machine learning approach to detect the changes in SAR images for salvation of spectral constriction problem
CN110245683A (en) The residual error relational network construction method that sample object identifies a kind of less and application
CN114913434B (en) High-resolution remote sensing image change detection method based on global relation reasoning
He et al. Crack segmentation on steel structures using boundary guidance model
Cheng et al. DMF2Net: Dynamic multi-level feature fusion network for heterogeneous remote sensing image change detection
CN112784777B (en) Unsupervised hyperspectral image change detection method based on countermeasure learning
CN111582057B (en) Face verification method based on local receptive field
CN113191996A (en) Remote sensing image change detection method and device and electronic equipment thereof
Cui et al. Double-branch local context feature extraction network for hyperspectral image classification
CN104851090B (en) Image change detection method and device
CN117994240A (en) Multi-scale two-level optical remote sensing image stripe noise intelligent detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination