CN111340761B - Remote sensing image change detection method based on fractal attribute and decision fusion - Google Patents
- Publication number
- CN111340761B (application CN202010098359.2A)
- Authority
- CN
- China
- Prior art keywords
- attribute
- pixel
- decision fusion
- images
- remote sensing
- Prior art date
- Legal status: Active (as listed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Abstract
The invention discloses a remote sensing image change detection method based on fractal attributes and decision fusion, comprising the following steps: collecting multi-temporal high-resolution remote sensing images; establishing an objective function based on minimizing the average inter-scale correlation, adaptively determining a scale-parameter set for each attribute through iterative calculation, and extracting morphological attribute profiles with adaptive scale parameters; and constructing a multi-feature decision-fusion framework, calculating a change intensity index and an evidence confidence index to describe, respectively, the change information and its confidence, and fusing the change information from the adaptive-scale morphological attribute profiles and the original spectra with this framework to obtain the final change-detection image. By establishing the objective function on the minimum average inter-scale correlation, the method obtains a set of scale parameters adaptively; on this basis it constructs the multi-feature decision-fusion framework and improves the reliability of the decision by reducing the uncertainty of change information from different sources.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a remote sensing image change detection method.
Background
With the continuous development of remote sensing systems, Change Detection (CD) has attracted increasing attention as one of the most important applications in the remote sensing field. Accurately understanding changes in land cover is an important issue for human activities such as dynamic land-use analysis, vegetation health assessment and environmental monitoring. The widespread use of new-generation high-resolution sensors (e.g., IKONOS, QuickBird and GF-2) further expands the application range of CD technology. Compared with medium- and low-resolution remote sensing images, High-Resolution Remote Sensing (HRRS) images contain more land-cover spatial and thematic information, so that complex structures of different types can be recognized in a scene. However, these same characteristics make it difficult for conventional pixel-level change detection methods based on spectral differences to achieve ideal results, since objects with different shapes are composed of many pixels and the spectral information is very limited.
To solve this problem, a great deal of research has introduced spatial structure information as a supplement, which has proven very effective in improving CD in HRRS images. In the existing literature, supervised machine learning methods are the most widely used for CD. However, these methods require a large number of training samples to determine the model parameters and avoid overfitting. Meanwhile, scholars have proposed various unsupervised methods for extracting spatial structure information for CD in HRRS images, adopting different strategies such as object-based methods, linear-transformation-based methods, Markov Random Field (MRF)-based methods, multi-scale analysis methods and change-intensity-index methods. In recent years, Morphological Attribute Profiles (MAPs) have been introduced into CD applications in order to handle detailed information that, owing to the increased resolution of remote sensing images, is meaningless or even detrimental to CD.
As one of the most effective tools for HRRS image spatial modelling, the operators in MAPs achieve a multi-scale representation of land cover through tree structures. Compared with traditional feature-extraction strategies based on a given filter window, MAPs extend the analysis unit to all connected pixels with similar attributes, which helps extract accurately the spatial structure information of the object to which a pixel belongs. MAPs have also proven effective in reducing image complexity and extracting spatial structure information in CD applications. Even so, most MAPs-based CD approaches still have two problems: (1) to highlight representative spatial structure information while reducing redundant information in a limited number of Attribute Profiles (APs), a reasonable set of scale parameters needs to be determined adaptively, yet MAPs theory gives no clear criterion and most scale parameters are currently set manually by experience; (2) given the complexity of land-cover changes in a scene, the uncertainty contained in change information from different sources is rarely accounted for in existing research when combining the change information of multiple APs and other features.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a remote sensing image change detection method based on fractal attribute and decision fusion.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
the remote sensing image change detection method based on fractal attribute and decision fusion comprises the following steps:
(1) Collecting multi-temporal high-resolution remote sensing images;
(2) Establishing an objective function based on minimizing the average inter-scale correlation, adaptively determining a scale-parameter set for each attribute through iterative calculation, and extracting morphological attribute profiles with adaptive scale parameters;
(3) Constructing a multi-feature decision-fusion framework, calculating a change intensity index and an evidence confidence index to describe, respectively, the change information and its confidence, and fusing the change information from the adaptive-scale morphological attribute profiles and the original spectra with this framework to obtain the final change-detection image.
Further, in step (2), four morphological attributes are selected: area, diagonal, standard deviation and normalized moment of inertia (NMI).
Further, in step (2), the adaptive scale parameter extraction method is as follows:
(201) Set the total number of scales of each attribute to W and the value interval of the scale parameters to [T_min, T_max], where T_min and T_max are respectively the minimum and maximum values the scale parameter can take;
(202) Compute the sub-interval Sub_w in which the w-th scale parameter must lie, w ∈ {1, 2, ..., W};
(203) Define the objective function GRSIM_sum as the sum of the gradient similarities GRSIM_{w,w+1} over all pairs of adjacent attribute profiles; iteratively evaluate all combinations of scale parameters and take the combination minimizing GRSIM_sum as the extracted optimal scale-parameter set. Here GRSIM_{w,w+1} denotes the gradient similarity of two adjacent attribute profiles, computed from σ_Z1 and σ_Z2 (the standard deviations of the gradient-magnitude matrices of the two images), σ_M1 and σ_M2 (the standard deviations of their gradient-direction matrices), the corresponding variances, σ_{Z1,Z2} (the covariance of the gradient-magnitude matrices) and σ_{M1,M2} (the covariance of the gradient-direction matrices).
Further, in step (3), the change intensity index is calculated as follows:
(301) By differencing, extract the difference images between the attribute profiles of different dates under the same scale parameter, obtaining for each attribute a difference-image set of its adaptive-scale morphological attribute profiles;
(302) By differencing, extract the difference images between the same-band images of different dates, obtaining a difference-image set of the original spectra;
(303) In a difference image, the grey value of pixel i reflects the likelihood that i is a changed pixel; normalize it within the interval [0, 255] and use it as a change intensity index of pixel i. From the difference-image sets of steps (301) and (302), several groups of change intensity indices corresponding to pixel i are thus obtained, based on the different attributes and the original spectra.
Further, in step (3), the evidence confidence index is calculated as follows:
where CIE denotes the evidence confidence index.
Further, in step (3), the method for constructing the multi-feature decision fusion framework is as follows:
defining the decision fusion framework as Θ: { CT, NT }, wherein Θ is expressed as a hypothesis space, CT and NT respectively represent a changed pixel and an unchanged pixel, and for each pixel i, establishing a basic probability distribution formula by:
m n ({CT})=CII n ×CIE n
m n ({NT})=(1-CII n )×CIE n
m n ({CT,NT})=1-CIE n
in the above formula, CII n And CIE n Representing the nth variation intensity index and evidence confidence index, m, corresponding to pixel i n ({CT})、m n ({ NT }) and m n ({ CT, NT }) represents the basic probability distribution formulas corresponding to the non-empty subsets { CT }, { NT }, and { CT, NT } nth set of evidence;
the basic probability assignment equations m ({ CT }), m ({ NT }) and m ({ CT, NT }) for the non-null subsets { CT }, { NT }, and { CT, NT }) are calculated using the following equations:
in the above formula, A represents a non-empty subset, N represents the total number of evidences, and m n (F n ) Representing a basic probability distribution formula derived from the nth set of evidence and having F n ∈2 Θ ,
The following decision rules are established:
if the pixel i meets the judgment rule, judging the pixel i as a changed pixel, otherwise, judging the pixel i as an unchanged pixel; and traversing all the pixels to obtain a final transformation detection image.
The beneficial effects brought by adopting the above technical scheme are as follows:
By establishing an objective function that minimizes the average inter-scale correlation, the invention obtains a set of scale parameters adaptively, so that representative APs are extracted while redundant information is reduced. On this basis, a multi-feature decision-fusion framework based on D-S theory is constructed, and the reliability of the decision is improved by reducing the uncertainty of change information from different sources. The effectiveness of the method has been verified by experiments on multi-temporal HRRS image data sets.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a graph of the effect of different scale numbers W on the overall accuracy OA in the experiment.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings.
Change detection is crucial for accurately understanding surface changes from multi-temporal earth-observation data. Owing to their great advantages in spatial-information modelling, morphological attribute profiles are increasingly favoured for improving change-detection accuracy. However, most change-detection methods based on morphological attribute profiles set the scale parameters of the attribute profiles manually and ignore the uncertainty of change information from different sources. Aiming at these problems, the invention provides a novel high-resolution remote sensing image change-detection method based on morphological attribute profiles and decision fusion. By establishing an objective function based on minimizing the average inter-scale correlation, morphological attribute profiles with Adaptive Scale Parameters (ASP-MAPs) are proposed to mine spatial structure information. On this basis, a multi-feature decision-fusion framework based on Dempster-Shafer (D-S) theory is constructed to obtain the change-detection results. The processing flow of the invention is shown in FIG. 1.
(1) MAPs theory
The MAPs theory is developed from set theory: taking spectral similarity and spatial connectivity as the basic analysis unit, it extracts the connected region corresponding to each pixel and designs multi-scale operators with different attributes. The calculation of MAPs is briefly as follows. Let B be a grey-level image, i a pixel in B and k a grey level; thresholding yields the binary image Th_k(B) = {i : B(i) ≥ k}.
Traversing all pixels in B gives the image sequence Th_k(B), and the opening result of pixel i is Γ_i(B) = max(k). On this basis, using the symmetry of the attribute transformation, the closing result of i is Φ_i(B) = min(k). Let T_w ∈ {T_1, T_2, ..., T_W} be the w-th scale parameter, with W the total number of scales; the opening profile Ψ(Γ(B)) and the closing profile Ψ(Φ(B)) are then formed by stacking the opening and closing results over all scale parameters.
Finally, MAPs are obtained by combining Ψ(Γ(B)) and Ψ(Φ(B)).
(2) Adopted attributes
Based on the results of MAPs-related studies, four attributes are used in the invention: area, diagonal, standard deviation and Normalized Moment of Inertia (NMI); the effectiveness of these attributes has been demonstrated in HRRS image classification and CD applications.
For the connected component corresponding to pixel i: Area represents its size; Diagonal represents the diagonal length of its minimum enclosing rectangle; Standard deviation indicates its degree of grey-level variation; NMI reflects its shape and the position of its centre of gravity.
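The four attributes above can be computed for a single connected component as in the following sketch. The NMI form used here (moment of inertia about the centroid, normalised by the squared area) is one common definition and is an assumption; the invention does not spell out its exact normalisation.

```python
import numpy as np

def component_attributes(coords, grey_values):
    """Area, diagonal, standard deviation and NMI of one connected
    component.  `coords` is an (n, 2) array of (row, col) pixel
    positions, `grey_values` the corresponding grey levels."""
    area = len(coords)
    rows, cols = coords[:, 0], coords[:, 1]
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    diagonal = float(np.hypot(h, w))       # diagonal of the minimum enclosing rectangle
    std_dev = float(np.std(grey_values))   # degree of grey-level variation
    cr, cc = rows.mean(), cols.mean()      # centroid (centre of gravity)
    # assumed NMI: moment of inertia about the centroid over squared area
    nmi = float(((rows - cr) ** 2 + (cols - cc) ** 2).sum() / area ** 2)
    return area, diagonal, std_dev, nmi
```
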
(3) Construction of ASP-MAPs
As shown in FIG. 1, the first step of ASP-MAPs construction is determining the scale parameters. With a limited number of APs under different scale parameters, the constructed APs should highlight the representative spatial structure of the typical ground objects in the scene, thereby improving the ability to identify their changes; reducing redundant information between APs likewise requires a reasonable set of scale parameters. The scale parameters are therefore selected according to the principle that the smaller the average inter-scale correlation of the APs, the stronger their representativeness. The specific process of ASP-MAPs construction is as follows:
gradient Similarity (grism): to measure the inter-scale correlation of APs, an appropriate similarity measure needs to be selected. According to MAPs theory, pixels that fit within the attribute range defined by the corresponding scale parameter have the largest gray response, i.e., appear as newly generated edges (or objects). Therefore, the employed similarity measure should be sensitive to edge variations. Based on the above analysis, the present invention provides a gradient vector-based similarity measurement GRSIM: using a third order Sobel filter [31] Gradient information is extracted and the GRSIM index between images B1 and B2 is defined as follows:
wherein Z1 and Z2 represent gradient magnitude matrices for B1 and B2, respectively; m1 and M2 represent gradient direction matrices of B1 and B2, respectively. Sigma Z1 ,σ Z2 ,σ M1 ,σ M2 ,And σ M1,M2 Standard deviation, variance and covariance are indicated, respectively. GRSIM B1,B2 The larger the value of (B), the higher the correlation between B1 and B2.
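Since the GRSIM equation itself is only available as an image in the source, the sketch below shows one plausible SSIM-style realisation built from the statistics named above (variances and covariances of the gradient-magnitude and gradient-direction fields). Both the closed form and the use of np.gradient in place of the third-order Sobel filter are assumptions for illustration.

```python
import numpy as np

def gradient_fields(img):
    """Gradient-magnitude (Z) and gradient-direction (M) matrices used
    by GRSIM; np.gradient stands in for the third-order Sobel filter."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def grsim(img1, img2, c=1e-6):
    """Assumed SSIM-style form:
    GRSIM = (2*cov(Z1,Z2)+c)/(var(Z1)+var(Z2)+c)
          * (2*cov(M1,M2)+c)/(var(M1)+var(M2)+c),
    so identical images score 1 and uncorrelated gradients score near 0."""
    z1, m1 = gradient_fields(img1)
    z2, m2 = gradient_fields(img2)

    def term(a, b):
        cov = np.mean((a - a.mean()) * (b - b.mean()))
        return (2.0 * cov + c) / (a.var() + b.var() + c)

    return term(z1, z2) * term(m1, m2)
```
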
On the basis, the steps of the adaptive scale parameter extraction strategy are as follows:
step 1: set interval [ T min ,T max ]And the scale degree W of each attribute, and adaptively searching an optimal scale parameter set. The area interval is set to [500, 28000]Diagonal interval of [10, 100 ]]The standard deviation interval is [10, 70 ]]The NMI interval is [0.2,0.5 ]]And W does not exceed 10. Further, according to the results of the plurality of sets of experiments, it is suggested in the present invention to set W to 6.
Step 2: to avoid falling into a locally optimal situation, the Wth (W ∈ {1, 2.., W }) scale parameter should be located in the interval Sub w And (4) the following steps. Set Sub according to equation (4) w :
And step 3: the objective function is defined as follows:
where GRSIM_{w,w+1} denotes the GRSIM of two adjacent APs. All combinations of scale parameters are evaluated iteratively according to equations (3)-(5), and the combination minimizing GRSIM_sum is taken as the extracted optimal scale-parameter set. On this basis, the ASP-MAPs of the multi-temporal images are obtained according to equation (2).
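Steps 1-3 can be sketched as an exhaustive search over per-scale candidate values. The equal-width split of [T_min, T_max] into the Sub_w sub-intervals and the caller-supplied `grsim` callback (a similarity between the profiles built with two scale values) are assumptions standing in for equations (3)-(5), whose exact forms are images in the source.

```python
import itertools
import numpy as np

def subintervals(t_min, t_max, n_scales):
    """Partition [t_min, t_max] into n_scales equal sub-intervals Sub_w.
    The equal-width split is an assumption; the text only requires the
    w-th scale parameter to lie inside Sub_w."""
    edges = np.linspace(t_min, t_max, n_scales + 1)
    return [(edges[w], edges[w + 1]) for w in range(n_scales)]

def best_scale_set(candidates_per_scale, grsim):
    """Test every combination of candidate scale parameters and keep
    the one minimising GRSIM_sum = sum_w grsim(T_w, T_{w+1}), i.e. the
    summed similarity of adjacent attribute profiles."""
    best, best_sum = None, np.inf
    for combo in itertools.product(*candidates_per_scale):
        s = sum(grsim(a, b) for a, b in zip(combo, combo[1:]))
        if s < best_sum:
            best, best_sum = combo, s
    return best
```

In practice `candidates_per_scale[w]` would hold a few sampled values inside Sub_w, keeping the exhaustive search tractable.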
(4) Change-information description based on the change intensity index CII
To describe uniformly the change information extracted from the ASP-MAPs and from the original spectra, the change intensity index CII is calculated as follows:
Step 1: by differencing, extract the difference images between the APs of different dates under the same scale parameter, obtaining for each attribute a difference-image set of its ASP-MAPs.
Step 2: by differencing, extract the difference images between the same-band images of different dates, obtaining a difference-image set of the original spectra.
Step 3: in a difference image, the grey value of pixel i reflects the likelihood that i is a changed pixel; it is therefore normalized within the interval [0, 255] and taken as one of the CIIs of i. Computing CIIs from the ASP-MAPs and from all bands of the original images yields five sets of CIIs for pixel i, based on area, diagonal, standard deviation, NMI and the original spectra.
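Per feature layer (one AP at one scale, or one spectral band), Steps 1-3 reduce to a differencing-plus-normalisation operation such as the following. The min-max normalisation is an assumption: the text only states that the grey value is normalised within [0, 255] (for the later BPAF fusion the index is presumably rescaled to [0, 1]).

```python
import numpy as np

def change_intensity_index(band_t1, band_t2):
    """CII of one feature layer: absolute difference of the two dates,
    min-max normalised into [0, 255]."""
    diff = np.abs(band_t1.astype(float) - band_t2.astype(float))
    lo, hi = diff.min(), diff.max()
    if hi == lo:                  # flat difference image carries no change evidence
        return np.zeros_like(diff)
    return 255.0 * (diff - lo) / (hi - lo)
```
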
(5) Multi-feature decision fusion
D-S theory is a decision theory for fusing multi-source evidence, whose notable advantage is a strong ability to evaluate quantitatively the uncertainty of that evidence. The invention therefore constructs a decision-fusion framework to fuse the change information from the ASP-MAPs and the original spectra.
Basic Probability Assignment Function (BPAF): according to D-S theory, with Θ the hypothesis space, each A ∈ 2^Θ has a BPAF m(A), where m: 2^Θ → [0, 1] satisfies the following constraints:
Here m(A) represents the confidence level of A, and the combined m(A) is calculated as follows:
where N denotes the total number of pieces of evidence and m_n(F_n) the basic probability assignment derived from the n-th piece of evidence, with F_n ∈ 2^Θ.
Calculation of CIE: to measure the confidence of the CIIs from the different sources (area, diagonal, standard deviation, NMI and the original spectra), the evidence confidence index CIE is proposed; for each piece of evidence, CIE can be calculated with equation (8). A larger CIE means the corresponding CII should be given greater trust in the decision-fusion process.
Constructing the decision-fusion framework: the framework is defined as Θ = {CT, NT}, where CT and NT denote the changed and unchanged pixels respectively, so the non-empty subsets are {CT}, {NT} and {CT, NT}. For each pixel i, the BPAFs are established by:
m_n({CT}) = CII_n × CIE_n (9)
m_n({NT}) = (1 − CII_n) × CIE_n (10)
m_n({CT, NT}) = 1 − CIE_n (11)
where CII_n and CIE_n denote the n-th CII and CIE corresponding to pixel i. On this basis, m({CT}), m({NT}) and m({CT, NT}) of pixel i are calculated by equation (7), and the decision rule is as follows:
If i satisfies the above rule, i is judged a changed pixel; otherwise i is judged unchanged. Finally, traversing all pixels according to this decision process yields the CD image.
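The BPAFs of equations (9)-(11) and their combination can be sketched per pixel as follows. Standard Dempster's rule is used as the combination operator (the natural choice under the D-S theory named above), the CII values are assumed already rescaled to [0, 1], and the exact decision rule is an image in the source, so the closing comment only indicates the obvious reading.

```python
def fuse_pixel(ciis, cies):
    """D-S fusion on the frame {CT, NT} with the BPAFs of (9)-(11):
    m_n({CT}) = CII_n*CIE_n, m_n({NT}) = (1-CII_n)*CIE_n,
    m_n({CT,NT}) = 1-CIE_n, combined pairwise by Dempster's rule."""
    m_ct, m_nt, m_un = ciis[0] * cies[0], (1 - ciis[0]) * cies[0], 1 - cies[0]
    for cii, cie in zip(ciis[1:], cies[1:]):
        n_ct, n_nt, n_un = cii * cie, (1 - cii) * cie, 1 - cie
        k = m_ct * n_nt + m_nt * n_ct        # conflict mass between the two bodies
        norm = max(1.0 - k, 1e-12)           # Dempster normalisation
        m_ct = (m_ct * n_ct + m_ct * n_un + m_un * n_ct) / norm
        m_nt = (m_nt * n_nt + m_nt * n_un + m_un * n_nt) / norm
        m_un = (m_un * n_un) / norm
    # pixel judged "changed" when m({CT}) dominates (assumed reading of the rule)
    return m_ct, m_nt, m_un
```

Note how two agreeing pieces of evidence reinforce each other: two CIIs of 0.8 with full confidence fuse to a changed-mass of about 0.94, higher than either alone.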
(6) Experiments and analysis
Data set 1 is a pair of aerial remote sensing images of the Nanjing area, China, with red, green and blue bands, acquired in March 2009 and February 2012; the spatial resolution is 0.5 m and the image size is 512 × 512 pixels. Data set 2 is a pair of QuickBird images of the Chongqing area, China, with red, green and blue bands, acquired in 2007 and August 2011; the spatial resolution is 2.4 m and the image size is 512 × 512 pixels. Data set 3 is a pair of SPOT-5 panchromatic-multispectral fusion images of the Shanghai area, China, with red, green and blue bands, acquired in June 2004 and July 2008; the spatial resolution is 2.5 m and the image size is 512 × 512 pixels. These three data sets were chosen because they represent different urban scenes consisting mainly of buildings, roads, vegetation and wasteland, which helps verify the ability of the proposed method to identify these typical ground-object changes and to evaluate its applicability and stability in CD applications.
To evaluate comprehensively the performance of the proposed method, five advanced CD methods were used for comparison experiments: improved Change Vector Analysis (CVA) methods, namely CVA with Expectation Maximization (CVA-EM) (method 1), a spectral-angle-mapping-based method (method 2) and a method based on spectral and texture features (method 3); a MAPs-based method (method 4); and a Deep Learning (DL)-based method (method 5). The scale-parameter sets extracted adaptively by the proposed method are shown in Tables 1-3.
Table 1 data set 1 scale parameter set extracted
Table 2 data set 2 scale parameter set extracted
Table 3 data set 3 scale parameter set extracted
The quantitative evaluation results of the different methods are shown in Tables 4-6. On all three data sets, the Overall Accuracy (OA) of the proposed method reaches 83.9% or more with a fluctuation amplitude below 1.5%, clearly outperforming the comparison methods. The proposed method therefore offers high accuracy and good stability under the challenges posed by different data sources.
Among the three CVA-based CD methods, methods 1 and 2 rely only on spectral differences as the CD basis, and their False Positive (FP) and False Negative (FN) rates exceed 30% and 20% respectively. Owing to the introduction of texture features as a supplement, method 3 clearly improves all three evaluation indices; to produce an accurate CD map, the spatial neighbourhood information of the pixels must therefore be considered. Nevertheless, method 3 extracts texture features from a series of manually specified filter windows, which are difficult to keep consistent with the inherent shape of the object to which the current pixel belongs. In contrast, MAPs can extract more accurate spatial structure information from non-stationary local regions composed of all connected pixels with similar attributes.
Although method 4 uses APs to extract change information, its OA is significantly lower than that of the proposed method on all three data sets, with a fluctuation range exceeding 8%. This is mainly because the scale parameters in method 4 are set manually, so redundant information contained in the APs is not reduced and representative spatial structure information is not highlighted. In addition, method 4 obtains the final CD image by applying a single threshold to the change information from different sources, thereby ignoring the uncertainty of the multi-source change information.
Table 4. Quantitative evaluation of CD accuracy on data set 1. OA: overall accuracy; FP: false-detection rate; FN: missed-detection rate

Method/index | OA(%) | FP(%) | FN(%)
Evaluation criterion | higher is better | lower is better | lower is better
Proposed method | 83.9 | 15.1 | 9.1
Method 1 | 57.2 | 40.4 | 39.1
Method 2 | 63.5 | 32.3 | 25.2
Method 3 | 79.8 | 19.3 | 11.9
Method 4 | 71.2 | 28.5 | 19.4
Method 5 | 77.1 | 21.4 | 15.3
Table 5. Quantitative evaluation of CD accuracy on data set 2

Method/index | OA(%) | FP(%) | FN(%)
Evaluation criterion | higher is better | lower is better | lower is better
Proposed method | 84.5 | 12.6 | 9.8
Method 1 | 68.4 | 39.1 | 34.9
Method 2 | 72.8 | 30.6 | 29.8
Method 3 | 81.5 | 15.3 | 11.4
Method 4 | 74.8 | 26.5 | 24.4
Method 5 | 51.1 | 46.6 | 42.8
Table 6. Quantitative evaluation of CD accuracy on data set 3

Method/index | OA(%) | FP(%) | FN(%)
Evaluation criterion | higher is better | lower is better | lower is better
Proposed method | 85.1 | 13.9 | 10.9
Method 1 | 59.4 | 40.2 | 39.7
Method 2 | 68.6 | 30.3 | 31.6
Method 3 | 78.1 | 21.9 | 17.4
Method 4 | 80.2 | 19.4 | 15.8
Method 5 | 71.4 | 26.4 | 27.8
In order to respectively verify the effectiveness of the adaptive scale parameter extraction strategy and the decision fusion framework provided by the invention, the following two experimental schemes are carried out: (1) Manually setting the scale parameters of area, diagonal, standard deviation and NMI as {100, 918, 1734, 2548, 3368, 4185, 5000}, {10, 25, 40, 55, 70, 85, 100}, {0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5} and {20, 25, 30, 35, 40, 45, 50}, respectively, the remaining steps being in accordance with the proposed method (method 6); (2) The extracted CIIS corresponding to pixel i is averaged and all pixels in the image are traversed and the threshold for acquiring the CD map is determined using the EM method (method 7). Table 7 lists the OA of the different methods.
Table 7 OA of the proposed method and of methods 6 and 7
As shown above, the OA of the proposed method is significantly higher than that of the other two methods. The adaptive scale-parameter extraction strategy and the decision fusion framework proposed by the invention are therefore both necessary and effective for improving change detection accuracy: the former highlights representative spatial structure information while reducing redundant information in the APs; the latter improves the reliability of the decision by reducing the uncertainty of the change information from different sources.
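Method 7 above replaces the decision fusion with averaged change intensity indices and an EM-derived threshold. A minimal sketch of such EM thresholding, assuming a two-component 1-D Gaussian mixture with the threshold taken midway between the fitted class means (the patent does not spell out its exact EM variant):

```python
import numpy as np

def em_threshold(values, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return a
    threshold between the two class means (assumed EM variant)."""
    v = np.asarray(values, dtype=float).ravel()
    # Initialise the two components at the 25th/75th percentiles.
    mu = np.array([np.percentile(v, 25), np.percentile(v, 75)])
    sd = np.full(2, v.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        pdf = (pi / (sd * np.sqrt(2 * np.pi))
               * np.exp(-0.5 * ((v[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(v)
        mu = (r * v[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (v[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return mu.mean()  # midpoint of the two fitted class means
```

Pixels whose averaged change intensity exceeds the returned threshold would then be labelled as changed.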
In the proposed adaptive scale-parameter extraction, the scale number W is the only parameter that must be set manually. To clarify how W should be chosen, this section analyses the influence of different values of W on OA. In fig. 2 the abscissa is W and the ordinate is OA, and the results for the three data sets are drawn as curves in different styles.
As shown in fig. 2, in all three data set experiments OA shows a similar trend as W increases: it first rises gradually, then stabilises, and finally falls. The OA curves peak at W = 6, W = 4 and W = 6 for data sets 1, 2 and 3, with peak values of 83.9%, 84.9% and 85.1%, respectively. Detailed values are given in table 8.
TABLE 8 detailed W-OA values in three data set experiments
As the table shows, in the data set 2 experiment, setting W to 6 still yields an OA of 84.5%, only 0.4% below the corresponding highest OA. Setting W to 6 therefore gives satisfactory results in all three data set experiments, and in view of automation and reliability it is recommended to set W directly to 6 in CD applications.
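The adaptive scale-parameter search that W controls (steps (201)–(203) of claim 1) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the uniform subdivision of [T_min, T_max] into W sub-intervals, the small candidate grid per sub-interval, and the gradient-magnitude correlation used as a stand-in for the patent's GRSIM formula are all assumptions.

```python
import itertools
import numpy as np

def gradient_similarity(img1, img2):
    """Rough stand-in for GRSIM: correlation of gradient magnitudes.
    The patent's exact formula (gradient-magnitude and gradient-direction
    statistics) is not reproduced here."""
    gy1, gx1 = np.gradient(img1.astype(float))
    gy2, gx2 = np.gradient(img2.astype(float))
    m1, m2 = np.hypot(gx1, gy1).ravel(), np.hypot(gx2, gy2).ravel()
    c = np.corrcoef(m1, m2)[0, 1]
    return 0.0 if np.isnan(c) else abs(c)

def select_scales(profile_fn, t_min, t_max, W, candidates_per_bin=3):
    """Pick one threshold per sub-interval so that the summed gradient
    similarity (GRSIM_sum) of adjacent attribute profiles is minimised."""
    # Uniform subdivision of [t_min, t_max] into W bins (assumed).
    edges = np.linspace(t_min, t_max, W + 1)
    bins = [np.linspace(edges[w], edges[w + 1], candidates_per_bin + 2)[1:-1]
            for w in range(W)]
    best, best_cost = None, np.inf
    for combo in itertools.product(*bins):      # all scale combinations
        profiles = [profile_fn(t) for t in combo]
        cost = sum(gradient_similarity(profiles[w], profiles[w + 1])
                   for w in range(W - 1))       # GRSIM_sum
        if cost < best_cost:
            best, best_cost = combo, cost
    return best
```

Here `profile_fn` is a hypothetical callable producing one attribute profile per scale value; with W = 6 and 3 candidates per bin the exhaustive search evaluates 3^6 combinations.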
The embodiments merely illustrate the technical idea of the present invention and do not limit its scope; any modification made to the technical scheme on the basis of this technical idea falls within the scope of the present invention.
Claims (4)
1. A remote sensing image change detection method based on fractal attributes and decision fusion, characterized by comprising the following steps:
(1) Collecting multi-temporal high-resolution remote sensing images;
(2) Establishing an objective function based on minimising the correlation between adjacent scales, adaptively determining the scale parameter set of each attribute through iterative calculation, and extracting morphological attribute profiles with adaptive scale parameters;
(3) Constructing a multi-feature decision fusion framework, calculating a change intensity index and an evidence confidence index to describe the change information and its confidence, respectively, and using the framework to fuse the change information from the morphological attribute profiles with adaptive scale parameters and from the original spectra, to obtain the final change detection image;
in the step (2), the adaptive scale parameter extraction method is as follows:
(201) Setting the total number of scales of each attribute to W and the value interval of the scale parameter to [T_min, T_max], where T_min and T_max are respectively the minimum and maximum values that the scale parameter can take;
(202) Computing the interval Sub_w in which the w-th scale parameter should lie, w ∈ {1, 2, ..., W}:
(203) Defining an objective function:
iteratively calculating all combinations of the scale parameters, and taking the combination corresponding to the minimum value of GRSIM_sum as the extracted optimal scale parameter set; where GRSIM_{w,w+1} denotes the gradient similarity of two adjacent attribute profiles:
in the above formula, GRSIM_{B1,B2} is the gradient similarity between the two images B1 and B2; σ_{Z1} and σ_{Z2} denote the standard deviations of the gradient magnitude matrices of the two images, σ_{M1} and σ_{M2} denote the standard deviations of the gradient direction matrices of the two images, σ²_{Z1} and σ²_{Z2} denote the variances of the gradient magnitude matrices of the two images, and σ_{M1,M2} denotes the covariance of the gradient direction matrices of the two images;
in step (3), the method for constructing the multi-feature decision fusion framework is as follows:
defining the decision fusion framework on the hypothesis space Θ = {CT, NT}, where CT and NT denote a changed pixel and an unchanged pixel, respectively; for each pixel i, the basic probability assignments are established by:
m_n({CT}) = CII_n × CIE_n
m_n({NT}) = (1 − CII_n) × CIE_n
m_n({CT, NT}) = 1 − CIE_n
in the above formula, CII_n and CIE_n denote the n-th change intensity index and evidence confidence index corresponding to pixel i, and m_n({CT}), m_n({NT}) and m_n({CT, NT}) denote the basic probability assignments of the non-empty subsets {CT}, {NT} and {CT, NT} derived from the n-th set of evidence;
the combined basic probability assignments m({CT}), m({NT}) and m({CT, NT}) of the non-empty subsets {CT}, {NT} and {CT, NT} are calculated with the following formula:
in the above formula, A denotes a non-empty subset, N denotes the total number of evidences, and m_n(F_n) denotes the basic probability assignment derived from the n-th set of evidence, with F_n ∈ 2^Θ;
The following decision rules are established:
if pixel i satisfies the decision rule, it is judged to be a changed pixel; otherwise it is judged to be an unchanged pixel; all pixels are traversed to obtain the final change detection image.
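The basic probability assignments and combination step of claim 1 follow Dempster–Shafer evidence theory. A minimal sketch over Θ = {CT, NT}; using Dempster's rule as the fusion operator and "combined mass of {CT} exceeds that of {NT}" as the decision rule are assumptions standing in for the patent's combination formula and decision rule, which are not reproduced here:

```python
from functools import reduce

CT, NT = "CT", "NT"

def bpa(cii, cie):
    """Basic probability assignment from one evidence source (claim 1):
    m({CT}) = CII*CIE, m({NT}) = (1-CII)*CIE, m({CT,NT}) = 1-CIE."""
    return {frozenset([CT]): cii * cie,
            frozenset([NT]): (1.0 - cii) * cie,
            frozenset([CT, NT]): 1.0 - cie}

def dempster(m1, m2):
    """Dempster's rule of combination (assumed fusion operator)."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb  # mass on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def is_changed(evidences):
    """Fuse N (CII, CIE) pairs for one pixel and decide changed vs not.
    The decision rule used here is a plausible stand-in."""
    m = reduce(dempster, (bpa(cii, cie) for cii, cie in evidences))
    return m.get(frozenset([CT]), 0.0) > m.get(frozenset([NT]), 0.0)
```

For example, two evidences that both point strongly to change, such as (CII, CIE) pairs (0.9, 0.8) and (0.8, 0.9), yield a changed pixel under this rule.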
2. The remote sensing image change detection method based on fractal attribute and decision fusion as claimed in claim 1, wherein in step (2), the 4 morphological attributes of area, diagonal, standard deviation and normalized moment of inertia (NMI) are selected.
3. The remote sensing image change detection method based on fractal attribute and decision fusion as claimed in claim 1, wherein in step (3), the change intensity index is calculated as follows:
(301) Through difference processing, extracting the difference images between attribute profiles of different dates under the same scale parameter, to obtain, for each attribute, a difference image set of the morphological attribute profiles with adaptive scale parameters;
(302) Through difference processing, extracting the difference images between images of the same band at different dates, to obtain a difference image set based on the original spectra;
(303) In a difference image, the grey value of pixel i reflects the likelihood that pixel i is a changed pixel; the grey value of pixel i is therefore normalized to the interval [0, 255] and used as the change intensity index of pixel i, so that from the difference image sets obtained in steps (301) and (302), multiple groups of change intensity indices corresponding to pixel i, based on the different attributes and on the original spectra, are obtained.
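Step (303) can be sketched as follows; rescaling the [0, 255] index back to [0, 1] so that it can serve directly as CII in the probability assignments of claim 1 is an assumption, not stated in the claim itself:

```python
import numpy as np

def change_intensity_index(img_t1, img_t2):
    """CII per pixel from a pair of co-registered images (claim 3, step
    (303)): absolute difference, linearly stretched to [0, 255]."""
    diff = np.abs(img_t1.astype(float) - img_t2.astype(float))
    lo, hi = diff.min(), diff.max()
    if hi == lo:                 # flat difference image: no change signal
        return np.zeros_like(diff)
    cii_255 = (diff - lo) / (hi - lo) * 255.0
    # Rescaling to [0, 1] for use in the probability assignments of
    # claim 1 is an assumption.
    return cii_255 / 255.0
```

Applied to each pair in the difference image sets of steps (301) and (302), this yields one CII map per attribute/scale and per spectral band.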
4. The remote sensing image change detection method based on fractal attribute and decision fusion as claimed in claim 1, wherein in step (3), the evidence confidence index is calculated according to the following formula:
in the above formula, CIE is the evidence confidence index, and GRSIM_{w,w+1} denotes the gradient similarity of two adjacent attribute profiles.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010098359.2A CN111340761B (en) | 2020-02-18 | 2020-02-18 | Remote sensing image change detection method based on fractal attribute and decision fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111340761A CN111340761A (en) | 2020-06-26 |
CN111340761B true CN111340761B (en) | 2023-04-18 |
Family
ID=71185238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010098359.2A Active CN111340761B (en) | 2020-02-18 | 2020-02-18 | Remote sensing image change detection method based on fractal attribute and decision fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340761B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115909050B (en) * | 2022-10-26 | 2023-06-23 | 中国电子科技集团公司第五十四研究所 | Remote sensing image airport extraction method combining line segment direction and morphological difference |
CN118172685B (en) * | 2024-03-12 | 2024-10-18 | 北京智慧宏图勘察测绘有限公司 | Intelligent analysis method and device for unmanned aerial vehicle mapping data |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632363B (en) * | 2013-08-27 | 2016-06-08 | 河海大学 | Object level high-resolution remote sensing image change detecting method based on Multiscale Fusion |
CN107085708B (en) * | 2017-04-20 | 2020-06-09 | 哈尔滨工业大学 | High-resolution remote sensing image change detection method based on multi-scale segmentation and fusion |
CN107689055A (en) * | 2017-08-24 | 2018-02-13 | 河海大学 | A kind of multi-temporal remote sensing image change detecting method |
CN109360184A (en) * | 2018-08-23 | 2019-02-19 | 南京信息工程大学 | In conjunction with the remote sensing image variation detection method of shadow compensation and Decision fusion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gao et al. | Automatic change detection in synthetic aperture radar images based on PCANet | |
CN104966085B (en) | A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features | |
CN105335966B (en) | Multiscale morphology image division method based on local homogeney index | |
CN110309781B (en) | House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion | |
CN106503739A (en) | The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics | |
CN109657610A (en) | A kind of land use change survey detection method of high-resolution multi-source Remote Sensing Images | |
CN111582146B (en) | High-resolution remote sensing image city function partitioning method based on multi-feature fusion | |
CN110569751B (en) | High-resolution remote sensing image building extraction method | |
WO2018076138A1 (en) | Target detection method and apparatus based on large-scale high-resolution hyper-spectral image | |
CN109635733B (en) | Parking lot and vehicle target detection method based on visual saliency and queue correction | |
CN103984946A (en) | High resolution remote sensing map road extraction method based on K-means | |
CN110309780A (en) | High resolution image houseclearing based on BFD-IGA-SVM model quickly supervises identification | |
CN109187552B (en) | Wheat scab damage grade determination method based on cloud model | |
CN105139015A (en) | Method for extracting water body from remote sensing image | |
CN104182985A (en) | Remote sensing image change detection method | |
CN109584284B (en) | Hierarchical decision-making coastal wetland ground object sample extraction method | |
CN112990314B (en) | Hyperspectral image anomaly detection method and device based on improved isolated forest algorithm | |
CN111340761B (en) | Remote sensing image change detection method based on fractal attribute and decision fusion | |
CN104217440A (en) | Method for extracting built-up area from remote sensing image | |
CN107992856A (en) | High score remote sensing building effects detection method under City scenarios | |
CN110310263B (en) | SAR image residential area detection method based on significance analysis and background prior | |
CN102982345B (en) | Semi-automatic classification method for timing sequence remote sensing images based on continuous wavelet transforms | |
CN111046838A (en) | Method and device for identifying wetland remote sensing information | |
CN112241956B (en) | PolSAR image ridge line extraction method based on region growing method and variation function | |
CN107657246B (en) | Remote sensing image building detection method based on multi-scale filtering building index |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||