
CN111340761B - Remote sensing image change detection method based on fractal attribute and decision fusion - Google Patents


Info

Publication number
CN111340761B
CN111340761B (application CN202010098359.2A)
Authority
CN
China
Prior art keywords
pixel
attribute
decision fusion
images
scale parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010098359.2A
Other languages
Chinese (zh)
Other versions
CN111340761A (en)
Inventor
王超
申祎
刘辉
吴昊天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202010098359.2A
Publication of CN111340761A
Application granted
Publication of CN111340761B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image change detection method based on fractal attributes and decision fusion, comprising the following steps: collecting multi-temporal high-resolution remote sensing images; establishing an objective function based on the minimum of the average inter-scale correlation, adaptively determining the scale parameter set of each attribute through iterative calculation, and extracting morphological attribute profiles with adaptive scale parameters; and constructing a multi-feature decision fusion framework, calculating a change intensity index and an evidence confidence index to describe the change information and the corresponding degree of confidence respectively, and using the framework to fuse the change information from the adaptively scaled morphological attribute profiles and the original spectrum to obtain the final change detection map. By establishing the objective function based on the minimum average inter-scale correlation, the method adaptively obtains a set of scale parameters, constructs a multi-feature decision fusion framework on that basis, and improves decision reliability by reducing the uncertainty of change information from different sources.

Description

Remote sensing image change detection method based on fractal attributes and decision fusion

Technical Field

The invention belongs to the field of image processing, and in particular relates to a remote sensing image change detection method.

Background Art

With the continuous development of remote sensing systems, change detection (CD) has attracted widespread attention as one of the most important applications in the field of remote sensing. An accurate understanding of land cover change is an important issue for human activities such as dynamic land use, vegetation health, and environmental monitoring. The widespread use of a new generation of high-resolution sensors (such as IKONOS, Quickbird, and GF2) has further expanded the application scope of CD technology. Compared with medium- and low-resolution remote sensing images, high-resolution remote sensing (HRRS) images contain more land cover spatial information and thematic information, making it feasible to identify different types of complex structures in a scene. However, since objects of various shapes are composed of many pixels and the spectral information is very limited, these characteristics of high-resolution remote sensing images make it difficult for traditional pixel-based change detection methods relying on spectral differences to achieve ideal results.

In order to solve this problem, a large number of studies have introduced spatial structural information as a supplement, which has proven very effective in improving the recognition ability of CD in HRRS images. In the existing literature, supervised machine learning methods are the most widely used in CD. However, these methods require a large number of training samples to determine the model parameters and avoid overfitting. At the same time, scholars have proposed a variety of unsupervised spatial-structure extraction methods for CD in HRRS, adopting different strategies such as object-based methods, linear-transformation-based methods, Markov Random Field (MRF)-based methods, multi-scale analysis methods, and change intensity index methods. In recent years, in order to cope with detail information that is meaningless or even detrimental to CD as remote sensing image resolution increases, Morphological Attribute Profiles (MAPs) have been introduced into CD applications.

As one of the most effective methods for spatial modeling of HRRS images, the operators in MAPs can efficiently realize a multi-scale representation of land cover through a tree structure. Compared with traditional feature extraction strategies based on a given filter window, MAPs can expand the analysis unit to all connected pixels with similar attributes, which helps to accurately extract the spatial structure information of the object to which a pixel belongs. In addition, MAPs have also proven effective in reducing image complexity and extracting spatial structure information in CD applications. Even so, most MAPs-based CD methods still have some problems: (1) In order to highlight representative spatial structure information while reducing redundant information in a limited number of Attribute Profiles (APs), a reasonable set of scale parameters needs to be determined adaptively. However, MAPs theory gives no clear standard, and most current scale parameters are determined manually based on experience. (2) Given the complexity of land cover changes in a scene, when combining change information from multiple APs and other features, existing studies rarely consider the uncertainty contained in change information from different sources.

Summary of the Invention

In order to solve the technical problems mentioned in the above background art, the present invention proposes a remote sensing image change detection method based on fractal attributes and decision fusion.

In order to achieve the above technical objectives, the technical solution of the present invention is as follows:

The remote sensing image change detection method based on fractal attributes and decision fusion includes the following steps:

(1) Collect multi-temporal high-resolution remote sensing images;

(2) Establish an objective function based on the minimum of the average inter-scale correlation, adaptively determine the scale parameter set of each attribute through iterative calculation, and extract morphological attribute profiles with adaptive scale parameters;

(3) Construct a multi-feature decision fusion framework, calculate a change intensity index and an evidence confidence index to describe the change information and the corresponding degree of confidence respectively, and use the framework to fuse the change information from the adaptively scaled morphological attribute profiles and the original spectrum to obtain the final change detection map.

Furthermore, in step (2), four morphological attributes are selected: area, diagonal, standard deviation, and normalized moment of inertia.

Furthermore, in step (2), the adaptive scale parameter extraction method is as follows:

(201) Set the total number of scales for each attribute to W, and the value interval of the scale parameters to [Tmin, Tmax], where Tmin and Tmax are respectively the minimum and maximum values the scale parameter can take;

(202) Calculate the interval Subw; the w-th scale parameter should lie within Subw, w ∈ {1, 2, ..., W}:

Subw = [Tmin + (w-1)(Tmax - Tmin)/W, Tmin + w(Tmax - Tmin)/W]

(203) Define the objective function:

GRSIMsum = Σ_{w=1}^{W-1} GRSIMw,w+1

All combinations of scale parameters are evaluated iteratively, and the combination corresponding to the minimum of GRSIMsum is taken as the extracted optimal scale parameter set; here, GRSIMw,w+1 denotes the gradient similarity of two adjacent attribute profiles:

GRSIMw,w+1 = (2σZ1σZ2 × 2σM1,M2) / ((σ²Z1 + σ²Z2)(σ²M1 + σ²M2))

In the above formula, σZ1 and σZ2 denote the standard deviations of the gradient magnitude matrices of the two images, σM1 and σM2 denote the standard deviations of the gradient direction matrices of the two images, σ²Z1 and σ²Z2 denote the variances of the gradient magnitude matrices of the two images, and σM1,M2 denotes the covariance of the gradient direction matrices of the two images.

Furthermore, in step (3), the method for calculating the change intensity index is as follows:

(301) Through differencing, extract the difference images between the attribute profiles of different times under the same scale parameter, obtaining for each attribute a set of difference images based on the morphological attribute profiles with adaptive scale parameters;

(302) Through differencing, extract the difference images between the images of the same band at different times, obtaining a set of difference images based on the original spectrum;

(303) In a difference image, the grey value of pixel i reflects how likely it is that pixel i is a changed pixel; it is therefore normalized to the interval [0, 255] and taken as the change intensity index of pixel i. From the difference image sets obtained in steps (301) and (302), multiple groups of change intensity indexes based on the different attributes and the original spectrum are obtained for pixel i.

Furthermore, in step (3), the evidence confidence index is calculated by the following formula:

[The formula defining the evidence confidence index CIE is rendered as an image in the original document and is not reproduced here.]

In the above formula, CIE is the evidence confidence index.

Furthermore, in step (3), the method of constructing the multi-feature decision fusion framework is as follows:

The decision fusion framework is defined as Θ: {CT, NT}, where Θ denotes the hypothesis space, and CT and NT denote changed and unchanged pixels, respectively. For each pixel i, the basic probability assignment is established by the following formulas:

mn({CT}) = CIIn × CIEn

mn({NT}) = (1 - CIIn) × CIEn

mn({CT, NT}) = 1 - CIEn

In the above formulas, CIIn and CIEn denote the n-th change intensity index and evidence confidence index corresponding to pixel i, and mn({CT}), mn({NT}) and mn({CT, NT}) denote the basic probability assignments of the non-empty subsets {CT}, {NT} and {CT, NT} for the n-th body of evidence;

The basic probability assignments m({CT}), m({NT}) and m({CT, NT}) of the non-empty subsets {CT}, {NT} and {CT, NT} are calculated by the following formula:

m(A) = (1 / (1 - K)) Σ_{∩Fn = A} Π_{n=1}^{N} mn(Fn),  with K = Σ_{∩Fn = ∅} Π_{n=1}^{N} mn(Fn)

In the above formula, A denotes a non-empty subset, N denotes the total number of bodies of evidence, and mn(Fn) denotes the basic probability assignment obtained from the n-th body of evidence, with Fn ∈ 2^Θ.

The following decision rule is established:

m({CT}) > m({NT}) and m({CT}) > m({CT, NT})

If pixel i satisfies the above decision rule, it is judged to be a changed pixel; otherwise, it is judged to be an unchanged pixel. All pixels are traversed to obtain the final change detection map.

The beneficial effects of the above technical solution are as follows:

By establishing an objective function based on the minimum average inter-scale correlation, the present invention can adaptively obtain a set of scale parameters to extract representative APs while reducing redundant information. On this basis, a multi-feature decision fusion framework based on D-S theory is constructed, which improves the reliability of decision making by reducing the uncertainty of change information from different sources. The effectiveness of the method is verified through experiments on multi-temporal HRRS image datasets.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a flow chart of the method of the present invention;

Fig. 2 shows the effect of different scale numbers W on the overall accuracy OA in the experiments.

DETAILED DESCRIPTION

The technical solution of the present invention will be described in detail below with reference to the accompanying drawings.

Change detection is crucial for accurately understanding surface changes from multi-temporal Earth observation data. Owing to its great advantages in spatial information modeling, the morphological attribute profile is increasingly favored as a means of improving change detection accuracy. However, most change detection methods based on morphological attribute profiles set the scale parameters of the profiles manually and ignore the uncertainty of change information from different sources. To address these problems, the present invention proposes a new high-resolution remote sensing image change detection method based on morphological attribute profiles and decision fusion. By establishing an objective function based on the minimum of the average inter-scale correlation, Morphological Attribute Profiles with Adaptive Scale Parameters (ASP-MAPs) are proposed to mine spatial structure information. On this basis, a multi-feature decision fusion framework based on Dempster-Shafer (D-S) theory is constructed to obtain the change detection results. The method flow of the present invention is shown in Fig. 1.

(1) MAPs theory

MAPs theory is developed on the basis of set theory. Taking spectral similarity and spatial connectivity as the basic analysis unit, it extracts the connected region corresponding to each pixel and designs multi-scale operators with different attributes. The computation of MAPs is briefly described as follows: let B be a grayscale image, i a pixel in B, and k a gray level; the binary image Th^k(B) can then be obtained as

Th^k(B) = {i ∈ B : B(i) ≥ k}   (1)

Traversing all pixels in B yields the image sequence Th^k(B), and Γi(B) = max(k) is set as the result of the opening operation at i. On this basis, using the symmetry of the attribute transformation, the closing result at i is obtained as Φi(B) = min(k). Let Tw ∈ {T1, T2, ..., TW} be the w-th scale parameter and W the total number of scales; the opening profile Ψ(Γ(B)) and the closing profile Ψ(Φ(B)) are expressed as follows:

Ψ(Γ(B)) = {Γ^T1(B), Γ^T2(B), ..., Γ^TW(B)},  Ψ(Φ(B)) = {Φ^T1(B), Φ^T2(B), ..., Φ^TW(B)}   (2)

Finally, by combining Ψ(Γ(B)) and Ψ(Φ(B)), the MAPs are obtained.
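
The threshold-decomposition computation described above can be made concrete with a small sketch. The following pure-Python illustration (written for this description, not taken from the patent) implements an area attribute opening: for every grey level k, the binary image Th^k(B) is split into 4-connected components, and a pixel keeps the response k only while its component satisfies the area criterion, so that the result at pixel i is Γi(B) = max(k).

```python
from collections import deque

def attribute_opening(image, area_threshold):
    """Area attribute opening by threshold decomposition (simplified sketch).

    For every grey level k, pixels whose 4-connected component in the
    binary image {B >= k} has at least `area_threshold` pixels keep the
    response k; the opening result at a pixel is the maximum such k.
    """
    rows, cols = len(image), len(image[0])
    result = [[0] * cols for _ in range(rows)]
    for k in sorted({v for row in image for v in row}):
        mask = [[image[r][c] >= k for c in range(cols)] for r in range(rows)]
        seen = [[False] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if mask[r][c] and not seen[r][c]:
                    # flood-fill one 4-connected component of the mask
                    component, queue = [], deque([(r, c)])
                    seen[r][c] = True
                    while queue:
                        y, x = queue.popleft()
                        component.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and mask[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                    if len(component) >= area_threshold:
                        for y, x in component:
                            result[y][x] = max(result[y][x], k)
    return result
```

On a toy image, an isolated bright pixel whose component never reaches the area threshold is flattened to the background level, while larger plateaus pass through unchanged; the closing Φi(B) follows by duality.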

(2) Adopted attributes

Based on research results related to MAPs, four attributes are used in the present invention: area, diagonal, standard deviation, and normalized moment of inertia (NMI). The effectiveness of these attributes in HRRS image classification and CD applications has already been demonstrated.

For the connected region corresponding to pixel i: Area represents its size; Diagonal represents the diagonal length of its minimum enclosing rectangle; Standard Deviation represents the degree of grey-level variation; NMI reflects its shape and the position of its centre of gravity.

(3) Construction of ASP-MAPs

As shown in Fig. 1, in the ASP-MAPs construction process the scale parameters are determined first: within a limited number of APs with different scale parameters, the constructed APs should highlight the representative spatial-structure characteristics of typical objects in the scene, thereby improving the ability to recognize changes in such objects; in addition, reducing redundant information between APs also requires a reasonable set of scale parameters. On this basis, the strategy follows the principle that the smaller the average inter-scale correlation of the APs, the more representative they are. The specific process of ASP-MAPs construction is as follows:

Gradient Similarity (GRSIM): to measure the inter-scale correlation of APs, an appropriate similarity measure must be selected. According to MAPs theory, pixels that fall within the attribute range determined by the corresponding scale parameter have the largest grey-level response, i.e. they appear as newly generated edges (or objects). The similarity measure used should therefore be sensitive to edge changes. Based on the above analysis, the present invention proposes a gradient-vector-based similarity measure GRSIM: a third-order Sobel filter [31] is used to extract gradient information, and the GRSIM index between images B1 and B2 is defined as follows:

GRSIMB1,B2 = (2σZ1σZ2 × 2σM1,M2) / ((σ²Z1 + σ²Z2)(σ²M1 + σ²M2))   (3)

where Z1 and Z2 denote the gradient magnitude matrices of B1 and B2, and M1 and M2 denote their gradient direction matrices; σZ1, σZ2, σM1 and σM2 denote the corresponding standard deviations, σ²Z1 and σ²Z2 the variances of the gradient magnitude matrices, and σM1,M2 the covariance of the gradient direction matrices. The larger the value of GRSIMB1,B2, the higher the correlation between B1 and B2.
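
As an illustration only: the closed form of GRSIM in equation (3) is rendered as an image in the original document, so the sketch below uses an SSIM-style combination of the symbols listed above (gradient-magnitude standard deviations and variances, gradient-direction covariance); the small constants c1 and c2 are stabilizers added here to avoid division by zero and are not part of the patent.

```python
import math

def sobel_gradients(img):
    """Valid-region 3x3 Sobel gradients; returns flat lists of
    gradient magnitudes and gradient directions."""
    kx = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))
    ky = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))
    rows, cols = len(img), len(img[0])
    mags, dirs = [], []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = sum(kx[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            mags.append(math.hypot(gx, gy))
            dirs.append(math.atan2(gy, gx))
    return mags, dirs

def _std(xs):
    mu = sum(xs) / len(xs)
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))

def _cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def grsim(img1, img2, c1=1e-9, c2=1e-9):
    """SSIM-style reading of equation (3) on Sobel gradient statistics."""
    z1, m1 = sobel_gradients(img1)
    z2, m2 = sobel_gradients(img2)
    s_z1, s_z2 = _std(z1), _std(z2)
    s_m1, s_m2 = _std(m1), _std(m2)
    cov_m = _cov(m1, m2)
    return ((2 * s_z1 * s_z2 + c1) * (2 * cov_m + c2)) / \
           ((s_z1 ** 2 + s_z2 ** 2 + c1) * (s_m1 ** 2 + s_m2 ** 2 + c2))
```

For identical inputs the index evaluates to (numerically) 1; lower values indicate lower inter-scale correlation, which is what the scale-parameter search below minimizes.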

On this basis, the steps of the adaptive scale parameter extraction strategy are as follows:

Step 1: Set the interval [Tmin, Tmax] and the number of scales W for each attribute, then adaptively search for the optimal scale parameter set. The area interval is set to [500, 28000], the diagonal interval to [10, 100], the standard deviation interval to [10, 70], and the NMI interval to [0.2, 0.5], with W not exceeding 10. Based on multiple sets of experimental results, W = 6 is recommended in the present invention.

Step 2: To avoid falling into a local optimum, the w-th (w ∈ {1, 2, ..., W}) scale parameter should lie within the interval Subw, set according to equation (4):

Subw = [Tmin + (w-1)(Tmax - Tmin)/W, Tmin + w(Tmax - Tmin)/W]   (4)

Step 3: The objective function is defined as follows:

GRSIMsum = Σ_{w=1}^{W-1} GRSIMw,w+1   (5)

where GRSIMw,w+1 denotes the GRSIM of two adjacent APs. According to equations (3)-(5), all combinations of scale parameters are evaluated iteratively, and the combination corresponding to the minimum GRSIMsum is taken as the optimal scale parameter set. On this basis, the ASP-MAPs of the multi-temporal images are obtained according to equation (2).
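
The exhaustive search of steps 1-3 can be sketched as follows. Here `pairwise_sim` is a stand-in for the GRSIM between the attribute profiles built at two candidate scales (profile construction, the expensive part, is omitted), and the per-sub-interval sampling density is an assumption of this sketch, not something the patent prescribes.

```python
from itertools import product

def adaptive_scale_search(t_min, t_max, w_total, samples_per_sub, pairwise_sim):
    """Adaptive scale-parameter search, a sketch of steps 1-3 above.

    [t_min, t_max] is split into w_total equal sub-intervals (one
    candidate set per scale, keeping the search away from clustered,
    locally optimal solutions); every combination drawing one candidate
    per sub-interval is scored by the summed similarity of adjacent
    scales, and the minimizing combination is returned.
    """
    width = (t_max - t_min) / w_total
    candidate_sets = []
    for w in range(w_total):
        lo = t_min + w * width
        step = width / (samples_per_sub + 1)
        candidate_sets.append([lo + (s + 1) * step for s in range(samples_per_sub)])
    best_combo, best_score = None, float("inf")
    for combo in product(*candidate_sets):
        # objective of equation (5): sum of adjacent-scale similarities
        score = sum(pairwise_sim(a, b) for a, b in zip(combo, combo[1:]))
        if score < best_score:
            best_combo, best_score = combo, score
    return list(best_combo), best_score
```

With a toy similarity that decays with scale separation, the search picks one candidate per sub-interval so that the summed adjacent-scale similarity is minimal, mirroring the argmin of GRSIMsum in equation (5).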

(4) Change information description based on the change intensity index CII

In order to describe the change information extracted from the ASP-MAPs and the original spectrum in a unified way, the change intensity index CII is calculated as follows:

Step 1: Through differencing, extract the difference images between APs of different times under the same scale parameter, obtaining for each attribute a set of difference images based on the ASP-MAPs.

Step 2: Through differencing, extract the difference images between images of the same band at different times, obtaining a set of difference images based on the original spectrum.

Step 3: In a difference image, since the grey value of pixel i reflects how likely it is that i is a changed pixel, it is normalized to the interval [0, 255] and taken as one of the CIIs corresponding to i. CIIs are calculated from the ASP-MAPs and all bands of the original images; five groups of CIIs corresponding to i are thus obtained, based on area, diagonal, standard deviation, NMI, and the original spectrum.
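
Steps 1-3 reduce to band-wise absolute differencing followed by a min-max stretch to [0, 255]; a minimal sketch for a single feature (one attribute-profile pair or one spectral band) might look like this:

```python
def change_intensity_index(band_t1, band_t2):
    """Per-pixel change intensity for one feature (steps 1-3, sketched).

    The absolute difference image is stretched to [0, 255]; the
    stretched grey value of each pixel is its CII for this feature.
    """
    diff = [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(band_t1, band_t2)]
    flat = [v for row in diff for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1          # guard against a constant difference image
    return [[255.0 * (v - lo) / span for v in row] for row in diff]
```

Applying this to each of the four ASP-MAPs attributes and to every spectral band yields the five groups of CIIs described above.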

(5) Multi-feature decision fusion

D-S theory is a decision theory for multi-source evidence fusion; one of its significant advantages is its strong capability for quantitatively evaluating the uncertainty of multi-source evidence. The present invention therefore constructs a decision fusion framework for fusing the change information from the ASP-MAPs and the original spectrum.

Basic Probability Assignment Formula (BPAF): according to D-S theory, let A be a non-empty subset of 2^Θ, where Θ denotes the hypothesis space, and let the BPAF of A be m(A). The mapping m: 2^Θ → [0, 1] should satisfy the following constraints:

m(∅) = 0,  Σ_{A ∈ 2^Θ} m(A) = 1   (6)

where m(A) represents the degree of belief in A; m(A) is calculated as follows:

m(A) = (1 / (1 - K)) Σ_{∩Fn = A} Π_{n=1}^{N} mn(Fn),  with K = Σ_{∩Fn = ∅} Π_{n=1}^{N} mn(Fn)   (7)

where N denotes the total number of bodies of evidence, and mn(Fn) denotes the basic probability assignment obtained from the n-th body of evidence, with Fn ∈ 2^Θ.

Calculation of CIE: in order to measure the degree of trust in the CIIs from the different sources (area, diagonal, standard deviation, NMI, and the original spectrum), a confidence index of evidence, CIE, is proposed. For each body of evidence, CIE can be calculated with equation (8). For each CII, a larger CIE indicates that a greater weight should be given to it in the decision fusion process.

[Equation (8), defining the evidence confidence index CIE, is rendered as an image in the original document and is not reproduced here.]

Construction of the decision fusion framework: the decision fusion framework is defined as Θ: {CT, NT}, where CT and NT denote changed and unchanged pixels, respectively. The non-empty subsets are therefore {CT}, {NT} and {CT, NT}. For each pixel i, the BPAF is established by the following formulas:

mn({CT}) = CIIn × CIEn   (9)

mn({NT}) = (1 - CIIn) × CIEn   (10)

mn({CT, NT}) = 1 - CIEn   (11)

where CIIn and CIEn denote the n-th CII and CIE corresponding to pixel i. On this basis, m({CT}), m({NT}) and m({CT, NT}) of pixel i are calculated by equation (7), and the decision rule is as follows:

m({CT}) > m({NT}) and m({CT}) > m({CT, NT})   (12)

If i satisfies the above rule, it is judged to be a changed pixel; otherwise, i is judged to be an unchanged pixel. Finally, all pixels are traversed according to the above decision process to obtain the CD map.
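
The per-pixel fusion of equations (9)-(11) with Dempster's rule (7) and the decision rule (12) can be sketched as follows. Two assumptions of this sketch: CII values are rescaled from [0, 255] to [0, 1] so that the masses are valid, and the decision rule (whose exact form is an image in the original) is read as m({CT}) strictly dominating the other two masses.

```python
def dempster_fuse(cii_list, cie_list):
    """Fuse N bodies of evidence over {CT}, {NT}, {CT,NT} for one pixel.

    cii_list: change intensity indexes in [0, 1] (one per feature).
    cie_list: evidence confidence indexes in [0, 1] (one per feature).
    Returns (fused masses, changed?) for the pixel.
    """
    def bpaf(cii, cie):
        # equations (9)-(11): mass on change, no-change, and ignorance
        return {"CT": cii * cie, "NT": (1 - cii) * cie, "CTNT": 1 - cie}

    def intersect(a, b):
        if a == b:
            return a
        if "CTNT" in (a, b):          # {CT,NT} ∩ X = X
            return a if b == "CTNT" else b
        return None                   # {CT} ∩ {NT} = ∅ (conflict)

    m = bpaf(cii_list[0], cie_list[0])
    for cii, cie in zip(cii_list[1:], cie_list[1:]):
        m2 = bpaf(cii, cie)
        combined = {"CT": 0.0, "NT": 0.0, "CTNT": 0.0}
        conflict = 0.0
        for fa, va in m.items():
            for fb, vb in m2.items():
                inter = intersect(fa, fb)
                if inter is None:
                    conflict += va * vb
                else:
                    combined[inter] += va * vb
        # Dempster's rule: renormalize by 1 - K (equation (7))
        norm = 1.0 - conflict
        m = {k: v / norm for k, v in combined.items()}
    changed = m["CT"] > m["NT"] and m["CT"] > m["CTNT"]
    return m, changed
```

Combining two concordant "change" bodies of evidence drives m({CT}) up and yields a changed decision, while low CIE shifts mass to {CT, NT}, making a single noisy feature less decisive.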

(6) Experiments and analysis

Dataset 1 is a set of aerial remote sensing images of the Nanjing area, China, with red, green, and blue bands; the images were acquired in March 2009 and February 2012, with a spatial resolution of 0.5 m and an image size of 512×512 pixels. Dataset 2 is a set of Quickbird images of the Chongqing area, China, with red, green, and blue bands; the images were acquired in September 2007 and August 2011, with a spatial resolution of 2.4 m and an image size of 512×512 pixels. Dataset 3 is a set of SPOT-5 panchromatic-multispectral fusion images of the Shanghai area, China, with red, green, and blue bands; the images were acquired in June 2004 and July 2008, with a spatial resolution of 2.5 m and an image size of 512×512 pixels. These three datasets were selected because they represent different urban scenes, mainly composed of buildings, roads, vegetation, and wasteland, which helps to verify the ability of the proposed method to recognize changes in these typical land covers and to evaluate its applicability and stability in CD applications.

To comprehensively evaluate the performance of the proposed method, five state-of-the-art CD methods were used for comparative experiments: improved change vector analysis (CVA) including CVA with Expectation Maximization (CVA-EM) (Method 1); a method based on spectral angle mapping (Method 2); a method based on spectral and texture features (Method 3); a method based on MAPs (Method 4); and a method based on deep learning (DL) (Method 5). The scale parameter sets adaptively extracted by the proposed method are shown in Tables 1-3.

Table 1. Scale parameter sets extracted for Dataset 1

[Table image: scale parameter sets for Dataset 1]

Table 2. Scale parameter sets extracted for Dataset 2

[Table image: scale parameter sets for Dataset 2]

Table 3. Scale parameter sets extracted for Dataset 3

[Table image: scale parameter sets for Dataset 3]

The quantitative evaluation results of the different methods are shown in Tables 4-6. On all three datasets, the overall accuracy (OA) of the proposed method reaches at least 83.9%, with a fluctuation of less than 1.5%, which is significantly better than the other methods. The proposed method therefore offers high accuracy and good stability under the challenges posed by different data sources.

Among the three CVA-based CD methods, Method 1 and Method 2 use only spectral differences as the basis for CD, with false positive (FP) and false negative (FN) rates exceeding 30% and 20% respectively. With texture features introduced as a supplement, all three evaluation indicators improve markedly in the results of Method 3. To generate more accurate CD maps, it is therefore necessary to exploit the spatial neighborhood information of pixels. Nevertheless, Method 3 defines a series of manually specified filter windows to extract texture features, which makes it difficult to stay consistent with the inherent shape of the object to which the current pixel belongs. In contrast, MAPs can extract more accurate spatial structure information from the non-fixed local regions formed by connected pixels with similar attributes.

Compared with the proposed method, although Method 4 uses APs to extract change information, its OA is significantly lower on all three datasets, with a fluctuation of more than 8%. This is mainly because the scale parameters in Method 4 are set manually, which neither suppresses the redundant information contained in the APs nor highlights the representative spatial structure information. In addition, Method 4 derives the final CD map from the multi-source change information with a single threshold, thereby ignoring the uncertainty of that information.

Method 5 is a DL-based method. In the comparative experiments, Method 5 showed lower accuracy and poorer stability on all three datasets. Without sufficient training samples, DL-based methods cannot obtain reliable results in CD applications, although it is safe to say that the performance of Method 5 would improve significantly as the number of training samples increases.

Table 4. Quantitative evaluation of CD accuracy on Dataset 1. OA: overall accuracy; FP: false positive (false alarm) rate; FN: false negative (missed detection) rate

Method / Indicator | OA (%) | FP (%) | FN (%)
Evaluation criterion | higher is better | lower is better | lower is better
Method of the present invention | 83.9 | 15.1 | 9.1
Method 1 | 57.2 | 40.4 | 39.1
Method 2 | 63.5 | 32.3 | 25.2
Method 3 | 79.8 | 19.3 | 11.9
Method 4 | 71.2 | 28.5 | 19.4
Method 5 | 77.1 | 21.4 | 15.3
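The OA, FP and FN figures reported in Tables 4-6 can be computed from a binary CD map and a ground-truth map along the following lines. The exact rate definitions are not spelled out in the text, so this sketch assumes FP = unchanged pixels wrongly flagged as changed, over all unchanged pixels, and FN = changed pixels missed, over all changed pixels.

```python
import numpy as np

def cd_metrics(pred, truth):
    """OA, FP rate and FN rate for a binary change map.
    pred/truth: boolean arrays, True = changed pixel.
    Assumed definitions: FP = wrongly flagged unchanged pixels / all
    unchanged pixels; FN = missed changed pixels / all changed pixels."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    oa = float(np.mean(pred == truth))
    fp = float(np.mean(pred[~truth])) if (~truth).any() else 0.0
    fn = float(np.mean(~pred[truth])) if truth.any() else 0.0
    return oa, fp, fn
```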

Table 5. Quantitative evaluation of CD accuracy on Dataset 2

Method / Indicator | OA (%) | FP (%) | FN (%)
Evaluation criterion | higher is better | lower is better | lower is better
Method of the present invention | 84.5 | 12.6 | 9.8
Method 1 | 68.4 | 39.1 | 34.9
Method 2 | 72.8 | 30.6 | 29.8
Method 3 | 81.5 | 15.3 | 11.4
Method 4 | 74.8 | 26.5 | 24.4
Method 5 | 51.1 | 46.6 | 42.8

Table 6. Quantitative evaluation of CD accuracy on Dataset 3

Method / Indicator | OA (%) | FP (%) | FN (%)
Evaluation criterion | higher is better | lower is better | lower is better
Method of the present invention | 85.1 | 13.9 | 10.9
Method 1 | 59.4 | 40.2 | 39.7
Method 2 | 68.6 | 30.3 | 31.6
Method 3 | 78.1 | 21.9 | 17.4
Method 4 | 80.2 | 19.4 | 15.8
Method 5 | 71.4 | 26.4 | 27.8

To separately verify the effectiveness of the proposed adaptive scale parameter extraction strategy and of the decision fusion framework, the following two experimental schemes were carried out: (1) the scale parameters of area, diagonal, standard deviation and NMI were manually set to {100, 918, 1734, 2548, 3368, 4185, 5000}, {10, 25, 40, 55, 70, 85, 100}, {0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5} and {20, 25, 30, 35, 40, 45, 50} respectively, with the remaining steps identical to the proposed method (Method 6); (2) the extracted CIIs corresponding to pixel i were averaged, all pixels in the image were traversed, and the EM method was used to determine the threshold for obtaining the CD map (Method 7). Table 7 lists the OA of the different methods.
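Method 7 averages the CIIs per pixel and then determines a global threshold with the EM method. A minimal stand-in for that EM step, fitting a two-component 1-D Gaussian mixture and taking the crossing point of the two weighted component densities as the threshold, could look like the sketch below; the patent does not specify its exact EM variant, so this is only an illustration.

```python
import numpy as np

def em_threshold(values, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM and return the value
    between the two means where the weighted component densities cross."""
    x = np.asarray(values, float)
    mu = np.array([x.min(), x.max()])              # initialize at the extremes
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        d = (x[:, None] - mu) ** 2
        p = pi * np.exp(-d / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: update weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    # Pick the crossing point of the two weighted densities between the means.
    lo, hi = np.sort(mu)
    ts = np.linspace(lo, hi, 1001)
    dens = pi * np.exp(-(ts[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return ts[np.argmin(np.abs(dens[:, 0] - dens[:, 1]))]
```

On a clearly bimodal set of averaged CII values, the returned threshold separates the "unchanged" mode from the "changed" mode.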

Table 7. OA of the proposed method, Method 6 and Method 7

[Table image: OA of the proposed method, Method 6 and Method 7]

As shown above, the OA of the proposed method is significantly higher than that of the other two methods. The proposed adaptive scale parameter extraction strategy and decision fusion framework are therefore both necessary and effective for improving change detection accuracy: the former helps to highlight representative spatial structure information while reducing the redundant information in the APs; the latter improves the reliability of the decision by reducing the uncertainty of the change information from different sources.

In the proposed adaptive scale parameter extraction process, the number of scales W is the only subordinate parameter that must be set manually. To clarify the basis for setting W, this section analyzes the influence of different values of W on OA. As shown in Figure 2, the horizontal axis is W, the vertical axis is OA, and the results on the three datasets are represented by curves of different styles.

As shown in Figure 2, in the experiments on all three datasets OA exhibits a similar trend as W increases: it first rises gradually, stabilizes, and then gradually declines. The peaks of the OA curves occur at W = 6, W = 4 and W = 6 for Datasets 1, 2 and 3, reaching 83.9%, 84.9% and 85.1% respectively. The detailed values are listed in Table 8.

Table 8. Detailed W-OA values in the experiments on the three datasets

[Table image: detailed W-OA values for the three datasets]

As shown in the table above, in the experiment on Dataset 2 OA reaches 84.5% when W is set to 6, only 0.4% lower than the corresponding highest OA. This means that setting W to 6 yields satisfactory results in all experiments on the three datasets. Considering automation and reliability, it is therefore recommended to set W directly to 6 in CD applications.

The embodiments merely illustrate the technical idea of the present invention and cannot be used to limit its scope of protection; any modification made on the basis of the technical solution in accordance with the technical idea proposed by the present invention falls within the scope of protection of the present invention.

Claims (4)

1. A remote sensing image change detection method based on fractal attributes and decision fusion, characterized in that it comprises the following steps:
(1) collecting multi-temporal high-resolution remote sensing images;
(2) establishing an objective function based on the minimum average inter-scale correlation, adaptively determining the scale parameter set of each attribute through iterative calculation, and extracting morphological attribute profiles (APs) with adaptive scale parameters;
(3) constructing a multi-feature decision fusion framework, calculating a change intensity index (CII) and an evidence confidence index (CIE) to describe the change information and the corresponding degree of trust respectively, and using the multi-feature decision fusion framework to fuse the change information from the adaptive-scale attribute profiles and the original spectra to obtain the final change detection map;
in step (2), the adaptive scale parameter extraction method is as follows:
(201) setting the total number of scales for each attribute to W and the value interval of the scale parameters to [Tmin, Tmax], where Tmin and Tmax are respectively the minimum and maximum values the scale parameters can take;
(202) calculating the interval Subw within which the w-th scale parameter should lie, w ∈ {1, 2, ..., W}:
[Formula image: definition of the interval Subw]
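Step (202) partitions [Tmin, Tmax] into W sub-intervals, one per scale parameter. Since the defining formula is only available as an image, the sketch below assumes an equal-width partition; the function name is illustrative.

```python
def sub_intervals(t_min, t_max, w_total):
    """Split [t_min, t_max] into W equal-width sub-intervals Sub_1..Sub_W;
    the w-th scale parameter is then searched inside Sub_w. Equal width is
    an assumption, since the defining formula appears only as an image."""
    step = (t_max - t_min) / w_total
    return [(t_min + (w - 1) * step, t_min + w * step)
            for w in range(1, w_total + 1)]
```

For example, `sub_intervals(0.0, 100.0, 4)` gives the four intervals (0, 25), (25, 50), (50, 75) and (75, 100).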
(203) defining the objective function:
[Formula image: objective function GRSIMsum]
iteratively calculating all combinations of the scale parameters, and taking the combination corresponding to the minimum of GRSIMsum as the extracted optimal scale parameter set, where GRSIMw,w+1 denotes the gradient similarity between two adjacent attribute profiles:
[Formula image: gradient similarity GRSIM]
in the above formula, GRSIMB1,B2 is the gradient similarity between the two images B1 and B2; σZ1 and σZ2 denote the standard deviations of the gradient magnitude matrices of the two images; σM1 and σM2 denote the standard deviations of the gradient direction matrices of the two images; σ²Z1 and σ²Z2 denote the variances of the gradient magnitude matrices of the two images; and σM1,M2 denotes the covariance of the gradient direction matrices of the two images;
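A rough illustration of the gradient similarity used in step (203): the exact GRSIM expression is only available as a formula image, so this sketch computes the gradient magnitude and direction matrices with central differences and combines an SSIM-like correlation of each pair of fields, purely as an assumption; the real formula may weight or combine these statistics differently.

```python
import numpy as np

def gradient_fields(img):
    """Gradient magnitude and direction matrices via central differences."""
    gy, gx = np.gradient(np.asarray(img, float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def grsim(b1, b2, c=1e-6):
    """Illustrative gradient similarity of two images B1 and B2: an
    SSIM-like correlation of their gradient magnitude fields and of their
    gradient direction fields, averaged. This combination is an assumption;
    the patent's exact expression is given only as a formula image."""
    z1, m1 = gradient_fields(b1)
    z2, m2 = gradient_fields(b2)
    def corr(a, b):
        cov = np.mean((a - a.mean()) * (b - b.mean()))
        return (2 * cov + c) / (a.var() + b.var() + c)
    return 0.5 * (corr(z1, z2) + corr(m1, m2))
```

By construction the measure equals 1 for identical images and stays below 1 for dissimilar ones.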
in step (3), the multi-feature decision fusion framework is constructed as follows:
the decision fusion framework is defined as Θ:{CT,NT}, where Θ denotes the hypothesis space and CT and NT denote changed and unchanged pixels respectively; for each pixel i, the basic probability assignment is established by:
mn({CT}) = CIIn × CIEn
mn({NT}) = (1 - CIIn) × CIEn
mn({CT,NT}) = 1 - CIEn
where CIIn and CIEn denote the n-th change intensity index and evidence confidence index corresponding to pixel i, and mn({CT}), mn({NT}) and mn({CT,NT}) denote the basic probability assignments of the non-empty subsets {CT}, {NT} and {CT,NT} for the n-th body of evidence;
the combined basic probability assignments m({CT}), m({NT}) and m({CT,NT}) of the non-empty subsets {CT}, {NT} and {CT,NT} are calculated by:
[Formula image: combination rule for the basic probability assignments]
in the above formula, A denotes a non-empty subset, N denotes the total number of bodies of evidence, and mn(Fn) denotes the basic probability assignment obtained from the n-th body of evidence, with Fn ∈ 2^Θ and
[Formula image]
the following decision rule is established:
[Formula image: decision rule]
if pixel i satisfies the above decision rule, pixel i is judged to be a changed pixel; otherwise pixel i is judged to be an unchanged pixel; all pixels are traversed to obtain the final change detection map.
2. The remote sensing image change detection method based on fractal attributes and decision fusion according to claim 1, characterized in that, in step (2), four morphological attributes are selected: area, diagonal, standard deviation and normalized moment of inertia (NMI).
3. The remote sensing image change detection method based on fractal attributes and decision fusion according to claim 1, characterized in that, in step (3), the change intensity index is calculated as follows:
(301) through differencing, extracting the difference images between attribute profiles of different times under the same scale parameter, and obtaining, for each attribute, a set of difference images of the morphological attribute profiles based on the adaptive scale parameters;
(302) through differencing, extracting the difference images between images of the same band at different times, and obtaining a set of difference images based on the original spectra;
(303) in a difference image, the gray value of pixel i reflects the likelihood that pixel i is a changed pixel; it is therefore normalized within the interval [0, 255] and taken as the change intensity index corresponding to pixel i; based on the difference image sets obtained in steps (301) and (302), multiple groups of change intensity indices based on the different attributes and the original spectra are obtained for pixel i.
4. The remote sensing image change detection method based on fractal attributes and decision fusion according to claim 1, characterized in that, in step (3), the evidence confidence index is calculated as follows:
[Formula image: evidence confidence index CIE]
in the above formula, CIE is the evidence confidence index, and GRSIMw,w+1 denotes the gradient similarity between two adjacent attribute profiles.
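The change intensity index of claim 3, i.e. the normalized gray value of a difference image, can be sketched as below. Min-max normalization to [0, 255] is assumed, with a further division by 255 (an assumption of this sketch, not stated in the claim) so that the resulting CII lies in [0, 1] and can be multiplied with the CIE in the basic probability assignments.

```python
import numpy as np

def change_intensity_index(diff_img):
    """Min-max normalize a difference image into [0, 255], then rescale to
    [0, 1] so the per-pixel value can act as the CII in the basic
    probability assignments (the extra /255 step is an assumption; the
    claim itself only states normalization within [0, 255])."""
    d = np.abs(np.asarray(diff_img, float))
    lo, hi = d.min(), d.max()
    scaled = np.zeros_like(d) if hi == lo else (d - lo) / (hi - lo) * 255.0
    return scaled / 255.0
```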
CN202010098359.2A 2020-02-18 2020-02-18 Remote sensing image change detection method based on fractal attribute and decision fusion Active CN111340761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098359.2A CN111340761B (en) 2020-02-18 2020-02-18 Remote sensing image change detection method based on fractal attribute and decision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010098359.2A CN111340761B (en) 2020-02-18 2020-02-18 Remote sensing image change detection method based on fractal attribute and decision fusion

Publications (2)

Publication Number Publication Date
CN111340761A CN111340761A (en) 2020-06-26
CN111340761B true CN111340761B (en) 2023-04-18

Family

ID=71185238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010098359.2A Active CN111340761B (en) 2020-02-18 2020-02-18 Remote sensing image change detection method based on fractal attribute and decision fusion

Country Status (1)

Country Link
CN (1) CN111340761B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115909050B (en) * 2022-10-26 2023-06-23 中国电子科技集团公司第五十四研究所 Remote sensing image airport extraction method combining line segment direction and morphological difference
CN118172685B (en) * 2024-03-12 2024-10-18 北京智慧宏图勘察测绘有限公司 Intelligent analysis method and device for unmanned aerial vehicle mapping data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632363B (en) * 2013-08-27 2016-06-08 河海大学 Object level high-resolution remote sensing image change detecting method based on Multiscale Fusion
CN107085708B (en) * 2017-04-20 2020-06-09 哈尔滨工业大学 High-resolution remote sensing image change detection method based on multi-scale segmentation and fusion
CN107689055A (en) * 2017-08-24 2018-02-13 河海大学 A kind of multi-temporal remote sensing image change detecting method
CN109360184A (en) * 2018-08-23 2019-02-19 南京信息工程大学 A remote sensing image change detection method combining shadow compensation and decision fusion

Also Published As

Publication number Publication date
CN111340761A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN111259828B (en) Recognition method based on multi-features of high-resolution remote sensing images
CN103632363B (en) Object level high-resolution remote sensing image change detecting method based on Multiscale Fusion
CN104751478B (en) Object-oriented building change detection method based on multi-feature fusion
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN105335966B (en) Multiscale morphology image division method based on local homogeney index
CN103559500B (en) A kind of multi-spectral remote sensing image terrain classification method based on spectrum Yu textural characteristics
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN104573685B (en) A kind of natural scene Method for text detection based on linear structure extraction
WO2018076138A1 (en) Target detection method and apparatus based on large-scale high-resolution hyper-spectral image
CN109657610A (en) A kind of land use change survey detection method of high-resolution multi-source Remote Sensing Images
CN107092871B (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN110929643B (en) A Hyperspectral Anomaly Detection Method Based on Multiple Features and Isolation Trees
CN102005034A (en) Remote sensing image segmentation method based on region clustering
CN108985247A (en) Multispectral image urban road identification method
CN105427313B (en) SAR image segmentation method based on deconvolution network and adaptive inference network
CN107239759A (en) A kind of Hi-spatial resolution remote sensing image transfer learning method based on depth characteristic
CN109886267A (en) A saliency detection method for low-contrast images based on optimal feature selection
CN104217440B (en) A kind of method extracting built-up areas from remote sensing images
CN111340761B (en) Remote sensing image change detection method based on fractal attribute and decision fusion
CN114119586A (en) Intelligent detection method for aircraft skin defects based on machine vision
CN115393719A (en) Hyperspectral Image Classification Method Combining Spatial Spectral Domain Adaptation and Ensemble Learning
CN111091129A (en) Image salient region extraction method based on multi-color characteristic manifold sorting
CN105023269B (en) A kind of vehicle mounted infrared image colorization method
Manandhar et al. Segmentation based building detection in high resolution satellite images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant