
CN110070545B - Method for automatically extracting urban built-up area by urban texture feature density - Google Patents

Method for automatically extracting urban built-up area by urban texture feature density

Info

Publication number
CN110070545B
CN110070545B (application CN201910212324.4A)
Authority
CN
China
Prior art keywords
image
gray
gray level
images
town
Prior art date
Legal status
Active
Application number
CN201910212324.4A
Other languages
Chinese (zh)
Other versions
CN110070545A (en)
Inventor
胡龙飞
仲波
罗小波
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201910212324.4A
Publication of CN110070545A
Application granted
Publication of CN110070545B
Status: Active
Anticipated expiration

Classifications

    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/20056: Discrete and fast Fourier transform [DFT, FFT]
    • Y02A 30/60: Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically extracting town built-up areas using town texture feature density, belonging to the field of remote sensing image information extraction. First, the sensitivity of the different texture features of the gray-level co-occurrence matrix for town extraction is analysed with the J-M distance, and the texture difference feature is taken as the classification basis; then multi-scale super-pixel segmentation is performed with the simple linear iterative clustering algorithm (SLIC), super-pixels are taken as the range for calculating local feature density, a feature density image is calculated after each segmentation, and the feature density images are superposed to construct the feature density image of the image to be classified. The extracted result has good completeness and accuracy; in particular, the extracted town edge coincides with the actual edge to a very high degree, achieving an effect comparable to manual interpretation.

Description

Method for automatically extracting urban built-up area by urban texture feature density
Technical Field
The invention belongs to the field of remote sensing image information extraction, and particularly relates to a method for automatically extracting town built-up areas from high-resolution remote sensing images (10 m resolution and finer) by constructing town texture feature density based on multi-layer super-pixel segmentation.
Background
Town information extraction based on remote sensing images means identifying and marking town built-up areas in remote sensing satellite images, aerial images or images taken by unmanned aerial vehicles. The range, distribution and change of town built-up areas are of great significance for urban planning, land-use resource allocation and environmental monitoring, form the data basis of many research directions, and have attracted the attention of a large number of researchers. However, complex ground information and the variety of town residential areas make town extraction a very challenging task. With the rapid development of the global aerospace industry, the quality and resolution of acquired image data have improved and high-resolution images have become easier to acquire; clear town texture information exists in high-resolution remote sensing images, providing an important data source for extracting town information from remote sensing imagery.
Town built-up area extraction methods fall into two main categories: spectrum-based methods and texture-based methods. Spectrum-based methods use the multispectral information of medium-low resolution data to analyse the difference between town and non-town on the spectral curve, and are mostly used for extracting large-area towns and for change detection. However, medium-low resolution data are limited by resolution, and it is difficult to obtain a high-precision town range from them. Texture-based methods distinguish town from non-town by texture differences in high-resolution data, and can be further divided into methods based on edge density, on straight-line detection, and on texture detection models. The first two depend too heavily on the quality of the input data and require a resolution finer than 2 m. Methods based on texture detection models are widely applied; Gabor filtering and the gray-level co-occurrence matrix are generally used to construct texture features, where Gabor filtering places high demands on data resolution whereas gray-level co-occurrence matrix methods can be applied to data of many resolutions. Existing texture-based methods place high demands on data quality, typically requiring a resolution finer than 5 m. However, such high-resolution data are costly to acquire, which hinders large-scale town extraction. In addition, existing methods depend too heavily on data quality and on the accuracy of the extracted features; yet in remote sensing images different ground objects commonly share similar signatures, and a large number of natural features resemble town textures, causing many false extractions.
Remote sensing satellite data with 10 m spatial resolution not only contain clear texture information but also show smoother internal texture within town built-up areas, and such data can be acquired free of charge; so far, however, they have seen little use in town extraction. Based on the characteristics of 10 m resolution remote sensing satellite image data, the invention explores texture features suitable for town extraction and proposes a method of constructing feature density by super-pixel segmentation, thereby improving the accuracy of town extraction.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. The extracted result has good completeness and accuracy; in particular, the town edge extracted by the method coincides with the actual edge to a very high degree, comparable to manual interpretation. The technical scheme of the invention is as follows:
A method for automatically extracting town built-up areas using town texture feature density, comprising the steps of:
1) Acquiring a remote sensing image with a resolution of 10m and a visible light wave band, and performing cutting, splicing and cloud removal operations to obtain an area to be extracted;
2) Constructing a gray image of the preprocessed remote sensing image, filtering low-frequency textures of the gray image based on Fourier transform, and obtaining a gray image of a high-frequency component;
3) Aiming at the gray level images of the high-frequency components obtained in the step 2), gray level co-occurrence matrix difference characteristics in a plurality of directions are calculated, and rotation invariance texture characteristics are extracted;
4) Mapping the image formed by the rotation-invariant texture features obtained in step 3) into a gray image of 0 to 255, and extracting the feature region with the Otsu (OTSU) threshold method;
5) Performing multi-scale segmentation on the to-be-extracted region image obtained in step 1) with the simple linear iterative clustering SLIC algorithm to obtain multi-layer super-pixel distribution images, obtaining a feature density image at each scale with super-pixels as the unit, and superposing all feature density images to construct a refined feature density image.
Further, step 1) performs clipping, stitching and cloud removal on the obtained images, and specifically includes: a single scene of a remote sensing image usually cannot cover the whole region to be extracted, so several adjacent images are downloaded and stitched with the Map Based Mosaicking function of ENVI software to obtain fully covering imagery; the part outside the extraction region is then cut off with the Subset Data from ROIs function of ENVI software according to the range of the region to be extracted (vector file, .shp). Ground objects covered by cloud are very common in remote sensing images; cloud-covered parts of the image to be processed cannot yield a good extraction result, so cloud removal is needed. A common cloud removal method is: manually identify the cloud areas, find cloud-free images of the same areas from other periods, cut out those areas and stitch them into the image to be processed to obtain a clean image without cloud cover. The preprocessing work may be done with ENVI software.
Further, the step 2) filters the low-frequency texture from the gray image based on fourier transform to obtain a gray image with a high-frequency component, which specifically includes: the method comprises the steps of converting a spatial domain image into a frequency domain by using a two-dimensional fast Fourier transform FFT, filtering the frequency domain image by using a Gaussian low-pass filter, converting a filtering result into the spatial domain by using an inverse fast Fourier transform FFT to obtain a background image, and subtracting the original gray level image from the background image to obtain a gray level image with low frequency components suppressed.
Further, the spatial-domain image is converted into the frequency domain with a two-dimensional fast Fourier transform (FFT), whose formula is:

$$F(u,v)=\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}$$

where M, N are the width and height of the image, (x, y) are the spatial coordinates of the image, (u, v) are the coordinates of the frequency image, f(x, y) is the input image, and F(u, v) is the Fourier frequency-domain image obtained by the transform. The frequency-domain image is filtered with a Gaussian low-pass filter:

$$H(u,v)=e^{-\frac{D^2(u,v)}{2\sigma^2}},\qquad F'(u,v)=F(u,v)\cdot H(u,v)$$

where H(u, v) is the Gaussian filter weight function, D(u, v) is the Euclidean distance from the point (u, v) to the centre of the Gaussian window, σ describes the spread of the Gaussian weight function, F(u, v) is the frequency-domain image and F'(u, v) is the filtered frequency-domain image. The filtered result is converted back to the spatial domain with the inverse fast Fourier transform to obtain the background image f'(x, y), and the original gray image f(x, y) and the background image f'(x, y) are subtracted to obtain the gray image with low-frequency components suppressed:

$$G(x,y)=f(x,y)-f'(x,y)$$

where G(x, y) is the finally obtained high-frequency component image.
Further, step 3) constructs gray-level co-occurrence matrix difference features in several directions from the high-frequency component image obtained in the previous step and extracts rotation-invariant texture features; the construction process is as follows.

The gray-level co-occurrence matrix is a matrix of size n × n, where n is the number of gray levels of the image. It is defined as M(i, j) = [m(i, j, Δx, Δy)], i, j ∈ (1, 2, …, n): the element in row i, column j is the number of pixel pairs within a w × w window of the image whose position offset is (Δx, Δy) and whose gray values are i and j respectively. The difference (dissimilarity) formula is

$$\mathrm{DIS}=\sum_{i=1}^{n}\sum_{j=1}^{n} p(i,j)\,\lvert i-j\rvert,\qquad p(i,j)=\frac{m(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{n} m(i,j)}$$

where m(i, j) is the element in row i, column j of the gray-level co-occurrence matrix M. n1 co-occurrence matrices are constructed in n1 directions, the difference features of the n1 directions are calculated, and the minimum over the n1 directions is selected to construct the rotation-invariant texture feature. The directions are (Δx, Δy) ∈ {(1, 0), (2, 1), (1, 2), (-2, 1), (-1, 1), (1, 2), (0, 1)}.
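A sketch of the dissimilarity feature under the definitions above; the helper names are illustrative, the offsets are supplied by the caller, and pairs are counted in one direction only, matching the count definition in the text:

```python
import numpy as np

def glcm_dissimilarity(window: np.ndarray, dx: int, dy: int, levels: int) -> float:
    """DIS = sum_{i,j} p(i,j) * |i - j| for one offset (dx, dy) within one
    w-by-w window; `window` holds integer gray levels in [0, levels)."""
    h, w = window.shape
    # Valid slice bounds so both pixels of every pair stay inside the window.
    x0, x1 = max(0, -dx), min(w, w - dx)
    y0, y1 = max(0, -dy), min(h, h - dy)
    a = window[y0:y1, x0:x1].astype(np.intp).ravel()
    b = window[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(np.intp).ravel()
    m = np.zeros((levels, levels))            # co-occurrence counts m(i, j)
    np.add.at(m, (a, b), 1.0)
    p = m / max(m.sum(), 1.0)                 # normalised p(i, j)
    i, j = np.indices((levels, levels))
    return float(np.sum(p * np.abs(i - j)))

def rotation_invariant_feature(window: np.ndarray, offsets, levels: int = 32) -> float:
    """Minimum dissimilarity over all offsets: the rotation-invariant feature."""
    return min(glcm_dissimilarity(window, dx, dy, levels) for dx, dy in offsets)
```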
Further, the step 4) uses the OTSU threshold method to extract the feature area, which specifically includes:
1) Linearly map the feature image into a gray image of 0-255 and count the gray histogram, i.e. the number of pixels of each gray level;

2) Normalise the histogram, i.e. divide the number of pixels of each gray level by the total number of pixels;

3) Calculate the maximum between-class variance. Let t be a candidate gray threshold; count the proportion w_0 of pixels with gray levels 0 to t in the whole image and their average gray value u_0, and the proportion w_1 of pixels with gray levels t to 255 and their average gray value u_1; then calculate

$$g = w_0 \times w_1 \times (u_0 - u_1)^2$$

Setting t to 0-255 in turn, the t that maximises g is the threshold; the part above t is the feature region and the part below t is the remaining region.
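The three steps translate directly into code; a sketch follows (the function name is illustrative, and skimage.filters.threshold_otsu provides an equivalent library routine):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's method per steps 1)-3): maximise the between-class variance
    g = w0 * w1 * (u0 - u1)^2 over candidate thresholds t.
    `gray` is an 8-bit gray image with values 0-255."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()               # step 2): normalised histogram
    best_t, best_g = 0, -1.0
    for t in range(256):                   # step 3): scan t = 0..255
        w0 = prob[: t + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        u0 = np.dot(np.arange(t + 1), prob[: t + 1]) / w0
        u1 = np.dot(np.arange(t + 1, 256), prob[t + 1:]) / w1
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t                          # pixels > t form the feature region
```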
Further, step 5) obtains a uniform, cell-like arrangement of super-pixels through the simple linear iterative clustering algorithm SLIC, and specifically includes: the visible-light bands of the image are segmented n times at different scales {N_i, i ∈ 1, 2, 3, …, n} based on the SLIC algorithm, where N_i is the average number of pixels per super-pixel in the i-th segmentation, N_1 is the minimum segmentation scale, N_n is the maximum segmentation scale, and the scales increase from N_1 to N_n with the same step S. The n segmented super-pixel layers are denoted {L_i, i ∈ 1, 2, 3, …, n}; {P_ij, j ∈ 1, 2, 3, …} denotes the j-th super-pixel of L_i, and Num_ij denotes the number of pixels of P_ij.

The difference feature image with window size 23 from step 3) is thresholded with the OTSU method to obtain the feature image, denoted L_f. Based on L_f, the feature density D_ij of every P_ij is calculated:

$$D_{ij}=\frac{\mathrm{Num}_f}{\mathrm{Num}_{ij}}$$

where Num_f is the number of feature points of L_f within the range of P_ij. This yields the feature density image L'_i of each L_i; finally the L'_i are accumulated to obtain the final feature density distribution L_d:

$$L_d=\sum_{i=1}^{n} L'_i$$

that is, the corresponding pixels of the feature density images are added to obtain the total feature density L_d. For the density image L_d a suitable threshold is set; the denser part is the town extraction result. The threshold is set between 0 and 1 and multiplied by the number of segmentation layers, and the part above the threshold is extracted as town.
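A sketch of the multi-layer density construction, substituting scikit-image's SLIC implementation for a hand-rolled one; the function name is illustrative, and the default parameters follow the embodiment (N_1 = 300, S = 10, n = 5):

```python
import numpy as np
from skimage.segmentation import slic

def feature_density(rgb: np.ndarray, L_f: np.ndarray,
                    N1: int = 300, S: int = 10, n: int = 5) -> np.ndarray:
    """Accumulate L_d = sum_i L'_i over n SLIC segmentations whose average
    super-pixel sizes are N1, N1+S, ..., N1+(n-1)*S pixels.
    `L_f` is the binary OTSU feature image."""
    total_pixels = L_f.size
    L_d = np.zeros(L_f.shape, dtype=np.float64)
    for i in range(n):
        N_i = N1 + i * S                         # scale of layer L_i
        K = max(total_pixels // N_i, 1)          # number of cluster centres
        labels = slic(rgb, n_segments=K, start_label=0)
        flat = labels.ravel()
        num_ij = np.bincount(flat)                                   # Num_ij
        num_f = np.bincount(flat, weights=L_f.ravel().astype(float)) # Num_f
        # D_ij = Num_f / Num_ij, broadcast back onto the pixels of P_ij.
        L_d += (num_f / np.maximum(num_ij, 1))[labels]
    return L_d

# Pixels where L_d exceeds (threshold x n), e.g. 0.7 * n, are extracted as town.
```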
The invention has the advantages and beneficial effects as follows:
the invention provides a filtering method for a remote sensing image based on the advantages of frequency domain filtering, which can filter the interference of low-frequency information in original pictures, so that in the texture characteristic image calculated in the step (3), the difference between the gray value of town textures and the gray value of natural textures is increased, and a more accurate characteristic region is obtained by using an OSTU algorithm.
The invention uses super-pixels as the calculation range of local feature density and introduces ground-object boundaries as limits, so that the feature densities of different ground objects are not computed within the same local range, yielding a better-behaved feature density distribution. Superposing multiple layers of feature density increases the density difference between the inside and outside of the town built-up area and at the same time refines the town edge. More accurate town information is obtained, and good results can be achieved by extraction with an empirical threshold.
In extraction experiments on Sentinel-2 MSIL1C visible-light band remote sensing images, the town built-up area extraction method shows large improvements in both completeness and accuracy over existing mainstream methods.
Drawings
FIG. 1 is the obtained Sentinel-2 MSIL1C luminance image
FIG. 2 is the image after filtering in the frequency domain
FIG. 3 is the calculated difference rotation-invariant feature
FIG. 4 is the feature region from OTSU binarization
FIG. 5 is the feature density calculated by multi-layer super-pixels
FIG. 6 shows the result of threshold town extraction
FIG. 7 is a flow chart of a method
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and specifically below with reference to the drawings in the embodiments. The described example is only a subset of the examples of the invention.
The technical scheme for solving the technical problems is as follows:
according to the method, 10-direction difference rotation invariance features are selected according to the characteristics of ground objects in the 10-meter remote sensing image, and a OUST threshold method is used for obtaining a feature region. The method aims at the problem that the existing method is too dependent on feature extraction. The invention provides a method for calculating local feature density, which is based on the segmentation feature of the edge of a SLIC attached ground object, constructs a plurality of layers of super pixels, calculates feature density in the super pixels, reduces feature density outside cities and towns, and therefore achieves high-precision division of urban areas and non-urban areas.
The technical scheme of the method is as follows.
(1) Acquiring a remote sensing image with 10m resolution in a visible light wave band, and performing related preprocessing;
(2) Constructing a gray image, filtering low-frequency information based on Fourier transform, and obtaining a gray image of a high-frequency component;
(3) Aiming at the high-frequency image obtained in the step (2), gray level co-occurrence matrix difference characteristics in multiple directions are calculated, and rotation invariance texture characteristics are extracted;
(4) Mapping the rotation invariance texture feature image obtained in the step (3) into a gray image of 0 to 255, and extracting a feature region by using an Otsu (OTSU) method;
(5) Carrying out multi-scale segmentation on the true-color composite image of Sentinel-2 with the simple linear iterative clustering (SLIC) algorithm to obtain multi-layer super-pixel distribution images. Taking super-pixels as the unit, the feature density image at each scale is obtained, and all feature density images are superposed to construct a refined feature density image.
(6) And finally, selecting a proper density threshold value, and extracting the town area.
The invention is further illustrated below:
(1) The obtained images are cut, stitched and cloud-removed to obtain clean and complete visible-light band images.
(2) A gray image is constructed from the brightness information of the processed multiband image, and the image is then filtered in the frequency domain. The spatial-domain image is converted to the frequency domain with a two-dimensional fast Fourier transform (FFT):

$$F(u,v)=\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}$$

where M and N are respectively the width and height of the image, (x, y) are the spatial coordinates of the image, (u, v) are the coordinates of the frequency image, f(x, y) is the input image and F(u, v) is the Fourier frequency-domain image obtained by the transform. The frequency-domain image is filtered with a Gaussian low-pass filter:

$$H(u,v)=e^{-\frac{D^2(u,v)}{2\sigma^2}},\qquad F'(u,v)=F(u,v)\cdot H(u,v)$$

where H(u, v) is the Gaussian filter weight function, D(u, v) is the Euclidean distance from the point (u, v) to the centre of the Gaussian window, and σ describes the spread of the Gaussian weight function. The filtered frequency-domain image F'(u, v) is converted to the spatial domain with the inverse fast Fourier transform to obtain the background image f'(x, y), and the original gray image f(x, y) and the background image f'(x, y) are subtracted to obtain the gray image with low-frequency components suppressed:

$$G(x,y)=f(x,y)-f'(x,y)$$

G(x, y) is the finally obtained high-frequency component image.
(3) The gray-level co-occurrence matrix is a matrix of size n × n, where n is the number of gray levels of the image (typically 256 for a gray image). It is defined as M(i, j) = [m(i, j, Δx, Δy)], i, j ∈ (1, 2, …, n): the element in row i, column j is the number of pixel pairs within a w × w window of the image whose position offset is (Δx, Δy) and whose gray values are i and j respectively. The invention constructs gray-level co-occurrence matrices according to 10 positional relations, (Δx, Δy) ∈ {(1, 0), (2, 1), (1, 2), (-2, 1), (-1, 1), (1, 2), (0, 1)}. The difference formula is

$$\mathrm{DIS}=\sum_{i=1}^{n}\sum_{j=1}^{n} p(i,j)\,\lvert i-j\rvert,\qquad p(i,j)=\frac{m(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{n} m(i,j)}$$

where m(i, j) is the element in row i, column j of the gray-level co-occurrence matrix M. Ten co-occurrence matrices are constructed in the 10 directions, the difference features of the 10 directions are calculated, and the minimum over the 10 directions is selected to construct the rotation-invariant texture feature.
(4) The Otsu method (OTSU), due to Nobuyuki Otsu, is a classical automatic thresholding algorithm that separates an image into two parts, background and object, according to its gray-scale characteristics. The larger the between-class variance between background and object, the larger the difference between the two parts; when part of the object is misclassified as background, or part of the background as object, this difference shrinks. A segmentation that maximises the between-class variance therefore minimises the probability of misclassification. The rotation-invariant texture feature image obtained in step (3) is mapped into a gray image of 0 to 255 and split into two parts by OTSU; the part with the larger gray values is the feature region to be extracted.
(5) The simple linear iterative clustering algorithm (SLIC) is an efficient super-pixel segmentation algorithm. Super-pixels obtained by SLIC are arranged uniformly like cells and are of similar size; each super-pixel deforms adaptively to the boundaries of the base map, so super-pixels adhere to ground-object edges. SLIC works in the CIELAB color space: the (l, a, b) color and the spatial coordinates (x, y) of each pixel form a 5-dimensional vector, and the similarity of two pixels is measured by Euclidean distance:

$$d_c=\sqrt{(l_i-l_j)^2+(a_i-a_j)^2+(b_i-b_j)^2}$$

$$d_s=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}$$

$$D=\sqrt{\left(\frac{d_c}{N_c}\right)^2+\left(\frac{d_s}{N_s}\right)^2}$$

where i and j denote two pixels, d_c is the color distance, d_s the spatial distance, and D the similarity of the two pixels; N_c is the expected maximum color distance within a class and N_s the expected maximum spatial distance within a class. The distance D between a pixel and a cluster center determines which super-pixel the pixel belongs to. N_c and N_s are two important indices affecting the clustering result. In general, N_s is determined by the number of cluster centers:

$$N_s=\sqrt{\frac{N}{K}}$$

where N is the total number of pixels of the image and K is the number of cluster centers, the only input parameter of the algorithm. In the invention, the average number of pixels per super-pixel is used as the scale measuring a super-pixel segmentation.
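A direct transcription of the distance measure above, as a sketch; the function name is illustrative and pixels are passed as (l, a, b, x, y) tuples:

```python
import math

def slic_distance(p_i, p_j, N_c: float, N_s: float) -> float:
    """Similarity D of two pixels p = (l, a, b, x, y), per the formulas above."""
    d_c = math.sqrt((p_i[0] - p_j[0]) ** 2 +
                    (p_i[1] - p_j[1]) ** 2 +
                    (p_i[2] - p_j[2]) ** 2)      # colour distance d_c
    d_s = math.sqrt((p_i[3] - p_j[3]) ** 2 +
                    (p_i[4] - p_j[4]) ** 2)      # spatial distance d_s
    return math.sqrt((d_c / N_c) ** 2 + (d_s / N_s) ** 2)

# N_s is usually sqrt(N / K): N total pixels, K cluster centres.
```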
The invention segments the visible-light bands of the image n times at different scales based on the SLIC algorithm; the scales are {N_i, i ∈ 1, 2, 3, …, n}, where N_i is the average number of pixels per super-pixel of the i-th segmentation, N_1 is the minimum segmentation scale and N_n the maximum, increasing from N_1 to N_n with the same step S. The n segmented super-pixel layers are denoted {L_i, i ∈ 1, 2, 3, …, n}; {P_ij, j ∈ 1, 2, 3, …} denotes the j-th super-pixel of L_i and Num_ij the number of its pixels. In step (3), the difference feature image with window size 23 is thresholded with the OTSU method to obtain the feature image, denoted L_f. Based on L_f, the feature density D_ij of every P_ij is calculated:

$$D_{ij}=\frac{\mathrm{Num}_f}{\mathrm{Num}_{ij}}$$

where Num_f is the number of feature points of L_f within the range of P_ij. The feature density image L'_i of each L_i is obtained, and finally the L'_i are accumulated into the final feature density distribution L_d:

$$L_d=\sum_{i=1}^{n}L'_i$$

that is, the corresponding pixels of the feature density images are added to give the total feature density L_d. Setting N_1 = 300, S = 10 and n = 5 yields 5 layers of super-pixel segmentation results; the feature density of each is obtained by the above method, and the layers are finally superposed to obtain the feature density image L_d.
(6) For the feature density image L_d obtained in step (5), a suitable threshold is set; the denser part is the town extraction result. Good results are obtained with a threshold of 0.7 × n, where n is the number of segmentation layers.
1. The example uses an image of the area north of Beijing acquired in September 2017 by the European Space Agency's Sentinel-2A satellite, with sequence number "S2A_MSIL1C_20170922T025541_N0205_R032_T50". The invention extracts the town range from the texture information of the image, so radiometric calibration and atmospheric correction are generally unnecessary; only cloud removal and clipping are needed. The image shown in FIG. 1 is obtained.
2. The obtained Sentinel-2A data contain several bands; the maximum of the three visible-light bands is taken to obtain a brightness image, as shown in FIG. 2. Filtering the brightness image in the frequency domain yields the gray image with low-frequency information removed, as shown in FIG. 3. In the example, σ is set according to M and N, the width and height of the image.
3. The gray levels of the image obtained in the previous step are quantised to 0-31, the window size w is set to 32, the texture difference features (dissimilarity) of the 10 directions are calculated, and the minimum is taken to obtain the feature image, as shown in FIG. 4.
4. The feature image obtained in the previous step is mapped to gray levels 0-255 and a threshold is selected with the Otsu threshold method, dividing the image into two parts; the part with the larger gray values is the feature region, as shown in FIG. 5.
5. With the simple linear iterative clustering algorithm, the minimum segmentation scale is set to 300 pixels, the step to 10 and the maximum segmentation scale to 340; segmenting 5 times gives a 5-layer super-pixel image. The feature density of each layer is calculated per super-pixel, and the 5 calculated feature density images are superposed to construct the total feature density image. The result is shown in FIG. 6.
6. Finally, a feature density threshold is set; the part above the threshold is the town area. The example sets the threshold to 3.5.
The town extraction method provided by the invention extracts an accurate town range: compared with the result of manual interpretation, the completeness of the above example is 0.96 and the accuracy 0.92.
The above examples should be understood as illustrative only and not limiting the scope of the invention. Various changes and modifications to the present invention may be made by one skilled in the art after reading the teachings herein, and such equivalent changes and modifications are intended to fall within the scope of the invention as defined in the appended claims.

Claims (6)

1. A method for automatically extracting town built-up areas using town texture feature density, comprising the steps of:
1) Acquiring a remote sensing image with a resolution of 10m and a visible light wave band, and performing cutting, splicing and cloud removal operations to obtain an area to be extracted;
2) Constructing a gray image of the preprocessed remote sensing image, filtering low-frequency textures of the gray image based on Fourier transform, and obtaining a gray image of a high-frequency component;
3) Aiming at the gray level images of the high-frequency components obtained in the step 2), gray level co-occurrence matrix difference characteristics in a plurality of directions are calculated, and rotation invariance texture characteristics are extracted;
4) Mapping the image formed by the rotation-invariant texture features obtained in step 3) into a gray image of 0 to 255, and extracting the feature region with the Otsu (OTSU) threshold method;
5) Performing multi-scale segmentation on the region image to be extracted obtained in the step 1) by using a simple linear iterative clustering SLIC algorithm to obtain a multi-layer super-pixel distribution image, calculating a feature density image under each scale by taking super pixels as units, and superposing all feature density images to construct a refined feature density image;
the step 5) is performed with the uniform arrangement of the super pixels in a cellular manner obtained by a linear iterative clustering algorithm SLIC, and specifically comprises the following steps: dividing the visible light wave band of the image for N times based on SLIC algorithm, wherein the scales are { N }, respectively i ,i∈1,2,3…n},N i Represents the average number of pixels of the ith split superpixel, where N 1 For the minimum segmentation scale, N n For the maximum segmentation scale, from N 1 To N n Sequentially increasing with the same step length S, and dividing n super pixel layers to be represented as { L } i I.epsilon.1, 2,3 … n }, using { P ij J.epsilon.1, 2,3 … } represents L i Is the j th super pixel, num ij Representing P ij The number of pixels;
based on the difference of window size 23 in step 3)Characteristic images are marked as L-based by using an OSTU method to obtain characteristic images f Calculate all P ij Feature density D of (2) ij
Figure FDA0004045711550000011
Num in f Represents P ij Within the range of L f The number of the feature points in the corresponding range in each L is obtained i Feature density L of (2) i Finally accumulate L i Obtaining final characteristic density distribution L d :/>
Figure FDA0004045711550000012
Representing the addition of the corresponding pixels of each feature density image to obtain a total feature density L d The characteristic density is obtained by the method, and finally, each layer is overlapped to obtain a characteristic density image L d For density image L d And setting a proper threshold value, wherein the part with higher density is the town extraction result, the set threshold value is between 0 and 1, the set threshold value is multiplied by the number of divided layers, and the part larger than the threshold value is extracted as the town.
2. The method for automatically extracting town built-up areas according to claim 1, wherein step 1) performs clipping, stitching and cloud removal on the obtained images, and specifically includes: a single scene of a remote sensing image usually cannot cover the whole region to be extracted, so several adjacent images are downloaded and stitched with the Map Based Mosaicking function of ENVI software to obtain fully covering imagery, and the part outside the extraction region is then cut off with the Subset Data from ROIs function of ENVI software according to the range of the region to be extracted; ground objects covered by cloud are very common in remote sensing images, and cloud-covered parts of the image to be processed cannot yield a good extraction result, so cloud removal is needed, the cloud removal method being: manually identify the cloud areas, find cloud-free images of the same areas from other periods, cut out those areas and stitch them into the image to be processed to obtain a clean image without cloud cover; the preprocessing work is done with ENVI software.
3. The method for automatically extracting urban built-up area according to claim 1, wherein said step 2) filters low frequency textures from the gray image based on fourier transform to obtain gray image of high frequency component, specifically comprising: the method comprises the steps of converting a spatial domain image into a frequency domain by using a two-dimensional fast Fourier transform FFT, filtering the frequency domain image by using a Gaussian low-pass filter, converting a filtering result into the spatial domain by using an inverse fast Fourier transform FFT to obtain a background image, and subtracting the original gray level image from the background image to obtain a gray level image with low frequency components suppressed.
4. The method for automatically extracting town built-up areas as claimed in claim 3, wherein the spatial-domain image is converted to the frequency domain with a two-dimensional fast Fourier transform FFT, the formula being:

$$F(u,v)=\frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}$$

where M, N are the width and height of the image, (x, y) are the spatial coordinates of the image, (u, v) are the coordinates of the frequency image, f(x, y) is the input image and F(u, v) is the transformed Fourier frequency-domain image; the frequency-domain image is filtered with a Gaussian low-pass filter:

$$H(u,v)=e^{-\frac{D^2(u,v)}{2\sigma^2}},\qquad F'(u,v)=F(u,v)\cdot H(u,v)$$

where H(u, v) is the Gaussian filter weight function, D(u, v) is the Euclidean distance from the point (u, v) to the centre of the Gaussian window and σ describes the spread of the Gaussian weight function; the filtered result F'(u, v) is converted to the spatial domain with the inverse fast Fourier transform to obtain the background image f'(x, y), and the original gray image f(x, y) and the background image f'(x, y) are subtracted to obtain the gray image with the low-frequency components suppressed:

$$G(x,y)=f(x,y)-f'(x,y)$$

where G(x, y) is the finally obtained high-frequency component image.
5. The method for automatically extracting town built-up areas according to claim 1, wherein step 3) constructs gray-level co-occurrence matrix difference features in several directions from the high-frequency component gray image obtained in the previous step and extracts rotation-invariant texture features, as follows:

the gray-level co-occurrence matrix is a matrix of size n × n, where n is the number of gray levels of the image, defined as M(i, j) = [m(i, j, Δx, Δy)], i, j ∈ (1, 2, …, n): the element in row i, column j is the number of pixel pairs within a w × w window of the image whose position offset is (Δx, Δy) and whose gray values are i and j respectively; the difference formula is

$$\mathrm{DIS}=\sum_{i=1}^{n}\sum_{j=1}^{n} p(i,j)\,\lvert i-j\rvert,\qquad p(i,j)=\frac{m(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{n} m(i,j)}$$

where m(i, j) is the element in row i, column j of the gray-level co-occurrence matrix M; n1 gray-level co-occurrence matrices are constructed in n1 directions, the difference features of the n1 directions are calculated, and the minimum over the n1 directions is selected to construct the rotation-invariant texture feature; the n1 directions are (Δx, Δy) ∈ {(1, 0), (2, 1), (1, 2), (-2, 1), (-1, 1), (1, 2), (0, 1)}.
6. The method for automatically extracting town built-up areas according to claim 1, wherein step 4) extracts the feature region with the OTSU threshold method, specifically comprising:

1) linearly mapping the feature image into a gray image of 0-255 and counting the gray histogram, i.e. the number of pixels of each gray level;

2) normalising the histogram, i.e. dividing the number of pixels of each gray level by the total number of pixels;

3) calculating the maximum between-class variance: let t be a candidate gray threshold; count the proportion w_0 of pixels with gray levels 0 to t in the whole image and their average gray value u_0, and the proportion w_1 of pixels with gray levels t to 255 and their average gray value u_1; calculate

$$g = w_0 \times w_1 \times (u_0 - u_1)^2$$

setting t to 0-255 in turn, the t that maximises g is the threshold; the part above t is the feature region and the part below t is the remaining region.
CN201910212324.4A 2019-03-20 2019-03-20 Method for automatically extracting urban built-up area by urban texture feature density Active CN110070545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910212324.4A CN110070545B (en) 2019-03-20 2019-03-20 Method for automatically extracting urban built-up area by urban texture feature density

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910212324.4A CN110070545B (en) 2019-03-20 2019-03-20 Method for automatically extracting urban built-up area by urban texture feature density

Publications (2)

Publication Number Publication Date
CN110070545A CN110070545A (en) 2019-07-30
CN110070545B true CN110070545B (en) 2023-05-26

Family

ID=67366390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910212324.4A Active CN110070545B (en) 2019-03-20 2019-03-20 Method for automatically extracting urban built-up area by urban texture feature density

Country Status (1)

Country Link
CN (1) CN110070545B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739073A (en) * 2020-06-24 2020-10-02 刘秀萍 Efficient and rapid image registration optimization method for handheld device
CN114037747B (en) * 2021-11-25 2024-06-21 佛山技研智联科技有限公司 Image feature extraction method, device, computer equipment and storage medium
CN114943897B (en) * 2022-05-31 2023-11-24 南京大学 Town development boundary demarcating method based on super-pixel segmentation
CN116310806B (en) * 2023-02-28 2023-08-29 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009164215A (en) * 2007-12-28 2009-07-23 Fujifilm Corp Radiological image detecting device and manufacturing method of the radiological image detecting device

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006230910A (en) * 2005-02-28 2006-09-07 Konica Minolta Medical & Graphic Inc Image processor and image processing method
CN101702019A (en) * 2009-11-26 2010-05-05 上海电机学院 Method for extracting residential areas in SAR image
CN102687140A (en) * 2009-12-30 2012-09-19 诺基亚公司 Methods and apparatuses for facilitating content-based image retrieval
CN104508707A (en) * 2012-05-28 2015-04-08 奥普托斯股份有限公司 Methods and apparatus for image processing, and laser scanning ophthalmoscope having an image processing apparatus
CN103065296A (en) * 2012-12-14 2013-04-24 中南大学 High-resolution remote sensing image residential area extraction method based on edge feature
CN103745453A (en) * 2013-12-11 2014-04-23 河海大学 Town information extraction method based on Google Earth remote sensing image
CN103824309A (en) * 2014-03-12 2014-05-28 武汉大学 Automatic extracting method of urban built-up area border
CN104182754A (en) * 2014-08-19 2014-12-03 山东临沂烟草有限公司 Rural resident point information extraction method based on high-resolution remote-sensing image
CN104376330A (en) * 2014-11-19 2015-02-25 西安电子科技大学 Polarization SAR image ship target detection method based on superpixel scattering mechanism
CN104835016A (en) * 2015-05-27 2015-08-12 北京搜狐新媒体信息技术有限公司 Crowd density calculation method and device
CN105096315A (en) * 2015-06-19 2015-11-25 西安电子科技大学 Method for segmenting heterogeneous super-pixel SAR (Synthetic Aperture Radar) image based on Gamma distribution
CN105046262A (en) * 2015-06-29 2015-11-11 中国人民解放军国防科学技术大学 Robust extended local binary pattern textural feature extraction method
CN107229917A (en) * 2017-05-31 2017-10-03 北京师范大学 A kind of several remote sensing image general character well-marked target detection methods clustered based on iteration
CN107330875A (en) * 2017-05-31 2017-11-07 河海大学 Based on the forward and reverse heterogeneous water body surrounding enviroment change detecting method of remote sensing images
CN108171193A (en) * 2018-01-08 2018-06-15 西安电子科技大学 Polarization SAR Ship Target Detection method based on super-pixel local message measurement
CN109271864A (en) * 2018-08-17 2019-01-25 武汉烽火凯卓科技有限公司 A kind of crowd density estimation method based on wavelet transformation and support vector machines

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Fusion of deep convolutional neural networks for land cover classification of high-resolution imagery; Scott G J; IEEE Geoscience & Remote Sensing Letters, 2017, 14(9); pp. 1638-1642 *
Object-oriented change detection in high-resolution remote sensing images using multi-feature fusion; Jiang Huiwei; Journal of Shaanxi University of Technology (Natural Science Edition); 2016-06-20 (No. 03); pp. 45-50 *
Improved ACM model based on compound vector field and medical image segmentation; Gao Yang et al.; Computer Engineering and Applications; 2007-08-01 (No. 22); pp. 235-238 *
Improved texture image segmentation method based on region merging; Zheng Qingqing et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2011-05-23 (No. 05); pp. 114-117 *
Superpixel tracking algorithm fusing saliency and spatio-temporal context; Guo Chunmei et al.; Pattern Recognition and Artificial Intelligence; 2017-08-15 (No. 08); pp. 58-69 *
Residential area extraction from high-resolution remote sensing images fusing right-angle corner and right-angle edge features; Lin Xiangguo et al.; Acta Geodaetica et Cartographica Sinica; 2017-01-15 (No. 01); pp. 87-93 *
Research on residential area detection algorithms for high-resolution remote sensing images; Zhang Ningxin et al.; Computer Engineering and Science; 2013-11-15 (No. 11); pp. 130-137 *

Also Published As

Publication number Publication date
CN110070545A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN107358260B (en) Multispectral image classification method based on surface wave CNN
CN110070545B (en) Method for automatically extracting urban built-up area by urban texture feature density
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
CN103034863B (en) The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN108898065B (en) Deep network ship target detection method with candidate area rapid screening and scale self-adaption
CN109389163B (en) Unmanned aerial vehicle image classification system and method based on topographic map
CN106651872A (en) Prewitt operator-based pavement crack recognition method and system
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN111027497B (en) Weak and small target rapid detection method based on high-resolution optical remote sensing image
CN108830870A (en) Satellite image high-precision field boundary extracting method based on Multi-scale model study
CN110570440A (en) Image automatic segmentation method and device based on deep learning edge detection
CN107273813A (en) Geographical space elements recognition system based on high score satellite remote sensing date
CN109063754A (en) A kind of remote sensing image multiple features combining classification method based on OpenStreetMap
CN106971397B (en) Based on the city high-resolution remote sensing image dividing method for improving JSEG algorithms
CN108710862B (en) High-resolution remote sensing image water body extraction method
CN104951765A (en) Remote sensing image target division method based on shape priori information and vision contrast ratio
CN113744191A (en) Automatic cloud detection method for satellite remote sensing image
CN108830883B (en) Visual attention SAR image target detection method based on super-pixel structure
CN110310263B (en) SAR image residential area detection method based on significance analysis and background prior
CN114972453A (en) Improved SAR image region registration method based on LSD and template matching
JP2009123234A (en) Object identification method, apparatus and program
JP4285640B2 (en) Object identification method, apparatus and program
CN118230166A (en) Corn canopy organ identification method and canopy phenotype detection method based on improved Mask2YOLO network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant