
CN110349160B - SAR image segmentation method based on super-pixel and fuzzy C-means clustering - Google Patents


Info

Publication number
CN110349160B
CN110349160B (application CN201910555710.3A)
Authority
CN
China
Prior art keywords
image
clustering
pixel
matrix
center
Prior art date
Legal status
Expired - Fee Related
Application number
CN201910555710.3A
Other languages
Chinese (zh)
Other versions
CN110349160A (en)
Inventor
陈彦 (Chen Yan)
陈云坪 (Chen Yunping)
冉崇敬 (Ran Chongjing)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910555710.3A
Publication of CN110349160A
Application granted
Publication of CN110349160B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/23 — Clustering techniques
    • G06F 18/232 — Non-hierarchical techniques
    • G06F 18/2321 — Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 — Non-hierarchical techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10032 — Satellite or aerial image; Remote sensing
    • G06T 2207/10044 — Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation. The method first generates superpixels, then extracts gray and texture features from each subregion of the image as basic features. On the basis of sparse representation theory, and according to the differences in scattering characteristics among the different land-cover types in a SAR image, a sparse self-representation matrix correction method is provided to obtain accurate discriminative features, finally achieving image segmentation that is robust to the speckle noise in SAR images and computationally efficient. Because the image processing is carried out at the superpixel level, the influence of coherent speckle noise can be weakened through the integrity of each pixel set while the internal and boundary information of the image is preserved, and integrating the information of neighboring pixels makes the extracted features more stable.

Description

SAR image segmentation method based on super-pixel and fuzzy C-means clustering
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation.
Background
With the development of remote sensing technology, SAR imaging has entered the high-resolution era. Compared with medium- and low-resolution SAR, high-resolution sensors have a greater capability to acquire ground-object information and can provide more complex, finer scenes and richer scattering information. Progress in SAR image analysis and autonomous recognition, however, has been slow. How to efficiently analyze and process large amounts of SAR image data and effectively extract the information they contain has therefore become a major focus of current SAR research.
SAR image segmentation is a key step of the image preprocessing stage. In essence, it divides the whole image into several image blocks whose interiors have similar characteristics, whose boundaries are clearly distinguished and do not intersect, and whose union reconstructs the original image. The purpose of segmentation is to separate regions of interest from irrelevant regions, so that subsequent analysis can be performed only on the extracted regions of interest. To obtain pixel clusters that are clearly distinguishable and internally consistent, three general categories of algorithms are available:
(1) Threshold-based image segmentation algorithms. This method is most often used for two-region images in which the target region is clearly distinguished from the irrelevant background on some feature: each pixel point is classified against a defined threshold and the target region is extracted. The approach can of course be extended to multiple partitions. Its most obvious defect is a high sensitivity to noise; even when noise suppression is applied during processing, segmentation accuracy in heavily noisy areas remains unsatisfactory.
(2) Image segmentation algorithms based on edge detection. Since the ultimate goal of segmentation is to divide the image into several non-interfering subregions, accurately locating the edges and boundaries of each subregion is essential. The basic idea is to detect feature points or edge points in the image and delineate subregion boundaries from these points. The key is to identify complete and accurate subregion contours, but this is an idealized case: in most edge-detection results the contour of each region is an intermittent, incompletely connected "dotted line", so edge completion and false-edge removal steps are usually required. In addition, the algorithm involves many processing details and struggles to detect reliable edges in images with complex variations, which reduces segmentation accuracy.
(3) Region-based image segmentation algorithms. These complete segmentation through the consistency of features within a region, and can be roughly divided into region growing and merging methods, random-field methods, and clustering methods. Clustering-based segmentation extracts the correlation among different pixels and maximizes feature correlation by iteration to complete the segmentation, and has been very influential in the image segmentation field. Combining fuzzy partition theory with clustering, a membership-squared weighting is used to form an intra-class weighted sum of squared errors, which avoids trivial solutions; introducing a fuzzy index m and iterating to minimize the objective function yields an optimal c-class partition of the sample points. This is the well-known FCM algorithm, and owing to its strengths many researchers have studied and improved it, producing variants such as FCM_S1, FCM_S2, EnFCM, FGFCM, NS_FCM, and QICA_FCM.
Obviously, the most important step is constructing the objective function, that is, finding the correlation among features inside the image. Unlike ordinary optical images, SAR images contain complex speckle noise due to the unique radar imaging mechanism, and the influence of this multiplicative noise must be considered during segmentation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation.
In order to achieve the purpose, the invention provides a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation, which is characterized by comprising the following steps of:
(1) obtaining SAR image
Acquire a single-polarization synthetic aperture radar (SAR) image of the area to be observed containing suitably rich ground-object types;
(2) generating superpixels using PILS algorithm
(2.1) initializing the clustering center
Let the SAR image contain N pixel points in total and initialize k cluster centers, each cluster center seeding one superpixel; the distance between adjacent cluster centers is then S = √(N/k).
Each cluster center is represented by the three-dimensional feature [i, x, y], where i is the pixel intensity value of the cluster center and x and y are its abscissa and ordinate;
(2.2) Cluster center relocation
Taking each cluster center in turn, find the position with the minimum gradient in its n×n neighborhood and relocate the cluster center to that position;
(2.3) image clustering
Establish a search domain of size S×S centered on each pixel point; traverse the whole SAR image, determine the cluster center closest to each pixel point within its S×S search domain, and assign the pixel point to that cluster center's class;
(2.4) iterative update
After the class attributes of all pixel points have been assigned, compute the mean of the three-dimensional features [i, x, y] over each class, update each class's cluster center to this mean, and iterate until convergence;
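The assign-and-update loop of steps (2.1)-(2.4) can be sketched as follows. This is a minimal NumPy illustration of the loop structure only: it uses a plain Euclidean distance in [i, x, y] feature space with an assumed spatial weight instead of the PILS similarity measure, and it searches all centers rather than an S×S window, so it is a simplified stand-in for the patented algorithm.

```python
import numpy as np

def generate_superpixels(img, k=100, n_iter=10, spatial_weight=0.5):
    """Cluster the pixels of a 2-D intensity image into roughly k superpixels."""
    h, w = img.shape
    N = h * w
    step = int(np.sqrt(N / k))                     # seed spacing S = sqrt(N/k)
    ys, xs = np.meshgrid(np.arange(step // 2, h, step),
                         np.arange(step // 2, w, step), indexing="ij")
    # each center is the three-dimensional feature [i, x, y] of step (2.1)
    centers = np.stack([img[ys, xs].ravel(),
                        xs.ravel().astype(float),
                        ys.ravel().astype(float)], axis=1)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    feats = np.stack([img.ravel(), xx.ravel().astype(float),
                      yy.ravel().astype(float)], axis=1)
    for _ in range(n_iter):
        # assignment: nearest center in weighted [i, x, y] space
        d_int = (feats[:, None, 0] - centers[None, :, 0]) ** 2
        d_xy = ((feats[:, None, 1:] - centers[None, :, 1:]) ** 2).sum(-1)
        labels = np.argmin(d_int + spatial_weight * d_xy, axis=1)
        # step (2.4): move each center to the mean [i, x, y] of its members
        for c in range(len(centers)):
            members = feats[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels.reshape(h, w)
```

The gradient-based relocation of step (2.2) is omitted here; in practice it only perturbs the seed grid before the first assignment.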
(3) superpixel feature extraction
Compute the gray-level co-occurrence matrix (GLCM) within the S×S search domain to extract the texture features of each superpixel, including: angular second moment, contrast, correlation, inverse difference moment, variance, sum mean, sum entropy, sum variance, entropy, difference variance, information measures of correlation, and maximal correlation coefficient;
extracting a gray histogram in the super pixel as a super pixel gray characteristic;
Form the superpixel texture features and gray features into the basic image feature set X = (x1, x2, …, xn), where xn denotes the n-th feature;
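As an illustration of step (3), the sketch below computes a small GLCM by hand and derives four of the listed statistics (angular second moment, contrast, inverse difference moment, entropy) plus the gray histogram; the quantization level, pixel offset, and bin count are illustrative assumptions, and the remaining listed statistics follow the same pattern from the matrix P.

```python
import numpy as np

def glcm(patch, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.
    `patch` is assumed to hold intensities in [0, 1]."""
    q = np.minimum((patch * levels).astype(int), levels - 1)   # quantize
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def texture_features(patch, levels=8):
    """Four of the listed GLCM statistics: ASM, contrast, IDM, entropy."""
    P = glcm(patch, levels)
    i, j = np.indices(P.shape)
    asm = (P ** 2).sum()                           # angular second moment
    contrast = (((i - j) ** 2) * P).sum()
    idm = (P / (1.0 + (i - j) ** 2)).sum()         # inverse difference moment
    entropy = -(P[P > 0] * np.log(P[P > 0])).sum()
    return np.array([asm, contrast, idm, entropy])

def gray_histogram(patch, bins=8):
    """Normalized gray histogram used as the superpixel gray feature."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()
```

Concatenating `texture_features` and `gray_histogram` per superpixel gives one column of the basic feature set X.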
(4) creating a sparse self-representation model
(4.1) Initialize Z = G = Ψ = 0, where Z is the sparse self-representation matrix, G is an auxiliary matrix whose diagonal is constrained to zero, and Ψ is the Lagrange multiplier matrix; initialize the current iteration count t = 1;
(4.2) Judge whether ‖Z(t) − G(t)‖ > η holds after the t-th iteration, where η is the convergence threshold; if so, enter step (4.3); otherwise, output the sparse self-representation matrix Z obtained at the t-th iteration;
(4.3) Update G(t) column by column (the closed-form column update is given as an equation image in the original), where j = 1, 2, …, J, gj = (g1j, g2j, …, g(j−1)j, 0, …, gJj), μ is a penalty factor, zj is the j-th column of the sparse self-representation matrix Z(t), and ψj is the j-th column of Ψ(t); gjj is set to 0 to satisfy the constraint diag(G(t)) = 0, so the diagonal elements of G(t) are skipped during the column-by-column update;
Update Z(t) (the closed-form update is given as an equation image in the original), where X is the basic image feature set, I is the identity matrix, and γ is a tuning parameter;
Correct the updated Z(t): process each element zij of Z(t) with the correction formulas (given as equation images in the original) to obtain its estimate ẑij, and form the matrix Ẑ(t) from the estimates of all elements.
Traverse Ẑ(t), compute the mean backscatter value of each superpixel set, and let Δσij denote the difference between the mean backscatter values of superpixels i and j; the corrected elements of Z(t) are then obtained from Δσij (the correction rule is given as an equation image in the original).
updating Ψ(t)
Ψ(t)=Ψ(t-1)+μ(Z(t)-G(t))
Updating the penalty coefficient mu:
μ = min(εμ, 10^10)
where ε is a constant;
(4.4) After step (4.3) finishes, output the sparse self-representation matrix Z(t) of this iteration, add 1 to the iteration count t, and return to step (4.2);
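The loop of steps (4.1)-(4.4) can be sketched as a standard ADMM iteration. The patent's exact column-wise G update and Z update are given only as equation images, so this sketch substitutes assumed surrogates: an l1 soft-threshold G update with a zeroed diagonal (as used in sparse-subspace-clustering solvers) and a closed-form least-squares Z update for gamma/2*||X − XZ||² + mu/2*||Z − G + Psi/mu||². Treat it as an illustration of the loop structure, not the patented update rules.

```python
import numpy as np

def sparse_self_representation(X, gamma=1.0, lam=0.1, mu=1.0, eps=1.1,
                               eta=1e-4, max_iter=200):
    """ADMM-style loop: minimize gamma/2*||X - X@Z||^2 + lam*||G||_1
    subject to Z = G and diag(G) = 0 (assumed surrogate objective)."""
    n = X.shape[1]                                 # columns of X = superpixel features
    Z = np.zeros((n, n))
    G = np.zeros((n, n))
    Psi = np.zeros((n, n))                         # Lagrange multiplier matrix
    XtX = X.T @ X
    for _ in range(max_iter):
        # Z update: closed-form solution of the quadratic subproblem
        Z = np.linalg.solve(gamma * XtX + mu * np.eye(n),
                            gamma * XtX + mu * G - Psi)
        # G update (assumed): l1 soft-threshold, diagonal forced to zero
        V = Z + Psi / mu
        G = np.sign(V) * np.maximum(np.abs(V) - lam / mu, 0.0)
        np.fill_diagonal(G, 0.0)                   # enforce diag(G) = 0
        Psi = Psi + mu * (Z - G)                   # multiplier update
        mu = min(eps * mu, 1e10)                   # penalty update mu = min(eps*mu, 1e10)
        if np.linalg.norm(Z - G) < eta:            # convergence test of step (4.2)
            break
    return G
```

The backscatter-based correction of Z(t) would be applied inside the loop after the Z update; it is omitted here because its exact formula is not reproduced in this text.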
(5) image segmentation
(5.1) Define the objective function Jm of the SSR_FCM algorithm:

Jm = Σ(k=1..c) Σ(i=1..n) uki^m ( ‖xi − vk‖² + β‖zi − ṽk‖² )

where m denotes the fuzzy index, vk and ṽk denote the k-th class centers of the matrices X and Z respectively, β is a weight controlling the contribution of the sparse discriminant features to the objective function Jm, and uki is the introduced fuzzy membership parameter representing the membership of superpixel i to the k-th class;
(5.2) Minimize the objective function Jm under the membership constraint by forming the Lagrangian

L = Jm + Σ(i=1..n) ξi ( Σ(k=1..c) uki − 1 )

where ξi denotes the Lagrange multiplier of the i-th constraint;
(5.3) Setting the partial derivatives of the minimized objective function Jm to zero yields:

vk = Σ(i=1..n) uki^m xi / Σ(i=1..n) uki^m
ṽk = Σ(i=1..n) uki^m zi / Σ(i=1..n) uki^m
uki = ( −ξi / (m Dki) )^(1/(m−1)), with Dki = ‖xi − vk‖² + β‖zi − ṽk‖²
(5.4) Substituting into the constraint Σ(k=1..c) uki = 1, the multiplier can be solved as:

−ξi = m ( Σ(j=1..c) Dji^(−1/(m−1)) )^(−(m−1))

where Dji = ‖xi − vj‖² + β‖zi − ṽj‖² is the combined distance of superpixel i to class j;
thereby obtaining the updated uki:

uki = 1 / Σ(j=1..c) ( Dki / Dji )^(1/(m−1))

where Dki = ‖xi − vk‖² + β‖zi − ṽk‖² is the combined distance of superpixel i to class k;
(5.5) Use the updated uki to construct the membership matrix U(k), then judge whether max(U(k) − U(k−1)) < η* is satisfied, where η* is a preset threshold; if it is satisfied, go to step (5.6); otherwise add 1 to k and return to step (5.1);
(5.6) Cluster the image according to the membership values uki, thereby achieving image segmentation.
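The update loop of steps (5.1)-(5.6) can be sketched as standard fuzzy C-means with the combined squared distance ‖xi − vk‖² + β‖zi − ṽk‖². The initialization, parameter values, and convergence test below are illustrative assumptions rather than the patent's exact settings.

```python
import numpy as np

def ssr_fcm(X, Z, c=2, m=2.0, beta=0.5, eta=1e-5, max_iter=100, seed=0):
    """Fuzzy C-means on basic features X plus beta-weighted sparse features Z."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                             # memberships sum to 1 per sample
    for _ in range(max_iter):
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)     # class centers v_k of X
        Vz = (W @ Z) / W.sum(axis=1, keepdims=True)    # class centers of Z
        # combined squared distance ||x_i - v_k||^2 + beta * ||z_i - v~_k||^2
        D = (((X[None] - V[:, None]) ** 2).sum(-1)
             + beta * ((Z[None] - Vz[:, None]) ** 2).sum(-1))
        D = np.maximum(D, 1e-12)                   # guard against division by zero
        p = 1.0 / (m - 1.0)
        U_new = 1.0 / (D ** p * (D ** -p).sum(axis=0))  # closed-form membership update
        if np.abs(U_new - U).max() < eta:          # convergence test of step (5.5)
            U = U_new
            break
        U = U_new
    return U.argmax(axis=0)                        # step (5.6): hard class labels
```

Each row of X and Z here is the feature vector of one superpixel; the final hard labels are obtained by taking the class of maximum membership.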
The object of the invention is achieved as follows:
the invention relates to a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation, which comprises the steps of firstly generating superpixels, then extracting gray and texture characteristics of each subregion in an image as basic characteristics, and providing a sparse self-representation matrix correction processing method according to scattering characteristic differences of different surface feature types of the SAR image on the basis of a sparse representation theory so as to obtain accurate discrimination characteristics, and finally realizing image segmentation processing which has stronger robustness and high operation efficiency on speckle noise in the SAR image; because the corresponding image processing is carried out on the super-pixel level, the influence of coherent speckle noise can be weakened through the integrity of the pixel set on the basis of keeping the internal information and the boundary information of the image, and meanwhile, the extracted features are more stable through integrating the information of adjacent pixels.
Drawings
FIG. 1 is a flow chart of a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation according to the present invention;
FIG. 2 is a raw SAR image;
FIG. 3 is a superpixel generation diagram;
FIG. 4 is a comparison algorithm EnFCM algorithm segmentation result;
FIG. 5 is a graph of the segmentation results of the FLICM algorithm;
fig. 6 is a diagram of a segmentation result of a blurred C-means SAR image based on superpixels and sparse representation.
Detailed Description
The following describes specific embodiments of the present invention with reference to the accompanying drawings, so that those skilled in the art can better understand the invention. It should be expressly noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the invention.
Examples
FIG. 1 is a flow chart of a fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation according to the present invention;
in this embodiment, as shown in fig. 1, the method for segmenting a fuzzy C-means clustering SAR image based on superpixels and sparse representation according to the present invention includes the following steps:
s1, acquiring SAR image
Acquiring a single-polarized Synthetic Aperture Radar (SAR) image containing appropriate and abundant ground object types in an area to be observed;
in this embodiment, an SAR image of a certain area with an azimuth direction × a distance direction of 0.167 × 0.455m, which is acquired by an X-band TerraSAR in 5 months of 2014, is selected, and the size of the image is 300 × 300, as shown in fig. 2.,
S2, generating superpixels by using the PILS (Pixel Intensity and Location Similarity) algorithm
S2.1, initializing a clustering center
Let the SAR image contain N pixel points in total and initialize k cluster centers, each cluster center seeding one superpixel; the distance between adjacent cluster centers is then S = √(N/k).
In this embodiment, the SAR image has 90000 pixel points, initialized to 1200 cluster centers.
Each cluster center is represented by a three-dimensional feature as: [ i, x, y ], wherein i represents a pixel intensity value of a cluster center, and x and y represent abscissa and ordinate values of the cluster center;
s2.2, cluster center relocation
Since a cluster center may initially fall on an image edge, to avoid this, take each cluster center in turn and find the position with the minimum gradient in its n×n = 3×3 neighborhood, then relocate the cluster center to that position; this also avoids relocating a center onto a noise point.
S2.3, clustering images
Establish a search domain of size S×S = 5×5 centered on each pixel point; traverse the whole SAR image, determine the cluster center closest to each pixel point within its S×S search domain, and assign the pixel point to that cluster center's class. Compared with the K-means algorithm, which must search globally, this algorithm's search domain is only S×S, which greatly reduces the time spent on distance computation and makes segmentation faster.
The specific process of image clustering is described in detail below:
S2.3.1, the similarity measure used in conventional superpixel algorithms designed for optical images cannot be applied to SAR images, so the algorithm adopts a new measure, the pixel intensity ratio distance, to define pixel correlation in SAR images (the specific formula is given as an equation image in the original), where K denotes the number of image blocks in the search domain, Ni(k) denotes the block intensity of the S×S search domain centered at i, and Nj(k) denotes the block intensity of the S×S search domain centered at j;
Based on this formula, and to counter the influence of multiplicative noise in SAR images on superpixel generation, the PILS algorithm defines the inter-pixel intensity similarity measure used during superpixel generation through a probability density function (given as an equation image in the original), where I denotes intensity, SI(i, j) denotes the intensity similarity between pixel point j and its nearest cluster center i within the search domain S×S, K denotes the number of image blocks in the search domain, Ni(k) and Nj(k) denote the block intensities of the S×S search domains centered at i and j respectively, and rij is the block intensity parameter;
The probability density function P(rij) satisfies a relation (given as an equation image in the original) in which L is the equivalent number of looks of the SAR image and Γ(·) denotes the gamma function;
S2.3.2, convert the spatial distance between pixels into a spatial similarity measure through a standard Gaussian kernel function (given as an equation image in the original), where xy denotes space and dxy(i, j) denotes the spatial distance between cluster center i and pixel point j;
S2.3.3, finally, the overall similarity measure parameter is obtained by combining the pixel intensity similarity and the spatial position similarity:

S(i, j) = SI(i, j) + α·Sxy(i, j)

where α is a weighting coefficient that balances the two different similarity measures;
S2.3.4, classify each pixel point: determine the cluster center with the highest similarity measure value for each pixel point within the search domain, and assign the pixel point to that cluster center's class.
S2.4, iterative update
After the class attributes of all pixel points have been assigned, compute the mean of the three-dimensional features [i, x, y] over each class, update each class's cluster center to this mean, and iterate until convergence; the finally generated superpixel result is shown in fig. 3.
S3 super-pixel feature extraction
Feature extraction is the basis of image analysis and an important step before image segmentation. The extracted features, generally expressed in vector form, are required to be reliable, distinctive, and independent, so that they accurately reflect the information of pixel points or pixel regions. In general, the gray-level co-occurrence matrix (GLCM) is computed within the S×S search domain to extract superpixel texture features, including: angular second moment, contrast, correlation, inverse difference moment, variance, sum mean, sum entropy, sum variance, entropy, difference variance, information measures of correlation, and maximal correlation coefficient;
extracting a gray histogram in the super pixel as a super pixel gray characteristic;
Form the superpixel texture features and gray features into the basic image feature set X = (x1, x2, …, xn), where xn denotes the n-th feature;
s4, creating sparse self-representation model
The basic idea of sparse representation is to express most or even all of the original signal with a linear combination of a small number of basis signals. For an image, this manifests as feature vectors of low dimensionality in which most coefficients are 0, reflecting the main structure and properties of the image. The specific creation process is as follows:
S4.1, initialize Z = G = Ψ = 0, where Z is the sparse self-representation matrix, G is an auxiliary matrix whose diagonal is constrained to zero, and Ψ is the Lagrange multiplier matrix; initialize the current iteration count t = 1;
S4.2, judge whether ‖Z(t) − G(t)‖ > η holds after the t-th iteration, where η is the convergence threshold; if it holds, go to step S4.3; otherwise, output the sparse self-representation matrix Z obtained after the t-th iteration;
S4.3, update G(t) column by column (the closed-form column update is given as an equation image in the original), where j = 1, 2, …, J, gj = (g1j, g2j, …, g(j−1)j, 0, …, gJj), μ is a penalty factor, zj is the j-th column of the sparse self-representation matrix Z(t), and ψj is the j-th column of Ψ(t); gjj is set to 0 to satisfy the constraint diag(G(t)) = 0, so the diagonal elements of G(t) are skipped during the column-by-column update;
Update Z(t) (the closed-form update is given as an equation image in the original), where X is the basic image feature set, I is the identity matrix, and γ is a tuning parameter;
Correct the updated Z(t): process each element zij of Z(t) with the correction formulas (given as equation images in the original) to obtain its estimate ẑij.
When the sparse self-representation matrix is solved from the basic sample feature set X, neither its gray-histogram features nor its texture features directly use the backscatter value, which is the essential feature reflecting land-cover differences in a SAR image. In a high-resolution SAR image, different ground objects penetrate and reflect electromagnetic pulses differently, so the returned echo signals show different scattering characteristics, and effective use of the backscatter value can increase the accuracy of land-cover discrimination. The literature and empirical values indicate that when the backscatter values of two samples differ by more than 10 dB, the probability that the two objects belong to the same class is very low, and when they differ by more than 15 dB, the two ground objects belong to two very different classes. The estimates of all elements are therefore formed into the matrix Ẑ(t); traversing Ẑ(t), compute the mean backscatter value of each superpixel set and let Δσij denote the difference between the mean backscatter values of superpixels i and j; the corrected elements of Z(t) are then obtained from Δσij (the correction rule is given as an equation image in the original).
By incorporating the backscatter value of the SAR image, the differences between different ground objects can be further enlarged, different features can be distinguished more clearly, and the accuracy of subsequent processing improves. Any element zij of the sparse self-representation matrix obtained by the algorithm represents the similarity between sample i and sample j: the larger zij, the higher the similarity; the smaller, the lower the similarity, with very small values suggesting that the samples belong to two different classes. The diagonal elements, which would represent the similarity of a sample to itself, are set to 0 and do not participate in the calculation. The sparse self-representation matrix embodies the inline relations among samples and can be used as a pixel feature for the subsequent segmentation.
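The backscatter-based correction just described can be sketched as follows. The patent's exact correction formula is an equation image not reproduced in this text; based on the stated thresholds (same class very unlikely above a 10 dB difference, essentially impossible above 15 dB), this sketch zeroes similarities above 15 dB and halves those between 10 and 15 dB. The attenuation factor 0.5 is an illustrative assumption.

```python
import numpy as np

def correct_by_backscatter(Z, sigma_mean):
    """Z: (n, n) superpixel similarity matrix; sigma_mean: (n,) mean
    backscatter of each superpixel in dB."""
    delta = np.abs(sigma_mean[:, None] - sigma_mean[None, :])   # Δσ_ij in dB
    Zc = Z.copy()
    Zc[delta > 15.0] = 0.0                              # certainly different classes
    Zc[(delta > 10.0) & (delta <= 15.0)] *= 0.5         # penalize unlikely pairs
    return Zc
```

The effect is to suppress spurious similarity between superpixels whose scattering behavior already rules out common class membership, enlarging the separation between land-cover types before clustering.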
Updating Ψ(t)
Ψ(t)=Ψ(t-1)+μ(Z(t)-G(t))
Updating the penalty coefficient mu:
μ = min(εμ, 10^10)
where ε is a constant;
S4.4, after step S4.3 finishes, output the sparse self-representation matrix Z(t) of this iteration, add 1 to the iteration count t, and return to step S4.2;
s5, image segmentation
The Fuzzy C-Means Clustering Based on Superpixel and Sparse Representation (SSR_FCM) SAR image segmentation algorithm mainly uses two types of features of different dimensionality during segmentation. The first type is the image's own features, expressed as the feature matrix X = (x1, x2, …, xn), which reflect the basic texture and scattering properties of each superpixel and are called the basic features. The second type is the inline relations between superpixels obtained through the sparse model, expressed as the feature matrix Z and called the sparse discriminant features. The n in the two feature matrices denotes the number of superpixels, but their dimensionalities differ.
We now describe the specific process of segmentation:
S5.1, define the objective function Jm of the SSR_FCM algorithm:

Jm = Σ(k=1..c) Σ(i=1..n) uki^m ( ‖xi − vk‖² + β‖zi − ṽk‖² )

where m denotes the fuzzy index, vk and ṽk denote the k-th class centers of the matrices X and Z respectively, β is a weight controlling the contribution of the sparse discriminant features to the objective function Jm, and uki is the introduced fuzzy membership parameter representing the membership of superpixel i to the k-th class;
S5.2, minimize the objective function Jm under the membership constraint by forming the Lagrangian

L = Jm + Σ(i=1..n) ξi ( Σ(k=1..c) uki − 1 )

where ξi denotes the Lagrange multiplier of the i-th constraint;
S5.3, setting the partial derivatives of the minimized objective function Jm to zero yields:

vk = Σ(i=1..n) uki^m xi / Σ(i=1..n) uki^m
ṽk = Σ(i=1..n) uki^m zi / Σ(i=1..n) uki^m
uki = ( −ξi / (m Dki) )^(1/(m−1)), with Dki = ‖xi − vk‖² + β‖zi − ṽk‖²
S5.4, substituting into the constraint Σ(k=1..c) uki = 1, the multiplier can be solved as:

−ξi = m ( Σ(j=1..c) Dji^(−1/(m−1)) )^(−(m−1))

where Dji = ‖xi − vj‖² + β‖zi − ṽj‖² is the combined distance of superpixel i to class j;
thereby obtaining the updated uki:

uki = 1 / Σ(j=1..c) ( Dki / Dji )^(1/(m−1))

where Dki = ‖xi − vk‖² + β‖zi − ṽk‖² is the combined distance of superpixel i to class k;
S5.5, use the updated uki to construct the membership matrix U(k), then judge whether max(U(k) − U(k−1)) < η* is satisfied, where η* is a preset threshold; if it is satisfied, go to step S5.6; otherwise add 1 to k and return to step S5.1;
S5.6, cluster the image according to the membership values uki, thereby achieving image segmentation.
FIG. 4 shows the segmentation result of the comparison EnFCM algorithm; its denoising capability is weak, and the building and lawn areas cannot be accurately distinguished;
FIG. 5 shows the segmentation result of the FLICM algorithm; it can effectively distinguish the buildings and the lawns, and the lawn and road regions have good internal consistency, but a large number of error points remain;
FIG. 6 shows the segmentation result of the fuzzy C-means SAR image segmentation based on superpixels and sparse representation. Compared with the other algorithms, this algorithm distinguishes the small building areas in the image well, effectively identifies building edges without edge expansion, and preserves the definition of boundaries. In addition, the road and lawn segmentation is good, with only a small number of mis-segmented pixel points and the best region consistency.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of those embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined and determined by the appended claims, and all matter utilizing the inventive concept is protected.

Claims (2)

1. A fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation, characterized by comprising the following steps:
(1) obtaining the SAR image
Acquire a single-polarization synthetic aperture radar (SAR) image of the area to be observed that contains suitably rich ground-object types;
(2) generating superpixels using PILS algorithm
(2.1) initializing the clustering center
Let the SAR image contain N pixel points in total; initialize k cluster centers, each cluster center being one superpixel, with the distance between adjacent cluster centers
S = √(N/k)
Each cluster center is represented by the three-dimensional feature [i, x, y], where i denotes the pixel intensity value at the cluster center, and x and y denote the abscissa and ordinate of the cluster center;
(2.2) Cluster center relocation
Taking each cluster center as a center in turn, find the position with the minimum gradient within its n × n neighborhood, and relocate the cluster center to that position;
(2.3) image clustering
Establish an S × S search domain centered on each pixel point; traverse the whole SAR image, determine for each pixel point the nearest cluster center within its S × S search domain, and assign the pixel point to the class of that cluster center;
(2.4) iterative update
After the class attributes of all pixel points have been assigned, calculate the mean of the three-dimensional features [i, x, y] of each class, update the cluster center of each class with this mean, and iterate until convergence;
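Steps (2.1)-(2.4) follow a SLIC-style loop: grid-initialised centers in [i, x, y] space, assignment to the nearest center, and center re-estimation as class means. Below is a deliberately simplified sketch: it uses a plain Euclidean distance and a global nearest-center search instead of the PILS likelihood similarity and the S × S search window, and it omits the gradient-based relocation of step (2.2); the function name and defaults are ours.

```python
import numpy as np

def slic_like_superpixels(img, k=4, n_iter=10):
    """Simplified sketch of steps (2.1)-(2.4): grid-initialised centers in
    [intensity, x, y] space, nearest-center assignment (global here, for
    brevity), then centers updated to class means until the iteration
    budget is spent."""
    H, W = img.shape
    N = H * W
    S = int(np.sqrt(N / k))                       # grid step between centers
    ys, xs = np.meshgrid(np.arange(S // 2, H, S),
                         np.arange(S // 2, W, S), indexing="ij")
    centers = np.stack([img[ys, xs].ravel(),      # [i, x, y] per center
                        xs.ravel().astype(float),
                        ys.ravel().astype(float)], axis=1)
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    feats = np.stack([img.ravel(), xx.ravel().astype(float),
                      yy.ravel().astype(float)], axis=1)
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                 # assign to nearest center
        for c in range(len(centers)):             # update centers to means
            members = feats[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels.reshape(H, W)
```

On a homogeneous-region test image, pixels of the two intensity regions end up in disjoint superpixel sets because the intensity term dominates the distance.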
(3) superpixel feature extraction
Compute the gray-level co-occurrence matrix (GLCM) within the S × S search domain to extract the texture features of each superpixel, comprising: angular second moment, contrast, correlation, inverse difference moment, variance, sum average, sum entropy, sum variance, entropy, difference variance, information measure of correlation, and maximal correlation coefficient;
extract the gray-level histogram within each superpixel as the superpixel's gray feature;
combine the superpixel texture features and gray features into the basic image feature set X = (x_1, x_2, …, x_n), where x_n denotes the n-th feature;
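As a rough illustration of step (3), the sketch below quantises a superpixel patch, accumulates a grey-level co-occurrence matrix for a single offset, and computes four of the listed Haralick statistics, plus the normalised grey histogram. The quantisation level, offset, and bin count are our assumptions, not values fixed by the text.

```python
import numpy as np

def glcm_features(patch, levels=8, offset=(0, 1)):
    """Sketch of the texture part of step (3): quantise a patch, build a
    grey-level co-occurrence matrix for one pixel offset, and derive a few
    of the Haralick statistics named in the text (angular second moment,
    contrast, entropy, inverse difference moment)."""
    mx = patch.max()
    q = np.zeros(patch.shape, dtype=int)
    if mx > 0:
        q = np.floor(patch / mx * (levels - 1)).astype(int)
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    H, W = q.shape
    for y in range(H - dy):                       # accumulate co-occurrences
        for x in range(W - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                         # normalise to probabilities
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "asm": (p ** 2).sum(),                    # angular second moment
        "contrast": ((i - j) ** 2 * p).sum(),
        "entropy": -(nz * np.log2(nz)).sum(),
        "idm": (p / (1.0 + (i - j) ** 2)).sum(),  # inverse difference moment
    }

def gray_histogram(patch, bins=16):
    """Normalised grey-level histogram used as the superpixel grey feature."""
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 256.0))
    return h / h.sum()
```

A perfectly homogeneous patch gives the expected extreme values: ASM of 1, zero contrast, zero entropy.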
(4) creating a sparse self-representation model
(4.1) initialize Z = G = Ψ = 0, where Z is the sparse self-representation matrix, G is an auxiliary matrix whose diagonal is constrained to zero, and Ψ is the Lagrange multiplier matrix; initialize the current iteration count t = 1;
(4.2) judge whether, after the t-th iteration, ‖Z^(t) − G^(t)‖ still exceeds the convergence tolerance; if so, go to step (4.3); otherwise, output the sparse self-representation matrix Z obtained at the t-th iteration;
(4.3) update G^(t) column by column:
g_j = sign(z_j + ψ_j/μ) · max(|z_j + ψ_j/μ| − 1/μ, 0)
where j = 1, 2, …, J; g_j = (g_1j, g_2j, …, g_(j−1)j, 0, …, g_Jj); μ is a penalty factor; z_j is the j-th column of the sparse self-representation matrix Z^(t); ψ_j is the j-th column of the matrix Ψ^(t); g_jj is set to 0 to satisfy the constraint diag(G^(t)) = 0, so the diagonal elements of G^(t) are not updated during the column-by-column update;
update Z^(t):
Z^(t) = (γXᵀX + μI)⁻¹(γXᵀX + μG^(t) − Ψ^(t−1))
where X is the basic image feature set, I is the identity matrix, and γ is a preset parameter;
correct the updated Z^(t):
process each element z_ij of Z^(t) with an estimation formula to obtain its estimated value ẑ_ij:
[equation image: estimation formula for ẑ_ij]
form the matrix Ẑ^(t) from the estimated values of all elements, and traverse Ẑ^(t);
calculate the mean backscatter value of each superpixel set, and let Δσ_ij denote the difference between the mean backscatter values of superpixels i and j; the corrected elements of Z^(t) are then:
[equation image: corrected element of Z^(t) in terms of ẑ_ij and Δσ_ij]
update Ψ^(t):
Ψ^(t) = Ψ^(t−1) + μ(Z^(t) − G^(t))
Updating the penalty coefficient mu:
μ=min(εμ,1010)
where ε is a constant;
(4.4) after step (4.3) is finished, take the sparse self-representation matrix Z^(t) of this iteration, add 1 to the iteration count t, and return to step (4.2);
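Step (4) is an ADMM scheme of the kind used in sparse subspace clustering: soft-threshold the auxiliary matrix G column-wise with its diagonal forced to zero, solve for Z in closed form, then update the multiplier Ψ and grow the penalty μ. The sketch below follows that generic pattern under the objective min ‖G‖₁ + (γ/2)‖X − XZ‖²_F s.t. Z = G, diag(G) = 0; it omits the patent's backscatter-based correction of Z, and the parameter defaults are ours.

```python
import numpy as np

def sparse_self_representation(X, gamma=10.0, mu=1.0, eps_mu=1.05,
                               n_iter=100, tol=1e-6):
    """ADMM sketch of step (4).  Each column of X is one sample; Z expresses
    every sample as a sparse combination of the others (zero diagonal on G
    forbids the trivial self-representation)."""
    n = X.shape[1]
    Z = np.zeros((n, n))
    G = np.zeros((n, n))
    Psi = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(n_iter):
        # G-update: element-wise soft thresholding, diagonal forced to zero
        A = Z + Psi / mu
        G = np.sign(A) * np.maximum(np.abs(A) - 1.0 / mu, 0.0)
        np.fill_diagonal(G, 0.0)
        # Z-update: closed-form least-squares solve
        Z = np.linalg.solve(gamma * XtX + mu * np.eye(n),
                            gamma * XtX + mu * G - Psi)
        Psi = Psi + mu * (Z - G)                  # multiplier update
        mu = min(eps_mu * mu, 1e10)               # penalty schedule of (4.3)
        if np.abs(Z - G).max() < tol:
            break
    return Z
```

With two pairs of duplicated, mutually orthogonal columns, each sample is represented by its duplicate and the cross-cluster coefficients stay at zero, which is the behaviour the spectral-clustering stage relies on.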
(5) image segmentation
(5.1) define the objective function J_m of the SSR_FCM algorithm:
J_m = Σ_k Σ_i u_ki^m (‖x_i − v_k‖² + β‖z_i − v̄_k‖²)
where m denotes the fuzziness exponent; v_k and v̄_k denote the k-th class centers of the matrices X and Z, respectively; β is the weight that controls the contribution of the sparse discriminant features to the objective function J_m; and u_ki is the introduced fuzzy membership parameter, representing the membership of superpixel i to the k-th class;
(5.2) minimize the objective function J_m subject to the membership constraint by introducing a Lagrange multiplier:
L = J_m + Σ_i ξ_i (Σ_k u_ki − 1)
where ξ_i denotes the Lagrange multiplier of the membership constraint for superpixel i;
(5.3) solve the minimization of the objective function J_m by setting the partial derivatives of L to zero, obtaining:
u_ki = (−ξ_i / (m·D_ki))^(1/(m−1)), with D_ki = ‖x_i − v_k‖² + β‖z_i − v̄_k‖²
v_k = Σ_i u_ki^m x_i / Σ_i u_ki^m
v̄_k = Σ_i u_ki^m z_i / Σ_i u_ki^m
(5.4) substituting into the constraint
Σ_k u_ki = 1
the multiplier ξ_i can be solved:
[equation image: closed-form expression for ξ_i]
thereby obtaining the updated u_ki:
u_ki = 1 / Σ_j (D_ki / D_ji)^(1/(m−1)), with D_ki = ‖x_i − v_k‖² + β‖z_i − v̄_k‖²
(5.5) using the updated u_ki, construct the membership matrix U^(k); then judge whether max(U^(k) − U^(k−1)) < η*, where η* is a preset threshold; if the condition is satisfied, go to step (5.6); otherwise, add 1 to k and return to step (5.1);
(5.6) cluster the image according to the membership values u_ki, thereby realizing the image segmentation.
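One round of the SSR_FCM updates in step (5) — centers as membership-weighted means of X and Z, then memberships from the augmented distance ‖x_i − v_k‖² + β‖z_i − v̄_k‖² — can be sketched as follows. Rows of X and Z are samples here; the function name and defaults are ours, and the convergence loop of steps (5.5)-(5.6) is not included.

```python
import numpy as np

def ssr_fcm_step(X, Z, U, m=2.0, beta=0.5):
    """One update round sketched from step (5).  X (n, d) holds the basic
    features, Z (n, p) the sparse self-representation features, U (C, n)
    the memberships with columns summing to 1."""
    W = U ** m                                     # fuzzified weights
    V = (W @ X) / W.sum(axis=1, keepdims=True)     # centers v_k in X-space
    Vbar = (W @ Z) / W.sum(axis=1, keepdims=True)  # centers vbar_k in Z-space
    # augmented squared distance D[k, i] = ||x_i - v_k||^2 + beta ||z_i - vbar_k||^2
    d2 = (((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
          + beta * ((Z[None, :, :] - Vbar[:, None, :]) ** 2).sum(axis=2))
    d2 = np.maximum(d2, 1e-12)                     # avoid division by zero
    U_new = 1.0 / ((d2[:, None, :] / d2[None, :, :])
                   ** (1.0 / (m - 1.0))).sum(axis=1)
    return U_new, V, Vbar
```

Iterating this step until max|U^(k) − U^(k−1)| drops below the threshold η*, then taking the argmax over classes, reproduces the flow of steps (5.5)-(5.6).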
2. The fuzzy C-means clustering SAR image segmentation method based on superpixels and sparse representation according to claim 1, wherein in step (2.3) the specific process of image clustering is as follows:
(2.3.1) compute the intensity similarity measure between pixels during superpixel generation with the PILS algorithm:
[equation image: definition of the intensity similarity measure S_I(i, j)]
where I denotes intensity; S_I(i, j) denotes the intensity similarity measure between pixel point j and its nearest cluster center i within the S × S search domain; K denotes the number of image patches in the search domain; N_i(k) denotes the patch intensity of the S × S search domain centered at i, and N_j(k) denotes the patch intensity of the S × S search domain centered at j; r_ij is the patch intensity parameter;
the probability density function P(r_ij) satisfies:
P(r_ij) = Γ(2L)/Γ(L)² · r_ij^(L−1) / (1 + r_ij)^(2L)
where L is the number of looks of the SAR image and Γ(·) denotes the gamma function;
(2.3.2) compute the spatial similarity measure between pixels during superpixel generation:
[equation image: definition of the spatial similarity measure S_xy(i, j)]
where the subscript xy denotes space, and d_xy(i, j) denotes the spatial distance between cluster center i and pixel point j;
(2.3.3) calculate the combined similarity measure:
S(i, j) = S_I(i, j) + αS_xy(i, j)
where α is a weighting coefficient that balances the two different similarity measures;
(2.3.4) classify each pixel
For each pixel point, determine the cluster center with the highest similarity measure value within its search domain, and assign the pixel point to the class of that cluster center.
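The likelihood term behind the intensity similarity of claim 2 can be illustrated with the classical density of the ratio of two L-look gamma-distributed SAR intensities with equal mean, matching the P(r_ij) form above. How S_I aggregates these patch likelihoods over the K patches is not reproduced here, and both function names are ours.

```python
import math

def gamma_ratio_pdf(r, L=4):
    """Density of the ratio r of two independent L-look gamma-distributed
    SAR intensities with equal mean:
        p(r) = Gamma(2L) / Gamma(L)^2 * r^(L-1) / (1 + r)^(2L)
    Computed in log space via lgamma for numerical stability."""
    log_c = math.lgamma(2 * L) - 2 * math.lgamma(L)
    return math.exp(log_c + (L - 1) * math.log(r) - 2 * L * math.log(1 + r))

def combined_similarity(s_intensity, s_spatial, alpha=0.5):
    """Weighted combination from step (2.3.3): S = S_I + alpha * S_xy."""
    return s_intensity + alpha * s_spatial
```

The density has its mode at r = (L−1)/(L+1), i.e. near 1 for homogeneous patches, so similar patches receive a high likelihood and strongly different ones a low likelihood.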
CN201910555710.3A 2019-06-25 2019-06-25 SAR image segmentation method based on super-pixel and fuzzy C-means clustering Expired - Fee Related CN110349160B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910555710.3A CN110349160B (en) 2019-06-25 2019-06-25 SAR image segmentation method based on super-pixel and fuzzy C-means clustering


Publications (2)

Publication Number Publication Date
CN110349160A CN110349160A (en) 2019-10-18
CN110349160B true CN110349160B (en) 2022-03-25



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680181A (en) * 2015-03-09 2015-06-03 西安电子科技大学 SAR image super-pixel segmentation method based on likelihood ratio features
US9389311B1 (en) * 2015-02-19 2016-07-12 Sandia Corporation Superpixel edges for boundary detection
CN107016684A (en) * 2017-04-13 2017-08-04 中国人民解放军国防科学技术大学 A kind of super-pixel rapid generation of Polarimetric SAR Image
CN107341800A (en) * 2017-07-10 2017-11-10 西安电子科技大学 SAR image change detection based on super-pixel significance analysis
CN109064470A (en) * 2018-08-28 2018-12-21 河南工业大学 A kind of image partition method and device based on adaptive fuzzy clustering
CN109389608A (en) * 2018-10-19 2019-02-26 山东大学 There is the fuzzy clustering image partition method of noise immunity using plane as cluster centre
CN109712149A (en) * 2018-12-25 2019-05-03 杭州世平信息科技有限公司 A kind of image partition method based on wavelet energy and fuzzy C-mean algorithm

Non-Patent Citations (2)

Title
"Deep Learning and Superpixel Feature Extraction Based on Contractive Autoencoder for Change Detection in SAR Images"; Ning Lv; IEEE Transactions on Industrial Informatics; Dec. 31, 2018; Vol. 14, No. 12; pp. 5530-5538 *
"SAR Target Detection in Complex Scenes"; Yu Wenyi; China Master's Theses Full-text Database, Information Science & Technology; Mar. 15, 2017; No. 03; pp. I136-2123 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220325