
CN109190511A - Hyperspectral classification method based on local and structural constraint low-rank representation - Google Patents

Hyperspectral classification method based on local and structural constraint low-rank representation Download PDF

Info

Publication number
CN109190511A
CN109190511A (application CN201810919458.5A)
Authority
CN
China
Prior art keywords
matrix
pixel
hyperspectral image
low
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810919458.5A
Other languages
Chinese (zh)
Other versions
CN109190511B (en)
Inventor
王琦
李学龙
何翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201810919458.5A priority Critical patent/CN109190511B/en
Publication of CN109190511A publication Critical patent/CN109190511A/en
Application granted granted Critical
Publication of CN109190511B publication Critical patent/CN109190511B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a hyperspectral classification method based on local and structural constraint low-rank representation. Firstly, the input hyperspectral image is normalized; then, an objective function based on local and structural constraint low-rank representation is constructed; next, the objective function is solved with the augmented Lagrange multiplier method and an alternating iterative update algorithm to obtain the low-rank decomposition matrix; finally, the class label of each test pixel is computed from the low-rank decomposition matrix, completing the hyperspectral image classification. The method of the present invention is applicable to hyperspectral images whose pixel classes differ in compactness, is robust to noise and outliers, and can significantly improve classification accuracy.

Description

Hyperspectral classification method based on local and structural constraint low-rank representation
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a hyperspectral classification method based on local and structural constraint low-rank representation.
Background
Different from traditional remote sensing images, a hyperspectral image contains not only the spatial position information of surface targets, i.e. image information, but also the spectral curve information corresponding to each band, i.e. spectral information; the hyperspectral image therefore has an important characteristic: the integration of image and spectrum. Owing to this characteristic, hyperspectral images contain rich and diverse ground-object information and can capture subtle differences that an ordinary image cannot resolve. Hyperspectral classification technology separates different types of ground objects according to the rich information of hyperspectral images and has been widely studied in recent years. A low-rank-representation-based hyperspectral image classification method is proposed in the document "Sumarsono, Alex, and Qian Du, 'Low-Rank Subspace Representation for Supervised and Unsupervised Classification of Hyperspectral Imagery', IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 9, pp. 4158-4171, 2016", which points out that although hyperspectral image data has very high dimensionality, most of the useful image information lies in several low-dimensional subspaces while the noise of the image constitutes a sparse matrix, so that the original hyperspectral image can be decomposed into a low-rank data matrix and a sparse noise matrix. Based on this low-rank characteristic, the method first applies low-rank decomposition to the original hyperspectral image to obtain a clean low-rank matrix with the noise removed, and then classifies the resulting low-rank hyperspectral image with an existing efficient classification algorithm; a large number of experimental results show that this low-rank-decomposition preprocessing step can improve the classification accuracy of the classifier.
However, this method has two shortcomings. Firstly, the low-rank characteristic of the hyperspectral image is used only to preprocess the image data and does not assist the design of the classifier; secondly, the method adopts only the most common low-rank decomposition algorithm, whose applicability to hyperspectral images is low, so the obtained low-rank decomposition matrix is not the optimal representation matrix of the original hyperspectral image.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a hyperspectral classification method based on local and structural constraint low-rank representation. Firstly, carrying out normalization processing on an input hyperspectral image; then, constructing and obtaining a target function based on local and structural constraint low-rank representation; then, solving an objective function by using an augmented Lagrange multiplier method and an alternate iteration updating algorithm to obtain a low-rank decomposition matrix; and finally, calculating the class label of each test pixel by using the low-rank decomposition matrix to complete the classification of the hyperspectral image. The method can be suitable for the hyperspectral images with different classes of pixel compactness, has better robustness on noise and abnormal points, and can obviously improve the classification precision.
A hyperspectral classification method based on local and structural constraint low-rank representation is characterized by comprising the following steps:
step 1: performing normalization processing on hyperspectral image data by using a linear min-max normalization method to obtain a normalized hyperspectral image matrix X, wherein each column in the X is a spectral vector of one pixel, and the spectral reflectance value of each pixel is between 0 and 1;
step 2: based on the local constraint and the structure keeping criterion, an objective function of the following local and structure constraint low-rank representation is established:
min_{Z,E} ||Z||_* + λ||E||_{2,1} + α||M⊙Z||_1 + β||Z − Q||_F^2  s.t.  X = X1·Z + E   (1)

wherein Z is the low-rank decomposition matrix; E is the error matrix; λ is the error-term regularization coefficient, λ ≥ 0; α is the local-constraint regularization coefficient, α ≥ 0; β is the structural-constraint regularization coefficient, β ≥ 0; M is the distance matrix; Q is a predefined matrix. The normalized hyperspectral image X can be divided into a training set and a test set, i.e. X = [X1, X2], where X1 is the training-set matrix and X2 is the test-set matrix; the training set consists of 5%-15% of the pixels selected from each class of pixels, and the test set consists of the remaining hyperspectral pixels. Q and Z can be divided into training and test parts in the same way, i.e. Q = [Q1, Q2] and Z = [Z1, Z2]. Each element M_ij of the distance matrix M is computed from the spectral distance ||x_i − x_j||_2 and the spatial distance ||l_i − l_j||_2, where x_i and x_j denote the spectral vectors of the i-th and j-th pixels in the normalized hyperspectral image X, l_i and l_j denote the spatial coordinate vectors of the i-th and j-th pixels, and m is a parameter balancing the spectral and spatial features, m ≥ 0, i = 1, …, n1, j = 1, …, n; n1 is the number of pixels in the training set X1, and n is the total number of pixels in the normalized hyperspectral image X. Each element Q_ij of the predefined matrix Q is computed by a Gaussian kernel of the spectral distance, where σ is a parameter controlling the number of neighbouring pixels, σ ≥ 0, i = 1, …, n1, j = 1, …, n. ||·||_* is the nuclear norm of a matrix, i.e. the sum of all its singular values; ||·||_{2,1} is the L_{2,1} norm, i.e. the sum of the L_2 norms of the matrix columns, where d is the dimension of a pixel spectral vector in the hyperspectral image; ||·||_1 is the L_1 norm of a matrix, i.e. the sum of the absolute values of all its elements; ||·||_F is the Frobenius norm of a matrix, i.e. the square root of the sum of the squares of all its elements; ⊙ is the Hadamard operator, which multiplies corresponding elements of two matrices;
and step 3: introducing auxiliary variables H and J, and converting formula (1) with the augmented Lagrange multiplier method into the following form:
L(H, J, Z, E) = ||J||_* + λ||E||_{2,1} + α||M⊙H||_1 + β||Z − Q||_F^2 + ⟨Y1, X − X1·Z − E⟩ + ⟨Y2, Z − J⟩ + ⟨Y3, Z − H⟩ + (μ/2)(||X − X1·Z − E||_F^2 + ||Z − J||_F^2 + ||Z − H||_F^2)

wherein ⟨A, B⟩ = trace(AᵀB), trace denotes the trace operation of a matrix, μ is a penalty factor, μ > 0, and Y1, Y2 and Y3 are Lagrange multipliers;
and then the optimal solutions of H, J, Z and E are obtained with an alternating iterative update algorithm, specifically:
step 3.1: initialise λ = 20, α = 0.8, β = 0.6, Y1^k = Y2^k = Y3^k = 0, H^k = J^k = Z^k = E^k = 0, μ^k = 10^-6, wherein the superscript k denotes the iteration number, initially k = 1;
step 3.2: fixing J, Z and E, updating the element in H according to the following formula:
where Θ(x) = max(x − ω, 0) + min(x + ω, 0) is the elementwise soft-thresholding operator applied to Z^k + Y3^k/μ^k, and the elements of ω are ω_ij = (α/μ^k)M_ij, i = 1, …, n1, j = 1, …, n;
Step 3.3: fix H, Z and E, update J by the following formula:
J^{k+1} = U·Θ_{1/μ^k}(Σ)·Vᵀ, wherein UΣVᵀ is the singular value decomposition of Z^k + Y2^k/μ^k, and Θ_{1/μ^k} shrinks each singular value by the threshold 1/μ^k;
step 3.4: fix H, J and E, update Z as follows:
Z^{k+1} = (X1ᵀX1 + (2 + 2β/μ^k)·I)^{-1} (X1ᵀA^k + J^{k+1} − Y2^k/μ^k + H^{k+1} − Y3^k/μ^k + (2β/μ^k)Q), wherein I is an identity matrix and A^k = X − E^k + Y1^k/μ^k;
Step 3.5: fixing H, J and Z, updating each column of E as follows:
e_i^{k+1} = max(1 − (λ/μ^k)/||g_i^k||_2, 0)·g_i^k, wherein g_i^k is the i-th column of the matrix G^k = X − X1·Z^{k+1} + Y1^k/μ^k, i = 1, …, n;
step 3.6: updating the penalty factor according to the following formula:
μ^{k+1} = min(ρμ^k, μ_max)   (7)

wherein μ_max is the maximum allowed value of μ, set to μ_max = 10^10, and ρ is a step-size control parameter in the range 1 ≤ ρ ≤ 2;
then, the Lagrange multipliers are updated separately as follows: Y1^{k+1} = Y1^k + μ^k(X − X1·Z^{k+1} − E^{k+1}), Y2^{k+1} = Y2^k + μ^k(Z^{k+1} − J^{k+1}), Y3^{k+1} = Y3^k + μ^k(Z^{k+1} − H^{k+1});
step 3.7: if the conditions ||X − X1·Z^{k+1} − E^{k+1}||_∞ < ε, ||Z^{k+1} − J^{k+1}||_∞ < ε and ||H^{k+1} − Z^{k+1}||_∞ < ε are satisfied simultaneously, the iteration stops, and the H, J, Z, E computed at this point are the final solution; otherwise, set the iteration number k = k + 1 and return to step 3.2. Here ||·||_∞ denotes the L∞ norm of a matrix, i.e. its maximum absolute element value, and ε is an error tolerance parameter set to ε = 10^-4.
Step 4: compute the class label of the test-set pixel x_j according to label(x_j) = argmax_{l=1,…,c} s_l(j), where s_l(j) is the sum of the elements in the j-th column of the test-set matrix Z2 whose rows correspond to training pixels of class l, c is the total number of classes of hyperspectral image pixels, j = 1, …, n2, and n2 is the number of pixels in the test set X2.
The invention has the following beneficial effects: because an objective function with local and structural constraint low-rank representation is adopted, spectral and spatial features can be better balanced, so the method adapts better to different kinds of hyperspectral images; because the objective function contains the reconstruction-error minimization term λ||E||_{2,1}, the noise of the hyperspectral image is removed during the solution of the algorithm, which improves robustness to outliers and noise; and because the obtained low-rank decomposition matrix has the character of a similarity matrix and is used directly for pixel classification, the method is simple to implement and can effectively improve classification efficiency.
Drawings
FIG. 1 is a flow chart of a hyperspectral classification method based on local and structural constraint low-rank representation according to the invention;
FIG. 2 is a schematic diagram of the present invention based on local and structural constraints for low rank representation;
FIG. 3 is a diagram of the classification results of different algorithms on an Indian Pines dataset;
FIG. 4 is a diagram of the classification results for a Pavia University dataset;
In the figures, (a) is the ground-truth standard map; (b) is the SVM algorithm result map; (c) the SVMCK algorithm result map; (d) the JRSRC algorithm result map; (e) the cdSRC algorithm result map; (f) the LRR algorithm result map; (g) the LGIDL algorithm result map; (h) the LSLRR algorithm result map.
Detailed Description
The present invention will be further described with reference to the following drawings and examples, which include, but are not limited to, the following examples.
As shown in fig. 1, the hyperspectral classification method based on local and structural constraint low-rank representation of the invention is basically implemented as follows:
1. hyperspectral image normalization processing
Because the spectral values in raw hyperspectral image data reach into the thousands, numerical overflow can occur during computation and the algorithm slows down; a preprocessing step resolves this. Therefore, the given hyperspectral image data are normalized with a linear min-max normalization method to obtain a normalized hyperspectral image matrix X, where each column of X is the spectral vector of one pixel and every spectral reflectance value in X lies between 0 and 1. The specific steps are as follows:
First the minimum value p1 and the maximum value p2 of the pixel spectral reflectance values over the whole three-dimensional hyperspectral image are computed, and then every pixel is normalized according to:

x = (x_o − p1)/(p2 − p1)   (11)

where x_o and x denote the original and the normalized spectral reflectance value of a pixel, respectively.
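As an illustration, the normalization step above can be sketched in Python with NumPy; the function name and the toy data cube below are ours, not part of the patent:

```python
import numpy as np

def minmax_normalize(cube):
    # Linear min-max normalization x = (x_o - p1) / (p2 - p1), using the
    # global minimum p1 and maximum p2 over all pixels and bands.
    p1, p2 = cube.min(), cube.max()
    return (cube - p1) / (p2 - p1)

# Toy cube: 4 x 4 pixels, 8 bands, raw reflectance values in the thousands.
cube = np.random.default_rng(0).integers(500, 8000, size=(4, 4, 8)).astype(float)
# Matrix X: each column is the spectral vector of one pixel, values in [0, 1].
X = minmax_normalize(cube).reshape(-1, 8).T
```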
2. Constructing an objective function of local and structural constraint low-rank representation
The normalized hyperspectral image X can be divided into a training set and a test set, i.e. X = [X1, X2], where X1 is the training-set matrix and X2 is the test-set matrix; the training set consists of a certain proportion of samples selected from each class of pixels (generally 5%-15% per class, depending on the hyperspectral dataset), and the remaining hyperspectral pixels form the test set.
Firstly, the distance M_ij between each training pixel and every other pixel in the normalized hyperspectral image X is calculated with a distance measure that combines the spectral distance ||x_i − x_j||_2 and the spatial distance ||l_i − l_j||_2, weighted by a balance parameter m ≥ 0, yielding the distance matrix M (formula (12)). Here x_i and x_j denote the spectral vectors of the i-th and j-th pixels in X, l_i and l_j denote their spatial coordinate vectors, i = 1, …, n1, j = 1, …, n; n1 is the number of pixels in the training set X1 and n is the total number of pixels in X. Because the distance matrix M weights two different types of features and the balance parameter m is tunable, the relative weight of spectral and spatial features can be adjusted, so the method adapts well to different kinds of hyperspectral images.
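The formula image for M is not reproduced in this text, so the sketch below assumes a weighted sum of the spectral and spatial Euclidean distances, which matches the description of the balance parameter m; the function name and toy data are ours:

```python
import numpy as np

def distance_matrix(X, coords, train_idx, m=25.0):
    # Spectral-spatial distance matrix M (n1 x n): rows index training pixels,
    # columns index all pixels. m >= 0 weights the spatial term (assumed form).
    Xs = X.T                                   # (n, d) per-pixel spectral vectors
    spec = np.linalg.norm(Xs[train_idx][:, None, :] - Xs[None, :, :], axis=2)
    spat = np.linalg.norm(coords[train_idx][:, None, :] - coords[None, :, :], axis=2)
    return spec + m * spat

# Toy example: 3 pixels with 3 bands, 2 of them in the training set.
X = np.eye(3)                                  # columns = spectral vectors
coords = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
M = distance_matrix(X, coords, np.array([0, 1]), m=2.0)
```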
Since the element Z_ij of the low-rank decomposition matrix Z represents the similarity between the i-th and j-th pixels, there is prior information that the larger the distance between two pixels, the smaller their similarity should be, and hence the smaller the product of distance and similarity. Such a local constraint criterion can be described mathematically as

min_Z ||M⊙Z||_1   (13)

where ⊙ is the Hadamard operator, which multiplies corresponding elements of two matrices. This local constraint criterion lets the low-rank representation learn features of local parts of the hyperspectral data.
Then, each element of the predefined matrix Q is computed by a Gaussian kernel of the spectral distance between pixels, where σ is a parameter controlling the number of neighbouring pixels and satisfies σ ≥ 0.
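The kernel expression for Q is likewise not reproduced here; the sketch below assumes a Gaussian kernel exp(-||x_i - x_j||^2 / σ) of the squared spectral distance, with cross-class entries of the training block zeroed as described for the structure-preservation strategy; names and data are ours:

```python
import numpy as np

def predefined_q(X, train_idx, labels, sigma=0.8):
    # Predefined matrix Q (n1 x n): Gaussian kernel of the spectral distance
    # (assumed form). Cross-class entries of the training block are zeroed so
    # the training part is approximately block diagonal per class.
    Xs = X.T
    d2 = ((Xs[train_idx][:, None, :] - Xs[None, :, :]) ** 2).sum(axis=2)
    Q = np.exp(-d2 / sigma)
    for r, i in enumerate(train_idx):          # zero cross-class training entries
        for j in train_idx:
            if labels[i] != labels[j]:
                Q[r, j] = 0.0
    return Q

# Toy example: pixels 0 and 1 share a spectrum and a class, pixel 2 differs.
X = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Q = predefined_q(X, np.array([0, 1, 2]), np.array([0, 0, 1]), sigma=1.0)
```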
Likewise, both Q and Z can be divided into training and test parts in the same manner as X, i.e. Q = [Q1, Q2] and Z = [Z1, Z2]. The off-diagonal blocks of the training-set matrix Q1 are set to 0, and by properly tuning the parameter σ the test-set matrix Q2 approximates a block-diagonal matrix; the structure-preservation strategy is therefore expressed as

min_Z ||Z − Q||_F^2   (15)

where ||·||_F is the Frobenius norm of a matrix, i.e. the square root of the sum of the squares of all its elements.
Combining the local constraint criterion of formula (13) with the structure-preservation strategy of formula (15), the objective function of the local and structural constraint low-rank representation constructed by the invention is

min_{Z,E} ||Z||_* + λ||E||_{2,1} + α||M⊙Z||_1 + β||Z − Q||_F^2  s.t.  X = X1·Z + E   (16)

where λ is the error-term regularization coefficient, α the local-constraint regularization coefficient and β the structural-constraint regularization coefficient, all three being arbitrary non-negative values; ||·||_* is the nuclear norm of a matrix, i.e. the sum of all its singular values; ||·||_{2,1} is the L_{2,1} norm, i.e. the sum of the L_2 norms of the matrix columns, where d is the dimension of a hyperspectral pixel; and ||·||_1 is the L_1 norm of a matrix, i.e. the sum of the absolute values of all its elements.
3. Solving an objective function using an augmented Lagrange multiplier method and an alternate iterative update algorithm
The variables Z and E to be solved in the objective function are strongly coupled, which makes direct solution very cumbersome; therefore, two auxiliary variables H and J are first introduced, and formula (16) is converted with the augmented Lagrange multiplier method into the following form:

L(H, J, Z, E) = ||J||_* + λ||E||_{2,1} + α||M⊙H||_1 + β||Z − Q||_F^2 + ⟨Y1, X − X1·Z − E⟩ + ⟨Y2, Z − J⟩ + ⟨Y3, Z − H⟩ + (μ/2)(||X − X1·Z − E||_F^2 + ||Z − J||_F^2 + ||Z − H||_F^2)

wherein ⟨A, B⟩ = trace(AᵀB), μ > 0 is a penalty factor, and Y1, Y2 and Y3 are Lagrange multipliers.
Then, an alternating iterative update algorithm, which fixes the other variables while optimizing a given one, is used to solve the optimal solutions of H, J, Z and E in turn. The specific process is as follows:
Step 3.1: initialise λ = 20, α = 0.8, β = 0.6, Y1^k = Y2^k = Y3^k = 0, H^k = J^k = Z^k = E^k = 0, μ^k = 10^-6, wherein the superscript k denotes the iteration number, initially k = 1;
Step 3.2: fixing J, Z and E, H can be updated by elementwise soft-thresholding. By derivation, the optimal solution is

H^{k+1}_ij = Θ_ω(Z^k_ij + Y3^k_ij/μ^k)

where Θ(x) = max(x − ω, 0) + min(x + ω, 0) and the elements of ω are ω_ij = (α/μ^k)M_ij, i = 1, …, n1, j = 1, …, n;
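A minimal sketch of this soft-thresholding update in Python, assuming, per the text, ω_ij = (α/μ)M_ij and the operand Z + Y3/μ (the function name is ours):

```python
import numpy as np

def update_h(Z, Y3, M, mu, alpha):
    # Elementwise soft-thresholding H_ij = Theta_w(Z_ij + Y3_ij / mu) with
    # Theta(x) = max(x - w, 0) + min(x + w, 0) and w_ij = (alpha / mu) * M_ij.
    V = Z + Y3 / mu
    W = (alpha / mu) * M
    return np.maximum(V - W, 0.0) + np.minimum(V + W, 0.0)
```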
Step 3.3: fixing H, Z and E, J is updated by singular value thresholding:

J^{k+1} = U·Θ_{1/μ^k}(Σ)·Vᵀ

where UΣVᵀ is the singular value decomposition of Z^k + Y2^k/μ^k and Θ_{1/μ^k} shrinks each singular value by the threshold 1/μ^k.
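The J-step is the classical singular value thresholding operation; a sketch, assuming the SVD operand Z + Y2/μ and threshold 1/μ as stated above (function name ours):

```python
import numpy as np

def update_j(Z, Y2, mu):
    # Singular value thresholding: J = U * shrink(S, 1/mu) * V^T, where
    # U S V^T is the SVD of Z + Y2 / mu.
    U, s, Vt = np.linalg.svd(Z + Y2 / mu, full_matrices=False)
    s = np.maximum(s - 1.0 / mu, 0.0)
    return (U * s) @ Vt
```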
step 3.4: fixing H, J, and E, then Z can be updated as follows:
this is a quadratic minimization problem whose closed-form solution can be found by making its derivative 0, and the specific optimal solution is as follows:
wherein,i is an identity matrix, Ak=X-Ek+Y1 kk
Step 3.5: fixing H, J and Z, each column of E is updated; the optimal solution is

e_i^{k+1} = max(1 − (λ/μ^k)/||g_i^k||_2, 0)·g_i^k

where g_i^k is the i-th column of the matrix G^k = X − X1·Z^{k+1} + Y1^k/μ^k, i = 1, …, n.
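The E-step is the standard column-wise L_{2,1} shrinkage; a sketch under the assumption G = X - X1·Z + Y1/μ as stated above (function name ours):

```python
import numpy as np

def update_e(X, X1, Z, Y1, mu, lam):
    # Column-wise L2,1 shrinkage: each column of G = X - X1 @ Z + Y1 / mu is
    # scaled by max(1 - (lam/mu) / ||g_i||, 0), which zeroes small columns.
    G = X - X1 @ Z + Y1 / mu
    norms = np.linalg.norm(G, axis=0)
    scale = np.maximum(1.0 - (lam / mu) / np.maximum(norms, 1e-12), 0.0)
    return G * scale
```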
step 3.6: updating corresponding parameters in the augmented Lagrange multiplier, namely updating penalty factors:
μk+1=max(ρμk,maxμ) (25)
therein, maxμIs the maximum set value of μ, set to maxμ=1010Rho is a step length control parameter, and the value range is that rho is more than or equal to 1 and less than or equal to 2;
then, the Lagrange multipliers are updated: Y1^{k+1} = Y1^k + μ^k(X − X1·Z^{k+1} − E^{k+1}), Y2^{k+1} = Y2^k + μ^k(Z^{k+1} − J^{k+1}), Y3^{k+1} = Y3^k + μ^k(Z^{k+1} − H^{k+1}).
step 3.7: the iterative convergence condition is checked, i.e. whether the respective optimization variables satisfy the following condition:
||Zk+1-Jk+1||<ε (30)
||Hk+1-Zk+1||<ε (31)
wherein | · | purple sweetAn L ∞ norm representing a matrix, i.e., a product of a maximum element value of the matrix and a column number, epsilon is an error limiting parameter and is set to 10 ∈-4
If the three conditions are met simultaneously, stopping iteration, and performing subsequent classification processing, wherein the calculated H, J, Z and E are final solutions; otherwise, the iteration number k is k +1, the process returns to step 3.2, and the variables H, J, Z, and E are continuously updated.
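The stopping test of step 3.7 can be sketched as follows, reading the L∞ norm as the maximum absolute element (function name ours):

```python
import numpy as np

def converged(X, X1, Z, E, J, H, eps=1e-4):
    # Stop when the largest-magnitude entry of each residual is below eps.
    r1 = np.abs(X - X1 @ Z - E).max()   # reconstruction residual
    r2 = np.abs(Z - J).max()            # Z = J residual
    r3 = np.abs(H - Z).max()            # H = Z residual
    return max(r1, r2, r3) < eps
```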
4. Classification processing
After the low-rank decomposition matrix Z of the hyperspectral image data has been solved, the test-set matrix Z2 is used first: for each column j of Z2, the sum of the elements whose rows belong to class l is computed and denoted s_l(j), where l ∈ {1, …, c} and c is the number of classes of the hyperspectral data; the class label of the test pixel x_j is then obtained by label(x_j) = argmax_{l=1,…,c} s_l(j), where j = 1, …, n2 and n2 is the number of pixels in the test set X2. This step is extremely simple: only additions and a maximum operation are required, and no additional complex classifier is needed to obtain the classification result.
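The voting rule above can be sketched as follows (names ours; the rows of Z2 are indexed by training pixels with known labels):

```python
import numpy as np

def classify(Z2, train_labels, classes):
    # For each test column of Z2 (n1 x n2), sum the coefficients that fall on
    # training rows of each class and take the arg-max as the predicted label.
    scores = np.stack([Z2[train_labels == c].sum(axis=0) for c in classes])
    return np.asarray(classes)[scores.argmax(axis=0)]
```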
The implementation environment of this embodiment is a computer with an Intel Core i7-3770 3.40 GHz central processing unit and 32 GB of memory, running the 64-bit WINDOWS 7 operating system, simulated with MATLAB R2015a software. The data used were two public hyperspectral datasets: Indian Pines and Pavia University. Indian Pines: 200 bands, each band containing 145 x 145 pixels; Pavia University: 103 bands, each band containing 610 x 340 pixels. On the Indian Pines dataset, 10% of the pixels were randomly selected as the training set and the remaining pixels used as the test set; the parameter controlling the number of neighbouring pixels was set to σ = 0.8, the parameter balancing spectral and spatial features to m = 25, and the step-size control parameter to ρ = 1.2. On the Pavia University dataset, 5% of the pixels were randomly selected as the training set and the remaining pixels used as the test set, likewise with σ = 0.8, m = 25 and ρ = 1.2. Seven different algorithms were used to classify the two datasets; fig. 3 and fig. 4 show the classification result maps of the different algorithms on the Indian Pines and Pavia University datasets respectively, compared with the ground-truth standard map. These algorithms are: the Support Vector Machine (SVM) algorithm; the composite-kernel Support Vector Machine (SVMCK) algorithm; the Joint Robust Sparse Representation Classifier (JRSRC) algorithm; the class-dependent sparse Representation Classifier (cdSRC) algorithm; the Low Rank Representation (LRR) algorithm; the Low-rank Group Inspired Dictionary Learning (LGIDL) algorithm; and the local and structural constraint low-rank representation algorithm (LSLRR) of the present invention.
And calculating an Overall Accuracy (OA) index for measuring the Accuracy of the hyperspectral image data classification. The results are shown in table 1, and it can be seen that the overall accuracy of the method of the present invention is highest on two data sets, which also illustrates the advancement of the present invention on hyperspectral image classification.
TABLE 1

Claims (1)

1. A hyperspectral classification method based on local and structural constraint low-rank representation is characterized by comprising the following steps:
step 1: performing normalization processing on hyperspectral image data by using a linear min-max normalization method to obtain a normalized hyperspectral image matrix X, wherein each column in the X is a spectral vector of one pixel, and the spectral reflectance value of each pixel is between 0 and 1;
step 2: based on the local constraint and the structure keeping criterion, an objective function of the following local and structure constraint low-rank representation is established:
min_{Z,E} ||Z||_* + λ||E||_{2,1} + α||M⊙Z||_1 + β||Z − Q||_F^2  s.t.  X = X1·Z + E   (1)

wherein Z is the low-rank decomposition matrix; E is the error matrix; λ is the error-term regularization coefficient, λ ≥ 0; α is the local-constraint regularization coefficient, α ≥ 0; β is the structural-constraint regularization coefficient, β ≥ 0; M is the distance matrix; Q is a predefined matrix. The normalized hyperspectral image X can be divided into a training set and a test set, i.e. X = [X1, X2], where X1 is the training-set matrix and X2 is the test-set matrix; the training set consists of 5%-15% of the pixels selected from each class of pixels, and the test set consists of the remaining hyperspectral pixels. Q and Z can be divided into training and test parts in the same way, i.e. Q = [Q1, Q2] and Z = [Z1, Z2]. Each element M_ij of the distance matrix M is computed from the spectral distance ||x_i − x_j||_2 and the spatial distance ||l_i − l_j||_2, where x_i and x_j denote the spectral vectors of the i-th and j-th pixels in the normalized hyperspectral image X, l_i and l_j denote the spatial coordinate vectors of the i-th and j-th pixels, and m is a parameter balancing the spectral and spatial features, m ≥ 0, i = 1, …, n1, j = 1, …, n; n1 is the number of pixels in the training set X1, and n is the total number of pixels in the normalized hyperspectral image X. Each element Q_ij of the predefined matrix Q is computed by a Gaussian kernel of the spectral distance, where σ is a parameter controlling the number of neighbouring pixels, σ ≥ 0, i = 1, …, n1, j = 1, …, n. ||·||_* is the nuclear norm of a matrix, i.e. the sum of all its singular values; ||·||_{2,1} is the L_{2,1} norm, i.e. the sum of the L_2 norms of the matrix columns, where d is the dimension of a pixel spectral vector in the hyperspectral image; ||·||_1 is the L_1 norm of a matrix, i.e. the sum of the absolute values of all its elements; ||·||_F is the Frobenius norm of a matrix, i.e. the square root of the sum of the squares of all its elements; ⊙ is the Hadamard operator, which multiplies corresponding elements of two matrices;
and step 3: introducing auxiliary variables H and J, and converting formula (1) with the augmented Lagrange multiplier method into the following form:
L(H, J, Z, E) = ||J||_* + λ||E||_{2,1} + α||M⊙H||_1 + β||Z − Q||_F^2 + ⟨Y1, X − X1·Z − E⟩ + ⟨Y2, Z − J⟩ + ⟨Y3, Z − H⟩ + (μ/2)(||X − X1·Z − E||_F^2 + ||Z − J||_F^2 + ||Z − H||_F^2)

wherein ⟨A, B⟩ = trace(AᵀB), trace denotes the trace operation of a matrix, μ is a penalty factor, μ > 0, and Y1, Y2 and Y3 are Lagrange multipliers;
and then the optimal solutions of H, J, Z and E are obtained with an alternating iterative update algorithm, specifically:
step 3.1: initialise λ = 20, α = 0.8, β = 0.6, Y1^k = Y2^k = Y3^k = 0, H^k = J^k = Z^k = E^k = 0, μ^k = 10^-6, wherein the superscript k denotes the iteration number, initially k = 1;
step 3.2: fixing J, Z and E, updating the element in H according to the following formula:
where Θ(x) = max(x − ω, 0) + min(x + ω, 0) is the elementwise soft-thresholding operator applied to Z^k + Y3^k/μ^k, and the elements of ω are ω_ij = (α/μ^k)M_ij, i = 1, …, n1, j = 1, …, n;
Step 3.3: fix H, Z and E, update J by the following formula:
J^{k+1} = U·Θ_{1/μ^k}(Σ)·Vᵀ, wherein UΣVᵀ is the singular value decomposition of Z^k + Y2^k/μ^k, and Θ_{1/μ^k} shrinks each singular value by the threshold 1/μ^k; step 3.4: fix H, J and E, update Z as follows:

Z^{k+1} = (X1ᵀX1 + (2 + 2β/μ^k)·I)^{-1} (X1ᵀA^k + J^{k+1} − Y2^k/μ^k + H^{k+1} − Y3^k/μ^k + (2β/μ^k)Q), wherein I is an identity matrix and A^k = X − E^k + Y1^k/μ^k;
Step 3.5: fixing H, J and Z, each column of E is updated as e_i^{k+1} = max(1 − (λ/μ^k)/||g_i^k||_2, 0)·g_i^k, wherein g_i^k is the i-th column of the matrix G^k = X − X1·Z^{k+1} + Y1^k/μ^k, i = 1, …, n;
step 3.6: updating the penalty factor according to the following formula:
μ^{k+1} = min(ρμ^k, μ_max)   (7)

wherein μ_max is the maximum allowed value of μ, set to μ_max = 10^10, and ρ is a step-size control parameter in the range 1 ≤ ρ ≤ 2;
then, the Lagrange multipliers are updated separately as follows: Y1^{k+1} = Y1^k + μ^k(X − X1·Z^{k+1} − E^{k+1}), Y2^{k+1} = Y2^k + μ^k(Z^{k+1} − J^{k+1}), Y3^{k+1} = Y3^k + μ^k(Z^{k+1} − H^{k+1});
step 3.7: if the conditions ||X − X1·Z^{k+1} − E^{k+1}||_∞ < ε, ||Z^{k+1} − J^{k+1}||_∞ < ε and ||H^{k+1} − Z^{k+1}||_∞ < ε are satisfied simultaneously, the iteration stops, and the H, J, Z, E computed at this point are the final solution; otherwise, set the iteration number k = k + 1 and return to step 3.2; wherein ||·||_∞ denotes the L∞ norm of a matrix, i.e. its maximum absolute element value, and ε is an error tolerance parameter set to ε = 10^-4;
and step 4: compute the class label of the test-set pixel x_j according to label(x_j) = argmax_{l=1,…,c} s_l(j), where s_l(j) is the sum of the elements in the j-th column of the test-set matrix Z2 whose rows correspond to training pixels of class l, c is the total number of classes of hyperspectral image pixels, j = 1, …, n2, and n2 is the number of pixels in the test set X2.
CN201810919458.5A 2018-08-14 2018-08-14 Hyperspectral classification method based on local and structural constraint low-rank representation Active CN109190511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810919458.5A CN109190511B (en) 2018-08-14 2018-08-14 Hyperspectral classification method based on local and structural constraint low-rank representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810919458.5A CN109190511B (en) 2018-08-14 2018-08-14 Hyperspectral classification method based on local and structural constraint low-rank representation

Publications (2)

Publication Number Publication Date
CN109190511A true CN109190511A (en) 2019-01-11
CN109190511B CN109190511B (en) 2021-04-20

Family

ID=64921261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810919458.5A Active CN109190511B (en) 2018-08-14 2018-08-14 Hyperspectral classification method based on local and structural constraint low-rank representation

Country Status (1)

Country Link
CN (1) CN109190511B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160371563A1 (en) * 2015-06-22 2016-12-22 The Johns Hopkins University System and method for structured low-rank matrix factorization: optimality, algorithm, and applications to image processing
CN105513102A (en) * 2015-12-15 2016-04-20 西安电子科技大学 Hyper-spectral compression perception reconstruction method based on nonlocal total variation and low-rank sparsity
CN107832790A (en) * 2017-11-03 2018-03-23 南京农业大学 A kind of semi-supervised hyperspectral image classification method based on local low-rank representation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RUI WANG et al.: "Hyperspectral unmixing by reweighted low rank and total variation", IEEE *
CHU Heng et al.: "Hyperspectral image classification based on image segmentation and LSSVM", Modern Electronics Technique *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335201A (en) * 2019-03-27 2019-10-15 Zhejiang University of Technology Hyperspectral image denoising method combining Moreau-enhanced TV with local low-rank matrix recovery
CN110599466A (en) * 2019-08-29 2019-12-20 Wuhan University Hyperspectral anomaly detection method for component projection optimization separation
CN110599466B (en) * 2019-08-29 2022-04-29 Wuhan University Hyperspectral anomaly detection method for component projection optimization separation
CN111161199A (en) * 2019-12-13 2020-05-15 China University of Geosciences (Wuhan) Spatial-spectral fusion low-rank sparse decomposition method for hyperspectral image mixed pixels
CN111161199B (en) * 2019-12-13 2023-09-19 China University of Geosciences (Wuhan) Spatial-spectral fusion low-rank sparse decomposition method for hyperspectral image mixed pixels
CN111079838A (en) * 2019-12-15 2020-04-28 Yantai University Hyperspectral band selection method based on dual-manifold-preserving low-rank self-representation
CN111079838B (en) * 2019-12-15 2024-02-09 Yantai University Hyperspectral band selection method based on dual-manifold-preserving low-rank self-representation
CN112560975A (en) * 2020-12-23 2021-03-26 Northwestern Polytechnical University Hyperspectral anomaly detection method based on an S1/2-norm low-rank representation model
CN112560975B (en) * 2020-12-23 2024-05-14 Northwestern Polytechnical University Hyperspectral anomaly detection method based on an S1/2-norm low-rank representation model
CN113409261A (en) * 2021-06-13 2021-09-17 Northwestern Polytechnical University Hyperspectral anomaly detection method based on joint spatial-spectral feature constraints
CN113409261B (en) * 2021-06-13 2024-05-14 Northwestern Polytechnical University Hyperspectral anomaly detection method based on joint spatial-spectral feature constraints

Also Published As

Publication number Publication date
CN109190511B (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN109190511B (en) Hyperspectral classification method based on local and structural constraint low-rank representation
CN110728224B (en) Remote sensing image classification method based on attention mechanism depth Contourlet network
CN108491849B (en) Hyperspectral image classification method based on three-dimensional dense connection convolution neural network
CN107316013B (en) Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (data-to-neural network)
WO2021003951A1 (en) Hyperspectral image classification method based on label-constrained elastic network graph model
CN108734199B (en) Hyperspectral image robust classification method based on segmented depth features and low-rank representation
CN109766858A (en) Three-dimensional convolution neural network hyperspectral image classification method combined with bilateral filtering
CN109615008B (en) Hyperspectral image classification method and system based on stack width learning
CN107145836B (en) Hyperspectral image classification method based on stacked boundary identification self-encoder
CN109029363A (en) Target ranging method based on deep learning
CN108182449A (en) Hyperspectral image classification method
CN108229551B (en) Hyperspectral remote sensing image classification method based on compact dictionary sparse representation
CN112633386A (en) SACVAEGAN-based hyperspectral image classification method
CN113139512B (en) Depth network hyperspectral image classification method based on residual error and attention
CN115564996A (en) Hyperspectral remote sensing image classification method based on attention union network
CN111680579B (en) Remote sensing image classification method for self-adaptive weight multi-view measurement learning
CN108898269A (en) Electric power image-context impact evaluation method based on measurement
CN112836671A (en) Data dimension reduction method based on maximization ratio and linear discriminant analysis
Ge et al. Adaptive hash attention and lower triangular network for hyperspectral image classification
CN109376787A (en) Manifold learning network and computer visual image collection classification method based on it
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network
CN113344045B (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN109034213B (en) Hyperspectral image classification method and system based on correlation entropy principle
CN110717485A (en) Hyperspectral image sparse representation classification method based on local preserving projection
CN116977723A (en) Hyperspectral image classification method based on space-spectrum hybrid self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant