Deep Spatial-Spectral Subspace Clustering for Hyperspectral Images Based on Contrastive Learning
Figure 1: The augmentation methods used in our proposed method.
Figure 2: The overall architecture of our proposed method.
Figure 3: The architecture of our backbone CNN model.
Figure 4: The architecture of our MLPs.
Figure 5: (a–c) False-color images of the Indian Pines, University of Pavia, and Salinas datasets.
Figure 6: The clustering results achieved by different methods on the Indian Pines dataset.
Figure 7: The spectral information of the Indian Pines dataset.
Figure 8: Visualization of the data points of the Indian Pines dataset (features reduced to two dimensions with t-SNE).
Figure 9: The clustering results of different methods on the University of Pavia dataset.
Figure 10: The spectral information of the University of Pavia dataset.
Figure 11: Visualization of the data points of the University of Pavia dataset (features reduced to two dimensions with t-SNE).
Figure 12: The clustering results of different methods on the Salinas dataset.
Figure 13: The spectral information of the Salinas dataset.
Figure 14: Visualization of the data points of the Salinas dataset (features reduced to two dimensions with t-SNE).
Abstract
1. Introduction
- Inspired by DBMA and DBDA, we designed a double-branch dense spectral–spatial network for HSI clustering. These two branches can extract spectral and spatial features separately, avoiding the huge computation caused by multi-scale inputs. To reduce the computational load further, we remove the attention blocks in DBDA and DBMA.
- We use contrastive learning to explore spatial–spectral information. We augment the image by removing the spectral information of some non-central pixels. Different methods of selecting pixels to remove spectral information can provide different augmented views of the HSI block.
- The experimental results obtained over three publicly available HSI datasets demonstrate the superiority of our proposed method compared to other state-of-the-art methods.
2. Related Works
2.1. Traditional Clustering for HSIs
2.2. Deep Clustering for HSIs
2.3. Contrastive Learning
3. Method
3.1. Augmentation in Our Experimental Method
Algorithm 1 Selecting a Random Rectangular Area to Remove Spectral Information
1: Input: input image I.
2: Output: augmented image.
3: Generate an all-ones matrix with the same spatial size as I
4: Select a random submatrix in this matrix and set the elements inside it to 0
5: if the center point of the matrix lies in the submatrix then
6:   set the element at that point to 1
7: end if
8: for i = 1 to c do
9:   multiply the ith channel of the image by this matrix to obtain the augmented image
10: end for
11: Return the augmented image
Algorithm 2 Selecting Discrete Points to Remove Spectral Information
1: Input: input image I.
2: Output: augmented image.
3: Generate a random 0–1 matrix, with 0 and 1 equally probable, with the same spatial size as I
4: if the center point of the matrix is 0 then
5:   set the element at that point to 1
6: end if
7: for i = 1 to c do
8:   multiply the ith channel of the image by this matrix to obtain the augmented image
9: end for
10: Return the augmented image
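The masking logic of the two algorithms can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation; the block layout (h, w, c) and the function names are our own assumptions.

```python
import numpy as np

def erase_rectangle(block, rng):
    # Algorithm 1: zero out the spectra inside a random rectangle,
    # always keeping the centre pixel's spectrum intact.
    h, w, _ = block.shape
    mask = np.ones((h, w))
    r0, r1 = sorted(rng.integers(0, h, size=2))
    c0, c1 = sorted(rng.integers(0, w, size=2))
    mask[r0:r1 + 1, c0:c1 + 1] = 0
    mask[h // 2, w // 2] = 1              # restore the centre pixel
    return block * mask[:, :, None]       # broadcast over all c channels

def erase_points(block, rng):
    # Algorithm 2: zero out the spectra of random discrete pixels,
    # each non-central pixel kept or erased with equal probability.
    h, w, _ = block.shape
    mask = rng.integers(0, 2, size=(h, w)).astype(block.dtype)
    mask[h // 2, w // 2] = 1              # restore the centre pixel
    return block * mask[:, :, None]
```

Applying the two functions (or one function twice with different random draws) to the same pixel block yields two augmented views for the contrastive branches.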
3.2. Architectures of Our Experimental Models
3.3. Summary of Our Experimental Method
Algorithm 3 Our proposed clustering algorithm
1: Input: dataset I; pixel block size; training epochs E; batch size N.
2: Output: cluster assignments.
3: Sample pixel blocks from the dataset I
4: // training
5: for epoch = 1 to E do
6:   compute the instance-level contrastive loss
7:   compute the cluster-level contrastive loss
8:   compute the overall contrastive loss
9:   update the network
10: end for
11: // test
12: Extract features using the CNN model
13: Apply the spectral clustering algorithm to obtain the clustering result
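The two loss terms in the training loop can be sketched in NumPy, following the contrastive-clustering formulation of Li et al. that the method builds on. The temperature values and the entropy regulariser are standard choices for this family of losses, not values taken from this paper.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    # Instance-level contrastive (NT-Xent) loss for n positive pairs.
    # z1, z2: (n, d) embeddings of the two augmented views.
    n = z1.shape[0]
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # drop self-pairs
    row_max = sim.max(axis=1, keepdims=True)
    log_den = row_max[:, 0] + np.log(np.exp(sim - row_max).sum(axis=1))
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    return -(sim[np.arange(2 * n), pos] - log_den).mean()

def cluster_level(p1, p2, temperature=1.0):
    # Cluster-level loss: the k columns of the (n, k) soft-assignment
    # matrices act as cluster representations and are contrasted pairwise.
    loss = nt_xent(p1.T, p2.T, temperature)
    for p in (p1, p2):
        q = p.mean(axis=0)
        entropy = -(q * np.log(q + 1e-12)).sum()
        loss -= entropy    # discourages collapsing into one cluster
    return loss

# overall loss = instance-level + cluster-level
```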
4. Experiments
4.1. Experimental Datasets
4.2. Evaluation Metrics
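The result tables report overall accuracy (OA), average accuracy (AA), and Cohen's kappa. Their standard definitions, computed from a confusion matrix after matching cluster labels to classes, can be sketched as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def clustering_metrics(conf):
    # conf[i, j]: number of class-i samples assigned to (matched) cluster j.
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                      # overall accuracy
    aa = (np.diag(conf) / conf.sum(axis=1)).mean()   # mean per-class accuracy
    pe = conf.sum(axis=1) @ conf.sum(axis=0) / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```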
4.3. Experimental Parameters
4.4. Comparison Methods
4.5. Result Analysis
4.5.1. Indian Pines
4.5.2. University of Pavia
4.5.3. Salinas
5. Discussion
5.1. Influence of Patch Size
5.2. Influence of Data Augmentation Methods
5.3. Influence of Spectral Clustering
5.4. Running Time and Complexity
6. Conclusions and Future Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
HSI: Hyperspectral image
SSC: Sparse subspace clustering
CNN: Convolutional neural network
MLP: Multilayer perceptron
References
- Zhao, C.; Wang, Y.; Qi, B.; Wang, J. Global and local real-time anomaly detectors for hyperspectral remote sensing imagery. Remote Sens. 2015, 7, 3966–3985.
- Awad, M.; Jomaa, I.; Arab, F. Improved capability in stone pine forest mapping and management in Lebanon using hyperspectral CHRIS-Proba data relative to Landsat ETM+. Photogramm. Eng. Remote Sens. 2014, 80, 725–731.
- Ibrahim, A.; Franz, B.; Ahmad, Z.; Healy, R.; Knobelspiesse, K.; Gao, B.C.; Proctor, C.; Zhai, P.W. Atmospheric correction for hyperspectral ocean color retrieval with application to the Hyperspectral Imager for the Coastal Ocean (HICO). Remote Sens. Environ. 2018, 204, 60–75.
- Rodriguez, A.; Laio, A. Clustering by fast search and find of density peaks. Science 2014, 344, 1492–1496.
- Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A k-means clustering algorithm. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1979, 28, 100–108.
- Maggioni, M.; Murphy, J.M. Learning by Unsupervised Nonlinear Diffusion. J. Mach. Learn. Res. 2019, 20, 1–56.
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD), Portland, OR, USA, 2–4 August 1996; Volume 96, pp. 226–231.
- Roy, S.; Bhattacharyya, D.K. An approach to find embedded clusters using density based techniques. In Proceedings of the International Conference on Distributed Computing and Internet Technology, Bhubaneswar, India, 22–24 December 2005; pp. 523–535.
- Cariou, C.; Le Moan, S.; Chehdi, K. Improving k-nearest neighbor approaches for density-based pixel clustering in hyperspectral remote sensing images. Remote Sens. 2020, 12, 3745.
- Chen, S.; Zhang, D. Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 1907–1916.
- Lin, J.; He, C.; Wang, Z.J.; Li, S. Structure preserving transfer learning for unsupervised hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1656–1660.
- Murphy, J.M.; Maggioni, M. Unsupervised clustering and active learning of hyperspectral images with nonlinear diffusion. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1829–1845.
- Elhamifar, E.; Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2765–2781.
- Liu, Y.; Dou, Y.; Jin, R.; Li, R.; Qiao, P. Hierarchical learning with backtracking algorithm based on the visual confusion label tree for large-scale image classification. Vis. Comput. 2021, 1–21.
- Liu, Y.; Dou, Y.; Jin, R.; Qiao, P. Visual tree convolutional neural network in image classification. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 758–763.
- Nagpal, C.; Dubey, S.R. A performance evaluation of convolutional neural networks for face anti spoofing. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A fast dense spectral–spatial convolution network framework for hyperspectral images classification. Remote Sens. 2018, 10, 1068.
- Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multi-attention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307.
- Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of hyperspectral image based on double-branch dual-attention mechanism network. Remote Sens. 2020, 12, 582.
- Zeng, M.; Cai, Y.; Liu, X.; Cai, Z.; Li, X. Spectral-spatial clustering of hyperspectral image based on Laplacian regularized deep subspace clustering. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 2694–2697.
- Lei, J.; Li, X.; Peng, B.; Fang, L.; Ling, N.; Huang, Q. Deep spatial-spectral subspace clustering for hyperspectral image. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2686–2697.
- Li, Y.; Hu, P.; Liu, Z.; Peng, D.; Zhou, J.T.; Peng, X. Contrastive clustering. In Proceedings of the 2021 AAAI Conference on Artificial Intelligence (AAAI), Vancouver, BC, Canada, 2–9 February 2021.
- Paoli, A.; Melgani, F.; Pasolli, E. Clustering of hyperspectral images based on multiobjective particle swarm optimization. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4175–4188.
- Zhong, Y.; Zhang, L.; Gong, W. Unsupervised remote sensing image classification using an artificial immune network. Int. J. Remote Sens. 2011, 32, 5461–5483.
- Zhai, H.; Zhang, H.; Zhang, L.; Li, P. Laplacian-regularized low-rank subspace clustering for hyperspectral image band selection. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1723–1740.
- Tian, L.; Du, Q.; Kopriva, I.; Younan, N. Spatial-spectral Based Multi-view Low-rank Sparse Subspace Clustering for Hyperspectral Imagery. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8488–8491.
- Zhang, H.; Zhai, H.; Zhang, L.; Li, P. Spectral–spatial sparse subspace clustering for hyperspectral remote sensing images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3672–3684.
- Xie, J.; Girshick, R.; Farhadi, A. Unsupervised deep embedding for clustering analysis. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 9–14 June 2016; pp. 478–487.
- Chang, J.; Wang, L.; Meng, G.; Xiang, S.; Pan, C. Deep adaptive image clustering. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5879–5887.
- Fard, M.M.; Thonet, T.; Gaussier, E. Deep k-means: Jointly clustering with k-means and learning representations. Pattern Recognit. Lett. 2020, 138, 185–192.
- Barthakur, M.; Sarma, K.K. Semantic Segmentation using K-means Clustering and Deep Learning in Satellite Image. In Proceedings of the 2019 2nd International Conference on Innovations in Electronics, Signal Processing and Communication (IESC), Shillong, India, 1–2 March 2019; pp. 192–196.
- Sodjinou, S.G.; Mohammadi, V.; Mahama, A.T.S.; Gouton, P. A deep semantic segmentation-based algorithm to segment crops and weeds in agronomic color images. Inf. Process. Agric. 2021.
- Ji, P.; Zhang, T.; Li, H.; Salzmann, M.; Reid, I. Deep subspace clustering networks. arXiv 2017, arXiv:1709.02508.
- Mukherjee, S.; Asnani, H.; Lin, E.; Kannan, S. ClusterGAN: Latent space clustering in generative adversarial networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 4610–4617.
- Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2180–2188.
- Egaña, Á.F.; Santibáñez-Leal, F.A.; Vidal, C.; Díaz, G.; Liberman, S.; Ehrenfeld, A. A Robust Stochastic Approach to Mineral Hyperspectral Analysis for Geometallurgy. Minerals 2020, 10, 1139.
- Xu, J.; Li, H.; Liu, P.; Xiao, L. A novel hyperspectral image clustering method with context-aware unsupervised discriminative extreme learning machine. IEEE Access 2018, 6, 16176–16188.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Online, 13–18 July 2020; pp. 1597–1607.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 9729–9738.
- Grill, J.B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.H.; Buchatskaya, E.; Doersch, C.; Pires, B.A.; Guo, Z.D.; Azar, M.G.; et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv 2020, arXiv:2006.07733.
- You, C.; Li, C.G.; Robinson, D.P.; Vidal, R. Oracle based active set algorithm for scalable elastic net subspace clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3928–3937.
- You, C.; Robinson, D.; Vidal, R. Scalable sparse subspace clustering by orthogonal matching pursuit. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3918–3927.
- Yan, D.; Huang, L.; Jordan, M.I. Fast approximate spectral clustering. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, 28 June–1 July 2009; pp. 907–916.
- Mall, R.; Langone, R.; Suykens, J.A. Kernel Spectral Clustering for Big Data Networks. Entropy 2013, 15, 1567–1586.
Layer | Input Shape | Output Shape | Parameters | Padding | Kernel_Size | Stride
---|---|---|---|---|---|---
Conv11 | [1,9,9,100] | [24,9,9,47] | 192 | (0,0,0) | (1,1,7) | (1,1,2)
Conv12 | [24,9,9,47] | [12,9,9,47] | 2028 | (0,0,3) | (1,1,7) | (1,1,1)
Conv13 | [36,9,9,47] | [12,9,9,47] | 3036 | (0,0,3) | (1,1,7) | (1,1,1)
Conv14 | [48,9,9,47] | [12,9,9,47] | 4044 | (0,0,3) | (1,1,7) | (1,1,1)
Conv15 | [60,9,9,47] | [60,9,9,1] | 169,260 | (0,0,0) | (1,1,47) | (1,1,1)
Conv21 | [1,9,9,100] | [24,9,9,1] | 2424 | (0,0,0) | (1,1,100) | (1,1,1)
Conv22 | [24,9,9,1] | [12,9,9,1] | 2604 | (1,1,0) | (3,3,1) | (1,1,1)
Conv23 | [36,9,9,1] | [12,9,9,1] | 3900 | (1,1,0) | (3,3,1) | (1,1,1)
Conv24 | [48,9,9,1] | [12,9,9,1] | 5196 | (1,1,0) | (3,3,1) | (1,1,1)

Layer | Input Shape | Output Shape | Parameters | eps | Momentum | Affine
---|---|---|---|---|---|---
BN11 | [24,9,9,47] | [24,9,9,47] | 48 | 0.001 | 0.1 | True
BN12 | [36,9,9,47] | [36,9,9,47] | 72 | 0.001 | 0.1 | True
BN13 | [48,9,9,47] | [48,9,9,47] | 96 | 0.001 | 0.1 | True
BN14 | [60,9,9,47] | [60,9,9,47] | 120 | 0.001 | 0.1 | True
BN21 | [24,9,9,1] | [24,9,9,1] | 48 | 0.001 | 0.1 | True
BN22 | [36,9,9,1] | [36,9,9,1] | 72 | 0.001 | 0.1 | True
BN23 | [48,9,9,1] | [48,9,9,1] | 96 | 0.001 | 0.1 | True
BN3 | [120,9,9,1] | [120,9,9,1] | 240 | 0.001 | 0.1 | True

Total params: 193,476
Trainable params: 193,476
Non-trainable params: 0
Total mult-adds (M): 50.02
Input size (MB): 0.03
Forward/backward pass size (MB): 6.84
Params size (MB): 0.74
Estimated Total Size (MB): 7.61
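As a sanity check, the per-layer parameter counts in the tables follow the standard formulas for a 3-D convolution with bias and an affine batch norm. A quick sketch reproducing the totals (layer shapes are read off the tables; the helper names are ours):

```python
def conv3d_params(c_in, c_out, kernel):
    # weights: c_out filters of shape c_in x k1 x k2 x k3, plus one bias each
    k1, k2, k3 = kernel
    return c_out * c_in * k1 * k2 * k3 + c_out

def bn3d_params(channels):
    # an affine batch norm learns one scale and one shift per channel
    return 2 * channels

# (c_in, c_out, kernel) for each conv layer, read off the table
convs = [
    (1, 24, (1, 1, 7)),  (24, 12, (1, 1, 7)),  (36, 12, (1, 1, 7)),
    (48, 12, (1, 1, 7)), (60, 60, (1, 1, 47)), (1, 24, (1, 1, 100)),
    (24, 12, (3, 3, 1)), (36, 12, (3, 3, 1)),  (48, 12, (3, 3, 1)),
]
bns = [24, 36, 48, 60, 24, 36, 48, 120]
total = sum(conv3d_params(*c) for c in convs) + sum(bn3d_params(n) for n in bns)
print(total)  # 193476 — matches "Total params: 193,476"
```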
MLP I:

Layer | Output Shape | Parameters
---|---|---
Linear | [120] | 14,520
Relu | [120] | 0
Linear | [256] | 30,976

Total params: 45,496; Trainable params: 45,496; Non-trainable params: 0; Total mult-adds (M): 0.09

MLP II:

Layer | Output Shape | Parameters
---|---|---
Linear | [120] | 14,520
Relu | [120] | 0
Linear | [4] | 484
Softmax | [4] | 0

Total params: 15,004; Trainable params: 15,004; Non-trainable params: 0; Total mult-adds (M): 0.03
Datasets | Indian Pines | Pavia University | Salinas |
---|---|---|---|
Location | [30–115, 24–94] | [150–350, 100–200] | [0–140, 50–200] |
Channels | 200 | 103 | 204 |
Clusters | 4 | 8 | 6 |
Samples | 4391 | 6445 | 15,428 |
Class | Number | Methods | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
k-Means | SSC | EnSC | SSC-OMP | DLSS | LRDSC | Proposed | ||||
Corn-notill | 1005 | 0.4328 | 0.4935 | 0.7452 | 0.1034 | 0.6100 | 0.4418 | 0.5970 | 0.5184 | 0.9203 |
Grass-trees | 730 | 0.9958 | 0.9958 | 0.6616 | 0.0000 | 1.0000 | 0.9763 | 0.8883 | 1.0000 | 0.9986 |
Soybean-notill | 732 | 0.5737 | 0.6694 | 0.1489 | 0.0204 | 0.6530 | 0.4980 | 0.7031 | 0.9784 | 1.0000 |
Soybean-mintill | 1924 | 0.6351 | 0.6410 | 0.4069 | 0.9968 | 0.6528 | 0.7508 | 0.7767 | 0.8933 | 0.9381 |
OA | 0.6386 | 0.6701 | 0.4837 | 0.4639 | 0.7008 | 0.6736 | 0.7410 | 0.8388 | 0.9545 | |
AA | 0.6594 | 0.6999 | 0.4907 | 0.2802 | 0.7290 | 0.6667 | 0.7413 | 0.8475 | 0.9642 | |
Kappa | 0.4911 | 0.5988 | 0.2731 | 0.0593 | 0.5825 | 0.5833 | 0.6777 | 0.7989 | 0.9353 |
Class | Number | Methods | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
k-Means | SSC | EnSC | SSC-OMP | DLSS | LRDSC | Proposed | ||||
Asphalt | 425 | 0.0000 | 0.9540 | 0.6541 | 0.1882 | 0.8730 | 0.6522 | 0.4658 | 1.0000 | 1.0000 |
Meadows | 768 | 0.8476 | 0.0280 | 0.9062 | 0.3333 | 0.6064 | 0.9907 | 0.8785 | 0.0000 | 1.0000 |
Trees | 63 | 0.0000 | 0.4853 | 0.7777 | 0.0317 | 0.9861 | 0.4559 | 0.0000 | 0.0000 | 0.4920 |
Painted metal sheet | 1315 | 0.3680 | 0.9976 | 0.7171 | 0.7893 | 0.9909 | 0.0000 | 0.7784 | 0.9953 | 0.6410 |
Bare soil | 2559 | 0.4060 | 0.3264 | 0.5291 | 0.4028 | 0.3193 | 0.7023 | 0.8942 | 0.9610 | 1.0000 |
Bitumen | 860 | 0.9988 | 0.0000 | 0.4430 | 0.7104 | 0.0000 | 1.0000 | 0.4891 | 0.0024 | 0.9930 |
Self-Blocking Bricks | 94 | 0.3510 | 0.6000 | 0.0000 | 0.1489 | 0.9837 | 0.7343 | 0.9940 | 1.0000 | 0.0000 |
Shadows | 361 | 1.0000 | 1.0000 | 1.0000 | 0.2493 | 0.9909 | 0.5956 | 0.9363 | 0.5873 | 0.9944 |
OA | 0.5317 | 0.5655 | 0.6303 | 0.4844 | 0.6509 | 0.6250 | 0.8117 | 0.8687 | 0.9060 | |
AA | 0.4964 | 0.5489 | 0.6284 | 0.3567 | 0.7188 | 0.6414 | 0.6795 | 0.5682 | 0.7650 | |
Kappa | 0.4449 | 0.5641 | 0.5590 | 0.3732 | 0.5852 | 0.6242 | 0.8111 | 0.8685 | 0.8784 |
Class | Number | Methods | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
k-Means | SSC | EnSC | SSC-OMP | DLSS | LRDSC | Proposed | ||||
Fallow_rough_plow | 1229 | 0.9910 | 0.3318 | 0.0000 | 0.9780 | 0.9959 | 0.9930 | 0.9558 | 0.9971 | 1.0000 |
Fallow_smooth | 2441 | 0.9946 | 0.7461 | 0.2494 | 0.9631 | 0.9926 | 0.9935 | 0.9919 | 1.0000 | 0.9983 |
Stubble | 3949 | 0.6920 | 0.6571 | 0.6505 | 0.8465 | 0.9977 | 0.9970 | 0.9997 | 1.0000 | 1.0000 |
Celery | 3543 | 0.9937 | 1.0000 | 0.3211 | 0.9960 | 0.9984 | 0.9946 | 0.9804 | 1.0000 | 1.0000 |
Grapes_untrained | 2198 | 0.9986 | 1.0000 | 0.8999 | 0.9126 | 1.0000 | 0.9969 | 0.9946 | 0.9843 | 0.6974 |
Vineyard_untrained | 2068 | 0.0000 | 0.0000 | 0.0483 | 0.0415 | 0.0000 | 0.0000 | 0.0000 | 0.0879 | 1.0000 |
OA | 0.7840 | 0.6481 | 0.4144 | 0.8113 | 0.8631 | 0.8564 | 0.8474 | 0.8698 | 0.9566 | |
AA | 0.7783 | 0.6225 | 0.3615 | 0.7896 | 0.8307 | 0.8292 | 0.8204 | 0.8449 | 0.9493 | |
Kappa | 0.7367 | 0.6438 | 0.2969 | 0.7682 | 0.8312 | 0.8562 | 0.8473 | 0.8696 | 0.9466 |
Dataset | Metric | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 |
---|---|---|---|---|---|
Indian Pines | OA | 0.6955 | 0.9545 | 0.6807 | 0.7335 |
AA | 0.7642 | 0.9642 | 0.7835 | 0.6481 | |
Kappa | 0.5805 | 0.9353 | 0.5870 | 0.5961 | |
University of Pavia | OA | 0.8740 | 0.9060 | 0.7626 | 0.7845 |
AA | 0.7777 | 0.7650 | 0.6764 | 0.6978 | |
Kappa | 0.8424 | 0.8784 | 0.7168 | 0.7301 | |
Salinas | OA | 0.9564 | 0.9566 | 0.9561 | 0.9542 |
AA | 0.9490 | 0.9493 | 0.9487 | 0.9466 | |
Kappa | 0.9464 | 0.9466 | 0.9460 | 0.9436 |
Dataset | Metric | No Flip | Only Point | Only Rectangle | Rotation | Proposed |
---|---|---|---|---|---|---|
Indian Pines | OA | 0.9549 | 0.6101 | 0.9679 | 0.9508 | 0.9545 |
AA | 0.9645 | 0.4810 | 0.9704 | 0.9622 | 0.9642 | |
Kappa | 0.9359 | 0.3723 | 0.9541 | 0.9302 | 0.9353 | |
University of Pavia | OA | 0.8794 | 0.8808 | 0.8009 | 0.8836 | 0.9060 |
AA | 0.7794 | 0.7797 | 0.6687 | 0.7801 | 0.7650 | |
Kappa | 0.8488 | 0.8505 | 0.7544 | 0.8539 | 0.8784 | |
Salinas | OA | 0.9567 | 0.9569 | 0.8503 | 0.9568 | 0.9566 |
AA | 0.9493 | 0.9496 | 0.7499 | 0.9494 | 0.9493 | |
Kappa | 0.9467 | 0.9469 | 0.8147 | 0.9468 | 0.9466 |
Metric | Indian Pines | University of Pavia | Salinas | |||
---|---|---|---|---|---|---|
K-Means | Spectral | K-Means | Spectral | K-Means | Spectral | |
OA | 0.6809 | 0.9545 | 0.5600 | 0.9060 | 0.6803 | 0.9566 |
AA | 0.7287 | 0.9642 | 0.5322 | 0.7650 | 0.6443 | 0.9493 |
Kappa | 0.5654 | 0.9353 | 0.4887 | 0.8784 | 0.6187 | 0.9466 |
Time (s) | Indian Pines | University of Pavia | Salinas
---|---|---|---|
Training CNN | 74.53 | 99.08 | 235.45 |
Getting features | 0.55 | 0.82 | 1.96 |
Spectral clustering | 25.14 | 41.44 | 172.73 |
Total | 102.22 | 141.34 | 410.14 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hu, X.; Li, T.; Zhou, T.; Peng, Y. Deep Spatial-Spectral Subspace Clustering for Hyperspectral Images Based on Contrastive Learning. Remote Sens. 2021, 13, 4418. https://doi.org/10.3390/rs13214418