One-Shot Dense Network with Polarized Attention for Hyperspectral Image Classification
Figure 1. Illustration of the 3-D convolution operation.
Figure 2. Illustration of the residual block in ResNet.
Figure 3. Illustration of the dense block in DenseNet.
Figure 4. Details of the channel-only polarized attention mechanism in our network.
Figure 5. Details of the spatial-only polarized attention mechanism in our network.
Figure 6. The structure of the proposed network.
Figure 7. Full-factor classification maps for the PU dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 8. Full-factor classification maps for the KSC dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 9. Full-factor classification maps for the BS dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 10. Full-factor classification maps for the HS dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 11. Full-factor classification maps for the SA dataset. (a) Ground-truth. (b) SVM. (c) HYSN. (d) SSRN. (e) FDSS. (f) DBMA. (g) DBDA. (h) PCIA. (i) SSGC. (j) OSDN. (k) False-color image.
Figure 12. Comparison of OA using different spatial window sizes for the five datasets.
Figure 13. Comparison of OA using different training sample proportions for the five datasets: (a) PU, (b) KSC, (c) BS, (d) HS, and (e) SA.
Figure 14. Classification results of different methods on the five datasets. (a) OA. (b) AA. (c) Kappa.
Figure 15. Different dense blocks. (a) Dense block. (b) Weak dense block. (c) One-shot dense block.
Figure 16. OA (%) of OSDN with different attention modules on the five datasets.
Abstract
1. Introduction
- (1)
- We propose a novel spectral–spatial network based on one-shot dense blocks and polarized attention for HSI classification. The proposed network has two independent feature extraction branches: a spectral branch that applies channel-only polarized attention to extract spectral features, and a spatial branch that applies spatial-only polarized attention to capture spatial features.
- (2)
- The one-shot dense block greatly reduces the number of parameters and the computational complexity of the network. Meanwhile, a residual connection is added to the block, which alleviates performance saturation and the vanishing-gradient problem.
- (3)
- We apply both channel-only and spatial-only polarized attention in the proposed network. The channel-only polarized attention emphasizes valuable channel features and suppresses useless ones, while the spatial-only polarized attention focuses on regions with more discriminative features. In addition, this attention mechanism preserves more resolution in both the channel and spatial dimensions while incurring lower computational cost.
- (4)
- Several advanced training techniques, including a cosine annealing learning rate schedule, the Mish activation function [48], dropout, and early stopping, are employed in the proposed network. For reproducibility, the code of the proposed network is available at https://github.com/HaiZhu-Pan/OSDN (accessed on 5 May 2022).
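As a concrete illustration of contribution (2), the one-shot dense block can be sketched in PyTorch. This is our own minimal reconstruction from the spectral-branch layer table later in the paper (five BN-Mish-Conv3D layers of growth rate 12, a single "one-shot" concatenation, a 1 × 1 × 1 transition back to 24 channels, and a residual sum); class and argument names are ours, and the authors' reference implementation lives in the linked repository.

```python
import torch
import torch.nn as nn

class OneShotDenseBlock(nn.Module):
    """One-shot aggregation: each BN-Mish-Conv3D layer feeds only its
    successor, and all intermediate outputs are concatenated once at the
    end; a residual connection adds the block input back in. Channel
    widths (24 -> 5 x 12 -> 60 -> 24) follow the spectral-branch table."""

    def __init__(self, channels=24, growth=12, num_layers=5, kernel=(7, 1, 1)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)  # 'same' padding, stride 1
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(in_ch), nn.Mish(),
                nn.Conv3d(in_ch, growth, kernel, padding=pad)))
            in_ch = growth
        # 1x1x1 transition conv maps the concatenated 60 channels back to 24
        self.transition = nn.Sequential(
            nn.BatchNorm3d(growth * num_layers), nn.Mish(),
            nn.Conv3d(growth * num_layers, channels, 1))

    def forward(self, x):
        outs, h = [], x
        for layer in self.layers:
            h = layer(h)              # each layer sees only its predecessor
            outs.append(h)
        y = self.transition(torch.cat(outs, dim=1))  # one-shot concatenation
        return x + y                  # residual connection

x = torch.randn(2, 24, 49, 7, 7)      # (batch, channels, bands, H, W)
print(OneShotDenseBlock()(x).shape)   # -> torch.Size([2, 24, 49, 7, 7])
```

Unlike a standard dense block, no layer receives the concatenation of all earlier layers, so both the parameter count and the FLOPs drop sharply, which is the effect quantified in Section 5.4.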
2. Background
2.1. 3-D Convolution Operation
2.2. ResNet and DenseNet
2.3. Attention Mechanism
3. Methodology
3.1. Channel-Only Polarized Attention Mechanism
3.2. Spatial-Only Polarized Attention Mechanism
3.3. One-Shot Dense Network with Polarized Attention
3.3.1. Spectral and Spatial Feature Extraction of One-Shot Dense Block
3.3.2. Spectral and Spatial Feature Enhancement of Polarized Attention Mechanism
3.3.3. Spectral and Spatial Feature Fusion and Classification
4. Experiment
4.1. Hyperspectral Dataset Description
4.2. Experimental Evaluation Indicators
4.3. Experimental Setting
- (1)
- SVM: The SVM with a radial basis function (RBF) kernel is employed as a representative traditional method for HSI classification. It is implemented with scikit-learn [50]. Each labeled sample in the HSI is a continuous spectral vector and is fed into the SVM directly, without feature extraction or dimensionality reduction. The penalty parameter C and the RBF kernel width σ are selected by GridSearchCV, both over the range [10^−2, 10^2].
- (2)
- HYSN [26]: The HYSN model has three 3-D convolution layers, one 2-D convolution layer, and two fully connected layers. The sizes of the convolution kernels of the 3-D convolution layers are 3 × 3 × 7, 3 × 3 × 5, and 3 × 3 × 3, respectively. The size of the convolution kernel of the 2-D convolution layer is 3 × 3.
- (3)
- SSRN [29]: The SSRN model consists of two residual convolutional blocks with convolution kernel sizes of 1 × 1 × 7 and 3 × 3 × 1, respectively. The blocks are connected sequentially to extract deep spectral and spatial features, and BN and ReLU are added after each convolutional layer.
- (4)
- FDSS [31]: The FDSS network is composed of three convolutional parts connected in sequence: a densely connected spectral feature extraction part, a dimensionality reduction part, and a densely connected spatial feature extraction part. The kernel shapes of the three parts are 1 × 1 × 7, 1 × 1 × b (where b is the spectral depth of the generated feature map), and 3 × 3 × 1, respectively. Moreover, BN and ReLU are added before each convolutional layer.
- (5)
- DBMA [40]: The DBMA model adopts a double-branch dense structure, in which the spectral branch is refined by a channel-wise attention block and the spatial branch by a spatial-wise attention block, following CBAM [39].
- (6)
- DBDA [41]: The DBDA model also uses two densely connected branches, where a channel attention block and a spatial attention block are applied to the spectral and spatial branches, respectively, and Mish is adopted as the activation function.
- (7)
- PCIA [46]: The PCIA model uses an iterative approach to construct an attention mechanism. This network structure also consists of two branches, but each branch uses a pyramid convolution module to perform feature extraction.
- (8)
- SSGC [44]: The SSGC model embeds spectral and spatial global context attention blocks, derived from GCNet [45], into its two branches to capture long-range dependencies in both dimensions.
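The SVM baseline in (1) can be reproduced with a few lines of scikit-learn. The sketch below uses synthetic data in place of real per-pixel spectral vectors, and the grid bounds follow the [10^−2, 10^2] range stated above; variable names are ours.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 103))      # toy stand-in: 200 pixels, 103 bands (PU)
y = rng.integers(0, 3, size=200)     # toy class labels

# C and the RBF width (gamma in scikit-learn) searched over [1e-2, 1e2]
param_grid = {"C": np.logspace(-2, 2, 5), "gamma": np.logspace(-2, 2, 5)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)           # e.g. {'C': ..., 'gamma': ...}
```

In the real experiments, X and y would hold the training pixels drawn from each dataset's ground-truth map, with no prior feature extraction or dimensionality reduction.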
4.4. Experimental Results
5. Discussion
5.1. Comparison of Different Spatial Patch Sizes
5.2. Comparison of Different Training Sample Proportions
5.3. Comparison of Computational Cost and Complexity
5.4. Comparison of Different Dense Connections
5.5. Ablation Analysis toward the Attention Module
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhong, Y.; Cao, Q.; Zhao, J.; Ma, A.; Zhao, B.; Zhang, L. Optimal decision fusion for urban land-use/land-cover classification based on adaptive differential evolution using hyperspectral and LiDAR data. Remote Sens. 2017, 9, 868.
- Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659.
- Lorenz, S.; Salehi, S.; Kirsch, M.; Zimmermann, R.; Unger, G.; Vest Sørensen, E.; Gloaguen, R. Radiometric correction and 3D integration of long-range ground-based hyperspectral imagery for mineral exploration of vertical outcrops. Remote Sens. 2018, 10, 176.
- Audebert, N.; Le Saux, B.; Lefèvre, S. Deep learning for classification of hyperspectral data: A comparative review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 159–173.
- Shahshahani, B.M.; Landgrebe, D.A. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1087–1095.
- Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
- Zhang, Y.; Cao, G.; Li, X.; Wang, B.; Fu, P. Active semi-supervised random forest for hyperspectral image classification. Remote Sens. 2019, 11, 2974.
- Ma, L.; Crawford, M.M.; Tian, J. Local manifold learning-based k-nearest-neighbor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4099–4109.
- Kuo, B.-C.; Ho, H.-H.; Li, C.-H.; Hung, C.-C.; Taur, J.-S. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 317–326.
- Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.A. PCA-based edge-preserving features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151.
- Bruce, L.M.; Koger, C.H.; Li, J. Dimensionality reduction of hyperspectral data using discrete wavelet transform feature extraction. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2331–2338.
- Falco, N.; Benediktsson, J.A.; Bruzzone, L. A study on the effectiveness of different independent component analysis algorithms for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2183–2199.
- Jia, S.; Wu, K.; Zhu, J.; Jia, X. Spectral–spatial Gabor surface feature fusion approach for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1142–1154.
- Jia, S.; Hu, J.; Zhu, J.; Jia, X.; Li, Q. Three-dimensional local binary patterns for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2399–2413.
- Gu, Y.; Liu, T.; Jia, X.; Benediktsson, J.A.; Chanussot, J. Nonlinear multiple kernel learning with multiple-structure-element extended morphological profiles for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3235–3247.
- Cao, X.; Zhou, F.; Xu, L.; Meng, D.; Xu, Z.; Paisley, J. Hyperspectral image classification with Markov random fields and a convolutional neural network. IEEE Trans. Image Process. 2018, 27, 2354–2367.
- Wang, P.; Fan, E.; Wang, P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit. Lett. 2021, 141, 61–67.
- Lateef, F.; Ruichek, Y. Survey on semantic segmentation using deep learning techniques. Neurocomputing 2019, 338, 321–348.
- Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619.
- Yu, C.; Zhao, M.; Song, M.; Wang, Y.; Li, F.; Han, R.; Chang, C.-I. Hyperspectral image classification method based on CNN architecture embedding with hashing semantic feature. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1866–1881.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Yu, S.; Jia, S.; Xu, C. Convolutional neural networks for hyperspectral image classification. Neurocomputing 2017, 219, 88–98.
- Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281.
- Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, Q.; Shen, C. Hyperspectral classification based on lightweight 3-D-CNN with transfer learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5813–5828.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A fast dense spectral–spatial convolution network framework for hyperspectral images classification. Remote Sens. 2018, 10, 1068.
- Tian, Z.; Zhan, R.; Hu, J.; Wang, W.; He, Z.; Zhuang, Z. Generating anchor boxes based on attention mechanism for object detection in remote sensing images. Remote Sens. 2020, 12, 2416.
- You, Q.; Jin, H.; Wang, Z.; Fang, C.; Luo, J. Image captioning with semantic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4651–4659.
- Atoum, Y.; Ye, M.; Ren, L.; Tai, Y.; Liu, X. Color-wise attention network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 506–507.
- Fang, B.; Li, Y.; Zhang, H.; Chan, J.C.-W. Hyperspectral images classification based on dense convolutional networks with spectral-wise attention mechanism. Remote Sens. 2019, 11, 159.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
- Li, J.; Cui, R.; Li, B.; Song, R.; Li, Y.; Dai, Y.; Du, Q. Hyperspectral image super-resolution by band attention through adversarial learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4304–4318.
- Roy, S.K.; Dubey, S.R.; Chatterjee, S.; Chaudhuri, B.B. FuSENet: Fused squeeze-and-excitation network for spectral-spatial hyperspectral image classification. IET Image Process. 2020, 14, 1653–1661.
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
- Ma, W.; Yang, Q.; Wu, Y.; Zhao, W.; Zhang, X. Double-branch multi-attention mechanism network for hyperspectral image classification. Remote Sens. 2019, 11, 1307.
- Li, R.; Zheng, S.; Duan, C.; Yang, Y.; Wang, X. Classification of hyperspectral image based on double-branch dual-attention mechanism network. Remote Sens. 2020, 12, 582.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
- Shi, C.; Liao, D.; Zhang, T.; Wang, L. Hyperspectral Image Classification Based on 3D Coordination Attention Mechanism Network. Remote Sens. 2022, 14, 608.
- Li, Z.; Cui, X.; Wang, L.; Zhang, H.; Zhu, X.; Zhang, Y. Spectral and spatial global context attention for hyperspectral image classification. Remote Sens. 2021, 13, 771.
- Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-local networks meet squeeze-excitation networks and beyond. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 1971–1980.
- Shi, H.; Cao, G.; Ge, Z.; Zhang, Y.; Fu, P. Double-Branch Network with Pyramidal Convolution and Iterative Attention for Hyperspectral Image Classification. Remote Sens. 2021, 13, 1403.
- Liu, H.; Liu, F.; Fan, X.; Huang, D. Polarized self-attention: Towards high-quality pixel-wise regression. arXiv 2021, arXiv:2107.00782.
- Misra, D. Mish: A self regularized non-monotonic activation function. arXiv 2019, arXiv:1908.08681.
- Liu, D.; Han, G.; Liu, P.; Yang, H.; Sun, X.; Li, Q.; Wu, J. A Novel 2D-3D CNN with Spectral-Spatial Multi-Scale Feature Fusion for Hyperspectral Image Classification. Remote Sens. 2021, 13, 4621.
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
- Ge, Z.; Cao, G.; Shi, H.; Zhang, Y.; Li, X.; Fu, P. Compound Multiscale Weak Dense Network with Hybrid Attention for Hyperspectral Image Classification. Remote Sens. 2021, 13, 3305.
Input Size | Layer Operations | Kernel Size | Filters | Output Size |
---|---|---|---|---|
(7 × 7 × 103, 1) | BN-Mish-Conv3D | (1 × 1 × 7) | 24 | (7 × 7 × 49, 24) |
(7 × 7 × 49, 24) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12) |
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12) |
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12) |
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12) |
(7 × 7 × 49, 12) | BN-Mish-Conv3D | (1 × 1 × 7) | 12 | (7 × 7 × 49, 12) |
(7 × 7 × 49, 12)/(7 × 7 × 49, 12)/(7 × 7 × 49, 12)/(7 × 7 × 49, 12)/(7 × 7 × 49, 12) | Concatenate | / | / | (7 × 7 × 49, 60) |
(7 × 7 × 49, 60) | BN-Mish-Conv3D | (1 × 1 × 1) | 24 | (7 × 7 × 49, 24) |
(7 × 7 × 49, 24)/(7 × 7 × 49, 24) | Element-wise Sum | / | / | (7 × 7 × 49, 24) |
(7 × 7 × 49, 24) | BN-Mish-Conv3D | (1 × 1 × 49) | 24 | (7 × 7 × 1, 24) |
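As a sanity check on the table above, the first spectral-branch convolution can be reproduced in PyTorch (our own sketch; the spectral stride of 2 is inferred from the 103 → 49 band reduction, and PyTorch's Conv3d orders kernel dimensions as (depth, height, width), so the paper's 1 × 1 × 7 spectral kernel becomes kernel_size=(7, 1, 1)):

```python
import torch
import torch.nn as nn

# 1x1x7 kernel, spectral stride 2: 103 bands -> (103 - 7)//2 + 1 = 49
conv = nn.Conv3d(1, 24, kernel_size=(7, 1, 1), stride=(2, 1, 1))
x = torch.randn(2, 1, 103, 7, 7)  # (batch, ch, bands, H, W), one PU patch
print(conv(x).shape)              # -> torch.Size([2, 24, 49, 7, 7])
```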
Input Size | Layer Operations | Kernel Size | Filters | Output Size |
---|---|---|---|---|
(7 × 7 × 103, 1) | BN-Mish-Conv3D | (1 × 1 × 103) | 24 | (7 × 7 × 1, 24) |
(7 × 7 × 1, 24) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12) |
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12) |
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12) |
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12) |
(7 × 7 × 1, 12) | BN-Mish-Conv3D | (3 × 3 × 1) | 12 | (7 × 7 × 1, 12) |
(7 × 7 × 1, 12)/(7 × 7 × 1, 12)/(7 × 7 × 1, 12)/(7 × 7 × 1, 12)/(7 × 7 × 1, 12) | Concatenate | / | / | (7 × 7 × 1, 60) |
(7 × 7 × 1, 60) | BN-Mish-Conv3D | (1 × 1 × 1) | 24 | (7 × 7 × 1, 24) |
(7 × 7 × 1, 24)/(7 × 7 × 1, 24) | Element-wise Sum | / | / | (7 × 7 × 1, 24) |
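The first spatial-branch layer can be checked the same way (our own sketch): a 1 × 1 × 103 kernel collapses all 103 PU bands in a single step, so the subsequent 3 × 3 × 1 convolutions operate purely spatially.

```python
import torch
import torch.nn as nn

# Conv3d kernel dims are (depth, H, W), so 1x1x103 becomes (103, 1, 1);
# the spectral dimension shrinks from 103 to 1 in one convolution.
conv = nn.Conv3d(1, 24, kernel_size=(103, 1, 1))
x = torch.randn(2, 1, 103, 7, 7)
print(conv(x).shape)  # -> torch.Size([2, 24, 1, 7, 7])
```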
Input Size | Layer Operations | Kernel Size | Filters | Output Size |
---|---|---|---|---|
(7 × 7, 24) | Conv2D | (1 × 1) | 12 | (7 × 7, 12) |
(7 × 7, 12) | Reshape | / | / | (49, 12) |
(7 × 7, 24) | Conv2D | (1 × 1) | 1 | (7 × 7, 1) |
(7 × 7, 1) | Reshape | / | / | (1, 49) |
(1, 49) | SoftMax | / | / | (1, 49) |
(1, 49)/(49, 12) | Matrix Multiplication | / | / | (1, 12) |
(1, 12) | Conv2D | (1 × 1) | 12/r | (1, 12/r) |
(1, 12/r) | LayerNorm and ReLU | / | / | (1, 12/r) |
(1, 12/r) | Conv2D | (1 × 1) | 24 | (1, 24) |
(1, 24) | Sigmoid | / | / | (1, 24) |
(7 × 7, 24)/(1, 24) | Dot Multiplication | / | / | (7 × 7, 24) |
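The table above translates almost line-for-line into PyTorch. The following is our own sketch of the channel-only polarized attention (module and variable names are ours; r is the channel-reduction ratio, whose value the table leaves as a hyperparameter):

```python
import torch
import torch.nn as nn

class ChannelOnlyPolarizedAttention(nn.Module):
    """Channel-only polarized attention for a (7 x 7, 24) feature map:
    a softmax-normalized 1-channel query pools the 12-channel value over
    all 49 pixels, and the pooled vector is expanded back to 24 sigmoid
    channel weights through a 12 -> 12/r -> 24 bottleneck."""

    def __init__(self, channels=24, r=2):
        super().__init__()
        mid = channels // 2                    # 12 in the table
        self.wv = nn.Conv2d(channels, mid, 1)  # value path
        self.wq = nn.Conv2d(channels, 1, 1)    # query path
        self.softmax = nn.Softmax(dim=-1)
        self.up = nn.Sequential(               # (1, 12) -> (1, 12/r) -> (1, 24)
            nn.Conv2d(mid, mid // r, 1),
            nn.LayerNorm([mid // r, 1, 1]), nn.ReLU(),
            nn.Conv2d(mid // r, channels, 1), nn.Sigmoid())

    def forward(self, x):
        v = self.wv(x).flatten(2)                # (b, 12, 49)
        q = self.softmax(self.wq(x).flatten(2))  # (b, 1, 49), softmax over pixels
        z = torch.matmul(q, v.transpose(1, 2))   # (b, 1, 12)
        z = z.transpose(1, 2).unsqueeze(-1)      # (b, 12, 1, 1)
        return x * self.up(z)                    # re-weight the 24 channels

x = torch.randn(2, 24, 7, 7)
print(ChannelOnlyPolarizedAttention()(x).shape)  # -> torch.Size([2, 24, 7, 7])
```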
Input Size | Layer Operations | Kernel Size | Filters | Output Size |
---|---|---|---|---|
(7 × 7, 24) | Conv2D | (1 × 1) | 12 | (7 × 7, 12) |
(7 × 7, 12) | Reshape | / | / | (12, 49) |
(7 × 7, 24) | Conv2D | (1 × 1) | 12 | (7 × 7, 12) |
(7 × 7, 12) | AvgPooling | / | / | (1, 12) |
(1, 12) | SoftMax | / | / | (1, 12) |
(1, 12)/(12, 49) | Matrix Multiplication | / | / | (1, 49) |
(1, 49) | Reshape | / | / | (7 × 7, 1) |
(7 × 7, 1) | Sigmoid | / | / | (7 × 7, 1) |
(7 × 7, 24)/(7 × 7, 1) | Dot Multiplication | / | / | (7 × 7, 24) |
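The spatial-only counterpart mirrors the table above: here the query is globally average-pooled and softmax-normalized over channels, producing one sigmoid weight per pixel. Again, this is our own sketch with our own naming:

```python
import torch
import torch.nn as nn

class SpatialOnlyPolarizedAttention(nn.Module):
    """Spatial-only polarized attention for a (7 x 7, 24) feature map:
    a channel-softmaxed global query contracts the 12-channel value into
    a single (7 x 7, 1) map of sigmoid pixel weights."""

    def __init__(self, channels=24):
        super().__init__()
        mid = channels // 2                    # 12 in the table
        self.wv = nn.Conv2d(channels, mid, 1)  # value path
        self.wq = nn.Conv2d(channels, mid, 1)  # query path
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        b, c, h, w = x.shape
        v = self.wv(x).flatten(2)               # (b, 12, 49)
        q = self.pool(self.wq(x)).flatten(2)    # (b, 12, 1)
        q = self.softmax(q.transpose(1, 2))     # (b, 1, 12), softmax over channels
        a = torch.matmul(q, v).view(b, 1, h, w) # (b, 1, 7, 7) attention map
        return x * torch.sigmoid(a)             # re-weight the 49 pixels

x = torch.randn(2, 24, 7, 7)
print(SpatialOnlyPolarizedAttention()(x).shape)  # -> torch.Size([2, 24, 7, 7])
```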
Input Size | Layer Operations | Output Size |
---|---|---|
(7 × 7 × 1, 24) | AdaptiveAvgPool-BN-Mish and Squeeze | (1, 24) |
(7 × 7 × 1, 24) | AdaptiveAvgPool-BN-Mish and Squeeze | (1, 24) |
(1, 24)/(1, 24) | Concatenate | (1, 48) |
(1, 48) | Dropout-Linear | (1, 9) |
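The fusion-and-classification head in the table above is a short module: each branch's (7 × 7 × 1, 24) feature map is globally average-pooled to a 24-dimensional vector, the two vectors are concatenated, and a dropout + linear layer maps the 48-dimensional result to the 9 PU classes. A sketch under those assumptions (names are ours):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, channels=24, num_classes=9, p=0.5):
        super().__init__()
        def branch_pool():  # AdaptiveAvgPool-BN-Mish, one per branch
            return nn.Sequential(nn.AdaptiveAvgPool3d(1),
                                 nn.BatchNorm3d(channels), nn.Mish())
        self.pool_spec, self.pool_spat = branch_pool(), branch_pool()
        self.classifier = nn.Sequential(nn.Dropout(p),
                                        nn.Linear(2 * channels, num_classes))

    def forward(self, spec, spat):
        f = torch.cat([self.pool_spec(spec).flatten(1),   # (b, 24)
                       self.pool_spat(spat).flatten(1)],  # (b, 24)
                      dim=1)                              # (b, 48)
        return self.classifier(f)                         # (b, num_classes)

spec = torch.randn(2, 24, 1, 7, 7)  # spectral-branch output
spat = torch.randn(2, 24, 1, 7, 7)  # spatial-branch output
print(FusionHead()(spec, spat).shape)  # -> torch.Size([2, 9])
```

For the other datasets, only num_classes changes (13 for KSC, 14 for BS, 15 for HS, 16 for SA).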
Number | Land Cover Type | Total | Train | Val. | Test |
---|---|---|---|---|---|
C1 | Asphalt | 6631 | 66 | 66 | 6499 |
C2 | Meadows | 18,649 | 186 | 186 | 18,277 |
C3 | Gravel | 2099 | 21 | 21 | 2057 |
C4 | Trees | 3064 | 31 | 31 | 3002 |
C5 | Painted metal sheets | 1345 | 13 | 13 | 1319 |
C6 | Bare soil | 5029 | 50 | 50 | 4929 |
C7 | Bitumen | 1330 | 13 | 13 | 1304 |
C8 | Self-blocking bricks | 3682 | 37 | 37 | 3608 |
C9 | Shadows | 947 | 9 | 9 | 929 |
Total | / | 42,776 | 428 | 428 | 41,920 |
Number | Land Cover Type | Total | Train | Val. | Test |
---|---|---|---|---|---|
C1 | Scrub | 761 | 15 | 15 | 731 |
C2 | Willow swamp | 243 | 5 | 5 | 233 |
C3 | CP hammock | 256 | 5 | 5 | 246 |
C4 | Slash pine | 252 | 5 | 5 | 242 |
C5 | Oak/broadleaf | 161 | 3 | 3 | 155 |
C6 | Hardwood | 229 | 5 | 5 | 219 |
C7 | Swamp | 105 | 2 | 2 | 101 |
C8 | Graminoid marsh | 431 | 9 | 9 | 413 |
C9 | Spartina marsh | 520 | 10 | 10 | 500 |
C10 | Cattail marsh | 404 | 8 | 8 | 388 |
C11 | Salt marsh | 419 | 8 | 8 | 403 |
C12 | Mud flats | 503 | 10 | 10 | 483 |
C13 | Water | 927 | 19 | 19 | 889 |
Total | / | 5211 | 104 | 104 | 5003 |
Number | Land Cover Type | Total | Train | Val. | Test |
---|---|---|---|---|---|
C1 | Water | 270 | 5 | 5 | 260 |
C2 | Hippo grass | 101 | 2 | 2 | 97 |
C3 | Floodplain grasses1 | 251 | 5 | 5 | 241 |
C4 | Floodplain grasses2 | 215 | 4 | 4 | 207 |
C5 | Reeds1 | 269 | 5 | 5 | 259 |
C6 | Riparian | 269 | 5 | 5 | 259 |
C7 | Fierscar2 | 259 | 5 | 5 | 249 |
C8 | Island interior | 203 | 4 | 4 | 195 |
C9 | Acacia woodlands | 314 | 6 | 6 | 302 |
C10 | Acacia shrublands | 248 | 5 | 5 | 238 |
C11 | Acacia grasslands | 305 | 6 | 6 | 293 |
C12 | Short mopane | 181 | 4 | 4 | 173 |
C13 | Mixed mopane | 268 | 5 | 5 | 258 |
C14 | Exposed soils | 95 | 2 | 2 | 91 |
Total | / | 3248 | 65 | 65 | 3118 |
Number | Land Cover Type | Total | Train | Val. | Test |
---|---|---|---|---|---|
C1 | Healthy grass | 1251 | 25 | 25 | 1201 |
C2 | Stressed grass | 1254 | 25 | 25 | 1204 |
C3 | Synthetic grass | 697 | 14 | 14 | 669 |
C4 | Trees | 1244 | 25 | 25 | 1194 |
C5 | Soil | 1242 | 25 | 25 | 1192 |
C6 | Water | 325 | 7 | 7 | 311 |
C7 | Residential | 1268 | 25 | 25 | 1218 |
C8 | Commercial | 1244 | 25 | 25 | 1194 |
C9 | Road | 1252 | 25 | 25 | 1202 |
C10 | Highway | 1227 | 25 | 25 | 1177 |
C11 | Railway | 1235 | 25 | 25 | 1185 |
C12 | Parking lot 1 | 1233 | 25 | 25 | 1183 |
C13 | Parking lot 2 | 469 | 9 | 9 | 451 |
C14 | Tennis court | 428 | 9 | 9 | 410 |
C15 | Running track | 660 | 13 | 13 | 634 |
Total | / | 15,029 | 301 | 301 | 14,427 |
Number | Land Cover Type | Total | Train | Val. | Test |
---|---|---|---|---|---|
C1 | Brocoli-green-weeds_1 | 2009 | 40 | 40 | 1929 |
C2 | Brocoli-green-weeds_2 | 3726 | 75 | 75 | 3576 |
C3 | Fallow | 1976 | 40 | 40 | 1896 |
C4 | Fallow-rough-plow | 1394 | 28 | 28 | 1338 |
C5 | Fallow-smooth | 2678 | 54 | 54 | 2570 |
C6 | Stubble | 3959 | 79 | 79 | 3801 |
C7 | Celery | 3597 | 72 | 72 | 3435 |
C8 | Grapes-untrained | 11,271 | 225 | 225 | 10,821 |
C9 | Soil-vinyard-develop | 6203 | 124 | 124 | 5955 |
C10 | Corn-senesced-green-weeds | 3278 | 66 | 66 | 3146 |
C11 | Lettuce-romaine-4wk | 1068 | 21 | 21 | 1026 |
C12 | Lettuce-romaine-5wk | 1927 | 39 | 39 | 1849 |
C13 | Lettuce-romaine-6wk | 916 | 18 | 18 | 880 |
C14 | Lettuce-romaine-7wk | 1070 | 21 | 21 | 1028 |
C15 | Vinyard-untrained | 7268 | 145 | 145 | 6978 |
C16 | Vinyard-vertical-trellis | 1807 | 36 | 36 | 1735 |
Total | / | 54,129 | 1083 | 1083 | 51,963 |
Number | Color | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 87.65 | 97.57 | 97.03 | 99.61 | 97.35 | 95.13 | 93.55 | 98.30 | 98.65 | |
C2 | 91.78 | 95.66 | 98.21 | 97.85 | 97.90 | 98.83 | 98.56 | 98.68 | 99.63 | |
C3 | 76.13 | 94.02 | 71.88 | 93.58 | 93.78 | 90.38 | 78.62 | 98.99 | 98.07 | |
C4 | 93.81 | 93.27 | 98.56 | 100.0 | 98.81 | 97.89 | 99.64 | 99.26 | 99.36 | |
C5 | 98.14 | 98.94 | 99.70 | 99.92 | 100.0 | 99.55 | 99.92 | 99.92 | 99.47 | |
C6 | 85.81 | 84.69 | 96.28 | 99.49 | 99.10 | 95.21 | 98.12 | 99.46 | 99.98 | |
C7 | 68.39 | 89.65 | 99.91 | 81.94 | 93.89 | 100.0 | 99.79 | 100.0 | 100.0 | |
C8 | 84.88 | 81.57 | 82.71 | 91.33 | 86.12 | 91.52 | 86.73 | 84.48 | 92.86 | |
C9 | 99.89 | 99.56 | 99.44 | 99.04 | 99.01 | 99.78 | 97.37 | 97.27 | 100.0 | |
OA (%) | 88.87 | 92.96 | 95.23 | 97.11 | 96.67 | 96.77 | 95.84 | 97.38 | 98.83 | |
AA (%) | 87.39 | 92.77 | 93.75 | 95.86 | 96.22 | 96.48 | 94.70 | 97.37 | 98.67 | |
Kappa × 100 | 85.11 | 90.69 | 93.67 | 96.16 | 95.57 | 95.71 | 94.47 | 96.52 | 98.44 |
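For reference, the OA, AA, and Kappa × 100 rows reported in these tables are standard confusion-matrix statistics; a minimal NumPy sketch (function name ours), where M[i, j] counts class-i samples predicted as class j:

```python
import numpy as np

def oa_aa_kappa(m):
    m = np.asarray(m, dtype=float)
    n = m.sum()
    oa = np.trace(m) / n                               # overall accuracy
    aa = np.mean(np.diag(m) / m.sum(axis=1))           # mean per-class accuracy
    pe = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                       # Cohen's kappa
    return oa, aa, kappa

m = np.array([[50, 2], [3, 45]])                       # toy 2-class example
print([round(v, 4) for v in oa_aa_kappa(m)])           # -> [0.95, 0.9495, 0.8998]
```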
Number | Color | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 86.49 | 99.86 | 87.58 | 88.83 | 90.72 | 97.22 | 96.58 | 87.68 | 97.96 | |
C2 | 76.43 | 86.76 | 67.41 | 66.57 | 88.18 | 84.55 | 91.16 | 93.42 | 98.60 | |
C3 | 64.14 | 86.18 | 53.19 | 56.54 | 80.88 | 77.32 | 91.62 | 80.48 | 82.09 | |
C4 | 45.75 | 55.00 | 61.54 | 83.78 | 61.47 | 54.86 | 64.57 | 71.90 | 92.35 | |
C5 | 31.76 | 26.80 | 100.0 | 97.96 | 69.23 | 33.33 | 82.85 | 70.07 | 95.60 | |
C6 | 50.58 | 96.70 | 100.0 | 75.81 | 72.86 | 95.81 | 81.06 | 94.17 | 80.78 | |
C7 | 48.20 | 67.61 | 100.0 | 100.0 | 84.40 | 76.80 | 94.35 | 83.67 | 96.12 | |
C8 | 68.95 | 83.82 | 90.89 | 97.61 | 86.22 | 89.74 | 94.38 | 99.00 | 98.06 | |
C9 | 72.31 | 82.33 | 98.24 | 99.79 | 86.89 | 98.81 | 97.59 | 100.0 | 100.0 | |
C10 | 94.00 | 98.54 | 64.09 | 98.97 | 100.0 | 100.0 | 99.94 | 100.0 | 99.47 | |
C11 | 86.35 | 87.82 | 98.53 | 99.75 | 100.0 | 100.0 | 100.0 | 99.48 | 100.0 | |
C12 | 81.41 | 81.51 | 91.08 | 98.72 | 98.91 | 94.97 | 99.29 | 99.35 | 92.26 | |
C13 | 100.0 | 97.41 | 100.0 | 98.54 | 100.0 | 100.0 | 100.0 | 100.0 | 99.78 | |
OA (%) | 77.25 | 82.72 | 84.99 | 90.24 | 90.62 | 91.85 | 94.23 | 93.81 | 96.09 | |
AA (%) | 69.72 | 80.80 | 85.58 | 89.45 | 86.14 | 84.88 | 91.80 | 90.70 | 94.85 | |
Kappa × 100 | 74.68 | 80.79 | 83.22 | 89.10 | 89.53 | 90.92 | 93.57 | 93.10 | 95.64 |
Number | Color | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 100.0 | 78.71 | 98.86 | 96.98 | 97.01 | 95.57 | 98.31 | 98.48 | 100.0 | |
C2 | 86.76 | 97.73 | 100.0 | 100.0 | 100.0 | 98.00 | 86.27 | 100.0 | 100.0 | |
C3 | 86.70 | 88.03 | 100.0 | 87.78 | 100.0 | 99.58 | 88.70 | 100.0 | 99.17 | |
C4 | 94.19 | 87.06 | 90.95 | 99.03 | 95.41 | 91.96 | 97.90 | 91.59 | 91.86 | |
C5 | 77.05 | 90.37 | 90.28 | 77.41 | 87.31 | 91.96 | 97.69 | 92.06 | 86.75 | |
C6 | 59.86 | 57.51 | 80.08 | 96.61 | 83.69 | 96.07 | 97.04 | 91.27 | 89.71 | |
C7 | 100.0 | 88.46 | 96.48 | 99.2 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
C8 | 86.49 | 91.46 | 96.26 | 89.23 | 98.00 | 98.43 | 96.11 | 97.91 | 100.0 | |
C9 | 64.10 | 69.02 | 94.89 | 81.18 | 96.69 | 96.74 | 81.57 | 90.96 | 98.67 | |
C10 | 85.05 | 92.06 | 81.60 | 100.0 | 99.58 | 85.14 | 72.83 | 91.44 | 99.57 | |
C11 | 44.00 | 89.51 | 93.31 | 91.3 | 100.0 | 100.0 | 100.0 | 94.83 | 100.0 | |
C12 | 91.35 | 90.12 | 98.05 | 99.42 | 100.0 | 84.91 | 100.0 | 100.0 | 98.08 | |
C13 | 76.79 | 98.13 | 79.50 | 100.0 | 83.01 | 91.05 | 96.35 | 92.28 | 92.13 | |
C14 | 100.0 | 95.56 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
OA (%) | 73.40 | 84.32 | 91.45 | 92.54 | 94.76 | 94.63 | 92.82 | 94.95 | 96.41 | |
AA (%) | 82.31 | 86.70 | 92.88 | 94.15 | 95.76 | 94.96 | 93.77 | 95.77 | 96.85 | |
Kappa × 100 | 71.07 | 82.99 | 90.73 | 91.91 | 94.32 | 94.18 | 92.22 | 94.53 | 96.11 |
Number | Color | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 96.69 | 91.27 | 97.81 | 97.46 | 91.18 | 96.47 | 99.80 | 96.83 | 99.91 | |
C2 | 98.21 | 86.10 | 99.92 | 99.92 | 94.69 | 99.00 | 92.26 | 98.15 | 97.86 | |
C3 | 98.81 | 94.63 | 100.0 | 99.55 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
C4 | 91.78 | 89.32 | 85.14 | 95.11 | 99.48 | 97.21 | 95.64 | 97.62 | 95.34 | |
C5 | 89.80 | 92.09 | 92.39 | 93.90 | 93.04 | 98.66 | 96.70 | 94.61 | 99.66 | |
C6 | 95.85 | 91.50 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 96.91 | 96.53 | |
C7 | 70.96 | 81.38 | 94.68 | 85.11 | 95.00 | 92.80 | 80.88 | 95.52 | 94.64 | |
C8 | 69.36 | 87.98 | 99.52 | 96.71 | 97.15 | 92.33 | 88.54 | 92.46 | 99.63 | |
C9 | 71.47 | 80.53 | 81.45 | 82.37 | 95.81 | 95.48 | 88.15 | 96.17 | 91.96 | |
C10 | 76.44 | 86.94 | 67.22 | 93.09 | 82.49 | 80.42 | 89.38 | 93.61 | 88.59 | |
C11 | 80.71 | 86.33 | 94.26 | 89.55 | 92.77 | 97.58 | 89.07 | 92.95 | 97.92 | |
C12 | 71.96 | 83.42 | 91.84 | 89.60 | 92.09 | 85.77 | 91.91 | 88.09 | 92.84 | |
C13 | 29.25 | 94.00 | 95.51 | 87.12 | 71.40 | 85.81 | 97.62 | 77.99 | 96.16 | |
C14 | 92.73 | 90.79 | 95.77 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
C15 | 99.53 | 90.75 | 98.15 | 98.76 | 94.17 | 99.68 | 94.43 | 99.21 | 100.0 | |
OA (%) | 82.82 | 87.52 | 90.30 | 92.81 | 92.85 | 93.99 | 92.18 | 94.66 | 96.28 | |
AA (%) | 82.24 | 88.47 | 92.91 | 93.88 | 93.29 | 94.75 | 93.62 | 94.67 | 96.74 | |
Kappa × 100 | 81.41 | 86.51 | 89.51 | 92.23 | 92.27 | 93.50 | 91.55 | 94.23 | 95.98 |
Number | Color | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN |
---|---|---|---|---|---|---|---|---|---|---|
C1 | 99.90 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | |
C2 | 98.62 | 99.94 | 100.0 | 100.0 | 99.97 | 100.0 | 100.0 | 100.0 | 99.92 | |
C3 | 91.69 | 96.38 | 100.0 | 97.57 | 100.0 | 100.0 | 99.36 | 100.0 | 100.0 | |
C4 | 97.32 | 99.15 | 99.93 | 99.78 | 99.33 | 99.32 | 100.0 | 95.61 | 97.39 | |
C5 | 97.27 | 96.55 | 99.92 | 99.81 | 99.65 | 99.42 | 91.18 | 100.0 | 100.0 | |
C6 | 99.97 | 99.95 | 100.0 | 100.0 | 99.92 | 100.0 | 100.0 | 99.97 | 100.0 | |
C7 | 98.92 | 99.47 | 99.88 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 99.83 | |
C8 | 76.15 | 85.39 | 88.47 | 96.93 | 95.44 | 96.55 | 95.14 | 90.45 | 98.78 | |
C9 | 98.88 | 98.35 | 99.75 | 99.97 | 99.45 | 99.80 | 100.0 | 99.82 | 99.52 | |
C10 | 92.24 | 95.68 | 98.27 | 99.48 | 96.41 | 99.20 | 99.77 | 99.46 | 98.39 | |
C11 | 96.16 | 97.93 | 100.0 | 95.75 | 100.0 | 100.0 | 95.32 | 100.0 | 99.90 | |
C12 | 95.97 | 96.39 | 99.84 | 99.04 | 99.89 | 99.51 | 99.30 | 99.68 | 99.84 | |
C13 | 93.38 | 92.67 | 100.0 | 100.0 | 97.34 | 100.0 | 99.88 | 100.0 | 99.77 | |
C14 | 97.03 | 99.49 | 99.70 | 99.22 | 93.29 | 98.56 | 99.41 | 99.03 | 98.55 | |
C15 | 77.47 | 82.63 | 97.12 | 91.68 | 95.78 | 88.56 | 94.61 | 98.00 | 95.12 | |
C16 | 99.19 | 100.0 | 99.89 | 99.94 | 100.0 | 98.86 | 100.0 | 100.0 | 100.0 | |
OA (%) | 90.23 | 93.53 | 96.86 | 97.94 | 97.98 | 97.46 | 97.62 | 97.39 | 98.80 | |
AA (%) | 94.39 | 96.25 | 98.92 | 98.70 | 98.53 | 98.74 | 98.37 | 98.88 | 99.19 | |
Kappa × 100 | 89.09 | 92.79 | 96.49 | 97.70 | 97.75 | 97.18 | 97.34 | 97.09 | 98.66 |
Model | SVM | HYSN | SSRN | FDSS | DBMA | DBDA | PCIA | SSGC | OSDN | |
---|---|---|---|---|---|---|---|---|---|---|
PU | Parameters (M) | / | 1.37 | 0.21 | 0.34 | 0.32 | 0.20 | 0.23 | 0.19 | 0.05 |
FLOPs (M) | / | 71.41 | 48.47 | 39.55 | 74.43 | 32.72 | 26.50 | 32.51 | 21.18 | |
Training time (s) | 5.19 | 15.14 | 40.47 | 43.62 | 39.41 | 37.00 | 35.25 | 37.21 | 30.19 | |
Testing time (s) | 0.96 | 5.38 | 8.79 | 12.97 | 10.41 | 11.56 | 11.89 | 10.97 | 7.53 | |
KSC | Parameters (M) | / | 2.03 | 0.31 | 0.93 | 0.52 | 0.33 | 0.35 | 0.32 | 0.07 |
FLOPs (M) | / | 123.55 | 83.27 | 86.47 | 128.67 | 56.15 | 44.35 | 55.95 | 36.36 | |
Training time (s) | 0.67 | 12.97 | 21.49 | 29.03 | 22.76 | 17.10 | 27.70 | 17.02 | 15.80 | |
Testing time (s) | 0.06 | 1.05 | 1.52 | 1.79 | 1.53 | 1.39 | 1.25 | 1.64 | 1.08 |
BS | Parameters (M) | / | 1.76 | 0.27 | 0.65 | 0.44 | 0.28 | 0.30 | 0.27 | 0.06 |
FLOPs (M) | / | 101.82 | 68.77 | 64.92 | 106.07 | 46.39 | 36.91 | 46.18 | 30.04 | |
Training time (s) | 0.51 | 4.50 | 9.32 | 15.01 | 11.72 | 10.06 | 13.22 | 9.53 | 5.64 | |
Testing time (s) | 0.04 | 1.01 | 1.33 | 1.64 | 1.99 | 1.78 | 1.81 | 1.66 | 1.15 | |
HS | Parameters (M) | / | 1.74 | 0.27 | 0.63 | 0.43 | 0.27 | 0.29 | 0.26 | 0.06 |
FLOPs (M) | / | 100.38 | 67.81 | 63.80 | 104.56 | 45.74 | 36.42 | 45.53 | 29.62 | |
Training time (s) | 3.21 | 9.67 | 20.91 | 21.85 | 24.63 | 22.18 | 22.17 | 23.37 | 13.39 | |
Testing time (s) | 0.57 | 1.89 | 2.03 | 2.52 | 2.78 | 2.85 | 3.75 | 2.83 | 1.23 | |
SA | Parameters (M) | / | 2.47 | 0.39 | 1.53 | 0.66 | 0.42 | 0.44 | 0.41 | 0.08 |
FLOPs (M) | / | 158.31 | 106.47 | 126.12 | 164.82 | 71.77 | 56.25 | 71.57 | 42.26 | |
Training time (s) | 41.46 | 59.77 | 222.32 | 424.52 | 298.42 | 272.88 | 326.39 | 264.63 | 120.11 | |
Testing time (s) | 6.36 | 10.77 | 12.47 | 16.34 | 20.15 | 28.21 | 27.61 | 26.23 | 15.81 |
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%) |
---|---|---|---|---|
PU | DB | 0.29 | 49.49 | 98.99 |
WDB | 0.05 | 25.13 | 98.02 | |
OSDB | 0.05 | 21.18 | 98.83 |
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%) |
---|---|---|---|---|
KSC | DB | 0.47 | 84.88 | 96.74 |
WDB | 0.07 | 43.12 | 95.93 | |
OSDB | 0.07 | 36.36 | 96.09 |
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%) |
---|---|---|---|---|
BS | DB | 0.41 | 70.14 | 96.89 |
WDB | 0.07 | 35.63 | 96.28 | |
OSDB | 0.06 | 30.04 | 96.41 |
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%) |
---|---|---|---|---|
HS | DB | 0.40 | 69.15 | 96.93 |
WDB | 0.07 | 35.13 | 96.11 | |
OSDB | 0.06 | 29.62 | 96.28 |
Dataset | Block | Parameters (M) | FLOPs (M) | OA (%) |
---|---|---|---|---|
SA | DB | 0.60 | 108.48 | 99.01 |
WDB | 0.09 | 55.12 | 98.75 | |
OSDB | 0.08 | 42.26 | 98.80 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Pan, H.; Liu, M.; Ge, H.; Wang, L. One-Shot Dense Network with Polarized Attention for Hyperspectral Image Classification. Remote Sens. 2022, 14, 2265. https://doi.org/10.3390/rs14092265