EL-NAS: Efficient Lightweight Attention Cross-Domain Architecture Search for Hyperspectral Image Classification
"> Figure 1
<p>The search framework of proposed EL-NAS for HSI classification.</p> "> Figure 2
<p>The whole modular search space and searching network of the proposed EL-NAS.</p> "> Figure 3
<p>The principle of 3D convolution decomposition.</p> "> Figure 4
<p>The performance impact of the number of skip connections on Pavia.</p> "> Figure 5
<p>The searching workflow works with edge decision with dynamic regularization.</p> "> Figure 6
<p>IN. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth map.</p> "> Figure 7
<p>UP. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth map.</p> "> Figure 8
<p>HU. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth map.</p> "> Figure 9
<p>SA. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth map.</p> "> Figure 10
<p>IMDB. (<b>a</b>) False-color image. (<b>b</b>) Ground-truth map.</p> "> Figure 11
<p>The number of skip connections in the operations selected in ten independent searches.</p> "> Figure 12
<p>Best cell architecture on different dataset settings. (<b>a</b>) IN; (<b>b</b>) UP; (<b>c</b>) HU.</p> "> Figure 13
<p>Classification maps for IN. (<b>a</b>) False-color image; (<b>b</b>) ground-truth map; (<b>c</b>) SVMCK; (<b>d</b>) 2D-CNN; (<b>e</b>) 3D-CNN; (<b>f</b>) DFFN; (<b>g</b>) SSRN; (<b>h</b>) DcCapsGAN; (<b>i</b>) AUTO-CNN; (<b>j</b>) LAMFN; (<b>k</b>) EL-NAS.</p> "> Figure 14
<p>Classification maps for UP. (<b>a</b>) False-color image; (<b>b</b>) ground-truth map; (<b>c</b>) SVMCK; (<b>d</b>) 2D-CNN; (<b>e</b>) 3D-CNN; (<b>f</b>) DFFN; (<b>g</b>) SSRN; (<b>h</b>) DcCapsGAN; (<b>i</b>) AUTO-CNN; (<b>j</b>) LMAFN; (<b>k</b>) EL-NAS.</p> "> Figure 15
<p>Classification maps for HU. (<b>a</b>) False-color image; (<b>b</b>) ground-truth map; (<b>c</b>) SVM; (<b>d</b>) SVMCK; (<b>e</b>) 2D-CNN; (<b>f</b>) 3D-CNN; (<b>g</b>) DFFN; (<b>h</b>) SSRN; (<b>i</b>) DcCapsGAN; (<b>j</b>) AUTO-CNN; (<b>k</b>) EL-NAS.</p> ">
Abstract
1. Introduction
- EL-NAS introduces lightweight modules, attention modules, and 3D decomposition convolution to automatically realize efficient DL structure design for hyperspectral image classification. The efficient automatic search strategy thus enables a task-driven design of DL structures for datasets from different acquisition sensors or scenarios.
- EL-NAS achieves remarkable search efficiency through an edge decision strategy that realizes a lightweight attention DL structure by (i) embedding proven lightweight 3D decomposition convolutions and attention modules in the search space; (ii) making edge decisions based on the entropy of the operation distribution estimated over non-skip operations; and (iii) adopting a dynamic regularization loss, based on the impact of the number of skip connections, to further improve search performance. The edge decision strategy thereby preserves the most effective and lightweight operations (a minimal sketch of (ii) and (iii) follows this list).
- Compared with several state-of-the-art methods in comprehensive experiments on accuracy, classification maps, parameter counts, and execution cost, EL-NAS incurs lower GPU search cost, fewer parameters, and lower computational cost. Experimental results on three real HSI datasets demonstrate that EL-NAS searches out more lightweight network structures and achieves more robust classification results, even under data-independent and sensor-independent scenarios.
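The following is a minimal sketch, not the authors' implementation, of how points (ii) and (iii) could be computed from DARTS-style architecture parameters. The tensor layout, the `skip_idx` column index, and the `lam` schedule are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def edge_selection_certainty(alpha, skip_idx):
    """Per-edge certainty of the operation choice, estimated over non-skip ops.

    alpha: [num_edges, num_ops] architecture parameters (assumed layout).
    Low entropy of the non-skip operation distribution => high certainty.
    """
    keep = [i for i in range(alpha.size(1)) if i != skip_idx]
    p = F.softmax(alpha[:, keep], dim=-1)            # distribution over non-skip ops
    entropy = -(p * (p + 1e-12).log()).sum(dim=-1)   # per-edge entropy
    max_entropy = torch.log(torch.tensor(float(len(keep))))
    return 1.0 - entropy / max_entropy               # 1.0 = fully decided edge

def skip_regularization(alpha, skip_idx, lam):
    """Dynamic penalty on the softmax mass assigned to skip connections.

    lam would grow with the number of skip connections currently favored;
    the exact EL-NAS schedule is not reproduced here.
    """
    p = F.softmax(alpha, dim=-1)
    return lam * p[:, skip_idx].sum()
```

During the search, edges whose certainty exceeds a threshold would be fixed to their strongest non-skip operation, and the penalty would be added to the validation loss before each architecture-parameter update.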
2. Related Work
2.1. GD-Based NAS
2.2. NAS for HSI
3. Methodology
3.1. Modular Search Space
- (1) Lightweight module (i.e., the inverted residual block of MobileNetV2 [63], IR), which combines pointwise convolution and depthwise separable convolution. The inverted residual module first increases the number of channels via pointwise convolution and then performs depthwise separable convolution in the higher-dimensional space, extracting better channel features without significantly increasing the model parameters or computational cost.
- (2) Attention module (i.e., Squeeze-and-Excitation [64], SE), which adaptively learns per-channel weights using global pooling and fully connected layers. Hundreds of spectral channels are a defining characteristic of hyperspectral images, and different channels contribute differently to the classification task, so a channel attention module is essential, as verified in the experimental section.
- (3) 3D decomposition convolution. In this paper, a 3D convolution is decomposed into two convolutions that process spectral and spatial information, respectively. The principle is shown in Figure 3: a 3D convolution with a k × k × k kernel is decomposed into a 1 × 1 × k (spectral) convolution and a k × k × 1 (spatial) convolution. This simplifies the complexity of a single candidate operation, allows the search space to yield more model possibilities, and significantly reduces the model parameters. (Illustrative sketches of these three module families follow the list.)
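As a concrete illustration of the three candidate module families above, the following PyTorch sketch shows plausible implementations; the expansion ratio, reduction ratio, kernel size k, and the 3D tensor layout (batch, channels, bands, height, width) are assumptions rather than the paper's exact settings.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """(1) Lightweight IR module: pointwise expansion, depthwise conv, projection."""
    def __init__(self, c, expand=4):
        super().__init__()
        h = c * expand
        self.block = nn.Sequential(
            nn.Conv3d(c, h, 1, bias=False), nn.BatchNorm3d(h), nn.ReLU6(inplace=True),
            nn.Conv3d(h, h, 3, padding=1, groups=h, bias=False),   # depthwise
            nn.BatchNorm3d(h), nn.ReLU6(inplace=True),
            nn.Conv3d(h, c, 1, bias=False), nn.BatchNorm3d(c),     # linear projection
        )

    def forward(self, x):
        return x + self.block(x)   # residual over the low-dimensional ends

class SEAttention(nn.Module):
    """(2) SE attention: per-channel weights from global pooling + FC layers."""
    def __init__(self, c, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, max(c // reduction, 1)), nn.ReLU(inplace=True),
            nn.Linear(max(c // reduction, 1), c), nn.Sigmoid(),
        )

    def forward(self, x):                         # x: [B, C, bands, H, W]
        w = self.fc(x.mean(dim=(2, 3, 4)))        # squeeze: global average pooling
        return x * w[:, :, None, None, None]      # excite: reweight channels

def spectral_conv3d(c_in, c_out, k=3):
    """(3) SPE branch: convolve along the band axis only."""
    return nn.Conv3d(c_in, c_out, (k, 1, 1), padding=(k // 2, 0, 0), bias=False)

def spatial_conv3d(c_in, c_out, k=3):
    """(3) SPA branch: convolve over the spatial axes only."""
    return nn.Conv3d(c_in, c_out, (1, k, k), padding=(0, k // 2, k // 2), bias=False)
```

A patch cube such as `torch.randn(2, 16, 20, 9, 9)` would pass through any of these operations with its shape preserved, which is what lets them act as interchangeable candidates on a supernet edge.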
3.2. Regularization-Based Edge-Decision Search Strategy
3.2.1. Bi-Level Optimization
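For reference, GD-based NAS of this kind typically instantiates the DARTS bi-level objective (Liu et al., cited in Section 2.1), optimizing architecture parameters on validation data and network weights on training data; the formulation below is the standard one that this subsection presumably specializes:

```latex
\min_{\alpha}\; \mathcal{L}_{\mathrm{val}}\!\left(w^{*}(\alpha),\, \alpha\right)
\qquad \text{s.t.} \qquad
w^{*}(\alpha) \;=\; \arg\min_{w}\; \mathcal{L}_{\mathrm{train}}\!\left(w,\, \alpha\right)
```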
3.2.2. Edge Decision Criterion
Edge Importance
Selection Certainty
3.2.3. Dynamic Regularization (DR)
3.3. Performance Evaluation
Algorithm 1. The overall procedure of the proposed EL-NAS.
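The listing for Algorithm 1 did not survive extraction; the following is a hedged reconstruction of its overall flow, assembled only from the method description above (bi-level optimization, entropy-based edge decision, dynamic regularization):

Input: supernet built from the modular search space, architecture parameters α, training and validation sets.
1. while not all edges are decided do
2.   update the network weights w on the training set (inner level);
3.   update α on the validation set with the dynamic regularization loss (outer level);
4.   estimate edge importance and selection certainty over the non-skip operations;
5.   fix each edge whose selection certainty exceeds the threshold to its best non-skip operation;
6. end while
7. derive the final cell architecture from the decided edges, then train it from scratch and evaluate.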
4. Experiments
4.1. Hyperspectral Datasets
4.2. Experimental Configuration
4.3. Search Space Configuration
- Lightweight modules (inverted residual modules, IR).
- 3D decomposition convolutions (spatial convolution, SPA, and spectral convolution, SPE).
- Attention modules (SE).
- Skip connection.
- None (zero operation).
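A DARTS-style operation registry matching the candidate list above might look as follows; the names and constructors (including the module sketches from Section 3.1) are illustrative, with `Zero` and `nn.Identity` standing in for "None" and the skip connection:

```python
import torch.nn as nn

class Zero(nn.Module):
    """'None' candidate: outputs zeros, effectively removing the edge."""
    def forward(self, x):
        return x * 0.0

# One constructor per candidate operation, keyed by the names used in the text.
OPS = {
    "IR":   lambda c: InvertedResidual(c),      # lightweight inverted residual
    "SPA":  lambda c: spatial_conv3d(c, c),     # spatial decomposition conv
    "SPE":  lambda c: spectral_conv3d(c, c),    # spectral decomposition conv
    "SE":   lambda c: SEAttention(c),           # channel attention
    "skip": lambda c: nn.Identity(),            # skip connection
    "none": lambda c: Zero(),                   # no connection
}
```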
4.4. Hyperparameter Settings
4.5. Ablation Study
4.5.1. Different Candidate Operations
4.5.2. Strategy Optimization Scheme
4.6. Architecture Evaluation
4.7. Cross-Domain Experiments
4.7.1. Cross-Datasets Architecture Search of EL-NAS
4.7.2. Cross-Sensors Architecture Search of EL-NAS
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lacar, F.M.; Lewis, M.M.; Grierson, I.T. Use of hyperspectral imagery for mapping grape varieties in the Barossa Valley, South Australia. In Proceedings of the Geoscience and Remote Sensing Symposium, Sydney, Australia, 9–13 July 2001. [Google Scholar]
- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
- Zhang, F.; Wu, L.; Zhu, D.; Liu, Y. Social sensing from street-level imagery: A case study in learning spatio-temporal urban mobility patterns. ISPRS J. Photogramm. Remote Sens. 2019, 153, 48–58. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Hyperspectral remote sensing image subpixel target detection based on supervised metric learning. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4955–4965. [Google Scholar] [CrossRef]
- Zhong, Y.; Wang, X.; Xu, Y.; Wang, S.; Jia, T.; Hu, X.; Zhao, J.; Wei, L.; Zhang, L. Mini-UAV-Borne Hyperspectral Remote Sensing: From Observation and Processing to Applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62. [Google Scholar] [CrossRef]
- Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2012, 51, 217–231. [Google Scholar] [CrossRef]
- Yi, C.; Nasrabadi, N.M.; Tran, T.D. Classification for hyperspectral imagery based on sparse representation. In Proceedings of the Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010. [Google Scholar]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
- Peng, J.; Zhou, Y.; Chen, C. Region-Kernel-Based Support Vector Machines for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4810–4824. [Google Scholar] [CrossRef]
- Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362. [Google Scholar] [CrossRef]
- Wang, J.; Jiao, L.; Liu, H.; Yang, S. Hyperspectral Image Classification by Spatial–Spectral Derivative-Aided Kernel Joint Sparse Representation. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2485–2500. [Google Scholar] [CrossRef]
- Wang, J.; Jiao, L.; Shuang, W.; Hou, B.; Fang, L. Adaptive Nonlocal Spatial–Spectral Kernel for Hyperspectral Imagery Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1–16. [Google Scholar] [CrossRef]
- Saxena, L. Recent advances in deep learning. Comput. Rev. 2016, 57, 563–564. [Google Scholar]
- Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 2017, 8, 438–447. [Google Scholar] [CrossRef]
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
- Slavkovikj, V.; Verstockt, S.; Neve, W.D.; Hoecke, S.V.; Walle, R. Hyperspectral Image Classification with Convolutional Neural Networks. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015. [Google Scholar]
- He, M.; Bo, L.; Chen, H. Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017. [Google Scholar]
- Mou, L.; Ghamisi, P.; Zhu, X.X. Unsupervised Spectral-Spatial Feature Learning via Deep Residual Conv-Deconv Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 391–406. [Google Scholar] [CrossRef]
- Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral Image Classification With Deep Feature Fusion Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
- Xi, B.; Li, J.; Diao, Y.; Li, Y.; Li, Z.; Huang, Y.; Chanussot, J. DGSSC: A Deep Generative Spectral-Spatial Classifier for Imbalanced Hyperspectral Imagery. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1535–1548. [Google Scholar] [CrossRef]
- Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, C. Hyperspectral Classification Based on Lightweight 3-D-CNN with Transfer Learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5813–5828. [Google Scholar] [CrossRef]
- Wang, J.; Guo, S.; Huang, R.; Li, L.; Jiao, L. Dual-Channel Capsule Generation Adversarial Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16. [Google Scholar] [CrossRef]
- Yang, X.; Cao, W.; Lu, Y.; Zhou, Y. QTN: Quaternion Transformer Network for Hyperspectral Image Classification. IEEE Trans. Circuits Syst. Video Technol. 2023. [Google Scholar] [CrossRef]
- Wang, J.; Huang, R.; Guo, S.; Li, L.; Zhu, M.; Yang, S.; Jiao, L. NAS-Guided Lightweight Multiscale Attention Fusion Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8754–8767. [Google Scholar] [CrossRef]
- Radosavovic, I.; Kosaraju, R.P.; Girshick, R.; He, K.; Dollar, P. Designing Network Design Spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Liu, B.; Yu, X.; Yu, A.; Wan, G. Deep convolutional recurrent neural network with transfer learning for hyperspectral image classification. J. Appl. Remote Sens. 2018, 12, 026028. [Google Scholar] [CrossRef]
- He, X.; Chen, Y.; Ghamisi, P. Heterogeneous transfer learning for hyperspectral image classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3246–3263. [Google Scholar] [CrossRef]
- Liu, X.; Hu, Q.; Cai, Y.; Cai, Z. Extreme learning machine-based ensemble transfer learning for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3892–3902. [Google Scholar] [CrossRef]
- Jaderberg, M.; Vedaldi, A.; Zisserman, A. Speeding up Convolutional Neural Networks with Low Rank Expansions. arXiv 2014, arXiv:1405.3866. [Google Scholar]
- Hinton, G.; Vinyals, O.; Dean, J. Distilling the Knowledge in a Neural Network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Wu, H.; Xiao, B.; Codella, N.; Liu, M.; Dai, X.; Yuan, L.; Zhang, L. Cvt: Introducing convolutions to vision transformers. arXiv 2021, arXiv:2103.15808. [Google Scholar]
- Zoph, B.; Le, Q.V. Neural Architecture Search with Reinforcement Learning. arXiv 2016, arXiv:1611.01578. [Google Scholar]
- Pham, H.; Guan, M.; Zoph, B.; Le, Q.; Dean, J. Efficient neural architecture search via parameters sharing. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 July 2018; pp. 4095–4104. [Google Scholar]
- Baker, B.; Gupta, O.; Naik, N.; Raskar, R. Designing neural network architectures using reinforcement learning. arXiv 2016, arXiv:1611.02167. [Google Scholar]
- Real, E.; Moore, S.; Selle, A.; Saxena, S.; Suematsu, Y.L.; Tan, J.; Le, Q.V.; Kurakin, A. Large-scale evolution of image classifiers. In Proceedings of the International Conference on Machine Learning, PMLR, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 2902–2911. [Google Scholar]
- Liu, H.; Simonyan, K.; Vinyals, O.; Fernando, C.; Kavukcuoglu, K. Hierarchical representations for efficient architecture search. arXiv 2017, arXiv:1711.00436. [Google Scholar]
- Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710. [Google Scholar]
- Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2820–2828. [Google Scholar]
- Liu, H.; Simonyan, K.; Yang, Y. Darts: Differentiable architecture search. arXiv 2018, arXiv:1806.09055. [Google Scholar]
- Li, C.; Ning, J.; Hu, H.; He, K. Enhancing the Robustness, Efficiency, and Diversity of Differentiable Architecture Search. arXiv 2022, arXiv:2204.04681. [Google Scholar]
- Xia, X.; Xiao, X.; Wang, X.; Zheng, M. Progressive Automatic Design of Search Space for One-Shot Neural Architecture Search. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 2455–2464. [Google Scholar]
- Liu, Y.; Li, T.; Zhang, P.; Yan, Y. Improved conformer-based end-to-end speech recognition using neural architecture search. arXiv 2021, arXiv:2104.05390. [Google Scholar]
- Li, H.; Wu, G.; Zheng, W.S. Combined depth space based architecture search for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 6729–6738. [Google Scholar]
- Zhang, H.; Gong, C.; Bai, Y.; Bai, Z.; Li, Y. 3-D-ANAS: 3-D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–19. [Google Scholar] [CrossRef]
- Xue, X.; Zhang, H.; Fang, B.; Bai, Z.; Li, Y. Grafting Transformer Module on Automatically Designed ConvNet for Hyperspectral Image Classification. arXiv 2021, arXiv:2110.11084. [Google Scholar]
- Liang, H.; Zhang, S.; Sun, J.; He, X.; Huang, W.; Zhuang, K.; Li, Z. Darts+: Improved differentiable architecture search with early stopping. arXiv 2019, arXiv:1909.06035. [Google Scholar]
- Xu, Y.; Xie, L.; Zhang, X.; Chen, X.; Qi, G.J.; Tian, Q.; Xiong, H. PC-DARTS: Partial channel connections for memory-efficient architecture search. arXiv 2019, arXiv:1907.05737. [Google Scholar]
- Chu, X.; Wang, X.; Zhang, B.; Lu, S.; Wei, X.; Yan, J. DARTS-: Robustly stepping out of performance collapse without indicators. arXiv 2020, arXiv:2009.01027. [Google Scholar]
- Li, G.; Qian, G.; Delgadillo, I.C.; Muller, M.; Thabet, A.; Ghanem, B. Sgas: Sequential greedy architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1620–1630. [Google Scholar]
- Chu, X.; Zhang, B.; Xu, R. Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 12239–12248. [Google Scholar]
- Hou, P.; Jin, Y.; Chen, Y. Single-DARTS: Towards Stable Architecture Search. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 373–382. [Google Scholar]
- Zela, A.; Elsken, T.; Saikia, T.; Marrakchi, Y.; Brox, T.; Hutter, F. Understanding and Robustifying Differentiable Architecture Search. arXiv 2019, arXiv:1909.09656. [Google Scholar]
- Ye, P.; Li, B.; Li, Y.; Chen, T.; Fan, J.; Ouyang, W. beta-DARTS: Beta-Decay Regularization for Differentiable Architecture Search. arXiv 2022, arXiv:2203.01665. [Google Scholar]
- Huang, L.; Sun, S.; Zeng, J.; Wang, W.; Pang, W.; Wang, K. U-DARTS: Uniform-space differentiable architecture search. Inf. Sci. 2023, 628, 339–349. [Google Scholar] [CrossRef]
- Wang, W.; Zhang, X.; Cui, H.; Yin, H.; Zhang, Y. FP-DARTS: Fast parallel differentiable neural architecture search for image classification. Pattern Recognit. 2023, 136, 109193. [Google Scholar] [CrossRef]
- Zhang, C.; Liu, X.; Wang, G.; Cai, Z. Particle Swarm Optimization Based Deep Learning Architecture Search for Hyperspectral Image Classification. In Proceedings of the IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 509–512. [Google Scholar]
- Liu, X.; Zhang, C.; Cai, Z.; Yang, J.; Zhou, Z.; Gong, X. Continuous Particle Swarm Optimization-Based Deep Learning Architecture Search for Hyperspectral Image Classification. Remote Sens. 2021, 13, 1082. [Google Scholar] [CrossRef]
- Chen, Y.; Zhu, K.; Zhu, L.; He, X.; Ghamisi, P.; Benediktsson, J.A. Automatic design of convolutional neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7048–7066. [Google Scholar] [CrossRef]
- Zhan, L.; Fan, J.; Ye, P.; Cao, J. A2S-NAS: Asymmetric Spectral-Spatial Neural Architecture Search for Hyperspectral Image Classification. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–9 June 2023; pp. 1–5. [Google Scholar]
- Cao, C.; Xiang, H.; Song, W.; Yi, H.; Xiao, F.; Gao, X. Lightweight Multiscale Neural Architecture Search With Spectral–Spatial Attention for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
| Class | IN Class Name | Train | Test | UP Class Name | Train | Test | HU Class Name | Train | Test |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Alfalfa | 2 | 51 | Asphalt | 67 | 6963 | Healthy grass | 38 | 1314 |
| 2 | Corn-notill | 43 | 1571 | Meadows | 187 | 19,582 | Stressed grass | 38 | 1317 |
| 3 | Corn-mintill | 25 | 913 | Gravel | 21 | 2204 | Synthetic grass | 21 | 732 |
| 4 | Corn | 8 | 261 | Trees | 31 | 3218 | Trees | 38 | 1307 |
| 5 | Grass-pasture | 15 | 532 | Sheets | 14 | 1413 | Soil | 38 | 1305 |
| 6 | Grass-trees | 22 | 803 | Bare soil | 51 | 5281 | Water | 10 | 342 |
| 7 | Grass-pasture-mowed | 1 | 31 | Bitumen | 14 | 1397 | Residential | 39 | 1332 |
| 8 | Hay-windrowed | 15 | 526 | Bricks | 37 | 3867 | Commercial | 38 | 1307 |
| 9 | Oats | 1 | 22 | Shadows | 10 | 995 | Road | 38 | 1315 |
| 10 | Soybean-notill | 30 | 1070 | | | | Highway | 37 | 1289 |
| 11 | Soybean-mintill | 74 | 2701 | | | | Railway | 38 | 1297 |
| 12 | Soybean-clean | 18 | 653 | | | | Parking Lot 1 | 37 | 1295 |
| 13 | Wheat | 7 | 226 | | | | Parking Lot 2 | 15 | 493 |
| 14 | Woods | 38 | 1392 | | | | Tennis Court | 13 | 450 |
| 15 | Buildings-Grass-Trees-Drives | 12 | 425 | | | | Running Track | 20 | 693 |
| 16 | Stone-Steel-Towers | 3 | 103 | | | | | | |
| Total | | 314 | 9935 | | 432 | 42,344 | | 458 | 15,788 |
Candidate Operations | OA | AA | KAPPA
---|---|---|---|
BASE(dilconv+sepconv) | |||
IR | |||
IR+BASE | |||
IR+SE | |||
IR+pointconv | |||
IR+SPA | |||
IR+SPE | |||
IR+SPA+SPE | |||
MSS |
Exp | 1 | 2 | 3 | Mean | 1 | 2 | 3 | Mean | 1 | 2 | 3 | Mean | 1 | 2 | 3 | Mean |
OA(%) | 98.30 | 98.36 | 98.27 | 98.31 | 98.46 | 98.47 | 98.41 | 98.45 | 98.79 | 98.85 | 98.80 | 98.81 | 98.80 | 98.86 | 98.81 | 98.82 |
AA(%) | 97.49 | 97.68 | 97.76 | 97.64 | 98.19 | 97.97 | 97.90 | 98.02 | 98.24 | 98.44 | 98.40 | 98.36 | 98.29 | 98.44 | 98.41 | 98.38 |
KAPPA(%) | 97.73 | 97.80 | 97.81 | 97.78 | 98.24 | 98.02 | 98.41 | 98.22 | 98.39 | 98.46 | 98.40 | 98.42 | 98.39 | 98.47 | 98.41 | 98.42 |
Class | SVMCK | 2D-CNN | 3D-CNN | DFFN | SSRN | DcCapsGAN | Auto-CNN | LMAFN | EL-NAS |
---|---|---|---|---|---|---|---|---|---|
1 | |||||||||
2 | |||||||||
3 | |||||||||
4 | |||||||||
5 | |||||||||
6 | |||||||||
7 | |||||||||
8 | |||||||||
9 | |||||||||
10 | |||||||||
11 | |||||||||
12 | |||||||||
13 | |||||||||
14 | |||||||||
15 | |||||||||
16 | |||||||||
OA(%) | |||||||||
AA(%) | |||||||||
KAPPA(%) | |||||||||
PARAM | - | 186,096 | 9068 | 374,880 | 376,892 | 33,521,328 | 176,299 | 148,651 | 274,613 |
Class | SVMCK | 2D-CNN | 3D-CNN | DFFN | SSRN | DcCapsGAN | Auto-CNN | LMAFN | EL-NAS |
---|---|---|---|---|---|---|---|---|---|
1 | |||||||||
2 | |||||||||
3 | |||||||||
4 | |||||||||
5 | |||||||||
6 | |||||||||
7 | |||||||||
8 | |||||||||
9 | |||||||||
OA(%) | |||||||||
AA(%) | |||||||||
KAPPA(%) | |||||||||
PARAM | - | 185,193 | 5253 | 443,929 | 229,261 | 21,468,326 | 156,101 | 140,260 | 175,657 |
Class | SVMCK | 2D-CNN | 3D-CNN | DFFN | SSRN | DcCapsGAN | Auto-CNN | LMAFN | EL-NAS |
---|---|---|---|---|---|---|---|---|---|
1 | |||||||||
2 | |||||||||
3 | |||||||||
4 | |||||||||
5 | |||||||||
6 | |||||||||
7 | |||||||||
8 | |||||||||
9 | |||||||||
10 | |||||||||
11 | |||||||||
12 | |||||||||
13 | |||||||||
14 | |||||||||
15 | |||||||||
OA(%) | |||||||||
AA(%) | |||||||||
KAPPA(%) | |||||||||
PARAM | - | 185,967 | 8523 | 375,103 | 290,851 | 27,055,608 | 172,373 | 143,658 | 238,292 |
| Model | IN Parameters | IN Depth | UP Parameters | UP Depth | HU Parameters | HU Depth |
|---|---|---|---|---|---|---|
2D-CNN | 186,096 | 3 | 185,193 | 3 | 185,967 | 3 |
3D-CNN | 9068 | 3 | 5253 | 3 | 8523 | 3 |
DFFN | 374,880 | 27 | 443,929 | 33 | 375,103 | 27 |
SSRN | 376,892 | 13 | 229,261 | 13 | 290,851 | 13 |
DcCapsGAN | 33,521,328 | / | 21,468,326 | / | 27,055,608 | / |
LMAFN | 148,651 | 57 | 140,260 | 57 | 143,658 | 57 |
EL-NAS | 274,613 | 13 | 175,657 | 13 | 238,292 | 13 |
| Model | IN Searching | IN Training | IN Test | UP Searching | UP Training | UP Test | HU Searching | HU Training | HU Test |
|---|---|---|---|---|---|---|---|---|---|
DcCapsGAN | - | 148.07 | 22.83 | - | 68.49 | 41.08 | - | 125.59 | 23.38 |
2D-CNN | - | 18.43 | 4.96 | - | 10.53 | 11.21 | - | 16.55 | 5.32 |
3D-CNN | - | 50.54 | 3.51 | - | 23.53 | 4.56 | - | 43.29 | 3.03 |
DFFN | - | 337.72 | 1.10 | - | 376.53 | 3.98 | - | 350.40 | 1.54 |
SSRN | - | 227.34 | 10.47 | - | 290.77 | 27.38 | - | 350.66 | 11.55 |
LMAFN | - | 171.23 | 0.82 | - | 156.43 | 1.73 | - | 161.13 | 0.83 |
Auto-CNN | 82.43 | 86.22 | 0.82 | 73.56 | 108.98 | 2.88 | 89.41 | 132.00 | 1.17 |
EL-NAS | 68.22 | 87.81 | 0.88 | 62.66 | 117.81 | 3.42 | 71.39 | 147.43 | 1.28 |
Evaluate Data | SA 10 | SA 20 | IN 10 | IN 20 | ||||
---|---|---|---|---|---|---|---|---|
Search Data | SA 10 | IN 10% | SA 20 | IN 10% | IN 10 | SA 10% | IN 20 | SA 10% |
OA(%) | 94.10 | 94.70 | 95.55 | 95.99 | 87.95 | 88.60 | 90.00 | 90.39 |
AA(%) | 96.00 | 96.25 | 96.80 | 96.73 | 87.90 | 88.51 | 86.30 | 86.41 |
KAPPA(%) | 94.20 | 94.07 | 95.60 | 95.50 | 86.80 | 86.94 | 88.10 | 88.97 |
Evaluate Data | IMDB 10 | IMDB 20 | IN 10 | IN 20 | UP 10 | UP 20 | SA 10 | SA 20 | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Search Data | IMDB 10 | HU 10% | IMDB 20 | HU 10% | IN 10 | HU 10% | IN 20 | HU 10% | UP 10 | HU 10% | UP 20 | HU 10% | SA 10 | HU 10% | SA 20 | HU 10% |
OA(%) | 97.1 | 97.8 | 99.0 | 99.3 | 86.9 | 88.7 | 90.2 | 89.0 | 91.5 | 91.7 | 91.9 | 92.1 | 94.0 | 95.8 | 96.7 | 96.5 |
AA(%) | 95.2 | 96.4 | 97.3 | 97.6 | 85.8 | 87.6 | 87.3 | 86.1 | 88.8 | 87.2 | 87.8 | 88.5 | 97.0 | 96.9 | 97.2 | 98.1 |
KAPPA(%) | 96.9 | 97.8 | 98.9 | 99.2 | 84.9 | 87.1 | 89.3 | 87.7 | 87.9 | 89.0 | 89.1 | 90.2 | 93.7 | 95.1 | 94.4 | 95.3 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).