Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network
"> Figure 1
<p>(<b>a</b>) 2D convolution operation, as per Formula (1); (<b>b</b>) 3D convolution operation, as per Formula (2).</p> "> Figure 2
<p>Illustration of the three-dimensional convolutional neural network (3D-CNN)-based hyperspectral imagery (HSI) classification framework.</p> "> Figure 3
<p>(<b>a</b>) Input HSI; (<b>b</b>) Feature images extracted from C1; (<b>c</b>) Feature images extracted from C2.</p> "> Figure 4
<p>(<b>a</b>) False-color composite; (<b>b</b>) Ground truth, black area represents unlabeled pixels.</p> "> Figure 5
<p>(<b>a</b>) False-color composite; (<b>b</b>) Ground truth; black area denotes unlabeled pixels.</p> "> Figure 6
<p>(<b>a</b>) False-color composite; (<b>b</b>) Ground truth; black area denotes unlabeled pixels.</p> "> Figure 7
<p>Classification results of Pavia University scene. (<b>a</b>) False-color composite; (<b>b</b>) Ground truth; (<b>c</b>) SAE-LR, OA = 98.46%; (<b>d</b>) DBN-LR, OA = 98.99%; (<b>e</b>) 2D-CNN, OA = 99.03%; (<b>f</b>) 3D-CNN, OA = 99.39%.</p> "> Figure 8
<p>Zoom of classified region. (<b>a</b>) False-color composite; (<b>b</b>) Ground truth; (<b>c</b>) SAE-LR, OA = 98.46%; (<b>d</b>) DBN-LR, OA = 98.99%; (<b>e</b>) 2D-CNN, OA = 99.03%; (<b>f</b>) 3D-CNN, OA = 99.39%.</p> "> Figure 9
<p>Convergence curves of training samples.</p> "> Figure 10
<p>Classification results of Botswana scene. (<b>a</b>) False-color composite; (<b>b</b>) Ground truth; (<b>c</b>) SAE-LR, OA = 98.49%; (<b>d</b>) DBN-LR, OA = 98.81%; (<b>e</b>) 2D-CNN, OA = 98.88%; (<b>f</b>) 3D-CNN, OA = 99.55%.</p> "> Figure 11
<p>Zoom of classified region. (<b>a</b>) False-color composite; (<b>b</b>) Ground truth; (<b>c</b>) SAE-LR, OA = 98.49%; (<b>d</b>) DBN-LR, OA = 98.71%; (<b>e</b>) 2D-CNN, OA = 98.88%; (<b>f</b>) 3D-CNN, OA = 99.55%.</p> "> Figure 12
<p>Classification results of Indian Pines scene. (<b>a</b>) False-color composite; (<b>b</b>) Ground truth; (<b>c</b>) SAE-LR, OA = 93.98%; (<b>d</b>) DBN-LR, OA = 95.91%; (<b>e</b>) 2D-CNN, OA = 95.97%; (<b>f</b>) 3D-CNN, OA = 99.07%.</p> "> Figure 13
<p>Zoom of classified region. (<b>a</b>) False-color composite; (<b>b</b>) Ground truth; (<b>c</b>) SAE-LR, OA = 93.98%; (<b>d</b>) DBN-LR, OA = 95.91%; (<b>e</b>) 2D-CNN, OA = 95.97%; (<b>f</b>) 3D-CNN, OA = 99.07%.</p> "> Figure 14
<p>Overall accuracy by varying the number of kernels for the two convolution layers.</p> "> Figure 15
<p>Influence of the spatial size of sample. (<b>a</b>) Pavia University scene; (<b>b</b>) Botswana scene; (<b>c</b>) Indian Pines scene.</p> "> Figure 16
<p>Influence of sample proportion. (<b>a</b>) Pavia University scene; (<b>b</b>) Botswana scene; (<b>c</b>) Indian Pines scene.</p> ">
Abstract
1. Introduction
2. Proposed Method
2.1. 3D Convolution Operation
2.2. 3D-CNN-Based HSI Classification
2.3. Feature Analysis
- (1) Different feature images are activated by different object types. For example, the eight feature images in Figure 3c are each activated by essentially different image content.
- (2) Different layers encode different feature types. At higher layers, the computed features are more abstract and more discriminative.
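The key operation behind these features is the 3D convolution of Figure 1b, where a kernel slides jointly over the two spatial dimensions and the spectral dimension of an HSI patch. As a minimal illustrative sketch (our own NumPy code, not the authors' implementation; a real network would use many kernels, biases, and a nonlinearity), a single-kernel, valid-mode 3D convolution can be written as:

```python
import numpy as np

def conv3d(volume, kernel):
    """Single-kernel, valid-mode (no padding) 3D convolution:
    the kernel slides along width, height, and the spectral axis."""
    W, H, D = volume.shape
    kw, kh, kd = kernel.shape
    out = np.zeros((W - kw + 1, H - kh + 1, D - kd + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                # Elementwise product over a kw x kh x kd neighborhood
                out[x, y, z] = np.sum(volume[x:x+kw, y:y+kh, z:z+kd] * kernel)
    return out

# A 5 x 5 x 103 HSI patch (Pavia University) with a 3 x 3 x 7 kernel
patch = np.random.rand(5, 5, 103)
kernel = np.random.rand(3, 3, 7)
print(conv3d(patch, kernel).shape)  # (3, 3, 97)
```

Unlike a 2D convolution applied band by band, the output here remains a 3D volume, which is why spectral structure is preserved through the layers.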
3. Datasets and Experimental Setup
3.1. Datasets
3.1.1. Pavia University Scene
3.1.2. Botswana Scene
3.1.3. Indian Pines Scene
3.2. Experimental Setup
4. Experimental Results and Discussion
4.1. Comparison with State-of-the-Art Methods
4.1.1. Results for Pavia University Scene
4.1.2. Results for Botswana Scene
4.1.3. Results for Indian Pines Scene
4.2. Influence of Parameters
4.2.1. Effect of the Numbers of Kernels
4.2.2. Effect of the Spectral Depth of Kernels
4.2.3. Effect of the Spatial Size of the Sample
4.3. Impact of the Training Sample Size
4.4. Discussion
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Lacar, F.M.; Lewis, M.M.; Grierson, I.T. Use of hyperspectral imagery for mapping grape varieties in the Barossa Valley, South Australia. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Sydney, Australia, 9–13 July 2001; pp. 2875–2877.
- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P. Hyperspectral remote sensing data analysis and future challenges. Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
- Plaza, A.; Du, Q.; Chang, Y.; King, R.L. High Performance Computing for Hyperspectral Remote Sensing. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2011, 4, 528–544. [Google Scholar]
- Du, Q.; Chang, C.I. A Linear Constrained distance-based discriminant analysis for hyperspectral image classification. Pattern Recognit. 2001, 34, 361–373. [Google Scholar]
- Samaniego, L.; Bardossy, A.; Schulz, K. Supervised classification of remotely sensed imagery using a modified, k-NN technique. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2112–2125. [Google Scholar] [CrossRef]
- Ediriwickrema, J.; Khorram, S. Hierarchical maximum-likelihood classification for improved accuracies. IEEE Trans. Geosci. Remote Sens. 1997, 35, 810–816. [Google Scholar] [CrossRef]
- Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4085–4098. [Google Scholar] [CrossRef]
- Donoho, D.L. High-dimensional data analysis: The curses and blessings of dimensionality. In Proceedings of the AMS Math Challenges Lecture, Los Angeles, CA, USA, 6–11 August 2000; Volume 13, pp. 178–183.
- Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675. [Google Scholar] [CrossRef]
- Plaza, A.; Martinez, P.; Perez, R.; Plaza, J. A new approach to mixed pixel classification of hyperspectral imagery based on extended morphological profiles. Pattern Recognit. 2004, 37, 1097–1116. [Google Scholar] [CrossRef]
- Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491. [Google Scholar] [CrossRef]
- Ghamisi, P.; Dalla Mura, M.; Benediktsson, J.A. A survey on spectral–spatial classification techniques based on attribute profiles. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2335–2353. [Google Scholar] [CrossRef]
- Tuia, D.; Volpi, M.; Dalla Mura, M.; Rakotomamonjy, A.; Flamary, R. Automatic feature learning for spatio-spectral image classification with sparse SVM. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6062–6074. [Google Scholar] [CrossRef]
- Dalla Mura, M.; Villa, A.; Benediktsson, J.A.; Chanussot, J.; Bruzzone, L. Classification of hyperspectral images by using extended morphological attribute profiles and independent component analysis. IEEE Geosci. Remote Sens. Lett. 2011, 8, 542–546. [Google Scholar] [CrossRef]
- Jia, S.; Zhang, X.; Li, Q. Spectral–spatial hyperspectral image classification using regularized low-rank representation and sparse representation-based graph cuts. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2473–2484. [Google Scholar] [CrossRef]
- Zhang, X.; Xu, C.; Li, M.; Sun, X. Sparse and Low-rank coupling image segmentation model via nonconvex regularization. Int. J. Pattern Recognit. Artif. Intell. 2015, 29. [Google Scholar] [CrossRef]
- Zhang, B.; Li, S.; Jia, X.; Gao, L.; Peng, M. Adaptive Markov Random field approach for classification of hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. 2011, 8, 973–977. [Google Scholar] [CrossRef]
- Tarabalka, Y.; Rana, A. Graph-cut-based model for spectral-spatial classification of hyperspectral images. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec, QC, Canada, 13–18 July 2014; pp. 3418–3421.
- Pajares, G.; Lópezmartínez, C.; Sánchezlladó, F.J.; Molina, I. Improving Wishart classification of polarimetric SAR data using the Hopfield neural network optimization approach. Remote Sens. 2012, 4, 3571–3595. [Google Scholar] [CrossRef] [Green Version]
- Guijarro, M.; Pajares, G.; Herrera, P.J. Image-based airborne sensors: A combined approach for spectral signatures classification through deterministic simulated annealing. Sensors 2009, 9, 7132–7149. [Google Scholar] [CrossRef] [PubMed]
- Sánchez-Lladó, F.J.; Pajares, G.; López-Martínez, C. Improving the Wishart synthetic aperture radar image classifications through deterministic simulated annealing. ISPRS J. Photogramm. Remote Sens. 2011, 66, 845–857. [Google Scholar]
- Zhong, Y.; Ma, A.; Zhang, L. An adaptive Memetic fuzzy clustering algorithm with spatial information for remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1235–1248. [Google Scholar] [CrossRef]
- Qian, Y.; Ye, M.; Zhou, J. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2276–2291. [Google Scholar] [CrossRef]
- Shen, L.; Jia, S. Three-dimensional Gabor wavelets for pixel-based hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5039–5046. [Google Scholar] [CrossRef]
- Tang, Y.; Lu, Y.; Yuan, H. Hyperspectral image classification based on three-dimensional scattering wavelet transform. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2467–2480. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral–spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 53, 242–256. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, Q.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Ensemble manifold regularized sparse low-rank approximation for multiview feature embedding. Pattern Recognit. 2015, 48, 3102–3112. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Li, T.; Zhang, J.; Zhang, Y. Classification of hyperspectral image based on deep belief networks. In Proceedings of the 2014 IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 5132–5136.
- Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1–12. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
- Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral-spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477. [Google Scholar] [CrossRef]
- Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Milan, Italy, 26–31 July 2015; pp. 4959–4962.
- Liang, H.; Li, Q. Hyperspectral imagery classification using sparse representations of convolutional neural network features. Remote Sens. 2016, 8. [Google Scholar] [CrossRef]
- Ji, S.; Yang, M.; Yu, K. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. [Google Scholar] [CrossRef] [PubMed]
- Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; Paluri, M. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4489–4497.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Girshick, R. Fast R-CNN. In Proceedings of the International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Liu, F.; Shen, C.; Lin, G. Deep convolutional neural fields for depth estimation from a single image. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5162–5170.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
- Palm, R.B. Prediction as a Candidate for Learning Deep Hierarchical Models of Data. Available online: https://github.com/rasmusbergpalm/DeepLearnToolbox (accessed on 12 January 2017).
- Vedaldi, A.; Lenc, K. MatConvNet: Convolutional neural networks for MATLAB. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 689–692.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
No. | Class | Training Samples | Testing Samples
---|---|---|---
1 | Asphalt | 3330 | 3192 |
2 | Meadows | 8933 | 8974 |
3 | Gravel | 1051 | 997 |
4 | Trees | 1492 | 1547 |
5 | Painted Metal Sheets | 677 | 668 |
6 | Bare Soil | 2450 | 2579 |
7 | Bitumen | 656 | 674 |
8 | Self-Blocking Bricks | 1836 | 1846 |
9 | Shadows | 500 | 447 |
- | Total | 20,925 | 20,924
No. | Class | Training Samples | Testing Samples
---|---|---|---
1 | Water | 145 | 125 |
2 | Hippo grass | 52 | 49 |
3 | Floodplain grasses 1 | 125 | 126 |
4 | Floodplain grasses 2 | 104 | 111 |
5 | Reeds | 123 | 146 |
6 | Riparian | 140 | 129 |
7 | Fire scar | 126 | 133 |
8 | Island interior | 106 | 97 |
9 | Acacia woodlands | 142 | 172 |
10 | Acacia shrublands | 136 | 112 |
11 | Acacia grasslands | 151 | 154 |
12 | Short mopane | 86 | 95 |
13 | Mixed mopane | 141 | 127 |
14 | Exposed soils | 47 | 48 |
- | Total | 1624 | 1624
No. | Class | Training Samples | Testing Samples
---|---|---|---
1 | Alfalfa | 22 | 24 |
2 | Corn-notill | 721 | 707 |
3 | Corn-mintill | 390 | 387 |
4 | Corn | 124 | 113 |
5 | Grass-pasture | 237 | 231 |
6 | Grass-trees | 363 | 367 |
7 | Grass-pasture-mowed | 9 | 19 |
8 | Hay-windrowed | 244 | 234 |
9 | Oats | 8 | 12 |
10 | Soybean-notill | 482 | 485 |
11 | Soybean-mintill | 1187 | 1226 |
12 | Soybean-clean | 316 | 277 |
13 | Wheat | 82 | 123 |
14 | Woods | 632 | 633 |
15 | Buildings-Grass-Trees-Drives | 180 | 158 |
16 | Stone-Steel-Towers | 46 | 47 |
- | Total | 5043 | 5043
Layer | Kernel Size | Kernel Number | Stride | Output Size | Feature Volumes |
---|---|---|---|---|---|
Input | - | - | - | 5 × 5 × 103 | 1 |
C1 | 3 × 3 × 7 | 2 | 1 | 3 × 3 × 97 | 2 |
C2 | 3 × 3 × 3 | 4 | 1 | 1 × 1 × 95 | 8 |
F1 | - | - | - | 1 × 1 × 1 | 144 |
Classification | - | - | - | 1 × 1 × 1 | 9 |
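The output sizes in the architecture table follow directly from valid (no-padding) convolution: each dimension shrinks by the kernel extent minus one. A short sketch (the helper `valid_out` is ours, introduced only for illustration) reproduces the table's C1 and C2 shapes for the Pavia University network:

```python
def valid_out(in_shape, kernel, stride=1):
    """Output size of a valid (no-padding) convolution, per dimension."""
    return tuple((i - k) // stride + 1 for i, k in zip(in_shape, kernel))

# Input patch 5 x 5 x 103, C1 kernel 3 x 3 x 7, C2 kernel 3 x 3 x 3
c1 = valid_out((5, 5, 103), (3, 3, 7))  # (3, 3, 97), as in row C1
c2 = valid_out(c1, (3, 3, 3))           # (1, 1, 95), as in row C2
print(c1, c2)
```

The same arithmetic accounts for the Botswana and Indian Pines configurations in the later tables.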
Class | SAE-LR [29] | DBN-LR [31] | 2D-CNN [33] | 3D-CNN
---|---|---|---|---
1 | 98.73 ± 0.0344 | 99.05 ± 0.2968 | 99.68 ± 0.0151 | 99.65 ± 0.0049
2 | 99.55 ± 0.0119 | 99.83 ± 0.0068 | 99.87 ± 0.0038 | 99.83 ± 0.0059
3 | 93.87 ± 1.5622 | 95.15 ± 2.1337 | 96.31 ± 1.1068 | 94.65 ± 2.1480
4 | 98.63 ± 0.0201 | 98.83 ± 0.1274 | 98.01 ± 0.2650 | 99.09 ± 0.5543
5 | 100 ± 0 | 99.93 ± 0.0065 | 100 ± 0 | 100 ± 0
6 | 97.87 ± 0.3816 | 98.71 ± 0.1035 | 97.61 ± 0.1862 | 99.93 ± 0.0028
7 | 93.74 ± 0.9172 | 96.36 ± 1.1547 | 95.63 ± 0.2008 | 97.75 ± 1.7837
8 | 96.76 ± 0.9861 | 98.20 ± 0.5798 | 99.35 ± 0.1605 | 99.24 ± 0.1096
9 | 99.90 ± 0.0133 | 99.71 ± 0.0350 | 97.25 ± 0.6590 | 99.55 ± 0.2557
OA | 98.46 ± 0.0190 | 98.99 ± 0.0922 | 99.03 ± 0.0142 | 99.39 ± 0.0098
AA | 97.67 ± 0.0382 | 98.38 ± 0.1881 | 98.19 ± 0.0268 | 98.85 ± 0.0609
K | 97.98 ± 0.0342 | 98.68 ± 0.1596 | 98.71 ± 0.0021 | 99.20 ± 0.0169
Layer | Kernel Size | Kernel Number | Stride | Output Size | Feature Volumes |
---|---|---|---|---|---|
Input | - | - | - | 5 × 5 × 145 | 1 |
C1 | 3 × 3 × 2 | 2 | 1 | 3 × 3 × 144 | 2 |
C2 | 3 × 3 × 2 | 4 | 1 | 1 × 1 × 143 | 8 |
F1 | - | - | - | 1 × 1 × 1 | 112 |
Classification | - | - | - | 1 × 1 × 1 | 14 |
Class | SAE-LR [29] | DBN-LR [31] | 2D-CNN [33] | 3D-CNN
---|---|---|---|---
1 | 100 ± 0 | 100 ± 0 | 99.18 ± 0.3786 | 99.64 ± 0.1440 |
2 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
3 | 100 ± 0 | 100 ± 0 | 100 ± 0 | 100 ± 0 |
4 | 99.58 ± 0.4906 | 100 ± 0 | 99.16 ± 1.2225 | 99.45 ± 1.3444 |
5 | 94.70 ± 1.1443 | 94.84 ± 2.3406 | 99.54 ± 0.3159 | 98.60 ± 0.6820 |
6 | 92.96 ± 9.3873 | 95.33 ± 12.401 | 97.36 ± 1.2054 | 98.72 ± 0.2930 |
7 | 99.88 ± 0.0938 | 100 ± 0 | 100 ± 0 | 99.68 ± 0.3014 |
8 | 100 ± 0 | 99.74 ± 0.2704 | 100 ± 0 | 100 ± 0 |
9 | 96.68 ± 1.3994 | 96.98 ± 2.0501 | 94.99 ± 0.203 | 99.67 ± 0.2277 |
10 | 99.74 ± 0.4004 | 100 ± 0 | 100 ± 0 | 99.70 ± 0.1500 |
11 | 99.47 ± 0.3915 | 99.67 ± 0.1476 | 100 ± 0 | 99.87 ± 0.1823 |
12 | 100 ± 0 | 100 ± 0 | 96.63 ± 1.1139 | 99.63 ± 1.3690 |
13 | 99.52 ± 0.1387 | 99.82 ± 0.1332 | 100 ± 0 | 99.43 ± 0.5453 |
14 | 99.26 ± 1.3152 | 100 ± 0 | 97.44 ± 2.1112 | 100 ± 0 |
OA | 98.49 ± 0.1159 | 98.81 ± 0.0436 | 98.88 ± 0.0009 | 99.55 ± 0.0140 |
AA | 98.70 ± 0.1017 | 99.03 ± 0.0315 | 98.88 ± 0.0214 | 99.60 ± 0.0122 |
K | 98.36 ± 0.1374 | 98.72 ± 0.0510 | 98.78 ± 0.0012 | 99.51 ± 0.0165 |
Layer | Kernel Size | Kernel Number | Stride | Output Size | Feature Volumes |
---|---|---|---|---|---|
Input | - | - | - | 5 × 5 × 200 | 1 |
C1 | 3 × 3 × 7 | 2 | 1 | 3 × 3 × 194 | 2 |
C2 | 3 × 3 × 3 | 4 | 1 | 1 × 1 × 192 | 8 |
F1 | - | - | - | 1 × 1 × 1 | 128 |
Classification | - | - | - | 1 × 1 × 1 | 16 |
Class | SAE-LR [29] | DBN-LR [31] | 2D-CNN [33] | 3D-CNN
---|---|---|---|---
1 | 85.56 ± 1.3195 | 80.90 ± 2.1058 | 86.11 ± 2.2222 | 95.89 ± 2.8881 |
2 | 90.72 ± 2.5262 | 93.97 ± 1.0502 | 91.37 ± 1.5399 | 98.46 ± 0.1412 |
3 | 91.58 ± 5.8134 | 95.13 ± 1.0701 | 95.37 ± 3.9835 | 98.99 ± 0.7959 |
4 | 89.81 ± 9.5761 | 85.14 ± 2.0225 | 98.54 ± 1.9192 | 99.14 ± 0.3120 |
5 | 96.16 ± 3.3383 | 98.05 ± 1.8262 | 91.40 ± 2.6639 | 99.29 ± 0.5117 |
6 | 98.98 ± 0.5110 | 100 ± 0 | 98.05 ± 0.6324 | 99.92 ± 0.0170 |
7 | 95.29 ± 1.0912 | 94.92 ± 4.9651 | 97.73 ± 20.657 | 100 ± 0 |
8 | 98.75 ± 0.7893 | 100 ± 0 | 98.44 ± 1.2281 | 100 ± 0 |
9 | 100 ± 0 | 100 ± 0 | 50.87 ± 5.0596 | 92.31 ± 5.2565 |
10 | 94.52 ± 1.0791 | 97.37 ± 0.4709 | 93.53 ± 5.0951 | 98.12 ± 0.5533 |
11 | 94.79 ± 0.4464 | 97.70 ± 0.1939 | 97.62 ± 0.2358 | 98.96 ± 0.2035 |
12 | 86.43 ± 1.0475 | 84.72 ± 7.7210 | 94.89 ± 3.3988 | 98.99 ± 0.3265 |
13 | 99.80 ± 0.1640 | 99.35 ± 2.0866 | 100 ± 0 | 99.82 ± 0.1440 |
14 | 97.48 ± 0.6172 | 100 ± 0 | 99.29 ± 0.6455 | 99.81 ± 0.0259 |
15 | 84.35 ± 1.4027 | 84.64 ± 3.9929 | 99.59 ± 0.2764 | 99.56 ± 0.6405 |
16 | 96.76 ± 9.0504 | 95.33 ± 8.1276 | 98.88 ± 1.6992 | 99.38 ± 0.9888 |
OA | 93.98 ± 0.0838 | 95.91 ± 0.0123 | 95.97 ± 0.0938 | 99.07 ± 0.0345 |
AA | 93.81 ± 0.4858 | 94.20 ± 0.0568 | 93.23 ± 0.7629 | 98.66 ± 0.0345 |
K | 93.13 ± 0.1067 | 95.34 ± 0.0147 | 95.40 ± 0.1215 | 98.93 ± 0.0450 |
© 2017 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. https://doi.org/10.3390/rs9010067