Urban Land Cover Classification of High-Resolution Aerial Imagery Using a Relation-Enhanced Multiscale Convolutional Network
"> Figure 1
<p>Relation-enhanced multiscale convolutional network (REMSNet) overview.</p> "> Figure 2
<p>Different blocks neural network units. (<b>a</b>) Inception block; (<b>b</b>) dense block (layer = 3, k = 3); (<b>c</b>) residual block [<a href="#B36-remotesensing-12-00311" class="html-bibr">36</a>].</p> "> Figure 3
<p>The basic principle chart of relation-enhanced blocks. (<b>a</b>) Spatial relation-enhanced block; (<b>b</b>) channel relation-enhanced block.</p> "> Figure 4
<p>Examples of Vaihingen dataset. (<b>a</b>) Aerial image; (<b>b</b>) ground truth; (<b>c</b>) ground truth with eroded boundaries (black areas will be ignored); (<b>d</b>) corresponding Normalized Digital Surface Model (nDSM).</p> "> Figure 5
<p>Study areas and image data of the Shanghai dataset.</p> "> Figure 6
<p>Examples of Shanghai dataset. (<b>a</b>) Aerial image; (<b>b</b>) corresponding ground truth.</p> "> Figure 7
<p>Examples of land cover classification results on the Vaihingen dataset. The rectangles in pink and the circles in pink represent correct and incorrect classification, respectively. (<b>a</b>–<b>e</b>) are five subsets land cover classification results from Vaihingen dataset.</p> "> Figure 8
<p>A full image land cover classification result on the Vaihingen dataset. The circles in pink and the rectangle in pink represent the correct and incorrect classification, respectively.</p> "> Figure 9
<p>Examples of land cover classification results for the Shanghai dataset. The circles in white represent incorrect classification. (<b>a</b>–<b>f</b>) are six subsets land cover classification results from Shanghai dataset.</p> "> Figure 10
<p>Urban land cover classification map using the proposed network.</p> "> Figure 11
<p>Intersection-over-union (IoU) and precision changes of per class with different settings. (1) Changes of per class IoU with different settings; (2) changes of per class precision with different settings. SREB: spatial relation-enhanced block; CREB: channel relation-enhanced block; MS: multiscale fusion module.</p> "> Figure 12
<p>Our network different settings’ speed and accuracy analysis for the Shanghai dataset. (1) Changes of trainable parameters and mIoU with different settings; (2) changes of the average inference time and mean <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <mo> </mo> </mrow> </semantics></math> with different settings; a: Basic network; b: basic network + SREB; c: basic network + SREB + CREB; d: basic network + SREB + CREB + MS.</p> ">
Abstract
1. Introduction
- The proposed network uses a dense connectivity pattern to build a lightweight model. Meanwhile, parallel multi-kernel convolution and deconvolution modules are designed in the encoding and decoding stages, respectively, to increase the variety of receptive field sizes and capture objects at different scales more effectively.
- We introduce spatial and channel relation-enhanced blocks to explicitly model spatial and channel context relationships. They significantly improve the classification results by capturing global context information in the spatial and channel domains.
- We propose a relation-enhanced multiscale network for urban land cover classification in aerial imagery. Case studies on the ISPRS Vaihingen dataset and an area of Shanghai of about 143 km² demonstrate that the proposed REMSNet achieves higher land cover classification accuracy and produces more practical land cover maps than several state-of-the-art deep learning models.
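The spatial relation-enhanced block described above captures global spatial context by relating every position in a feature map to every other position. A minimal illustrative sketch of this idea, assuming a non-local-style self-attention formulation with 1×1-convolution weights supplied as plain matrices (not the authors' implementation):

```python
import numpy as np

def spatial_relation_enhanced(x, w_q, w_k, w_v):
    """Non-local-style spatial attention over a feature map.

    x: (C, H, W) feature map; w_q, w_k, w_v: (C', C) weight matrices
    standing in for 1x1 convolutions. Each position is re-weighted by
    its similarity to every other position, so the output aggregates
    global spatial context rather than a local neighborhood.
    """
    c, h, w = x.shape
    flat = x.reshape(c, h * w)               # (C, N), N = H * W positions
    q = w_q @ flat                            # queries,  (C', N)
    k = w_k @ flat                            # keys,     (C', N)
    v = w_v @ flat                            # values,   (C', N)
    attn = q.T @ k                            # (N, N) pairwise similarities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over all positions
    out = v @ attn.T                          # context-aggregated features
    return out.reshape(-1, h, w)
```

In a full network this output would typically be added back to the input as a residual, and an analogous block applied over the channel dimension gives the channel relation-enhanced variant.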
2. Related Work
2.1. Semantic Segmentation for High-Resolution Aerial Images
2.2. Attention Mechanism
3. The Proposed Network
3.1. Proposed Network Architecture
3.2. Spatial Relation-Enhanced Block
3.3. Channel Relation-Enhanced Block
3.4. Multiscale Fusion Module
4. Experimental Results and Analysis
4.1. Study Area and Preprocessing
4.1.1. Study Area and Data Description
4.1.2. Data Preprocessing and Augmentation
4.2. Training Detail and Evaluation Metrics
4.3. Results and Analysis
4.3.1. Results of Vaihingen
4.3.2. Results of Shanghai
5. Discussion
5.1. Influence of the Different Modules on Classification
5.2. Model Size and Efficiency Analysis
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Patino, J.E.; Duque, J.C. A review of regional science applications of satellite remote sensing in urban settings. Comput. Environ. Urban Syst. 2013, 37, 1–17. [Google Scholar] [CrossRef]
- Qiu, C.; Mou, L.; Schmitt, M.; Zhu, X.X. Local climate zone-based urban land cover classification from multi-seasonal Sentinel-2 images with a recurrent residual network. ISPRS J. Photogramm. Remote Sens. 2019, 154, 151–162. [Google Scholar] [CrossRef] [PubMed]
- Yuan, F.; Sawaya, K.E.; Loeffelholz, B.C.; Bauer, M.E. Land cover classification and change analysis of the Twin Cities (Minnesota) Metropolitan Area by multitemporal Landsat remote sensing. Remote Sens. Environ. 2005, 98, 317–328. [Google Scholar] [CrossRef]
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
- Huang, B.; Zhao, B.; Song, Y.M. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86. [Google Scholar] [CrossRef]
- Belward, A.S.; Skøien, J.O. Who launched what, when and why; trends in global land-cover observation capacity from civilian earth observation satellites. ISPRS J. Photogramm. Remote Sens. 2015, 103, 115–128. [Google Scholar] [CrossRef]
- Pesaresi, M.; Huadong, G.; Blaes, X.; Ehrlich, D.; Ferri, S.; Gueguen, L.; Halkia, M.; Kauffmann, M.; Kemper, T.; Lu, L.; et al. A Global Human Settlement Layer from Optical HR/VHR RS Data: Concept and First Results. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2102–2131. [Google Scholar] [CrossRef]
- Cheng, G.; Zhu, F.; Xiang, S.; Wang, Y.; Pan, C. Accurate urban road centerline extraction from VHR imagery via multiscale segmentation and tensor voting. Neurocomputing 2016, 205, 407–420. [Google Scholar] [CrossRef] [Green Version]
- Yuan, Y.; Lin, J.; Wang, Q. Dual-Clustering-Based Hyperspectral Band Selection by Contextual Analysis. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1431–1445. [Google Scholar] [CrossRef]
- Sherrah, J. Fully Convolutional Networks for Dense Semantic Labelling of High-Resolution Aerial Imagery. arXiv 2016, arXiv:1606.02585. [Google Scholar]
- Zhao, W.; Du, S.; Wang, Q.; Emery, W.J. Contextually guided very-high-resolution imagery classification with semantic segments. ISPRS J. Photogramm. Remote Sens. 2017, 132, 48–60. [Google Scholar] [CrossRef]
- Deng, Z.P.; Sun, H.; Zhou, S.L.; Zhao, J.P.; Lei, L.; Zou, H.X. Multi-scale object detection in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2018, 145, 3–22. [Google Scholar] [CrossRef]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Yu, B.; Yang, L.; Chen, F. Semantic Segmentation for High Spatial Resolution Remote Sensing Images Based on Convolution Neural Network and Pyramid Pooling Module. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3252–3261. [Google Scholar] [CrossRef]
- Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Zhang, P.; Ke, Y.; Zhang, Z.; Wang, M.; Li, P.; Zhang, S. Urban Land Use and Land Cover Classification Using Novel Deep Learning Models Based on High Spatial Resolution Satellite Imagery. Sensors 2018, 18, 3717. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Panboonyuen, T.; Jitkajornwanich, K.; Lawawirojwong, S.; Srestasathiern, P.; Vateekul, P. Semantic Segmentation on Remotely Sensed Images Using an Enhanced Global Convolutional Network with Channel Attention and Domain Specific Transfer Learning. Remote Sens. 2019, 11, 83. [Google Scholar] [CrossRef] [Green Version]
- Peng, C.; Zhang, X.; Yu, G.; Luo, G.; Sun, J. Large Kernel Matters—Improve Semantic Segmentation by Global Convolutional Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1743–1751. [Google Scholar]
- Liu, Y.; Minh Nguyen, D.; Deligiannis, N.; Ding, W.; Munteanu, A. Hourglass-ShapeNetwork Based Semantic Segmentation for High Resolution Aerial Imagery. Remote Sens. 2017, 9, 522. [Google Scholar] [CrossRef] [Green Version]
- Chen, G.; Zhang, X.; Wang, Q.; Dai, F.; Gong, Y.; Zhu, K. Symmetrical Dense-Shortcut Deep Fully Convolutional Networks for Semantic Segmentation of Very-High-Resolution Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1633–1644. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the 2015 Medical Image Computing and Computer-Assisted Intervention (MICCAI), Cham, Switzerland, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Wang, Q.; Liu, S.; Chanussot, J.; Li, X. Scene Classification with Recurrent Attention of VHR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1155–1167. [Google Scholar] [CrossRef]
- Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
- Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Los Alamitos, CA, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
- Huang, G.; Liu, Z.; Maaten, L.v.d.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
- Szegedy, C.; Wei, L.; Yangqing, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Martinez-Gonzalez, P.; Garcia-Rodriguez, J. A survey on deep learning techniques for image and video semantic segmentation. Appl. Soft Comput. 2018, 70, 41–65. [Google Scholar] [CrossRef]
- Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
- Sun, W.W.; Wang, R.S. Fully Convolutional Networks for Semantic Segmentation of Very High Resolution Remotely Sensed Images Combined With DSM. IEEE Geosci. Remote Sens. Lett. 2018, 15, 474–478. [Google Scholar] [CrossRef]
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Sun, T.; Chen, Z.; Yang, W.; Wang, Y. Stacked U-Nets with Multi-output for Road Extraction. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 187–1874. [Google Scholar]
- Marmanis, D.; Schindler, K.; Wegner, J.D.; Galliani, S.; Datcu, M.; Stilla, U. Classification with an edge: Improving semantic image segmentation with boundary detection. ISPRS J. Photogramm. Remote Sens. 2018, 135, 158–172. [Google Scholar] [CrossRef] [Green Version]
- Liu, S.; Ding, W.; Liu, C.; Liu, Y.; Wang, Y.; Li, H. ERN: Edge Loss Reinforced Semantic Segmentation Network for Remote Sensing Images. Remote Sens. 2018, 10, 1339. [Google Scholar] [CrossRef] [Green Version]
- Audebert, N.; Le Saux, B.; Lefèvre, S. Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks. ISPRS J. Photogramm. Remote Sens. 2018, 140, 20–32. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Sun, Y.; Zhang, X.; Xin, Q.; Huang, J. Developing a multi-filter convolutional neural network for semantic segmentation using high-resolution aerial imagery and LiDAR data. ISPRS J. Photogramm. Remote Sens. 2018, 143, 3–14. [Google Scholar] [CrossRef]
- Feng, W.; Sui, H.; Huang, W.; Xu, C.; An, K. Water Body Extraction from Very High-Resolution Remote Sensing Imagery Using Deep U-Net and a Superpixel-Based Conditional Random Field Model. IEEE Geosci. Remote Sens. Lett. 2019, 16, 618–622. [Google Scholar] [CrossRef]
- Krähenbühl, P.; Koltun, V. Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials. In Proceedings of the 2011 Neural Information Processing Systems (NIPS), Granada, Spain, 12–15 December 2011; pp. 109–117. [Google Scholar]
- Zheng, S.; Jayasumana, S.; Romera-Paredes, B.; Vineet, V.; Su, Z.; Du, D.; Huang, C.; Torr, P.H.S. Conditional Random Fields as Recurrent Neural Networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 13–16 December 2015; pp. 1529–1537. [Google Scholar]
- He, C.; Fang, P.; Zhang, Z.; Xiong, D.; Liao, M. An End-to-End Conditional Random Fields and Skip-Connected Generative Adversarial Segmentation Network for Remote Sensing Images. Remote Sens. 2019, 11, 1604. [Google Scholar] [CrossRef] [Green Version]
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Li, H.; Xiong, P.; An, J.; Wang, L. Pyramid attention network for semantic segmentation. In Proceedings of the 2018 British Machine Vision Conference (BMVC), Newcastle, UK, 3–6 September 2018. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 5999–6009. [Google Scholar]
- Zhao, H.; Zhang, Y.; Liu, S.; Shi, J.; Loy, C.C.; Lin, D.; Jia, J. PSANet: Point-wise Spatial Attention Network for Scene Parsing. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Cham, Switzerland, 8–14 September 2018; pp. 270–286. [Google Scholar]
- Yuan, Y.; Wang, J. OCNet: Object Context Network for Scene Parsing. arXiv 2018, arXiv:1809.00916. [Google Scholar]
- Chen, L.-C.; Yang, Y.; Wang, J.; Xu, W.; Yuille, A.L. Attention to Scale: Scale-Aware Semantic Image Segmentation. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3640–3649. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Los Alamitos, CA, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Cham, Switzerland, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In Proceedings of the IEEE Computer Society Conference, Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183. [Google Scholar]
- Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-Cross Attention for Semantic Segmentation. arXiv 2018, arXiv:1811.11721. [Google Scholar]
- Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 334–349. [Google Scholar]
- 2D Semantic Labeling—Vaihingen Data. Available online: http://www2.isprs.org/commissions/comm3/wg4/2d-sem-label-vaihingen.html (accessed on 5 November 2019).
- Gerke, M. Use of the Stair Vision Library within the ISPRS 2D Semantic Labeling Benchmark (Vaihingen); Technical Report; University of Twente: Enschede, The Netherlands, 2015. [Google Scholar]
- Quang, N.T.; Sang, D.V.; Thuy, N.T.; Binh, H.T.T. An efficient framework for pixel-wise building segmentation from aerial images. In Proceedings of the 6th International Symposium on Information and Communication Technology (SoICT), Hue, Vietnam, 3–4 December 2015; pp. 282–287. [Google Scholar]
- Marcos, D.; Volpi, M.; Kellenberger, B.; Tuia, D. Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models. ISPRS J. Photogramm. Remote Sens. 2018, 145, 96–107. [Google Scholar] [CrossRef] [Green Version]
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX conference on Operating Systems Design and Implementation, Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
- Hinton, G.; Tieleman, T. Lecture 6.5-Rmsprop, Coursera: Neural Networks for Machine Learning; Technical Report; University of Toronto: Toronto, ON, Canada, 2012. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
| Method | Imp. Surf. | Buildings | Low Veg. | Tree | Car | Mean F1 | mIoU | OA |
|---|---|---|---|---|---|---|---|---|
| SVL-boosting + CRF [54] | 86.10 | 90.90 | 77.60 | 84.90 | 59.90 | 79.90 | - | 84.70 |
| RF + dCRF [55] | 86.90 | 92.00 | 78.30 | 86.90 | 29.00 | 74.60 | - | 85.90 |
| SegNet [31] | 92.27 | 93.73 | 77.35 | 86.65 | 62.76 | 81.27 | 70.40 | 87.42 |
| MobileNet [59] | 90.29 | 94.13 | 83.13 | 84.07 | 77.19 | 85.56 | 75.56 | 87.76 |
| FC-DenseNet [50] | 92.40 | 94.46 | 72.95 | 90.93 | 78.49 | 86.73 | 77.20 | 87.99 |
| RotEqNet [56] | 89.50 | 94.80 | 77.50 | 86.50 | 72.60 | 84.18 | - | 87.50 |
| HSNet [19] | 90.89 | 94.51 | 78.83 | 87.84 | 81.87 | 86.79 | - | 88.32 |
| ENR [34] | 91.48 | 95.11 | 79.42 | 88.18 | 89.00 | 88.64 | - | 88.88 |
| REMSNet | 92.07 | 95.95 | 83.53 | 90.13 | 81.48 | 89.07 | 80.73 | 90.46 |
| Method | Farmland | Forestland | Building | Road | Arti. Stru. | Bare Soil | Water | Mean F1 | mIoU | OA |
|---|---|---|---|---|---|---|---|---|---|---|
| FC-DenseNet [50] | 94.92 | 62.52 | 91.49 | 78.38 | 63.99 | 88.45 | 92.74 | 79.60 | 67.55 | 85.20 |
| SegNet [31] | 92.87 | 75.86 | 91.66 | 76.49 | 61.29 | 70.33 | 93.20 | 80.17 | 68.02 | 85.68 |
| MobileNet [59] | 96.03 | 70.42 | 91.53 | 77.61 | 61.16 | 79.28 | 88.14 | 80.54 | 68.59 | 86.18 |
| Deeplabv3 [15] | 95.28 | 77.27 | 89.34 | 72.93 | 58.60 | 76.77 | 89.86 | 80.90 | 68.97 | 86.19 |
| PSPNet [30] | 93.67 | 76.02 | 91.44 | 75.74 | 63.43 | 76.29 | 93.00 | 81.71 | 70.05 | 86.36 |
| REMSNet | 96.21 | 77.51 | 92.59 | 81.86 | 63.95 | 81.81 | 93.56 | 84.52 | 73.94 | 88.55 |
| Method | Mean F1 (%) | OA (%) | mIoU (%) |
|---|---|---|---|
| Basic network | 79.60 | 85.20 | 67.55 |
| Basic network + SREB | 81.28 | 86.77 | 69.25 |
| Basic network + SREB + CREB | 82.66 | 87.42 | 71.31 |
| Basic network + SREB + CREB + MS | 84.52 | 88.55 | 73.94 |
| Method | Parameters (M) | Time (s) | Mean F1 Shang. (%) | Mean F1 Vaihi. (%) | OA Shang. (%) | OA Vaihi. (%) | mIoU Shang. (%) | mIoU Vaihi. (%) |
|---|---|---|---|---|---|---|---|---|
| FC-DenseNet [50] | 7.86 | 0.0965 | 79.60 | 86.73 | 85.20 | 87.99 | 67.55 | 77.20 |
| SegNet [31] | 34.96 | 0.0350 | 80.17 | 81.27 | 85.68 | 87.42 | 68.02 | 70.40 |
| MobileNet [59] | 8.87 | 0.0325 | 80.54 | 85.56 | 86.18 | 87.76 | 68.59 | 75.56 |
| Deeplabv3 [15] | 46.66 | 0.0130 | 80.90 | - | 86.19 | - | 68.97 | - |
| PSPNet [30] | 55.99 | 0.0217 | 81.71 | - | 86.36 | - | 70.05 | - |
| REMSNet | 10.56 | 0.1239 | 84.52 | 89.07 | 88.55 | 90.46 | 73.94 | 80.73 |
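The tables above report per-class F1, mean F1, mIoU, and overall accuracy (OA). As an illustrative sketch (not the authors' evaluation code), all of these follow from a pixel-level confusion matrix:

```python
import numpy as np

def classification_metrics(conf):
    """Per-class F1, mean F1, mIoU, and OA from a confusion matrix.

    conf[i, j] = number of pixels whose true class is i and whose
    predicted class is j.
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                  # correctly classified pixels per class
    fp = conf.sum(axis=0) - tp          # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp          # belong to the class, but missed
    f1 = 2 * tp / (2 * tp + fp + fn)    # per-class F1
    iou = tp / (tp + fp + fn)           # per-class intersection-over-union
    oa = tp.sum() / conf.sum()          # overall accuracy
    return f1, f1.mean(), iou.mean(), oa
```

For example, a two-class confusion matrix `[[5, 1], [2, 4]]` gives OA = 9/12 = 0.75, with per-class F1 of 10/13 and 8/11.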
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, C.; Zeng, D.; Wu, H.; Wang, Y.; Jia, S.; Xin, L. Urban Land Cover Classification of High-Resolution Aerial Imagery Using a Relation-Enhanced Multiscale Convolutional Network. Remote Sens. 2020, 12, 311. https://doi.org/10.3390/rs12020311