Intelligent Mapping of Urban Forests from High-Resolution Remotely Sensed Imagery Using Object-Based U-Net-DenseNet-Coupled Network
Figure captions:

Figure 1. Location of the study area: (a) Zhejiang Province, where the blue polygons represent Hangzhou; (b) the subregion of the Yuhang District (YH) of Hangzhou.
Figure 2. The overlapping cropping method for training the deep learning (DL) network. The cropping window is 128 × 128 pixels, and n is defined as half of 128.
Figure 3. The classical structure of convolutional neural networks (CNNs). Batch normalization (BN) is a technique for accelerating network training by reducing internal covariate shift.
Figure 4. The improved DenseNet structure composed of three dense blocks: (a) the complete structure; (b) a dense block composed of five feature map layers.
Figure 5. The improved U-net structure, composed of eleven convolution layers.
Figure 6. The network structure of the UDN algorithm, where NF denotes the number of convolutional filters: (a) the first two levels (Level 1 and Level 2), comprising convolutional and pooling layers; (b) the coupling of the U-net and DenseNet algorithms; (c) the bottom layer of the network; (d) the upsampling layers; (e) the predicted classification result.
Figure 7. A flowchart of the experimental method in this paper, comprising five major steps: (a) image preprocessing; (b) image labeling and cropping; (c) model training; (d) model prediction; (e) object-based optimization of the UDN results and comparison of the results from all algorithms.
Figure 8. (a) Original image; (b–e) classification maps of the U, D, UDN, and OUDN algorithms based on the Spe feature set. Red and black circles denote incorrect and correct classifications, respectively.
Figure 9. (a) Original image; (b–e) classification maps of the U, D, UDN, and OUDN algorithms based on the Spe-Index feature set. Red and black circles denote incorrect and correct classifications, respectively.
Figure 10. (a) Original image; (b–e) classification maps of the U, D, UDN, and OUDN algorithms based on the Spe-Texture feature set. Red and black circles denote incorrect and correct classifications, respectively.
Figure 11. Two subsets (Subset 1 and Subset 2) dominated by urban forests: classification maps of the (a) U, (b) D, (c) UDN, and (d) OUDN algorithms in the subsets.
Figure 12. Confusion matrices of the U (a1–c1), D (a2–c2), UDN (a3–c3), and OUDN (a4–c4) algorithms based on the Spe (a1–a4), Spe-Index (b1–b4), and Spe-Texture (c1–c4) feature sets.
Figure 13. Overall classification accuracies (OA) of different features based on different algorithms.
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area
2.2. Data Processing
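Figure 2 illustrates the overlapping cropping used to prepare the training samples: a 128 × 128 pixel window slides across the image with a step of n = 64 pixels (half the window size), so adjacent patches overlap. The sketch below is a minimal NumPy reading of that caption; the function name, the (H, W, C) array layout, and the handling of image borders are assumptions, not the authors' code.

```python
import numpy as np

def overlapping_crops(image, size=128, stride=64):
    """Yield size x size patches; stride = size // 2 gives 50% overlap (Figure 2)."""
    h, w = image.shape[:2]
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            yield image[top:top + size, left:left + size]

# e.g., list(overlapping_crops(np.zeros((512, 512, 4)))) yields 7 x 7 = 49 patches
```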
2.3. Feature Setting
2.4. Methodology
2.4.1. Brief Introduction of CNNs
2.4.2. DL Algorithms
- (1) The left half of the network is the contracting path. With a 128 × 128 image as input, each level applies three 3 × 3 convolution operations, each followed by the ReLU activation function, and then max-pooling with a stride of 2 for downsampling. At each downsampling stage, the number of feature channels is doubled. Five downsamplings are applied, followed by two 3 × 3 convolutions in the bottom layer of the network architecture; the feature maps are eventually reduced to 4 × 4 pixels with 1024 channels.
- (2) The right half of the network, that is, the expansive path, mainly restores the feature information of the original image. First, a 2 × 2 deconvolution kernel performs upsampling; in this process, the number of feature map channels is halved, and the upsampled feature maps are merged with the feature maps at the symmetric position of the contracting path. Three 3 × 3 convolution operations are then performed on the merged features, and these operations are repeated until the feature maps are restored to the size of the input image. Ultimately, four 3 × 3 convolutions, one 1 × 1 convolution, and a Softmax activation function complete the category prediction of each pixel in the image. The Softmax activation function is defined as Equation (7):

softmax(x_i) = exp(x_i) / Σ_{j=1}^{K} exp(x_j), i = 1, …, K, (7)

where K is the number of classes. A minimal code sketch of this architecture follows.
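Read literally, the description above yields the following minimal PyTorch sketch. The 128 × 128 input, three 3 × 3 convolutions per level, five stride-2 max-poolings with channel doubling down to 1024, 2 × 2 deconvolutions with symmetric skip concatenation, and the closing four 3 × 3 plus one 1 × 1 convolutions with Softmax come from the text; the base channel count (32), "same" padding, BN placement, four input bands, and seven classes are assumptions, so this is a sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, n_convs=3):
    """n_convs 3x3 convolutions, each followed by BN and ReLU."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.BatchNorm2d(out_ch),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class UNetSketch(nn.Module):
    def __init__(self, in_ch=4, n_classes=7, base=32):
        super().__init__()
        chs = [base * 2**i for i in range(5)]                 # 32, 64, 128, 256, 512
        self.down = nn.ModuleList([conv_block(c_in, c_out)    # contracting path levels
                                   for c_in, c_out in zip([in_ch] + chs[:-1], chs)])
        self.pool = nn.MaxPool2d(2)                           # stride-2 downsampling
        self.bottom = conv_block(chs[-1], 1024, n_convs=2)    # two 3x3 convs: 4 x 4 x 1024
        self.up = nn.ModuleList([nn.ConvTranspose2d(c * 2, c, 2, stride=2)
                                 for c in reversed(chs)])     # 2x2 deconv halves channels
        self.dec = nn.ModuleList([conv_block(c * 2, c)        # skip concat doubles input
                                  for c in reversed(chs)])
        self.head = nn.Sequential(conv_block(base, base, n_convs=4),  # four 3x3 convs
                                  nn.Conv2d(base, n_classes, 1))      # one 1x1 conv

    def forward(self, x):
        skips = []
        for block in self.down:                               # contracting path
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottom(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)                                         # upsample, halve channels
            x = dec(torch.cat([x, skip], dim=1))              # merge symmetric features
        return torch.softmax(self.head(x), dim=1)             # per-pixel class probabilities

# e.g., a 4-band 128 x 128 patch:
# probs = UNetSketch()(torch.randn(1, 4, 128, 128))           # -> (1, 7, 128, 128)
```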
Algorithm 1 Train a neural network with the minibatch Adam optimization algorithm.
initialize(net)
for epoch = 1, …, num_epochs do
  for batch = 1, …, #images/batch_size do
    images ← uniformly sample batch_size images
    X, y ← preprocess(images)
    ŷ ← forward(net, X)
    l ← loss(ŷ, y)
    g ← backward(l)
    update(net, g, Adam)
  end for
end for
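As a concrete rendering of Algorithm 1, the short PyTorch sketch below trains a network with minibatch Adam; the loader, epoch count, learning rate, and the NLL loss applied to the softmax output are assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

def train(net, loader, num_epochs=50, lr=1e-3):
    """Minibatch Adam training loop mirroring Algorithm 1."""
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    nll = nn.NLLLoss()                                  # net already ends in softmax
    for epoch in range(num_epochs):
        for x, y in loader:                             # sampled, preprocessed minibatches
            optimizer.zero_grad()
            probs = net(x)                              # forward pass
            loss = nll(torch.log(probs + 1e-8), y)      # loss on log-probabilities
            loss.backward()                             # backward pass: gradients
            optimizer.step()                            # Adam parameter update
```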
2.5. Experiment Design
3. Results and Analysis
3.1. Training Results of U, D and UDN Algorithms
3.2. Classification Results
3.2.1. Classification Results Based on Four Algorithms
3.2.2. Extraction Results of Urban Forests
3.2.3. Result Analysis
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Voltersen, M.; Berger, C.; Hese, S.; Schmullius, C. Object-based land cover mapping and comprehensive feature calculation for an automated derivation of urban structure types at block level. Remote Sens. Environ. 2014, 154, 192–201.
2. Wu, C.; Zhang, L.; Du, B. Kernel Slow Feature Analysis for Scene Change Detection. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2367–2384.
3. Lin, J.; Kroll, C.N.; Nowak, D.J.; Greenfield, E.J. A review of urban forest modeling: Implications for management and future research. Urban For. Urban Green. 2019, 43, 126366.
4. Ren, Z.; Zheng, H.; He, X.; Zhang, D.; Yu, X.; Shen, G. Spatial estimation of urban forest structures with Landsat TM data and field measurements. Urban For. Urban Green. 2015, 14, 336–344.
5. Shen, G.; Wang, Z.; Liu, C.; Han, Y. Mapping aboveground biomass and carbon in Shanghai's urban forest using Landsat ETM+ and inventory data. Urban For. Urban Green. 2020, 51, 126655.
6. Zhang, M.; Du, H.; Mao, F.; Zhou, G.; Li, X.; Dong, L.; Zheng, J.; Zhu, D.; Liu, H.; Huang, Z.; et al. Spatiotemporal Evolution of Urban Expansion Using Landsat Time Series Data and Assessment of Its Influences on Forests. ISPRS Int. J. Geo-Inf. 2020, 9, 64.
7. Zhang, Y.; Shen, W.; Li, M.; Lv, Y. Assessing spatio-temporal changes in forest cover and fragmentation under urban expansion in Nanjing, eastern China, from long-term Landsat observations (1987–2017). Appl. Geogr. 2020, 117, 102190.
8. Alonzo, M.; McFadden, J.P.; Nowak, D.J.; Roberts, D.A. Mapping urban forest structure and function using hyperspectral imagery and lidar data. Urban For. Urban Green. 2016, 17, 135–147.
9. Alonzo, M.; Bookhagen, B.; Roberts, D.A. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens. Environ. 2014, 148, 70–83.
10. Liu, L.; Coops, N.C.; Aven, N.W.; Pang, Y. Mapping urban tree species using integrated airborne hyperspectral and LiDAR remote sensing data. Remote Sens. Environ. 2017, 200, 170–182.
11. Pu, R.; Landry, S. A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species. Remote Sens. Environ. 2012, 124, 516–533.
12. Pu, R.; Landry, S.M. Mapping urban tree species by integrating multi-seasonal high resolution Pléiades satellite imagery with airborne LiDAR data. Urban For. Urban Green. 2020, 53, 126675.
13. Puissant, A.; Rougier, S.; Stumpf, A. Object-oriented mapping of urban trees using Random Forest classifiers. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 235–245.
14. Pan, G.; Qi, G.; Wu, Z.; Zhang, D.; Li, S. Land-Use Classification Using Taxi GPS Traces. IEEE Trans. Intell. Transp. Syst. 2012, 14, 113–123.
15. Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS J. Photogramm. Remote Sens. 2016, 113, 155–165.
16. Moser, G.; Serpico, S.B.; Benediktsson, J.A. Land-Cover Mapping by Markov Modeling of Spatial–Contextual Information in Very-High-Resolution Remote Sensing Images. Proc. IEEE 2012, 101, 631–651.
17. Hu, F.; Xia, G.-S.; Hu, J.; Zhang, L. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery. Remote Sens. 2015, 7, 14680–14707.
18. Han, N.; Du, H.; Zhou, G.; Xu, X.; Ge, H.; Liu, L.; Gao, G.; Sun, S. Exploring the synergistic use of multi-scale image object metrics for land-use/land-cover mapping using an object-based approach. Int. J. Remote Sens. 2015, 36, 3544–3562.
19. Sun, X.; Du, H.; Han, N.; Zhou, G.; Lu, D.; Ge, H.; Xu, X.; Liu, L. Synergistic use of Landsat TM and SPOT5 imagery for object-based forest classification. J. Appl. Remote Sens. 2014, 8, 083550.
20. Hamdi, Z.M.; Brandmeier, M.; Straub, C. Forest Damage Assessment Using Deep Learning on High Resolution Remote Sensing Data. Remote Sens. 2019, 11, 1976.
21. Shirvani, Z.; Abdi, O.; Buchroithner, M.F. A Synergetic Analysis of Sentinel-1 and -2 for Mapping Historical Landslides Using Object-Oriented Random Forest in the Hyrcanian Forests. Remote Sens. 2019, 11, 2300.
22. Stubbings, P.; Peskett, J.; Rowe, F.; Arribas-Bel, D. A Hierarchical Urban Forest Index Using Street-Level Imagery and Deep Learning. Remote Sens. 2019, 11, 1395.
23. Abdi, O. Climate-Triggered Insect Defoliators and Forest Fires Using Multitemporal Landsat and TerraClimate Data in NE Iran: An Application of GEOBIA TreeNet and Panel Data Analysis. Sensors 2019, 19, 3965.
24. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
25. Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised Deep Feature Extraction for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1349–1362.
26. Dong, L.; Du, H.; Mao, F.; Han, N.; Li, X.; Zhou, G.; Zhu, D.; Zheng, J.; Zhang, M.; Xing, L.; et al. Very High Resolution Remote Sensing Imagery Classification Using a Fusion of Random Forest and Deep Learning Technique—Subtropical Area for Example. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 13, 113–128.
27. Zhao, W.; Du, S.; Wang, Q.; Emery, W. Contextually guided very-high-resolution imagery classification with semantic segments. ISPRS J. Photogramm. Remote Sens. 2017, 132, 48–60.
28. Sherrah, J. Fully convolutional networks for dense semantic labelling of high-resolution aerial imagery. arXiv 2016, arXiv:1606.02585.
29. Liu, Y.; Fan, B.; Wang, L.; Bai, J.; Xiang, S.; Pan, C. Semantic labeling in very high resolution images via a self-cascaded convolutional neural network. ISPRS J. Photogramm. Remote Sens. 2018, 145, 78–95.
30. Sun, Y.; Zhang, X.; Xin, Q.; Huang, J. Developing a multi-filter convolutional neural network for semantic segmentation using high-resolution aerial imagery and LiDAR data. ISPRS J. Photogramm. Remote Sens. 2018, 143, 3–14.
31. Chen, G.; Zhang, X.; Wang, Q.; Dai, F.; Gong, Y.; Zhu, K. Symmetrical Dense-Shortcut Deep Fully Convolutional Networks for Semantic Segmentation of Very-High-Resolution Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1633–1644.
32. Chen, K.; Weinmann, M.; Sun, X.; Yan, M.; Hinz, S.; Jutzi, B. Semantic Segmentation of Aerial Imagery via Multi-Scale Shuffling Convolutional Neural Networks with Deep Supervision. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 29–36.
33. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86.
34. Xu, J.; Feng, G.; Zhao, T.; Sun, X.; Zhu, M. Remote sensing image classification based on semi-supervised adaptive interval type-2 fuzzy c-means algorithm. Comput. Geosci. 2019, 131, 132–143.
35. Mi, L.; Chen, Z. Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation. ISPRS J. Photogramm. Remote Sens. 2020, 159, 140–152.
36. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. High-resolution semantic labeling with convolutional neural networks. arXiv 2016, arXiv:1611.01962.
37. Liu, Y.; Piramanayagam, S.; Monteiro, S.T.; Saber, E. Dense Semantic Labeling of Very-High-Resolution Aerial Imagery and LiDAR with Fully-Convolutional Neural Networks and Higher-Order CRFs. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1561–1570.
38. Zhang, B.; Zhao, L.; Zhang, X. Three-dimensional convolutional neural network model for tree species classification using airborne hyperspectral images. Remote Sens. Environ. 2020, 247, 111938.
39. Dumoulin, V.; Visin, F. A guide to convolution arithmetic for deep learning. arXiv 2016, arXiv:1603.07285.
40. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
41. Wang, W.; Dou, S.; Jiang, Z.; Sun, L. A Fast Dense Spectral–Spatial Convolution Network Framework for Hyperspectral Images Classification. Remote Sens. 2018, 10, 1068.
42. Li, R.; Duan, C. LiteDenseNet: A Lightweight Network for Hyperspectral Image Classification. arXiv 2020, arXiv:2004.08112.
43. Bai, Y.; Zhang, Q.; Lu, Z.; Zhang, Y. SSDC-DenseNet: A Cost-Effective End-to-End Spectral-Spatial Dual-Channel Dense Network for Hyperspectral Image Classification. IEEE Access 2019, 7, 84876–84889.
44. Li, G.; Zhang, C.; Lei, R.; Zhang, X.; Ye, Z.; Li, X. Hyperspectral remote sensing image classification using three-dimensional-squeeze-and-excitation-DenseNet (3D-SE-DenseNet). Remote Sens. Lett. 2020, 11, 195–203.
45. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015.
46. Flood, N.; Watson, F.; Collett, L. Using a U-net convolutional neural network to map woody vegetation extent from high resolution satellite imagery across Queensland, Australia. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101897.
47. Diakogiannis, F.I.; Waldner, F.; Caccetta, P.; Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 2020, 162, 94–114.
48. Zhao, W.; Du, S.; Emery, W.J. Object-Based Convolutional Neural Network for High-Resolution Imagery Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3386–3396.
49. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.S.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70.
50. Liu, T.; Abd-Elrahman, A. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification. ISPRS J. Photogramm. Remote Sens. 2018, 139, 154–170.
51. Martins, V.S.; Kaleita, A.L.; Gelder, B.K.; Da Silveira, H.L.; Abe, C.A. Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution. ISPRS J. Photogramm. Remote Sens. 2020, 168, 56–73.
52. Zhang, C.; Yue, P.; Tapete, D.; Shangguan, B.; Wang, M.; Wu, Z. A multi-level context-guided classification method with object-based convolutional neural network for land cover classification using very high resolution remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102086.
53. Tong, X.-Y.; Xia, G.-S.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sens. Environ. 2020, 237, 111322.
54. Anderson, J.R. A Land Use and Land Cover Classification System for Use with Remote Sensor Data; USGS Professional Paper, No. 964; US Government Printing Office: Washington, DC, USA, 1976.
55. Gong, P.; Liu, H.; Zhang, M.; Li, C.; Wang, J.; Huang, H.; Clinton, N.; Ji, L.; Li, W.; Bai, Y.; et al. Stable classification with limited sample: Transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. Sci. Bull. 2019, 64, 370–373.
56. Sun, J.; Wang, H.; Song, Z.; Lu, J.; Meng, P.; Qin, S. Mapping Essential Urban Land Use Categories in Nanjing by Integrating Multi-Source Big Data. Remote Sens. 2020, 12, 2386.
57. Bharati, M.H.; Liu, J.; MacGregor, J.F. Image texture analysis: Methods and comparisons. Chemom. Intell. Lab. Syst. 2004, 72, 57–71.
58. Li, Y.; Han, N.; Li, X.; Du, H.; Mao, F.; Cui, L.; Liu, T.; Xing, L. Spatiotemporal Estimation of Bamboo Forest Aboveground Carbon Storage Based on Landsat Data in Zhejiang, China. Remote Sens. 2018, 10, 898.
59. Fatiha, B.; Abdelkader, A.; Latifa, H.; Mohamed, E. Spatio Temporal Analysis of Vegetation by Vegetation Indices from Multi-dates Satellite Images: Application to a Semi Arid Area in Algeria. Energy Procedia 2013, 36, 667–675.
60. Taddeo, S.; Dronova, I.; Depsky, N. Spectral vegetation indices of wetland greenness: Responses to vegetation structure, composition, and spatial distribution. Remote Sens. Environ. 2019, 234, 111467.
61. Zhang, M.; Du, H.; Zhou, G.; Li, X.; Mao, F.; Dong, L.; Zheng, J.; Liu, H.; Huang, Z.; He, S. Estimating Forest Aboveground Carbon Storage in Hang-Jia-Hu Using Landsat TM/OLI Data and Random Forest Model. Forests 2019, 10, 1004.
62. Ren, H.; Zhou, G.; Zhang, F. Using negative soil adjustment factor in soil-adjusted vegetation index (SAVI) for aboveground living biomass estimation in arid grasslands. Remote Sens. Environ. 2018, 209, 439–445.
63. Hadji, I.; Wildes, R.P. What do we understand about convolutional networks? arXiv 2018, arXiv:1803.08834.
64. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
65. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
66. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
67. Yin, R.; Shi, R.; Li, J. Automatic Selection of Optimal Segmentation Scale of High-resolution Remote Sensing Images. J. Geo-Inf. Sci. 2013, 15, 902–910.
68. He, T.; Zhang, Z.; Zhang, H.; Zhang, Z.; Xie, J.; Li, M. Bag of Tricks for Image Classification with Convolutional Neural Networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 558–567.
69. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57.
| Land Use Classes | Subclass Components |
|---|---|
| Forest | Deciduous Forest, Evergreen Forest |
| Build-up | Residential, Commercial and Services, Industrial, Transportation |
| Agricultural Land | Cropland, Nurseries, Other Agricultural Land |
| Grassland | Natural Grassland, Managed Grassland |
| Barren Land | Dry Salt Flats, Sandy Areas other than Beaches, Bare Exposed Rock |
| Water | Streams, Rivers, Ponds |
| Others | Shadows of trees and buildings |
| Feature Types | Feature Names | Details | Remarks |
|---|---|---|---|
| Original bands | Blue band (B) | 450–510 nm | WorldView-3 data |
| | Green band (G) | 510–580 nm | |
| | Red band (R) | 630–690 nm | |
| | Near-infrared band (NIR) | 770–1040 nm | |
| Vegetation indices | Difference vegetation index (DVI) | NIR − R | X takes the value 0.16 and L takes the value 0.5 [62] |
| | Ratio vegetation index (RVI) | NIR/R | |
| | Normalized difference vegetation index (NDVI) | (NIR − R)/(NIR + R) | |
| | Optimized soil-adjusted vegetation index (OSAVI) | (NIR − R)/(NIR + R + X) | |
| | Soil-adjusted vegetation index (SAVI) | (NIR − R)(1 + L)/(NIR + R + L) | |
| | Triangular vegetation index (TVI) | 0.5[120(NIR − G) − 200(R − G)] | |
| Texture features based on the gray-level co-occurrence matrix (GLCM) | Mean (ME) | Σ i·P(i,j) | P(i,j) is the entry in the ith row and jth column of the GLCM of the kth moving window; sums run over all (i, j); ME_i, ME_j and σ_i, σ_j are the GLCM means and standard deviations along rows and columns (standard GLCM definitions) |
| | Variance (VA) | Σ (i − ME)²·P(i,j) | |
| | Entropy (EN) | −Σ P(i,j)·ln P(i,j) | |
| | Angular second moment (SE) | Σ P(i,j)² | |
| | Homogeneity (HO) | Σ P(i,j)/(1 + (i − j)²) | |
| | Contrast (CON) | Σ (i − j)²·P(i,j) | |
| | Dissimilarity (DI) | Σ \|i − j\|·P(i,j) | |
| | Correlation (COR) | Σ (i − ME_i)(j − ME_j)·P(i,j)/(σ_i·σ_j) | |
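For illustration, the vegetation indices in the table above can be computed per pixel from the original bands. The following is a minimal NumPy sketch, assuming the green, red, and NIR bands are float reflectance arrays; the function name and the small epsilon guarding division by zero are not from the paper.

```python
import numpy as np

def vegetation_indices(g, r, nir, eps=1e-8):
    """Per-pixel vegetation indices from the feature table (bands as float arrays)."""
    X, L = 0.16, 0.5                     # OSAVI and SAVI constants [62]
    return {
        "DVI":   nir - r,
        "RVI":   nir / (r + eps),
        "NDVI":  (nir - r) / (nir + r + eps),
        "OSAVI": (nir - r) / (nir + r + X),
        "SAVI":  (nir - r) * (1 + L) / (nir + r + L),
        "TVI":   0.5 * (120 * (nir - g) - 200 * (r - g)),
    }
```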
| Feature | U (TA) | D (TA) | UDN (TA) | U (VA) | D (VA) | UDN (VA) |
|---|---|---|---|---|---|---|
| Spe | 0.975 | 0.971 | 0.981 | 0.914 | 0.923 | 0.936 |
| Spe-Index | 0.969 | 0.977 | 0.980 | 0.935 | 0.916 | 0.920 |
| Spe-Texture | 0.963 | 0.960 | 0.984 | 0.927 | 0.929 | 0.938 |

TA: training accuracy; VA: validation accuracy.
Classification accuracies of the four algorithms based on the Spe features:

| Algorithms | OA | Kappa | Accuracy | Forest | Build-Up | Agricultural Land | Grassland | Barren Land | Water | Others |
|---|---|---|---|---|---|---|---|---|---|---|
| U | 0.903 | 0.887 | UA | 0.834 | 0.892 | 0.756 | 0.951 | 0.963 | 0.997 | 0.990 |
| | | | PA | 0.990 | 0.963 | 0.890 | 0.643 | 0.873 | 0.987 | 0.973 |
| D | 0.905 | 0.889 | UA | 0.863 | 0.855 | 0.781 | 0.920 | 0.975 | 0.997 | 0.993 |
| | | | PA | 0.990 | 0.980 | 0.880 | 0.730 | 0.787 | 0.990 | 0.983 |
| UDN | 0.920 | 0.907 | UA | 0.908 | 0.910 | 0.755 | 0.961 | 0.973 | 1.000 | 0.987 |
| | | | PA | 0.990 | 0.980 | 0.903 | 0.740 | 0.853 | 0.987 | 0.993 |
| OUDN | 0.923 | 0.910 | UA | 0.911 | 0.909 | 0.767 | 0.958 | 0.974 | 0.993 | 0.993 |
| | | | PA | 0.990 | 0.970 | 0.910 | 0.753 | 0.857 | 0.993 | 0.990 |

UA: user's accuracy; PA: producer's accuracy.
Classification accuracies of the four algorithms based on the Spe-Index features:

| Algorithms | OA | Kappa | Accuracy | Forest | Build-Up | Agricultural Land | Grassland | Barren Land | Water | Others |
|---|---|---|---|---|---|---|---|---|---|---|
| U | 0.913 | 0.899 | UA | 0.878 | 0.872 | 0.809 | 0.930 | 0.977 | 1.000 | 0.958 |
| | | | PA | 0.983 | 0.977 | 0.873 | 0.757 | 0.867 | 0.950 | 0.987 |
| D | 0.917 | 0.903 | UA | 0.870 | 0.857 | 0.842 | 0.927 | 0.977 | 0.997 | 0.980 |
| | | | PA | 0.983 | 0.980 | 0.890 | 0.760 | 0.850 | 0.973 | 0.983 |
| UDN | 0.923 | 0.910 | UA | 0.891 | 0.892 | 0.817 | 0.957 | 0.978 | 0.997 | 0.958 |
| | | | PA | 0.983 | 0.963 | 0.923 | 0.750 | 0.883 | 0.960 | 0.997 |
| OUDN | 0.926 | 0.914 | UA | 0.892 | 0.901 | 0.822 | 0.974 | 0.982 | 0.997 | 0.955 |
| | | | PA | 0.987 | 0.973 | 0.937 | 0.753 | 0.887 | 0.957 | 0.990 |
Classification accuracies of the four algorithms based on the Spe-Texture features:

| Algorithms | OA | Kappa | Accuracy | Forest | Build-Up | Agricultural Land | Grassland | Barren Land | Water | Others |
|---|---|---|---|---|---|---|---|---|---|---|
| U | 0.898 | 0.881 | UA | 0.864 | 0.840 | 0.787 | 0.914 | 0.955 | 1.000 | 0.961 |
| | | | PA | 0.993 | 0.963 | 0.860 | 0.677 | 0.857 | 0.947 | 0.987 |
| D | 0.897 | 0.879 | UA | 0.897 | 0.824 | 0.750 | 0.943 | 0.980 | 0.993 | 0.971 |
| | | | PA | 0.987 | 0.970 | 0.930 | 0.660 | 0.797 | 0.943 | 0.990 |
| UDN | 0.932 | 0.921 | UA | 0.857 | 0.873 | 0.913 | 0.954 | 0.985 | 1.000 | 0.970 |
| | | | PA | 0.997 | 0.983 | 0.877 | 0.833 | 0.873 | 0.977 | 0.983 |
| OUDN | 0.938 | 0.928 | UA | 0.877 | 0.866 | 0.932 | 0.970 | 0.985 | 1.000 | 0.967 |
| | | | PA | 0.997 | 0.987 | 0.913 | 0.853 | 0.857 | 0.973 | 0.987 |
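The OA, kappa, UA, and PA values reported above follow directly from the confusion matrices (Figure 12). Below is a minimal NumPy sketch of these standard computations, assuming the matrix rows hold reference (ground truth) counts and the columns hold predictions; the function name and example matrix are illustrative, not the paper's data.

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, kappa, and per-class UA/PA from a confusion matrix
    (rows = reference labels, columns = predicted labels)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                                 # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    ua = np.diag(cm) / cm.sum(axis=0)                     # user's accuracy per class
    pa = np.diag(cm) / cm.sum(axis=1)                     # producer's accuracy per class
    return oa, kappa, ua, pa

# Illustrative 2-class example (not the paper's data):
# oa, kappa, ua, pa = accuracy_metrics([[90, 10], [5, 95]])
```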
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).