Road Extraction from Very-High-Resolution Remote Sensing Images via a Nested SE-Deeplab Model
"> Figure 1
Figure 1. Example of images and labels from the Massachusetts Roads Dataset. (a) The original image and its label; the label has two classes, road (white) and background (black). (b) Local magnification of (a).
Figure 2. Example of data augmentation by image rotation and cropping. (a) The original image; (b) the result after image rotation; (c) the result after image cropping.
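For readers who want to reproduce the kind of rotation-and-cropping augmentation shown in Figure 2, the following is a minimal sketch in Python using Pillow and NumPy. The 90-degree rotation steps, the 256-pixel tile size, and the random crop policy are illustrative assumptions, not settings taken from the paper.

```python
# Illustrative rotation-and-crop augmentation for an image/label pair using
# Pillow and NumPy. The 90-degree rotation steps, the 256-pixel tile size and
# the random crop are assumptions for illustration, not the paper's settings.
import random
import numpy as np
from PIL import Image

def augment(image: Image.Image, label: Image.Image, tile: int = 256):
    """Rotate the pair by a random multiple of 90 degrees, then crop one tile."""
    k = random.randint(0, 3)
    image = image.rotate(90 * k, expand=True)
    label = label.rotate(90 * k, expand=True)
    x = random.randint(0, image.width - tile)   # assumes the image is larger than one tile
    y = random.randint(0, image.height - tile)
    box = (x, y, x + tile, y + tile)
    return np.array(image.crop(box)), np.array(label.crop(box))
```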
Figure 3. The structure of the proposed model (Nested SE-Deeplab) for road extraction from very-high-resolution remote sensing images.
Figure 4. The structure of the Squeeze-and-Excitation module. Fsq(fm) denotes global average pooling, Fex(z, w) produces a weight for every channel of the feature maps, and Fw(f, s) combines the input with these weights to yield the final output of the module (revised after [50]).
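The three steps named in Figure 4 map naturally onto a few Keras layers. Below is a minimal sketch of a Squeeze-and-Excitation block; the reduction ratio of 16 and the two-layer excitation design follow the original SE-Net paper [50] and are assumptions here, not details of the authors' implementation.

```python
# Minimal Squeeze-and-Excitation block in tf.keras (a sketch, not the paper's code).
import tensorflow as tf
from tensorflow.keras import layers

def se_block(feature_map, reduction=16):
    """Channel-wise recalibration of a (N, H, W, C) feature map."""
    channels = int(feature_map.shape[-1])
    # Fsq: squeeze spatial information into one descriptor per channel.
    z = layers.GlobalAveragePooling2D()(feature_map)               # (N, C)
    # Fex: two dense layers learn a weight in (0, 1) for every channel.
    s = layers.Dense(channels // reduction, activation="relu")(z)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    # Fw: scale the input feature maps channel by channel.
    return layers.Multiply()([feature_map, s])
```

In a functional model such a block would simply be dropped in after a convolutional stage, e.g. `x = se_block(x)`.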
Figure 5. The structure of the atrous spatial pyramid pooling (ASPP) module used in Deeplab v3. The module consists of two parts, (a) atrous convolution and (b) image pooling, and produces the final output with a convolution layer applied after concatenating the feature maps (revised after [16]).
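A hedged sketch of the ASPP head described in Figure 5, written with Keras layers, is given below. The dilation rates (6, 12, 18) and the 256-filter width are the common Deeplab v3 defaults [16], assumed here for illustration; the paper may use different settings.

```python
# Sketch of an ASPP head in the spirit of Deeplab v3, written with tf.keras layers.
from tensorflow.keras import layers

def aspp(x, filters=256, rates=(6, 12, 18)):
    h, w = int(x.shape[1]), int(x.shape[2])   # assumes a fixed input tile size
    # (a) one 1x1 convolution plus parallel atrous (dilated) 3x3 convolutions
    branches = [layers.Conv2D(filters, 1, padding="same", activation="relu")(x)]
    for r in rates:
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=r, activation="relu")(x))
    # (b) image pooling: global context, projected and broadcast back to (h, w)
    pooled = layers.GlobalAveragePooling2D()(x)
    pooled = layers.Reshape((1, 1, int(x.shape[-1])))(pooled)
    pooled = layers.Conv2D(filters, 1, activation="relu")(pooled)
    pooled = layers.UpSampling2D(size=(h, w))(pooled)
    branches.append(pooled)
    # fuse all branches with concatenation followed by a 1x1 convolution
    y = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, padding="same", activation="relu")(y)
```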
Figure 6. The structure of the encoder and decoder of Nested SE-Deeplab during training and testing. The structure in the red region is required during training and can be removed during testing.
Figure 7. Visual comparison of four loss functions used with Nested SE-Deeplab on the test set. True positives (TP) are marked in green, false positives (FP) in blue, and false negatives (FN) in red. Rows (a), (b), and (c) show three test examples processed by models trained with different loss functions. Each row contains the input image, the ground truth, and the results for softmax cross entropy, weighted log loss, Dice+BCE, and the Dice coefficient.
Figure 8. Magnification of the results in Figure 7. True positives (TP) are marked in green, false positives (FP) in blue, and false negatives (FN) in red.
Figure 9. Progression of the loss value (A) and training accuracy (B) for four loss functions used with Nested SE-Deeplab during training. The loss functions are softmax cross entropy (softmax), weighted log loss, the Dice coefficient (dice), and the Dice coefficient combined with binary cross entropy (bce).
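For reference, the Dice and Dice+BCE losses compared in Figures 7–9 can be written in a few lines of TensorFlow. This is a generic sketch: the smoothing constant and the equal weighting of the Dice and BCE terms are assumptions, not values reported by the authors.

```python
# Generic Dice and Dice+BCE losses in TensorFlow (a sketch, not the paper's code).
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    """1 - Dice coefficient over the flattened road-probability maps."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice

def dice_bce_loss(y_true, y_pred):
    """Dice loss plus binary cross entropy, weighted equally (an assumption)."""
    y_true = tf.cast(y_true, tf.float32)
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return dice_loss(y_true, y_pred) + bce
```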
Figure 10. Visual comparison of three backbone networks used in Nested SE-Deeplab on the test set. True positives (TP) are marked in green, false positives (FP) in blue, and false negatives (FN) in red. Rows (a), (b), and (c) show three test examples processed by different models. Each row contains the input image, the ground truth, and the results of the proposed model combined with different backbones: ResNet, ResNeXt, and the SE module.
Figure 11. Magnification of the results in Figure 10. True positives (TP) are marked in green, false positives (FP) in blue, and false negatives (FN) in red.
Figure 12. Visual comparison of Nested SE-Deeplab and other deep-learning models on the test set. True positives (TP) are marked in green, false positives (FP) in blue, and false negatives (FN) in red. Rows (a), (b), and (c) show three test examples processed by different models. Each row contains the input image, the ground truth, and the results of five road-extraction models: Deeplab v3, SegNet, UNet, FC-DenseNet, and Nested SE-Deeplab.
Figure 13. Magnification of the results in Figure 12. True positives (TP) are marked in green, false positives (FP) in blue, and false negatives (FN) in red.
Abstract
1. Introduction
2. Materials and Methods
2.1. Dataset
2.2. Nested SE-Deeplab
2.2.1. Squeeze-and-Excitation Module
2.2.2. Model Encoder and Decoder
2.2.3. Model Structure and Training
2.3. Selection of Loss Functions
2.4. Selection of Backbone Networks and Modules
2.5. Comparison with State-of-the-Art
2.6. Parameter Settings
2.7. Evaluation Indexes
3. Results
3.1. Comparison of Loss Functions
3.2. Comparison of Backbone Networks
3.3. Model Comparison
4. Discussion
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Ma, Y.; Wu, H.; Wang, L.; Huang, B.; Ranjan, R.; Zomaya, A.; Jie, W. Remote sensing big data computing: Challenges and opportunities. Future Gener. Comput. Syst. 2015, 51, 47–60. [Google Scholar]
- Liu, P.; Di, L.P.; Du, Q.; Wang, L.Z. Remote sensing big data: Theory, methods and applications. Remote Sens. 2018, 10, 711. [Google Scholar] [CrossRef] [Green Version]
- Casu, F.; Manunta, M.; Agram, P.S.; Crippen, R.E. Big remotely sensed data: Tools, applications and experiences. Remote Sens. Environ. 2017, 202, 1–2. [Google Scholar] [CrossRef]
- Wang, G.Z.; Huang, Y.C. Road automatic extraction of high-resolution remote sensing images. J. Geomat. 2020, 45, 34–38. [Google Scholar]
- Zhang, C.; Sargent, I.; Pan, X.; Li, H.P.; Gardiner, A.; Hare, J.; Atkinson, P.M. Joint Deep Learning for land cover and land use classification. Remote Sens. Environ. 2019, 221, 173–187. [Google Scholar] [CrossRef] [Green Version]
- Ma, L.; Liu, Y.; Zhang, X.L.; Ye, Y.X.; Yin, G.F.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
- Yang, Z.; Mu, X.D.; Zhao, F.A. Scene classification of remote sensing image based on deep network and multi-scale features fusion. Optik 2018, 171, 287–293. [Google Scholar] [CrossRef]
- Ni, K.; Wu, Y.Q. Scene classification from remote sensing images using mid-level deep feature learning. Int. J. Remote Sens. 2020, 41, 1415–1436. [Google Scholar] [CrossRef]
- Fu, K.; Chang, Z.H.; Zhang, Y.; Xu, G.L.; Zhang, K.S.; Sun, X. Rotation-aware and multi-scale convolutional neural network for object detection in remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 161, 294–308. [Google Scholar] [CrossRef]
- Ding, P.; Zhang, Y.; Jia, P.; Chang, X.L. A comparison: Different DCNN models for intelligent object detection in remote sensing images. Neural Process. Lett. 2019, 49, 1369–1379. [Google Scholar] [CrossRef]
- Ding, P.; Zhang, Y.; Jia, P.; Chang, X.L. Vehicle object detection in remote sensing imagery based on multi-perspective convolutional neural network. Neural Process. Lett. 2018, 7, 1369–1379. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2012, 60, 84–90. [Google Scholar] [CrossRef]
- Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The one hundred layers tiramisu: Fully convolutional denseNets for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183. [Google Scholar]
- Chen, L.C.; Zhu, Y.K.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the 15th European Conference on Computer Vision (ECCV 2018), Munich, Germany, 8–14 September 2018; pp. 833–851. [Google Scholar]
- Dong, P.; Xing, L. Robust prediction of isodose distribution with a fully convolutional networks (FCN)-based deep learning model. Int. J. Radiat. Oncol. 2018, 102, S54. [Google Scholar] [CrossRef]
- Drozdzal, M.; Chartrand, G.; Vorontsov, E.; Shakeri, M.; Di Jorio, L.; Tang, A.; Romero, A.; Bengio, Y.; Pal, C.; Kadoury, S. Learning normalized inputs for iterative estimation in medical image segmentation. Med. Image Anal. 2018, 44, 1–13. [Google Scholar] [CrossRef] [Green Version]
- Li, L.W.; Yan, Z.; Shen, Q.; Cheng, G.; Gao, L.R.; Zhang, B. Water body extraction from very high spatial resolution remote sensing data based on fully convolutional networks. Remote Sens. 2019, 11, 1162. [Google Scholar] [CrossRef] [Green Version]
- Wu, G.M.; Shao, X.W.; Guo, Z.L.; Chen, Q.; Yuan, W.; Shi, X.D.; Xu, Y.W.; Shibasaki, R. Automatic building segmentation of aerial imagery using multi-constraint fully convolutional Networks. Remote Sens. 2018, 10, 407. [Google Scholar] [CrossRef] [Green Version]
- Shrestha, S.; Vanneschi, L. Improved fully convolutional network with conditional random fields for building extraction. Remote Sens. 2018, 10, 1135. [Google Scholar] [CrossRef] [Green Version]
- Zhu, H.C.; Adeli, E.; Shi, F.; Shen, D.G. FCN based label correction for multi-atlas guided organ segmentation. Neuroinformatics 2020, 18, 319–331. [Google Scholar] [CrossRef] [PubMed]
- Hu, X.J.; Luo, W.J.; Hu, J.L.; Guo, S.; Huang, W.L.; Scott, M.R.; Wiest, R.; Dahlweid, M.; Reyes, M. Brain SegNet: 3D local refinement network for brain lesion segmentation. BMC Med. Imaging 2020, 20, 98–111. [Google Scholar] [CrossRef] [PubMed]
- Khagi, B.; Kwon, G.R. Pixel-label-based segmentation of cross-sectional brain MRI using simplified SegNet architecture-based CNN. J. Healthc. Eng. 2018, 2018, 3640705. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, G.J.; Wu, M.J.; Wei, X.K.; Song, H.H. Water identification from high-resolution remote sensing images based on multidimensional densely connected convolutional neural networks. Remote Sens. 2020, 12, 795. [Google Scholar] [CrossRef] [Green Version]
- El Adoui, M.; Mahmoudi, S.A.; Larhmam, M.A.; Benjelloun, M. MRI breast tumor segmentation using different encoder and decoder CNN architectures. Computers 2019, 8, 52. [Google Scholar] [CrossRef] [Green Version]
- Majeed, Y.; Zhang, J.; Zhang, X.; Fu, L.S.; Karkee, M.; Zhang, Q.; Whiting, M.D. Deep learning based segmentation for automated training of apple trees on trellis wires. Comput. Electron. Agric. 2020, 170, 105277. [Google Scholar] [CrossRef]
- Song, C.G.; Wu, L.J.; Chen, Z.C.; Zhou, H.F.; Lin, P.J.; Cheng, S.Y.; Wu, Z.H. Pixel-level crack detection in images using SegNet. In Proceedings of the Multi-disciplinary International Conference on Artificial Intelligence (MIWAI 2019), Kuala Lumpur, Malaysia, 17–19 November 2019; pp. 247–254. [Google Scholar]
- He, N.J.; Fang, L.Y.; Plaza, A. Hybrid first and second order attention UNet for building segmentation in remote sensing images. Sci. China Inf. Sci. 2020, 63, 611–622. [Google Scholar] [CrossRef] [Green Version]
- Li, X.M.; Chen, H.; Qi, X.J.; Dou, Q.; Fu, C.W.; Heng, P.A. H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [Google Scholar] [CrossRef] [Green Version]
- Javaid, U.; Dasnoy, D.; Lee, J.A. Multi-organ segmentation of chest CT images in radiation oncology: Comparison of standard and dilated UNet. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS), Poitiers, France, 24–27 September 2018; pp. 188–199. [Google Scholar]
- Nguyen, H.G.; Pica, A.; Maeder, P.; Schalenbourg, A.; Peroni, M.; Hrbacek, J.; Weber, D.C.; Cuadra, M.B.; Sznitman, R. Ocular structures segmentation from multi-sequence MRI using 3D UNet with fully connected CRFs. In Proceedings of the 1st International Workshop on Computational Pathology (COMPAY), Granada, Spain, 16–20 September 2018; pp. 167–175. [Google Scholar]
- Zhang, Y.; Li, W.H.; Gong, W.G.; Wang, Z.X.; Sun, J.X. An improved boundary-aware perceptual loss for building extraction from VHR images. Remote Sens. 2020, 12, 1195. [Google Scholar] [CrossRef] [Green Version]
- Yue, K.; Yang, L.; Li, R.R.; Hu, W.; Zhang, F.; Li, W. TreeUNet: Adaptive tree convolutional neural networks for subdecimeter aerial image segmentation. ISPRS J. Photogramm. Remote Sens. 2019, 156, 1–13. [Google Scholar] [CrossRef]
- Hinton, G.E. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 599–619. [Google Scholar]
- Chen, Y.S.; Hong, Z.J.; He, Q.; Ma, H.B. Road extraction from high-resolution remote sensing images based on synthetical characteristics. In Proceedings of the International Conference on Measurement, Instrumentation and Automation (ICMIA), Guilin, China, 23–24 April 2013; pp. 828–831. [Google Scholar]
- Zhang, Z.X.; Liu, Q.J.; Wang, Y.H. Road extraction by deep residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
- Wang, J.; Song, J.W.; Chen, M.Q.; Yang, Z. Road network extraction: A neural-dynamic framework based on deep learning and a finite state machine. Int. J. Remote Sens. 2015, 36, 3144–3169. [Google Scholar] [CrossRef]
- Tao, C.; Qi, J.; Wang, H.; Li, H.F. Spatial information inference net: Road extraction using road-specific contextual information. ISPRS J. Photogramm. Remote Sens. 2019, 158, 155–166. [Google Scholar] [CrossRef]
- Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Mura, M.D. Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139–149. [Google Scholar] [CrossRef]
- Xie, Y.; Miao, F.; Zhou, K.; Peng, J. HsgNet: A Road Extraction Network Based on Global Perception of High-Order Spatial Information. ISPRS Int. J. GeoInf. 2019, 8, 571. [Google Scholar] [CrossRef] [Green Version]
- Tejenaki, S.A.K.; Ebadi, H.; Mohammadzadeh, A. A new hierarchical method for automatic road centerline extraction in urban areas using LIDAR data. Adv. Space Res. 2019, 64, 1792–1806. [Google Scholar] [CrossRef]
- Liu, R.Y.; Song, J.F.; Miao, Q.G.; Xu, P.F.; Xue, Q. Road centerlines extraction from high resolution images based on an improved directional segmentation and road probability. Neurocomputing 2016, 212, 88–95. [Google Scholar] [CrossRef]
- Yang, X.F.; Li, X.T.; Ye, Y.M.; Lau, R.Y.K.; Zhang, X.F.; Huang, X.H. Road detection and centerline extraction via deep recurrent convolutional neural network U-Net. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7209–7220. [Google Scholar] [CrossRef]
- Yu, F.; Koltun, V. Multi-Scale Context Aggregation by Dilated Convolutions. arXiv 2016, arXiv:1511.07122. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 346–361. [Google Scholar]
- Mnih, V.; Hinton, G.E. Learning to detect roads in high-resolution aerial images. In Proceedings of the European Conference on Computer Vision (ECCV), Heraklion, Greece, 5–11 September 2010; pp. 210–223. [Google Scholar]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E.H. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhao, B.; Wu, X.; Feng, J.S.; Peng, Q.; Yan, S.C. Diversified visual attention networks for fine-grained object classification. IEEE Trans. Multimed. 2017, 19, 1245–1256. [Google Scholar] [CrossRef] [Green Version]
- Wang, F.; Jiang, M.Q.; Qian, C.; Yang, S.; Li, C.; Zhang, H.G.; Wang, X.G.; Tang, X.O. Residual attention network for image classification. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6450–6458. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 630–645. [Google Scholar]
- Xie, S.N.; Girshick, R.; Dollar, P.; Tu, Z.W.; He, K.M. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995. [Google Scholar]
- Python Software Foundation. Available online: https://www.python.org (accessed on 17 July 2017).
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
- Chollet, F. Keras. Available online: https://github.com/fchollet/keras (accessed on 2 December 2017).
- He, H.; Wang, S.C.; Yang, D.F.; Wang, S.Y.; Liu, X. Remote sensing image road extraction method based on encoder-decoder network. Acta Geod. Cartogr. Sin. 2019, 48, 330–338. [Google Scholar]
- He, H.; Yang, D.; Wang, S.; Wang, S.; Li, Y. Road extraction by using atrous spatial pyramid pooling integrated encoder-decoder network and structural similarity loss. Remote Sens. 2019, 11, 1015. [Google Scholar] [CrossRef] [Green Version]
| | Training Set | Validation Set | Test Set |
|---|---|---|---|
| Images | 604 | 66 | 27 |
| Sub-Images | 96976 | 7654 | / |
| Experiment | Methods | Correctness | F1-Score | IoU |
|---|---|---|---|---|
| Figure 7a | Softmax | 0.8707 | 0.8821 | 0.7823 |
| | WBCE | 0.8728 | 0.8806 | 0.7867 |
| | Dice+BCE | 0.8757 | 0.8823 | 0.7893 |
| | Dice | 0.8739 | 0.8836 | 0.7914 |
| Figure 7b | Softmax | 0.9061 | 0.9101 | 0.8350 |
| | WBCE | 0.9078 | 0.9098 | 0.8359 |
| | Dice+BCE | 0.9132 | 0.9136 | 0.8410 |
| | Dice | 0.9140 | 0.9167 | 0.8462 |
| Figure 7c | Softmax | 0.8108 | 0.8355 | 0.7175 |
| | WBCE | 0.8143 | 0.8364 | 0.7189 |
| | Dice+BCE | 0.8291 | 0.8441 | 0.7302 |
| | Dice | 0.8363 | 0.8497 | 0.7387 |
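The correctness, F1-score, and IoU columns in these tables follow the standard pixel-wise definitions built from TP, FP, and FN counts (the same quantities shown in green, blue, and red in the figures). A small NumPy sketch of these definitions is given below; it is a generic implementation, not the authors' evaluation code.

```python
# Standard pixel-wise correctness (precision), F1-score and IoU from TP/FP/FN counts.
import numpy as np

def road_metrics(pred, truth, eps=1e-12):
    """`pred` and `truth` are binary masks where 1 = road and 0 = background."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    correctness = tp / (tp + fp + eps)                    # precision
    completeness = tp / (tp + fn + eps)                   # recall
    f1 = 2 * correctness * completeness / (correctness + completeness + eps)
    iou = tp / (tp + fp + fn + eps)
    return correctness, f1, iou
```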
| Experiment | Methods | Correctness | F1-Score | IoU |
|---|---|---|---|---|
| Figure 10a | ResNet | 0.9100 | 0.9112 | 0.8385 |
| | ResNeXt | 0.9088 | 0.9161 | 0.8455 |
| | SE-Net | 0.9140 | 0.9167 | 0.8462 |
| Figure 10b | ResNet | 0.8152 | 0.8274 | 0.7056 |
| | ResNeXt | 0.8395 | 0.8447 | 0.7312 |
| | SE-Net | 0.8380 | 0.8541 | 0.7454 |
| Figure 10c | ResNet | 0.7941 | 0.8095 | 0.6800 |
| | ResNeXt | 0.8120 | 0.8190 | 0.6935 |
| | SE-Net | 0.8270 | 0.8254 | 0.7027 |
| Methods | Overall Accuracy | Correctness | F1-Score | IoU |
|---|---|---|---|---|
| Deeplab v3 | 0.862 | 0.694 | 0.734 | 0.5878 |
| SegNet | 0.873 | 0.695 | 0.724 | 0.6256 |
| U-Net | 0.923 | 0.793 | 0.821 | 0.6932 |
| ELU-SegNet-R | / | 0.847 | 0.812 | / |
| FC-DenseNet | 0.954 | 0.809 | 0.833 | 0.7189 |
| DCED | / | 0.839 | 0.829 | / |
| ASPP-UNet [60] | / | 0.849 | 0.832 | / |
| Our method (Nested SE-Deeplab) | 0.967 | 0.858 | 0.857 | 0.7387 |
| Model | Batches (Pruning) | Inference Time (Pruning) | Batches (No Pruning) | Inference Time (No Pruning) |
|---|---|---|---|---|
| Nested SE-Deeplab | 10 | 1 m 42 s | 10 | 1 m 56 s |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Lin, Y.; Xu, D.; Wang, N.; Shi, Z.; Chen, Q. Road Extraction from Very-High-Resolution Remote Sensing Images via a Nested SE-Deeplab Model. Remote Sens. 2020, 12, 2985. https://doi.org/10.3390/rs12182985