Utilizing Dual-Stream Encoding and Transformer for Boundary-Aware Agricultural Parcel Extraction in Remote Sensing Images
Figure 1. Schematic of the proposed segmentation framework.

Figure 2. Detailed process of the proposed network. x^l (l ∈ [1, 2, 3, 4]) represents the feature map obtained by the encoder; y^l (l ∈ [1, 2, 3]) represents the feature map obtained by the decoder.

Figure 3. Dual-stream feature extraction.

Figure 4. Specific structure of the Convolution Block and the Residual Block.

Figure 5. Structure of the Global Feature Fusion Module (GFFM).

Figure 6. Schematic of the Dice loss function based on the boundary area. (a) The prediction result and ground truth of the i-th class. (b) The prediction result of the i-th class and the edge area of the ground truth; d is the width of the edge area.

Figure 7. Overview of the dataset.

Figure 8. Agricultural parcels extracted by DSTBA-Net from the Denmark (DK) Sentinel-2 image.

Figure 9. Examples of agricultural parcels delineated by different methods on the Denmark dataset. (a,b) are image slices from areas with complex backgrounds; (c) is an image slice from a relatively regular area; (d) is an image slice from a contiguously distributed area.

Figure 10. Examples of parcel boundaries extracted by different methods in a Denmark testing area.

Figure 11. Agricultural parcels extracted by DSTBA-Net from the Shandong GF-2 image, China (CN).

Figure 12. Examples of agricultural parcels delineated by different methods on the Shandong GF-2 image. (a,d) are image slices of agricultural parcels in regular layouts, while (b,c) are image slices of agricultural parcels in dense layouts.

Figure 13. Examples of parcel boundaries extracted by different methods in a Shandong testing area.

Figure 14. Prediction results for the entire Shandong dataset image. White areas represent agricultural parcels; black areas represent non-agricultural parcels.

Figure 15. Visualization of feature maps for the Residual Block. x^l and x̂^l, l ∈ [1, 2, 3, 4], represent the feature maps from regular convolutions and from the provided Residual Blocks at each layer.

Figure 16. Visualization of feature maps for BFG. (A) the selected image; (B) the feature maps before the addition of BFG; (C) the feature maps after the addition of BFG.

Figure 17. Visualization of feature maps for the decoding part. y^l, y′^l, and ŷ^l, l ∈ [1, 2, 3, 4], respectively denote the feature maps for each layer of the baseline network's decoding part, after the inclusion of FCR, and after the inclusion of GFFM.
Abstract
1. Introduction
- (1) DSTBA-Net, a novel segmentation network framework designed to accurately extract agricultural parcels from remote sensing images, is proposed.
- (2) Dual-Stream Feature Extraction (DSFE) is designed to perform multi-level feature extraction on image and boundary data, guiding the model to focus on image edges and thereby preserving the unique morphological characteristics of parcels.
- (3) A Transformer-dominated Global Feature Fusion Module (GFFM) is designed to effectively capture long-distance dependencies and merge them with detailed features, enhancing the completeness of feature extraction.
- (4) A boundary-aware weighted loss algorithm is designed to balance the weights of image interiors and edges, effectively improving feature discrimination.
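The boundary-aware weighted loss (Section 2.4, illustrated in Figure 6) evaluates a Dice term inside a band of width d around the ground-truth boundary and balances it against a whole-image term. A minimal sketch of one plausible reading — the band extraction via morphological dilation/erosion and the weight `lam` are our assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def dice(pred, target, eps=1e-6):
    # Soft Dice coefficient over (possibly flattened) masks.
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_band(mask, d=3):
    # Band of width ~d around the mask boundary: dilation minus erosion.
    struct = np.ones((3, 3), bool)
    dil = binary_dilation(mask, struct, iterations=d)
    ero = binary_erosion(mask, struct, iterations=d)
    return dil & ~ero

def boundary_aware_dice_loss(pred, target, d=3, lam=0.5):
    # Weighted sum of a whole-image Dice loss and a Dice loss
    # evaluated only inside the ground-truth boundary band.
    band = boundary_band(target.astype(bool), d)
    region_loss = 1.0 - dice(pred, target)
    edge_loss = 1.0 - dice(pred[band], target[band])
    return lam * edge_loss + (1.0 - lam) * region_loss
```

With `lam = 0.5`, edge and interior errors are weighted equally, which matches the best-performing coefficient reported in Section 5.2.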
2. Methodology
2.1. Framework Introduction
2.2. Global Feature Fusion Module (GFFM)
2.3. Feature Compensation Reconstruction (FCR)
2.4. Boundary-Aware Weighted Loss
3. Experiments
3.1. Dataset
3.2. Implementation Details
3.3. Evaluation Metrics
4. Results
4.1. Experiment Using the Denmark Sentinel-2 Image
4.2. Experiment Using the Shandong GF-2 Image
4.3. Ablation Experiments of DSTBA-Net
5. Discussion
5.1. Module-Wise Feature Map Analysis
5.2. Analysis of Weight Coefficients
5.3. Discussion on Data Variability
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Areas | Satellites | Dates | Resolution (m) | Size (pixels) | Area (km²) |
---|---|---|---|---|---|
Denmark | Sentinel-2 | 8 May 2016 | 10 | 10,982 × 20,978 | 20,900 |
Shandong | Gaofen-2 | 20 December 2021 | 1 | 10,661 × 8769 | 91.70 |
(OA, P, R, F1, and IoU are common metrics; 95% HD and SSIM are boundary metrics.)

Method | OA (%) | P (%) | R (%) | F1 (%) | IoU (%) | 95% HD | SSIM (%) |
---|---|---|---|---|---|---|---|
DSTBA-Net | 93.00 | 87.13 | 86.03 | 85.90 | 78.13 | 97.73 | 81.26 |
SEANet | 92.20 | 86.84 | 85.28 | 85.50 | 77.04 | 93.86 | 79.34 |
U2-Net | 92.14 | 83.70 | 86.19 | 84.36 | 76.06 | 125.56 | 79.57 |
BsiNet | 91.03 | 85.00 | 81.91 | 82.92 | 73.77 | 104.84 | 76.53 |
U-Net | 86.68 | 73.87 | 86.53 | 79.25 | 68.87 | 98.84 | 71.41 |
Deeplabv3+ | 85.75 | 73.38 | 86.94 | 78.83 | 68.37 | 104.35 | 69.13 |
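The common metrics in the tables (OA, P, R, F1, IoU) follow standard confusion-matrix definitions. A minimal sketch for binary parcel masks (the boundary metrics, 95% HD and SSIM, require boundary extraction and are handled separately):

```python
import numpy as np

def common_metrics(pred, target):
    # pred, target: binary numpy arrays of the same shape.
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    oa = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy
    p = tp / max(tp + fp, 1)               # precision
    r = tp / max(tp + fn, 1)               # recall
    f1 = 2 * p * r / max(p + r, 1e-12)     # F1 score
    iou = tp / max(tp + fp + fn, 1)        # intersection over union
    return {"OA": oa, "P": p, "R": r, "F1": f1, "IoU": iou}
```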
Method | OA (%) | P (%) | R (%) | F1 (%) | IoU (%) | 95% HD | SSIM (%) |
---|---|---|---|---|---|---|---|
DSTBA-Net | 95.05 | 96.51 | 96.47 | 96.46 | 93.24 | 54.57 | 84.29 |
SEANet | 93.45 | 96.57 | 93.42 | 94.76 | 91.82 | 70.68 | 80.86 |
U2-Net | 92.34 | 95.47 | 91.97 | 93.59 | 88.51 | 56.57 | 84.15 |
BsiNet | 94.49 | 95.20 | 94.74 | 94.73 | 91.48 | 60.92 | 83.59 |
U-Net | 91.19 | 89.67 | 96.73 | 92.86 | 87.52 | 80.00 | 81.95 |
Deeplabv3+ | 93.37 | 95.08 | 95.37 | 95.11 | 90.89 | 49.04 | 80.78 |
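The 95% HD column is the 95th percentile of the symmetric distances between predicted and reference boundary pixels (lower is better). A sketch of one common definition — the one-pixel-erosion boundary extraction here is our assumption, not necessarily the paper's:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def hd95(a, b):
    # 95th-percentile symmetric Hausdorff distance between the
    # boundaries of two non-empty binary masks a and b.
    def border(m):
        m = m.astype(bool)
        return m & ~binary_erosion(m)
    def dists(src, dst):
        # Distance from each boundary pixel of src to the nearest
        # boundary pixel of dst, via a Euclidean distance transform.
        dt = distance_transform_edt(~border(dst))
        return dt[border(src)]
    d = np.concatenate([dists(a, b), dists(b, a)])
    return np.percentile(d, 95)
```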
Model Name | Baseline | Residual Block | BFG | GFFM |
---|---|---|---|---|
(a) | ✓ | |||
(b) | ✓ | ✓ | ||
(c) | ✓ | ✓ | ✓ | |
(d) | ✓ | ✓ | ✓ | |
(e) | ✓ | ✓ | ✓ | ✓ |
Model | OA (%) | P (%) | R (%) | F1 (%) | IoU (%) | 95% HD | SSIM (%) |
---|---|---|---|---|---|---|---|
(a) | 86.68 | 73.87 | 86.53 | 79.25 | 68.87 | 98.84 | 71.41 |
(b) | 87.76 | 75.91 | 87.23 | 81.19 | 71.27 | 101.54 | 72.04 |
(c) | 90.16 | 83.42 | 83.13 | 82.47 | 74.11 | 99.98 | 76.32 |
(d) | 91.60 | 84.51 | 83.98 | 83.37 | 75.02 | 104.59 | 77.55 |
(e) | 93.00 | 87.13 | 86.03 | 85.90 | 78.13 | 97.73 | 81.26 |
Coefficient | OA (%) | P (%) | R (%) | F1 (%) | IoU (%) | 95% HD | SSIM (%) |
---|---|---|---|---|---|---|---|
0.3 | 91.70 | 85.55 | 84.41 | 84.18 | 75.91 | 114.86 | 78.68 |
0.4 | 92.24 | 86.19 | 84.84 | 84.55 | 76.36 | 107.37 | 79.24 |
0.5 | 93.00 | 87.13 | 86.03 | 85.90 | 78.13 | 97.73 | 81.26 |
0.6 | 92.63 | 86.15 | 85.76 | 85.15 | 77.13 | 105.42 | 80.09 |
0.7 | 91.81 | 85.60 | 84.11 | 84.07 | 75.93 | 110.40 | 78.71 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xu, W.; Wang, J.; Wang, C.; Li, Z.; Zhang, J.; Su, H.; Wu, S. Utilizing Dual-Stream Encoding and Transformer for Boundary-Aware Agricultural Parcel Extraction in Remote Sensing Images. Remote Sens. 2024, 16, 2637. https://doi.org/10.3390/rs16142637