RDAU-Net: A U-Shaped Semantic Segmentation Network for Buildings near Rivers and Lakes Based on a Fusion Approach
Figures:
- Figure 1. The architecture of RDAU-Net.
- Figure 2. The running flow of dynamic convolution.
- Figure 3. The structure of the RDSC module.
- Figure 4. The structure of the MCCT module.
- Figure 5. Multi-channel attention mechanism.
- Figure 6. The structure of the DCF module.
- Figure 7. Example of a typical sample of the HRI Building dataset.
- Figure 8. Results of ablation experiments on the HRI Building dataset. (a) Image. (b) Ground truth. (c) Baseline. (d) Baseline + RDSC. (e) Baseline + RDSC + MCCT. (f) Baseline + RDSC + MCCT + DCF.
- Figure 9. Results of ablation experiments on the WHU Building dataset. (a) Image. (b) Ground truth. (c) Baseline. (d) Baseline + RDSC. (e) Baseline + RDSC + MCCT. (f) Baseline + RDSC + MCCT + DCF.
- Figure 10. Visualization of the results of comparative experiments on the HRI Building dataset. (a) Image. (b) Ground truth. (c) FCN. (d) U-Net. (e) U-Net++. (f) Swin-UNet. (g) ACC-UNet. (h) CSC-UNet. (i) UCTransNet. (j) DTA-UNet. (k) RDAU-Net.
- Figure 11. Visualization of the results of comparative experiments on the WHU Building dataset. (a) Image. (b) Ground truth. (c) FCN. (d) U-Net. (e) U-Net++. (f) Swin-UNet. (g) ACC-UNet. (h) CSC-UNet. (i) UCTransNet. (j) DTA-UNet. (k) RDAU-Net.
Abstract
1. Introduction
- It proposes RDAU-Net, which consists of a residual dynamic short-cut down-sampling module (RDSC) and two attention modules (MCCT and DCF); the model can accurately segment buildings while suppressing the effects of specular reflections from water bodies and of complex background objects such as boats;
- The residual dynamic short-cut down-sampling (RDSC) module is proposed to address the heavy loss of feature information during down-sampling in the encoding stage. It enhances the model’s representational capacity without increasing network depth or width, and it improves both the completeness of building segmentation and the extraction of fine details;
- The multi-channel cross fusion transformer (MCCT) module reduces the semantic differences that exist in the encoder-decoder features, greatly reduces the negative impact of boat targets on building segmentation, and makes it less likely that independent buildings at multiple scales will be missed during segmentation;
- The double-feature channel-wise fusion attention (DCF) module is proposed to fuse the features produced by the MCCT with the decoder features. It reduces the semantic gap between same-scale features during decoding, which helps suppress the effects of specular reflections from water bodies and of boats along rivers and lakes on building segmentation;
- The HRI Building dataset was constructed under the guidance of river and lake regulation staff and is the first building dataset to include multiple water-edge building types to represent complex scenarios of river and lake regulation. It fills the gaps regarding river and lake regulation scenarios in the existing building semantic segmentation datasets.
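The dynamic convolution inside the RDSC module follows the "attention over convolution kernels" idea (Chen et al., CVPR 2020): several candidate kernels are mixed with input-dependent attention weights before a single convolution is applied. The sketch below is a minimal pure-Python, 1-D illustration of that mechanism, not the paper's implementation; the kernel values and the `proj` projection are illustrative stand-ins for learned parameters.

```python
import math

def softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kernel_attention(signal, num_kernels, proj):
    """Squeeze the input (global average pooling), then project it to one
    attention logit per candidate kernel. `proj` is a toy stand-in for the
    small FC layers used in dynamic convolution."""
    gap = sum(signal) / len(signal)
    return softmax([proj[k] * gap for k in range(num_kernels)])

def dynamic_conv1d(signal, kernels, proj):
    """Aggregate K candidate kernels with input-dependent attention weights,
    then apply a single 'valid' 1-D convolution with the mixed kernel."""
    pi = kernel_attention(signal, len(kernels), proj)
    ksize = len(kernels[0])
    mixed = [sum(pi[k] * kernels[k][j] for k in range(len(kernels)))
             for j in range(ksize)]
    return [sum(signal[i + j] * mixed[j] for j in range(ksize))
            for i in range(len(signal) - ksize + 1)]

# Two candidate 3-tap kernels; the attention decides their mixture per input.
kernels = [[1.0, 0.0, -1.0],   # edge-like kernel
           [0.25, 0.5, 0.25]]  # smoothing kernel
out = dynamic_conv1d([1.0, 2.0, 3.0, 4.0], kernels, proj=[1.0, -1.0])
```

Because the attention weights depend on the input, two different signals can be filtered by two different effective kernels while sharing the same parameter budget, which is how RDSC gains representational capacity without extra depth or width.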
2. Related Works
2.1. CNN for Semantic Segmentation in Remote Sensing
2.2. Transformer for Semantic Segmentation in Remote Sensing
2.3. Combining CNN and Transformer for Semantic Segmentation in Remote Sensing
3. Methods
3.1. Overall Architecture of RDAU-Net
3.2. Residual Dynamic Short-Cut Down-Sampling (RDSC)
3.2.1. Dynamic Convolution
3.2.2. Specific Structure of the RDSC
3.3. Multi-Channel Cross Fusion Transformer (MCCT)
3.3.1. Multi-Scale Feature Embedding
3.3.2. Multi-Head Cross-Attention
3.4. Double-Feature Channel-Wise Fusion Attention (DCF)
4. Experimental Setup
4.1. Datasets
4.1.1. HRI Building Dataset
4.1.2. WHU Building Dataset
4.2. Implementation Details
4.3. Loss Function Selection
4.4. Evaluation Metrics
5. Results and Discussion
5.1. Ablation Study
5.1.1. Effect of RDSC
5.1.2. Effect of MCCT
5.1.3. Effect of DCF
5.2. Comparison with State-of-the-Art Methods
5.2.1. Results on the HRI Building Dataset
5.2.2. Results on the WHU Building Dataset
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cid, N.; Erős, T.; Heino, J.; Singer, G.; Jähnig, S.C.; Cañedo-Argüelles, M.; Bonada, N.; Sarremejane, R.; Mykrä, H.; Sandin, L.; et al. From meta-system theory to the sustainable management of rivers in the Anthropocene. Front. Ecol. Environ. 2022, 20, 49–57.
- Su, X.; Fan, Y.; Wen, C. Systematic coupling and multistage interactive response of the urban land use efficiency and ecological environment quality. J. Environ. Manag. 2024, 365, 121584.
- Huang, X.; Shen, J.; Li, S.; Chi, C.; Guo, P.; Hu, P. Sustainable flood control strategies under extreme rainfall: Allocation of flood drainage rights in the middle and lower reaches of the Yellow River based on a new decision-making framework. J. Environ. Manag. 2024, 367, 122020.
- Huang, X.; Hua, W.; Dai, X. Performance Evaluation of Watershed Environment Governance—A Case Study of Taihu Basin. Water 2022, 14, 158.
- Xue, H.; Liu, K.; Wang, Y.; Chen, Y.; Huang, C.; Wang, P.; Li, L. MAD-UNet: A Multi-Region UAV Remote Sensing Network for Rural Building Extraction. Sensors 2024, 24, 2393.
- Notarangelo, N.M.; Mazzariello, A.; Albano, R.; Sole, A. Comparing Three Machine Learning Techniques for Building Extraction from a Digital Surface Model. Appl. Sci. 2021, 11, 6072.
- Chen, R.; Li, X.; Li, J. Object-Based Features for House Detection from RGB High-Resolution Images. Remote Sens. 2018, 10, 451.
- Tamilarasi, R.; Prabu, S. Automated building and road classifications from hyperspectral imagery through a fully convolutional network and support vector machine. J. Supercomput. 2021, 77, 13243–13261.
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
- Liu, L.W.; Huang, Z.; Lu, K.Y.; Wang, Z.X.; Liang, Y.M.; Lin, S.Y.; Ji, Y.H. UJAT-Net: A U-Net Combined Joint-Attention and Transformer for Breast Tubule Segmentation in H&E Stained Images. IEEE Access 2024, 12, 34582–34591.
- Lin, A.; Chen, B.; Xu, J.; Zhang, Z.; Lu, G.; Zhang, D. DS-TransUNet: Dual Swin Transformer U-Net for Medical Image Segmentation. IEEE Trans. Instrum. Meas. 2022, 71, 4005615.
- Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Can semantic labeling methods generalize to any city? The Inria aerial image labeling benchmark. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 3226–3229.
- Shao, Z.; Tang, P.; Wang, Z.; Saleem, N.; Yam, S.; Sommai, C. BRRNet: A Fully Convolutional Neural Network for Automatic Building Extraction From High-Resolution Remote Sensing Images. Remote Sens. 2020, 12, 1050.
- Guo, H.; Du, B.; Zhang, L.; Su, X. A coarse-to-fine boundary refinement network for building footprint extraction from remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 183, 240–252.
- Liu, H.; Luo, J.; Huang, B.; Yang, H.; Hu, X.; Xu, N.; Xia, L. Building Extraction based on SE-Unet. J. Geo-Inf. Sci. 2019, 21, 1779–1789.
- Lu, K.; Sun, Y.; Ong, S.H. Dual-Resolution U-Net: Building Extraction from Aerial Images. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 489–494.
- Wang, S.; Hou, X.; Zhao, X. Automatic Building Extraction From High-Resolution Aerial Imagery via Fully Convolutional Encoder-Decoder Network With Non-Local Block. IEEE Access 2020, 8, 7313–7322.
- Ji, S.; Wei, S. Building extraction via convolutional neural networks from an open remote sensing building dataset. Acta Geod. Cartogr. Sin. 2019, 48, 448.
- Mo, Y.; Wu, Y.; Yang, X.; Liu, F.; Liao, Y. Review the state-of-the-art technologies of semantic segmentation based on deep learning. Neurocomputing 2022, 493, 626–646.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 1856–1867.
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; pp. 833–851.
- Yu, M.; Chen, X.; Zhang, W.; Liu, Y. AGs-Unet: Building Extraction Model for High Resolution Remote Sensing Images Based on Attention Gates U Network. Sensors 2022, 22, 2932.
- Zhong, L.; Lin, Y.; Su, Y.; Fang, X. Improved U-Net Network Segmentation Method for Remote Sensing Image. In Proceedings of the 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Beijing, China, 3–5 October 2022; pp. 1034–1039.
- Liu, Y.; Chen, D.; Ma, A.; Zhong, Y.; Fang, F.; Xu, K. Multiscale U-Shaped CNN Building Instance Extraction Framework With Edge Constraint for High-Spatial-Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6106–6120.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002.
- Meng, X.; Yang, Y.; Wang, L.; Wang, T.; Li, R.; Zhang, C. Class-Guided Swin Transformer for Semantic Segmentation of Remote Sensing Imagery. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6517505.
- Xu, Z.; Geng, J.; Jiang, W. MMT: Mixed-Mask Transformer for Remote Sensing Image Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5605116.
- Chen, Y.; Dong, Q.; Wang, X.; Zhang, Q.; Kang, M.; Jiang, W.; Wang, M.; Xu, L.; Zhang, C. Hybrid Attention Fusion Embedded in Transformer for Remote Sensing Image Semantic Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 4421–4435.
- Xiao, X.; Guo, W.; Chen, R.; Hui, Y.; Wang, J.; Zhao, H. A Swin Transformer-Based Encoding Booster Integrated in U-Shaped Network for Building Extraction. Remote Sens. 2022, 14, 2611.
- Wang, T.; Xu, C.; Liu, B.; Yang, G.; Zhang, E.; Niu, D.; Zhang, H. MCAT-UNet: Convolutional and Cross-Shaped Window Attention Enhanced UNet for Efficient High-Resolution Remote Sensing Image Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9745–9758.
- Liu, B.; Li, B.; Sreeram, V.; Li, S. MBT-UNet: Multi-Branch Transform Combined with UNet for Semantic Segmentation of Remote Sensing Images. Remote Sens. 2024, 16, 2776.
- Ding, R.X.; Xu, Y.H.; Liu, J.; Zhou, W.; Chen, C. LSENet: Local and Spatial Enhancement to Improve the Semantic Segmentation of Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2024, 21, 7506005.
- Dimitrovski, I.; Spasev, V.; Loshkovska, S.; Kitanovski, I. U-Net Ensemble for Enhanced Semantic Segmentation in Remote Sensing Imagery. Remote Sens. 2024, 16, 2077.
- Chen, Y.; Dai, X.; Liu, M.; Chen, D.; Yuan, L.; Liu, Z. Dynamic convolution: Attention over convolution kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11030–11039.
- Feng, M.; Sun, X.; Dong, J.; Zhao, H. Gaussian Dynamic Convolution for Semantic Segmentation in Remote Sensing Images. Remote Sens. 2022, 14, 5736.
- Hou, J.; Guo, Z.; Wu, Y.; Diao, W.; Xu, T. BSNet: Dynamic Hybrid Gradient Convolution Based Boundary-Sensitive Network for Remote Sensing Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5624022.
- Wang, X.; Zhang, Y.; Lei, T.; Wang, Y.; Zhai, Y.; Nandi, A.K. Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images. Remote Sens. 2022, 14, 4941.
- Chen, G.P.; Li, L.; Zhang, J.X.; Dai, Y. Rethinking the unpretentious U-net for medical ultrasound image segmentation. Pattern Recognit. 2023, 142, 109728.
- Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 2020, 121, 74–87.
- Wang, H.N.; Cao, P.; Wang, J.Q.; Zaiane, O.R. UCTransNet: Rethinking the Skip Connections in U-Net from a Channel-Wise Perspective with Transformer. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI-22), Virtual Event, 22 February–1 March 2022; pp. 2441–2449.
- Ming, Q.; Xiao, X. Towards Accurate Medical Image Segmentation With Gradient-Optimized Dice Loss. IEEE Signal Process. Lett. 2024, 31, 191–195.
- Wang, S.Y.; Qu, Z.; Gao, L.Y. Multi-Spatial Pyramid Feature and Optimizing Focal Loss Function for Object Detection. IEEE Trans. Intell. Veh. 2024, 9, 1054–1065.
- Connor, R.; Dearle, A.; Claydon, B.; Vadicamo, L. Correlations of Cross-Entropy Loss in Machine Learning. Entropy 2024, 26, 491.
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Tel Aviv, Israel, 23–27 October 2022; pp. 205–218.
- Ibtehaz, N.; Kihara, D. ACC-UNet: A Completely Convolutional UNet Model for the 2020s; Springer Nature: Cham, Switzerland, 2023; pp. 692–702.
- Tang, H.; He, S.; Yang, M.; Lu, X.; Yu, Q.; Liu, K.; Yan, H.; Wang, N. CSC-Unet: A Novel Convolutional Sparse Coding Strategy Based Neural Network for Semantic Segmentation. IEEE Access 2024, 12, 35844–35854.
- Li, Y.; Yan, B.; Hou, J.; Bai, B.; Huang, X.; Xu, C.; Fang, L. UNet based on dynamic convolution decomposition and triplet attention. Sci. Rep. 2024, 14, 271.
Ablation results on the HRI and WHU Building datasets (all metrics in %; √ marks an enabled module):

Dataset | RDSC | MCCT | DCF | Precision | Recall | F1 | Kappa | IoU | OA
---|---|---|---|---|---|---|---|---|---
HRI Building | - | - | - | 91.14 | 91.05 | 91.09 | 82.19 | 83.68 | 91.31
HRI Building | √ | - | - | 92.38 | 92.48 | 92.42 | 84.85 | 85.94 | 92.60
HRI Building | √ | √ | - | 92.57 | 92.51 | 92.53 | 85.07 | 86.14 | 92.72
HRI Building | √ | √ | √ | 93.54 | 93.62 | 93.58 | 87.15 | 87.94 | 93.72
WHU Building | - | - | - | 96.41 | 96.85 | 96.62 | 93.25 | 93.69 | 98.65
WHU Building | √ | - | - | 97.24 | 96.79 | 97.02 | 94.04 | 94.31 | 98.82
WHU Building | √ | √ | - | 97.38 | 96.80 | 97.09 | 94.17 | 94.43 | 98.85
WHU Building | √ | √ | √ | 97.28 | 97.06 | 97.17 | 94.34 | 94.59 | 98.88
Comparison results on the HRI Building dataset:

Method | Precision/% | Recall/% | F1/% | Kappa/% | IoU/% | OA/%
---|---|---|---|---|---|---
FCN | 92.56 | 92.07 | 92.29 | 84.59 | 85.72 | 92.52
U-Net | 89.74 | 89.77 | 89.75 | 79.50 | 76.86 | 89.99
U-Net++ | 87.71 | 88.14 | 87.89 | 75.78 | 78.43 | 88.10
Swin-UNet | 80.54 | 79.92 | 80.17 | 60.37 | 67.06 | 80.82
ACC-UNet | 85.39 | 84.48 | 84.83 | 69.69 | 73.76 | 85.36
CSC-UNet | 89.97 | 89.42 | 89.66 | 79.32 | 81.30 | 89.97
UCTransNet | 92.49 | 92.53 | 92.51 | 85.02 | 86.09 | 92.69
DTA-UNet | 92.16 | 92.46 | 92.29 | 84.58 | 85.70 | 92.44
Baseline | 91.14 | 91.05 | 91.09 | 82.19 | 83.68 | 91.31
RDAU-Net (Ours) | 93.54 | 93.62 | 93.58 | 87.15 | 87.94 | 93.72
Comparison results on the WHU Building dataset:

Method | Precision/% | Recall/% | F1/% | Kappa/% | IoU/% | OA/%
---|---|---|---|---|---|---
FCN | 96.29 | 96.44 | 96.36 | 92.73 | 93.13 | 98.56
U-Net | 95.92 | 94.74 | 95.32 | 90.63 | 91.29 | 98.17
U-Net++ | 96.65 | 96.91 | 96.78 | 93.56 | 93.87 | 98.72
Swin-UNet | 95.76 | 95.22 | 95.49 | 90.98 | 91.58 | 98.23
ACC-UNet | 96.26 | 96.13 | 96.19 | 92.39 | 92.82 | 98.50
CSC-UNet | 96.59 | 97.06 | 96.82 | 93.64 | 93.95 | 98.74
UCTransNet | 96.90 | 97.24 | 97.16 | 94.14 | 94.40 | 98.84
DTA-UNet | 96.61 | 97.47 | 97.03 | 94.07 | 94.33 | 98.82
Baseline | 96.41 | 96.85 | 96.62 | 93.25 | 93.69 | 98.65
RDAU-Net (Ours) | 97.28 | 97.06 | 97.17 | 94.34 | 94.59 | 98.88
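The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, and for a single foreground class IoU relates to F1 by IoU = F1 / (2 - F1). The short sketch below checks this against the RDAU-Net row of the HRI table; the small residual differences (e.g., 87.93 vs. the reported 87.94 for IoU) come from the table entries being rounded to two decimals.

```python
def f1_from_pr(precision, recall):
    """Harmonic mean of precision and recall (inputs and output in percent)."""
    return 2 * precision * recall / (precision + recall)

def iou_from_f1(f1):
    """For a single foreground class, IoU = F1 / (2 - F1) on fractions;
    convert from and back to percent."""
    f = f1 / 100.0
    return 100.0 * f / (2.0 - f)

# RDAU-Net on the HRI Building dataset (Precision 93.54, Recall 93.62):
f1 = f1_from_pr(93.54, 93.62)   # ≈ 93.58, matching the reported F1
iou = iou_from_f1(f1)           # ≈ 87.93; the table reports 87.94 (rounding)
```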
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, Y.; Wang, D.; Xu, T.; Shi, Y.; Liang, W.; Wang, Y.; Petropoulos, G.P.; Bao, Y. RDAU-Net: A U-Shaped Semantic Segmentation Network for Buildings near Rivers and Lakes Based on a Fusion Approach. Remote Sens. 2025, 17, 2. https://doi.org/10.3390/rs17010002