TCU-Net: Transformer Embedded in Convolutional U-Shaped Network for Retinal Vessel Segmentation
Figure 1: Comparison of color fundus images and fovea-centred (yellow rectangle area) OCTA images: (a) color fundus; (b–d) superficial vascular complexes (SVC), deep vascular complexes (DVC), and the inner retina vascular plexus including both SVC and DVC (SVC+DVC); (e–h) their corresponding labels. The small vessels usually have low contrast.

Figure 2: (a) Illustration of the proposed TCU-Net; (b) the efficient cross-fusion transformer (ECT) module; (c) the efficient channel cross-attention (ECCA) module.

Figure 3: The encoder outputs are downsampled by interpolation to obtain the cross-scale queries Q′_i (i = 1, 2, 3, 4) and the shared K′ and V′.

Figure 4: Efficient multihead cross-attention.

Figure 5: Vessel segmentation results from different methods on different layers of ROSE-1 and ROSE-2. From left to right: en face angiograms (original images), manual annotations, and vessel segmentation results obtained by TransFuse, TransUnet, OCTA-Net, and the proposed method (TCU-Net), respectively.

Figure 6: Effect of size reduction and projection of efficient self-attention on the ROSE-1 dataset.

Figure 7: Effect of size reduction and projection of efficient self-attention on the ROSE-2 dataset.
Abstract
1. Introduction
- We propose a novel end-to-end OCTA retinal vessel segmentation method that embeds convolutional computations into a transformer for global feature extraction.
- An efficient cross-fusion transformer (ECT) module is designed to replace the original skip connections, enabling interaction between multiscale features and compensating for the loss of vessel information. The multihead cross-attention mechanism of the ECT module reduces computational complexity compared with the original multihead self-attention mechanism (a minimal sketch of this cross-attention follows this list).
- To reduce the semantic gap between the output of the ECT module and the decoder features, we introduce an efficient channel cross-attention (ECCA) module that fuses them and enhances the effective vessel information (a sketch appears under Section 3.3 below).
- Experimental evaluation on two OCTA retinal vessel segmentation datasets, ROSE-1 and ROSE-2, demonstrates the effectiveness of the proposed TCU-Net.
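To make the ECT contribution above concrete, the following is a minimal PyTorch sketch of multihead cross-attention with Linformer-style [31] key/value length reduction, which is one standard way to obtain the stated complexity savings over vanilla multihead self-attention [32]. It is an illustration under our own naming and sizing assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class EfficientCrossAttention(nn.Module):
    """Multihead cross-attention with Linformer-style reduction of the
    key/value token length. Illustrative only; dims and names are ours."""

    def __init__(self, dim, context_len, num_heads=4, reduced_len=64):
        super().__init__()
        assert dim % num_heads == 0
        self.h = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        # Learned projections compress the K/V token axis from context_len
        # to reduced_len, so attention costs O(Nq * reduced_len) instead of
        # O(Nq * context_len).
        self.reduce_k = nn.Linear(context_len, reduced_len)
        self.reduce_v = nn.Linear(context_len, reduced_len)
        self.out = nn.Linear(dim, dim)

    def forward(self, queries, context):
        # queries: (B, Nq, C) tokens from one encoder scale (Q'_i in Fig. 3)
        # context: (B, Nc, C) multiscale tokens shared by all scales (K', V')
        B, Nq, C = queries.shape
        q = self.to_q(queries)
        k = self.reduce_k(self.to_k(context).transpose(1, 2)).transpose(1, 2)
        v = self.reduce_v(self.to_v(context).transpose(1, 2)).transpose(1, 2)

        def heads(t):  # (B, N, C) -> (B, h, N, C // h)
            return t.reshape(B, -1, self.h, C // self.h).transpose(1, 2)

        q, k, v = heads(q), heads(k), heads(v)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, Nq, C)
        return self.out(y)


# Example: one encoder scale attending to a shared multiscale context.
x1 = torch.randn(2, 256, 64)    # 256 tokens from one scale
ctx = torch.randn(2, 1024, 64)  # concatenated tokens from all four scales
eca = EfficientCrossAttention(dim=64, context_len=1024)
print(eca(x1, ctx).shape)       # torch.Size([2, 256, 64])
```

In a TCU-Net-like setting, `queries` would come from the interpolation downsampling of one encoder output described in the Figure 3 caption, while `context` would gather tokens from all four scales, which is what allows the cross-scale feature interaction the contribution claims.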
2. Related Studies
2.1. Based on Convolutional Neural Networks
2.2. Based on Transformer Architecture
3. Proposed Method
3.1. Network Architecture
3.2. ECT: Efficient Cross-Fusion Transformer for Encoder Feature Transformation
3.3. ECCA: Efficient Channel Cross-Attention
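As described in the contributions, the ECCA module reweights and fuses the ECT output with the decoder feature to close their semantic gap. As a rough illustration of how such channel cross-attention can be realized, here is a sketch assuming squeeze-and-excitation-style gating driven by pooled descriptors of both features; the module name, fusion rule, and sizes are our assumptions, not the released code.

```python
import torch
import torch.nn as nn


class ChannelCrossAttention(nn.Module):
    """Channel gate driven jointly by the skip (ECT output) and decoder
    features; a plausible reading of ECCA, not the paper's implementation."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, skip, decoder):
        # skip, decoder: (B, C, H, W). Pooled channel descriptors from both
        # branches decide how strongly each skip channel is passed on.
        desc = torch.cat([self.pool(skip), self.pool(decoder)], dim=1)
        gate = torch.sigmoid(self.mlp(desc.flatten(1)))  # (B, C)
        return skip * gate[:, :, None, None] + decoder


skip = torch.randn(2, 32, 64, 64)
dec = torch.randn(2, 32, 64, 64)
print(ChannelCrossAttention(32)(skip, dec).shape)  # torch.Size([2, 32, 64, 64])
```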
4. Experimental Results
4.1. Datasets and Metrics
- Area under the ROC curve (AUC) [34];
- Sensitivity (SEN) = TP/(TP + FN);
- Specificity (SPE) = TN/(TN + FP);
- Accuracy (ACC) [35] = (TP + TN)/(TP + TN + FP + FN);
- Kappa score [36] = (accuracy − pe)/(1 − pe);
- pe [7] = ((TP + FN)(TP + FP) + (TN + FP)(TN + FN))/(TP + TN + FP + FN)²;
- False discovery rate (FDR) [37] = FP/(FP + TP);
- G-mean score [38] = √(sensitivity × specificity);
- Dice coefficient (Dice) [39] = 2 × TP/(FP + FN + 2 × TP).
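These confusion-matrix formulas translate directly into code. A minimal NumPy sketch (variable names are ours; AUC is omitted because it requires soft prediction scores rather than binary masks):

```python
import numpy as np


def segmentation_metrics(pred, gt):
    """Compute the listed metrics from binary prediction and ground-truth
    masks (0/1 arrays of the same shape; both classes assumed present)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    n = tp + tn + fp + fn

    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    acc = (tp + tn) / n
    # Chance agreement for the kappa score.
    pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n**2
    kappa = (acc - pe) / (1 - pe)
    fdr = fp / (fp + tp)
    gmean = np.sqrt(sen * spe)
    dice = 2 * tp / (fp + fn + 2 * tp)
    return dict(SEN=sen, SPE=spe, ACC=acc, Kappa=kappa,
                FDR=fdr, G_mean=gmean, Dice=dice)
```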
4.2. Implementation Details
4.3. Performance Comparison and Analysis
4.4. Ablation Studies
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Zhang, C.; Wang, S.; Li, M.; Wu, Y. Association between atherosclerosis and diabetic retinopathy in Chinese patients with type 2 diabetes mellitus. Diabetes Metab. Syndr. Obes. Targets Ther. 2020, 13, 1911.
2. Cao, L.; Li, H.; Zhang, Y.; Zhang, L.; Xu, L. Hierarchical method for cataract grading based on retinal images using improved Haar wavelet. Inf. Fusion 2020, 53, 196–208.
3. Drew, V.J.; Tseng, C.L.; Seghatchian, J.; Burnouf, T. Reflections on dry eye syndrome treatment: Therapeutic role of blood products. Front. Med. 2018, 5, 33.
4. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.S.; Cha, J. Multiclass skin lesion classification using hybrid deep features selection and extreme learning machine. Sensors 2022, 22, 799.
5. Lee, W.D.; Devarajan, K.; Chua, J.; Schmetterer, L.; Mehta, J.S.; Ang, M. Optical coherence tomography angiography for the anterior segment. Eye Vis. 2019, 6, 4.
6. De Carlo, T.E.; Romano, A.; Waheed, N.K.; Duker, J.S. A review of optical coherence tomography angiography (OCTA). Int. J. Retin. Vitr. 2015, 1, 1–15.
7. Ma, Y.; Hao, H.; Xie, J.; Fu, H.; Zhang, J.; Yang, J.; Wang, Z.; Liu, J.; Zheng, Y.; Zhao, Y. ROSE: A retinal OCT-angiography vessel segmentation dataset and new model. IEEE Trans. Med. Imaging 2020, 40, 928–939.
8. Yin, P.; Cai, H.; Wu, Q. DF-Net: Deep fusion network for multi-source vessel segmentation. Inf. Fusion 2022, 78, 199–208.
9. Gao, Z.; Pan, X.; Shao, J.; Jiang, X.; Su, Z.; Jin, K.; Ye, J. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br. J. Ophthalmol. 2022, 28.
10. Jin, K.; Huang, X.; Zhou, J.; Li, Y.; Yan, Y.; Sun, Y.; Zhang, Q.; Wang, Y.; Ye, J. FIVES: A fundus image dataset for artificial intelligence based vessel segmentation. Sci. Data 2022, 9, 475.
11. Song, X.; Tong, W.; Lei, C.; Huang, J.; Fan, X.; Zhai, G.; Zhou, H. A clinical decision model based on machine learning for ptosis. BMC Ophthalmol. 2021, 21, 169.
12. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-Net and its variants for medical image segmentation: A review of theory and applications. IEEE Access 2021, 9, 82031–82057.
13. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
14. Wang, H.; Cao, P.; Wang, J.; Zaiane, O.R. UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. arXiv 2021, arXiv:2109.04335.
15. Pissas, T.; Bloch, E.; Cardoso, M.J.; Flores, B.; Georgiadis, O.; Jalali, S.; Ravasio, C.; Stoyanov, D.; Da Cruz, L.; Bergeles, C. Deep iterative vessel segmentation in OCT angiography. Biomed. Opt. Express 2020, 11, 2490–2510.
16. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 3146–3154.
17. Sinha, A.; Dolz, J. Multi-scale self-guided attention for medical image segmentation. IEEE J. Biomed. Health Inform. 2020, 25, 121–130.
18. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
19. Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted Res-UNet for high-quality retina vessel segmentation. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; pp. 327–331.
20. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753.
21. Guo, C.; Szemenyei, M.; Yi, Y.; Wang, W.; Chen, B.; Fan, C. SA-UNet: Spatial attention U-Net for retinal vessel segmentation. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 1236–1242.
22. Zhang, J.; Zhang, Y.; Xu, X. Pyramid U-Net for retinal vessel segmentation. In Proceedings of the ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 1125–1129.
23. Li, M.; Zhang, W.; Chen, Q. Image magnification network for vessel segmentation in OCTA images. In Proceedings of the Pattern Recognition and Computer Vision: 5th Chinese Conference, PRCV 2022, Shenzhen, China, 4–7 November 2022; Proceedings, Part IV; pp. 426–435.
24. Xu, X.; Yang, P.; Wang, H.; Xiao, Z.; Xing, G.; Zhang, X.; Wang, W.; Xu, F.; Zhang, J.; Lei, J. AV-casNet: Fully automatic arteriole-venule segmentation and differentiation in OCT angiography. IEEE Trans. Med. Imaging 2022, 42, 22593541.
25. Wu, Z.; Wang, Z.; Zou, W.; Ji, F.; Dang, H.; Zhou, W.; Sun, M. PAENet: A progressive attention-enhanced network for 3D to 2D retinal vessel segmentation. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021; pp. 1579–1584.
26. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
27. Zhang, Y.; Liu, H.; Hu, Q. TransFuse: Fusing transformers and CNNs for medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; pp. 14–24.
28. Chen, B.; Liu, Y.; Zhang, Z.; Lu, G.; Zhang, D. TransAttUnet: Multi-level attention-guided U-Net with transformer for medical image segmentation. arXiv 2021, arXiv:2107.05274.
29. Tan, X.; Chen, X.; Meng, Q.; Shi, F.; Xiang, D.; Chen, Z.; Pan, L.; Zhu, W. OCT2Former: A retinal OCT-angiography vessel segmentation transformer. Comput. Methods Programs Biomed. 2023, 233, 107454.
30. Gao, Y.; Zhou, M.; Metaxas, D.N. UTNet: A hybrid transformer architecture for medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; pp. 61–71.
31. Wang, S.; Li, B.Z.; Khabsa, M.; Fang, H.; Ma, H. Linformer: Self-attention with linear complexity. arXiv 2020, arXiv:2006.04768.
32. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
33. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929.
34. Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159.
35. Diebold, F.X.; Mariano, R.S. Comparing predictive accuracy. J. Bus. Econ. Stat. 2002, 20, 134–144.
36. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Medica 2012, 22, 276–282.
37. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 1995, 57, 289–300.
38. Ri, J.H.; Tian, G.; Liu, Y.; Xu, W.H.; Lou, J.G. Extreme learning machine with hybrid cost function of G-mean and probability for imbalance learning. Int. J. Mach. Learn. Cybern. 2020, 11, 2007–2020.
39. Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302.
40. Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context encoder network for 2D medical image segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292.
41. Mou, L.; Zhao, Y.; Chen, L.; Cheng, J.; Gu, Z.; Hao, H.; Qi, H.; Zheng, Y.; Frangi, A.; Liu, J. CS-Net: Channel and spatial attention network for curvilinear structure segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 721–730.
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
U-Net [18] | 94.10 ± 0.13 | 91.38 ± 0.18 | 82.52 ± 0.48 | 72.02 ± 0.20 | 77.48 ± 0.32 | 21.55 ± 1.62 |
ResU-Net [20] | 94.57 ± 0.09 | 91.73 ± 0.12 | 84.26 ± 0.59 | 72.97 ± 0.32 | 77.05 ± 0.30 | 19.88 ± 1.58 |
CE-Net [40] | 94.90 ± 0.07 | 91.63 ± 0.19 | 84.08 ± 0.49 | 71.71 ± 0.34 | 76.81 ± 0.24 | 19.57 ± 1.61 |
CS-Net [41] | 95.07 ± 0.05 | 92.29 ± 0.07 | 83.41 ± 0.53 | 73.16 ± 0.22 | 77.78 ± 0.23 | 14.60 ± 1.14 |
OCTA-Net [7] | 94.83 ± 0.12 | 92.09 ± 0.34 | 82.57 ± 1.54 | 72.24 ± 0.64 | 76.93 ± 0.59 | 14.17 ± 3.23 |
TransFuse [27] | 92.50 ± 0.98 | 90.63 ± 0.50 | 83.09 ± 0.60 | 66.24 ± 0.28 | 72.64 ± 0.36 | 28.19 ± 3.20 |
TransUnet [26] | 94.50 ± 0.11 | 92.21 ± 0.10 | 82.79 ± 0.48 | 72.18 ± 0.28 | 76.76 ± 0.26 | 12.34 ± 1.20 |
Ours | 95.12 ± 0.05 | 92.30 ± 0.06 | 84.73 ± 0.85 | 73.29 ± 0.31 | 77.91 ± 0.37 | 12.25 ± 2.11 |
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
U-Net [18] | 95.33 ± 0.33 | 96.90 ± 1.23 | 84.87 ± 2.38 | 59.80 ± 3.70 | 60.70 ± 3.30 | 51.96 ± 4.20 |
ResU-Net [20] | 96.65 ± 0.29 | 98.86 ± 0.12 | 89.76 ± 2.59 | 65.55 ± 4.32 | 61.12 ± 3.30 | 39.16 ± 4.58 |
CE-Net [40] | 96.37 ± 0.39 | 98.08 ± 0.32 | 90.15 ± 2.72 | 64.30 ± 4.86 | 63.21 ± 3.18 | 51.81 ± 4.86 |
CS-Net [41] | 96.65 ± 0.31 | 98.20 ± 1.09 | 89.30 ± 2.93 | 66.09 ± 4.26 | 66.18 ± 3.05 | 46.86 ± 4.17 |
OCTA-Net [7] | 96.82 ± 0.56 | 98.29 ± 0.75 | 90.12 ± 2.63 | 64.07 ± 2.82 | 64.82 ± 2.64 | 46.71 ± 3.45 |
TransFuse [27] | 94.95 ± 0.50 | 98.55 ± 1.27 | 85.89 ± 2.83 | 60.16 ± 3.88 | 60.86 ± 3.16 | 46.55 ± 3.14 |
TransUnet [26] | 96.69 ± 0.05 | 99.01 ± 0.27 | 87.77 ± 2.60 | 68.96 ± 4.95 | 67.39 ± 3.26 | 33.21 ± 3.20 |
Ours | 98.23 ± 0.13 | 99.12 ± 0.20 | 90.23 ± 2.17 | 69.39 ± 2.96 | 69.87 ± 2.16 | 28.98 ± 3.11 |
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
U-Net [18] | 90.17 ± 0.96 | 89.30 ± 0.30 | 77.58 ± 0.98 | 63.61 ± 0.12 | 69.77 ± 0.23 | 24.37 ± 1.71 |
ResU-Net [20] | 91.14 ± 0.61 | 89.82 ± 0.38 | 77.84 ± 0.25 | 64.10 ± 0.25 | 70.12 ± 0.54 | 20.66 ± 3.52 |
CE-Net [40] | 90.21 ± 0.04 | 89.63 ± 0.24 | 77.45 ± 0.91 | 63.44 ± 0.24 | 69.58 ± 0.27 | 21.08 ± 2.62 |
CS-Net [41] | 91.49 ± 0.02 | 90.16 ± 0.06 | 77.47 ± 0.89 | 64.77 ± 0.44 | 70.52 ± 0.52 | 17.89 ± 1.59 |
OCTA-Net [7] | 91.44 ± 0.05 | 90.12 ± 0.15 | 76.84 ± 0.99 | 64.31 ± 0.35 | 70.02 ± 0.47 | 17.14 ± 2.43 |
TransFuse [27] | 89.86 ± 0.51 | 89.54 ± 0.26 | 76.51 ± 0.84 | 63.92 ± 0.78 | 68.96 ± 0.36 | 27.30 ± 3.51 |
TransUnet [26] | 91.05 ± 0.05 | 90.15 ± 0.27 | 77.22 ± 0.60 | 64.56 ± 0.45 | 70.83 ± 0.67 | 16.93 ± 3.20 |
Ours | 91.70 ± 0.06 | 90.42 ± 0.16 | 77.98 ± 1.10 | 64.87 ± 0.38 | 71.20 ± 0.50 | 15.67 ± 2.56 |
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
U-Net [18] | 85.03 ± 0.57 | 94.16 ± 0.19 | 79.39 ± 1.26 | 64.11 ± 0.50 | 67.33 ± 0.56 | 28.65 ± 1.90 |
ResU-Net [20] | 86.08 ± 0.61 | 94.26 ± 0.88 | 80.12 ± 0.25 | 65.56 ± 0.25 | 68.75 ± 0.54 | 27.53 ± 1.52 |
CE-Net [40] | 85.13 ± 0.06 | 94.03 ± 0.05 | 80.70 ± 0.29 | 65.71 ± 0.19 | 69.04 ± 0.19 | 27.76 ± 0.58 |
CS-Net [41] | 85.98 ± 0.06 | 94.39 ± 0.20 | 78.20 ± 1.56 | 63.96 ± 0.61 | 67.02 ± 0.74 | 26.64 ± 2.02 |
OCTA-Net [7] | 86.05 ± 0.04 | 94.44 ± 0.15 | 78.91 ± 0.74 | 64.92 ± 0.14 | 67.96 ± 0.16 | 26.04 ± 0.14 |
TransFuse [27] | 84.01 ± 0.49 | 89.83 ± 0.26 | 79.94 ± 1.24 | 60.16 ± 0.68 | 66.03 ± 0.48 | 38.96 ± 2.84 |
TransUnet [26] | 85.78 ± 0.05 | 94.24 ± 0.27 | 79.91 ± 0.60 | 63.97 ± 0.95 | 68.14 ± 0.26 | 27.77 ± 1.20 |
Ours | 86.23 ± 0.05 | 94.54 ± 0.24 | 81.26 ± 0.62 | 64.97 ± 0.21 | 68.40 ± 0.28 | 25.26 ± 1.24 |
Methods | Param (M) | FLOPs (G) |
---|---|---|
U-Net [18] | 34.5 | 184.6 |
ResU-Net [20] | 12.0 | 22.1 |
CE-Net [40] | 29.0 | 25.7 |
CS-Net [41] | 33.6 | 157.2 |
OCTA-Net [7] | 217.7 | 345.0 |
TransFuse [27] | 300.16 | 420.6 |
TransUnet [26] | 334.18 | 483.4 |
Ours | 14.1 | 80.6 |
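Parameter and FLOP counts such as those in the table above are usually obtained with an off-the-shelf profiler. Below is a sketch using the thop package, which is an assumption on our part (the paper does not state its measurement tooling); note that thop reports multiply-accumulate counts, which papers commonly quote as FLOPs.

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop; assumed tooling, not stated in the paper

# Stand-in network so the snippet runs; replace with the actual TCU-Net instance.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 1))

x = torch.randn(1, 3, 304, 304)  # ROSE-1 en face angiograms are 304 x 304
macs, params = profile(model, inputs=(x,))
print(f"Params: {params / 1e6:.2f} M, FLOPs: {macs / 1e9:.2f} G")
```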
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
Baseline (U-Net) | 94.10 | 91.38 | 82.52 | 72.02 | 77.48 | 21.55 |
Baseline+ECT | 95.10 | 92.38 | 84.21 | 73.77 | 78.35 | 15.37 |
Baseline+ECCA | 95.01 | 92.36 | 83.23 | 73.25 | 77.79 | 13.91 |
Baseline+ECT+ECCA | 95.11 | 92.39 | 84.73 | 73.78 | 78.45 | 12.25 |
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
Baseline (U-Net) | 95.33 | 96.90 | 84.87 | 59.80 | 60.70 | 51.96 |
Baseline+ECT | 98.29 | 99.02 | 91.84 | 64.83 | 65.44 | 43.61 |
Baseline+ECCA | 98.00 | 98.63 | 90.37 | 69.93 | 70.41 | 36.67 |
Baseline+ECT+ECCA | 98.40 | 99.18 | 90.03 | 71.40 | 71.80 | 26.51 |
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
Baseline (U-Net) | 90.17 | 89.30 | 77.58 | 63.61 | 69.77 | 24.37 |
Baseline+ECT | 91.39 | 90.16 | 78.15 | 65.17 | 70.97 | 18.93 |
Baseline+ECCA | 91.60 | 90.19 | 77.61 | 64.91 | 70.64 | 17.97 |
Baseline+ECT+ECCA | 91.76 | 90.31 | 79.21 | 65.52 | 71.46 | 17.34 |
Methods | AUC (%) | ACC (%) | G-Mean (%) | Kappa (%) | Dice (%) | FDR (%) |
---|---|---|---|---|---|---|
Baseline (U-Net) | 85.03 | 94.16 | 79.39 | 64.11 | 67.33 | 28.65 |
Baseline+ECT | 86.20 | 94.38 | 79.09 | 64.72 | 67.78 | 26.65 |
Baseline+ECCA | 86.19 | 94.21 | 78.93 | 64.93 | 67.99 | 28.29 |
Baseline+ECT+ECCA | 86.29 | 94.43 | 80.10 | 64.97 | 68.15 | 26.51 |