Search Results (139)

Search Parameters:
Keywords = retinal vessel segmentation

23 pages, 7241 KiB  
Article
A Novel Ensemble Meta-Model for Enhanced Retinal Blood Vessel Segmentation Using Deep Learning Architectures
by Mohamed Chetoui and Moulay A. Akhloufi
Biomedicines 2025, 13(1), 141; https://doi.org/10.3390/biomedicines13010141 - 9 Jan 2025
Viewed by 487
Abstract
Background: Retinal blood vessel segmentation plays an important role in diagnosing retinal diseases such as diabetic retinopathy, glaucoma, and hypertensive retinopathy. Accurate segmentation of blood vessels in retinal images is challenging due to noise, low contrast, and the complex morphology of blood vessel structures. Methods: In this study, we propose a novel ensemble learning framework combining four deep learning architectures: U-Net, ResNet50, U-Net with a ResNet50 backbone, and U-Net with a transformer block. Each architecture is customized to enhance feature extraction and segmentation performance. The models are trained on the DRIVE and STARE datasets to improve generalization and evaluated using the performance metrics accuracy, F1-Score, sensitivity, specificity, and AUC. Results: The ensemble meta-model integrates predictions from these architectures using a stacking approach, achieving state-of-the-art performance with an accuracy of 0.9778, an AUC of 0.9912, and an F1-Score of 0.8231. These results demonstrate the effectiveness of the proposed technique in identifying thin retinal blood vessels. Conclusions: A comparative analysis of qualitative and quantitative results against the individual models highlights the robustness of the ensemble framework, especially under conditions of noise and poor visibility.
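The stacking step lends itself to a short sketch. Below is a minimal, hypothetical PyTorch realization in which a 1×1-convolution fusion head learns a per-pixel combination of the stacked sigmoid outputs of the four base models; the paper's exact meta-architecture is not specified in this listing, so the layer shapes are assumptions.

```python
# Minimal sketch of a stacking meta-model for vessel segmentation.
# Assumption: a 1x1-conv fusion head over stacked base-model probability
# maps; channel sizes are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class StackingMetaModel(nn.Module):
    def __init__(self, n_base_models: int = 4):
        super().__init__()
        # Learn a per-pixel weighting of the base models' probability maps.
        self.fuse = nn.Sequential(
            nn.Conv2d(n_base_models, 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 1, kernel_size=1),
        )

    def forward(self, base_probs: torch.Tensor) -> torch.Tensor:
        # base_probs: (B, n_base_models, H, W), one channel per base model.
        return torch.sigmoid(self.fuse(base_probs))

# Usage: stack the sigmoid outputs of U-Net, ResNet50, U-Net+ResNet50, and
# CTU-Net along the channel axis and train with BCE against the ground truth.
preds = torch.rand(2, 4, 512, 512)   # placeholder base-model outputs
meta = StackingMetaModel(n_base_models=4)
mask = meta(preds)                   # (2, 1, 512, 512) fused probability map
```

Freezing the base models and fitting only the fusion head with BCE is one common way to train such a stacking layer.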
Show Figures

Figure 1: Proposed methodology for retinal blood vessel segmentation.
Figure 2: U-Net architecture.
Figure 3: The modified ResNet50.
Figure 4: Diagram of U-Net with ResNet50 backbone.
Figure 5: Diagram of the proposed CTU-Net.
Figure 6: Example of images with corresponding masks from the DRIVE dataset [12].
Figure 7: Example of images with corresponding masks from the STARE dataset [13].
Figure 8: Performance metrics for U-Net training and validation. Each subfigure illustrates a key metric tracked during training: (a) BCE loss, (b) accuracy, (c) AUC, and (d) F1-Score.
Figure 9: Performance metrics for modified ResNet50 training and validation. Each subfigure illustrates a key metric tracked during training: (a) BCE loss, (b) accuracy, (c) AUC, and (d) F1-Score.
Figure 10: Performance metrics for CTU-Net training and validation. Each subfigure illustrates a key metric tracked during training: (a) BCE loss, (b) accuracy, (c) AUC, and (d) F1-Score.
Figure 11: Performance metrics for U-Net with the ResNet50 backbone during training and validation. Each subfigure illustrates a key metric tracked during training: (a) BCE loss, (b) accuracy, (c) AUC, and (d) F1-Score.
Figure 12: A diagram of the meta-model for stacking-based blood vessel segmentation. The meta-model combines predictions from four base models (U-Net, ResNet50, U-Net with a ResNet50 backbone, and CTU-Net with a transformer block) to produce a final segmentation mask.
Figure 13: Meta-model training BCE loss over 500 epochs with the best value annotated.
Figure 14: Meta-model training accuracy over 500 epochs with the best value annotated.
Figure 15: Meta-model training F1-Score over 500 epochs with the best value annotated.
Figure 16: Meta-model predictions for retinal blood vessel segmentation on the DRIVE and STARE datasets. (a) Original image, (b) ground truth, (c) meta-model prediction.
Figure 17: Individual model predictions for retinal blood vessel segmentation on the DRIVE and STARE datasets. (a) Original image, (b) U-Net, (c) ResNet50, (d) CTU-Net, (e) U-Net with ResNet50 backbone.
15 pages, 11124 KiB  
Article
Intraoperative Augmented Reality for Vitreoretinal Surgery Using Edge Computing
by Run Zhou Ye and Raymond Iezzi
J. Pers. Med. 2025, 15(1), 20; https://doi.org/10.3390/jpm15010020 - 6 Jan 2025
Viewed by 568
Abstract
Purpose: Augmented reality (AR) may allow vitreoretinal surgeons to leverage microscope-integrated digital imaging systems to analyze and highlight key retinal anatomic features in real time, possibly improving safety and precision during surgery. By employing convolutional neural networks (CNNs) for retina vessel segmentation, a retinal coordinate system can be created that allows pre-operative images of capillary non-perfusion or retinal breaks to be digitally aligned and overlaid upon the surgical field in real time. Such technology may be useful in assuring thorough laser treatment of capillary non-perfusion or in using pre-operative optical coherence tomography (OCT) to guide macular surgery when microscope-integrated OCT (MIOCT) is not available. Methods: This study is a retrospective analysis involving the development and testing of a novel image-registration algorithm for vitreoretinal surgery. Fifteen anonymized cases of pars plana vitrectomy with epiretinal membrane peeling, along with corresponding preoperative fundus photographs and optical coherence tomography (OCT) images, were retrospectively collected from the Mayo Clinic database. We developed a TPU (Tensor-Processing Unit)-accelerated CNN for semantic segmentation of retinal vessels from fundus photographs and subsequent real-time image registration in surgical video streams. An iterative patch-wise cross-correlation (IPCC) algorithm was developed for image registration, with a focus on optimizing processing speeds and maintaining high spatial accuracy. The primary outcomes measured were processing speed in frames per second (FPS) and the spatial accuracy of image registration, quantified by the Dice coefficient between registered and manually aligned images. Results: When deployed on an Edge TPU, the CNN model combined with our image-registration algorithm processed video streams at a rate of 14 FPS, which is superior to processing rates achieved on other standard hardware configurations. The IPCC algorithm efficiently aligned pre-operative and intraoperative images, showing high accuracy in comparison to manual registration. Conclusions: This study demonstrates the feasibility of using TPU-accelerated CNNs for enhanced AR in vitreoretinal surgery.
(This article belongs to the Section Methodology, Drug and Device Discovery)
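A hedged sketch of one IPCC iteration follows, using OpenCV template matching as the normalized cross-correlation step. The patch grid size, confidence threshold, and similarity-transform model are illustrative assumptions rather than the authors' implementation.

```python
# One IPCC round (sketch): match n x n patches of img_a inside img_b, then
# fit a rotation/scale/translation transform from the confident matches.
# Assumption: img_a and img_b are same-dtype grayscale arrays (e.g., uint8).
import cv2
import numpy as np

def ipcc_iteration(img_a, img_b, n=4, min_score=0.5):
    h, w = img_a.shape
    ph, pw = h // n, w // n
    src, dst = [], []
    for i in range(n):
        for j in range(n):
            patch = img_a[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            # Normalized cross-correlation of the patch against the target.
            res = cv2.matchTemplate(img_b, patch, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > min_score:                       # keep confident matches
                src.append([j*pw + pw/2, i*ph + ph/2])  # patch center in A
                dst.append([loc[0] + pw/2, loc[1] + ph/2])  # match center in B
    if len(src) < 3:
        return None                                     # too few reliable patches
    M, _ = cv2.estimateAffinePartial2D(np.float32(src), np.float32(dst))
    return M  # warp with cv2.warpAffine(img_a, M, (w, h)) and repeat for K rounds
```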
Show Figures

Figure 1: General pipeline for semantic segmentation with a TPU-accelerated CNN and real-time image registration. Initially, a float16 convolutional neural network (CNN) was trained for semantic segmentation of retinal vessels from color photographs (A). This CNN was then quantized to eight bits (int8) and adapted for the Edge TPU to perform real-time vessel segmentation in surgical videos (B). The iterative patch-wise cross-correlation (IPCC) algorithm, operating on the CPU, utilized these segmentations to create a transformation matrix (C), which was then applied to align pre-operative images with the surgical video stream in real time (D).
Figure 2: Algorithm design for iterative patch-wise cross-correlation. Image A is divided into n × n patches and overlaid onto Image B. Cross-correlation is performed between each patch of Image A and Image B. The patches with the highest correlation coefficients are used to compute rotation, scaling, and translation matrices for Image A, aligning it with Image B. This alignment process involves iterative adjustments to the transformation matrices, refining the overlay of Image A onto Image B through successive rounds of cross-correlation. The final transformation matrix, obtained after K iterations, precisely registers the pre-operative image (Image A) onto the intraoperative frame (Image B).
Figure 3: Pseudocode for the iterative patch-wise cross-correlation algorithm.
Figure 4: Retina image segmentation using the unquantized and quantized neural networks. Images from the CHASE_DB1 and STARE datasets with corresponding ground-truth vessel segmentation and the vessel segmentation predicted by the unquantized (A) and quantized (B) models.
Figure 5: Frames from surgical recordings processed by the CNN on the Edge TPU and the corresponding predicted vessel-segmentation maps.
Figure 6: Iterative registration of the pre-operative retina-thickness map to the intra-operative surgical frame (A). The stabilization of the transformation matrix is shown over multiple iterations of the Iterative Patch-wise Cross-Correlation (IPCC) algorithm. Panel (B) displays the initial alignment after the first iteration (k = 1), where the pre-operative map shows significant misalignment with the intra-operative map. Panels (C–E) show the progressive alignment after two, three, and four iterations, respectively, with Panels (E,F) showing minimal adjustments and optimal registration achieved by the third iteration.
Figure 7: Integration of various pre-operative diagnostic imaging modalities into the intra-operative surgical video stream in real time using the proposed retinal vessel segmentation and registration pipeline. Panels (A–D) represent different types of pre-operative imaging data before registration: (A) microperimetry images, (B) Spectralis multi-spectral fundus images, (C) retina thickness maps, and (D) cross-sectional optical coherence tomography (OCT) images. Panels (E,F) show the original surgical frame and the vessel segmentation result from the quantized U-Net model, respectively. Panels (G–J) display the corresponding intra-operative surgical frames with the registered overlays: microperimetry images (G), Spectralis multi-spectral fundus images (H), retina thickness maps (I), and cross-sectional OCT images (J). The overlays maintain accurate alignment even under conditions such as partial occlusion of retinal vessels by surgical instruments, ensuring that surgeons can access critical diagnostic information directly within the operative view.
19 pages, 4339 KiB  
Article
VDMNet: A Deep Learning Framework with Vessel Dynamic Convolution and Multi-Scale Fusion for Retinal Vessel Segmentation
by Guiwen Xu, Tao Hu and Qinghua Zhang
Bioengineering 2024, 11(12), 1190; https://doi.org/10.3390/bioengineering11121190 - 25 Nov 2024
Viewed by 765
Abstract
Retinal vessel segmentation is crucial for diagnosing and monitoring ophthalmic and systemic diseases. Optical Coherence Tomography Angiography (OCTA) enables detailed imaging of the retinal microvasculature, but existing methods for OCTA segmentation face significant limitations, such as susceptibility to noise, difficulty in handling class imbalance, and challenges in accurately segmenting complex vascular morphologies. In this study, we propose VDMNet, a novel segmentation network designed to overcome these challenges by integrating several advanced components. Firstly, we introduce the Fast Multi-Head Self-Attention (FastMHSA) module to effectively capture both global and local features, enhancing the network’s robustness against complex backgrounds and pathological interference. Secondly, the Vessel Dynamic Convolution (VDConv) module is designed to dynamically adapt to curved and crossing vessels, thereby improving the segmentation of complex morphologies. Furthermore, we employ the Multi-Scale Fusion (MSF) mechanism to aggregate features across multiple scales, enhancing the detection of fine vessels while maintaining vascular continuity. Finally, we propose Weighted Asymmetric Focal Tversky Loss (WAFT Loss) to address class imbalance issues, focusing on the accurate segmentation of small and difficult-to-detect vessels. The proposed framework was evaluated on the publicly available ROSE-1 and OCTA-3M datasets. Experimental results demonstrated that our model effectively preserved the edge information of tiny vessels and achieved state-of-the-art performance in retinal vessel segmentation across several evaluation metrics. These improvements highlight VDMNet’s superior ability to capture both fine vascular details and overall vessel connectivity, making it a robust solution for retinal vessel segmentation.
(This article belongs to the Section Biosignal Processing)
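As a rough reference for the loss family WAFT Loss builds on, here is a minimal focal Tversky loss in PyTorch; the authors' weighting and asymmetry scheme is not reproduced, and the alpha/beta/gamma values below are illustrative defaults.

```python
# Minimal focal Tversky loss sketch. Assumption: WAFT Loss extends the
# standard focal Tversky formulation; parameters here are illustrative.
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """pred, target: (B, 1, H, W) probabilities and binary masks."""
    tp = (pred * target).sum(dim=(1, 2, 3))
    fp = (pred * (1 - target)).sum(dim=(1, 2, 3))
    fn = ((1 - pred) * target).sum(dim=(1, 2, 3))
    # alpha > beta penalizes false negatives harder, favoring thin vessels.
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()
```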
Show Figures

Graphical abstract

Figure 1: The architecture of VDMNet, composed of an encoder, a decoder, and skip connections.
Figure 2: The proposed Fast Multi-Head Self-Attention mechanism: (a) encoder variant, (b) decoder variant. They share similar concepts, but (b) takes two inputs: the high-resolution features from the encoder skip connections and the low-resolution features from the decoder.
Figure 3: Multi-Scale Fusion Module.
Figure 4: Retinal vessel segmentation results of the proposed VDMNet and other segmentation networks. From top to bottom, the OCTA images in rows 1 and 3 come from ROSE-1, and those in rows 5 and 7 from OCTA-3M. Rows 2, 4, 6, and 8 show the corresponding locally zoomed-in OCTA images, together with the ground truth and segmentation results.
19 pages, 4495 KiB  
Article
Transformer-Enhanced Retinal Vessel Segmentation for Diabetic Retinopathy Detection Using Attention Mechanisms and Multi-Scale Fusion
by Hyung-Joo Kim, Hassan Eesaar and Kil To Chong
Appl. Sci. 2024, 14(22), 10658; https://doi.org/10.3390/app142210658 - 18 Nov 2024
Cited by 3 | Viewed by 968
Abstract
Eye health has become a significant concern in recent years, given the rising prevalence of visual impairment resulting from various eye disorders and related factors. Global surveys suggest that approximately 2.2 billion individuals are visually impaired, with at least 1 billion affected by treatable diseases or ailments. Early detection, treatment, and screening for fundus diseases are crucial in addressing these challenges. In this study, we propose a novel segmentation model for retinal vascular delineation aimed at diagnosing diabetic retinopathy. The model integrates CBAM (Convolutional Block Attention Module, combining channel and spatial attention) for enhanced feature representation, JPU (Joint Pyramid Upsampling) for multi-scale feature fusion, and transformer blocks for contextual understanding. Leveraging deep-learning techniques, our proposed model outperforms existing approaches in retinal vascular segmentation, achieving a Mean IoU of 0.8047, a Recall of 0.7254, a Precision of 0.8492, an F1-Score of 0.7824, and a Specificity of 0.9892 on the CHASE_DB1 dataset. Extensive evaluations on benchmark datasets demonstrate its efficacy, highlighting its potential for automated diabetic retinopathy screening.
(This article belongs to the Section Computing and Artificial Intelligence)
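For orientation, a compact CBAM-style block (channel attention followed by spatial attention, after Woo et al., 2018) is sketched below; the reduction ratio and kernel size are common defaults, not values taken from this paper.

```python
# Compact CBAM sketch: channel attention from pooled descriptors, then a
# 7x7 spatial attention map. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise average and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```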
Show Figures

Figure 1: Overall flow.
Figure 2: Overall framework of the retinal vessel segmentation model with individual module architectures.
Figure 3: Qualitative analysis of the proposed network and its comparison with existing segmentation baseline models.
Figure 4: Qualitative analysis of the proposed network and its comparison with existing segmentation baseline models on the ISO-STAR, HRF, and LES-AV datasets.
Figure 5: Comparative analysis of the proposed model with recently published retina segmentation models. The bar graph illustrates the specificity of various state-of-the-art models, MCDAU-Net (2023) [40], ResDO-UNet (2022) [41], ARSA-UNet (2024) [43], SMP-Net (2024) [44], Gabor-Net (2024) [45], SGAT-Net (2023) [51], and TUnet-LBF (2023) [52], across the DRIVE, CHASE_DB1, and STARE datasets. Our proposed model consistently achieves the highest specificity across all datasets, highlighting its superior ability to correctly identify non-vessel regions, thereby minimizing false positives and enhancing diagnostic accuracy.
16 pages, 2489 KiB  
Article
A Method for Retina Segmentation by Means of U-Net Network
by Antonella Santone, Rosamaria De Vivo, Laura Recchia, Mario Cesarelli and Francesco Mercaldo
Electronics 2024, 13(22), 4340; https://doi.org/10.3390/electronics13224340 - 5 Nov 2024
Cited by 1 | Viewed by 801
Abstract
Retinal image segmentation plays a critical role in diagnosing and monitoring ophthalmic diseases such as diabetic retinopathy and age-related macular degeneration. We propose a deep learning-based approach utilizing the U-Net network for the accurate and efficient segmentation of retinal images. U-Net, a convolutional neural network widely used for its performance in medical image segmentation, is employed to segment key retinal structures, including the optic disc and blood vessels. We evaluate the proposed model on a publicly available retinal image dataset, demonstrating promising performance in automatic retina segmentation and thus the effectiveness of the proposed method. Our proposal provides a promising method for automated retinal image analysis, aiding in early disease detection and personalized treatment planning.
(This article belongs to the Special Issue New Trends in Computer Vision and Image Processing)
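A toy two-level U-Net sketch in PyTorch showing the encoder-decoder-skip-connection pattern this paper relies on; the real model would be deeper, and all channel counts here are illustrative assumptions.

```python
# Minimal U-Net sketch. Assumption: a standard encoder-decoder with skip
# connections; depth and channel counts are illustrative, not the authors'.
import torch
import torch.nn as nn

def double_conv(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = double_conv(3, 32), double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)       # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # skip-connection source
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))   # per-pixel structure probability
```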
Show Figures

Figure 1: Automatic retina segmentation workflow.
Figure 2: A comparison of the outputs of the proposed U-Net model trained for retina segmentation at different training epochs (1, 25, 50, 75, and 100). The left image is the original image, the middle image the original mask, and the right image the mask predicted by the U-Net model.
Figure 3: The trends of the training loss and validation loss for the proposed model.
Figure 4: Several examples of segmentation provided by the proposed method.
12 pages, 6506 KiB  
Review
Anterior Segment Optical Coherence Tomography Angiography: A Review of Applications for the Cornea and Ocular Surface
by Brian Juin Hsien Lee, Kai Yuan Tey, Ezekiel Ze Ken Cheong, Qiu Ying Wong, Chloe Si Qi Chua and Marcus Ang
Medicina 2024, 60(10), 1597; https://doi.org/10.3390/medicina60101597 - 28 Sep 2024
Viewed by 1253
Abstract
Dye-based angiography is the main imaging modality in evaluating the vasculature of the eye. Although most commonly used to assess retinal vasculature, it can also delineate normal and abnormal blood vessels in anterior segment diseases, but it is limited by its invasive, time-consuming methods. Thus, anterior segment optical coherence tomography angiography (AS-OCTA) is a useful non-invasive modality capable of producing high-resolution images to evaluate the cornea and ocular surface vasculature. AS-OCTA has demonstrated the potential to detect and delineate blood vessels in the anterior segment with quality images comparable to dye-based angiography. AS-OCTA has a diverse range of applications for the cornea and ocular surface, such as objective assessment of corneal neovascularization and response to various treatments; diagnosis and evaluation of ocular surface squamous neoplasia; and evaluation of ocular surface disease including limbal stem cell deficiency and ischemia. Our review aims to summarize the new developments and clinical applications of AS-OCTA for the cornea and ocular surface.
(This article belongs to the Special Issue Clinical Management of Ocular Surface Disease)
Show Figures

Figure 1: AS-OCTA scan of corneal neovascularization from traumatic corneal injury. (A) An en face image with whole blood flow signals. (B) Image with the total CoNV lesion area demarcated in yellow. (C) Cross-sectional scan along the green line in panel (A); areas of vascularity are demarcated in red. (D) Close-up of the dotted white box in panel (C). Image courtesy of Prof. Aijun Deng, Affiliated Hospital of Weifang Medical University, Shandong, China. Device: BMizar BM-400K, TowardPi Medical, China.
Figure 2: AS-OCTA scan of the conjunctival/scleral vasculature in a healthy eye versus an eye with limbal stem cell deficiency. (A) An en face image with whole blood flow signals in the healthy eye. (B) Cross-sectional scan along the green line in panel (A); areas of vascularity are demarcated in red. (C) An en face image with whole blood flow signals in the eye with limbal stem cell deficiency. (D) Cross-sectional scan along the green line in panel (C), showing decreased vascularity compared to the healthy eye in panel (B). (E) Slit-lamp photograph of the eye with limbal stem cell deficiency shown in panels (C,D).
26 pages, 7224 KiB  
Article
MPCCN: A Symmetry-Based Multi-Scale Position-Aware Cyclic Convolutional Network for Retinal Vessel Segmentation
by Chunfen Xia and Jianqiang Lv
Symmetry 2024, 16(9), 1189; https://doi.org/10.3390/sym16091189 - 10 Sep 2024
Viewed by 1298
Abstract
In medical image analysis, precise retinal vessel segmentation is crucial for diagnosing and managing ocular diseases as the retinal vascular network reflects numerous health indicators. Despite decades of development, challenges such as intricate textures, vascular ruptures, and undetected areas persist, particularly in accurately segmenting small vessels and addressing low contrast in imaging. This study introduces a novel segmentation approach called MPCCN that combines position-aware cyclic convolution (PCC) with multi-scale resolution input to tackle these challenges. By integrating standard convolution with PCC, MPCCN effectively captures both global and local features. A multi-scale input module enhances feature extraction, while a weighted-shared residual and guided attention module minimizes background noise and emphasizes vascular structures. Our approach achieves sensitivity values of 98.87%, 99.17%, and 98.88%; specificity values of 98.93%, 97.25%, and 99.20%; accuracy scores of 97.38%, 97.85%, and 97.75%; and AUC values of 98.90%, 99.15%, and 99.05% on the DRIVE, STARE, and CHASE_DB1 datasets, respectively. In addition, it records F1 scores of 90.93%, 91.00%, and 90.55%. Experimental results demonstrate that our method outperforms existing techniques, especially in detecting small vessels.
(This article belongs to the Section Life Sciences)
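One plausible reading of the multi-scale input idea is sketched below: the fundus image is downsampled and projected at successive encoder resolutions. This is an assumption about the general design; the PCC operator itself is not reproduced here.

```python
# Hedged sketch of a multi-scale input module. Assumption: the input image
# is injected at scales 1, 1/2, 1/4, ... and fused with encoder features;
# channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleInput(nn.Module):
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Conv2d(3, c, 3, padding=1) for c in channels]
        )

    def forward(self, x):
        feats = []
        for i, proj in enumerate(self.proj):
            # Downsample the fundus image for each deeper encoder stage.
            xi = x if i == 0 else F.interpolate(
                x, scale_factor=1 / 2 ** i, mode="bilinear", align_corners=False
            )
            feats.append(proj(xi))  # to be fused with encoder features at stage i
        return feats
```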
Show Figures

Figure 1: Overall structure of the MPCCN.
Figure 2: The portion dedicated to a skip connection.
Figure 3: Overall structure of the MIM.
Figure 4: Overall structure of the multi-scale input module.
Figure 5: Structure of the ECA.
Figure 6: Graphical representation of the guided attention module.
Figure 7: Examples of normal retinal vessels and lesion images from various datasets.
Figure 8: Comparison of the segmentation results between the MPCCN and various prior methods on the DRIVE dataset. The yellow and green boxes highlight differences between outcomes.
Figure 9: Comparison of the segmentation results between the MPCCN and various prior methods on the CHASE_DB1 dataset. The yellow boxes highlight differences between outcomes.
Figure 10: Comparison of the segmentation results between the MPCCN and various prior methods on the CHASE_DB1 and STARE datasets. The lines from the yellow boxes show the local details.
Figure 11: Performance comparison of the precision–recall and ROC curves between the MPCCN and various prior approaches on the CHASE_DB1, DRIVE, and HRF datasets.
Figure 12: Ablation study of different modules on the DRIVE dataset.
17 pages, 15128 KiB  
Article
Retinal Vessel Segmentation Based on Self-Attention Feature Selection
by Ligang Jiang, Wen Li, Zhiming Xiong, Guohui Yuan, Chongjun Huang, Wenhao Xu, Lu Zhou, Chao Qu, Zhuoran Wang and Yuhua Tong
Electronics 2024, 13(17), 3514; https://doi.org/10.3390/electronics13173514 - 4 Sep 2024
Cited by 1 | Viewed by 1008
Abstract
Many major diseases can cause changes in the morphology of blood vessels, and the segmentation of retinal blood vessels is of great significance for preventing these diseases. Obtaining complete, continuous, and high-resolution segmentation results is very challenging due to the diverse structures of retinal tissues, the complex spatial structures of blood vessels, and the presence of many small vessels. In recent years, deep learning networks like UNet have been widely used in medical image processing. However, the continuous down-sampling operations in UNet can result in the loss of a significant amount of information. Although skip connections between the encoder and decoder can help address this issue, the encoder features still contain a large amount of irrelevant information that cannot be efficiently utilized by the decoder. To alleviate the irrelevant information, this paper proposes a feature selection module between the decoder and encoder that utilizes the self-attention mechanism of transformers to accurately and efficiently select the relevant encoder features for the decoder. Additionally, a lightweight Residual Global Context module is proposed to obtain dense global contextual information and establish dependencies between pixels, which can effectively preserve vascular details and segment small vessels accurately and continuously. Experimental results on three publicly available color fundus image datasets (DRIVE, CHASE, and STARE) demonstrate that the proposed algorithm outperforms existing methods in terms of both performance metrics and visual quality.
(This article belongs to the Section Bioelectronics)
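A minimal sketch of transformer-based feature selection between encoder and decoder, with decoder features attending over encoder features; the FSTB internals are not given in this listing, so a stock multi-head attention layer serves as a stand-in, and both feature maps are assumed to be at the same resolution.

```python
# Sketch of attention-based feature selection at a skip connection.
# Assumption: decoder features act as queries over encoder features;
# dim/heads are illustrative, not the authors' configuration.
import torch
import torch.nn as nn

class FeatureSelection(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dec_feat, enc_feat):
        # Both inputs: (B, C, H, W), already aligned to the same resolution.
        b, c, h, w = enc_feat.shape
        q = dec_feat.flatten(2).transpose(1, 2)   # (B, HW, C) decoder queries
        kv = enc_feat.flatten(2).transpose(1, 2)  # (B, HW, C) encoder keys/values
        selected, _ = self.attn(q, kv, kv)        # keep decoder-relevant features
        return selected.transpose(1, 2).view(b, c, h, w)
```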
Show Figures

Figure 1: The overall network architecture, where the numbers above the feature maps represent the number of channels.
Figure 2: The overall structure of the RGCTB.
Figure 3: The overall structure of the FSTB.
Figure 4: The process of pyramid average pooling.
Figure 5: The segmentation results of retinal vessel images from the three datasets. The first row shows the original images, the second row the ground-truth labels, and the third row the segmentation results. The first two columns are from the DRIVE dataset, the middle two from the CHASE dataset, and the last two from the STARE dataset.
Figure 6: Comparisons of the retinal vessel segmentation results. From left to right: the original image, the segmentation result of our proposed method, the segmentation result of CAR-UNet, the ground truth, and a magnified view of a selected region.
Figure 7: Comparisons of feature maps before and after the FSTB. From left to right: the original images, the labels, the feature maps before feature selection, and the feature maps after feature selection.
Figure 8: Visual comparisons of the vessel segmentation results between the RGCTB and the original convolutional blocks.
Figure 9: Qualitative visualization results of FS-UNet at different stages.
9 pages, 4309 KiB  
Communication
Attention Mechanism-Based Glaucoma Classification Model Using Retinal Fundus Images
by You-Sang Cho, Ho-Jung Song, Ju-Hyuck Han and Yong-Suk Kim
Sensors 2024, 24(14), 4684; https://doi.org/10.3390/s24144684 - 19 Jul 2024
Viewed by 1196
Abstract
This paper presents a classification model for eye diseases utilizing attention mechanisms to learn features from fundus images and structures. The study focuses on diagnosing glaucoma by extracting retinal vessels and the optic disc from fundus images using a ResU-Net-based segmentation model and Hough Circle Transform, respectively. The extracted structures and preprocessed images were inputted into a CNN-based multi-input model for training. Comparative evaluations demonstrated that our model outperformed other research models in classifying glaucoma, even with a smaller dataset. Ablation studies confirmed that using attention mechanisms to learn fundus structures significantly enhanced performance. The study also highlighted the challenges in normal case classification due to potential feature degradation during structure extraction. Future research will focus on incorporating additional fundus structures such as the macula, refining extraction algorithms, and expanding the types of classified eye diseases.
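Optic-disc localization with the Hough Circle Transform can be sketched with OpenCV as below; the file path and all parameter values are hypothetical placeholders, not the authors' settings.

```python
# Sketch of optic-disc localization via the Hough Circle Transform.
# Assumption: "fundus.png" and the parameters are illustrative placeholders.
import cv2
import numpy as np

img = cv2.imread("fundus.png")                       # hypothetical input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                       # suppress speckle noise
circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0] // 2,
    param1=100, param2=30, minRadius=30, maxRadius=120,
)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)    # strongest circle = disc
    disc = img[max(y - r, 0):y + r, max(x - r, 0):x + r]  # crop the disc region
```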
Show Figures

Figure 1: Examples of fundus images: (a) normal image; (b) fundus image of a glaucoma patient.
Figure 2: Overall flowchart of the method proposed in this paper.
Figure 3: Preprocessing steps. (a) The original data used in this study; (b) the data with the average RGB channel value of the image applied to the background area around the fundus; (c) the data using only the L channel after conversion to the Lab color space; (d) the data after applying CLAHE to (c); (e) the data converted back to RGB after the previous steps; (f) the data with the original black background restored in (e).
Figure 4: Architecture of the retinal vascular segmentation model and extracted retinal vascular data: (a) the ResU-Net-based architecture of the retinal vascular segmentation model; (b) retinal vascular images extracted by ResU-Net, with an image extracted from glaucoma data on the left and from normal data on the right.
Figure 5: Optic disc extraction process. (a) Only the right part of the fundus image is extracted; (b) the top 3.5% of all pixels within the L channel after conversion to the Lab color space; (c) the Hough Circle Transform detects circles in the image to specify the range; (d) the detected range applied to the fundus image.
Figure 6: Architecture of the glaucoma classification model.
Figure 7: ROC curve of the glaucoma classification model.
18 pages, 8185 KiB  
Article
A Novel Single-Sample Retinal Vessel Segmentation Method Based on Grey Relational Analysis
by Yating Wang and Hongjun Li
Sensors 2024, 24(13), 4326; https://doi.org/10.3390/s24134326 - 3 Jul 2024
Cited by 1 | Viewed by 1101
Abstract
Accurate segmentation of retinal vessels is of great significance for computer-aided diagnosis and treatment of many diseases. Due to the limited number of retinal vessel samples and the scarcity of labeled samples, and since grey theory excels in handling problems of “few data, poor information”, this paper proposes a novel grey relational-based method for retinal vessel segmentation. Firstly, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Secondly, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated using multiple different measurement metrics on the publicly available digital retinal DRIVE, STARE and HRF datasets. Experimental analysis showed that the average accuracy and specificity on the DRIVE dataset were 96.03% and 98.51%. The mean accuracy and specificity on the STARE dataset were 95.46% and 97.85%. Precision, F1-score, and Jaccard index on the HRF dataset all demonstrated high-performance levels. The method proposed in this paper is superior to the current mainstream methods.
(This article belongs to the Section Sensing and Imaging)
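For reference, the classical Deng grey relational coefficient that GRA-based methods build on can be computed as below; the vessel reference sequence and decision threshold are illustrative assumptions, and the NADF-GRA/TS-GRA specifics are not reproduced.

```python
# Hedged sketch of the grey relational coefficient (Deng, resolution rho=0.5).
# Assumption: the reference sequence and threshold are illustrative only.
import numpy as np

def grey_relational_coeff(ref: np.ndarray, seq: np.ndarray,
                          rho: float = 0.5, eps: float = 1e-12) -> np.ndarray:
    """Pointwise grey relational coefficients of seq against reference ref."""
    delta = np.abs(ref - seq)
    dmax = delta.max()
    return (delta.min() + rho * dmax + eps) / (delta + rho * dmax + eps)

# Pixels whose mean relational grade to an ideal bright-vessel neighborhood
# exceeds a threshold would be labeled as vessel.
ref = np.full(9, 1.0)                  # ideal vessel neighborhood (assumed)
patch = np.random.rand(9)              # placeholder intensity neighborhood
grade = grey_relational_coeff(ref, patch).mean()
is_vessel = grade > 0.6                # illustrative threshold
```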
Show Figures

Figure 1: Structure of the retina.
Figure 2: Block diagram of the suggested approach.
Figure 3: Intermediate images of fundus image preprocessing: (a) the original image, (b) the green-channel image, (c) the image after NADF-GRA, (d) the image after CLAHE, (e) the image after Frangi enhancement.
Figure 4: Images from the DRIVE dataset before and after post-processing: (a,c) the images before post-processing; (b,d) the images after post-processing.
Figure 5: Threshold variation map of NADF-GRA.
Figure 6: Threshold variation curve of TS-GRA.
Figure 7: Segmentation results on the DRIVE, STARE and HRF datasets.
Figure 8: Magnified comparison of the segmentation results of different algorithms on DRIVE: (a) the original image, (b) the ground truth, (c) segmentation results under the traditional GLCM model, (d) segmentation results under the novel GLCAA model.
Figure 9: Magnified comparison of the segmentation results of different algorithms on STARE: (a) the original image, (b) the ground truth, (c) segmentation results under the traditional GLCM model, (d) segmentation results under the novel GLCAA model.
14 pages, 2268 KiB  
Article
A Retinal Vessel Segmentation Method Based on the Sharpness-Aware Minimization Model
by Iqra Mariam, Xiaorong Xue and Kaleb Gadson
Sensors 2024, 24(13), 4267; https://doi.org/10.3390/s24134267 - 30 Jun 2024
Cited by 1 | Viewed by 1284
Abstract
Retinal vessel segmentation is crucial for diagnosing and monitoring various eye diseases such as diabetic retinopathy, glaucoma, and hypertension. In this study, we examine how sharpness-aware minimization (SAM) can improve RF-UNet’s generalization performance. RF-UNet is a novel model for retinal vessel segmentation. We focused our experiments on the digital retinal images for vessel extraction (DRIVE) dataset, which is a benchmark for retinal vessel segmentation, and our test results show that adding SAM to the training procedure leads to notable improvements. Compared to the non-SAM model (training loss of 0.45709 and validation loss of 0.40266), the SAM-trained RF-UNet model achieved a significant reduction in both training loss (0.094225) and validation loss (0.08053). Furthermore, compared to the non-SAM model (training accuracy of 0.90169 and validation accuracy of 0.93999), the SAM-trained model demonstrated higher training accuracy (0.96225) and validation accuracy (0.96821). Additionally, the model performed better in terms of sensitivity, specificity, AUC, and F1 score, indicating improved generalization to unseen data. Our results corroborate the notion that SAM facilitates the learning of flatter minima, thereby improving generalization, and are consistent with other research highlighting the advantages of advanced optimization methods. With wider implications for other medical imaging tasks, these results imply that SAM can successfully reduce overfitting and enhance the robustness of retinal vessel segmentation models. Prospective research avenues encompass verifying the model on vaster and more diverse datasets and investigating its practical implementation in real-world clinical situations.
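A minimal sketch of one two-step SAM update (after Foret et al.), as might wrap an RF-UNet training step; rho and the base optimizer are illustrative, not the paper's settings.

```python
# Hedged SAM sketch: ascend to a worst-case weight perturbation within an
# L2 ball of radius rho, take the gradient there, then update the original
# weights with the base optimizer.
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho: float = 0.05):
    # First pass: gradient at the current weights.
    loss_fn(model(x), y).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbed = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                   # climb toward the sharpest direction
            perturbed.append((p, e))
    model.zero_grad()
    # Second pass: gradient at the perturbed weights.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in perturbed:
            p.sub_(e)                   # restore the original weights
    base_opt.step()                     # update using the perturbed gradient
    base_opt.zero_grad()

# Usage (illustrative): base_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
```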
Show Figures

Figure 1: Block diagram showing how to integrate SAM into an MIS model.
Figure 2: A single SAM parameter update.
Figure 3: A sample of images from the DRIVE dataset: (a) an image with maximum brightness; (b,c) images with low brightness.
Figure 4: Visual comparison between the ground truths, the output of RF-UNet without SAM, and the output of RF-UNet with SAM: (a,d) ground-truth images from the DRIVE dataset; (b,e) output images of RF-UNet without SAM; (c,f) output images of RF-UNet with SAM.
Figure 5: Validation and training accuracy over 30 epochs of RF-UNet trained with and without SAM.
Figure A1: (a) Validation and training loss for the SAM-trained and non-SAM-trained models; (b) validation and training AUC; (c) validation and training specificity; (d) validation and training sensitivity.
11 pages, 2001 KiB  
Case Report
High-Resolution Imaging in Macular Telangiectasia Type 2: Case Series and Literature Review
by Andrada Elena Mirescu, Florian Balta, Ramona Barac, Dan George Deleanu, Ioana Teodora Tofolean, George Balta, Razvan Cojanu and Sanda Jurja
Diagnostics 2024, 14(13), 1351; https://doi.org/10.3390/diagnostics14131351 - 25 Jun 2024
Viewed by 1567
Abstract
Background: Macular telangiectasia (MacTel), also known as idiopathic juxtafoveolar telangiectasis (IJFTs), involves telangiectatic changes in the macular capillary network. The most common variant, MacTel type 2, has distinct clinical features and management strategies. Methods: This study offers a comprehensive review of MacTel and focuses on a series of three patients diagnosed with MacTel type 2 in our clinic. A meticulous ophthalmological evaluation, augmented by high-resolution imaging techniques like optical coherence tomography (OCT), OCT angiography (OCT-A), fundus autofluorescence (FAF), fluorescein angiography (FA), and adaptive optics (AO) imaging, was conducted. Results: The findings revealed normal anterior segment features and a grayish discoloration in the temporal perifoveal area on fundus examination. OCT exhibited hyporeflective cavities in the inner and outer neurosensory retina, along with other changes, while OCT-A identified retinal telangiectatic vessels in the deep capillary plexus. FAF demonstrated increased foveal autofluorescence, while FA initially detected telangiectatic capillaries followed by diffuse perilesional leakage in the later phase. Adaptive optics images showed the cone mosaic pattern. Notably, one patient developed a macular hole as a complication, which was successfully managed surgically. Conclusions: This study underscores the challenges in diagnosing and managing MacTel, emphasizing the importance of a multidisciplinary approach and regular follow-ups for optimal outcomes.
(This article belongs to the Special Issue Diagnostics for Ocular Diseases: Its Importance in Patient Care)
Show Figures

Figure 1: (a) OCT image of RE; (b) OCT-A image of RE; (c) OCT image of LE; (d) OCT-A image of LE; (e) FAF of RE; (f) FAF of LE; (g) FA, early phase of RE; (h) FA, early phase of LE; (i) FA, late phase of RE; (j) FA, late phase of LE; (k) AO image of RE photoreceptors; (l) AO image of LE photoreceptors. The lines in (b,d) represent the OCT image slice navigators.
Figure 2: (a) OCT image of RE; (b) OCT-A image of RE; (c) OCT image of LE; (d) OCT-A image of LE; (e) FAF of RE; (f) FAF of LE; (g) FA, early phase of RE; (h) FA, early phase of LE; (i) FA, late phase of RE; (j) FA, late phase of LE; (k) AO image of RE photoreceptors; (l) AO image of LE photoreceptors. The lines in (b,d) represent the OCT image slice navigators.
Figure 3: (a) OCT image of RE; (b) OCT-A image of RE; (c) OCT image of LE; (d) OCT-A image of LE; (e) FAF of RE; (f) FAF of LE; (g) FA, early phase of RE; (h) FA, early phase of LE; (i) FA, late phase of RE; (j) FA, late phase of LE; (k) AO image of RE photoreceptors; (l) AO image of LE photoreceptors. The lines in (b,d) represent the OCT image slice navigators.
Figure 4: (a) LE OCT captured one year after the initial visit, showing progression into a macular hole with a positive “ILM drape” sign; (b) LE OCT captured one month after vitrectomy.
Figure 5: (a) RE OCT showing progression into a macular hole with a positive “ILM drape” sign; (b) RE OCT captured one month after vitrectomy.
19 pages, 20152 KiB  
Article
PAM-UNet: Enhanced Retinal Vessel Segmentation Using a Novel Plenary Attention Mechanism
by Yongmao Wang, Sirui Wu and Junhao Jia
Appl. Sci. 2024, 14(13), 5382; https://doi.org/10.3390/app14135382 - 21 Jun 2024
Cited by 1 | Viewed by 989
Abstract
Retinal vessel segmentation is critical for diagnosing related diseases in the medical field. However, the complex structure and variable size and shape of retinal vessels make segmentation challenging. To enhance feature extraction capabilities in existing algorithms, we propose PAM-UNet, a U-shaped network architecture incorporating a novel Plenary Attention Mechanism (PAM). In the BottleNeck stage of the network, PAM identifies key channels and embeds positional information, allowing spatial features within significant channels to receive more focus. We also propose a new regularization method, DropBlock_Diagonal, which discards diagonal regions of the feature map to prevent overfitting and enhance vessel feature learning. Within the decoder stage of the network, features from each stage are merged to enhance the segmentation accuracy of the final vessel. Experimental validation on two retinal image datasets, DRIVE and CHASE_DB1, shows that PAM-UNet achieves Acc, Se, Sp, F1, and AUC of 97.15%, 83.16%, 98.45%, 83.15%, and 98.66% on DRIVE and 97.64%, 85.82%, 98.46%, 82.56%, and 98.95% on CHASE_DB1, respectively, outperforming UNet and most other retinal vessel segmentation algorithms.
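A hedged sketch of the DropBlock_Diagonal idea follows: diagonal bands of the feature map are zeroed and the result is rescaled like dropout. The authors' sampling rule and diag_type variants are not reproduced, so the placement logic here is an assumption.

```python
# Hedged DropBlock_Diagonal sketch. Assumption: num_diag random diagonal
# bands of width diag_l are dropped; the paper's exact rule is not given here.
import torch

def dropblock_diagonal(x: torch.Tensor, diag_l: int = 5, num_diag: int = 1,
                       training: bool = True) -> torch.Tensor:
    if not training:
        return x
    _, _, h, w = x.shape
    mask = torch.ones(h, w, device=x.device)
    for _ in range(num_diag):
        # Random starting offset for a band of diag_l adjacent diagonals.
        off = int(torch.randint(-(h - diag_l), w - diag_l, (1,)))
        for k in range(diag_l):
            torch.diagonal(mask, offset=off + k).zero_()  # drop one diagonal
    keep = mask.mean().clamp(min=1e-6)
    return x * mask / keep                                # rescale like dropout
```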
Show Figures

Figure 1: PAM-UNet network architecture diagram.
Figure 2: Plenary attention mechanism.
Figure 3: DropBlock schematic diagram. (a) A color fundus image (local region) input to the network. In (b,c), light blue regions represent activated units containing vessel semantic information, white regions represent activated units containing background semantic information, each cell represents a pixel, and crosses mark discarded activation units.
Figure 4: DropBlock_Diagonal schematic diagram. (a) A color fundus image (local region) input to the network. (b) The effect of Dropout, where activation units are randomly discarded. (c) The schematic for DropBlock_Diagonal with diag_l = 5, diag_type = “primary”, num_diag = 1, and (d) with diag_l = 5, diag_type = “secondary”, num_diag = 1.
Figure 5: DropBlock_Diagonal placement. The convolutional block is illustrated in (a); DropBlock_Diagonal is added after the convolutional layer and before BN [40] and ReLU [41], as shown in (b) (Drop_Diag stands for DropBlock_Diagonal).
Figure 6: Image preprocessing.
Figure 7: Segmentation results of PAM with different reduction rates (r).
Figure 8: DropBlock_Diagonal location discussion.
Figure 9: DropBlock_Diagonal shape discussion.
Figure 10: DRIVE dataset segmentation results.
Figure 11: CHASE_DB1 dataset segmentation results.
Figure 12: DRIVE dataset segmentation details.
Figure 13: CHASE_DB1 dataset segmentation details.
12 pages, 1243 KiB  
Article
A Microvascular Segmentation Network Based on Pyramidal Attention Mechanism
by Hong Zhang, Wei Fang and Jiayun Li
Sensors 2024, 24(12), 4014; https://doi.org/10.3390/s24124014 - 20 Jun 2024
Cited by 1 | Viewed by 948
Abstract
The precise segmentation of retinal vasculature is crucial for the early screening of various eye diseases, such as diabetic retinopathy and hypertensive retinopathy. Given the complex and variable overall structure of retinal vessels and their delicate, minute local features, the accurate extraction of fine vessels and edge pixels remains a technical challenge in the current research. To enhance the ability to extract thin vessels, this paper incorporates a pyramid channel attention module into a U-shaped network. This allows for more effective capture of information at different levels and increased attention to vessel-related channels, thereby improving model performance. Simultaneously, to prevent overfitting, this paper optimizes the standard convolutional block in the U-Net with the pre-activated residual discard convolution block, thus improving the model’s generalization ability. The model is evaluated on three benchmark retinal datasets: DRIVE, CHASE_DB1, and STARE. Experimental results demonstrate that, compared to the baseline model, the proposed model achieves improvements in sensitivity (Sen) scores of 7.12%, 9.65%, and 5.36% on these three datasets, respectively, proving its strong ability to extract fine vessels.
(This article belongs to the Section Biomedical Sensors)
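One plausible form of pyramid channel attention is sketched below: channel weights derived from average-pooled descriptors at several grid sizes. The paper's exact submodule layout is not given in this listing, so treat the pool sizes and layer shapes as assumptions.

```python
# Sketch of pyramid channel attention. Assumption: channel weights come from
# multi-scale pooled descriptors; pool sizes and MLP shape are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidChannelAttention(nn.Module):
    def __init__(self, channels: int, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes
        n = sum(s * s for s in pool_sizes)          # total pooled cells
        self.fc = nn.Sequential(
            nn.Linear(channels * n, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # Descriptors from average pooling at several grid sizes (the pyramid).
        desc = [F.adaptive_avg_pool2d(x, s).flatten(1) for s in self.pool_sizes]
        w = self.fc(torch.cat(desc, dim=1)).view(b, c, 1, 1)
        return x * w                                 # reweight vessel channels
```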
Show Figures

Figure 1: Pyramid channel attention module and submodule.
Figure 2: Convolutional block.
Figure 3: Dropout (a) and DropBlock (b) function diagrams.
Figure 4: The proposed algorithm’s structure diagram.
Figure 5: Raw images and masks from the datasets.
Figure 6: Visualization of vessel segmentation results. Red and blue boxes highlight and magnify details to make the results more intuitive.
15 pages, 8511 KiB  
Article
Vessel Segmentation in Fundus Images with Multi-Scale Feature Extraction and Disentangled Representation
by Yuanhong Zhong, Ting Chen, Daidi Zhong and Xiaoming Liu
Appl. Sci. 2024, 14(12), 5039; https://doi.org/10.3390/app14125039 - 10 Jun 2024
Viewed by 1053
Abstract
Vessel segmentation in fundus images is crucial for diagnosing eye diseases. The rapid development of deep learning has greatly improved segmentation accuracy. However, the scale of the retinal blood-vessel structure varies greatly, and there is a lot of noise unrelated to blood-vessel segmentation in fundus images, which increases the complexity and difficulty of the segmentation algorithm. Comprehensive consideration of factors like scale variation and noise suppression is imperative to enhance segmentation accuracy and stability. Therefore, we propose a retinal vessel segmentation method based on multi-scale feature extraction and decoupled representation. Specifically, we design a multi-scale feature extraction module at the skip connections, utilizing dilated convolutions to capture multi-scale features and further emphasizing crucial information through channel attention modules. Additionally, to separate useful spatial information from redundant information and enhance segmentation performance, we introduce an image reconstruction branch to assist in the segmentation task. The specific approach involves using a disentangled representation method to decouple the image into content and style, utilizing the content part for segmentation tasks. We conducted experiments on the DRIVE, STARE, and CHASE_DB1 datasets, and the results showed that our method outperformed others, achieving the highest accuracy across all three datasets (DRIVE: 0.9690, CHASE_DB1: 0.9757, and STARE: 0.9765).
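A sketch of a dilated-convolution multi-scale feature extraction block for the skip connections follows, with parallel dilation rates and a lightweight channel gate; the dilation rates and the SE-style gate are assumptions consistent with the description above, not the authors' exact MSFE design.

```python
# Sketch of a dilated-conv multi-scale feature extraction block for skip
# connections. Assumption: parallel dilation rates plus channel attention;
# the rates and gate are illustrative.
import torch
import torch.nn as nn

class MSFEBlock(nn.Module):
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
             for r in rates]
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)
        # Lightweight SE-style channel attention over the fused features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        # Parallel receptive fields capture vessels at several scales.
        y = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return y * self.gate(y)
```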
Show Figures

Figure 1: Framework of the proposed UNet-MSFE-DR.
Figure 2: The structure of the proposed UNet-MSFE-DR.
Figure 3: The structure of the designed MSFE.
Figure 4: Visualization of comparative experimental results.
Figure 5: Visualization of the ablation experiments.
Figure 6: Detailed visualization results of the ablation experiments.