Search Results (412)

Search Parameters:
Keywords = GLCM

11 pages, 4367 KiB  
Article
Gray-Level Co-Occurrence Matrix Uniformity Correction Algorithm in Positron Emission Tomographic Image: A Phantom Study
by Kyuseok Kim and Youngjin Lee
Photonics 2025, 12(1), 33; https://doi.org/10.3390/photonics12010033 - 3 Jan 2025
Viewed by 224
Abstract
High uniformity of positron emission tomography (PET) images in the field of nuclear medicine is necessary to obtain excellent and stable data from the system. In this study, we aimed to apply and optimize a PET/magnetic resonance (MR) imaging system by approaching the gray-level co-occurrence matrix (GLCM), which is known to be efficient in the uniformity correction of images. CAIPIRINHA Dixon-VIBE was used as an MR image acquisition pulse sequence for the fast and accurate attenuation correction of PET images, and the phantom was constructed by injecting NaCl and NaCl + NiSO4 solutions. The lambda value of the GLCM algorithm for uniformity correction of the acquired PET images was optimized in terms of energy and contrast. By applying the GLCM algorithm optimized in terms of energy and contrast to the PET images of phantoms using NaCl and NaCl + NiSO4 solutions, average percent image uniformity (PIU) values of 26.01 and 83.76 were derived, respectively. Compared to the original PET image, an improved PIU value of more than 30% was derived from the PET image to which the proposed optimized GLCM algorithm was applied. In conclusion, we demonstrated that an algorithm optimized in terms of the GLCM energy and contrast can improve the uniformity of PET images. Full article
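The GLCM energy/contrast criteria and the percent image uniformity (PIU) metric described in this abstract can be prototyped in a few lines. Below is a minimal Python sketch using scikit-image's graycomatrix/graycoprops; the quantization level and the NEMA-style PIU formula are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_energy_contrast(img_slice, levels=64):
    """Quantize a PET slice to `levels` gray levels and return GLCM energy and contrast."""
    bins = np.linspace(img_slice.min(), img_slice.max(), levels)
    q = (np.digitize(img_slice, bins) - 1).astype(np.uint8)  # values in 0..levels-1
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "energy")[0, 0], graycoprops(glcm, "contrast")[0, 0]

def percent_image_uniformity(roi):
    """NEMA-style PIU over an ROI: 100 * (1 - (max - min) / (max + min))."""
    mx, mn = float(roi.max()), float(roi.min())
    return 100.0 * (1.0 - (mx - mn) / (mx + mn))
```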
Figure 1: Simplified flowchart of the uniformity correction using the weight parameter optimization with the gray-level co-occurrence matrix (GLCM) in the positron emission tomography (PET) image. Here, the closer GLCM contrast was to 0 and GLCM energy was to 1, the higher the uniformity of the PET image slice.
Figure 2: Graph for deriving the optimal solution of the GLCM algorithm's lambda value: results in terms of (a) energy and (b) contrast using a NaCl solution phantom, and results in terms of (c) energy and (d) contrast using a NaCl + NiSO₄ solution phantom.
Figure 3: Corrected PET images shown by applying the original and optimized GLCM algorithm: (a) NaCl and (b) NaCl + NiSO₄ solution phantom results. The bias-field image derived from the optimization process in terms of energy and contrast of the GLCM is shown in the middle line.
Figure 4: (a) ROI schematic diagram indicated for PIU calculation and (b) graph of PIU results from PET images acquired by applying the original and optimized GLCM algorithm.
16 pages, 4152 KiB  
Article
Computer Vision-Based Fire–Ice Ion Algorithm for Rapid and Nondestructive Authentication of Ziziphi Spinosae Semen and Its Counterfeits
by Peng Chen, Xutong Shao, Guangyu Wen, Yaowu Song, Rao Fu, Xiaoyan Xiao, Tulin Lu, Peina Zhou, Qiaosheng Guo, Hongzhuan Shi and Chenghao Fei
Foods 2025, 14(1), 5; https://doi.org/10.3390/foods14010005 - 24 Dec 2024
Viewed by 508
Abstract
The authentication of Ziziphi Spinosae Semen (ZSS), Ziziphi Mauritianae Semen (ZMS), and Hovenia Acerba Semen (HAS) has become challenging. The chromatic and textural properties of ZSS, ZMS, and HAS are analyzed in this study. Color features were extracted via RGB, CIELAB, and HSI spaces, whereas texture information was analyzed via the gray-level co-occurrence matrix (GLCM) and Law’s texture feature analysis. The results revealed significant differences in color and texture among the samples. The fire–ice ion dimensionality reduction algorithm effectively fuses these features, enhancing their differentiation ability. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) confirmed the algorithm’s effectiveness, with variable importance in projection analysis (VIP analysis) (VIP > 1, p < 0.05) highlighting significant differences, particularly for the fire value, which is a key factor. To further validate the reliability of the algorithm, Back Propagation Neural Network (BP), Support Vector Machine (SVM), Deep Belief Network (DBN), and Random Forest (RF) were used for reverse validation, and the accuracy of the training set and test set reached 98.83–100% and 95.89–99.32%, respectively. The method provides a simple, low-cost, and high-precision tool for the fast and nondestructive detection of food authenticity. Full article
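For readers who want to reproduce the color/texture characterization step, the sketch below extracts per-channel color statistics in several color spaces plus GLCM texture properties with OpenCV and scikit-image. It is a simplified stand-in under stated assumptions (8-bit BGR images; HSV used in place of HSI); the Law's texture filters and the fire–ice fusion itself are not reproduced here.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def color_texture_features(bgr):
    """Per-channel color means in three color spaces plus four GLCM texture properties."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # HSV as a stand-in for HSI
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    feats = {}
    for name, img in (("rgb", bgr), ("lab", lab), ("hsi", hsv)):
        feats.update({f"{name}_mean_{i}": float(img[..., i].mean()) for i in range(3)})
    glcm = graycomatrix(gray, [1], [0, np.pi / 2], levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "energy", "homogeneity", "correlation"):
        feats[f"glcm_{prop}"] = float(graycoprops(glcm, prop).mean())
    return feats
```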
Figure 1: Sample information (A) and radar chart of colorimetric values (B) of ZSS, ZMS, and HAS. ZSS, Ziziphi Spinosae Semen; ZMS, Ziziphi Mauritianae Semen; HAS, Hovenia Acerba Semen.
Figure 2: GLCM texture parameter histogram (A) and Law's texture parameter heatmap (B) of ZSS, ZMS, and HAS.
Figure 3: Fire–ice value box chart (A) and fire–ice chart (B) of ZSS, ZMS, and HAS. The letters (a–c) above the bars indicate significant differences as determined by Duncan's multiple-range test (p < 0.05).
Figure 4: Score plots of the PCA model (A) and the PLS-DA model (B) for ZSS, ZMS, and HAS based on raw color and texture characterization. PCA, principal component analysis; PLS-DA, partial least squares discriminant analysis.
Figure 5: Cross-validation results with 200 calculations using a permutation test (A) and VIP plots (B) for ZSS, ZMS, and HAS based on raw color and texture characterization. VIP, variable importance in projection.
Figure 6: Score plots of the PCA model (A) and the PLS-DA model (B) for ZSS, ZMS, and HAS based on fire–ice ion dimensionality reduction data.
Figure 7: Cross-validation results with 200 calculations using a permutation test (A) and VIP plots (B) for ZSS, ZMS, and HAS based on fire–ice ion dimensionality reduction data.
Figure 8: Evaluation metrics of 4 machine learning algorithms (BP, SVM, DBN, and RF).
16 pages, 1342 KiB  
Article
Diffusion-Weighted MRI and Human Papillomavirus (HPV) Status in Oropharyngeal Cancer
by Heleen Bollen, Rüveyda Dok, Frederik De Keyzer, Sarah Deschuymer, Annouschka Laenen, Johannes Devos, Vincent Vandecaveye and Sandra Nuyts
Cancers 2024, 16(24), 4284; https://doi.org/10.3390/cancers16244284 - 23 Dec 2024
Viewed by 392
Abstract
Background: This study aimed to explore the differences in quantitative diffusion-weighted (DW) MRI parameters in oropharyngeal squamous cell carcinoma (OPC) based on Human Papillomavirus (HPV) status before and during radiotherapy (RT). Methods: Echo planar DW sequences acquired before and during (chemo)radiotherapy (CRT) of 178 patients with histologically proven OPC were prospectively analyzed. The volumetric region of interest (ROI) was manually drawn on the apparent diffusion coefficient (ADC) map, and 105 DW-MRI radiomic parameters were extracted. Change in ADC values (ΔADC) was calculated as the difference between baseline and during RT at week 4, normalized by the baseline values. Results: Pre-treatment first-order 10th percentile ADC and Gray-Level Co-occurrence Matrix (GLCM) correlation were significantly lower in HPV-positive compared with HPV-negative tumors (82.4 × 10⁻⁵ mm²/s vs. 90.3 × 10⁻⁵ mm²/s, p = 0.03 and 0.18 vs. 0.30, p < 0.01). In the fourth week of RT, all first-order ADC values were significantly higher in HPV-positive tumors (p < 0.01). ΔADC mean was significantly higher for the HPV-positive compared with the HPV-negative OPC group (95% vs. 55%, p < 0.01). A predictive model for HPV status based on smoking status, alcohol consumption, GLCM correlation, and mean ADC and 10th percentile ADC values yielded an area under the curve of 0.77 (95% CI 0.70–0.84). Conclusions: Our results highlight the potential of DW-MR imaging as a non-invasive biomarker for the prediction of HPV status, although its current role remains supplementary to pathological confirmation. Full article
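The normalized ADC change is a one-line computation once the ROI means are available. A minimal sketch follows; the radiomic feature extraction itself (e.g., via pyradiomics) is assumed to happen elsewhere.

```python
import numpy as np

def delta_adc_percent(adc_baseline, adc_week4, roi_mask):
    """ΔADC (%) = (mean ADC at RT week 4 - baseline mean ADC) / baseline mean ADC * 100."""
    base = float(adc_baseline[roi_mask].mean())
    during = float(adc_week4[roi_mask].mean())
    return 100.0 * (during - base) / base
```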
(This article belongs to the Special Issue Advances in Radiotherapy for Head and Neck Cancer)
Figure 1: Flowchart of the study. OPC: oropharyngeal cancer; HPV: Human Papillomavirus; n = number of patients. P16 was used as a surrogate marker for HPV. In the following figures and tables, p16-positive tumors are depicted as HPV-positive and p16-negative tumors as HPV-negative.
Figure 2: Boxplots displaying the distribution of (A) pre-treatment ADC mean values (×10⁻⁵ mm²/s), (B) ADC mean values during RT (×10⁻⁵ mm²/s), and (C) ΔADC mean (%) according to HPV status. p-values were calculated with the Mann–Whitney U test.
Figure 3: Receiver operating characteristic (ROC) curve of the predictive model based on clinical factors (smoking status, alcohol consumption, tumor location) and radiomic features (10th percentile ADC, mean ADC, and GLCM correlation). HPV status was used as the classification variable. The predicted probability was generated by multivariable logistic regression. AUC: area under the curve.
Figure 4: Kaplan–Meier curves for (A) locoregional control (LRC) and (B) overall survival (OS) stratified by p16 status as a surrogate marker for HPV. HPV: Human Papillomavirus.
19 pages, 2563 KiB  
Article
Optimization of Cocoa Pods Maturity Classification Using Stacking and Voting with Ensemble Learning Methods in RGB and LAB Spaces
by Kacoutchy Jean Ayikpa, Abou Bakary Ballo, Diarra Mamadou and Pierre Gouton
J. Imaging 2024, 10(12), 327; https://doi.org/10.3390/jimaging10120327 - 18 Dec 2024
Viewed by 535
Abstract
Determining the maturity of cocoa pods early is not just about guaranteeing harvest quality and optimizing yield. It is also about efficient resource management. Rapid identification of the stage of maturity helps avoid losses linked to a premature or late harvest, improving productivity. Early determination of cocoa pod maturity ensures both the quality and quantity of the harvest, as immature or overripe pods cannot produce premium cocoa beans. Our innovative research harnesses artificial intelligence and computer vision technologies to revolutionize the cocoa industry, offering precise and advanced tools for accurately assessing cocoa pod maturity. Providing an objective and rapid assessment enables farmers to make informed decisions about the optimal time to harvest, helping to maximize the yield of their plantations. Furthermore, by automating this process, these technologies reduce the margins for human error and improve the management of agricultural resources. With this in mind, our study proposes to exploit a computer vision method based on the GLCM (gray level co-occurrence matrix) algorithm to extract the characteristics of images in the RGB (red, green, blue) and LAB (luminance, axis between red and green, axis between yellow and blue) color spaces. This approach allows for in-depth image analysis, which is essential for capturing the nuances of cocoa pod maturity. Next, we apply classification algorithms to identify the best performers. These algorithms are then combined via stacking and voting techniques, allowing our model to be optimized by taking advantage of the strengths of each method, thus guaranteeing more robust and precise results. The results demonstrated that the combination of algorithms produced superior performance, especially in the LAB color space, where voting scored 98.49% and stacking 98.71%. In comparison, in the RGB color space, voting scored 96.59% and stacking 97.06%. These results surpass those generally reported in the literature, showing the increased effectiveness of combined approaches in improving the accuracy of classification models. This highlights the importance of exploring ensemble techniques to maximize performance in complex contexts such as cocoa pod maturity classification. Full article
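The voting and stacking combinations the abstract reports map directly onto scikit-learn's ensemble API. The sketch below is illustrative: the base learners and hyperparameters are assumptions, not the paper's exact configuration.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder base learners; the paper selects its own best performers.
base = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]
voter = VotingClassifier(estimators=base, voting="soft")      # soft-votes on class probabilities
stacker = StackingClassifier(estimators=base,                 # meta-learner on base-model outputs
                             final_estimator=LogisticRegression(max_iter=1000), cv=5)
# voter.fit(X_train, y_train); stacker.fit(X_train, y_train)  # X: GLCM features in RGB or LAB
```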
(This article belongs to the Special Issue Imaging Applications in Agriculture)
Figure 1: Diagram representing the voting process.
Figure 2: Illustration of the stacking process of the algorithms in our study.
Figure 3: The overall architecture of our method.
Figure 4: Histogram of model performance comparison (accuracy) in the RGB space.
Figure 5: Confusion matrix of the best-performing models in the RGB color space.
Figure 6: Histogram of model performance comparison (accuracy) in the LAB space.
Figure 7: Confusion matrix of the best-performing models in the LAB color space.
Figure 8: Histogram of algorithm performance in RGB and LAB color spaces.
26 pages, 2762 KiB  
Article
Uncovering the Diagnostic Power of Radiomic Feature Significance in Automated Lung Cancer Detection: An Integrative Analysis of Texture, Shape, and Intensity Contributions
by Sotiris Raptis, Christos Ilioudis and Kiki Theodorou
BioMedInformatics 2024, 4(4), 2400-2425; https://doi.org/10.3390/biomedinformatics4040129 - 18 Dec 2024
Viewed by 403
Abstract
Background: Lung cancer remains the leading cause of cancer death worldwide, and early detection substantially improves patient survival. Standard diagnostic methods lack sensitivity, especially in the early stages. In this paper, radiomic features are examined that can improve diagnostic accuracy in automated lung cancer detection, considering the important feature categories of texture, shape, and intensity derived from CT DICOM images. Methods: We developed and compared the performance of two machine learning models, a DenseNet-201 CNN and XGBoost, trained on radiomic features to distinguish malignant tumors from benign ones. Feature importance was analyzed using SHAP and permutation-importance techniques, which provide both global and case-specific interpretability of the models. Results: Features that reflect tumor heterogeneity and morphology, including GLCM Entropy, shape compactness, and the surface-area-to-volume ratio, performed strongly in diagnosis, with DenseNet-201 achieving an accuracy of 92.4% and XGBoost 89.7%. The interpretability analysis supports their potential for early detection and for boosting diagnostic confidence. Conclusions: This work identifies the most important radiomic features and quantifies their diagnostic significance through a feature selection process grounded in stability analysis, providing a blueprint for feature-driven model interpretability in clinical applications. Radiomic features have great value in the automated diagnosis of lung cancer, especially when combined with machine learning models, and may improve early detection and enable personalized diagnostic strategies for precision oncology. Full article
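A hedged sketch of the interpretability step for the tabular (XGBoost) branch follows, combining SHAP attributions with scikit-learn permutation importance; the model hyperparameters are placeholders, not the study's tuned values.

```python
import shap
import xgboost as xgb
from sklearn.inspection import permutation_importance

def train_and_explain(X_train, y_train, X_test, y_test):
    """Fit an XGBoost classifier and derive global and per-case feature attributions."""
    model = xgb.XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
    model.fit(X_train, y_train)
    explainer = shap.TreeExplainer(model)          # per-feature, per-case SHAP values
    shap_values = explainer.shap_values(X_test)
    perm = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    return shap_values, perm.importances_mean      # global ranking via mean importance drop
```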
Figure 1: Overview of the radiomics workflow used in this study.
Figure 2: Distribution of radiomic feature categories extracted in this study.
Figure 3: Distribution of radiomic features based on ICC values.
Figure 4: SHAP summary plot illustrating the global impact of selected radiomic features on model predictions.
Figure 5: SHAP dependence plot illustrating the influence of first-order mean intensity on model predictions.
Figure 6: Permutation importance score of radiomic features.
Figure 7: SHAP dependence plot showing the effect of GLCM Entropy on model predictions.
Figure 8: Trade-offs between model interpretability and diagnostic performance.
19 pages, 1818 KiB  
Article
Enhancing Sensitivity of Point-of-Care Thyroid Diagnosis via Computational Analysis of Lateral Flow Assay Images Using Novel Textural Features and Hybrid-AI Models
by Towfeeq Fairooz, Sara E. McNamee, Dewar Finlay, Kok Yew Ng and James McLaughlin
Biosensors 2024, 14(12), 611; https://doi.org/10.3390/bios14120611 - 13 Dec 2024
Viewed by 628
Abstract
Lateral flow assays are widely used in point-of-care diagnostics but face challenges in sensitivity and accuracy when detecting low analyte concentrations, such as thyroid-stimulating hormone biomarkers. This study aims to enhance assay performance by leveraging textural features and hybrid artificial intelligence models. A modified Gray-Level Co-occurrence Matrix, termed the Averaged Horizontal Multiple Offsets Gray-Level Co-occurrence Matrix, was utilised to compute the textural features of the biosensor assay images. Significant textural features were selected for further analysis. A deep learning Convolutional Neural Network model was employed to extract features from these textural features. Both traditional machine learning models and hybrid artificial intelligence models, which combine Convolutional Neural Network features with traditional algorithms, were used to categorise these textural features based on the thyroid-stimulating hormone concentration levels. The proposed method achieved accuracy levels exceeding 95%. This pioneering study highlights the utility of textural aspects of assay images for accurate predictive disease modelling, offering promising advancements in diagnostics and management within biomedical research. Full article
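The averaged-horizontal-multiple-offsets idea can be sketched with scikit-image by computing GLCMs at a 0-degree angle for pixel-pair distances 1 to 25 and averaging them before property extraction. The averaging step below is one plausible reading of the method as the abstract describes it, not the authors' reference code; the input patch is assumed pre-quantized.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def ahmo_glcm_features(patch, levels=64, max_d=25):
    """patch: 2-D uint8 array already quantized to `levels` gray levels."""
    glcms = graycomatrix(patch, distances=list(range(1, max_d + 1)),
                         angles=[0.0], levels=levels, symmetric=True, normed=True)
    avg = glcms.mean(axis=2, keepdims=True)  # average over the 25 horizontal offsets
    return {p: float(graycoprops(avg, p)[0, 0])
            for p in ("contrast", "correlation", "energy", "homogeneity")}
```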
(This article belongs to the Special Issue Biosensing Advances in Lateral Flow Assays (LFA))
Figure 1: Typical structure of an LFA (taken from [17]).
Figure 2: Lumos Reader device for capturing LFA images.
Figure 3: Schematic of the methodology for LFA image analysis using textural features.
Figure 4: (a) Merged binary masks of segmented halves with bounding boxes (red lines) marking the ROIs, and LFA sample overlaid on marked binary masks. (b) Patch creation: 500-by-128 pixel LFA image sample; (c) four 128-by-32 pixel patches extracted from ROIs and merged.
Figure 5: AHMO-GLCM process: GLCM computation at 0 degrees offset with pixel pairs separated by distances d = 1 to 25.
Figure 6: Quantisation: (a) split LFA patch and selective sub-patch from test line. (b) Pixel values of selected sub-region. (c) 64-level quantised sub-patch, capturing intensity variation. Quantised binary patch images: (d,e) 8-level binarised split patch and sub-patch pixel values. (f,g) 64-level binarised split patch and sub-patch pixel values.
Figure 7: (a) Ranking of key texture features based on AHMO-GLCM properties. (b) Heatmap of AHMO-GLCM feature correlations.
Figure 8: CNN training and validation progress plot (gray images' bin size: 64; features' batch processing size: 32).
Figure 9: AHMO-GLCM features: confusion matrices comparing classifiers trained (a–d) with all features; (e–h) with MRMR-based top-ranked features; (i–l) with CNN-derived features.
Figure 10: Classification accuracy comparison using all features, top MRMR features, and CNN-derived features from AHMO-GLCM features data.
Figure 11: Comparison of classification accuracies: proposed textural features method vs. previous approaches (RF: random forest; LSTM: long short-term memory; ED: Euclidean distance) [53,54,55,56].
32 pages, 22123 KiB  
Article
Automated Seedling Contour Determination and Segmentation Using Support Vector Machine and Image Features
by Samsuzzaman, Md Nasim Reza, Sumaiya Islam, Kyu-Ho Lee, Md Asrakul Haque, Md Razob Ali, Yeon Jin Cho, Dong Hee Noh and Sun-Ok Chung
Agronomy 2024, 14(12), 2940; https://doi.org/10.3390/agronomy14122940 - 10 Dec 2024
Viewed by 534
Abstract
Boundary contour determination during seedling image segmentation is critical for accurate object detection and morphological characterization in agricultural machine vision systems. Traditional manual annotation for segmentation is labor-intensive, time-consuming, and prone to errors, especially in controlled environments with complex backgrounds. These errors can affect the accuracy of detecting phenotypic traits, like shape, size, and width. To address these issues, this study introduced a method that integrated image features and a support vector machine (SVM) to improve boundary contour determination during segmentation, enabling real-time detection and monitoring. Seedling images (pepper, tomato, cucumber, and watermelon) were captured under various lighting conditions to enhance object–background differentiation. Histogram equalization and noise reduction filters (median and Gaussian) were applied to minimize illumination effects. The peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) were used to select the clip limit for histogram equalization. The images were analyzed across 18 different color spaces to extract the color features, and six texture features were derived using the gray-level co-occurrence matrix (GLCM) method. To reduce feature overlap, sequential feature selection (SFS) was applied, and the SVM was used for object segmentation. The SVM model achieved 73% segmentation accuracy without SFS and 98% with SFS. Segmentation accuracy for the different seedlings ranged from 81% to 98%, with a low boundary misclassification rate between 0.011 and 0.019. The correlation between the actual and segmented contour areas was strong, with an R² of up to 0.9887. The segmented boundary contour files were converted into annotation files to train a YOLOv8 model, which achieved a precision ranging from 96% to 98.5% and a recall ranging from 96% to 98%. This approach enhanced the segmentation accuracy, reduced manual annotation, and improved agricultural monitoring systems for plant health management. Future work will integrate this system with advanced methods to address overlapping image segmentation challenges, further enhancing real-time seedling monitoring and optimizing crop management and productivity. Full article
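The feature selection and segmentation stage maps onto a standard scikit-learn pipeline: per-pixel color + GLCM features, sequential forward selection, then an SVM. A minimal sketch, with the kernel and feature-count settings as assumptions rather than the paper's values:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

svm = SVC(kernel="rbf", C=100, gamma="scale")
# The abstract implies 24 candidate features (18 color + 6 texture); keep a subset.
sfs = SequentialFeatureSelector(svm, n_features_to_select=8, direction="forward", cv=5)
pipe = make_pipeline(StandardScaler(), sfs, svm)
# pipe.fit(X_pixels, y_labels)  # X: per-pixel features, y: seedling vs. background
```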
Figure 1: Image acquisition from top and side views using commercial camera setup for four types of seedlings in controlled plant factory chamber.
Figure 2: Vertical section of seedling growing chamber designed to maintain different light intensities for each plant bed: (a) plant beds arranged in separate layers, and (b) lighting arrangement for each bed to achieve specific light conditions.
Figure 3: Images of seedlings grown in plant factory: (a) tomato, (b) cucumber, (c) pepper, (d) watermelon. (e) Various background elements in images, including seedling, soil, and seedling tray.
Figure 4: Overall image preprocessing steps and feature extraction and seedling segmentation process used in this study.
Figure 5: Image preprocessing workflow, including noise removal, contrast enhancement with histogram equalization, and quality assessment using PSNR and SSIM metrics: (a) original image with histogram, (b) noise-removed and histogram-equalized image, and (c) optimum clip limit selection for accurate histogram equalization using PSNR and SSIM analysis.
Figure 6: Six color spaces were used from all seedling images in this study: (a) RGB, (b) HSV, (c) XYZ, (d) YUV, (e) YCbCr, and (f) LAB.
Figure 7: Schematic diagram for seedling texture feature extraction process.
Figure 8: Texture feature analysis using GLCM method: (a) homogeneity, (b) contrast, (c) correlation, (d) energy, and (e) entropy.
Figure 9: (a) Three-dimensional visualization of data patterns under different environmental lighting conditions (50, 250, and 450 µmol·m⁻²·s⁻¹), where the red circles indicate seedlings and the blue circles indicate the background, and (b) hierarchical clustering dendrogram for data points based on 18 color features and 6 texture features.
Figure 10: Schematic diagram of SFS method to select features used in this study.
Figure 11: Illustration of SVM optimal hyperplane, margin, and support vectors for linearly separable dataset. Dark blue and light blue circles represent Class A and Class B data points, respectively.
Figure 12: SVM segmentation model development in this study.
Figure 13: Images for segmentation model development and pixels of seedlings, soil, and tray. Dark blue circles represent seedling area, while pink circles highlight seedling image background. (a) tomato, (b) cucumber, (c) pepper, and (d) watermelon.
Figure 14: Working flow diagram for image segmentation using color transformation and feature extraction. Red circles represent seedlings, while blue circles represent the background. The segmentation process is performed using SVM in this study.
Figure 15: Flow diagram of annotation file preparation from the contour image dataset for real-time seedling detection model. (1–5) represent the unique object classes.
Figure 16: Feature selection performance curve using SFS method (selected features are indicated by red, dashed lines).
Figure 17: Impact of SFS on SVM classification performance for seedling (white dots) and background segmentation (black dots): (a) decision boundary without SFS, achieving 73% accuracy, and (b) decision boundary with SFS, improving accuracy to 98%.
Figure 18: Pixel classification using the SVM without feature selection under varying light conditions: (a) 50 µmol·m⁻²·s⁻¹, (b) 250 µmol·m⁻²·s⁻¹, and (c) 450 µmol·m⁻²·s⁻¹. The left panel shows the segmented images with visible noise around the seedlings. The center panel presents pixel classification scatter plots considering all the features, highlighting the clusters of background (red) and seedling (blue) pixels. The right panel displays the resulting contour detection on the segmented images, revealing inaccurate contours and noisy boundaries due to the presence of noise.
Figure 19: Segmentation performance of seedling images under different lighting conditions ((a) 50, (b) 250, and (c) 450 µmol·m⁻²·s⁻¹). Random colors represent seedling detection of different shapes.
Figure 20: Overall classification results using SVM method with different kernels: (a) decision boundaries for linear kernels with 0, 5, and 10-fold cross-validation, C = 0; (b) decision boundaries for RBF kernels with 0, 5, and 10-fold cross-validation, C = 128, 100, and γ = 128, 512; and (c) decision boundaries for polynomial kernels with 0, 5, and 10-fold cross-validation, C = 60, γ = 0, degree = 3. In all figures, seedlings are represented by white circles, and black dots represent the background.
Figure 21: Segmented masked image, contour, and bounding box detection using various seedling images: (a) pepper, (b) cucumber, (c) tomato, and (d) watermelon.
Figure 22: Performance evaluation of SVM model: confusion matrices for (1) pepper, (2) tomato, (3) cucumber, and (4) watermelon: (a) before applying feature selection method, (b) confusion matrices after feature selection method, and (c) ROC curve with accuracy of 98%.
Figure 23: Correlation between actual ground truth area and segmented canopy area for different seedlings: (a) cucumber, (b) pepper, (c) tomato, and (d) watermelon.
Figure 24: Training and validation performance of proposed YOLOv8 model, highlighting various loss functions (box loss (B), mask loss (M), segmentation loss, classification loss, and validation loss) as well as key metrics, including precision, recall, and mAP at IoU thresholds of 0.5 and 0.5–0.95. (a) Results using contour-based annotated dataset, and (b) results using manually annotated dataset.
Figure 25: The precision–recall and recall–confidence curves for seedling segmentation: (a) results using contour-based annotation dataset, and (b) results using manually annotated dataset.
Figure 26: Test results using YOLOv8 model trained with contour-based annotation dataset. Model accurately detects seedlings, (a) pepper, (b) cucumber, (c) tomato, and (d) watermelon, with confidence levels ranging from 50% to 98%.
Figure 27: Sample images demonstrate separation of overlapped seedling leaves with accurate contour detection for precise seedling identification. Blue circle indicates successful separation of overlapped leaves (top cropped image) and an instance where leaves remain connected, with only a contour drawn around the joined leaf sections (lower cropped image).
36 pages, 41599 KiB  
Article
A Large-Scale Inter-Comparison and Evaluation of Spatial Feature Engineering Strategies for Forest Aboveground Biomass Estimation Using Landsat Satellite Imagery
by John B. Kilbride and Robert E. Kennedy
Remote Sens. 2024, 16(23), 4586; https://doi.org/10.3390/rs16234586 - 6 Dec 2024
Viewed by 565
Abstract
Aboveground biomass (AGB) estimates derived from Landsat's spectral bands are limited by spectral saturation when AGB densities exceed 150–300 Mg ha⁻¹. Statistical features that characterize image texture have been proposed as a means to alleviate spectral saturation. However, apart from Gray Level Co-occurrence Matrix (GLCM) statistics, many spatial feature engineering techniques (e.g., morphological operations or edge detectors) have not been evaluated in the context of forest AGB estimation. Moreover, many prior investigations have been constrained by limited geographic domains and sample sizes. We utilize 176 lidar-derived AGB maps covering ∼9.3 million ha of forests in the Pacific Northwest of the United States to construct an expansive AGB modeling dataset that spans numerous biophysical gradients and contains AGB densities exceeding 1000 Mg ha⁻¹. We conduct a large-scale inter-comparison of multiple spatial feature engineering techniques, including GLCMs, edge detectors, morphological operations, spatial buffers, neighborhood vectorization, and neighborhood similarity features. Our numerical experiments indicate that statistical features derived from GLCMs and spatial buffers yield the greatest improvement in AGB model performance out of the spatial feature engineering strategies considered. Including spatial features in Random Forest AGB models reduces the root mean squared error (RMSE) by 9.97 Mg ha⁻¹. We contextualize this improvement in model performance by comparing it to AGB models developed with multi-temporal features derived from the LandTrendr and Continuous Change Detection and Classification algorithms. The inclusion of temporal features reduces the model RMSE by 18.41 Mg ha⁻¹. When spatial and temporal features are both included in the model's feature set, the RMSE decreases by 21.71 Mg ha⁻¹. We conclude that spatial feature engineering strategies can yield nominal gains in model performance. However, this improvement came at the cost of increased model prediction bias. Full article
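The headline RMSE comparisons can be reproduced with a Random Forest regressor once the feature sets are assembled. A minimal sketch follows; the hyperparameters are placeholder choices, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def rf_rmse(X_train, y_train, X_test, y_test):
    """Fit an RF AGB model and return test RMSE in the units of y (Mg/ha)."""
    rf = RandomForestRegressor(n_estimators=500, n_jobs=-1, random_state=0)
    rf.fit(X_train, y_train)
    return float(np.sqrt(mean_squared_error(y_test, rf.predict(X_test))))

# Comparing rf_rmse() on the baseline features vs. baseline + spatial (or temporal)
# features gives the RMSE reductions reported in the abstract.
```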
(This article belongs to the Section Biogeosciences Remote Sensing)
Figure 1: The perimeters of the lidar AGB maps that were used as reference data in this analysis. The average forest AGB (Mg ha⁻¹) in each perimeter is depicted.
Figure 2: An illustration of the sampling and data partitioning scheme used to generate the modeling dataset. A 500 m buffer was placed around test set locations to exclude samples from the training and development sets. This mitigates the impact of spatial autocorrelation on our numerical experiments. Plots are superimposed over Landsat imagery (shortwave infrared-2, near-infrared, red reflectance; left panel) and true-color National Agricultural Imagery Program 1 m imagery (right panel).
Figure 3: An overview of the image processing and feature engineering workflow used in this analysis.
Figure 4: RMSE distributions for the RF models developed in experiment 1.
Figure 5: Predicted vs. observed AGB values from the second experiment, comparing the AGB predictions generated by Random Forest models over the testing set. Models were produced using (A) the baseline features, (B) the baseline and spatial features, (C) the baseline and temporal features, and (D) the baseline, spatial, and temporal features. The relationships are summarized using an ordinary least squares regression curve (red line). The black dashed line is the one-to-one curve.
Figure 6: The location of the four 15 km² subsets (red squares) that were selected to visualize the outputs from the AGB models developed in experiment 2. The subsets are located in (A) the Coast Range in Oregon, (B) Eastern Oregon, (C) North Central Washington, and (D) Central Idaho.
Figure 7: The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in the Oregon Coast Range. Red indicates that the model overestimated the lidar AGB density; blue indicates that the model underestimated it.
Figure 8: The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in Eastern Oregon. Red indicates overestimation of the lidar AGB density; blue indicates underestimation.
Figure 9: The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in Central Idaho. Red indicates overestimation of the lidar AGB density; blue indicates underestimation.
Figure 10: The reference lidar AGB map and the spatial residuals from each of the four models applied to a 15 km² area in North Central Washington. Red indicates overestimation of the lidar AGB density; blue indicates underestimation.
14 pages, 1960 KiB  
Article
Predicting Tumor Progression in Patients with Cervical Cancer Using Computer Tomography Radiomic Features
by Shopnil Prasla, Daniel Moore-Palhares, Daniel Dicenzo, Laurentius Oscar Osapoetra, Archya Dasgupta, Eric Leung, Elizabeth Barnes, Alexander Hwang, Amandeep S. Taggar and Gregory Jan Czarnota
Radiation 2024, 4(4), 355-368; https://doi.org/10.3390/radiation4040027 - 4 Dec 2024
Viewed by 686
Abstract
The objective of this study was to evaluate the effectiveness of utilizing radiomic features from radiation planning computed tomography (CT) scans in predicting tumor progression among patients with cervical cancer. A retrospective analysis was conducted on individuals who underwent radiotherapy for cervical cancer between 2015 and 2020, utilizing an institutional database. Radiomic features, encompassing first-order statistical, morphological, Gray-Level Co-Occurrence Matrix (GLCM), Gray-Level Run Length Matrix (GLRLM), and Gray-Level Dependence Matrix (GLDM) features, were extracted from the primary cervical tumor on the CT scans. The study encompassed 112 CT scans from patients with varying stages of cervical cancer (FIGO 2018 staging: 24% at stage I, 47% at stage II, 21% at stage III, and 10% at stage IV). Of these, 31% (n = 35/112) exhibited tumor progression. Univariate feature analysis identified three morphological features that displayed statistically significant differences (p < 0.05) between patients with and without progression. Combining these features enabled a classification model to be developed with a mean sensitivity, specificity, accuracy, and AUC of 76.1% (CI 1.5%), 70.4% (CI 4.1%), 73.6% (CI 2.1%), and 0.794 (CI 0.029), respectively, employing nested ten-fold cross-validation. This research highlights the potential of CT radiomic models in predicting post-radiotherapy tumor progression, offering a promising approach for tailoring personalized treatment decisions in cervical cancer. Full article
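Nested ten-fold cross-validation, as used for the classification model above, separates hyperparameter tuning (inner loop) from performance estimation (outer loop). A sketch with scikit-learn; the classifier and parameter grid are illustrative assumptions:

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # tunes hyperparameters
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)  # estimates performance
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=inner)
# scores = cross_val_score(grid, X, y, cv=outer, scoring="roc_auc")
# X: selected morphological features; y: progression vs. no progression
```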
Figure 1: Flowchart showing the study methodology. General steps include patient selection, data acquisition, feature extraction, and analysis and development of machine learning models.
Figure 2: Representative radiomics feature maps overlaid on axial CT anatomical images of cervical tumors from recurrence (R) and non-recurrence (NR) groups. The representative maps are those that contribute to the best-performing seven features. The white scale bar indicates 2 cm. The color bar ranges are −1.20 to 0.35 for GLCM Imc1, −1.20 to 0.35 for GLCM Imc2, −1.20 to 1.50 for skewness, −0.30 to 0.30 for GLCM correlation, −2.00 to 2.00 for GLDM dependence non-uniformity, and −20,000.00 to 20,000.00 for GLDM large dependence high gray-level emphasis. These are the original ranges of the parameters, prior to the feature normalization procedure. These texture parameters and their representation as parametrized images reflect the tumor structure and heterogeneity. Differences can be subtle and require machine learning approaches for interpretation.
Figure 3: Box and scatter plots for selected radiomic features from the recurrence (R) and non-recurrence (NR) groups. These demonstrate the presence of discriminating features potentially useful for building a classification model to separate recurrence samples from non-recurrence ones. The asterisk (*) marks a statistically significant difference (p-value < 0.05). Blue indicates non-recurrence; orange indicates recurrence.
Figure 4: Correlation between image texture features using the Random Forest classifier model for determining the best classifying feature. The figure displays inter-feature correlation; highly correlated features appear in lighter coloring. When more than one feature was highly correlated, only a single feature was selected from the correlated pool, since redundant features do not add quality to the classification model. The numbered key of the 36 radiomic features (first-order, shape, GLCM, GLDM, and GLSZM features) is given in Figure S2.
12 pages, 1899 KiB  
Article
Image Biomarker Analysis of Ultrasonography Images of the Parotid Gland for Baseline Characteristic Establishment with Reduced Shape Effects
by Hak-Sun Kim
Appl. Sci. 2024, 14(23), 11041; https://doi.org/10.3390/app142311041 - 27 Nov 2024
Viewed by 458
Abstract
Background: This study aimed to analyze image biomarkers of the parotid glands in ultrasonography images with reduced shape effects, providing a reference for the radiomic diagnosis of parotid gland lesions. Methods: Ultrasound (US) and sialography images of the parotid glands, acquired from September 2019 to March 2024, were reviewed along with their clinical information. Parotid glands diagnosed as within the normal range were included. Overall, 91 US images depicting the largest portion of the parotid glands were selected for radiomic feature extraction. Regions of interest were drawn twice on 50 images using different shapes to assess the intraclass correlation coefficient (ICC). Feature dimensions were statistically reduced by selecting features with an ICC > 0.8 and applying four statistical algorithms. The selected features were used to distinguish age and sex using the four classification models. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC), recall, and precision. Results: The combinations of the information gain ratio algorithm or stochastic gradient descent and the naïve Bayes model showed the highest AUC for both age and sex classification (AUC = 1.000). The features contributing to these classifications included the first-order and gray-level co-occurrence matrix (high-order) features, particularly discretized intensity skewness and kurtosis, intensity skewness, and GLCM angular second moment. These features also contributed to achieving one of the highest recall (0.889) and precision (0.926) values. Conclusions: The two features were the most significant factors in discriminating radiomic variations related to age and sex in US images with reduced shape effects. These radiomic findings should be assessed when diagnosing parotid gland pathology versus normal using US images and radiomics in a heterogeneous population. Full article
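The ICC-based reproducibility filter (keep features with ICC > 0.8 across the two ROI delineations) can be sketched with pingouin. The specific ICC variant (ICC2) is an assumption here, and `df1`/`df2` are hypothetical feature tables from the two ROI drawings, indexed by subject.

```python
import pandas as pd
import pingouin as pg

def stable_features(df1, df2, thresh=0.8):
    """Return columns whose two-delineation ICC exceeds `thresh`."""
    keep = []
    for col in df1.columns:
        long = pd.DataFrame({
            "subject": list(df1.index) + list(df2.index),
            "rater": ["roi1"] * len(df1) + ["roi2"] * len(df2),
            "score": list(df1[col]) + list(df2[col]),
        })
        icc = pg.intraclass_corr(data=long, targets="subject",
                                 raters="rater", ratings="score")
        if icc.set_index("Type").loc["ICC2", "ICC"] > thresh:
            keep.append(col)
    return keep
```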
(This article belongs to the Section Applied Dentistry and Oral Sciences)
Figure 1: Schematic diagram illustrating the flow of the study. ROI, region of interest; ICC, intraclass correlation coefficient; ROC, receiver operating characteristic.
Figure 2: Examples of ultrasound images of parotid glands: (a) a normal parotid gland with homogeneous echogenicity, included in the study; and (b) an inflammatory parotid gland showing heterogeneous echogenicity with numerous hypoechoic foci, excluded from the study.
Figure 3: Examples of region of interest selection in an ultrasound image of a parotid gland: (a) polygonal, (b) square shape.
Figure 4: Performance metrics for four statistical algorithms and four classification models: (a–c) area under the receiver operating characteristic curve results, (d–f) recall, and (g–i) precision. Metrics are presented for (a,d,g) age, (b,e,h) sex, and (c,f,i) combined age and sex classification. The highest scores are highlighted in bold. AUC, area under the receiver operating characteristic curve; LASSO, least absolute shrinkage and selection operator; IGR, information gain ratio; SGD, stochastic gradient descent; KNN, k-nearest neighbors.
Figure 5: Confusion matrices showing the classification results for each combination of statistical algorithm and machine learning model: (a) age, (b) sex, and (c) combined age and sex classification.
3908 KiB  
Proceeding Paper
Automated Glaucoma Detection in Fundus Images Using Comprehensive Feature Extraction and Advanced Classification Techniques
by Vijaya Kumar Velpula, Jyothisri Vadlamudi, Purna Prakash Kasaraneni and Yellapragada Venkata Pavan Kumar
Eng. Proc. 2024, 82(1), 33; https://doi.org/10.3390/ecsa-11-20437 - 25 Nov 2024
Viewed by 62
Abstract
Glaucoma, a primary cause of irreversible blindness, necessitates early detection to prevent significant vision loss. In the literature, fundus imaging is identified as a key tool in diagnosing glaucoma, which captures detailed retina images. However, the manual analysis of these images can be time-consuming and subjective. Thus, this paper presents an automated system for glaucoma detection using fundus images, combining diverse feature extraction methods with advanced classifiers, specifically Support Vector Machine (SVM) and AdaBoost. The pre-processing step incorporated image enhancement via Contrast-Limited Adaptive Histogram Equalization (CLAHE) to enhance image quality and feature extraction. This work investigated individual features such as the histogram of oriented gradients (HOG), local binary patterns (LBP), chip histogram features, and the gray-level co-occurrence matrix (GLCM), as well as their various combinations, including HOG + LBP + chip histogram + GLCM, HOG + LBP + chip histogram, and others. These features were utilized with SVM and Adaboost classifiers to improve classification performance. For validation, the ACRIMA dataset, a public fundus image collection comprising 369 glaucoma-affected and 309 normal images, was used in this work, with 80% of the data allocated for training and 20% for testing. The results of the proposed study show that different feature sets yielded varying accuracies with the SVM and Adaboost classifiers. For instance, the combination of LBP + chip histogram achieved the highest accuracy of 99.29% with Adaboost, while the same combination yielded a 65.25% accuracy with SVM. The individual feature LBP alone achieved 97.87% with Adaboost and 98.58% with SVM. Furthermore, the combination of GLCM + LBP provided a 98.58% accuracy with Adaboost and 97.87% with SVM. The results demonstrate that CLAHE and combined feature sets significantly enhance detection accuracy, providing a reliable tool for early and precise glaucoma diagnosis, thus facilitating timely intervention and improved patient outcomes. Full article
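The pre-processing and feature stacking described above can be sketched with OpenCV and scikit-image: CLAHE enhancement followed by HOG, an LBP histogram, and GLCM properties concatenated into one feature vector. Parameter values below are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops, hog, local_binary_pattern

def fundus_features(gray):
    """gray: 8-bit grayscale fundus image; returns a stacked HOG+LBP+GLCM vector."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)                                   # contrast-limited equalization
    hog_f = hog(eq, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(eq, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(eq, [1], [0], levels=256, symmetric=True, normed=True)
    glcm_f = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "energy")]
    return np.concatenate([hog_f, lbp_hist, glcm_f])
```

The resulting vectors feed an SVM or AdaBoost classifier exactly as in any scikit-learn workflow.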
Figure 1: (Left) Glaucoma-affected image; (right) normal image.
Figure 2: Workflow of proposed method for detection of glaucoma in fundus images.
Figure 3: Confusion matrices of Adaboost and SVM for LBP + chip histogram.
Figure 4: Confusion matrices of Adaboost and SVM for GLCM + LBP.
Figure 5: Confusion matrices of Adaboost and SVM for GLCM + LBP + chip histogram.
32 pages, 10860 KiB  
Article
Combining the SHAP Method and Machine Learning Algorithm for Desert Type Extraction and Change Analysis on the Qinghai–Tibetan Plateau
by Ruijie Lu, Shulin Liu, Hanchen Duan, Wenping Kang and Ying Zhi
Remote Sens. 2024, 16(23), 4414; https://doi.org/10.3390/rs16234414 - 25 Nov 2024
Viewed by 575
Abstract
For regional desertification control and sustainable development, it is critical to quickly and accurately understand the distribution pattern and the spatial and temporal changes of deserts. In this work, five different machine learning algorithms are used to classify desert types on the Qinghai–Tibetan Plateau (QTP), and their classification performance is evaluated on the basis of their classification results and accuracy. Then, on the basis of the best classification model, the Shapley Additive Explanations (SHAP) method is used to clarify the contribution of each classification feature to the identification of desert types, both globally and locally. Finally, the independent and interactive effects of each factor on desert change on the QTP during the study period are quantitatively analyzed via geodetector. The main results are as follows: (1) Compared with the other classification algorithms (GTB, CART, KNN, and SVM), the RF classifier achieves the best performance in classifying QTP desert types, with an overall accuracy (OA) of 87.11% and a kappa coefficient of 0.83. (2) For the overall classification of deserts, five features (elevation, slope, VV, VH, and GLCM) contribute most significantly to the classification. Regarding the influence of each classification feature on the extraction of individual desert types, the radar backscattering coefficient VV plays the most important role in distinguishing sandy deserts; VH helps distinguish four desert types (rocky desert, alpine cold desert, sandy desert, and loamy desert); slope is more effective in separating rocky desert and alpine cold desert from the other desert types; elevation plays a significant role in identifying alpine cold deserts; and the short-wave infrared band SR_B7 is important for identifying salt crusts and saline deserts. (3) During the study period, the QTP deserts exhibited a reversing trend, with the proportion of desert area decreasing from 28.62% to 26.20%. (4) Compared with other factors, slope, precipitation, elevation, vegetation type, and the human footprint have greater effects on changes in the QTP desert area, and the interactions among the factors affecting desert-area change all show bidirectional enhancement or nonlinear enhancement effects. Full article
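The accuracy figures in result (1) correspond to overall accuracy and Cohen's kappa. A minimal evaluation sketch follows; the feature stack (spectral bands, VV/VH backscatter, GLCM texture, and terrain variables) is assumed to be precomputed.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate_desert_rf(X_train, y_train, X_test, y_test):
    """Fit the RF classifier and report overall accuracy (OA) and the kappa coefficient."""
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    pred = rf.fit(X_train, y_train).predict(X_test)
    return {"OA": accuracy_score(y_test, pred),
            "kappa": cohen_kappa_score(y_test, pred)}
```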
Show Figures

Figure 1. Location and geographical overview of the Qinghai–Tibetan Plateau.
Figure 2. Overall classification results of five machine learning algorithms for QTP deserts.
Figure 3. Comparison of local classification details of different machine learning algorithms. (a) typical region containing mainly SD and MS; (b) typical region containing mainly GD and SM; (c) typical region containing mainly LD and AC; (d) typical region with diverse and complex desert types; (e) typical region containing mainly GD, RD, and non-desert.
Figure 4. Global and local importance of SHAP-based classification features. (a) global importance; (b–h) importance of classification features for different desert types: (b) SD, (c) GD, (d) SM, (e) MS, (f) LD, (g) RD, (h) AC.
Figure 5. Spatial distribution of QTP desert types in 2000, 2010, and 2020.
Figure 6. Spatial distribution of changes in QTP deserts during different periods.
Figure 7. Sankey diagram of QTP desert changes in different periods.
Figure 8. Spatial distribution of driver factor rating results. Note: see Appendix A for the meaning of vegetation type and soil type codes.
Figure 9. The q-value of the individual factors. Note: “*” represents p < 0.05.
Figure 10. Detection results of the interaction between two factors. Note: “↑” and “↑↑” represent bidirectional enhancement and nonlinear enhancement, respectively.
Figure A1. The spatial distribution of driving factors.
Figure A2. The location of the local classification detail regions.
17 pages, 3025 KiB  
Article
A Spectral–Spatial Approach for the Classification of Tree Cover Density in Mediterranean Biomes Using Sentinel-2 Imagery
by Michail Sismanis, Ioannis Z. Gitas, Nikos Georgopoulos, Dimitris Stavrakoudis, Eleni Gkounti and Konstantinos Antoniadis
Forests 2024, 15(11), 2025; https://doi.org/10.3390/f15112025 - 18 Nov 2024
Viewed by 741
Abstract
Tree canopy cover is an important forest inventory parameter and a critical component for the in-depth mapping of forest fuels. This research examines the potential of employing single-date Sentinel-2 multispectral imagery, combined with contextual spatial information, to classify areas based on their tree cover density using Random Forest classifiers. Three spatial information extraction methods are investigated for their capacity to accurately detect canopy cover: two based on Gray-Level Co-Occurrence Matrix (GLCM) features and one based on segment statistics. The research was carried out in three different biomes in Greece, in a total study area of 23,644 km². Three tree cover classes were considered, namely, non-forest (cover < 15%), open forest (cover 15%–70%), and closed forest (cover ≥ 70%), based on the requirements set for fuel mapping in Europe. Results indicate that the best approach identified delivers F1-scores ranging from 70% to 75% for all study areas, significantly improving on the alternatives. Overall, the synergistic use of spectral and spatial features derived from Sentinel-2 images highlights a promising approach for the generation of tree cover density information layers in Mediterranean regions, enabling the creation of additional information in support of the detailed mapping of forest fuels. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
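As a rough illustration of the GLCM branch of this workflow, the Python sketch below derives four co-occurrence texture statistics per image window and trains a Random Forest on the study's three cover classes; the 15 × 15 window, gray-level count, chosen properties, and synthetic data are assumptions, not the paper's configuration:

```python
# Minimal sketch (assumed settings, synthetic data): GLCM texture features
# per window, then Random Forest classification into three cover classes.
# Note: these functions were named greycomatrix/greycoprops in scikit-image < 0.19.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(window: np.ndarray) -> list:
    """Contrast, homogeneity, energy, and correlation averaged over 4 directions."""
    glcm = graycomatrix(window, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(1)
windows = rng.integers(0, 256, (200, 15, 15), dtype=np.uint8)  # stand-in pixel windows
labels = rng.integers(0, 3, 200)  # 0 = non-forest, 1 = open forest, 2 = closed forest

X = np.array([glcm_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```

In the spectral–spatial setting the abstract describes, such texture features would be stacked with the Sentinel-2 band values before training.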
Show Figures

Figure 1. Map of the three different biomes examined in this study in Greece.
Figure 2. Flowchart of the spectral–spatial workflow for the classification of tree cover density in Mediterranean ecosystems.
Figure 3. Tree cover density map generated using the best-performing method for study area A in the GGRS87 coordinate reference system.
Figure 4. Tree cover density map generated using the best-performing method for study area B in the GGRS87 coordinate reference system.
Figure 5. Tree cover density map generated using the best-performing method for study area C in the GGRS87 coordinate reference system.
20 pages, 2362 KiB  
Article
Machine Learning-Driven GLCM Analysis of Structural MRI for Alzheimer’s Disease Diagnosis
by Maria João Oliveira, Pedro Ribeiro and Pedro Miguel Rodrigues
Bioengineering 2024, 11(11), 1153; https://doi.org/10.3390/bioengineering11111153 - 15 Nov 2024
Viewed by 950
Abstract
Background: Alzheimer’s disease (AD) is a progressive and irreversible neurodegenerative condition that increasingly impairs cognitive functions and daily activities. Given the incurable nature of AD and its profound impact on the elderly, early diagnosis (at the mild cognitive impairment (MCI) stage) and intervention are crucial, focusing on delaying disease progression and improving patients’ quality of life. Methods: This work aimed to develop an automatic sMRI-based method to classify subjects into three groups: healthy controls (CN), mild cognitive impairment (MCI), and AD. For this purpose, brain sMRI images from the ADNI database were pre-processed, and a set of 22 statistical texture features derived from the sMRI gray-level co-occurrence matrix (GLCM) was extracted from various slices within different anatomical planes. Different combinations of features and planes were used to feed classical machine learning (cML) algorithms and analyze their power to discriminate between the groups. Results: The cML algorithms achieved the following classification accuracy: 85.2% for AD vs. CN, 98.5% for AD vs. MCI, 95.1% for CN vs. MCI, and 87.1% for all vs. all. Conclusions: For the pair AD vs. MCI, the proposed model outperformed state-of-the-art imaging-based studies by 0.1% and non-imaging-based studies by 4.6%. These results are particularly significant in the field of AD classification, opening the door to more efficient early diagnosis in real-world settings, since MCI is considered a precursor to AD. Full article
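For context, per-slice GLCM texture statistics of this kind can be computed along the following lines; this is a reduced sketch with six classic properties plus a hand-computed entropy, not the paper's 22-feature set, and the 64-level quantization is an assumption:

```python
# Minimal sketch, not the paper's feature extractor: GLCM statistics
# for one (synthetic) sMRI slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def slice_texture_stats(slice_2d: np.ndarray) -> dict:
    # Quantize to 64 gray levels so the co-occurrence matrix stays dense.
    q = (slice_2d.astype(float) / slice_2d.max() * 63).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=64,
                        symmetric=True, normed=True)
    stats = {prop: float(graycoprops(glcm, prop)[0, 0])
             for prop in ("contrast", "dissimilarity", "homogeneity",
                          "energy", "correlation", "ASM")}
    p = glcm[:, :, 0, 0]  # normalized co-occurrence probabilities
    stats["entropy"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return stats

demo_slice = np.random.default_rng(2).integers(0, 4096, (176, 176))  # fake slice
print(slice_texture_stats(demo_slice))
```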
Show Figures

Figure 1. Methodology workflow diagram.
Figure 2. Skull stripping process in SPM: (a) original image; (b) processed image.
Figure 3. State-of-the-art comparison with the present study (best accuracy). For reference: Shukla et al. 2023 [11], Hussain et al. 2020 [22], Pirrone et al. 2022 [26], Lama et al. 2022 [14], Rallabandi and Seetharama 2023 [23], Rodrigues et al. 2021 [27], and Goenka et al. 2022 [17].
31 pages, 5080 KiB  
Article
Detection of Subarachnoid Hemorrhage Using CNN with Dynamic Factor and Wandering Strategy-Based Feature Selection
by Jewel Sengupta, Robertas Alzbutas, Tomas Iešmantas, Vytautas Petkus, Alina Barkauskienė, Vytenis Ratkūnas, Saulius Lukoševičius, Aidanas Preikšaitis, Indre Lapinskienė, Mindaugas Šerpytis, Edgaras Misiulis, Gediminas Skarbalius, Robertas Navakas and Algis Džiugys
Diagnostics 2024, 14(21), 2417; https://doi.org/10.3390/diagnostics14212417 - 30 Oct 2024
Viewed by 633
Abstract
Objectives: Subarachnoid Hemorrhage (SAH) is a serious neurological emergency with a high mortality rate. Automatic SAH detection is needed to expedite and improve identification, supporting timely and efficient treatment pathways. Noisy and dissimilar anatomical structures in NCCT images, the limited availability of labeled SAH data, and ineffective training cause irrelevant features, overfitting, and vanishing gradients, which make SAH detection a challenging task. Methods: In this work, a water waves dynamic factor and wandering strategy-based Sand Cat Swarm Optimization, namely DWSCSO, is proposed to ensure optimum feature selection, while a Parametric Rectified Linear Unit with a Stacked Convolutional Neural Network, referred to as PRSCNN, is developed for classifying grades of SAH. Together, DWSCSO and PRSCNN surpass current practices in SAH detection by improving feature selection and classification accuracy: DWSCSO avoids local optima through higher exploration capacity and reduces overfitting in classification. Firstly, a modified region-growing method was employed on patient Non-Contrast Computed Tomography (NCCT) images to segment the regions affected by SAH. From the segmented regions, a wide range of patterns and irregularities, fine-grained textures and details, and complex and abstract features were extracted using pre-trained models such as GoogleNet, Visual Geometry Group (VGG)-16, and ResNet50. Next, the PRSCNN was developed for classifying grades of SAH, which helped to avoid the vanishing gradient issue. Results: The DWSCSO-PRSCNN obtained a maximum accuracy of 99.48%, which is significant compared with other models. On the CT dataset, DWSCSO-PRSCNN provided an improved accuracy of 99.62% compared with DL-ICH, GoogLeNet + (GLCM and LBP), ResNet-50 + (GLCM and LBP), and AlexNet + (GLCM and LBP), which confirms that DWSCSO-PRSCNN effectively reduces false positives and false negatives. Conclusions: The complexity of DWSCSO-PRSCNN was acceptable in this research; while simpler approaches might appear preferable, they failed to address problems like overfitting and vanishing gradients. Accordingly, DWSCSO for optimized feature selection and PRSCNN for robust classification were essential for handling these challenges and enhancing detection across different clinical settings. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
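The abstract credits the PReLU activation with helping the stacked CNN avoid vanishing gradients. The PyTorch sketch below shows that general idea only; the depth, channel widths, input size, and four-grade output are illustrative assumptions and do not reproduce the paper's PRSCNN:

```python
# Minimal sketch (assumed architecture, not the paper's PRSCNN): a stacked
# CNN with PReLU activations, whose learnable negative slope keeps gradients
# flowing where ReLU would output zero.
import torch
import torch.nn as nn

class TinyPReLUCNN(nn.Module):
    def __init__(self, n_grades: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_grades)  # one logit per SAH grade

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyPReLUCNN()
logits = model(torch.randn(2, 1, 128, 128))  # two fake single-channel NCCT slices
print(logits.shape)  # torch.Size([2, 4])
```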
Show Figures

Figure 1. Automated SAH classification using DWSCSO and PRSCNN.
Figure 2. Sample acquired NCCT images.
Figure 3. Sample segmented images.
Figure 4. Flowchart of the DWSCSO-based feature optimization.
Figure 5. Analysis of optimization with different population sizes for the collected dataset.
Figure 6. Analysis of convergence for the collected dataset.
Figure 7. ROC curves for the collected dataset: (a) decision tree, (b) GCN, (c) ANN, (d) autoencoder, (e) CNN, (f) PRSCNN.
Figure 8. Confusion matrices for the collected dataset: (a) decision tree, (b) GCN, (c) ANN, (d) autoencoder, (e) CNN, (f) PRSCNN.
Figure 9. Accuracy graphs for the collected dataset: (a) decision tree, (b) GCN, (c) ANN, (d) autoencoder, (e) CNN, (f) PRSCNN.
Figure 10. Loss graphs for the collected dataset: (a) ReLU, (b) Leaky ReLU, (c) ELU, (d) PReLU.