Search Results (424)

Search Parameters: Keywords = GLCM

29 pages, 17777 KiB  
Article
Informal Settlements Extraction and Fuzzy Comprehensive Evaluation of Habitat Environment Quality Based on Multi-Source Data
by Zanxian Yang, Fei Yang, Yuanjing Xiang, Haiyi Yang, Chunnuan Deng, Liang Hong and Zhongchang Sun
Land 2025, 14(3), 556; https://doi.org/10.3390/land14030556 - 6 Mar 2025
Abstract
The United Nations Sustainable Development Goal (SDG) 11.1 emphasizes improving well-being, ensuring housing security, and promoting social equity. Informal settlements, home to some of the most vulnerable groups, require significant attention because their dynamic changes and habitat quality limit the ability to comprehensively capture spatial heterogeneity and dynamic shifts in regional sustainable development. This study proposes an integrated approach using multi-source remote sensing data to extract the spatial distribution of informal settlements in Mumbai and assess their habitat environment quality. Specifically, seasonal spectral indices and texture features were constructed using Sentinel and SAR data, combined with the mean decrease impurity (MDI) indicator and hierarchical clustering to optimize feature selection, and a random forest (RF) model was ultimately used to extract the spatial distribution of informal settlements in Mumbai. Additionally, an innovative habitat environment index was developed through a Gaussian fuzzy evaluation model based on entropy weighting, providing a more robust assessment of habitat quality for informal settlements. The study demonstrates that: (1) texture features from the gray-level co-occurrence matrix (GLCM) significantly improved the classification of informal settlements, with the random forest classification model achieving a kappa coefficient above 0.77, an overall accuracy exceeding 0.89, and F1 scores above 0.90; (2) informal settlements exhibited two primary development patterns: gradual expansion near formal residential areas and dependence on natural resources such as farmland, forests, and water bodies; (3) economic vitality emerged as a critical factor in improving the living environment, while social, natural, and residential conditions remained relatively stable; (4) the proportion of highly suitable and moderately suitable areas increased from 65.62% to 65.92%, although the overall improvement in informal settlements remained slow. This study highlights the novel integration of multi-source remote sensing data with machine learning for precise spatial extraction and comprehensive habitat quality assessment, providing valuable insights for urban planning and sustainable development strategies.
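The GLCM-texture-plus-random-forest pipeline this abstract describes can be illustrated compactly. The sketch below is a minimal illustration assuming scikit-image and scikit-learn; the window size, gray-level count, chosen properties, and the stand-in feature/label arrays are placeholders, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(window, levels=32):
    """Direction-averaged GLCM texture descriptors for one image window."""
    # Quantize to a few gray levels so the co-occurrence matrix stays compact.
    bins = np.linspace(window.min(), window.max(), levels)
    q = (np.digitize(window, bins) - 1).clip(0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# X stacks spectral indices and GLCM textures per sample; y holds class labels
# (informal settlement vs. other) from training polygons -- both stand-ins here.
X, y = np.random.rand(500, 10), np.random.randint(0, 2, 500)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
mdi = rf.feature_importances_  # mean decrease impurity, as used for selection
```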
21 pages, 17349 KiB  
Article
Multi-Type Change Detection and Distinction of Cultivated Land Parcels in High-Resolution Remote Sensing Images Based on Segment Anything Model
by Zhongxin Huang, Xiaomei Yang, Yueming Liu, Zhihua Wang, Yonggang Ma, Haitao Jing and Xiaoliang Liu
Remote Sens. 2025, 17(5), 787; https://doi.org/10.3390/rs17050787 - 24 Feb 2025
Abstract
Change detection for cultivated land parcels is critical for achieving refined management of farmland. However, existing change detection methods based on high-resolution remote sensing imagery focus primarily on cultivation type changes, neglecting the importance of detecting parcel pattern changes. To address the issue of detecting diverse types of changes in cultivated land parcels, this study constructs an automated change detection workflow based on the unsupervised segmentation capability of the Segment Anything Model (SAM). By performing spatial connection analysis on the cultivated land parcel units extracted by the SAM for two time phases, and combining multiple features such as GLCM texture features, multi-scale structural similarity (MS-SSIM), and the normalized difference vegetation index (NDVI), precise identification of cultivation type and pattern change areas was achieved. The results show that the proposed method achieved its highest accuracy in detecting parcel pattern changes in plain areas (precision: 78.79%, recall: 79.45%, IoU: 78.44%), confirming its effectiveness. This study provides an efficient and low-cost method for detecting and distinguishing changes in cultivated land patterns and types from high-resolution remote sensing images that can be applied directly in real-world scenarios. The method significantly enhances the automation and timeliness of parcel unit change detection, with important applications for advancing precision agriculture and sustainable land resource management.
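As a rough illustration of the feature fusion step, the sketch below scores one parcel between two dates with an NDVI difference (spectral, for type change) and structural similarity (for pattern change). It assumes scikit-image; single-scale SSIM stands in for MS-SSIM, the band indices are placeholders, and the decision thresholds the paper uses are not shown.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ndvi(red, nir):
    """Normalized difference vegetation index with a zero-division guard."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def parcel_change_score(patch_t1, patch_t2, red=2, nir=3):
    """Compare one parcel between dates: spectral (NDVI) vs. structural (SSIM)."""
    d_ndvi = abs(ndvi(patch_t1[red], patch_t1[nir]).mean()
                 - ndvi(patch_t2[red], patch_t2[nir]).mean())
    # Single-scale SSIM on the NIR band as a stand-in for MS-SSIM.
    ssim = structural_similarity(
        patch_t1[nir], patch_t2[nir],
        data_range=float(patch_t1[nir].max() - patch_t1[nir].min()))
    return d_ndvi, ssim  # large d_ndvi -> type change; low ssim -> pattern change

t1, t2 = np.random.rand(4, 64, 64), np.random.rand(4, 64, 64)  # stand-in patches
print(parcel_change_score(t1, t2))
```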
Figure 1. Overview of the study area.
Figure 2. Workflow of the multi-type change detection and distinction method for cultivated land. The overall process is divided into two parts: cultivated land parcel extraction and multi-type change detection.
Figure 3. Workflow for the multi-feature distinction of land parcels. The classification of land parcel units is based on the correspondence of patch quantities.
Figure 4. T1 and T2: two local areas of the study area in the two temporal remote sensing images, with the SAM extraction results. Red dashed boxes mark areas segmented and identified as background by SAM, which are treated as unchanged regions by default.
Figure 5. Two pairs of areas with cultivated land pattern changes. The patches are land parcel units obtained from SAM segmentation.
Figure 6. Two pairs of areas with cultivated land type changes. The patches are land parcel units obtained from SAM segmentation.
Figure 7. Potential change areas where bare land has transformed into vegetation (red rectangles).
Figure 8. Correct, missed, and incorrect areas of detected land parcel units.
Figure 9. Local segmentation results for different terrains in the study area.
Figure 10. Detection results of different change types for land parcel units in the plain area.
Figure 11. Detection results of different change types for land parcel units in the hilly area.
Figure 12. Cultivation pattern change detection results. (a,b) A large parcel corresponding to multiple small parcels, with the sum of the small parcels' areas close to the area of the large parcel; (c,d) a large parcel corresponding to multiple small parcels, with the number of small parcels exceeding three.
Figure 13. Evaluation results of change detection using different metrics.
18 pages, 511 KiB  
Systematic Review
Texture Analysis in Musculoskeletal Ultrasonography: A Systematic Review
by Yih-Kuen Jan, Isabella Yu-Ju Hung and W. Catherine Cheung
Diagnostics 2025, 15(5), 524; https://doi.org/10.3390/diagnostics15050524 - 21 Feb 2025
Abstract
Background: The objective of this systematic review was to summarize the findings of texture analyses of musculoskeletal ultrasound images and synthesize the information to facilitate the use of texture analysis in assessing skeletal muscle quality in various pathophysiological conditions. Methods: Medline, PubMed, Scopus, Web of Science, and Cochrane databases were searched from their inception until January 2025 following the PRISMA Diagnostic Test Accuracy guidelines; the review was registered at PROSPERO (CRD42025636613). Information related to patients, interventions, ultrasound settings, texture analyses, muscles, and findings was extracted. The quality of evidence was evaluated using QUADAS-2. Results: A total of 38 studies using second-order and higher-order texture analysis met the criteria. No included study used an established reference standard (histopathology) to evaluate the accuracy of ultrasound texture analysis in diagnosing muscle quality. Alternative reference standards were compared instead, including various physiological, pathological, and pre-post intervention comparisons using more than 200 texture features of various muscles in diverse pathophysiological conditions. Conclusions: The included studies demonstrate that ultrasound texture analysis can discriminate changes in muscle quality between patients with pathological conditions and healthy controls, most commonly using gray-level co-occurrence matrix (GLCM)-based contrast, correlation, energy, entropy, and homogeneity. Studies also demonstrated that texture analysis can discriminate muscle quality in various muscles under pathophysiological conditions, although the evidence is weak because of bias in subject recruitment and the lack of comparison with the established reference standard. This is the first systematic review of the use of texture analysis of musculoskeletal ultrasonography to assess muscle quality in various muscles under diverse pathophysiological conditions.
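The five GLCM features this review highlights can be computed directly; a minimal sketch assuming scikit-image follows. Contrast, correlation, energy, and homogeneity come from graycoprops, while entropy is computed here directly from the normalized matrix; the ROI is a random stand-in.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_five(roi_uint8):
    """Contrast, correlation, energy, entropy, homogeneity of one ultrasound ROI."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p)[0, 0])
             for p in ("contrast", "correlation", "energy", "homogeneity")}
    p = glcm[:, :, 0, 0]                      # normalized co-occurrence matrix
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats

roi = (np.random.rand(96, 96) * 255).astype(np.uint8)  # stand-in muscle ROI
print(glcm_five(roi))
```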
Figure 1. PRISMA flow diagram.
27 pages, 10700 KiB  
Article
Rice Yield Prediction Using Spectral and Textural Indices Derived from UAV Imagery and Machine Learning Models in Lambayeque, Peru
by Javier Quille-Mamani, Lia Ramos-Fernández, José Huanuqueño-Murillo, David Quispe-Tito, Lena Cruz-Villacorta, Edwin Pino-Vargas, Lisveth Flores del Pino, Elizabeth Heros-Aguilar and Luis Ángel Ruiz
Remote Sens. 2025, 17(4), 632; https://doi.org/10.3390/rs17040632 - 12 Feb 2025
Abstract
Predicting rice yield accurately is crucial for enhancing farming practices and securing food supplies. This research aims to estimate rice yield in Peru's Lambayeque region by utilizing spectral and textural indices derived from unmanned aerial vehicle (UAV) imagery, which offers a cost-effective alternative to traditional approaches. UAV data collection in commercial areas involved seven flights in 2022 and ten in 2023, focusing on key growth stages such as flowering, milk, and dough, each showing significant predictive capability. Vegetation indices such as NDVI, SP, DVI, NDRE, GNDVI, and EVI2, along with textural features from the gray-level co-occurrence matrix (GLCM) such as ENE, ENT, COR, IDM, CON, SA, and VAR, were combined to form a comprehensive dataset for model training. Among the machine learning models tested, including Multiple Linear Regression (MLR), Support Vector Regression (SVR), and Random Forest (RF), MLR demonstrated high reliability for annual data, with an R² of 0.69 during the flowering and milk stages and an R² of 0.78 for the dough stage in 2022. The RF model excelled in the combined analysis of the 2022-2023 data, achieving an R² of 0.58 for the dough stage, all confirmed through cross-validation. Integrating spectral and textural data from UAV imagery enhances early yield prediction, aiding precision agriculture and informed decision-making in rice management. These results emphasize the need to incorporate climate variables to refine predictions under diverse environmental conditions, offering a scalable solution to improve agricultural management and market planning.
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
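A compact sketch of this index-based regression setup follows, assuming NumPy and scikit-learn. The index formulas are the standard definitions, the per-plot feature table and yields are random stand-ins, and 5-fold cross-validation replaces the paper's exact validation scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def vegetation_indices(green, red, rededge, nir):
    """Standard formulas for a few of the indices named in the abstract."""
    ndvi = (nir - red) / (nir + red)
    ndre = (nir - rededge) / (nir + rededge)
    gndvi = (nir - green) / (nir + green)
    evi2 = 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)
    return np.stack([ndvi, ndre, gndvi, evi2])

# Per-plot means of spectral + GLCM texture indices vs. measured yield (t/ha);
# random stand-ins below in place of the UAV-derived feature table.
X, y = np.random.rand(60, 13), np.random.rand(60) * 12
for name, model in [("MLR", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=300, random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```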
Figure 1. Study area: (a) geographical location of Peru; (b) Lambayeque region; (c) commercial zones: Caballito, García, Santa Julia, Totora, and Zapote.
Figure 2. Meteorological variables recorded during the rice growing season in 2022 and 2023: (a) maximum temperature (°C), minimum temperature (°C), and precipitation (mm); (b) relative humidity (%) and wind speed (m s−1). Data were collected at the automatic weather station of INIA-Vista Florida.
Figure 3. (a) Flights carried out in the commercial areas; (b) phenology of the Capoteña variety according to days post sowing (DPS).
Figure 4. Flow diagram of the methodology followed in this study.
Figure 5. Flight platform and sensors: (a) DJI Matrice 300 RTK, (b) MicaSense RedEdge-MX multispectral sensor, and (c) Parrot Sequoia multispectral sensor, together with their respective calibration panels.
Figure 6. Rice yield data in tons per hectare (t ha−1) in commercial fields of Ferreñafe for 2022 and 2023.
Figure 7. Coefficient of determination (R²) of vegetation indices (VIs) and textural indices (TIs) in relation to measured rice yield across phenological stages: (a) number of plots evaluated per stage in 2022 and 2023; (b) distribution of R² values across stages for 2022 and 2023.
Figure 8. Optimal results from Sequential Feature Selection for MLR and SVR models using VIs, TIs, and their combination (VIs + TIs) across the flowering (a,d,g), milk (b,e,h), and dough (c,f,i) stages for 2022 (a–c), 2023 (d–f), and the combined period 2022–2023 (g–i).
Figure 9. Predicted versus measured grain yield for MLR and SVR models using VIs, TIs, and VIs + TIs across the flowering (a,d,g), milk (b,e,h), and dough (c,f,i) stages for 2022 (a–c), 2023 (d–f), and 2022–2023 (g–i).
Figure 10. Random Forest (RF) model for rice yield estimation during the flowering stage (2022–2023) using VIs and TIs: (a) out-of-bag (OOB) error, (b) variable selection via LOOCV (RMSE), and (c) predictor importance.
Figure 11. As Figure 10, for the milk stage (2022–2023).
Figure 12. As Figure 10, for the dough stage (2022–2023).
Figure 13. Predicted versus measured grain yield for RF models using VIs, TIs, and VIs + TIs across the flowering (a,d,g), milk (b,e,h), and dough (c,f,i) stages for 2022 (a–c), 2023 (d–f), and 2022–2023 (g–i).
12 pages, 1695 KiB  
Article
Promising Results About the Possibility to Identify Prostate Cancer Patients Employing a Random Forest Classifier: A Preliminary Study Preoperative Patients Selection
by Eliodoro Faiella, Matteo Pileri, Raffaele Ragone, Anna Maria De Nicola, Bruno Beomonte Zobel, Rosario Francesco Grasso and Domiziana Santucci
Diagnostics 2025, 15(4), 421; https://doi.org/10.3390/diagnostics15040421 - 10 Feb 2025
Abstract
Objective: This study evaluates the accuracy of a Random Forest (RF) Machine Learning model using MRI data and radiomic features to predict lymph node involvement in prostate cancer (PCa). Methods: Ninety-five patients who underwent mp-MRI, prostatectomy, and lymphadenectomy at the Fondazione Policlinico Campus Bio-medico Radiological Department from 2016 to 2022 were analyzed. Radiomic features were extracted from T2-weighted, DWI, and ADC sequences and processed using a Random Forest (RF) model. Clinical data such as PSA levels and Gleason scores were also considered. Results: The RF model demonstrated significant accuracy in predicting lymph node involvement, achieving 84% accuracy for nodules in the peripheral zone (80% for predicting positive lymph node involvement and 85% for negative lymph node involvement) and 87% for those in the transitional zone (86% for positive and 88% for negative lymph node involvement). In the peripheral zone, key features included the ADC shape maximum 2D diameter (row) and T2 GLCM difference variance, while in the transitional zone, DWI GLCM difference average and DWI GLCM IDM were important. DWI and ADC sequences were particularly crucial for accurate lymph node assessment. First-order features emerged as the most significant in whole-gland analysis, indicating fundamental differences in tumor composition and density critical for identifying malignancies with higher metastatic potential. Conclusions: AI-driven radiomic analysis, especially using DWI- and ADC-derived features, effectively predicts lymph node involvement in PCa patients, in particular in patients with negative lymph node status, offering a promising tool for selecting patients for lymph-node-sparing surgery preoperatively. Further validation with larger cohorts is needed. Limitations of this study include the relatively small sample size and its retrospective design.
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging: 2nd Edition)
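A minimal sketch of training and scoring such a radiomics classifier follows, assuming scikit-learn; the feature table, labels, and split are random stand-ins, and per-class accuracy is computed as the recall of each class to mirror the positive/negative figures reported above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# X: radiomic features from T2/DWI/ADC (e.g. GLCM difference variance) plus
# clinical covariates (PSA, Gleason); y: pathology-confirmed nodal status.
X, y = np.random.rand(95, 30), np.random.randint(0, 2, 95)  # stand-ins
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

# Per-class accuracy equals the recall of each class (N+ and N- cases
# predicted correctly), mirroring the per-status figures reported above.
acc_pos = recall_score(y_te, pred, pos_label=1)
acc_neg = recall_score(y_te, pred, pos_label=0)
print(f"positive-node accuracy: {acc_pos:.2f}, negative-node: {acc_neg:.2f}")
```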
Figure 1. Prostatic nodule (green zone) in ADC map (A), axial T2 (B), DWI (C), and coronal T2 (D).
13 pages, 3805 KiB  
Article
Radiomics-Driven CBCT Texture Analysis as a Novel Biosensor for Quantifying Periapical Bone Healing: A Comparative Study of Intracanal Medications
by Diana Lorena Garcia Lopes, Sérgio Lúcio Pereira de Castro Lopes, Daniela Maria de Toledo Ungaro, Ana Paula Martins Gomes, Nicole Berton de Moura, Bianca Costa Gonçalves and Andre Luiz Ferreira Costa
Biosensors 2025, 15(2), 98; https://doi.org/10.3390/bios15020098 - 9 Feb 2025
Abstract
This study aimed to evaluate the effectiveness of two intracanal medications in promoting periapical bone healing following endodontic treatment, using radiomics-enabled texture analysis of cone-beam computed tomography (CBCT) images as a novel biosensing technique. By quantifying tissue changes through advanced image analysis, this approach seeks to enhance the monitoring and assessment of endodontic treatment outcomes. Thirty-four single-rooted teeth with pulp necrosis and periapical lesions were allocated to two groups (17 each): calcium hydroxide + 2% chlorhexidine gel (CHX) and Ultracal XS®. CBCT scans were obtained immediately after treatment and three months later. Texture analysis performed using MaZda software extracted 11 parameters based on the gray-level co-occurrence matrix (GLCM) across two inter-pixel distances and four directions. Statistical analysis revealed significant differences between medications for S[0,1] inverse difference moment (p = 0.043), S[0,2] difference of variance (p = 0.014), and S[0,2] difference of entropy (p = 0.004). CHX treatment resulted in a more organized bone tissue structure post-treatment, evidenced by reduced entropy and variance parameters, while Ultracal exhibited less homogeneity, indicative of fibrous or immature tissue formation. These findings demonstrate the superior efficacy of CHX in promoting bone healing and underscore the potential of texture analysis as a powerful tool for assessing CBCT images in endodontic research.
(This article belongs to the Special Issue Biosensors for Biomedical Diagnostics)
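GLCM parameters at two inter-pixel distances and four directions, as described above, can be reproduced roughly as below, assuming scikit-image. The S[0,d] labels loosely follow MaZda's notation, scikit-image's "homogeneity" is used for the inverse difference moment, and the ROI is a random stand-in.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cbct_glcm_params(roi_uint8):
    """GLCM parameters at inter-pixel distances 1 and 2 over four directions."""
    glcm = graycomatrix(roi_uint8, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    out = {}
    for d_idx, d in enumerate((1, 2)):
        # scikit-image's "homogeneity" is the inverse difference moment (InvDfMom)
        out[f"S[0,{d}] InvDfMom"] = graycoprops(glcm, "homogeneity")[d_idx].mean()
        out[f"S[0,{d}] Contrast"] = graycoprops(glcm, "contrast")[d_idx].mean()
    return out

roi = (np.random.rand(44, 44) * 255).astype(np.uint8)  # stand-in lesion ROI
print(cbct_glcm_params(roi))
```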
Figure 1. CBCT slice demonstrating the placement of the circular region of interest (ROI) for texture analysis of periapical lesions using MaZda software. The red dot indicates the center of the periapical lesion, determined at the intersection of the lateromedial and superoinferior lines. A circular ROI with a diameter of 44 pixels was centered on this point to ensure only lesion tissue was included in the analysis.
Figure 2. Schematic representation of the texture analysis workflow for quantifying periapical bone healing. (1) CBCT image acquisition at two time points (T1: immediately after treatment; T2: 3 months post-treatment), (2) followed by region of interest (ROI) selection. (3) Texture analysis is performed using MaZda software, (4) extracting 11 GLCM parameters. (5) Statistical analysis compares these parameters between the two medication groups and time points.
Figure 3. Descriptive measures and comparative analysis of 11 texture parameters for two intracanal medications (CHX and Ultracal) across two time points (T1 and T2). Parameters include angular second moment (AngScMom), contrast, correlation (Correlat), difference of entropy (DifEntrp), difference of variance (DifVarnc), entropy, inverse difference moment (InvDfMom), sum of average (SumAverg), sum of entropy (SumEntrp), sum of squares (SumOfSqs), and sum of variance (SumVarnc). Values represent means and standard deviations (SDs) for each parameter, along with p-values for medication and time comparisons.
Figure 4. Graphical representations of changes in the 11 texture parameters between the two time points (T1 and T2) for the two intracanal medications (CHX and Ultracal). Bars represent mean values at each time point for both medications.
27 pages, 4940 KiB  
Article
Alzheimer’s Prediction Methods with Harris Hawks Optimization (HHO) and Deep Learning-Based Approach Using an MLP-LSTM Hybrid Network
by Raheleh Ghadami and Javad Rahebi
Diagnostics 2025, 15(3), 377; https://doi.org/10.3390/diagnostics15030377 - 5 Feb 2025
Abstract
Background/Objective: Alzheimer's disease is a progressive brain syndrome causing cognitive decline and, ultimately, death. Early diagnosis is essential for timely medical intervention, with MRI serving as a primary diagnostic tool. Machine learning (ML) and deep learning (DL) methods are increasingly used to analyze these images, but accurately distinguishing between healthy and diseased states remains a challenge. This study aims to address these limitations by developing an integrated approach combining swarm intelligence with ML and DL techniques for Alzheimer's disease classification. Method: The proposed methodology involves sourcing Alzheimer's disease-related MRI images and extracting features using convolutional neural networks (CNNs) and the Gray-Level Co-occurrence Matrix (GLCM). The Harris Hawks Optimization (HHO) algorithm is applied to select the most significant features. The selected features are used to train a multi-layer perceptron (MLP) neural network and are further processed by a long short-term memory (LSTM) network for classification. The Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset is used for assessment. Results: The proposed method achieved a classification accuracy of 97.59%, sensitivity of 97.41%, and precision of 97.25%, outperforming other models, including VGG16, GLCM, and ResNet-50, in diagnosing Alzheimer's disease. Conclusions: The results demonstrate the efficacy of the proposed approach in enhancing Alzheimer's disease diagnosis through improved feature extraction and selection. These findings highlight the potential of advanced ML and DL integration to improve diagnostic tools in medical imaging applications.
(This article belongs to the Special Issue Artificial Intelligence in Alzheimer’s Disease Diagnosis)
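HHO itself is a population-based metaheuristic and is not reproduced here. The sketch below only shows the wrapper-style fitness such a selector optimizes (classifier accuracy on a candidate feature subset), assuming scikit-learn, with a trivial random search standing in for the HHO loop; features and labels are random stand-ins.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def fitness(mask, X, y):
    """Wrapper fitness: MLP accuracy on the feature columns selected by `mask`."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=200, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

rng = np.random.default_rng(0)
X, y = rng.random((200, 40)), rng.integers(0, 2, 200)  # stand-in features/labels

# Trivial random search in place of the Harris Hawks Optimization loop: HHO
# would instead evolve a population of such masks toward higher fitness.
best_mask, best_fit = None, -1.0
for _ in range(12):
    mask = rng.random(X.shape[1]) < 0.5
    f = fitness(mask, X, y)
    if f > best_fit:
        best_mask, best_fit = mask, f
```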
Figure 1. Influences on the development of Alzheimer's disease [29].
Figure 2. Brain regions affected by Alzheimer's disease in MRI images [30].
Figure 3. Structure of the suggested method for diagnosing Alzheimer's disease.
Figure 4. ResNet-50 residual block.
Figure 5. ResNet-50 neural network configuration.
Figure 6. Proposed feature selection flowchart.
Figure 7. Structure of the LSTM.
Figure 8. Two sample images from the ADNI and MIRIAD datasets: (a) MIRIAD dataset [46]; (b) slices from the ADNI dataset [47].
Figure 9. Evaluation of the proposed method on the ADNI dataset.
Figure 10. Evaluation of the proposed method on the MIRIAD dataset.
Figure 11. Comparison of the proposed method with several classification methods for Alzheimer's images in terms of accuracy.
Figure 12. Comparison of the proposed method with several classification methods for Alzheimer's images in terms of precision.
Figure 13. Comparison of the proposed method with several classification methods for Alzheimer's images in terms of sensitivity.
Figure 14. Comparison of the accuracy of the proposed method with DL methods in Alzheimer's diagnosis.
Figure 15. Comparison of the sensitivity of the proposed method with DL methods in Alzheimer's diagnosis.
19 pages, 4734 KiB  
Article
Fractal Analysis of Volcanic Rock Image Based on Difference Box-Counting Dimension and Gray-Level Co-Occurrence Matrix: A Case Study in the Liaohe Basin, China
by Sijia Li, Zhuwen Wang and Dan Mou
Fractal Fract. 2025, 9(2), 99; https://doi.org/10.3390/fractalfract9020099 - 4 Feb 2025
Abstract
Volcanic rocks, a widely distributed rock type on Earth, are mostly buried deep within basins, and their internal structures are characterized by irregularity and self-similarity. In the study of volcanic rocks, accurately identifying lithology is significant for reservoir description and evaluation, and higher identification accuracy can improve the success rate of petroleum exploration and development as well as the safety of engineering construction. In this study, we took electron microscope images of four types of volcanic rocks in the Liaohe Basin as the research objects and combined the differential box-counting (DBC) dimension with the gray-level co-occurrence matrix (GLCM) to identify volcanic lithology. Images of volcanic rocks from the research area were acquired and preprocessed to meet the requirements of the calculations. First, the differential box-counting dimension was calculated by dividing the grayscale image into boxes of different scales and determining the dimension from the variation of gray values within each box. The differential box-counting dimension of basalt ranges from 1.70 to 1.75, that of trachyte from 1.82 to 1.87, that of gabbro from 1.76 to 1.79, and that of diabase from 1.78 to 1.82. The gray-level co-occurrence matrix is then used to extract four texture features of the volcanic rock images: contrast, energy, entropy, and variance. The recognition of the four types of volcanic rock images is achieved by combining the differential box-counting dimension with the gray-level co-occurrence matrix. The method was experimentally verified on volcanic rock image samples; it has relatively high accuracy in identifying volcanic lithology and can effectively distinguish the four types of volcanic rocks. Compared with single-feature recognition methods, this approach significantly improves recognition accuracy, offers reliable technical support and a data basis for volcanic rock-related geological analyses, and drives the further development of volcanic rock research.
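A compact sketch of the standard Sarkar-Chaudhuri differential box-counting estimator follows, in plain NumPy. Note that the textbook estimator for gray-level image surfaces yields dimensions between 2 and 3, while the paper reports values below 2, so its exact variant likely differs in detail; the image here is a random stand-in.

```python
import numpy as np

def dbc_dimension(gray, box_sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting (Sarkar-Chaudhuri) fractal dimension.

    For grid size s, each s x s block needs ceil(max/h) - ceil(min/h) + 1
    gray-level boxes of height h = G*s/M; the dimension is the slope of
    log(N_s) against log(1/s)."""
    M = min(gray.shape)
    G = int(gray.max()) + 1                    # number of gray levels
    log_n, log_inv_s = [], []
    for s in box_sizes:
        h = max(1.0, G * s / M)                # gray-level box height
        n = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = gray[i:i + s, j:j + s]
                n += int(np.ceil((block.max() + 1) / h)
                         - np.ceil((block.min() + 1) / h)) + 1
        log_n.append(np.log(n))
        log_inv_s.append(np.log(1.0 / s))
    slope, _ = np.polyfit(log_inv_s, log_n, 1)
    return slope

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in rock image
print(dbc_dimension(img))
```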
Figure 1. (a) Location of the Liaohe Basin; (b) the four types of volcanic rocks studied.
Figure 2. Basalt image grayscale processing: (a) microscopic image, (b) maximum-method gray image, (c) average-method gray image, (d) weighted-average-method gray image.
Figure 3. Gaussian-filtered image.
Figure 4. Median-filtered image.
Figure 5. Basalt image preprocessing: (a) microscopic image, (b) gray image, (c) binary image.
Figure 6. DBC flowchart.
Figure 7. Sketch of the determination of the number of boxes by the DBC method.
Figure 8. GLCM flowchart.
Figure 9. GLCM sketch map of spatial position.
Figure 10. Flowchart for comprehensively discriminating volcanic lithology based on the differential box-counting dimension and the gray-level co-occurrence matrix.
Figure 11. Relationship between ln(N) and ln(1/n) for volcanic rock images using the DBC method: (a) fractal dimension of the basalt image, Well D15, 2106 m; (b) trachyte, Well O20, 2260 m; (c) gabbro, Well J28, 1332 m; (d) diabase, Well H28, 1135 m.
Figure 12. Range of fractal dimension of the four kinds of volcanic rocks.
Figure 13. Predicted volcanic lithology for Well H24, depth 2850–2950 m.
17 pages, 5156 KiB  
Article
Plant Detection in RGB Images from Unmanned Aerial Vehicles Using Segmentation by Deep Learning and an Impact of Model Accuracy on Downstream Analysis
by Mikhail V. Kozhekin, Mikhail A. Genaev, Evgenii G. Komyshev, Zakhar A. Zavyalov and Dmitry A. Afonnikov
J. Imaging 2025, 11(1), 28; https://doi.org/10.3390/jimaging11010028 - 20 Jan 2025
Abstract
Crop field monitoring using unmanned aerial vehicles (UAVs) is one of the most important technologies for plant growth control in modern precision agriculture. One important and widely used task in field monitoring is plant stand counting. The accurate identification of plants in field images provides estimates of plant number per unit area, detects missing seedlings, and predicts crop yield. Current methods are based on detecting plants in UAV images by means of computer vision algorithms and deep learning neural networks. These approaches depend on image spatial resolution and the quality of plant markup. The performance of automatic plant detection may affect the efficiency of downstream analysis of a field cropping pattern. In the present work, a method is presented for detecting plants of five species in images acquired via a UAV, on the basis of image segmentation by deep learning algorithms (convolutional neural networks). Twelve orthomosaics were collected and marked at several sites in Russia to train and test the neural network algorithms. Additionally, 17 existing datasets of various spatial resolutions and markup quality levels from the Roboflow service were used to extend the training image sets. Finally, several texture features were compared between manually evaluated and neural-network-estimated plant masks. It was demonstrated that adding images to the training sample (even those of lower resolution and markup quality) improves plant stand counting significantly. The work indicates how the accuracy of plant detection in field images may affect the evaluation of their cropping pattern by means of texture characteristics. For some characteristics (GLCM mean, GLRM long run, GLRM run ratio), the estimates from manually and automatically marked images are close; for others, the differences are large and may lead to erroneous conclusions about the properties of field cropping patterns. Nonetheless, overall, plant detection algorithms with higher accuracy show better agreement with the texture parameter estimates obtained from manually marked images.
(This article belongs to the Special Issue Imaging Applications in Agriculture)
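As a rough illustration of comparing one texture characteristic between manual and predicted plant masks, the sketch below computes the GLCM mean of two binary masks, assuming scikit-image. Gray-level run-length (GLRM) features are not available in scikit-image and are omitted; both masks are random stand-ins.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_mean(binary_mask):
    """GLCM 'mean' of a plant-center mask: the expected row index i under the
    normalized co-occurrence distribution P(i, j)."""
    glcm = graycomatrix(binary_mask.astype(np.uint8), [1], [0],
                        levels=2, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    i = np.arange(p.shape[0])[:, None]
    return float((i * p).sum())

manual = np.random.rand(256, 256) > 0.97        # stand-in manual markup
predicted = np.random.rand(256, 256) > 0.97     # stand-in network output
agreement = abs(glcm_mean(manual) - glcm_mean(predicted))
print(agreement)
```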
Figure 1. A UAV launching from the launcher.
Figure 2. Examples of images for the analysis: (a) a fragment of an orthomosaic before markup; (b) the same fragment with vector markup applied in QGIS; (c) a generated raster mask showing the locations of plant centers.
Figure 3. Architecture of the U-Net network used in this work for plant identification.
Figure 4. Learning curves of the models in experiments (a) RN18-LQ, (b) RN18-HQ-LQ, (c) RN34-HQ-LQ, and (d) RN50-HQ-LQ. X-axis: training epoch; Y-axis: error on the training and validation samples. Blue: training loss; green: validation loss; yellow: training IoU; red: validation IoU.
Figure 5. Examples of RN50-HQ-LQ model performance on the test sample for different crops and high-resolution orthomosaics: (a) sugar beet, Beet_marat_1; (b) sugar beet, UBONN_Sb3_2015; (c) potato, Stavropol_2_7; (d) potato, Stavropol_4_0; (e) potato, Stavropol_4_9. Rows, left to right: original (Field); manual plant marking (Mask); automatic marking by the RN50-HQ-LQ network.
Figure 6. Comparison of crop texture characteristic estimates between manual markup (X axis) and neural network markup (Y axis): (a) Stavropol_2_7, RN50-HQ-LQ; (b) Stavropol_2_7, RN18-HQ; (c) Beet_marat_1, RN50-HQ-LQ; (d) Beet_marat_1, RN18-HQ.
36 pages, 13780 KiB  
Article
Combining a Standardized Growth Class Assessment, UAV Sensor Data, GIS Processing, and Machine Learning Classification to Derive a Correlation with the Vigour and Canopy Volume of Grapevines
by Ronald P. Dillner, Maria A. Wimmer, Matthias Porten, Thomas Udelhoven and Rebecca Retzlaff
Sensors 2025, 25(2), 431; https://doi.org/10.3390/s25020431 - 13 Jan 2025
Abstract
Assessing vines' vigour is essential for vineyard management and for the automatization of viticulture machines, including shaking adjustments of berry harvesters during grape harvest or leaf pruning applications. To address these problems, labeled ground truth data of precisely located grapevines, based on a standardized growth class assessment, were predicted with specifically selected Machine Learning (ML) classifiers (Random Forest Classifier (RFC), Support Vector Machines (SVM)) utilizing multispectral UAV (Unmanned Aerial Vehicle) sensor data. The input features for ML model training comprise spectral, structural, and texture feature types generated from multispectral orthomosaics (spectral features), Digital Terrain and Surface Models (DTM/DSM; structural features), and Gray-Level Co-occurrence Matrix (GLCM) calculations (texture features). The specific features were selected based on extensive literature research, especially in the fields of precision agriculture and viticulture. To restrict ML classification to vine-canopy-exclusive features, the feature types were extracted and spatially aggregated (zonal statistics) within a vine row mask around each grapevine position, created with a combined pixel- and object-based image segmentation technique. The extracted canopy features were progressively grouped into seven input feature groups for model training. Model overall performance metrics were optimized with grid-search-based hyperparameter tuning and repeated k-fold cross-validation. Finally, ML-based growth class prediction results were extensively discussed and evaluated with overall (accuracy, weighted F1) and growth-class-specific classification metrics (accuracy, user's and producer's accuracy).
(This article belongs to the Special Issue Remote Sensing for Crop Growth Monitoring)
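The zonal statistics step (aggregating raster features inside a rectangle around each vine) can be sketched with SciPy as below. This is a plain-array stand-in for the paper's QGIS/Python geoprocessing: the OSAVI raster and the labeled vine rectangles are synthetic.

```python
import numpy as np
from scipy import ndimage

def zonal_means(feature_raster, vine_labels):
    """Mean of one canopy feature inside each vine's sampling rectangle.

    `vine_labels` is an integer raster: 0 = background (outside the vine-row
    mask), 1..K = the rectangle around grapevine k. Pixels outside the mask
    carry label 0 and are ignored, mirroring the masking step."""
    ids = np.arange(1, vine_labels.max() + 1)
    return ndimage.mean(feature_raster, labels=vine_labels, index=ids)

# Stand-in data: a 100x100 OSAVI raster and three labeled vine rectangles.
osavi = np.random.rand(100, 100)
labels = np.zeros((100, 100), dtype=int)
labels[10:20, 10:30], labels[40:50, 10:30], labels[70:80, 10:30] = 1, 2, 3
per_vine_osavi = zonal_means(osavi, labels)   # one value per grapevine
```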
Figure 1. (A) Investigation area in Bernkastel-Kues within the Moselle wine region, mapped on a high-precision orthomosaic (CRS: EPSG 25832, ETRS89/UTM zone 32N); red points mark each vine position localized with differential GPS (see Section 2.5). (B) Zoomed-out view of the investigation area showing the local vineyard structure and the Moselle river (Google Earth Satellite map, QuickMapServices plugin in QGIS 3.22). (C) Intermediate zoom with the orthomosaic on the Google Earth Satellite map; the UAV-sensor-based orthomosaic and the Google Satellite map show some offset to each other due to different absolute geographic accuracies and spatial resolutions.
Figure 2. (A) Zoomed-out view of the canopy-free vine rows of the investigation area. (B) Zoomed-in view of the training system (photos taken in December 2024).
Figure 3. Ground truth template examples with label descriptions for the growth classes after Porten [43], whose specific visual characteristics and correlations to viticultural, oenological, and environmental parameters are described in detail in [43].
Figure 4. Color-coded growth classification after Porten [43] for single grapevines, mapped on the multispectral orthomosaic (EPSG 25832, ETRS89/UTM zone 32N).
Figure 5. Geo- and image-processing workflow developed in this study with QGIS and geospatial Python libraries; geoprocessing was the foundation for the statistical analysis and the ML growth class predictions after [43].
Figure 6. Sampling rectangles around the vine positions for the zonal statistics aggregation, together with growth-class-categorized grapevine stem positions and the vine-row-extracted OSAVI (EPSG 25832, ETRS89/UTM zone 32N).
Figure 7. Growth-class-grouped CHM volume boxplots with significance stars from the Mann–Whitney U test (** intermediate, *** vital significance).
Figure 8. Overall accuracy (%) boxplots for the SVM classifier across the seven input feature groups (SVM 1–SVM 7), with Mann–Whitney U significance stars between models from repeated k-fold cross-validation (** intermediate, *** vital significance; no stars: no statistical difference).
Figure 9. As Figure 8, for the seven RF classifier models (RF 1–RF 7; * weak, ** intermediate, *** vital significance).
Figure 10. Pairwise comparison of test and training overall accuracy (%) for the SVM classifier across the seven input feature groups (* weak, *** vital significance).
Figure 11. Pairwise comparison of test-set accuracy (%) and weighted F1 score (%) for the RF classifier across the seven input feature groups (*** vital significance).
Figure 12. Pairwise comparison of training overall accuracy (%) and weighted F1 score (%) for the SVM classifier across the seven input feature groups (** intermediate, *** vital significance; no stars: no statistical difference).
Figure 13. Example of the difference between ground truth growth classes and SVM 7 model predictions. Red numbers next to grapevine stems (brown points) above zero indicate underestimation relative to ground truth, values below zero overestimation, and zero a perfect match. Red rectangles mark the zonal statistics aggregation areas, and the red outline the generated vine row mask (see Section 2.6.9); pixels outside the mask were not aggregated (EPSG 25832, ETRS89/UTM zone 32N).
18 pages, 2256 KiB  
Article
Image-Based Detection and Classification of Malaria Parasites and Leukocytes with Quality Assessment of Romanowsky-Stained Blood Smears
by Jhonathan Sora-Cardenas, Wendy M. Fong-Amaris, Cesar A. Salazar-Centeno, Alejandro Castañeda, Oscar D. Martínez-Bernal, Daniel R. Suárez and Carol Martínez
Sensors 2025, 25(2), 390; https://doi.org/10.3390/s25020390 - 10 Jan 2025
Abstract
Malaria remains a global health concern, with 249 million cases and 608,000 deaths reported by the WHO in 2022. Traditional diagnostic methods often struggle with inconsistent stain quality, lighting variations, and limited resources in endemic regions, making manual detection time-intensive and error-prone. This study introduces an automated system for analyzing Romanowsky-stained thick blood smears, focusing on image quality evaluation, leukocyte detection, and malaria parasite classification. Using a dataset of 1000 clinically diagnosed images, we applied feature extraction techniques, including histogram bins and texture analysis with the gray-level co-occurrence matrix (GLCM), alongside support vector machines (SVMs) for image quality assessment. Leukocyte detection employed Otsu thresholding, binary masking, and erosion, followed by the connected components algorithm. Parasite detection used high-intensity region selection and adaptive bounding boxes, followed by a custom convolutional neural network (CNN) for candidate identification. A second CNN classified parasites into trophozoites, schizonts, and gametocytes. The system achieved an F1-score of 95% for image quality evaluation, 88.92% for leukocyte detection, and 82.10% for parasite detection. The F1-score, a metric balancing precision (the fraction of detections that are correct) and recall (the fraction of actual positives that are detected), is especially valuable for assessing models on imbalanced datasets. In parasite stage classification, the CNN achieved F1-scores of 85% for trophozoites, 88% for schizonts, and 83% for gametocytes. This study introduces a robust and scalable automated system that addresses critical challenges in malaria diagnosis by integrating advanced image quality assessment and deep learning techniques for parasite detection and classification. The system's adaptability to low-resource settings underscores its potential to improve malaria diagnostics globally.
(This article belongs to the Special Issue Recent Advances in Biomedical Imaging Sensors and Processing)
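The leukocyte detection chain (Otsu threshold, erosion, connected components) maps directly onto scikit-image primitives; a minimal sketch follows. The dark-object assumption, structuring-element radius, and minimum area are placeholders rather than the paper's tuned values.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import binary_erosion, disk

def count_leukocytes(gray, min_area=200):
    """Leukocyte candidates via Otsu thresholding, erosion, connected components.

    In Romanowsky-stained thick smears, leukocyte nuclei are among the darkest
    objects, so pixels below the Otsu threshold are kept."""
    mask = gray < threshold_otsu(gray)
    mask = binary_erosion(mask, disk(3))      # detach touching debris
    labeled = label(mask)                     # connected components
    return [r for r in regionprops(labeled) if r.area >= min_area]

smear = (np.random.rand(512, 512) * 255).astype(np.uint8)   # stand-in image
print(len(count_leukocytes(smear)))
```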
Figure 1. Block diagram of the proposed method.
Figure 2. Sample images annotated with red (parasite) and green (leukocyte) bounding boxes: (a) good-quality image sample; (b) bad-quality image sample.
Figure 3. Example crops for each parasite stage: (a) trophozoites, (b) schizonts, (c) gametocytes, with sizes ranging from 13 to 138 pixels (100×).
Figure 4. Histogram example.
Figure 5. Example of the WBC segmentation process: (a) grayscale image; (b) Otsu segmentation; (c) erosion function; (d) mask candidates.
Figure 6. Precision curve of the customized CNN model at patch level.
20 pages, 4706 KiB  
Article
Band Selection Algorithm Based on Multi-Feature and Affinity Propagation Clustering
by Junbin Zhuang, Wenying Chen, Xunan Huang and Yunyi Yan
Remote Sens. 2025, 17(2), 193; https://doi.org/10.3390/rs17020193 - 8 Jan 2025
Abstract
Hyperspectral images are high-dimensional data containing rich spatial, spectral, and radiometric information, widely used in geological mapping, urban remote sensing, and other fields. However, the characteristics of hyperspectral remote sensing images, such as high redundancy, strong correlation, and large data volumes, make their classification and recognition significantly challenging. In this paper, we propose a band selection method (GE-AP) based on multi-feature extraction and Affinity Propagation (AP) clustering for dimensionality reduction of hyperspectral images, aiming to improve classification accuracy and processing efficiency. In this method, texture features of the band images are extracted using the Gray-Level Co-occurrence Matrix (GLCM), and the Euclidean distance between bands is calculated. A similarity matrix is then constructed by integrating the multi-feature information, and the AP algorithm clusters the bands of the hyperspectral images to achieve effective band dimensionality reduction. Simulation and comparison experiments evaluating overall classification accuracy (OA) and the Kappa coefficient show that GE-AP achieves the highest OA and Kappa coefficient compared with three other methods, with maximum increases of 8.89% and 13.18%, respectively. This verifies that the proposed method outperforms traditional single-information methods in handling spatial and spectral redundancy between bands, demonstrating good adaptability and stability.
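A minimal sketch of fusing a texture-based distance with a spectral Euclidean distance into a similarity matrix and clustering bands with Affinity Propagation follows, assuming scikit-learn. The equal-weight fusion, the feature dimensions, and the stand-in data are assumptions, not the GE-AP formulation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import pairwise_distances

# band_features: one row per band = GLCM texture descriptors of the band image;
# pixels: spectra flattened to (pixels, bands) for band-to-band distances.
band_features = np.random.rand(103, 6)              # stand-in texture features
pixels = np.random.rand(5000, 103)                  # stand-in spectra

# Fuse the two information sources into one similarity matrix (higher = more alike).
d_tex = pairwise_distances(band_features)           # texture dissimilarity
d_spec = pairwise_distances(pixels.T)               # Euclidean distance between bands
sim = -(d_tex / d_tex.max() + d_spec / d_spec.max())

ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
selected_bands = ap.cluster_centers_indices_        # one exemplar band per cluster
```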
Figure 1. Transmission of the two kinds of messages in the AP algorithm.
Figure 2. Band selection flow chart based on multi-feature extraction and the Affinity Propagation clustering algorithm.
Figure 3. Ground-truth object distribution map.
Figure 4. First three principal component images of the Pavia University hyperspectral image.
Figure 5. Hyperspectral image component lithotripsy.
Figure 6. First three principal component images of the Pavia Center hyperspectral image.
Figure 7. Band index line plots of the Pavia University and Pavia Center hyperspectral images based on the ABS method.
Figure 8. Grayscale map of the correlation coefficients.
Figure 9. Nearest-neighbor correlation curves of the hyperspectral images.
Figure 10. Line chart of band coefficients in each subspace of the hyperspectral image.
Figure 11. Representative band images for Pavia University.
Figure 12. Representative band images for Pavia Center.
11 pages, 4367 KiB  
Article
Gray-Level Co-Occurrence Matrix Uniformity Correction Algorithm in Positron Emission Tomographic Image: A Phantom Study
by Kyuseok Kim and Youngjin Lee
Photonics 2025, 12(1), 33; https://doi.org/10.3390/photonics12010033 - 3 Jan 2025
Abstract
High uniformity of positron emission tomography (PET) images in the field of nuclear medicine is necessary to obtain excellent and stable data from the system. In this study, we aimed to apply and optimize a PET/magnetic resonance (MR) imaging system by approaching the [...] Read more.
High uniformity of positron emission tomography (PET) images is necessary in nuclear medicine to obtain excellent and stable data from the system. In this study, we aimed to apply and optimize, in a PET/magnetic resonance (MR) imaging system, an approach based on the gray-level co-occurrence matrix (GLCM), which is known to be efficient for image uniformity correction. CAIPIRINHA Dixon-VIBE was used as the MR image acquisition pulse sequence for fast and accurate attenuation correction of the PET images, and the phantom was constructed by injecting NaCl and NaCl + NiSO4 solutions. The lambda value of the GLCM algorithm for uniformity correction of the acquired PET images was optimized in terms of energy and contrast. Applying the GLCM algorithm optimized in this way to the PET images of the NaCl and NaCl + NiSO4 phantoms yielded average percent image uniformity (PIU) values of 26.01 and 83.76, respectively—an improvement in PIU of more than 30% over the original PET images. In conclusion, we demonstrated that an algorithm optimized in terms of GLCM energy and contrast can improve the uniformity of PET images. Full article
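To make the lambda optimization concrete, here is a hedged Python sketch. The abstract does not specify the paper's correction model, so a Gaussian low-pass bias-field estimate stands in for it (an assumption); the search simply keeps the lambda whose corrected slice drives GLCM energy toward 1 and contrast toward 0.

```python
# Hedged sketch of lambda optimization via GLCM energy/contrast.
# The Gaussian low-pass bias-field model is an assumption for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops

def glcm_energy_contrast(img, levels=64):
    """GLCM energy and contrast of a 2-D image slice."""
    q = np.round((img - img.min()) / (np.ptp(img) + 1e-12) * (levels - 1)).astype(np.uint8)
    g = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                     symmetric=True, normed=True)
    return graycoprops(g, "energy")[0, 0], graycoprops(g, "contrast")[0, 0]

def optimize_lambda(pet_slice, lambdas=np.linspace(1.0, 50.0, 25)):
    """Keep the lambda whose corrected slice has energy closest to 1, contrast closest to 0."""
    best = None
    for lam in lambdas:
        bias = gaussian_filter(pet_slice, sigma=lam) + 1e-12  # stand-in bias-field estimate
        corrected = pet_slice / bias * bias.mean()            # divide out the bias field
        energy, contrast = glcm_energy_contrast(corrected)
        score = (1.0 - energy) + contrast                     # heuristic: lower = more uniform
        if best is None or score < best[0]:
            best = (score, lam, corrected)
    _, lam_opt, corrected_opt = best
    return lam_opt, corrected_opt
```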
Figures:
Figure 1. Simplified flowchart of uniformity correction using weight-parameter optimization with the gray-level co-occurrence matrix (GLCM) in the positron emission tomography (PET) image. The closer GLCM contrast is to 0 and GLCM energy is to 1, the higher the uniformity of the PET image slice.
Figure 2. Derivation of the optimal lambda value of the GLCM algorithm: results in terms of (a) energy and (b) contrast using a NaCl solution phantom, and in terms of (c) energy and (d) contrast using a NaCl + NiSO4 solution phantom.
Figure 3. PET images corrected by applying the original and optimized GLCM algorithms: (a) NaCl and (b) NaCl + NiSO4 solution phantom results. The bias-field images derived from the energy- and contrast-based optimization are shown in the middle row.
Figure 4. (a) Schematic of the ROIs used for the PIU calculation and (b) PIU results from PET images acquired with the original and optimized GLCM algorithms.
16 pages, 4152 KiB  
Article
Computer Vision-Based Fire–Ice Ion Algorithm for Rapid and Nondestructive Authentication of Ziziphi Spinosae Semen and Its Counterfeits
by Peng Chen, Xutong Shao, Guangyu Wen, Yaowu Song, Rao Fu, Xiaoyan Xiao, Tulin Lu, Peina Zhou, Qiaosheng Guo, Hongzhuan Shi and Chenghao Fei
Foods 2025, 14(1), 5; https://doi.org/10.3390/foods14010005 - 24 Dec 2024
Viewed by 783
Abstract
The authentication of Ziziphi Spinosae Semen (ZSS), Ziziphi Mauritianae Semen (ZMS), and Hovenia Acerba Semen (HAS) has become challenging. This study analyzes the chromatic and textural properties of ZSS, ZMS, and HAS. Color features were extracted via the RGB, CIELAB, and HSI spaces, whereas texture information was analyzed via the gray-level co-occurrence matrix (GLCM) and Laws' texture feature analysis. The results revealed significant differences in color and texture among the samples. The fire–ice ion dimensionality reduction algorithm effectively fuses these features, enhancing their discriminative power. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) confirmed the algorithm's effectiveness, with variable importance in projection (VIP) analysis (VIP > 1, p < 0.05) highlighting significant differences, particularly for the fire value, a key factor. To further validate the reliability of the algorithm, a back-propagation neural network (BP), support vector machine (SVM), deep belief network (DBN), and random forest (RF) were used for reverse validation; the accuracies of the training and test sets reached 98.83–100% and 95.89–99.32%, respectively. The method provides a simple, low-cost, and high-precision tool for fast and nondestructive detection of food authenticity. Full article
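For readers unfamiliar with Laws' texture analysis, the sketch below computes local texture-energy images from the standard 1-D Laws kernels. The window size and mask choice are illustrative assumptions, and the fire–ice fusion itself is not reproduced here, since the abstract does not define it.

```python
# Sketch of Laws' texture-energy features from the standard 1-D kernels;
# the 2-D masks are outer products of kernel pairs (e.g., E5L5).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([ 1,  4, 6,  4,  1], float)   # level
E5 = np.array([-1, -2, 0,  2,  1], float)   # edge
S5 = np.array([-1,  0, 2,  0, -1], float)   # spot
R5 = np.array([ 1, -4, 6, -4,  1], float)   # ripple

def laws_energy(gray, k1, k2, win=15):
    """Local texture-energy image for the 2-D mask formed by the outer product k1 x k2."""
    mask = np.outer(k1, k2)
    filtered = convolve(gray.astype(float), mask, mode="reflect")
    return uniform_filter(np.abs(filtered), size=win)   # local average of absolute response

# Hypothetical usage: one scalar texture descriptor per seed image.
# e5l5_mean = laws_energy(gray_image, E5, L5).mean()
```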
Figures:
Figure 1. Sample information (A) and radar chart of colorimetric values (B) of ZSS, ZMS, and HAS. ZSS, Ziziphi Spinosae Semen; ZMS, Ziziphi Mauritianae Semen; HAS, Hovenia Acerba Semen.
Figure 2. GLCM texture parameter histogram (A) and Laws' texture parameter heatmap (B) of ZSS, ZMS, and HAS.
Figure 3. Fire–ice value box chart (A) and fire–ice chart (B) of ZSS, ZMS, and HAS. Letters (a–c) above the bars indicate significant differences by Duncan's multiple-range test (p < 0.05).
Figure 4. Score plots of the PCA model (A) and the PLS-DA model (B) for ZSS, ZMS, and HAS based on the raw color and texture characterization. PCA, principal component analysis; PLS-DA, partial least squares discriminant analysis.
Figure 5. Cross-validation results from 200 permutation-test calculations (A) and VIP plots (B) for ZSS, ZMS, and HAS based on the raw color and texture characterization. VIP, variable importance in projection.
Figure 6. Score plots of the PCA model (A) and the PLS-DA model (B) for ZSS, ZMS, and HAS based on the fire–ice ion dimensionality-reduced data.
Figure 7. Cross-validation results from 200 permutation-test calculations (A) and VIP plots (B) for ZSS, ZMS, and HAS based on the fire–ice ion dimensionality-reduced data.
Figure 8. Evaluation metrics of the four machine learning algorithms (BP, SVM, DBN, and RF).
14 pages, 1342 KiB  
Article
Diffusion-Weighted MRI and Human Papillomavirus (HPV) Status in Oropharyngeal Cancer
by Heleen Bollen, Rüveyda Dok, Frederik De Keyzer, Sarah Deschuymer, Annouschka Laenen, Johannes Devos, Vincent Vandecaveye and Sandra Nuyts
Cancers 2024, 16(24), 4284; https://doi.org/10.3390/cancers16244284 - 23 Dec 2024
Viewed by 779
Abstract
Background: This study aimed to explore differences in quantitative diffusion-weighted (DW) MRI parameters in oropharyngeal squamous cell carcinoma (OPC) according to Human Papillomavirus (HPV) status before and during radiotherapy (RT). Methods: Echo-planar DW sequences acquired before and during (chemo)radiotherapy (CRT) in 178 patients with histologically proven OPC were prospectively analyzed. The volumetric region of interest (ROI) was manually drawn on the apparent diffusion coefficient (ADC) map, and 105 DW-MRI radiomic parameters were extracted. The change in ADC values (ΔADC) was calculated as the difference between the baseline value and the value at week 4 of RT, normalized by the baseline value. Results: Pre-treatment first-order 10th percentile ADC and gray-level co-occurrence matrix (GLCM) correlation were significantly lower in HPV-positive than in HPV-negative tumors (82.4 × 10⁻⁵ mm²/s vs. 90.3 × 10⁻⁵ mm²/s, p = 0.03, and 0.18 vs. 0.30, p < 0.01). In the fourth week of RT, all first-order ADC values were significantly higher in HPV-positive tumors (p < 0.01). ΔADC mean was significantly higher in the HPV-positive than in the HPV-negative OPC group (95% vs. 55%, p < 0.01). A predictive model for HPV status based on smoking status, alcohol consumption, GLCM correlation, and mean and 10th percentile ADC values yielded an area under the curve of 0.77 (95% CI 0.70–0.84). Conclusions: Our results highlight the potential of DW-MR imaging as a non-invasive biomarker for the prediction of HPV status, although its current role remains supplementary to pathological confirmation. Full article
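The ΔADC definition and the first-order features above reduce to a few lines of NumPy. The sketch below assumes ADC maps stored as arrays (here in units of 10⁻⁵ mm²/s) and a binary ROI mask, with the predictive-model step indicated only in outline.

```python
# Minimal sketch of first-order ADC features and the delta-ADC definition;
# array shapes and units are assumptions made for illustration.
import numpy as np

def first_order_adc(adc_map, roi_mask):
    """Mean and 10th percentile ADC inside a volumetric ROI."""
    vals = adc_map[roi_mask.astype(bool)]
    return {"mean": float(vals.mean()), "p10": float(np.percentile(vals, 10))}

def delta_adc_percent(baseline_mean, week4_mean):
    """Relative ADC change at week 4 of RT, normalized by baseline, in percent."""
    return 100.0 * (week4_mean - baseline_mean) / baseline_mean

# The predictive model in the abstract could be outlined with scikit-learn, e.g.:
# from sklearn.linear_model import LogisticRegression
# from sklearn.metrics import roc_auc_score
# X columns: smoking, alcohol, GLCM correlation, mean ADC, 10th percentile ADC; y: HPV status
# auc = roc_auc_score(y, LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])
```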
(This article belongs to the Special Issue Advances in Radiotherapy for Head and Neck Cancer)
Figures:
Figure 1. Flowchart of the study. OPC: oropharyngeal cancer; HPV: Human Papillomavirus; n = number of patients. p16 was used as a surrogate marker for HPV; in the figures and tables, p16-positive tumors are depicted as HPV-positive and p16-negative tumors as HPV-negative.
Figure 2. Boxplots of (A) pre-treatment mean ADC values (×10⁻⁵ mm²/s), (B) mean ADC values during RT (×10⁻⁵ mm²/s), and (C) ΔADC mean (%) according to HPV status. p-values were calculated with the Mann–Whitney U test.
Figure 3. Receiver operating characteristic (ROC) curve of the predictive model based on clinical factors (smoking status, alcohol consumption, tumor location) and radiomic features (10th percentile ADC, mean ADC, and GLCM correlation), with HPV status as the classification variable. The predicted probability was generated by multivariable logistic regression. AUC: area under the curve.
Figure 4. Kaplan–Meier curves for (A) locoregional control (LRC) and (B) overall survival (OS) stratified by p16 status as a surrogate marker for HPV.