Search Results (213)

Search Parameters:
Keywords = sub-technique classification

18 pages, 6340 KiB  
Article
Identifying Bias in Deep Neural Networks Using Image Transforms
by Sai Teja Erukude, Akhil Joshi and Lior Shamir
Computers 2024, 13(12), 341; https://doi.org/10.3390/computers13120341 - 15 Dec 2024
Viewed by 214
Abstract
CNNs have become one of the most commonly used computational tools in the past two decades. One of their primary downsides is that they work as a "black box": the user cannot necessarily know how the image data are analyzed and therefore needs to rely on empirical evaluation to test the efficacy of a trained CNN. This can lead to hidden biases that affect the performance evaluation of neural networks but are difficult to identify. Here we discuss examples of such hidden biases in common and widely used benchmark datasets and propose techniques for identifying dataset biases that can affect the standard performance evaluation metrics. One effective approach to identifying dataset bias is to perform image classification using merely the blank background parts of the original images. However, in some situations a blank background is not available, making it more difficult to separate foreground or contextual information from the bias. To overcome this, we propose a method to identify dataset bias without the need to crop background information from the images. The method is based on applying several image transforms to the original images, including the Fourier transform, wavelet transforms, the median filter, and their combinations. These transforms are applied to recover the background bias information that CNNs use to classify images. The transformations affect the contextual visual information in a different manner than they affect the systemic background bias. Therefore, the method can distinguish between contextual information and the bias, and can reveal the presence of background bias even without separating sub-image parts from the blank background of the original images. The code used in the experiments is publicly available.
(This article belongs to the Special Issue Feature Papers in Computers 2024)
Figure 1. X-ray images classified correctly by CNNs. (a) Original X-ray, (b) AlexNet, (c) GoogleNet, (d) VGG16, (e) VGG19, (f) ResNet18, (g) ResNet50, (h) ResNet101, (i) Inception V3, (j) InceptionResNet, (k) DenseNet201, (l) SqueezeNet, (m) Xception, and (n) CNN-X [30].
Figure 2. A 20 × 20 segment cropped from a blank background part of the original image. This was carried out for all images in the dataset to create a new dataset of blank sub-images, as shown in Figure 3. When using just these seemingly blank parts of the images, the classification accuracy on numerous datasets was far higher than mere chance accuracy.
Figure 3. Original images from Yale Faces B and the 20 × 20 portion of the top left corner separated from each of the original images. The classification accuracy of the CNN was far higher than mere chance, showing the CNN does not necessarily need to recognize the face in order to classify the images correctly.
Figure 4. Original images from KVASIR and the 20 × 20 portion of the top left corner separated from the original images [13].
Figure 5. Classification accuracy of CNN models trained and tested on seemingly blank sub-images taken from the image background of several common image benchmark datasets [13].
Figure 6. Categorization of data. The data used in this paper include natural images collected from various sources, as well as controlled datasets in which all images are provided by a single source.
Figure 7. VGG16 architecture.
Figure 8. Classification accuracy when using the Fourier-transformed full images.
Figure 9. Classification accuracy when using the Fourier-transformed small sub-images taken from the background.
Figure 10. Example of an original image and the Haar and Daubechies discrete wavelet transformed images.
Figure 11. Example of an original image taken from ImageNette, and the Haar and Daubechies discrete wavelet transformed images.
Figure 12. Classification accuracy when using images after applying the wavelet transform.
Figure 13. Classification accuracy when using the wavelet-transformed cropped images.
Figure 14. Classification accuracy when using full images after applying median filtering.
Figure 15. Classification accuracy when using sub-images of blank background after applying the median filter.
Figure 16. Classification accuracy of full images after applying both median and wavelet transforms.
Figure 17. Classification accuracy of blank background sub-images after applying both median and wavelet transforms.
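The probing idea in this abstract is compact enough to sketch. Below is a minimal, hedged illustration: classify small background-only crops, optionally Fourier-transformed, and compare the accuracy with chance. The (N, H, W) grayscale layout, 20 × 20 crop size, and logistic-regression probe are illustrative assumptions; the paper trains CNNs such as VGG16.

```python
# Hedged sketch: probe for background bias by classifying background-only
# crops, optionally after a Fourier transform. Accuracy well above chance
# (1 / n_classes) on crops that contain no foreground suggests dataset bias.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fourier_magnitude(img):
    """Log-scaled magnitude spectrum of a 2-D image."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def background_bias_score(images, labels, crop=20, use_fft=True):
    crops = images[:, :crop, :crop].astype(float)   # top-left "blank" corner
    if use_fft:
        crops = np.stack([fourier_magnitude(c) for c in crops])
    X = crops.reshape(len(crops), -1)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()

# Synthetic demo: 200 unbiased 64x64 images -> accuracy should sit near 0.5.
rng = np.random.default_rng(0)
images = rng.normal(size=(200, 64, 64))
labels = rng.integers(0, 2, size=200)
print(f"background-crop accuracy: {background_bias_score(images, labels):.2f}")
```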
13 pages, 6430 KiB  
Proceeding Paper
Detection of Non-Technical Losses in Special Customers with Telemetering, Based on Artificial Intelligence
by José Luis Llagua Arévalo and Patricio Antonio Pesántez Sarmiento
Eng. Proc. 2024, 77(1), 29; https://doi.org/10.3390/engproc2024077029 - 18 Nov 2024
Viewed by 244
Abstract
Until April 2024, the Ecuadorian electricity sector presented losses of 15.64% (6.6% technical and 9.04% non-technical), so it is important to detect the areas that potentially sub-register energy in order to reduce Non-Technical Losses (NTLs). As a distribution company, the "Empresa Eléctrica de Ambato Sociedad Anónima" (EEASA) has, to reduce NTLs, installed many smart meters for special clients, generating a large amount of stored data. This historical information is analyzed to detect anomalous consumption that is not easily recognized and constitutes a significant part of the NTLs. Machine learning with appropriate clustering techniques and deep learning neural networks work together to detect abnormal curves that record lower readings than the real energy consumption. The developed methodology uses three k-means validation indices to classify daily energy curves based on the days of the week and holidays that present similar consumption behaviors. The algorithm groups similar consumption patterns as input data sets for training, testing, and validating a densely connected classification neural network, allowing the identification of the daily curves produced by customers. The resulting system detected customers who sub-register energy. This methodology is replicable for distribution companies that store historical consumption data with Advanced Metering Infrastructure (AMI) systems.
(This article belongs to the Proceedings of The XXXII Conference on Electrical and Electronic Engineering)
Figure 1. Methodology flowchart.
Figure 2. Variability.
Figure 3. Demand.
Figure 4. Grouping using the soft-DTW k-means index for k = 5, with the centroid curves shown in red.
Figure 5. Grouping assigned values.
Figure 6. Normal and fraudulent consumption curves with percentage decrease. (a) Type 1 with 36% for customer 6 in zone 2; (b) type 2 with 56% for customer 4 in zone 1; (c) type 3 with 82% for customer 6 in zone 7.
Figure 7. Model network design for the holiday group.
Figure 8. KNIME-Python link and deep learning libraries.
Figure 9. Completed neural network in the working environment.
Figure 10. Accuracy curves of the neural network.
Figure 11. Loss curves of the neural network.
Figure 12. Weekend neural network results.
Figure 13. Results of the neural network from Monday to Friday.
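A hedged sketch of the screening step described in the abstract: cluster daily consumption curves, then flag customers whose curves fall well below their cluster's typical pattern. Plain Euclidean k-means stands in for the paper's soft-DTW k-means (tslearn's TimeSeriesKMeans with metric="softdtw" would be closer), the dense classifier is omitted, and the shapes and 35% threshold are assumptions.

```python
# Hedged sketch: cluster daily load curves, then flag customers far below
# their cluster centroid as potential sub-registration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
curves = rng.gamma(2.0, 1.0, size=(500, 24))        # 500 customers x 24 hourly readings
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(curves)

typical = km.cluster_centers_[km.labels_]           # each customer's cluster pattern
shortfall = (typical - curves).clip(min=0).sum(axis=1) / typical.sum(axis=1)
suspects = np.where(shortfall > 0.35)[0]            # readings >35% below typical
print(f"{len(suspects)} customers flagged for review")
```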
19 pages, 3451 KiB  
Article
High-Resolution Remotely Sensed Evidence Shows Solar Thermal Power Plant Increases Grassland Growth on the Tibetan Plateau
by Naijing Liu, Huaiwu Peng, Zhenshi Zhang, Yujin Li, Kai Zhang, Yuehan Guo, Yuzheng Cui, Yingsha Jiang, Wenxiang Gao and Donghai Wu
Remote Sens. 2024, 16(22), 4266; https://doi.org/10.3390/rs16224266 - 15 Nov 2024
Viewed by 402
Abstract
Solar energy plays a crucial role in mitigating greenhouse gas emissions in the context of global climate change. However, its deployment for green electricity generation can significantly influence regional climate and vegetation dynamics. While prior studies have examined the impacts of solar power plants on vegetation, the accuracy of these assessments has often been constrained by the availability of publicly accessible multispectral, high-resolution remotely sensed imagery. Given the abundant solar energy resources and the ecological significance of the Tibetan Plateau, a thorough evaluation of the vegetation effects associated with solar power installations is warranted. In this study, we utilize sub-meter resolution imagery from the GF-2 satellite to reconstruct the fractional vegetation cover (FVC) at the Gonghe solar thermal power plant through image classification, in situ sampling, and sliding window techniques. We then quantify the plant's impact on FVC by comparing data from the pre-installation and post-installation periods. Our findings indicate that the Gonghe solar thermal power plant is associated with a 0.02 increase in FVC compared to a surrounding control region (p < 0.05), representing a 12.5% increase relative to the pre-installation period. Notably, the enhancement in FVC is more pronounced in the outer ring areas than near the central tower. The observed enhancement in vegetation growth at the Gonghe plant suggests potential ecological and carbon storage benefits resulting from solar power plant establishment on the Tibetan Plateau. These findings underscore the necessity of evaluating the climate and ecological impacts of renewable energy facilities during the planning and design phases to ensure a harmonious balance between clean energy development and local ecological integrity.
(This article belongs to the Special Issue Remote Sensing of Mountain and Plateau Vegetation)
Graphical abstract
Figure 1. Study area. (a) Geolocation and true color composite image of the Gonghe solar thermal power plant, captured by GF-2 with a spatial resolution of 0.8 m. Detailed representations of the plant are provided in subfigures (b–d), with their respective locations indicated within the main figure (a).
Figure 2. Workflow of the study. The primary steps include data preprocessing, land cover classification, fractional vegetation cover (FVC) data preparation, FVC reconstruction, and assessment of FVC impacts within the Gonghe solar thermal power plant.
Figure 3. Confusion matrices of soft voting classification in this study. Subfigure (a) illustrates the confusion matrix for the training samples; subfigure (b) presents the confusion matrix for the validation samples. The F1 score and the kappa value for the validation samples are detailed in the accompanying text.
Figure 4. Soft voting classification results of the Gonghe solar thermal power plant. The classification results are detailed in subfigures (b–d), with their respective positions indicated in the main figure (a). Areas classified as bare land and impervious surfaces are represented in brown, reflecting mirrors in white, and grassland in green.
Figure 5. FVC reconstruction results of the Gonghe solar thermal power plant in 2020. The detailed results are presented in subfigures (b–d), with their respective locations indicated in the main figure (a).
Figure 6. Spatial distribution of the FVC difference and ΔFVC of the Gonghe solar thermal power plant between 2017 and 2020. Subfigure (a) illustrates the spatial distribution of FVC differences along with the boundaries of the mirror field and control region. Subfigure (b) presents boxplots of the FVC differences observed in the mirror field and control region, with the ΔFVC values and the significance of the two-sample t-test detailed in the accompanying text.
Figure 7. Distribution of the FVC difference of the Gonghe solar thermal power plant in the ring regions around the central tower and the power plant between 2017 and 2020. Subfigure (a) depicts the spatial arrangement of the rings, which include the power plant rings (Ring_p) and the control region rings (Ring_c). The control region rings are spaced at 100 m intervals, extending from 0 to 500 m beyond the boundaries of the plant. Subfigure (b) illustrates the average FVC differences for each ring, with the standard deviations represented by the shaded area of the plot.
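The core impact test in the abstract, comparing the FVC change inside the mirror field with a control region via a two-sample t-test, can be sketched in a few lines. The pixel samples below are synthetic stand-ins for the GF-2-derived FVC maps.

```python
# Hedged sketch of the impact test: compare the pre/post change in fractional
# vegetation cover (FVC) inside the mirror field with a control region using
# Welch's two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
dfvc_plant = rng.normal(0.02, 0.05, size=4000)      # FVC change, plant pixels
dfvc_control = rng.normal(0.00, 0.05, size=4000)    # FVC change, control pixels

t, p = ttest_ind(dfvc_plant, dfvc_control, equal_var=False)
print(f"dFVC = {dfvc_plant.mean() - dfvc_control.mean():.3f}, p = {p:.3g}")
```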
26 pages, 33394 KiB  
Article
Feature Intensification Using Perception-Guided Regional Classification for Remote Sensing Image Super-Resolution
by Yinghua Li, Jingyi Xie, Kaichen Chi, Ying Zhang and Yunyun Dong
Remote Sens. 2024, 16(22), 4201; https://doi.org/10.3390/rs16224201 - 11 Nov 2024
Viewed by 628
Abstract
In recent years, super-resolution technology has gained widespread attention in the field of remote sensing. Despite advancements, current methods often apply uniform reconstruction across entire remote sensing images, neglecting the inherent variability in spatial frequency distributions, particularly the distinction between high-frequency texture regions and smoother areas. This introduces redundant computation and fails to optimize the reconstruction of regions of higher complexity. To address these issues, we propose the Perception-guided Classification Feature Intensification (PCFI) network. PCFI integrates two key components: a compressed sensing classifier that optimizes speed and performance, and a deep texture interaction fusion module that enhances content interaction and detail extraction. This network mitigates the tendency of Transformers to favor global information over local details, achieving improved image information integration through residual connections across windows. Furthermore, a classifier is employed to segment sub-image blocks prior to super-resolution, enabling efficient large-scale processing. Experimental results on the AID dataset indicate that PCFI achieves state-of-the-art performance, with a PSNR of 30.87 dB and an SSIM of 0.8131, while also delivering a 4.33% improvement in processing speed over the second-best method.
Figure 1. Overall structure of the Perception-guided Classification Feature Intensification network and the integrated compressive sensing-based perception classifier module.
Figure 2. Illustration of the depth–texture interaction fusion module. The figure is divided into three sections: the top left represents the DTIF module, the top right the DTIT block, and the bottom the CWIA block.
Figure 3. The process of soft pooling (the red arrows denote the forward operation; the SoftPool output value is generated by passing the standard sum of all γ̃ in the kernel neighborhood N).
Figure 4. The process of N-Gram window sliding (when sliding the window over single-character paddings, forward N-Gram features are obtained through the WSA operation).
Figure 5. Typical samples of the AID dataset from 30 different scene classifications.
Figure 6. Comparison of FLOPs and parameters, as well as PSNR/SSIM performance, with other methods on the AID dataset at a ×2 scale.
Figure 7. Visual comparison on AID datasets at a ×3 scale. The patches used for comparison are marked in red boxes.
Figure 8. Visual comparison on the SAR dataset and AVIRIS dataset at a ×2 scale. The patches used for comparison are marked in red boxes.
Figure 9. Visual comparison of images from the Manga109, BSD100, Set5, Set14, and Urban100 datasets at a ×3 scale. The patches used for comparison are marked in red boxes.
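The routing idea, reconstructing textured blocks with the heavy path and smooth blocks with a cheap one, can be approximated with a simple frequency score. This is a hedged stand-in: the paper uses a learned compressed-sensing classifier, whereas the Laplacian-variance score, 32-pixel block size, and threshold below are illustrative assumptions.

```python
# Hedged sketch of perception-guided routing: score each block's
# high-frequency energy and send only textured blocks to the heavy branch.
import numpy as np
from scipy.ndimage import laplace

def route_blocks(img, block=32, thresh=1e-3):
    """Yield (y, x, is_textured) for each block of a grayscale image in [0, 1]."""
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            patch = img[y:y + block, x:x + block]
            yield y, x, laplace(patch).var() > thresh

# Demo: left half smooth, right half textured -> half the blocks routed heavy.
rng = np.random.default_rng(3)
img = np.zeros((128, 128))
img[:, 64:] = rng.random((128, 64))
textured = sum(t for _, _, t in route_blocks(img))
print(f"{textured} of {(128 // 32) ** 2} blocks routed to the detailed branch")
```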
23 pages, 5944 KiB  
Article
Examining Sentiment Analysis for Low-Resource Languages with Data Augmentation Techniques
by Gaurish Thakkar, Nives Mikelić Preradović and Marko Tadić
Eng 2024, 5(4), 2920-2942; https://doi.org/10.3390/eng5040152 - 7 Nov 2024
Viewed by 725
Abstract
This study investigates the influence of a variety of data augmentation techniques on sentiment analysis in low-resource languages, with a particular emphasis on Bulgarian, Croatian, Slovak, and Slovene. The primary research question is whether data augmentation can improve sentiment analysis efficacy in low-resource languages. Our sub-questions examine how different augmentation methods affect performance, how effective WordNet-based augmentation is compared to other methods, and whether lemma-based augmentation techniques can be used, especially for Croatian sentiment tasks. Our data sources are sentiment-labelled evaluations in the selected languages, curated with additional annotations to standardise labels and mitigate ambiguities. Our findings show that techniques such as replacing words with synonyms, masked language model (MLM)-based generation, and permuting and combining sentences can only slightly enlarge training datasets, and they provide limited improvements in model accuracy for low-resource language sentiment classification. WordNet-based techniques in particular exhibit marginally superior performance compared to other methods; however, they fail to substantially improve classification scores. From a practical perspective, this study emphasises that conventional augmentation techniques may require refinement to address the complex linguistic features inherent to low-resource languages, particularly in mixed-sentiment and context-rich instances. Theoretically, our results indicate that future research should concentrate on augmentation strategies that introduce novel syntactic structures rather than relying solely on lexical variations, as current models may not effectively leverage synonymic or lemmatised data. These insights highlight the nuanced requirements for meaningful data augmentation in low-resource linguistic settings and contribute to the advancement of sentiment analysis approaches.
(This article belongs to the Special Issue Feature Papers in Eng 2024)
Figure 1. Comparison of F1 scores for Bulgarian datasets. Our proposed methods are labelled with the prefix "expanded".
Figure 2. Comparison of F1 scores for Croatian datasets. Our proposed methods are labelled with the prefix "expanded".
Figure 3. Comparison of F1 scores for Slovak datasets. Our proposed methods are labelled with the prefix "expanded".
Figure 4. Comparison of F1 scores for Slovene datasets. Our proposed methods are labelled with the prefix "expanded".
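Synonym replacement, one of the augmentation methods compared above, is easy to sketch. The example uses NLTK's English WordNet for brevity (the paper works with Bulgarian, Croatian, Slovak, and Slovene resources) and requires nltk.download("wordnet") once beforehand.

```python
# Hedged sketch of WordNet-based synonym replacement for data augmentation.
import random
from nltk.corpus import wordnet

def synonym_augment(sentence, n_swaps=2, seed=0):
    rng = random.Random(seed)
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if wordnet.synsets(w)]
    for i in rng.sample(candidates, min(n_swaps, len(candidates))):
        lemmas = {l.name().replace("_", " ")
                  for s in wordnet.synsets(words[i]) for l in s.lemmas()}
        lemmas.discard(words[i])
        if lemmas:                       # replace with a deterministic synonym
            words[i] = rng.choice(sorted(lemmas))
    return " ".join(words)

print(synonym_augment("the service was quick and the food was excellent"))
```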
20 pages, 5623 KiB  
Article
Tropical Cyclone Wind Direction Retrieval Based on Wind Streaks and Rain Bands in SAR Images
by Zhancai Liu, Hongwei Yang, Weihua Ai, Kaijun Ren, Shensen Hu and Li Wang
Remote Sens. 2024, 16(20), 3837; https://doi.org/10.3390/rs16203837 - 15 Oct 2024
Viewed by 665
Abstract
Tropical cyclones (TCs) are associated with severe weather phenomena, making accurate wind field retrieval crucial for TC monitoring. SAR's high-resolution imaging capability provides detailed information for TC observation, and wind speed retrieval requires wind direction as prior information, so utilizing SAR images to retrieve TC wind fields is of significant importance. This study introduces a novel approach for retrieving wind direction from SAR images of TCs through the classification of TC sub-images. The method uses a transfer learning-based Inception V3 model to identify wind streaks (WSs) and rain bands in SAR images under TC conditions. For sub-images containing WSs, the Mexican-hat wavelet transform is applied; for sub-images containing rain bands, an edge detection technique is used to locate the center of the TC eye, and the tangent to the spiral rain bands is then employed to determine the wind direction associated with the rain bands. Wind direction retrieval from 10 SAR TC images showed an RMSD of 19.52° and a correlation coefficient of 0.96 when compared with ECMWF and HRD observation wind directions, demonstrating satisfactory consistency and providing highly accurate TC wind directions. These results confirm the method's potential for TC wind direction retrieval.
Figure 1. Example of geophysical phenomena in SAR images. The first row represents wind streaks (G), the second row rain bands (I), and the third row other geophysical phenomena (A).
Figure 2. Flowchart of retraining the recognition model based on transfer learning and of wind direction retrieval from TC SAR images.
Figure 3. The architecture of transfer learning.
Figure 4. The wind direction of rain band locations in Northern Hemisphere TCs.
Figure 5. Accuracy and loss of the training set (blue lines) and validation set (orange lines).
Figure 6. Sub-image recognition results for SAR TC images. "G" represents WSs, "I" rain bands, and "A" other geophysical phenomena.
Figure 7. Wind direction retrieval from SAR TC sub-images using the 2-D Mexican-hat wavelet transform. (a) SAR sub-image; (b) FFT result; (c) Mexican-hat wavelet transform result; (d) wind direction of the sub-image.
Figure 8. Canny edge detection results for TC Douglas. The NRCS for VV and VH polarizations is presented in (a,d); the rain band distributions for VV and VH in (b,e); the TC eye positions for VV and VH in (c,f).
Figure 9. Canny edge detection results for TC Larry. The NRCS for VV and VH polarizations is presented in (a,d); the rain band distributions for VV and VH in (b,e); the TC eye positions for VV and VH in (c,f).
Figure 10. Schematic diagram of wind directions with 180° ambiguity and the reference wind direction. For two predicted wind directions θ_p1 and θ_p2 that are aligned but point in opposite directions, the smaller the Δθ calculated relative to the reference wind direction θ_t, the closer it is to the true wind direction.
Figure 11. The wind field rotational pattern of TCs. (a) TCs in the Northern Hemisphere; (b) TCs in the Southern Hemisphere.
Figure 12. Wind direction retrieval results for TC Douglas, acquired on 25 July 2020. (a) Quick-look of the VV-polarized SAR image over TC Douglas; (b) wind direction retrieval results; (c) ECMWF wind direction; (d) comparison of the retrieved wind direction with ECMWF and HRD observation wind directions.
Figure 13. Wind direction retrieval results for TC Larry, acquired on 7 September 2021. (a) Quick-look of the VV-polarized SAR image over TC Larry; (b) wind direction retrieval results; (c) ECMWF wind direction; (d) comparison of the retrieved wind direction with ECMWF and HRD observation wind directions.
Figure 14. Comparison of wind directions retrieved from 10 SAR TC images with ECMWF reanalysis and HRD observation wind directions.
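For a wind-streak sub-image, the orientation can be estimated from the spectrum, since spectral energy concentrates perpendicular to the streaks. The sketch below uses a plain FFT peak instead of the paper's 2-D Mexican-hat wavelet transform and ignores the 180° ambiguity that the paper resolves with the TC rotation pattern; it is an illustration, not the published method.

```python
# Hedged sketch: estimate wind-streak orientation from the FFT power
# spectrum; a plain spectral peak stands in for the Mexican-hat wavelet step.
import numpy as np

def streak_direction(patch):
    """Dominant streak orientation of a 2-D patch, in degrees within [0, 180)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    band = (r > 2) & (r < min(h, w) // 4)           # ignore DC and highest freqs
    iy, ix = np.unravel_index(np.argmax(spec * band), spec.shape)
    angle = np.degrees(np.arctan2(iy - h // 2, ix - w // 2))
    return (angle + 90.0) % 180.0                   # streaks are perpendicular to the peak

stripes = np.sin(0.6 * np.arange(128))[None, :] * np.ones((128, 1))
print(f"estimated orientation: {streak_direction(stripes):.1f} deg")  # ~90
```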
19 pages, 4794 KiB  
Article
An Efficient Ensemble Approach for Brain Tumors Classification Using Magnetic Resonance Imaging
by Zubair Saeed, Tarraf Torfeh, Souha Aouadi, (Jim) Xiuquan Ji and Othmane Bouhali
Information 2024, 15(10), 641; https://doi.org/10.3390/info15100641 - 15 Oct 2024
Viewed by 1252
Abstract
Tumors in the brain can be life-threatening, making early and precise detection crucial for effective treatment and improved patient outcomes. Deep learning (DL) techniques have shown significant potential in automating the early diagnosis of brain tumors by analyzing magnetic resonance imaging (MRI), offering a more efficient and accurate approach to classification. Deep convolutional neural networks (DCNNs), a sub-field of DL, have the potential to analyze MRI data rapidly and accurately and, as such, assist human radiologists, facilitating quicker diagnoses and earlier treatment initiation. This study presents an ensemble of three high-performing DCNN models, i.e., DenseNet169, EfficientNetB0, and ResNet50, for accurate classification of brain tumors and non-tumor MRI samples. Our proposed ensemble model demonstrates significant improvements over various evaluation parameters compared to individual state-of-the-art (SOTA) DCNN models. We implemented ten SOTA DCNN models, i.e., EfficientNetB0, ResNet50, DenseNet169, DenseNet121, SqueezeNet, ResNet34, ResNet18, VGG16, VGG19, and LeNet5, and provide a detailed performance comparison. We evaluated these models using two learning rates (LRs) of 0.001 and 0.0001 and two batch sizes (BSs) of 64 and 128 and identified the optimal hyperparameters for each model. Our findings indicate that the ensemble approach outperforms individual models, achieving 92% accuracy, 90% precision, 92% recall, and an F1 score of 91% at a 64 BS and 0.0001 LR. This study not only highlights the superior performance of the ensemble technique but also offers a comprehensive comparison with the latest research.
(This article belongs to the Special Issue Detection and Modelling of Biosignals)
Graphical abstract
Figure 1. Block diagram of our proposed methodology.
Figure 2. Dataset samples: (a) no tumor, (b) glioma, (c) meningioma, and (d) pituitary.
Figure 3. Overall dataset distribution against each class.
Figure 4. (a) Accuracy, (b) precision, (c) recall, and (d) F1 score of the proposed ensemble model (64 BS, 0.0001 LR), EfficientNetB0 (64 BS, 0.0001), ResNet50 (64 BS, 0.0001), DenseNet169 (64 BS, 0.001), DenseNet121 (64 BS, 0.001), SqueezeNet (64 BS, 0.001), ResNet34 (64 BS, 0.001), ResNet18 (64 BS, 0.001), VGG16 (128 BS, 0.001), VGG19 (128 BS, 0.001), and LeNet5 (64 BS, 0.001).
Figure 5. Training and validation plots of (a) the proposed ensemble model (64 BS, 0.0001 LR), (b) EfficientNetB0 (64 BS, 0.0001), (c) ResNet50 (64 BS, 0.0001), (d) DenseNet169 (64 BS, 0.001), (e) DenseNet121 (64 BS, 0.001), (f) SqueezeNet (64 BS, 0.001), (g) ResNet34 (64 BS, 0.001), (h) ResNet18 (64 BS, 0.001), (i) VGG16 (128 BS, 0.001), (j) VGG19 (128 BS, 0.001), and (k) LeNet5 (64 BS, 0.001).
Figure 6. False predictions of the proposed ensemble technique.
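The ensemble itself reduces to averaging softmax outputs. A hedged sketch with torchvision backbones follows (recent torchvision API; heads are re-initialized for the four classes, so real use requires fine-tuning first).

```python
# Hedged sketch of the soft ensemble: average softmax outputs of DenseNet169,
# EfficientNetB0, and ResNet50; untrained backbones stand in for the models.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 4                                    # no tumor, glioma, meningioma, pituitary
dense = models.densenet169(weights=None)
dense.classifier = nn.Linear(dense.classifier.in_features, n_classes)
effnet = models.efficientnet_b0(weights=None)
effnet.classifier[1] = nn.Linear(effnet.classifier[1].in_features, n_classes)
resnet = models.resnet50(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, n_classes)
ensemble = [dense, effnet, resnet]

@torch.no_grad()
def ensemble_predict(x):
    probs = [m.eval()(x).softmax(dim=1) for m in ensemble]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

print(ensemble_predict(torch.randn(2, 3, 224, 224)))  # two dummy MRI-sized inputs
```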
30 pages, 22835 KiB  
Review
Ceramics for Microelectromechanical Systems Applications: A Review
by Ehsan Fallah Nia and Ammar Kouki
Micromachines 2024, 15(10), 1244; https://doi.org/10.3390/mi15101244 - 9 Oct 2024
Viewed by 3661
Abstract
A comprehensive review of the application of different ceramics for MEMS devices is presented. The main ceramic materials used for MEMS systems and devices, including alumina, zirconia, aluminum nitride, silicon nitride, and LTCC, are introduced. Conventional and new methods of fabricating each material are explained based on the literature, along with the advantages of the new approaches, mainly additive manufacturing, i.e., 3D-printing technologies. Various manufacturing processes with relevant sub-techniques are detailed, and those most suitable for MEMS applications are highlighted with their properties. In the main body of this paper, each material and its MEMS applications are categorized and explained. The majority of works fall within three main classifications: (i) using ceramics as a substrate on which MEMS devices are mounted or fabricated; (ii) ceramics as part of the materials of an MEMS device, or monolithic fabrication of MEMS and ceramics; and (iii) ceramics as a packaging solution for MEMS devices. We elaborate on how ceramics may be superior substitutes for other materials when delicate MEMS-based systems need to be assembled or packaged with a simpler fabrication process, as well as their advantages when operating in harsh environments.
(This article belongs to the Special Issue The 15th Anniversary of Micromachines)
Figure 1. Conventional manufacturing of ceramics from beginning to final product [55].
Figure 2. Different types of additive manufacturing and their techniques based on ISO classification [56,57].
Figure 3. AM approaches for Si3N4 manufacturing: (A) SLS/SLM; (B) SLA; (C) LIS; (D) DLP, LCD; (E) DIW; (F) FDM; (G) BJ; (H) 3D printing (3DP); (I) LOM [63].
Figure 4. Five different 3D-printing techniques: Digital Light Processing (DLP), material jetting (MJ), Stereolithography (SLA), Fused Deposition Modeling (FDM), and Direct Ink Writing (DIW) [64].
Figure 5. Microstructure of (a) monolithic and (b) multi-structure fabrication [66].
Figure 6. Shrinkage rate of the sintered ceramic with different sintering material contents, (a) TiO2 (top), (b) CaCO3 (left), and (c) MgO (right), in all directions (X in black, Y in red, Z in blue); straight lines indicate the shrinkage before adding materials [67].
Figure 7. DLP technique for AlN 3D manufacturing [69].
Figure 8. Effect of laser power and velocity variation on the surface morphology of a zirconia sample [76].
Figure 9. Effect of preheat temperature on sample cracks [76].
Figure 10. LTCC substrate and surface metallization by the MJ technique [80].
Figure 11. Machine for (a) flat and (b) curved printing of LTCC [80].
Figure 12. Microstrip patch antenna and RF measurements including S11, VSWR, and gain: (a) 3D perspective of the circuit; (b) fabricated circuit; (c) S11 simulation and measurement; (d) VSWR simulation; (e) gain and efficiency; (f) 3D radiation [80].
Figure 13. Fabricated curved LTCC with metallization on top by the MJ technique: (a) fabricated curved LTCC; (b) schematic diagram of the curved surface; (c) shrunken circuit after sintering with side views [80].
Figure 14. LTCC powder-preparation steps [81].
Figure 15. LTCC slurry and tape preparation [81].
Figure 16. (a–i) Fabrication process of the Pt film; (j) overall view of the sensor; (k) zoomed view of the sensitive area. Sensor performance (resistance variation vs. temperature) on the right [91].
Figure 17. Fabrication process of the AR lens: (a) fabrication steps of the AR lens; (b) polished surface of the curved aluminum; (c) nanoporous alumina on curved aluminum; (d) final optical image of the lens (left); AR lens vs. normal lens comparison (top right); (a) AR lens under yellow light; (b) nanopillars created by anodization of aluminum (bottom right) [95].
Figure 18. Alumina membrane gas sensor fabrication process (left); gas sensor under test at different temperatures, (a–c) Si3N4 and (d–f) Al2O3 μHP (right) [98].
Figure 19. Fabricated bridge sealed with alumina and silicon nitride (left); measured S-parameters (right) [99].
Figure 20. Fabricated alumina nanopores with high aspect ratio: (a) two-step anodization; (b) Cu seed layer deposition; (c) photoresist spin coating; (d) photolithography and patterning; (e) Cu electroplating; (f) photoresist removal; (g) Cu seed layer etching; (h) AAO membrane etching (left). Thin-film packaging using glow discharge (center) and fabricated view from the top illustrating anode and cathode metals, (a) top view, (b) SEM image (right) [100,101].
Figure 21. Flip-chip assembly on zirconia-silicate [103].
Figure 22. Fabricated flexible solar cell [104].
Figure 23. Fabrication process of the micro-thruster [105].
Figure 24. Ammonia sensor and readout circuit (left), fabricated circuit (center), and measured sensor results (right) [106].
Figure 25. Polishing machine (left); gaps found on the surface of AlN after polishing (right) [108].
Figure 26. (a) PMUT device top view; (b) cross section with different layers including the AlN piezo layer [112].
Figure 27. Different types of PMUT devices in arrays: (a) top view of a PMUT device; (b) 3D design of PMUT arrays; (c) top view of PMUT arrays; (d) dimensions of PMUT arrays as a MEMS chip on a CMOS device [112].
Figure 28. (a) 3D view of an AlN Lamb wave resonator; (b) cross section of an AlN BAW resonator; (c) cross section of a resonator with centered anchor; (d) cross section of a conventional Lamb wave resonator [112].
Figure 29. SEM image of the fabricated 3 × 1 optical switch with zoomed views and dimensions: (a) fabricated device top view; (b) mechanical stopper gap; (c) switching actuator gap; (d) air gap of the gap-closing actuator; (e) air gap closing interface; (f) etch profile of the optical stack [129].
Figure 30. Silicon nitride sealing fabrication process [130].
Figure 31. LTCC layers with embedded vias, cavities, and metallization as an active substrate: (left) polished surface with vias on top; (right) active components and MEMS devices on top after final monolithic fabrication [134].
Figure 32. Fabricated capacitive MEMS switch with the LTCC-MEMS monolithic process: (a) top image of the fabricated switch; (b) enlarged view; (c) SEM image (left); LTCC-MEMS process flow (right) [134].
Figure 33. Cavities and via holes acting as a fluidic system for sensing applications with an embedded sensor [139].
Figure 34. Fabricated cantilever with LTCC ceramic materials (left); LTCC hotplate (right) [139].
Figure 35. LTCC humidity sensors made of different LTCC ceramic materials [139].
15 pages, 11845 KiB  
Article
Situational Awareness Classification Based on EEG Signals and Spiking Neural Network
by Yakir Hadad, Moshe Bensimon, Yehuda Ben-Shimol and Shlomo Greenberg
Appl. Sci. 2024, 14(19), 8911; https://doi.org/10.3390/app14198911 - 3 Oct 2024
Viewed by 880
Abstract
Situational awareness detection and the characterization of mental states play a vital role in medicine and many other fields. The electroencephalogram (EEG) is one of the most effective tools for identifying and analyzing cognitive stress, yet measuring, interpreting, and classifying EEG signals is a challenging task. This study introduces a novel machine learning-based approach to situational awareness detection using EEG signals and spiking neural networks (SNNs) based on a unique spike continuous-time neuron (SCTN). The implemented biologically inspired SNN architecture is used for effective EEG feature extraction by applying time–frequency analysis techniques and allows adept detection and analysis of the various frequency components embedded in the different EEG sub-bands. The EEG signal is encoded into spikes and fed into an SNN model, which is well suited to the serial sequence order of the EEG data. We utilize the SCTN-based resonator for EEG feature extraction in the frequency domain, which demonstrates high correlation with classical FFT features. A new SCTN-based 2D neural network is introduced for efficient EEG feature mapping, aiming to achieve a spatial representation of each EEG sub-band. To validate and evaluate the performance of the proposed approach, a common, publicly available EEG dataset is used. The experimental results show that using the extracted EEG frequency features and the SCTN-based SNN classifier, the mental state can be classified with an average accuracy of 96.8% on this dataset. Our proposed method outperforms existing machine learning-based methods and demonstrates the advantages of using SNNs for situational awareness detection and mental state classification.
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) SNN-based resonator architecture and (<b>b</b>) the neurons’ output.</p>
Full article ">Figure 2
<p>Overall SNN-based architecture. (<b>a</b>) EEG electrodes positioned according to 10–20 standard. (<b>b</b>) SNN-based resonators used for feature extraction through supervised STDP learning. (<b>c</b>) Feature mapping, with EEG topologic map consisting of <math display="inline"><semantics> <mrow> <mn>11</mn> <mo>×</mo> <mn>11</mn> </mrow> </semantics></math> SCTNs. (<b>d</b>) SCTN-based classification network trained with unsupervised STDP.</p>
Full article ">Figure 3
<p>SCTN-based resonator frequency response to a chirp signal in the EEG sub-band ranges.</p>
Full article ">Figure 4
<p>EEG frequency features. (<b>a</b>,<b>c</b>) SNN-based spikegram and (<b>b</b>,<b>d</b>) FFT spectrogram.</p>
Full article ">Figure 5
<p>EEG feature mapping. (<b>a</b>) EEG topologic map consisting of <math display="inline"><semantics> <mrow> <mn>11</mn> <mo>×</mo> <mn>11</mn> </mrow> </semantics></math> SCTNs for each sub band. (<b>b</b>) Weight distribution for the 14 synapses of each SCTN, according to an EEG electrode position map.</p>
Full article ">Figure 6
<p>EEG sub-band topography maps for FFT vs. SCTN. SCTN-based topographic maps (one for each sub-band) are compared to the spacial maps created using an FFT.</p>
Full article ">Figure 7
<p>EEG topography maps for the five EEG sub-bands (delta, theta, alpha, beta, and gamma).</p>
Full article ">Figure 8
<p>The activity measured in the <span class="html-italic">F3</span> EEG electrode for the delta sub-band.</p>
Full article ">
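The classical FFT features that the SCTN resonators are validated against can be sketched as Welch band power per electrode and sub-band. Band edges follow common conventions; the 128 Hz sampling rate and 14-electrode layout are assumptions.

```python
# Hedged sketch of the classical FFT baseline: Welch band power per electrode
# for the five EEG sub-bands (delta, theta, alpha, beta, gamma).
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=128.0):
    """eeg: (channels, samples) -> (channels, 5) mean band-power features."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(eeg.shape[1], 4 * int(fs)), axis=1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.stack(feats, axis=1)

rng = np.random.default_rng(4)
eeg = rng.normal(size=(14, 10 * 128))   # 14 electrodes, 10 s at 128 Hz
print(band_powers(eeg).shape)           # (14, 5)
```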
17 pages, 14569 KiB  
Article
Cross-Country Ski Skating Style Sub-Technique Detection and Skiing Characteristic Analysis on Snow Using High-Precision GNSS
by Shunya Uda, Naoto Miyamoto, Kiyoshi Hirose, Hiroshi Nakano, Thomas Stöggl, Vesa Linnamo, Stefan Lindinger and Masaki Takeda
Sensors 2024, 24(18), 6073; https://doi.org/10.3390/s24186073 - 19 Sep 2024
Viewed by 930
Abstract
A comprehensive analysis of cross-country skiing races is a pivotal step in establishing effective training objectives and tactical strategies. This study aimed to develop a method of classifying sub-techniques and analyzing skiing characteristics during cross-country skiing skating style timed races on snow using high-precision kinematic GNSS devices. The study involved attaching GNSS devices to the heads of two athletes during skating style timed races on cross-country ski courses. These devices provided precise positional data and recorded vertical and horizontal head movements and velocity over ground (VOG). Based on these data, sub-techniques were classified by defining waveform patterns for G2, G3, G4, and G6P (G6 with poling action). The validity of the classification was verified by comparing the GNSS data with video analysis, a process that yielded classification accuracies ranging from 95.0% to 98.8% for G2, G3, G4, and G6P. Notably, G4 emerged as the fastest technique, with sub-technique selection varying among skiers and being influenced by skiing velocity and course inclination. The study's findings have practical implications for athletes and coaches as they demonstrate that high-precision kinematic GNSS devices can accurately classify sub-techniques and detect skiing characteristics during skating style cross-country skiing races, thereby providing valuable insights for training and strategy development.
(This article belongs to the Special Issue Sensors and Wearable Technologies in Sport Biomechanics)
Figure 1. The Ikenotaira cross-country ski course, Japan, used in this study. The plotted data were obtained from the study subject, covering one lap of 0.8 km. The figure shows the course profile's plan view data (a) and course inclination data (b).
Figure 2. The experimental setup. The GNSS antenna was attached to the skier's head, and the receiver and mobile router were stored in a small bag at the skier's waist. This setup obtained head positioning data (latitude, longitude, altitude, and VOG) during the timed race.
Figure 3. Typical waveform patterns of subject A (a) and subject B (b) for G2, G3, G4, and G6P. The black dashed lines indicate the points where the net vertical head movement reaches a peak; the interval between two black lines represents one cycle. The green lines indicate the VOG. The blue waveform shows the trajectory of the net vertical head movement, the red waveform the trajectory of the net horizontal head movement, and the red bars the amplitude of the net horizontal head movement.
Figure 4. Quality of the positional data obtained from the RTK GNSS devices for subject A and subject B. Green indicates the fix solution, orange the float solution, and blue the dGNSS solution.
Figure 5. Usage ratio over time (a) and over distance (b) for each sub-technique during the timed race.
Figure 6. Distribution of sub-techniques used by the two subjects during the second lap of the timed race, shown on the course profile's plan view data.
Figure 7. Distribution of sub-techniques used by the two subjects during the second lap of the timed race, shown on the course inclination data.
Figure 8. Distribution of sub-techniques used by the two subjects during the second lap of the timed race. The X-axis indicates the distance traveled, and the Y-axis the VOG of the skier's head.
Figure 9. CL, CT, skiing velocity, and course inclination data for subject A's and subject B's sub-techniques during the timed race. Each sub-technique cycle was defined from one vertical head-movement peak to the next. The horizontal line within each box represents the median, and the "x" symbol denotes the mean. ** indicates a significance level of p < 0.01.
Figure 10. Distribution of the four sub-techniques used by the two subjects during the timed race, shown with skiing velocity (X-axis) and course inclination (Y-axis).
Figure 11. Distribution of the four sub-techniques used by the two subjects during the timed race, shown as a histogram of skiing velocity frequencies.
Figure 12. Distribution of the four sub-techniques used by the two subjects during the timed race, shown as a histogram of course inclination frequencies.
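The cycle segmentation underlying the classification can be sketched with peak detection on the vertical head-movement signal. The G2/G3/G4/G6P decision rules from the paper's waveform definitions are not reproduced here; the signal, 10 Hz rate, and peak parameters are synthetic assumptions.

```python
# Hedged sketch of cycle segmentation: detect peaks in the vertical head
# movement and take peak-to-peak spans as cycles.
import numpy as np
from scipy.signal import find_peaks

fs = 10.0                                        # assumed GNSS positioning rate (Hz)
t = np.arange(0, 30, 1 / fs)
vertical = 0.05 * np.sin(2 * np.pi * 1.2 * t)    # ~1.2 Hz head bobbing, metres

peaks, _ = find_peaks(vertical, distance=int(0.4 * fs), prominence=0.02)
cycle_times = np.diff(peaks) / fs                # one cycle per peak-to-peak span
print(f"{len(cycle_times)} cycles, mean cycle time {cycle_times.mean():.2f} s, "
      f"rate {1 / cycle_times.mean():.2f} Hz")
```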
25 pages, 8181 KiB  
Article
A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI
by Lucas Rizzo, Damiano Verda, Serena Berretta and Luca Longo
Mach. Learn. Knowl. Extr. 2024, 6(3), 2049-2073; https://doi.org/10.3390/make6030101 - 12 Sep 2024
Viewed by 814
Abstract
Explainable Artificial Intelligence (XAI) is a research area that clarifies AI decision-making processes to build user trust and promote responsible AI. Hence, a key scientific challenge in XAI is the development of methods that generate transparent and interpretable explanations while maintaining scalability and effectiveness in complex scenarios. Rule-based methods in XAI generate rules that can potentially explain AI inferences, yet they can also become convoluted in large scenarios, hindering their readability and scalability. Moreover, they often lack contrastive explanations, leaving users uncertain why specific predictions are preferred. To address this scientific problem, we explore the integration of computational argumentation, a sub-field of AI that models reasoning processes through defeasibility, into rule-based XAI systems. Computational argumentation enables arguments modelled from rules to be retracted based on new evidence. This makes it a promising approach to enhancing rule-based methods for creating more explainable AI systems. Nonetheless, research on their integration remains limited despite the appealing properties of rule-based systems and computational argumentation. Therefore, this study also addresses the applied challenge of implementing such an integration within practical AI tools. The study employs the Logic Learning Machine (LLM), a specific rule-extraction technique, and presents a modular design that integrates input rules into a structured argumentation framework using state-of-the-art computational argumentation methods. Experiments conducted on binary classification problems using various datasets from the UCI Machine Learning Repository demonstrate the effectiveness of this integration. The LLM technique excelled in producing a manageable number of if-then rules with a small number of premises while maintaining high inferential capacity for all datasets. In turn, argument-based models achieved comparable results to those derived directly from if-then rules, leveraging a concise set of rules and excelling in explainability. In summary, this paper introduces a novel approach for efficiently and automatically generating arguments and their interactions from data, addressing both scientific and applied challenges in advancing the application and deployment of argumentation systems in XAI.
(This article belongs to the Section Data)
Show Figures

Figure 1: Illustration of the integration of a data-driven rule-generator (Logic Learning Machine) and a rule-aggregator with non-monotonic logic (structured argumentation).
Figure 2: An illustration of a multipartite argumentation graph. A node represents each argument and has an if-then internal structure following Equation (1) (premises are omitted for the sake of simplicity). Arguments a–c share a common output class, whereas arguments d–f share a different one. Each argument in a partite attacks all the other arguments in the other partite.
Figure 3: An illustrative example of the elicitation of arguments and the definition of their dialectical status. Node labels contain the argument label and its weight. The premise of argument a does not hold true with the input data, so it is discarded along with its incoming/outgoing attacks. For graphs 1, 2, 3, and 4, respectively: the attacks {∅}, {d→c}, {d→c, c→b}, and {d→c, c→d, c→b} are removed to respect the defined inconsistency budget; the grounded extensions are {∅}, {∅}, {c}, and {c, d}; the preferred extensions are {{c}, {b, d}}, {{c}, {b, d}}, {c}, and {c, d}; and the top-ranked arguments for the categoriser are {b, d}, {b, c, d}, {c}, and {c, d}.
Figure 4: Design of a comparative experiment with four main steps: (a) selection and pre-processing of four datasets for binary classification tasks; (b) automatic formation of if-then rules from the selected dataset using the Logic Learning Machine (LLM) technique; (c) generation of final inferences using two rule-aggregator logics: the Standard Applied Procedure and computational argumentation; (d) comparative analysis via standard binary classification metrics and the percentage of undecided cases (NAs: when a model cannot lead to a final inference).
Figure 5: Overall results for inferences produced using the CARS dataset, grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)) and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure 6: Overall results for inferences produced using the CENSUS dataset, grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)) and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure 7: Overall results for inferences produced using the BANK dataset, grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)) and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure 8: Overall results for inferences produced using the MYOCARDIAL dataset, grouped by the error threshold per rule parameter (10% for the first four blocks (a–d), 25% for the second four blocks (e–h)) and divided by inconsistency budget variation (25%, 50%, 90%, 100%).
Figure A1: Example of an argumentation graph generated from the if-then rules extracted for the CENSUS dataset using the LLM technique with a 10% error threshold per rule. (a) All arguments and attacks with no input data; (b,c) two examples of accepted (green) and rejected (red) arguments from some input data using the preferred semantics.
Figure A2: Example of an argumentation graph generated from the if-then rules extracted for the CENSUS dataset using the LLM technique with a 10% error threshold per rule. (a) All arguments and attacks with no input data; (b,c) two examples of accepted (green) and rejected (red) arguments from some input data using the preferred semantics.
Figure A3: Examples of the open-source ArgFrame framework [38] instantiated with argumentation graphs generated for the CENSUS dataset. It is possible to hover over nodes to analyze their internal structure. Data can also be imported, allowing the visualization of case-by-case inferences. Its use is recommended for a better understanding of the available functionalities.
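For readers wanting to reproduce the dialectical-status computation in the Figure 3 caption, the grounded extension can be obtained by iterating Dung's characteristic function to a fixed point. This is the standard textbook procedure, not code from the paper:

```python
def grounded_extension(arguments, attacks):
    """Iteratively collect arguments whose every attacker is counter-attacked
    by the current set (Dung's characteristic function) until stable."""
    attackers = {a: {s for s, t in attacks if t == a} for a in arguments}
    extension = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended
```

For example, with arguments {b, c, d} and attacks {d→c, c→b}, the function returns {b, d}: d is unattacked, and d's attack on c defends b.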
23 pages, 13140 KiB  
Article
MSCR-FuResNet: A Three-Residual Network Fusion Model Based on Multi-Scale Feature Extraction and Enhanced Channel Spatial Features for Close-Range Apple Leaf Diseases Classification under Optimal Conditions
by Xili Chen, Xuanzhu Xing, Yongzhong Zhang, Ruifeng Liu, Lin Li, Ruopeng Zhang, Lei Tang, Ziyang Shi, Hao Zhou, Ruitian Guo and Jingrong Dong
Horticulturae 2024, 10(9), 953; https://doi.org/10.3390/horticulturae10090953 - 6 Sep 2024
Viewed by 737
Abstract
The precise and automated diagnosis of apple leaf diseases is essential for maximizing apple yield and advancing agricultural development. Despite the widespread utilization of deep learning techniques, several challenges persist: (1) the presence of small disease spots on apple leaves poses difficulties for models to capture intricate features; (2) the high similarity among different types of apple leaf diseases complicates their differentiation; and (3) images with complex backgrounds often exhibit low contrast, thereby reducing classification accuracy. To tackle these challenges, we propose a three-residual fusion network known as MSCR-FuResNet (Fusion of Multi-scale Feature Extraction and Enhancements of Channels and Residual Blocks Net), which consists of three sub-networks: (1) enhancing detailed feature extraction through multi-scale feature extraction; (2) improving the discrimination of similar features by suppressing insignificant channels and pixels; and (3) increasing low-contrast feature extraction by modifying the activation function and residual blocks. The model was validated with a comprehensive dataset from public repositories, including Plant Village and Baidu Flying Paddle. Various data augmentation techniques were employed to address class imbalance. Experimental results demonstrate that the proposed model outperforms ResNet-50 with an accuracy of 97.27% on the constructed dataset, indicating significant advancements in apple leaf disease recognition. Full article
(This article belongs to the Section Plant Pathology and Disease Management (PPDM))
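Sub-network (2)'s suppression of insignificant channels is reminiscent of squeeze-and-excitation-style attention. The PyTorch sketch below shows one plausible form of a channel-enhanced residual block under that assumption; the paper's actual block design and hyperparameters may differ (`channels` is assumed to be at least `reduction`).

```python
import torch.nn as nn

class ChannelEnhancedResidualBlock(nn.Module):
    """Residual block with SE-style channel gating that down-weights
    uninformative channels before the skip connection is added."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.gate = nn.Sequential(  # channel-attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.body(x)
        y = y * self.gate(y)    # suppress insignificant channels
        return self.act(x + y)  # residual addition
```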
Show Figures

Figure 1: Example images of six apple leaf diseases and asymptomatic (healthy) apple leaves.
Figure 2: Examples of apple leaf disease images with complex backgrounds processed by U-Net (simulation).
Figure 3: Example images after applying seven data augmentation methods, together with the original image.
Figure 4: Overall network architecture, illustrating how the network operates.
Figure 5: Mechanism of multi-scale feature fusion.
Figure 6: Structural diagram of image channel feature enhancement.
Figure 7: Structure for enhancing image spatial and channel features. (A) represents the structure designed to enhance the spatial features, while (B) focuses on enhancing the channel features of the image.
Figure 8: Diagrams of residual block structures before and after modification. Part (a) illustrates the residual block structure of ResNet-50, while part (b) shows the structure of the modified residual block.
Figure 9: Fusion methods of the subnetworks, detailing the model fusion mechanism and its operation.
Figure 10: Comparison of validation accuracy (A) and training loss (B) for apple leaf disease image classification. Part A: a model with only subnet A. Part B: a model with only subnet B. Part C: a model with only subnet C. Part D: a fusion model incorporating subnets A, B, and C.
Figure 11: Comparison of subnetwork fusion results in terms of accuracy for apple leaf disease image classification. Part A: a model with only subnet A. Part B: a model with only subnet B. Part C: a model with only subnet C. Part D: a fusion model incorporating subnets A, B, and C.
Figure 12: Ablation experiments verifying classification accuracy for apple leaf disease image classification. Part A integrates the improved subnetworks A, B, and C, forming MSCR-FuResNet; Part B combines the improved subnetworks A and B with the original, unimproved residual block network C; Part C merges subnetwork A, which lacks a multi-scale feature fusion mechanism, with the improved subnetworks B and C; Part D fuses the improved subnetworks A and C with subnetwork B, which does not include spatial feature enhancement or the mechanism for secondary enhancement of channel features at the end of the residual block within the residual structure.
Figure 13: Comparison of ablation experiment results in terms of accuracy for apple leaf disease image classification. Part A integrates the improved subnetworks A, B, and C, forming MSCR-FuResNet; Part B combines the improved subnetworks A and B with the original, unimproved residual block network C; Part C merges subnetwork A, which lacks a multi-scale feature fusion mechanism, with the improved subnetworks B and C; and Part D fuses the improved subnetworks A and C with subnetwork B, which does not include spatial feature enhancement or the mechanism for secondary enhancement of channel features at the end of the residual block within the residual structure.
Figure 14: Comparative analysis of the subnetworks and the original ResNet-50 network in terms of accuracy for apple leaf disease image classification.
Figure 15: Accuracy of apple leaf disease image classification before and after data augmentation.
Figure 16: Accuracy of apple leaf disease image classification before and after data augmentation.
Figure 17: Comparison of network methods in terms of accuracy for apple leaf disease image classification.
Figure 18: Comparison of network accuracy for apple leaf disease image classification.
Figure A1: Diagram of the three-dimensional structure of subnetwork A.
Figure A2: Diagram of the three-dimensional structure of subnetwork B.
Figure A3: Structural diagram of stacked small residual blocks in a, b, c, and d.
Figure A4: Diagram of the three-dimensional structure of subnetwork C.
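Figure 9's subnetwork fusion can, in its simplest baseline form, concatenate the three branches' feature vectors ahead of a shared classifier. The sketch below assumes each subnetwork already returns a flattened feature vector; the module names and `feat_dim` are hypothetical, and `num_classes=7` reflects the six diseases plus the healthy class.

```python
import torch
import torch.nn as nn

class ThreeBranchFusion(nn.Module):
    """Concatenate feature vectors from three backbone branches and
    classify the fused representation (one simple fusion strategy)."""
    def __init__(self, subnet_a, subnet_b, subnet_c,
                 feat_dim: int, num_classes: int = 7):
        super().__init__()
        self.branches = nn.ModuleList([subnet_a, subnet_b, subnet_c])
        self.classifier = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]  # each (N, feat_dim)
        return self.classifier(torch.cat(feats, dim=1))
```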
24 pages, 32875 KiB  
Article
Integrating Sequential Backward Selection (SBS) and CatBoost for Snow Avalanche Susceptibility Mapping at Catchment Scale
by Sinem Cetinkaya and Sultan Kocaman
ISPRS Int. J. Geo-Inf. 2024, 13(9), 312; https://doi.org/10.3390/ijgi13090312 - 29 Aug 2024
Viewed by 901
Abstract
Snow avalanche susceptibility (AS) mapping is a crucial step in predicting and mitigating avalanche risks in mountainous regions. The conditioning factors used in AS modeling are diverse, and the optimal set of factors depends on the environmental and geological characteristics of the region. Using a sub-optimal set of input features with a data-driven machine learning (ML) method can lead to challenges like dealing with high-dimensional data, overfitting, and reduced model generalization. This study implemented a robust framework involving the Sequential Backward Selection (SBS) algorithm and a decision-tree based ML model, CatBoost, for the automatic selection of predictive variables for AS mapping. A comprehensive inventory of a large avalanche period, previously derived from satellite images, was used for the investigations in three distinct catchment areas in the Swiss Alps. The integrated SBS-CatBoost approach achieved very high classification accuracies between 94% and 97% for the three catchments. In addition, the Shapley additive explanations (SHAP) method was employed to analyze the contributions of each feature to avalanche occurrences. The proposed methodology revealed the benefits of integrating advanced feature selection algorithms with ML techniques for AS assessment. We aimed to contribute to avalanche hazard knowledge by assessing the impact of each feature in model learning. Full article
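The SBS-CatBoost loop can be approximated with scikit-learn's backward `SequentialFeatureSelector` wrapped around a CatBoost classifier. The file paths, target feature count, and hyperparameters below are placeholders, not the study's actual configuration:

```python
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Conditioning factors (slope, aspect, altitude, LULC, ...) and labels
X = pd.read_csv("conditioning_factors.csv")        # placeholder path
y = pd.read_csv("avalanche_labels.csv").squeeze()  # 1 = avalanche, 0 = none

model = CatBoostClassifier(iterations=500, depth=6, verbose=False)

# Backward elimination: start from all factors, drop the weakest one at a time
sbs = SequentialFeatureSelector(model, n_features_to_select=8,
                                direction="backward", scoring="accuracy", cv=5)
sbs.fit(X, y)
selected = list(X.columns[sbs.get_support()])
print("optimal factor set:", selected)

acc = cross_val_score(model, X[selected], y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.3f}")
```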
Show Figures

Figure 1: A schematic overview of the proposed framework.
Figure 2: Engelberger Aa, Meienreuss, and Göschenerreuss catchments (sub-catchments of the Reuss River Basin).
Figure 3: Altitude maps of the Engelberger Aa, Meienreuss, and Göschenerreuss catchments.
Figure 4: LULC classification maps for Engelberger Aa, Meienreuss, and Göschenenreuss.
Figure 5: Flowchart of iterative feature elimination using CatBoost with GridSearchCV.
Figure 6: (a) The AS map of the Engelberger Aa catchment using the optimal feature set and (b) the SHAP summary plot.
Figure 7: (a) The AS map of the Meienreuss catchment using the optimal feature set and (b) the SHAP summary plot.
Figure 8: (a) The AS map of the Göschenerreuss catchment using the optimal feature set and (b) the SHAP summary plot.
Figure A1: The AS maps for the Engelberger Aa, Meienreuss, and Göschenerreuss catchments, based on models with varying numbers of features (16, 15, 14, 13, and 12).
Figure A2: The AS maps for the Engelberger Aa, Meienreuss, and Göschenerreuss catchments, based on models with varying numbers of features (11, 10, 9, 8, and 7).
Figure A3: The AS maps for the Engelberger Aa, Meienreuss, and Göschenerreuss catchments, based on models with varying numbers of features (6, 5, 4, 3, and 2).
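The SHAP summary plots in Figures 6–8 are the kind typically produced with the `shap` package's tree explainer, which supports gradient-boosted models such as CatBoost. Continuing the sketch above (and assuming `model`, `X`, `y`, and `selected` from it):

```python
import shap

fitted = model.fit(X[selected], y)  # CatBoost's fit returns the model itself
explainer = shap.TreeExplainer(fitted)
shap_values = explainer.shap_values(X[selected])

# Beeswarm-style summary: per-feature contribution to avalanche susceptibility
shap.summary_plot(shap_values, X[selected], feature_names=selected)
```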
28 pages, 6255 KiB  
Article
Spatial Predictive Modeling of Liver Fluke Opisthorchis viverrine (OV) Infection under the Mathematical Models in Hexagonal Symmetrical Shapes Using Machine Learning-Based Forest Classification Regression
by Benjamabhorn Pumhirunroj, Patiwat Littidej, Thidarut Boonmars, Atchara Artchayasawat, Narueset Prasertsri, Phusit Khamphilung, Satith Sangpradid, Nutchanat Buasri, Theeraya Uttha and Donald Slack
Symmetry 2024, 16(8), 1067; https://doi.org/10.3390/sym16081067 - 19 Aug 2024
Cited by 2 | Viewed by 1563
Abstract
Infection with liver flukes (Opisthorchis viverrini) is partly due to their ability to thrive in habitats in sub-basin areas, causing the intermediate host to remain in the watershed system throughout the year. Spatial modeling is used to predict water source infections and involves designing appropriate area units with hexagonal grids. This allows for the creation of a set of independent variables, which are then modeled using machine learning techniques such as forest-based classification regression methods. The independent variable set was obtained from the local public health agency and used to establish a relationship with a mathematical model. The ordinary least squares (OLS) model approach was used to screen the variables, and the most consistent set was selected to create a new set of variables using the principal component analysis (PCA) method. The results showed that the forest classification and regression (FCR) model was able to accurately predict the infection rates, with the PCA factor set yielding a reliability value of 0.915, followed by the other variable sets with values of 0.794, 0.741, and 0.632. This article provides detailed information on the factors related to water body infection, including the length and density of water flow lines in hexagonal form, and traces the depth of each process. Full article
(This article belongs to the Special Issue Mathematical Modeling of the Infectious Diseases and Their Controls)
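A rough analogue of the OLS screening → PCA → forest-regression pipeline can be sketched with pandas, statsmodels, and scikit-learn. The variable names X1–X9 and Y follow the figure captions below; the file path, significance threshold, and estimator settings are illustrative only:

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("hexagon_grid_variables.csv")  # placeholder path
X = df[[f"X{i}" for i in range(1, 10)]]
y = df["Y"]                                     # % OV infection per hexagon

# 1) OLS screening: keep variables with statistically significant coefficients
ols = sm.OLS(y, sm.add_constant(X)).fit()
kept = [c for c in X.columns if ols.pvalues[c] < 0.05]

# 2) PCA on the screened variables (the study retained X3, X4, X7, and X8)
factors = PCA(n_components=2).fit_transform(X[kept])

# 3) Forest-based regression on the PCA factors
fcr = RandomForestRegressor(n_estimators=300, random_state=0)
print(cross_val_score(fcr, factors, y, cv=5, scoring="r2").mean())
```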
Show Figures

Figure 1: The percentage of individuals infected with liver fluke in the 8th Regional Health Province (R8) [14], available online: https://r8way.moph.go.th/r8way/ (accessed on 21 July 2021), near the Mekong River between 2019 and 2021 (adapted from [15]).
Figure 2: Study area and distribution of the percentage of OV infection: (a) study area on a national scale, (b) study area on a regional scale, (c) presence of infection indicated as points, (d) infection indicated by hexagonal shapes.
Figure 3: Comparison of the number of samples using the Optimized Hot Spot Analysis tool on a rectangular grid and a hexagonal grid (the search boundaries, drawn as circles and straight red lines, differ between the rectangular grid on the left and the hexagonal grid on the right).
Figure 4: Distribution of training and testing points on hexagonal grids: (a) overall connected stream lines with OV-infected points, (b) training points, (c) testing points, and (d) training and testing points together.
Figure 5: Distribution of the independent variables and the dependent variable obtained from the mathematical model within the hexagonal grid: (a) X1, (b) X2, (c) X3, (d) X4, (e) X5, (f) X6, (g) X7, (h) X8, (i) X9, and (j) Y.
Figure 6: Independent variable obtained from PCA using X3, X4, X7, and X8.
Figure 7: Prediction of the percentage of infection of water resources from several sets of independent variables with the FCR model: (a) FCR predicted using X3, X4, X7, and X8; (b) FCR predicted using X1 to X9; (c) FCR predicted using X1, X2, X4, and X6; (d) FCR predicted using PCA (X3, X4, X7, and X8).
Figure 8: AUC graphs used as an aid in deciding on the right FCR model for predicting liver fluke infection: (a) AUC of the FCR model using X3, X4, X7, and X8; (b) AUC using X1 to X9; (c) AUC using X1, X2, X4, and X6; (d) AUC using PCA (X3, X4, X7, and X8); (e) AUC of all predicted FCR models.
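The AUC comparison in Figure 8 corresponds to a standard ROC analysis. If the continuous FCR predictions are scored against a binarized infection label, it can be reproduced along these lines (continuing the previous sketch; the median threshold and in-sample evaluation are for illustration only):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

y_true = (y > y.median()).astype(int)           # placeholder binarization
y_score = fcr.fit(factors, y).predict(factors)  # in-sample, illustrative only

fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))
plt.plot(fpr, tpr)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.show()
```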
12 pages, 282 KiB  
Article
Food-Intolerance Genetic Testing: A Useful Tool for the Dietary Management of Chronic Gastrointestinal Disorders
by Alexandra Celi, María Trelis, Lorena Ponce, Vicente Ortiz, Vicente Garrigues, José M. Soriano and Juan F. Merino-Torres
Nutrients 2024, 16(16), 2741; https://doi.org/10.3390/nu16162741 - 16 Aug 2024
Viewed by 1412
Abstract
The rise in food intolerances and celiac disease, along with advanced diagnostic techniques, has prompted health professionals to seek effective and economical testing methods. This study evaluates combining genetic tests with routine carbohydrate-absorption breath tests to classify patients with chronic gastrointestinal disorders into therapeutic groups, enhancing dietary management and improving gut health and quality of life. Forty-nine patients with suspected carbohydrate intolerance underwent genetic testing for lactase non-persistence, hereditary fructose intolerance, and celiac disease risk. Simultaneously, breath tests assessed lactose and fructose absorption. The lactase non-persistence genotype appeared in 36.7% of cases, with one hereditary fructose-intolerance case in a heterozygous condition. Celiac disease risk markers (HLA-DQ2/8 haplotypes) were found in 49.0% of the population. Secondary lactose and/or fructose malabsorption was present in 67.3% of patients, with 66.1% of lactase non-persistence individuals showing secondary lactose malabsorption. Fructose malabsorption was prevalent in 45.8% of patients at risk for celiac disease. Two main treatment groups were defined based on genetic results, indicating primary and irreversible gastrointestinal disorder causes, followed by a sub-classification using breath test results. Genetic testing is a valuable tool for designing dietary management plans, avoiding unnecessary diet restrictions, and reducing recovery times. Full article
(This article belongs to the Section Clinical Nutrition)
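The two-level grouping the abstract describes (genetic findings first, breath-test results second) amounts to a simple decision rule. The function below is a hypothetical sketch of that logic, not the study's clinical protocol:

```python
def therapeutic_group(lactase_non_persistent: bool,
                      hfi_variant: bool,
                      celiac_risk_hla: bool,
                      lactose_malabsorption: bool,
                      fructose_malabsorption: bool) -> str:
    """Genetic findings define the primary (irreversible) group;
    breath-test results refine the dietary sub-classification."""
    if lactase_non_persistent or hfi_variant or celiac_risk_hla:
        group = "genetic/primary cause"
    else:
        group = "secondary/acquired cause"
    subtags = [tag for flag, tag in [
        (lactose_malabsorption, "lactose-restricted diet"),
        (fructose_malabsorption, "fructose-restricted diet"),
    ] if flag]
    return group + (": " + ", ".join(subtags) if subtags else "")
```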