Search Results (219)

Search Parameters:
Keywords = sub-technique classification

20 pages, 682 KiB  
Article
Sentence Interaction and Bag Feature Enhancement for Distant Supervised Relation Extraction
by Wei Song and Qingchun Liu
AI 2025, 6(3), 51; https://doi.org/10.3390/ai6030051 - 4 Mar 2025
Viewed by 279
Abstract
Background: Distant supervision employs external knowledge bases to automatically match with text, allowing for the automatic annotation of sentences. Although this method effectively tackles the challenge of manual labeling, it inevitably introduces noisy labels. Traditional approaches typically employ sentence-level attention mechanisms, assigning lower weights to noisy sentences to mitigate their impact. However, this approach overlooks the critical importance of information flow between sentences. Additionally, previous approaches treated an entire bag as a single classification unit, giving equal importance to all features within the bag and failing to recognize that different feature dimensions have varying levels of significance. Method: To overcome these challenges, this study introduces a novel network that incorporates sentence interaction and a bag-level feature enhancement (ESI-EBF) mechanism. We concatenate sentences within a bag into a continuous context, allowing information to flow freely between them during encoding. At the bag level, we partition the features into multiple groups based on dimensions, assigning an importance coefficient to each sub-feature within a group. This enhances critical features while diminishing the influence of less important ones. Finally, the enhanced features are used to construct high-quality bag representations, facilitating more accurate classification by the classification module. Results: The experimental findings on the New York Times (NYT) and Wiki-20m datasets confirm the efficacy of our encoding approach and feature enhancement module. Our method also outperforms state-of-the-art techniques on these datasets, achieving superior relation extraction accuracy.
(This article belongs to the Section AI Systems: Theory and Applications)
Figure 1. The general structure of the proposed method.
Figure 2. The process of the rich joint sentence representation T. Different colors represent different dimensional features.
Figure 3. The overall process of the group-wise enhancement module. Different colors represent different levels of attention.
Figure 4. Precision–Recall curves for the different versions of the suggested model using the NYT dataset.
Figure 5. Precision–Recall curves for eight models evaluated on the NYT dataset.
Figure 6. Comparison of PR curves for six models using the Wiki-20m dataset.
Figure 7. PR curves for ESI-EBF and two ablation methods using the NYT dataset.
Figure 8. Precision of different group-wise sizes.
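The bag-level, group-wise enhancement described in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' code: the grouping, scoring, and sigmoid gating below are common choices for this kind of mechanism, and all names and shapes are our own assumptions.

```python
import torch

def group_wise_enhance(bag_feats: torch.Tensor, num_groups: int = 8) -> torch.Tensor:
    """Sketch of bag-level feature enhancement: split the feature dimension
    into groups and re-weight each group by an importance coefficient.

    bag_feats: (batch, dim) bag representations; dim divisible by num_groups.
    """
    b, d = bag_feats.shape
    g = bag_feats.view(b, num_groups, d // num_groups)      # (b, groups, sub)
    ctx = g.mean(dim=2, keepdim=True)                       # per-group context
    scores = (g * ctx).sum(dim=2, keepdim=True)             # group importance
    # Normalize scores across groups so the gating is relative, then gate.
    scores = (scores - scores.mean(dim=1, keepdim=True)) / (
        scores.std(dim=1, keepdim=True) + 1e-5)
    return (g * torch.sigmoid(scores)).reshape(b, d)        # amplify/suppress
```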
24 pages, 6584 KiB  
Article
Machine Learning Framework for Hybrid Clad Characteristics Modeling in Metal Additive Manufacturing
by Sina Tayebati and Kyu Taek Cho
J. Manuf. Mater. Process. 2025, 9(2), 49; https://doi.org/10.3390/jmmp9020049 - 5 Feb 2025
Viewed by 563
Abstract
Metal additive manufacturing (MAM) has advanced significantly, yet accurately predicting clad characteristics from processing parameters remains challenging due to process complexity and data scarcity. This study introduces a novel hybrid machine learning (ML) framework that integrates validated multi-physics computational fluid dynamics simulations with experimental data, enabling prediction of clad characteristics unattainable through conventional methods alone. Our approach uniquely incorporates physics-aware features, such as volumetric energy density and linear mass density, enhancing process understanding and model transferability. We comprehensively benchmark ML models across traditional, ensemble, and neural network categories, analyzing their computational complexity through Big O notation and evaluating both classification and regression performance in predicting clad geometries and process maps. The framework demonstrates superior prediction accuracy with sub-second inference latency, overcoming limitations of purely experimental or simulation-based methods. The trained models generate processing maps with 0.95 AUC (Area Under the Curve) that directly guide MAM parameter selection, bridging the gap between theoretical modeling and practical process control. By integrating physics-based simulations with ML techniques and physics-aware features, our approach achieves an R² of 0.985 for clad geometry prediction and improved generalization over traditional methods, establishing a new standard for MAM process modeling. This research advances both theoretical understanding and practical implementation of MAM processes through a comprehensive, physics-aware machine learning approach.
(This article belongs to the Special Issue Large-Scale Metal Additive Manufacturing)
Figure 1. Schematic of the DED process and the clad characteristics.
Figure 2. Hybrid data, ML models, and task implementation in our framework.
Figure 3. Distribution of clad features in our dataset: (a) "width" distribution, (b) "height" distribution, (c) "depth" distribution, (d) occurrence of clad quality labels.
Figure 4. Diagram of boundary conditions.
Figure 5. Comparison of experimental and modeling results: (a) clad height; (b) dilution.
Figure 6. Feature distribution comparison between modeling and experimental datasets for key process parameters: (a) volumetric energy density (J/mm³), (b) linear mass density (g/mm), (c) laser power (W), (d) laser scanning speed (mm/s).
Figure 7. Benchmark performance comparison for predicting geometrical characteristics of the single clad: (a,b) width prediction accuracy and MAE, (c,d) height prediction accuracy and MAE, (e,f) depth prediction accuracy and MAE.
Figure 8. Predicted clad geometry plotted against the actual ground-truth values: (a) width, (b) height, and (c) depth, each using Gradient Boosting.
Figure 9. Printability maps constructed by the ML regression models on the test dataset, showing the effect of laser power and laser scanning velocity on single clad geometry: (a) predicted width, (b) predicted height, and (c) predicted depth, each using Gradient Boosting Regression.
Figure 10. Benchmark performance comparison for predicting the class of the clad and process map: (a) accuracy, (b) AUC-ROC.
Figure 11. ROC curves of the ML classifiers in predicting the class of the clad and process map.
Figure 12. (a) Printability maps (classification boundaries) of the test dataset based on laser power and laser scanning velocity for printing a single clad with desirable (20% ≤ dilution ≤ 50%) or undesirable (dilution ≤ 20% or dilution ≥ 50%) quality for the neural network model. (b) Confusion matrix for clad classification based on neural network prediction.
Figure 13. Feature importance analysis: (a) clad height prediction, (b) clad width prediction, (c) clad depth prediction, (d) clad quality classification.
Figure 14. Comparison of the time complexity of machine learning models using Big O notation.
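The two physics-aware features named in the abstract are simple functions of the process parameters. A minimal sketch under common DED conventions; the paper's exact definitions are not reproduced here, so treat these formulas as illustrative assumptions:

```python
import math

def volumetric_energy_density(power_w: float, speed_mm_s: float,
                              spot_diameter_mm: float) -> float:
    """J/mm^3: laser power over scan speed times beam cross-section.
    One common DED convention; the paper's exact definition may differ."""
    spot_area = math.pi * (spot_diameter_mm / 2) ** 2
    return power_w / (speed_mm_s * spot_area)

def linear_mass_density(feed_rate_g_min: float, speed_mm_s: float) -> float:
    """g/mm: powder mass delivered per unit length of track."""
    return (feed_rate_g_min / 60.0) / speed_mm_s

# Example: 1 kW laser, 10 mm/s scan, 2 mm spot, 12 g/min powder feed.
print(volumetric_energy_density(1000, 10, 2.0))   # ~31.8 J/mm^3
print(linear_mass_density(12, 10))                # 0.02 g/mm
```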
18 pages, 2544 KiB  
Article
Graph Neural Network Learning on the Pediatric Structural Connectome
by Anand Srinivasan, Rajikha Raja, John O. Glass, Melissa M. Hudson, Noah D. Sabin, Kevin R. Krull and Wilburn E. Reddick
Tomography 2025, 11(2), 14; https://doi.org/10.3390/tomography11020014 - 29 Jan 2025
Viewed by 592
Abstract
Purpose: Sex classification is a major benchmark of previous work in learning on the structural connectome, a naturally occurring brain graph that has proven useful for studying cognitive function and impairment. While graph neural networks (GNNs), specifically graph convolutional networks (GCNs), have gained popularity lately for their effectiveness in learning on graph data, achieving strong performance in adult sex classification tasks, their application to pediatric populations remains unexplored. We seek to characterize the capacity for GNN models to learn connectomic patterns on pediatric data through an exploration of training techniques and architectural design choices. Methods: Two datasets comprising an adult BRIGHT dataset (N = 147 Hodgkin's lymphoma survivors and N = 162 age-similar controls) and a pediatric Human Connectome Project in Development (HCP-D) dataset (N = 135 healthy subjects) were utilized. Two GNN models (GCN simple and GCN residual), a deep neural network (multi-layer perceptron), and two standard machine learning models (random forest and support vector machine) were trained. Architecture exploration experiments were conducted to evaluate the impact of network depth, pooling techniques, and skip connections on the ability of GNN models to capture connectomic patterns. Models were assessed across a range of metrics including accuracy, AUC score, and adversarial robustness. Results: GNNs outperformed other models across both populations. Notably, adult GNN models achieved 85.1% accuracy in sex classification on unseen adult participants, consistent with prior studies. Extending the adult models to the pediatric dataset and training directly on the smaller pediatric dataset both yielded sub-optimal performance. Using adult data to augment pediatric models, the best GNN achieved comparable accuracy across unseen pediatric (83.0%) and adult (81.3%) participants. Adversarial sensitivity experiments showed that the simple GCN remained the most robust to perturbations, followed by the multi-layer perceptron and the residual GCN. Conclusions: These findings underscore the potential of GNNs in advancing our understanding of sex-specific neurological development and disorders and highlight the importance of data augmentation in overcoming challenges associated with small pediatric datasets. Further, they highlight relevant tradeoffs in the design landscape of connectomic GNNs. For example, while the simpler GNN model tested exhibits marginally worse accuracy and AUC scores in comparison to the more complex residual GNN, it demonstrates a higher degree of adversarial robustness.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
Figure 1. Model architecture illustrations for simple GCN (a), residual GCN (b), and multi-layer perceptron (c) networks.
Figure 2. Sex classification mean accuracy (a) and AUC score (b) for random forest (RF), support vector machine (SVM), multi-layer perceptron (MLP), simple GCN, and residual GCN classifiers trained on the adult-enriched pediatric dataset; results shown for pediatric (blue), adult (dark gray), and overall (black) test subsets, with standard deviations as whiskers. AUC and accuracy results are tabulated in (c); red indicates overall best results, bold overall second-best.
Figure 3. Representative training and validation loss curves for multi-layer perceptron (a), simple GCN (b), and residual GCN (c) classifiers trained on the adult-enriched pediatric dataset.
Figure 4. Representative receiver operating characteristic (ROC) curves for multi-layer perceptron (a), simple GCN (b), residual GCN (c), support vector machine (d), and random forest (e) classifiers. Three test-set ROC curves are displayed per model for pediatric (blue), adult (gray), and overall (black) subsets, plus a single overall training-set curve (green).
Figure 5. Sex classification mean accuracy (a) and AUC score (b) for simple GCN and residual GCN architectures with varying model depth analogs.
Figure 6. Adversarial accuracy (a) and AUC score (b) for multi-layer perceptron (MLP), simple GCN, and residual GCN classifiers trained on the adult-enriched pediatric dataset.
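For readers unfamiliar with GCNs on connectomes, a dense-adjacency graph-convolution layer of the kind a "GCN simple" model could stack looks roughly as follows. This is an illustrative PyTorch sketch, not the authors' architecture; the 90-ROI size and identity node features are placeholders.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
        return torch.relu(self.lin(norm_adj @ h))

# Connectome as a weighted adjacency matrix (e.g., 90 ROIs), identity features.
adj = torch.rand(90, 90); adj = (adj + adj.T) / 2
feats = torch.eye(90)
layer = SimpleGCNLayer(90, 64)
graph_vec = layer(adj, feats).mean(dim=0)   # mean-pool nodes for a graph-level vector
```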
41 pages, 3121 KiB  
Article
Impact of Indices on Stock Price Volatility of BRICS Countries During Crises: Comparative Study
by Nursel Selver Ruzgar
Int. J. Financial Stud. 2025, 13(1), 8; https://doi.org/10.3390/ijfs13010008 - 11 Jan 2025
Viewed by 868
Abstract
This study aims to identify the common indices having an impact on the stock price volatility (SPV) of BRICS countries during crises. To address this, monthly data retrieved from the Global Economic Monitor (GEM) database of the World Bank, IMF International Financial Statistics, and the OECD for the period January 2000 to December 2023 are analyzed in two phases. In the first phase, data mining (DM) classification techniques are applied to the data to identify the best common classification technique, which is then used in the second phase to compare the results with Multiple Linear Regression (MLR) results. In the second phase, to account for the global financial crisis and the COVID-19 crisis, the sample period is divided into two sub-periods. For those sub-periods, MLR and the best classification technique found in the first phase are utilized to find the common indices that have an impact on stock price volatility during individual and both crises. The findings indicate that the Random Tree method commonly classified the data among the seven classification techniques. Regarding the MLR results, no common indices were identified during the global financial crisis or the COVID-19 crisis. However, based on Random Tree classifications, the CPI price percent, National Currency, and CPI index for all items were common during the global financial crisis, whereas only the CPI price percent was common during the COVID-19 crisis. While some common indices were observed in individual crises for specific countries, no indices were consistently found across both crises. This variation is attributed to the unique nature of each crisis and the diverse economic and socio-political structures of different countries. These findings provide valuable insights for financial institutions and investors to refine financial and policy decisions based on the specific characteristics of each crisis and the indices affecting each country.
Figure 1. Stock price volatility percent of BRICS countries in USD (aggregated to 2010, in %).
Figure A1. Normal P-P plots of regression standardized residuals of BRICS countries for the 2007–2010 crisis.
Figure A2. Histograms of regression standardized residuals of BRICS countries for the 2007–2010 crisis.
Figure A3. Normal P-P plots of regression standardized residuals of BRICS countries for the 2018–2021 crisis.
Figure A4. Histograms of regression standardized residuals of BRICS countries for the 2018–2021 crisis.
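The two-phase design (a classification benchmark, then MLR per crisis sub-period) can be sketched with scikit-learn. Weka's Random Tree has no exact scikit-learn counterpart, so a single randomized tree (ExtraTreeClassifier) stands in; the DataFrame layout, median split, and column names are placeholders, not the paper's setup.

```python
import pandas as pd
from sklearn.tree import ExtraTreeClassifier
from sklearn.linear_model import LinearRegression

# df: DatetimeIndex of monthly observations per country, with an 'spv'
# column (stock price volatility) and one column per candidate index.
def analyze_crisis(df: pd.DataFrame, start: str, end: str):
    sub = df.loc[start:end]                        # crisis sub-period
    X, y = sub.drop(columns="spv"), sub["spv"]

    # Phase 1: classification -- discretize SPV into high/low classes.
    labels = (y > y.median()).astype(int)
    tree = ExtraTreeClassifier(random_state=0).fit(X, labels)

    # Phase 2: MLR on the same sub-period; rank influential indices.
    mlr = LinearRegression().fit(X, y)
    coef = pd.Series(mlr.coef_, index=X.columns).abs().sort_values(ascending=False)
    imp = pd.Series(tree.feature_importances_, index=X.columns).sort_values(ascending=False)
    return coef.head(5), imp.head(5)               # candidate common indices
```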
26 pages, 1002 KiB  
Article
Training Neural Networks with a Procedure Guided by BNF Grammars
by Ioannis G. Tsoulos  and Vasileios Charilogis
Big Data Cogn. Comput. 2025, 9(1), 5; https://doi.org/10.3390/bdcc9010005 - 2 Jan 2025
Viewed by 550
Abstract
Artificial neural networks are parametric machine learning models that have been applied successfully to an extended series of classification and regression problems in the recent literature. For the effective identification of the parameters of artificial neural networks, a series of optimization techniques has been proposed in the relevant literature. Although these techniques present good results in many cases, either the optimization method is not efficient and the training error of the network becomes trapped in sub-optimal values, or the neural network exhibits overfitting, meaning it performs poorly on data not seen during training. This paper proposes an innovative technique for constructing the weights of artificial neural networks based on appropriate BNF grammars, used in the evolutionary process of Grammatical Evolution. The new procedure locates an interval of values for the parameters of the artificial neural network, and the optimization method then effectively locates the network parameters within this interval. The new technique was applied to a wide range of classification and regression problems covering a number of scientific areas, and the experimental results were more than promising.
Figure 1. The structure of the neural network incorporated in the current work.
Figure 2. BNF grammar used in the implemented work. The numbers in parentheses denote the sequence number of each production rule; they are used during the Grammatical Evolution procedure to produce valid programs. Symbols enclosed in < > are non-terminal symbols of the grammar. The parameter n denotes the number of parameters of the neural network.
Figure 3. A representation of the example neural network presented here.
Figure 4. An example of the one-point crossover mechanism used in the Grammatical Evolution procedure.
Figure 5. The steps of the proposed algorithm as a flowchart.
Figure 6. Statistical comparison between the methods used for the classification datasets. Each method is denoted by a different color.
Figure 7. Statistical comparison between the methods used for the regression datasets. Each method is denoted by a different color.
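In Grammatical Evolution, a chromosome of integer codons is mapped through the BNF grammar with the modulo rule. The toy grammar below (invented for illustration, not the paper's grammar) shows how such a mapping can derive a bound B, giving a weight search interval [-B, B]:

```python
# Toy BNF grammar: each non-terminal maps to its list of production rules.
GRAMMAR = {
    "<bound>": [["<digit>"], ["<digit>", "<digit>"]],   # 1- or 2-digit bound
    "<digit>": [[str(d)] for d in range(10)],
}

def derive(chromosome, symbol="<bound>"):
    """Expand `symbol` by consuming codons with the GE modulo rule:
    rule = codon % number_of_productions_for_symbol."""
    out, codons = [], list(chromosome)
    stack = [symbol]
    while stack:
        sym = stack.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)                  # terminal symbol
            continue
        rules = GRAMMAR[sym]
        codon = codons.pop(0) % len(rules)   # modulo rule picks a production
        stack = rules[codon] + stack
    return int("".join(out))

# 7 -> two-digit rule; 13 -> "3"; 42 -> "2"  =>  B = 32
B = derive([7, 13, 42])
print(f"search interval for the network weights: [-{B}, {B}]")
```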
18 pages, 6340 KiB  
Article
Identifying Bias in Deep Neural Networks Using Image Transforms
by Sai Teja Erukude, Akhil Joshi and Lior Shamir
Computers 2024, 13(12), 341; https://doi.org/10.3390/computers13120341 - 15 Dec 2024
Viewed by 1225
Abstract
CNNs have become one of the most commonly used computational tools in the past two decades. One of the primary downsides of CNNs is that they work as a "black box", where the user cannot necessarily know how the image data are analyzed, and therefore needs to rely on empirical evaluation to test the efficacy of a trained CNN. This can lead to hidden biases that affect the performance evaluation of neural networks but are difficult to identify. Here we discuss examples of such hidden biases in common and widely used benchmark datasets, and propose techniques for identifying dataset biases that can affect the standard performance evaluation metrics. One effective approach to identify dataset bias is to perform image classification by using merely blank background parts of the original images. However, in some situations, a blank background in the images is not available, making it more difficult to separate foreground or contextual information from the bias. To overcome this, we propose a method to identify dataset bias without the need to crop background information from the images. The method is based on applying several image transforms to the original images, including the Fourier transform, wavelet transforms, the median filter, and their combinations. These transforms are applied to recover background bias information that CNNs use to classify images. These transformations affect the contextual visual information in a different manner than they affect the systemic background bias. Therefore, the method can distinguish between contextual information and the bias, and can reveal the presence of background bias even without the need to separate sub-image parts from the blank background of the original images. The code used in the experiments is publicly available.
(This article belongs to the Special Issue Feature Papers in Computers 2024)
Figure 1. X-ray images classified correctly by CNNs: (a) original X-ray, (b) AlexNet, (c) GoogleNet, (d) VGG16, (e) VGG19, (f) ResNet18, (g) ResNet50, (h) ResNet101, (i) Inception V3, (j) InceptionResNet, (k) DenseNet201, (l) SqueezeNet, (m) Xception, and (n) CNN-X [30].
Figure 2. A 20 × 20 segment cropped from a blank background part of the original image. This was carried out for all images in the dataset to create a new dataset of blank sub-images, as shown in Figure 3. When using just these seemingly blank parts of the images, the classification accuracy on numerous datasets was far higher than mere chance.
Figure 3. Original images from Yale Faces B and the 20 × 20 portion of the top-left corner separated from each original image. The classification accuracy of the CNN was far higher than mere chance, showing the CNN does not necessarily need to recognize the face in order to classify the images correctly.
Figure 4. Original images from KVASIR and the 20 × 20 portion of the top-left corner separated from the original images [13].
Figure 5. Classification accuracy of CNN models trained and tested on seemingly blank sub-images taken from the image backgrounds of several common image benchmark datasets [13].
Figure 6. Categorization of data. The data used in this paper include natural images collected from various sources, as well as controlled datasets in which all images come from a single source.
Figure 7. VGG16 architecture.
Figure 8. Classification accuracy when using the Fourier-transformed full images.
Figure 9. Classification accuracy when using the Fourier-transformed small sub-images taken from the background.
Figure 10. Example of an original image and the Haar and Daubechies discrete-wavelet-transformed images.
Figure 11. Example of an original image taken from ImageNette, and the Haar and Daubechies discrete-wavelet-transformed images.
Figure 12. Classification accuracy when using images after applying the wavelet transform.
Figure 13. Classification accuracy when using the wavelet-transformed cropped images.
Figure 14. Classification accuracy when using full images after applying median filtering.
Figure 15. Classification accuracy when using sub-images of blank background after applying the median filter.
Figure 16. Classification accuracy of full images after applying both median and wavelet transforms.
Figure 17. Classification accuracy of blank-background sub-images after applying both median and wavelet transforms.
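The transform battery used as the bias probe (Fourier transform, Haar and Daubechies wavelets, median filter, and their combinations) is available in standard libraries. A sketch assuming NumPy, SciPy, and PyWavelets; training a CNN on each view and comparing full-image versus blank-background accuracy is the test described above:

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def transform_views(img: np.ndarray) -> dict:
    """Return transformed views of a grayscale image. Training a CNN on
    these views (full images vs. blank-background crops) and comparing
    the resulting accuracies is the bias probe described in the abstract."""
    fourier = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")      # Haar wavelet sub-bands
    dbA, _ = pywt.dwt2(img, "db2")                 # Daubechies approximation
    return {
        "fourier": fourier,
        "haar_approx": cA,
        "db_approx": dbA,
        "median": median_filter(img, size=3),
        "median+haar": pywt.dwt2(median_filter(img, size=3), "haar")[0],
    }
```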
482 KiB  
Proceeding Paper
Support Vector Machine-Based Epileptic Seizure Detection Using EEG Signals
by Sachin Himalyan and Vrinda Gupta
Eng. Proc. 2022, 18(1), 73; https://doi.org/10.3390/ecsa-11-20506 - 26 Nov 2024
Viewed by 218
Abstract
Increased electrical activity in the brain causes epilepsy, which causes seizures, resulting in various medical complications that can sometimes be fatal. Doctors use electroencephalography (EEG) for the profiling and diagnosis of epilepsy. According to the World Health Organization (WHO), approximately 50 million people worldwide have epilepsy, making it one of the most common neurological disorders globally; this represents about 0.7% of the global population. The conventional method of EEG analysis employed by medical professionals is visual investigation, which is time-consuming and requires expertise because of the variability in EEG signals. This paper describes a method for detecting epileptic seizures in EEG signals by combining signal processing and machine learning techniques. A support vector machine (SVM) and other machine learning techniques detect anomalies in the input EEG signal. To extract features, the discrete wavelet transform (DWT) is used to decompose the signal into sub-bands. The proposed method aims to improve the accuracy of the machine learning model while using as few features as possible. The classification results show an accuracy of 100% with just one feature, the mean absolute value, from datasets A and E. With additional features, the overall accuracy remains high at 99%, with specificity and sensitivity of 97.2% and 99.1%, respectively. These results outperform previous research on the same dataset, demonstrating the effectiveness of our approach. This research contributes to developing more accurate and efficient epilepsy diagnosis systems, potentially improving patient outcomes.
(This article belongs to the Proceedings of The 8th International Conference on Time Series and Forecasting)
Figure 1. Applications of EEG signal analysis.
Figure 2. Epileptic seizure detection methodology.
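A sketch of the described pipeline: DWT decomposition into sub-bands, the mean-absolute-value feature per band, and an SVM. The wavelet, decomposition level, and toy data are assumptions; the 4097-sample segment length follows the usual convention for the Bonn sets A and E.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def mav_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4):
    """Mean absolute value of each DWT sub-band of a 1-D EEG segment."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.mean(np.abs(c)) for c in coeffs])

# X_raw: (n_segments, n_samples) EEG segments; y: 0 = normal, 1 = seizure.
X_raw = np.random.randn(40, 4097)      # placeholder standing in for sets A and E
y = np.repeat([0, 1], 20)
X = np.vstack([mav_features(s) for s in X_raw])
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))                 # training accuracy on the toy data
```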
13 pages, 6430 KiB  
Proceeding Paper
Detection of Non-Technical Losses in Special Customers with Telemetering, Based on Artificial Intelligence
by José Luis Llagua Arévalo and Patricio Antonio Pesántez Sarmiento
Eng. Proc. 2024, 77(1), 29; https://doi.org/10.3390/engproc2024077029 - 18 Nov 2024
Viewed by 394
Abstract
The Ecuadorian electricity sector, until April 2024, presented losses of 15.64% (6.6% technical and 9.04% non-technical), so it is important to detect the areas that potentially sub-register energy in order to reduce Non-Technical Losses (NTLs). To reduce NTLs, the "Empresa Eléctrica de Ambato Sociedad Anónima" (EEASA), as a distribution company, has incorporated many smart meters for special clients, generating a large amount of stored data. This historical information is analyzed to detect anomalous consumption that is not easily recognized and constitutes a significant part of the NTLs. Machine learning with appropriate clustering techniques and deep learning neural networks work together to detect abnormal curves that record lower readings than the real energy consumption. The developed methodology uses three k-means validation indices to classify daily energy curves based on the days of the week and holidays that present similar consumption behaviors. The algorithm groups similar consumption patterns as input data sets for training, testing, and validating the densely connected classification neural network, allowing for the identification of the daily curves produced by customers. The system detected customers who sub-register energy. It is worth mentioning that this methodology is replicable for distribution companies that store historical consumption data with Advanced Metering Infrastructure (AMI) systems.
(This article belongs to the Proceedings of The XXXII Conference on Electrical and Electronic Engineering)
Figure 1. Methodology flowchart.
Figure 2. Variability.
Figure 3. Demand.
Figure 4. Grouping using the Soft-DTW k-means index for k = 5; centroid curves are shown in red.
Figure 5. Grouping assigned values.
Figure 6. Normal and fraudulent consumption curves with percentage decrease: (a) Type 1 with 36% for customer 6 in zone 2; (b) Type 2 with 56% for customer 4 in zone 1; and (c) Type 3 with 82% for customer 6 in zone 7.
Figure 7. Model network design for the holiday group.
Figure 8. KNIME–Python link and deep learning libraries.
Figure 9. Completed neural network in the working environment.
Figure 10. Accuracy curves of the neural network.
Figure 11. Loss curves of the neural network.
Figure 12. Weekend neural network results.
Figure 13. Results of the neural network from Monday to Friday.
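Grouping daily load curves with Soft-DTW k-means (k = 5, as in Figure 4) is directly supported by tslearn; a minimal sketch with placeholder data:

```python
import numpy as np
from tslearn.clustering import TimeSeriesKMeans
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

# curves: (n_customer_days, 24, 1) daily energy profiles from AMI meters.
curves = np.random.rand(500, 24, 1)                  # placeholder data
curves = TimeSeriesScalerMeanVariance().fit_transform(curves)

km = TimeSeriesKMeans(n_clusters=5, metric="softdtw",
                      metric_params={"gamma": 0.5}, random_state=0)
labels = km.fit_predict(curves)                      # cluster id per curve

# Each cluster's curves then form one training set for the dense classifier
# that flags consumption below the learned normal pattern (sub-registration).
```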
19 pages, 3451 KiB  
Article
High-Resolution Remotely Sensed Evidence Shows Solar Thermal Power Plant Increases Grassland Growth on the Tibetan Plateau
by Naijing Liu, Huaiwu Peng, Zhenshi Zhang, Yujin Li, Kai Zhang, Yuehan Guo, Yuzheng Cui, Yingsha Jiang, Wenxiang Gao and Donghai Wu
Remote Sens. 2024, 16(22), 4266; https://doi.org/10.3390/rs16224266 - 15 Nov 2024
Viewed by 678
Abstract
Solar energy plays a crucial role in mitigating greenhouse gas emissions in the context of global climate change. However, its deployment for green electricity generation can significantly influence regional climate and vegetation dynamics. While prior studies have examined the impacts of solar power plants on vegetation, the accuracy of these assessments has often been constrained by the availability of publicly accessible multispectral, high-resolution remotely sensed imagery. Given the abundant solar energy resources and the ecological significance of the Tibetan Plateau, a thorough evaluation of the vegetation effects associated with solar power installations is warranted. In this study, we utilize sub-meter resolution imagery from the GF-2 satellite to reconstruct the fractional vegetation cover (FVC) at the Gonghe solar thermal power plant through image classification, in situ sampling, and sliding window techniques. We then quantify the plant's impact on FVC by comparing data from the pre-installation and post-installation periods. Our findings indicate that the Gonghe solar thermal power plant is associated with a 0.02 increase in FVC compared to a surrounding control region (p < 0.05), representing a 12.5% increase relative to the pre-installation period. Notably, the enhancement in FVC is more pronounced in the outer ring areas than near the central tower. The observed enhancement in vegetation growth at the Gonghe plant suggests potential ecological and carbon storage benefits resulting from solar power plant establishment on the Tibetan Plateau. These findings underscore the necessity of evaluating the climate and ecological impacts of renewable energy facilities during the planning and design phases to ensure a harmonious balance between clean energy development and local ecological integrity.
(This article belongs to the Special Issue Remote Sensing of Mountain and Plateau Vegetation)
Graphical abstract.
Figure 1. Study area. (a) Geolocation and true-color composite image of the Gonghe solar thermal power plant, captured by GF-2 at a spatial resolution of 0.8 m; (b–d) detailed views of the plant, with their locations indicated in (a).
Figure 2. Workflow of the study. The primary steps include data preprocessing, land cover classification, fractional vegetation cover (FVC) data preparation, FVC reconstruction, and assessment of FVC impacts within the Gonghe solar thermal power plant.
Figure 3. Confusion matrices of the soft-voting classification: (a) training samples; (b) validation samples. The F1 score and kappa value for the validation samples are detailed in the text.
Figure 4. Soft-voting classification results of the Gonghe solar thermal power plant, detailed in (b–d) with positions indicated in (a). Bare land and impervious surfaces are shown in brown, reflecting mirrors in white, and grassland in green.
Figure 5. FVC reconstruction results of the Gonghe solar thermal power plant in 2020, detailed in (b–d) with locations indicated in (a).
Figure 6. Spatial distribution of the FVC difference and ΔFVC of the Gonghe solar thermal power plant between 2017 and 2020. (a) Spatial distribution of FVC differences with the boundaries of the mirror field and control region; (b) boxplots of FVC differences in the mirror field and control region, with ΔFVC values and the significance of the two-sample t-test given in the text.
Figure 7. Distribution of the FVC difference in ring regions around the central tower and the power plant between 2017 and 2020. (a) Spatial arrangement of the power plant rings (Ring_p) and control region rings (Ring_c); the control rings are spaced at 100 m intervals, extending 0–500 m beyond the plant boundary. (b) Average FVC difference per ring, with standard deviations shown as the shaded area.
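Once pixels are classified, FVC reconstruction with a sliding window reduces to the local fraction of grassland pixels. A sketch with SciPy's uniform filter standing in for the paper's window routine; the window size is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fvc_from_classes(class_map: np.ndarray, grass_value: int,
                     win: int = 25) -> np.ndarray:
    """FVC = fraction of pixels labelled grassland within a win x win window.
    class_map: 2-D array of land-cover labels from the soft-voting classifier."""
    grass = (class_map == grass_value).astype(np.float32)
    return uniform_filter(grass, size=win)   # local mean = local fraction

# Change assessment: mean FVC difference, plant vs. surrounding control ring,
# e.g. delta = fvc_post[plant_mask].mean() - fvc_post[control_mask].mean()
```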
26 pages, 33394 KiB  
Article
Feature Intensification Using Perception-Guided Regional Classification for Remote Sensing Image Super-Resolution
by Yinghua Li, Jingyi Xie, Kaichen Chi, Ying Zhang and Yunyun Dong
Remote Sens. 2024, 16(22), 4201; https://doi.org/10.3390/rs16224201 - 11 Nov 2024
Cited by 1 | Viewed by 958
Abstract
In recent years, super-resolution technology has gained widespread attention in the field of remote sensing. Despite advancements, current methods often employ uniform reconstruction techniques across entire remote sensing images, neglecting the inherent variability in spatial frequency distributions, particularly the distinction between high-frequency texture regions and smoother areas. This introduces redundant computation on smooth regions and fails to optimize the reconstruction of regions of higher complexity. To address these issues, we propose the Perception-guided Classification Feature Intensification (PCFI) network. PCFI integrates two key components: a compressed sensing classifier that optimizes speed and performance, and a deep texture interaction fusion module that enhances content interaction and detail extraction. The network mitigates the tendency of Transformers to favor global information over local details, achieving improved integration of image information through residual connections across windows. Furthermore, a classifier is employed to segment sub-image blocks prior to super-resolution, enabling efficient large-scale processing. Experimental results on the AID dataset indicate that PCFI achieves state-of-the-art performance, with a PSNR of 30.87 dB and an SSIM of 0.8131, while also delivering a 4.33% improvement in processing speed compared to the second-best method.
Figure 1. Overall structure of the Perception-guided Classification Feature Intensification network and the integrated compressive-sensing-based perception classifier module.
Figure 2. An illustration of the depth–texture interaction fusion module. The figure is divided into three sections: top-left, the DTIF module; top-right, the DTIT block; bottom, the CWIA block.
Figure 3. The process of soft pooling (red arrows denote the forward operation; the SoftPool output is the standard sum of all weighted values γ̃ in the kernel neighborhood N).
Figure 4. The process of N-Gram window sliding (when sliding the window over single-character paddings, forward N-Gram features are obtained through the WSA operation).
Figure 5. Typical samples of the AID dataset from 30 different scene classifications.
Figure 6. Comparison of FLOPs and parameters, as well as PSNR/SSIM performance, with other methods on the AID dataset at ×2 scale.
Figure 7. Visual comparison on AID datasets at ×3 scale. The patches used for comparison are marked with red boxes.
Figure 8. Visual comparison on the SAR and AVIRIS datasets at ×2 scale. The patches used for comparison are marked with red boxes.
Figure 9. Visual comparison of images from the Manga109, BSD100, Set5, Set14, and Urban100 datasets at ×3 scale. The patches used for comparison are marked with red boxes.
Full article ">
23 pages, 5944 KiB  
Article
Examining Sentiment Analysis for Low-Resource Languages with Data Augmentation Techniques
by Gaurish Thakkar, Nives Mikelić Preradović and Marko Tadić
Eng 2024, 5(4), 2920-2942; https://doi.org/10.3390/eng5040152 - 7 Nov 2024
Viewed by 1062
Abstract
This study investigates the influence of a variety of data augmentation techniques on sentiment analysis in low-resource languages, with a particular emphasis on Bulgarian, Croatian, Slovak, and Slovene. The primary research question is: can data augmentation improve sentiment analysis efficacy in low-resource languages? Our sub-questions look at how different augmentation methods affect performance, how effective WordNet-based augmentation is compared to other methods, and whether lemma-based augmentation techniques can be used, especially for Croatian sentiment tasks. Our data sources are sentiment-labelled evaluations in the selected languages, curated with additional annotations to standardise labels and mitigate ambiguities. Our findings show that techniques such as replacing words with synonyms, masked language model (MLM)-based generation, and permuting and combining sentences can only make training datasets slightly bigger, and they provide limited improvements in model accuracy for low-resource language sentiment classification. WordNet-based techniques, in particular, exhibit a marginally superior performance compared to other methods; however, they fail to substantially improve classification scores. From a practical perspective, this study emphasises that conventional augmentation techniques may require refinement to address the complex linguistic features inherent to low-resource languages, particularly in mixed-sentiment and context-rich instances. Theoretically, our results indicate that future research should concentrate on augmentation strategies that introduce novel syntactic structures rather than relying solely on lexical variations, as current models may not effectively leverage synonymic or lemmatised data. These insights emphasise the nuanced requirements for meaningful data augmentation in low-resource linguistic settings and contribute to the advancement of sentiment analysis approaches.
(This article belongs to the Special Issue Feature Papers in Eng 2024)
Figure 1. Comparison of F1 scores for Bulgarian datasets. Our proposed methods are labelled with the prefix "expanded".
Figure 2. Comparison of F1 scores for Croatian datasets. Our proposed methods are labelled with the prefix "expanded".
Figure 3. Comparison of F1 scores for Slovak datasets. Our proposed methods are labelled with the prefix "expanded".
Figure 4. Comparison of F1 scores for Slovene datasets. Our proposed methods are labelled with the prefix "expanded".
Full article ">
20 pages, 5623 KiB  
Article
Tropical Cyclone Wind Direction Retrieval Based on Wind Streaks and Rain Bands in SAR Images
by Zhancai Liu, Hongwei Yang, Weihua Ai, Kaijun Ren, Shensen Hu and Li Wang
Remote Sens. 2024, 16(20), 3837; https://doi.org/10.3390/rs16203837 - 15 Oct 2024
Cited by 1 | Viewed by 913
Abstract
Tropical cyclones (TCs) are associated with severe weather phenomena, making accurate wind field retrieval crucial for TC monitoring. SAR's high-resolution imaging capability provides detailed information for TC observation, and wind speed calculations require wind direction as prior information. Therefore, utilizing SAR images to retrieve TC wind fields is of significant importance. This study introduces a novel approach for retrieving wind direction from SAR images of TCs through the classification of TC sub-images. The method utilizes a transfer learning-based Inception V3 model to identify wind streaks (WSs) and rain bands in SAR images under TC conditions. For sub-images containing WSs, the Mexican-hat wavelet transform is applied, while for sub-images containing rain bands, an edge detection technique is used to locate the center of the TC eye and subsequently the tangent to the spiral rain bands is employed to determine the wind direction associated with the rain bands. Wind direction retrieval from 10 SAR TC images showed an RMSD of 19.52° and a correlation coefficient of 0.96 when compared with ECMWF and HRD observation wind directions, demonstrating satisfactory consistency and providing highly accurate TC wind directions. These results confirm the method's potential applications in TC wind direction retrieval.
Figure 1. Examples of geophysical phenomena in SAR images. The first row shows wind streaks (G), the second row rain bands (I), and the third row other geophysical phenomena (A).
Figure 2. Flowchart of retraining the recognition model based on transfer learning and of wind direction retrieval from TC SAR images.
Figure 3. The architecture of transfer learning.
Figure 4. The wind direction at rain-band locations in Northern Hemisphere TCs.
Figure 5. Accuracy and loss of the training set (blue lines) and validation set (orange lines).
Figure 6. Sub-image recognition results of SAR TC images. "G" denotes wind streaks, "I" rain bands, and "A" other geophysical phenomena.
Figure 7. Wind direction retrieval from SAR TC sub-images using the 2-D Mexican-hat wavelet transform: (a) SAR sub-image; (b) FFT result; (c) Mexican-hat wavelet transform result; (d) retrieved wind direction of the sub-image.
Figure 8. Canny edge detection results for TC Douglas: NRCS for VV and VH polarizations in (a,d); rain-band distributions in (b,e); TC eye positions in (c,f).
Figure 9. Canny edge detection results for TC Larry: NRCS for VV and VH polarizations in (a,d); rain-band distributions in (b,e); TC eye positions in (c,f).
Figure 10. Schematic diagram of wind directions with 180° ambiguity and the reference wind direction. For two predicted wind directions θ_p1 and θ_p2 that are aligned but point in opposite directions, the smaller Δθ calculated relative to the reference wind direction θ_t indicates the direction closer to the true wind direction.
Figure 11. The wind-field rotational pattern of TCs: (a) Northern Hemisphere; (b) Southern Hemisphere.
Figure 12. Wind direction retrieval results for TC Douglas, acquired on 25 July 2020: (a) quick-look of the VV-polarized SAR image; (b) retrieved wind directions; (c) ECMWF wind direction; (d) comparison of the retrieved wind direction with ECMWF and HRD observation wind directions.
Figure 13. Wind direction retrieval results for TC Larry, acquired on 7 September 2021: (a) quick-look of the VV-polarized SAR image; (b) retrieved wind directions; (c) ECMWF wind direction; (d) comparison of the retrieved wind direction with ECMWF and HRD observation wind directions.
Figure 14. Comparison of wind directions retrieved from 10 SAR TC images with ECMWF reanalysis and HRD observation wind directions.
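A simplified orientation estimate for wind-streak sub-images: the paper applies a 2-D Mexican-hat wavelet, but the core idea — spectral energy concentrates perpendicular to the streaks — can be sketched with a plain FFT peak, leaving the 180° ambiguity to be resolved against a reference direction:

```python
import numpy as np

def streak_direction(sub_img: np.ndarray) -> float:
    """Estimate wind-streak orientation (degrees) of a SAR sub-image from
    the brightest off-center peak of its 2-D spectrum. Streaks are roughly
    wind-aligned, and the result carries a 180-degree ambiguity."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(sub_img - sub_img.mean())))
    cy, cx = np.array(f.shape) // 2
    f[cy - 2:cy + 3, cx - 2:cx + 3] = 0          # suppress the DC region
    py, px = np.unravel_index(np.argmax(f), f.shape)
    # Spectral energy lies perpendicular to the streaks, hence the +90 deg.
    angle = np.degrees(np.arctan2(py - cy, px - cx))
    return (angle + 90.0) % 180.0                # streak (wind) axis
```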
19 pages, 4794 KiB  
Article
An Efficient Ensemble Approach for Brain Tumors Classification Using Magnetic Resonance Imaging
by Zubair Saeed, Tarraf Torfeh, Souha Aouadi, (Jim) Xiuquan Ji and Othmane Bouhali
Information 2024, 15(10), 641; https://doi.org/10.3390/info15100641 - 15 Oct 2024
Cited by 1 | Viewed by 1680
Abstract
Tumors in the brain can be life-threatening, making early and precise detection crucial for effective treatment and improved patient outcomes. Deep learning (DL) techniques have shown significant potential in automating the early diagnosis of brain tumors by analyzing magnetic resonance imaging (MRI), offering a more efficient and accurate approach to classification. Deep convolutional neural networks (DCNNs), a sub-field of DL, can analyze MRI data rapidly and accurately and, as such, assist human radiologists, facilitating quicker diagnoses and earlier treatment initiation. This study presents an ensemble of three high-performing DCNN models, i.e., DenseNet169, EfficientNetB0, and ResNet50, for accurate classification of brain tumor and non-tumor MRI samples. Our proposed ensemble model demonstrates significant improvements across various evaluation parameters compared to individual state-of-the-art (SOTA) DCNN models. We implemented ten SOTA DCNN models, i.e., EfficientNetB0, ResNet50, DenseNet169, DenseNet121, SqueezeNet, ResNet34, ResNet18, VGG16, VGG19, and LeNet5, and provide a detailed performance comparison. We evaluated these models using two learning rates (LRs) of 0.001 and 0.0001 and two batch sizes (BSs) of 64 and 128 and identified the optimal hyperparameters for each model. Our findings indicate that the ensemble approach outperforms individual models, achieving 92% accuracy, 90% precision, 92% recall, and an F1 score of 91% at a 64 BS and 0.0001 LR. This study not only highlights the superior performance of the ensemble technique but also offers a comprehensive comparison with the latest research.
(This article belongs to the Special Issue Detection and Modelling of Biosignals)
Graphical abstract.
Figure 1. Block diagram of our proposed methodology.
Figure 2. Dataset samples: (a) no tumor, (b) glioma, (c) meningioma, and (d) pituitary.
Figure 3. Overall dataset distribution for each class.
Figure 4. (a) Accuracy, (b) precision, (c) recall, and (d) F1 score of the proposed ensemble model (64 BS, 0.0001 LR), EfficientNetB0 (64 BS, 0.0001), ResNet50 (64 BS, 0.0001), DenseNet169 (64 BS, 0.001), DenseNet121 (64 BS, 0.001), SqueezeNet (64 BS, 0.001), ResNet34 (64 BS, 0.001), ResNet18 (64 BS, 0.001), VGG16 (128 BS, 0.001), VGG19 (128 BS, 0.001), and LeNet5 (64 BS, 0.001).
Figure 5. Training and validation plots of (a) the proposed ensemble model (64 BS, 0.0001 LR), (b) EfficientNetB0 (64 BS, 0.0001), (c) ResNet50 (64 BS, 0.0001), (d) DenseNet169 (64 BS, 0.001), (e) DenseNet121 (64 BS, 0.001), (f) SqueezeNet (64 BS, 0.001), (g) ResNet34 (64 BS, 0.001), (h) ResNet18 (64 BS, 0.001), (i) VGG16 (128 BS, 0.001), (j) VGG19 (128 BS, 0.001), and (k) LeNet5 (64 BS, 0.001).
Figure 6. False predictions of the proposed ensemble technique.
30 pages, 22835 KiB  
Review
Ceramics for Microelectromechanical Systems Applications: A Review
by Ehsan Fallah Nia and Ammar Kouki
Micromachines 2024, 15(10), 1244; https://doi.org/10.3390/mi15101244 - 9 Oct 2024
Viewed by 4165
Abstract
A comprehensive review of the application of different ceramics to MEMS devices is presented. The main ceramic materials used for MEMS systems and devices, including alumina, zirconia, aluminum nitride, silicon nitride, and LTCC, are introduced. Conventional and new methods of fabricating each material are explained based on the literature, along with the advantages of the new approaches, mainly additive manufacturing, i.e., 3D-printing technologies. Various manufacturing processes with their relevant sub-techniques are detailed, and those most suitable for MEMS applications are highlighted together with their properties. In the main body of this paper, each material is categorized and explained with respect to its MEMS applications. The majority of works fall within three main classifications: (i) using ceramics as a substrate on which MEMS devices are mounted or fabricated; (ii) using ceramics as part of the materials of a MEMS device, including monolithic fabrication of MEMS and ceramics; and (iii) using ceramics as a packaging solution for MEMS devices. We elaborate on how ceramics may be superior substitutes for other materials when delicate MEMS-based systems need to be assembled or packaged with a simpler fabrication process, as well as on their advantages when they need to operate in harsh environments.
(This article belongs to the Special Issue The 15th Anniversary of Micromachines)
Show Figures

Figure 1: Conventional manufacturing of ceramics from beginning to final product [55].
Figure 2: Different types of additive manufacturing and their techniques, based on the ISO classification [56,57].
Figure 3: AM approaches for Si₃N₄ manufacturing: (A) SLS/SLM; (B) SLA; (C) LIS; (D) DLP, LCD; (E) DIW; (F) FDM; (G) BJ; (H) 3D printing (3DP); (I) LOM [63].
Figure 4: Five different 3D-printing techniques: Digital Light Processing (DLP), Material Jetting (MJ), Stereolithography (SLA), Fused Deposition Modeling (FDM), and Direct Ink Writing (DIW) [64].
Figure 5: Microstructure of (a) monolithic and (b) multi-structure fabrication [66].
Figure 6: Shrinkage rate of the sintered ceramic with different sintering-material contents: (a) TiO₂ (top), (b) CaCO₃ (left), and (c) MgO (right), in all directions (X in black, Y in red, Z in blue); straight lines indicate the shrinkage before adding materials [67].
Figure 7: DLP technique for AlN 3D manufacturing [69].
Figure 8: Effect of laser power and velocity variation on the surface morphology of a zirconia sample [76].
Figure 9: Effect of preheat temperature on sample cracks [76].
Figure 10: LTCC substrate and surface metallization by the MJ technique [80].
Figure 11: Machines for (a) flat and (b) curved printing of LTCC [80].
Figure 12: Microstrip patch antenna and RF measurements, including S11, VSWR, and gain: (a) 3D perspective of the circuit; (b) fabricated circuit; (c) S11 simulation and measurement; (d) VSWR simulation; (e) gain and efficiency; (f) 3D radiation [80].
Figure 13: Fabricated curved LTCC with metallization on top by the MJ technique: (a) fabricated curved LTCC; (b) schematic diagram of the curved surface; (c) circuit shrinkage after sintering, with side views [80].
Figure 14: LTCC powder-preparation steps [81].
Figure 15: LTCC slurry and tape preparation [81].
Figure 16: (a–i) Fabrication process of the Pt film; (j) overall view of the sensor; (k) zoomed view of the sensitive area. Right: performance of the sensor (resistance variation vs. temperature) [91].
Figure 17: Fabrication process of the AR lens: (a) fabrication steps of the AR lens; (b) polished surface of the curved aluminum; (c) nanoporous alumina on the curved aluminum; (d) final optical image of the lens (left). Comparison of the AR lens with a normal one (top right). (a) AR lens under yellow light; (b) nanopillars created by anodization of aluminum (bottom right) [95].
Figure 18: Alumina membrane gas sensor fabrication process (left); gas sensor under test at different temperatures for (a–c) Si₃N₄ and (d–f) Al₂O₃ μHP (right) [98].
Figure 19: Fabricated bridge sealed with alumina and silicon nitride (left); measured S-parameters (right) [99].
Figure 20: Fabricated alumina nanopores with high aspect ratio (left): (a) two-step anodization process; (b) Cu seed layer deposition; (c) photoresist spin coating; (d) photolithography and patterning; (e) Cu electroplating; (f) removal of photoresist; (g) etching of the Cu seed layer; (h) etching of the AAO membrane. Thin-film packaging using glow discharge (center) and top view illustrating the anode and cathode metals, (a) top view and (b) SEM image (right) [100,101].
Figure 21: Flip-chip assembly on zirconia-silicate [103].
Figure 22: Fabricated flexible solar cell [104].
Figure 23: Fabrication process of the micro-thruster [105].
Figure 24: Ammonia sensor and readout circuit (left); fabricated circuit (center); measured results of the sensor (right) [106].
Figure 25: Polishing machine (left); gaps found on the surface of AlN after polishing (right) [108].
Figure 26: (a) Top view of a PMUT device; (b) cross-section with the different layers, including the AlN piezo layer [112].
Figure 27: Different types of PMUT devices in arrays: (a) top view of a PMUT device; (b) 3D design of PMUT arrays; (c) top view of PMUT arrays; (d) dimensions of PMUT arrays as a MEMS chip on a CMOS device [112].
Figure 28: (a) 3D view of an AlN Lamb wave resonator; (b) cross-section of an AlN BAW resonator; (c) cross-section of a resonator with a centered anchor; (d) cross-section of a conventional Lamb wave resonator [112].
Figure 29: SEM image of the fabricated 3 × 1 optical switch with zoomed views and dimensions: (a) top view of the fabricated device; (b) mechanical stopper gap; (c) switching actuator gap; (d) air gap of the gap-closing actuator; (e) air-gap-closing interface; (f) etch profile of the optical stack [129].
Figure 30: Silicon nitride sealing fabrication process [130].
Figure 31: LTCC layers with embedded vias, cavities, and metallization as an active substrate: (left) polished surface with vias on top; (right) active components and MEMS devices on top after final monolithic fabrication [134].
Figure 32: Capacitive MEMS switch fabricated with the monolithic LTCC-MEMS process: (a) top image of the fabricated switch; (b) enlarged view; (c) SEM image (left). LTCC-MEMS process flow (right) [134].
Figure 33: Cavities and via holes acting as a fluidic system for sensing applications, with an embedded sensor [139].
Figure 34: Cantilever fabricated with LTCC ceramic materials (left); LTCC hotplate (right) [139].
Figure 35: LTCC humidity sensors made of different LTCC ceramic materials [139].
15 pages, 11845 KiB  
Article
Situational Awareness Classification Based on EEG Signals and Spiking Neural Network
by Yakir Hadad, Moshe Bensimon, Yehuda Ben-Shimol and Shlomo Greenberg
Appl. Sci. 2024, 14(19), 8911; https://doi.org/10.3390/app14198911 - 3 Oct 2024
Cited by 1 | Viewed by 1261
Abstract
Situational awareness detection and the characterization of mental states play a vital role in medicine and many other fields. An electroencephalogram (EEG) is one of the most effective tools for identifying and analyzing cognitive stress. Yet the measurement, interpretation, and classification of EEG signals remain challenging. This study introduces a novel machine learning-based approach to assist in evaluating situational awareness using EEG signals and spiking neural networks (SNNs) based on a unique spike continuous-time neuron (SCTN). The implemented biologically inspired SNN architecture is used for effective EEG feature extraction by applying time–frequency analysis techniques, and it allows adept detection and analysis of the various frequency components embedded in the different EEG sub-bands. The EEG signal is encoded into spikes and then fed into an SNN model, which is well suited to the serial order of the EEG data. We utilize the SCTN-based resonator for EEG feature extraction in the frequency domain, which demonstrates high correlation with classical FFT features. A new SCTN-based 2D neural network is introduced for efficient EEG feature mapping, aiming to achieve a spatial representation of each EEG sub-band. To validate and evaluate the performance of the proposed approach, a common, publicly available EEG dataset is used. The experimental results show that, using the extracted EEG frequency features and the SCTN-based SNN classifier, the mental state can be classified with an average accuracy of 96.8% on this dataset. Our proposed method outperforms existing machine learning-based methods and demonstrates the advantages of using SNNs for situational awareness detection and mental state classification.
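The SCTN resonator at the heart of this pipeline is specific to the paper and is not reproduced here. Two generic ingredients the abstract leans on can, however, be sketched: encoding a raw EEG channel into spikes, and the classical FFT band-power features that the resonator outputs are reported to correlate with. The encoding scheme (send-on-delta), the threshold, and the band limits below are illustrative assumptions, not the authors' implementation.

```python
# Generic sketch (not the authors' SCTN): (i) send-on-delta spike encoding of
# one EEG channel and (ii) FFT band power over the five classical sub-bands.
import numpy as np

EEG_BANDS = {  # conventional sub-band limits in Hz (an assumption, not the paper's)
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 45),
}

def delta_encode(signal: np.ndarray, threshold: float) -> np.ndarray:
    """Emit +1/-1 spikes whenever the signal drifts more than `threshold`
    from the last emitted level; 0 otherwise (send-on-delta encoding)."""
    spikes = np.zeros(len(signal), dtype=np.int8)
    level = signal[0]
    for i, v in enumerate(signal):
        if v - level >= threshold:
            spikes[i], level = 1, v
        elif level - v >= threshold:
            spikes[i], level = -1, v
    return spikes

def band_powers(signal: np.ndarray, fs: float) -> dict[str, float]:
    """Mean FFT power of one channel inside each EEG sub-band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: float(power[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in EEG_BANDS.items()}

# Example: 10 s of synthetic data sampled at 128 Hz
x = np.random.randn(1280)
print(delta_encode(x, threshold=0.5).sum(), band_powers(x, fs=128)["alpha"])
```

In the paper, the FFT features serve only as the reference against which the resonator outputs are validated; the features actually classified come from the SCTN-based sub-band maps.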
Show Figures

Figure 1: (a) SNN-based resonator architecture and (b) the neurons' output.
Figure 2: Overall SNN-based architecture: (a) EEG electrodes positioned according to the 10–20 standard; (b) SNN-based resonators used for feature extraction through supervised STDP learning; (c) feature mapping, with an EEG topologic map consisting of 11 × 11 SCTNs; (d) SCTN-based classification network trained with unsupervised STDP.
Figure 3: SCTN-based resonator frequency response to a chirp signal across the EEG sub-band ranges.
Figure 4: EEG frequency features: (a,c) SNN-based spikegrams and (b,d) FFT spectrograms.
Figure 5: EEG feature mapping: (a) EEG topologic map consisting of 11 × 11 SCTNs for each sub-band; (b) weight distribution for the 14 synapses of each SCTN, according to an EEG electrode position map.
Figure 6: EEG sub-band topography maps for FFT vs. SCTN. SCTN-based topographic maps (one per sub-band) are compared to the spatial maps created using an FFT.
Figure 7: EEG topography maps for the five EEG sub-bands (delta, theta, alpha, beta, and gamma).
Figure 8: The activity measured at the F3 EEG electrode for the delta sub-band.