Search Results (3,056)

Search Parameters:
Keywords = recurrent neural network

20 pages, 12596 KiB  
Article
Spectral Data-Driven Prediction of Soil Properties Using LSTM-CNN-Attention Model
by Yiqiang Liu, Luming Shen, Xinghui Zhu, Yangfan Xie and Shaofang He
Appl. Sci. 2024, 14(24), 11687; https://doi.org/10.3390/app142411687 - 14 Dec 2024
Abstract
Accurate prediction of soil properties is essential for sustainable land management and precision agriculture. This study presents an LSTM-CNN-Attention model that integrates temporal and spatial feature extraction with attention mechanisms to improve predictive accuracy. Utilizing the LUCAS soil dataset, the model analyzes spectral data to estimate key soil properties, including organic carbon (OC), nitrogen (N), calcium carbonate (CaCO3), and pH (in H2O). The Long Short-Term Memory (LSTM) component captures temporal dependencies, the Convolutional Neural Network (CNN) extracts spatial features, and the attention mechanism highlights critical information within the data. Experimental results show that the proposed model achieves excellent prediction performance, with coefficient of determination (R2) values of 0.949 (OC), 0.916 (N), 0.943 (CaCO3), and 0.926 (pH), along with corresponding ratio of percent deviation (RPD) values of 3.940, 3.737, 5.377, and 3.352. Both R2 and RPD values exceed those of traditional machine learning models, such as partial least squares regression (PLSR), support vector machine regression (SVR), and random forest (RF), as well as deep learning models like CNN-LSTM and Gated Recurrent Unit (GRU). Additionally, the proposed model outperforms S-AlexNet in effectively capturing temporal and spatial patterns. These findings emphasize the potential of the proposed model to significantly enhance the accuracy and reliability of soil property predictions by capturing both temporal and spatial patterns effectively.
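The attention step described in this abstract can be sketched in isolation: given a sequence of per-step feature vectors (such as LSTM outputs over spectral bands), a learned scoring vector produces softmax weights that pool the sequence into a single context vector. The shapes and the random scoring vector below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Pool a sequence of per-step features into one context vector.

    hidden_states: (T, d) array of per-step features (e.g., LSTM outputs).
    w: (d,) scoring vector (learned in practice; random here).
    """
    scores = hidden_states @ w          # (T,) relevance score per step
    alpha = softmax(scores)             # attention weights, sum to 1
    context = alpha @ hidden_states     # (d,) weighted sum of features
    return context, alpha

rng = np.random.default_rng(0)
H = rng.standard_normal((120, 16))      # 120 spectral "steps", 16 features each
w = rng.standard_normal(16)
context, alpha = attention_pool(H, w)
```

The softmax weights make the pooling differentiable, so the scoring vector can be trained jointly with the LSTM and CNN components.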
Figures:
Figure 1: Sampling points of the European Union.
Figure 2: Initial absorbance spectra and preprocessed spectral curves for mineral soil samples from the LUCAS 2015 topsoil database: (a) the original spectra; (b) the preprocessed spectra. Both panels show the 5th, 16th, 50th, 84th, and 95th percentiles to illustrate variability within the dataset.
Figure 3: Diagram of the LSTM model structure featuring the forget gate, input gate, and output gate.
Figure 4: Self-attention mechanism.
Figure 5: The framework of the proposed LSTM-CNN-Attention model.
Figure 6: Flowchart of soil property prediction with the LSTM-CNN-Attention method.
Figure 7: KDE plots of soil properties for the total dataset, training set, and test set: (a) OC, (b) N, (c) CaCO3, and (d) pH(H2O).
Figure 8: KDE plots of PCA-transformed spectral data for the total dataset, training set, and test set: (a) PC1 and (b) PC2.
Figure 9: Actual vs. predicted values of the proposed framework: (a) OC, (b) N, (c) CaCO3, and (d) pH(H2O).
Figure 10: Residual comparison: (a) OC, (b) N, (c) CaCO3, and (d) pH(H2O).
Figure 11: Line charts of (a) R² and (b) RPD for the proposed and other models.
18 pages, 2633 KiB  
Article
Software Reliability Prediction Based on Recurrent Neural Network and Ensemble Method
by Wafa Alshehri, Salma Kammoun Jarraya and Arwa Allinjawi
Computers 2024, 13(12), 335; https://doi.org/10.3390/computers13120335 - 13 Dec 2024
Abstract
Software reliability is a crucial factor in determining software quality quantitatively. It is also used to estimate the software testing duration. In software reliability testing, traditional parametric software reliability growth models (SRGMs) are effectively used. Nevertheless, a single parametric model cannot provide accurate predictions in all cases. Moreover, non-parametric models have proven to be efficient for predicting software reliability as alternatives to parametric models. In this paper, we adopted a deep learning method for software reliability testing in computer vision systems. Also, we focused on critical computer vision applications that need high reliability. We propose a new deep learning-based model that is combined and based on the ensemble method to improve the performance of software reliability testing. The experimental results of the new model architecture present fairly accurate predictive capability compared to other existing single Neural Network (NN) based models.
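The ensemble idea the abstract relies on can be illustrated with a simple weighted combiner: each base network's predictions are weighted inversely to its validation error, so more accurate models contribute more. The base predictions, error values, and weighting rule below are illustrative assumptions, not the authors' exact ensemble.

```python
import numpy as np

def weighted_ensemble(preds, errors):
    """Combine base-model predictions with inverse-error weights.

    preds: (n_models, n_points) predictions from the base NN models.
    errors: (n_models,) validation MSE of each base model.
    """
    w = 1.0 / np.asarray(errors, dtype=float)  # better model -> larger weight
    w /= w.sum()                               # normalize weights to sum to 1
    return w @ np.asarray(preds, dtype=float), w

# Three hypothetical base models predicting cumulative failure counts.
preds = np.array([[10.0, 12.0, 15.0],
                  [11.0, 13.0, 14.0],
                  [ 9.0, 12.5, 16.0]])
errors = np.array([0.5, 0.25, 1.0])            # model 2 is the most accurate
combined, w = weighted_ensemble(preds, errors)
```

The combined curve tracks the most reliable base model while still hedging across the others, which is the usual motivation for ensembling single-NN reliability predictors.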
Figures:
Figure 1: Software failure process [2].
Figure 2: An example of software reliability data [2].
Figure 3: The proposed model architecture for software reliability testing.
Figure 4: RNN architecture [3].
Figure 5: MSE results over the models.
Figure 6: MAE results over the models.
Figure 7: Actual and predicted values for (a) Dataset 1, (b) Dataset 2, (c) Dataset 3, and (d) Dataset 4.
22 pages, 7903 KiB  
Article
Forecasting Forex Market Volatility Using Deep Learning Models and Complexity Measures
by Pavlos I. Zitis, Stelios M. Potirakis and Alex Alexandridis
J. Risk Financial Manag. 2024, 17(12), 557; https://doi.org/10.3390/jrfm17120557 - 13 Dec 2024
Abstract
In this article, we examine whether incorporating complexity measures as features in deep learning (DL) algorithms enhances their accuracy in predicting forex market volatility. Our approach involved the gradual integration of complexity measures alongside traditional features to determine whether their inclusion would provide additional information that improved the model’s predictive accuracy. For our analyses, we employed recurrent neural networks (RNNs), long short-term memory (LSTM), and gated recurrent units (GRUs) as DL model architectures, while using the Hurst exponent and fuzzy entropy as complexity measures. All analyses were conducted on intraday data from four highly liquid currency pairs, with volatility estimated using the Range-Based estimator. Our findings indicated that the inclusion of complexity measures as features significantly enhanced the accuracy of DL models in predicting volatility. In achieving this, we contribute to a relatively unexplored area of research, as this is the first instance of such an approach being applied to the prediction of forex market volatility. Additionally, we conducted a comparative analysis of the three models’ performance, revealing that the LSTM and GRU models consistently demonstrated a superior accuracy. Finally, our findings also have practical implications, as they may assist risk managers and policymakers in forecasting volatility in the forex market.
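The target variable here is range-based volatility, computed from intraperiod highs and lows rather than close-to-close returns. The Parkinson estimator is one common range-based form and serves as an illustration; the authors' exact estimator may differ, and the price data below is made up.

```python
import math

def parkinson_volatility(highs, lows):
    """Range-based volatility (Parkinson form).

    sigma = sqrt( (1 / (4 ln 2 * n)) * sum_t ln(H_t / L_t)^2 )
    where H_t and L_t are the high and low of period t.
    """
    n = len(highs)
    s = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    return math.sqrt(s / (4 * math.log(2) * n))

# Hypothetical intraday EUR/USD highs and lows over four periods.
highs = [1.1050, 1.1030, 1.1080, 1.1060]
lows  = [1.1000, 1.0990, 1.1020, 1.1010]
vol = parkinson_volatility(highs, lows)
```

Using the high-low range extracts more information per period than a single closing price, which is why range-based estimators are popular for intraday volatility targets.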
(This article belongs to the Special Issue Machine Learning Applications in Finance, 2nd Edition)
Figures:
Figure 1: Schematic representation of a simple RNN cell (Yu et al. 2019).
Figure 2: Schematic representation of an LSTM cell (Yu et al. 2019).
Figure 3: Schematic representation of a gated recurrent unit (GRU) cell (Yu et al. 2019).
Figure 4: Partitioning of the dataset and the grid-search process: (a) division of the dataset into training and test sets; (b) schematic representation of grid search and cross-validation.
Figure 5: Evolution of four forex currency exchange rate prices (blue curves, left vertical axis) and corresponding Range-Based volatility (grey curves, right vertical axis) from 28 August 2014 to 29 December 2023: (a) EUR/USD, (b) GBP/USD, (c) USD/CAD, and (d) USD/CHF. The vertical red lines separate the data used for model training (left) from the data used for model testing (right).
Figures 6–9: Actual Range-Based volatility (grey curves) and model predictions (colored curves) by feature case and DL model over the test period (8 April 2020 to 29 December 2023), for EUR/USD, GBP/USD, USD/CAD, and USD/CHF, respectively. In each figure, rows correspond to the RNN, LSTM, and GRU models and columns to the four feature cases: Case I (Range-Based Volatility only), Case II (plus High and Low), Case III (plus Hurst exponent and FuzzyEn), and Case IV (all of the above).

Model performance on the test dataset was subsequently evaluated using four statistical metrics (MAE, RMSE, NRMSE, and DS), as detailed in Section 3.4, for each of the three models, across all currency rates and all four feature sets. The results are presented in Table 2.
18 pages, 10262 KiB  
Article
Fault Diagnosis of Mechanical Rolling Bearings Using a Convolutional Neural Network–Gated Recurrent Unit Method with Envelope Analysis and Adaptive Mean Filtering
by Huiyi Zhu, Zhen Sui, Jianliang Xu and Yeshen Lan
Processes 2024, 12(12), 2845; https://doi.org/10.3390/pr12122845 - 12 Dec 2024
Abstract
Rolling bearings are vital components in rotating machinery, and their reliable operation is crucial for maintaining the stability and efficiency of mechanical systems. However, fault detection in rolling bearings is often hindered by noise interference in complex industrial environments. To overcome this challenge, this paper presents a novel fault diagnosis method for rolling bearings, combining Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs), integrated with the envelope analysis and adaptive mean filtering techniques. Initially, envelope analysis and adaptive mean filtering are applied to suppress random noise in the bearing signals, thereby enhancing the visibility of fault features. Subsequently, a deep learning model that combines a CNN and a GRU is developed: the CNN extracts spatial features, while the GRU captures the temporal dependencies between these features. The integration of the CNN and GRU significantly improves the accuracy and robustness of fault diagnosis. The proposed method is validated using the CWRU dataset, with the experimental results achieving an average accuracy of 99.25%. Additionally, the method is compared to four classical fault diagnosis models, demonstrating superior performance in terms of both diagnostic accuracy and generalization ability. The results, supported by various visualization techniques, show that the proposed approach effectively addresses the challenges of fault detection in rolling bearings under complex industrial conditions.
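The noise-suppression step can be sketched with a fixed-window mean filter: each sample is replaced by the average of its neighborhood, attenuating high-frequency noise. The paper's adaptive variant additionally tunes the filtering to the local signal; this fixed-window version, with a synthetic "fault signature" signal, only illustrates the core smoothing step.

```python
import numpy as np

def mean_filter(signal, window=5):
    """Sliding-window mean filter via convolution with a uniform kernel."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Synthetic bearing signal: a 5 Hz sinusoid buried in Gaussian noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
smoothed = mean_filter(noisy, window=9)
```

Averaging 9 samples reduces the noise variance by roughly a factor of 9 while barely attenuating the low-frequency fault component, which is the trade-off an adaptive window size manages automatically.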
Figures:
Figure 1: CNN structure diagram.
Figure 2: GRU network structure diagram.
Figure 3: Experimental diagram of the CWRU bearing equipment.
Figure 4: Initial data curves of the bearings.
Figure 5: Comparison of different filtering algorithms for the bearings.
Figure 6: Box plots of the 10 categories of bearing data: (a) original bearing data; (b) filtered bearing data.
Figure 7: CNN-GRU fault diagnosis model.
Figure 8: Accuracy and loss curves during training.
Figure 9: Confusion matrix on the test set.
Figure 10: Visualization of t-SNE clustering at different stages of the model.
39 pages, 6902 KiB  
Article
Supply Chains Problem During Crises: A Data-Driven Approach
by Farima Salamian, Amirmohammad Paksaz, Behrooz Khalil Loo, Mobina Mousapour Mamoudan, Mohammad Aghsami and Amir Aghsami
Modelling 2024, 5(4), 2001-2039; https://doi.org/10.3390/modelling5040104 - 12 Dec 2024
Abstract
Efficient management of hospital evacuations and pharmaceutical supply chains is a critical challenge in modern healthcare, particularly during emergencies. This study addresses these challenges by proposing a novel bi-objective optimization framework. The model integrates a Mixed-Integer Linear Programming (MILP) approach with advanced machine learning techniques to simultaneously minimize total costs and maximize patient satisfaction. A key contribution is the incorporation of a Gated Recurrent Unit (GRU) neural network for accurate drug demand forecasting, enabling dynamic resource allocation in crisis scenarios. The model also accounts for two distinct patient destinations—receiving hospitals and temporary care centers (TCCs)—and includes a specialized pharmaceutical supply chain to prevent medicine shortages. To enhance system robustness, probabilistic demand patterns and disruption risks are considered, ensuring supply chain reliability. The solution methodology combines the Grasshopper Optimization Algorithm (GOA) and the ɛ-constraint method, efficiently addressing the multi-objective nature of the problem. Results demonstrate significant improvements in cost reduction, resource allocation, and service levels, highlighting the model’s practical applicability in real-world scenarios. This research provides valuable insights for optimizing healthcare logistics during critical events, contributing to both operational efficiency and patient welfare.
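The ɛ-constraint method mentioned above converts a bi-objective problem (minimize cost, maximize satisfaction) into a single-objective one: minimize cost subject to satisfaction ≥ ɛ, then sweep ɛ to trace the Pareto front. The discrete "plans" below are hypothetical; the paper solves a full MILP rather than a lookup over options.

```python
def epsilon_constraint(options, eps):
    """Pick the cheapest option whose satisfaction level meets eps.

    options: list of (cost, satisfaction) pairs.
    Returns the chosen (cost, satisfaction) pair, or None if infeasible.
    """
    feasible = [o for o in options if o[1] >= eps]
    if not feasible:
        return None
    return min(feasible, key=lambda o: o[0])

# Hypothetical evacuation/supply plans: (total cost, patient satisfaction).
plans = [(100, 0.60), (140, 0.75), (200, 0.90), (260, 0.95)]
front = [epsilon_constraint(plans, e) for e in (0.60, 0.80, 0.95)]
```

Each increase in ɛ forces a costlier plan, and the collected solutions form the cost-satisfaction trade-off curve a decision-maker chooses from.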
Figures:
Figure 1: Steps of this article.
Figure 2: GRU structure.
Figure 3: Supply chain configuration.
Figure 4: TCC parallel configuration.
Figure 5: Average number of patients per medicine.
Figure 6: Distribution of the average number of patients by crisis type.
Figure 7: Regional distribution of the average number of patients by crisis type.
Figure 8: Average number of patients by crisis severity.
Figure 9: Comparison of model performances.
Figure 10: Effect of re_k on the objective function.
Figure 11: Effect of FC on OF1.
Figure 12: Relationship between OF1 and OF2.
Figure 13: Relationship between σ² and OF2.
Figure 14: Relationship between ct_k, OF1, and OF2.
Figure 15: Relationship between CV, OF1, and OF2.
Figure 16: Comparison of objective functions across scenarios.
Figure 17: A comparison of run times.
Figure 18: Algorithm performance comparison (objective function values and time).
Figure 19: Sensitivity analysis of GOA parameters.
28 pages, 30709 KiB  
Article
Drone-Enabled AI Edge Computing and 5G Communication Network for Real-Time Coastal Litter Detection
by Sarun Duangsuwan and Phoowadon Prapruetdee
Drones 2024, 8(12), 750; https://doi.org/10.3390/drones8120750 - 12 Dec 2024
Abstract
Coastal litter is a severe environmental issue impacting marine ecosystems and coastal communities in Thailand, with plastic pollution posing one of the most urgent challenges. Every month, millions of tons of plastic waste enter the ocean, where items such as bottles, cans, and other plastics can take hundreds of years to degrade, threatening marine life through ingestion, entanglement, and habitat destruction. To address this issue, we deploy drones equipped with high-resolution cameras and sensors to capture detailed coastal imagery for assessing litter distribution. This study presents the development of an AI-driven coastal litter detection system using edge computing and 5G communication networks. The AI edge server utilizes YOLOv8 and a recurrent neural network (RNN) to enable the drone to detect and classify various types of litter, such as bottles, cans, and plastics, in real-time. High-speed 5G communication supports seamless data transmission, allowing efficient monitoring. We evaluated drone performance under optimal flying heights above ground of 5 m, 7 m, and 10 m, analyzing accuracy, precision, recall, and F1-score. Results indicate that the system achieves optimal detection at an altitude of 5 m with a ground sampling distance (GSD) of 0.98 cm/pixel, yielding an F1-score of 98% for cans, 96% for plastics, and 95% for bottles. This approach facilitates real-time monitoring of coastal areas, contributing to marine ecosystem conservation and environmental sustainability.
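The ground sampling distance (GSD) quoted above follows the standard photogrammetric relation: a pixel's ground footprint grows linearly with altitude and shrinks with focal length. The camera parameters below are hypothetical, chosen only to demonstrate the formula, not the drone camera the paper used (which yields 0.98 cm/pixel at 5 m).

```python
def ground_sampling_distance(sensor_width_mm, focal_mm, altitude_m, image_width_px):
    """GSD in cm/pixel for a nadir-pointing camera.

    GSD = (sensor_width * altitude * 100) / (focal_length * image_width)
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

# Hypothetical camera: 6.4 mm sensor width, 4.5 mm focal length, 4000 px wide.
gsd_5m = ground_sampling_distance(6.4, 4.5, 5.0, 4000)
gsd_10m = ground_sampling_distance(6.4, 4.5, 10.0, 4000)
```

Because GSD is linear in altitude, doubling the flight height from 5 m to 10 m doubles the ground footprint per pixel, which is why the paper's detection scores degrade at higher altitudes.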
(This article belongs to the Special Issue Detection, Identification and Tracking of UAVs and Drones)
Figures:
Figure 1: The conceptual framework of real-time drone coastal litter imagery detection.
Figure 2: The framework of AI edge computing.
Figure 3: The network architecture of the YOLOv8 model.
Figure 4: The network of a classifier using the RNN algorithm.
Figure 5: Ground sampling distance (GSD) of drone coastal litter monitoring.
Figure 6: Real-time drone coastal litter detection.
Figure 7: Experimental tools and location: (a) real-time drone coastal litter detection equipment; (b) location map of the test points.
Figure 8: Samples of the coastal litter dataset for training: cans, bottles, and plastics.
Figure 9: Experimental setup: drone flights at (a) 5 m, (b) 7 m, and (c) 10 m altitude.
Figures 10–12: Bottle litter detection at 5 m, 7 m, and 10 m altitude, respectively: (a) real-time video monitoring via OBS; (b) GSD vs. epoch performance.
Figures 13–15: Can litter detection at 5 m, 7 m, and 10 m altitude, respectively: (a) real-time video monitoring via OBS; (b) GSD vs. epoch performance.
Figures 16–18: Plastic litter detection at 5 m, 7 m, and 10 m altitude, respectively: (a) real-time video monitoring via OBS; (b) GSD vs. epoch performance.
Figure 19: Accuracy vs. data rate for the bottle litter detection speed test.
Figure 20: Accuracy vs. data rate for the can litter detection speed test.
Figure 21: Accuracy vs. data rate for the plastic litter detection speed test.
13 pages, 4380 KiB  
Article
Interval Forecast Method for Wind Power Based on GCN-GRU
by Wenting Zha, Xueyan Li, Yijun Du and Yingyu Liang
Symmetry 2024, 16(12), 1643; https://doi.org/10.3390/sym16121643 - 12 Dec 2024
Abstract
Interval prediction estimates the range within which future power values will fall, which can better guide electricity production and usage. To further improve the performance of the prediction interval, this paper investigates a wind power interval prediction method based on lower and upper bound estimation (LUBE). Firstly, an improved loss function is proposed, which transforms the multi-objective optimization problem into a single-objective optimization with the guidance of mathematical derivation. Afterward, the interval prediction results are further improved through a combination of a graph convolutional network (GCN) and a gated recurrent unit (GRU). Then, the tree-structured Parzen estimator (TPE) optimization algorithm tunes the GCN cell to find the optimal parameter configuration and maximize the performance of the model. Finally, in the experimental part, the proposed GCN-GRU with the improved loss function is compared with some current mainstream neural networks. The results show that, for any type of network, the improved loss function yields prediction intervals with better performance. In particular, for the prediction interval based on GCN-GRU, the prediction interval normalized average width (PINAW) and prediction interval relative deviation (PIRD) reach 6.75% and 46.99%, respectively, while satisfying the given prediction interval nominal confidence (PINC).
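Interval quality is judged by coverage and width. A common formulation, used here as an illustration (the paper's PINAW is reported as a percentage, and its PIRD metric is not reproduced): PICP is the fraction of targets falling inside the interval, and PINAW is the mean interval width normalized by the target range. The toy bounds below are made up.

```python
import numpy as np

def interval_metrics(y, lower, upper):
    """Coverage (PICP) and normalized average width (PINAW) of an interval.

    PICP  = fraction of targets with lower <= y <= upper.
    PINAW = mean(upper - lower) / (max(y) - min(y)).
    """
    y, lower, upper = map(np.asarray, (y, lower, upper))
    picp = np.mean((y >= lower) & (y <= upper))
    pinaw = np.mean(upper - lower) / (y.max() - y.min())
    return picp, pinaw

# Toy wind-power targets and predicted interval bounds.
y     = np.array([0.2, 0.5, 0.9, 0.4, 0.70])
lower = np.array([0.1, 0.4, 0.7, 0.3, 0.60])
upper = np.array([0.4, 0.7, 1.0, 0.6, 0.65])
picp, pinaw = interval_metrics(y, lower, upper)
```

The two metrics pull in opposite directions (wider intervals cover more but inform less), which is exactly the multi-objective tension the paper's improved loss function resolves.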
(This article belongs to the Section Engineering and Materials)
Show Figures

Figure 1
<p>A flow chart of wind power interval prediction based on GCN-GRU.</p>
Full article ">Figure 2
<p>The structure of the GCN-GRU network.</p>
Full article ">Figure 3
<p>(<b>a</b>): The graph of Euclidean space; (<b>b</b>): The graph of non-Euclidean space.</p>
Full article ">Figure 4
<p>The correlation thermogram of the wind power and variables at different sampling times.</p>
Full article ">Figure 5
<p>The next one-step interval prediction results of different models.</p>
Full article ">Figure 6
<p>The next four-step interval prediction results of different models.</p>
Full article ">Figure 7
<p>The next eight-step interval prediction results of different models.</p>
Full article ">
19 pages, 906 KiB  
Article
Forecasting of Local Lightning Using Spatial–Channel-Enhanced Recurrent Convolutional Neural Network
by Wei Zhou, Jinliang Li, Hongjie Wang, Donglai Zhang and Xupeng Wang
Atmosphere 2024, 15(12), 1478; https://doi.org/10.3390/atmos15121478 - 11 Dec 2024
Viewed by 317
Abstract
Lightning is a hazardous weather phenomenon, characterized by sudden occurrences and complex local distributions. It poses significant challenges for accurate forecasting, which is crucial for public safety and economic stability. Deep learning methods are often better than traditional numerical weather prediction (NWP) models [...] Read more.
Lightning is a hazardous weather phenomenon, characterized by sudden occurrences and complex local distributions. It poses significant challenges for accurate forecasting, which is crucial for public safety and economic stability. Deep learning methods are often better than traditional numerical weather prediction (NWP) models at capturing the spatiotemporal predictors of lightning events. However, these methods struggle to integrate predictors from diverse data sources, which leads to lower accuracy and interpretability. To address these challenges, the Multi-Scale Spatial–Channel-Enhanced Recurrent Convolutional Neural Network (SCE-RCNN) is proposed to improve forecasting accuracy and timeliness by utilizing multi-source data and enhanced attention mechanisms. The proposed model incorporates a multi-scale spatial–channel attention module and a cross-scale fusion module, which together facilitate the integration of data from diverse sources. The multi-scale spatial–channel attention module utilizes a multi-scale convolutional network to extract spatial features at different spatial scales and employs a spatial–channel attention mechanism to focus on the most relevant regions for lightning prediction. Experimental results show that the SCE-RCNN model achieved a critical success index (CSI) of 0.83, a probability of detection (POD) of 0.991, and a false alarm rate (FAR) reduced to 0.351, outperforming conventional deep learning models across multiple prediction metrics. This research provides reliable lightning forecasts to support real-time decision-making, making significant contributions to aviation safety, outdoor event planning, and disaster risk management. The model’s high accuracy and low false alarm rate highlight its value in both academic research and practical applications. Full article
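The spatial attention described above can be illustrated with a single-scale softmax attention over a 2-D grid in plain Python; the multi-scale convolutions and channel branch of SCE-RCNN are omitted, and the relevance-score grid is assumed to be given by an upstream network.

```python
import math

def spatial_attention(scores):
    """Softmax spatial attention over a 2-D grid of relevance scores,
    returning weights that sum to 1 across all cells."""
    flat = [s for row in scores for s in row]
    m = max(flat)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in flat]
    z = sum(exps)
    w = [e / z for e in exps]
    rows, cols = len(scores), len(scores[0])
    return [[w[i * cols + j] for j in range(cols)] for i in range(rows)]

def attend(features, scores):
    """Weighted sum of per-cell feature values under the attention map."""
    w = spatial_attention(scores)
    return sum(w[i][j] * features[i][j]
               for i in range(len(features)) for j in range(len(features[0])))
```

A cell with a much larger score dominates the weighted sum, which is how the module "focuses" on the regions most relevant to lightning.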
(This article belongs to the Special Issue The Challenge of Weather and Climate Prediction)
Show Figures

Figure 1
<p>Model workflow diagram.</p>
Full article ">Figure 2
<p>The workflow of the multi-scale spatial–channel attention mechanism.</p>
Full article ">Figure 3
<p>Intra-scale joint attention module.</p>
Full article ">Figure 4
<p>Cross-scale cooperative fusion module.</p>
Full article ">Figure 5
<p>Predicted lightning for the next hour across regions and intensities.</p>
Full article ">Figure 6
<p>Critical success index (CSI) over time for different models.</p>
Full article ">Figure 7
<p>Performance metrics comparison among different model versions.</p>
Full article ">
21 pages, 2964 KiB  
Article
Prediction of Drivers’ Red-Light Running Behaviour in Connected Vehicle Environments Using Deep Recurrent Neural Networks
by Md Mostafizur Rahman Komol, Mohammed Elhenawy, Jack Pinnow, Mahmoud Masoud, Andry Rakotonirainy, Sebastien Glaser, Merle Wood and David Alderson
Mach. Learn. Knowl. Extr. 2024, 6(4), 2855-2875; https://doi.org/10.3390/make6040136 - 11 Dec 2024
Viewed by 371
Abstract
Red-light running at signalised intersections poses a significant safety risk, necessitating advanced technologies to predict red-light violation behaviour, especially for advanced red-light warning (ARLW) systems. This research leverages Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models to forecast the red-light [...] Read more.
Red-light running at signalised intersections poses a significant safety risk, necessitating advanced technologies to predict red-light violation behaviour, especially for advanced red-light warning (ARLW) systems. This research leverages Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models to forecast the red-light running and stopping behaviours of drivers in connected vehicles. We utilised data from the Ipswich Connected Vehicle Pilot (ICVP) in Queensland, Australia, which gathered naturalistic driving data from 355 connected vehicles at 29 signalised intersections. These vehicles broadcast Cooperative Awareness Messages (CAM) within the Cooperative Intelligent Transport Systems (C-ITS), providing kinematic inputs such as vehicle speed, speed limits, longitudinal and lateral accelerations, and yaw rate. These variables were monitored at 100-millisecond intervals for durations from 1 to 4 s before reaching various distances from the stop line. Our results indicate that the LSTM model outperforms the GRU in predicting both red-light running and stopping behaviours with high accuracy. However, the pre-trained GRU model performs better in predicting red-light running specifically, making it valuable in applications requiring early violation prediction. Implementing these models can enhance red-light violation countermeasures, such as dynamic all-red extension (DARE), decreasing the likelihood of severe collisions and enhancing road users’ safety. Full article
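The windowing scheme described in the abstract — 100-millisecond samples, with 1–4 s monitoring windows ending when the vehicle reaches a given distance from the stop line — can be sketched as follows. The sample layout is hypothetical; in the paper each sample would be a CAM feature vector (speed, speed limit, accelerations, yaw rate), not a plain integer.

```python
def window_features(samples, end_index, duration_s, dt_s=0.1):
    """Slice a fixed-duration window of kinematic samples that ends at
    `end_index` (the sample at which the vehicle reaches a given distance
    from the stop line).  Returns None if the history is too short."""
    n = int(round(duration_s / dt_s))
    start = end_index - n
    if start < 0:
        return None
    return samples[start:end_index]

# 10 s of 100 ms samples; integers stand in for per-sample feature vectors.
history = list(range(100))
w = window_features(history, end_index=80, duration_s=2.0)
```

Sweeping `duration_s` from 1.0 to 4.0 and `end_index` over several stop-line distances reproduces the grid of datasets the models are trained on.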
Show Figures

Figure 1
<p>Red-light running traffic violation and right-angle collision [<a href="#B10-make-06-00136" class="html-bibr">10</a>].</p>
Full article ">Figure 2
<p>Flowchart of red-light running behaviour prediction methodology.</p>
Full article ">Figure 3
<p>(<b>a</b>) Vehicle passing intersection during yellow signal which then turns red when the vehicle enters the conflict zone. (<b>b</b>) Vehicle turning during green signal but the straight signal is red [<a href="#B40-make-06-00136" class="html-bibr">40</a>].</p>
Full article ">Figure 4
<p>The windowing of datasets with different traffic monitoring times at different distances before the stop line [<a href="#B20-make-06-00136" class="html-bibr">20</a>].</p>
Full article ">Figure 5
<p>Transfer learning with pre-trained LSTM model to predict red-light running behaviour.</p>
Full article ">Figure 6
<p>The comparison of LSTM and GRU models’ prediction accuracy before and after data upsampling.</p>
Full article ">Figure 6 Cont.
<p>The comparison of LSTM and GRU models’ prediction accuracy before and after data upsampling.</p>
Full article ">Figure 7
<p>The comparison of LSTM and GRU models’ performance measures.</p>
Full article ">
22 pages, 6560 KiB  
Article
Deep Learning-Based Spectrum Sensing for Cognitive Radio Applications
by Sara E. Abdelbaset, Hossam M. Kasem, Ashraf A. Khalaf, Amr H. Hussein and Ahmed A. Kabeel
Sensors 2024, 24(24), 7907; https://doi.org/10.3390/s24247907 - 11 Dec 2024
Viewed by 319
Abstract
In order for cognitive radios to identify and take advantage of unused frequency bands, spectrum sensing is essential. Conventional techniques for spectrum sensing rely on extracting features from received signals at specific locations. However, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) [...] Read more.
In order for cognitive radios to identify and take advantage of unused frequency bands, spectrum sensing is essential. Conventional techniques for spectrum sensing rely on extracting features from received signals at specific locations. However, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have recently demonstrated promise in improving the precision and efficacy of spectrum sensing. Our research introduces a groundbreaking approach to spectrum sensing by leveraging convolutional neural networks (CNNs) to significantly advance the precision and effectiveness of identifying unused frequency bands. We treat spectrum sensing as a classification task and train our model with diverse signal types and noise data, enabling unparalleled adaptability to novel signals. Our method surpasses traditional techniques such as the maximum–minimum eigenvalue ratio-based and frequency domain entropy-based methods, showcasing superior performance and adaptability. In particular, our CNN-based approach demonstrates exceptional accuracy, even outperforming established methods when faced with additive white Gaussian noise (AWGN). Full article
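One of the baselines mentioned above, the frequency-domain entropy detector, is easy to sketch: a spectrum concentrated by a primary-user tone yields low entropy, while wideband noise yields entropy near log2(N). The naive O(N²) DFT below is only for self-containment; a real detector would use an FFT and a decision threshold calibrated to a target false-alarm probability.

```python
import math, random

def spectral_entropy(x):
    """Shannon entropy (bits) of the normalized DFT magnitude spectrum."""
    n = len(x)
    mags = []
    for k in range(n):                  # naive O(N^2) DFT, fine for small N
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags)
    p = [m / total for m in mags if m > 0]
    return -sum(pi * math.log2(pi) for pi in p)

random.seed(0)
n = 64
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]   # occupied band
noise = [random.gauss(0.0, 1.0) for _ in range(n)]             # idle band
```

Comparing the two entropies is the essence of the baseline the CNN is shown to outperform: low entropy means "signal present", high entropy means "band free".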
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1
<p>The proposed network flowchart.</p>
Full article ">Figure 2
<p>This graph shows the accuracy and losses for the number of epochs 10 during the training and validation phases. (<b>a</b>) Accuracy and (<b>b</b>) loss.</p>
Full article ">Figure 3
<p>The accuracy and losses during training and validation phases with the effect of AWGN with No. of epochs 10. (<b>a</b>) Accuracy and (<b>b</b>) loss.</p>
Full article ">Figure 4
<p>The accuracy and losses during training and validation phases with the effect of AWGN with No. of epochs 40. (<b>a</b>) Accuracy and (<b>b</b>) loss.</p>
Full article ">Figure 5
<p>The accuracy and losses during training and validation phases with the effect of AWGN with SNR range (−8 to 30 dB) with No. of epochs 20. (<b>a</b>) Accuracy and (<b>b</b>) loss.</p>
Full article ">Figure 6
<p>The accuracy and losses during training and validation phases with the effect of AWGN with SNR range (−8 to 30 dB) with No. of epochs 32. (<b>a</b>) Accuracy and (<b>b</b>) loss.</p>
Full article ">Figure 7
<p>Detection performance (<b>a</b>) OOK, (<b>b</b>) QPSK.</p>
Full article ">Figure 8
<p>Comparison with traditional methods at pf = 0.15.</p>
Full article ">Figure 9
<p>The predicted accuracy of our proposed model by using flatten layer for various modulation types.</p>
Full article ">Figure 10
<p>The predicted accuracy of our proposed model by using the GAP layer for various modulation types.</p>
Full article ">Figure 10 Cont.
<p>The predicted accuracy of our proposed model by using the GAP layer for various modulation types.</p>
Full article ">Figure 11
<p>Training accuracy of the compared models.</p>
Full article ">Figure 12
<p>Validation accuracy of the compared models.</p>
Full article ">Figure 13
<p>Training loss of the compared models.</p>
Full article ">Figure 14
<p>Validation loss of the compared models.</p>
Full article ">Figure 15
<p>The predicted accuracy of the compared models at SNR from −10 to 20.</p>
Full article ">
26 pages, 11738 KiB  
Article
Active Vibration Control of a Cantilever Beam Structure Using Pure Deep Learning and PID with Deep Learning-Based Tuning
by Abdul-Wahid A. Saif, Ahmed Abdulrahman Mohammed, Fouad AlSunni and Sami El Ferik
Appl. Sci. 2024, 14(24), 11520; https://doi.org/10.3390/app142411520 - 11 Dec 2024
Viewed by 399
Abstract
Vibration is a major problem that can cause structures to wear out prematurely and even fail. Smart structures are a promising solution to this problem because they can be equipped with actuators, sensors, and controllers to reduce or eliminate vibration. The primary objective [...] Read more.
Vibration is a major problem that can cause structures to wear out prematurely and even fail. Smart structures are a promising solution to this problem because they can be equipped with actuators, sensors, and controllers to reduce or eliminate vibration. The primary objective of this paper is to explore and compare two deep learning-based approaches for vibration control in cantilever beams. The first approach involves the direct application of deep learning techniques, specifically multi-layer neural networks and RNNs, to control the beam’s dynamic behavior. The second approach integrates deep learning into the tuning process of a PID controller, optimizing its parameters for improved control performance. To activate the structure, two different input signals are used, an impulse signal at time zero and a random one. Through this comparative analysis, the paper aims to evaluate the effectiveness, strengths, and limitations of each method, offering insights into their potential applications in the field of smart structure control. Full article
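The second approach above separates the controller from the tuner: a conventional PID loop whose gains are proposed by the deep-learning stage. A minimal sketch with fixed placeholder gains and a first-order stand-in plant (not the cantilever-beam model from the paper) shows the interface such a tuner would drive.

```python
class PID:
    """Discrete PID controller; kp/ki/kd would be supplied by the
    deep-learning tuner in the paper's second approach."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def simulate(pid, setpoint=1.0, steps=2000, dt=0.01):
    """Toy first-order plant x' = -x + u standing in for one beam mode."""
    x = 0.0
    for _ in range(steps):
        u = pid.step(setpoint - x)
        x += dt * (-x + u)
    return x

final = simulate(PID(kp=5.0, ki=2.0, kd=0.1, dt=0.01))
```

A learned tuner would simply call `simulate` (or the real beam model) with candidate gains and score the resulting response.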
(This article belongs to the Section Materials Science and Engineering)
Show Figures

Figure 1
<p>The Piezoelectric Effect.</p>
Full article ">Figure 2
<p>Assumption that the Plane Sections Remain Plane.</p>
Full article ">Figure 3
<p>Cantilever Beam with Two PZT Actuators and Sensor.</p>
Full article ">Figure 4
<p>Block Diagram of Open Loop System with Low-Pass Filter under Disturbance.</p>
Full article ">Figure 5
<p>Genetic Algorithm-Based PID Tuning Process Flowchart.</p>
Full article ">Figure 6
<p>Recurrent Neural Network with Long Short-Term Memory.</p>
Full article ">Figure 7
<p>Closed Loop System with Deep Learning Controller under Disturbance.</p>
Full article ">Figure 8
<p>Voltage Response of PZT Sensor before Vibration Control (under Impulse Disturbance).</p>
Full article ">Figure 9
<p>System Control Signal.</p>
Full article ">Figure 10
<p>Voltage Response of PZT Sensor after Vibration Control with Deep Learning Controller (under Impulse Disturbance).</p>
Full article ">Figure 11
<p>Voltage Response of PZT Sensor in the Cantilever with Deep Learning Controller and without Controller.</p>
Full article ">Figure 12
<p>Random Disturbance.</p>
Full article ">Figure 13
<p>Voltage Response of PZT Sensor before Vibration Control (under Random Disturbance).</p>
Full article ">Figure 14
<p>System Control Signal.</p>
Full article ">Figure 15
<p>Voltage Response of PZT Sensor after Vibration Control with Deep Learning Controller (under Random Disturbance).</p>
Full article ">Figure 16
<p>Voltage Response of PZT Sensor in the Cantilever Beam with Deep Learning Controller and without Controller.</p>
Full article ">Figure 17
<p>Closed Loop System with DL-PID Controller under Disturbance.</p>
Full article ">Figure 18
<p>Deep Learning Network.</p>
Full article ">Figure 19
<p>Voltage Response of PZT Sensor before Vibration Control (under Impulse Disturbance).</p>
Full article ">Figure 20
<p>System Control Signal.</p>
Full article ">Figure 21
<p>Voltage Response of PZT Sensor after Vibration Control with DL-PID Controller (under Impulse Disturbance).</p>
Full article ">Figure 22
<p>Voltage Response of PZT Sensor in the Cantilever with DL-PID Controller and without Controller.</p>
Full article ">Figure 23
<p>Random Disturbance of the System.</p>
Full article ">Figure 24
<p>Voltage Response of PZT Sensor before Vibration Control (under Random Disturbance).</p>
Full article ">Figure 25
<p>System Control Signal.</p>
Full article ">Figure 26
<p>Voltage Response of PZT Sensor after Vibration Control with DL-PID Controller (under Random Disturbance).</p>
Full article ">Figure 27
<p>Voltage Response of PZT Sensor in the Cantilever with DL-PID Controller and without Controller.</p>
Full article ">
22 pages, 4924 KiB  
Article
A New Varying-Factor Finite-Time Recurrent Neural Network to Solve the Time-Varying Sylvester Equation Online
by Haoming Tan, Junyun Wu, Hongjie Guan, Zhijun Zhang, Ling Tao, Qingmin Zhao and Chunquan Li
Mathematics 2024, 12(24), 3891; https://doi.org/10.3390/math12243891 - 10 Dec 2024
Viewed by 268
Abstract
This paper presents a varying-parameter finite-time recurrent neural network, called a varying-factor finite-time recurrent neural network (VFFTRNN), which is able to solve the time-varying Sylvester equation online. The proposed neural network makes the matrix coefficients vary with time and can [...] Read more.
This paper presents a varying-parameter finite-time recurrent neural network, called a varying-factor finite-time recurrent neural network (VFFTRNN), which is able to solve the time-varying Sylvester equation online. The proposed neural network makes the matrix coefficients vary with time and can achieve convergence in a finite time. Apart from this, the performance of the network is better than that of traditional networks in terms of robustness. It is theoretically proved that the proposed neural network has super-exponential convergence performance. Simulation results demonstrate that this neural network has faster convergence speed and better robustness than zeroing neural networks (ZNN) and can track the theoretical solution of the time-varying Sylvester equation effectively. Full article
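The comparison network, ZNN, drives a designed error function to zero via dE/dt = −μΦ(E). A scalar analogue of the time-varying Sylvester equation makes the idea concrete; the VFFTRNN's time-varying factors and finite-time activation are not reproduced here, the gain μ is constant, and the coefficient functions are arbitrary illustrative choices.

```python
import math

# Scalar analogue of the time-varying Sylvester equation
#   a(t)*x - x*b(t) + c(t) = 0,  exact solution x*(t) = -c(t)/(a(t) - b(t)).
a  = lambda t: 3.0 + math.sin(t)
b  = lambda t: math.cos(t)
c  = lambda t: math.sin(2.0 * t)
da = lambda t: math.cos(t)           # time derivatives, fed to the network
db = lambda t: -math.sin(t)
dc = lambda t: 2.0 * math.cos(2.0 * t)

def znn_solve(x0=1.0, mu=100.0, dt=1e-3, t_end=2.0):
    """Zeroing-neural-network dynamics: design the residual
    E = a*x - x*b + c to obey dE/dt = -mu*E, so E decays exponentially,
    and integrate the implied ODE for x with forward Euler."""
    x, t = x0, 0.0
    while t < t_end:
        e = a(t) * x - x * b(t) + c(t)
        dx = (-mu * e - (da(t) - db(t)) * x - dc(t)) / (a(t) - b(t))
        x += dt * dx
        t += dt
    return x, -c(t) / (a(t) - b(t))
```

The matrix case replaces the scalar coefficients with A(t), B(t), C(t) and the division with a linear solve; the paper's varying-factor design additionally lets μ grow with time to obtain finite-time convergence.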
Show Figures

Figure 1
<p>Block diagram of VFFTRNN model <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>8</mn> <mo>)</mo> </mrow> </semantics></math> for solving the Sylvester equation.</p>
Full article ">Figure 2
<p>Solving Sylvester Equation (<a href="#FD1-mathematics-12-03891" class="html-disp-formula">1</a>) with given coefficients online with ZNN and its computational error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math>. Several red solid curves represent different initial states while the unique black dotted curve represents the theoretical solution. (<b>a</b>) Solution to ZNN with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> of ZNN with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. (<b>b</b>) Solution to ZNN with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> of ZNN with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. 
(<b>c</b>) Solution to ZNN with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> of ZNN with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Solving Sylvester Equation (<a href="#FD1-mathematics-12-03891" class="html-disp-formula">1</a>) with given coefficients online with VFFTRNN and its computational error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>5</mn> <mn>7</mn> </mfrac> </mstyle> </mrow> </semantics></math>). Several red solid curves represent different initial states while the unique black dotted curve represents the theoretical solution. (<b>a</b>) Solution to VFFTRNN with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> of VFFTRNN with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. 
(<b>b</b>) Solution to VFFTRNN with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> of VFFTRNN with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. (<b>c</b>) Solution to VFFTRNN with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and error <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> of VFFTRNN with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>The curves in the figure represent the convergence errors under different initial conditions. When ZNN uses different activation functions without perturbation to solve Sylvester Equation (<a href="#FD1-mathematics-12-03891" class="html-disp-formula">1</a>), its convergence error <math display="inline"><semantics> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> is as shown above (<math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>). (<b>a</b>) Linear. (<b>b</b>) Sigmoid. (<b>c</b>) Power (<math display="inline"><semantics> <mrow> <mi>s</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 5
<p>The curves in the figure represent the convergence errors under different initial conditions. WhenVFFTRNN uses different activation functions without perturbation to solve Sylvester Equation (<a href="#FD1-mathematics-12-03891" class="html-disp-formula">1</a>), its convergence error <math display="inline"><semantics> <mrow> <mo>|</mo> <mo>|</mo> <mi>P</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>−</mo> <msup> <mi>P</mi> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> </mrow> <mi>F</mi> </msub> </mrow> </semantics></math> is as shown above (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>1</mn> <mn>5</mn> </mfrac> </mstyle> </mrow> </semantics></math>). (<b>a</b>) Linear. (<b>b</b>) Sigmoid. (<b>c</b>) Power (<math display="inline"><semantics> <mrow> <mi>s</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 6
<p>Robustness of ZNN and VFFTRNN with varying degrees of differential error and model implementation error. The arrows point to the original location of the magnified detail in each figure. (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>β</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>β</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>. (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>β</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>. (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>β</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.05</mn> <mo>.</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>. (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>β</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>β</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>.</p>
Full article ">
25 pages, 5732 KiB  
Article
Analyzing the Impact of Binaural Beats on Anxiety Levels by a New Method Based on Denoised Harmonic Subtraction and Transient Temporal Feature Extraction
by Devika Rankhambe, Bharati Sanjay Ainapure, Bhargav Appasani, Avireni Srinivasulu and Nicu Bizon
Bioengineering 2024, 11(12), 1251; https://doi.org/10.3390/bioengineering11121251 - 10 Dec 2024
Viewed by 392
Abstract
Anxiety is a widespread mental health issue, and binaural beats have been explored as a potential non-invasive treatment. EEG data reveal changes in neural oscillation and connectivity linked to anxiety reduction; however, harmonics introduced during signal acquisition and processing often distort these findings. [...] Read more.
Anxiety is a widespread mental health issue, and binaural beats have been explored as a potential non-invasive treatment. EEG data reveal changes in neural oscillation and connectivity linked to anxiety reduction; however, harmonics introduced during signal acquisition and processing often distort these findings. Existing methods struggle to effectively reduce harmonics and capture the fine-grained temporal dynamics of EEG signals, leading to inaccurate feature extraction. Hence, a novel Denoised Harmonic Subtraction and Transient Temporal Feature Extraction method is proposed to improve the analysis of the impact of binaural beats on anxiety levels. Initially, a novel Wiener Fused Convo Filter is introduced to capture spatial features and eliminate linear noise in EEG signals. Next, an Intrinsic Harmonic Subtraction Network is employed, utilizing the Attentive Weighted Least Mean Square (AW-LMS) algorithm to capture nonlinear summation and resonant coupling effects, effectively eliminating the misinterpretation of brain rhythms. To address the challenge of fine-grained temporal dynamics, an Embedded Transfo XL Recurrent Network is introduced to detect and extract relevant parameters associated with transient events in EEG data. Finally, EEG data undergo harmonic reduction and temporal feature extraction before classification with a cross-correlated Markov Deep Q-Network (DQN). This facilitates anxiety level classification into normal, mild, moderate, and severe categories. The model demonstrated a high accuracy of 95.6%, precision of 90%, sensitivity of 93.2%, and specificity of 96% in classifying anxiety levels, outperforming previous models. This integrated approach enhances EEG signal processing, enabling reliable anxiety classification and offering valuable insights for therapeutic interventions. Full article
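The AW-LMS harmonic subtraction builds on the classical LMS adaptive canceller. Below is a two-weight LMS notch at a known harmonic frequency; the attentive weighting of AW-LMS is not reproduced, and the sampling rate and harmonic frequency are illustrative stand-ins for an EEG recording contaminated by a mains harmonic.

```python
import math

def lms_harmonic_cancel(signal, f0, fs, mu=0.05):
    """Adaptive notch: subtract a sinusoid at frequency f0 using a
    two-weight LMS filter on quadrature references sin/cos(2*pi*f0*n/fs).
    Returns the residual (cleaned) signal."""
    w1 = w2 = 0.0
    out = []
    for n, d in enumerate(signal):
        x1 = math.sin(2.0 * math.pi * f0 * n / fs)
        x2 = math.cos(2.0 * math.pi * f0 * n / fs)
        y = w1 * x1 + w2 * x2          # current harmonic estimate
        e = d - y                      # residual = cleaned sample
        w1 += mu * e * x1              # LMS weight updates
        w2 += mu * e * x2
        out.append(e)
    return out

fs, f0 = 250.0, 50.0                   # EEG-like rate, mains harmonic
contaminated = [math.sin(2.0 * math.pi * f0 * n / fs + 0.3)
                for n in range(2000)]
residual = lms_harmonic_cancel(contaminated, f0, fs)
```

The two weights converge to the in-phase and quadrature amplitudes of the harmonic, so the residual retains everything in the signal except that component.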
(This article belongs to the Special Issue Adaptive Neurostimulation: Innovative Strategies for Stimulation)
Show Figures

Figure 1
<p>Block Diagram of the proposed system.</p>
Full article ">Figure 2
<p>Wiener Fused Convo Filter.</p>
Full article ">Figure 3
<p>Flowchart of Hilbert–Huang transformation process of the proposed system.</p>
Full article ">Figure 4
<p>Attentive Weighted Least Mean Square (AW-LMS) algorithm of the proposed model.</p>
Full article ">Figure 5
<p>Schematic representation of a Transformer-XL.</p>
Full article ">Figure 6
<p>Long Short-Term Memory.</p>
Full article ">Figure 7
<p>Input EEG of the proposed model for (<b>a</b>) delta, (<b>b</b>) theta, (<b>c</b>) alpha, (<b>d</b>) beta, and (<b>e</b>) gamma.</p>
Full article ">Figure 7 Cont.
<p>Input EEG of the proposed model for (<b>a</b>) delta, (<b>b</b>) theta, (<b>c</b>) alpha, (<b>d</b>) beta, and (<b>e</b>) gamma.</p>
Full article ">Figure 8
<p>Pre-processed EEG of the proposed model for (<b>a</b>) delta, (<b>b</b>) theta, (<b>c</b>) alpha, (<b>d</b>) beta, and (<b>e</b>) gamma.</p>
Full article ">Figure 9
<p>Normalized frequencies of the proposed model for (<b>a</b>) delta, (<b>b</b>) theta, (<b>c</b>) alpha, (<b>d</b>) beta, and (<b>e</b>) gamma.</p>
Full article ">Figure 10
<p>Brain rhythm of the proposed model for (<b>a</b>) delta, (<b>b</b>) theta, (<b>c</b>) alpha, (<b>d</b>) beta, and (<b>e</b>) gamma.</p>
Full article ">Figure 11
<p>The loss rate of the proposed system.</p>
Full article ">Figure 12
<p>Mean square error (MSE) of the proposed model.</p>
Full article ">Figure 13
<p>Confusion matrix of the proposed method.</p>
Full article ">Figure 14
<p>Accuracy, precision, sensitivity, specificity, and F1 score of the proposed model.</p>
Full article ">Figure 15
<p>FPR, FNR, and MAE of the proposed model.</p>
Full article ">Figure 16
<p>NPV, PSNR, and MCC of the proposed model.</p>
Full article ">Figure 17
<p>Comparison of key metrics such as accuracy, precision, sensitivity, specificity, and F1 score of the proposed model.</p>
Full article ">Figure 18
<p>Comparison of the NPV and MCC of the proposed model.</p>
Full article ">Figure 19
<p>Comparison of the FNR, FPR, and FDR of the proposed model.</p>
Full article ">
21 pages, 9649 KiB  
Article
Prediction of the Dissolved Oxygen Content in Aquaculture Based on the CNN-GRU Hybrid Neural Network
by Ying Ma, Qiwei Fang, Shengwei Xia and Yu Zhou
Water 2024, 16(24), 3547; https://doi.org/10.3390/w16243547 - 10 Dec 2024
Viewed by 345
Abstract
The dissolved oxygen (DO) content is one of the important water quality parameters; it is crucial for assessing water body quality and ensuring the healthy growth of aquatic organisms. To enhance the prediction accuracy of DO in aquaculture, we propose a fused neural [...] Read more.
The dissolved oxygen (DO) content is one of the important water quality parameters; it is crucial for assessing water body quality and ensuring the healthy growth of aquatic organisms. To enhance the prediction accuracy of DO in aquaculture, we propose a fused neural network model integrating a convolutional neural network (CNN) and a gated recurrent unit (GRU). This model initially employs a CNN to extract primary features from water quality parameters. Subsequently, the GRU captures temporal information and long-term dependencies, while a temporal attention mechanism (TAM) is introduced to further pinpoint crucial information. By optimizing model parameters through an improved particle swarm optimization (IPSO) algorithm, we develop a comprehensive IPSO-CNN-GRU-TAM prediction model. Experiments conducted using water quality datasets collected from Eagle Mountain Lake demonstrate that our model achieves a root mean square error (RMSE) of 0.0249 and a coefficient of determination (R2) of 0.9682, outperforming other prediction models with high precision. The model exhibits stable performance across fivefold cross-validation and datasets of varying depths, showcasing robust generalization capabilities. In summary, this model allows aquaculturists to precisely regulate the DO content, ensuring fish health and growth while achieving energy conservation and carbon reduction, aligning with the practical demands of modern aquaculture. Full article
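The GRU component that captures temporal dependencies in the hybrid model can be reduced to a single scalar cell to show its gating mechanism. The weights below are arbitrary placeholders and biases are omitted; this is an illustration of the GRU equations, not the paper's trained network.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_cell(x, h, p):
    """One scalar GRU step.  p holds the six weights
    (wz, uz, wr, ur, wh, uh)."""
    wz, uz, wr, ur, wh, uh = p
    z = sigmoid(wz * x + uz * h)                 # update gate
    r = sigmoid(wr * x + ur * h)                 # reset gate
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde           # blend old and candidate

params = (0.8, -0.3, 0.5, 0.2, 1.1, 0.7)
h = 0.0
for x in [0.2, -0.1, 0.4, 0.3]:                  # e.g. normalized DO readings
    h = gru_cell(x, h, params)
```

Because the new state is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in (−1, 1), which is what makes long input sequences numerically stable.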
(This article belongs to the Section Water, Agriculture and Aquaculture)
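The core of the model described above is a GRU over water-quality time series followed by a temporal attention pooling step. The following is a minimal numpy sketch of those two components only, with hypothetical dimensions and randomly initialized weights; it is not the authors' IPSO-CNN-GRU-TAM implementation, just an illustration of a single GRU cell unrolled over a sequence and a softmax-weighted attention summary of its hidden states.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell step: update gate z, reset gate r, candidate state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz)          # update gate
    r = sigmoid(x @ Wr + h @ Ur)          # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_tilde

def temporal_attention(H, w):
    """Softmax-score hidden states H (T, d) with vector w, return context (d,)."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ H

rng = np.random.default_rng(0)
T, d_in, d_h = 8, 4, 6                    # hypothetical sequence/feature sizes
x_seq = rng.normal(size=(T, d_in))        # stand-in for CNN-extracted features
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_h), (d_h, d_h)] * 3]  # Wz,Uz,Wr,Ur,Wh,Uh

h = np.zeros(d_h)
H = []
for x in x_seq:                           # unroll the GRU over time
    h = gru_step(x, h, *params)
    H.append(h)
H = np.stack(H)                           # (T, d_h) hidden-state matrix
context = temporal_attention(H, rng.normal(size=d_h))  # attention-pooled summary
```

In the full model, `context` would feed a small regression head predicting DO, and the IPSO step would tune hyperparameters such as `d_h`; both are omitted here.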
Figures: (1) Data distribution; (2) Structure of CNN; (3) Structure of GRU; (4) Structure of TAM; (5) Structure of the IPSO-CNN-GRU-TAM model; (6) Algorithm flow of the IPSO-CNN-GRU-TAM model; (7) Comparison of 4 models; (8) Prediction error of different models; (9) Comparison of 4 models with different evaluation indicators; (10) Comparison of 9 models; (11) Comparison of 9 models with different evaluation indicators; (12) Prediction error of different models; (13) Prediction curve results of the IPSO-CNN-GRU-TAM model (4 m); (14) Prediction curve results of the IPSO-CNN-GRU-TAM model (6 m); (15) Comparison of 3 models with different evaluation indicators.
28 pages, 6709 KiB  
Article
A 3D Model-Based Framework for Real-Time Emergency Evacuation Using GIS and IoT Devices
by Noopur Tyagi, Jaiteg Singh, Saravjeet Singh and Sukhjit Singh Sehra
ISPRS Int. J. Geo-Inf. 2024, 13(12), 445; https://doi.org/10.3390/ijgi13120445 - 9 Dec 2024
Viewed by 453
Abstract
Advancements in 3D modelling technology have facilitated more immersive and efficient solutions in spatial planning and user-centred design. In healthcare systems, 3D modelling is beneficial in various applications, such as emergency evacuation, pathfinding, and localization. These models support the fast and efficient planning of evacuation routes, ensuring the safety of patients, staff, and visitors and guiding them in emergencies. To improve urban modelling and planning, 3D representation and analysis are used. Building on these advantages, this study proposes a framework for 3D indoor navigation and employs a multiphase methodology to enhance spatial planning and user experience. Our approach combines state-of-the-art GIS technology with a 3D hybrid model. The proposed framework incorporates federated learning (FL) along with edge computing and Internet of Things (IoT) devices to achieve accurate floor-level localization and navigation. In the first phase of the methodology, Quantum Geographic Information System (QGIS) software was used to create a 3D model of the building’s architectural details, which are required for efficient indoor navigation during emergency evacuations in healthcare systems. In the second phase, the 3D model and an FL-based recurrent neural network (RNN) technique were utilized to achieve real-time indoor positioning. This method produced highly precise results, attaining an accuracy rate above 99% at distances of no less than 10 metres. Continuous monitoring and effective pathfinding ensure that users can navigate safely and effectively during emergencies. IoT devices were connected with the building’s navigation software in Phase 3. The analysis showed that the proposed framework provided 98.7% routing accuracy between different locations during emergency situations. By improving safety, building accessibility, and energy efficiency, this research addresses the health and environmental impacts of modern technologies. Full article
(This article belongs to the Special Issue HealthScape: Intersections of Health, Environment, and GIS&T)
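The federated learning step described above trains the positioning RNN across edge clients without centralizing raw data, typically by averaging client model updates weighted by local data size (FedAvg). Below is a minimal numpy sketch of that aggregation step alone; the client models, sample counts, and parameter shapes are hypothetical, and the paper's actual FL protocol may differ in detail.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average each parameter across clients, weighted by local sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Hypothetical setup: 3 edge clients, each holding a local copy of a tiny
# model (one weight matrix and one bias vector) trained on its own data.
rng = np.random.default_rng(1)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [100, 250, 150]   # local sample counts per client

global_model = fedavg(clients, sizes)  # aggregated server-side model
```

After aggregation, the server would broadcast `global_model` back to the clients for the next local training round; only parameters, never raw sensor data, cross the network.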
Figures: (1) Integrating FL, edge computing, and IoT devices to create indoor navigation systems; (2) Three-dimensional indoor navigation system from Phase 1 to Phase 3; (3) Federated learning framework; (4) LiDAR data collection; (5) The conceptual flow of creating the 3D model in QGIS; (6) Perspective view of 3D model of building for efficient and optimized path prediction; (7) Federated learning process flowchart; (8) IoT data flow diagram; (9) Training progress of accuracy and RMSE; (10) Graph of indoor positioning system performance; (11) Variation in accuracy among diverse clients at various intervals; (12) Variation in RMSE among several clients at distinct intervals; (13) Three-dimensional indoor navigation system in emergency evacuation; (14) Prototype software architecture.