Search Results (104)

Search Parameters:
Keywords = ESN

15 pages, 2949 KiB  
Article
Antibacterial Efficacy Comparison of Electrolytic and Reductive Silver Nanoparticles Against Propionibacterium acnes
by Suparno Suparno, Rita Prasetyowati, Khafidh Nur Aziz, Anggarwati Rahma, Eka Sentia Ayu Lestari, Siti Chaerani Nabiilah and Deby Grace
Antibiotics 2025, 14(1), 86; https://doi.org/10.3390/antibiotics14010086 - 14 Jan 2025
Viewed by 486
Abstract
Background: The aim of this study was to develop an electrolysis system to produce silver nanoparticles free from toxic gases, as the most common reduction and electrolysis techniques produce nitrogen dioxide (NO2) as a byproduct, which is harmful to human health. The new electrolysis system used two identical silver plate electrodes, replacing silver and carbon rods, and used water as the electrolyte instead of silver nitrate (AgNO3) solution, since AgNO3 is the source of NO2. Methods: The electrolytic silver nanoparticles (ESNs) produced by the new system were characterized and compared with reductive silver nanoparticles (RSNs). Using UV–Visible spectrophotometry, absorption peaks were found at 425 nm (ESN) and 437 nm (RSN). Using dynamic light scattering, the particle diameters were measured at 40.3 nm and 39.9 nm for ESNs at concentrations of 10 ppm and 30 ppm, respectively, and 74.0 nm and 74.6 nm for RSNs at concentrations of 10 ppm and 30 ppm, respectively. Antibacterial activity against Propionibacterium acnes (P. acnes) was assessed using the Kirby–Bauer method. Results: It was found that the efficacy of ESNs and RSNs was relatively lower than that of 5% chloramphenicol because it was measured in different concentration units (ESNs and RSNs in ppm and chloramphenicol in %). Using the calibration curve, the efficacy of 5% chloramphenicol was comparable to that of 0.005% ESN. It was also found that P. acnes developed a strong resistance to chloramphenicol and showed no resistance to ESNs. Conclusions: This finding underlines the tremendous potential of ESNs as a future antibiotic raw material.
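One detail worth making explicit is the unit conversion behind the chloramphenicol comparison: 1% (w/v) equals 10,000 ppm, so the 0.005% ESN figure from the abstract lands in the same range as the 10–30 ppm doses tested. A minimal sketch (the conversion itself is the only point here; values are from the abstract):

```python
# Convert a mass concentration in percent (w/v) to parts per million.
def percent_to_ppm(percent):
    return percent * 10_000

print(percent_to_ppm(5))       # 5% chloramphenicol = 50,000 ppm
print(percent_to_ppm(0.005))   # 0.005% ESN = 50 ppm, near the 10-30 ppm doses tested
```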
Figure 1. Possible mechanisms of silver nanoparticles in killing and inhibiting bacteria: (1) cytoplasmic membrane denaturation due to silver nanoparticle accumulation, (2) membrane structure disruption due to nanoparticle penetration, (3) inhibition of P. acnes biofilm production, (4) silver ions released by silver nanoparticles damage the cell wall and penetrate the cytoplasm, (5) silver ions interfere with the production of adenosine triphosphate, leading to low energy production, (6) silver ions cause denaturation of ribosomes and the cytoplasm and disrupt protein synthesis, (7) respiratory enzyme inactivation by silver ions, leading to bacterial suffocation, and (8) increased reactive oxygen species (ROS) production caused by silver ion penetration, causing oxidative stress to most of the internal organs of the bacteria. Color key: green balls are silver nanoparticles; green balls with plus symbols are Ag+ ions; pink is biofilm; blue is denaturation of the cytoplasmic membrane; yellow inside the bacterial cell wall is cytoplasm; orange triangles are membrane fragments; pink triangles are cell wall fragments.
Figure 2. (a) ESN and RSN color and (b) ESN concentration over time. The blue dots give the ESN concentration (in ppm) at the times (in minutes) marked in green on the horizontal axis.
Figure 3. Absorption peaks: (a) 10 ppm and (b) 30 ppm of ESNs and RSNs.
Figure 4. Clear zone diameter of (a) 10 ppm and (b) 30 ppm of all antibiotics.
Figure 5. RSN production process: (a) mixing/heating precursor and reductor, (b) silver atom formation, and (c) RSN production. The white solvent with many orange dots represents the mixture of silver nitrate and trisodium citrate at the beginning of the process; the yellow solvent with larger dots represents the formation of silver atoms; the yellow solvent with orange bubbles represents nitrogen dioxide formation.
Figure 6. ESN production process: (1) oxidation at the anode, (2) reduction in solution, (3) silver atom aggregation, (4) ESN formation, and (5) reduction at the cathode. Green spheres with a plus sign represent silver ions; single green spheres represent silver atoms; aggregates of green spheres denote ESNs.
19 pages, 954 KiB  
Article
Memory–Non-Linearity Trade-Off in Distance-Based Delay Networks
by Stefan Iacob and Joni Dambre
Biomimetics 2024, 9(12), 755; https://doi.org/10.3390/biomimetics9120755 - 11 Dec 2024
Viewed by 804
Abstract
The performance of echo state networks (ESNs) in temporal pattern learning tasks depends both on their memory capacity (MC) and their non-linear processing. It has been shown that linear memory capacity is maximized when ESN neurons have linear activation, and that a trade-off between non-linearity and linear memory capacity is required for temporal pattern learning tasks. The more recent distance-based delay networks (DDNs) have shown improved memory capacity over ESNs in several benchmark temporal pattern learning tasks. However, it has not thus far been studied whether this increased memory capacity comes at the cost of reduced non-linear processing. In this paper, we advance the hypothesis that DDNs in fact achieve a better trade-off between linear MC and non-linearity than ESNs, by showing that DDNs can have strong non-linearity with large memory spans. We tested this hypothesis using the NARMA-30 task and the bitwise delayed XOR task, two commonly used reservoir benchmark tasks that require a high degree of both non-linearity and memory.
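The leaky-integration state update underlying both ESNs and DDNs can be sketched in a few lines. This is a generic illustration only: the reservoir size, scalings, leak rate, and the delay-5 recall task are our assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1
spectral_radius, leak_rate = 0.9, 0.5

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # scale toward the echo state property

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        # leaky-integrator update: blend the old state with the new activation
        x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Train a ridge-regression readout to reproduce a delayed copy of the input,
# i.e. the recall task used to probe linear memory at a single lag.
T, delay = 1000, 5
u = rng.uniform(-1, 1, (T, 1))
X = run_reservoir(u)[delay:]
y = u[:-delay, 0]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(np.corrcoef(X @ W_out, y)[0, 1] ** 2)  # squared correlation = capacity at this lag
```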
Figure 1. Average IPC profile for randomly initialized unoptimized ESNs and DDNs.
Figure 2. Task–reservoir IPC overlap for various random network initializations. The right-hand side shows the overlap scores of DDNs initialized with random, normally distributed neuron locations; the left-hand side shows the overlap of equivalent ESNs. All scores are averaged over 3 random initializations with the same leak rate and spectral radius.
Figure 3. NARMA-30 NRMSE for various random network initializations. The right-hand side shows the performance of DDNs initialized with random, normally distributed neuron locations; the left-hand side shows the performance of equivalent ESNs. All scores are averaged over 3 random initializations with the same leak rate and spectral radius.
Figure 4. Scatter plot of the task IPC overlap against task performance for the ESNs and DDNs from Figure 2 and Figure 3. Each dot corresponds to a single initialization and network simulation of an ESN or DDN. Every DDN has an equivalent ESN, identical in all aspects except inter-neuron delays.
Figure 5. First- and second-degree IPC of networks optimized for the NARMA-30 task. Capacity profiles are the average of 10 reservoirs. The legend shows the total capacity for each degree, computed as the sum of capacities over all lags. The bottom graph shows the task capacity of both tasks, indicating task information processing requirements.
Figure 6. Second-degree IPC for optimized ESNs (left) and DDNs (middle), and the NARMA-30 second-degree task capacity (right), shown as a heatmap over lag and window size. A lighter hue indicates a higher capacity. The window size is the difference between the smallest and largest delay in the polynomials (the basis functions) used to compute the capacity. The second-order task requirements are concentrated at a window size of 30.
Figure 7. ESN and DDN NARMA-30 test performance and overlap score throughout CMA-ES evolution. Each dot represents a single instantiation of a network. The same network activity was used to measure the performance and the regional IPC, which was then used to compute the NARMA-30 overlap score. The shade of the dot indicates the CMA-ES generation from which the network was sampled. Networks were sampled from the best candidate hyperparameter set of every 10th generation, with five re-initializations per candidate; for readability, the generations were binned into five bins. As in Figure 4, task overlap and performance are correlated in both ESNs and DDNs; moreover, DDNs start off with a higher overlap and reach a higher overlap after hyperparameter optimization.
Figure 8. Test performance for ESNs and DDNs measured in BER, both before and after CMA-ES optimization. The horizontal axis shows the task delay used to generate each of the 19 delayed XOR test sets. BERs are averaged over 20 network samples. With a test set of 10^4 samples, BERs higher than 10^-3 are reported at a 90% confidence level [39]. The vertical error bars represent the standard error.
17 pages, 24201 KiB  
Article
An Echo State Network-Based Light Framework for Online Anomaly Detection: An Approach to Using AI at the Edge
by Andrea Bonci, Renat Kermenov, Lorenzo Longarini, Sauro Longhi, Geremia Pompei, Mariorosario Prist and Carlo Verdini
Machines 2024, 12(10), 743; https://doi.org/10.3390/machines12100743 - 21 Oct 2024
Viewed by 893
Abstract
Production efficiency is used to determine the best conditions for manufacturing goods at the lowest possible unit cost. When achieved, production efficiency leads to increased revenues for the manufacturer, enhanced employee safety, and a satisfied customer base. Production efficiency not only measures the amount of resources that are needed for production but also considers the productivity levels and the state of the production lines. In this context, online anomaly detection (AD) is an important tool for maintaining the reliability of the production ecosystem. With advancements in artificial intelligence and the growing significance of identifying and mitigating anomalies across different fields, approaches based on artificial neural networks facilitate the recognition of intricate types of anomalies by taking into account both temporal and contextual attributes. In this paper, a lightweight framework based on the Echo State Network (ESN) model running at the edge is introduced for online AD. Compared to other AD methods, such as Long Short-Term Memory (LSTM), it achieves superior precision, accuracy, and recall metrics while reducing training time, CO2 emissions, and the need for high computational resources. The preliminary evaluation of the proposed solution was conducted using a low-resource computing device at the edge of the real production machine through an Industrial Internet of Things (IIoT) smart meter module. The machine used to test the proposed solution was provided by the Italian company SIFIM Srl, which manufactures filter mats for industrial kitchens. Experimental results demonstrate the feasibility of developing an AD method that achieves high accuracy, with the ESN-based framework reaching 85% compared to 80.88% for the LSTM-based model. Furthermore, this method requires minimal hardware resources, with a training time of 9.5 s compared to 2100 s for the other model.
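The thresholding step this kind of online AD typically relies on, flagging points whose prediction error exceeds a few standard deviations of the training error, can be sketched as follows. The residual distributions, injected anomalies, and the factor k = 3 are illustrative assumptions, not the paper's values.

```python
import numpy as np

def fit_threshold(train_errors, k=3.0):
    """Threshold from the mean and standard deviation of training residuals."""
    return train_errors.mean() + k * train_errors.std()

def detect(residuals, threshold):
    """Boolean mask: True where the absolute residual exceeds the threshold."""
    return np.abs(residuals) > threshold

rng = np.random.default_rng(1)
train_err = rng.normal(0.0, 0.1, 5000)      # residuals of a model on normal data
thr = fit_threshold(np.abs(train_err))
test_err = rng.normal(0.0, 0.1, 100)
test_err[[10, 50]] += 2.0                   # inject two obvious anomalies
flags = detect(test_err, thr)
print(np.flatnonzero(flags))                # indices flagged as anomalous
```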
Figure 1. ESN diagram.
Figure 2. LSTM diagram.
Figure 3. Architecture for the computation of the standard deviation of the error in the training set.
Figure 4. Framework architecture for the inference phase.
Figure 5. Architecture diagram.
Figure 6. Production machine and production layout.
Figure 7. Cloud dashboard: CO2 production.
Figure 8. Standard deviation of the error of both the ESN and LSTM model-based approaches.
Figure 9. Comparison between the real time series and the time-series prediction (ESN-based model on the left, LSTM-based model on the right).
Figure 10. Accuracy for each epoch (LSTM-based and ESN-based methods).
17 pages, 2902 KiB  
Article
Echo State Network and Sparrow Search: Echo State Network for Modeling the Monthly River Discharge of the Biggest River in Buzău County, Romania
by Liu Zhen and Alina Bărbulescu
Water 2024, 16(20), 2916; https://doi.org/10.3390/w16202916 - 14 Oct 2024
Viewed by 683
Abstract
Artificial intelligence (AI) has become an instrument used in all domains with good results. The water resources management field is not an exception. Therefore, in this article, we propose two machine learning (ML) techniques—an echo state network (ESN) and sparrow search algorithm–echo state network (SSA-ESN)—for monthly modeling of the water discharge of one of the biggest rivers in Romania for three periods (S, S1, and S2). In both models, R2 was over 0.989 on the test and training sets and the mean absolute error (MAE) varied between 4.4826 and 7.6038. The performance of the SSA-ESN was similar, but the ESN had the shortest run time. The influence of anomalies on the models' quality was assessed by running the algorithms on a series without the aberrant values, which were detected by the seasonal hybrid extreme studentized deviate (S-H-ESD) test. The results indicate that removing the anomalies significantly improved both models' performance, but the run time was increased.
(This article belongs to the Section Hydrology)
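For reference, the two goodness-of-fit measures quoted above, R2 and MAE, are computed as follows (synthetic toy series standing in for the discharge data, not the Buzău series itself):

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error between observed and predicted values."""
    return np.mean(np.abs(obs - pred))

def r_squared(obs, pred):
    """Coefficient of determination: 1 minus residual over total sum of squares."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(2)
obs = 50 + 20 * np.sin(np.linspace(0, 12 * np.pi, 240))  # toy "monthly discharge"
pred = obs + rng.normal(0, 1.0, obs.size)                # a near-perfect model
print(round(r_squared(obs, pred), 3), round(mae(obs, pred), 3))
```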
Figure 1. Map of the Buzău River catchment [53].
Figure 2. The series S1 and S2, and the values of their basic statistics.
Figure 3. ESN's structure.
Figure 4. (a) Data series (red) and its anomalies (blue dots). (b) The series without anomalies.
Figure 5. ESN model on the test set for (a) S, (b) S1, and (c) S2.
Figure 6. ESN model on the test set without aberrant values for (a) S, (b) S1, and (c) S2.
24 pages, 7534 KiB  
Article
DeepESN Neural Networks for Industrial Predictive Maintenance through Anomaly Detection from Production Energy Data
by Andrea Bonci, Luca Fredianelli, Renat Kermenov, Lorenzo Longarini, Sauro Longhi, Geremia Pompei, Mariorosario Prist and Carlo Verdini
Appl. Sci. 2024, 14(19), 8686; https://doi.org/10.3390/app14198686 - 26 Sep 2024
Viewed by 1335
Abstract
Optimizing energy consumption is an important aspect of industrial competitiveness, as it directly impacts operational efficiency, cost reduction, and sustainability goals. In this context, anomaly detection (AD) becomes a valuable methodology, as it supports maintenance activities in the manufacturing sector, allowing for early intervention to prevent energy waste and maintain optimal performance. Here, an AD-based method is proposed and studied to support energy-saving predictive maintenance of production lines using time series acquired directly from the field. This paper proposes a deep echo state network (DeepESN)-based method for anomaly detection by analyzing energy consumption data sets from production lines. Compared with traditional prediction methods, such as recurrent neural networks with long short-term memory (LSTM), although both models show similar time series trends, the DeepESN-based method studied here appears to have some advantages, such as timelier error detection and higher prediction accuracy. In addition, the DeepESN-based method has been shown to be more accurate in predicting the occurrence of failure. The proposed solution has been extensively tested in a real-world pilot case consisting of an automated metal filter production line equipped with industrial smart meters to acquire energy data during production phases; the time series, composed of 88 variables associated with energy parameters, was then processed using the techniques introduced earlier. The results show that our method enables earlier error detection and achieves higher prediction accuracy when running on an edge device.
(This article belongs to the Special Issue Digital and Sustainable Manufacturing in Industry 4.0)
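The "deep" part of a DeepESN, stacked reservoirs where each layer is driven by the previous layer's states and the readout sees the concatenation of all layer states, can be sketched as below. Layer sizes, scalings, and the leak rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_layer(n_in, n_res, rho=0.9):
    """One reservoir layer: input weights plus a recurrent matrix scaled to rho."""
    W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_deep_esn(u, layers, leak=0.3):
    """u: (T, n_in). Returns concatenated states of all layers, shape (T, sum of layer sizes)."""
    T = len(u)
    states = []
    drive = u
    for W_in, W in layers:
        x = np.zeros(W.shape[0])
        xs = np.empty((T, W.shape[0]))
        for t in range(T):
            x = (1 - leak) * x + leak * np.tanh(W_in @ drive[t] + W @ x)
            xs[t] = x
        states.append(xs)
        drive = xs  # the next layer is driven by this layer's states
    return np.hstack(states)

layers = [make_layer(1, 50), make_layer(50, 50), make_layer(50, 50)]
u = rng.uniform(-1, 1, (200, 1))
X = run_deep_esn(u, layers)
print(X.shape)  # (200, 150): 200 time steps, 3 layers of 50 units
```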
Figure 1. Echo state network architecture.
Figure 2. Long short-term memory architecture.
Figure 3. Input layer and gates architecture.
Figure 4. Deep echo state network architecture.
Figure 5. Anomaly detector architecture.
Figure 6. The sub-phases in the global architecture of the proposed AD methodology.
Figure 7. System architecture.
Figure 8. Sifim's production line. (1) Loading station. (2) Working area. (3) Unloading station. (4) Complete overview.
Figure 9. Seneca S604's IoT module. (1) Electric schema. (2) Installed module.
Figure 10. Example of acquired data.
Figure 11. Example of σ and q vectors.
Figure 12. Development of the accuracy, F1 score, time, and CO2 emissions metrics for each epoch of LSTM model training compared with the one-shot DeepESN results.
Figure 13. Current system anomaly detection.
Figure 14. DeepESN receiver operating characteristic (ROC) curve.
34 pages, 786 KiB  
Review
Recurrent Neural Networks: A Comprehensive Review of Architectures, Variants, and Applications
by Ibomoiye Domor Mienye, Theo G. Swart and George Obaido
Information 2024, 15(9), 517; https://doi.org/10.3390/info15090517 - 25 Aug 2024
Cited by 18 | Viewed by 19199
Abstract
Recurrent neural networks (RNNs) have significantly advanced the field of machine learning (ML) by enabling the effective processing of sequential data. This paper provides a comprehensive review of RNNs and their applications, highlighting advancements in architectures, such as long short-term memory (LSTM) networks, gated recurrent units (GRUs), bidirectional LSTM (BiLSTM), echo state networks (ESNs), peephole LSTM, and stacked LSTM. The study examines the application of RNNs to different domains, including natural language processing (NLP), speech recognition, time series forecasting, autonomous vehicles, and anomaly detection. Additionally, the study discusses recent innovations, such as the integration of attention mechanisms and the development of hybrid models that combine RNNs with convolutional neural networks (CNNs) and transformer architectures. This review aims to provide ML researchers and practitioners with a comprehensive overview of the current state and future directions of RNN research.
(This article belongs to the Special Issue Applications of Machine Learning and Convolutional Neural Networks)
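The basic recurrent update that all of these architectures extend is h_t = tanh(W_x x_t + W_h h_(t-1) + b); gated variants (LSTM, GRU) add learned gates around this cell. A minimal sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hidden = 3, 8
W_x = rng.normal(0, 0.1, (n_hidden, n_in))   # input-to-hidden weights
W_h = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    """One step of the vanilla RNN recurrence."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(n_hidden)
for x_t in rng.normal(size=(10, n_in)):  # unroll over a length-10 sequence
    h = rnn_step(x_t, h)
print(h.shape)  # (8,)
```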
Figure 1. Basic RNN architecture.
Figure 2. Architecture of the LSTM network [41].
Figure 3. Architecture of the BiLSTM network [41].
Figure 4. A stacked LSTM [41].
Figure 5. Architecture of the GRU network.
18 pages, 1512 KiB  
Article
Subsurface Drainage and Nitrogen Fertilizer Management Affect Fertilizer Fate in Claypan Soils
by Harpreet Kaur and Kelly A. Nelson
Sustainability 2024, 16(15), 6477; https://doi.org/10.3390/su16156477 - 29 Jul 2024
Cited by 1 | Viewed by 994
Abstract
Sustainable nitrogen (N) fertilizer management practices in the Midwest U.S. strive to optimize crop production while minimizing N gas emission losses and nitrate-N (NO3-N) losses in subsurface drainage water. A replicated site in upstate Missouri from 2018 to 2020 investigated the influence of different N fertilizer management practices on nutrient concentrations in drainage water, nitrous oxide (N2O) emissions, and ammonia (NH3) volatilization losses in a corn (Zea mays, 2018, 2020)–soybean (Glycine max, 2019) rotation. Four N treatments applied to corn included fall anhydrous ammonia with nitrapyrin (fall AA + NI), spring anhydrous ammonia (spring AA), top dressed SuperU and ESN as a 25:75% granular blend (TD urea), and a non-treated control (NTC). All treatments were applied to subsurface-drained (SD) and non-drained (ND) replicated plots, except TD urea, which was only applied with SD. Across the years, NO3-N concentration in subsurface drainage water was similar for fall AA + NI and spring AA treatments. The NO3-N concentration in subsurface drainage water was statistically (p < 0.0001) lower with TD urea (9.1 mg L−1) and NTC (8.9 mg L−1) compared to fall AA + NI (14.6 mg L−1) and spring AA (13.8 mg L−1) in corn growing years. During corn years (2018 and 2020), cumulative N2O emissions were significantly (p < 0.05) higher with spring AA compared to other fertilizer treatments with SD and ND. Reduced corn growth and plant N uptake in 2018 caused greater N2O loss with TD urea and spring AA compared to the NTC and fall AA + NI in 2019. Cumulative NH3 volatilization was ranked as TD urea > spring AA > fall AA + NI. Due to seasonal variability in soil moisture and temperature, gas losses were higher in 2018 compared to 2020. There were no environmental benefits to applying AA in the spring compared to AA + NI in the fall on claypan soils. Fall AA with a nitrification inhibitor is a viable alternative to spring AA, which maintains flexible N application timings for farmers.
(This article belongs to the Section Environmental Sustainability and Applications)
Figure 1. Daily soil temperature (a) from November to October in 2018, 2019, and 2020. Daily (bars) and cumulative (lines) precipitation from November to October at the University of Missouri Greenley Research Center in (b) 2018, (c) 2019, and (d) 2020.
Figure 2. Nitrogen fertilizer (non-treated control = NTC; SuperU and ESN top dress application = TD; fall anhydrous ammonia + nitrapyrin = fall AA + NI; and pre-plant anhydrous ammonia = spring AA) effects on soil N2O flux and cumulative emission in subsurface-drained (SD) and non-drained (ND) soil from November to October in (a) 2018 and (b) 2020. Letters following cumulative loss from N fertilizer treatments represent significant (p < 0.1) differences among treatments within a year.
Figure 3. Nitrogen fertilizer source (non-treated control = NTC; SuperU and ESN top dress application = TD; fall anhydrous ammonia + nitrapyrin = fall AA + NI; and pre-plant anhydrous ammonia = spring AA) effects on soil (a) N2O and (b) NH3 flux, and cumulative emissions in subsurface-drained (SD) and non-drained (ND) soil from November 2018 to October 2019 during soybean. Letters following cumulative loss from N source represent significant (p < 0.1) differences among treatments within a year.
Figure 4. Nitrogen fertilizer source (non-treated control = NTC; SuperU and ESN top dress application = TD; fall anhydrous ammonia + nitrapyrin = fall AA + NI; and pre-plant anhydrous ammonia = spring AA) effects on soil NH3 flux and cumulative emission in subsurface-drained (SD) and non-drained (ND) soil from November to October in (a) 2018 and (b) 2020. Letters following cumulative loss from N source represent significant (p < 0.1) differences among treatments within a year.
18 pages, 1165 KiB  
Article
Exploiting Signal Propagation Delays to Match Task Memory Requirements in Reservoir Computing
by Stefan Iacob and Joni Dambre
Biomimetics 2024, 9(6), 355; https://doi.org/10.3390/biomimetics9060355 - 14 Jun 2024
Cited by 1 | Viewed by 903
Abstract
Recurrent neural networks (RNNs) transmit information over time through recurrent connections. In contrast, biological neural networks use many other temporal processing mechanisms. One of these mechanisms is the inter-neuron delays caused by varying axon properties. Recently, this feature was implemented in echo state networks (ESNs), a type of RNN, by assigning spatial locations to neurons and introducing distance-dependent inter-neuron delays. These delays were shown to significantly improve ESN task performance. However, thus far, it is still unclear why distance-based delay networks (DDNs) perform better than ESNs. In this paper, we show that by optimizing inter-node delays, the memory capacity of the network matches the memory requirements of the task. As such, networks concentrate their memory capabilities to the points in the past which contain the most information for the task at hand. Moreover, we show that DDNs have a greater total linear memory capacity, with the same amount of non-linear processing power.
(This article belongs to the Special Issue Bioinspired Algorithms)
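Linear memory capacity, the quantity being matched to task requirements here, is conventionally measured by feeding an i.i.d. input, training one linear readout per lag k to reconstruct u(n − k), and summing the squared correlations. The sketch below follows that standard recipe with illustrative sizes, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
n_res = 50
W_in = rng.uniform(-0.1, 0.1, (n_res, 1))  # small input scale keeps dynamics near-linear
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))

# Drive the reservoir with an i.i.d. uniform input and record its states.
T = 4000
u = rng.uniform(-1, 1, (T, 1))
x = np.zeros(n_res)
X = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    X[t] = x

# One ridge readout per lag; capacity at lag k is the squared correlation
# between the readout output and the k-step-delayed input.
max_lag = 20
capacities = []
for k in range(1, max_lag + 1):
    Xk, yk = X[k:], u[:-k, 0]
    w = np.linalg.solve(Xk.T @ Xk + 1e-6 * np.eye(n_res), Xk.T @ yk)
    capacities.append(np.corrcoef(Xk @ w, yk)[0, 1] ** 2)
print(round(sum(capacities), 2))  # total linear MC over the first 20 lags
```

The total over all lags is bounded by the number of reservoir units, which is why delay networks that redistribute capacity toward task-relevant lags can win without adding neurons.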
Show Figures

Figure 1

Figure 1
<p>Schematic representation of a(n) (A)DDN connection, with a delay of <span class="html-italic">d</span> time steps. The activity of neuron A is present at the input of neuron B after <span class="html-italic">d</span> steps, so the input for neuron B at time <span class="html-italic">n</span> will be <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>A</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>−</mo> <mi>d</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>. In the case of ADDNs, where connections can be adaptive, the weight change <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>w</mi> <mrow> <mi>A</mi> <mi>B</mi> </mrow> </msub> </mrow> </semantics></math> is computed based on the postsynaptic activity <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>B</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> (represented in orange), and the delayed presynaptic activity <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>A</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>−</mo> <mi>d</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> (represented in green).</p>
Full article ">Figure 2
<p>Validation performance throughout CMA-ES hyperparameter optimization (evolution). The x-axes refer to the CMA-ES generations. The shaded areas represent the standard deviation in performance of the best candidate of the generation. (<b>A</b>): DDN, ADDN, and baseline ESN Mackey–Glass performance, expressed in prediction horizon (i.e., the number of blind prediction steps during validation until the error margin is reached). These reservoirs are evolved to optimize task performance on the Mackey–Glass signal generation task, where the goal is to predict as many future states of the Mackey–Glass system (Equation (<a href="#FD8-biomimetics-09-00355" class="html-disp-formula">8</a>)) as possible based on previous states, while maintaining a low enough absolute error. (<b>B</b>): DDN and baseline ESN performance on NARMA system approximation tasks, expressed in normalized root mean squared error (NRMSE). The goal is to predict the next output of the NARMA system (Equations (<a href="#FD9-biomimetics-09-00355" class="html-disp-formula">9</a>) and (10)), based on all previous inputs. We use 50-node and 100-node ESNs and DDNs for the NARMA-10 and NARMA-30 tasks, respectively.</p>
Full article ">Figure 3
<p>A comparison between the memory capacity profile of a DDN optimized for Mackey–Glass with a maximum delay of 25, and a conventional ESN with the same parameters (i.e., a non-delayed copy). The linear memory capacity is averaged over 20 trials, with the error bars representing standard deviation. Here, the average total memory capacity (<math display="inline"><semantics> <mrow> <mi>M</mi> <msub> <mi>C</mi> <mrow> <mi>t</mi> <mi>o</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, the integral of the MC profile, averaged over the 20 trials) of the DDN and the ESN is, respectively, 18.92 and 11.82. We see that the optimized DDN has a concentration of higher memory capacity at higher lags and is spread out over more lags. In contrast, the memory capacity of the delay-less network is concentrated on smaller lags and spans fewer lag values.</p>
Full article ">Figure 4
<p>We optimized conventional ESNs for 200 generations with CMA-ES. The best candidate hyperparameter set was used to generate a network. (<b>A</b>): Four DDN copies with normally distributed neuron locations and different (spatial) network sizes were made, with the input neuron in the centre of the network. In this graph, we show the linear MC profiles of the original ESN and the four DDNs of increasing spatial dimensions. The total linear MC is mentioned in the legend. (<b>B</b>): A spatial representation of the network corresponding to (<b>A</b>). We see that the input neuron (blue) is located in the centre of the normally distributed reservoir neurons (green). (<b>C</b>): Analogous to (<b>A</b>), this graph shows the linear MC profile of increasingly larger DDNs. In this case, however, the input neuron (blue) is in a distant position. (<b>D</b>): A spatial representation of the network corresponding to (<b>C</b>).</p>
Full article ">Figure 5
<p>DDN linear memory capacity profiles throughout evolution optimized for NARMA system approximation task. In (<b>A</b>,<b>B</b>), we see the DDN linear MC profiles of the best performing candidate hyperparameter set of every tenth generation. These were computed based on five networks generated from each candidate. MC was computed up to a lag of 100 and averaged over the five networks. (<b>C</b>,<b>D</b>) show the same for baseline ESNs for the same tasks. (<b>E</b>): the NARMA-10 task profile, computed as the squared correlation between lagged input and output, plotted against lag. (<b>F</b>): Analogous for NARMA-30 task.</p>
Full article ">Figure 6
<p>(<b>A</b>): DDN linear memory capacity profiles throughout evolution, optimized for the Mackey–Glass signal generation task. The best performing candidate hyperparameter set of every 10th generation is used to generate five networks. The linear memory capacity profiles up to a lag of 100 are computed for each of these five networks and averaged. (<b>B</b>): Analogous for ADDNs. In this case, reservoirs are first adapted using delay-sensitive BCM before computing MC. (<b>C</b>): Analogous for baseline ESNs. (<b>D</b>): The task capacity profile of the Mackey–Glass system. This represents the correlation between the lagged state of the system and the current state of the system.</p>
Full article ">
22 pages, 3185 KiB  
Article
Determination of Performance of Different Pad Materials and Energy Consumption Values of Direct Evaporative Cooler
by Tomasz Jakubowski, Sedat Boyacı, Joanna Kocięcka and Atılgan Atılgan
Energies 2024, 17(12), 2811; https://doi.org/10.3390/en17122811 - 7 Jun 2024
Viewed by 1303
Abstract
The purpose of this study is to determine the performances of luffa and greenhouse shading netting (which can be used as alternatives to commercial cellulose pads, that are popular for cooling greenhouses), the contribution of external shading to the evaporative cooling performance, and [...] Read more.
The purpose of this study is to determine the performances of luffa and greenhouse shading netting (which can be used as alternatives to the commercial cellulose pads that are popular for cooling greenhouses), the contribution of external shading to the evaporative cooling performance, and the energy consumption of the direct evaporative cooler. In this experiment, eight different applications were evaluated: natural ventilation (NV), natural ventilation combined with external shading net (NV + ESN), cellulose pad (CP), cellulose pad combined with external shading net (CP + ESN), luffa pad (LP), luffa pad combined with external shading net (LP + ESN), shading net pad (SNP), and shading net pad combined with external shading net (SNP + ESN). The cooling efficiencies of CP, CP + ESN, LP, LP + ESN, SNP, and SNP + ESN were found to be 37.6%, 45.0%, 38.9%, 41.2%, 24.4%, and 29.1%, respectively. Moreover, their cooling capacities were 2.6 kW, 3.0 kW, 2.8 kW, 3.0 kW, 1.7 kW, and 2.0 kW, respectively. The system water consumption values were 2.9, 3.1, 2.8, 3.2, 2.4, and 2.4 L h−1, respectively. The performance coefficients of the system were determined to be 10.2, 12.1, 11.3, 11.9, 6.6, and 7.8, respectively. The system’s electricity consumption per unit area was 0.15 kWh m−2. As a result of the study, it was determined that commercially used cellulose pads have advantages over luffa and shading net materials. However, luffa pads can be a good alternative to cellulose pads, considering their local availability, initial cost, cooling efficiency, and capacity. Full article
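The cooling efficiency and cooling capacity values quoted above follow the standard direct-evaporative-cooling definitions: saturation efficiency measures how far the outlet air is cooled toward the inlet wet-bulb limit, and sensible capacity is airflow times air heat capacity times the temperature drop. A minimal sketch (function names, example temperatures, and the air property constants are assumptions, not values from the paper):

```python
def saturation_efficiency(t_dry_in, t_dry_out, t_wet_in):
    """Saturation (cooling) efficiency of an evaporative pad, in %.
    100% would mean outlet air reaches the inlet wet-bulb temperature."""
    return 100.0 * (t_dry_in - t_dry_out) / (t_dry_in - t_wet_in)

def sensible_cooling_capacity_kw(airflow_m3_s, t_dry_in, t_dry_out,
                                 air_density=1.2, cp_air=1.006):
    """Sensible cooling capacity in kW.
    air_density in kg/m^3, cp_air in kJ/(kg K)."""
    return airflow_m3_s * air_density * cp_air * (t_dry_in - t_dry_out)

# Hypothetical example: 35 C inlet air with a 24 C wet bulb,
# cooled to 27 C at an airflow of 1 m^3/s.
eff = saturation_efficiency(35.0, 27.0, 24.0)
q = sensible_cooling_capacity_kw(1.0, 35.0, 27.0)
```

With these definitions, a pad that cools further toward the wet-bulb temperature (like CP + ESN above) scores both a higher efficiency and a larger capacity at the same airflow.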
(This article belongs to the Special Issue Energy Sources from Agriculture and Rural Areas II)
Show Figures

Figure 1
<p>Direct evaporative cooling system and the components of the system.</p>
Full article ">Figure 2
<p>The testing materials: (<b>a</b>) cellulose pad; (<b>b</b>) luffa pad; (<b>c</b>) shading net pad.</p>
Full article ">Figure 3
<p>Variations in the performance parameters of the CP and CP + ESN pads with time: (<b>a</b>) CP; (<b>b</b>) CP + ESN.</p>
Full article ">Figure 4
<p>Variation in the sensible and latent heat transfer as a function of time: (<b>a</b>) cellulose pad; (<b>b</b>) cellulose pad + external shading net.</p>
Full article ">Figure 5
<p>Variations in the performance parameters of the LP and LP + ESN pads with time: (<b>a</b>) LP; (<b>b</b>) LP + ESN.</p>
Full article ">Figure 6
<p>Variation in the sensible and latent heat transfer as a function of time: (<b>a</b>) luffa pad; (<b>b</b>) luffa pad + external shading net.</p>
Full article ">Figure 7
<p>Variations in the performance parameters of the SNP and SNP + ESN pads with time: (<b>a</b>) SNP; (<b>b</b>) SNP + ESN.</p>
Full article ">Figure 8
<p>Variation in the sensible and latent heat transfer as a function of time: (<b>a</b>) shading net pad; (<b>b</b>) shading net pad + external shading net.</p>
Full article ">
20 pages, 1613 KiB  
Article
Energy-Efficient Edge and Cloud Image Classification with Multi-Reservoir Echo State Network and Data Processing Units
by E. J. López-Ortiz, M. Perea-Trigo, L. M. Soria-Morillo, J. A. Álvarez-García and J. J. Vegas-Olmos
Sensors 2024, 24(11), 3640; https://doi.org/10.3390/s24113640 - 4 Jun 2024
Cited by 1 | Viewed by 891
Abstract
In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning [...] Read more.
In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning models, such as convolutional neural networks (CNNs), pose significant computational and memory challenges, limiting their use in resource-constrained environments. Echo State Networks (ESNs), based on reservoir computing principles, offer a promising alternative with reduced computational complexity and shorter training times. This study explores the applicability of ESN-based architectures in image classification and weather forecasting tasks, using benchmarks such as the MNIST, FashionMnist, and CloudCast datasets. Through comprehensive evaluations, the Multi-Reservoir ESN (MRESN) architecture emerges as a standout performer, demonstrating its potential for deployment on DPUs or home stations. By exploiting the dynamic adaptability of the MRESN to changing input signals, such as weather forecasts, continuous on-device training becomes feasible, eliminating the need for static pre-trained models. Our results highlight the importance of lightweight models such as MRESN in cloud and edge computing applications where efficiency and sustainability are paramount. This study contributes to the advancement of efficient computing practices by providing novel insights into the performance and versatility of MRESN architectures. By facilitating the adoption of lightweight models in resource-constrained environments, our research provides a viable alternative for improved efficiency and scalability in modern computing paradigms. Full article
(This article belongs to the Special Issue Feature Papers in the 'Sensor Networks' Section 2024)
Show Figures

Figure 1
<p>ESN training. The symbol <math display="inline"><semantics> <mrow> <mo>+</mo> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mo>+</mo> </mrow> </semantics></math> is used to express a vector concatenation. <math display="inline"><semantics> <mi>λ</mi> </semantics></math> is used to express the computation of the new state. The length of training is expressed by T.</p>
Full article ">Figure 2
<p>ESN test phase. The output layer processes the new input together with the state it produces in the network to obtain the predicted output. Symbol <math display="inline"><semantics> <mrow> <mo>+</mo> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mo>+</mo> </mrow> </semantics></math> is used to express vector concatenation, while ⨂ is used to express matrix multiplication.</p>
Full article ">Figure 3
<p>ESN with specialized output layers. Argmax or softmax could be used as the final step. Symbol <math display="inline"><semantics> <mrow> <mo>+</mo> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mo>+</mo> </mrow> </semantics></math> is used to express vector concatenation, while ⨂ is used to express matrix multiplication.</p>
Full article ">Figure 4
<p>MRESN architecture. Test phase. Symbol <math display="inline"><semantics> <mrow> <mo>+</mo> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mspace width="-0.166667em"/> <mo>+</mo> </mrow> </semantics></math> is used to express vector concatenation, while ⨂ is used to express matrix multiplication.</p>
Full article ">Figure 5
<p>(<b>a</b>) MNIST, (<b>b</b>) FashionMNIST.</p>
Full article ">Figure 6
<p>CloudCast sample 728 × 728 pixels. Europe region. Coloured regions correspond to the first four classes.</p>
Full article ">Figure 7
<p>Execution times on CPU vs. GPU vs. DPU for 8 × 1000 nodes MRESN.</p>
Full article ">Figure 8
<p>Effect of neighboring pixels on the target pixel over time. The dashed lines represent successive time steps. Certain pixels exert immediate influence on the target (green cell), while others require multiple steps for their information to reach the target.</p>
Full article ">Figure 9
<p>ESN grid search for the CloudCast dataset. The spectral radius has a large influence on the results.</p>
Full article ">Figure 10
<p>PSO results for ESN architecture.</p>
Full article ">Figure 11
<p>PSO over MRESN architecture.</p>
Full article ">
19 pages, 2010 KiB  
Review
Emerging Technologies for Automation in Environmental Sensing: Review
by Shekhar Suman Borah, Aaditya Khanal and Prabha Sundaravadivel
Appl. Sci. 2024, 14(8), 3531; https://doi.org/10.3390/app14083531 - 22 Apr 2024
Cited by 2 | Viewed by 4188
Abstract
This article explores the impact of automation on environmental sensing, focusing on advanced technologies that revolutionize data collection analysis and monitoring. The International Union of Pure and Applied Chemistry (IUPAC) defines automation as integrating hardware and software components into modern analytical systems. Advancements [...] Read more.
This article explores the impact of automation on environmental sensing, focusing on advanced technologies that revolutionize data collection, analysis, and monitoring. The International Union of Pure and Applied Chemistry (IUPAC) defines automation as integrating hardware and software components into modern analytical systems. Advancements in electronics, computer science, and robotics drive the evolution of automated sensing systems, overcoming traditional limitations in manual data collection. Environmental sensor networks (ESNs) address challenges in weather constraints and cost considerations, providing high-quality time-series data, although issues in interoperability, calibration, communication, and longevity persist. Unmanned Aerial Systems (UASs), particularly unmanned aerial vehicles (UAVs), play an important role in environmental monitoring due to their versatility and cost-effectiveness. Despite challenges in regulatory compliance and technical limitations, UAVs offer detailed spatial and temporal information. Pollution monitoring faces challenges related to high costs and maintenance requirements, prompting the exploration of cost-efficient alternatives. Smart agriculture encounters hurdles in data integration, interoperability, device durability in adverse weather conditions, and cybersecurity threats, necessitating privacy-preserving techniques and federated learning approaches. Financial barriers, including hardware costs and ongoing maintenance, impede the widespread adoption of smart technology in agriculture. Integrating robotics, notably underwater vehicles, proves indispensable in various environmental monitoring applications, providing accurate data in challenging conditions. This review details the significant role of transfer learning and edge computing, which are integral components of robotics and wireless monitoring frameworks. These advancements aid in overcoming challenges in environmental sensing, underscoring the ongoing necessity for research and innovation to enhance monitoring solutions. Some state-of-the-art frameworks and datasets are analyzed to provide a comprehensive review of the basic steps involved in the automation of environmental sensing applications. Full article
Show Figures

Figure 1
<p>Integration of automation, robotics, and edge computing in environmental sensing.</p>
Full article ">Figure 2
<p>Transfer learning workflow [<a href="#B38-applsci-14-03531" class="html-bibr">38</a>].</p>
Full article ">Figure 3
<p>Deep learning workflow [<a href="#B60-applsci-14-03531" class="html-bibr">60</a>].</p>
Full article ">Figure 4
<p>Deep learning applications in environment sensing.</p>
Full article ">Figure 5
<p>Edge computing workflow [<a href="#B97-applsci-14-03531" class="html-bibr">97</a>].</p>
Full article ">
20 pages, 3898 KiB  
Article
Neural Networks with Transfer Learning and Frequency Decomposition for Wind Speed Prediction with Missing Data
by Xiaoou Li and Yingqin Zhu
Mathematics 2024, 12(8), 1137; https://doi.org/10.3390/math12081137 - 10 Apr 2024
Cited by 3 | Viewed by 972
Abstract
This paper presents a novel data-driven approach for enhancing time series forecasting accuracy when faced with missing data. Our proposed method integrates an Echo State Network (ESN) with ARIMA (Autoregressive Integrated Moving Average) modeling, frequency decomposition, and online transfer learning. This combination specifically [...] Read more.
This paper presents a novel data-driven approach for enhancing time series forecasting accuracy when faced with missing data. Our proposed method integrates an Echo State Network (ESN) with ARIMA (Autoregressive Integrated Moving Average) modeling, frequency decomposition, and online transfer learning. This combination specifically addresses the challenges missing data introduce in time series prediction. By using the strengths of each technique, our framework offers a robust solution for handling missing data and achieving superior forecasting accuracy in real-world applications. We demonstrate the effectiveness of the proposed model through a wind speed prediction case study. Compared to the existing methods, our approach achieves significant improvement in prediction accuracy, paving the way for more reliable decision-making in wind energy operations and management. Full article
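The decompose-then-model idea behind this framework can be sketched simply: split the series into a low-frequency trend and a high-frequency residual, then forecast each component with the model suited to it. In the sketch below a least-squares AR model stands in for ARIMA and the residual would be handed to the ESN; all function names, the moving-average decomposition, and the AR order are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def decompose(y, window=24):
    """Split a series into a low-frequency trend (moving average)
    and a high-frequency residual; trend + residual == y exactly."""
    trend = np.convolve(y, np.ones(window) / window, mode="same")
    return trend, y - trend

def fit_ar(y, order=3):
    """Least-squares AR(order) coefficients (a simple ARIMA stand-in)."""
    X = np.column_stack(
        [y[order - k:len(y) - k] for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return coef

def ar_forecast(y, coef, steps=1):
    """Iterated one-step AR forecasts from the end of the series."""
    hist = list(y[-len(coef):])                # chronological order
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, hist[::-1]))  # most recent lag first
        out.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(out)
```

In the full framework, the final forecast would sum the trend forecast and an ESN forecast of the residual, with missing values imputed beforehand.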
(This article belongs to the Special Issue Advanced Computational Intelligence)
Show Figures

Figure 1
<p>ESN structure.</p>
Full article ">Figure 2
<p>Echo state ARIMA model.</p>
Full article ">Figure 3
<p>Frequency decomposition for echo state ARIMA model.</p>
Full article ">Figure 4
<p>Transfer learning to improve the model ES-ARIMA.</p>
Full article ">Figure 5
<p>Scheme of the long-term prediction of time series with missing values using echo state ARIMA model.</p>
Full article ">Figure 6
<p>The histogram of wind speed.</p>
Full article ">Figure 7
<p>ACF/PAF analysis of the three datasets. (<b>a</b>) ACF analysis of Kaggle Farm 1 data; (<b>b</b>) PAF analysis of Kaggle Farm 1 data; (<b>c</b>) ACF analysis of California data; (<b>d</b>) PAF analysis of California data; (<b>e</b>) ACF analysis of Germany TenneTTSO data; (<b>f</b>) PAF analysis of Germany TenneTTSO data.</p>
Full article ">Figure 8
<p>The prediction of Kaggle using ES-ARIMA: Farm 6.</p>
Full article ">Figure 9
<p>The prediction of Kaggle using frequency decomposition: Farm 6.</p>
Full article ">Figure 10
<p>The prediction of using transfer learning: Farm 5.</p>
Full article ">
21 pages, 5788 KiB  
Article
PAR-Net: An Enhanced Dual-Stream CNN–ESN Architecture for Human Physical Activity Recognition
by Imran Ullah Khan and Jong Weon Lee
Sensors 2024, 24(6), 1908; https://doi.org/10.3390/s24061908 - 16 Mar 2024
Cited by 3 | Viewed by 1463
Abstract
Physical exercise affects many facets of life, including mental health, social interaction, physical fitness, and illness prevention, among many others. Therefore, several AI-driven techniques have been developed in the literature to recognize human physical activities. However, these techniques fail to adequately learn the [...] Read more.
Physical exercise affects many facets of life, including mental health, social interaction, physical fitness, and illness prevention, among many others. Therefore, several AI-driven techniques have been developed in the literature to recognize human physical activities. However, these techniques fail to adequately learn the temporal and spatial features of the data patterns. Additionally, these techniques are unable to fully comprehend complex activity patterns over different periods, emphasizing the need for enhanced architectures that further increase accuracy by learning the spatiotemporal dependencies in the data individually. Therefore, in this work, we develop an attention-enhanced dual-stream network (PAR-Net) for physical activity recognition with the ability to extract both spatial and temporal features simultaneously. The PAR-Net integrates convolutional neural networks (CNNs) and echo state networks (ESNs), followed by a self-attention mechanism for optimal feature selection. The dual-stream feature extraction mechanism enables the PAR-Net to learn spatiotemporal dependencies from actual data. Furthermore, the incorporation of a self-attention mechanism makes a substantial contribution by facilitating targeted attention on significant features, hence enhancing the identification of nuanced activity patterns. The PAR-Net was evaluated on two benchmark physical activity recognition datasets and achieved superior performance, surpassing the comparative baselines. Additionally, a thorough ablation study was conducted to determine the best optimal model for human physical activity recognition. Full article
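The self-attention step used for feature selection in architectures like PAR-Net reduces to scaled dot-product attention over the extracted feature sequence. A minimal single-head sketch (the projection shapes and single-head form are assumptions; the paper's exact attention variant may differ):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (T, d) sequence of features; Wq/Wk/Wv: (d, d_k) projections.
    Returns the attended features and the (T, T) attention weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # Numerically stable row-wise softmax.
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V, w
```

Each output row is a convex combination of the value vectors, so features that matter for the activity class receive larger weights while the rest are attenuated.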
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1
<p>The internal architecture of the ESN.</p>
Full article ">Figure 2
<p>Main framework of the PAR-Net.</p>
Full article ">Figure 3
<p>The human body skeleton joint extracted through Kinect.</p>
Full article ">Figure 4
<p>The accuracy of different models on the PAR and PER datasets.</p>
Full article ">Figure 5
<p>The precision of different models on the PAR and PER datasets.</p>
Full article ">Figure 6
<p>Recall comparisons of different models on the PAR and PER datasets.</p>
Full article ">Figure 7
<p>F1-score comparison of different models on the PAR and PER datasets.</p>
Full article ">Figure 8
<p>The confusion metrics of the PAR-Net on various sequences on the PAR dataset.</p>
Full article ">Figure 9
<p>The confusion metrics of the PAR-Net on different sequences on the PER dataset.</p>
Full article ">Figure 10
<p>The training and validation accuracy and loss graphs on the PAR dataset.</p>
Full article ">Figure 11
<p>The training and validation accuracy and loss graphs on the PER dataset.</p>
Full article ">
16 pages, 4455 KiB  
Article
Low-Cost Data-Driven Robot Collision Localization Using a Sparse Modular Point Matrix
by Haoyu Lin, Pengkun Quan, Zhuo Liang, Dongbo Wei and Shichun Di
Appl. Sci. 2024, 14(5), 2131; https://doi.org/10.3390/app14052131 - 4 Mar 2024
Viewed by 1020
Abstract
In the context of automatic charging for electric vehicles, collision localization for the end-effector of robots not only serves as a crucial visual complement but also provides essential foundations for subsequent response design. In this scenario, data-driven collision localization methods are considered an [...] Read more.
In the context of automatic charging for electric vehicles, collision localization for the end-effector of robots not only serves as a crucial visual complement but also provides essential foundations for subsequent response design. In this scenario, data-driven collision localization methods are considered an ideal choice. However, due to the typically high demands on the data scale associated with such methods, they may significantly increase the construction cost of models. To mitigate this issue to some extent, in this paper, we propose a novel approach for robot collision localization based on a sparse modular point matrix (SMPM) in the context of automatic charging for electric vehicles. This method, building upon the use of collision point matrix templates, strategically introduces sparsity to the sub-regions of the templates, aiming to reduce the scale of data collection. Additionally, we delve into the exploration of data-driven models adapted to SMPMs. We design a feature extractor that combines a convolutional neural network (CNN) with an echo state network (ESN) to perform adaptive feature extraction on collision vibration signals. Simultaneously, by incorporating a support vector machine (SVM) as a classifier, the model is capable of accurately estimating the specific region in which the collision occurs. The experimental results demonstrate that the proposed collision localization method maintains a collision localization accuracy of 91.27% and a collision localization RMSE of 1.46 mm, despite a 48.15% reduction in data scale. Full article
(This article belongs to the Special Issue Recent Advances in Robotics and Intelligent Robots Applications)
Show Figures

Figure 1
<p>Illustration of the automatic charging equipment.</p>
Full article ">Figure 2
<p>Different sparsification forms of SMPMs.</p>
Full article ">Figure 3
<p>The structure of the ESN.</p>
Full article ">Figure 4
<p>Structure of the CE-SVM.</p>
Full article ">Figure 5
<p>Illustration of SMPMs at four corners of the collision point matrix template.</p>
Full article ">Figure 6
<p>Illustration of SMPMs with different distributions. (<b>a</b>) Case with horizontal movement; (<b>b</b>) case with vertical movement; (<b>c</b>) case with movement toward the center.</p>
Full article ">Figure 7
<p>Collision localization accuracy with different forms of SMPMs at four corners.</p>
Full article ">Figure 8
<p>Collision localization results using SMPMs with different distributions. (<b>a</b>) UD; (<b>b</b>) LR; (<b>c</b>) CT.</p>
Full article ">Figure 9
<p>Illustration of SMPMs in the Cell3-2 form with varying degrees of sparsification.</p>
Full article ">
25 pages, 7815 KiB  
Article
Ensemble and Pre-Training Approach for Echo State Network and Extreme Learning Machine Models
by Lingyu Tang, Jun Wang, Mengyao Wang and Chunyu Zhao
Entropy 2024, 26(3), 215; https://doi.org/10.3390/e26030215 - 28 Feb 2024
Cited by 1 | Viewed by 1318
Abstract
The echo state network (ESN) is a recurrent neural network that has yielded state-of-the-art results in many areas owing to its rapid learning ability and the fact that the weights of input neurons and hidden neurons are fixed throughout the learning process. However, [...] Read more.
The echo state network (ESN) is a recurrent neural network that has yielded state-of-the-art results in many areas owing to its rapid learning ability and the fact that the weights of input neurons and hidden neurons are fixed throughout the learning process. However, the setting procedure for initializing the ESN’s recurrent structure may lead to difficulties in designing a sound reservoir that matches a specific task. This paper proposes an improved pre-training method to adjust the model’s parameters and topology to obtain an adaptive reservoir for a given application. Two strategies, namely global random selection and ensemble training, are introduced to pre-train the randomly initialized ESN model. Specifically, particle swarm optimization is applied to optimize chosen fixed and global weight values within the network, and the reliability and stability of the pre-trained model are enhanced by employing the ensemble training strategy. In addition, we test the feasibility of the model for time series prediction on six benchmarks and two real-life datasets. The experimental results show a clear enhancement in the ESN learning results. Furthermore, the proposed global random selection and ensemble training strategies are also applied to pre-train the extreme learning machine (ELM), which has a similar training process to the ESN model. Numerical experiments are subsequently carried out on the above-mentioned eight datasets. The experimental findings consistently show that the performance of the proposed pre-trained ELM model is also improved significantly. The suggested two strategies can thus enhance the ESN and ELM models’ prediction accuracy and adaptability. Full article
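Pre-training a chosen subset of fixed weights with particle swarm optimization, as described above, only requires a generic box-constrained PSO minimizer wrapped around a fitness function that builds the network and scores its validation error. A minimal PSO sketch (the inertia and acceleration constants are common defaults, not the paper's settings):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box-constrained parameters.
    f: objective taking a 1-D array; bounds: list of (low, high) pairs."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull (pbest) + social pull (gbest).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

In the pre-training setting, each particle would encode the globally selected weight values, and `f` would rebuild the ESN (or ELM) with those values and return its validation error; an ensemble then averages several pre-trained models.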
(This article belongs to the Section Multidisciplinary Applications)
Show Figures

Figure 1
<p>(<b>a</b>) Basic structure of an ESN model and (<b>b</b>) basic structure of an ELM model. Solid bold arrows represent fixed connections, and dashed arrows represent connections to be trained.</p>
Full article ">Figure 2
<p>Flowchart of basic PSO algorithm.</p>
Full article ">Figure 3
<p>Flowchart of EN-PSO-ESN model.</p>
Full article ">Figure 4
<p>Pre-training framework of <span class="html-italic">N</span> basic ESNs based on PSO algorithm.</p>
Full article ">Figure 5
<p>Iteration procedure of a given particle.</p>
Full article ">Figure 6
<p>Visual representation of prediction performance for six models (MG(1) dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models.</p>
Full article ">Figure 7
<p>Visual representation of prediction performance for six models (MG(2) dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models.</p>
Full article ">Figure 8
<p>Visual representation of prediction performance for six models (NARMA(1) dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models.</p>
Full article ">Figure 9
<p>Visual representation of prediction performance for six models (NARMA(2) dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models.</p>
Full article ">Figure 10
<p>Visual representation of prediction performance for six models (Henon attractor dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models.</p>
Full article ">Figure 11
<p>Visual representation of prediction performance for six models (Lorenz attractor dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models.</p>
Full article ">Figure 12
<p>Visual representation of prediction results for six models (AQ dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models. (<b>c</b>) The fitting performances on the test data. (<b>d</b>) The fitting results on the test data.</p>
Full article ">Figure 13
<p>Visual representation of prediction results for six models (ASN dataset). (<b>a</b>) The comparison of the average measure indices of the ELM-based models. (<b>b</b>) The comparison of the average measure indices of the ESN-based models. (<b>c</b>) The fitting performances on the test data. (<b>d</b>) The fitting results on the test data.</p>
Full article ">Figure 14
<p>The DA distribution for four models. (<b>a</b>) The distribution of DA values for 100 initialized canonical ELM models. (<b>b</b>) The DA distribution for 100 PSO-ELM(I) models. (<b>c</b>) The DA distribution for 100 PSO-ELM(II) models. (<b>d</b>) The DA distribution for ensemble PSO-based ELM models.</p>
Full article ">
Back to TopTop