Algorithms, Volume 17, Issue 6 (June 2024) – 51 articles

Cover Story (view full-size image): The extraction of information from object data hinges on the representational approach employed. This study introduces a novel representation of 3D objects as interconnected series of dependent random variables, paving the way for a non-gradient, non-iterative methodology. This approach facilitates the mapping of two 3D objects by aligning their extrema. It amplifies extrema by aggregating dependent random values and provides an in-depth elucidation of the underlying statistical principles. The method is efficient, yielding a finite set of potential matches; this ensures that any existing mapping is contained within this set. This approach shows promise for object analysis and identification within additive manufacturing and for protein structure alignment. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
21 pages, 1299 KiB  
Article
Towards Sustainable Inventory Management: A Many-Objective Approach to Stock Optimization in Multi-Storage Supply Chains
by João A. M. Santos, Miguel S. E. Martins, Rui M. Pinto and Susana M. Vieira
Algorithms 2024, 17(6), 271; https://doi.org/10.3390/a17060271 - 20 Jun 2024
Viewed by 2660
Abstract
Within the framework of sustainable supply chain management and logistics, this work tackles the complex challenge of optimizing inventory levels across varied storage facilities. It introduces a comprehensive many-objective optimization model designed to minimize holding costs, energy consumption, and shortage risk concurrently, thereby integrating sustainability considerations into inventory management. The model incorporates the distinct energy consumption profiles associated with various storage types and evaluates the influence of stock levels on energy usage. Through an examination of a 60-day production schedule, the dynamic relationship between inventory levels and operational objectives is investigated, revealing a well-defined set of optimal solutions that highlight the trade-off between energy savings and shortage risk. Employing a 30-day rolling forward analysis with daily optimization provides insights into the evolving nature of inventory optimization. Additionally, the model is extended to encompass a five-objective optimization by decomposing shortage risk, offering a nuanced comprehension of inventory risks. The outcomes of this research provide a range of optimal solutions, empowering supply chain managers to make informed decisions that strike a balance among cost, energy efficiency, and supply chain resilience. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
Figures:
Graphical abstract
Figure 1: Example of energy consumption evolution with the stock.
Figure 2: Parallel plot of the many-objective optimization solutions. Each line corresponds to an optimal solution. The colour scale supplied regards the first objective, the energy consumption, simply to aid in identifying the solutions across the remaining objectives.
Figure 3: Pareto fronts between each objective for 3 distinct optimization results, out of the 10 repetitions.
Figure 4: Approximated Pareto fronts for the optimizations with the lowest median objectives of each day of the rolling-forward schedule. The Pareto front's lines are smoothed for easier comprehension.
Figure 5: Parallel plot of the separated risk optimization for the complete solution space.
Figure 6: Parallel plot of the separated risk optimization for the complete solution space, only considering solutions with a risk of type 1 smaller than 400.
Figure 7: Parallel plot of the separated risk optimization for the complete solution space, only considering solutions with a risk of type 1 smaller than 400 and a holding cost larger than 4.8.
15 pages, 2689 KiB  
Article
Sensor Fusion Architecture for Fault Diagnosis with a Predefined-Time Observer
by Ofelia Begovich, Adrián Lizárraga and Antonio Ramírez-Treviño
Algorithms 2024, 17(6), 270; https://doi.org/10.3390/a17060270 - 20 Jun 2024
Viewed by 889
Abstract
This study focuses on generating reliable signals from measured noisy signals through an enhanced sensor fusion method. The main contribution of this research is the development of a novel sensor fusion architecture that creates virtual sensors, improving the system's redundancy. This architecture utilizes an input observer to estimate the system input, which is then fed into the system model; the model output constitutes the virtual sensor. Each virtual sensor therefore includes two filtering stages, both derived from the system's dynamics (the input observer and the system model), which effectively diminish noise in the virtual sensors. The architecture then applies a classical sensor fusion scheme and a voter to merge the virtual sensors with the real measured signals, enhancing signal reliability. The effectiveness of this method is shown by applying the merged signals to two distinct diagnosers: one utilizes a high-order sliding mode observer, while the other employs an innovative extension of a predefined-time observer. The findings indicate that the proposed architecture improves diagnostic results. Moreover, a three-wheeled omnidirectional mobile robot equipped with noisy sensors serves as a case study, confirming the approach's efficacy in an actual noisy setting and highlighting its principal characteristics. Importantly, the diagnostic systems can manage several simultaneous actuator faults. Full article
Figures:
Figure 1: Sensor fusion architecture and the diagnoser applied in a diagnoser scheme. The sensor fusion architecture encompasses the homogenizer, EKF, and voter stages. The homogenizer generates virtual outputs, while the EKF merges real and virtual signals to generate more reliable signals, the filter reduces the noise from the signals, and the voter selects the best signal (in this work, the minimum variance signal). The diagnoser leverages these reliable signals to detect, locate, and identify concurrent and nonconcurrent faults.
Figure 2: Implementation of the signal homogenizer i to obtain the virtual sensors Σ̄^i, an estimate of the state x̄^i, and an estimate for the input ū^i.
Figure 3: Three-wheeled omnidirectional mobile robot.
Figure 4: Mobile robot trajectory comparison.
Figure 5: The proposed diagnoser based on a predefined-time observer is used, with (left) and without (right) the proposed sensor fusion architecture.
Figure 6: A previously reported diagnoser based on an HOSM differentiator is used, with (left) and without (right) the proposed sensor fusion architecture.
Figure 7: Comparison of the diagnosis with Σ̄^1 (left) and Σ̄^2 (right).
2 pages, 150 KiB  
Editorial
Why Reinforcement Learning?
by Mehmet Emin Aydin, Rafet Durgut and Abdur Rakib
Algorithms 2024, 17(6), 269; https://doi.org/10.3390/a17060269 - 20 Jun 2024
Viewed by 1134
Abstract
The term Artificial Intelligence (AI) has come to be one of the most frequently expressed keywords around the globe [...] Full article
(This article belongs to the Special Issue Advancements in Reinforcement Learning Algorithms)
14 pages, 312 KiB  
Article
Parsing Unranked Tree Languages, Folded Once
by Martin Berglund, Henrik Björklund and Johanna Björklund
Algorithms 2024, 17(6), 268; https://doi.org/10.3390/a17060268 - 19 Jun 2024
Viewed by 802
Abstract
A regular unranked tree folding consists of a regular unranked tree language and a folding operation that merges (i.e., folds) selected nodes of a tree to form a graph; the combination is a formal device for representing graph languages. If, in the process of folding, the order among edges is discarded so that the result is an unordered graph, then two applications of a fold operation are enough to make the associated parsing problem NP-complete. However, if the order is kept, then the problem is solvable in non-uniform polynomial time. In this paper, we address the remaining case, where only one fold operation is applied, but the order among the edges is discarded. We show that, under these conditions, the problem is solvable in non-uniform polynomial time. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)
Figures:
Figure 1: In the tree in Subfigure (a), round nodes with and without annotation denote a label in Σ × Δ and Σ, respectively, and square nodes denote a label in Δ = {α, β, γ, δ, ϵ}. Arrows indicate edges, pointing from the source node of each edge to its target. When the tree is evaluated bottom-up, nodes with labels in Σ ∪ (Σ × Δ) are copied to the output graph until the transformation reaches a node with a label α ∈ Δ. Here, all nodes below with a label in Σ × {α} are merged into a single node, which is assigned a label in Σ, and the square node labelled α is removed. The result is the graph in Subfigure (b). The process continues upwards until all Δ-labelled nodes have been cleared, yielding the graph in Subfigure (d). In each of the subfigures (b–d), the node arising from the most recent merger is indicated with a bold outline.
Figure 2: Two examples of trees decorated with folding symbols and the graphs they fold into. As in Figure 1, round nodes with and without annotations denote a label in Σ × Δ and Σ, respectively, and square nodes denote a label in Δ. Nodes that result from mergings are indicated with a bold outline.
Figure 3: To parse a folded graph (top right), we first decompose it into a number of tree fragments attaching to the merged node (bottom row), and then search for a way of reassembling the fragments into a tree in the folded tree language (top left). The single node arising from merging is indicated with a bold outline.
Figure 4: The derivation of an operation sequence from a destructured sequence (v, I, P, S). The shaded parts indicate the initial sequence P. Let I = {i_1, …, i_n}. After the first visit to a position i in P, all cycles on i are inserted, which can be made using operations in S, iterating this for all positions until the remainder of S is free of cycles. Then, the non-cyclic suffix of S is placed at the end of this sequence in a way determined by a topological sort of operation, iteratively appending all operations that require a visit not made in the remainder of S.
14 pages, 9631 KiB  
Article
Semi-Self-Supervised Domain Adaptation: Developing Deep Learning Models with Limited Annotated Data for Wheat Head Segmentation
by Alireza Ghanbari, Gholam Hassan Shirdel and Farhad Maleki
Algorithms 2024, 17(6), 267; https://doi.org/10.3390/a17060267 - 17 Jun 2024
Viewed by 954
Abstract
Precision agriculture involves the application of advanced technologies to improve agricultural productivity, efficiency, and profitability while minimizing waste and environmental impacts. Deep learning approaches enable automated decision-making for many visual tasks. However, in the agricultural domain, variability in growth stages and environmental conditions, such as weather and lighting, presents significant challenges to developing deep-learning-based techniques that generalize across different conditions. The resource-intensive nature of creating extensive annotated datasets that capture these variabilities further hinders the widespread adoption of these approaches. To tackle these issues, we introduce a semi-self-supervised domain adaptation technique based on deep convolutional neural networks with a probabilistic diffusion process, requiring minimal manual data annotation. Using only three manually annotated images and a selection of video clips from wheat fields, we generated a large-scale computationally annotated dataset of image–mask pairs and a large dataset of unannotated images extracted from video frames. We developed a two-branch convolutional encoder–decoder model architecture that uses both synthesized image–mask pairs and unannotated images, enabling effective adaptation to real images. The proposed model achieved a Dice score of 80.7% on an internal test dataset and a Dice score of 64.8% on an external test set composed of images from five countries and spanning 18 domains, indicating its potential to develop generalizable solutions that could encourage the wider adoption of advanced technologies in agriculture. Full article
(This article belongs to the Special Issue Efficient Learning Algorithms with Limited Resources)
Figures:
Figure 1: Three manually annotated image–mask pairs were utilized for data synthesis. We developed two training sets by synthesizing computationally annotated images using the manually annotated images on the left (I_η) and in the middle (I_ζ), producing 8000 images based on I_η and 8000 images based on I_ζ. Hereafter, we refer to the 8000 images developed based on I_η as dataset D_η. We refer to the set comprising the whole 16,000 images as D_{η+ζ}. Additionally, we created a validation set by synthesizing 4000 images, with 2000 from the image on the right (I_τ) and 2000 images based on I_ζ. Hereafter, we refer to this set of 4000 images as D_{ζ+τ}. Dataset D_{ζ+τ} was made to allow for a balanced representation of wheat field images from the early and late growth stages. All computationally annotated samples were synthesized following the methodology described by Najafian et al. [7].
Figure 2: Examples of computationally synthesized images and their corresponding segmentation masks.
Figure 3: Schematic representation of the model architecture. The encoder focuses on developing a joint image representation for both synthesized and real images, while the mask decoder aims at generating segmentation masks and the image decoder aims at reconstructing the real images, forcing the encoder to adapt to the real images.
Figure 4: A ResNet block comprises three groups of operations, including convolution, GroupNorm layers, and the Swish activation function for nonlinearity. It also incorporates skip connections to enhance feature propagation.
Figure 5: Encoder model architecture designed by combining convolutional layers, ResNet blocks, and GroupNorm layers. Also, in each of the two decoding streams, we utilize concatenation instead of addition.
Figure 6: Showcasing the prediction performance of model F_{η+ζ+ρ} (highlighted in a red box in the upper row) in comparison with the results obtained by model S [7] (highlighted in a blue box in the lower row) on samples from the Global Wheat Head Detection dataset [23].
23 pages, 832 KiB  
Article
Re-Orthogonalized/Affine GMRES and Orthogonalized Maximal Projection Algorithm for Solving Linear Systems
by Chein-Shan Liu, Chih-Wen Chang  and Chung-Lun Kuo 
Algorithms 2024, 17(6), 266; https://doi.org/10.3390/a17060266 - 15 Jun 2024
Viewed by 1137
Abstract
GMRES is one of the most powerful and popular methods for solving linear systems in the Krylov subspace; we examine it from two viewpoints: maximizing the decrease in the length of the residual vector, and maintaining the orthogonality of consecutive residual vectors. A stabilization factor, η, which measures the deviation from orthogonality of the residual vectors, is inserted into GMRES to preserve the orthogonality automatically. The re-orthogonalized GMRES (ROGMRES) method guarantees absolute convergence even when orthogonality is gradually lost during the GMRES iteration. When η<1/2, the residuals' lengths of GMRES and GMRES(m) no longer decrease; hence, η<1/2 can be adopted as a stopping criterion to terminate the iterations. We prove η=1 for the ROGMRES method; it automatically keeps the orthogonality and maintains the maximality for reducing the length of the residual vector. We improve GMRES by seeking the descent vector that minimizes the residual in a larger space, the affine Krylov subspace. The resulting orthogonalized maximal projection algorithm (OMPA) is identified as having good performance. We further derive the iterative formulas by extending the GMRES method to the affine Krylov subspace; these equations are slightly different from the equations derived by Saad and Schultz (1986). The affine GMRES method is combined with the orthogonalization technique to generate a powerful affine GMRES (A-GMRES) method with high performance. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
Figures:
Figure 1: For example 1: (a) η of GMRES, ROGMRES, ROGMRES(m), and MPA(m), and (b) the residuals obtained by GMRES blow up, while those obtained by ROGMRES, ROGMRES(m), and MPA(m) do not.
Figure 2: For example 3 of the inverse Cauchy problem: (a) the residuals, (b) comparing solutions, and (c) the errors obtained by GMRES and ROGMRES.
Figure 3: For example 3 with mixed boundary conditions: (a) η and (b) the residuals obtained by GMRES(m), ROGMRES, OMPA, and A-GMRES; the residuals obtained by GMRES(m) fail to converge.
Figure 4: For example 4, the residuals obtained by OMPA, A-GMRES, ROGMRES(m), the original GMRES(m), and the current GMRES(m), which coincides with that obtained by ROGMRES(m).
Figure 5: For example 4, η obtained by OMPA and A-GMRES; without the automatic orthogonality-preserving mechanism, these methods fail.
4 pages, 164 KiB  
Editorial
Artificial Intelligence in Modeling and Simulation
by Nuno Fachada and Nuno David
Algorithms 2024, 17(6), 265; https://doi.org/10.3390/a17060265 - 15 Jun 2024
Viewed by 1538
Abstract
Modeling and simulation (M&S) serve as essential tools in various scientific and engineering domains, enabling the representation of complex systems and processes without the constraints of physical experimentation [...] Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)
23 pages, 5821 KiB  
Article
Optimizing Charging Pad Deployment by Applying a Quad-Tree Scheme
by Rei-Heng Cheng, Chang-Wu Yu and Zuo-Li Zhang
Algorithms 2024, 17(6), 264; https://doi.org/10.3390/a17060264 - 14 Jun 2024
Viewed by 743
Abstract
The recent advancement in wireless power transmission (WPT) has led to the development of wireless rechargeable sensor networks (WRSNs), since this technology provides a means to replenish sensor nodes wirelessly, offering a solution to the energy challenges faced by WSNs. Most recent work has focused on charging sensor nodes using wireless charging vehicles (WCVs) equipped with high-capacity batteries and WPT devices. In these schemes, a vehicle can move close to a sensor node and wirelessly charge it without physical contact. While these schemes can mitigate the energy problem to some extent, they overlook two primary challenges of applying WCVs: off-road navigation and vehicle speed limitations. To overcome these challenges, previous work proposed a new WRSN model equipped with one drone coupled with several pads deployed to charge the drone when it cannot reach the subsequent stop. This wireless charging pad deployment aims to deploy the minimum number of pads so that at least one feasible routing path from the base station can be established for the drone to reach every sensor node in a given WRSN. The major weakness of previous studies is that they only consider deploying a wireless charging pad at the locations of the wireless sensor nodes. Their schemes are overly constrained because, in general, any point in the deployment area can be considered as a pad location. Moreover, the pad deployments suggested by these schemes may not be able to meet the connectivity requirements in sparse environments. In this work, we introduce a new scheme that utilizes the Quad-Tree concept to address the wireless charging pad deployment problem and reduce the number of deployed pads at the same time. Extensive simulations were conducted to illustrate the merits of the proposed schemes by comparing them with previous schemes on maps of varying sizes. On large maps, the proposed schemes surpassed all previous works, indicating that our approach is more suitable for large-scale network environments. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
Figures:
Graphical abstract
Figure 1: A schematic network layout; the base station is located at the center of the network, and the drone can travel directly or via pads to reach sensor locations for recharging tasks.
Figure 2: Illustration of parameters a and b. The listed edges represent the distances that satisfy the conditions, indicating feasible drone flight paths.
Figure 3: When the Quad-Tree node A covers the same sensors (red circle) as the ones within the range (blue circle) of its four child nodes B, C, D, and E, the Quad-Tree node A does not need to be subdivided into B, C, D, and E.
Figure 4: All sensors within the radius D_maxcharging + dT (red circle) of the center position of the Quad-Tree node (red dot position) encompass all sensors that may be covered when the pad is placed at any position within this Quad-Tree node's square area.
Figure 5: The flowchart of the proposed method.
Figure 6: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on small-scale maps.
Figure 7: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on medium-scale maps.
Figure 8: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on large-scale maps.
Figure 9: Comparisons of (a) the required number of pads and (b) the average execution time of the QT&MSC, QT&MSC&DSC, and CDC&MSC methods on extremely large-scale maps.
Figure 10: The basic structure for generating the special test maps.
Figure 11: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on the first group of special test maps.
Figure 12: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on the second group of special test maps.
Figure 13: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on the third group of special test maps.
Figure 14: Comparisons of (a) the required number of pads and (b) the average execution time of related methods on the fourth group of special test maps.
Figure 15: Comparisons of (a) the required number of pads and (b) the average execution time of the proposed QT&MSC method while changing the minimal unit size of QT on small-scale maps.
Figure 16: Comparisons of (a) the required number of pads and (b) the average execution time of the proposed QT&MSC method while changing the minimal unit size of QT on medium-scale maps.
Figure 17: Comparisons of (a) the required number of pads and (b) the average execution time of the proposed QT&MSC method while changing the minimal unit size of QT on large-scale maps.
19 pages, 5344 KiB  
Article
3D Reconstruction Based on Iterative Optimization of Moving Least-Squares Function
by Saiya Li, Jinhe Su, Guoqing Jiang, Ziyu Huang and Xiaorong Zhang
Algorithms 2024, 17(6), 263; https://doi.org/10.3390/a17060263 - 14 Jun 2024
Viewed by 978
Abstract
Three-dimensional reconstruction from point clouds is an important research topic in computer vision and computer graphics. However, because the original point cloud is discrete, sparse, and noisy, 3D surfaces generated from global features often appear jagged and lack detail, making it difficult to describe shape details accurately. We address the challenge of generating smooth and detailed 3D surfaces from point clouds. We propose an adaptive octree partitioning method to divide the global shape into local regions of different scales. An iterative loop method based on GRU is then used to extract features from local voxels and learn local smoothness and global shape priors. Finally, a moving least-squares approach is employed to generate the 3D surface. Experiments demonstrate that our method outperforms existing methods on benchmark datasets (ShapeNet dataset, ABC dataset, and Famous dataset). Ablation studies confirm the effectiveness of the adaptive octree partitioning and GRU modules. Full article
Figures:
Figure 1: Structure diagram of the 3D reconstruction based on iterative optimization of MLS.
Figure 2: Adaptive octree partition method based on cosine similarity (2D diagram). o_i means the center point of the octant, and p_i means the input point cloud within the octant.
Figure 3: Iterative recurrent feature optimization process based on GRU.
Figure 4: Visualization and comparison results of 3D surface-generation methods under the ShapeNet dataset. (The red frames display the local comparison results of the shape.)
Figure 5: Visualization results of 3D surface-generation methods under the ABC dataset. (The red frames display the local comparison results of the shape.)
Figure 6: Visualization results of 3D surface-generation methods under the Famous dataset. (The red frames display the local comparison results of the shape.)
Figure 7: Comparison of average iteration times.
Figure 8: Comparison of memory.
Figure 9: Varying the number of GRU results to generate a 3D surface-comparison result. (The red frames and arrows display the local comparison results of the shape.)
Figure 10: Visualization and comparison of different methods after adding Gaussian noise (Famous dataset; the red frames display the local comparison results of the shape).
Figure 11: Visualization and comparison of different methods after adding Gaussian noise (ShapeNet dataset; the red frames display the local comparison results of the shape).
15 pages, 1063 KiB  
Article
EAND-LPRM: Enhanced Attention Network and Decoding for Efficient License Plate Recognition under Complex Conditions
by Shijuan Chen, Zongmei Li, Xiaofeng Du and Qin Nie
Algorithms 2024, 17(6), 262; https://doi.org/10.3390/a17060262 - 14 Jun 2024
Cited by 1 | Viewed by 837
Abstract
With the rapid advancement of urban intelligence, there is an increasingly urgent demand for technological innovation in traffic management. License plate recognition technology can achieve high accuracy under ideal conditions but faces significant challenges in complex traffic environments and adverse weather conditions. To address these challenges, we propose the enhanced attention network and decoding for license plate recognition model (EAND-LPRM). This model leverages an encoder to extract features from image sequences and employs a self-attention mechanism to focus on critical feature information, enhancing its capability to handle complex traffic scenarios such as rainy weather and license plate distortion. We have curated and utilized publicly available datasets that closely reflect real-world scenarios, ensuring transparency and reproducibility. Experimental evaluations conducted on these datasets, which include various complex scenarios, demonstrate that the EAND-LPRM model achieves an accuracy of 94%, representing a 6% improvement over traditional license plate recognition algorithms. The main contributions of this research include the development of a novel attention-mechanism-based architecture, comprehensive evaluation on multiple datasets, and substantial performance improvements under diverse and challenging conditions. This study provides a practical solution for automatic license plate recognition systems in dynamic and unpredictable environments. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Figures:
Figure 1: Structure diagram of the EAND-LPRM model.
Figure 2: Channel transformation module.
Figure 3: Self-made dataset of Chinese license plate images with multiple complex conditions.
Figure 4: Model evaluation scatter plot.
32 pages, 2034 KiB  
Systematic Review
Artificial Intelligence-Based Algorithms and Healthcare Applications of Respiratory Inductance Plethysmography: A Systematic Review
by Md. Shahidur Rahman, Sowrav Chowdhury, Mirza Rasheduzzaman and A. B. M. S. U. Doulah
Algorithms 2024, 17(6), 261; https://doi.org/10.3390/a17060261 - 14 Jun 2024
Cited by 1 | Viewed by 2051
Abstract
Respiratory Inductance Plethysmography (RIP) is a non-invasive method for the measurement of respiratory rates and lung volumes. Accurate detection of respiratory rates and volumes is crucial for the diagnosis and monitoring of prognosis of lung diseases, for which spirometry is classically used in clinical applications. RIP has been studied as an alternative to spirometry and shown promising results. Moreover, RIP data can be analyzed through machine learning (ML)-based approaches for some other purposes, e.g., detection of apneas, work of breathing (WoB) measurement, and recognition of human activity based on breathing patterns. The goal of this study is to provide an in-depth systematic review of the scope of usage of RIP and current RIP device developments, as well as to evaluate the performance, usability, and reliability of ML-based data analysis techniques within its designated scope while adhering to the PRISMA guidelines. This work also identifies research gaps in the field and highlights the potential scope for future work. The IEEE Xplore, Springer, PLoS One, Science Direct, and Google Scholar databases were examined, and 40 publications were included in this work through a structured screening and quality assessment procedure. Studies with conclusive experimentation on RIP published between 2012 and 2023 were included, while unvalidated studies were excluded. The findings indicate that RIP is an effective method to a certain extent for testing and monitoring respiratory functions, though its accuracy is lacking in some settings. However, RIP possesses some advantages over spirometry due to its non-invasive nature and functionality for both stationary and ambulatory uses. RIP also demonstrates its capabilities in ML-based applications, such as detection of breathing asynchrony, classification of apnea, identification of sleep stage, and human activity recognition (HAR). It is our conclusion that, though RIP is not yet ready to replace spirometry and other established methods, it can provide crucial insights into subjects' conditions associated with respiratory illnesses. The implementation of artificial intelligence (AI) could play a potential role in improving the overall effectiveness of RIP, as suggested in some of the selected studies. Full article
Figures:
Figure 1: Functional block diagram of RIP.
Figure 2: Flow chart of the screening and selection process for the studies.
Figure 3: Distribution of number of subjects per study.
Figure 4: RIP belt coil design with three different sine wave patterns with step sizes of (a) 1 cm, (b) 1.5 cm, and (c) 3 cm [44].
Figure 5: (a) Patient asleep while using a CPAP device. (b) OSA patient with tissue blocking the airway.
15 pages, 1574 KiB  
Article
Exploring Data Augmentation Algorithm to Improve Genomic Prediction of Top-Ranking Cultivars
by Osval A. Montesinos-López, Arvinth Sivakumar, Gloria Isabel Huerta Prado, Josafhat Salinas-Ruiz, Afolabi Agbona, Axel Efraín Ortiz Reyes, Khalid Alnowibet, Rodomiro Ortiz, Abelardo Montesinos-López and José Crossa
Algorithms 2024, 17(6), 260; https://doi.org/10.3390/a17060260 - 14 Jun 2024
Cited by 1 | Viewed by 2774
Abstract
Genomic selection (GS) is a groundbreaking statistical machine learning method for advancing plant and animal breeding. Nonetheless, its practical implementation remains challenging due to numerous factors affecting its predictive performance. This research explores the potential of data augmentation to enhance prediction accuracy across entire datasets and specifically within the top 20% of the testing set. Our findings indicate that, overall, the data augmentation method (method A), when compared to the conventional model (method C) and assessed using Mean Arctangent Absolute Prediction Error (MAAPE) and normalized root mean square error (NRMSE), did not improve the prediction accuracy for the unobserved cultivars. However, significant improvements in prediction accuracy (evidenced by reduced prediction error) were observed when data augmentation was applied exclusively to the top 20% of the testing set. Specifically, reductions in MAAPE_20 and NRMSE_20 by 52.86% and 41.05%, respectively, were noted across various datasets. Further investigation is needed to refine data augmentation techniques for effective use in genomic prediction. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
Figures:
Figure 1: Prediction accuracy performance results for the Maize_1 dataset using the conventional (C) and augmented (A) methods, in terms of the mean arctangent absolute percentage error (MAAPE), MAAPE for the best 20% of cultivars (MAAPE_20), the normalized root mean square error (NRMSE), and NRMSE for the best 20% of cultivars (NRMSE_20).
Figure 2: Prediction accuracy performance results for the Maize_2 dataset using the conventional (C) and augmented (A) methods, in terms of MAAPE, MAAPE_20, NRMSE, and NRMSE_20.
Figure 3: Prediction accuracy performance results for the Maize_3 dataset using the conventional (C) and augmented (A) methods, in terms of MAAPE, MAAPE_20, NRMSE, and NRMSE_20.
Figure 4: Prediction accuracy performance results for the Maize_4 dataset using the conventional (C) and augmented (A) methods, in terms of MAAPE, MAAPE_20, NRMSE, and NRMSE_20.
Figure 5: Prediction accuracy performance results for the Soybean_1 dataset using the conventional (C) and augmented (A) methods, in terms of MAAPE, MAAPE_20, NRMSE, and NRMSE_20.
Figure 6: Prediction accuracy performance results for the Soybean_2 dataset using the conventional (C) and augmented (A) methods, in terms of MAAPE, MAAPE_20, NRMSE, and NRMSE_20.
Figure 7: Prediction accuracy performance results across all datasets using the conventional (C) and augmented (A) methods, in terms of MAAPE, MAAPE_20 (top 20% of cultivars), NRMSE, and NRMSE_20 (top 20% of cultivars).
13 pages, 531 KiB  
Article
Univariate Outlier Detection: Precision-Driven Algorithm for Single-Cluster Scenarios
by Mohamed Limam El hairach, Amal Tmiri and Insaf Bellamine
Algorithms 2024, 17(6), 259; https://doi.org/10.3390/a17060259 - 14 Jun 2024
Viewed by 1280
Abstract
This study introduces a novel algorithm tailored for the precise detection of lower outliers (i.e., data points at the lower tail) in univariate datasets, which is particularly suited for scenarios with a single cluster and similar data distribution. The approach leverages a combination of transformative techniques and advanced filtration methods to efficiently segregate anomalies from normal values. Notably, the algorithm emphasizes high-precision outlier detection, ensuring minimal false positives, and requires only a few parameters for configuration. Its unsupervised nature enables robust outlier filtering without the need for extensive manual intervention. To validate its efficacy, the algorithm is rigorously tested using real-world data obtained from photovoltaic (PV) module strings with similar DC capacities, containing various outliers. The results demonstrate the algorithm’s capability to accurately identify lower outliers while maintaining computational efficiency and reliability in practical applications. Full article
Figures:
Figure 1: Illustrating random normal, negatively skewed, positively skewed, and uniform distributions.
Figure 2: Modified standard hyperbolic tangent transformation (tanh) curve.
Figure 3: Flowchart of the proposed algorithm.
Figure 4: Initial state of the dataset before preprocessing.
Figure 5: Transformed data after the application of the customized tanh function.
Figure 6: Variation of F1 score, recall, accuracy, and precision against changing threshold values.
Figure 7: Impact of the scaling parameter on sensitivity in the sigmoid curve of tanh(αx − 3)·3 + 3.
Figure 8: The ROC curve of the autoencoder.
Figure 9: The ROC curve of the SVM.
Figure 10: The ROC curve of the proposed approach.
14 pages, 625 KiB  
Article
Approximating a Minimum Dominating Set by Purification
by Ernesto Parra Inza, Nodari Vakhania, José María Sigarreta Almira and José Alberto Hernández-Aguilar
Algorithms 2024, 17(6), 258; https://doi.org/10.3390/a17060258 - 12 Jun 2024
Viewed by 851
Abstract
A dominating set of a graph is a subset of vertices such that every vertex not in the subset has at least one neighbor within the subset. The corresponding optimization problem is known to be NP-hard. It proves beneficial to separate the solution process into two stages. First, one can apply a fast greedy algorithm to obtain an initial dominating set and then use an iterative procedure to purify (reduce) the size of this dominating set. In this work, we develop the purification stage and propose new purification algorithms. The purification procedures that we present here outperform, in practice, the earlier known purification procedure. We have tested our algorithms on over 1300 benchmark problem instances. Compared to the estimations due to known upper bounds, the obtained solutions are about seven times better. Remarkably, for the 500 benchmark instances for which the optimum is known, the optimal solutions are obtained for 46.33% of the tested instances, whereas the average error for the remaining instances is about 1.01. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
Figures:
Figure 1: Forests T^(h−1) and T^h (the red edge in T^(h−1) is omitted in T^h, and yellow edges are added in T^h).
Figure 2: Graph G and Cluster T. (a) Graph G with γ(G) = 2; (b) Cluster T.
Figure 3: Results of the purification procedures (min{S_i*}), γ(G), and U.
Figure 4: Execution time of the purification procedures. (a) Maximum execution times among P1–P4 for all instances. (b) Execution times for individual procedures.
15 pages, 4225 KiB  
Article
NSBR-Net: A Novel Noise Suppression and Boundary Refinement Network for Breast Tumor Segmentation in Ultrasound Images
by Yue Sun, Zhaohong Huang, Guorong Cai, Jinhe Su and Zheng Gong
Algorithms 2024, 17(6), 257; https://doi.org/10.3390/a17060257 - 12 Jun 2024
Cited by 1 | Viewed by 1034
Abstract
Breast tumor segmentation of ultrasound images provides valuable tumor information for early detection and diagnosis. However, speckle noise and blurred boundaries in breast ultrasound images present challenges for tumor segmentation, especially for malignant tumors with irregular shapes. Recent vision transformers have shown promising performance in handling the variation through global context modeling. Nevertheless, they are often dominated by features of large patterns and lack the ability to recognize negative information in ultrasound images, which leads to the loss of breast tumor details (e.g., boundaries and small objects). In this paper, we propose a novel noise suppression and boundary refinement network, NSBR-Net, to simultaneously alleviate speckle noise interference and blurred boundary problems of breast tumor segmentation. Specifically, we propose two innovative designs, namely, the Noise Suppression Module (NSM) and the Boundary Refinement Module (BRM). The NSM filters noise information from the coarse-grained feature maps, while the BRM progressively refines the boundaries of significant lesion objects. Our method demonstrates superior accuracy over state-of-the-art deep learning models, achieving significant improvements of 3.67% on Dataset B and 2.30% on the BUSI dataset in mDice for testing malignant tumors. Full article
Figures:
Figure 1: Challenges in a breast ultrasound image segmentation task. The red lines are the boundaries of the breast tumors. Inside and outside the boundaries are where the boundaries are blurred. The areas inside the blue circles are typical regions of speckle noise, which refers to irregular, distinct brightness and darkness distribution.
Figure 2: The network's performance variation when eliminating high-frequency information. The metrics, including the mean Dice coefficient (mDice) and mean intersection over union (mIoU), both assess the internal consistency of objects within the segmentation results.
Figure 3: The framework of our proposed NSBR-Net primarily comprises the pyramid vision transformer, partial decoder (PD) [23], Noise Suppression Module, and Boundary Refinement Module.
Figure 4: Overall architecture of the NSM. It is composed of a low-pass filter (LPF) and a high-pass filter (HPF).
Figure 5: Overall architecture of the BRM, which contains reverse attention and axial attention.
Figure 6: Qualitative comparison of different methods on BUSI [19] (first and second rows) and Dataset B [21] (third and fourth rows). The red curve is the ground truth boundary. The green curve is the segmentation result of these methods.
Figure 7: Visualization results of the ablation experiment on BUSI [19] (first and second rows) and Dataset B [21] (third and fourth rows). The red curve is the ground truth boundary. The green curve is the segmentation result of different components.
19 pages, 4271 KiB  
Article
Synthesis of Circular Antenna Arrays for Achieving Lower Side Lobe Level and Higher Directivity Using Hybrid Optimization Algorithm
by Vikas Mittal, Kanta Prasad Sharma, Narmadha Thangarasu, Udandarao Sarat, Ahmad O. Hourani and Rohit Salgotra
Algorithms 2024, 17(6), 256; https://doi.org/10.3390/a17060256 - 11 Jun 2024
Viewed by 1100
Abstract
Circular antenna arrays (CAAs) find extensive utility in a range of cutting-edge communication applications such as 5G networks, the Internet of Things (IoT), and advanced beamforming technologies. In antenna design, the side lobe level (SLL) of the radiation pattern holds significant importance within communication systems, primarily because of its role in mitigating signal interference across the side lobes of the entire radiation pattern. In this work, an optimization problem is formulated to suppress the side lobes, achieve the required main lobe orientation, and improve directivity. This paper introduces a method aimed at enhancing the radiation pattern of a CAA by minimizing its SLL using a Hybrid Sooty Tern Naked Mole-Rat Algorithm (STNMRA). The simulation results show that the hybrid optimization method significantly reduces the side lobes while maintaining reasonable directivity compared to the uniform array and other competitive metaheuristics. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
Show Figures
Figure 1. Configuration of CAA.
Figure 2. Flowchart of STNMRA.
Figure 3. Convergence characteristics for 12-element CAA.
Figure 4. Twelve-element CAA: (a) 2D beam patterns; (b) polar plots.
Figure 5. Three-dimensional beam pattern: (a) uniform; (b) GWO; (c) SCA; (d) SSA; (e) CS; (f) STNMRA for 12-element CAA.
Figure 6. Convergence characteristics for 24-element CAA.
Figure 7. Twenty-four-element CAA: (a) 2D beam patterns; (b) polar plots.
Figure 8. Three-dimensional beam pattern: (a) uniform; (b) GWO; (c) SCA; (d) SSA; (e) CS; (f) STNMRA for 24-element CAA.
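The SLL objective above can be illustrated with a minimal sketch, assuming a uniform circular array with uniform excitation and an azimuth-plane pattern: it evaluates a standard circular array factor and estimates the peak side lobe level. It is not the STNMRA optimizer itself, only the quantity such an optimizer would minimize; the element count, spacing, and main-lobe exclusion width are assumptions.

```python
# Illustrative sketch (not the paper's STNMRA): compute the azimuth-plane array
# factor of an N-element uniform circular antenna array and estimate its peak
# side lobe level (SLL) relative to the main beam.
import numpy as np

def circular_array_factor(n_elems=12, spacing_wl=0.5, phi0=0.0, n_points=3601):
    k = 2 * np.pi                                   # wavenumber with wavelength = 1
    radius = n_elems * spacing_wl / (2 * np.pi)     # arc spacing of spacing_wl wavelengths
    phi_n = 2 * np.pi * np.arange(n_elems) / n_elems
    alpha_n = -k * radius * np.cos(phi0 - phi_n)    # phase taper steering the beam to phi0
    phi = np.linspace(-np.pi, np.pi, n_points)
    af = np.abs(np.exp(1j * (k * radius * np.cos(phi[:, None] - phi_n) + alpha_n)).sum(axis=1))
    return phi, af / af.max()

def peak_sll_db(phi, af, main_beam_halfwidth=np.radians(20)):
    side = af[np.abs(phi) > main_beam_halfwidth]    # crude exclusion of the main lobe
    return 20 * np.log10(side.max())

phi, af = circular_array_factor()
print(f"approx. peak SLL: {peak_sll_db(phi, af):.1f} dB")
```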
3 pages, 185 KiB  
Editorial
Guest Editorial for the Special Issue “New Trends in Algorithms for Intelligent Recommendation Systems”
by Edward Rolando Núñez-Valdez and Vicente García-Díaz
Algorithms 2024, 17(6), 255; https://doi.org/10.3390/a17060255 - 10 Jun 2024
Viewed by 938
Abstract
Currently, the problem of information overload, a term popularized by Alvin Toffler in his book Future Shock [...] Full article
(This article belongs to the Special Issue New Trends in Algorithms for Intelligent Recommendation Systems)
12 pages, 473 KiB  
Review
The Quest for the Application of Artificial Intelligence to Whole Slide Imaging: Unique Prospective from New Advanced Tools
by Gavino Faa, Massimo Castagnola, Luca Didaci, Fernando Coghe, Mario Scartozzi, Luca Saba and Matteo Fraschini
Algorithms 2024, 17(6), 254; https://doi.org/10.3390/a17060254 - 10 Jun 2024
Cited by 3 | Viewed by 2106
Abstract
The introduction of machine learning in digital pathology has deeply impacted the field, especially with the advent of whole slide image (WSI) analysis. In this review, we tried to elucidate the role of machine learning algorithms in diagnostic precision, efficiency, and the reproducibility of the results. First, we discuss some of the most used tools, including QuPath, HistoQC, and HistomicsTK, and provide an updated overview of machine learning approaches and their application in pathology. Later, we report how these tools may simplify the automation of WSI analyses, also reducing manual workload and inter-observer variability. A novel aspect of this review is its focus on open-source tools, presented in a way that may help the adoption process for pathologists. Furthermore, we highlight the major benefits of these technologies, with the aim of making this review a practical guide for clinicians seeking to implement machine learning-based solutions in their specific workflows. Moreover, this review also emphasizes some crucial limitations related to data quality and the interpretability of the models, giving insight into future directions for research. Overall, this work tries to bridge the gap between the more recent technological progress in computer science and traditional clinical practice, supporting a broader, yet smooth, adoption of machine learning approaches in digital pathology. Full article
(This article belongs to the Special Issue AI Algorithms in Medical Imaging)
Show Figures
Figure 1. A schematic representation of the relationship among artificial intelligence, machine learning, and deep learning.
55 pages, 715 KiB  
Review
Hardware Model Checking Algorithms and Techniques
by Gianpiero Cabodi, Paolo Enrico Camurati, Marco Palena and Paolo Pasini
Algorithms 2024, 17(6), 253; https://doi.org/10.3390/a17060253 - 9 Jun 2024
Cited by 1 | Viewed by 1311
Abstract
Digital systems are nowadays ubiquitous and often comprise an extremely high level of complexity. Guaranteeing the correct behavior of such systems has become an ever more pressing need for manufacturers. The correctness of digital systems can be addressed by resorting to formal verification techniques, such as model checking. Currently, it is usually impossible to determine a priori the best algorithm to use for a given verification task and, thus, portfolio approaches have become the de facto standard in model checking verification suites. This paper describes the most relevant algorithms and techniques at the foundations of bit-level SAT-based model checking. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
Show Figures
Figure 1. Search tree enumerating all possible assignments for three variables x, y and z with respect to CNF = {{¬x, y}, {¬y, ¬z}}. Solid (dashed) edges correspond to true (false) assignments.
Figure 2. Partial termination tree associated with the example of Figure 1, where each node is labeled with the corresponding CNF. Crosses denote contradiction within a given path, whereas check marks denote admissible assignments.
Figure 3. (a) A simple sequential hardware design implementing a 2-bit counter from 0 to 2 using D-type flip flops. (b) The model of the counter as a transition system.
Figure 4. Different types of traces with respect to a given M = ⟨V, I, T⟩.
Figure 5. Visualization of states in a transition system M. States are labeled with their corresponding truth-value assignments to state variables V.
Figure 6. Interpolation as an overapproximated image operator. Interpolant I is an overapproximation of the image of R that does not intersect Cone(1, k).
Figure 7. Different steps of McMillan's interpolation-based model checking procedure visualized as groupings of states in a transition system M. States are labeled with their corresponding truth-value assignments to state variables V. States enclosed by solid lines represent the set B of bad states or states that are backward reachable from a bad state. States enclosed by dotted lines represent the current set R of states that are reachable from the initial states. States enclosed by dash–dotted lines represent the set A of states in the image of R. States enclosed by dashed lines represent the interpolant I that overapproximates A. In (a), the initial scenario is depicted, where I overapproximates the image of R, including both reachable and unreachable states (from I). (b) depicts the situation after the current set of reachable states R has been updated by disjoining it with the interpolant I and the subsequent traversal led to a spurious counterexample. (c) depicts a new traversal after overapproximation refinement. (d) depicts the convergence fix-point for the algorithm.
Figure 8. Visual representation of different steps in the IC3 procedure. In (a), a bad cube s that violates P is found in F_k and extended to q. (b) shows the beginning of the cube blocking procedure for q, in which a clause ¬q is checked for induction relative to F_{k−1}. (c) shows the case in which relative induction of ¬q does not hold and thus a predecessor cube s is found to be a CTI and extended into cube p that needs to be blocked in F_{k−1}. (d) shows the cube blocking procedure for p, in which a clause ¬p is checked for induction relative to F_{k−2}. In (e), ¬p is found to be inductive relative to F_{k−2}; thus, ¬p undergoes inductive generalization and the result is used to refine the trace. In (f), after refining the trace, ¬q becomes inductive relative to F_{k−1}; the proof obligation of q can thus be discharged by generalizing ¬q and refining the trace once more. In (g), a fix-point is found by detecting F_k = F_{k+1} after propagation.
Figure 9. Examples of simplification under a care set. (a) Simplification of B using A as the care set. The Simplify procedure simplifies B without affecting its conjunction F with A. (b) Simplification of B using C as the care set, with A → C. The Simplify procedure simplifies B without affecting its conjunction F with A.
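The CNF of Figure 1 makes a convenient toy example; the sketch below simply enumerates all eight assignments of x, y, z against CNF = {{¬x, y}, {¬y, ¬z}}, mirroring the search tree. Real SAT-based model checkers rely on CDCL solvers rather than enumeration.

```python
# Toy illustration of the search space in Figure 1: exhaustively enumerate the
# eight assignments of x, y, z and test them against CNF = {{-x, y}, {-y, -z}}.
from itertools import product

# clauses as sets of literals; positive int = variable, negative int = its negation
cnf = [{-1, 2}, {-2, -3}]          # variables: 1 = x, 2 = y, 3 = z

def satisfies(assignment, clauses):
    # assignment maps variable -> bool; a clause is satisfied if any literal is true
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

for bits in product([False, True], repeat=3):
    assignment = dict(zip([1, 2, 3], bits))
    print(assignment, "sat" if satisfies(assignment, cnf) else "unsat")
```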
23 pages, 13655 KiB  
Article
Prediction of Hippocampal Signals in Mice Using a Deep Learning Approach for Neurohybrid Technology Applications
by Albina V. Lebedeva, Margarita I. Samburova, Vyacheslav V. Razin, Nikolay V. Gromov, Svetlana A. Gerasimova, Tatiana A. Levanova, Lev A. Smirnov and Alexander N. Pisarchik
Algorithms 2024, 17(6), 252; https://doi.org/10.3390/a17060252 - 7 Jun 2024
Viewed by 1325
Abstract
The growing knowledge about the functioning of the nervous system of mammals and humans, together with the significant neuromorphic technology developments of recent decades, has led to the emergence of a large number of brain–computer interfaces and neuroprosthetics for regenerative medicine tasks. Neurotechnologies have traditionally been developed for therapeutic purposes to help or replace motor, sensory or cognitive abilities damaged by injury or disease. They also have significant potential for memory enhancement. However, there are still no fully developed neurotechnologies and neural interfaces capable of restoring or expanding cognitive functions, in particular memory, in mammals or humans. In this regard, the search for new technologies in the field of the restoration of cognitive functions is an urgent task of modern neurophysiology, neurotechnology and artificial intelligence. The hippocampus is an important brain structure involved in memory and information processing. The aim of this paper is to propose an approach based on deep neural networks for predicting hippocampal signals in the CA1 region from biological input received in the CA3 region. We compare the prediction results for two widely used deep architectures: reservoir computing (RC) and long short-term memory (LSTM) networks. The proposed study can be viewed as a first step in the complex task of developing a neurohybrid chip, which would make it possible to restore memory functions in the damaged rodent hippocampus. Full article
Show Figures
Figure 1. (a) Scheme of the experiment on recording field excitatory postsynaptic potentials (fEPSPs) in mice hippocampal slices and scheme for training neural networks (LSTM or reservoir). The left panel shows the mouse hippocampal slices with a protocol for installing recording and stimulating electrodes for electrical stimulation and subsequent synaptic transmission activation in the perforant (3-synaptic) pathway of the hippocampus. A stimulating electrode was placed in the DG area and sent electrical square current pulses to activate the cells in the DG area of the hippocampus. Recording electrodes were installed in the pyramidal neuron dendrites of the CA3 and CA1 hippocampus areas. This made it possible to record the activation of the perforant pathway in the hippocampus due to electrical stimulation. The following shows the original trace of the fEPSP recorded in the dendrites of the CA3 and CA1 hippocampus areas. These signals were fed to the input of the LSTM or reservoir for training. The right panel shows the architectures of the neural networks used. After training these neural networks with fEPSP signals recorded from the CA3 region in hippocampal slices (input signals), predicted signals for the CA1 region (output signals) were obtained. (b) Representative examples of original fEPSP traces at 400 μA and 500 μA stimulus amplitudes.
Figure 2. Pipeline for data processing, which includes four main steps.
Figure 3. LSTM architecture used for fEPSP signal prediction in the CA1 region using the CA3 signal as an input.
Figure 4. Reservoir architecture used for fEPSP signal prediction in the CA1 region using the CA3 signal as an input.
Figure 5. Signal parameters for subsequent evaluation of the predicted signal quality metrics: 1—rise time, 2—decrease time, 3—response time halfwidth, 4—amplitude, 5—slope.
Figure 6. Boxplots for the main features used in the custom metric of CA3 signals (left panel) and CA1 signals (right panel) obtained as a response to a short rectangular electrical pulse of varying amplitude. (a) Rise time, (b) decrease time, (c) halfwidth, (d) amplitude, (e) slope. Black dots are outliers.
Figure 7. Typical examples of signals belonging to different classes. (a) Signals belonging to Class 1, (b) signals belonging to Class 2, (c) signals belonging to Class 3. See the detailed description in the text.
Figure 8. True (blue) and predicted (red and black) fEPSP signals at 400 μA stimulus amplitude for (a) LSTM (red) and (b) RC (black) networks.
Figure 9. Evaluation metrics for predicted fEPSP signals. (a) MAPE, (b) custom metric based on valuable properties of the biological signal.
Figure 10. Comparison of prediction quality for LSTM (left panels, red dots) and RC (right panels, black dots) for different custom metric parameters: (a) rise time, (b) decrease time, (c) halfwidth, (d) amplitude and (e) slope.
Figures A1–A9. True and predicted fEPSP signals at 100, 200, 300, 500, 600, 700, 800, 900 and 1000 μA stimulus amplitudes. The blue marker corresponds to the true CA1 signal, the red marker to the LSTM-predicted signal, and the black marker to the RC-predicted signal.
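For readers unfamiliar with reservoir computing, the following is a minimal echo state network sketch for a one-signal-to-one-signal prediction task. The input and target series are synthetic stand-ins for the CA3 and CA1 traces, and the reservoir size, leak rate and ridge penalty are arbitrary assumptions rather than the authors' tuned settings.

```python
# Minimal echo state network (reservoir computing) sketch for mapping one input
# signal to one target signal. Generic illustration only, not the paper's RC/LSTM.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 2000)
u = np.sin(t)                               # stand-in "CA3" input
y = np.roll(np.sin(t) ** 3, 5)              # stand-in "CA1" target

n_res, leak, ridge = 300, 0.3, 1e-6
w_in = rng.uniform(-0.5, 0.5, (n_res, 1))
w = rng.normal(0, 1, (n_res, n_res))
w *= 0.9 / np.max(np.abs(np.linalg.eigvals(w)))   # scale spectral radius below 1

states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i, ui in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(w_in[:, 0] * ui + w @ x)
    states[i] = x

# ridge-regression readout trained on the first half, evaluated on the second half
half = len(u) // 2
w_out = np.linalg.solve(states[:half].T @ states[:half] + ridge * np.eye(n_res),
                        states[:half].T @ y[:half])
pred = states[half:] @ w_out
print("test MSE:", np.mean((pred - y[half:]) ** 2))
```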
16 pages, 778 KiB  
Article
Distributed Control of Hydrogen-Based Microgrids for the Demand Side: A Multiagent Self-Triggered MPC-Based Strategy
by Tingzhe Pan, Jue Hou, Xin Jin, Zhenfan Yu, Wei Zhou and Zhijun Wang
Algorithms 2024, 17(6), 251; https://doi.org/10.3390/a17060251 - 7 Jun 2024
Viewed by 900
Abstract
With the global pursuit of renewable energy and carbon neutrality, hydrogen-based microgrids have also become an important area of research, as ensuring proper design and operation is essential to achieving optimal performance from hybrid systems. This paper proposes a distributed control strategy based on multiagent self-triggered model predictive control (ST-MPC), with the aim of achieving demand-side control of hydrogen-based microgrid systems. This architecture considers a hybrid energy storage system with renewable energy as the main power source, supplemented by fuel cells based on electrolytic hydrogen. Its primary objective is to address the supply–demand balance problem of the microgrid while increasing the service life of the hydrogen-based energy storage equipment, on the basis of realizing demand-side control of the hydrogen energy microgrid system. To accomplish this, model predictive controllers are implemented within a self-triggered framework that dynamically adjusts the counting period. The simulation results demonstrate that the ST-MPC architecture significantly reduces the frequency of control action changes while maintaining an acceptable level of set-point tracking. These findings highlight the viability of the proposed solution for microgrids equipped with multiple types of electrochemical storage, which contributes to improved sustainability and efficiency in renewable-based microgrid systems. Full article
(This article belongs to the Special Issue Intelligent Algorithms for High-Penetration New Energy)
Show Figures
Figure 1. AC/DC hybrid microgrid electric-to-hydrogen conversion control schematic.
Figure 2. Power of the supply and demand sides.
Figure 3. Battery capacity variation.
Figure 4. Hydrogen storage tank capacity variation.
Figure 5. Battery charging and discharging status.
Figure 6. Energy variation in the electrolyzer and fuel cell.
Figure 7. Trigger time point of ST-MPC.
Figure 8. Comparison histogram of ST-MPC trigger times and MPC trigger times.
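A toy sketch of the self-triggering idea follows, assuming a single storage unit with scalar dynamics and a known demand forecast: at each trigger the controller solves a finite-horizon problem and decides, from the predicted trajectory, how many planned inputs to apply before re-optimizing. This only illustrates the concept and is not the paper's hydrogen/battery ST-MPC formulation; all dynamics, bounds and tolerances are assumptions.

```python
# Toy self-triggered MPC sketch for a storage unit x(k+1) = x(k) + u(k) - d(k).
# The controller re-optimizes only at self-chosen trigger instants.
import numpy as np
from scipy.optimize import minimize

horizon, x_ref, band, u_max = 10, 0.5, 0.1, 0.2
demand = 0.05 + 0.03 * np.sin(np.linspace(0, 6, 60))   # assumed known demand forecast

def plan(x0, d_future):
    def cost(u):
        x, c = x0, 0.0
        for ui, di in zip(u, d_future):
            x = x + ui - di
            c += (x - x_ref) ** 2 + 0.01 * ui ** 2      # tracking + control effort
        return c
    return minimize(cost, np.zeros(horizon), bounds=[(-u_max, u_max)] * horizon).x

x, k, triggers = 0.2, 0, 0
while k < len(demand) - horizon:
    u_plan = plan(x, demand[k:k + horizon])
    # self-trigger rule: commit to planned inputs while the prediction stays in-band
    x_pred, m = x, 0
    for ui, di in zip(u_plan, demand[k:k + horizon]):
        x_pred = x_pred + ui - di
        if abs(x_pred - x_ref) > band and m >= 1:
            break
        m += 1
    for ui, di in zip(u_plan[:m], demand[k:k + m]):
        x = x + ui - di                                 # apply inputs open-loop
    k += m
    triggers += 1
print(f"{triggers} optimizations for {k} time steps")
```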
23 pages, 5573 KiB  
Article
Research on Distributed Fault Diagnosis Model of Elevator Based on PCA-LSTM
by Chengming Chen, Xuejun Ren and Guoqing Cheng
Algorithms 2024, 17(6), 250; https://doi.org/10.3390/a17060250 - 7 Jun 2024
Viewed by 948
Abstract
A Distributed Elevator Fault Diagnosis System (DEFDS) is developed to tackle frequent malfunctions stemming from the widespread distribution and aging of elevator systems. Due to the complexity of elevator fault data and the subtlety of fault characteristics, traditional methods such as visual inspections and basic operational tests fall short in detecting early signs of mechanical wear and electrical issues. These conventional techniques often fail to recognize subtle fault characteristics, necessitating more advanced diagnostic tools. In response, this paper introduces a Principal Component Analysis–Long Short-Term Memory (PCA-LSTM) method for fault diagnosis. The distributed system decentralizes the fault diagnosis process to individual elevator units, utilizing PCA’s feature selection capabilities in high-dimensional spaces to extract and reduce the dimensionality of fault features. Subsequently, the LSTM model is employed for fault prediction. Elevator models within the system exchange data to refine and optimize a global prediction model. The efficacy of this approach is substantiated through empirical validation with actual data, achieving an accuracy rate of 90% and thereby confirming the method’s effectiveness in facilitating distributed elevator fault diagnosis. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
Show Figures
Figure 1. Proportion of variance explained by principal components in elevator experiment data, justifying the 85% threshold.
Figure 2. Structure of the LSTM.
Figure 3. Structure flowchart.
Figure 4. Distributed deep learning framework for elevators.
Figure 5. Broadcast schematic diagram.
Figure 6. Data-parallel schematic diagram.
Figure 7. Diagnostic accuracy of distributed elevator fault detection systems.
Figure 8. Data schematic diagram after dimension reduction.
Figure 9. Performance of the PCA-LSTM model in predicting elevator speed and fault detection.
Figure 10. Training and validation loss trends for the PCA-LSTM elevator fault prediction model.
Figure 11. ROC curve of the PCA-LSTM model.
Figure 12. Comparison of PCA-LSTM's accuracy with that of various algorithms.
Figure 13. Performance comparison of PCA-LSTM and IAO-XGBoost models across various elevator fault types.
Figure 14. Comparative analysis of PCA-LSTM and DBN models for elevator fault diagnosis.
Figure 15. Confusion matrix diagram.
Figure 16. F1 score plot.
Figure 17. Distributed model diagnosis effect diagram.
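The PCA stage described above can be sketched directly with scikit-learn, assuming placeholder sensor data: setting n_components=0.85 keeps the number of components needed to explain 85% of the variance (the threshold of Figure 1) before the reduced features would be windowed into sequences for the LSTM.

```python
# Sketch of the dimensionality-reduction stage: retain enough principal
# components to explain 85% of the variance. The data are synthetic placeholders,
# not elevator measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
sensor_data = rng.normal(size=(5000, 24))        # 24 hypothetical elevator sensor channels

scaled = StandardScaler().fit_transform(sensor_data)
pca = PCA(n_components=0.85)                     # keep components up to 85% explained variance
reduced = pca.fit_transform(scaled)

print("components kept:", pca.n_components_)
print("explained variance ratio sum:", pca.explained_variance_ratio_.sum().round(3))
# `reduced` would then be windowed into sequences and passed to the LSTM classifier.
```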
28 pages, 859 KiB  
Article
Simulation of Calibrated Complex Synthetic Population Data with XGBoost
by Johannes Gussenbauer, Matthias Templ, Siro Fritzmann and Alexander Kowarik
Algorithms 2024, 17(6), 249; https://doi.org/10.3390/a17060249 - 6 Jun 2024
Viewed by 1070
Abstract
Synthetic data generation methods are used to transform the original data into privacy-compliant synthetic copies (twin data). With our proposed approach, synthetic data can be simulated in the same size as the input data or in any size, and in the case of finite populations, even the entire population can be simulated. The proposed XGBoost-based method is compared with known model-based approaches to generate synthetic data using a complex survey data set. The XGBoost method shows strong performance, especially with synthetic categorical variables, and outperforms other tested methods. Furthermore, the structure and relationship between variables are well preserved. The tuning of the parameters is performed automatically by a modified k-fold cross-validation. If exact population margins are known, e.g., cross-tabulated population counts on age class, gender and region, the synthetic data must be calibrated to those known population margins. For this purpose, we have implemented a simulated annealing algorithm that is able to use multiple population margins simultaneously to post-calibrate a synthetic population. The algorithm is, thus, able to calibrate simulated population data containing cluster and individual information, e.g., about persons in households, at both person and household level. Furthermore, the algorithm is efficiently implemented so that the adjustment of populations with many millions or more persons is possible. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
Show Figures
Figure 1. The workflow to produce a synthetic population and to calibrate it. The contribution covers the large light blue area together with the synthesis step 2 using XGBoost. The corresponding text passages are linked to section numbers in the figure.
Figure 2. Step 1: to simulate a basic structure of the data, for example, here, the household structure by age and sex in each stratum and by considering the sampling weights. n determines the size of the data to be synthesized and N the size of the synthetic population. Unrealistic clusters (here, household compositions) are avoided.
Figure 3. Comparison between a single synthetic population and the original survey data by the residual colored mosaic plot of chronic illness (P103000). The label -2 refers to non-selected persons and ∗ to missing values.
Figure 4. Comparison between the average of all 100 synthetic populations and the original survey data by the residual colored mosaic plot of marital status (P114000) and chronic illness (P103000). The label -2 refers to non-selected persons and ∗ to missing values.
Figure 5. Distribution of people after post-calibration by variables "db040", citizenship (horizontal panels) and gender (vertical panels). Colored bars show the distribution after applying the old version of calibPop (SA 1.2.1), the improved version of calibPop (SA 2.1.2) and the improved version of calibPop with ϵ = 0.01. The black lines represent the target distribution, the red bars the distribution before calibration.
Figure 6. Distribution of people (left side) and households after post-calibration. The colors indicate using only one set of target margins (single margins) on people and target margins on people and households simultaneously (multiple margins). The black lines represent the target distribution, the red bars the distribution before calibration.
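A stripped-down sketch of simulated-annealing calibration follows, assuming person-level records encoded as cells of a single cross-tabulation with known target counts per cell. The published calibPop algorithm instead swaps records against donors and also handles household-level margins, which is omitted here; all sizes and the cooling schedule are illustrative assumptions.

```python
# Minimal simulated-annealing sketch of post-calibrating a synthetic population
# toward known margins: each synthetic person belongs to one cross-tab cell, and
# proposals move a random person to another cell, accepted by the SA rule.
import numpy as np

rng = np.random.default_rng(2)
n_cells = 4                                       # e.g. region x gender cells
pop = rng.integers(n_cells, size=10000)           # cell index of each synthetic person
target = np.array([3000, 2000, 2600, 2400])       # known population margins per cell

counts = np.bincount(pop, minlength=n_cells)
current = np.abs(counts - target).sum()           # total absolute deviation from margins
temp = 1.0
for _ in range(50000):
    i = rng.integers(len(pop))
    new_cell = rng.integers(n_cells)
    trial = counts.copy()
    trial[pop[i]] -= 1
    trial[new_cell] += 1
    delta = np.abs(trial - target).sum() - current
    if delta <= 0 or rng.random() < np.exp(-delta / temp):   # SA acceptance rule
        pop[i], counts, current = new_cell, trial, current + delta
    temp *= 0.9999                                # geometric cooling
print("final absolute deviation from margins:", current)
```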
25 pages, 1790 KiB  
Article
A Non-Gradient and Non-Iterative Method for Mapping 3D Mesh Objects Based on a Summation of Dependent Random Values
by Ihar Volkau, Sergei Krasovskii, Abdul Mujeeb and Helen Balinsky
Algorithms 2024, 17(6), 248; https://doi.org/10.3390/a17060248 - 6 Jun 2024
Viewed by 1062
Abstract
The manuscript presents a novel non-gradient and non-iterative method for mapping two 3D objects by matching extrema. This innovative approach utilizes the amplification of extrema through the summation of dependent random values, accompanied by a comprehensive explanation of the statistical background. The method further incorporates structural patterns based on spherical harmonic functions to calculate the rotation matrix, enabling the juxtaposition of the objects. Without utilizing gradients and iterations to improve the solution step by step, the proposed method generates a limited number of candidates, and the mapping (if it exists) is necessarily among the candidates. For instance, this method holds potential for object analysis and identification in additive manufacturing for 3D printing and protein matching. Full article
Show Figures
Figure 1. Spherical harmonic of order (3,2) with the extrema marked. The yellow areas correspond to the negative values of the function, and the blue ones correspond to the positive ones.
Figure 2. Spherical harmonics (degrees from 0 to 4). The green areas correspond to the negative values of the harmonics, and the blue ones correspond to the positive ones.
Figure 3. ICP vs. our algorithm. Left side: the results of mapping the rotated blue and original gold objects by the ICP. Right side: mapping of the rotated green and original red objects using our algorithm.
Figure 4. (a) A model of a feline with fangs, (b) a feline without fangs, (c) the ICP mapping of both models.
Figure 5. Examples where the mapping calculated by the proposed method is substandard. In each row, the right figure is the target, and the left is the result of mapping the rotated green and original red objects. (top) Dragon egg—the surface components (spirally arranged scales) are not aligned. (bottom) Tablecloth—the holes of the objects are rotationally misaligned.
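The structural patterns mentioned above are built from spherical harmonic functions. As a small illustration, the sketch below evaluates a spherical harmonic of degree 3 and order 2 (as in Figure 1) on an angular grid and locates its extrema; the amplification of mesh extrema and the rotation-matrix computation of the paper are not reproduced here.

```python
# Illustrative evaluation of a real spherical harmonic Y_l^m on the sphere and
# location of its extrema on a coarse grid (not the paper's full pipeline).
import numpy as np
from scipy.special import sph_harm   # newer SciPy versions also provide sph_harm_y

l, m = 3, 2
theta = np.linspace(0, 2 * np.pi, 181)     # azimuthal angle
phi = np.linspace(0, np.pi, 91)            # polar angle
T, P = np.meshgrid(theta, phi)
Y = sph_harm(m, l, T, P).real              # real part of Y_l^m over the sphere

i_max = np.unravel_index(np.argmax(Y), Y.shape)
i_min = np.unravel_index(np.argmin(Y), Y.shape)
print("max at (theta, phi) =", (round(float(T[i_max]), 2), round(float(P[i_max]), 2)))
print("min at (theta, phi) =", (round(float(T[i_min]), 2), round(float(P[i_min]), 2)))
```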
16 pages, 5093 KiB  
Article
New Multi-View Feature Learning Method for Accurate Antifungal Peptide Detection
by Sayeda Muntaha Ferdous, Shafayat Bin Shabbir Mugdha and Iman Dehzangi
Algorithms 2024, 17(6), 247; https://doi.org/10.3390/a17060247 - 6 Jun 2024
Cited by 1 | Viewed by 1334
Abstract
Antimicrobial resistance, particularly the emergence of resistant strains in fungal pathogens, has become a pressing global health concern. Antifungal peptides (AFPs) have shown great potential as a promising alternative therapeutic strategy due to their inherent antimicrobial properties and potential application in combating fungal infections. However, the identification of antifungal peptides using experimental approaches is time-consuming and costly. Hence, there is a demand for fast and accurate computational approaches to identify AFPs. This paper introduces a novel multi-view feature learning (MVFL) model, called AFP-MVFL, for accurate AFP identification. By integrating the sequential and physicochemical properties of amino acids and employing a multi-view approach, the AFP-MVFL model significantly enhances prediction accuracy. It achieves 97.9%, 98.4%, 0.98, and 0.96 in terms of accuracy, precision, F1 score, and Matthews correlation coefficient (MCC), respectively, outperforming previous studies found in the literature. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
Show Figures
Figure 1. The overall architecture of AFP-MVFL. The AFP prediction pipeline consists of three modules: (i) feature extraction module using iFeature; (ii) feature selection using random forest; (iii) classification module for the prediction task.
Figure 2. ROC curve of the results for various classification models on the independent test of the Antifp_DS1 dataset.
Figure 3. ROC curve of the results for various classification models on the independent test of the Antifp_DS2 dataset.
Figure 4. ROC curve of the results for various classification models on the independent test of the Antifp_DS3 dataset.
Figure 5. Feature visualization of AFP-MVFL on the AntiFP_DS1 dataset. Blue dots correspond to instances where the negative label equals 0 and red dots correspond to instances where the positive label equals 1.
Figure 6. Feature visualization of random forest on the AntiFP_DS2 dataset. Blue dots correspond to instances where the negative label equals 0 and red dots correspond to instances where the positive label equals 1.
Figure 7. Feature visualization of random forest on the AntiFP_DS3 dataset. Blue dots correspond to instances where the negative label equals 0 and red dots correspond to instances where the positive label equals 1.
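The three-module pipeline named in the Figure 1 caption (feature extraction, random-forest-based feature selection, classification) can be mimicked with a generic scikit-learn sketch, using synthetic data in place of iFeature descriptors of peptide sequences; the numbers of samples, features and trees are arbitrary assumptions, not the study's settings.

```python
# Generic sketch of a features -> random-forest feature selection -> classifier
# pipeline, evaluated with accuracy and MCC as in the abstract above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef, accuracy_score

X, y = make_classification(n_samples=2000, n_features=400, n_informative=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

selector = SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=0))
X_tr_sel = selector.fit_transform(X_tr, y_tr)      # keep features above mean importance
X_te_sel = selector.transform(X_te)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr_sel, y_tr)
pred = clf.predict(X_te_sel)
print("accuracy:", accuracy_score(y_te, pred), "MCC:", matthews_corrcoef(y_te, pred))
```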
13 pages, 346 KiB  
Article
Minimizing Query Frequency to Bound Congestion Potential for Moving Entities at a Fixed Target Time
by William Evans and David Kirkpatrick
Algorithms 2024, 17(6), 246; https://doi.org/10.3390/a17060246 - 6 Jun 2024
Viewed by 861
Abstract
Consider a collection of entities moving continuously with bounded speed, but otherwise unpredictably, in some low-dimensional space. Two such entities encroach upon one another at a fixed time if their separation is less than some specified threshold. Encroachment, of concern in many settings such as collision avoidance, may be unavoidable. However, the associated difficulties are compounded if there is uncertainty about the precise location of entities, giving rise to potential encroachment and, more generally, potential congestion within the full collection. We adopt a model in which entities can be queried for their current location (at some cost) and the uncertainty region associated with an entity grows in proportion to the time since that entity was last queried. The goal is to maintain low potential congestion, measured in terms of the (dynamic) intersection graph of uncertainty regions, at specified (possibly all) times, using the lowest possible query cost. Previous work in the same uncertainty model addressed the problem of minimizing the congestion potential of point entities using location queries of some bounded frequency. It was shown that it is possible to design query schemes that are O(1)-competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that exploit knowledge of the trajectories of all entities), subject to the same bound on query frequency. In this paper, we initiate the treatment of a more general problem with the complementary optimization objective: minimizing the query frequency, measured as the reciprocal of the minimum time between queries (granularity), while guaranteeing a fixed bound on congestion potential of entities with positive extent at one specified target time. This complementary objective necessitates quite different schemes and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency. Full article
(This article belongs to the Special Issue Selected Algorithmic Papers From FCT 2023)
Show Figures
Figure 1. Uncertainty regions (light grey) of four unit-radius entities (dark grey) with uncertainty ply three (witnessed by point *).
Figure 2. A configuration of five unit-radius entities. The 3-ball B_2(3) of entity e_2 is shown shaded.
Figure 3. Illustration of Example 1. The trajectories of entities in A are in red. Those of entities in B are in blue.
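The uncertainty model can be illustrated with a small sketch, assuming 2D entities of unit extent whose uncertainty radius grows linearly with the time since the last query (speed bound 1); the ply at a point is the number of uncertainty regions covering it, as in Figure 1. All positions, times and the grid are illustrative assumptions.

```python
# Small illustration of uncertainty ply: each entity's uncertainty region is a
# disc whose radius grows with the time since its last location query.
import numpy as np

rng = np.random.default_rng(3)
centers = rng.uniform(0, 10, size=(6, 2))          # last known positions (2D)
time_since_query = rng.uniform(0, 2, size=6)       # elapsed time per entity
radii = 1.0 + time_since_query                     # unit extent + speed bound * elapsed time

def ply_at(point):
    # number of uncertainty regions that contain the query point
    return int(np.sum(np.linalg.norm(centers - point, axis=1) <= radii))

grid = np.stack(np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101)), axis=-1)
max_ply = max(ply_at(p) for p in grid.reshape(-1, 2))
print("maximum uncertainty ply over the grid:", max_ply)
```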
18 pages, 3005 KiB  
Article
A Modified Analytic Hierarchy Process Suitable for Online Survey Preference Elicitation
by Sean Pascoe, Anna Farmery, Rachel Nichols, Sarah Lothian and Kamal Azmi
Algorithms 2024, 17(6), 245; https://doi.org/10.3390/a17060245 - 6 Jun 2024
Cited by 1 | Viewed by 899
Abstract
A key component of multi-criteria decision analysis is the estimation of criteria weights, reflecting the preference strength of different stakeholder groups related to different objectives. One common method is the Analytic Hierarchy Process (AHP). A key challenge with the AHP is the potential for inconsistency in responses, resulting in potentially unreliable preference weights. In small groups, interactions between analysts and respondents can compensate for this through reassessment of inconsistent responses. In many cases, however, stakeholders may be geographically dispersed, with online surveys being a more cost-effective means to elicit these preferences, making renegotiating with inconsistent respondents impossible. Further, the potentially large number of bivariate comparisons required using the AHP may adversely affect response rates. In this study, we test a new “modified” AHP (MAHP). The MAHP was designed to retain the key desirable features of the AHP but be more amenable to online surveys, reduce the problem of inconsistencies, and require substantially fewer comparisons. The MAHP is tested using three groups of university students through an online survey platform, along with a “traditional” AHP approach. The results indicate that the MAHP can provide statistically equivalent outcomes to the AHP but without problems arising due to inconsistencies. Full article
Show Figures
Figure 1. Example of a bivariate comparison used in the traditional AHP. A separate bivariate comparison is required for each set of alternatives.
Figure 2. Example of a bivariate comparison in the MAHP. Additional alternatives are included as additional rows, and all alternatives are compared in the same question.
Figure 3. Two-alternative comparison preference distributions. The black dot represents the mean of the distribution. The light blue distributions represent the preferences for sparkling water estimated using both methods (AHP and MAHP, respectively), and the green distributions represent the preferences for still water estimated using both methods.
Figure 4. Three-alternative comparison preference distributions. The black dot represents the mean of the distribution. The red distributions represent the preferences for beef estimated using both methods (AHP and MAHP, respectively), the pink distributions represent the preferences for fish estimated using both methods, and the yellow distributions represent the preferences for lasagna estimated using both methods.
Figure 5. Consistency indicator of responses for the three-alternative comparison: (a) MAHP and (b) AHP. Blue bars indicate consistent responses; red bars indicate inconsistent responses. The black vertical line indicates the GCI where CR = 0.1.
Figure 6. Four-alternative comparison preference distributions. The black dot represents the mean of the distribution. The brown distributions represent the preferences for fruit estimated using both methods (AHP and MAHP, respectively), the red distributions represent the preferences for ice cream estimated using both methods, the green distributions represent the preferences for jelly estimated using both methods, and the yellow distributions represent the preferences for pie estimated using both methods.
Figure 7. Consistency indicator of responses for the four-alternative comparison: (a) MAHP and (b) AHP. Blue bars indicate consistent responses; red bars indicate inconsistent responses. The black vertical line indicates the GCI where the CR = 0.1.
Figure 8. Perceptions of ease of use of each approach (derived using each approach). The black dot represents the mean of the distribution. The first part of the label represents the alternative considered, while the second part of the label represents the approach used to elicit the preference. For example, "Modified.AHP" represents the user preference for the use of the MAHP approach but derived using the traditional AHP. Conversely, "Pairwise.MAHP" represents the user preference for the use of the traditional AHP approach but derived using the MAHP. The light blue distributions represent the preferences for the use of the MAHP estimated using both methods (AHP and MAHP, respectively), and the green distributions represent the preferences for the pairwise AHP estimated using both methods.
Figure 9. Rank consistency when the scores are estimated using both approaches. The blue bars represent the preference weighting given to the MAHP by respondents who were rank-consistent in their relative ranking of each approach when estimated using each approach. The red bars represent the preference weighting given to the MAHP by respondents who were rank-inconsistent in their relative rankings across the two approaches. Those who were consistent in their responses tended to favor the MAHP (based on their preference weighting).
Figure 10. The number of responses to each question. The red indicates a non-response, which occurred only for bivariate (AHP)-related questions. All MAHP questions were answered by all respondents.
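Both the AHP and the MAHP ultimately rely on the standard eigenvector computation of priority weights and Saaty's consistency ratio; the sketch below shows that computation for an illustrative 3 × 3 pairwise comparison matrix (not data from the study).

```python
# Standard AHP computation: priority weights from the principal eigenvector of a
# pairwise comparison matrix, plus Saaty's consistency ratio (CR).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # illustrative pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority weights

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index for n = 3..5
cr = ci / ri
print("weights:", weights.round(3), "CR:", round(cr, 3), "(consistent if CR < 0.1)")
```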
18 pages, 3670 KiB  
Article
Automated Recommendation of Aggregate Visualizations for Crowdfunding Data
by Mohamed A. Sharaf, Heba Helal, Nazar Zaki, Wadha Alketbi, Latifa Alkaabi, Sara Alshamsi and Fatmah Alhefeiti
Algorithms 2024, 17(6), 244; https://doi.org/10.3390/a17060244 - 6 Jun 2024
Viewed by 859
Abstract
Analyzing crowdfunding data has been the focus of many research efforts, where analysts typically explore this data to identify the main factors and characteristics of the lending process as well as to discover unique patterns and anomalies in loan distributions. However, the manual exploration and visualization of such data is clearly an ad hoc, time-consuming, and labor-intensive process. Hence, in this work, we propose LoanVis, which is an automated solution for discovering and recommending those valuable and insightful visualizations. LoanVis is a data-driven system that utilizes objective metrics to quantify the “interestingness” of a visualization and employs such metrics in the recommendation process. We demonstrate the effectiveness of LoanVis in analyzing and exploring different aspects of the Kiva crowdfunding dataset. Full article
(This article belongs to the Special Issue Recommendations with Responsibility Constraints)
Show Figures
Figure 1. Ranking and Recommending Visualizations in LoanVis.
Figure 2. Target View (Entertainment sector) vs. Comparison View (All sectors).
Figure 3. Deviation in loans distribution for all sectors (comparison view in black) vs. loans for the entertainment sector (target view in red).
Figure 4. LoanVis: Data Exploration and Recommendation System—User Interface.
Figure 5. Normalized probability distribution in loans disbursed for all sectors (comparison view in blue) vs. loans for the entertainment sector (target view in green).
Figure 6. LoanVis top recommendations for the Sector aspect. (a) Deviation in loans distribution for all sectors vs. loans for Wholesale based on gender. (b) Deviation in loans distribution for all sectors vs. loans for Construction based on gender.
Figure 7. LoanVis top recommendations for the Country aspect. (a) In Namibia (target view in red), most loans are paid as bullet repayments at once, in contrast to the other worldwide prevailing repayment methods (comparison view in black). (b) In Congo (target view in red), most projects are led by mixed-gender teams vs. worldwide (comparison view in black), where most projects are female-led.
Figure 8. Deviation in loan distribution for all years (comparison view in black) vs. loans for 2016 (target view in red).
Figure 9. Deviation in number of lenders for all years vs. 2016.
Figure 10. Distribution of repayment interval for all projects (comparison view) vs. male-led projects (target view).
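The deviation-based ranking described above can be sketched as follows, assuming a hypothetical loans table: a target view (one sector grouped by a candidate attribute) and the corresponding comparison view over all sectors are normalized into probability distributions and scored by their Euclidean distance, with higher-deviation views recommended first. The column names and the specific distance are illustrative assumptions, not LoanVis's exact metric.

```python
# Sketch of a deviation-based "interestingness" score for candidate group-by views.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
loans = pd.DataFrame({
    "sector": rng.choice(["Entertainment", "Agriculture", "Retail"], 1000),
    "gender": rng.choice(["female", "male", "mixed"], 1000),
    "repayment": rng.choice(["monthly", "irregular", "bullet"], 1000),
})

def deviation_score(df, filter_col, filter_val, group_col):
    target = df[df[filter_col] == filter_val][group_col].value_counts(normalize=True)
    reference = df[group_col].value_counts(normalize=True)
    target, reference = target.align(reference, fill_value=0.0)
    return float(np.linalg.norm(target - reference))   # Euclidean deviation between views

candidates = ["gender", "repayment"]
scores = {g: deviation_score(loans, "sector", "Entertainment", g) for g in candidates}
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # rank candidate views by deviation
```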
18 pages, 3521 KiB  
Article
Training of Convolutional Neural Networks for Image Classification with Fully Decoupled Extended Kalman Filter
by Armando Gaytan, Ofelia Begovich-Mendoza and Nancy Arana-Daniel
Algorithms 2024, 17(6), 243; https://doi.org/10.3390/a17060243 - 6 Jun 2024
Viewed by 1108
Abstract
First-order algorithms have long dominated the training of deep neural networks, excelling in tasks like image classification and natural language processing. Now there is a compelling opportunity to explore alternatives that could outperform current state-of-the-art results. From estimation theory, the Extended Kalman Filter (EKF) arose as a viable alternative and has shown advantages over backpropagation methods. Current computational advances offer the opportunity to revisit algorithms derived from the EKF, which have been almost excluded from the training of convolutional neural networks. This article revisits a decoupled EKF approach and brings the Fully Decoupled Extended Kalman Filter (FDEKF) to the training of convolutional neural networks for image classification tasks. The FDEKF is a second-order algorithm with some advantages over first-order algorithms, so it can lead to faster convergence and higher accuracy due to a higher probability of finding the global optimum. In this research, experiments are conducted on well-known datasets that include Fashion, Sports, and Handwritten Digits images. The FDEKF shows faster convergence compared to other algorithms such as the popular Adam optimizer, the sKAdam algorithm, and the reduced extended Kalman filter. Finally, motivated by the finding of the highest accuracy of the FDEKF with images of natural scenes, we show its effectiveness in another experiment focused on outdoor terrain recognition. Full article
(This article belongs to the Special Issue Machine Learning in Pattern Recognition)
Show Figures
Figure 1. Activation functions: (a) logistic, (b) hyperbolic tangent, (c) ReLU.
Figure 2. Schematic illustration of the different levels of decoupling.
Figure 3. Fashion, Sports, and Handwritten Digits images.
Figure 4. FASHION with DCNN experiment—loss.
Figure 5. SPORTS with DCNN experiment—loss.
Figure 6. MNIST with DCNN experiment—loss.
Figure 7. Terrain image.
Figure 8. Cost map with DCNN in regression problem.
Figure 9. Cost map with DCNN in classification problem.
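A fully decoupled EKF keeps only per-weight error covariances. The simplified sketch below applies such an update to a single logistic neuron rather than a convolutional network, purely to show the shape of the per-weight Kalman update; the noise hyperparameters and data are arbitrary assumptions, not the FDEKF configuration of the paper.

```python
# Simplified fully decoupled EKF update: every weight keeps a scalar covariance
# p_i, so the Kalman correction decouples into per-weight terms.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (1 / (1 + np.exp(-X @ true_w)) > 0.5).astype(float)   # synthetic binary targets

w = np.zeros(5)
p = np.ones(5)                # decoupled (diagonal) error covariances
r, q = 0.5, 1e-4              # assumed measurement and process noise variances

for x_i, d in zip(X, y):
    out = 1 / (1 + np.exp(-w @ x_i))
    h = out * (1 - out) * x_i                 # dy/dw for the logistic neuron
    s = r + np.sum(h * p * h)                 # innovation variance with diagonal P
    k = p * h / s                             # per-weight Kalman gains
    w += k * (d - out)
    p = p - k * h * p + q                     # per-weight covariance update

acc = np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == y)
print("training accuracy after one EKF pass:", acc)
```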
24 pages, 2990 KiB  
Article
A Comparative Study of Machine Learning Methods and Text Features for Text Authorship Recognition in the Example of Azerbaijani Language Texts
by Rustam Azimov and Efthimios Providas
Algorithms 2024, 17(6), 242; https://doi.org/10.3390/a17060242 - 5 Jun 2024
Viewed by 1010
Abstract
This paper explores and evaluates various machine learning methods with different text features to determine the authorship of texts, using the Azerbaijani language as an example. We consider techniques such as artificial neural networks, convolutional neural networks, random forests, and support vector machines. These techniques are used with different text features, such as word length, sentence length, combined word and sentence length, n-grams, and word frequencies. The models were trained and tested on the works of many famous Azerbaijani writers. The results of computer experiments comparing the various techniques and text features were analyzed, and the cases in which particular text features yielded better results were identified. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
Show Figures
Figure 1. Recognition accuracies obtained with variances of 2-gram frequencies in feature selection procedure 3: (a) frequencies of first bigrams in the descending order of total frequency in the training set; (b) frequencies of middle bigrams in the descending order of total frequency in the training set.
Figure 2. Recognition accuracies obtained with 2-gram frequencies in feature selection procedure 3: (a) variances of frequencies of first bigrams in the descending order of total frequency in the training set; (b) variances of frequencies of middle bigrams in the descending order of total frequency in the training set.
Figure 3. Recognition accuracies obtained with 2-gram frequencies using feature selection procedure 4: (a) 4.1 and 4.2 variants of the feature selection procedure (see Formulas (12) and (13)); (b) 4.2 and 4.3 variants of the feature selection procedure (see Formulas (13) and (14)).
Figure 4. Recognition accuracies obtained with word frequencies (%).
Figure 5. Maximum recognition accuracies obtained using different types of features.
Figure 6. Confusion matrices obtained with the two best-performing feature groups on the literary works: (a) n-gram frequencies; (b) word frequencies.
Figure 7. Confusion matrices obtained with the two best-performing feature groups on the test set: (a) n-gram frequencies; (b) word frequencies.
Figure 8. Maximum recognition accuracies obtained by different methods of machine learning.
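A generic authorship-attribution sketch in the same spirit is shown below, using character n-gram features and a linear SVM on tiny English placeholder texts; the study itself used Azerbaijani literary works and also evaluated word-frequency and length-based features with several classifiers.

```python
# Generic authorship-attribution sketch: character n-gram features + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = [
    "the old sea captain told his story slowly",
    "slowly the captain told the old story of the sea",
    "numbers and machines fill the bright laboratory",
    "the laboratory machines hum with bright numbers",
]
authors = ["A", "A", "B", "B"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),   # character 2- and 3-grams
    LinearSVC(),
)
model.fit(texts, authors)
print(model.predict(["the captain and the sea"]))           # predicted author label
```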