Search Results (2,175)

Search Parameters:
Keywords = high-speed trains

22 pages, 5905 KiB  
Article
Hybrid ANFIS-PI-Based Optimization for Improved Power Conversion in DFIG Wind Turbine
by Farhat Nasim, Shahida Khatoon, Ibraheem, Shabana Urooj, Mohammad Shahid, Asmaa Ali and Nidal Nasser
Sustainability 2025, 17(6), 2454; https://doi.org/10.3390/su17062454 - 11 Mar 2025
Abstract
Wind energy is essential for promoting sustainability and renewable power solutions. However, ensuring stability and consistent performance in DFIG-based wind turbine systems (WTSs) remains challenging due to rapid wind speed variations, grid disturbances, and parameter uncertainties. These fluctuations result in power instability, increased overshoot, and prolonged settling times, negatively impacting grid compliance and system efficiency. Conventional proportional-integral (PI) controllers are simple and effective in steady-state conditions, but they lack adaptability in dynamic situations. Similarly, artificial intelligence (AI)-based controllers, such as fuzzy logic controllers (FLCs) and artificial neural networks (ANNs), improve adaptability but suffer from high computational demands and training complexity. To address these limitations, this paper presents a hybrid adaptive neuro-fuzzy inference system (ANFIS)-PI controller for DFIG-based WTS. The proposed controller integrates fuzzy logic adaptability with neural network-based learning, allowing real-time optimization of control parameters. Implemented within the rotor-side converter (RSC) and grid-side converter (GSC), ANFIS enhances reactive power management, grid compliance, and overall system stability. The system was tested under a step wind speed signal varying from 10 m/s to 12 m/s to evaluate its robustness. The simulation results confirmed that the ANFIS-PI controller significantly improved performance compared with the conventional PI controller. Specifically, it reduced rotor speed overshoot by 3%, torque overshoot by 12.5%, active power overshoot by 2%, and DC link voltage overshoot by 20%. Additionally, the ANFIS-PI controller shortened settling time by 50% for rotor speed, by 25% for torque, by 33% for active power, and by 16.7% for DC link voltage, ensuring faster stabilization, enhanced dynamic response, and greater efficiency. 
These improvements establish the ANFIS-PI controller as an advanced, computationally efficient, and scalable solution for enhancing the reliability of DFIG-based WTS, facilitating seamless integration of wind energy into modern power grids.
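The core idea of the hybrid controller described above, retuning the PI gains online instead of keeping them fixed, can be sketched with a toy supervisory rule standing in for the trained ANFIS. Everything below (the first-order plant, the nominal gains, the triangular weighting) is an illustrative assumption, not the controller from the paper:

```python
# Gain-adaptive PI loop: a supervisory block scales Kp/Ki with the
# instantaneous error. The min() rule below is a toy stand-in for a
# trained ANFIS surface.

def adaptive_gains(error, kp0=2.0, ki0=0.5):
    """Scale nominal gains up as |error| grows (toy fuzzy membership)."""
    w = min(abs(error) / 2.0, 1.0)          # membership of "large error" in [0, 1]
    return kp0 * (1.0 + w), ki0 * (1.0 + 0.5 * w)

def simulate(setpoint=12.0, y0=10.0, dt=0.01, steps=4000):
    """First-order plant dy/dt = (u - y) / tau under adaptive PI control."""
    y, integral, tau = y0, 0.0, 0.5
    for _ in range(steps):
        e = setpoint - y
        kp, ki = adaptive_gains(e)
        integral += e * dt
        u = kp * e + ki * integral
        y += (u - y) / tau * dt
    return y

final = simulate()  # should settle close to the 10 -> 12 step target
```

The step from 10 to 12 mirrors the 10 m/s to 12 m/s wind-speed test signal; in the real controller the gain surface is learned from data rather than hand-written.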
Figure 1: The architecture of the proposed grid-tied DFIG-based WECS.
Figure 2: Power coefficient (Cp) versus tip speed ratio (λ) for WTS [29].
Figure 3: Power operating regions of WTS.
Figure 4: (a,b) DFIG dq equivalent circuit.
Figure 5: Basic structure of ANFIS.
Figure 6: Flow chart of designing the ANFIS controller.
Figure 7: Implementation of the ANFIS controller in the rotor-side converter.
Figure 8: Implementation of the ANFIS controller in the grid-side converter.
Figure 9: Error reduction in RSC after 25 iterations.
Figure 10: Error reduction in GSC after 25 iterations.
Figure 11: ANFIS model structure.
Figure 12: A test signal applied to the signal builder of the proposed model.
Figure 13: Hybrid and PI controllers' responses for rotor speed.
Figure 14: Hybrid and PI controllers' responses for torque.
Figure 15: Hybrid and PI controllers' responses for active power.
Figure 16: Hybrid and PI controllers' responses for DC link voltage.
22 pages, 1334 KiB  
Article
A Robust YOLOv8-Based Framework for Real-Time Melanoma Detection and Segmentation with Multi-Dataset Training
by Saleh Albahli
Diagnostics 2025, 15(6), 691; https://doi.org/10.3390/diagnostics15060691 - 11 Mar 2025
Abstract
Background: Melanoma, the deadliest form of skin cancer, demands accurate and timely diagnosis to improve patient survival rates. However, traditional diagnostic approaches rely heavily on subjective clinical interpretations, leading to inconsistencies and diagnostic errors. Methods: This study proposes a robust YOLOv8-based deep learning framework for real-time melanoma detection and segmentation. A multi-dataset training strategy integrating the ISIC 2020, HAM10000, and PH2 datasets was employed to enhance generalizability across diverse clinical conditions. Preprocessing techniques, including adaptive contrast enhancement and artifact removal, were utilized, while advanced augmentation strategies such as CutMix and Mosaic were applied to enhance lesion diversity. The YOLOv8 architecture unified lesion detection and segmentation tasks into a single inference pass, significantly enhancing computational efficiency. Results: Experimental evaluation demonstrated state-of-the-art performance, achieving a mean Average Precision (mAP@0.5) of 98.6%, a Dice Coefficient of 0.92, and an Intersection over Union (IoU) score of 0.88. These results surpass conventional segmentation models including U-Net, DeepLabV3+, Mask R-CNN, SwinUNet, and Segment Anything Model (SAM). Moreover, the proposed framework demonstrated real-time inference speeds of 12.5 ms per image, making it highly suitable for clinical deployment and mobile health applications. Conclusions: The YOLOv8-based framework effectively addresses the limitations of existing diagnostic methods by integrating detection and segmentation tasks, achieving high accuracy and computational efficiency. This study highlights the importance of multi-dataset training for robust generalization and recommends the integration of explainable AI techniques to enhance clinical trust and interpretability.
(This article belongs to the Special Issue Deep Learning Techniques for Medical Image Analysis)
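The reported Dice coefficient (0.92) and IoU (0.88) are standard overlap metrics between predicted and ground-truth lesion masks, related by Dice = 2·IoU/(1+IoU). A minimal sketch on illustrative binary masks (toy arrays, not the paper's data):

```python
# Dice and IoU for binary segmentation masks, flattened to 0/1 lists.

def dice_iou(pred, truth):
    """Return (Dice, IoU) for two same-length binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 1, 0, 0]   # predicted lesion pixels
truth = [1, 1, 0, 0, 1, 1, 0, 0]   # ground-truth lesion pixels
d, i = dice_iou(pred, truth)       # overlap of 3 pixels out of 4 each
```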
Figure 1: (a) Training and validation loss curve. (b) Evaluation metrics over training epochs. (c) Confusion matrix for melanoma classification.
Figure 2: Methodology architecture for the proposed YOLOv8-based framework for melanoma detection and segmentation. The pipeline includes image preprocessing, multi-dataset training, unified detection and segmentation, post-processing, and performance evaluation, followed by deployment for real-time clinical applications.
Figure 3: Segmentation model comparison.
22 pages, 6955 KiB  
Article
A Novel Multi-Dynamic Coupled Neural Mass Model of SSVEP
by Hongqi Li, Yujuan Wang and Peirong Fu
Biomimetics 2025, 10(3), 171; https://doi.org/10.3390/biomimetics10030171 - 11 Mar 2025
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) leverage high-speed neural synchronization to visual flicker stimuli for efficient device control. While SSVEP-BCIs minimize user training requirements, their dependence on physical EEG recordings introduces challenges, such as inter-subject variability, signal instability, and experimental complexity. To overcome these limitations, this study proposes a novel neural mass model for SSVEP simulation by integrating frequency response characteristics with dual-region coupling mechanisms. Specific parallel linear transformation functions were designed based on SSVEP frequency responses, and weight coefficient matrices were determined according to the frequency band energy distribution under different visual stimulation frequencies in the pre-recorded SSVEP signals. A coupled neural mass model was constructed by establishing connections between occipital and parietal regions, with parameters optimized through particle swarm optimization to accommodate individual differences and neuronal density variations. Experimental results demonstrate that the model achieved a high-precision simulation of real SSVEP signals across multiple stimulation frequencies (10 Hz, 11 Hz, and 12 Hz), with maximum errors decreasing from 2.2861 to 0.8430 as frequency increased. The effectiveness of the model was further validated through the real-time control of an Arduino car, where simulated SSVEP signals were successfully classified by the advanced FPF-net model and mapped to control commands. This research not only advances our understanding of SSVEP neural mechanisms but also releases the user from the brain-controlled coupling system, thus providing a practical framework for developing more efficient and reliable BCI-based systems.
(This article belongs to the Special Issue Computational Biology Simulation, Agent-Based Modelling and AI)
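Neural mass models of this family convert incoming spike density into membrane potential through a second-order linear transfer function with impulse response h(t) = A·a·t·e^(−a·t), and recover firing with a sigmoid S(v). A generic sketch using textbook Jansen-Rit-style constants as assumptions, not the paper's fitted parameters:

```python
# One subpopulation of a neural mass model: second-order synaptic
# dynamics y'' = A*a*x - 2*a*y' - a^2*y plus a sigmoidal firing curve.

import math

A, a = 3.25, 100.0              # synaptic gain (mV) and rate constant (1/s)

def sigmoid(v, vmax=5.0, v0=6.0, r=0.56):
    """Mean spike density S(v) for mean membrane potential v."""
    return vmax / (1.0 + math.exp(r * (v0 - v)))

def psp_response(inp, dt=1e-4):
    """Euler-integrate the second-order ODE for an input series x."""
    y = dy = 0.0
    out = []
    for x in inp:
        ddy = A * a * x - 2 * a * dy - a * a * y
        dy += ddy * dt
        y += dy * dt
        out.append(y)
    return out

# A unit-area impulse input: the response should peak near t = 1/a = 10 ms.
dt = 1e-4
impulse = [1.0 / dt] + [0.0] * 2999
resp = psp_response(impulse, dt)
peak_t = resp.index(max(resp)) * dt
```

In the paper's model, several such transfer functions run in parallel with stimulus-dependent weights, and two coupled regions exchange the sigmoid-transformed output.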
Figure 1: The traditional neural mass model, which contains excitatory interneurons, inhibitory interneurons, and pyramidal neurons. A sigmoid function S(v) and differential equations for excitatory (h_e) and inhibitory (h_i) responses describe the dynamic behavior of each subpopulation. The external input n(t) is modeled as Gaussian white noise, which introduces variability to the signal, and the coupling coefficients C1, C2, C3, and C4 define the interaction strengths between different neural subpopulations. The output signal E^i(t), the difference between the excitatory and inhibitory responses, represents the EEG-like signal produced by the model.
Figure 2: The multi-dynamic neural mass model for SSVEPs.
Figure 3: The SSVEP-BCI multi-dynamic coupled neural mass model. The occipital and parietal regions are represented by the multi-dynamic NMM of Figure 2, where three parallel linear transfer functions are involved in the excitatory and inhibitory interneurons. The membrane potential of each intra-regional pyramidal cell (i.e., y_out) is first transformed into mean spike density through the static nonlinear function s(v) and then processed by the cross-regional neural encoder.
Figure 4: Simulated signal curves varying with μ and their spectral power. As μ increased from 50 to 200, the rhythmic characteristics gradually intensified, with a final pronounced spectral peak at 10 Hz.
Figure 5: Simulated signal curves varying with σ² when μ = 220 and their spectral power. As σ² increased from 50 to 20,000, slight to progressive changes in amplitude and spectral peaks were observed.
Figure 6: Simulated signal curves varying with σ² when μ = 90 and their normalized spectral power. As σ² increased, signal amplitudes gradually increased (e.g., from 100 to 3000), even leading to irregular spike activity (from 6000 to 20,000).
Figure 7: The simulated signals and spectral power of the occipital region without coupling. Due to the high weight assigned to α, the waveform fluctuated around the alpha wave, and as the delta wave component increased, spike activity decreased with a gradual leftward shift in frequency peaks.
Figure 8: The simulated signals and spectra of the occipital region under unidirectional coupling. As the parietal-to-occipital coupling strength (p_o) increased, while maintaining zero occipital-to-parietal coupling (o_p = 0), the occipital region showed increased signal amplitude with stable frequency characteristics, accompanied by enhanced spectral peak values.
Figure 9: The simulated signals and spectra of the occipital region under bidirectional coupling with different dynamic characteristics. When the coupling strength between the regions increases, the spikes in the occipital region's simulated signal are reduced, and the spectral peaks gradually shift to the left.
Figure 10: Comparison of real and simulated SSVEP under three types of visual stimuli, where the overall waveform pattern of simulated signals remains consistent with real signals.
Figure 11: FPF-net structure.
Figure 12: Arduino car movement based on simulated SSVEP.
21 pages, 3228 KiB  
Article
TransECA-Net: A Transformer-Based Model for Encrypted Traffic Classification
by Ziao Liu, Yuanyuan Xie, Yanyan Luo, Yuxin Wang and Xiangmin Ji
Appl. Sci. 2025, 15(6), 2977; https://doi.org/10.3390/app15062977 - 10 Mar 2025
Abstract
Encrypted network traffic classification remains a critical component in network security monitoring. However, existing approaches face two fundamental limitations: (1) conventional methods rely on manual feature engineering and are inadequate in handling high-dimensional features; and (2) they lack the capability to capture dynamic temporal patterns. This paper introduces TransECA-Net, a novel hybrid deep learning architecture that addresses these limitations through two key innovations. First, we integrate ECA-Net modules with CNN architecture to enable automated feature extraction and efficient dimension reduction via channel selection. Second, we incorporate a Transformer encoder to model global temporal dependencies through multi-head self-attention, supplemented by residual connections for optimal gradient flow. Extensive experiments on the ISCX VPN-nonVPN dataset demonstrate the superiority of our approach. TransECA-Net achieved an average accuracy of 98.25% in classifying 12 types of encrypted traffic, outperforming classical baseline models such as 1D-CNN, CNN + LSTM, and TFE-GNN by 6.2–14.8%. Additionally, it demonstrated a 37.44–48.84% improvement in convergence speed during the training process. Our proposed framework presents a new paradigm for encrypted traffic feature disentanglement and representation learning. This paradigm enables cybersecurity systems to achieve fine-grained service identification of encrypted traffic (e.g., 98.9% accuracy in VPN traffic detection) and real-time responsiveness (48.8% faster than conventional methods), providing technical support for combating emerging cybercrimes such as monitoring illegal transactions on darknet networks and contributing significantly to adaptive network security monitoring systems.
(This article belongs to the Section Computing and Artificial Intelligence)
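The ECA ("efficient channel attention") module mentioned above gates channels in three steps: global average pooling to one descriptor per channel, a small 1-D convolution across the channel descriptors, and a sigmoid that rescales each channel. A pure-Python sketch of the mechanism with an illustrative fixed kernel (a trained ECA learns this kernel; the feature maps are toy data):

```python
# ECA-style channel attention: squeeze -> local cross-channel conv -> gate.

import math

def eca_attention(channels, kernel=(0.25, 0.5, 0.25)):
    """channels: list of feature maps (lists of floats). Returns gated maps."""
    # 1) Squeeze: one descriptor per channel via global average pooling.
    desc = [sum(c) / len(c) for c in channels]
    # 2) Local cross-channel interaction: 1-D convolution, zero padding.
    k = len(kernel) // 2
    mixed = []
    for i in range(len(desc)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(desc):
                s += w * desc[idx]
        mixed.append(s)
    # 3) Excite: sigmoid gate, then rescale each channel's features.
    gates = [1.0 / (1.0 + math.exp(-m)) for m in mixed]
    return [[g * x for x in c] for g, c in zip(gates, channels)]

maps = [[1.0, 3.0], [2.0, 2.0], [0.0, 4.0]]   # 3 channels, 2 values each
out = eca_attention(maps)
```

Unlike SE-style attention, there is no fully connected bottleneck; the 1-D kernel touches only neighboring channels, which is what keeps the dimension-reduction step cheap.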
Figure 1: TransECA-Net structural flowchart.
Figure 2: ECANet structural flowchart.
Figure 3: TransECA-Net model performance.
Figure 4: Normalized confusion matrix of 12 types of encrypted traffic.
Figure 5: Linear graph of training loss values for the 1D-CNN model.
Figure 6: Linear graph of training loss values for the SAE + 1D-CNN model.
Figure 7: Linear graph of training loss values for the CNN + LSTM model.
Figure 8: Linear graph of training loss values for the TransECA-Net model.
29 pages, 1565 KiB  
Article
Analyzing High-Speed Rail’s Transformative Impact on Public Transport in Thailand Using Machine Learning
by Chinnakrit Banyong, Natthaporn Hantanong, Panuwat Wisutwattanasak, Thanapong Champahom, Kestsirin Theerathitichaipa, Rattanaporn Kasemsri, Manlika Seefong, Vatanavongs Ratanavaraha and Sajjakaj Jomnonkwao
Infrastructures 2025, 10(3), 57; https://doi.org/10.3390/infrastructures10030057 - 10 Mar 2025
Abstract
This study investigates the impact of high-speed rail (HSR) on Thailand’s public transportation market and evaluates the effectiveness of machine learning techniques in predicting travel mode choices. A stated preference survey was conducted with 3200 respondents across 16 provinces, simulating travel scenarios involving buses, trains, airplanes, and HSR. The dataset, consisting of 38,400 observations, was analyzed using the CatBoost model and the multinomial logit (MNL) model. CatBoost demonstrated superior predictive performance, achieving an accuracy of 0.853 and an AUC of 0.948, compared to MNL’s accuracy of 0.749 and AUC of 0.879. Shapley additive explanations (SHAP) analysis identified key factors influencing travel behavior, including cost, service frequency, waiting time, travel time, and station access time. The results predict that HSR will capture 88.91% of the intercity travel market, significantly reducing market shares for buses (4.76%), trains (5.11%), and airplanes (1.22%). The findings highlight the transformative role of HSR in reshaping travel patterns and offer policy insights for optimizing pricing, service frequency, and accessibility. Machine learning enhances predictive accuracy and enables a deeper understanding of mode choice behavior, providing a robust analytical framework for transportation planning.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Infrastructures)
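The MNL baseline assigns each mode a linear utility in attributes such as cost and travel time, then converts utilities to choice probabilities with a softmax. The coefficients and attribute values below are invented for illustration, not the fitted values from the study:

```python
# Multinomial logit choice probabilities: P(mode) = exp(V_mode) / sum exp(V).

import math

def mnl_probabilities(utilities):
    """Softmax over mode utilities -> choice probabilities."""
    m = max(utilities.values())                # subtract max for stability
    expu = {k: math.exp(v - m) for k, v in utilities.items()}
    z = sum(expu.values())
    return {k: v / z for k, v in expu.items()}

# Hypothetical utilities V = -0.005*fare - 0.8*travel_hours (toy numbers).
V = {
    "bus":      -0.005 * 300 - 0.8 * 8.0,    # cheap but slow
    "train":    -0.005 * 400 - 0.8 * 7.0,
    "hsr":      -0.005 * 900 - 0.8 * 2.5,    # pricier but fast
    "airplane": -0.005 * 1500 - 0.8 * 1.5,
}
p = mnl_probabilities(V)
```

With these toy weights the fast-but-affordable HSR alternative takes the largest share, the qualitative pattern the survey-based models predict.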
Figure 1: The proportion of intercity public transport passenger volume in Thailand (2017–2022).
Figure 2: Research process flowchart.
Figure 3: Provinces selected for data collection in Thailand.
Figure 4: Feature importance of using public transport by SHAP.
21 pages, 2314 KiB  
Article
High Accuracy of Epileptic Seizure Detection Using Tiny Machine Learning Technology for Implantable Closed-Loop Neurostimulation Systems
by Evangelia Tsakanika, Vasileios Tsoukas, Athanasios Kakarountas and Vasileios Kokkinos
BioMedInformatics 2025, 5(1), 14; https://doi.org/10.3390/biomedinformatics5010014 - 10 Mar 2025
Abstract
Background: Epilepsy is one of the most common and devastating neurological disorders, manifesting with seizures and affecting approximately 1–2% of the world’s population. The criticality of seizure occurrence and associated risks, combined with the overwhelming need for more precise and innovative treatment methods, has led to the development of invasive neurostimulation devices programmed to detect and apply electrical stimulation therapy to suppress seizures and reduce the seizure burden. Tiny Machine Learning (TinyML) is a rapidly growing branch of machine learning. One of its key characteristics is the ability to run machine learning algorithms without the need for high computational complexity and powerful hardware resources. The featured work utilizes TinyML technology to implement an algorithm that can be integrated into the microprocessor of an implantable closed-loop brain neurostimulation system to accurately detect seizures in real-time by analyzing intracranial EEG (iEEG) signals. Methods: A dataset containing iEEG signal values from both non-epileptic and epileptic individuals was utilized for the implementation of the proposed algorithm. Appropriate data preprocessing was performed, and two training datasets with 1000 records of non-epileptic and epileptic iEEG signals were created. A test dataset with an independent dataset of 500 records was also created. The web-based platform Edge Impulse was used for model generation and visualization, and different model architectures were explored and tested. Finally, metrics of accuracy, confusion matrices, and ROC curves were used to evaluate the performance of the model. Results: Our model demonstrated high performance, achieving 98% and 99% accuracy on the validation and test EEG datasets, respectively. Our results support the use of TinyML technology in closed-loop neurostimulation devices for epilepsy, as it contributes significantly to the speed and accuracy of seizure detection. Conclusions: The proposed TinyML model demonstrated reliable seizure detection in real-time by analyzing EEG signals and distinguishing epileptic activity from normal brain electrical activity. These findings highlight the potential of TinyML in closed-loop neurostimulation systems for epilepsy, enhancing both speed and accuracy in seizure detection.
(This article belongs to the Special Issue Editor's Choices Series for Methods in Biomedical Informatics Section)
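Figure 1 of the paper shows that epileptic and non-epileptic iEEG windows differ markedly in value range, which is why very cheap amplitude statistics can already separate many windows on a microcontroller-class device. A toy threshold detector in that spirit (synthetic signals and an arbitrary threshold; this is not the Edge Impulse model from the study):

```python
# Window-level amplitude features cheap enough for an implantable MCU,
# plus a toy threshold rule on synthetic "interictal" vs "ictal" windows.

import math, random

def window_features(window):
    """Return (standard deviation, peak-to-peak) of one signal window."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return math.sqrt(var), max(window) - min(window)

def detect_seizure(window, std_thresh=80.0):
    """Flag the window when its amplitude spread exceeds the threshold."""
    std, _ = window_features(window)
    return std > std_thresh

random.seed(0)
normal = [random.gauss(0, 30) for _ in range(256)]    # low-amplitude iEEG
ictal  = [random.gauss(0, 300) for _ in range(256)]   # high-amplitude iEEG
```

A real TinyML pipeline would feed such features (or learned spectral features) into a small quantized network, but the fixed-point-friendly feature step looks much like this.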
Figure 1: Distribution of values of EEG signals of non-epileptic individuals (left panel) and epileptic patients (right panel). A difference in value range and value distribution is observed.
Figure 2: Workflow of the proposed model's architecture.
Figure 3: Scatter plot of labels 0 and 1 predictions in the validation set.
Figure 4: Scatter plot of labels 0 and 1 predictions in the test set.
Figure 5: ROC curve for the validation set.
Figure 6: ROC curve for the test set.
15 pages, 2505 KiB  
Article
Validity and Reliability of Inertial Motion Unit-Based Performance Metrics During Wheelchair Racing Propulsion
by Raphaël Ouellet, Katia Turcot, Nathalie Séguin, Alexandre Campeau-Lecour and Jason Bouffard
Sensors 2025, 25(6), 1680; https://doi.org/10.3390/s25061680 - 8 Mar 2025
Abstract
This study aims to evaluate the concurrent validity and test–retest reliability of wheelchair racing performance metrics. Thirteen individuals without disabilities and experience in wheelchair racing were evaluated twice while performing maximal efforts on a racing wheelchair. Three wheelchair athletes were also assessed to compare their performance with novice participants. The wheelchair kinematics was estimated using an inertial motion unit (IMU) positioned on the frame and a light detection and ranging (Lidar) system. The propulsion cycle (PC) duration, acceleration, average speed, speed gains during acceleration, and speed loss during deceleration were estimated for the first PC and stable PCs. The test–retest reliability was generally moderate (0.50 ≤ ICC < 0.75) to good (0.75 ≤ ICC < 0.90), while few metrics showed poor reliability (ICC < 0.50). High to very high correlations were obtained between both systems for 10 out of 11 metrics (0.78–0.99). Wheelchair athletes performed better than novice participants. Our results suggest that integrated accelerometer data could be used to assess wheelchair speed characteristics over a short distance with a known passage time. Such fine-grain analyses using methods usable in the field could allow for data-informed training in novice and elite wheelchair racing athletes.
(This article belongs to the Special Issue Feature Papers in Wearables 2024)
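The IMU-based speed metrics come from integrating frame acceleration over time, after which per-cycle quantities such as the speed gained during a push can be read off the velocity curve. A minimal trapezoidal-integration sketch on a synthetic acceleration trace (the trace, time step, and gain definition are illustrative assumptions, not the study's processing chain):

```python
# Integrate accelerometer samples to speed, then extract a simple
# per-push speed-gain metric from the velocity curve.

def integrate_speed(acc, dt, v0=0.0):
    """Trapezoidal integration of acceleration (m/s^2) -> speed (m/s)."""
    v = [v0]
    for a0, a1 in zip(acc, acc[1:]):
        v.append(v[-1] + 0.5 * (a0 + a1) * dt)
    return v

def speed_gain(speeds):
    """Speed gained between the minimum and the following maximum."""
    lo = min(speeds)
    hi = max(speeds[speeds.index(lo):])
    return hi - lo

# One push: 0.5 s at +2 m/s^2, then 0.5 s of -0.4 m/s^2 rolling loss.
dt = 0.01
acc = [2.0] * 50 + [-0.4] * 50
v = integrate_speed(acc, dt)
```

In practice the integration drift is what the paper's known-passage-time constraint corrects for; this sketch omits that step.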
Figure 1: (A) Racing wheelchair used for the NPs. (B) Inertial measurement unit and the reflective plate positioned on the back of the frame.
Figure 2: Cycle separation (A) and metric extraction (B). Panel A: gray area highlights propulsion cycles 5 to 7, which are illustrated in panel B. Panel B: light-gray area highlights the acceleration phase (AccGain); dark-gray area highlights the deceleration phase (DecLoss). AccMin: minimal acceleration; AccMax: maximal acceleration; AccGain: speed gains during the acceleration phase; DecLoss: speed loss during deceleration.
Figure 3: (A) Absolute velocity RMS error (RMSe); (B) velocity RMS error normalized to the maximal velocity over the first ten propulsion cycles. PCstable: propulsion cycles 3 to 10.
Figure 4: (A) Wheelchair velocity curves of all NPs (gray) and WRAs (red). (B) Ranks of the WRAs within the entire group (16 participants: 3 WRAs + 13 NPs), with individual markers identifying WRA1, WRA2, and WRA3. AccMin: minimal acceleration; AccMax: maximal acceleration; AvSpeed: average speed; AvSpeedGain: average speed gain between two cycles; AccGain: speed gains during the acceleration phase; DecLoss: speed loss during deceleration; PC1: first propulsion cycle; PCstable: propulsion cycles 3 to 10.
30 pages, 3530 KiB  
Article
A Hybrid Optimization Approach Combining Rolling Horizon with Deep-Learning-Embedded NSGA-II Algorithm for High-Speed Railway Train Rescheduling Under Interruption Conditions
by Wenqiang Zhao, Leishan Zhou and Chang Han
Sustainability 2025, 17(6), 2375; https://doi.org/10.3390/su17062375 - 8 Mar 2025
Abstract
This study discusses the issue of train rescheduling in high-speed railways (HSR) when unexpected interruptions occur. These interruptions can lead to delays, cancellations, and disruptions to passenger travel. An optimization model for train rescheduling under uncertain-duration interruptions is proposed. The model aims to minimize both the decline in passenger service quality and the total operating cost, thereby achieving sustainable rescheduling. Then, a hybrid optimization algorithm combining rolling horizon optimization with a deep-learning-embedded NSGA-II algorithm is introduced to solve this multi-objective problem. This hybrid algorithm combines the advantages of each single algorithm, significantly improving computational efficiency and solution quality, particularly in large-scale scenarios. Furthermore, a case study on the Beijing–Shanghai high-speed railway shows the effectiveness of the model and algorithm. The optimization rates are 16.27% for service quality and 15.58% for operational costs in the small-scale experiment. Compared to other single algorithms or algorithm combinations, the hybrid algorithm enhances computational efficiency by 26.21%, 15.73%, and 25.13%. Comparative analysis shows that the hybrid algorithm outperforms traditional methods in both optimization quality and computational efficiency, contributing to enhanced overall operational efficiency of the railway system and optimized resource utilization. The Pareto front analysis provides decision makers with a range of scheduling alternatives, offering flexibility in balancing service quality and cost. In conclusion, the proposed approach is highly applicable in real-world railway operations, especially under complex and uncertain conditions, as it not only reduces operational costs but also aligns railway operations with broader sustainability goals.
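The step NSGA-II adds over a plain genetic algorithm is non-dominated sorting of the population against the objectives, here both minimized: decline in service quality and operating cost. A minimal Pareto-front extraction sketch on toy objective points (not the Beijing–Shanghai case data):

```python
# Extract the non-dominated (Pareto) front for two minimized objectives.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (service-quality decline, operating cost) for six candidate timetables.
population = [(3, 9), (5, 4), (4, 7), (6, 3), (7, 6), (2, 12)]
front = pareto_front(population)   # (7, 6) is dominated by (5, 4)
```

NSGA-II then ranks successive fronts and uses crowding distance within each front; the trade-off curve handed to dispatchers (the paper's Pareto scatter plots) is exactly such a first front.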
Show Figures

Figure 1: Example of a small-scale high-speed railway timetable.
Figure 2: Schematic diagram of the rolling horizon algorithm.
Figure 3: Example of gene fragments.
Figure 4: Schematic diagram of the selection process for a new population.
Figure 5: The process of the hybrid algorithm.
Figure 6: Stations along the Beijing–Shanghai high-speed railway.
Figure 7: Comparison showing before and after iteration.
Figure 8: Iteration curve of two objectives.
Figure 9: Convergence curves of objective function 1 over 15 experiments.
Figure 10: Pareto front scatter plot of two experiments.
18 pages, 7165 KiB  
Article
Novel Preprocessing-Based Sequence for Comparative MR Cervical Lymph Node Segmentation
by Elif Ayten Tarakçı, Metin Çeliker, Mehmet Birinci, Tuğba Yemiş, Oğuz Gül, Enes Faruk Oğuz, Merve Solak, Esat Kaba, Fatma Beyazal Çeliker, Zerrin Özergin Coşkun, Ahmet Alkan and Özlem Çelebi Erdivanlı
J. Clin. Med. 2025, 14(6), 1802; https://doi.org/10.3390/jcm14061802 - 7 Mar 2025
Viewed by 191
Abstract
Background and Objective: This study aims to utilize deep learning methods for the automatic segmentation of cervical lymph nodes in magnetic resonance images (MRIs), enhancing the speed and accuracy of diagnosing pathological masses in the neck and improving patient treatment processes. Materials and Methods: This study included 1346 MRI slices from 64 patients undergoing cervical lymph node dissection, biopsy, and preoperative contrast-enhanced neck MRI. A preprocessing model was used to crop and highlight lymph nodes, along with a method for automatic re-cropping. Two datasets were created from the cropped images—one with augmentation and one without—divided into 90% training and 10% validation sets. After preprocessing, the ResNet-50 images in the DeepLabv3+ encoder block were automatically segmented. Results: According to the results of the validation set, the mean IoU values for the DWI, T2, T1, T1+C, and ADC sequences in the dataset without augmentation created for cervical lymph node segmentation were 0.89, 0.88, 0.81, 0.85, and 0.80, respectively. In the augmented dataset, the average IoU values for all sequences were 0.91, 0.89, 0.85, 0.88, and 0.84. The DWI sequence showed the highest performance in the datasets with and without augmentation. Conclusions: Our preprocessing-based deep learning architectures successfully segmented cervical lymph nodes with high accuracy. This study is the first to explore automatic segmentation of the cervical lymph nodes using comprehensive neck MRI sequences. The proposed model can streamline the detection process, reducing the need for radiology expertise. Additionally, it offers a promising alternative to manual segmentation in radiotherapy, potentially enhancing treatment effectiveness. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
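The per-sequence mean IoU values reported above follow the standard mask-overlap definition (intersection over union of predicted and ground-truth segmentation masks). A small sketch in plain NumPy; the toy masks are illustrative, not from the study's MRI data:

```python
import numpy as np

def iou(pred, gt):
    """IoU between two boolean masks; defined as 1.0 when both are empty."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True   # 2x2 predicted node
gt   = np.zeros((4, 4), bool); gt[1:4, 1:4] = True     # 3x3 ground truth
print(iou(pred, gt))  # 4 / 9 ≈ 0.444
```

Averaging this quantity over all validation slices of a sequence gives the mean IoU figures quoted for DWI, T2, T1, T1+C, and ADC.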
Show Figures

Figure 1: (a) RGB colour space and (b) HSV colour space [10].
Figure 2: The workflow of our preprocessing program for semi-automatic segmentation is illustrated. (a) A raw medical image where lymph nodes have been manually annotated by an expert using green markings. (b–d) show the cropped images obtained by our proposed method, which automatically extracts the annotated regions. Our program crops each lymph node marked by the expert and ensures that the same lymph node is not cropped more than once through a control mechanism. This prevents redundant processing and ensures that each lymph node is included only once in the dataset.
Figure 3: Stages of creating ground truth from cropped images. The manually cropped lymph node is marked in green, while the lymph node during ground truth creation is marked in shades of gray.
Figure 4: Automatic cropping of images and generation of ground truths.
Figure 5: Proposed method.
Figure 6: Residual block.
Figure 7: (a) Test confusion matrices without augmentation for ADC, (b) T1, (c) T1+C, (d) T2, and (e) DWI sequences.
Figure 8: Training graph for DWI sequence with unenhanced dataset.
Figure 9: (a) Test confusion matrices after augmentation for ADC, (b) T1, (c) T1+C, (d) T2, and (e) DWI sequences.
Figure 10: Training graph with augmented dataset for DWI sequence.
16 pages, 1104 KiB  
Article
Detection of Fractured Endodontic Instruments in Periapical Radiographs: A Comparative Study of YOLOv8 and Mask R-CNN
by İrem Çetinkaya, Ekin Deniz Çatmabacak and Emir Öztürk
Diagnostics 2025, 15(6), 653; https://doi.org/10.3390/diagnostics15060653 - 7 Mar 2025
Viewed by 136
Abstract
Background/Objectives: Accurate localization of fractured endodontic instruments (FEIs) in periapical radiographs (PAs) remains a significant challenge. This study aimed to evaluate the performance of YOLOv8 and Mask R-CNN in detecting FEIs and root canal treatments (RCTs) and compare their diagnostic capabilities with those of experienced endodontists. Methods: A data set of 1050 annotated PAs was used. Mask R-CNN and YOLOv8 models were trained and evaluated for FEI and RCT detection. Metrics including accuracy, intersection over union (IoU), mean average precision at 0.5 IoU (mAP50), and inference time were analyzed. Observer agreement was assessed using inter-class correlation (ICC), and comparisons were made between AI predictions and human annotations. Results: YOLOv8 achieved an accuracy of 97.40%, a mAP50 of 98.9%, and an inference time of 14.6 ms, outperforming Mask R-CNN in speed and mAP50. Mask R-CNN demonstrated an accuracy of 98.21%, a mAP50 of 95%, and an inference time of 88.7 ms, excelling in detailed segmentation tasks. Comparative analysis revealed no statistically significant differences in diagnostic performance between the models and experienced endodontists. Conclusions: Both YOLOv8 and Mask R-CNN demonstrated high diagnostic accuracy and reliability, comparable to experienced endodontists. YOLOv8’s rapid detection capabilities make it particularly suitable for real-time clinical applications, while Mask R-CNN excels in precise segmentation. This study establishes a strong foundation for integrating AI into dental diagnostics, offering innovative solutions to improve clinical outcomes. Future research should address data diversity and explore multimodal imaging for enhanced diagnostic capabilities. Full article
(This article belongs to the Special Issue Advances in Medical Image Processing, Segmentation and Classification)
Show Figures

Figure 1: Representative examples of Mask R-CNN's performance on periapical radiographs (PAs) for detecting fractured endodontic instruments (FEI) and root canal treatments (RCT). The bounding boxes and associated confidence scores highlight the model's ability to accurately identify and localize objects. Panels (A1–E1) represent the ground truth annotations marked with blue boxes for FEI and red boxes for RCT, while panels (A2–E2) depict the segmentations generated by the Mask R-CNN model, where FEI is marked with red boxes and RCT with pink boxes.
Figure 2: Flowchart of Mask R-CNN architecture. CNN extracts feature maps from the input image. The Region Proposal Network generates candidate regions, which are processed through RoI (Region of Interest) Align to ensure accurate spatial alignment. The extracted features are passed through FC (Fully Connected) layers for classification and bounding box regression. Additionally, Conv (Convolutional) layers are used for mask prediction.
Figure 3: Flowchart of YOLO architecture.
Figure 4: Saliency map outputs for FEI and RCT detection using YOLO and Mask R-CNN. (A1–D1) Raw periapical radiographs, (A2–D2) corresponding saliency maps. (A) YOLO-based saliency map for FEI detection, (B) YOLO-based saliency map for RCT detection, (C) Mask R-CNN-based saliency map for FEI detection, and (D) Mask R-CNN-based saliency map for RCT detection. The red boxes indicate the regions identified by the models as containing FEI or RCT, highlighting the areas of interest detected by the respective deep learning approaches.
Figure 5: Comparison of training and validation losses for YOLOv8 (top) and Mask R-CNN (bottom) models. The YOLOv8 graphs depict box loss (A) and class loss (B), illustrating a steady decrease in both training and validation losses with minimal divergence, indicating strong generalization and effective performance in object localization and classification. In contrast, the Mask R-CNN graph (C) shows the total loss across training and validation, with training loss decreasing rapidly and validation loss stabilizing with slight fluctuations, reflecting its ability to perform detailed segmentation tasks. Overall, YOLOv8 demonstrates faster convergence and smoother loss reduction, while Mask R-CNN exhibits robustness in tasks requiring precise segmentation.
16 pages, 3356 KiB  
Article
Integrated Whole-Body Control and Manipulation Method Based on Teacher–Student Perception Information Consistency
by Shuqi Liu, Yufeng Zhuang, Shuming Hu, Yanzhu Hu and Bin Zeng
Actuators 2025, 14(3), 131; https://doi.org/10.3390/act14030131 - 7 Mar 2025
Viewed by 90
Abstract
In emergency scenarios, we focus on studying how to manipulate legged robot dogs equipped with robotic arms to move and operate in a small space, known as legged emergency manipulation. Although the legs of the robotic dog are mainly used for movement, we found that implementing a whole-body control strategy can enhance its operational capabilities. This means that the robotic dog’s legs and mechanical arms can be synchronously controlled, thus expanding its working range and mobility, allowing it to flexibly enter and exit small spaces. To this end, we propose a framework that can utilize visual information to provide feedback for whole-body control. Our method combines low-level and high-level strategies: the low-level strategy utilizes all degrees of freedom to accurately track the body movement speed of the robotic dog and the position of the end effector of the robotic arm; the advanced strategy is based on visual input, intelligently planning the optimal moving speed and end effector position. At the same time, considering the uncertainty of visual guidance, we integrate fully supervised learning into the advanced strategy to construct a teacher network and use it as a benchmark network for training the student network. We have rigorously trained these two levels of strategies in a simulated environment, and through a series of extensive simulation validations, we have demonstrated that our method has significant improvements over baseline methods in moving various objects in a small space, facing different configurations and different target objects. Full article
Show Figures

Figure 1: The module on the left utilizes a whole-body control method, whereas the module on the right employs a non-whole-body control method. The circles in the diagram represent the activity space of the quadruped robot body and the robotic arm. The whole-body control method offers a more flexible workspace compared to the non-whole-body control method, making it easier to adapt to different environments and handle objects at various height positions.
Figure 2: The training of a full-body control method based on visual information involves the use of supervised learning and mutual feedback learning with visual consistency information to train the command generation strategy, providing a foundation for the generation of control commands.
Figure 3: Real robot system setup. It mainly includes a robotic arm, a quadruped robot body, and a visual perception module.
Figure 4: Success rates of different methods at different height positions, tested in the simulator. The dots represent the mean performance of the same objects.
Figure 5: Rewards of our methods during training.
Figure 6: Visualization of qualitative experiments.
19 pages, 13823 KiB  
Article
Autonomous Agricultural Robot Using YOLOv8 and ByteTrack for Weed Detection and Destruction
by Ardin Bajraktari and Hayrettin Toylan
Machines 2025, 13(3), 219; https://doi.org/10.3390/machines13030219 - 7 Mar 2025
Viewed by 154
Abstract
Automating agricultural machinery presents a significant opportunity to lower costs and enhance efficiency in both current and future field operations. The detection and destruction of weeds in agricultural areas via robots can be given as an example of this process. Deep learning algorithms can accurately detect weeds in agricultural fields. Additionally, robotic systems can effectively eliminate these weeds. However, the high computational demands of deep learning-based weed detection algorithms pose challenges for their use in real-time applications. This study proposes a vision-based autonomous agricultural robot that leverages the YOLOv8 model in combination with ByteTrack to achieve effective real-time weed detection. A dataset of 4126 images was used to create YOLO models, with 80% of the images designated for training, 10% for validation, and 10% for testing. Six different YOLO object detectors were trained and tested for weed detection. Among these models, YOLOv8 stands out, achieving a precision of 93.8%, a recall of 86.5%, and a mAP@0.5 detection accuracy of 92.1%. With an object detection speed of 18 FPS and the advantages of the ByteTrack integrated object tracking algorithm, YOLOv8 was selected as the most suitable model. Additionally, the YOLOv8-ByteTrack model, developed for weed detection, was deployed on an agricultural robot with autonomous driving capabilities integrated with ROS. This system facilitates real-time weed detection and destruction, enhancing the efficiency of weed management in agricultural practices. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
Show Figures

Figure 1: Machine vision-based weeding robots: (a) the Bonirob, (b) the ARA, (c) the AVO, (d) the Laserweeder.
Figure 2: Overview of the autonomous agricultural robot.
Figure 3: Block diagram of the autonomous agricultural robot.
Figure 4: Position of the autonomous agricultural robot.
Figure 5: Flowchart of the autonomous navigation part.
Figure 6: YOLOv5 architecture [49].
Figure 7: YOLOv8 architecture [49].
Figure 8: ByteTrack workflow [55].
Figure 9: Types of weeds: (a) dandelion weeds, (b) Heliotropium indicum, (c) young field thistle Cirsium arvense, (d) Cirsium arvense, (e) Plantago lanceolata, (f) Eclipta, (g) Urtica dioica.
Figure 10: Results for the YOLOv5 model on an image.
Figure 11: (a) Results of the YOLOv5 pruned and quantized with transfer learning, (b) results of the YOLOv5 pruned and quantized.
Figure 12: Performance curves of YOLOv5: (a) metrics/precision curves, (b) metrics/recall curves.
Figure 13: Performance curves of YOLOv5: (a) metrics/mAP@0.5, (b) metrics/mAP@0.5:0.95.
Figure 14: Performance results of YOLOv8.
16 pages, 4154 KiB  
Article
A Novel Bearing Fault Diagnosis Method Based on Improved Convolutional Neural Network and Multi-Sensor Fusion
by Zhongyao Wang, Xiao Xu, Dongli Song, Zejun Zheng and Weidong Li
Machines 2025, 13(3), 216; https://doi.org/10.3390/machines13030216 - 7 Mar 2025
Viewed by 92
Abstract
Bearings are key components of modern mechanical equipment. To address the issue that the limited information contained in the single-source signal of the bearing leads to the limited accuracy of the single-source fault diagnosis method, a multi-sensor fusion fault diagnosis method is proposed to improve the reliability of bearing fault diagnosis. Firstly, the feature extraction process of the convolutional neural network (CNN) is improved based on the theory of variational Bayesian inference, which forms the variational Bayesian inference convolutional neural network (VBICNN). VBICNN is used to obtain preliminary diagnosis results of single-channel signals. Secondly, considering the redundancy of information contained in multi-channel signals, a voting strategy is used to fuse the preliminary diagnosis results of the single-channel model to obtain the final results. Finally, the proposed method is evaluated by an experimental dataset of the axlebox bearing of a high-speed train. The results show that the average diagnosis accuracy of the proposed method can reach more than 99% and has favorable stability. Full article
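The fusion step described above is a majority vote over the per-channel VBICNN diagnoses. A minimal sketch of such a voting rule; the channel predictions are hypothetical, and the first-vote tie-breaking convention is one reasonable choice, not one the abstract specifies:

```python
from collections import Counter

def vote_fusion(channel_preds):
    """Majority vote over per-channel class predictions.

    Ties are broken in favor of the earliest channel's vote
    (an assumed convention for this sketch).
    """
    counts = Counter(channel_preds)
    best = max(counts.values())
    for p in channel_preds:          # preserve channel order for ties
        if counts[p] == best:
            return p

# Hypothetical preliminary diagnoses from three vibration channels:
print(vote_fusion(["outer_race", "outer_race", "inner_race"]))  # outer_race
```

Because the channels carry redundant information, an error in one channel's preliminary diagnosis is typically outvoted by the others, which is the mechanism behind the reliability gain the abstract reports.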
Show Figures

Figure 1: Structure of CNN.
Figure 2: Framework of the proposed bearing fault diagnosis model. The red boxes represent the segments intercepted from the raw signals as inputs to the model.
Figure 3: Schematic diagram of the models: (a) CNN; (b) VBICNN. n_x is the number of neurons in the flattened layer, n_z is the number of neurons in the fully connected layer in the CNN, and n_F is the number of features extracted by the VBICNN, with n_z equal to n_F. The red boxes represent the segments intercepted from the raw signals as inputs to the model.
Figure 4: Framework and main steps of the proposed method.
Figure 5: Test bed for the axlebox bearing of a high-speed train.
Figure 6: Five kinds of fault bearings.
Figure 7: Results of repeated diagnostics using different channel signals for different fault diagnostic models (horizontal axis: number of runs; vertical axis: test accuracy): (a) CNN with channel 1; (b) VBICNN with channel 1; (c) CNN with channel 2; (d) VBICNN with channel 2; (e) CNN with channel 3; (f) VBICNN with channel 3; (g) proposed method using all channels; (h) comparison of results, where different colors represent different models using different channel signals, with color meanings consistent with (a–g).
Figure 8: Diagnosis results for different approaches.
Figure 9: Confusion matrices of the basic diagnosis model with different channel signals: (a) channel 1; (b) channel 2; (c) channel 3.
Figure 10: Comparison of diagnostic results.
20 pages, 6134 KiB  
Article
A Hardware-in-the-Loop Simulation Platform for a High-Speed Maglev Positioning and Speed Measurement System
by Linzi Yin, Cong Luo, Ling Liu, Junfeng Cui, Zhiming Liu and Guoying Sun
Technologies 2025, 13(3), 108; https://doi.org/10.3390/technologies13030108 - 6 Mar 2025
Viewed by 175
Abstract
In order to solve the testing and verification problems at the early development stage of a high-speed Maglev positioning and speed measurement system (MPSS), a hardware-in-the-loop (HIL) simulation platform is presented, which includes induction loops, transmitting antennas, a power driver unit, a simulator based on a field-programmable gate array (FPGA), a host computer, etc. This HIL simulation platform simulates the operation of a high-speed Maglev train and generates the related loop-induced signals to test the performance of a real ground signal processing unit (GSPU). Furthermore, an absolute position detection method based on Gray-coded loops is proposed to identify which Gray-coded period the train is in. A relative position detection method based on height compensation is also proposed to calculate the exact position of the train in a Gray-coded period. The experimental results show that the positioning error is only 2.58 mm, and the speed error is 6.34 km/h even in the 600 km/h condition. The proposed HIL platform also effectively simulates the three kinds of operation modes of high-speed Maglev trains, which verifies the effectiveness and practicality of the HIL simulation strategy. This provides favorable conditions for the development and early validation of high-speed MPSS. Full article
(This article belongs to the Section Information and Communication Technologies)
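Once the absolute-position method has recovered the Gray word from the coded loops, mapping it to a binary period index is the standard reflected-Gray decode. A sketch of just that conversion; signal demodulation, height compensation, and loop geometry are outside its scope:

```python
def gray_to_binary(g):
    """Convert a reflected-Gray-coded integer to its binary index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

# Gray sequence 000, 001, 011, 010 maps back to period indices 0..3:
print([gray_to_binary(g) for g in (0b000, 0b001, 0b011, 0b010)])  # [0, 1, 2, 3]
```

The appeal of Gray coding here is that adjacent periods differ in exactly one bit, so a train crossing a period boundary can never produce a multi-bit read error from sampling mid-transition.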
Show Figures

Figure 1: Block diagram of MPSS.
Figure 2: Address loop coding method. Half the distance between two adjacent crossing nodes of a G0 loop is the Gray-coding period.
Figure 3: Principle of relative position detection. The shaded portion of the figure indicates the projected portion of the transmitting antenna in the loop. Different colors indicate opposite magnetic flux.
Figure 4: G0 and SG0 loop signals.
Figure 5: The magnetic flux in the induction loop circuit. The shaded portion of the figure indicates the projected portion of the transmitting antenna in the loop.
Figure 6: Magnetic induction inside the loop.
Figure 7: Transmitting antenna model. R0 is the resistor used to limit the current, L is the coil of the transmitting antenna, and C is the capacitor used for impedance matching.
Figure 8: Structure of the HIL simulation platform.
Figure 9: Operation of the simulation system.
Figure 10: (a) The physical GSPU, emulator, and host computer; (b) the eight groups of induction loops and the corresponding eight transmitting antennas of the HIL simulation platform.
Figure 11: Absolute position Gray code processing results.
Figure 12: G0 and SG0 loop signal processing results.
Figure 13: Relative position processing results, expressed in angular form with values varying between 0 and 360.
Figure 14: Instantaneous position processing results.
Figure 15: Accelerated mode absolute position processing results.
Figure 16: (a) The positioning error during uniform motion at different speeds; (b) the positioning error during accelerated motion at different accelerations.
Figure 17: (a) The speed measurement error during uniform motion at different speeds; (b) the speed measurement error during accelerated motion at different accelerations.
30 pages, 8829 KiB  
Article
Adaptive Temporal Reinforcement Learning for Mapping Complex Maritime Environmental State Spaces in Autonomous Ship Navigation
by Ruolan Zhang, Xinyu Qin, Mingyang Pan, Shaoxi Li and Helong Shen
J. Mar. Sci. Eng. 2025, 13(3), 514; https://doi.org/10.3390/jmse13030514 - 6 Mar 2025
Viewed by 166
Abstract
The autonomous decision-making model for ship navigation requires extensive interaction and trial-and-error in real, complex environments to ensure optimal decision-making performance and efficiency across various scenarios. However, existing approaches still encounter significant challenges in addressing the temporal features of state space and tackling complex dynamic collision avoidance tasks, primarily due to factors such as environmental uncertainty, the high dimensionality of the state space, and limited decision robustness. This paper proposes an adaptive temporal decision-making model based on reinforcement learning, which utilizes Long Short-Term Memory (LSTM) networks to capture temporal features of the state space. The model integrates an enhanced Proximal Policy Optimization (PPO) algorithm for efficient policy iteration optimization. Additionally, a simulation training environment is constructed, incorporating multi-factor coupled physical properties and ship dynamics equations. The environment maps variables such as wind speed, current velocity, and wave height, along with dynamic ship parameters, while considering the International Regulations for Preventing Collisions at Sea (COLREGs) in training the autonomous navigation decision-making model. Experimental results demonstrate that, compared to other neural network-based reinforcement learning methods, the proposed model excels in environmental adaptability, collision avoidance success rate, navigation stability, and trajectory optimization. The model’s decision resilience and state-space mapping align with real-world navigation scenarios, significantly improving the autonomous decision-making capability of ships in dynamic sea conditions and providing critical support for the advancement of intelligent shipping. Full article
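The enhanced PPO used for policy iteration still optimizes the standard clipped surrogate objective. A minimal NumPy sketch of that loss; the probability ratios and advantage estimates below are hypothetical, and the LSTM state encoder, advantage estimation, and entropy/value terms of the full method are omitted:

```python
import numpy as np

def ppo_clip_loss(ratio, adv, eps=0.2):
    """Negative mean of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -np.mean(np.minimum(unclipped, clipped))

ratio = np.array([0.8, 1.0, 1.5])   # pi_new / pi_old (hypothetical)
adv   = np.array([1.0, -0.5, 2.0])  # advantage estimates (hypothetical)
print(ppo_clip_loss(ratio, adv))    # ≈ -0.9
```

The clipping term is what keeps each policy update close to the data-collecting policy, which matters for the training stability the experiments compare against DDPG and A3C.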
Show Figures

Figure 1: Autonomous navigation decision-making architecture for ships based on adaptive temporal reinforcement learning.
Figure 2: Ship motion diagram in the body-fixed coordinate system.
Figure 3: Ship motion diagram in the still-water coordinate system.
Figure 4: Output response of the first-order Nomoto model.
Figure 5: Ship encounter map.
Figure 6: Encounter situation judgment criteria for ships under the COLREGs system.
Figure 7: Design of ship encounter situation classification based on the reinforcement learning reward function.
Figure 8: Actor network structure diagram.
Figure 9: Critic network structure diagram.
Figure 10: Schematic of the deep reinforcement learning interactive simulation environment, incorporating environmental features such as wind speed, current velocity, and wave height, along with dynamic ship parameters in the state-space mapping.
Figure 11: Average reward variations of PPO, DDPG, A3C, and LSTM_PPO.
Figure 12: Variation in computational efficiencies of PPO, DDPG, A3C, and LSTM_PPO.
Figure 13: Comparison of average episode lengths for DDPG, PPO, LSTM_PPO, and A3C.
Figure 14: Comparison of average episode rewards for DDPG, PPO, LSTM_PPO, and A3C.
Figure 15: Comparison of training value losses for PPO, LSTM_PPO, and A3C.
Figure 16: Comparison of training strategy gradient losses for PPO, LSTM_PPO, and A3C.
Figure 17: Comparison of training strategy gradient losses for DDPG, PPO, LSTM_PPO, and A3C.
Figure 18: Comparison of approximate KL divergences for PPO, LSTM_PPO, and A3C.
Figure 19: Results of single- and two-vessel simulation experiments.
Figure 20: Head-on encounter scenario results.
Figure 21: Overtaking scenario results.
Figure 22: Crossing encounter scenario results.