Search Results (1,352)

Search Parameters:
Keywords = command and control

22 pages, 6955 KiB  
Article
A Novel Multi-Dynamic Coupled Neural Mass Model of SSVEP
by Hongqi Li, Yujuan Wang and Peirong Fu
Biomimetics 2025, 10(3), 171; https://doi.org/10.3390/biomimetics10030171 - 11 Mar 2025
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) leverage high-speed neural synchronization to visual flicker stimuli for efficient device control. While SSVEP-BCIs minimize user training requirements, their dependence on physical EEG recordings introduces challenges, such as inter-subject variability, signal instability, and experimental complexity. To overcome these limitations, this study proposes a novel neural mass model for SSVEP simulation by integrating frequency response characteristics with dual-region coupling mechanisms. Specific parallel linear transformation functions were designed based on SSVEP frequency responses, and weight coefficient matrices were determined according to the frequency band energy distribution under different visual stimulation frequencies in the pre-recorded SSVEP signals. A coupled neural mass model was constructed by establishing connections between occipital and parietal regions, with parameters optimized through particle swarm optimization to accommodate individual differences and neuronal density variations. Experimental results demonstrate that the model achieved a high-precision simulation of real SSVEP signals across multiple stimulation frequencies (10 Hz, 11 Hz, and 12 Hz), with maximum errors decreasing from 2.2861 to 0.8430 as frequency increased. The effectiveness of the model was further validated through the real-time control of an Arduino car, where simulated SSVEP signals were successfully classified by the advanced FPF-net model and mapped to control commands. This research not only advances our understanding of SSVEP neural mechanisms but also releases the user from the brain-controlled coupling system, thus providing a practical framework for developing more efficient and reliable BCI-based systems.
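For orientation on the modeling approach described above, the sketch below simulates a single-region Jansen-Rit-style neural mass model in Python. The parameter values are the standard literature defaults, not the article's fitted values, and the coupled occipital-parietal structure, parallel transfer functions, and particle swarm optimization described in the abstract are deliberately omitted; this is only a minimal illustration of the class of model being extended.

```python
# Minimal single-region Jansen-Rit-style neural mass model (the classical model
# this paper extends). Parameter values are standard literature defaults, not
# the ones fitted in the article; the coupled occipital-parietal structure and
# PSO fitting are omitted.
import numpy as np

A, B = 3.25, 22.0           # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0          # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56  # sigmoid parameters

def sigm(v):
    """Potential-to-rate sigmoid S(v)."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def simulate(mu=220.0, sigma2=2000.0, T=2.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    y = np.zeros(6)                 # [y0, y1, y2, y0', y1', y2']
    out = np.empty(n)
    for k in range(n):
        p = mu + np.sqrt(sigma2) * rng.standard_normal()   # input n(t)
        y0, y1, y2, y3, y4, y5 = y
        dy = np.array([
            y3, y4, y5,
            A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
            A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
            B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2,
        ])
        y = y + dt * dy             # simple Euler step
        out[k] = y1 - y2            # EEG-like output E(t)
    return out

eeg = simulate()
print(eeg[-5:])
```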
(This article belongs to the Special Issue Computational Biology Simulation, Agent-Based Modelling and AI)
Show Figures

Figure 1. The traditional neural mass model, which contains excitatory interneurons, inhibitory interneurons, and pyramidal neurons. A sigmoid function S(v) and differential equations for the excitatory (h_e) and inhibitory (h_i) responses describe the dynamic behavior of the subpopulation of interest. The external input n(t) is modeled as Gaussian white noise, which introduces variability to the signal, and the coupling coefficients C1, C2, C3, and C4 define the interaction strengths between the neural subpopulations. The output signal E^i(t), the difference between the excitatory and inhibitory responses, represents the EEG-like signal produced by the model.
Figure 2. The multi-dynamic neural mass model for SSVEPs.
Figure 3. The SSVEP-BCI multi-dynamic coupled neural mass model. The occipital and parietal regions are each represented by the multi-dynamic NMM of Figure 2, where three parallel linear transfer functions are involved in the excitatory and inhibitory interneurons. The membrane potential of each intra-regional pyramidal cell (i.e., y_out) is first transformed into mean spike density through the static nonlinear function s(v) and then processed by the cross-regional neural encoder.
Figure 4. Simulated signal curves varying with μ and their spectral power. As μ increased from 50 to 200, the rhythmic characteristics gradually intensified, with a final pronounced spectral peak at 10 Hz.
Figure 5. Simulated signal curves varying with σ² when μ = 220 and their spectral power. As σ² increased from 50 to 20,000, slight to progressive changes in amplitude and spectral peaks were observed.
Figure 6. Simulated signal curves varying with σ² when μ = 90 and their normalized spectral power. As σ² increased, signal amplitudes gradually increased (e.g., from 100 to 3000) and even led to irregular spike activity (from 6000 or 20,000).
Figure 7. The simulated signals and spectral power of the occipital region without coupling. Due to the high weight assigned to α, the waveform fluctuated around the alpha wave, and as the delta wave component increased, spike activity decreased with a gradual leftward shift in the frequency peaks.
Figure 8. The simulated signals and spectra of the occipital region under unidirectional coupling. As the parietal-to-occipital coupling strength (p_o) increased, with the occipital-to-parietal coupling kept at zero (o_p = 0), the occipital region showed an increased signal amplitude with stable frequency characteristics, accompanied by enhanced spectral peak values.
Figure 9. The simulated signals and spectra of the occipital region under bidirectional coupling with different dynamic characteristics. As the coupling strength between the regions increases, the spikes in the simulated occipital signal are reduced, and the spectral peaks gradually shift to the left.
Figure 10. Comparison of real and simulated SSVEP under three types of visual stimuli, where the overall waveform pattern of the simulated signals remains consistent with the real signals.
Figure 11. FPF-net structure.
Figure 12. Arduino car movement based on simulated SSVEP.
9 pages, 938 KiB  
Article
Fitness Profile of Police Officers from Rapid Intervention Teams of the Lisbon Metropolitan Command
by João Daniel Freitas and Luís Miguel Massuça
J. Funct. Morphol. Kinesiol. 2025, 10(1), 90; https://doi.org/10.3390/jfmk10010090 - 11 Mar 2025
Viewed by 48
Abstract
Background: A rapid intervention team is a broad category of special teams used by police and emergency respondents to cover various needs. It is essential to ensure the safety and well-being of people in emergencies, minimising the risk of harm and maximising the chances of survival. Objective: This study aimed (i) to identify the fitness profiles and levels of POs from the EIR of the Lisbon Metropolitan Command (COMETLIS, PSP, Portugal), considering age classes; (ii) to directly compare the observed fitness profiles to previous research and normative data; and (iii) to compare the fitness profile of POs from the EIR with cadets from the Police Academy. Methods: This cross-sectional observational study included the participation of 121 male POs from the EIR of the Lisbon Metropolitan Command (Portugal) and 92 male cadets from the Police Academy (Lisbon, Portugal). The assessment protocol sequence involved the collection of biosocial data (age classes: ≤29 years; 30–39 years; 40–49 years), a body size assessment, and a fitness assessment (horizontal jump, handgrip strength, 60 s sit-ups and 20 m shuttle run). Results: (i) In the ≤29 years age class, POs performed better in all fitness tests (highlighting that the age class had a statistically significant effect on performance in the horizontal jump, sit-ups, 20 m shuttle run, and predicted VO2max), and they showed significantly better performance than cadets in handgrip (left, right, and sum), and significantly worse performance in sit-ups and predicted VO2max. (ii) In the 30–39 years age class, POs had significantly worse performance than cadets in the horizontal jump, sit-ups, 20 m shuttle run, and predicted VO2max, even after controlling for age. Conclusions: (i) The fitness performance decreased as the age class became older; (ii) the handgrip strength and cardiovascular capacity attributes were between the standard and excellent levels according to the ACSM guidelines for the general population; (iii) POs from the EIR were stronger than cadets in terms of handgrip strength but weaker in terms of lower limb power, abdominal muscular endurance, and aerobic capacity; and (iv) the differences observed between POs from the EIR and cadets in the 30–39 years age class emphasise the importance of physical training after the training period and throughout professional life.
Show Figures

Figure 1. Fitness assessment protocol scheme.
Figure 2. Distribution and differences in fitness attributes of male police officers (POs) from the Lisbon Metropolitan Command (Portugal) rapid intervention teams (EIR) and cadets from the Police Academy, from the ≤29 years and 30–39 years age groups, after controlling for age.
19 pages, 1902 KiB  
Article
Facial Features Controlled Smart Vehicle for Disabled/Elderly People
by Yijun Hu, Ruiheng Wu, Guoquan Li, Zhilong Shen and Jin Xie
Electronics 2025, 14(6), 1088; https://doi.org/10.3390/electronics14061088 - 10 Mar 2025
Viewed by 97
Abstract
Mobility limitations due to congenital disabilities, accidents, or illnesses pose significant challenges to the daily lives of individuals with disabilities. This study presents a novel design for a multifunctional intelligent vehicle, integrating head recognition, eye-tracking, Bluetooth control, and ultrasonic obstacle avoidance to offer an innovative mobility solution. The smart vehicle supports three driving modes: (1) a nostril-based control system using MediaPipe to track displacement for movement commands, (2) an eye-tracking control system based on the Viola–Jones algorithm processed via an Arduino Nano board, and (3) a Bluetooth-assisted mode for caregiver intervention. Additionally, an ultrasonic sensor system ensures real-time obstacle detection and avoidance, enhancing user safety. Extensive experimental evaluations were conducted to validate the effectiveness of the system. The results indicate that the proposed vehicle achieves an 85% accuracy in nostril tracking, over 90% precision in eye direction detection, and efficient obstacle avoidance within a 1 m range. These findings demonstrate the robustness and reliability of the system in real-world applications. Compared to existing assistive mobility solutions, this vehicle offers non-invasive, cost-effective, and adaptable control mechanisms that cater to a diverse range of disabilities. By enhancing accessibility and promoting user independence, this research contributes to the development of inclusive mobility solutions for disabled and elderly individuals. Full article
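As a rough illustration of the nostril-based control mode described above, the sketch below tracks a nose landmark with MediaPipe Face Mesh and maps its displacement to simple drive commands sent over a serial link. The landmark index, displacement thresholds, serial port, and single-character command protocol ('F', 'L', 'R', 'S') are all illustrative assumptions, not the article's actual implementation.

```python
# Sketch of nose-displacement steering with MediaPipe Face Mesh.
# NOSE_IDX, DEADBAND, the serial port, and the 'F'/'L'/'R'/'S' protocol are
# illustrative placeholders, not the article's design.
import cv2
import mediapipe as mp
import serial

NOSE_IDX = 1          # assumed Face Mesh landmark near the nose tip/nostrils
DEADBAND = 0.03       # normalized displacement before a command is issued

port = serial.Serial("/dev/ttyUSB0", 9600)   # hypothetical Arduino link
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)

center = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cmd = b"S"                                   # default: stop
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark[NOSE_IDX]
        if center is None:
            center = (lm.x, lm.y)                # calibrate neutral head pose
        dx, dy = lm.x - center[0], lm.y - center[1]
        if dy < -DEADBAND:
            cmd = b"F"                           # nose moved up -> forward
        elif dx < -DEADBAND:
            cmd = b"L"
        elif dx > DEADBAND:
            cmd = b"R"
    port.write(cmd)
cap.release()
```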
(This article belongs to the Special Issue Active Mobility: Innovations, Technologies, and Applications)
Show Figures

Figure 1. Calculation of pixel value totals.
Figure 2. Original rectangle feature [26].
Figure 3. Cascade classifier block diagram [32].
Figure 4. Photographs of eyeballs incorrectly identified as nostrils.
Figure 5. Multifunctional intelligent vehicle under natural stationary conditions.
Figure 6. Block diagram of the control circuit.
Figure 7. Control flowchart.
Figure 8. Demonstration of the smart vehicle moving in a straight line. (a) The eyes look straight ahead. (b) The vehicle moves forward.
Figure 9. The vehicle's left turn controlled by eyeball movement. (a) The eyes look to the left. (b) The vehicle is making a left turn.
Figure 10. The vehicle's right-turning motion controlled by the movement of the nostrils.
21 pages, 2977 KiB  
Article
From Command-Control to Lifecycle Regulation: Balancing Innovation and Safety in China’s Pharmaceutical Legislation
by Jing Zhang, Shuchen Tang and Pengqing Sun
Healthcare 2025, 13(6), 588; https://doi.org/10.3390/healthcare13060588 - 7 Mar 2025
Viewed by 205
Abstract
Background: China’s pharmaceutical regulatory framework is undergoing a pivotal shift from a traditional “command-control” model to a “lifecycle regulation” approach, aiming to balance drug safety, innovation, and accessibility. This study systematically examines the evolution, achievements, and challenges of China’s regulatory reforms, offering insights for global pharmaceutical governance. Methods: Using a mixed-methods approach integrating historical analysis, policy text mining, and case studies, we reviewed the pharmaceutical laws and regulations enacted since 1949, supplemented by case studies (e.g., COVID-19 vaccine emergency approvals) and a comparative analysis with international models (e.g., U.S. FDA and EU EMA frameworks). The data were sourced from authoritative platforms such as the PKULAW database, criminal law amendments, and international regulatory texts. Results: China’s regulatory evolution is categorized into four phases: Emergence (1949–1984), Foundational (1985–2000), Deepening Reform (2001–2018), and Lifecycle Regulation (2019–present). The revised Drug Administration Law (2019) institutionalized risk management, dynamic GMP inspections, and post-market surveillance, marking a transition to holistic lifecycle oversight. Key milestones include the introduction of the Vaccine Management Law (2019) and stricter penalties under the Criminal Law Amendment (XI) (2020). Conclusions: China’s lifecycle regulation model demonstrates potential to harmonize safety and innovation, evidenced by improved API export compliance (e.g., 15% increase in international certifications by 2023) and accelerated approvals for breakthrough therapies (e.g., domestically developed PD-1 inhibitors). However, challenges persist, including uneven enforcement capacities, tensions between conditional approvals and risk mitigation, and reliance on global supply chains. These findings provide critical lessons for developing countries navigating similar regulatory dilemmas.
Show Figures

Figure 1. Number of laws and regulations introduced in different stages.
Figure 2. Statistical overview of legal and regulatory documents across different stages.
Figure 3. Timeline of legislation during the Emergence stage.
Figure 4. Timeline of legislation during the Foundational stage.
Figure 5. Timeline of legislation during the Deepening Reform stage.
Figure 6. Timeline of legislation during the full Lifecycle Regulation stage.
20 pages, 2207 KiB  
Article
A Novel TLS-Based Fingerprinting Approach That Combines Feature Expansion and Similarity Mapping
by Amanda Thomson, Leandros Maglaras and Naghmeh Moradpoor
Future Internet 2025, 17(3), 120; https://doi.org/10.3390/fi17030120 - 7 Mar 2025
Viewed by 247
Abstract
Malicious domains are part of the landscape of the internet but are becoming more prevalent and more dangerous both to companies and to individuals. They can be hosted on various technologies and serve an array of content, including malware, command and control and complex phishing sites that are designed to deceive and expose. Tracking, blocking and detecting such domains is complex, and very often it involves complex allowlist or denylist management or SIEM integration with open-source TLS fingerprinting techniques. Many fingerprinting techniques, such as JARM and JA3, are used by threat hunters to determine domain classification, but with the increase in TLS similarity, particularly in CDNs, they are becoming less useful. The aim of this paper was to adapt and evolve open-source TLS fingerprinting techniques with increased features to enhance granularity and to produce a similarity-mapping system that would enable the tracking and detection of previously unknown malicious domains. This was achieved by enriching TLS fingerprints with HTTP header data and producing a fine-grain similarity visualisation that represented high-dimensional data using MinHash and Locality-Sensitive Hashing. Influence was taken from the chemistry domain, where the problem of high-dimensional similarity in chemical fingerprints is often encountered. An enriched fingerprint was produced, which was then visualised across three separate datasets. The results were analysed and evaluated, with 67 previously unknown malicious domains being detected based on their similarity to known malicious domains and nothing else. The resulting similarity-mapping technique demonstrates definite promise for the early detection of malware and phishing domains.
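To make the MinHash and Locality-Sensitive Hashing idea concrete, the sketch below builds toy fingerprint feature sets, hashes them, and queries an LSH Forest for the most similar indexed domains using the datasketch library. The feature tokens, domain names, and query size are illustrative; the article's 2124-dimensional TLS+HTTP feature space and its similarity thresholds are not reproduced here.

```python
# Sketch of MinHash + LSH Forest nearest-neighbour lookup over fingerprint
# feature sets, using the datasketch library. Feature strings and keys are
# illustrative placeholders.
from datasketch import MinHash, MinHashLSHForest

def minhash_of(features, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for f in features:
        m.update(f.encode("utf-8"))
    return m

# Toy "enriched fingerprints": sets of TLS/HTTP feature tokens per domain.
fingerprints = {
    "known-bad.example": {"tls_ext:0x0017", "cipher:0x1301", "hdr:server=nginx"},
    "benign.example":    {"tls_ext:0x002b", "cipher:0x1302", "hdr:server=cloudflare"},
    "unknown-1.example": {"tls_ext:0x0017", "cipher:0x1301", "hdr:server=nginx"},
}

forest = MinHashLSHForest(num_perm=128)
hashes = {}
for domain, feats in fingerprints.items():
    hashes[domain] = minhash_of(feats)
    forest.add(domain, hashes[domain])
forest.index()

# Query: which indexed domains look most similar to the unknown one?
query = hashes["unknown-1.example"]
for neighbour in forest.query(query, 2):
    print(neighbour, query.jaccard(hashes[neighbour]))
```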
Show Figures

Figure 1. A flow diagram of the end-to-end fingerprint processing pipeline.
Figure 2. The raw fingerprint produced from the active scan.
Figure 3. A screenshot of the HEAD request being made, as seen within Wireshark. The HTTP protocol is highlighted in green.
Figure 4. A screenshot of a typical set of HTTP headers received in response to a HEAD request during the header enrichment process. The HEAD request is shown in red, the HTTP response in blue.
Figure 5. Graph displaying TLS features enriched with HTTP header data. The resulting feature matrix M ∈ {0,1}^(n×d) has dimensions n = 16,254 (fingerprints) and d = 2124 (features), representing the complete binary feature space of the TLS and HTTP characteristics. Known good domains are coloured green, known bad domains red, and unknown domains orange.
Figure 6. The Mixed Host dataset displays a diverse set of distance metrics and a broader distribution of similarity scores across the sample space. Each line represents a different domain, with a range of colors to aid differentiation.
Figure 7. The Cloudflare CDN dataset displays less diversity in similarity: all k-nearest neighbours maintain distances below 0.30, showing closer similarity between domains. Each line represents a different domain, with a range of colors to aid differentiation.
Figure 8. A typical domain with strong indicators of malicious intent. The domain was sourced from the unknown category and registered within 30 days of the scan taking place. At the time of evaluation, 12 security vendors had flagged the domain as malicious, including Sophos, Fortinet, ESET and Bitdefender.
Figure 9. An example of a domain on the threshold for further investigation. The domain has three vendors confirming it as malicious (BitDefender, CRDF and G-Data) plus a further suspicious flag from vendor Trustwave. The left-hand side shows the heuristic scan performed by URLQuery, indicating that the ClearFake malicious JavaScript library was detected.
Figure 10. The LSH forest of dataset A visualised using Fearun. Known bad domains are colored red, known good blue, and unknown domains orange.
Figure 11. The LSH forest of dataset B (Cloudflare CDN domains) visualised using Fearun. The TLS fingerprints have been enriched with HTTP header data. Known bad domains are colored red, known good blue, and unknown domains orange.
Figure 12. The LSH forest of dataset B (Cloudflare CDN domains) visualised using Fearun. The TLS fingerprints are not enriched and contain only TLS features. Known bad domains are colored red, known good blue, and unknown domains orange.
Figure 13. The LSH visualisation of dataset C, known malicious domains. Clear similarity patterns can be seen forming by capability: GoPhish domains are shown in yellow, Cert PL in orange, Metasploit in pink, Tactical RMM in purple, and Burp Collaborator in blue.
23 pages, 9777 KiB  
Article
Integrated Lower Limb Robotic Orthosis with Embedded Highly Oriented Electrospinning Sensors by Fuzzy Logic-Based Gait Phase Detection and Motion Control
by Ming-Chan Lee, Cheng-Tang Pan, Jhih-Syuan Huang, Zheng-Yu Hoe and Yeong-Maw Hwang
Sensors 2025, 25(5), 1606; https://doi.org/10.3390/s25051606 - 5 Mar 2025
Viewed by 268
Abstract
This study introduces an integrated lower limb robotic orthosis with near-field electrospinning (NFES) piezoelectric sensors and a fuzzy logic-based gait phase detection system to enhance mobility assistance and rehabilitation. The exoskeleton incorporates embedded pressure sensors within the insoles to capture ground reaction forces (GRFs) in real-time. A fuzzy logic inference system processes these signals, classifying gait phases such as stance, initial contact, mid-stance, and pre-swing. The NFES technique enables the fabrication of highly oriented nanofibers, improving sensor sensitivity and reliability. The system employs a master–slave control framework. A Texas Instruments (TI) TMS320F28069 microcontroller (Texas Instruments, Dallas, TX, USA) processes gait data and transmits actuation commands to motors and harmonic drives at the hip and knee joints. The control strategy follows a three-loop methodology, ensuring stable operation. Experimental validation assesses the system’s accuracy under various conditions, including no-load and loaded scenarios. Results demonstrate that the exoskeleton accurately detects gait phases, achieving a maximum tracking error of 4.23% in an 8-s gait cycle under no-load conditions and 4.34% when tested with a 68 kg user. Faster motion cycles introduce a maximum error of 6.79% for a 3-s gait cycle, confirming the system’s adaptability to dynamic walking conditions. These findings highlight the effectiveness of the developed exoskeleton in interpreting human motion intentions, positioning it as a promising solution for wearable rehabilitation and mobility assistance.
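To illustrate the kind of fuzzy inference described in the abstract (GRF signals in, gait phase out), the sketch below hand-rolls triangular membership functions, min-based rule firing, and centroid defuzzification in plain numpy. The membership breakpoints, rule base, and numeric phase encoding are illustrative placeholders, not the article's calibrated design.

```python
# Hand-rolled fuzzy inference sketch: normalized heel/toe ground reaction
# forces -> gait-phase score via triangular memberships, min-AND rules, and
# centroid defuzzification. All breakpoints and rules are placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

phase = np.linspace(0.0, 1.0, 201)   # output universe: gait phase on [0, 1]
PHASES = {                           # assumed encoding of the four phases
    "initial_contact": (0.00, 0.15, 0.30),
    "mid_stance":      (0.20, 0.40, 0.60),
    "pre_swing":       (0.50, 0.70, 0.85),
    "swing":           (0.75, 0.90, 1.00),
}

def detect_phase(heel, toe):
    """heel, toe: ground reaction forces normalized to [0, 1]."""
    heel_high, heel_low = tri(heel, 0.5, 1.0, 1.5), tri(heel, -0.5, 0.0, 0.5)
    toe_high, toe_low = tri(toe, 0.5, 1.0, 1.5), tri(toe, -0.5, 0.0, 0.5)
    strength = {                                  # one rule per phase (min as AND)
        "initial_contact": min(heel_high, toe_low),
        "mid_stance":      min(heel_high, toe_high),
        "pre_swing":       min(heel_low, toe_high),
        "swing":           min(heel_low, toe_low),
    }
    agg = np.zeros_like(phase)                    # aggregate clipped output sets
    for name, (a, b, c) in PHASES.items():
        agg = np.maximum(agg, np.minimum(strength[name], tri(phase, a, b, c)))
    return np.sum(phase * agg) / (np.sum(agg) + 1e-9)   # centroid defuzzification

print(detect_phase(heel=0.9, toe=0.1))   # expect a value near "initial contact"
```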
Show Figures

Figure 1. The experimental process of the integrated system in this study.
Figure 2. Rotation angles and DOFs of the robotic orthosis. (a) Range of motion of the hip. (b) Range of motion of the knee.
Figure 3. Gaits of the hip and knee.
Figure 4. Tests of the possible stress points of the feet.
Figure 5. Stress points of the feet.
Figure 6. Schematic of NFES.
Figure 7. Positions of the sensors on the insole.
Figure 8. Fuzzy logic structure.
Figure 9. The fuzzy membership functions of ground reaction forces.
Figure 10. The fuzzy membership functions of gait phases.
Figure 11. The schematic diagram of the area of fuzzy sets.
Figure 12. Flowchart of the gait phase detection.
Figure 13. The schematic diagram of the designed three-loop control.
Figure 14. The signals of the piezoresistive sensors.
Figure 15. PVDF-based NFES sensors working with gait phase detection.
Figure 16. The total system communication and computation time is approximately 5.09 ms.
Figure 17. Comparison of fuzzy logic gait detection and traditional gait detection.
Figure 18. CNC machining.
Figure 19. Assembled orthosis.
Figure 20. Results of the 8-s walking cycle. (a) Tracking results of the hip. (b) Tracking results of the knee.
Figure 21. Comparison of the maximum error and RMSE.
Figure 22. The action decomposition diagram of the robotic orthosis operation.
Figure 23. Results of the first experiment. (a) Tracking results of the hip. (b) Tracking results of the knee.
Figure 24. Results of the second experiment. (a) Tracking results of the hip. (b) Tracking results of the knee.
26 pages, 663 KiB  
Review
The Multifaceted Impact of the SARS-CoV-2 Pandemic on Sexual Health, Function, and Behaviors: Implications for Public Health: A Scoping Review
by Gonzalo R. Quintana
Healthcare 2025, 13(5), 559; https://doi.org/10.3390/healthcare13050559 - 5 Mar 2025
Viewed by 178
Abstract
Background. The SARS-CoV-2 pandemic had a significant impact on sexual health and human behavior, revealing a widespread decline in sexual function and behaviors. Objective. To summarize these findings and highlight their importance for public health, this article discusses the changes observed in sexual function and behavior during the pandemic, as well as potential explanations for these trends. Methods. This study followed the PRISMA-ScR guidelines, using the keyword search commands: “sexual function” AND (“SARS-CoV-2” OR “COVID-19” OR coronavirus) and “sexual behavior*” AND (“SARS-CoV-2” OR “COVID-19” OR coronavirus) in the Scopus and PubMed databases. The search was conducted on 10 March 2024, including articles published from January 2019 to March 2024. Inclusion criteria required studies focusing on sexual health/function during the SARS-CoV-2 pandemic, excluding non-English articles and non-adult populations. Studies were screened based on relevance, methodological rigor, and sample size, with data extraction focusing on sexual behavior/function metrics. Results were synthesized to identify trends and propose explanatory models. Results. While some individuals experienced reductions in sexual desire and activities, others reported increases, indicating varied individual responses to stressors such as a pandemic. Two hypotheses are presented to explain these changes: terror management theory and the dual control model of sexual response. The critical role of public health in addressing sexual health and well-being needs during a health crisis is discussed, emphasizing the importance of providing clear information, ensuring access to remote sexual health services, and reducing stigma. The need to integrate sexual health into the global response to future health crises is highlighted to ensure a comprehensive approach to human well-being. Conclusions. This review shows the multifaceted impact of the pandemic and social distancing on people’s sexual function and behaviors, underscoring the importance of considering sexual health as an integral part of emergency health planning and response, to promote the physical and mental well-being of the population during crises such as the SARS-CoV-2 pandemic.
(This article belongs to the Collection COVID-19: Impact on Public Health and Healthcare)
Show Figures

Figure 1. Flowchart of the search, filtering, and selection process of the articles. Adapted from the PRISMA flowchart [17].
28 pages, 7320 KiB  
Article
Technology for Improving the Accuracy of Predicting the Position and Speed of Human Movement Based on Machine Learning Models
by Artem Obukhov, Denis Dedov, Andrey Volkov and Maksim Rybachok
Technologies 2025, 13(3), 101; https://doi.org/10.3390/technologies13030101 - 3 Mar 2025
Viewed by 409
Abstract
The solution to the problem of insufficient accuracy in determining the position and speed of human movement during interaction with a treadmill-based training complex is considered. Control command generation based on the training complex user’s actions may be performed with a delay, may not take into account the specificity of movements, or be inaccurate due to the error of the initial data. The article introduces a technology for improving the accuracy of predicting a person’s position and speed on a running platform using machine learning and computer vision methods. The proposed technology includes analysing and processing data from the tracking system, developing machine learning models to improve the quality of the raw data, predicting the position and speed of human movement, and implementing and integrating neural network methods into the running platform control system. Experimental results demonstrate that the decision tree (DT) model provides better accuracy and performance in solving the problem of positioning key points of a human model in complex conditions with overlapping limbs. For speed prediction, the linear regression (LR) model showed the best results when the analysed window length was 10 frames. Prediction of the person’s position (based on 10 previous frames) is performed using the DT model, which is optimal in terms of accuracy and computation time relative to other options. The comparison of the control methods of the running platform based on machine learning models showed the advantage of the combined method (linear control function combined with the speed prediction model), which provides an average absolute error value of 0.116 m/s. The results of the research confirmed the achievement of the primary objective (increasing the accuracy of human position and speed prediction), making the proposed technology promising for application in human-machine systems.
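The windowed-regression idea in the abstract (10 previous frames as features, linear regression for speed, decision tree for position) can be illustrated with a short scikit-learn sketch. The synthetic signals, window length, and tree depth below are placeholders, not the article's data or tuned models.

```python
# Sketch of the windowed-regression idea: the last 10 frames of a tracked
# key-point coordinate form the feature vector, a linear model predicts the
# current speed, and a decision tree predicts the one-step-ahead position.
# Synthetic data and hyperparameters are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

WINDOW = 10
rng = np.random.default_rng(0)

t = np.arange(5000) * 0.02                          # 50 fps, 100 s
speed = 1.0 + 0.5 * np.sin(0.1 * t)                 # belt/user speed (m/s)
position = 0.5 + 0.3 * np.sin(0.1 * t + 0.5)        # tracked key-point position (m)
position += 0.005 * rng.standard_normal(t.size)     # tracking noise

def windows(series, w):
    """Stack the w most recent samples preceding each frame."""
    return np.stack([series[i - w:i] for i in range(w, series.size)])

X = windows(position, WINDOW)        # shape (n_frames - 10, 10)
y_speed = speed[WINDOW:]             # speed at the current frame
y_pos = position[WINDOW:]            # position at the current frame

split = int(0.8 * X.shape[0])
speed_model = LinearRegression().fit(X[:split], y_speed[:split])
pos_model = DecisionTreeRegressor(max_depth=8).fit(X[:split], y_pos[:split])

mae = np.mean(np.abs(speed_model.predict(X[split:]) - y_speed[split:]))
print(f"held-out speed MAE: {mae:.3f} m/s")
print("one-step-ahead position prediction:", pos_model.predict(X[split:][:1]))
```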
(This article belongs to the Section Information and Communication Technologies)
Show Figures

Figure 1. Schematic diagram of the research methodology. The diagram illustrates the entire process, from video acquisition and key point extraction, through the stages of preprocessing, model design, and training of ML1 (positional correction), ML2 (speed detection), and ML3 (position prediction), to their subsequent integration into five neural network-based control methods (C1–C5). Different stages of the methodology are highlighted in distinct colours.
Figure 2. Example of inserting noise into the original video data: (a) during model training; (b) during testing. Artificial noise in the form of grey rectangles is used to complicate the operation of the human body model recognition system.
Figure 3. Comparison of LR, XGB, and DT models under conditions of artificial interference (grey rectangles) for reconstructing the correct positions of body segments (green dots and lines) during treadmill movement.
Figure 4. Comparison of models for velocity determination: (a) at low speed; (b) at medium speed; (c) at high speed. The caption at the top right displays the current treadmill speed reference, while the left side shows the speed predictions for different numbers of analysed frames (10, 15, and 20) across various machine learning models applied to the video frames.
Figure 5. Performance visualisation of different neural network methods. Comparison graphs of treadmill control methods C1–C5 are presented, illustrating the predicted speed values relative to the reference speed set by the user.
Figure 6. Visualisation of C3 method operation (the position of the person and the speed of the treadmill in this position). The absence of a linear component in method C3 leads to abrupt changes in speed.
Figure 7. Visualisation of C4 method operation (the position of the person and the speed of the treadmill in this position). The combined approach employed in method C4 maintains a comfortable, smooth speed trajectory throughout the entire time interval.
Figure 8. Visualisation of C5 method operation (the position of the person and the speed of the treadmill in that position). Method C5 is characterised by a very smooth start; however, despite incorporating three components, it does not demonstrate any advantages over method C4.
Figure 9. Test fragments of computer vision technology under different conditions: (a) non-contrasting user’s clothing; (b) no white background and an additional person in the background; (c) an additional person in front of the camera. The results demonstrate the viability of the computer vision technology (for body model recognition) under real-world conditions in the presence of external interference.
Figure 10. Fragments of low-light computer vision technology tests: (a) half of normal; (b) minimum level; (c) minimum level after exposure correction. The results indicate that the computer vision technology (for body model recognition) remains functional under challenging lighting conditions.
23 pages, 10604 KiB  
Article
An Improved MTPA Control Method Based on DTC-SVM Using D-Axis Flux Optimization
by Doo-Il Son and Geun-Ho Lee
Electronics 2025, 14(5), 1006; https://doi.org/10.3390/electronics14051006 - 2 Mar 2025
Viewed by 299
Abstract
This paper proposes an improved Maximum Torque Per Ampere (MTPA) control method based on the Direct Torque Control-Space Vector Modulation (DTC-SVM) algorithm using d-axis flux optimization. The proposed algorithm simplifies the existing DTC-SVM control method by geometrically interpreting its complex equations, thereby providing a more straightforward and efficient approach. The proposed algorithm geometrically computes the d-axis flux reference and compensation values for the MTPA control by continuously monitoring the q-axis flux in real time. Additionally, the compensation value of the d-axis flux reference is employed to compute the magnitude and phase reference values of the DTC-SVM voltage vector, which in turn generates the stator current values that align with the MTPA curve. The effectiveness of the proposed algorithm was validated through simulation results in MATLAB Simulink. When the proposed algorithm was applied, the torque response to the torque command improved compared to the DTC-SVM control. Additionally, for the same torque production, the stator current consumption of the IPMSM was reduced by approximately 12.55%, demonstrating improved efficiency. To further validate the effectiveness of the proposed algorithm, a dynamometer test system was established, and the IPMSM was tested across various speed ranges below the base speed while generating different torque outputs. The torque response dynamics and stator current consumption of the proposed algorithm were then compared with those of the DTC-SVM algorithm, confirming its enhanced performance.
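For readers unfamiliar with MTPA, the block below restates the standard current-domain IPMSM torque equation and MTPA condition as textbook background, with P the pole-pair count, λ_m the permanent-magnet flux linkage, and L_d, L_q the d- and q-axis inductances. The article itself derives an equivalent condition geometrically in terms of the d-axis stator flux, which is not reproduced here.

```latex
% Standard current-domain IPMSM torque and MTPA relations (textbook background;
% the article instead formulates the equivalent condition in the d-axis flux domain).
T_e = \frac{3}{2}\,P\left[\lambda_m i_q + (L_d - L_q)\,i_d i_q\right],
\qquad
i_d^{\mathrm{MTPA}} = \frac{\lambda_m - \sqrt{\lambda_m^{2} + 4\,(L_d - L_q)^{2}\, i_q^{2}}}{2\,(L_q - L_d)} .
```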
Show Figures

Figure 1. Dotted lines represent the x–y coordinate system of the stator flux, while dashed lines represent the d–q rotor reference frame. Solid α–β lines represent the stator coordinate system referenced to the u-phase of the motor. The red and blue dashed lines denote the projection of the stator flux onto the α–β frame, whereas the yellow and green solid lines represent the stator flux components in the rotating d–q coordinate system.
Figure 2. DTC algorithm block diagram.
Figure 3. The blue solid line represents the fixed α–β coordinate system referenced to the u-phase of the motor. The black dashed line denotes the rotor flux position detection region and the space voltage vector sector used for regulating the stator flux trajectory in the DTC algorithm.
Figure 4. (a) Schematic of the stator flux control method in the traditional DTC algorithm. The blue line represents the stator flux trajectory when using the traditional DTC method. The red dotted line indicates the stator flux position at time step k, while the green line denotes the space voltage vector at time step (k−1). (b) Schematic of the stator flux control method in the DTC-SVM method. The red and blue lines represent the space voltage vectors applied to the motor when using the DTC-SVM method, while the green line denotes the space voltage vector diagram at time step (k−1). The black dotted line represents the stator flux trajectory at time step k when using the DTC-SVM method.
Figure 5. (a) Variation of the stator flux according to the torque command when the stator flux reference value is kept constant. The red line represents the trajectory of the stator flux and the variation in the d-axis flux when the stator flux reference value is varied using the ODF-DTC-SVM method, while the blue line shows the trajectory of the stator flux and the variation in the d-axis flux when the stator flux reference value is kept constant. (b) The stator flux command phase angle ρ and the magnitude of the stator flux reference in the proposed algorithm. The green line represents the space voltage vector diagram.
Figure 6. Geometric representation of the optimized d-axis flux DTC-SVM algorithm. The blue line represents the stator flux reference value when the proposed ODF-DTC-SVM control method is used, while the red line represents the stator flux reference value when the traditional DTC-SVM control method is applied. The dotted line indicates the d-axis flux compensation value required for MTPA control.
Figure 7. Comparison of the stator flux trajectories of the two control methods when the torque command is applied on the α-axis. The black dashed line represents the stator flux trajectory when the stator flux reference value is kept constant, while the blue line indicates the variation in the d-axis flux during DTC-SVM control, and the red line shows the d-axis flux compensation value required for MTPA control.
Figure 8. Vector diagram of the phase and magnitude of the modified command voltage vector when the proposed algorithm is used. The black dashed line represents the extension line for geometric analysis, while the blue line indicates the stator flux reference value and the reference angle of the space voltage vector phase for the proposed ODF-DTC-SVM control method. The blue line also represents the stator flux reference value and the reference angle of the space voltage vector phase for the DTC-SVM control method.
Figure 9. (a) Proposed Optimized D-axis Flux Direct Torque Control (ODF-DTC) control block diagram. (b) Algorithm block diagram of the ODF-DTC controller.
Figure 10. Comparison of the torque response when a 1.5 Nm torque command is applied to both the DTC-SVM algorithm and the ODF-DTC-SVM algorithm.
Figure 11. Comparison of the stator flux trajectories when a 1.5 Nm torque command is applied to both the DTC-SVM algorithm and the ODF-DTC-SVM algorithm.
Figure 12. (a) Three-phase stator current waveform when a 1.5 Nm torque is applied using the ODF-DTC-SVM method in simulation. (b) Three-phase stator current waveform when a 1.5 Nm torque is applied using the DTC-SVM method in simulation.
Figure 13. Comparison of the d–q axis current trajectories between the DTC-SVM control method and the proposed control method.
Figure 14. (a) Comparison of the d-axis current values of the two control methods in simulation. (b) Comparison of the q-axis current values of the two control methods in simulation.
Figure 15. (a) Experimental setup. (b) Dynamo system setup.
Figure 16. (a) Three-phase stator current waveform when a 0.5 Nm torque is applied using the proposed method in the actual experiment. (b) Three-phase stator current waveform when a 0.5 Nm torque is applied using the DTC-SVM method in the actual experiment.
Figure 17. (a) Comparison of the stator current trajectories when a 0.5 Nm torque is applied using the two methods in the actual experiments. (b) Comparison of the stator current trajectories when a 1.0 Nm torque is applied using the two methods in the actual experiments.
Figure 18. (a) Three-phase stator current waveform when a 1.0 Nm torque is applied using the proposed method in the actual experiment. (b) Three-phase stator current waveform when a 1.0 Nm torque is applied using the DTC-SVM method in the actual experiment.
Figure 19. (a) Three-phase stator current waveform when a 1.5 Nm torque is applied using the proposed method in the actual experiment. (b) Three-phase stator current waveform when a 1.5 Nm torque is applied using the DTC-SVM method in the actual experiment.
Figure 20. Comparison of the stator current trajectories when a 1.5 Nm torque is applied using the two methods in the actual experiments.
Figure 21. (a) Comparison of the d-axis current values of the two control methods in the actual experiments. (b) Comparison of the q-axis current values of the two control methods in the actual experiments.
Figure 22. Comparison of the d–q axis current trajectories between the DTC-SVM algorithm and the proposed algorithm when a 1.5 Nm torque command is applied.
Figure 23. (a) Comparison of the torque response of the proposed algorithm and the existing control algorithm when a 0.5 Nm torque reference command is applied in the actual experiment. (b) Comparison of the torque response of the proposed algorithm and the existing control algorithm when a 1.0 Nm torque reference command is applied in the actual experiment.
Figure 24. Comparison of the torque response of the proposed algorithm and the existing control algorithm when a 1.5 Nm torque reference command is applied in the actual experiment.
30 pages, 4650 KiB  
Article
Commanded Filter-Based Robust Model Reference Adaptive Control for Quadrotor UAV with State Estimation Subject to Disturbances
by Nigar Ahmed and Nashmi Alrasheedi
Drones 2025, 9(3), 181; https://doi.org/10.3390/drones9030181 - 28 Feb 2025
Viewed by 185
Abstract
Unmanned aerial vehicles must achieve precise flight maneuvers despite disturbances, parametric uncertainties, modeling inaccuracies, and limitations in onboard sensor information. This paper presents a robust adaptive control for trajectory tracking under nonlinear disturbances. Firstly, parametric and modeling uncertainties are addressed using model reference adaptive control principles to ensure that the dynamics of the aerial vehicle closely follow a reference model. To address the effects of disturbances, a modified nonlinear disturbance observer is designed based on estimated state variables. This observer effectively attenuates constant and nonlinear disturbances with variable frequency and magnitude, as well as noise. In the next step, a two-stage sliding mode control strategy is introduced, incorporating adaptive laws and a commanded filter to compute numerical derivatives of the state variables required for control design. An error compensator is integrated into the framework to reduce numerical and computational delays. To address sensor inaccuracies and potential failures, a high-gain observer-based state estimation technique is employed, utilizing the separation principle to incorporate estimated state variables into the control design. Finally, Lyapunov-based stability analysis demonstrates that the system is uniformly ultimately bounded. Numerical simulations on a DJI F450 quadrotor validate the approach’s effectiveness in achieving robust trajectory tracking under disturbances.
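The model-reference idea at the core of the abstract can be shown on a toy problem. The sketch below adapts a single feedforward gain with the classical MIT rule so a scalar first-order plant follows a reference model; the article's commanded-filter sliding-mode design, disturbance observer, and high-gain state estimator are not reproduced, and all numbers are placeholders.

```python
# Minimal scalar MIT-rule MRAC sketch: a feedforward gain is adapted so that a
# first-order plant with unknown gain tracks a first-order reference model.
# Purely illustrative; not the article's quadrotor controller.
import numpy as np

dt, T = 0.001, 10.0
n = int(T / dt)
a = 2.0       # shared pole: plant y' = -a*y + k_p*u, model ym' = -a*ym + k_m*r
k_p = 3.0     # plant gain, assumed unknown to the controller
k_m = 2.0     # reference-model gain (ideal theta = k_m / k_p ~ 0.67)
gamma = 1.0   # adaptation gain

y = ym = theta = 0.0
for k in range(n):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave command
    u = theta * r                           # adjustable feedforward controller
    e = y - ym                              # error w.r.t. the reference model
    theta += dt * (-gamma * e * ym)         # MIT rule: dtheta/dt = -gamma*e*ym
    y += dt * (-a * y + k_p * u)            # plant
    ym += dt * (-a * ym + k_m * r)          # reference model

print(f"adapted gain theta = {theta:.3f} (ideal {k_m / k_p:.3f})")
```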
(This article belongs to the Section Drone Design and Development)
Show Figures

Figure 1. Quadrotor schematic.
Figure 2. Architecture of the cascaded position and attitude control.
Figure 3. Block diagram of the closed-loop system, where i ∈ (φ, θ, ψ) and j ∈ (x, y, z).
Figure 4. Trajectory tracking: aggressive maneuvers [38].
Figure 5. Phase portraits: aggressive maneuvers.
Figure 6. Trajectory tracking: helical [38].
Figure 7. Phase portraits: helical trajectory.
Figure 8. Quadrotor position tracking: aggressive maneuvers [38].
Figure 9. Quadrotor attitude tracking: aggressive maneuvers [38].
Figure 10. RMSE during aggressive maneuvers [38].
Figure 11. Error visualization for quadrotor attitude and position during trajectory tracking.
Figure 12. Error distribution in quadrotor attitude and position tracking visualized through isosurfaces.
Figure 13. Disturbance estimation in the position model during aggressive maneuvers.
Figure 14. Disturbance estimation in the attitude model during aggressive maneuvers.
Figure 15. Quadrotor control inputs during aggressive maneuvers.
Figure 16. Force and torque of each rotor during aggressive maneuvers.
Figure 17. Total power consumed by the DJI F450.
20 pages, 3184 KiB  
Article
Adaptive Path Guidance Law for a Small Fixed-Wing UAS with Bounded Bank Angle
by Suhyeon Kim and Dongwon Jung
Drones 2025, 9(3), 180; https://doi.org/10.3390/drones9030180 - 28 Feb 2025
Viewed by 263
Abstract
This study deals with the path-following guidance of a fixed-wing unmanned aerial system (UAS) in conjunction with parameter adaptation. Utilizing a backstepping control design approach, a path-following control algorithm is formulated for the roll command, accounting for the approximated closed-loop roll control. The inaccurate time constant is estimated by employing a parameter adaptation algorithm. The proposed guidance algorithm is first validated via the hardware-in-the-loop simulation environment, followed by flight tests on an actual UAV platform to demonstrate that both tracking performance and control robustness are improved over various shapes of reference paths.
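Two of the ingredients mentioned in the abstract can be sketched generically: mapping a path error to a bounded bank (roll) command, and estimating the closed-loop roll time constant online from measured roll and roll-command histories. The sketch below uses a simple saturated proportional-style guidance law and a plain gradient estimator; both are generic stand-ins, not the paper's backstepping law or its adaptation algorithm, and all gains are placeholders.

```python
# Generic sketch: (1) cross-track error -> bounded roll command, and
# (2) gradient estimation of tau in the first-order roll model
# phi' = (phi_cmd - phi) / tau. Not the article's design.
import numpy as np

G = 9.81
PHI_MAX = np.radians(30.0)          # bank-angle bound

def roll_command(cross_track_err, course_err, v, k_d=0.05, k_chi=1.2):
    """Saturated proportional-style lateral guidance (illustrative only)."""
    a_cmd = -k_d * cross_track_err - k_chi * v * course_err   # lateral accel demand
    return float(np.clip(np.arctan2(a_cmd, G), -PHI_MAX, PHI_MAX))

def estimate_tau(phi, phi_cmd, dt, tau0=0.5, gamma=50.0):
    """Gradient estimate of the roll time constant from logged data."""
    theta = 1.0 / tau0                        # estimate of 1/tau
    phi_dot = np.gradient(phi, dt)
    for pd, p, pc in zip(phi_dot, phi, phi_cmd):
        err = pd - theta * (pc - p)           # model prediction error
        theta += gamma * err * (pc - p) * dt  # gradient-descent update
    return 1.0 / max(theta, 1e-3)

# Tiny usage example with synthetic roll data from a true tau of 0.4 s.
dt, tau_true = 0.01, 0.4
t = np.arange(0, 20, dt)
phi_cmd = 0.3 * np.sign(np.sin(1.5 * t))
phi = np.zeros_like(t)
for k in range(1, t.size):
    phi[k] = phi[k - 1] + dt * (phi_cmd[k - 1] - phi[k - 1]) / tau_true

print("example roll command (deg):", np.degrees(roll_command(15.0, 0.1, 20.0)))
print("estimated roll time constant (s):", estimate_tau(phi, phi_cmd, dt))
```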
(This article belongs to the Special Issue Path Planning, Trajectory Tracking and Guidance for UAVs: 2nd Edition)
Show Figures

Figure 1. Error coordinates definition with respect to the S-F frame (F).
Figure 2. Simulation result for the comparative study: distance error [10].
Figure 3. Course rate error and the estimated time constant of the proposed guidance law.
Figure 4. Simulation results for time constant variations [10].
Figure 5. HILS architecture.
Figure 6. Error states and command input without parameter adaptation.
Figure 7. Error states and command input with parameter adaptation.
Figure 8. Error states and command input with parameter adaptation.
Figure 9. Error states and command input with parameter adaptation (λ0 = 1.5).
Figure 10. Error states and command input with parameter adaptation [Flight Test: Case I]. (a) UAV test-bed platform (Skywalker Eve-2000). (b) Architecture of the flight control system [34]. (c) Reference path and actual trajectory. (d) Error states. (e) φ vs. φ_c. (f) Estimated time constant.
Figure 11. Error states and command input with parameter adaptation [Flight Test: Case II].
23 pages, 1930 KiB  
Article
Event-Driven Prescribed-Time Tracking Control for Multiple UAVs with Flight State Constraints
by Xueyan Han, Peng Yu, Maolong Lv, Yuyuan Shi and Ning Wang
Machines 2025, 13(3), 192; https://doi.org/10.3390/machines13030192 - 27 Feb 2025
Viewed by 94
Abstract
Consensus tracking control for multiple UAVs demonstrates critical theoretical value and application potential, improving system robustness and addressing challenges in complex operational environments. This paper addresses the challenge of event-triggered prescribed-time synchronization tracking control for 6-DOF fixed-wing UAVs with state constraints. We propose [...] Read more.
Consensus tracking control for multiple UAVs demonstrates critical theoretical value and application potential, improving system robustness and addressing challenges in complex operational environments. This paper addresses the challenge of event-triggered prescribed-time synchronization tracking control for 6-DOF fixed-wing UAVs with state constraints. We propose a novel prescribed-time command filtered backstepping approach to effectively tackle the issues of complexity explosion and singularities. By utilizing a state-transition function, we manage asymmetric time-varying state constraints, including limitations on speed, roll, yaw, and pitch angles in UAVs. The theoretical analysis demonstrates that all signals in the 6-DOF UAV system remain bounded, with tracking errors converging to the origin within the prescribed time. Finally, simulation results validate the effectiveness of the proposed control strategy. Full article
(This article belongs to the Special Issue Intelligent Control Techniques for Unmanned Aerial Vehicles)
Show Figures

Figure 1: Communication topology.
Figure 2: The trajectory of speed and attitude and their tracking errors in the proposed controller. (a) Speed response. (b) Roll-angle response. (c) Pitch-angle response. (d) Yaw-angle response. (e) Tracking error of speed. (f) Tracking error of roll angle. (g) Tracking error of pitch angle. (h) Tracking error of yaw angle.
Figure 3: Speed and attitude tracking response under the comparative controller and the proposed controller. (a) Speed response. (b) Roll-angle response. (c) Pitch-angle response. (d) Yaw-angle response.
Figure 4: Triggering time interval. (a) Data T_x1. (b) Data δ1. (c) Data T_x2. (d) Data δ2. (e) Data T_x3. (f) Data δ3.
Figure 5: The number of times an event was triggered.
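The abstract above combines prescribed-time tracking with event-driven control updates. As a generic, hedged sketch (a standard relative-threshold triggering rule, not the specific condition derived in the paper; the thresholds and the stand-in control signal are arbitrary), an event trigger that withholds transmissions until the command has changed enough looks like this:

```python
import math

# Generic relative-threshold event trigger (illustrative only): the actuator
# command is re-transmitted only when the freshly computed control deviates
# sufficiently from the last transmitted one.

class EventTrigger:
    def __init__(self, delta=0.1, m=0.01):
        self.delta, self.m = delta, m      # relative and absolute thresholds
        self.u_last = None                 # last transmitted command

    def update(self, u_new):
        """Return (command_to_apply, triggered_flag)."""
        if self.u_last is None or abs(u_new - self.u_last) >= self.delta * abs(u_new) + self.m:
            self.u_last = u_new            # event fires: transmit the new command
            return u_new, True
        return self.u_last, False          # otherwise hold the previous command

if __name__ == "__main__":
    trig = EventTrigger()
    events = 0
    for k in range(1000):                  # 10 s at 100 Hz
        u = math.sin(0.5 * k * 0.01)       # stand-in for the computed control signal
        _, fired = trig.update(u)
        events += fired
    print(f"{events} transmissions out of 1000 control steps")
```

The point of such a rule is exactly the saving quantified in Figure 5 of the article: far fewer actuator updates than controller evaluations, at the cost of a bounded command mismatch between events.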
19 pages, 21047 KiB  
Article
Real-Time Localization for an AMR Based on RTAB-MAP
by Chih-Jer Lin, Chao-Chung Peng and Si-Ying Lu
Actuators 2025, 14(3), 117; https://doi.org/10.3390/act14030117 - 27 Feb 2025
Viewed by 252
Abstract
This study aimed to develop a real-time localization system for an AMR (autonomous mobile robot), which utilizes the Robot Operating System (ROS) Noetic version in the Ubuntu 20.04 operating system. RTAB-MAP (Real-Time Appearance-Based Mapping) is employed for localization, integrating with an RGB-D camera [...] Read more.
This study aimed to develop a real-time localization system for an AMR (autonomous mobile robot), which utilizes the Robot Operating System (ROS) Noetic version in the Ubuntu 20.04 operating system. RTAB-MAP (Real-Time Appearance-Based Mapping) is employed for localization, integrating with an RGB-D camera and a 2D LiDAR for real-time localization and mapping. The navigation was performed using the A* algorithm for global path planning, combined with the Dynamic Window Approach (DWA) for local path planning. It enables the AMR to receive velocity control commands and complete the navigation task. RTAB-MAP is a graph-based visual SLAM method that combines loop-closure detection with graph optimization. Three graph optimization methods, i.e., TORO (Tree-based Network Optimizer), g2o (General Graph Optimization), and GTSAM (Georgia Tech Smoothing and Mapping), were applied to both the RTAB-MAP and AMCL (Adaptive Monte Carlo Localization) methods, and the resulting maps were evaluated with RTAB-MAP localization and AMCL in a high-similarity long corridor environment. Finally, the TORO, g2o, and GTSAM methods were compared to assess the localization accuracy in the long corridor using the RGB-D camera and the 2D LiDAR. Full article
(This article belongs to the Special Issue Actuators in Robotic Control—3rd Edition)
Show Figures

Figure 1: Block diagram of the AMR for this experiment.
Figure 2: (a) Architecture of RTAB-MAP. (b) Flowchart of the RTAB-MAP method.
Figure 3: (a) Experimental location (AMR moves from A, B, C, D, E, to F); (b) graph optimization setup for RTAB-MAP with TORO.
Figure 4: (a) Loop closure detection for time t = 00:10. (b) Loop closure detection for time t = 01:02. (c) Loop closure detection for initial time t = 02:07. (d) Loop closure detection for time t = 03:21. (e) Loop closure detection for time t = 04:24.
Figure 5: Localization graph for RTAB-MAP with TORO.
Figure 6: Localization graph for RTAB-MAP with g2o.
Figure 7: Localization graph for RTAB-MAP with GTSAM.
Figure 8: Proposed TF tree in ROS.
Figure 9: Move_base node [41].
Figure 10: Recovery behaviors of the move_base node [42].
Figure 11: (a) Obstacle avoidance and loop closure detection; (b) beginning of the task; (c) destination of the obstacle avoidance task.
Figure 12: Navigation results for AMCL with TORO.
Figure 13: Navigation results for RTAB-MAP with TORO.
Figure 14: Navigation photos for the proposed RTAB-MAP with TORO.
Figure 15: (a) Obstacle avoidance trajectories of TORO for RTAB-MAP. (b) Obstacle avoidance trajectories of g2o for RTAB-MAP. (c) Obstacle avoidance trajectories of GTSAM for RTAB-MAP.
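For readers unfamiliar with the local planner named in the abstract above, the sketch below outlines the Dynamic Window Approach in its generic textbook form. It is not the ROS dwa_local_planner configuration used in the article; the sampling resolution, cost weights, collision threshold, and robot limits are illustrative assumptions.

```python
import numpy as np

# Generic DWA sketch: sample velocity commands inside a window around the
# current velocity, roll each candidate forward for a short horizon, and pick
# the one that best trades off goal progress, obstacle clearance, and speed.

def simulate(pose, v, w, horizon=1.0, dt=0.1):
    x, y, th = pose
    traj = []
    for _ in range(int(horizon / dt)):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return np.array(traj)

def dwa_command(pose, vel, goal, obstacles,
                v_lim=(0.0, 0.5), w_lim=(-1.0, 1.0), acc=(0.2, 1.0)):
    v0, w0 = vel
    best, best_cost = (0.0, 0.0), np.inf
    for v in np.linspace(max(v_lim[0], v0 - acc[0]), min(v_lim[1], v0 + acc[0]), 7):
        for w in np.linspace(max(w_lim[0], w0 - acc[1]), min(w_lim[1], w0 + acc[1]), 9):
            traj = simulate(pose, v, w)
            clearance = min(np.linalg.norm(traj - o, axis=1).min() for o in obstacles)
            if clearance < 0.2:            # candidate comes too close; discard it
                continue
            goal_cost = np.linalg.norm(traj[-1] - goal)
            cost = 1.0 * goal_cost + 0.5 / clearance - 0.2 * v
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best                            # (linear, angular) velocity command

if __name__ == "__main__":
    cmd = dwa_command(pose=(0.0, 0.0, 0.0), vel=(0.1, 0.0),
                      goal=np.array([2.0, 0.5]),
                      obstacles=[np.array([1.0, 0.0])])
    print("velocity command (v, w):", cmd)
```

In the article this role is played by the move_base local planner, which forwards the selected (v, w) pair to the AMR's base controller at each cycle.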
19 pages, 2689 KiB  
Article
Visual Servo Tracking Control and Scene Depth Identification of Mobile Robots with Velocity Saturation Constraints
by Qiaomei Zhang, Baoquan Li and Fuyun Sun
Mathematics 2025, 13(5), 790; https://doi.org/10.3390/math13050790 - 27 Feb 2025
Viewed by 270
Abstract
Velocity saturation constraints are a significant issue for wheeled mobile robots (WMRs) when designing kinematics-based control laws. To handle the problem of velocity saturation constraints, a novel monocular visual servoing controller is developed for WMRs to solve tracking problems and enable unknown depth [...] Read more.
Velocity saturation constraints are a significant issue for wheeled mobile robots (WMRs) when designing kinematics-based control laws. To handle the problem of velocity saturation constraints, a novel monocular visual servoing controller is developed for WMRs to solve tracking problems and enable unknown depth estimation. By analyzing the kinematic model of the robot system and employing the homography decomposition technique, measurable signals are obtained to develop a visual tracking error model for non-holonomic mobile robots. To ensure that the velocity commands are consistently constrained within the allowed limits, a saturation function is employed in the designed visual servoing control law. Furthermore, an adaptive updating law is designed to estimate the unknown depth information. The boundedness of the velocity commands is analyzed to evaluate the saturation performance of the developed visual servoing controller. With the aid of Lyapunov techniques and Barbalat’s lemma, the stability of this scheme is demonstrated. Simulations and experiments verify the performance of the proposed method. Full article
Show Figures

Figure 1: Relationships in the coordinate system.
Figure 2: Block diagram of the mobile robot system.
Figure 3: Motion trajectories of the mobile robot.
Figure 4: Evolution of the robot errors.
Figure 5: Velocities of the mobile robot. The yellow dotted lines denote the velocity saturation limits.
Figure 6: Image paths of feature points.
Figure 7: Evolution of d̂* when using the adaptive updating law (dashed line: true value of d*).
Figure 8: Evolution of the robot errors with noise.
Figure 9: Velocities of the mobile robot with noise. The yellow dotted lines denote the velocity saturation limits.
Figure 10: Evolution of d̂* by the adaptive updating law with noise (dashed line: true value of d*).
Figure 11: Experimental setup: visual features and mobile robot platform.
Figure 12: Motion trajectories of the current and desired frames under F*. The first and second plots correspond to two different velocity constraints set using the control method introduced in this work, while the third plot corresponds to the previous method in [40]. Red solid lines: desired motion trajectories; blue solid lines: current motion trajectories.
Figure 13: Evolution of trajectory tracking errors. The dark blue and light blue solid lines denote the two different velocity constraints imposed with the control method proposed in this work, and the green dashed line corresponds to the previous method in [40].
Figure 14: Velocities of the mobile robot. The yellow dotted lines denote the velocity saturation limits. The dark blue and light blue solid lines denote the method proposed in this work, and the green line corresponds to the previous method in [40].
Figure 15: Image trajectories of feature points.
Figure 16: Evolution of the scene depth (dashed line: true value of d*).
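Two ingredients named in the abstract above, a saturation function on the velocity commands and an adaptive update for the unknown depth, can be sketched as follows. This is an illustrative stand-in, not the controller derived in the paper: the tanh-type saturation, the gradient-style update, and all limits, gains, and signals are assumptions made for the demo.

```python
import numpy as np

# Minimal sketch (not the paper's control law): keep wheeled-robot velocity
# commands inside hardware limits with a smooth saturation, and adapt an
# unknown depth-related parameter with a simple gradient-type update.

def sat(u, u_max):
    """Smooth, bounded version of the command u, with |sat(u)| < u_max."""
    return u_max * np.tanh(u / u_max)

def depth_update(d_hat, regressor, error, gamma=0.5, dt=0.02):
    """One adaptation step; the regressor/error pairing is purely illustrative."""
    return d_hat + gamma * regressor * error * dt

if __name__ == "__main__":
    v_raw, w_raw = 1.8, -3.2                 # unconstrained control outputs
    v_max, w_max = 0.6, 1.2                  # actuator limits (m/s, rad/s)
    print(f"saturated commands: v = {sat(v_raw, v_max):.2f} m/s, "
          f"w = {sat(w_raw, w_max):.2f} rad/s")

    d_hat = 0.5                              # initial depth estimate (m)
    for _ in range(2000):                    # synthetic transient toward a "true" depth of 1.0 m
        d_hat = depth_update(d_hat, regressor=0.2, error=1.0 - d_hat)
    print(f"adapted depth estimate: {d_hat:.2f} m")
```

The tanh form is only one way to meet the boundedness requirement discussed in the abstract; any smooth, monotone function with the same limits would serve the same illustrative purpose here.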
17 pages, 5063 KiB  
Article
Observer-Based Adaptive Robust Force Control of a Robotic Manipulator Integrated with External Force/Torque Sensor
by Zixuan Huo, Mingxing Yuan, Shuaikang Zhang and Xuebo Zhang
Actuators 2025, 14(3), 116; https://doi.org/10.3390/act14030116 - 27 Feb 2025
Viewed by 251
Abstract
Maintaining precise interaction force in uncertain environments characterized by unknown and varying stiffness or location is significantly challenging for robotic manipulators. Existing approaches widely employ a two-level control structure in which the higher level generates the command motion of the lower level according [...] Read more.
Maintaining precise interaction force in uncertain environments characterized by unknown and varying stiffness or location is significantly challenging for robotic manipulators. Existing approaches widely employ a two-level control structure in which the higher level generates the command motion of the lower level according to the force tracking error. However, the low-level motion tracking error is generally ignored completely. Recognizing this limitation, this paper first formulates the low-level motion tracking error as an unknown input disturbance, based on which a dynamic interaction model capturing both structured and unstructured uncertainties is developed. With the developed interaction model, an observer-based adaptive robust force controller is proposed to achieve accurate and robust force modulation for a robotic manipulator. Alongside the theoretical stability analysis, comparative experiments with the classical admittance control (AC), the adaptive variable impedance control (AVIC), and the adaptive force tracking admittance control based on disturbance observer (AFTAC) are conducted on a robotic manipulator across four scenarios. The experimental results demonstrate the significant advantages of the proposed approach over existing methods in terms of accuracy and robustness in interaction force control. For instance, the proposed method reduces the root mean square error (RMSE) by 91.3%, 87.2%, and 75.5% in comparison to AC, AVIC, and AFTAC, respectively, in the experimental scenario where the manipulator is directed to follow a time-varying force while experiencing significant low-level motion tracking errors. Full article
(This article belongs to the Special Issue Motion Planning, Trajectory Prediction, and Control for Robotics)
Show Figures

Figure 1: Overall framework of the proposed observer-based adaptive robust interaction control scheme.
Figure 2: Illustration of modeling the low-level motion tracking error as an unknown input disturbance in the high-level force control module.
Figure 3: Experimental platform.
Figure 4: Experimental tasks for Case 1 and Case 2. The manipulator aims to maintain a constant interaction force at the contact point.
Figure 5: The interaction force tracking results in Case 1.
Figure 6: The deviation signal imposed on the original command of the low-level motion controller.
Figure 7: Parameter convergence process of the proposed ARFC.
Figure 8: The interaction force tracking results in Case 2.
Figure 9: Experimental task for Case 3 and Case 4.
Figure 10: The interaction force tracking results in Case 3.
Figure 11: The interaction force tracking results in Case 4.
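The comparison in the abstract above is reported in terms of the root mean square error (RMSE) of force tracking. For reference, the metric itself is computed as below; the force signals in this demo are synthetic and have no relation to the paper's measurements.

```python
import numpy as np

# RMSE of a force-tracking run: the quantity behind the 91.3%/87.2%/75.5%
# reductions quoted in the abstract (data below are made up for illustration).

def rmse(f_desired, f_measured):
    f_desired, f_measured = np.asarray(f_desired), np.asarray(f_measured)
    return float(np.sqrt(np.mean((f_desired - f_measured) ** 2)))

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 1000)
    f_ref = 10.0 + 2.0 * np.sin(0.5 * t)               # time-varying force command (N)
    f_meas = f_ref + 0.3 * np.random.randn(t.size)     # synthetic tracking result
    print(f"RMSE = {rmse(f_ref, f_meas):.3f} N")
```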