Search Results (248)

Search Parameters:
Keywords = motion target trajectory

28 pages, 4431 KiB  
Article
Parking Trajectory Planning for Autonomous Vehicles Under Narrow Terminal Constraints
by Yongxing Cao, Bijun Li, Zejian Deng and Xiaomin Guo
Electronics 2024, 13(24), 5041; https://doi.org/10.3390/electronics13245041 - 22 Dec 2024
Viewed by 463
Abstract
Trajectory planning in tight spaces presents a significant challenge due to the complex maneuvering required under kinematic and obstacle avoidance constraints. When obstacles are densely distributed near the target state, the limited connectivity between the feasible states and the terminal state can further decrease the efficiency and success rate of trajectory planning. To address this challenge, we propose a novel Dual-Stage Motion Pattern Tree (DS-MPT) algorithm. DS-MPT decomposes the trajectory generation process into two stages: merging and posture adjustment. Each stage utilizes specific heuristic information to guide the construction of the trajectory tree. Our experimental results demonstrate the high robustness and computational efficiency of the proposed method in various parallel parking scenarios. Additionally, we introduce an enhanced driving corridor generation strategy for trajectory optimization, reducing computation time by 54% to 84% compared to traditional methods. Further experiments validate the improved stability and success rate of our approach.
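The tree construction inside DS-MPT is not spelled out in this listing, but the core mechanic of any such planner is expanding kinematically feasible child states from a tree node. Below is a minimal sketch of that expansion step using a kinematic bicycle model; the `Node` layout, wheelbase, step size, steering set, and the stubbed collision check are illustrative assumptions, not the paper's implementation.

```python
import math

WHEELBASE = 2.7   # m, assumed
STEP = 0.5        # m travelled per expansion, assumed

class Node:
    def __init__(self, x, y, theta, parent=None):
        self.x, self.y, self.theta = x, y, theta
        self.parent = parent

def expand(node, steer, direction):
    """Roll the bicycle model one step; direction = +1 forward, -1 reverse."""
    ds = direction * STEP
    theta = node.theta + ds * math.tan(steer) / WHEELBASE
    return Node(node.x + ds * math.cos(theta),
                node.y + ds * math.sin(theta),
                theta, parent=node)

def collision_free(node):
    return True   # placeholder for the obstacle/corridor check

root = Node(0.0, 0.0, 0.0)
children = []
for steer in (-0.5, 0.0, 0.5):        # steering angles, rad
    for direction in (+1, -1):        # parking needs both motion directions
        child = expand(root, steer, direction)
        if collision_free(child):
            children.append(child)
print(len(children), "candidate states")
```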
Figures:
Figure 1. Illustration of (a) merging stage and (b) posture adjustment stage for a pull-in process.
Figure 2. Illustration of (a) the optimal candidate state under the Euclidean distance metric and (b) the actual optimal candidate state at the posture adjustment stage.
Figure 3. Illustration of vehicle geometric parameters and kinematic parameters.
Figure 4. Illustration of the framework.
Figure 5. Illustration of trajectory tree extension.
Figure 6. Illustration of trajectory tree pruning.
Figure 7. Illustration of vehicle shape approximation.
Figure 8. Experimental results for the DS-MPT algorithm.
Figure 9. Comparison of planning results of DS-MPT and the FTHA algorithm.
Figure 10. Experimental results for driving corridor generation strategies.
Figure 11. The optimal trajectories in Case 12, with the local trajectory in the posture adjustment stage highlighted in red.
Figure 12. Optimized control/state profiles in Case 14.
Figure 13. Optimized control/state profiles in Case 15.
Figure 14. Optimized control/state profiles in Case 16.
28 pages, 1413 KiB  
Review
A Comprehensive Review of Control Challenges and Methods in End-Effector Upper-Limb Rehabilitation Robots
by Dalia M. Mahfouz, Omar M. Shehata, Elsayed I. Morgan and Filippo Arrichiello
Robotics 2024, 13(12), 181; https://doi.org/10.3390/robotics13120181 - 18 Dec 2024
Viewed by 496
Abstract
In recent decades, the number of patients suffering from upper-limb disorders that limit their motor abilities has been increasing. One possible solution that has gained extensive research interest is the development of robot-aided rehabilitation training setups, including either end-effector or exoskeleton robots, which have shown various advantages compared to traditional manual rehabilitation therapy. One of the main challenges of these systems is controlling the robot's motion to track a desirable rehabilitation training trajectory while being affected by either voluntary or involuntary human forces, depending on the patient's recovery state. Several previous studies have targeted exoskeleton robotic systems, focusing on their structure, clinical features, and control methods, with limited review of end-effector-based robotic rehabilitation systems. In this regard, an overview of the most common end-effector robotic devices used for upper-limb rehabilitation is provided in this paper, describing their mechanical structure, features, clinical application, commercialization, advantages, and shortcomings. Additionally, a comprehensive review of control methods applied to end-effector rehabilitation robots is presented. These control methods are categorized as conventional, robust, intelligent, and, most importantly, adaptive controllers, implemented to serve diverse rehabilitation control modes; their development, implementation, findings, and possible drawbacks are addressed.
(This article belongs to the Special Issue Neurorehabilitation Robotics: Recent Trends and Novel Applications)
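Among the controller families the review covers, admittance control is a representative example: a measured human interaction force is mapped through a virtual mass–damper–spring into a position correction around the reference trajectory. The following one-degree-of-freedom sketch illustrates the idea; the virtual parameters, time step, and force profile are illustrative assumptions.

```python
import numpy as np

M, B, K = 2.0, 8.0, 50.0       # virtual inertia, damping, stiffness (assumed)
dt = 0.01
x_off, v_off = 0.0, 0.0        # offset from the reference trajectory

def admittance_step(f_human):
    """Integrate M*a + B*v + K*x = f to get the trajectory offset."""
    global x_off, v_off
    a = (f_human - B * v_off - K * x_off) / M
    v_off += a * dt
    x_off += v_off * dt
    return x_off

for t in np.arange(0.0, 1.0, dt):
    f = 5.0 if 0.2 < t < 0.4 else 0.0            # a brief voluntary push
    x_ref = 0.1 * np.sin(2 * np.pi * 0.5 * t)    # rehab training trajectory
    x_cmd = x_ref + admittance_step(f)           # compliant commanded position
print(round(x_cmd, 4))
```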
Figures:
Figure 1. Search methodology diagram.
Figure 2. End-effector and exoskeleton robotic structures for the upper-limb rehabilitation process.
Figure 3. PID control structure for upper-limb robotic rehabilitation.
Figure 4. General impedance and admittance control structures for upper-limb robotic rehabilitation.
Figure 5. Sliding Mode Control (SMC) structure.
Figure 6. Abstracted block diagram of adaptive controllers for upper-limb manipulation.
19 pages, 5449 KiB  
Article
Coordinated Motion Control of Mobile Self-Reconfigurable Robots in Virtual Rigid Framework
by Ruopeng Wei, Yubin Liu, Huijuan Dong, Yanhe Zhu and Jie Zhao
Machines 2024, 12(12), 888; https://doi.org/10.3390/machines12120888 - 5 Dec 2024
Viewed by 538
Abstract
This paper presents a control method for the coordinated motion of a mobile self-reconfigurable robotic system. By utilizing a virtual rigid framework, the system ensures that its configuration remains stable and intact while enabling modular units to collaboratively track the trajectory and velocity required for mobile tasks. The proposed method generates a virtual rigid structure with a specific configuration and introduces an optimized controller with dynamic look-ahead distance and adaptive steering angle. This controller calculates the control parameters needed for the virtual rigid structure to follow the desired trajectory and speed, providing a unified reference framework for the coordinated movement of the module units. A coordination controller, based on kinematics and adaptive sliding mode control, is developed to enable each module to track the motion of the virtual rigid structure, ensuring the entire robotic system follows the target path while maintaining an accurate configuration. Extensive simulations and experiments under various configurations, robot numbers, and environmental conditions demonstrate the effectiveness and robustness of the proposed method. This approach shows strong potential for applications in smart factories, particularly in material transport and assembly line supply.
(This article belongs to the Special Issue Industry 4.0: Intelligent Robots in Smart Manufacturing)
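The "optimized controller with dynamic look-ahead distance" builds on the classic pure pursuit law. As a point of reference, here is a minimal pure pursuit steering computation with a speed-dependent look-ahead distance; the wheelbase, gain, and clamping limits are assumptions for illustration, not the paper's PPC-PID design.

```python
import numpy as np

WHEELBASE = 0.5                   # m, assumed module geometry
K_LD, LD_MIN, LD_MAX = 0.8, 0.3, 2.0

def lookahead(v):
    """Dynamic look-ahead: grows with speed, clamped to [LD_MIN, LD_MAX]."""
    return np.clip(K_LD * v, LD_MIN, LD_MAX)

def pure_pursuit(pose, path, v):
    """pose = (x, y, yaw); path is an (N, 2) array of waypoints."""
    x, y, yaw = pose
    ld = lookahead(v)
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    # goal = first waypoint at least one look-ahead distance away
    goal = path[np.argmax(d >= ld)] if (d >= ld).any() else path[-1]
    alpha = np.arctan2(goal[1] - y, goal[0] - x) - yaw   # goal bearing in body frame
    return np.arctan2(2.0 * WHEELBASE * np.sin(alpha), ld)  # steering angle

path = np.column_stack([np.linspace(0, 10, 200),
                        np.sin(np.linspace(0, 10, 200))])
print(pure_pursuit((0.0, 0.5, 0.0), path, v=1.0))
```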
Figures:
Figure 1. Schematic diagram of a virtual rigid framework.
Figure 2. The hierarchical control structure of PPC-PID.
Figure 3. Pure pursuit control motion model.
Figure 4. Schematic diagram of the motion process of the module units following a virtual rigid structure.
Figure 5. Structure of the coordinated motion controller.
Figure 6. Path-tracking results of the VRS based on PPC-PID control. (a) Path-tracking trajectory of the VRS. (b) The relationship between the motion speed and the steering angle of the VRS.
Figure 7. The relationship between the dynamic look-ahead distance, speed, and path curvature.
Figure 8. Path-tracking results under the static look-ahead PPC of Reference [19]. (a) Results using a small static look-ahead parameter Ld = 1. (b) Results using a large static look-ahead parameter Ld = 5.
Figure 9. Coordinated motion simulation results for the arrow-shaped configuration. (a) Trajectories of each module unit during the coordinated motion process. (b) The tracking errors for each modular unit. (c) The velocity of each modular unit. (d) The velocity error of each modular unit.
Figure 10. Simulation results of the cross-shaped configuration during the coordinated motion process. (a) Trajectories of each module unit during the coordinated motion process. (b) The tracking errors for each modular unit. (c) The velocity of each modular unit. (d) The velocity error of each modular unit.
Figure 11. Tracking errors of the PID controller of Reference [4]. (a) The tracking errors for each modular unit. (b) The velocity errors for each modular unit.
Figure 12. A coordinated motion experiment with two module units arranged in a parallel configuration on a flat surface. The desired trajectory is shown by the red dashed lines.
Figure 13. A coordinated motion experiment with three module units arranged in an arrow-shaped configuration on an uneven dirt surface. The desired trajectory is shown by the red dashed lines.
Figure 14. Coordinated motion performance in the parallel configuration on a flat surface. (a) Trajectories of each module unit during the coordinated motion process. (b) The tracking errors for each modular unit. (c) The velocity of each modular unit. (d) The velocity error of each modular unit.
Figure 15. Coordinated motion performance in the arrow-shaped configuration on an uneven dirt surface. (a) Trajectories of each module unit during the coordinated motion process. (b) The tracking errors for each modular unit. (c) The velocity of each modular unit. (d) The velocity error of each modular unit.
21 pages, 19154 KiB  
Article
Time-Delay-Based Sliding Mode Tracking Control for Cooperative Dual Marine Lifting System Subject to Sea Wave Disturbances
by Yiwen Cong, Gang Li, Jifu Li, Jianyan Tian and Xin Ma
Actuators 2024, 13(12), 491; https://doi.org/10.3390/act13120491 - 2 Dec 2024
Viewed by 422
Abstract
Dual marine lifting systems are complicated, fully actuated mechatronic systems with multi-input and multi-output capabilities. The anti-swing cooperative lifting control of dual marine lifting systems subject to the dual ships' sway, heave, and roll motions remains an open problem, and uncertainty regarding system parameters makes achieving stable performance more challenging. To adjust both the attitude and position of large distributed-mass payloads to their target positions, this paper presents a time-delay-based sliding mode tracking controller for cooperative dual marine lifting systems impacted by sea wave disturbances. Firstly, a dynamic model of a dual marine lifting system is established using Lagrange's method. Then, a kinematic coupling-based cooperative trajectory planning strategy is proposed by analyzing the coupling relationship between the dual marine lifting system and the dual ship motion. After that, an improved sliding mode tracking controller is proposed using time-delay estimation technology, which estimates unknown system parameters online. The finite-time convergence of the full-state variables is rigorously proven. Finally, simulation results verify the designed controller in terms of anti-swing control performance, and hardware experiments revealed that the proposed controller reduces actuator positioning errors by 83.33% compared with existing control methods.
(This article belongs to the Section Control Systems)
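The key idea of time-delay estimation is that the lumped unknown dynamics can be estimated from the input and acceleration one sample earlier and then compensated. A single-joint sketch of a TDE-based sliding mode law is shown below; the plant form m_bar*ddq + H = u, the gains, and the boundary-layer smoothing are illustrative assumptions rather than the paper's multi-DOF controller.

```python
import numpy as np

dt, m_bar = 0.001, 1.0           # sample time, nominal inertia guess (assumed)
lam, k_s, phi = 5.0, 20.0, 0.05  # surface slope, switching gain, boundary layer
u_prev, ddq_prev = 0.0, 0.0      # one-sample-delayed input and acceleration

def tde_smc(e, de, ddq_meas, ddq_ref):
    """e, de: tracking error and its rate; ddq_meas: measured acceleration."""
    global u_prev, ddq_prev
    h_hat = u_prev - m_bar * ddq_prev   # time-delay estimate of the lumped term H
    s = de + lam * e                    # sliding variable
    # computed-torque-like term + smoothed switching term + TDE compensation
    u = m_bar * (ddq_ref + lam * de + k_s * np.tanh(s / phi)) + h_hat
    u_prev, ddq_prev = u, ddq_meas
    return u
```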
Figures:
Figure 1. Lifting operations using the DMLS.
Figure 2. Model of the DMLS.
Figure 3. Block diagram of the proposed time-delay-based sliding mode tracking controller.
Figure 4. Waves and corresponding sea state.
Figure 5. Ship motion caused by sea wave disturbances.
Figure 6. Comparative simulation results.
Figure 7. Payload state of the comparative simulation: (a) proposed controller, (b) EAB controller.
Figure 8. Simulation results with changing payload mass.
Figure 9. Simulation results with changing payload length.
Figure 10. Simulation results with changing working requirements.
Figure 11. Payload state of the robustness experiment: (a) proposed controller, (b) EAB controller.
Figure 12. Marine lifting system experimental platform.
Figure 13. Experimental results.
Full article ">
28 pages, 15457 KiB  
Article
Intelligent Dynamic Trajectory Planning of UAVs: Addressing Unknown Environments and Intermittent Target Loss
by Zhengpeng Yang, Suyu Yan, Chao Ming and Xiaoming Wang
Drones 2024, 8(12), 721; https://doi.org/10.3390/drones8120721 - 29 Nov 2024
Viewed by 427
Abstract
Precise trajectory planning is crucial for enabling unmanned aerial vehicles (UAVs) to execute interference avoidance and target capture actions in unknown environments and when facing intermittent target loss. To address the trajectory planning problem of UAVs under such conditions, this paper proposes a UAV trajectory planning system that includes a predictor and a planner. Specifically, the system employs a bidirectional Temporal Convolutional Network (TCN) and Gated Recurrent Unit (GRU) network algorithm with an adaptive attention mechanism (BITCN-BIGRU-AAM) to train a model that incorporates the historical motion trajectory features of the target and the motion intention inferred by a Dynamic Bayesian Network (DBN). The resulting predictions of the maneuvering target are used as terminal inputs for the planner. An improved Radial Basis Function (RBF) network is utilized in combination with an offline–online trajectory planning method for real-time obstacle avoidance trajectory planning. Additionally, considering future practical applications, the predictor and planner adopt a parallel optimization and correction algorithm structure to ensure planning accuracy and computational efficiency. Simulation results indicate that the proposed method can accurately avoid dynamic interference and precisely capture the target in tasks involving dynamic interference in unknown environments and intermittent target loss, while also meeting system computational capacity requirements.
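The DBN-based intention inference mentioned above reduces, in its simplest discrete form, to a Bayes filter over a small set of maneuver hypotheses: predict through an intention transition model, then reweight by the likelihood of the observed motion. The intentions, transition matrix, and likelihoods below are made-up illustrations of that recursion, not the paper's model.

```python
import numpy as np

intentions = ["evade", "cruise", "approach"]
T = np.array([[0.80, 0.15, 0.05],     # P(intent_t | intent_{t-1}), assumed
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
belief = np.full(3, 1.0 / 3.0)        # uniform prior over intentions

def update(likelihood):
    """likelihood[i] = P(observed motion feature | intention i)."""
    global belief
    belief = T.T @ belief             # predict through the transition model
    belief *= likelihood              # correct with the observation
    belief /= belief.sum()
    return belief

# A burst of high lateral acceleration is most likely under "evade".
print(dict(zip(intentions, update(np.array([0.7, 0.2, 0.1])).round(3))))
```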
Figures:
Figure 1. UAV online obstacle avoidance trajectory diagram under intermittent target loss conditions, where lines of different colors represent the different trajectories planned in real time for the UAV.
Figure 2. Schematic showing the characteristics of UAV dynamics.
Figure 3. Schematic of the principle of target maneuvering intention derivation.
Figure 4. Schematic diagram of the threat of ground radar, where the asterisk (R) represents the ground radar station.
Figure 5. The residual block schematic of the TCN.
Figure 6. The prediction algorithm structure of BITCN-BIGRU-AAM.
Figure 7. A model diagram of general neurons.
Figure 8. Obstacle avoidance trajectory planning based on RBF networks combined with offline–online alteration.
Figure 9. The parallel system structure based on the BITCN-BIGRU-AAM and improved RBF algorithm.
Figure 10. Algorithm prediction results.
Figure 11. Comparison of the prediction effects of different prediction algorithms under random motion of the target.
Figure 12. The predictive performance index diagram of the algorithm, where (a) shows the performance indicators and (b) shows the performance normalization results.
Figure 13. Generation of the sample library.
Figure 14. Regression curve of the training process.
Figure 15. Comparison of online prediction results.
Figure 16. Comparison error of online prediction results.
Figure 17. Online obstacle avoidance trajectory planning under dynamic radius interference, where the black dotted line represents a radius of 150 m, the blue dotted line a radius of 200 m, and the red solid line a radius of 140 m.
Figure 18. UAV state variables under dynamic–static radius joint interference.
Figure 19. Online obstacle avoidance trajectory planning under dynamic position interference, where the black dotted line, blue dashed line, and red solid line show the blind spot at positions ranging from (5000, 1200) to (6000, 1330) meters.
Figure 20. UAV state variables under dynamic–static position joint interference.
Figure 21. Online obstacle avoidance trajectory planning under target prediction position interference, where the change from the blue solid line to the green dashed line represents the target capture area moving from (8680, 1630) to (8680, 1690) meters.
Figure 22. UAV state variables under target prediction position interference.
Figure 23. System performance index.
14 pages, 1805 KiB  
Article
Multi-UAV Collaborative Target Search Method in Unknown Dynamic Environment
by Liyuan Yang, Yongping Hao, Jiulong Xu and Meixuan Li
Sensors 2024, 24(23), 7639; https://doi.org/10.3390/s24237639 - 29 Nov 2024
Viewed by 560
Abstract
The challenge of search inefficiency arises when multiple UAV swarms conduct dynamic target area searches in unknown environments. The primary sources of this inefficiency are repeated searches in the target region and the dynamic motion of targets. To address this issue, we present the distributed adaptive real-time planning search (DAPSO) technique, which enhances the search efficiency for dynamic targets in uncertain mission situations. To minimize repeated searches, UAVs utilize localized communication for information exchange and dynamically update their situational awareness of the mission environment, facilitating collaborative exploration. To mitigate the effects of target mobility, we develop a dynamic mission planning method based on local particle swarm optimization, enabling UAVs to adjust their search trajectories in response to real-time environmental inputs. Finally, we propose a distance-based inter-vehicle collision avoidance strategy to ensure safety during multi-UAV cooperative searches. The experimental findings demonstrate that the proposed DAPSO method significantly outperforms other search strategies in terms of coverage and target detection rates.
(This article belongs to the Section Remote Sensors)
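The local particle swarm optimization at the heart of DAPSO can be sketched with the standard velocity–position update, where each particle is attracted to its personal best and to the best solution within a communication neighbourhood. Everything below (coefficients, neighbourhood radius, and the stand-in fitness) is an illustrative assumption, not the paper's reward design.

```python
import numpy as np

rng = np.random.default_rng(0)
w, c1, c2 = 0.7, 1.5, 1.5                 # common PSO defaults
pos = rng.uniform(0, 100, (20, 2))        # candidate next-waypoints
vel = np.zeros_like(pos)
pbest = pos.copy()

def fitness(p):                           # placeholder for coverage/target reward
    return -np.linalg.norm(p - np.array([60.0, 40.0]), axis=-1)

for _ in range(50):
    # local best: best personal best within a 30 m communication neighbourhood
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    nbr = d < 30.0
    lbest = np.array([pbest[nbr[i]][fitness(pbest[nbr[i]]).argmax()]
                      for i in range(len(pos))])
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (lbest - pos)
    pos += vel
    better = fitness(pos) > fitness(pbest)
    pbest[better] = pos[better]
print(pbest[fitness(pbest).argmax()].round(2))
```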
Figures:
Figure 1. UAV coordinated target search scenario.
Figure 2. Schematic diagram of the sensor.
Figure 3. Task area rasterization.
Figure 4. Multi-UAV cooperative search framework.
Figure 5. Measuring distance.
Figure 6. Histogram of the average coverage of the five search algorithms.
Figure 7. Average target detection rate.
Figure 8. Target detection probability convergence plot.
Figure 9. Drone-to-drone distance.
25 pages, 29809 KiB  
Article
A Vision-Based End-to-End Reinforcement Learning Framework for Drone Target Tracking
by Xun Zhao, Xinjian Huang, Jianheng Cheng, Zhendong Xia and Zhiheng Tu
Drones 2024, 8(11), 628; https://doi.org/10.3390/drones8110628 - 30 Oct 2024
Viewed by 1193
Abstract
Drone target tracking, which involves instructing drone movement to follow a moving target, encounters several challenges: (1) traditional methods need accurate state estimation of both the drone and target; (2) conventional Proportional–Derivative (PD) controllers require tedious parameter tuning and struggle with nonlinear properties; and (3) reinforcement learning methods, though promising, rely on the drone's self-state estimation, adding complexity and computational load and reducing reliability. To address these challenges, this study proposes an innovative model-free end-to-end reinforcement learning framework, the VTD3 (Vision-Based Twin Delayed Deep Deterministic Policy Gradient), for drone target tracking tasks. This framework focuses on controlling the drone to follow a moving target while maintaining a specific distance. VTD3 is a pure vision-based tracking algorithm which integrates the YOLOv8 detector, the BoT-SORT tracking algorithm, and the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. It diminishes reliance on GPS and other sensors while simultaneously enhancing the tracking capability for complex target motion trajectories. In a simulated environment, we assess the tracking performance of VTD3 across four complex target motion trajectories (triangular, square, sawtooth, and square wave, including scenarios with occlusions). The experimental results indicate that our proposed VTD3 reinforcement learning algorithm substantially outperforms conventional PD controllers in drone target tracking applications. Across various target trajectories, the VTD3 algorithm demonstrates a significant reduction in average tracking errors along the X-axis and Y-axis of up to 34.35% and 45.36%, respectively. Additionally, it achieves a notable improvement of up to 66.10% in altitude control precision. In terms of motion smoothness, the VTD3 algorithm markedly enhances performance metrics, with improvements of up to 37.70% in jitter and 60.64% in Jerk RMS. Empirical results verify the superiority and feasibility of our proposed VTD3 framework for drone target tracking.
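TD3's distinguishing ingredients are target policy smoothing and clipped double-Q learning, which together set the regression target for both critics. The sketch below computes that target with stubbed target networks; the constants follow common TD3 defaults, and the stub functions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma, noise_clip, a_max = 0.99, 0.2, 0.5, 1.0   # common TD3 defaults

def td3_target(r, s_next, done, actor_tgt, q1_tgt, q2_tgt):
    # target policy smoothing: clipped Gaussian noise on the target action
    noise = np.clip(sigma * rng.standard_normal(), -noise_clip, noise_clip)
    a_next = np.clip(actor_tgt(s_next) + noise, -a_max, a_max)
    # clipped double-Q: take the minimum of the twin target critics
    q_min = min(q1_tgt(s_next, a_next), q2_tgt(s_next, a_next))
    return r + gamma * (1.0 - done) * q_min   # regression target for both critics

# Stub target networks, purely for illustration.
actor = lambda s: 0.5 * np.tanh(s.sum())
q1 = lambda s, a: float(s.sum() + a)
q2 = lambda s, a: float(s.sum() - 0.1 * a)
print(td3_target(r=1.0, s_next=np.array([0.2, -0.1]), done=0.0,
                 actor_tgt=actor, q1_tgt=q1, q2_tgt=q2))
```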
Figures:
Figure 1. Framework of the VTD3 for drone target tracking.
Figure 2. YOLOv8 network architecture.
Figure 3. Workflow of the BoT-SORT tracker.
Figure 4. TD3 network training framework.
Figure 5. Actor and Q network structures.
Figure 6. Simulation process diagram.
Figure 7. Demonstration of various target occlusion scenarios encountered in drone tracking. The scenarios include the following: Unoccluded, where the entire target is visible; Front Occluded, where the front portion of the target is obscured; and Rear Occluded, where the rear part of the target is hidden from view. These scenarios exemplify typical challenges faced in practical drone tracking applications.
Figure 8. Training results of YOLOv8. The blue curves represent the values of each metric for each epoch, while the orange curves show the smoothed results.
Figure 9. Detection results for YOLOv8.
Figure 10. Training rewards over episodes for the TD3 network. The graph shows the total reward per episode (blue) and the moving average reward (orange) throughout the training process. The x-axis represents the number of episodes, while the y-axis indicates the reward value for each episode. The training process is divided into three stages: the random exploration stage (episodes 0–1000), the noisy exploration stage (episodes 1000–1500), and the pure policy stage (episodes 1500–2000). These stages are demarcated by green dashed lines on the graph. The progression of rewards illustrates the learning performance of the TD3 network across different exploration strategies.
Figure 11. Four distinct vehicle motion trajectories are implemented in our experiments: triangular, square, sawtooth, and square wave. The X- and Y-axes in each figure represent the horizontal and vertical coordinates, respectively, measured in meters. The red lines depict the vehicle's movement path. Notably, the square wave trajectory includes three gray boxes representing scenarios where the vehicle is occluded.
Figure 12. Drone tracking performance under four vehicle trajectory patterns (triangular, square, sawtooth, and square wave). Each row presents five plots: X and Y position over time (columns 1–2), altitude (Z) over time (column 3), and X and Y velocity over time (columns 4–5). The red curve represents the vehicle, the blue curve the TD3 controller, and the green curve the PD controller. The y-axis units for the first three columns are meters (m), while the last two columns use meters per second (m/s). The x-axis unit for all plots is seconds (s).
Figure 13. Performance comparison of PD and TD3 controllers in triangular trajectory tracking.
Figure 14. Performance comparison of PD and TD3 controllers in square trajectory tracking.
Figure 15. Performance comparison of PD and TD3 controllers in sawtooth trajectory tracking.
Figure 16. Performance comparison of PD and TD3 controllers in square wave trajectory tracking.
26 pages, 32628 KiB  
Article
Risk-Aware Lane Change and Trajectory Planning for Connected Autonomous Vehicles Based on a Potential Field Model
by Tao Wang, Dayi Qu, Kedong Wang, Chuanbao Wei and Aodi Li
World Electr. Veh. J. 2024, 15(11), 489; https://doi.org/10.3390/wevj15110489 - 27 Oct 2024
Viewed by 1370
Abstract
To enhance the safety of lane changes for connected autonomous vehicles in an intelligent transportation environment, this study draws from potential field theory to analyze variations in the risks that vehicles face under different traffic conditions. The safe minimum vehicle distance is dynamically adjusted, and a comprehensive vehicle risk potential field model is developed. This model systematically quantifies the risks encountered by connected autonomous vehicles during the driving process, providing a more accurate assessment of safety conditions. Subsequently, vehicle motion is decoupled into lateral and longitudinal components within the Frenet coordinate system, with quintic polynomials employed to generate clusters of potential trajectories. To improve computational efficiency, trajectory evaluation metrics are developed based on vehicle dynamics, incorporating factors such as acceleration, jerk, and curvature. An initial filtering process is applied to these trajectories, yielding a refined set of candidates. These candidate trajectories are further assessed using a minimum safety distance model derived from potential field theory, with optimization focusing on safety, comfort, and efficiency. The algorithm is tested in a three-lane curved simulation environment that includes both constant-speed and variable-speed lane change scenarios. Results show that the collision risk between the target vehicle and surrounding vehicles remains below the minimum safety distance threshold throughout the lane change process, ensuring a high level of safety. Furthermore, across various driving conditions, the target vehicle's acceleration, jerk, and trajectory curvature remained well within acceptable limits, demonstrating that the proposed lane change trajectory planning algorithm successfully balances safety, comfort, and smoothness, even in complex traffic environments.
(This article belongs to the Special Issue Motion Planning and Control of Autonomous Vehicles)
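The quintic polynomials used to generate the lateral trajectory clusters are fully determined by six boundary conditions (position, velocity, and acceleration at the start and end of the maneuver). A minimal sketch of that construction is shown below; the 3.5 m offset and 4 s duration are illustrative values, not the paper's parameters.

```python
import numpy as np

def quintic(d0, v0, a0, dT, vT, aT, T):
    """Coefficients of d(t) = c0 + c1 t + ... + c5 t^5 on [0, T]."""
    A = np.array([[1, 0,  0,     0,      0,        0],
                  [0, 1,  0,     0,      0,        0],
                  [0, 0,  2,     0,      0,        0],
                  [1, T,  T**2,  T**3,   T**4,     T**5],
                  [0, 1,  2*T,   3*T**2, 4*T**3,   5*T**4],
                  [0, 0,  2,     6*T,    12*T**2,  20*T**3]])
    b = np.array([d0, v0, a0, dT, vT, aT])
    return np.linalg.solve(A, b)

# Lane change: 3.5 m lateral offset in 4 s, at-rest boundary conditions.
c = quintic(0.0, 0.0, 0.0, 3.5, 0.0, 0.0, T=4.0)
t = np.linspace(0.0, 4.0, 9)
d = np.polyval(c[::-1], t)                               # lateral offset samples
jerk = np.polyval(np.polyder(np.poly1d(c[::-1]), 3), t)  # comfort metric
print(d.round(2))
```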
Figures:
Figure 1. Schematic diagram of the free lane change scenario.
Figure 2. Flow chart of lane change decision making.
Figure 3. The distribution of potential field strength under various vehicle operating conditions.
Figure 4. Schematic diagram of a lane change between the TV and FV.
Figure 5. The impact of the acceleration of the TV and FV on the safe lane change distance.
Figure 6. Schematic diagram of a lane change between the TV and RV.
Figure 7. The impact of the acceleration of the TV and RV on the safe lane change distance.
Figure 8. The distribution of the road boundary potential field.
Figure 9. The distribution of the road centerline potential field.
Figure 10. Relationship between Cartesian coordinates and Frenet coordinates.
Figure 11. Coordinate transformation.
Figure 12. Lateral and longitudinal trajectory clusters.
Figure 13. Three-lane curved road scenarios.
Figure 14. The risk value and total value of the loss function for the three scenarios.
Figure 15. The planned trajectory clusters in the three scenarios.
Figure 16. The motion process of the vehicles across the three scenarios.
Figure 17. The acceleration and jerk curves of the TV across the three traffic scenarios.
Figure 18. The curvature curves of the planned optimal trajectory across the three traffic scenarios.
25 pages, 6821 KiB  
Article
Real-Time Trajectory Planning and Effectiveness Analysis of Intercepting Large-Scale Invading UAV Swarms Based on Motion Primitives
by Yue Zhang, Xianzhong Gao, Jian’an Zong, Zhihui Leng and Zhongxi Hou
Drones 2024, 8(10), 588; https://doi.org/10.3390/drones8100588 - 17 Oct 2024
Viewed by 1060
Abstract
This paper introduces a swift method for intercepting the state trajectories of large-scale invading drone swarms using quadrotor drones. The research concentrates on the design and computation of multi-target interception trajectories, with an analysis of the trajectory state constraints inherent to multi-target interception tasks. Utilizing Pontryagin's minimum principle, we design computationally efficient motion primitives for multi-target interception scenarios, and the durations of these motion primitives inform the design of cost matrices for multi-target interception tasks. In contrast to static planar scenarios, the cost matrix in dynamic scenarios displays significant asymmetry, correlating with the speed and spatial distribution of the targets. We propose an algorithmic framework based on three genetic operators for solving multi-target interception trajectories, offering advantages in solution accuracy and speed over other optimization algorithms. Simulation results from large-scale dynamic target interception scenarios indicate that for an interception task involving 50 targets, the average solution time for trajectories is a mere 3.7 s. Using the proposed methods, we conducted a comparative analysis of factors affecting the performance of interception trajectories in various target interception scenarios. This study represents the first instance in publicly available research of precise evaluations of drone interception trajectories against large-scale flying targets, and it lays the groundwork for further exploration into game-theoretic adversarial cluster interception methods.
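The three genetic operators named above (crossover, variation, and reverse) act on a permutation encoding the interception visit order, with fitness evaluated against the asymmetric cost matrix of primitive durations. The sketch below implements those operators on a random stand-in cost matrix; all sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(rng.choice(len(p1), 2, replace=False))
    child = np.full(len(p1), -1)
    child[i:j] = p1[i:j]
    child[np.where(child < 0)] = [g for g in p2 if g not in p1[i:j]]
    return child

def variation(p):
    """Mutation: swap two random positions."""
    i, j = rng.choice(len(p), 2, replace=False)
    p = p.copy(); p[i], p[j] = p[j], p[i]
    return p

def reverse(p):
    """Reverse a random segment."""
    i, j = sorted(rng.choice(len(p), 2, replace=False))
    p = p.copy(); p[i:j] = p[i:j][::-1]
    return p

cost = rng.uniform(1, 10, (10, 10))               # asymmetric primitive durations
tour = lambda p: cost[p[:-1], p[1:]].sum()        # chained interception cost
pop = [rng.permutation(10) for _ in range(4)]
child = reverse(variation(crossover(pop[0], pop[1])))
print(tour(child).round(2))
```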
Figures:
Figure 1. Quadcopter dynamic model.
Figure 2. A schematic diagram of a defense drone intercepting three invading targets, where the drone intercepts the second, first, and third targets at moments T1, T2, and T3, respectively.
Figure 3. Flowchart for efficient recursive testing of trajectory feasibility.
Figure 4. The cost matrix of motion primitives for planar and spatial targets under different flight speeds, where numbers represent trajectory time: (a) stationary planar target, where the matrix is symmetric; (b) stationary spatial target; (c) moving planar target; (d) moving spatial target.
Figure 5. Histogram of the asymmetry index of the cost matrix in different scenarios.
Figure 6. Schematic diagram of path encoding, where blue represents the defense drone and is the starting point of the path, and red represents the sequence numbers of the n targets.
Figure 7. Schematic diagram of the genetic evolution operators: (a) crossover; (b) variation; (c) reverse.
Figure 8. Comparison of optimization results based on distance and on motion primitives: (a) 3D view of the flight path of the comparison method; (b) 3D view of the flight path of our method; (c) flight state trajectory of the comparison method; (d) flight state trajectory of our method.
Figure 9. Histogram of the statistical results of the comparison method and our method on 1000 sets of data.
Figure 10. Histogram of average planning results and computational efficiency of different optimization algorithms on 20-target scenarios.
Figure 11. Histogram of computational efficiency and error of the proposed method on tasks with different target quantities.
Figure 12. Simulation results for different numbers of replannings on 10 different targets: (a) no replanning, taking 63.6 s; (b) replanning once, taking 60.1 s; (c) replanning three times, taking 60.1 s.
Figure 13. Histogram of the optimal interception trajectory time of our method in different drone performance scenarios.
Figure 14. Schematic diagram of simulation scenarios with different initial positions of the defender.
Figure 15. Simulation results for different initial positions of the defender: (a) initial position 1 (front), taking 287.3 s; (b) initial position 2 (behind), taking 296.4 s; (c) initial position 3 (above), taking 250.2 s; (d) initial position 4 (below), taking 320.1 s.
Figure 16. The performance of our method in different target structure scenarios: (a) circular distribution, taking 93.6 s; (b) triangular distribution, taking 82 s; (c) square distribution, taking 119.9 s; (d) random distribution, taking 87.9 s; (e–h) the convex hulls of the corresponding target group distribution structures.
16 pages, 2157 KiB  
Article
Motion Target Localization Method for Step Vibration Signals Based on Deep Learning
by Rui Chen, Yanping Zhu, Qi Chen and Chenyang Zhu
Appl. Sci. 2024, 14(20), 9361; https://doi.org/10.3390/app14209361 - 14 Oct 2024
Viewed by 797
Abstract
To address the limitations of traditional footstep vibration signal localization algorithms, such as limited accuracy, single-feature extraction, and cumbersome parameter adjustment, a motion target localization method for step vibration signals based on deep learning is proposed. Velocity vectors are used to describe human motion, accommodating the nonlinear motion and complex interactions of moving targets. In the feature extraction stage, a one-dimensional residual convolutional neural network is constructed to extract the time–frequency domain features of the signals, and a channel attention mechanism is introduced to enhance the model's focus on the features of different vibration sensor signals. Furthermore, a bidirectional long short-term memory network is built to learn the temporal relationships between the signal features extracted by the convolution operation. Finally, regression operations are performed through fully connected layers to estimate the position and velocity vectors of the moving target. The dataset consists of footstep vibration signal data from six experimental subjects walking on four different paths, together with the actual motion trajectories of the moving targets obtained using a visual tracking system. Experimental results show that, compared to WT-TDOA and SAE-BPNN, the positioning accuracy of our method is improved by 37.9% and 24.8%, respectively, with the system's average positioning error reduced to 0.376 m.
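Assuming a PyTorch implementation (the listing does not name a framework), the described pipeline can be skeletonized as a 1-D residual convolution block with squeeze-and-excitation-style channel attention, followed by a bidirectional LSTM and a fully connected head regressing the 2-D velocity vector. All layer sizes below are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ResSE1d(nn.Module):
    """1-D residual block with channel (squeeze-and-excitation) attention."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch))
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(ch, ch // 4), nn.ReLU(),
            nn.Linear(ch // 4, ch), nn.Sigmoid())
    def forward(self, x):
        y = self.conv(x)
        w = self.se(y).unsqueeze(-1)         # per-channel attention weights
        return torch.relu(x + y * w)         # residual connection

class StepLocalizer(nn.Module):
    def __init__(self, sensors=4, ch=32, hidden=64):
        super().__init__()
        self.stem = nn.Conv1d(sensors, ch, 7, padding=3)
        self.res = ResSE1d(ch)
        self.lstm = nn.LSTM(ch, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # (vx, vy) regression
    def forward(self, x):                     # x: (batch, sensors, time)
        f = self.res(self.stem(x)).transpose(1, 2)
        h, _ = self.lstm(f)
        return self.head(h[:, -1])            # last time step -> velocity vector

print(StepLocalizer()(torch.randn(8, 4, 256)).shape)  # -> torch.Size([8, 2])
```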
Figures:
Figure 1. Location scenario. This figure shows the experimental scene of vibration signal acquisition for a moving target, including the sensor configuration, a single moving target, and one camera.
Figure 2. Overview of our method. This paper proposes a deep learning-based method for locating moving targets using vibration signals. The approach begins by collecting vibration signal data sequences and recording the actual trajectories of pedestrians on the ground using cameras. Subsequently, supervised learning is employed, leveraging deep learning techniques to extract signal features for training the positioning model. Finally, the two-dimensional velocity vectors of pedestrians are output to estimate their motion trajectories.
Figure 3. Architecture of our method.
Figure 4. The real experimental conditions.
Figure 5. Comparison of the predicted trajectory and the real trajectory.
Figure 6. Velocity vector comparison. (a) Comparison of real velocity and prediction in the X-axis direction. (b) Comparison of real velocity and prediction in the Y-axis direction.
Figure 7. Part of the linear path test results. (a) The test set results of experimenter A. (b) The test set results of experimenter B. (c) The test set results of experimenter C.
Figure 8. Part of the test results for the figure-eight path. (a) The test set results of experimenter A. (b) The test set results of experimenter B. (c) The test set results of experimenter C.
Figure 9. Part of the loop path test results. (a) The test set results of experimenter A. (b) The test set results of experimenter B. (c) The test set results of experimenter C.
Figure 10. Part of the L-path test results. (a) The test set results of experimenter A. (b) The test set results of experimenter B. (c) The test set results of experimenter C.
Figure 11. Test set experimental results.
Figure 12. Comparison of positioning results of different methods.
22 pages, 5833 KiB  
Article
A Novel Hypersonic Target Trajectory Estimation Method Based on Long Short-Term Memory and a Multi-Head Attention Mechanism
by Yue Xu, Quan Pan, Zengfu Wang and Baoquan Hu
Entropy 2024, 26(10), 823; https://doi.org/10.3390/e26100823 - 26 Sep 2024
Viewed by 590
Abstract
To address the complex maneuvering characteristics of hypersonic targets in adjacent space, this paper proposes an LSTM trajectory estimation method combined with the attention mechanism and optimizes the model from the information-theoretic perspective. The method captures the target dynamics by using the temporal processing capability of LSTM, and at the same time improves the efficiency of information utilization through the attention mechanism to achieve accurate prediction. First, a target dynamics model is constructed to clarify the motion behavior parameters. Subsequently, an LSTM model incorporating the attention mechanism is designed, which enables the model to automatically focus on key information fragments in the historical trajectory. In model training, information redundancy is reduced, and information validity is improved through feature selection and data preprocessing. Eventually, the model achieves accurate prediction of hypersonic target trajectories with limited computational resources. The experimental results show that the method performs well in complex dynamic environments with improved prediction accuracy and robustness, reflecting the potential of information theory principles in optimizing the trajectory prediction model.
(This article belongs to the Section Multidisciplinary Applications)
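A compact form of the described architecture is an LSTM encoder over the observed track followed by multi-head self-attention that re-weights the history before a linear head estimates the next state. The sketch below assumes PyTorch and a six-dimensional position–velocity state; the dimensions and head count are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttnLSTMPredictor(nn.Module):
    def __init__(self, state_dim=6, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out = nn.Linear(hidden, state_dim)
    def forward(self, track):                  # track: (batch, T, state_dim)
        h, _ = self.lstm(track)                # temporal encoding of the track
        ctx, _ = self.attn(h, h, h)            # self-attention over the history
        return self.out(ctx[:, -1])            # next-state estimate

model = AttnLSTMPredictor()
obs = torch.randn(16, 100, 6)                  # 100 past [position, velocity] samples
print(model(obs).shape)                        # -> torch.Size([16, 6])
```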
Figures:
Figure 1. Basic structure of LSTM.
Figure 2. LSTM network incorporating the attention mechanism.
Figure 3. Self-attention module.
Figure 4. Multi-head attention module.
Figure 5. Network training flow.
Figure 6. Recognition accuracy of various methods under different input sequences: (a) length = 50; (b) length = 100; (c) length = 150; (d) length = 200; (e) length = 250; (f) length = 300.
Figure 7. Recognition losses of various methods under different input sequences: (a) length = 50; (b) length = 100; (c) length = 150; (d) length = 200; (e) length = 250; (f) length = 300.
Figure 8. Recognition accuracy of various methods at different network layers: (a) m = 1, n = 1; (b) m = 1, n = 2; (c) m = 2, n = 1; (d) m = 2, n = 2; (e) m = 2, n = 3; (f) m = 3, n = 2.
Figure 9. Recognition losses of various methods at different network layers: (a) m = 1, n = 1; (b) m = 1, n = 2; (c) m = 2, n = 1; (d) m = 2, n = 2; (e) m = 2, n = 3; (f) m = 3, n = 2.
Figure 10. Performance of various methods in noisy environments: (a) SNR = −6; (b) SNR = −2; (c) SNR = 2; (d) SNR = 6.
17 pages, 9404 KiB  
Article
SimpleTrackV2: Rethinking the Timing Characteristics for Multi-Object Tracking
by Yan Ding, Yuchen Ling, Bozhi Zhang, Jiaxin Li, Lingxi Guo and Zhe Yang
Sensors 2024, 24(18), 6015; https://doi.org/10.3390/s24186015 - 17 Sep 2024
Cited by 1 | Viewed by 1318
Abstract
Multi-object tracking tasks aim to assign unique trajectory codes to targets in video frames. Most detection-based tracking methods use Kalman filtering algorithms for trajectory prediction, directly utilizing associated target features for trajectory updates. However, this approach often fails under camera jitter and transient target loss in real-world scenarios. This paper rethinks state prediction and fusion based on target temporal features to address these issues and proposes the SimpleTrackV2 algorithm, building on the previously designed SimpleTrack. Firstly, to address the poor prediction performance of linear motion models in complex scenes, we designed a target state prediction algorithm called LSTM-MP, based on long short-term memory (LSTM). This algorithm encodes the target's historical motion information using LSTM and decodes it with a multilayer perceptron (MLP) to achieve target state prediction. Secondly, to mitigate the effect of occlusion on target state saliency, we designed TSA-FF, a target state fusion algorithm based on spatiotemporal attention over target appearance features. TSA-FF calculates adaptive fusion coefficients to enhance target state fusion, thereby improving the accuracy of subsequent data association. To demonstrate the effectiveness of the proposed method, we compared SimpleTrackV2 with the baseline model SimpleTrack on the MOT17 dataset. We also conducted ablation experiments on TSA-FF and LSTM-MP for SimpleTrackV2, exploring the optimal number of fusion frames and the impact of different loss functions on model performance. The experimental results show that SimpleTrackV2 handles camera jitter and target occlusion better, achieving improvements of 1.6%, 3.2%, and 6.1% in MOTA, IDF1, and HOTA, respectively, compared to the SimpleTrack algorithm.
(This article belongs to the Section Sensing and Imaging)
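The fusion idea behind TSA-FF can be illustrated by weighting a target's historical appearance embeddings with a combination of a recency (temporal) term and a similarity-to-latest-frame (spatial) term, then re-normalising the result. The weighting scheme below is a simplified stand-in for the paper's learned attention, with made-up parameters.

```python
import numpy as np

def fuse_appearance(feats, decay=0.9):
    """feats: (T, D) history of L2-normalised appearance embeddings."""
    T = len(feats)
    temporal = decay ** np.arange(T - 1, -1, -1)   # newer frames count more
    sim = feats @ feats[-1]                        # similarity to the latest frame
    w = temporal * np.exp(sim)
    w /= w.sum()                                   # softmax-like fusion coefficients
    fused = (w[:, None] * feats).sum(axis=0)
    return fused / np.linalg.norm(fused)           # unit-length fused feature

rng = np.random.default_rng(0)
hist = rng.normal(size=(5, 128))                   # toy embedding history
hist /= np.linalg.norm(hist, axis=1, keepdims=True)
print(fuse_appearance(hist).shape)                 # -> (128,)
```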
Figures:
Figure 1. Motion state differences and linear Kalman filter predictions for the pedestrian target with video sequence ID 23 in MOT17-09. The linear Kalman filter predicts the center point coordinates and the width and height of the target's bounding box based on a linear motion model. The black curve represents the frame-to-frame differences in the Kalman filter's predictions; the blue and green curves show the differences in the horizontal and vertical coordinates of the target's center point, respectively; the purple and orange curves illustrate the differences in the target's width and height; and the red and cyan curves depict the intersection over union (IoU) of the predicted bounding boxes and the degree of target occlusion.
Figure 2. The target state prediction and fusion pipeline in SimpleTrackV2. SimpleTrackV2 inherits the feature decoupling and association components from SimpleTrack, with the improvements highlighted in the red boxes, which primarily include the design of LSTM-MP for target state prediction and TSA-FF for target state fusion. In the diagram, the blue output lines from feature decoupling represent the extracted positional features, while the purple arrows indicate the extracted appearance features. The LSTM-MP model utilizes the differences in the target's motion state between consecutive frames as input to predict future motion states. The TSA-FF algorithm computes temporal and spatial attention weights to determine fusion weight coefficients, facilitating the effective fusion of target states.
Figure 3. The LSTM-MP model begins by applying a feature transformation that converts the motion features of the target's historical frames into frame-to-frame motion state differences, which are then used as input to the model. The LSTM-MP model encodes and reduces the dimensionality of these features through a multilayer perceptron (MLP) network. Finally, an inverse feature transformation is applied to predict the target's future motion state, thereby achieving effective motion state prediction.
Figure 4. The overall structure of the LSTM-MP model.
Figure 5. The TSA-FF algorithm first computes the temporal and spatial attention weights for the appearance feature vectors of different time frames for the same target. It then sums the self-spatial attention coefficients of each historical frame with the interaction spatial attention coefficients and multiplies this sum by the temporal attention weights to determine the final fusion coefficients. Finally, the appearance features of each historical frame are weighted and averaged using these fusion coefficients, followed by normalization to obtain the fused target appearance features.
22 pages, 6111 KiB  
Article
Research on the Motion Control Strategy of a Lower-Limb Exoskeleton Rehabilitation Robot Using the Twin Delayed Deep Deterministic Policy Gradient Algorithm
by Yifeng Guo, Min He, Xubin Tong, Min Zhang and Limin Huang
Sensors 2024, 24(18), 6014; https://doi.org/10.3390/s24186014 - 17 Sep 2024
Viewed by 1010
Abstract
The motion control system of a lower-limb exoskeleton rehabilitation robot (LLERR) is designed to assist patients in lower-limb rehabilitation exercises. This research designed a motion controller for an LLERR based on the Twin Delayed Deep Deterministic policy gradient (TD3) algorithm to control the lower-limb exoskeleton during gait training in a staircase environment. Commencing with the establishment of a mathematical model of the LLERR, the dynamics during its movement are systematically described. The TD3 algorithm is employed to plan the motion trajectory of the LLERR's right-foot sole, and the target motion curve of the hip (knee) joint is deduced inversely to ensure adherence to human physiological principles during motion execution. The control strategy of the TD3 algorithm ensures that the movement of each joint of the LLERR is consistent with the target motion trajectory. The experimental results indicate that the trajectory tracking errors of the hip (knee) joints are all within 5°, confirming that the LLERR successfully assists patients in completing lower-limb rehabilitation training in a staircase environment. The primary contribution of this study is to propose a non-linear control strategy tailored to the staircase environment, enabling the planning and control of lower-limb joint motions facilitated by the LLERR.
(This article belongs to the Section Sensors and Robotics)
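Deducing the hip and knee target curves from a planned foot-sole path, as described above, is a planar two-link inverse kinematics problem. A minimal sketch is given below; the thigh and shank lengths, the hip-frame convention, and the knee-flexion sign are illustrative assumptions.

```python
import numpy as np

L1, L2 = 0.42, 0.40            # thigh, shank lengths (m), assumed

def leg_ik(x, z):
    """Foot position (x, z) in the hip frame -> (hip, knee) angles, rad."""
    r2 = x * x + z * z
    c_knee = (r2 - L1**2 - L2**2) / (2 * L1 * L2)        # law of cosines
    knee = -np.arccos(np.clip(c_knee, -1.0, 1.0))        # knee flexes one way
    hip = np.arctan2(x, -z) - np.arctan2(L2 * np.sin(knee),
                                         L1 + L2 * np.cos(knee))
    return hip, knee

# Joint targets along a short stair-step foot path (illustrative points).
for x, z in [(0.0, -0.80), (0.15, -0.72), (0.30, -0.65)]:
    hip, knee = leg_ik(x, z)
    print(f"hip {np.degrees(hip):6.1f} deg   knee {np.degrees(knee):6.1f} deg")
```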
Figures:
Figure 1. Operating environment of the lower-limb exoskeleton rehabilitation robot.
Figure 2. Lower-limb exoskeleton rehabilitation robot model.
Figure 3. Gridding of the motion space of the mass point U.
Figure 4. The motion path planning framework of the mass point U.
Figure 5. Critic (Actor) neural network topology.
Figure 6. Path planning process of the TD3 algorithm.
Figure 7. The relationship between the planned path length and the learning times of the mass point U.
Figure 8. Critic of the TD3 algorithm.
Figure 9. Actor of the TD3 algorithm.
Figure 10. Lower-limb exoskeleton right-sole target path.
Figure 11. Schematic diagram of the lower-limb exoskeleton's single-leg movement.
Figure 12. Target motion trajectories of the joints of the lower-limb exoskeleton rehabilitation robot.
Figure 13. Joint angle tracking curves of the lower-limb exoskeleton rehabilitation robot.
Figure 14. Joint angle tracking error curves of the lower-limb exoskeleton rehabilitation robot.
Figure 15. Maximum joint angle error vs. learning iterations.
Figure 16. Experimental prototype testing platform for the lower-limb exoskeleton rehabilitation robot.
Figure 17. Prototype control system testing process of the lower-limb rehabilitation robot.
Figure 18. Joint angle tracking curve of the prototype lower-limb exoskeleton rehabilitation robot.
Figure 19. Joint tracking error of the lower-limb exoskeleton rehabilitation robot prototype.
30 pages, 2615 KiB  
Article
Evaluation of the Monitoring Capabilities of Remote Sensing Satellites for Maritime Moving Targets
by Weiming Li, Zhiqiang Du, Li Wang and Tiancheng Zhou
ISPRS Int. J. Geo-Inf. 2024, 13(9), 325; https://doi.org/10.3390/ijgi13090325 - 11 Sep 2024
Viewed by 951
Abstract
Although the Automatic Identification System (AIS) can be used to monitor ship trajectories, monitoring maritime moving targets with remote sensing satellite clusters has also become a reality. The increasing demand for monitoring poses challenges for the construction of satellites, whose monitoring capabilities [...] Read more.
Although the Automatic Identification System (AIS) can be used to monitor ship trajectories, monitoring maritime moving targets with remote sensing satellite clusters has also become a reality. The increasing demand for monitoring poses challenges for the construction of satellites, whose monitoring capabilities urgently need to be evaluated. Conventional evaluation methods focus on the spatial characteristics of monitoring while neglecting its temporal characteristics and the target's kinematic characteristics. In this study, an evaluation method that integrates the spatial and temporal characteristics of monitoring along with the target's kinematic characteristics is proposed. Firstly, a target motion prediction model for calculating the transfer probability and a satellite observation information calculation model for obtaining observation strips and time windows are established. Secondly, an index system is established, comprising the target detection capability, observation coverage capability, proportion of empty window, dispersion of observation window, and deviation of observation window. Thirdly, a comprehensive evaluation is completed by combining the analytic hierarchy process and the entropy weight method to obtain the monitoring capability score. Finally, simulation experiments are conducted to evaluate the monitoring capabilities of satellites for ship trajectories. The results show that the method is effective when the grid size is between 1.6 and 1.8 times the target size and the task duration is approximately twice the time interval between trajectory points. Furthermore, the method is shown to be usable in various environments. Full article
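Of the two weighting techniques combined in the third step, the entropy weight method is fully mechanical and easy to sketch. The snippet below is a generic illustration, not the authors' code: the index matrix, the AHP weight vector, and the equal-blend combination rule are hypothetical placeholders.

```python
# Entropy weight method sketch. The index matrix, AHP weights, and the
# 50/50 blend below are hypothetical, for illustration only.
import numpy as np

def entropy_weights(X):
    """Rows are evaluation schemes, columns are indices. Assumes all indices
    are benefit-type and non-negative; a real pipeline would first orient
    cost-type indices (e.g., proportion of empty window)."""
    P = X / X.sum(axis=0, keepdims=True)       # column-wise normalization
    k = 1.0 / np.log(X.shape[0])               # entropy normalizing constant
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -k * plogp.sum(axis=0)                 # information entropy per index
    d = 1.0 - E                                # low entropy -> more discriminative
    return d / d.sum()

# hypothetical 4 schemes x 5 indices, columns in the paper's order:
# detection capability, coverage, empty-window proportion (pre-oriented),
# window dispersion, window deviation
X = np.array([[0.80, 0.60, 0.70, 0.50, 0.40],
              [0.70, 0.70, 0.80, 0.60, 0.50],
              [0.90, 0.50, 0.60, 0.40, 0.30],
              [0.60, 0.80, 0.90, 0.70, 0.60]])
w_ahp = np.array([0.35, 0.25, 0.15, 0.15, 0.10])   # hypothetical AHP output
w = 0.5 * w_ahp + 0.5 * entropy_weights(X)         # one common combination rule
scores = X @ w                                     # capability score per scheme
```

The combination step mirrors the paper's idea of balancing subjective weights (AHP) against data-driven weights (entropy); how the two are blended is a design choice, and the equal split above is only one option.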
Show Figures

Figure 1: The relationship between the models and the index system.
Figure 2: Diagram of the task area.
Figure 3: Calculation of the coordinates in a three-dimensional spherical coordinate system.
Figure 4: Distribution diagram of a target's transfer probability.
Figure 5: Scene of satellite side-looking imaging: (a) the process of satellite side-looking imaging; and (b) a cross-sectional view of the scene.
Figure 6: Overlay diagram of the satellite observation strip and the covered task area; yellow represents the area that can be observed by satellites, and red circles represent the task area.
Figure 7: The calculation process for the TDC.
Figure 8: The process of merging time windows.
Figure 9: The calculation process for the DOW: (a) the conversion for measuring the dispersion between periods; and (b) the calculation process for the DOW when there are only two observation windows.
Figure 10: The model for evaluation with the analytic hierarchy process.
Figure 11: Simulated Trajectory 1.
Figure 12: The comparison results of the comprehensive evaluation scores and time consumption for different grid sizes: (a) the comprehensive evaluation score (the blue line represents the score trend; the red dashed line represents the trend boundary); and (b) time consumption comparison of the seven different grid sizes.
Figure 13: The values of the TDC at different grid sizes.
Figure 14: The comprehensive evaluation scores.
Figure 15: The values of each index: (a) index values for $T_{task}$ of $\Delta T$; (b) index values for $T_{task}$ of $1.5\Delta T$; (c) index values for $T_{task}$ of $2\Delta T$; (d) index values for $T_{task}$ of $2.5\Delta T$; and (e) index values for $T_{task}$ of $3\Delta T$.
Figure 16: Simulated trajectories: (a) Trajectory 2; (b) Trajectory 3; and (c) Trajectory 4.
Figure 17: The comprehensive evaluation scores.
Figure 18: The values of each index: (a) index values for Trajectory 1; (b) index values for Trajectory 2; and (c) index values for Trajectory 3.
Figure 19: The comprehensive evaluation scores.
Figure 20: The values of each index: (a) index values for Night Trajectory 2; (b) index values for Night Trajectory 3; and (c) index values for Night Trajectory 4.
Figure 21: The trajectory when covered by clouds (for example, 25%); the numbers represent the IDs of the trajectory points.
Figure 22: The comprehensive evaluation scores.
Figure 23: The values of each index: (a) index values with 25% cloud coverage; (b) index values with 50% cloud coverage; and (c) index values with 75% cloud coverage.
27 pages, 2238 KiB  
Article
Contrastive Transformer Network for Track Segment Association with Two-Stage Online Method
by Zongqing Cao, Bing Liu, Jianchao Yang, Ke Tan, Zheng Dai, Xingyu Lu and Hong Gu
Remote Sens. 2024, 16(18), 3380; https://doi.org/10.3390/rs16183380 - 11 Sep 2024
Viewed by 706
Abstract
Interrupted and multi-source track segment association (TSA) are two key challenges in target trajectory research within radar data processing. Traditional methods often rely on simplistic target-motion models and statistical techniques for track association, leading to unrealistic assumptions, susceptibility [...] Read more.
Interrupted and multi-source track segment association (TSA) are two key challenges in target trajectory research within radar data processing. Traditional methods often rely on simplistic target-motion models and statistical techniques for track association, leading to unrealistic assumptions, susceptibility to noise, and suboptimal performance ceilings. This study proposes a unified framework that addresses the association of both interrupted and multi-source track segments by measuring trajectory similarity. We present TSA-cTFER, a novel network utilizing contrastive learning and a TransFormer Encoder to accurately assess trajectory similarity through learned Representations, computing distances between high-dimensional feature vectors. Additionally, we tackle dynamic association scenarios with a two-stage online algorithm designed to manage tracks that appear or disappear at any time. This algorithm categorizes track pairs into easy and hard groups, employing tailored association strategies for each to achieve precise and robust associations in dynamic environments. Experimental results on real-world datasets demonstrate that the proposed TSA-cTFER network with the two-stage online algorithm outperforms existing methods, achieving 94.59% accuracy on interrupted track segment association tasks and 94.83% on multi-source track segment association tasks. Full article
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
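To make the contrastive mechanism concrete, below is a standard InfoNCE-style loss over paired track-segment embeddings, plus a toy similarity-threshold split in the spirit of the easy/hard grouping. This is a sketch under assumed conventions (cosine similarity, a symmetric loss, hypothetical thresholds), not the authors' exact network or association strategy.

```python
# Contrastive-loss and easy/hard-split sketch (PyTorch). Thresholds and the
# split rule are illustrative assumptions, not the paper's exact strategy.
import torch
import torch.nn.functional as F

def track_contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: z_a[i] and z_b[i] are embeddings of two segments
    of the SAME target (pre/post-interruption, or source-A/source-B views);
    every cross pairing is treated as a negative. Shapes: (N, d)."""
    z_a = F.normalize(z_a, dim=1)            # work in cosine-similarity space
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature     # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0))      # diagonal entries are positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def split_pairs(sim, hi=0.9, lo=0.6):
    """sim: (N, M) similarities between live tracks from two sources.
    Pairs above `hi` count as 'easy' and can be associated directly;
    pairs in [lo, hi) are 'hard' and deferred to a finer second stage."""
    easy = (sim >= hi).nonzero(as_tuple=False)
    hard = ((sim >= lo) & (sim < hi)).nonzero(as_tuple=False)
    return easy, hard

# toy usage: 8 targets embedded in a 128-dimensional feature space
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
loss = track_contrastive_loss(z_a, z_b)
easy, hard = split_pairs(F.normalize(z_a, dim=1) @ F.normalize(z_b, dim=1).t())
```

Lowering the temperature sharpens the softmax over negatives, which is why it is a tuning knob worth ablating, as the article's figure list below suggests.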
Show Figures

Figure 1: TSA-cTFER network overview. This figure illustrates the process of extracting features from track segments using TSA-cTFER and comparing these features to assess the similarity between track segments, thereby determining whether they originate from the same target. The scenario trajectory data include interrupted track segments, such as $\boldsymbol{T}_1^{\mathcal{A}}$ and $\boldsymbol{T}_2^{\mathcal{A}}$, or multi-source track segments, such as $\boldsymbol{T}_1^{\mathcal{A}}$ and $\boldsymbol{T}_4^{\mathcal{B}}$. After processing by TSA-cTFER, each track segment acquires a high-dimensional feature representation ($\boldsymbol{z}_i$, $i = 1, \dots, 4$). Track segments originating from the same target are defined as positive pairs, e.g., $(\boldsymbol{z}_1, \boldsymbol{z}_4)$ and $(\boldsymbol{z}_1, \boldsymbol{z}_2)$, while all others are categorized as negative pairs. Through contrastive learning, the network continuously minimizes distances between positive samples and maximizes distances between negative samples to achieve effective track segment association.
Figure 2: The overall framework of TSA-cTFER. (a) The TSA-cTFER network comprises an input embedding module, a Transformer encoder module, and an output pooling module. For the two distinct tasks, ITSA and MSTSA, a shared feature extraction module is utilized; however, each task employs a different output head to derive trajectory features. (b) Input embedding module: different attributes of the targets, both continuous $\boldsymbol{x}_c$ and discrete $\boldsymbol{x}_d$, are converted into a unified vector representation $\hat{\boldsymbol{x}}$ for easier processing by the network. (c) Transformer encoder: given the discrete points of the trajectory, the Transformer encoder employs a multi-head attention mechanism to extract relationships among these trajectory points.
Figure 3: Construction of positive and negative association pairs. The scene contains two targets and two sources, with tracks from different sources represented in distinct colors. Dotted arrows indicate association pairs: red for positive pairs and black for negative pairs.
Figure 4: Diagram of the two-stage online association algorithm for TSA. Sensors A and B continuously report target positions and other relevant information. The track state update module gathers these target states and employs TSA-cTFER to compute their features. The two-stage association module calculates the similarity between association pairs based on the track features, employing a staged association strategy to achieve pairing among the targets.
Figure 5: The selection process for invalid/easy/hard association pairs.
Figure 6: Four typical scenarios in MTAD. Tracks reported from different sources are plotted in different colors.
Figure 7: The variation of association scores: (a) a track association scenario in the MTAD, with different colors representing the trajectories of different targets; (b) the variation of association scores over time in ITSA; and (c) the variation of association scores over time in MSTSA.
Figure 8: Effect of different pooling strategies: (a) $mP_{CA}$ variation for different pooling strategies; and (b) $mP_{FA}$ variation for different pooling strategies.
Figure 9: Effect of different input trajectory lengths: (a) $mP_{CA}$ and $mP_{FA}$ variation in the ITSA task; and (b) $mP_{CA}$ and $mP_{FA}$ variation in the MSTSA task.
Figure 10: Effect of hidden state size.
Figure 11: Effect of the number of layers.
Figure 12: Effect of training batch size: (a) $mP_{CA}$ and $mP_{FA}$ variation in the ITSA task; and (b) $mP_{CA}$ and $mP_{FA}$ variation in the MSTSA task.
Figure 13: Effect of different contrastive loss temperatures: (a) $mP_{CA}$ variation for different contrastive loss temperatures; and (b) $mP_{FA}$ variation for different contrastive loss temperatures.
Figure 14: A failure scenario in the ITSA task: (a) movement trajectories of targets within the scene, with different colors representing the trajectories of different targets; (b) scores of various association pairs over time; and (c) an example of incorrect association pairs output by TSA-cTFER and the propagation of association errors.