Search Results (108)

Search Parameters:
Keywords = multi-UAV formation

22 pages, 1322 KiB  
Article
A Consensus-Driven Distributed Moving Horizon Estimation Approach for Target Detection Within Unmanned Aerial Vehicle Formations in Rescue Operations
by Salvatore Rosario Bassolillo, Egidio D’Amato and Immacolata Notaro
Drones 2025, 9(2), 127; https://doi.org/10.3390/drones9020127 - 9 Feb 2025
Viewed by 459
Abstract
In recent decades, the increasing employment of unmanned aerial vehicles (UAVs) in civil applications has highlighted the potential of coordinated multi-aircraft missions. Such an approach offers advantages in cost-effectiveness, operational flexibility, and mission success rates, particularly in complex scenarios such as search and rescue operations, environmental monitoring, and surveillance. However, achieving global situational awareness, although essential, remains a significant challenge due to computational and communication constraints. This paper proposes a Distributed Moving Horizon Estimation (DMHE) technique that integrates consensus theory and Moving Horizon Estimation to optimize computational efficiency, minimize communication requirements, and enhance system robustness. The proposed DMHE framework is applied to a formation of UAVs performing target detection and tracking in challenging environments. It provides a fully distributed architecture that enables each UAV to estimate the position and velocity of the other fleet members while simultaneously detecting static and dynamic targets. The effectiveness of the technique is demonstrated through several numerical simulations, including an in-depth sensitivity analysis of key algorithm parameters, such as the fleet network topology and the number of consensus iterations, and an evaluation of robustness against node faults and information losses.
(This article belongs to the Special Issue Resilient Networking and Task Allocation for Drone Swarms)
Figures:
Figure 1: Graphical representation of the communication flow between UAVs during consensus.
Figure 2: Graphical representation of the DMHE flow for the i-th UAV.
Figure 3: Test case #1: communication schemes among the drones. In E_2, each aircraft communicates with two neighbors; in E_4, each aircraft communicates with four neighbors.
Figure 4: Test case #1: comparison of the estimation error made by UAV #1 in estimating the position of UAV #2 as a function of the consensus steps (L) and the communication link schemes (E) with n_t = 2, 3, 4. The middle point of each bar is the average error e_1^{S_2}; the endpoints are the minimum and maximum of the standard deviation sigma_1^{S_2}.
Figure 5: Test case #1: DMHE computation time per estimation step, varying the estimation window size (n_t) and the number of consensus steps (L). The numerical simulations were performed on a laptop with an Apple M3 processor and 16 GB of RAM.
Figure 6: Test case #1: trajectories of UAV #1, UAV #2, and UAV #9 as estimated by UAV #1, with E = E_2, L = 5 consensus steps, and a moving horizon of n_t = 2.
Figure 7: Test case #2: UAV reference trajectories.
Figure 8: Test case #2: communication link scheme during the interruption of data transmission between UAV #4 and UAV #6 and between UAV #5 and UAV #7.
Figure 9: Test case #2: comparison between the estimated and actual X, Y, Z coordinates of UAV #8.
Figure 10: Test case #3: starting positions of the UAVs and the target; solid black lines represent the communication links between aircraft.
Figure 11: Test case #3: estimation error of the target coordinates e_1^{X_T}, e_1^{Y_T}, and e_1^{Z_T} as computed by UAV #1.
Figure 12: Test case #4: estimation error of the target coordinates e_1^{X_T}, e_1^{Y_T}, and e_1^{Z_T} as computed by UAV #1.
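The consensus fusion at the heart of such a distributed scheme can be illustrated with a minimal sketch. This is not the paper's DMHE; it is a plain average-consensus iteration over a ring topology resembling the E_2 scheme, with all sizes, step counts, and values made up for illustration:

```python
import numpy as np

def consensus_step(estimates, adjacency, epsilon=0.2):
    """One synchronous consensus iteration: each node moves its
    estimate toward the average of its neighbours' estimates.

    estimates : (n, d) array, one state estimate per UAV
    adjacency : (n, n) 0/1 symmetric matrix, no self-loops
    epsilon   : step size; must satisfy epsilon < 1 / max_degree
    """
    degrees = adjacency.sum(axis=1, keepdims=True)
    # Laplacian-based update: x_i += eps * sum_j a_ij (x_j - x_i)
    return estimates + epsilon * (adjacency @ estimates - degrees * estimates)

# Ring of 6 UAVs, each communicating with 2 neighbours (E_2-like)
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

x = np.random.default_rng(0).normal(size=(n, 3))  # noisy local estimates
for _ in range(50):                               # L consensus steps
    x = consensus_step(x, A)
# After enough steps, every node holds (approximately) the fleet average.
```

Running more consensus steps per estimation cycle trades communication load for agreement quality, which is exactly the sensitivity the abstract describes studying.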
42 pages, 40649 KiB  
Article
A Multi-Drone System Proof of Concept for Forestry Applications
by André G. Araújo, Carlos A. P. Pizzino, Micael S. Couceiro and Rui P. Rocha
Drones 2025, 9(2), 80; https://doi.org/10.3390/drones9020080 - 21 Jan 2025
Viewed by 717
Abstract
This study presents a multi-drone proof of concept for efficient forest mapping and autonomous operation, framed within the context of the OPENSWARM EU Project. The approach leverages state-of-the-art open-source simultaneous localisation and mapping (SLAM) frameworks, namely LiDAR (Light Detection And Ranging) Inertial Odometry via Smoothing and Mapping (LIO-SAM) and the Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm (DCL-SLAM), seamlessly integrated within the MRS UAV System and Swarm Formation packages. This integration is achieved through a series of procedures compliant with the Robot Operating System (ROS) middleware, including an auto-tuning particle swarm optimisation method for enhanced flight control and stabilisation, which is crucial for autonomous operation in challenging environments. Field experiments conducted in a forest with multiple drones demonstrate the system's ability to navigate complex terrain as a coordinated swarm, accurately and collaboratively mapping forest areas. The results highlight the potential of this proof of concept and contribute to the development of scalable autonomous solutions for forestry management. The findings emphasise the significance of integrating multiple open-source technologies to advance sustainable forestry practices using swarms of drones.
Figures:
Figure 1: System architecture proposed for the multi-drone PoC system.
Figure 2: The world frame w = {e1, e2, e3}, in which the position and orientation of the drone are expressed by the translation r = [x, y, z]^T and the rotation R(phi, theta, psi) to the body frame b = {b1, b2, b3}. The drone heading vector h, the projection of b1 onto span(e1, e2), forms the heading angle eta = atan2(b1^T e2, b1^T e1) = atan2(h(2), h(1)). Figure based on [6].
Figure 3: State estimation scheme, based on [6]: the filters estimate the states simultaneously and can be switched or selected by a user/arbiter.
Figure 4: Simulation of the swarm formation in the forest environment, demonstrating the effectiveness of the simulation tools in evaluating and refining the multi-drone PoC system prior to field experiments. (a) Octomap representation of a simulated forest environment in Gazebo, colored by height. (b) Swarm formation in the simulation environment: dots in three colors (pink, green, blue) are the global maps of each drone; square markers are reference samples from the Octomap planner's desired trajectory; vectors are the outputs of the MPC tracker; solid lines are the actual drone paths; solid red lines show the current swarm formation shape.
Figure 5: Global map service. (a) Overview of the global map integration process: local maps from each drone are collected and aligned using the Iterative Closest Point (ICP) algorithm to create a unified global map. (b) Resulting global map combining the local maps of drone alpha, drone beta, and drone gamma, showing complete coverage of the surveyed area.
Figure 6: Scout v3.
Figure 7: Architecture of the PSO-based tuning procedure for the SE(3) controller: a drone runs ROS for real-time control and state feedback, while a laptop executes the Particle Swarm Optimization (PSO) algorithm in MATLAB; communication between the two enables iterative tuning of the controller parameters.
Figure 8: Flight control optimisation process. (a) Real drone performing PSO-based auto-tuning. (b) PSO convergence graph.
Figure 9: Forest site description. (a) View of the forest site from the drone's perspective. (b) Aerial view showing the diverse canopy structure, from dense evergreen stands to open clearings.
Figure 10: Field experiments in the forest with the multi-drone system in operation (drone alpha in red, drone beta in green, drone gamma in blue).
Figure 11: Progressive mapping of the environment by a single drone at four moments during the field experiment, colored by height; newly captured features are incrementally integrated into the overall representation.
Figure 12: First inter-loop closures between two pairs of drones, crucial for cooperative mapping in multi-robot systems and for reducing errors arising from individual robot uncertainties.
Figure 13: Frequency of inter-loop closures, revealing differences in each drone's contribution to the overall mapping process.
Figure 14: Trajectories executed by the drones during real experiments. (a) Swarm formations at six distinct moments and the overall trajectory of each drone. (b) Global map (red) and drone trajectories overlaid on the forest terrain.
Figure 15: Maps generated by drone alpha, drone beta, and drone gamma, and the Global Map created by the Global Map Service (height-thresholded and downsampled for visualisation).
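The auto-tuning idea can be sketched with a generic PSO loop. The surrogate cost, gain names, and every parameter below are invented for illustration; the paper's actual setup runs PSO in MATLAB against a real drone in the loop, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def tracking_cost(gains):
    """Surrogate cost: pretend the best (kp, kd) pair is (4.0, 1.5).
    A real setup would fly a test trajectory and score tracking error."""
    return np.sum((gains - np.array([4.0, 1.5])) ** 2, axis=-1)

def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over a box-bounded search space."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), cost(pos)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Inertia + pull toward personal best + pull toward global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = cost(pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

gains = pso(tracking_cost, (np.array([0.0, 0.0]), np.array([10.0, 10.0])))
```

The appeal for controller tuning is that PSO needs only cost evaluations, no gradients, so each "evaluation" can be an actual test flight.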
23 pages, 4582 KiB  
Article
Control-Oriented Real-Time Trajectory Planning for Heterogeneous UAV Formations
by Weichen Qian, Wenjun Yi, Shusen Yuan and Jun Guan
Drones 2025, 9(2), 78; https://doi.org/10.3390/drones9020078 - 21 Jan 2025
Viewed by 363
Abstract
To address the trajectory planning problem for heterogeneous UAV formations in complex environments, a trajectory prediction model combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks is designed, and a real-time trajectory planning method is proposed based on this model. By pre-training trajectory prediction networks for various types of UAVs, traditional physics-based models are replaced for flight trajectory prediction. Inspired by Model Predictive Control (MPC), in the trajectory planning stage the method generates multi-step trajectory points using an improved artificial potential field (APF) method, estimates the actual formation trajectory with the prediction network, and, after evaluating the planning costs, optimizes the trajectory through a multi-objective particle swarm optimization (MOPSO) algorithm. During actual flight, the optimized parameters generate trajectory points for the formation to follow. Unlike conventional path planning based on simple constraints, the proposed method plans trajectory points directly on the basis of trajectory tracking performance, ensuring high feasibility for the formation to follow. Experimental results show that the CNN-LSTM network outperforms the other networks in both short-term and long-term trajectory prediction. The proposed trajectory planning method demonstrates significant advantages in formation maintenance, trajectory tracking, and real-time obstacle avoidance, ensuring flight stability and safety while maintaining high-speed flight.
Figures:
Figure 1: Quadrotor UAV formation flight.
Figure 2: Drone formation tracking control scheme.
Figure 3: UAV formation trajectory planning framework.
Figure 4: Trajectory prediction network architecture.
Figure 5: Comparison of 100-step prediction results: (a-f) show the trajectory predictions of the different models, including the 3D trajectory, the x, y, z coordinates, and the roll and pitch angles.
Figure 6: Comparison of 10 s prediction results: (a-f) show the trajectory predictions of the different models, including the 3D trajectory, the x, y, z coordinates, and the roll and pitch angles.
Figure 7: Formation layout of the quadrotor UAVs.
Figure 8: Trajectory planning results of the proposed method: (a-d) the 3D trajectory and the x, y, z coordinates; (e-h) the cost function values J1, J2, J3, and J4.
Figure 9: Trajectories of the two comparison methods: (a,c,e,g) method 1 and (b,d,f,h) method 3, each showing the 3D trajectory and the x, y, z coordinates.
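The multi-step waypoint generation can be illustrated with a textbook artificial-potential-field step: an attractive gradient toward the goal plus the classic repulsive gradient inside each obstacle's influence radius. This is the standard APF, not the paper's improved variant, and all gains, positions, and step counts are made up:

```python
import numpy as np

def apf_step(p, goal, obstacles, k_att=1.0, k_rep=8.0, rho0=2.0, dt=0.05):
    """One artificial-potential-field step for a single waypoint.

    Attractive force pulls toward the goal; each obstacle closer than
    rho0 pushes back with the classic (1/rho - 1/rho0) repulsive gradient.
    """
    force = k_att * (goal - p)
    for obs in obstacles:
        diff = p - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return p + dt * force

goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.3])]   # slightly off-axis, so the path slides past
path = [np.array([0.0, 0.0])]
for _ in range(600):                 # generate a multi-step trajectory
    path.append(apf_step(path[-1], goal, obstacles))
path = np.asarray(path)
```

In the paper's scheme such candidate waypoints are then scored via the learned trajectory predictor and refined by MOPSO; here the raw APF rollout simply reaches the goal while detouring around the obstacle.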
19 pages, 25570 KiB  
Article
Surface Multi-Hazard Effects of Underground Coal Mining in Mountainous Regions
by Xuwen Tian, Xin Yao, Zhenkai Zhou and Tao Tao
Remote Sens. 2025, 17(1), 122; https://doi.org/10.3390/rs17010122 - 2 Jan 2025
Viewed by 482
Abstract
Underground coal mining induces surface subsidence, which in turn affects the stability of slopes in mountainous regions. However, research investigating the coupling between surface subsidence in mountainous regions and the occurrence of multiple surface hazards is scarce. Taking a coal mine in southwestern China as a case study, a detailed catalog of the surface hazards in the study area was created based on multi-temporal satellite imagery interpretation and unmanned aerial vehicle (UAV) surveys. Using interferometric synthetic aperture radar (InSAR) technology and a logistic subsidence prediction method, this study investigated the evolution of surface subsidence induced by underground mining activities and its role in triggering multiple surface hazards. We found that the study area experienced various types of surface hazards, including subsidence, landslides, debris flows, sinkholes, and ground fissures, due to underground mining activities. The InSAR monitoring results showed that the maximum subsidence at the back edge of the slope terrace was 98.2 mm, with the most severe deformation occurring at the mid-slope of the mountain, where the maximum subsidence reached 139.8 mm. The surface subsidence process followed an S-shaped curve comprising initial, accelerated, and residual subsidence stages, and subsidence continued even after coal mining operations concluded. Predictions derived from the logistic model indicate that residual surface subsidence in the study area lasts approximately 1 to 2 years. This study aims to provide a scientific foundation for elucidating the spatiotemporal patterns of subsidence induced by underground coal mining in mountainous regions and its impact on the formation of multiple surface hazards.
Figures:
Graphical abstract.
Figure 1: Geographical location and geological settings of the study area. (a) Location of Yunnan Province and Zhaotong City; (b) location of Zhenxiong County and the study area; (c) digital surface model (DSM) of the study area and the locations of the underground mining working panels; (d) details of working panels 1151 and 1152; (e) geological map of the study area: 1 Lower Ordovician Meitan Formation; 2 Middle Ordovician Baota Formation and the coeval Shizipu Formation; 3 Lower Permian Liangshan Formation; 4 Lower Permian Maokou Formation; 5 Lower Permian Qixia Formation; 6 Upper Permian Xuanwei Formation; 7 Upper Permian Emeishan Basalt Formation; 8 Lower Triassic Feixianguan Formation; 9 Lower Triassic Yongningzhen Formation; 10 Middle Triassic Guanling Formation; 11 Upper Triassic Xujiahe Formation; 12 Middle-Upper Cambrian Loushanguan Formation.
Figure 2: Engineering geological profile (located along line segment I-I' in Figure 1c).
Figure 3: The technical approach in this study.
Figure 4: Three-dimensional model of the actual situation generated by UAV.
Figure 5: Field investigation photos of the study area. (a) Local distribution of landslides and debris flows; (b,c) landslide accumulation; (d,g) two debris flow hazards; (e) fractured joints in the rock mass of the debris flow source area; (f) small temple destroyed in the central part of the debris flow; (h) vertically offset fissures; (i) a sinkhole in the study area; (j) damaged retaining wall; (k) rockfall damage to roads; (l) buildings destroyed by mining.
Figure 6: Sketch of subsidence development for a surface point during coal mining.
Figure 7: Development of cumulative deformation in the study area from January 2022 to June 2024. The old goafs (G1 and G2) are shown as red polygons in (a); the temporal deformation curve monitoring points are shown in (b); black polygons mark the excavation positions of the underground mining panels in (a-h).
Figure 8: Cumulative displacement of points from January 2022 to June 2024. (a) Points around the 1152 working panel (P5-P8). (b) Points around the 1159 and 1151 working panels (P1-P4). (c) Logistic models fitted to the P1 and P4 subsidence processes, predicting the duration of residual subsidence.
Figure 9: Temporal interpretation of the development of surface hazards using optical satellite images and UAV orthophotos. (a-j) Interpretation of optical remote sensing images; (k) interpretation of high-resolution UAV orthophotos; (l) summarized catalog of surface hazards in the study area.
Figure 10: Number and area of surface hazards occurring in the study area during different periods. (a) Number of surface hazards from 2009 to 2024. (b) Frequency of surface hazards compared with monthly precipitation, 2023-2024.
Figure 11: Relationship between the grade of cumulative deformation (G_d) and the landslide area.
Figure 12: Process of surface hazards induced by underground mining in mountainous areas: (a) original slope stage; (b) early underground coal mining stage; (c) increasing subsidence and fissure formation stage; (d) large-scale landslide and debris flow hazard increase stage.
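The logistic subsidence model the abstract describes can be sketched numerically. The synthetic "InSAR" time series, the parameter grids, and the 50%-to-95% definition of residual duration below are illustrative assumptions, not the paper's actual data or fitting procedure:

```python
import numpy as np

def logistic(t, K, r, t0):
    """Logistic (S-shaped) subsidence curve: slow initial subsidence,
    an accelerated phase around t0, then residual subsidence levelling
    off at the total subsidence K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic monthly cumulative subsidence (mm), loosely mimicking an
# InSAR time series; the "true" parameters here are made up.
t = np.arange(0, 30.0)                       # months
rng = np.random.default_rng(2)
obs = logistic(t, K=140.0, r=0.4, t0=12.0) + rng.normal(0, 2.0, t.size)

# Coarse grid search instead of a proper nonlinear least-squares fit
Ks = np.linspace(100, 180, 81)
rs = np.linspace(0.1, 1.0, 91)
t0s = np.linspace(5, 20, 61)
best, best_sse = None, np.inf
for K in Ks:
    for r in rs:
        sse = ((logistic(t[None, :], K, r, t0s[:, None]) - obs) ** 2).sum(axis=1)
        i = np.argmin(sse)
        if sse[i] < best_sse:
            best, best_sse = (K, r, t0s[i]), sse[i]

K, r, t0 = best
# One possible "residual duration": time from 50% to 95% of the total
# subsidence, i.e. ln(19)/r months after the inflection point t0.
residual_months = np.log(19.0) / r
```

With a fitted growth rate r, the tail of the S-curve gives a closed-form handle on how long subsidence persists after extraction stops, which is the kind of 1-2 year residual estimate the abstract reports.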
27 pages, 5910 KiB  
Article
Active Obstacle Avoidance of Multi-Rotor UAV Swarm Based on Stress Matrix Formation Method
by Zhenyue Qiu, Lei Zhang, Yuan Chi and Zequn Li
Mathematics 2025, 13(1), 86; https://doi.org/10.3390/math13010086 - 29 Dec 2024
Viewed by 425
Abstract
To address the formation problem of multi-rotor UAV swarms, this paper adopts a formation control method based on a stress matrix to ensure the stability of the swarm formation. On the basis of achieving the target formation through a stress matrix, the formation can be rotated, scaled, and sheared. When the obstacles are known, these rotation, scaling, and shearing transformations allow the swarm to pass smoothly through the obstacle environment. However, such transformations cannot cope with unknown obstacles. This paper therefore proposes an active obstacle avoidance function for stress-matrix-based swarm formations: using its own detection capability, a UAV performs obstacle avoidance autonomously after detecting an unknown obstacle. Because of the stress-matrix coupling, the swarm formation would be disrupted when the navigator performs active obstacle avoidance; this paper designs virtual UAVs and retains the UAV that controls the swarm's flight trajectory as the only real UAV, ensuring that the formation is not disrupted. Simulation experiments demonstrate the stability of the swarm formation and show that the swarm can pass smoothly through environments containing both known and unknown obstacles.
Figures:
Figure 1: Multi-rotor UAV dynamics model; each circle represents a rotor.
Figure 2: Affine transformations of a nominal configuration; the numbers 1 to 4 label drones 1 to 4.
Figure 3: Schematic diagram of UAV active obstacle avoidance.
Figure 4: Schematic diagram of UAV obstacle avoidance: red circle, guide drone; blue circles, follower drones; gray rectangle, obstacle; arrows, drone velocity directions.
Figure 5: Example of multi-rotor UAV swarm formation: red circle, pilot drone; orange circle, virtual drone; blue circles, follower drones.
Figure 6: Example of multi-rotor UAV swarm formation with drone numbers: red circle, pilot drone; orange circle, virtual drone; blue circles, follower drones.
Figure 7: Formation trajectory simulation, with drone numbers.
Figure 8: Position tracking error.
Figure 9: Acceleration change.
Figure 10: Formation obstacle avoidance trajectory simulation, with drone numbers.
Figure 11: Position tracking error.
Figure 12: Acceleration change.
Figure 13: Formation obstacle avoidance trajectory simulation, with drone numbers.
Figure 14: Position tracking error.
Figure 15: Acceleration change.
Figure 16: Formation obstacle avoidance trajectory simulation, with drone numbers.
Figure 17: Position tracking error.
Figure 18: Acceleration change.
Figure 19: Formation obstacle avoidance trajectory simulation, with drone numbers.
Figure 20: Position tracking error.
Figure 21: Acceleration change.
Figure 22: Composition of the RflySim software platform.
Figure 23: RflySim simulation diagram.
23 pages, 3484 KiB  
Article
Gully Erosion Susceptibility Prediction Using High-Resolution Data: Evaluation, Comparison, and Improvement of Multiple Machine Learning Models
by Heyang Li, Jizhong Jin, Feiyang Dong, Jingyao Zhang, Lei Li and Yucheng Zhang
Remote Sens. 2024, 16(24), 4742; https://doi.org/10.3390/rs16244742 - 19 Dec 2024
Viewed by 570
Abstract
Gully erosion is one of the significant environmental issues facing the black soil regions in Northeast China, and its formation is closely related to various environmental factors. This study employs multiple machine learning models to assess gully erosion susceptibility in this region. The primary objective is to evaluate and optimize the top-performing model under high-resolution UAV data conditions, utilize the optimized best model to identify key factors influencing the occurrence of gully erosion from 11 variables, and generate a local gully erosion susceptibility map. Using 0.2 m resolution DEM and DOM data obtained from high-resolution UAVs, 2,554,138 pixels from 64 gully and 64 non-gully plots were analyzed and compiled into the research dataset. Twelve models, including Logistic Regression, K-Nearest Neighbors, Classification and Regression Trees, Random Forest, Boosted Regression Trees, Adaptive Boosting, Extreme Gradient Boosting, an Artificial Neural Network, and a Convolutional Neural Network, as well as optimized XGBOOST, a CNN with a Multi-Head Attention mechanism, and an ANN with a Multi-Head Attention mechanism, were utilized to evaluate gully erosion susceptibility in the Dahewan area. The performance of each model was evaluated using ROC curves, and the model fitting performance and robustness were validated through Accuracy and Cohen’s Kappa statistics, as well as RMSE and MAE indicators. The optimized XGBOOST model achieved the highest performance with an AUC-ROC of 0.9909, and through SHAP analysis, we identified roughness as the most significant factor affecting local gully erosion, with a relative importance of 0.277195. Additionally, the Gully Erosion Susceptibility Map generated by the optimized XGBOOST model illustrated the distribution of local gully erosion risks. Full article
Figure 1
<p>(<b>a</b>) The schematic location of the study area; (<b>b</b>) a display of the study area; (<b>c</b>) a field photograph of the gully; and (<b>d</b>) a UAV-captured gully image with the highlighted gully area.</p>
Figure 2">
Figure 2
<p>Flowchart of the methodology used in this study.</p>
Figure 3
<p>Maps of geo-environmental factors (GEFs): (<b>a</b>) Altitude, (<b>b</b>) Slope, (<b>c</b>) Aspect, (<b>d</b>) Profile curvature, (<b>e</b>) Plan curvature, (<b>f</b>) Topographic Ruggedness Index, (<b>g</b>) Topographic Position Index, (<b>h</b>) Roughness, (<b>i</b>) LS Factor, (<b>j</b>) Topographic Wetness Index, and (<b>k</b>) Stream Power Index.</p>
Figure 4
<p>Multicollinearity analysis of the geo-environmental factors.</p>
Figure 5
<p>Relative importance of the different geo-environmental factors.</p>
Figure 6
<p>Gully erosion susceptibility mapping using the optimized XGBOOST model.</p>
22 pages, 23478 KiB  
Article
Target Detection and Characterization of Multi-Platform Remote Sensing Data
by Koushikey Chhapariya, Emmett Ientilucci, Krishna Mohan Buddhiraju and Anil Kumar
Remote Sens. 2024, 16(24), 4729; https://doi.org/10.3390/rs16244729 - 18 Dec 2024
Viewed by 720
Abstract
Detecting targets in remote sensing imagery, particularly when identifying sparsely distributed materials, is crucial for applications such as defense, mineral exploration, agriculture, and environmental monitoring. The effectiveness of detection and the precision of the results are influenced by several factors, including sensor configurations, platform properties, interactions between targets and their background, and the spectral contrast of the targets. Environmental factors, such as atmospheric conditions, also play a significant role. Conventionally, target detection in remote sensing has relied on statistical methods that typically assume a linear process for image formation. However, to enhance detection performance, it is critical to account for the geometric and spectral variabilities across multiple imaging platforms. In this research, we conducted a comprehensive target detection experiment using a unique benchmark multi-platform hyperspectral dataset, where man-made targets were deployed on various surface backgrounds. Data were collected using a hand-held spectroradiometer, UAV-mounted hyperspectral sensors, and airborne platforms, all within a half-hour time window. Multi-spectral space-based sensors (i.e., Worldview and Landsat) also flew over the scene and collected data. The experiment took place on 23 July 2021, at the Rochester Institute of Technology’s Tait Preserve in Penfield, NY, USA. We validated the detection outcomes through receiver operating characteristic (ROC) curves and spectral similarity metrics across various detection algorithms and imaging platforms. This multi-platform analysis provides critical insights into the challenges of hyperspectral target detection in complex, real-world landscapes, demonstrating the influence of platform variability on detection performance and the necessity for robust algorithmic approaches in multi-source data integration. Full article
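Several of the detectors compared in this study have compact closed forms; the spectral angle mapper (SAM), for instance, scores each pixel by the angle between its spectrum and a reference target spectrum. Below is a purely illustrative NumPy sketch (the function name, array shapes, and toy data are assumptions, not the paper's code):

```python
import numpy as np

def sam_detector(image, target):
    """Spectral Angle Mapper: angle (radians) between each pixel
    spectrum and the target spectrum; a smaller angle means a
    closer spectral match.

    image:  (rows, cols, bands) hyperspectral cube
    target: (bands,) reference spectrum
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    dots = pixels @ target
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(target)
    # Clip to guard against rounding just outside arccos's domain.
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return angles.reshape(image.shape[:2])

# Toy cube: one pixel matches the target exactly, one only partially.
cube = np.array([[[1.0, 0.0], [0.5, 0.5]]])   # shape (1, 2, 2)
target = np.array([1.0, 0.0])
print(sam_detector(cube, target))  # angle 0 for the matching pixel
```

A detection map is then obtained by thresholding the angle image, which is how the ROC curves over different thresholds arise.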
Graphical abstract
Figure 1">
Figure 1
<p>Study area imaged by the airborne CASI hyperspectral sensor, showing various ground targets at the test site.</p>
Figure 2
<p>True color composite of airborne CASI data and UAV data. The red square outlines the UAV data region, illustrating its smaller coverage compared to CASI. Green boxes on the UAV data highlight the locations of artificial targets, with corresponding field photographs of these targets shown for reference. The enlarged images provide a closer look at the targets against different spectral backgrounds.</p>
Figure 3
<p>Spectral signatures of the artificial target materials, captured through in situ reflectance measurements.</p>
Figure 4
<p>Methodological framework employed for detecting targets in multi-platform hyperspectral imagery.</p>
Figure 5
<p>ROC curves using eight different target detection methods for five different target materials. From left to right, 2D and 3D ROC curves for the airborne CASI sensor and UAV-based Headwall sensor.</p>
Figure 6
<p>Visualization of the target detection map using the CASI dataset, where (<b>a</b>) ground truth map, (<b>b</b>) ACE, (<b>c</b>) MF, (<b>d</b>) CEM, (<b>e</b>) OSP, (<b>f</b>) SAM, (<b>g</b>) TCIMF, (<b>h</b>) KMF, (<b>i</b>) CSCR.</p>
Figure 7
<p>Visualization of the target detection map using the UAV dataset, where (<b>a</b>) ground truth map, (<b>b</b>) ACE, (<b>c</b>) MF, (<b>d</b>) CEM, (<b>e</b>) OSP, (<b>f</b>) SAM, (<b>g</b>) TCIMF, (<b>h</b>) KMF, (<b>i</b>) CSCR.</p>
17 pages, 20436 KiB  
Article
Research on Cooperative Arrival and Energy Consumption Optimization Strategies of UAV Formations
by Hao Liu, Renwen Chen, Xiaohong Yan, Junyi Zhang and Yongjia Nian
Drones 2024, 8(12), 722; https://doi.org/10.3390/drones8120722 - 30 Nov 2024
Viewed by 690
Abstract
The formation operation of unmanned aerial vehicles (UAVs) is a current research hotspot, particularly in specific mission scenarios where UAV formations are required to cooperatively arrive at designated task areas to meet the needs of coordinated operations. This paper investigates the issues of cooperative arrival and energy consumption optimization for UAV formations in such scenarios. First, focusing on rotorcraft UAVs, the flight energy consumption optimization model and cooperative arrival model are derived and constructed. Next, to address the challenges in solving these models, the multi-objective non-convex functions are transformed into single-objective continuous functions, thereby reducing computational complexity. Furthermore, an interior-point-method-based solving strategy is designed by estimating the initial values of the solving parameters. Finally, simulation experiments validate the feasibility and effectiveness of the proposed method. The experimental results show that when optimizing the energy consumption of a formation of five UAVs, the algorithm converges in just 16 iterations, demonstrating its suitability for practical applications. Full article
Figure 1
<p>Classification of UAV formation research.</p>
Figure 2">
Figure 2
<p>Schematic diagram of the UAV formation task scene.</p>
Figure 3
<p>Schematic diagram of UAV formation flight paths and task points (3D)—Scenario 1.</p>
Figure 4
<p>Flight distance of the UAVs.</p>
Figure 5
<p>Flight time and total energy consumption of the UAV formation. (<b>a</b>) Time of flight. (<b>b</b>) Total energy consumption of the UAV formation.</p>
Figure 6
<p>UAV formation flight acceleration and velocity changes. (<b>a</b>) UAV−1. (<b>b</b>) UAV−2. (<b>c</b>) UAV−3. (<b>d</b>) UAV−4. (<b>e</b>) UAV−5.</p>
Figure 7
<p>Statistical results of UAV formation flight time. (<b>a</b>) Acceleration time. (<b>b</b>) Constant-speed flight time.</p>
Figure 8
<p>Schematic diagram of UAV formation flight paths and task points (2D)—Scenario 2.</p>
Figure 9
<p>UAV hovering time.</p>
Figure 10
<p>Convergence of the objective function and constraint conditions.</p>
Figure 11
<p>Average computation time under different UAV formation trajectories. TTR: total task rounds; ATC: average computation time.</p>
Figure 12
<p>The trajectory of the UAV formation and the change in the corresponding objective function fitness value.</p>
Figure 13
<p>Comparison of algorithm running time for different numbers of UAVs. (<b>Left</b>): UAV flight trajectories corresponding to different numbers of UAVs; (<b>Right</b>): comparison of running times for the different algorithms.</p>
27 pages, 5478 KiB  
Article
Multi-UAV Obstacle Avoidance and Formation Control in Unknown Environments
by Yawen Li, Pengfei Zhang, Zhongliu Wang, Dian Rong, Muyang Niu and Cong Liu
Drones 2024, 8(12), 714; https://doi.org/10.3390/drones8120714 - 28 Nov 2024
Viewed by 785
Abstract
To address the issues of local minima, target unreachability, and significant formation disruption during obstacle avoidance in the conventional artificial potential field (APF), a control approach that integrates APF with optimal consensus control to achieve cooperative obstacle avoidance is proposed. Based on a double-integrator multi-UAV formation model with a fixed undirected communication topology, an optimal consensus control protocol incorporating an obstacle avoidance cost function is introduced. This addresses the limitations of APF-based obstacle avoidance while simultaneously managing multi-UAV formation control. Training interactions in randomly generated unknown obstacle environments are conducted using Random Search for Hyperparameter Optimization (RSHO). Combined with the evaluation model, the optimal parameters of the consensus performance index, control consumption performance index, and obstacle avoidance performance index of the multi-UAV formation control system are selected. Furthermore, a virtual repulsive potential field is designed for each UAV to prevent inter-UAV collisions during obstacle avoidance. Simulation results show that the improved APF (IAPF) with optimal consensus control effectively overcomes the limitations of conventional APF. It achieves multi-UAV formation obstacle avoidance control in unknown environments and prevents inter-UAV collisions during the obstacle avoidance process while maintaining formation integrity. It also accelerates formation reconfiguration and convergence, reduces consensus consumption and control loss due to obstacle avoidance, shortens mission time, and enhances obstacle avoidance efficiency, highlighting the superiority of the proposed multi-UAV formation obstacle avoidance. Full article
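For context, the conventional APF repulsion that IAPF-style methods modify can be sketched as a Khatib-style repulsive force that acts only inside an influence radius. The gain `eta` and radius `d0` below are illustrative values, not the paper's parameters:

```python
import numpy as np

def repulsive_force(pos, obstacle, d0=2.0, eta=1.0):
    """Conventional APF repulsive force pushing the UAV away from an
    obstacle when it is inside the influence radius d0 (illustrative
    Khatib-style form, not the paper's exact IAPF formulation)."""
    diff = np.asarray(pos, float) - np.asarray(obstacle, float)
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(diff)          # outside the influence region
    # F = eta * (1/d - 1/d0) / d^2 * (diff / d): gradient of the
    # repulsive potential 0.5 * eta * (1/d - 1/d0)^2.
    return eta * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

# The force magnitude grows as the UAV nears the obstacle.
near = repulsive_force([0.5, 0.0], [0.0, 0.0])
far = repulsive_force([1.5, 0.0], [0.0, 0.0])
print(np.linalg.norm(near) > np.linalg.norm(far))  # True
```

The local-minimum problem the abstract targets arises precisely when this repulsive force cancels the attractive force toward the goal, which is what motivates adding the consensus-based term.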
Figure 1
<p>Schematic of communication topology.</p>
Figure 2">
Figure 2
<p>Force analysis of a UAV in the potential field of multiple obstacles.</p>
Figure 3
<p>Common local minimum cases.</p>
Figure 4
<p>Schematic diagram of collision avoidance between UAVs.</p>
Figure 5
<p>Flowchart of the optimal consensus controller based on IAPF.</p>
Figure 6
<p>Multi-UAV formation communication topology.</p>
Figure 7
<p>Multi-UAV formation obstacle avoidance path map in a known environment. (<b>a</b>) The conventional APF; (<b>b</b>) the optimal consensus control based on APF; (<b>c</b>) the non-optimal consensus control based on IAPF; (<b>d</b>) the optimal consensus control based on IAPF.</p>
Figure 8
<p>Plot of the change in velocity in the x and y directions. (<b>a</b>) The conventional APF; (<b>b</b>) the optimal consensus control based on APF; (<b>c</b>) the non-optimal consensus control based on IAPF; (<b>d</b>) the optimal consensus control based on IAPF.</p>
Figure 9
<p>Inter-UAV spacing. (<b>a</b>) The optimal consensus control based on IAPF without the inter-UAV repulsive potential field; (<b>b</b>) the non-optimal consensus control based on IAPF; (<b>c</b>) the optimal consensus control based on IAPF.</p>
Figure 10
<p>Flowchart of the optimal consensus controller based on IAPF in unknown environments.</p>
Figure 11
<p>Multi-UAV formation path map in unknown environments.</p>
Figure 12
<p>Plot of the change in velocity in the x direction versus the y direction.</p>
Figure 13
<p>Inter-UAV spacing.</p>
Figure 14
<p>Multi-UAV formation path map. (<b>a</b>) The optimal consensus control based on IAPF in unknown environments; (<b>b</b>) the optimal consensus control based on IAPF in known environments.</p>
Figure 15
<p>Inter-UAV spacing. (<b>a</b>) The optimal consensus control based on IAPF in unknown environments; (<b>b</b>) the optimal consensus control based on IAPF in known environments.</p>
25 pages, 11917 KiB  
Article
Multi-Phase Trajectory Planning for Wind Energy Harvesting in Air-Launched UAV Swarm Rendezvous and Formation Flight
by Xiangsheng Wang, Tielin Ma, Ligang Zhang, Nanxuan Qiao, Pu Xue and Jingcheng Fu
Drones 2024, 8(12), 709; https://doi.org/10.3390/drones8120709 - 28 Nov 2024
Viewed by 570
Abstract
Small air-launched unmanned aerial vehicles (UAVs) face challenges in range and endurance due to their compact size and lightweight design. To address these issues, this paper introduces a multi-phase wind energy harvesting trajectory planning method designed to optimize the onboard electrical energy consumption during rendezvous and formation flight of air-launched fixed-wing swarms. This method strategically manages the gravitational potential energy from air-launch deployments and harvests wind energy that aligns with the UAV’s flight speed. We integrate wind energy harvesting strategies for single vehicles with the spatial–temporal coordination of the swarm system. Incorporating wind effects into the trajectory planning allows UAVs to enhance their operational capabilities and extend mission duration without changes to the vehicle design. The trajectory planning method is formalized as an optimal control problem (OCP) that ensures spatial–temporal coordination and inter-vehicle collision avoidance, and incorporates a 3-degree-of-freedom kinematic model of the UAVs, extending wind energy harvesting trajectory optimization from an individual UAV to swarm-level applications. The cost function is formulated to comprehensively evaluate electrical energy consumption, endurance, and range. Simulation results demonstrate significant energy savings in both low- and high-altitude mission scenarios. Efficient wind energy utilization can double the maximum formation rendezvous distance and even allow for rendezvous without electrical power consumption when the phase durations are extended reasonably. The subsequent formation flight phase exhibits a maximum endurance increase of 58%. This reduction in electrical energy consumption directly extends the range and endurance of the air-launched swarm, thereby enhancing its mission capabilities in subsequent flight. Full article
Figure 1
<p>Diagram illustrating the optimal two-phase wind energy harvesting trajectory of air-launched UAV swarms from different mother planes.</p>
Figure 2">
Figure 2
<p>Typical wind profile of the altitude range [<a href="#B39-drones-08-00709" class="html-bibr">39</a>].</p>
Figure 3
<p>Three views and an axonometric view of the air-launched UAV.</p>
Figure 4
<p>Aerodynamic and thrust forces acting on the UAV and the aerodynamic angles.</p>
Figure 5
<p>Joint wind energy harvesting trajectories of fixed-wing swarms. (<b>a</b>) Closed-loop trajectories in loiter mode; (<b>b</b>) open-loop trajectories for rendezvous.</p>
Figure 6
<p>Transformation process of the multi-phase trajectory OCP for air-launched swarms in the hp-adaptive pseudo-spectral method; <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mn>1</mn> <mo>,</mo> <mo>…</mo> <mi>L</mi> <mo stretchy="false">]</mo> </mrow> </semantics></math> is the phase number.</p>
Figure 7
<p>The numerical solution procedure of the hp-adaptive pseudo-spectral method in the multi-phase trajectory optimization of the fixed-wing swarm; <math display="inline"><semantics> <mrow> <mi>l</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mn>1</mn> <mo>,</mo> <mo>…</mo> <mi>L</mi> <mo stretchy="false">]</mo> </mrow> </semantics></math> is the phase number.</p>
Figure 8
<p>Framework of the two-phase OCP in the low-altitude mission scenario.</p>
Figure 9
<p>Trajectory for the low-altitude mission scenario without wind energy harvesting in an altitude range of 1.1–0.5 km.</p>
Figure 10
<p>Trajectory for the low-altitude mission scenario with wind energy harvesting in an altitude range of 1.1–0.5 km.</p>
Figure 11
<p>The potential, electrical, and kinetic energy in the low-altitude mission scenario.</p>
Figure 12
<p>Framework of the two-phase OCP in the high-altitude mission scenario.</p>
Figure 13
<p>Three-dimensional spatial trajectory and energy diagram of the five cost functions in a high-altitude mission in an altitude range of 8.3–6 km.</p>
Figure 13 Cont.
<p>Three-dimensional spatial trajectory and energy diagram of the five cost functions in a high-altitude mission in an altitude range of 8.3–6 km.</p>
19 pages, 3453 KiB  
Article
Autonomous UAV Chasing with Monocular Vision: A Learning-Based Approach
by Yuxuan Jin, Tiantian Song, Chengjie Dai, Ke Wang and Guanghua Song
Aerospace 2024, 11(11), 928; https://doi.org/10.3390/aerospace11110928 - 9 Nov 2024
Viewed by 602
Abstract
In recent years, unmanned aerial vehicles (UAVs) have shown significant potential across diverse applications, drawing attention from both academia and industry. In specific scenarios, UAVs are expected to achieve formation flying without relying on communication or external assistance. In this context, our work focuses on the classic leader-follower formation and presents a learning-based UAV chasing control method that enables a quadrotor UAV to autonomously chase a highly maneuverable fixed-wing UAV. The proposed method utilizes a neural network called Vision Follow Net (VFNet), which integrates monocular visual data with the UAV’s flight state information. Utilizing a multi-head self-attention mechanism, VFNet aggregates data over a time window to predict the waypoints for the chasing flight. The quadrotor’s yaw angle is controlled by calculating the line-of-sight (LOS) angle to the target, ensuring that the target remains within the onboard camera’s field of view during the flight. A simulation flight system is developed and used for neural network training and validation. Experimental results indicate that the quadrotor maintains stable chasing performance through various maneuvers of the fixed-wing UAV and can sustain formation over long durations. Our research explores the use of end-to-end neural networks for UAV formation flying, spanning from perception to control. Full article
(This article belongs to the Section Aeronautics)
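The LOS yaw computation mentioned in the abstract reduces to a quadrant-aware arctangent. A minimal sketch, assuming a planar ground frame with x pointing along the first axis (the paper's exact frame convention is not given here):

```python
import math

def los_yaw(follower, target):
    """Line-of-sight yaw angle (radians) from the follower's position
    to the target's position in a planar ground frame (illustrative
    convention; the paper's frame definition may differ)."""
    dx = target[0] - follower[0]
    dy = target[1] - follower[1]
    # atan2 handles all four quadrants and the dx == 0 case.
    return math.atan2(dy, dx)

print(los_yaw((0.0, 0.0), (1.0, 1.0)))  # pi/4 ~= 0.7854
```

Commanding this angle as the yaw setpoint keeps the target near the image center, which is what keeps it inside the onboard camera's field of view during the chase.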
Figure 1
<p>The leader-follower system.</p>
Figure 2">
Figure 2
<p>The monocular camera model.</p>
Figure 3
<p>The workflow of the learning-based unmanned aerial vehicle (UAV) chasing control method.</p>
Figure 4
<p>Obtaining the fixed-wing UAV target from the image captured by the monocular camera. Since the area occupied by the target fixed-wing UAV is relatively small in the images, it has been enlarged to enhance visibility.</p>
Figure 5
<p>Forward propagation of the Vision Follow Net (VFNet). Parameters in the green rectangles are trainable.</p>
Figure 6
<p>The detailed architecture of the learning hidden unit contributions (LHUC) algorithm.</p>
Figure 7
<p>The multi-head self-attention architecture used in the waypoint prediction module.</p>
Figure 8
<p>The leader-follower formation in the landing state, shown in the Gazebo simulator, with the fixed-wing UAV in a yellow box and the quadrotor UAV in an orange box.</p>
Figure 9
<p>The training loss curve.</p>
Figure 10
<p>The results of the waypoint prediction experiment. A smaller absolute value of deviation indicates better performance.</p>
Figure 11
<p>The trajectories of straight flight.</p>
Figure 12
<p>The trajectories of clockwise spiral flight.</p>
Figure 13
<p>The trajectories of counterclockwise spiral flight.</p>
Figure 14
<p>The trajectories of turning from straight flight to a counterclockwise spiral flight.</p>
Figure 15
<p>The trajectories of turning from clockwise flight to counterclockwise spiral flight.</p>
Figure 16
<p>The trajectories of long-term flight. In the figure, the green line denotes the trajectory of the leader fixed-wing UAV, while the red line represents that of the follower quadrotor UAV.</p>
Figure 17
<p>The results of the waypoint prediction experiments with VFNet after removing the ResNet component. A smaller absolute value of deviation indicates better performance.</p>
18 pages, 982 KiB  
Review
Remote Sensing and GIS in Natural Resource Management: Comparing Tools and Emphasizing the Importance of In-Situ Data
by Sanjeev Sharma, Justin O. Beslity, Lindsey Rustad, Lacy J. Shelby, Peter T. Manos, Puskar Khanal, Andrew B. Reinmann and Churamani Khanal
Remote Sens. 2024, 16(22), 4161; https://doi.org/10.3390/rs16224161 - 8 Nov 2024
Cited by 2 | Viewed by 3620
Abstract
Remote sensing (RS) and Geographic Information Systems (GISs) provide significant opportunities for monitoring and managing natural resources across various temporal, spectral, and spatial resolutions. There is a critical need for natural resource managers to understand the expanding capabilities of image sources, analysis techniques, and in situ validation methods. This article reviews key image analysis tools in natural resource management, highlighting their unique strengths across diverse applications such as agriculture, forestry, water resources, soil management, and natural hazard monitoring. Google Earth Engine (GEE), a cloud-based platform introduced in 2010, stands out for its vast geospatial data catalog and scalability, making it ideal for global-scale analysis and algorithm development. ENVI, known for advanced multi- and hyperspectral image processing, excels in vegetation monitoring, environmental analysis, and feature extraction. ERDAS IMAGINE specializes in radar data analysis and LiDAR processing, offering robust classification and terrain analysis capabilities. Global Mapper is recognized for its versatility, supporting over 300 data formats and excelling in 3D visualization and point cloud processing, especially in UAV applications. eCognition leverages object-based image analysis (OBIA) to enhance classification accuracy by grouping pixels into meaningful objects, making it effective in environmental monitoring and urban planning. Lastly, QGIS integrates these remote sensing tools with powerful spatial analysis functions, supporting decision-making in sustainable resource management. Together, these tools when paired with in situ data provide comprehensive solutions for managing and analyzing natural resources across scales. Full article
Figure 1
<p>Articles published using different image analysis tools in different time intervals.</p>
Figure 2">
Figure 2
<p>Map of the sites identified and included in the database.</p>
25 pages, 5681 KiB  
Article
Multi-Batch Carrier-Based UAV Formation Rendezvous Method Based on Improved Sequential Convex Programming
by Zirui Zhang, Liguo Sun and Yanyang Wang
Drones 2024, 8(11), 615; https://doi.org/10.3390/drones8110615 - 26 Oct 2024
Viewed by 867
Abstract
The limitations of the existing catapults necessitate multiple batches of take-offs for carrier-based unmanned aerial vehicles (UAVs) to form a formation. Because each batch of UAVs takes off at a different time and location, ensuring the temporal and spatial consistency and rendezvous efficiency of the formation becomes crucial. To address these challenges, a multi-batch formation rendezvous method based on improved sequential convex programming (SCP) is proposed. A reverse solution approach based on the multi-batch rendezvous process is developed. On this basis, a non-convex optimization problem is formulated considering the following constraints: UAV dynamics, collision avoidance, obstacle avoidance, and formation consistency. An SCP method that makes use of the trust region strategy is introduced to solve the problem efficiently. Due to the spatiotemporal coupling characteristics of the rendezvous process, an inappropriate initial solution for SCP will inevitably reduce the rendezvous efficiency. Thus, an initial solution tolerance mechanism is introduced to improve the SCP. This mechanism follows the idea of simulated annealing, allowing the SCP to search for better reference solutions in a wider space. By utilizing the initial solution tolerance SCP (IST-SCP), the multi-batch formation rendezvous algorithm is developed correspondingly. Simulation results verify the effectiveness and adaptability of the proposed method: IST-SCP reduces the rendezvous time from poor initial solutions without significantly increasing the computing time. Full article
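The core SCP mechanism the abstract builds on (linearize the non-convex constraint about the current iterate, solve the convex subproblem inside a trust region, repeat) can be shown on a toy 1-D problem. This sketch is a generic illustration of that loop, not the paper's IST-SCP:

```python
def scp_minimize(x0, radius=1.0, tol=1e-8, max_iter=50):
    """Toy sequential convex programming loop:
    minimize x  subject to the non-convex constraint  x**2 >= 4.
    Each iteration linearizes the constraint about x_k and solves the
    resulting 1-D convex subproblem inside a trust region of the given
    radius (illustrative only; real SCP solves a convex program here).
    """
    x = float(x0)
    for _ in range(max_iter):
        # Linearized constraint: x_k^2 + 2 x_k (x - x_k) >= 4
        #   =>  x >= (4 + x_k^2) / (2 x_k)        (valid for x_k > 0)
        bound = (4.0 + x * x) / (2.0 * x)
        # The objective pushes x down, limited by the trust region.
        x_new = max(bound, x - radius)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(scp_minimize(5.0))   # converges to 2.0, the constrained optimum
```

The trust region keeps each step inside the neighborhood where the linearization is trustworthy; the paper's initial solution tolerance mechanism additionally relaxes how strictly early iterates must improve, in the spirit of simulated annealing.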
Figure 1
<p>Schematic diagram of the process of circling and rendezvous (the solid airplane represents the actual position of the airplane, the hollow airplane represents the target formation position of the airplane, and the green square represents the position of the airplane at takeoff).</p>
Figure 2">
Figure 2
<p>Gantt chart of the rendezvous process (the green rectangle represents the stage of the aircraft waiting on the flight deck, the yellow rectangle represents the stage from takeoff to circling, and the blue rectangle represents the stage of circling in the air).</p>
Figure 3
<p>Schematic diagram of the reverse solution approach (the solid red triangle represents the actual position of the formation centroid, while the hollow red triangle represents the expected position of the formation centroid when a certain batch of aircraft is incorporated). First, minimize the rendezvous time of the last batch, as shown in (<b>1</b>); second, using this as a reference, reverse-solve the other batches’ trajectories and ensure the temporal consistency constraints are met by adjusting the rendezvous positions, as shown in (<b>2</b>,<b>3</b>); finally, the formation successfully rendezvouses, as shown in (<b>4</b>).</p>
Figure 4
<p>The initial trajectory leads the sequential solutions to the same side of an obstacle because of the poor approximation of the convexified penalty (the left part of the figure shows the variation in the flight trajectory's horizontal-plane projection as the subproblem is sequentially solved; the right part shows that this change gradually reduces the penalty of the obstacle avoidance constraints).</p>
Figure 5
<p>Different initial trajectory guesses for one UAV lead to different local optima of the rendezvous time because of the spatiotemporally coupled rendezvous process (the solid lines represent the actual trajectories, while the dashed lines represent the initial trajectory guesses; the green and yellow lines represent different trajectories optimized under different initial guesses).</p>
Figure 6
<p>Flow chart of the initial solution tolerance SCP (IST-SCP) algorithm (the superscript * stands for the optimal solution/objective value of the subproblem).</p>
Figure 7
<p>The rendezvous scene of carrier-based UAVs.</p>
Figure 8
<p>(<b>a</b>) The 3D flight path for each UAV; (<b>b</b>) the projection of the flight paths onto the horizontal plane.</p>
Figure 9
<p>The velocity during the rendezvous process generated by IST-SCP and conventional SCP. Both algorithms generate the maximum velocity along this path, so the difference in rendezvous time is mainly caused by the path.</p>
Figure 10
<p>(<b>a</b>) The initial path guess for the first batch is between no-fly zone 1 and no-fly zone 2. The path generated by IST-SCP jumped out of the local optimum, while the path generated by conventional SCP was trapped in it; (<b>b</b>) the initial path guess for the second batch is outside no-fly zone 1 and no-fly zone 3. The IST-SCP method still found a faster path to enter the circling orbit, and entered the formation further upstream in the circling orbit compared with the conventional SCP; (<b>c</b>) the initial path guess for the third batch is between no-fly zone 2 and no-fly zone 3. The path generated by conventional SCP failed to enter the orbit from a closer position, while the path generated by IST-SCP bypassed no-fly zone 2 and entered the orbit from a closer position.</p>
Figure 11
<p>(<b>a</b>) Four batches, each with two UAVs, form a regular octagonal formation (number of UAVs in each batch = (2, 2, 2, 2)); (<b>b</b>) three batches, with one UAV in the first batch, two in the second, and three in the third, form a regular hexagonal formation (number of UAVs in each batch = (1, 2, 3)); (<b>c</b>) three batches, with one UAV in the first batch, three in the second, and two in the third, form a regular hexagonal formation (number of UAVs in each batch = (1, 3, 2)).</p>
Full article ">Figure 12
<p>(<b>a</b>) Three batches, one UAV in the first batch, two UAVs in the second and third batches, form a diamond-shaped formation (geometric shape of formation = diamond); (<b>b</b>) three batches, one UAV in the first batch, three UAVs in the second and third batches, to form a herringbone formation (geometric shape of formation = herringbone).</p>
Full article ">
25 pages, 2251 KiB  
Article
Toward a Generic Framework for Mission Planning and Execution with a Heterogeneous Multi-Robot System
by Mohsen Denguir, Ameur Touir, Achraf Gazdar and Safwan Qasem
Sensors 2024, 24(21), 6881; https://doi.org/10.3390/s24216881 - 26 Oct 2024
Viewed by 961
Abstract
This paper presents a comprehensive framework for mission planning and execution with a heterogeneous multi-robot system, specifically designed to coordinate unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) in dynamic and unstructured environments. The proposed architecture evaluates the mission requirements, allocates tasks, and optimizes resource usage based on the capabilities of the available robots. It then executes the mission using a decentralized control strategy that enables the robots to adapt to environmental changes and maintain formation stability in both 2D and 3D spaces. The framework’s architecture supports loose coupling between its components, enhancing system scalability and maintainability. Key features include a robust task allocation algorithm and a dynamic formation control mechanism, together with a ROS 2 communication protocol that ensures reliable information exchange among robots. The effectiveness of this framework is demonstrated through a case study involving coordinated exploration and data collection tasks, showcasing its ability to manage missions while optimizing robot collaboration. This work advances the field of heterogeneous robotic systems by providing a scalable and adaptable solution for multi-robot coordination in challenging environments. Full article
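The capability-based task allocation idea can be illustrated with a minimal greedy sketch. The data shapes, names, and the robot pool below are assumptions for illustration, not the framework's actual algorithm: each task lists its required capabilities as a set, each robot advertises a capability set, and a task goes to the first idle robot that covers every requirement.

```python
def allocate_tasks(tasks, robots):
    """Greedy capability matching: assign each task to the first idle robot
    whose capability set covers the task's requirements."""
    assignment = {}
    busy = set()
    for task, required in tasks.items():
        for name, capabilities in robots.items():
            if name not in busy and required <= capabilities:
                assignment[task] = name
                busy.add(name)
                break
    return assignment

# Hypothetical robot pool and mission tasks:
robots = {"ugv1": {"ground", "lidar"}, "uav1": {"aerial", "camera"}}
tasks = {"aerial_survey": {"aerial", "camera"}, "ground_transport": {"ground"}}
```

With this pool, `allocate_tasks(tasks, robots)` sends the aerial survey to `uav1` and the ground transport to `ugv1`; a real allocator would also weigh cost, battery, and concurrent tasking.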
Figure 1
<p>The two main parts of the proposed framework include the conceptual level.</p>
Figure 2
<p>The inputs of the mission planner module.</p>
Figure 3
<p>Mission description grammar.</p>
Figure 4
<p>Robot description grammar.</p>
Figure 5
<p>The interactions between the mission-handling modules.</p>
Figure 6
<p>Example of a PD file’s content.</p>
Figure 7
<p>Robot driver.</p>
Figure 8
<p>Interaction between a controller and a driver: case of a single-command task.</p>
Figure 9
<p>Interaction between a controller, a driver, and an oracle in the case of a multiple-commands task.</p>
Figure 10
<p>Interactions of software modules during the execution of a multi-robot task (state<sub><span class="html-italic">i</span></sub>: state <span class="html-italic">S<sub>i</sub></span>(<span class="html-italic">t<sub>j</sub></span>), environment<sub><span class="html-italic">i</span></sub>: environment perception <span class="html-italic">P<sub>i</sub></span>(<span class="html-italic">t<sub>j</sub></span>), aggregated_state: global state <span class="html-italic">S</span>(<span class="html-italic">t<sub>j</sub></span>), aggregated_environment: global environment perception <span class="html-italic">P</span>(<span class="html-italic">t<sub>j</sub></span>)).</p>
Figure 11
<p>Automaton used by the controller <tt>ground_goto_controller</tt>.</p>
Figure 12
<p>Automaton used by the controller <tt>aerial_goto_controller</tt>.</p>
Figure 13
<p>Robots move to a target point while avoiding an obstacle (part 1). (<b>a</b>) Initial configuration. (<b>b</b>) The robots move toward the target. (<b>c</b>) The first e-puck robot detects an obstacle. (<b>d</b>) The UGVs change direction and move; the UAV detects an obstacle. (<b>e</b>) The UGVs move toward the target; the UAV ascends to avoid the obstacle. (<b>f</b>) The UAV flies over the obstacle.</p>
Figure 14
<p>Robots move to a target point while avoiding an obstacle (part 2). (<b>a</b>) The UAV tracks the UGVs. (<b>b</b>) The robots reach the target position.</p>
Figure A1
<p>Mission plan grammar in YAML.</p>
Figure A2
<p>Mission description (MD).</p>
Figure A3
<p>Robot pool specification.</p>
Figure A4
<p>Mission planner process—part 1.</p>
Figure A5
<p>Mission planner process—part 2.</p>
21 pages, 6198 KiB  
Article
Research on Real-Time Roundup and Dynamic Allocation Methods for Multi-Dynamic Target Unmanned Aerial Vehicles
by Jinpeng Li, Ruixuan Wei, Qirui Zhang, Ruqiang Shi and Benqi Jiang
Sensors 2024, 24(20), 6565; https://doi.org/10.3390/s24206565 - 12 Oct 2024
Cited by 1 | Viewed by 1055
Abstract
When multiple dynamic target UAVs escape, the uncertainty of their formation method and of the external environment makes rounding them up difficult, so suitable solutions are needed to improve the roundup success rate. Traditional methods can generally only encircle a single target; when the targets scatter and escape, encirclement fails because the pursuing UAVs cannot be allocated adequately. Therefore, in this paper, a real-time roundup and dynamic allocation algorithm for multiple dynamic targets is proposed. For the roundup problem, a real-time dynamic obstacle avoidance model is established, drawing on the artificial potential field function. For the escape problem during the roundup process, an optimal roundup allocation strategy is established, drawing on the linear matching method. The algorithm is simulated with UAVs rounding up dynamic targets that use different escape strategies in different obstacle environments. The results show that the algorithm achieves the roundup of multiple dynamic targets in scenarios with a random number of UAVs, random initial positions, and randomly located obstacles, increasing the roundup efficiency by 50% and improving the formation success rate 10-fold. Moreover, the mission UAVs are able to avoid obstacles, and the approach can be used within other algorithms for real-time roundup and dynamic allocation. Full article
(This article belongs to the Section Sensors and Robotics)
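The two building blocks the abstract names, a potential-field avoidance model and a linear-matching allocation, can be sketched roughly as follows. These are textbook forms with assumed names and gains, not the paper's exact formulation; the brute-force matcher stands in for a proper linear assignment solver.

```python
import math
from itertools import permutations

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Artificial potential field: an attractive pull toward the target plus
    a repulsive push from every obstacle inside the influence radius d0."""
    vx = k_att * (goal[0] - pos[0])
    vy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            # Standard repulsive gradient: vanishes at the influence boundary.
            gain = k_rep * (1.0 / d - 1.0 / d0) / d**3
            vx += gain * dx
            vy += gain * dy
    return vx, vy

def assign_pursuers(uav_positions, target_positions):
    """Linear matching by brute force: pick the UAV-to-target pairing with
    the minimum total distance (fine for small teams; a Hungarian solver
    scales better)."""
    best_cost, best_pairing = float("inf"), None
    for perm in permutations(range(len(uav_positions)), len(target_positions)):
        c = sum(math.dist(uav_positions[u], target_positions[t])
                for t, u in enumerate(perm))
        if c < best_cost:
            best_cost, best_pairing = c, dict(enumerate(perm))
    return best_pairing, best_cost
```

For instance, with UAVs at (0, 0) and (10, 0) and targets at (9, 0) and (1, 0), `assign_pursuers` pairs each target with its nearer UAV for a total cost of 2.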
Figure 1
<p>Three-dimensional view of the UAV geometry relationship. (<b>a</b>) Geometric modeling of UAVs; (<b>b</b>) positional relationships between UAVs.</p>
Figure 2
<p>Block diagram of the roundup program.</p>
Figure 3
<p>Schematic diagram of the effect of roundup.</p>
Figure 4
<p>Flowchart of the roundup allocation strategy.</p>
Figure 5
<p>Nine UAVs rounding up three dispersed dynamic targets without obstacles. (<b>a</b>) The roundup process; (<b>b</b>) smoothing of the roundup process.</p>
Figure 6
<p>Nine UAVs rounding up three dispersed dynamic targets under static obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 7
<p>Nine UAVs rounding up three dispersed dynamic targets under dynamic obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 8
<p>Nine UAVs rounding up three dispersed dynamic targets under mixed obstacles. (<b>a</b>) The roundup process of the traditional method. (<b>b</b>) Smoothing of the traditional method’s roundup process. (<b>c</b>) The roundup process of the method in this paper. (<b>d</b>) Smoothing of the roundup process of the method in this paper.</p>
Figure 9
<p>Nine UAVs rounding up three formation dynamic targets without obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 10
<p>Nine UAVs rounding up three formation dynamic targets under static obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 11
<p>Nine UAVs rounding up three formation dynamic targets under dynamic obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 12
<p>Sampling diagram of the process of nine UAVs rounding up three formation dynamic targets under mixed obstacles. (<b>a</b>) The roundup process at 6 s. (<b>b</b>) The roundup process at 12 s. (<b>c</b>) The roundup process at 18 s. (<b>d</b>) The roundup process at 24 s. (<b>e</b>) The roundup process at 30 s. (<b>f</b>) Smoothing of the roundup process at 30 s.</p>
Figure 13
<p>Sampling diagram of the process of nine UAVs rounding up three formation dynamic targets under mixed obstacles. (<b>a</b>) The roundup process at 6 s. (<b>b</b>) The roundup process at 12 s. (<b>c</b>) The roundup process at 18 s. (<b>d</b>) The roundup process at 24 s. (<b>e</b>) The roundup process at 30 s. (<b>f</b>) Smoothing of the roundup process at 30 s.</p>
Figure 14
<p>Nine UAVs rounding up grouped escaping dynamic targets without obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 15
<p>Nine UAVs rounding up grouped escaping dynamic targets under static obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 16
<p>Nine UAVs rounding up grouped escaping dynamic targets under dynamic obstacles. (<b>a</b>) The roundup process. (<b>b</b>) Smoothing of the roundup process.</p>
Figure 17
<p>Nine UAVs rounding up grouped escaping dynamic targets under mixed obstacles. (<b>a</b>) The roundup process of the traditional method. (<b>b</b>) Smoothing of the traditional method’s roundup process. (<b>c</b>) The roundup process of the method in this paper. (<b>d</b>) Smoothing of the roundup process of the method in this paper.</p>