Search Results (1,405)

Search Parameters:
Keywords = unmanned aerial vehicle sensor

22 pages, 3314 KiB  
Article
Multiple Unmanned Aerial Vehicle Collaborative Target Search by DRL: A DQN-Based Multi-Agent Partially Observable Method
by Heng Xu and Dayong Zhu
Drones 2025, 9(1), 74; https://doi.org/10.3390/drones9010074 - 19 Jan 2025
Viewed by 292
Abstract
As Unmanned Aerial Vehicle (UAV) technology advances, UAVs have attracted widespread attention across military and civilian fields due to their low cost and flexibility. In unknown environments, UAVs can significantly reduce the risk of casualties and improve the safety and covertness of missions. Reinforcement Learning allows agents to learn optimal policies through trial and error in the environment, enabling UAVs to respond autonomously to real-time conditions. Because the observation range of UAV sensors is limited, UAV target search missions face the challenge of partial observability. To address this, a DQN-based algorithm, the Partially Observable Deep Q-Network (PODQN), is proposed. PODQN uses a Gated Recurrent Unit (GRU) to retain past observations, integrates a target network, and decomposes the action value for better evaluation. In addition, an artificial potential field is introduced to avoid collisions between agents. The simulation environment for UAV target search is constructed as a custom Markov Decision Process. Comparisons of PODQN with a random strategy, DQN, Double DQN, Dueling DQN, VDN, and QMIX demonstrate that the proposed algorithm performs best under different agent configurations.
(This article belongs to the Special Issue UAV Detection, Classification, and Tracking)
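The abstract above sketches the key idea: a recurrent memory attached to a DQN-style value network so the agent can act under partial observability. As a rough, generic illustration only (the paper's actual PODQN architecture, value decomposition, and hyperparameters are not reproduced here, and all layer sizes and names below are assumptions), a GRU-based Q-network might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    """Minimal GRU-based Q-network for a partially observable agent (illustrative only)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)                # embed the local observation
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)  # memory over past observations
        self.q_head = nn.Linear(hidden_dim, n_actions)               # Q-value per action

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); hidden carries the recurrent state between calls
        x = torch.relu(self.encoder(obs_seq))
        out, hidden = self.gru(x, hidden)
        return self.q_head(out), hidden

# A frozen copy serves as the target network, as in standard DQN-style training
online = RecurrentQNetwork(obs_dim=16, n_actions=5)
target = RecurrentQNetwork(obs_dim=16, n_actions=5)
target.load_state_dict(online.state_dict())

q_values, h = online(torch.randn(2, 8, 16))  # batch of 2 observation sequences of length 8
print(q_values.shape)                        # torch.Size([2, 8, 5])
```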
16 pages, 5402 KiB  
Article
Research on Sensitivity Improvement Methods for RTD Fluxgates Based on Feedback-Driven Stochastic Resonance with PSO
by Rui Wang, Na Pang, Haibo Guo, Xu Hu, Guo Li and Fei Li
Sensors 2025, 25(2), 520; https://doi.org/10.3390/s25020520 - 17 Jan 2025
Viewed by 312
Abstract
With the wide application of Residence Time Difference (RTD) fluxgate sensors in Unmanned Aerial Vehicle (UAV) aeromagnetic measurements, the requirements for their measurement accuracy are increasing. The core characteristics of the RTD fluxgate sensor limit its sensitivity; in particular, the high-permeability soft magnetic core is easily disturbed by input noise. In this paper, based on a study of the excitation signal and input noise characteristics, stochastic resonance is realized by adding feedback, exploiting the high rectangularity ratio of the hysteresis loop, the low coercivity, and the bistability of the soft magnetic core. Simulink is used to construct a sensor model with odd-polynomial feedback control, and the Particle Swarm Optimization (PSO) algorithm is used to optimize the coefficients of the feedback function so that the sensor reaches a resonance state, reducing noise interference and improving sensitivity. The simulation results show that optimizing the odd-polynomial feedback coefficients with PSO drives the sensor into resonance and improves sensitivity by at least 23.5%, effectively enhancing sensor performance and laying a foundation for advancements in UAV aeromagnetic measurement technology.
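The abstract describes optimizing the coefficients of an odd-polynomial feedback function with PSO so that the simulated fluxgate reaches a resonance state. The following is a minimal, self-contained PSO sketch over three such coefficients; because the paper's Simulink sensor model is not available here, the fitness function is a toy placeholder, and the polynomial orders, swarm size, and PSO constants are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(k):
    """Placeholder objective standing in for the Simulink fluxgate model.
    The real study scores the residence-time difference of the simulated sensor;
    here we only score how well the odd-polynomial feedback shapes a test response."""
    x = np.linspace(-1.0, 1.0, 200)
    y = k[0] * x + k[1] * x**3 + k[2] * x**5
    return -np.mean((np.tanh(3 * x) - y) ** 2)  # toy target response

# Standard PSO over the three feedback coefficients (k1, k3, k5)
n_particles, n_iter, dim = 20, 100, 3
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best coefficients (k1, k3, k5):", gbest)
```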
Figures:
Figure 1: Working principle of RTD fluxgate based on bistable characteristics of magnetic core hysteresis saturation.
Figure 2: A diagram of stochastic resonance components.
Figure 3: Relationship between time delay and noise intensity with varying coercive force.
Figure 4: Stochastic resonance curves of RTD fluxgate with different cubic feedback coefficients.
Figure 5: The flowchart of the PSO algorithm.
Figure 6: Simulation model of RTD fluxgate with feedback.
Figure 7: Simulation results of RTD fluxgate (H_x = 0).
Figure 8: Simulation results of RTD fluxgate (H_x ≠ 0).
Figure 9: Comparison of time differences between GWO and PSO.
Figure 10: Relationship between fitness and time difference obtained by ACO.
Figure 11: Comparison of output time delay under different feedback functions.
Figure 12: Comparison between y = k_9 x^9 and y = k_9 x^9 + k_7 x^7 + k_5 x^5 + k_3 x^3 + k_1 x.
Figure 13: Relationship between fitness and time difference for different feedback functions.
17 pages, 4766 KiB  
Article
Monitoring the Maize Canopy Chlorophyll Content Using Discrete Wavelet Transform Combined with RGB Feature Fusion
by Wenfeng Li, Kun Pan, Yue Huang, Guodong Fu, Wenrong Liu, Jizhong He, Weihua Xiao, Yi Fu and Jin Guo
Agronomy 2025, 15(1), 212; https://doi.org/10.3390/agronomy15010212 - 16 Jan 2025
Viewed by 241
Abstract
To evaluate the accuracy of the Discrete Wavelet Transform (DWT) in monitoring the chlorophyll (CHL) content of maize canopies based on RGB images, a field experiment was conducted in 2023. Images of maize canopies during the jointing, tasseling, and grouting stages were captured using unmanned aerial vehicle (UAV) remote sensing to extract color, texture, and wavelet features, and two datasets were constructed: a color and texture feature dataset and a fused dataset of wavelet, color, and texture features. Backpropagation neural network (BP), Stacked Ensemble Learning (SEL), and Gradient Boosting Decision Tree (GBDT) models were employed to develop CHL monitoring models for the maize canopy, and their performance was evaluated by comparing predictions with measured CHL data. The results indicate that the dataset integrating wavelet features achieved higher monitoring accuracy than the color and texture feature dataset. Specifically, for the integrated dataset, the BP model achieved an R2 of 0.728, an RMSE of 3.911, and an NRMSE of 15.24%; the SEL model achieved an R2 of 0.792, an RMSE of 3.319, and an NRMSE of 15.34%; and the GBDT model achieved an R2 of 0.756, an RMSE of 3.730, and an NRMSE of 15.45%. Among these, the SEL model exhibited the highest monitoring accuracy. This study provides a fast and reliable method for monitoring maize growth in field conditions. Future research could incorporate cross-validation with hyperspectral and thermal infrared sensors to further enhance model reliability and expand its applicability.
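To illustrate the kind of pipeline the abstract describes (wavelet features extracted from canopy images feeding a regression model), here is a minimal sketch using PyWavelets and a scikit-learn Gradient Boosting regressor on synthetic data; the wavelet choice, sub-band energy features, and data are assumptions and do not reproduce the paper's feature set or models:

```python
import numpy as np
import pywt
from sklearn.ensemble import GradientBoostingRegressor

def dwt_features(gray_patch, wavelet="haar"):
    """Single-level 2D DWT; use simple sub-band statistics as texture-like features."""
    LL, (LH, HL, HH) = pywt.dwt2(gray_patch, wavelet)
    return np.array([np.mean(LL), np.mean(np.abs(LH)), np.mean(np.abs(HL)), np.mean(np.abs(HH))])

# Synthetic stand-in for canopy image patches and measured CHL values
rng = np.random.default_rng(1)
patches = rng.random((60, 32, 32))
chl = rng.uniform(20, 60, 60)

X = np.vstack([dwt_features(p) for p in patches])
model = GradientBoostingRegressor().fit(X, chl)
print("R^2 on training data:", model.score(X, chl))
```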
Figures:
Figure 1: Overview of experimental site location and density settings. Note: D1–D3 represent different densities: 5.7, 6.3, and 6.9 plants/m².
Figure 2: Flow chart for remote sensing data processing.
Figure 3: Workflow for removing image background.
Figure 4: The decomposition procedure of the image. Note: H denotes a high-pass filter, L denotes a low-pass filter, LL denotes the proximate feature, HL denotes the longitudinal edge feature, LH denotes the lateral edge feature, and HH denotes the diagonal feature.
Figure 5: Stacking ensemble learning implementation process integrating SVR and LightGBM.
Figure 6: Heatmap of correlation between color and texture features and CHL content.
Figure 7: Heatmap of correlation between wavelet features and CHL content.
Figure 8: Scatterplot of predicted chlorophyll content of BP, SEL, and GBDT models based on different data.
Figure 9: Distribution of R², RMSE, and NRMSE at different growth stages of maize.
Figure 10: Ranking the importance of color, texture, and wavelet features.
Figure 11: Schematic diagram of discrete wavelet decomposition.
33 pages, 24705 KiB  
Review
Unmanned Aerial Vehicles for Real-Time Vegetation Monitoring in Antarctica: A Review
by Kaelan Lockhart, Juan Sandino, Narmilan Amarasingam, Richard Hann, Barbara Bollard and Felipe Gonzalez
Remote Sens. 2025, 17(2), 304; https://doi.org/10.3390/rs17020304 - 16 Jan 2025
Viewed by 405
Abstract
The unique challenges of polar ecosystems, coupled with the necessity for high-precision data, make Unmanned Aerial Vehicles (UAVs) an ideal tool for vegetation monitoring and conservation studies in Antarctica. This review draws on existing studies of Antarctic UAV vegetation mapping, focusing on their methodologies, including surveyed locations, flight guidelines, UAV specifications, sensor technologies, data processing techniques, and the use of vegetation indices. Despite the potential of established Machine Learning (ML) classifiers such as Random Forest, k-Nearest Neighbour, Support Vector Machine, and gradient boosting for the semantic segmentation of UAV-captured images, there is a notable scarcity of research employing Deep Learning (DL) models in these extreme environments. While initial studies suggest that DL models could match or surpass the performance of established classifiers, even on small datasets, the integration of these advanced models into real-time navigation systems on UAVs remains underexplored. This paper evaluates the feasibility of deploying UAVs equipped with adaptive path-planning and real-time semantic segmentation capabilities, which could significantly enhance the efficiency and safety of mapping missions in Antarctica. The review discusses the technological and logistical constraints observed in previous studies and proposes directions for future research to optimise autonomous drone operations in harsh polar conditions.
(This article belongs to the Special Issue Antarctic Remote Sensing Applications (Second Edition))
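As a small illustration of the established-classifier approach the review refers to (pixel-wise classification of multispectral UAV imagery), the following sketch trains a Random Forest on synthetic band values; the band count, class labels, and data are purely illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic multispectral "image": 4 bands (band meanings are assumptions)
rng = np.random.default_rng(2)
h, w, bands = 64, 64, 4
image = rng.random((h, w, bands))
labels = rng.integers(0, 3, (h, w))  # 0 = rock/snow, 1 = moss, 2 = lichen (illustrative classes)

# Flatten to one sample per pixel and fit a Random Forest classifier
X = image.reshape(-1, bands)
y = labels.reshape(-1)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Pixel-wise prediction gives a per-pixel vegetation map
vegetation_map = clf.predict(X).reshape(h, w)
print(vegetation_map.shape)
```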
Figures:
Figure 1: Key factors on the importance of mapping vegetation and monitoring its health condition in Antarctica. Whereas polar amplification (i.e., greater warming at the poles than in the rest of the world) is more pronounced in the Arctic, Antarctic amplification has major global significance [5].
Figure 2: Comparison of Antarctic landscapes to demonstrate how vegetation can vary greatly in distribution and appearance between different sites. (a) Aerial shot from a DJI Mini 3 Pro in an Antarctic Specially Protected Area (ASPA), showing vegetation that is abundant and easy to detect. (b) Ground image displaying moribund moss from Bunger Hills, Antarctica, where vegetation may be difficult to map due to similarity to the surrounding landscape and sparsity.
Figure 3: Locations of Antarctic vegetation studies as listed in Table 2. The number of studies conducted at those sites is included in parentheses.
Figure 4: Diversity of Uncrewed Aerial Vehicle (UAV) deployment in polar vegetation mapping. (a) Pie chart representing the percentage of multirotor vs. fixed-wing UAVs used in studies on polar vegetation mapping. (b) Pie chart representing the percentages of custom vs. commercial UAVs used in studies on polar vegetation mapping.
Figure 5: Venn diagram contrasting Uncrewed Aerial Vehicle (UAV) vegetation mapping methods in polar environments. ANN: Artificial Neural Network, EML: Established Machine Learning, Thresh: vegetation index thresholding, Built-in: segmentation algorithms built into mapping software, In situ: classifications made by experts in the field, Expert: expert scientist labels images, SVM: Support Vector Machine, SVR: Support Vector Regression, kNN: k-Nearest Neighbours, GEOBIA: Geographic Object-Based Image Analysis, RF: Random Forest, XGBoost: eXtreme Gradient Boosting, LR: Logistic Regression, PLSR: Partial Least-Squares Regression. Relevant references: Thresh: [6,13,29,31,41]; In situ: [31,35]; Expert: [6]; RF: [34,37,42,47,48,49,50,65]; XGBoost: [40]; SVR: [33]; GEOBIA & RF: [38]; SVM & kNN: [36]; Bayesian ANN: [44]; U-Net & XGBoost: [15]; Built-in: [43,45,46]; LR: [52]; SVM: [7]; RF & PLSR: [51].
Figure 6: Average spectral response of each class in the Antarctic Specially Protected Area (ASPA) 135 dataset gathered between 2 January and 2 February 2023.
28 pages, 16944 KiB  
Review
Technological Evolution of Architecture, Engineering, Construction, and Structural Health Monitoring of Bridges in Peru: History, Challenges, and Opportunities
by Carlos Cacciuttolo, Esteban Muñoz and Andrés Sotil
Appl. Sci. 2025, 15(2), 831; https://doi.org/10.3390/app15020831 - 16 Jan 2025
Viewed by 464
Abstract
Peru is one of the most geographically and climatically diverse countries, with three large ecosystem regions: the coast, the sierra, and the jungle. These characteristics give the country many hydrographic basins, with rivers of significant channel width and length, so there is a permanent need to provide connectivity and promote trade between communities through road bridge infrastructure. Peru's road and bridge building dates back to the Inca Empire in the Tawantinsuyu region, which built a cobblestone road network and suspension bridges with rope cables made from the plant fibers of a vegetation called Coya-Ichu. Since then, bridges in Peru have evolved to meet contemporary vehicular demands and to provide structural stability and functionality throughout their useful life. This article presents the following sections: (a) an introduction to the evolution of bridges, (b) the current typology and inventory of bridges, (c) the characterization of the largest bridges, (d) a discussion on the architecture, engineering, construction, and structural health monitoring (AECSHM) of bridges in the face of climate change, earthquakes, and material degradation, and (e) conclusions. The article also presents opportunities and challenges for the architecture, engineering, construction, and structural health monitoring of road bridges in Peru, with special emphasis on technologies from the Industry 4.0 era for the digital construction and structural health monitoring of these infrastructures. It concludes that integrating sensors, the IoT (Internet of Things), AI (artificial intelligence), UAVs (Unmanned Aerial Vehicles), remote sensing, BIM (Building Information Modeling), and DfMA (Design for Manufacturing and Assembly), among other technologies, will allow for safer, more reliable, durable, productive, cost-effective, sustainable, and resilient bridge infrastructure in Peru in the face of climate change.
(This article belongs to the Special Issue Advances in Civil Infrastructures Engineering)
Figures:
Figure 1: Typical landscape of watershed in Peru.
Figure 2: Different panoramic views of the Q’eswachaka bridge. Adapted from [13,14].
Figure 3: Examples of beam-type bridges built in Peru. (a) Photo in Cajamarca region, (b) photo in Ica region, and (c) photo in Arequipa region.
Figure 4: Examples of arch-type bridges built in Peru. (a) Photo in Junín region, and (b) photo in Arequipa region.
Figure 5: Examples of suspension-type bridges built in Peru. (a) Photo in San Martin region, and (b) photo in Madre de Dios region.
Figure 6: Different panoramic views of the Punta Arenas bridge.
Figure 7: Different panoramic views of the Comuneros bridge.
Figure 8: Different panoramic views of the Bellavista bridge.
Figure 9: Different panoramic views of the Pachitea bridge.
Figure 10: Different panoramic views of the La Joya Virgen de Chapi bridge.
Figure 11: Different panoramic views of the Huallaga bridge.
Figure 12: Different panoramic views of the Chilina bridge.
Figure 13: Different panoramic views of the Continental bridge.
Figure 14: Different panoramic views of the Aguaytía bridge.
Figure 15: Different panoramic views of the Nanay bridge.
Figure 16: BIM of the Nanay bridge. Adapted from [29].
Figure 17: Panoramic view of the construction process of the Nanay bridge. Adapted from [29].
Figure 18: BIM of the La Joya Virgen de Chapi bridge.
Figure 19: Some images of the construction process of the La Joya Virgen de Chapi bridge.
16 pages, 2703 KiB  
Article
Research on RTD Fluxgate Induction Signal Denoising Method Based on Particle Swarm Optimization Wavelet Neural Network
by Xu Hu, Na Pang, Haibo Guo, Rui Wang, Fei Li and Guo Li
Sensors 2025, 25(2), 482; https://doi.org/10.3390/s25020482 - 16 Jan 2025
Viewed by 297
Abstract
Aeromagnetic surveying technology detects minute variations in Earth’s magnetic field and is essential for geological studies, environmental monitoring, and resource exploration. Compared to conventional methods, residence time difference (RTD) fluxgate sensors deployed on unmanned aerial vehicles (UAVs) offer increased flexibility in complex terrains. However, measurement accuracy and reliability are adversely affected by environmental and sensor noise, including Barkhausen noise. We therefore propose a novel denoising method that integrates Particle Swarm Optimization (PSO) with a Wavelet Neural Network, enhanced by a dynamic compression factor and an adaptive adjustment strategy. This approach uses PSO to fine-tune the Wavelet Neural Network parameters in real time, significantly improving denoising performance and computational efficiency. Experimental results indicate that, compared to conventional wavelet transform methods, this approach reduces time-difference fluctuation by 23.26%, enhances the signal-to-noise ratio (SNR) by 0.46%, and improves sensor precision and stability. This approach to processing RTD fluxgate sensor signals not only strengthens noise suppression and measurement accuracy but also holds significant potential for improving UAV-based geological surveying and environmental monitoring in challenging terrains.
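The paper couples PSO with a Wavelet Neural Network; as a loose stand-in only, the sketch below shows plain wavelet soft-threshold denoising with the threshold picked by a crude search, which conveys the general idea of tuning wavelet-domain parameters against an SNR objective. The wavelet, threshold range, and test signal are assumptions and this is not the authors' method:

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

def denoise(signal, wavelet="db4", level=4, thr=0.3):
    """Multi-level wavelet decomposition with soft-thresholded detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

def snr(ref, est):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est) ** 2))

# Crude threshold search as a stand-in for the PSO-tuned network parameters
best_thr = max(np.linspace(0.05, 0.8, 16), key=lambda thr: snr(clean, denoise(noisy, thr=thr)))
print("best threshold:", round(best_thr, 3),
      "SNR [dB]:", round(snr(clean, denoise(noisy, thr=best_thr)), 2))
```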
Figures:
Figure 1: Structure diagram of RTD fluxgate sensitive unit.
Figure 2: Working principle of RTD fluxgate. (a) A hysteresis loop approaching the ideal state; (b) magnetic induction intensity generated in the induction coil; (c) exciting magnetic field; (d) the induced voltage output.
Figure 3: Topological structure of Wavelet Neural Network.
Figure 4: Flow chart of PSO algorithm.
Figure 5: Improved PSO–Wavelet Neural Network flow chart.
Figure 6: Denoising performance at the signal peak.
Figure 7: Overall denoising effect.
Figure 8: PSD of time difference signals.
18 pages, 4649 KiB  
Article
Development of an Aerial Manipulation System Using Onboard Cameras and a Multi-Fingered Robotic Hand with Proximity Sensors
by Ryuki Sato, Etienne Marco Badard, Chaves Silva Romulo, Tadashi Wada and Aiguo Ming
Sensors 2025, 25(2), 470; https://doi.org/10.3390/s25020470 - 15 Jan 2025
Viewed by 362
Abstract
Aerial manipulation is becoming increasingly important for practical applications of unmanned aerial vehicles (UAVs) that pick, transport, and place objects in global space. In this paper, an aerial manipulation system consisting of a UAV, two onboard cameras, and a multi-fingered robotic hand with proximity sensors is developed. To achieve self-contained autonomous navigation to a targeted object, onboard tracking and depth cameras are used to detect the target and to control the UAV to reach it, even in a Global Positioning System-denied environment. The robotic hand can stably perform proximity-sensor-based grasping of an object that lies within a position error tolerance (a circle with a radius of 50 mm) from the center of the hand. To grasp the object successfully, the position error of the hand (i.e., the UAV) while hovering after reaching the target must therefore be less than this tolerance. To meet this requirement, an object detection algorithm that supports accurate target localization by combining information from both cameras was developed. In addition, the camera mount orientation and UAV attitude sampling rate were determined experimentally, and it is confirmed that these implementations reduced the UAV position error to within the grasping tolerance of the robotic hand. Finally, aerial manipulation experiments using the developed system demonstrated successful grasping of the targeted object.
(This article belongs to the Section Sensing and Imaging)
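One step the abstract and Figure 7 mention is compensating the camera-frame object detection with the UAV's position and orientation. A minimal 2D sketch of such a body-to-world transform is shown below; the frame conventions, names, and numbers are assumptions for illustration only:

```python
import numpy as np

def compensate_object_position(obj_in_body, uav_position, uav_yaw_rad):
    """Transform an object position from the UAV body frame to the world frame (2D sketch).
    Frame conventions are assumed: x forward, y left, yaw counter-clockwise."""
    c, s = np.cos(uav_yaw_rad), np.sin(uav_yaw_rad)
    rotation = np.array([[c, -s], [s, c]])
    return uav_position + rotation @ obj_in_body

# Example: detection 1.2 m ahead and 0.3 m left of the UAV, UAV at (0.25, 0.40) with 30 deg yaw
world_xy = compensate_object_position(np.array([1.2, 0.3]), np.array([0.25, 0.40]), np.deg2rad(30))
print(world_xy)
```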
Figures:
Figure 1: Aerial manipulation scheme adopted for the developed UAV system equipped with a multi-fingered robotic hand with proximity sensors.
Figure 2: Overview of the developed UAV and multi-fingered robotic hand.
Figure 3: System architecture diagram of the aerial manipulation system.
Figure 4: Aerial manipulation procedure.
Figure 5: Schematic diagram of the object detection and localization algorithm.
Figure 6: Object detection output including the color pre-processing stage: (a) the RGB output, (b) the RGB output after applying a mask without the pre-processing step, (c) the RGB output after applying a mask with the pre-processing step. The green boxes in (b,c) represent the bounding boxes of objects detected by the algorithm.
Figure 7: Compensation for object localization based on UAV position and orientation (represented on a 2D plane for clarity).
Figure 8: Targeted object position estimation before and after compensation.
Figure 9: Camera configurations: (a) down-facing configuration, (b) front-facing configuration.
Figure 10: Flight experiment result using (a) the down-facing configuration and (b) the front-facing configuration. The grey cross markers indicate the object’s position and the green dots represent the mean position of the UAV’s flight path.
Figure 11: UAV attitudes during aerial manipulation Experiment 1: object at P_O1 = (x_O1, y_O1) = (0.250, 0.400) m.
Figure 12: UAV attitudes during aerial manipulation Experiment 2: object at P_O2 = (x_O2, y_O2) = (−0.200, −0.100) m.
Figure 13: UAV attitudes during aerial manipulation Experiment 3: object at P_O3 = (x_O3, y_O3) = (0.350, 0.150) m.
27 pages, 30735 KiB  
Article
A Cloud Detection System for UAV Sense and Avoid: Analysis of a Monocular Approach in Simulation and Flight Tests
by Adrian Dudek and Peter Stütz
Drones 2025, 9(1), 55; https://doi.org/10.3390/drones9010055 - 15 Jan 2025
Viewed by 402
Abstract
In order to contribute to the operation of unmanned aerial vehicles (UAVs) according to visual flight rules (VFR), this article proposes a monocular approach for cloud detection using an electro-optical sensor. Cloud avoidance is motivated by several factors, including improving visibility for collision prevention and reducing the risks of icing and turbulence. The described workflow is based on parallelized detection, tracking, and triangulation of features, with prior segmentation of clouds in the image. As output, the system generates a cloud occupancy grid of the aircraft’s vicinity, which can subsequently be used for cloud avoidance calculations. The proposed methodology was tested in simulation and flight experiments. To develop cloud segmentation methods, datasets were created, one of which was made publicly available and features 5488 labeled, augmented cloud images from a real flight experiment. The trained segmentation models based on the YOLOv8 framework are able to separate clouds from the background even under challenging environmental conditions. For a performance analysis of the subsequent cloud position estimation stage, calculated and actual cloud positions are compared and feature evaluation metrics are applied. The investigations demonstrate the functionality of the approach, even if challenges become apparent under real flight conditions.
(This article belongs to the Special Issue Flight Control and Collision Avoidance of UAVs)
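The workflow outputs a cloud occupancy grid, with cells above a 50% occupancy probability treated as cloud-occupied (cf. the figure captions below). A minimal log-odds occupancy-grid update illustrating that bookkeeping is sketched here; the grid size, resolution, and log-odds increments are assumptions, not the paper's parameters:

```python
import numpy as np

# Small 3D occupancy grid over (north, east, down); size and increments are assumptions
grid_logodds = np.zeros((20, 20, 5))
L_OCC, L_FREE = 0.85, -0.4  # log-odds increments for occupied / free observations

def update_cell(logodds, idx, occupied: bool):
    """Standard log-odds occupancy update for one grid cell."""
    logodds[idx] += L_OCC if occupied else L_FREE
    return logodds

def probability(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))

# A triangulated cloud feature marks its cell as occupied; cells along the ray stay free
update_cell(grid_logodds, (10, 12, 2), occupied=True)
update_cell(grid_logodds, (10, 11, 2), occupied=False)
cloud_cells = probability(grid_logodds) > 0.5  # cells treated as cloud-occupied
print(int(cloud_cells.sum()), "cloud-occupied cells")
```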
Figures:
Figure 1: Core elements of the cloud detection workflow. Processing steps are executed repeatedly in clockwise direction.
Figure 2: Benefit of cloud segmentation. Different cloud scenes captured during flight experiments on 12 October 2023 and 14 May 2024 in the Upper Bavaria area of Germany, showing detected features without (a–c) and with (d–f) prior cloud segmentation. Predicted cloud masks are shown as orange contours (d–f). ORB features are marked in red and Shi–Tomasi features are drawn in green.
Figure 3: Two-dimensional (a) and three-dimensional (b) occupancy grids. (a) shows occluded and out-of-FOV cells (orange), cloud-occupied cells (blue), and cloud-free cells (white), while (b) shows cells with p_updated(n, e, d) > 0.5 in green and the true cloud position with its dimensions in red. In addition, the flight test carrier, described in Section 2.4.3, is represented as a 3D model in order to visualize the pose (b).
Figure 4: Zlin Savage VLA research platform with pods underneath the wings (a). Additionally shown is the sensor pod with the gimbal reconnaissance sensor during a flight test (b).
Figure 5: Simulated cloud scenes (a–c) and corresponding segmentation masks below (d–f).
Figure 6: Flight recordings covering different cloud scenes from experiments on 14 May 2024 (a), 12 October 2023 (b), and 17 July 2024 (c), with the corresponding cloud mask predictions below (d–f).
Figure 7: Simulated cloud approach scenario with segmented contours (orange) and detected and tracked cloud features (red).
Figure 8: Feature amount (a,b) and feature density (c,d) during simulated cloud approaches. Cloud approach speed is constant at 70 knots (a,c) and 250 knots (b,d). Blue curves show the 50 m baseline configuration and red curves the 200 m baseline configuration.
Figure 9: Total feature losses (blue) and contributions from separate filter stages (other colors) during cloud approaches at constant speeds of 70 knots (a) and 250 knots (b), with a baseline of 200 m between sample frames.
Figure 10: Comparison between cloud-occupied grid cells inside (continuous lines) and outside (dashed lines) the CGTV for the 50 m baseline (blue) and the 200 m baseline (red) at constant speeds of 70 kt (a) and 250 kt (b).
Figure 11: RViz display showing 3D occupancy grid cell distribution for cloud approaches at 70 kt (a,b) and 250 kt (c,d). Snapshots of cloud occupancy are visualized at 12 km (a,c) and 8 km (b,d) distance between UAVs and cloud centers with a triangulation baseline of 200 m. Cloud-occupied cells are marked in green and the cloud ground-truth volumes are drawn in red.
Figure 12: Total number of cloud-occupied grid cells outside of the CGTV (dashed blue curves) and number of outside cells within certain CGTV vicinity ranges, dependent on the color. Displayed are speed–baseline combinations of 70 kt–50 m (a), 250 kt–50 m (b), 70 kt–200 m (c), and 250 kt–200 m (d).
Figure 13: Approached cloud formation with segmented cloud areas (orange) and detected and tracked features (red). The flight test was conducted on 14 May 2024 in the Upper Bavaria region of Germany.
Figure 14: Number of detected and tracked features (a) and feature density (b) during the cloud approach.
Figure 15: Total feature losses (blue) and parts of the losses of the separate filter stages (other colors) during the cloud approach.
Figure 16: RViz snapshots showing the above view and third-person view of the 3D cloud occupancy grid after 7.2 s (a,b) and 21.35 s (c,d). Cloud log positions are illustrated for the front-left cloud (red) and rear-right cloud (blue) from Figure 13. Cells with an occupancy probability of over 50% are marked in green, with lighter shades indicating a higher cloud probability.
47 pages, 14403 KiB  
Review
Chemical Detection Using Mobile Platforms and AI-Based Data Processing Technologies
by Daegwon Noh and Eunsoon Oh
J. Sens. Actuator Netw. 2025, 14(1), 6; https://doi.org/10.3390/jsan14010006 - 13 Jan 2025
Viewed by 451
Abstract
The development of reliable gas sensors is very important in many fields such as safety, environment, and agriculture, and is especially essential for industrial waste and air pollution monitoring. As mobile platforms equipped with sensors, such as smartphones and drones, and their supporting technologies (wireless communication, battery performance, data processing, etc.) spread and improve, considerable effort is going into performing these tasks with portable systems such as smartphones or by installing sensors on unmanned wireless platforms such as drones. For example, research is continuously being conducted on chemical sensors for field monitoring using smartphones and on rapid monitoring of air pollution using unmanned aerial vehicles (UAVs). In this paper, we review measurement results of various chemical sensors available on mobile platforms, including drones and smartphones, and the analysis of detection results using machine learning. This topic spans a wide range of specialized fields, such as materials engineering, aerospace engineering, physics, chemistry, environmental engineering, electrical engineering, and machine learning, and it is difficult for experts in any one field to grasp the entire content. Therefore, we explain the various concepts with relatively simple pictures so that experts in different fields can comprehensively understand the overall topics.
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)
Figures:
Figure 1: Mobile platforms and sensing methods.
Figure 2: A schematic diagram of an NDIR detector.
Figure 3: Examples of gases and VOCs that need to be monitored or detected. The emission sources are explained together.
Figure 4: Examples of chemical detection using drones. (a) Detection of methane leakage using backscattered tunable diode laser absorption spectroscopy (backscattered TDLAS) [11]. Copyright 2021 by Iwaszenko et al. Reprinted with permission. (b) Response of an NDIR CO2 detector during the flight of small unmanned aerial systems (quadcopter for vertical profiles and fixed-wing for horizontal profiles) with programmed gas release [12]. Copyright 2019 by Schuyler et al. Reprinted with permission. (c) Odor prediction process using an e-nose adopted on a UAV [9]. Copyright 2021 by Elsevier. Reprinted with permission.
Figure 5: (a) Detection techniques for landmines. (b) The system architecture proposed for surface landmine detection (left) and object detection results of “starfish” (left 3 images) and “butterfly” (right 3 images) landmines. The red rectangles, yellow rectangles, and purple circles indicate correct, wrong, and missing object detection results in the images in the right box [23]. Copyright 2024 by Vivoli et al. (c) Schematic illustration of the ground penetrating radar (GPR) configured with the 1Tx + 4Rx antenna system (left). A photograph of the robotic platform on the linear test path is also shown (right) [24]. Copyright 2022 by Pryshchenko et al. (d) Illustration of the flight path for an airborne GPR system [25]. Copyright 2022 by García-Fernández et al. Reprinted with permission.
Figure 6: Applications of smartphone chemical sensors and the sensing methods.
Figure 7: (a) Wireless communication module for electrochemical detection [32]. Copyright 2024 by Boonkaew et al. (b) Additional electrochemical module adapted for a smartphone [33]. Copyright 2021 by Elsevier. Reprinted with permission. (c) Inkjet-printed colorimetric chemical sensor with optical analysis using a smartphone [34]. Copyright 2021 by Elsevier. Reprinted with permission. (d) Microfluidic paper-based fluorometric chemical sensor with optical analysis using a smartphone and attachment [35]. Copyright 2024 by Elsevier. Reprinted with permission.
Figure 8: (a) Infrared gas sensor topologies [40]. Copyright 2019 by Popa et al. (b) A TDLAS measurement set-up and CO2 and H2O measurement results [45]. Copyright 2023 by Gu et al.
Figure 9: (a) A photograph of a remote methane leak detector (RMLD)-adapted UAV (left) and a drawing of the remote gas detection mechanism (right) [47]. Copyright 2018 by Yang et al. (b) Simulation (left) and experimental (right) results of single-ended TDLAS measurement. The experimental set-up is also shown [48]. Copyright 2024 by Hansemann et al.
Figure 10: Optical technologies for chemical detection. (a) Dust detector. (b) Fluorescence quenching-based chemical detection technology. (c) Surface-enhanced Raman spectroscopy. (d) Metal nanoparticle aggregation-based colorimetric sensing technology.
Figure 11: (a) A schematic drawing of the working mechanism of an n-type MOS gas sensor. (b) Images of the MEMS device and the thermographic camera image [69]. Copyright 2021 by Chen et al.
Figure 12: (a) Basic principle of an electrochemical measurement using the K3Fe(CN)6 and K4Fe(CN)6 redox couple [96]. Copyright 2022 by Waifalkar et al. (b) A schematic drawing of an electrochemical gas sensor and a screen-printed electrode. (c) Device structure and photos of the electrochemical CO gas sensor using a gel electrolyte [94]. Copyright 2020 by Zhang et al. (d) Detection process of a protein using an electrochemical aptamer sensor for cancer detection [97]. Copyright 2016 by Zamay et al.
Figure 13: (a) Explosive detection by the fluorescence quenching method using shutter control and the impact of temperature variation on intensity. (b) Fan voltage and fluorescence intensity as a function of time (left) and fluorescence quenching efficiency with respect to airflow (right). (c) Effect of chamber structure on MOS sensor response: computational fluid dynamics simulation results (left) and sensor response (SnO2 sensor exposed to 50 ppm of ethanol) depending on the chamber structure [130]. Copyright 2019 by Elsevier. Reprinted with permission.
Figure 14: (a) Correction of temperature and humidity effects using Gaussian process regression (GPR) [133]. Copyright 2022 by Elsevier. Reprinted with permission. (b) Time-dependent drift and response degradation of SnO2- and Au-doped sensors; degradation can be controlled by doping [136]. Copyright 2000 by Elsevier. Reprinted with permission. (c) Correlation between CO measurements recorded by a commercial electrochemical sensor and by a reference sensor, and the drift of signals for CO and NO2 over a period of a year [134]. Copyright 2023 by Papaconstantinous et al.
Figure 15: ANN-based drift correction model and correction results [137]. In the scatter plots at the bottom, the gray (black) colors indicate uncorrected (corrected) data. Copyright 2024 by Koziel et al.
Figure 16: (a) Drift compensation model using an ANN, a calibration feature encoder (CFE) that extracts drift characteristics, and the RMSE of compensation results [138]. Copyright 2024 by Kwon et al. (b) Exponential moving average feature extraction from long-term period data measured from a MOS sensor array [139]. Copyright 2012 by Elsevier. Reprinted with permission.
Figure 17: Conceptual drawing of machine learning techniques. From left to right: dimension reduction (e.g., principal component analysis), data classification (e.g., SVM), regression (e.g., linear regression), artificial neural networks (e.g., deep neural network (DNN)), and ensemble learning techniques (e.g., bootstrap aggregation).
Figure 18: Conceptual illustration of dye-based colorimetric sensor arrays and a neural network-based data processing technique [173]. The colorimetric sensor array (4 × 4, 16 different dyes in total) consists of dye-immobilized filter paper sensors, each with a diameter of 10 mm. Copyright 2024 by Elsevier. Reprinted and modified with permission.
14 pages, 4547 KiB  
Article
Enhancing Wildlife Detection Using Thermal Imaging Drones: Designing the Flight Path
by Byungwoo Chang, Byungmook Hwang, Wontaek Lim, Hankyu Kim, Wanmo Kang, Yong-Su Park and Dongwook W. Ko
Drones 2025, 9(1), 52; https://doi.org/10.3390/drones9010052 - 13 Jan 2025
Viewed by 472
Abstract
Thermal imaging drones have transformed wildlife monitoring by facilitating the efficient and noninvasive monitoring of animal populations across large areas. In this study, an optimized flight path design was developed for monitoring wildlife on Guleopdo Island, South Korea, using a DJI Mavic 3T drone equipped with a thermal camera. We employed a strata-based sampling technique to reclassify topographical and land cover information and create an optimal survey plan. Using the sampling strata, key waypoints were derived, on the basis of which nine flight paths were designed to cover roughly 50% of the study area. The results demonstrated that an optimized flight path improved the accuracy of detecting Formosan sika deer (Cervus nippon taiouanus). Population estimates indicated at least 128 Formosan sika deer, with higher detection efficiency observed during cloudy weather. Customizing flight paths based on habitat characteristics proved crucial for efficient monitoring. This study highlights the potential of thermal imaging drones for accurately estimating wildlife populations and supporting conservation efforts.
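The abstract describes reclassifying topographic and land-cover layers into sampling strata from which key waypoints are derived. A minimal sketch of that reclassify-and-combine step on synthetic rasters is given below; the class breaks, layer coding, and centroid-based waypoint choice are assumptions for illustration, not the study's actual rules:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic raster layers standing in for the island's DEM, slope, and landcover maps
dem = rng.uniform(0, 300, (100, 100))        # elevation [m]
slope = rng.uniform(0, 45, (100, 100))       # slope [deg]
landcover = rng.integers(0, 2, (100, 100))   # 0 = open vegetation, 1 = forest (assumed coding)

# Reclassify each layer into coarse classes and combine into sampling strata
elev_class = np.digitize(dem, bins=[100, 200])   # 0: low, 1: mid, 2: high
slope_class = np.digitize(slope, bins=[15])      # 0: gentle, 1: steep
strata = elev_class * 4 + slope_class * 2 + landcover  # unique code per combination

# One candidate waypoint per stratum: the centroid of its cells
for code in np.unique(strata):
    rows, cols = np.nonzero(strata == code)
    print(f"stratum {code}: centroid cell = ({rows.mean():.1f}, {cols.mean():.1f}), n = {rows.size}")
```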
Figures:
Figure 1: (A) Location of Guleopdo Island, Ongjin County, Incheon, South Korea, selected as the study site (Source: Esri, Maxar, GeoEye, Earthstar Geographics, CNES/Airbus, DS, USDA, AeroGRID, IGN, and the GIS User Community); (B) photograph of Formosan sika deer inhabiting Guleopdo Island.
Figure 2: Overview of the flight path optimization process. The technical workflow includes data integration, sampling strata, strata map generation, and key waypoint determination. The designed flight path is uploaded to the drone for mission execution.
Figure 3: Depiction of thermal drone flight strategy and path execution. (A) Thermal drone detecting ground areas along designated waypoints. (B) Drone maintaining a constant altitude while traversing the flight path over varying terrain.
Figure 4: Input data used for generating the sampling strata and the resulting strata map. (Top left) DEM showing elevation across Guleopdo Island; (top right) slope map indicating terrain steepness; (bottom left) landcover map categorizing forest and open vegetation areas; and (bottom right) strata map generated from the integrated input data.
Figure 5: Field operation setup and finalized flight path on Guleopdo Island. (A) DJI Mavic 3T thermal drone used for wildlife detection. (B) Landscape view of Guleopdo Island. (C) Map showing the finalized flight paths (waylines) and survey routes used for wildlife monitoring.
Figure 6: Example images captured at the same waypoint using different sensors. (A) RGB image taken by the RGB camera of the drone; (B) thermal image showing potential wildlife heat signatures; and (C) detection marking showing individuals identified from the thermal image through visual inspection.
36 pages, 13780 KiB  
Article
Combining a Standardized Growth Class Assessment, UAV Sensor Data, GIS Processing, and Machine Learning Classification to Derive a Correlation with the Vigour and Canopy Volume of Grapevines
by Ronald P. Dillner, Maria A. Wimmer, Matthias Porten, Thomas Udelhoven and Rebecca Retzlaff
Sensors 2025, 25(2), 431; https://doi.org/10.3390/s25020431 - 13 Jan 2025
Viewed by 426
Abstract
Assessing vine vigour is essential for vineyard management and for the automation of viticulture machines, including shaking adjustments of berry harvesters during grape harvest or leaf pruning applications. To address these problems, labeled ground truth data of precisely located grapevines, based on a standardized growth class assessment, were predicted with specifically selected Machine Learning (ML) classifiers (Random Forest Classifier (RFC) and Support Vector Machines (SVM)) utilizing multispectral UAV (Unmanned Aerial Vehicle) sensor data. The input features for ML model training comprise spectral, structural, and texture feature types generated from multispectral orthomosaics (spectral features), Digital Terrain and Surface Models (DTM/DSM; structural features), and Gray-Level Co-occurrence Matrix (GLCM) calculations (texture features). The specific features were selected based on extensive literature research, particularly in the fields of precision agriculture and viticulture. To integrate only vine-canopy-exclusive features into the ML classifications, the different feature types were extracted and spatially aggregated (zonal statistics) based on a vine row mask around each grapevine position, created with a combined pixel- and object-based image segmentation technique. The extracted canopy features were progressively grouped into seven input feature groups for model training. Model overall performance metrics were optimized with grid-search-based hyperparameter tuning and repeated k-fold cross-validation. Finally, the ML-based growth class prediction results were extensively discussed and evaluated with overall (accuracy, f1-weighted) and growth-class-specific classification metrics (accuracy, user and producer accuracy).
(This article belongs to the Special Issue Remote Sensing for Crop Growth Monitoring)
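The abstract names grid-search hyperparameter tuning with repeated k-fold cross-validation for the RFC and SVM models. A compact scikit-learn sketch of that evaluation setup on synthetic per-vine features is shown below; the feature dimensions, parameter grids, and fold counts are assumptions rather than the study's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.svm import SVC

# Synthetic stand-in for per-vine canopy features (spectral/structural/texture aggregates)
rng = np.random.default_rng(5)
X = rng.random((300, 12))
y = rng.integers(1, 6, 300)  # growth classes; class coding is assumed

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)

rf_search = GridSearchCV(RandomForestClassifier(random_state=0),
                         {"n_estimators": [100, 300], "max_depth": [None, 10]},
                         scoring="f1_weighted", cv=cv)
svm_search = GridSearchCV(SVC(), {"C": [1, 10], "gamma": ["scale", 0.1]},
                          scoring="f1_weighted", cv=cv)

for name, search in [("RF", rf_search), ("SVM", svm_search)]:
    search.fit(X, y)
    print(name, "best f1-weighted:", round(search.best_score_, 3), search.best_params_)
```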
Show Figures

Graphical abstract
Figure 1: (A) Investigation area in Bernkastel-Kues within the Moselle wine region, mapped on a high-precision orthomosaic (CRS EPSG:25832, ETRS89/UTM zone 32N); red points mark the vine positions localized with differential GPS (see Section 2.5). (B) Zoomed-out view of the investigation area, the local vineyard structure, and the Moselle river (Google Earth Satellite map, QuickMapServices plugin in QGIS 3.22). (C) Intermediate zoom with the orthomosaic on the Google Earth Satellite map; the UAV-based orthomosaic and the satellite map show some offset due to differing absolute geographic accuracy and spatial resolution.
Figure 2: (A) Zoomed-out and (B) zoomed-in views of the canopy-free vine rows and training system in the investigation area (photos taken in December 2024).
Figure 3: Ground truth template examples with label descriptions for the growth classes after Porten [43], whose visual characteristics and correlations to viticultural, oenological, and environmental parameters are described in detail in [43].
Figure 4: Color-coded growth classification after Porten [43] for single grapevines in the investigation area, mapped on the multispectral orthomosaic (EPSG:25832, ETRS89/UTM zone 32N).
Figure 5: Geo- and image-processing workflow developed in QGIS and with Python geospatial libraries, forming the basis for the statistical analysis and ML-based growth class predictions.
Figure 6: Sampling rectangles around the vine positions for the zonal-statistics pixel aggregation, together with growth-class-categorized grapevine stem positions and the vine-row-extracted OSAVI (EPSG:25832, ETRS89/UTM zone 32N).
Figure 7: Growth-class-grouped CHM volume boxplots with Mann–Whitney U significance stars (** intermediate, *** vital significance).
Figure 8: Overall accuracy (%) boxplots of the seven SVM models per input feature group (1–7), with Mann–Whitney U significance stars between models where significant differences occurred in repeated k-fold cross-validation (** intermediate, *** vital significance).
Figure 9: Overall accuracy (%) boxplots of the seven RF models per input feature group (1–7), with Mann–Whitney U significance stars between models where significant differences occurred in repeated k-fold cross-validation (* weak, ** intermediate, *** vital significance).
Figure 10: Pairwise comparison of test and train overall accuracy (%) for the SVM classifier across the seven input feature groups, with Mann–Whitney U significance stars (* weak, *** vital significance).
Figure 11: Pairwise comparison of test accuracy (%) and f1-weighted score (%) for the RF classifier across the seven input feature groups, with Mann–Whitney U significance stars (*** vital significance).
Figure 12: Pairwise comparison of train overall accuracy (%) and f1-weighted score (%) for the SVM classifier across the seven input feature groups, with Mann–Whitney U significance stars (** intermediate, *** vital significance).
Figure 13: Example difference map between ground truth growth classes and the SVM 7 model predictions; values above zero indicate underestimation by the model, values below zero overestimation, and zero a perfect match. Red rectangles mark the zonal-statistics aggregation areas and the red outline the vine row mask (see Section 2.6.9); pixels outside the mask were not aggregated (EPSG:25832, ETRS89/UTM zone 32N).
15 pages, 13634 KiB  
Article
Design and Implementation of an Emergency Environmental Monitoring System
by Chaowen Li, Shan Zhu, Haiping Sun, Kejie Zhao, Linhao Sun, Shaobin Zhang, Jie Wang and Luming Fang
Electronics 2025, 14(2), 287; https://doi.org/10.3390/electronics14020287 - 12 Jan 2025
Viewed by 607
Abstract
The collection and real-time transmission of emergency environmental information are crucial for rapidly assessing the on-site situation of sudden disasters and responding promptly. However, the acquisition of emergency environmental information, particularly its seamless transmission, faces significant challenges under complex terrain and limited ground [...] Read more.
The collection and real-time transmission of emergency environmental information are crucial for rapidly assessing the on-site situation of sudden disasters and responding promptly. However, the acquisition of emergency environmental information, particularly its seamless transmission, faces significant challenges under complex terrain and limited ground communication. This paper utilizes sensors, line-of-sight communication with unmanned aerial vehicles (UAVs), and LoRa long-distance communication to establish an integrated emergency environmental monitoring system that combines real-time monitoring, UAV-mounted LoRa gateway relaying, and backend data analysis. This system achieves real-time acquisition, seamless transmission, storage management, and visualization of environmental emergency information. First, a portable emergency environmental monitoring device was developed to collect and transmit environmental factor data. Second, a UAV-mounted LoRa gateway was designed to extend the data transmission coverage, ensuring seamless communication. Finally, multiple field experiments were conducted to evaluate the system’s performance. The experimental results indicate that the system possesses reliable capabilities for emergency data collection and transmission in complex environments, providing new technical solutions and practical support for developing and applying emergency environmental monitoring systems. Full article
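As a hedged illustration of the node-to-gateway data path described in this abstract, the sketch below packs environmental readings into a compact binary payload and hands it to a UART-connected LoRa module; the serial port name, baud rate, payload layout, and reporting interval are assumptions, not the authors’ firmware.

```python
# Illustrative sensor-node sketch: pack readings and send them via a LoRa module over UART.
import struct
import time
import serial  # pyserial

def make_payload(node_id: int, temp_c: float, rh_pct: float,
                 co_ppm: float, wind_ms: float, wind_dir_deg: float) -> bytes:
    # <BHfffff: 1-byte node id, 2-byte sequence counter, five float32 readings (23 bytes total).
    make_payload.seq = getattr(make_payload, "seq", 0) + 1
    return struct.pack("<BHfffff", node_id, make_payload.seq & 0xFFFF,
                       temp_c, rh_pct, co_ppm, wind_ms, wind_dir_deg)

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as lora:   # assumed UART link to the LoRa radio
    for _ in range(3):
        payload = make_payload(1, 23.4, 55.0, 2.1, 3.7, 215.0)  # placeholder readings
        lora.write(payload)          # the LoRa module forwards the frame to the (UAV-mounted) gateway
        time.sleep(10)               # assumed 10 s reporting interval
```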
Show Figures

Figure 1: Structure of the portable environmental monitoring device: 1. ultrasonic wind speed and direction sensor; 2. device enclosure; 3. OLED display; 4. switch; 5. carbon monoxide concentration sensor; 6. temperature and humidity sensor.
Figure 2: Schematic diagram of the device.
Figure 3: Circuit framework of the portable environmental monitoring device.
Figure 4: PCB of the portable emergency environmental monitoring device.
Figure 5: Structure of the ultrasonic wind speed and direction sensor.
Figure 6: Time-difference measurement principle; X1–X4 mark the physical locations of the four ultrasonic transducers used to measure the propagation time difference under airflow, and α is the airflow angle used to determine wind direction.
Figure 7: System data interaction sequence diagram.
Figure 8: Upper machine (host) software interface.
Figure 9: Application scenario of the UAV-mounted LoRa gateway.
Figure 10: Comparison of devices: (a) meteorological station a and (b) meteorological station b fixed at the experimental site.
Figure 11: LoRa ground communication distance test: (a) maximum communication distance; (b) maximum communication distance in an open environment.
Figure 12: UAV-to-ground communication distance test: (a) LoRa gateway and antenna mounted on the UAV; (b) maximum communication distance.
Figure 13: Signal transmission quality at different flight altitudes.
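For reference, the time-difference principle sketched in Figure 6 is commonly formulated as follows; this is a generic textbook relation for a single ultrasonic path, not a formula taken from the article itself.

```latex
% t_1: downwind propagation time, t_2: upwind propagation time,
% L: path length, c: speed of sound, v: wind-speed component along the path.
\[
t_1 = \frac{L}{c + v}, \qquad t_2 = \frac{L}{c - v}
\quad\Longrightarrow\quad
v = \frac{L}{2}\left(\frac{1}{t_1} - \frac{1}{t_2}\right),
\]
% which is independent of c; combining two orthogonal paths yields the wind-speed
% magnitude and the direction angle alpha.
```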
24 pages, 2581 KiB  
Article
Intelligent Wireless Sensor Network Sensor Selection and Clustering for Tracking Unmanned Aerial Vehicles
by Edward-Joseph Cefai, Matthew Coombes and Daniel O’Boy
Sensors 2025, 25(2), 402; https://doi.org/10.3390/s25020402 - 11 Jan 2025
Viewed by 315
Abstract
Sensor selection is a vital part of Wireless Sensor Network (WSN) management. This becomes of increased importance when considering the use of low-cost, bearing-only sensor nodes for the tracking of Unmanned Aerial Vehicles (UAVs). However, traditional techniques commonly form excessively large sensor clusters, [...] Read more.
Sensor selection is a vital part of Wireless Sensor Network (WSN) management, and it becomes even more important when low-cost, bearing-only sensor nodes are used to track Unmanned Aerial Vehicles (UAVs). Traditional techniques commonly form excessively large sensor clusters, which collect redundant information that can deteriorate performance while also increasing the associated network costs. This work therefore combines a predictive posterior distribution methodology with a novel, simplified objective function to optimally identify and form smaller sensor clusters before activation and measurement collection. The goal of the proposed objective function is to reduce network communication and computation costs while maintaining the tracking performance obtained with far more sensors. The developed optimisation algorithm reduces the size of the selected sensor clusters by an average of 50% while still matching the tracking performance of general traditional techniques. Full article
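To make the idea of selecting a small cluster from predicted posteriors concrete, here is a minimal sketch under stated assumptions: the objective below (predicted posterior entropy plus a size penalty) is a stand-in, not the paper’s exact objective function, and the grid-based likelihoods are placeholders.

```python
# Illustrative exhaustive search over small candidate sensor subsets.
import itertools
import numpy as np

def predicted_posterior(prior: np.ndarray, likelihoods: dict, subset: tuple) -> np.ndarray:
    """Grid-based predicted posterior: prior times each selected sensor's predicted likelihood, renormalised."""
    post = prior.copy()
    for s in subset:
        post *= likelihoods[s]
    return post / post.sum()

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_cluster(prior, likelihoods, candidates, lam=0.1, max_size=4):
    # Score every subset up to max_size and keep the one with the lowest score.
    best, best_score = None, np.inf
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(candidates, k):
            score = entropy(predicted_posterior(prior, likelihoods, subset)) + lam * k
            if score < best_score:
                best, best_score = subset, score
    return best, best_score

# Tiny usage example on a two-sensor, four-cell grid (placeholder numbers):
prior = np.full(4, 0.25)
liks = {0: np.array([0.7, 0.1, 0.1, 0.1]), 1: np.array([0.4, 0.4, 0.1, 0.1])}
print(select_cluster(prior, liks, candidates=[0, 1], lam=0.1, max_size=2))
```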
Show Figures

Figure 1: Discretisation of the prior distribution estimate, $p(\hat{\mathbf{x}}_k^- \mid z_{k-1})$, of a single UAV target for a WSN scenario consisting of four bearing-only, omnidirectional sensors.
Figure 2: Example scenario for discretising the target distributions. The lower-left discretised area, $\mathbf{u}_{k-1}$, encompasses the posterior distribution for time step $k-1$, $p(\hat{\mathbf{x}}_{k-1}^+ \mid z_{k-1})$, whereas the upper-right discretised area, $\mathbf{v}_k$, covers the prior distribution for time step $k$, $p(\hat{\mathbf{x}}_k^- \mid z_{k-1})$; both areas have identical sample sizes and resolutions.
Figure 3: Cone-shaped bearing-only marginal likelihoods for sensors 1 and 2 when the centre grid sample is taken as the predicted bearing measurement.
Figure 4: (a) Sensor 1’s predicted likelihood function; (b) sensor 2’s predicted likelihood function.
Figure 5: (a) Predicted posterior distribution for sensor 1; (b) predicted posterior distribution for sensor 2; (c) predicted posterior distribution for all sensors; (d) actual posterior distribution for the selected sensor combination.
Figure 6: Example scenario of the sensor selection area, where sensors located within the selection square are initially considered to join a cluster.
Figure 7: Cluster size of the combinations selected with the exhaustive search technique and the developed objective function, compared to using all available nodes.
Figure 8: RMSE of the combinations selected with the exhaustive search technique and the developed objective function, compared to using all available nodes.
34 pages, 1773 KiB  
Article
Energy-Efficient Aerial STAR-RIS-Aided Computing Offloading and Content Caching for Wireless Sensor Networks
by Xiaoping Yang, Quanzeng Wang, Bin Yang and Xiaofang Cao
Sensors 2025, 25(2), 393; https://doi.org/10.3390/s25020393 - 10 Jan 2025
Viewed by 435
Abstract
Unmanned aerial vehicle (UAV)-based wireless sensor networks (WSNs) hold great promise for supporting ground-based sensors due to the mobility of UAVs and the ease of establishing line-of-sight links. UAV-based WSNs equipped with mobile edge computing (MEC) servers effectively mitigate challenges associated with long-distance [...] Read more.
Unmanned aerial vehicle (UAV)-based wireless sensor networks (WSNs) hold great promise for supporting ground-based sensors due to the mobility of UAVs and the ease of establishing line-of-sight links. UAV-based WSNs equipped with mobile edge computing (MEC) servers effectively mitigate challenges associated with long-distance transmission and the limited coverage of edge base stations (BSs), emerging as a powerful paradigm for both communication and computing services. Furthermore, incorporating simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) as passive relays significantly enhances the propagation environment and service quality of UAV-based WSNs. However, most existing studies place STAR-RISs at fixed positions, ignoring their flexibility, while others mount STAR-RISs on UAVs that act merely as flight carriers, ignoring the UAVs’ own computing and caching capabilities. To address these limitations, we propose an energy-efficient aerial STAR-RIS-aided computing offloading and content caching framework, in which we formulate an energy consumption minimization problem that jointly optimizes content caching decisions, computing offloading decisions, UAV hovering positions, and STAR-RIS passive beamforming. Given the non-convex nature of this problem, we decompose it into a content caching decision subproblem, a computing offloading decision subproblem, a hovering position subproblem, and a STAR-RIS resource allocation subproblem. We propose a combined deep reinforcement learning (DRL)–successive convex approximation (SCA) algorithm that iteratively achieves near-optimal solutions with low complexity. The numerical results demonstrate that the proposed framework effectively utilizes the resources of UAV-based WSNs and significantly reduces overall system energy consumption. Full article
(This article belongs to the Special Issue Recent Developments in Wireless Network Technology)
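As a purely schematic illustration of the decomposition described in the abstract, the loop below alternates over the four subproblems until the energy change becomes negligible; every solver stub and the energy model are placeholders, not the paper’s DRL or SCA routines.

```python
# Schematic alternating-optimization loop over the four subproblems.
import numpy as np

def solve_caching(state):      # stand-in for the content caching decision subproblem (DRL)
    return state | {"caching": np.random.rand(10) > 0.5}

def solve_offloading(state):   # stand-in for the computing offloading decision subproblem
    return state | {"offload": np.random.rand(10) > 0.5}

def solve_hover(state):        # stand-in for the UAV hovering position subproblem (SCA)
    return state | {"uav_pos": state["uav_pos"] - 0.1 * np.random.randn(3)}

def solve_beamforming(state):  # stand-in for the STAR-RIS passive beamforming subproblem (SCA)
    return state | {"phases": np.random.uniform(0, 2 * np.pi, 64)}

def energy(state):             # placeholder system-energy model, not the paper's expression
    return float(np.sum(state["offload"]) + np.linalg.norm(state["uav_pos"]))

state = {"uav_pos": np.array([0.0, 0.0, 100.0]), "offload": np.zeros(10)}
prev = np.inf
for iteration in range(50):                            # iterate the subproblems
    for step in (solve_caching, solve_offloading, solve_hover, solve_beamforming):
        state = step(state)
    e = energy(state)
    if abs(prev - e) < 1e-3:                           # stop when the energy change is negligible
        break
    prev = e
```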
Show Figures

Figure 1: System model of the aerial STAR-RIS-aided WSN.
Figure 2: Illustration of task caching and offloading for the STAR-RIS-aided UAV system.
Figure 3: Time allocation for task processing in the STAR-RIS-aided UAV system.
Figure 4: Proposed optimization framework for the energy consumption minimization problem.
Figure 5: Workflow of the PPO algorithm.
Figure 6: Energy consumption versus the number of iterations.
Figure 7: Energy consumption versus network bandwidth.
Figure 8: Energy consumption versus CPU cycles required to compute 1 bit of task data.
Figure 9: Energy consumption versus computation task size.
Figure 10: Energy consumption versus number of elements.
Figure 11: Energy consumption versus sensors’ transmit power.
Figure 12: Energy consumption versus SINR.
Figure 13: Convergence of the average weighted reward sum for various caching DRL learning rates.
20 pages, 10708 KiB  
Article
Evaluation of 3D Models of Archaeological Remains of Almenara Castle Using Two UAVs with Different Navigation Systems
by Juan López-Herrera, Serafín López-Cuervo, Enrique Pérez-Martín, Miguel Ángel Maté-González, Consuelo Vara Izquierdo, José Martínez Peñarroya and Tomás R. Herrero-Tejedor
Heritage 2025, 8(1), 22; https://doi.org/10.3390/heritage8010022 - 10 Jan 2025
Viewed by 471
Abstract
Improvements in the navigation systems incorporated into unmanned aerial vehicles (UAVs) and new sensors are improving the quality of 3D mapping results. In this study, two flights were compared over the archaeological remains of the castle of Almenara, situated in Cuenca, Spain. We [...] Read more.
Improvements in the navigation systems incorporated into unmanned aerial vehicles (UAVs), together with new sensors, are raising the quality of 3D mapping results. In this study, two flights were compared over the archaeological remains of the castle of Almenara, situated in Cuenca, Spain. One was performed with a DJI Phantom 4 (DJI Innovations Co., Ltd., Shenzhen, China) and the other with a Matrice 300 RTK (DJI Innovations Co., Ltd., Shenzhen, China) carrying the new Zenmuse P1 camera (45 MP RGB sensor). The new software incorporated into the Zenmuse P1 camera gimbal allowed the flight time to be reduced significantly. We analysed the data obtained with the two UAVs and their RGB sensors, comparing the flight times, the point clouds, and their resolution, and obtained a three-dimensional reconstruction of the castle. The work and flights carried out are described according to the type of UAV and its RTK positioning system; the improved positioning system increases flight accuracy and data acquisition quality. Compared with similar studies, the advances in UAVs and their higher-resolution sensors allowed us to reduce the data collection time while obtaining 3D models with results equivalent to those from other types of sensors. The accuracies obtained with the RTK system and the P1 camera are very high, the volumes calculated for a future archaeological excavation are precise, and the 3D models obtained by these means are excellent for preserving the cultural asset. These models can serve various purposes, such as the preservation of an asset of cultural interest, or its dissemination and analysis in further studies. We propose using this technology for similar archaeological documentation studies and for the three-dimensional reconstruction and visualisation of cultural heritage in virtual web visits. Full article
(This article belongs to the Special Issue 3D Reconstruction of Cultural Heritage and 3D Assets Utilisation)
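As a minimal sketch of the kind of accuracy check described in this abstract (comparing the photogrammetric models against points surveyed with total stations and GNSS RTK), the snippet below computes per-axis and 3D RMSE; the coordinate values and data layout are placeholders, not the study’s measurements.

```python
# RMSE between surveyed checkpoint coordinates and the same points on a photogrammetric model.
import numpy as np

# Columns: E, N, H in metres; one row per checkpoint (placeholder values).
surveyed = np.array([[500123.42, 4405678.10, 912.35],
                     [500145.80, 4405690.55, 915.02]])
model_p1 = np.array([[500123.44, 4405678.07, 912.33],
                     [500145.77, 4405690.58, 915.06]])

residuals = model_p1 - surveyed
rmse_axis = np.sqrt((residuals ** 2).mean(axis=0))                      # RMSE per axis (E, N, H)
rmse_3d = np.sqrt((np.linalg.norm(residuals, axis=1) ** 2).mean())      # overall 3D RMSE
print("RMSE E/N/H [m]:", rmse_axis, " 3D RMSE [m]:", rmse_3d)
```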
Show Figures

Figure 1: (a) The castle of Almenara, (b) located in Cuenca (Spain), (c) in the municipality of Puebla de Almenara (2°50′31″ W, 39°47′28″ N); (d) view of the municipality of Puebla de Almenara and the castle in the foothills of Sierra Jarameña. WGS84 spatial reference system.
Figure 2: Location of the castle treated in this study with its lights and shadows; own image from the Phantom 4 drone flight. Spatial reference system WGS_1984_UTM_Zone_30N.
Figure 3: Flight patterns used: (a) Phantom 4 nadiral flight; (b) Phantom 4 oblique flight; (c) Matrice 300 RTK–P1 with one nadiral and four independent oblique flights, one for each direction.
Figure 4: Matrice 300 RTK–P1 SmartOblique flight omega angles: blue (135° SE), red (45° NW), green (45° NE), and yellow (135° SW); the flight combined all of them with a kappa angle.
Figure 5: Workflow followed for (a) UAV data acquisition, (b) processing and 3D model creation, and (c) 3D model evaluation.
Figure 6: Three-dimensional point cloud model of the castle of Almenara: (a) northeast, (b) northwest, (c) southeast, and (d) southwest views.
Figure 7: Planimetry of the walled enclosure obtained from the generated orthophotography and the 3D point cloud model of the Almenara castle.
Figure 8: Three-dimensional point cloud model of the Almenara castle with a recreation of virtual walls.
Figure 9: Comparative plot of the accuracies obtained between the P1 and Phantom 4 point clouds at castle control points at different altitudes, with an R² of 0.9.
Figure 10: Profile and errors obtained at different altitudes from the quality control performed with total stations and GNSS RTK.
Figures 11–14: In each figure, images (a,c) correspond to the Phantom 4 point cloud and (b,d) to the Matrice 300 RTK and P1.
Figure 15: Digital surface model of the area near the castle and TIN model generated for the volume estimate (in grey); the study perimeter is defined as the base area.
Figure 16: DSM of the area near the collapsed wall and cross-sectional profile of the terrain.