Search Results (3,800)

Search Parameters:
Keywords = Mobile sensor

11 pages, 1100 KiB  
Article
Clinical Whole-Body Gait Characterization Using a Single RGB-D Sensor
by Lukas Boborzi, Johannes Bertram, Roman Schniepp, Julian Decker and Max Wuehr
Sensors 2025, 25(2), 333; https://doi.org/10.3390/s25020333 - 8 Jan 2025
Abstract
Instrumented gait analysis is widely used in clinical settings for the early detection of neurological disorders, monitoring disease progression, and evaluating fall risk. However, the gold-standard marker-based 3D motion analysis is limited by high time and personnel demands. Advances in computer vision now enable markerless whole-body tracking with high accuracy. Here, we present vGait, a comprehensive 3D gait assessment method using a single RGB-D sensor and state-of-the-art pose-tracking algorithms. vGait was validated in healthy participants during frontal- and sagittal-perspective walking. Performance was comparable across perspectives, with vGait achieving high accuracy in detecting initial and final foot contacts (F1 scores > 95%) and reliably quantifying spatiotemporal gait parameters (e.g., stride time, stride length) and whole-body coordination metrics (e.g., arm swing and knee angle ROM) at different levels of granularity (mean, step-to-step variability, side asymmetry). The flexibility, accuracy, and minimal resource requirements of vGait make it a valuable tool for clinical and non-clinical applications, including outpatient clinics, medical practices, nursing homes, and community settings. By enabling efficient and scalable gait assessment, vGait has the potential to enhance diagnostic and therapeutic workflows and improve access to clinical mobility monitoring.
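As a rough illustration of the kind of post-processing the abstract describes, the sketch below computes stride times from a sequence of initial-contact events and an F1 score for event detection against a gold-standard reference within a tolerance window. It is a minimal Python example with invented event times and an assumed 100 ms matching tolerance, not the vGait implementation.

```python
# Minimal sketch (not the vGait code): stride times from initial-contact events
# and an F1 score for event detection against a reference, with a tolerance window.
# All event times below are hypothetical.
import numpy as np

def stride_times(contact_times):
    """Stride time = interval between successive initial contacts of the same foot."""
    return np.diff(np.sort(contact_times))

def event_f1(detected, reference, tol=0.1):
    """Match each reference event to at most one detection within +/- tol seconds."""
    detected = sorted(detected)
    used = set()
    tp = 0
    for r in reference:
        for i, d in enumerate(detected):
            if i not in used and abs(d - r) <= tol:
                used.add(i)
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

ref = [0.00, 1.05, 2.11, 3.18]        # gold-standard initial contacts (s)
det = [0.02, 1.04, 2.15, 3.20, 4.40]  # detections from the markerless pipeline (s)
print(stride_times(ref))               # [1.05 1.06 1.07]
print(round(event_f1(det, ref), 3))    # 0.889 for this toy example
```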
Show Figures

Figure 1: Experimental setup. (A) Participants walked along a marked figure-eight path with a diagonal length of 5.1 m, allowing for both frontal-perspective and sagittal-perspective walking. (B) A total of 17 displayed keypoints were analyzed to calculate spatiotemporal gait cycle parameters.
Figure 2: Definition of spatial gait characteristics. (A) Stride length is the distance between two successive heel contacts of the same foot, while stride width is the perpendicular distance from one heel contact to the line connecting two successive heel contacts of the opposite foot (i.e., the line of progression). The FPA is the angular deviation between the foot midline and the line of progression. (B) Arm swing ROM is the maximal angular displacement of the line connecting the shoulder and wrist in the walking direction within a gait cycle. (C) Knee ROM is defined as the angular difference between the maximum extension and flexion of the knee during the gait cycle. Exemplary knee joint angle curves (mean ± SD) are shown from vGait (red line) and the ground truth (gray line). Abbreviations: FPA, foot progression angle; ROM, range of motion.
Figure 3: Histograms illustrating the temporal agreement (t_gold standard − t_vGait) of initial and final foot contacts identified by vGait compared to the gold standard during (A) frontal-perspective walking and (B) sagittal-perspective walking.
46 pages, 9965 KiB  
Article
A Digital Twin Framework to Improve Urban Sustainability and Resiliency: The Case Study of Venice
by Lorenzo Villani, Luca Gugliermetti, Maria Antonia Barucco and Federico Cinquepalmi
Land 2025, 14(1), 83; https://doi.org/10.3390/land14010083 - 3 Jan 2025
Viewed by 585
Abstract
The digital transition is one of the biggest challenges of the new millennium. One of the key drivers of this transition is the need to adapt to the rapidly changing and heterogeneous technological landscape that is continuously evolving. Digital Twin (DT) technology can promote this transition at an urban scale due to its ability to monitor, control, and predict the behaviour of complex systems and processes. As several scientific studies have shown, DTs can be developed for infrastructure and city management, facing the challenges of global changes. DTs are based on sensor-distributed networks and can support urban management and propose intervention strategies based on future forecasts. In the present work, a three-axial operative framework is proposed for developing a DT urban management system using the city of Venice as a case study. The three axes were chosen based on sustainable urban development: energy, mobility, and resiliency. Venice is a fragile city due to its cultural heritage, which needs specific protection strategies. The methodology proposed starts from an analysis of the state of the art of DT technologies and the definition of key features. Three different axes are proposed, aggregating the key features in a list of fields of intervention for each axis. The Venice open-source database is then analysed to consider the data already available for the city. Finally, a list of DT services for urban management is proposed for each axis. The results show a need to improve the city management system by adopting DT.
(This article belongs to the Special Issue Local and Regional Planning for Sustainable Development)
Show Figures

Figure 1: Digital Twin scalability from a single component up to the city level; DT systems can be used to monitor, manage, and develop forecasts.
Figure 2: Smart City diamond [40].
Figure 3: Goals for theme no. 11, Sustainable Communities and Cities, of the Sustainable Development Goals proposed by the United Nations (UN).
Figure 4: Urban Digital Twin components: tasks, features, data, and targets.
Figure 5: Methodological approach for Digital Twin development.
Figure 6: Mobility service components.
Figure 7: Venice open data analysis, related to general directives and linked by arrows with DT services related to the mobility axis.
Figure 8: Energy service components.
Figure 9: Venice open data analysis, related to general directives and linked by arrows with DT services related to the energy axis.
Figure 10: Excerpt from the PRGA (General Flood Risk Plan) of the inland part of the Municipality of Venice [164]. The map shows the risk of flooding related to the river based on four different probabilities, from R1 (moderate risk) to R4 (very high risk).
Figure 11: Excerpt from the PA (Flooding Plan) of the inland part of the Municipality of Venice [164]. The map shows the risk of flooding related to rain based on four different probabilities, from R1 (moderate risk) to R4 (very high risk).
Figure 12: Fraction of green vegetation cover in percentage (generated using European Union's Copernicus Land Monitoring Service information). The image is based on satellite data calculated on 300-square-metre pixels and ranges from zero (no vegetation) to 1 (completely covered by plants).
Figure 13: Displacement map from Copernicus satellite SAR data (generated using European Union's Copernicus Land Monitoring Service information).
Figure 14: Resiliency service components.
Figure 15: Venice open data analysis, related to general directives and linked by arrows with DT services related to the resiliency axis.
17 pages, 2803 KiB  
Article
Potential of Apple Vision Pro for Accurate Tree Diameter Measurements in Forests
by Tobias Ofner-Graff, Valentin Sarkleti, Philip Svazek, Andreas Tockner, Sarah Witzmann, Lukas Moik, Ralf Kraßnitzer, Christoph Gollob, Tim Ritter, Martin Kühmaier, Karl Stampfer and Arne Nothdurft
Remote Sens. 2025, 17(1), 141; https://doi.org/10.3390/rs17010141 - 3 Jan 2025
Viewed by 378
Abstract
The determination of diameter at breast height (DBH) is critical in forestry, serving as a key metric for deriving various parameters, including tree volume. Light Detection and Ranging (LiDAR) technology has been increasingly employed in forest inventories, and the development of cost-effective, user-friendly smartphone and tablet applications (apps) has expanded its broader use. Among these are augmented reality (AR) apps, which have already been tested on mobile devices for their accuracy in measuring forest attributes. In February 2024, Apple introduced the Mixed-Reality Interface (MRITF) via the Apple Vision Pro (AVP), offering sensor capabilities for field data collection. In this study, two apps using the AVP were tested for DBH measurement on 182 trees across 22 sample plots in a near-natural forest, against caliper-based reference measurements. Compared with the reference measurements, both apps exhibited a slight underestimation bias of −1.00 cm and −1.07 cm, and the root-mean-square error (RMSE) was 3.14 cm and 2.34 cm, respectively. The coefficient of determination (R2) between the reference data and the measurements obtained by the two apps was 0.959 and 0.978. The AVP demonstrated its potential as a reliable field tool for DBH measurement, performing consistently across varying terrain.
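The accuracy figures quoted above (bias, RMSE, R2) can be reproduced for any pair of reference and app measurements with a few lines of NumPy. The sketch below uses invented DBH values and a simple 1 − SS_res/SS_tot form of R2; it is an illustration, not the authors' analysis code.

```python
# Hedged sketch: compare app-based DBH readings against caliper reference
# measurements using bias, RMSE, and R^2. All values are invented.
import numpy as np

reference = np.array([24.0, 31.5, 18.2, 42.7, 27.3])  # caliper DBH (cm)
app_hr    = np.array([23.1, 30.2, 17.5, 41.5, 26.4])  # hypothetical app readings (cm)

bias = np.mean(app_hr - reference)                     # negative => underestimation
rmse = np.sqrt(np.mean((app_hr - reference) ** 2))
ss_res = np.sum((reference - app_hr) ** 2)             # residuals w.r.t. the 1:1 line
ss_tot = np.sum((reference - reference.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"bias = {bias:.2f} cm, RMSE = {rmse:.2f} cm, R2 = {r2:.3f}")
```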
(This article belongs to the Special Issue Remote Sensing and Smart Forestry II)
Show Figures

Figure 1: (a) A chalk-marked tree showing the height of the reference measurement in the direction of the sample plot center and a tree ID. (b) Measurement of the DBH with the AVP at the marked point. (c) First-person view while measuring with app HR. (d) Measurement of a DBH with app TM. All measurements larger than 10 cm were rounded by the apps to full centimeters.
Figure 2: Schematic illustration of the AVP in frontal view, highlighting the positions and types of integrated sensors (illustration created by the authors).
Figure 3: Model performance and residual analysis for two AVP apps: (a) predicted vs. reference values for app HR, (b) predicted vs. reference values for app TM, (c) residuals for app HR, (d) residuals for app TM.
Figure 4: Comparison of the time required per sample plot of both apps (n = 22).
Figure 5: (a) Difference in δ DBH [cm] for different tree classes; (b) difference in δ DBH [cm] for different measuring persons.
Figure A1: Residual analysis for two AVP apps: (a) Q-Q plot for residuals of app HR, (b) Q-Q plot for residuals of app TM, (c) histogram of residuals for app HR, (d) histogram of residuals for app TM.
31 pages, 5689 KiB  
Article
Reliability of an Inertial Measurement System Applied to the Technical Assessment of Forehand and Serve in Amateur Tennis Players
by Lucio Caprioli, Cristian Romagnoli, Francesca Campoli, Saeid Edriss, Elvira Padua, Vincenzo Bonaiuto and Giuseppe Annino
Bioengineering 2025, 12(1), 30; https://doi.org/10.3390/bioengineering12010030 - 2 Jan 2025
Viewed by 543
Abstract
Traditional methods for evaluating tennis technique, such as visual observation and video analysis, are often subjective and time-consuming. On the other hand, a quick and accurate assessment can provide immediate feedback to players and contribute to technical development, particularly in less experienced athletes. This study aims to validate the use of a single inertial measurement system to assess some relevant technical parameters of amateur players. Among other things, we attempt to search for significant correlations between the flexion extension and torsion of the torso and the lateral distance of the ball from the body at the instant of impact. This research involved a group of amateur players who performed a series of standardized gestures (forehands and serves) wearing a sensorized chest strap fitted with a wireless inertial unit. The collected data were processed to extract performance metrics. The percentage coefficient of variation for repeated measurements, Wilcoxon signed-rank test, and Spearman’s correlation were used to determine the system’s reliability. High reliability was found between sets of measurements in all of the investigated parameters. The statistical analysis showed moderate and strong correlations, suggesting possible applications in assessing and optimizing specific aspects of the technique, like the player’s distance to the ball in the forehand or the toss in the serve. The significant variations in technical execution among the subjects emphasized the need for tailored interventions through personalized feedback. Furthermore, the system allows for the highlighting of specific areas where intervention can be achieved in order to improve gesture execution. These results prompt us to consider this system’s effectiveness in developing an on-court mobile application.
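A minimal sketch of the reliability statistics named in the abstract: the percentage coefficient of variation across two repeated measurement sets and Spearman's correlation between them. The trial values are hypothetical and the SciPy calls are a generic stand-in for the study's analysis, not the authors' code.

```python
# Illustrative only: per-player CV% across two repeated sets and Spearman's rho
# between the sets, mirroring the reliability analysis described above.
import numpy as np
from scipy.stats import spearmanr

trial_1 = np.array([42.1, 39.8, 45.0, 41.2, 43.6])  # e.g. lateral distance (cm), set 1
trial_2 = np.array([41.5, 40.3, 44.2, 42.0, 43.1])  # same players, set 2 (hypothetical)

pairs = np.vstack([trial_1, trial_2])                # shape (2 trials, 5 players)
cv_per_player = pairs.std(axis=0, ddof=0) / pairs.mean(axis=0) * 100
rho, p = spearmanr(trial_1, trial_2)

print(f"mean CV% = {cv_per_player.mean():.2f}")
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```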
(This article belongs to the Special Issue Biomechanics of Physical Exercise)
Show Figures

Figure 1: Illustration of the setup for forehand measurements: the two action cameras were aligned about 5 m from the point of impact and placed on a tripod at 1.10 m above the ground; the Tennis Tutor Plus ball-launching machine was positioned on the ground near the opposite baseline at 1.60 m from the mid-point.
Figure 2: Illustration of different body positions and trunk inclination (black dashed line) in relation to the ball distance from the longitudinal axis (orange dashed line), coincident with the first toe of the nondominant foot. The arrows represent the distance between the ball and the longitudinal reference axis.
Figure 3: Ball distance detection during the serve at the instant of impact from the longitudinal axis, coincident with the first toe of the nondominant foot in the starting position. The arrow represents the distance between the ball and the longitudinal reference axis.
Figure 4: Illustration of the change in length of the reference object (the racquet) on the camera plane due to the tilt of the racquet. The arrows represent the width (8.23 m) and length (23.77 m) of the tennis court, and the length of the racquet (68.5 cm) positioned during impact about 1.6 m from the center of the court. Angles α′ and α″ constitute the maximum inclination of the racquet with respect to perpendicularity to the dashed red line, and consequently l′ and l″ the maximum possible deformation in length of the tool observed from the rear camera.
Figure 5: Illustration of the angle of inclination of the trunk (α) between the X-axis of the sensor and the global Z-axis (the direction of the Earth's gravitational force). The dashed black lines represent the direction of the Earth's gravitational force (global Z) and the X-axis of the sensor (coincident with the green arrow), while the blue arrow represents the Z-axis of the sensor.
Figure 6: Illustration of the trunk rotation angle (Az) on the horizontal plane with respect to the direction of the Earth's magnetic north, N. The black dashed line represents the direction of Earth's magnetic north (N), the blue and red arrows stand for the Z- and Y-axes of the sensor, respectively, while the white dashed line marks the azimuth angle (Az).
Figure 7: Bland–Altman plots with 95% limits of agreement (LoA) showing the difference of measurement between the two session trials relative to lateral distance (a); Gyr X (b); Acc Z (c); dv (d); α (e); and Az (f). The bold dashed lines represent the mean difference and the limits (LoA), while the dotted lines and the gray background show the 95% confidence intervals.
Figure 8: Boxplot distribution of the lateral distance of all of the shots played by the 21 players. The dots indicate the outliers.
Figure 9: Boxplot distribution of the angular torsion velocity (Gyr X) (a) and horizontal acceleration (Acc Z) (b) of all of the shots played by the 21 players. The dots indicate the outliers.
Figure 10: Boxplot distribution of the trunk angle (α) (a) and azimuth (Az) (b) of all of the shots played by the 21 players. The dots indicate the outliers.
Figure 11: Correlation between stance type (1: neutral stance; 2: semi-open stance; 3: open stance) and lateral distance (a) and angular torsion velocity (Gyr X) (b). The dots indicate the outliers.
Figure 12: Correlation between lateral distance and angular torsion velocity (Gyr X). Green dashed lines indicate 95% prediction intervals.
Figure 13: Spearman's correlation (ρ) in the neutral stance forehand: (a) Gyr X—lateral distance; (b) Acc Z—lateral distance; (c) Gyr X—Acc Z; (d) Az—lateral distance; (e) α—lateral distance; (f) α—Acc Z; (g) Az—dv; (h) Az—Acc Z; (i) Az—Gyr X. Gyr X: trunk angular torsion velocity; Acc Z: horizontal acceleration; dv: horizontal velocity; α: trunk angle; Az: azimuth. Green dashed lines indicate 95% prediction intervals.
Figure 14: Spearman's correlation (ρ) in the open stance forehand: (a) Acc Z—lateral distance; (b) dv—lateral distance; (c) dv—Gyr X; (d) α—lateral distance; (e) Az—lateral distance; (f) Az—Gyr X; (g) Az—Acc Z; (h) Gyr X—Acc Z; (i) Az—dv. Gyr X: trunk angular torsion velocity; Acc Z: horizontal acceleration; dv: horizontal velocity; α: trunk angle; Az: azimuth. Green dashed lines indicate 95% prediction intervals.
Figure 15: Bland–Altman plots with 95% limits of agreement (LoA) showing the difference of measurement between the two session trials relative to lateral distance (a); APD (b); Gyr X (c); Gyr Z (d); Acc Z (e); Acc X (f). The bold dashed lines represent the mean difference and the limits (LoA), while the dotted lines and the gray background show the 95% confidence intervals.
Figure 16: Bland–Altman plot with 95% limits of agreement (LoA) showing the difference of measurement between the two session trials relative to the angle α. The bold dashed lines represent the mean difference and the limits (LoA), while the dotted lines and the gray background show the 95% confidence intervals.
Figure 17: Boxplot distribution of the lateral (a) and anteroposterior (b) distance in the serves played by the 13 players. The dots indicate the outliers.
Figure 18: Boxplot distribution of the torsion angular velocity (Gyr X) (a) and shoulder-over-shoulder angular velocity (Gyr Z) (b) in the serves played by the 13 players. The dots indicate the outliers.
Figure 19: Spearman's correlation (ρ) in the serve: (a) α—lateral distance; (b) Gyr Z—lateral distance; (c) Gyr X—lateral distance; (d) APD—lateral distance; (e) Acc Z—lateral distance; (f) Gyr X—Acc Z; (g) Gyr Z—Acc Z; (h) Acc Z—APD; (i) Gyr X—APD. Gyr X: trunk angular torsion velocity; Gyr Z: shoulder-over-shoulder angular velocity; Acc Z: horizontal acceleration; APD: anterior–posterior distance; α: trunk angle. Green dashed lines indicate 95% prediction intervals.
Figure 20: Spearman's correlation (ρ) in the serve: (a) α—Acc Z; (b) α—Gyr X; (c) α—Gyr Z. Gyr X: trunk angular torsion velocity; Gyr Z: shoulder-over-shoulder angular velocity; Acc Z: horizontal acceleration; α: trunk angle. Green dashed lines indicate 95% prediction intervals.
Figure 21: Spearman's correlation (ρ) in the serve: (a) Acc X—lateral distance; (b) Acc X—α; (c) Acc X—Acc Z. Acc X: vertical acceleration; Acc Z: horizontal acceleration; α: trunk angle. Green dashed lines indicate 95% prediction intervals.
20 pages, 15263 KiB  
Article
An Efficient Cluster-Based Mutual Authentication and Key Update Protocol for Secure Internet of Vehicles in 5G Sensor Networks
by Xinzhong Su and Youyun Xu
Sensors 2025, 25(1), 212; https://doi.org/10.3390/s25010212 - 2 Jan 2025
Viewed by 259
Abstract
The Internet of Vehicles (IoV), a key component of smart transportation systems, leverages 5G communication for low-latency data transmission, facilitating real-time interactions between vehicles, roadside units (RSUs), and sensor networks. However, the open nature of 5G communication channels exposes IoV systems to significant security threats, such as eavesdropping, replay attacks, and message tampering. To address these challenges, this paper proposes the Efficient Cluster-based Mutual Authentication and Key Update Protocol (ECAUP) designed to secure IoV systems within 5G-enabled sensor networks. The ECAUP meets the unique mobility and security demands of IoV by enabling fine-grained access control and dynamic key updates for RSUs through a factorial tree structure, ensuring both forward and backward secrecy. Additionally, physical unclonable functions (PUFs) are utilized to provide end-to-end authentication and physical layer security, further enhancing the system’s resilience against sophisticated cyber-attacks. The security of the ECAUP is formally verified using BAN Logic and ProVerif, and a comparative analysis demonstrates its superiority in terms of overhead efficiency (more than 50%) and security features over existing protocols. This work contributes to the development of secure, resilient, and efficient intelligent transportation systems, ensuring robust communication and protection in sensor-based IoV environments.
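The only quantitative detail available here about the factorial tree is the sizing rule stated in the Figure 2 caption below: level t holds (t + 1)! leaf nodes. The toy snippet below merely evaluates that counting rule to show how quickly the accessible-device table grows; it does not implement the ECAUP protocol itself.

```python
# Toy illustration of the factorial-tree sizing mentioned for the accessible-device
# table (see Figure 2 caption): level t holds (t + 1)! leaf nodes.
from math import factorial

def leaves_at_level(t: int) -> int:
    """Number of leaf nodes at level t of the factorial tree."""
    return factorial(t + 1)

for t in range(1, 6):
    print(f"level {t}: {leaves_at_level(t)} leaf nodes")
# level 1: 2, level 2: 6, level 3: 24, level 4: 120, level 5: 720
```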
(This article belongs to the Special Issue Advances in Security for Emerging Intelligent Systems)
Show Figures

Figure 1: IoV authentication model.
Figure 2: Factorial-tree-based accessible device table. The number of leaf nodes at each level of the factorial tree is (t + 1)!, where t is the level of the tree.
Figure 3: RSU registration.
Figure 4: Mutual authentication between RSU and IoVD.
Figure 5: IoVD join and leave.
Figure 6: ProVerif simulation results.
Figure 7: Comparison of communication cost and calculation cost.
31 pages, 4517 KiB  
Article
Resource Management and Secure Data Exchange for Mobile Sensors Using Ethereum Blockchain
by Burhan Ul Islam Khan, Khang Wen Goh, Abdul Raouf Khan, Megat F. Zuhairi and Mesith Chaimanee
Symmetry 2025, 17(1), 61; https://doi.org/10.3390/sym17010061 - 1 Jan 2025
Viewed by 639
Abstract
A typical Wireless Sensor Network (WSN) defines the usage of static sensors; however, the growing focus on smart cities has led to a rise in the adoption of mobile sensors to meet the varied demands of Internet of Things (IoT) applications. This results in significantly increasing dependencies towards secure storage and effective resource management. One way to address this issue is to harness the immutability property of the Ethereum blockchain. However, the existing challenges in IoT communication using blockchain are noted to eventually lead to symmetry issues in the network dynamics of Ethereum. The key issues related to this symmetry are scalability, resource disparities, and centralization risk, which offer sub-optimal opportunities for nodes to gain benefits, influence, or participate in the processes in the blockchain network. Therefore, this paper presents a novel blockchain-based computation model for optimizing resource utilization and offering secure data exchange during active communication among mobile sensors. An empirical method of trust computation was carried out to identify the degree of legitimacy of mobile sensor participation in the network. Finally, a novel cost model has been presented for cost estimation and to enhance the users’ quality of experience. With the aid of a simulation study, the benchmarked outcome of the study exhibited that the proposed scheme achieved a 40% reduced validation time, 28% reduced latency, 23% improved throughput, 38% minimized overhead, 27% reduced cost, and 38% reduced processing time, in contrast to the existing blockchain-based solutions reported in the literature. This outcome prominently exhibits fairer symmetry in the network dynamics of Ethereum presented in the proposed system.
(This article belongs to the Special Issue Symmetry in Cyber Security and Privacy)
Show Figures

Figure 1: Proposed conceptual model of securing communication in sensory applications.
Figure 2: Communication system among the actors.
Figure 3: Enrollment and validation of sensor.
Figure 4: Flowchart of adopted validation.
Figure 5: High-level diagram of the proposed framework.
Figure 6: Comparative assessment of validation time [28,29,37–39,43–49].
Figure 7: Comparative assessment of latency [28,29,37–39,43–49].
Figure 8: Comparative assessment of throughput [28,29,37–39,43–49].
Figure 9: Comparative assessment of overhead [28,29,37–39,43–49].
Figure 10: Comparative assessment of cost [28,29,37–39,43–49].
Figure 11: Comparative assessment of processing time [28,29,37–39,43–49].
19 pages, 5238 KiB  
Article
In Situ Raman Spectroscopy for Early Corrosion Detection in Coated AA2024-T3
by Adrienne K. Delluva, Ronald L. Cook, Matt Peppel, Sami Diaz, Rhia M. Martin, Vinh T. Nguyen, Jeannine E. Elliott and Joshua R. Biller
Sensors 2025, 25(1), 179; https://doi.org/10.3390/s25010179 - 31 Dec 2024
Viewed by 350
Abstract
Here we describe the synthesis and evaluation of a molecular corrosion sensor that can be applied in situ in aerospace coatings, then used to detect corrosion after the coating has been applied. A pH-sensitive molecule, 4-mercaptopyridine (4-MP), is attached to a gold nanoparticle to allow surface-enhanced Raman-scattering (SERS) for signal amplification. These SERS nanoparticles, when combined with an appropriate micron-sized carrier system, are incorporated directly into an MIL-SPEC coating and used to monitor the onset and progression of corrosion using pH changes occurring at the metal–coating interface. The sensor can track corrosion spatially as it proceeds underneath the coating, due to the mobility of the proton front generated during corrosion and the homogeneous distribution of the sensor in the coating layer. To our knowledge, this report is the first time a 4-MP functionalized gold nanoparticle has been used, along with SERS spectroscopy, to monitor corrosion in an applied commercial coating in a fast, non-contact way.
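The corrosion readout in the figures below is reported as a percentage change in a peak ratio (PRR) of the 4-MP Raman bands, which shifts as the local pH drops. The sketch below shows only that bookkeeping with invented intensities; treating PRR as a simple ratio of two band intensities is an assumption for illustration, not a definition taken from the paper.

```python
# Hedged sketch with hypothetical numbers: PRR is assumed here to be the ratio of
# two 4-MP band intensities, and corrosion is flagged by its percentage change.
def prr(intensity_band1: float, intensity_band2: float) -> float:
    return intensity_band1 / intensity_band2

baseline = prr(1250.0, 980.0)    # intact coating (arbitrary counts)
exposed  = prr(1020.0, 1130.0)   # after salt-fog exposure (arbitrary counts)

pct_change = 100.0 * (exposed - baseline) / baseline
print(f"PRR change: {pct_change:.1f}%")  # a negative change would flag acidification
```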
(This article belongs to the Special Issue Nanotechnology Applications in Sensors Development)
Show Figures

Figure 1: Illustration of corrosion occurring near the metal surface on AA-2024. An acidic environment is a hallmark of severe corrosion.
Figure 2: SEM of sensor powder in backscattering mode: bright spots are gold.
Figure 3: Raman spectrum of powder corrosion sensor loaded into a primer at 150 ppm, compared to the Raman spectra of the primer alone and the powder sensor alone. (Inset) The Raman spectra of the primary peaks of interest for 4-MP as a function of changing pH, from 1.2 to 12.6.
Figure 4: Percentage change in PRR for (A) the sensor dispersed in aqueous solution as a function of pH and (B) loaded into the primer coating, with pH solution applied to the coatings. Both graphs share the same legend.
Figure 5: Corrosion exposure results for a scribed AA-2024 panel coated in MIL-DTL-53030 primer, loaded with 150 ppm of corrosion sensor. (A) Photos of the scribe center as a function of time. (B) Percentage change in the PRR as a function of time.
Figure 6: MIL-DTL-53030 primer loaded with corrosion sensor was applied to panels with no pretreatment, or those that had been treated with an alodine conversion coating. (A) Raman signal as a function of time in the salt fog. (B) A representative "bare" (no pretreatment) panel after 1500 h of exposure to ASTM B117. (C) Photographs of the alodine panel, stripped after 2000 h in ASTM B117. (Inset) Deep corrosion damage is present, which was indicated by the corrosion sensor prior to stripping.
Figure 7: (A) Photos of non-ideal test panel surfaces covered in (left to right) hydraulic fluid, pristine, covered in dirt, and curved. (B) Raman spectra of the different panel states.
Figure 8: Comparison of accelerated corrosion on a 3″ × 3″ AA-2024 panel coated in MIL-DTL-53030, as assessed by the Raman corrosion sensor or electrical impedance spectroscopy (EIS). The decrease in the charge transfer resistance tracks well with the decrease in the Raman sensor, brought on by a decrease in pH due to active and severe corrosion.
Figure 9: (A) 12″ × 12″ panel used for spatial resolution testing. The scribe is in the bottom right, and circles are marked with pen on the surface of the panel to measure the same locations at each time point in ASTM B117. (B) Zoomed-in scribe after 2500 h in ASTM B117. (C) Zoomed-in scribe after the coating was stripped at 6000 h in ASTM B117. Note the holes where it has corroded clean through the panel. (D) PRR values at the spots labeled A in the salt fog.
22 pages, 1580 KiB  
Article
Predictive Forwarding Rule Caching for Latency Reduction in Dynamic SDN
by Doosik Um, Hyung-Seok Park, Hyunho Ryu and Kyung-Joon Park
Sensors 2025, 25(1), 155; https://doi.org/10.3390/s25010155 - 30 Dec 2024
Viewed by 360
Abstract
In mission-critical environments such as industrial and military settings, the use of unmanned vehicles is on the rise. These scenarios typically involve a ground control system (GCS) and nodes such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs). The GCS and nodes exchange different types of information, including control data that direct unmanned vehicle movements and sensor data that capture real-world environmental conditions. The GCS and nodes communicate wirelessly, leading to loss or delays in control and sensor data. Minimizing these issues is crucial to ensure nodes operate as intended over wireless links. In dynamic networks, distributed path calculation methods lead to increased network traffic, as each node independently exchanges control messages to discover new routes. This heightened traffic results in internal interference, causing communication delays and data loss. In contrast, software-defined networking (SDN) offers a centralized approach by calculating paths for all nodes from a single point, reducing network traffic. However, shifting from a distributed to a centralized approach with SDN does not inherently guarantee faster route creation. The speed of generating new routes remains independent of whether the approach is centralized, so SDN does not always lead to faster results. Therefore, a key challenge remains: determining how to create new routes as quickly as possible even within an SDN framework. This paper introduces a caching technique for forwarding rules based on predicted link states in SDN, named the CRIMSON (Caching Routing Information in Mobile SDN Network) algorithm. The CRIMSON algorithm detects network link state changes caused by node mobility and caches new forwarding rules based on predicted topology changes. We validated that the CRIMSON algorithm consistently reduces end-to-end latency by an average of 88.96% and 59.49% compared to conventional reactive and proactive modes, respectively.
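A simplified sketch of the link-prediction step summarized above (and in the Figure 8 caption below): pairwise distances between predicted node positions are compared against a nominal communication range to build a 0/1 adjacency matrix. The positions and range are invented, and this is a conceptual stand-in, not the CRIMSON source code.

```python
# Simplified sketch of building a predicted adjacency matrix from predicted node
# positions and a nominal communication range. All values are hypothetical.
import numpy as np

def predicted_adjacency(positions: np.ndarray, comm_range: float) -> np.ndarray:
    """positions: (n, 3) predicted coordinates; returns an n x n 0/1 adjacency matrix."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist <= comm_range).astype(int)
    np.fill_diagonal(adj, 0)  # no self-links
    return adj

pred_positions = np.array([[0, 0, 10], [60, 0, 10], [120, 0, 10],
                           [60, 70, 10], [60, 140, 10]], dtype=float)
print(predicted_adjacency(pred_positions, comm_range=100.0))
# Forwarding rules would then be cached for every link marked 1 in this matrix.
```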
Show Figures

Figure 1: Necessity of forwarding rule updates in a dynamic network. In a dynamic network, topology changes and link changes occur as nodes move around. Accordingly, forwarding rules for node-specific communication must be updated.
Figure 2: Comparison between traditional communication and SDN methods. In a traditional network, a control plane is configured on each node. However, in an SDN environment, only the central controller has a control plane; in this case, the central controller provides the forwarding rules.
Figure 3: LLDP transmission process for communication between SDN nodes. In an SDN, the SDN controller recognizes new switches through the process of packet-out, delivering LLDP, and packet-in to the switches it already knows.
Figure 4: Representation of node link states using an adjacency matrix. Graph data for each topology are represented as an adjacency matrix. Depending on the number of nodes (n), an n × n matrix is formed, where each row and column represents the connection status between nodes.
Figure 5: Confusion matrix. The confusion matrix calculates Precision, NPV, Specificity, and Recall using the TP, TN, FP, and FN metrics. This matrix is used to evaluate the performance of classification models.
Figure 6: System model. This environment includes a GCS and multiple mobile unmanned nodes. The GCS and nodes transmit various types of communication, such as data collection, topology maintenance, and command control.
Figure 7: CRIMSON flow. CRIMSON is composed of three main steps: first, topology change detection and generation of the predicted node locations; second, the creation of a predictive adjacency matrix; third, caching the forwarding rules for the predicted link states. Through this process, CRIMSON prepares forwarding rules in advance, reflecting the predicted link states.
Figure 8: CRIMSON flow chart. The analysis of time-series data calculates node movement trends, and if they exceed a threshold, the system predicts the node positions. The system calculates distances between nodes using the predicted positions and checks them against the communication range to generate an adjacency matrix. The matrix updates forwarding rules for both direct and alternative paths.
Figure 9: Types of topologies used in the simulation. We use five topologies consisting of five nodes with UAV modeling applied. The topology shapes used, from left to right, are linear, v-shaped, trapezoid, star, and pentagon.
Figure 10: Evaluation of confusion matrix metrics within the threshold range of 0.001 to 0.035. We assess the values of Precision, Recall, NPV, and Specificity throughout this threshold range. Afterward, we select the optimal threshold value that produces the highest average among these four metrics.
Figure 11: Optimization process for finding the Shiftpoint. This process applies three optimization methods to the average value of the four confusion matrix metrics.
Figure 12: Latency comparison of CRIMSON based on RTT tests. The comparison includes reactive mode and proactive mode. The simulation measured latency using rtt avg, rtt max, and rtt mdev.
Figure 13: Evaluation of LLDP usage in CRIMSON. The proposed CRIMSON method indicates a lower LLDP count compared to the proactive mode. In an SDN system, LLDP packets are transmitted when packet processing is not handled. This indicates that CRIMSON performs packet processing effectively in dynamic networks.
Figure 14: Network latency comparison of CRIMSON at various bandwidths. We conduct RTT tests at 0.5 Mbps, 1 Mbps, 5 Mbps, and 10 Mbps for the proposed CRIMSON algorithm. Simulation results confirm that CRIMSON achieves lower and more stable network latency.
27 pages, 3145 KiB  
Article
Optimized Frontier-Based Path Planning Using the TAD Algorithm for Efficient Autonomous Exploration
by Abror Buriboev, Andrew Jaeyong Choi and Heung Seok Jeon
Electronics 2025, 14(1), 74; https://doi.org/10.3390/electronics14010074 - 27 Dec 2024
Viewed by 324
Abstract
A novel path-planning method utilizing the trapezoid, adjacent, and distance (TAD) characteristics of frontiers is presented in this work. The method uses the mobile robot’s sensor range to detect frontiers throughout each exploration cycle, modifying them at regular intervals to produce their parameters. This well-thought-out approach makes it possible to choose objective points carefully, guaranteeing seamless navigation. The effectiveness and applicability of the suggested approach with respect to exploration time and distance are demonstrated by empirical validation. Results from experiments show notable gains over earlier algorithms: time consumption decreases by 10% to 89% and overall path distance for full investigation decreases by 12% to 74%. These remarkable results demonstrate the efficacy of the suggested approach and represent a paradigm change in improving mobile robot exploration in uncharted territory. This research introduces a refined algorithm and paves the way for greater efficiency in autonomous robotic exploration.
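For orientation only, the snippet below shows the simplest possible goal-selection rule: picking the nearest frontier by Euclidean distance. It stands in for just one of the three TAD criteria (distance); the trapezoid and adjacency scores described in the abstract are not reproduced here, and the coordinates are hypothetical.

```python
# Minimal stand-in for the goal-selection step (distance criterion only).
# Frontier cells and the robot pose are hypothetical map coordinates.
import math

def nearest_frontier(robot_xy, frontiers):
    """Return the frontier cell with the smallest Euclidean distance to the robot."""
    return min(frontiers, key=lambda f: math.dist(robot_xy, f))

robot = (5.0, 5.0)
frontier_cells = [(12.0, 4.0), (6.0, 14.0), (2.0, 9.0)]
print(nearest_frontier(robot, frontier_cells))  # (2.0, 9.0), at distance 5.0
```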
(This article belongs to the Special Issue Autonomous and Intelligent Robotics)
Show Figures

Figure 1: The scheme of the process.
Figure 2: Navigation policy of the Rmap algorithm: (a) initial state of the robot; (b) environment scanning; (c) generating the first rectangle; (d) generating the second rectangle.
Figure 3: The block scheme of the proposed strategy.
Figure 4: Proposed selection methods: (a) distance parameter; (b) adjacent parameter; (c) trapezoid parameter.
Figure 5: Experimental environments: (a) small and cyclic environment; (b) small and non-cyclic environment; (c) wide environment.
Figure 6: The first round of the algorithm in a cyclic simulated environment: (a) trapezoid parameter; (b) adjacent parameter; (c) distance parameter.
Figure 7: The second round of the algorithm in a cyclic simulated environment: (a) trapezoid parameter; (b) adjacent parameter; (c) distance parameter.
Figure 8: The trajectory of the robot after exploration in a small cyclic environment: (a) proposed algorithm; (b) alternative algorithm I; (c) alternative algorithm II.
Figure 9: The trajectory of the robot after exploration in a small non-cyclic environment: (a) proposed algorithm; (b) alternative algorithm I; (c) alternative algorithm II.
Figure 10: The trajectory of the robot in a wide environment: (a) proposed algorithm; (b) alternative algorithm I; (c) alternative algorithm II.
Figure 11: Memory efficiency of the mapping scheme: (a) grid mapping; (b) Rmap mapping.
Figure 12: Number of turning points during the experiments.
Figure 13: Traveled distance during the experiments.
Figure 14: Exploration time of the robot.
13 pages, 5322 KiB  
Article
Assessment of LiDAR-Based Sensing Technologies in Bird–Drone Collision Scenarios
by Paula Seoane, Enrique Aldao, Fernando Veiga-López and Higinio González-Jorge
Drones 2025, 9(1), 13; https://doi.org/10.3390/drones9010013 - 27 Dec 2024
Viewed by 348
Abstract
The deployment of Advanced Air Mobility requires the continued development of technologies to ensure operational safety. One of the key aspects to consider here is the availability of robust solutions to avoid tactical conflicts between drones and other flying elements, such as other drones or birds. Bird detection is a relatively underexplored area, but due to the large number of birds, their shared airspace with drones, and the fact that they are non-cooperative elements within an air traffic management system, it is of interest to study how their detection can be improved and how collisions with them can be avoided. This work demonstrates how a LiDAR sensor mounted on a drone can detect birds of various sizes. A LiDAR simulator, previously developed by the Aerolab research group, is employed in this study. Six different collision trajectories and three different bird sizes (pigeon, falcon, and seagull) are tested. The results show that the LiDAR can detect any of these birds at about 30 m; bird detection improves when the bird gets closer and has a larger size. The detection error is below 1 m in most of the cases under study. The errors grow with increasing drone-bird relative speed.
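A toy version of the detection bookkeeping: the bird position is estimated as the centroid of the LiDAR echoes returned from its mesh and compared with the simulated ground-truth position. The echo points and true position below are invented, and the centroid estimator is an assumption for illustration, not necessarily what the Aerolab simulator uses.

```python
# Illustrative sketch with invented echo points: estimate the bird position as the
# centroid of its LiDAR echoes and measure the error against the simulated truth.
import numpy as np

echoes = np.array([[29.8, 1.1, 4.9],
                   [30.1, 0.9, 5.2],
                   [30.0, 1.3, 5.0]])       # hit points on the bird mesh (m)
true_position = np.array([30.0, 1.0, 5.0])  # simulated bird centre (m)

estimate = echoes.mean(axis=0)
error = np.linalg.norm(estimate - true_position)
print(f"estimated position {estimate}, error = {error:.2f} m")
```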
Show Figures

Figure 1: Operational scenarios. (a) Trajectory 1, (b) trajectory 2, (c) trajectory 3, (d) trajectory 4, (e) trajectory 5, and (f) trajectory 6.
Figure 2: Randomized operational scenarios. (a) r0 = [50, −17, 0] m; v0 = [−10, 10, 1] m/s; a0 = [−0.6, −0.8, 0.2] m/s². (b) r0 = [20, 17, 0] m; v0 = [11, −8, −1.2] m/s; a0 = [0, −1.2, −0.3] m/s².
Figure 3: Bird 3D models: (a) pigeon, (b) falcon, and (c) seagull.
Figure 4: LiDAR detection algorithm.
Figure 5: LiDAR echoes. (a) Pigeon 3D model (left) and point cloud (right), (b) falcon 3D model (left) and point cloud (right), and (c) seagull 3D model (left) and point cloud (right).
Figure 6: LiDAR echoes simulated for each bird and trajectory.
Figure 7: Position detection error depending on the operational scenario: (a) trajectory 1, (b) trajectory 2, (c) trajectory 3, (d) trajectory 4, (e) trajectory 5, and (f) trajectory 6.
Figure 8: Error statistical assessment for the 300 simulated trajectories of pigeon encounters: (a) average error in target position as a function of flight speed, (b) average error in target position as a function of sensing distance, (c) average error in speed estimation as a function of flight speed, and (d) average error in speed estimation as a function of sensing distance.
Figure 9: Error statistical assessment for the 300 simulated trajectories of seagull encounters: (a) average error in target position as a function of flight speed, (b) average error in target position as a function of sensing distance, (c) average error in speed estimation as a function of flight speed, and (d) average error in speed estimation as a function of sensing distance.
Figure 10: Error statistical assessment for the 300 simulated trajectories of falcon encounters: (a) average error in target position as a function of flight speed, (b) average error in target position as a function of sensing distance, (c) average error in speed estimation as a function of flight speed, and (d) average error in speed estimation as a function of sensing distance.
10 pages, 1235 KiB  
Case Report
Evaluation of the Timed Up and Go Test in Patients with Knee Osteoarthritis Using Inertial Sensors
by Elina Gianzina, Christos K. Yiannakopoulos, Georgios Kalinterakis, Spilios Delis and Efstathios Chronopoulos
Int. J. Transl. Med. 2025, 5(1), 2; https://doi.org/10.3390/ijtm5010002 - 25 Dec 2024
Viewed by 247
Abstract
Background: There has been a growing interest in using inertial sensors to explore the temporal aspects of the Timed Up and Go (TUG) test. The current study aimed to analyze the spatiotemporal parameters and phases of the TUG test in patients with knee osteoarthritis (KOA) and compare the results with those of non-arthritic individuals. Methods: This study included 20 patients with KOA and 60 non-arthritic individuals aged 65 to 84 years. All participants performed the TUG test, and 17 spatiotemporal parameters and phase data were collected wirelessly using the BTS G-Walk inertial sensor. Results: Significant mobility impairments were observed in KOA patients, including slower gait speed, impaired sit-to-stand transitions, and reduced turning efficiency. These findings highlight functional deficits in individuals with KOA compared to their non-arthritic counterparts. Conclusions: The results emphasize the need for targeted physiotherapy interventions, such as quadriceps strengthening, balance training, and gait retraining, to address these deficits. However, the study is limited by its small sample size, gender imbalance, and limited validation of the BTS G-Walk device. Future research should include larger, more balanced cohorts, validate sensor reliability, and conduct longitudinal studies. Despite these limitations, the findings align with previous research and underscore the potential of inertial sensors in tailoring rehabilitation strategies and monitoring progress in KOA patients. Full article
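As a simplified illustration of how instrumented TUG timing can be derived from a trunk-worn inertial sensor (this is not the G-Studio processing pipeline), the sketch below segments a toy anterior–posterior acceleration signal by amplitude thresholding; the sampling rate and threshold are assumed values.

```python
import numpy as np

FS = 100  # assumed sampling rate (Hz) of a trunk-worn inertial sensor

def segment_tug(acc_ap: np.ndarray, thresh: float = 0.25):
    """Very simplified TUG segmentation from the anterior-posterior
    acceleration (in g, gravity removed): the test is taken to start at the
    first sustained burst of movement (sit-to-stand) and to end at the last
    one (stand-to-sit). Returns (start_s, end_s, duration_s)."""
    active = np.abs(acc_ap) > thresh            # samples with significant motion
    idx = np.flatnonzero(active)
    if idx.size == 0:
        raise ValueError("no movement detected above threshold")
    start, end = idx[0] / FS, idx[-1] / FS
    return start, end, end - start

# Toy signal: 2 s of quiet sitting, 10 s of synthetic movement, 2 s quiet again
rng = np.random.default_rng(0)
acc = np.concatenate([rng.normal(0.0, 0.02, 2 * FS),
                      rng.normal(0.0, 0.50, 10 * FS),
                      rng.normal(0.0, 0.02, 2 * FS)])
print(segment_tug(acc))                         # approximately (2.0, 12.0, 10.0)
```

A full phase breakdown (sit-to-stand, walking, turning, stand-to-sit) would additionally use gyroscope signals, but the same thresholding idea underlies most sensor-based TUG timing.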
Figure 1: The G-Walk inertial sensor device was placed in a pocket of a semi-elastic belt positioned above the iliac wings, at the level of the L4 lumbar vertebra.
Figure 2: The report of the TUG test, as provided by the dedicated G-Studio software.
Figure 3: The G-Studio software provides a graphic representation of the various phases of the TUG test.
13 pages, 3082 KiB  
Article
Tungsten Diselenide Nanoparticles Produced via Femtosecond Ablation for SERS and Theranostics Applications
by Andrei Ushkov, Dmitriy Dyubo, Nadezhda Belozerova, Ivan Kazantsev, Dmitry Yakubovsky, Alexander Syuy, Gleb V. Tikhonowski, Daniil Tselikov, Ilya Martynov, Georgy Ermolaev, Dmitriy Grudinin, Alexander Melentev, Anton A. Popov, Alexander Chernov, Alexey D. Bolshakov, Andrey A. Vyshnevyy, Aleksey Arsenin, Andrei V. Kabashin, Gleb I. Tselikov and Valentyn Volkov
Nanomaterials 2025, 15(1), 4; https://doi.org/10.3390/nano15010004 - 24 Dec 2024
Viewed by 323
Abstract
Due to their high refractive index, record optical anisotropy, and a set of excitonic transitions in the visible range at room temperature, transition metal dichalcogenides have gained much attention. Here, we adapted femtosecond laser ablation for the synthesis of WSe2 nanoparticles (NPs) with diameters from 5 to 150 nm that conserve the crystalline structure of the original bulk crystal. This method was chosen because it is inherently free of substrates and chemical additives and offers a high production rate. The obtained nanoparticles absorb light more strongly than the bulk crystal thanks to local field enhancement, and they exhibit a much higher photothermal conversion than conventional Si nanospheres. The highly mobile colloidal state of the produced NPs makes them flexible for further application-dependent manipulations, which we demonstrated by creating substrates for SERS sensors. Full article
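For readers interested in how a photothermal conversion figure can be extracted from heating/cooling curves of a colloid, the sketch below applies a generic Roper-style energy-balance analysis: fit the cooling branch to an exponential to obtain the thermal time constant, then compare the stored heat with the absorbed laser power. This is not the procedure reported by the authors, and every numerical value (sample mass, heat capacity, laser power, absorbance) is a placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling_model(t, dT0, tau):
    """Newtonian cooling after the laser is switched off."""
    return dT0 * np.exp(-t / tau)

def conversion_efficiency(t_cool, dT_cool, dT_max, p_laser_w, absorbance,
                          m_kg=1e-3, c_p=4186.0):
    """Roper-style energy balance, used here purely as an illustration:
    eta ~= m * c_p * dT_max / (tau * P * (1 - 10**(-A))).
    The mass, heat capacity, laser power, and absorbance are placeholders."""
    popt, _ = curve_fit(cooling_model, t_cool, dT_cool, p0=[dT_max, 100.0])
    tau = popt[1]
    absorbed_w = p_laser_w * (1.0 - 10.0 ** (-absorbance))
    return m_kg * c_p * dT_max / (tau * absorbed_w), tau

# Synthetic cooling curve: a 12 K rise decaying with a 180 s time constant
t = np.linspace(0.0, 900.0, 300)
dT = 12.0 * np.exp(-t / 180.0) + np.random.default_rng(1).normal(0.0, 0.05, t.size)
eta, tau = conversion_efficiency(t, dT, dT_max=12.0, p_laser_w=1.0, absorbance=0.3)
print(f"fitted tau = {tau:.0f} s, estimated eta = {eta:.2f}")
```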
Figure 1: (a) Crystal structure of bulk WSe2; (b) EDX characterization of the bulk WSe2 crystal; (c) SAED characterization of a microscopic WSe2 flake, where yellow arrows with indices denote reciprocal lattice vectors; (d) schematic view of PLAL.
Figure 2: (a) Typical TEM image of the synthesized WSe2 NPs; (b) SEM photographs at an inclined view revealing the spherical shape of the NPs; (c) EDX spectrum of the synthesized NPs showing the elemental composition of WSe2 NPs (the copper signal is from the TEM grid); (d) SAED on synthesized WSe2 NPs with the most visible d_hkl lines; (e) TEM image of a single nanoparticle showing its polycrystalline structure; (f) Raman spectra of the bulk WSe2 crystal and synthesized NPs separated by centrifugation at different rotation speeds (excitation wavelength 532 nm).
Figure 3: Differential centrifugation of WSe2 NPs. (a) Size distributions and average sizes of nanoparticles obtained at different rotational speeds, measured by counting on TEM images and by dynamic light scattering spectroscopy. (b) Measured extinction spectra for WSe2 colloids with various NP average diameters ⟨D⟩. (c) Calculated extinction spectra (total and contributions from the electric and magnetic dipole channels) for a spherical WSe2 NP with a homogenized isotropic dielectric function obtained from the bulk WSe2 crystal data in (d) as ε_av = 2ε_o/3 + ε_e/3. (d) Optical constants of bulk WSe2. (e) Image of bottles with centrifuged WSe2 colloidal solutions in DI water.
Figure 4: Photoheating response of WSe2 NPs. (a) Temperature-dependent Raman spectra of WSe2 NPs at 532 nm excitation. (b) Laser-induced heating (532 nm irradiation) and E^1_2g peak position shift of WSe2 NPs and the bulk crystal. (c) Dynamics of laser-induced heating of colloids by an 830 nm, 1 W laser diode. (d) Measured extinction spectra of WSe2 and Si water colloids, normalized at the photoheating wavelength of 830 nm. (e) Time-resolved photoheating of WSe2 and Si colloids irradiated by the 830 nm laser; both the heating (laser on) and cooling (laser off) steps of the experiment are shown. (f) Optical extinction and absorption curves of the WSe2 colloid from (d), prepared for the photoheating experiment with a tunable laser source; the absorption curve is calculated using photothermal conversion coefficients obtained experimentally from photoheating experiments at different laser wavelengths (see the main text).
Figure 5: SERS spectra of (a) rhodamine 6G (R6G) and (b) crystal violet (CV) in the concentration range 10⁻⁴–10⁻⁸ M adsorbed on the WSe2 NPs substrate, and of (c) R6G and (d) CV in the same concentration range adsorbed on the MoS2 NPs substrate. Peaks marked by asterisks (*) correspond to MoS2.
20 pages, 6270 KiB  
Article
Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping
by Eun-Seok Park, Saba Arshad and Tae-Hyoung Park
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199 - 22 Dec 2024
Viewed by 416
Abstract
Handheld LiDAR scanners, which typically consist of a LiDAR sensor, an Inertial Measurement Unit, and a processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors’ extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The research contributions are twofold. First, we present a robust plane detection method for LiDAR data that removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Second, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods. Full article
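A minimal sketch of the kind of voxel-wise plane scoring described here is given below: each voxel is scored by how tightly its points fit a single plane (PCA residual along the normal), and voxels containing edges or scan noise are rejected. The voxel size, minimum point count, and residual threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float):
    """Group an (N, 3) point cloud into voxels keyed by integer grid indices."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in voxels.items()}

def plane_score(pts: np.ndarray):
    """Fit a plane by PCA and return (normal, RMS point-to-plane distance).
    A small RMS means the voxel is well explained by a single plane."""
    centered = pts - pts.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    rms = s[-1] / np.sqrt(len(pts))      # residual spread along the normal
    return normal, rms

def extract_planar_voxels(points, voxel_size=0.5, min_pts=20, rms_thresh=0.03):
    """Keep only voxels whose points lie close to a common plane; voxels
    containing edges or scan noise get a large residual and are rejected."""
    planes = {}
    for key, pts in voxelize(points, voxel_size).items():
        if len(pts) < min_pts:
            continue
        normal, rms = plane_score(pts)
        if rms < rms_thresh:
            planes[key] = (pts.mean(axis=0), normal)
    return planes

# Toy check: noisy points on a horizontal plane at z = 0.25 inside one voxel
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0.0, 0.5, 200),
                         rng.uniform(0.0, 0.5, 200),
                         rng.normal(0.25, 0.005, 200)])
print(extract_planar_voxels(floor))      # one voxel, normal close to [0, 0, 1]
```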
(This article belongs to the Section Sensors and Robotics)
Figure 1: LiDAR-based mapping (a) with the LiDAR-IMU calibration method: error-free mapping, and (b) without the LiDAR-IMU calibration method: mapping error due to drift, highlighted in a yellow circle. The colors in each map represent the intensity of the LiDAR point cloud.
Figure 2: Overall framework of the proposed initial pose estimation method for robust LiDAR-IMU calibration. Different colors in the voxelization show the intensity of the LiDAR points in each voxel. The extracted planes are shown in yellow and green, while red points indicate noise.
Figure 3: Robust plane detection method.
Figure 4: Robust plane extraction through refinement. (a) Voxels containing edges and noise have low plane scores due to large distances and high variance (red normal vectors), while those with high plane scores are shown in blue. (b) The refinement process enables the effective separation and removal of areas containing edges and noise.
Figure 5: LiDAR calibration method.
Figure 6: IMU downsampling.
Figure 7: Qualitative comparison of the proposed method with the benchmark plane detection algorithms.
Figure 8: Top view of LiDAR data. (a) LiDAR raw data before calibration. (b) LiDAR data after calibration using the proposed method.
Figure 9: Performance comparison in terms of (a) roll and (b) pitch errors on the VECtor dataset.
Figure 10: Performance comparison in terms of the mapping result using (a) LI-init and (b) LI-init + Proposed.
31 pages, 1953 KiB  
Article
UAV Trajectory Control and Power Optimization for Low-Latency C-V2X Communications in a Federated Learning Environment
by Xavier Fernando and Abhishek Gupta
Sensors 2024, 24(24), 8186; https://doi.org/10.3390/s24248186 - 22 Dec 2024
Viewed by 1380
Abstract
Unmanned aerial vehicle (UAV)-enabled vehicular communications in the sixth generation (6G) are characterized by line-of-sight (LoS) and dynamically varying channel conditions. However, the presence of obstacles in the LoS path leads to shadowed fading environments. In UAV-assisted cellular vehicle-to-everything (C-V2X) communication, vehicle and UAV mobility and shadowing adversely impact latency and throughput. Moreover, 6G vehicular communications comprise data-intensive applications such as augmented reality, mixed reality, virtual reality, intelligent transportation, and autonomous vehicles. Since vehicles’ sensors generate an immense amount of data, the latency in processing these applications also increases, particularly when the data are not independently and identically distributed (non-i.i.d.). Furthermore, when the sensors’ data are heterogeneous in size and distribution, the incoming packets demand substantial computing resources, energy efficiency at the UAV servers, and intelligent mechanisms to queue the incoming packets. Due to the limited battery power and coverage range of the UAV, quality of service (QoS) requirements such as coverage rate, UAV flying time, and fairness of vehicle selection are adversely impacted. Controlling the UAV trajectory so that it serves the maximum number of vehicles while making the best use of the available battery power is a potential solution to enhance QoS. This paper investigates the system performance and the communication disruption between vehicles and the UAV due to the Doppler effect in the orthogonal time–frequency space (OTFS) modulated channel. Moreover, a low-complexity UAV trajectory prediction and vehicle selection method is proposed using federated learning, which exploits related information from past trajectories. The weighted total energy consumption of the UAV is minimized by jointly optimizing the transmission window (Lw), transmit power, and UAV trajectory, considering the Doppler spread. The simulation results reveal that the weighted total energy consumption of the OTFS-based system decreases by up to 10% when federated learning is used to process the sensor data locally at the vehicles and communicate only the local models to the UAV. The weighted total energy consumption of the proposed federated learning algorithm is 10–15% lower than that of convex optimization, heuristic, and meta-heuristic algorithms. Full article
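The federated part of this scheme can be illustrated with a generic FedAvg-style round, in which each vehicle trains a local model on its own (non-i.i.d.) sensor data and the UAV aggregates the local models weighted by sample counts. This is only a sketch of the aggregation idea, not the paper's fed-DDPG algorithm; the linear model, learning rate, and data sizes are placeholders.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.01, epochs=5):
    """One vehicle's local training: a few epochs of least-squares gradient
    descent on its own (possibly non-i.i.d.) sensor data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def federated_round(global_w, vehicle_data):
    """UAV-side FedAvg: aggregate local models weighted by sample counts."""
    updates = [local_update(global_w, X, y) for X, y in vehicle_data]
    total = sum(n for _, n in updates)
    return sum(n * w for w, n in updates) / total

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])
# Heterogeneous data: each vehicle observes a different number of samples
vehicles = []
for n in (30, 80, 150):
    X = rng.normal(size=(n, 3))
    vehicles.append((X, X @ true_w + rng.normal(0, 0.1, n)))

w = np.zeros(3)
for _ in range(50):                      # communication rounds UAV <-> vehicles
    w = federated_round(w, vehicles)
print(np.round(w, 2))                    # approaches true_w
```

Only the model parameters travel over the air in each round, which is the mechanism behind the reported reduction in weighted total energy consumption compared with uploading raw sensor data.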
(This article belongs to the Section Communications)
Figure 1: A brief timeline depicting the amalgamation of wireless communication technologies with transportation systems, and the gradual integration of UAVs into vehicular networks in the 5G and 6G wireless communication paradigms. A detailed timeline and comprehensive overview of recent and evolving applications of machine learning techniques in UAV communication frameworks can be found in [14,15].
Figure 2: System model. Delay is accumulated as vehicles in different clusters generate and transmit local models to the UAV, and the UAV transmits the global model to the vehicles. Each vehicle captures a different kind of data packet, leading to non-i.i.d. and heterogeneous data.
Figure 3: An illustration of the proposed federated reinforcement learning-based solution approach for UAV trajectory control and power optimization for low-latency C-V2X communications.
Figure 4: The UAV trajectory varies in a random manner, and the vehicles capture varying sensor data at different TTIs. By processing the sensor data, local models are generated at the vehicles and a global model is generated at the UAV.
Figure 5: UAV trajectory and vehicle coverage depending on the UAV transmit power P_i(t) and altitude H. The shaded triangular region indicates the coverage range of the UAV at a specific altitude.
Figure 6: Variation in the average cost function (UAV energy and latency) with the number of vehicles V.
Figure 7: Variation in queuing delay D_que in the FL scenario with time slots.
Figure 8: Total delay D vs. number of vehicles V for different machine learning models.
Figure 9: Variation in the average packet drop rate with the control parameter ϱ using fed-DDPG.
Figure 10: Variation in average UAV energy with the number of vehicles V for different machine learning models.
Figure 11: Variation in the FL computation rate (Mbits/s) with the control parameter ϱ for different machine learning models.
Figure 12: Probability of optimal trajectory prediction for fed-DDPG (using LSTM) vs. UAV altitude H for a varying number of vehicles V over trials of 250 episodes.
Figure 13: Probability of optimal trajectory prediction for actor–critic (using LSTM) vs. UAV altitude H for a varying number of vehicles V over trials of 500 episodes.
Figure 14: Probability of optimal trajectory prediction for CNN-LSTM vs. UAV altitude H for a varying number of vehicles V over trials of 1000 episodes.
Figure 15: Probability of optimal trajectory prediction for RNN vs. UAV altitude H for a varying number of vehicles V over trials of 1000 episodes.
Figure 16: Probability of optimal trajectory prediction for GRU vs. UAV altitude H for a varying number of vehicles V over trials of 1000 episodes.
Figure 17: UAV transmit power P_i(t) vs. SNR in the OTFS modulation scheme for a varying number of vehicles V.
26 pages, 8972 KiB  
Article
IoT-Based LPG Level Sensor for Domestic Stationary Tanks with Data Sharing to a Filling Plant to Optimize Distribution Routes
by Roberto Morales-Caporal, Rodolfo Eleazar Pérez-Loaiza, Edmundo Bonilla-Huerta, Julio Hernández-Pérez and José de Jesús Rangel-Magdaleno
Future Internet 2024, 16(12), 479; https://doi.org/10.3390/fi16120479 - 21 Dec 2024
Viewed by 366
Abstract
This research presents the design and implementation of an Internet of Things (IoT)-based solution to measure the percentage of Liquefied Petroleum Gas (LPG) inside domestic stationary tanks. The IoT-based sensor, in addition to displaying the percentage of the LPG level in the tank to the user through a mobile application (app), has the advantage of simultaneously sharing the acquired data with an LPG filling plant via the Internet. The design process and calculations for the selection of the electronic components of the IoT-based sensor are presented. The methodology for obtaining and calibrating the measurement of the tank filling percentage from the magnetic level measurement system is explained in detail. The operation of the developed software and the communication protocols used are also explained, so that the data can be queried securely both in the user’s app and on the gas company’s web platform. The Clarke and Wright savings algorithm is used to optimize the distribution routes that tank trucks should follow when serving home refill requests from customers located in different parts of a city. The experimental results confirm the functionality and viability of the hardware and software developed. In addition, using the precise location of each tank, the generation of optimized gas refill routes for thirty customers with the heuristic algorithm, and their visualization on Google Maps, is demonstrated. This can provide competitive advantages for home gas distribution companies. Full article
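A compact sketch of the Clarke and Wright savings heuristic mentioned above is given below: every customer starts on its own depot round trip, and route endpoints are merged in order of decreasing savings s(i, j) = d(0, i) + d(0, j) − d(i, j) while the tanker capacity is respected. The Euclidean coordinates, demands, and capacity here are illustrative placeholders rather than the paper's data.

```python
import math

def clarke_wright(depot, customers, demands, capacity):
    """Clarke & Wright savings heuristic: start with one round trip per
    customer, then repeatedly merge route endpoints with the largest saving
    s(i, j) = d(0, i) + d(0, j) - d(i, j) while demand fits the capacity."""
    d = math.dist
    n = len(customers)
    routes = [[i] for i in range(n)]                    # depot -> i -> depot
    savings = sorted(((d(depot, customers[i]) + d(depot, customers[j])
                       - d(customers[i], customers[j]), i, j)
                      for i in range(n) for j in range(i + 1, n)), reverse=True)

    def route_of(i):
        return next(r for r in routes if i in r)

    for s, i, j in savings:
        ri, rj = route_of(i), route_of(j)
        if ri is rj or sum(demands[k] for k in ri + rj) > capacity:
            continue
        # i must end its route and j must start its route; reverse if needed
        if ri[-1] != i:
            if ri[0] == i:
                ri.reverse()
            else:
                continue
        if rj[0] != j:
            if rj[-1] == j:
                rj.reverse()
            else:
                continue
        routes.remove(rj)
        ri.extend(rj)
    return routes

# Toy example: a depot and five customers with LPG refill demands (liters)
depot = (0.0, 0.0)
customers = [(2.0, 3.0), (5.0, 1.0), (6.0, 4.0), (-3.0, 2.0), (-4.0, -1.0)]
demands = [120, 200, 150, 180, 90]
print(clarke_wright(depot, customers, demands, capacity=500))
```

In practice the distance matrix would come from road distances (for example, via a mapping service) rather than straight-line geometry, but the merge logic is unchanged.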
Figure 1: Mechanical LPG level measurement system in a stationary tank: (a) the mechanical tank gauging system; (b) float gauge assembly; and (c) removable, magnetically driven needle dial.
Figure 2: Business concept with the developed IoT-GL-Sensor.
Figure 3: Block diagram of the IoT-GL-Sensor.
Figure 4: (a) Physical position of the Hall-effect sensors. (b) Analog output signals when the mechanical float was moved manually, initially considering the tank empty and gradually moving the float upwards until simulating a full tank, and vice versa.
Figure 5: (a) Dial. (b) θ vs. %c when the mechanical float was manually moved from an empty tank (0%) to a full tank (100%).
Figure 6: Schematic connection diagrams: (a) the MCU device; (b) the Wi-Fi device.
Figure 7: Schematic connection diagram of the TLV755P voltage regulator.
Figure 8: Hardware conceptual design: (a) sensors PCB footprint, (b) 3D model of the sensors PCB, (c) host PCB footprint, and (d) 3D model of the host PCB.
Figure 9: (a) Wi-Fi settings screen; (b) sensor ID generation.
Figure 10: Mobile application developed for the IoT-GL-Sensor: (a) app home screen; (b) request for permission to share the location of the mobile device; (c) registration screen; (d,e) warning messages; (f) waiting screen; (g) app home screen with valid data; and (h) screen with the graph of the percentage of LPG level in the tank and the battery level icon.
Figure 11: Basic principle of the Clarke and Wright algorithm: two different routes before and after being joined.
Figure 12: Installed IoT-GL-Sensor: (a) without the dial; (b) with the dial; (c) inside the bottom of its housing; and (d) inside the closed housing and with the dial.
Figure 13: Location of the filling plant and 30 customers in a specific area of the city.
Figure 14: Distribution route 1 (distribution tanker 1).
Figure 15: Distribution route 2 (distribution tanker 2).