Search Results (137)

Search Parameters:
Keywords = angle of arrival (AoA)

25 pages, 6743 KiB  
Article
Online Autonomous Motion Control of Communication-Relay UAV with Channel Prediction in Dynamic Urban Environments
by Cancan Tao and Bowen Liu
Drones 2024, 8(12), 771; https://doi.org/10.3390/drones8120771 - 19 Dec 2024
Abstract
In order to improve the network performance of multi-unmanned ground vehicle (UGV) systems in urban environments, this article proposes a novel online autonomous motion-control method for the relay UAV. The problem is solved by jointly considering unknown RF channel parameters, unknown multi-agent mobility, the impact of the environments on channel characteristics, and the unavailable angle-of-arrival (AoA) information of the received signal, making the solution of the problem more practical and comprehensive. The method mainly consists of two parts: wireless channel parameter estimation and optimal relay position search. Considering that in practical applications, the radio frequency (RF) channel parameters in complex urban environments are difficult to obtain in advance and are constantly changing, an estimation algorithm based on Gaussian process learning is proposed for online evaluation of the wireless channel parameters near the current position of the UAV; for the optimal relay position search problem, in order to improve the real-time performance of the method, a line search algorithm and a general gradient-based algorithm are proposed, which are used for point-to-point communication and multi-node communication scenarios, respectively, reducing the two-dimensional search to a one-dimensional search, and the stability proof and convergence conditions of the algorithm are given. Comparative experiments and simulation results under different scenarios show that the proposed motion-control method can drive the UAV to reach or track the optimal relay position and improve the network performance, while demonstrating that it is beneficial to consider the impact of the environments on the channel characteristics.
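The channel-estimation step described above fits a Gaussian process to channel measurements gathered near the UAV's current position. A minimal sketch of that idea in Python, assuming an RBF kernel and illustrative hyperparameters; the function names and the synthetic log-distance path-loss model are ours, not the paper's:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=10.0, sigma_f=1.0, sigma_n=0.1):
    """Gaussian process regression with an RBF kernel.

    Predicts channel gain (dB) at query positions from noisy samples
    gathered near the UAV's current position. Hyperparameters here
    are illustrative, not taken from the paper.
    """
    def rbf(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

    K = rbf(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(X_test, X_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Synthetic channel samples: log-distance path loss plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 50, (30, 2))          # sampled positions
y = -20 * np.log10(np.linalg.norm(X, axis=1) + 1) + 0.2 * rng.standard_normal(30)
Xq = np.array([[25.0, 25.0]])            # query near current position
mu, std = gp_predict(X, y, Xq)
```

The posterior standard deviation is what an online scheme would use to decide where more channel samples are needed.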
Figures:
Figure 1. Illustration of air-to-ground relay communication scenario in urban environments.
Figure 2. Motion control framework.
Figure 3. Schematic diagram of air-to-ground signal propagation.
Figure 4. Flight trajectories of the UAV that supports communication for two stationary UGVs.
Figure 5. Changes in communication performance when the UAV supports communication for two stationary UGVs.
Figure 6. Flight trajectories of the UAV that supports communication for multiple stationary UGVs.
Figure 7. Changes in communication performance when the UAV supports communication for multiple stationary UGVs.
Figure 8. Flight trajectories of the UAV that supports point-to-point communication for two moving UGVs.
Figure 9. Changes in communication performance when the UAV supports point-to-point communication for two moving UGVs.
Figure 10. Flight trajectories of the UAV that supports multi-node communication for multiple moving UGVs.
Figure 11. Changes in communication performance when the UAV supports multi-node communication for multiple moving UGVs.
Figure 12. Flight trajectories of the UAV that supports point-to-point communication for two moving UGVs with unknown channel parameters.
Figure 13. Changes in communication performance when the UAV supports point-to-point communication for two moving UGVs with unknown channel parameters.
Figure 14. Flight trajectories of the UAV that supports multi-node communication for multiple moving UGVs with unknown channel parameters.
Figure 15. Changes in communication performance when the UAV supports multi-node communication for multiple moving UGVs with unknown channel parameters.
31 pages, 22621 KiB  
Article
A Ray-Tracing-Based Single-Site Localization Method for Non-Line-of-Sight Environments
by Shuo Hu, Lixin Guo and Zhongyu Liu
Sensors 2024, 24(24), 7925; https://doi.org/10.3390/s24247925 - 11 Dec 2024
Abstract
Localization accuracy in non-line-of-sight (NLOS) scenarios is often hindered by the complex nature of multipath propagation. Traditional approaches typically focus on NLOS node identification and error mitigation techniques. However, the intricacies of NLOS localization are intrinsically tied to propagation challenges. In this paper, we propose a novel single-site localization method tailored for complex multipath NLOS environments, leveraging only angle-of-arrival (AOA) estimates in conjunction with a ray-tracing (RT) algorithm. The method transforms NLOS paths into equivalent line-of-sight (LOS) paths through the generation of generalized sources (GSs) via ray tracing. A novel weighting mechanism for GSs is introduced, which, when combined with an iteratively reweighted least squares (IRLS) estimator, significantly improves the localization accuracy of non-cooperative target sources. Furthermore, a multipath similarity displacement matrix (MSDM) is incorporated to enhance accuracy in regions with pronounced multipath fluctuations. Simulation results validate the efficacy of the proposed algorithm, achieving localization performance that approaches the Cramér–Rao lower bound (CRLB), even in challenging NLOS scenarios.
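Once NLOS paths are mapped to equivalent LOS generalized sources, the target lies near the intersection of the bearing lines those sources define, and IRLS can solve for it robustly. A generic sketch of an IRLS bearing-lines solver, assuming simple residual-based weights rather than the paper's GS weighting mechanism:

```python
import numpy as np

def irls_aoa_localize(sources, bearings, n_iter=20, eps=1e-6):
    """Locate a 2-D emitter from AOA lines via iteratively
    reweighted least squares (IRLS).

    `sources` holds the (generalized) source positions; `bearings`
    the AOA of the ray from each source toward the target. Each line
    contributes one linear equation n . x = n . p with normal
    n = (-sin(theta), cos(theta)).
    """
    A = np.column_stack([-np.sin(bearings), np.cos(bearings)])
    b = A[:, 0] * sources[:, 0] + A[:, 1] * sources[:, 1]
    w = np.ones(len(bearings))
    x = np.zeros(2)
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.lstsq(W @ A, W @ b, rcond=None)[0]
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # downweight inconsistent lines
    return x

# Three equivalent sources observing a target at (10, 5)
target = np.array([10.0, 5.0])
src = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0]])
brg = np.arctan2(target[1] - src[:, 1], target[0] - src[:, 0])
est = irls_aoa_localize(src, brg)
```

With noisy bearings the reweighting suppresses lines whose residuals stay large, which is the role the GS weights play in the paper.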
Figures:
Figure 1. A flowchart of the proposed RT algorithm.
Figure 2. Binary tree structure of ray nodes.
Figure 3. Schematic diagram of ray-splitting structure. Red nodes indicate split nodes that will be deleted, while blue nodes represent newly generated split nodes.
Figure 4. Schematic diagram of ray tube determination and reception. Red lines represent virtual ray tubes, while blue lines indicate the edge rays of the ray tube.
Figure 5. An overview of the overall technical roadmap of the RT-LBS algorithm.
Figure 6. Power measurement system architecture and key equipment. The upper half of the figure is the block diagram of the channel sounder used in this paper. The lower half is the key equipment of the sounder, including the signal generator, power amplifier, spectrum analyzer, power supplier, RTK, and antennas.
Figure 7. Localization test system architecture and key equipment. The upper half of the figure is the block diagram of the localization test system used in this paper. The lower half is the key equipment in the signal transmitter system, UCA direction-finding equipment, the Rx antenna array, and the RF processing circuit.
Figure 8. Measurement scenario. (a) The raw point cloud image of the measurement scenario. (b) The geometric building model extracted from the point cloud.
Figure 9. Measurement path and power distribution at (a) 3 GHz, (b) 3.6 GHz, (c) 4 GHz, (d) 5 GHz, and (e) 5.9 GHz.
Figure 10. Raw power measurement data and power measurement data after applying the sliding filter at (a) 3 GHz, (b) 3.6 GHz, (c) 4 GHz, (d) 5 GHz, and (e) 5.9 GHz.
Figure 11. RSS predictions and measurements in the scenario at (a) 3 GHz, (b) 3.6 GHz, (c) 4 GHz, (d) 5 GHz, and (e) 5.9 GHz. The basic RT method refers to the approach presented in [39].
Figure 12. The angle measurement scenario and the positions of the NCTS (denoted by T1, T2, and T3) and sensor (denoted by R).
Figure 13. The AOA spectrum measured for the source located at T1.
Figure 14. The AOA spectrum measured for the source located at T2.
Figure 15. The AOA spectrum measured for the source located at T3.
Figure 16. Comparison between measured AS and simulated multipath at (a) T1 position, (b) T2 position, and (c) T3 position.
Figure 17. NCTS and sensor positions and a geometrical map of the scenario. The line segments represent the multipath between the source and the sensor, distinguished using different colors.
Figure 18. A comparison of the proposed localization algorithm's accuracy with the CRLB. (a) The source at location A; (b) the source at location B; (c) the source at location C.
Figure 19. Localization error at point A with different AOA and RSSD errors.
Figure 20. Localization error at point B with different AOA and RSSD errors.
Figure 21. Localization error at point C with different AOA and RSSD errors.
Figure 22. MSD distribution at (a) 0.1° AOA error, (b) 0.5° AOA error, (c) 1° AOA error, (d) 2° AOA error, (e) 4° AOA error, and (f) 6° AOA error.
Figure 23. Schematic diagram of displacement compensation expansion method.
Figure 24. Planar localization error distribution with 0.1° AOA error. (a) Original localization algorithm; (b) localization algorithm with MSDM.
Figure 25. Planar localization error distribution with 0.5° AOA error. (a) Original localization algorithm; (b) localization algorithm with MSDM.
Figure 26. Planar localization error distribution with 1° AOA error. (a) Original localization algorithm; (b) localization algorithm with MSDM.
Figure 27. Planar localization error distribution with 2° AOA error. (a) Original localization algorithm; (b) localization algorithm with MSDM.
Figure 28. Planar localization error distribution with 4° AOA error. (a) Original localization algorithm; (b) localization algorithm with MSDM.
Figure 29. Planar localization error distribution with 6° AOA error. (a) Original localization algorithm; (b) localization algorithm with MSDM.
Figure 30. Schematic diagram of GPU acceleration algorithm.
Figure 31. Power coverage map.
Figure 32. Efficiency comparison of different acceleration methods.
16 pages, 2929 KiB  
Article
TDOA-AOA Localization Algorithm for 5G Intelligent Reflecting Surfaces
by Yuexia Zhang, Changbao Liu, Yuanshuo Gang and Yu Wang
Electronics 2024, 13(22), 4347; https://doi.org/10.3390/electronics13224347 - 6 Nov 2024
Abstract
5G positioning technology has become deeply integrated into daily life. However, in wireless signal propagation environments, there may exist non-line-of-sight (NLOS) conditions, which lead to signal blockage and subsequently hinder the provision of positioning services. To address this issue, this paper proposes an intelligent reflecting surface (IRS) NLOS time difference of arrival–angle of arrival (TDOA-AOA) localization (INTAL) algorithm. First, the algorithm constructs a system model for 5G IRS localization, effectively overcoming the challenges of positioning in NLOS paths. Then, by applying the multiple signal classification algorithm to estimate the time delay and angle, and using the Chan algorithm to obtain the user’s estimated coordinates, an optimization problem is formulated to minimize the distance between the estimated and actual coordinates. The tent–snake optimization algorithm is employed to solve this optimization problem, thereby reducing localization errors. Finally, simulations demonstrate that the INTAL algorithm outperforms the snake optimization (SO) algorithm and the gray wolf optimization (GWO) algorithm under the same conditions, reducing the localization error by 56% and 60% on average, respectively. Additionally, when the signal-to-noise ratio is 30 dB, the localization error of the INTAL algorithm is only 0.2968 m, while the errors for the SO and GWO algorithms are 0.6733 m and 0.7398 m, respectively. This further proves the significant improvement of the algorithm in terms of localization accuracy.
(This article belongs to the Special Issue New Advances in Navigation and Positioning Systems)
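The core geometric step of any hybrid TDOA-AOA scheme is fusing range-difference and bearing measurements into one position fix. A generic Gauss–Newton sketch of that fusion (not the paper's Chan + tent–snake pipeline; anchor layout and measurement model are illustrative):

```python
import numpy as np

def tdoa_aoa_solve(anchors, tdoa, aoa, x0, n_iter=30):
    """Joint TDOA-AOA 2-D position fix via Gauss-Newton.

    `tdoa[i]` is d(x, anchors[i+1]) - d(x, anchors[0]) in metres;
    `aoa[i]` is the bearing from anchors[i] to the target.
    """
    def residual(x):
        d = np.linalg.norm(x - anchors, axis=1)
        r_t = (d[1:] - d[0]) - tdoa
        r_a = np.arctan2(x[1] - anchors[:, 1], x[0] - anchors[:, 0]) - aoa
        r_a = np.arctan2(np.sin(r_a), np.cos(r_a))   # wrap to (-pi, pi]
        return np.concatenate([r_t, r_a])

    x, h = np.asarray(x0, float), 1e-6
    for _ in range(n_iter):
        r0 = residual(x)
        J = np.zeros((len(r0), 2))
        for k in range(2):                 # numeric Jacobian, column by column
            dx = np.zeros(2); dx[k] = h
            J[:, k] = (residual(x + dx) - r0) / h
        x = x - np.linalg.lstsq(J, r0, rcond=None)[0]
    return x

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true = np.array([40.0, 30.0])
d = np.linalg.norm(true - anchors, axis=1)
tdoa = d[1:] - d[0]
aoa = np.arctan2(true[1] - anchors[:, 1], true[0] - anchors[:, 0])
est = tdoa_aoa_solve(anchors, tdoa, aoa, x0=[10.0, 10.0])
```

A metaheuristic such as the paper's tent–snake optimizer replaces this local solve with a global search over candidate coordinates, which avoids dependence on the initial guess.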
Figures:
Figure 1. Illustration of GIL system model.
Figure 2. Illustration of US-IRS signal propagation.
Figure 3. Illustration of the pitch angle relationship between US and IRS.
Figure 4. Flowchart of the tent–SO algorithm.
Figure 5. Variation in fitness with iterations.
Figure 6. Three-dimensional localization results.
Figure 7. IRS position and positioning error analysis.
Figure 8. Positioning error under different numbers of snapshots and varying SNRs.
Figure 9. Analysis of positioning errors for different algorithms under varying SNRs.
Figure 10. Positioning error with different numbers of IRSs under varying SNRs.
19 pages, 5345 KiB  
Article
Accurate Low Complexity Quadrature Angular Diversity Aperture Receiver for Visible Light Positioning
by Stefanie Cincotta, Adrian Neild, Kristian Helmerson, Michael Zenere and Jean Armstrong
Sensors 2024, 24(18), 6006; https://doi.org/10.3390/s24186006 - 17 Sep 2024
Abstract
Despite the many potential applications of an accurate indoor positioning system (IPS), no universal, readily available system exists. Much of the IPS research to date has been based on the use of radio transmitters as positioning beacons. Visible light positioning (VLP) instead uses LED lights as beacons. Either cameras or photodiodes (PDs) can be used as VLP receivers, and position estimates are usually based on either the angle of arrival (AOA) or the strength of the received signal. Research on the use of AOA with photodiode receivers has so far been limited by the lack of a suitable compact receiver. The quadrature angular diversity aperture receiver (QADA) can fill this gap. In this paper, we describe a new QADA design that uses only three readily available parts: a quadrant photodiode, a 3D-printed aperture, and a programmable system on a chip (PSoC). Extensive experimental results demonstrate that this design provides accurate AOA estimates within a room-sized test chamber. The flexibility and programmability of the PSoC mean that other sensors can be supported by the same PSoC. This has the potential to allow the AOA estimates from the QADA to be combined with information from other sensors to form future powerful sensor-fusion systems requiring only one beacon.
(This article belongs to the Special Issue Sensors and Techniques for Indoor Positioning and Localization)
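A QADA-style receiver recovers the AOA from the displacement of the aperture's light spot on the quadrant photodiode: normalized differences of the four quadrant signals give the spot centroid, and the aperture geometry converts that into azimuth and incidence angles. A sketch under a uniform square-spot assumption; the scale factor and variable names are ours, not the paper's calibration (which uses its Equations (9) and (10)):

```python
import math

def qada_aoa(q1, q2, q3, q4, spot_side, aperture_height):
    """Estimate AOA from quadrant photodiode signals (QADA principle).

    Quadrants are numbered counter-clockwise starting in the +x/+y
    quadrant. For a uniform square light spot of side `spot_side`
    overlapping all four quadrants, the centroid displacement is
    (spot_side / 2) times the normalized quadrant difference.
    """
    s = q1 + q2 + q3 + q4
    x = 0.5 * spot_side * ((q1 + q4) - (q2 + q3)) / s   # spot centroid x
    y = 0.5 * spot_side * ((q1 + q2) - (q3 + q4)) / s   # spot centroid y
    psi = math.atan2(y, x)                               # azimuth of arrival
    alpha = math.atan(math.hypot(x, y) / aperture_height)  # incidence angle
    return math.degrees(alpha), math.degrees(psi)

# Quadrant powers for a unit square spot centred at (0.2, -0.1),
# aperture 2 units above the photodiode
alpha, psi = qada_aoa(0.28, 0.12, 0.18, 0.42, spot_side=1.0, aperture_height=2.0)
```

Here the quadrant values are the overlap areas of that spot with each quadrant, so the solver recovers the centroid (0.2, -0.1) exactly.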
Figures:
Figure 1. (a) QADA built on a prototyping board; (b) close-up of aperture removed from QPD.
Figure 2. Circuit programmed in PSoC and external QPD.
Figure 3. Histograms showing the distribution of samples for each quadrant for a QPD with no aperture and with no transmitting light.
Figure 4. Histograms showing the distribution of samples for each quadrant for the case of no aperture and square-wave modulated light source.
Figure 5. QADA prototype mounted on test platform in test chamber.
Figure 6. QADA receiver design.
Figure 7. Detail of the light spot on the quadrant photodiode.
Figure 8. (a) Estimated angle α̂ versus calculated angle α_C (black crosses) and α̂ = α_C (red line); (b) error in estimated angle α̂ − α_C versus calculated angle α_C, excluding outliers.
Figure 9. (a) Estimated angle ψ̂ versus calculated angle ψ_C (black crosses) and ψ̂ = ψ_C (red line); (b) error in estimated angle ψ̂ − ψ_C versus calculated angle.
Figure 10. (a) Predicted positions within the room of the luminaire centroid (red dots) and luminaire outline (blue lines); (b) predicted positions of the luminaire centroid (red dots), luminaire outline (blue lines), and luminaire centroid (black asterisk) on an expanded scale. The inner blue line marks the area of the luminaire which transmits light. The outer blue lines include its metal frame.
Figure 11. Predicted positions (black dots) and actual positions of QADA (red crosses). Predicted positions calculated using (9) and (10). The luminaire outline is shown in blue.
Figure 12. (a) Position of luminaire centroid predicted using ψ_C and α̂; (b) position of luminaire centroid predicted using ψ̂ and α_C.
14 pages, 539 KiB  
Communication
Four-Dimensional Parameter Estimation for Mixed Far-Field and Near-Field Target Localization Using Bistatic MIMO Arrays and Higher-Order Singular Value Decomposition
by Qi Zhang, Hong Jiang and Huiming Zheng
Remote Sens. 2024, 16(18), 3366; https://doi.org/10.3390/rs16183366 - 10 Sep 2024
Abstract
In this paper, we present a novel four-dimensional (4D) parameter estimation method to localize the mixed far-field (FF) and near-field (NF) targets using bistatic MIMO arrays and higher-order singular value decomposition (HOSVD). The estimated four parameters include the angle-of-departure (AOD), angle-of-arrival (AOA), range-of-departure (ROD), and range-of-arrival (ROA). In the method, we store array data in a tensor form to preserve the inherent multidimensional properties of the array data. First, the observation data are arranged into a third-order tensor and its covariance tensor is calculated. Then, the HOSVD of the covariance tensor is performed. From the left singular vector matrices of the corresponding module expansion of the covariance tensor, the subspaces with respect to transmit and receive arrays are obtained, respectively. The AOD and AOA of the mixed FF and NF targets are estimated with signal-subspace, and the ROD and ROA of the NF targets are achieved using noise-subspace. Finally, the estimated four parameters are matched via a pairing method. The Cramér–Rao lower bound (CRLB) of the mixed target parameters is also derived. The numerical simulations demonstrate the superiority of the tensor-based method.
(This article belongs to the Special Issue Array and Signal Processing for Radar)
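The HOSVD step the abstract relies on takes the left singular vectors of each mode unfolding of the tensor; those per-mode factor matrices are what supply the transmit- and receive-array subspaces. A textbook HOSVD sketch for a third-order tensor (random data, purely to illustrate the decomposition, not the paper's covariance tensor):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-n product of tensor T with matrix M along axis `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Higher-order SVD: factor matrices are the left singular vectors
    of each mode unfolding; the core is T contracted with their
    conjugate transposes."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        core = mode_product(core, Un.conj().T, n)
    return U, core

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))
U, core = hosvd(T)

# Reconstruction: T = core x1 U[0] x2 U[1] x3 U[2]
R = core
for n, Un in enumerate(U):
    R = mode_product(R, Un, n)
```

Because each factor matrix here is orthonormal, the reconstruction is exact; a subspace method would instead split each factor's columns into signal and noise parts.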
Figures:
Graphical abstract.
Figure 1. Bistatic MIMO array configuration.
Figure 2. Simulation results with the proposed algorithm for (a) AOD and AOA estimation and (b) ROD and ROA estimation. SNR = 15 dB, M = 10, N = 10, K = 3, L = 100, and 100 independent trials. (θ_t, θ_r, r_t, r_r) = (30°, −10°, +∞, +∞), (10°, 20°, 10λ, 6λ), (20°, 40°, 7λ, 8λ).
Figure 3. Performance of RMSE versus SNR for different algorithms. (a) AOD estimation, (b) AOA estimation, (c) ROD estimation, (d) ROA estimation. L = 5, M = 10, N = 10, K = 3, and 500 Monte Carlo trials. (θ_t, θ_r, r_t, r_r) = (30°, −10°, +∞, +∞), (10°, 20°, 10λ, 6λ), (20°, 40°, 7λ, 8λ).
Figure 4. Performance of RMSE versus the number of snapshots for different algorithms. (a) AOD estimation, (b) AOA estimation, (c) ROD estimation, and (d) ROA estimation. M = 10, N = 10, K = 3, SNR = 15 dB, and 500 Monte Carlo trials. (θ_t, θ_r, r_t, r_r) = (30°, −10°, +∞, +∞), (10°, 20°, 10λ, 6λ), (20°, 40°, 7λ, 8λ).
15 pages, 8532 KiB  
Article
Data-Aided Maximum Likelihood Joint Angle and Delay Estimator Over Orthogonal Frequency Division Multiplex Single-Input Multiple-Output Channels Based on New Gray Wolf Optimization Embedding Importance Sampling
by Maha Abdelkhalek, Souheib Ben Amor and Sofiène Affes
Sensors 2024, 24(17), 5821; https://doi.org/10.3390/s24175821 - 7 Sep 2024
Abstract
In this paper, we propose a new data-aided (DA) joint angle and delay (JADE) maximum likelihood (ML) estimator. The latter consists of a substantially modified and, hence, significantly improved gray wolf optimization (GWO) technique by fully integrating and embedding within it the powerful importance sampling (IS) concept. This new approach, referred to hereafter as GWOEIS (for “GWO embedding IS”), guarantees global optimality, and offers higher resolution capabilities over orthogonal frequency division multiplex (OFDM) (i.e., multi-carrier and multi-path) single-input multiple-output (SIMO) channels. The traditional GWO randomly initializes the wolves’ positions (angles and delays) and, hence, requires larger packs and longer hunting (iterations) to catch the prey, i.e., find the correct angles of arrival (AoAs) and time delays (TDs), thereby affecting its search efficiency, whereas GWOEIS ensures faster convergence by providing reliable initial estimates based on a simplified importance function. More importantly, and beyond simple initialization of GWO with IS (coined as IS-GWO hereafter), we modify and dynamically update the conventional simple expression for the convergence factor of the GWO algorithm that entirely drives its hunting and tracking mechanisms by accounting for new cumulative distribution functions (CDFs) derived from the IS technique. Simulations unequivocally confirm these significant benefits in terms of increased accuracy and speed. Moreover, GWOEIS reaches the Cramér–Rao lower bound (CRLB), even at low SNR levels.
(This article belongs to the Special Issue Feature Papers in the 'Sensor Networks' Section 2024)
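The baseline the abstract modifies is the standard GWO loop: wolves move toward the three current best solutions while a convergence factor decays linearly from 2 to 0 (that decay law is exactly what GWOEIS replaces with IS-derived CDFs). A minimal sketch of the baseline; the `init` argument only mimics IS-style seeding with a prior estimate, not the full importance-sampling machinery, and the quadratic objective is a stand-in for the ML criterion:

```python
import numpy as np

def gwo_minimize(f, bounds, n_wolves=20, n_iter=200, seed=0, init=None):
    """Minimal gray wolf optimization loop.

    Each wolf moves toward the alpha, beta, and delta wolves (three
    best solutions so far) with step sizes controlled by the
    convergence factor `a`, which decays from 2 to 0.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (n_wolves, len(lo)))
    if init is not None:
        X[0] = init                          # seed the pack with a prior estimate
    for t in range(n_iter):
        fit = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fit)[:3]]     # alpha, beta, delta
        a = 2.0 * (1.0 - t / n_iter)         # conventional convergence factor: 2 -> 0
        for i in range(n_wolves):
            moves = []
            for L in leaders:
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(L - A * np.abs(C * L - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lo, hi)
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)]

# Toy (AoA, TD)-like search: recover (20, 45) by minimizing a quadratic
target = np.array([20.0, 45.0])
best = gwo_minimize(lambda x: np.sum((x - target) ** 2),
                    bounds=np.array([[0.0, 90.0], [0.0, 90.0]]))
```

Better initialization and a smarter `a` schedule shrink the pack size and iteration count needed, which is the efficiency argument the abstract makes.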
Show Figures

Figure 1: Position updating in GWOEIS.
Figure 2: Flow chart of the GWOEIS algorithm.
Figure 3: MSE vs. the SNR in dB for Q = 2 (θ = [20°, 45°]; τ = [25 ns, 62.5 ns]), R = 30, and T_H = 100 of: (a) the Q TDs, (b) the Q AoAs, and (c) the P × K channel coefficients (on average, per element, for all three parameter types).
Figure 4: MSE vs. the SNR in dB for Q = 2 (θ = [20°, 45°]; τ = [25 ns, 62.5 ns]), R = 100, and T_H = 30 of: (a) the Q TDs, (b) the Q AoAs, and (c) the P × K channel coefficients (on average, per element, for all three parameter types).
Figure 5: MSE vs. the SNR in dB and the sample size R for Q = 2 (θ = [20°, 45°]; τ = [25 ns, 62.5 ns]) and T_H = 100.
Figure 6: MSE vs. the SNR in dB and the iteration number T_H for Q = 2 (θ = [20°, 45°]; τ = [25 ns, 62.5 ns]) and R = 100.
Figure 7: MSE vs. the SNR in dB and the number of paths Q for T_H = 100 and R = 100.
Figure 8: RMSE vs. the SNR in dB and the temporal separation Δτ in ns with Q = 2, R = 100, and T_H = 100.
Figure 9: RMSE vs. the SNR in dB and the angular separation Δθ in degrees with Q = 2, R = 100, and T_H = 100.
34 pages, 5375 KiB  
Article
Advancing mmWave Altimetry for Unmanned Aerial Systems: A Signal Processing Framework for Optimized Waveform Design
by Maaz Ali Awan, Yaser Dalveren, Ali Kara and Mohammad Derawi
Drones 2024, 8(9), 440; https://doi.org/10.3390/drones8090440 - 28 Aug 2024
Viewed by 862
Abstract
This research advances millimeter-wave (mmWave) altimetry for unmanned aerial systems (UASs) by optimizing performance metrics within the constraints of inexpensive automotive radars. Leveraging the software-defined architecture, this study encompasses the intricacies of frequency modulated continuous waveform (FMCW) design for three distinct stages of [...] Read more.
This research advances millimeter-wave (mmWave) altimetry for unmanned aerial systems (UASs) by optimizing performance metrics within the constraints of inexpensive automotive radars. Leveraging the software-defined architecture, this study encompasses the intricacies of frequency modulated continuous waveform (FMCW) design for three distinct stages of UAS flight: cruise, landing approach, and touchdown within a signal processing framework. Angle of arrival (AoA) estimation, traditionally employed in terrain mapping applications, is largely unexplored for UAS radar altimeters (RAs). Time-division multiplexing multiple input–multiple output (TDM-MIMO) is an efficient method for enhancing angular resolution without compromising the size, weight, and power (SWaP) characteristics. Accordingly, this work argues for the potential of AoA estimation using TDM-MIMO to augment situational awareness in challenging landing scenarios. To this end, two corner cases comprising landing a small-sized drone on a platform in the middle of a water body are included. Likewise, for the touchdown stage, an improvised rendition of zoom fast Fourier transform (ZFFT) is investigated to achieve millimeter (mm)-level range accuracy. Aptly, it is proposed that a mm-level accurate RA may be exploited as a software redundancy for the critical weight-on-wheels (WoW) system in fixed-wing commercial UASs. Each stage is simulated as a radar scenario using the specifications of automotive radar operating in the 77–81 GHz band to optimize waveform design, setting the stage for field verification. This article addresses challenges arising from radial velocity due to UAS descent rates and terrain variation through theoretical and mathematical approaches for characterization and mandatory compensation. While constant false alarm rate (CFAR) algorithms have been reported for ground detection, a comparison of their variants within the scope of UAS altimetry is limited.
This study appraises popular CFAR variants to achieve optimized ground detection performance. The authors advocate for dedicated minimum operational performance standards (MOPS) for UAS RAs. Lastly, this body of work identifies potential challenges, proposes solutions, and outlines future research directions. Full article
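Every CFAR variant the abstract compares rests on the same idea: estimate the local noise floor from training cells around the cell under test and scale it into a detection threshold. A minimal 1-D cell-averaging CFAR sketch (window sizes and the scale factor are illustrative, not the paper's tuned values):

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, scale=4.0):
    """Cell-averaging CFAR over a 1-D range profile (illustrative sketch).

    For each cell under test, the noise level is the mean of n_train
    training cells on each side (skipping n_guard guard cells); a
    detection is declared when the cell exceeds scale times that mean.
    """
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    k = n_train + n_guard
    for i in range(k, n - k):
        left = power[i - k : i - n_guard]            # training cells, left side
        right = power[i + n_guard + 1 : i + k + 1]   # training cells, right side
        noise = np.mean(np.concatenate([left, right]))
        hits[i] = power[i] > scale * noise
    return hits

# toy range profile: flat noise with one strong ground return at bin 60
profile = np.ones(128)
profile[60] = 50.0
detections = np.flatnonzero(ca_cfar(profile))
```

Variants such as CFAR-CASO differ only in how the two training windows are combined (smallest-of instead of the mean), which changes behavior near clutter edges.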
Show Figures

Figure 1: Uncertainty in altitude estimation due to wide HPBW of radar antenna [13].
Figure 2: Signal processing flow for point cloud generation in automotive FMCW radars [36].
Figure 3: Radar cube exhibiting slow-time, fast-time, and spatial dimensions [39].
Figure 4: Fundamentals of AoA estimation in an SIMO radar [43].
Figure 5: Standard deviation in terrain elevation for various land types [46].
Figure 6: Comparison of range profile: (a) single chirp per frame; (b) 16 chirps per frame.
Figure 7: Range profile and CFAR threshold.
Figure 8: Range profile and CFAR threshold: (a) CFAR-CA; (b) CFAR-CASO.
Figure 9: AoA estimation at high altitude.
Figure 10: 2Tx–4Rx virtual antenna array in an MIMO radar.
Figure 11: 1Tx–8Rx physical antenna array in an SIMO radar.
Figure 12: Combined radiation patterns: (a) 1 × 8 SIMO; (b) 2 × 4 MIMO.
Figure 13: TDM-MIMO chirp frame with Doppler induced due to radial velocity.
Figure 14: VTOL UAS landing on a drone ship surrounded by water.
Figure 15: Angular FFT showing peaks in respective bins: (a) Scenario 1; (b) Scenario 2.
Figure 16: Zoom FFT implementation.
Figure 17: Range profiles: (a) coarse; (b) fine.
21 pages, 3044 KiB  
Article
A Dual-Branch Convolutional Neural Network-Based Bluetooth Low Energy Indoor Positioning Algorithm by Fusing Received Signal Strength with Angle of Arrival
by Chunxiang Wu, Yapeng Wang, Wei Ke and Xu Yang
Mathematics 2024, 12(17), 2658; https://doi.org/10.3390/math12172658 - 27 Aug 2024
Cited by 2 | Viewed by 774
Abstract
Indoor positioning is the key enabling technology for many location-aware applications. As GPS does not work indoors, various solutions are proposed for navigating devices. Among these solutions, Bluetooth low energy (BLE) technology has gained significant attention due to its affordability, low power consumption, [...] Read more.
Indoor positioning is the key enabling technology for many location-aware applications. As GPS does not work indoors, various solutions are proposed for navigating devices. Among these solutions, Bluetooth low energy (BLE) technology has gained significant attention due to its affordability, low power consumption, and rapid data transmission capabilities, making it highly suitable for indoor positioning. Received signal strength (RSS)-based positioning has been studied intensively for a long time. However, the accuracy of RSS-based positioning can fluctuate due to signal attenuation and environmental factors like crowd density. Angle of arrival (AoA)-based positioning instead locates devices from angle measurements and can achieve higher precision, but its accuracy may also be affected by radio reflections, diffractions, etc. In this study, a dual-branch convolutional neural network (CNN)-based BLE indoor positioning algorithm integrating RSS and AoA is proposed, which exploits both RSS and AoA to estimate the position of a target. Given the absence of publicly available datasets, we generated our own dataset for this study. Data were collected from each receiver in three different directions, resulting in a total of 2675 records, which included both RSS and AoA measurements. Of these, 1295 records were designated for training purposes. Subsequently, we evaluated our algorithm using the remaining 1380 unseen test records. Our RSS and AoA fusion algorithm yielded a sub-meter accuracy of 0.79 m, which was significantly better than the 1.06 m and 1.67 m obtained when using only the RSS or the AoA method. Compared with the RSS-only and AoA-only solutions, the accuracy was improved by 25.47% and 52.69%, respectively. These results are even close to the latest commercial proprietary system, which represents the state-of-the-art indoor positioning technology. Full article
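RSS-based ranging of the kind described here typically inverts a log-distance path-loss model. A minimal sketch (the 1 m reference power P0 and path-loss exponent n are assumed values, not calibration constants from the paper):

```python
import math

def rss_to_distance(rss_dbm, p0_dbm=-45.0, n=2.0):
    """Invert the log-distance path-loss model RSS = P0 - 10*n*log10(d).

    p0_dbm is the RSS at 1 m and n the path-loss exponent; both values
    here are illustrative placeholders.
    """
    return 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * n))

d = rss_to_distance(-65.0)   # a reading 20 dB below the 1 m reference
```

Three or more such distance estimates can then be trilaterated into a position; the paper instead feeds the raw RSS (together with AoA) into a CNN, sidestepping explicit model inversion and its sensitivity to the exponent n.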
Show Figures

Figure 1: Measurement of RSS and AoA.
Figure 2: Sample log output with timestamp.
Figure 3: Calculation of position from RSS measurements.
Figure 4: Calculation of position from AoA measurements.
Figure 5: Outline scheme for combining RSS and AoA.
Figure 6: Proposed CNN network architecture.
Figure 7: Potential orientations of prediction points based on RSS and AoA.
Figure 8: Diagram of the experimental schematic.
Figure 9: Training and testing points setting.
Figure 10: Initial format of datasets.
Figure 11: Mapping original data into a grayscale image.
14 pages, 3833 KiB  
Article
Real-Time Indoor Visible Light Positioning (VLP) Using Long Short Term Memory Neural Network (LSTM-NN) with Principal Component Analysis (PCA)
by Yueh-Han Shu, Yun-Han Chang, Yuan-Zeng Lin and Chi-Wai Chow
Sensors 2024, 24(16), 5424; https://doi.org/10.3390/s24165424 - 22 Aug 2024
Cited by 1 | Viewed by 898
Abstract
New applications such as augmented reality/virtual reality (AR/VR), Internet-of-Things (IOT), autonomous mobile robot (AMR) services, etc., require high reliability and high accuracy real-time positioning and tracking of persons and devices in indoor areas. Among the different visible-light-positioning (VLP) schemes, such as proximity, time-of-arrival [...] Read more.
New applications such as augmented reality/virtual reality (AR/VR), Internet-of-Things (IoT), autonomous mobile robot (AMR) services, etc., require high reliability and high accuracy real-time positioning and tracking of persons and devices in indoor areas. Among the different visible-light-positioning (VLP) schemes, such as proximity, time-of-arrival (TOA), time-difference-of-arrival (TDOA), angle-of-arrival (AOA), and received-signal-strength (RSS), the RSS scheme is relatively easy to implement. As the received optical power has an inverse relationship with the distance between the LED transmitter (Tx) and the photodiode (PD) receiver (Rx), position information can be estimated by studying the received optical power from different Txs. In this work, we propose and experimentally demonstrate a real-time VLP system utilizing long short-term memory neural network (LSTM-NN) with principal component analysis (PCA) to mitigate high positioning error, particularly at the positioning unit cell boundaries. Experimental results show that in a positioning unit cell of 100 × 100 × 250 cm3, the average positioning error is 5.912 cm when using LSTM-NN only. By utilizing the PCA, we can observe that the positioning accuracy can be significantly enhanced to 1.806 cm, particularly at the unit cell boundaries and cell corners, showing a positioning error reduction of 69.45%. In the cumulative distribution function (CDF) measurements, when using only the LSTM-NN model, the positioning error of 95% of the experimental data is >15 cm; while using the LSTM-NN with PCA model, the error is reduced to <5 cm. In addition, we also experimentally demonstrate that the proposed real-time VLP system can also be used to predict the direction and the trajectory of the moving Rx. Full article
(This article belongs to the Special Issue Challenges and Future Trends in Optical Communications)
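The PCA stage described above is standard dimensionality reduction applied before the LSTM: center the features, keep the leading eigenvectors of the sample covariance, and project. A minimal sketch (the feature matrix here is synthetic; the paper's actual inputs are received-power traces from the four LEDs):

```python
import numpy as np

def pca_transform(X, n_components=2):
    """Project feature vectors onto their top principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = np.cov(Xc, rowvar=False)              # sample covariance
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :n_components]       # columns = leading components
    return Xc @ top

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                   # 200 samples, 4 raw features
Z = pca_transform(X, 2)
```

By construction the projected coordinates are mutually uncorrelated, which is what lets the downstream network concentrate on the directions carrying most of the received-power variance.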
Show Figures

Figure 1: (a) Architecture of the VLP system with four LEDs modulated by specific RF carrier frequencies f1, f2, f3, and f4 (47 kHz, 59 kHz, 83 kHz, 101 kHz), respectively. (b) Bird's-eye view of the positioning unit cell indicating the training and testing locations.
Figure 2: (a) Experimental photo of the VLP experiment. (b) Photo of the client side. The PD, RTO, and PC are all placed on a trolley for training and testing data collection. PD: photodiode; RTO: real-time oscilloscope.
Figure 3: Architecture of the VLP Rx. ID: optical identifier; BPF: band-pass filter; LPF: low-pass filter.
Figure 4: Flow diagram of the proposed real-time VLP system utilizing LSTM-NN with PCA.
Figure 5: Flow diagram of the PCA used in the VLP experiment.
Figure 6: Structure of an LSTM cell used in the LSTM-NN model.
Figure 7: Structure of the proposed LSTM-NN model used in both the training phase and the testing phase.
Figure 8: Error distributions using (a) the LSTM-NN only and (b) the LSTM-NN with PCA.
Figure 9: CDF of the measured positioning error using the LSTM-NN only and the LSTM-NN with PCA.
Figure 10: Error distributions using (a) FCN only and (b) FCN with PCA.
Figure 11: CDF of the measured positioning error using FCN only and FCN with PCA.
Figure 12: Experimental predicted location of the moving Rx using the LSTM-NN with PCA at different iterations. (a–h) Predicted direction and trajectory of the Rx from iteration 1 to 7.
20 pages, 14212 KiB  
Article
ReLoki: A Light-Weight Relative Localization System Based on UWB Antenna Arrays
by Joseph Prince Mathew and Cameron Nowzari
Sensors 2024, 24(16), 5407; https://doi.org/10.3390/s24165407 - 21 Aug 2024
Viewed by 825
Abstract
Ultra Wide-Band (UWB) sensing has gained popularity in relative localization applications. Many localization solutions rely on using Time of Flight (ToF) sensing based on a beacon–tag system, which requires four or more beacons in the environment for 3D localization. A lesser researched option [...] Read more.
Ultra Wide-Band (UWB) sensing has gained popularity in relative localization applications. Many localization solutions rely on using Time of Flight (ToF) sensing based on a beacon–tag system, which requires four or more beacons in the environment for 3D localization. A less-researched option is using Angle of Arrival (AoA) readings obtained from UWB antenna pairs to perform relative localization. In this paper, we present a UWB platform called ReLoki that can be used for ranging and AoA-based relative localization in 3D. To enable AoA, ReLoki utilizes the geometry of antenna arrays. In this paper, we present a system design for localization estimates using a Regular Tetrahedral Array (RTA), Regular Orthogonal Array (ROA), and Uniform Square Array (USA). The use of a multi-antenna array enables fully onboard infrastructure-free relative localization between participating ReLoki modules. We also present studies demonstrating sub-50 cm localization errors in indoor experiments, achieving performance close to current ToF-based systems, while offering the advantage of not relying on static infrastructure. Full article
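AoA from a UWB antenna pair reduces to phase interferometry: a plane wave hitting two antennas spaced d apart accumulates a phase difference Δφ = 2πd·sin(θ)/λ, which is unambiguous while d ≤ λ/2. A minimal sketch of the geometry (the wavelength value is illustrative for a UWB channel, not a ReLoki specification):

```python
import math

def aoa_from_phase(delta_phi, d, wavelength):
    """Angle of incidence (radians) from the inter-antenna phase difference.

    Inverts delta_phi = 2*pi*d*sin(theta)/wavelength for one antenna pair.
    """
    return math.asin(delta_phi * wavelength / (2.0 * math.pi * d))

# half-wavelength spacing: a 90-degree phase lag maps to a 30-degree incidence
wavelength = 0.0406                 # metres, roughly a mid-band UWB channel (assumed)
theta = aoa_from_phase(math.pi / 2, wavelength / 2, wavelength)
```

One pair yields one incidence angle; the RTA/ROA/USA arrays in the paper combine several pairs with known geometry to recover a full 3D bearing.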
Show Figures

Figure 1: Illustration of the relative localization problem. On the left, ReLoki is attached to an existing motion platform and performs relative localization from fully onboard sensing: the RX agent senses the relative positions q_{i,j} of the TX agents w.r.t. its body frame whenever a message is received from j. On the right, ReLoki acts as a mobile beacon; all beacons can localize a transmitting agent in 3D, and adding more beacons improves the estimates.
Figure 2: Illustration of the 4-antenna configurations usable with ReLoki: the ROA, with antennas placed orthogonally w.r.t. the central antenna; the RTA, with antennas at the vertices of a regular tetrahedron; and the USA, with antennas placed as a square on the same plane.
Figure 3: Illustration of the angle of incidence for RTA, ROA, and USA antennas. The measured angle of incidence feeds bearing estimates based on the specific geometry of the antenna array.
Figure 4: Angle of incidence measured for the redundant pairs, averaged over 20 readings. The plot shows the measured angle saturating above 60° in one pair and below −60° in the other.
Figure 5: Timing diagram showing the transmission phases: Message Transfer (red), TWR Ranging (blue), and AoA Blink (green).
Figure 6: Single-antenna design for ReLoki. (a) Finished PCB antenna and copper plane showing the circular patch antenna and the ground plane. (b) Return loss below −10 dB across almost the entire UWB band for Ch. 1, 2, and 3. (c) Center frequency and bandwidth of the UWB bands supported by the proposed antenna and the DW1000.
Figure 7: ReLoki controller design. (a) Hardware block diagram: host i initiates a communication request, ReLoki transmits the data, the receiving ReLoki combines it with the estimated localization data, and the result is sent to the receiving host j. (b) ReLoki PCB design showing the components of the block diagram.
Figure 8: Experimental setup for covariance measurement: the pan-and-tilt mechanism (left) and the test setup at 1.5 m range from the source (right).
Figure 9: Covariance maps for the RTA (top left) and ROA (top right) arrays; darker colors mean lower error. Bottom: comparison of the two arrays, where green boxes mark lower errors for RTA, red for ROA, and yellow comparable performance (combined azimuth and elevation difference within 10°).
Figure 10: Covariance map for the USA antenna. Top: covariance maps, darker colors showing lower localization error. Bottom: average of measured vs. actual azimuth and elevation over 50 readings at a given pan–tilt pair.
Figure 11: Localization experiment with the RTA antenna. Left: composite of overlaid video frames, with Agent 1 executing a rectangular motion and Agent 2 a straight-line motion. Right: ReLoki output as seen by Agent 3 alongside the OptiTrack data, showing raw estimates (lighter color) and low-pass-filtered data (darker color).
Figure 12: ReLoki beacon test. Top: two beacons placed 8 m apart while a human operator moves the tag in an hourglass pattern. Bottom: localization data with captured MoCAP data, with one beacon active (right) and both active (left); the unused beacon is marked with an "X", and localization errors are shown for both cases.
22 pages, 11909 KiB  
Article
Performance Analysis of UAV-IRS Relay Multi-Hop FSO/THz Link
by Yawei Wang, Rongpeng Liu, Jia Yuan, Jingwei Lu, Ziyang Wang, Ruihuan Wu, Zhongchao Wei and Hongzhan Liu
Electronics 2024, 13(16), 3247; https://doi.org/10.3390/electronics13163247 - 15 Aug 2024
Viewed by 1022
Abstract
As the era of sixth-generation (6G) communications approaches, there will be an unprecedented increase in the number of wireless internet-connected devices and a sharp rise in mobile data traffic. Faced with the scarcity of spectrum resources in traditional communication networks and challenges such [...] Read more.
As the era of sixth-generation (6G) communications approaches, there will be an unprecedented increase in the number of wireless internet-connected devices and a sharp rise in mobile data traffic. Faced with the scarcity of spectrum resources in traditional communication networks and challenges such as rapidly establishing communications after disasters, this study leverages unmanned aerial vehicles (UAVs) to build an integrated multi-hop communication system combining free-space optical (FSO) communication, terahertz (THz) technology, and intelligent reflecting surfaces (IRSs). This innovative amalgamation capitalizes on the flexibility of UAVs, the deployability of IRS, and the complementary strengths of FSO and THz communications. We have developed a comprehensive channel model that includes the effects of atmospheric turbulence, attenuation, pointing errors, and angle-of-arrival (AOA) fluctuations. Furthermore, we have derived probability density functions (PDFs) and cumulative distribution functions (CDFs) for various switching techniques. Employing advanced methods such as Gaussian–Laguerre quadrature and the central limit theorem (CLT), we have calculated key performance indicators including the average outage probability, bit error rate (BER), and channel capacity. The numerical results demonstrate that IRS significantly enhances the performance of the UAV-based hybrid FSO/THz system. The research indicates that optimizing the number of IRS elements can substantially increase throughput and reliability while minimizing switching costs. Additionally, the multi-hop approach specifically addresses the line-of-sight (LoS) dependency limitations inherent in FSO and THz systems by utilizing UAVs as dynamic relay points. This strategy effectively bridges longer distances, overcoming physical and atmospheric obstacles, and ensures stable communication links even under adverse conditions.
This study underscores that the enhanced multi-hop FSO/THz link is highly effective for emergency communications after disasters, addressing the challenge of scarce spectrum resources. By strategically deploying UAVs as relay points in a multi-hop configuration, the system achieves greater flexibility and resilience, making it highly suitable for critical communication scenarios where traditional networks might fail. Full article
(This article belongs to the Special Issue Advanced Optical Wireless Communication Systems)
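The Gauss–Laguerre quadrature mentioned in the abstract handles expectations of the form ∫₀^∞ e⁻ˣ f(x) dx, which arise whenever a performance metric is averaged over an exponential-type fading density. A toy sketch with plain Rayleigh fading (the paper's composite turbulence/pointing-error channel is far richer; gamma values here are arbitrary):

```python
import numpy as np

def avg_capacity_laguerre(gamma_avg, order=60):
    """E[log2(1 + gamma)] over Rayleigh fading via Gauss-Laguerre quadrature.

    The instantaneous SNR gamma is exponential with mean gamma_avg, so the
    average capacity equals int_0^inf e^{-x} log2(1 + gamma_avg*x) dx,
    which Gauss-Laguerre approximates as sum(w_i * f(x_i)).
    """
    x, w = np.polynomial.laguerre.laggauss(order)
    return float(np.sum(w * np.log2(1.0 + gamma_avg * x)))

cap = avg_capacity_laguerre(10.0)

# brute-force trapezoidal check on a fine grid (tail beyond x = 200 is negligible)
g = np.linspace(0.0, 200.0, 400001)
y = np.exp(-g) * np.log2(1.0 + 10.0 * g)
brute = float(np.sum((y[1:] + y[:-1]) / 2.0) * (g[1] - g[0]))
```

The quadrature needs only a few dozen nodes to match a 400,001-point brute-force integral, which is why it is the standard tool for outage/BER/capacity averages over fading.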
Show Figures

Figure 1: Multi-hop FSO/THz communication system model with UAV-based IRS relays.
Figure 2: Variation curves of outage probability for different modes, pointing errors, and atmospheric turbulence.
Figure 3: Variation curve of outage probability for different numbers of links in the scheme with soft switching.
Figure 4: Variation curves of the outage probability for different numbers of reflective surface elements.
Figure 5: BER variation curves for different turbulence intensity, pointing error, and mode effects.
Figure 6: BER variation curves under the effect of different numbers of reflective surface elements.
Figure 7: BER variation curves for different numbers of links.
Figure 8: Variation curves of channel capacity with different modes, pointing errors, and atmospheric turbulence.
Figure 9: Variation curves of channel capacity for different numbers of reflecting surface elements.
Figure 10: Variation curve of channel capacity at different numbers of links.
15 pages, 2972 KiB  
Article
Robust Bluetooth AoA Estimation for Indoor Localization Using Particle Filter Fusion
by Kaiyue Qiu, Ruizhi Chen, Guangyi Guo, Yuan Wu and Wei Li
Appl. Sci. 2024, 14(14), 6208; https://doi.org/10.3390/app14146208 - 17 Jul 2024
Viewed by 1120
Abstract
With the growing demand for positioning services, angle-of-arrival (AoA) estimation or direction-finding (DF) has been widely investigated for applications in fifth-generation (5G) technologies. Many existing AoA estimation algorithms only require the measurement of the direction of the incident wave at the transmitter to [...] Read more.
With the growing demand for positioning services, angle-of-arrival (AoA) estimation or direction-finding (DF) has been widely investigated for applications in fifth-generation (5G) technologies. Many existing AoA estimation algorithms only require the measurement of the direction of the incident wave at the transmitter to obtain correct results. However, for most cellular systems, such as Bluetooth indoor positioning systems, indoor positioning accuracy is severely affected by multipath and non-line-of-sight (NLOS) propagation. In this paper, a comprehensive algorithm is investigated that combines radio measurements from Bluetooth AoA local navigation systems with indoor position estimates obtained using particle filtering (PF). This algorithm allows us to explore new optimized methods to reduce estimation errors in indoor positioning. First, particle filtering is used to predict the rough position of a moving target. Then, an algorithm with robust beam weighting is used to estimate the AoA of the multipath components. Based on this, a system of pseudo-linear equations for target positioning based on the probabilistic framework of PF and AoA measurement is derived. Theoretical analysis and simulation results show that the algorithm can improve the positioning accuracy by approximately 25.7% on average. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
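The robust beam-weighting step builds on Capon/MVDR-style spatial spectra, P(θ) = 1 / (a(θ)ᴴ R⁻¹ a(θ)). A self-contained sketch on a simulated uniform linear array (array size, snapshot count, and SNR are illustrative, and the particle-filter prediction/fusion stage is not shown):

```python
import numpy as np

def mvdr_spectrum(R_inv, angles_deg, n_elem, d=0.5):
    """MVDR (Capon) spatial spectrum for a uniform linear array.

    d is the element spacing in wavelengths; returns P(theta) on the grid.
    """
    thetas = np.deg2rad(angles_deg)
    n = np.arange(n_elem)[:, None]
    A = np.exp(-2j * np.pi * d * n * np.sin(thetas))        # steering matrix
    # per-angle quadratic form a^H R^-1 a
    return 1.0 / np.real(np.einsum("ij,jk,ki->i", A.conj().T, R_inv, A))

# simulate one source at +20 degrees on a 6-element array, 500 snapshots
rng = np.random.default_rng(0)
n_elem, n_snap, true_deg = 6, 500, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(n_elem) * np.sin(np.deg2rad(true_deg)))
s = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) * 10.0   # ~20 dB SNR
noise = (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap))) / np.sqrt(2)
X = np.outer(a, s) + noise
R = X @ X.conj().T / n_snap                                 # sample covariance
grid = np.arange(-90.0, 90.5, 0.5)
P = mvdr_spectrum(np.linalg.inv(R), grid, n_elem)
est_deg = grid[np.argmax(P)]
```

In the paper's pipeline, the PF-predicted rough position constrains where on this spectrum to look, which is what makes the beam weighting robust to multipath peaks.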
Show Figures

Figure 1: Signal model of DOA estimation.
Figure 2: Beamforming basic structure.
Figure 3: Relationship between the three measures.
Figure 4: Flowchart of the MVDR+PF algorithm.
Figure 5: Tracking results under uniform velocity conditions.
Figure 6: Target navigation tracking results under variable speed conditions.
Figure 7: Effect of the number of particles on the root-mean-square error of the algorithm under uniform and variable speed conditions.
Figure 8: Variation in RMSE (a) with signal-to-noise ratio and (b) with the number of snapshots.
Figure 9: Cumulative error distribution curves of the three algorithms.
14 pages, 360 KiB  
Article
Angle of Arrival Estimator Utilizing the Minimum Number of Omnidirectional Microphones
by Jonghoek Kim
J. Mar. Sci. Eng. 2024, 12(6), 874; https://doi.org/10.3390/jmse12060874 - 24 May 2024
Viewed by 715
Abstract
In sound signal processing, the angle of arrival (AOA) indicates the direction from which a propagating sound signal arrives at a point where multiple omnidirectional microphones are positioned. Considering a small underwater platform (e.g., an unmanned underwater vehicle), this article addresses how to estimate a non-cooperative target’s signal direction utilizing the minimum number of omnidirectional microphones. Using the minimum number of microphones is desirable, since a small number of omnidirectional microphones reduces the cost and size of the platform. Suppose that each microphone measures a real-valued sound signal whose speed and frequency are not known in advance. Since two microphones cannot determine a unique AOA solution, this study presents how to estimate the angle of arrival using a general configuration composed of three omnidirectional microphones. The effectiveness of the proposed angle-of-arrival estimator utilizing only three microphones is demonstrated by comparing it with a state-of-the-art estimation algorithm through computer simulations. Full article
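The far-field geometry behind a three-microphone AOA estimate can be sketched as follows: each time difference of arrival (TDOA) against a reference microphone yields one linear equation in the unknown propagation direction, and two such equations from three non-collinear microphones determine it. The microphone layout, the noiseless TDOAs, and the nominal sound speed of 1500 m/s below are illustrative assumptions; the paper's estimator, which handles unknown signal speed and frequency, is more involved.

```python
import numpy as np

def aoa_three_mics(mics, tdoas, c=1500.0):
    """Far-field plane-wave AOA from two TDOAs measured against microphone 1.

    mics  : (3, 2) array of 2D microphone coordinates.
    tdoas : (t_2 - t_1, t_3 - t_1), arrival-time differences in seconds.
    c     : assumed propagation speed in m/s (nominal underwater value).
    """
    # A plane wave from unit direction u reaches microphone i at
    # t_i = t_0 - (S_i . u) / c, so each TDOA gives one linear equation
    # (S_1 - S_i) . u = c * (t_i - t_1) in the two components of u.
    A = mics[0] - mics[1:]        # (2, 2); invertible for non-collinear mics
    b = c * np.asarray(tdoas)
    u = np.linalg.solve(A, b)
    return np.arctan2(u[1], u[0])  # bearing phi in (-pi, pi]

# Hypothetical layout loosely following the paper's Figure 6 geometry.
mics = np.array([[0.4, 0.0], [0.0, 0.2], [0.0, -0.2]])
phi_true = 0.7
u = np.array([np.cos(phi_true), np.sin(phi_true)])
tdoas = [(mics[0] - mics[i]) @ u / 1500.0 for i in (1, 2)]
phi_hat = aoa_three_mics(mics, tdoas)
```

A convenient property of this linear form: scaling the assumed speed c only scales the recovered vector u, so the bearing from arctan2 is unaffected by a wrong (positive) speed guess, even though the raw u is not a unit vector.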
Figure 1: The microphone configuration with three microphones. u defines the unit vector from the origin to the target. ϕ defines the bearing angle of the target signal, such that −π &lt; ϕ ≤ π. S_i (i ∈ {1, 2, 3}) defines the 2D coordinates of the i-th microphone. P_i defines the projection of S_i onto u.
Figure 2: There are two microphones, S_1 and S_2, connected by a straight infinite line. Dotted arrows indicate the signal direction at each microphone. Utilizing the phase differences at these two microphones, one cannot determine whether the target lies to the left or to the right of this line.
Figure 3: A general configuration with three microphones, with u, ϕ, S_i, and P_i as in Figure 1. A_j denotes the angle of the j-th microphone measured counter-clockwise from the x-axis of the microphone configuration, and r_j denotes the distance between the center and the j-th microphone.
Figure 4: A singular microphone configuration where D_1 = D_2 = D_3, with the notation of Figure 3. One cannot determine whether the signal direction is u or −u.
Figure 5: The signal strength at every microphone (σ_G = 0.5), i.e., a signal-to-noise ratio (SNR) of 10 log(1/0.5) = 10 log(2) in dB.
Figure 6: A general microphone configuration with A_2 = π/2 and A_3 = −π/2 (Section 3.1), r_1 = 0.4λ, r_2 = r_1/2, and r_3 = r_1/3.
28 pages, 6465 KiB  
Article
A Co-Localization Algorithm for Underwater Moving Targets with an Unknown Constant Signal Propagation Speed and Platform Errors
by Yang Liu, Long He, Gang Fan, Xue Wang and Ya Zhang
Sensors 2024, 24(10), 3127; https://doi.org/10.3390/s24103127 - 14 May 2024
Viewed by 1013
Abstract
Underwater mobile acoustic source localization encounters several challenges, including the unknown propagation speed of the source signal, uncertainty in the observation platform’s position and velocity (i.e., platform systematic errors), and economic costs. This paper proposes a new two-step closed-form localization algorithm that jointly exploits angle-of-arrival (AOA), time-difference-of-arrival (TDOA), and frequency-difference-of-arrival (FDOA) measurements to address these challenges. The algorithm first introduces auxiliary variables to construct pseudo-linear equations and obtain an initial solution, and then exploits the relationship between the unknown and auxiliary variables to derive the exact solution comprising solely the unknown variables. Both theoretical analyses and simulation experiments demonstrate that the proposed method accurately estimates the position, velocity, and sound speed of the source even with an unknown sound speed and platform systematic errors; within a reasonable error range, it is asymptotically optimal, approaching the Cramér–Rao lower bound (CRLB). Furthermore, the algorithm exhibits low complexity, reduces the number of required localization platforms, and decreases economic costs. Additionally, simulation experiments validate the effectiveness of the proposed localization method across various scenarios, where it outperforms the comparison algorithms. Full article
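The "pseudo-linear equations" mentioned above follow a classical pattern: a nonlinear bearing measurement rearranges into a constraint that is linear in the target coordinates, so stacking one row per measurement yields a closed-form least-squares fix. The sketch below shows only that AOA ingredient for a static 2D target with noiseless bearings; the paper's full algorithm additionally folds in the TDOA/FDOA measurements, the unknown sound speed, and the platform errors via auxiliary variables.

```python
import numpy as np

def pseudo_linear_fix(sensors, bearings):
    """Least-squares position fix from bearings measured at known sensors.

    Each bearing theta_i satisfies tan(theta_i) = (y - s_y) / (x - s_x),
    which rearranges into the linear constraint
        sin(theta_i) * x - cos(theta_i) * y
            = sin(theta_i) * s_x - cos(theta_i) * s_y,
    so one row per sensor gives an overdetermined linear system A p = b.
    """
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = np.sum(A * sensors, axis=1)       # right-hand side, row by row
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative layout: three hypothetical sensor platforms and a target.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
target = np.array([30.0, 40.0])
bearings = np.arctan2(target[1] - sensors[:, 1], target[0] - sensors[:, 0])
est = pseudo_linear_fix(sensors, bearings)
```

With noisy bearings this estimator is known to be biased (the noise enters the matrix A as well as b), which is one motivation for two-step refinements of the kind the paper derives.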
(This article belongs to the Section Navigation and Positioning)
Figure 1: Underwater multi-platform positioning model.
Figure 2: Underwater positioning scenario diagram.
Figure 3: Variation in CRLBs with sensor measurement errors under unknown/known c^o: (a) position CRLBs; (b) velocity CRLBs.
Figure 4: Variation in CRLBs with platform systematic errors under unknown/known c^o: (a) position CRLBs; (b) velocity CRLBs.
Figure 5: CRLBs for moving targets at different positions: (a) CRLBs for the target position with known c^o; (b) CRLBs for the target position with unknown c^o.
Figure 6: Variation curves of target parameter estimation RMSEs with sensor measurement errors: (a) position estimation RMSE; (b) velocity estimation RMSE; (c) sound speed estimation RMSE.
Figure 7: Variation curves of target parameter estimation RMSEs with systematic errors: (a) position estimation RMSE; (b) velocity estimation RMSE; (c) sound speed estimation RMSE.
Figure 8: Comparison of CDF plots for different algorithms: (a) position estimation; (b) velocity estimation; (c) sound speed estimation.
Figure 9: Variation in target parameter estimation RMSEs with the number of moving platforms: (a) position estimation RMSE; (b) velocity estimation RMSE; (c) sound speed estimation RMSE.
Figure 10: Boxplots of the parameter estimation RMSEs of random far-field target sources versus measurement errors: (a) position estimation; (b) sound speed estimation.
Figure 11: Boxplots of the parameter estimation RMSEs of random far-field target sources versus systematic errors: (a) position estimation; (b) sound speed estimation.
24 pages, 2043 KiB  
Article
UAV Path Optimization for Angle-Only Self-Localization and Target Tracking Based on the Bayesian Fisher Information Matrix
by Kutluyil Dogancay and Hatem Hmam
Sensors 2024, 24(10), 3120; https://doi.org/10.3390/s24103120 - 14 May 2024
Viewed by 1094
Abstract
In this paper, new path optimization algorithms are developed for uncrewed aerial vehicle (UAV) self-localization and target tracking, exploiting beacon (landmark) bearings and angle-of-arrival (AOA) measurements from a manoeuvring target. To account for time-varying rotations in the local UAV coordinates with respect to the global Cartesian coordinate system, the unknown orientation angle of the UAV is also estimated jointly with its location from the beacon bearings. This is critically important, as orientation errors can significantly degrade the self-localization performance. The joint self-localization and target tracking problem is formulated as a Kalman filtering problem with an augmented state vector that includes all the unknown parameters and a measurement vector of beacon bearings and target AOA measurements. This formulation encompasses applications where Global Navigation Satellite System (GNSS)-based self-localization is not available or reliable, and only beacons or landmarks can be utilized for UAV self-localization. An optimal UAV path is determined from the optimization of the Bayesian Fisher information matrix by means of A- and D-optimality criteria. The performance of this approach at different measurement noise levels is investigated. A modified closed-form projection algorithm based on a previous work is also proposed to achieve optimal UAV paths. The performance of the developed UAV path optimization algorithms is demonstrated with extensive simulation examples. Full article
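The D-optimality criterion named in this abstract amounts to steering the UAV so that the determinant of the Fisher information matrix (FIM) of the target estimate grows as fast as possible. A stripped-down, bearings-only version can be sketched as a greedy heading search; the noise level, the step length, and the absence of a turn-rate constraint are simplifying assumptions, and the Bayesian/EKF machinery of the paper is omitted.

```python
import numpy as np

SIGMA = np.deg2rad(1.0)   # assumed bearing-noise standard deviation

def bearing_fim(positions, target):
    """FIM of the 2D target position for bearing-only measurements taken
    from the given observer positions (i.i.d. Gaussian noise, std SIGMA)."""
    J = np.zeros((2, 2))
    for s in positions:
        d = target - s
        r2 = d @ d
        g = np.array([-d[1], d[0]]) / r2   # gradient of the bearing w.r.t. target
        J += np.outer(g, g) / SIGMA**2
    return J

def next_waypoint(path, target, step=50.0, n_headings=36):
    """Greedy D-optimal step: among candidate headings, pick the waypoint
    that maximizes det(FIM) of the measurements along the extended path."""
    best, best_det = None, -np.inf
    for h in np.linspace(0.0, 2.0 * np.pi, n_headings, endpoint=False):
        cand = path[-1] + step * np.array([np.cos(h), np.sin(h)])
        d = np.linalg.det(bearing_fim(path + [cand], target))
        if d > best_det:
            best, best_det = cand, d
    return best

# Illustrative run: the UAV starts at the origin, the target sits 1 km away.
target = np.array([1000.0, 0.0])
path = [np.array([0.0, 0.0])]
for _ in range(5):
    path.append(next_waypoint(path, target))
```

Because a single bearing only constrains the target along one direction, the FIM of the initial position is rank deficient; each greedily chosen waypoint adds angular diversity and inflates det(FIM), which is exactly what shrinks the area of the error ellipses the paper plots.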
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
Figure 1: Geometry for UAV self-localization and target tracking using beacon bearings and the target AOA. The UAV location s_k, its orientation angle ϕ_k and the target location p_k are unknown and are to be estimated jointly from noisy beacon bearing and target AOA measurements.
Figure 2: Illustration of the set of permissible waypoints S_k and the maximum turn rate ϑ_max.
Figure 3: Modified projection algorithm for UAV path optimization.
Figure 4: Computational architecture for UAV path optimization and target tracking. The A-optimality, D-optimality and projection algorithms differ in the way ϑ*_(k−1), k = 1, 2, …, is computed.
Figure 5: Optimal UAV paths for a stationary target at σ = 0.1°, computed by the (a) A-optimality, (b) D-optimality and (c) projection algorithms, with (d) a close-up of the projection algorithm. The initial UAV location is marked with “t_0”. Black dots and lines indicate the 2-σ error ellipses for the initial and final EKF target location estimates; gray dots and lines show those for intermediate EKF estimates.
Figure 6: As in Figure 5, for a stationary target at σ = 1°.
Figure 7: As in Figure 5, for a stationary target at σ = 2°.
Figure 8: Optimal UAV paths for target tracking (σ = 0.1°), laid out as in Figure 5, with the initial UAV and target locations marked with “t_0”.
Figure 9: As in Figure 8, for target tracking at σ = 1°.
Figure 10: As in Figure 8, for target tracking at σ = 2°.
Figure 11: RMSE of EKF target location estimates for a stationary target at (a) σ = 0.1°, (b) σ = 1°, and (c) σ = 2°.
Figure 12: RMSE of EKF target location estimates for a manoeuvring target at (a) σ = 0.1°, (b) σ = 1°, and (c) σ = 2°.