
Radar Target Detection, Imaging and Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Radar Sensors".

Deadline for manuscript submissions: 31 March 2025 | Viewed by 3442

Special Issue Editors


Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
Interests: radar jamming game evolution technology; detection and communication integrated resource management and control technology; weak target detection technology; multi-functional waveform design

Guest Editor
School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China
Interests: signal detection; multi-sensor resource management; multi-function integrated system resource optimization

Guest Editor
School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China
Interests: radar jamming game evolution technology; multi-functional radar system

Special Issue Information

Dear Colleagues,

Radar can sense targets and the environment at any time and in any weather, and it plays an important role in a wide range of applications such as target detection, imaging and recognition. With advances in radar hardware and software technologies, more flexible radar operating modes with greater potential have been developed, together with new theories and methods for advanced radar detection, imaging and recognition. Radar detection, imaging and recognition have thus become an international research frontier and hotspot in the field of sensors.

This Special Issue aims to present recent advances in the theory and application of radar detection, imaging and recognition. Topics may include, but are not limited to, the following:

  • Radar detection, tracking, parameter estimation
  • Clutter or jamming suppression
  • Beamforming
  • SAR/ISAR/ultra-wideband radar
  • Radar imaging technology
  • Radar target recognition technology
  • Synthetic aperture techniques
  • Signal and data processing
  • Advanced RF and antenna technologies
  • Waveform diversity
  • Radar design and simulation
  • Radar jamming

Dr. Tianxian Zhang
Dr. Xueting Li
Dr. Yuanhang Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • radar imaging technology
  • radar design and simulation
  • radar detection, tracking, parameter estimation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

26 pages, 1270 KiB  
Article
Node Selection and Path Optimization for Passive Target Localization via UAVs
by Xiaoyou Xing, Zhiwen Zhong, Xueting Li and Yiyang Yue
Sensors 2025, 25(3), 780; https://doi.org/10.3390/s25030780 - 28 Jan 2025
Viewed by 411
Abstract
The performance of passive target localization is strongly affected by the positions of the unmanned aerial vehicles (UAVs). In this paper, to improve resource utilization efficiency and localization accuracy, the node selection problem and the path optimization problem are jointly investigated. Firstly, the passive target localization model is established and the Chan-based time difference of arrival (TDOA) localization method is introduced. Then, the Cramer–Rao lower bound (CRLB) for Chan-TDOA localization is derived, and the problems of node selection and path optimization are formulated. Secondly, a CRLB-based node selection method is proposed to divide the UAVs into several groups localizing different targets, and a CRLB-based path optimization method is proposed to search for the optimal UAV position configuration at each time step. The proposed path optimization method also handles no-fly-zone (NFZ) constraints, ensuring operational safety while maintaining optimal target tracking performance. To improve the efficiency of path optimization, the particle swarm optimization (PSO) algorithm is applied to accelerate the search. Finally, numerical simulations verify the validity and effectiveness of the proposed methods.
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
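
To make the CRLB-guided search concrete, here is a minimal numerical sketch, not the authors' code: it evaluates the trace of the 2-D position CRLB for a candidate TDOA geometry, the scalar fitness a PSO-style search over next-step UAV positions would minimize. The function name, the noise level sigma, and the shared-reference noise model are illustrative assumptions.

```python
import numpy as np

def tdoa_crlb_trace(uav_xy, target_xy, sigma=5.0):
    """Trace of the 2-D position CRLB for range-difference (TDOA) measurements.

    uav_xy    : (n, 2) UAV positions; row 0 is the reference (master) UAV.
    target_xy : (2,) assumed target position.
    sigma     : standard deviation of each range measurement, in metres.
    """
    diff = target_xy - uav_xy                # vectors from each UAV to the target
    dist = np.linalg.norm(diff, axis=1)      # ranges d_i
    u = diff / dist[:, None]                 # unit line-of-sight vectors
    J = u[1:] - u[0]                         # Jacobian of d_i - d_0 w.r.t. target position
    m = len(uav_xy) - 1
    R = sigma**2 * (np.eye(m) + np.ones((m, m)))  # shared reference noise -> correlated errors
    fim = J.T @ np.linalg.solve(R, J)        # Fisher information matrix
    return float(np.trace(np.linalg.inv(fim)))

# Lower trace(CRLB) means a better geometry: this is the quantity a PSO-style
# search over candidate next-step UAV positions would minimize.
uavs = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [800.0, 900.0]])
print(tdoa_crlb_trace(uavs, np.array([500.0, 400.0])))
```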
Figures:

Figure 1: Geometric schematic of the TDOA localization model. The distance between target j and each UAV is denoted r_(j,i), where UAV 0 serves as the reference node (or master UAV) and UAV i (i = 0, 1, …, n − 1) represents the other nodes in the swarm. The concentric circles around the target indicate the signal propagation.
Figure 2: Flow chart of the Chan-TDOA algorithm.
Figure 3: Diagram of UAV motion constraints. The UAV moves from position (x_i^t, y_i^t) to (x_i^(t+1), y_i^(t+1)) along a circular arc, where L_i^t is the path length, θ_i^t is the turning angle, and v denotes the velocity vector of the UAV.
Figure 4: Initial positions of the UAVs and the targets. The blue triangles represent the initial positions of the 9 UAVs (UAV0–UAV8), and the orange stars represent the initial positions of the 3 targets.
Figure 5: UAV grouping results and movement directions after node selection. Different colors represent the UAV groups assigned to their respective targets, and arrows indicate the planned movement directions.
Figure 6: Evolution of the best fitness value (CRLB) during PSO iterations for the first UAV group at the initial time step.
Figure 7: Three selected UAVs localizing target 1 and their moving directions. (a) Movement directions of the selected UAVs. (b) Trajectories of the selected UAVs.
Figure 8: Three selected UAVs localizing target 2 and their moving directions. (a) Movement directions of the selected UAVs. (b) Trajectories of the selected UAVs.
Figure 9: Three selected UAVs localizing target 3 and their moving directions. (a) Movement directions of the selected UAVs. (b) Trajectories of the selected UAVs.
Figure 10: Comparison of CRLB and RMSE under path optimization versus a fixed configuration for target 1.
Figure 11: Comparison of RMSE and CRLB between the proposed path optimization method and the traditional method.
Figure 12: Comparison of RMSE and CRLB between the proposed path optimization method and the traditional method.
Figure 13: CRLB and single-iteration computation time comparisons with different optimization algorithms. (a) CRLB comparison for target 1. (b) CRLB comparison for target 2. (c) CRLB comparison for target 3. (d) Single-iteration computation time comparisons between the optimization methods.
Figure 14: Optimized UAV paths under different minimum turning radii and their RMSE comparisons. (a) L_min = 5000 m. (b) L_min = 7500 m. (c) L_min = 10,000 m. (d) Comparison of RMSE and CRLB.
Figure 15: Optimized UAV paths and comparison of RMSE and CRLB under different no-fly-zone radii. (a) R_NFZ = 1000 m. (b) R_NFZ = 2000 m. (c) R_NFZ = 3000 m. (d) Comparisons of RMSE and CRLB.
30 pages, 30400 KiB  
Article
Classification of Flying Drones Using Millimeter-Wave Radar: Comparative Analysis of Algorithms Under Noisy Conditions
by Mauro Larrat and Claudomiro Sales
Sensors 2025, 25(3), 721; https://doi.org/10.3390/s25030721 - 24 Jan 2025
Viewed by 322
Abstract
This study evaluates different machine learning algorithms for detecting and identifying drones using radar data from a 60 GHz millimeter-wave sensor. The signals were collected from a bionic bird and two drones, the DJI Mavic and DJI Phantom 3 Pro, and were represented in complex form to preserve amplitude and phase information. Four algorithms—long short-term memory (LSTM), gated recurrent unit (GRU), one-dimensional convolutional neural network (Conv1D), and Transformer—were first benchmarked for robustness under noisy conditions, including artificial noise types such as white noise, Pareto noise, impulsive noise, and multipath interference. The Transformer outperformed the other algorithms in accuracy, even on noisy data; however, in certain noise contexts, particularly Pareto noise, it showed weaknesses. To address this, we propose the Multimodal Transformer, which incorporates additional statistical features—skewness and kurtosis—alongside the amplitude and phase data. This resulted in an improvement in detection accuracy, even under difficult noise conditions. Our results demonstrate the importance of noise in processing radar signals and the benefits afforded by a multimodal presentation of the data in detecting unmanned aerial vehicles and birds. This study establishes a benchmark for state-of-the-art machine learning methodologies for radar-based detection systems, providing valuable insight into methods for increasing the robustness of algorithms to environmental noise.
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
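
The multimodal feature set the abstract names (amplitude and phase augmented with skewness and kurtosis) is straightforward to compute from complex radar samples. A small sketch under an assumed window length and illustrative naming, not the authors' pipeline:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def multimodal_features(iq, win=128):
    """Per-window [mean amplitude, mean phase, skewness, kurtosis] feature rows."""
    amp, phase = np.abs(iq), np.angle(iq)    # preserve both magnitude and phase
    rows = []
    for k in range(len(iq) // win):
        a = amp[k * win:(k + 1) * win]
        p = phase[k * win:(k + 1) * win]
        rows.append([a.mean(), p.mean(), skew(a), kurtosis(a)])
    return np.asarray(rows)                  # shape (n_windows, 4)

rng = np.random.default_rng(0)
iq = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)  # stand-in for radar samples
print(multimodal_features(iq).shape)         # (8, 4)
```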
Figures:

Figure 1: Images of the test subjects used in the experiments: (a) the DJI Mavic drone [47], (b) the DJI Phantom 3 Pro drone [48], and (c) the Bionic Bird [49]. These are the devices used to evaluate the classification performance of the Multimodal Transformer model.
Figure 2: Boxplot of the impact of white noise on the amplitude (left) and phase (right) of the radar signal. White noise follows a random distribution, primarily affecting the outliers in both amplitude and phase. The amplitude exhibits a broader spread, with more pronounced outliers in both directions; the phase is affected to a lesser extent, showing a slight median shift and a moderate expansion of the interquartile range.
Figure 3: Boxplot of the impact of Pareto noise on the amplitude (left) and phase (right) of the radar signal. Pareto (heavy-tail) noise introduces extreme values more frequently than white noise, resulting in greater data dispersion. The amplitude plot shows a considerable number of high-value outliers, suggesting frequent positive fluctuations; the phase remains relatively stable, with occasional extreme values.
Figure 4: Boxplot of the effect of impulsive noise on the amplitude (left) and phase (right) of the millimeter-wave radar signal. Impulsive noise generates abrupt, random spikes, causing significant dispersion. The amplitude plot shows more outliers at both extremes and a wider interquartile range; the median amplitude remains relatively stable while the extremes scatter widely. The phase shows a similar pattern, with more visible outliers and a slight median shift.
Figure 5: Boxplot of the effects of multipath interference on the amplitude (left) and phase (right) of the radar signal. Multipath interference occurs when the signal reflects off multiple surfaces before reaching the receiver, causing distortions. The amplitude shows increased variability and a larger number of outliers, indicating inconsistencies in the measured values; the phase is less affected but shows a slight increase in dispersion compared to the original signal.
Figure 6: Boxplot of the classification probability outputs of the four algorithms—LSTM, GRU, Conv1D, and Transformer—under white noise. LSTM, GRU, and Conv1D exhibit tightly clustered probability distributions with low variance and median probabilities around or below 0.4 across all classes (Bird, Mavic drone, and P3P drone), indicating poor classification performance and predictions lacking confidence and distinguishing power. The Transformer shows wider interquartile ranges and higher median probabilities for all classes, indicating greater resilience to white noise and more varied, accurate probability outputs.
Figure 7: Classification probabilities of the four models under white noise for the three target classes (Bird, Mavic drone, and P3P drone). LSTM, GRU, and Conv1D display narrow interquartile ranges with medians near or below 0.4, suggesting a uniform inability to distinguish the classes under these conditions. The Transformer shows a significantly wider interquartile range and higher median probabilities, reflecting superior robustness and more reliable classification under challenging noise conditions.
Figure 8: Classification probabilities of the four models under Pareto noise across the three target classes. LSTM and GRU exhibit higher median probabilities for the Bird class, suggesting better performance in that category. Overall, Pareto noise, with its frequent extreme outliers, degrades every model's ability to assign accurate probabilities consistently.
Figure 9: ROC curves and AUC scores of the four models under Pareto noise. The Transformer achieves higher AUC scores across all target classes and fewer outliers in classification scores, indicating better discriminative ability and robustness. LSTM, GRU, and Conv1D show higher false positive rates, suggesting difficulty generalizing to data with noise-induced variability.
Figure 10: Classification probability distributions of the four models under impulsive noise. The Transformer consistently shows higher median probabilities across all classes, indicating more confident predictions, though its wider interquartile range also reflects greater uncertainty in some predictions. LSTM, GRU, and Conv1D show lower medians and tighter ranges.
Figure 11: ROC curves and AUC values of the four models under impulsive noise. The Transformer outperforms the others, particularly for the Bird and Mavic drone classes. LSTM and GRU perform moderately; Conv1D performs poorly across most classes, especially Bird, reflecting its inability to handle temporal dependencies in the presence of impulsive noise.
Figure 12: Boxplot of the classification accuracy variability of the four algorithms under multipath interference. The Transformer consistently shows higher accuracy and lower variability, indicating superior stability and performance.
Figure 13: ROC curves of the four models for object classification under multipath interference. The Transformer consistently achieves higher AUC values, highlighting the robustness of its attention mechanism to noise and temporal dependencies.
Figure 14: Schematic of the Multimodal Transformer (MMT) used for radar-based target classification (a hedged code sketch follows this figure list). The model begins with an input layer processing radar features, followed by LayerNormalization for stable learning. Multi-head attention (8 heads) captures complex temporal dependencies in radar signals. Dropout layers (0.1 and 0.2) prevent overfitting. A GlobalAveragePooling1D layer reduces dimensionality, followed by two dense layers with L2 regularization and LeakyReLU activation. The final dense layer outputs classification probabilities via softmax: 0 for the Mavic drone, 1 for the Phantom 3 Pro drone, and 2 for the bionic bird. The model is optimized with Adam and sparse categorical cross-entropy loss.
Figure 15: Boxplots of amplitude, phase, skewness, and kurtosis for the Bird, Mavic, and P3P classes, extracted from radar signals with added white noise. The Bird class shows lower, more stable values across all features; the Mavic class has moderate values with noticeable outliers; the P3P class consistently shows the highest medians and broadest ranges, indicating stronger and more variable radar reflections. These feature differences help the Transformer distinguish the classes.
Figure 16: ROC curves for the classification of the Bird, Mavic, and P3P classes. The model achieves a perfect AUC for all classes in both noise cases; for white noise, the curves sit slightly farther from the vertical axis than for Pareto noise, indicating slightly better robustness to the latter noise type.
Figure 17: Boxplots of amplitude, phase, skewness, and kurtosis for the three classes with added Pareto noise. Medians remain close to zero across features, with fewer outliers and lower variability than under white noise. The Bird class shows the most stable distribution, while Mavic and P3P show moderate spreads with fewer extreme values. This reduced variability diminishes class separability, lowering classification performance relative to white noise.
Figure 18: ROC curves for the classification of the Bird, Mavic drone, and P3P drone classes under Pareto noise with the Multimodal Transformer.
Figure 19: Boxplots of amplitude, phase, skewness, and kurtosis under impulsive noise. The Bird class exhibits the most stable and compact distribution; the P3P class shows the highest medians and widest variability, especially in amplitude and kurtosis; the Mavic class is intermediate. Impulsive noise markedly increases outliers, particularly for P3P, indicating that larger or more complex targets produce more erratic radar reflections.
Figure 20: ROC curves for the classification of the Bird, Mavic drone, and P3P drone classes under impulsive noise with the Multimodal Transformer.
Figure 21: Boxplots of amplitude, phase, skewness, and kurtosis under multipath interference. Unlike under impulsive noise, the distributions are more uniform across classes, with medians near zero and consistent interquartile ranges. The P3P class shows a slightly wider spread in amplitude and skewness, suggesting higher susceptibility to multipath effects. Outliers are evenly distributed across classes, indicating random variations in signal reflections that reduce class separability.
Figure 22: ROC curves for the classification of the Bird, Mavic drone, and P3P drone classes under multipath interference with the Multimodal Transformer.
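
As flagged in the Figure 14 caption, the architecture description is detailed enough to sketch in code. The following is a rough Keras rendering consistent with that caption, not the authors' released model; the sequence length, feature dimension, attention key_dim, and dense-layer widths are assumptions the caption does not specify.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_mmt(seq_len=128, n_feats=4, n_classes=3):
    inp = layers.Input(shape=(seq_len, n_feats))             # radar feature sequences
    x = layers.LayerNormalization()(inp)                     # stabilise learning
    x = layers.MultiHeadAttention(num_heads=8, key_dim=16)(x, x)  # temporal dependencies
    x = layers.Dropout(0.1)(x)
    x = layers.GlobalAveragePooling1D()(x)                   # reduce to one vector per sequence
    x = layers.Dense(64, kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.LeakyReLU()(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(32, kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.LeakyReLU()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)   # 0: Mavic, 1: P3P, 2: bionic bird
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```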
19 pages, 8569 KiB  
Article
Two-Dimensional Scattering Center Estimation for Radar Target Recognition Based on Multiple High-Resolution Range Profiles
by Kang-In Lee, Jin-Hyeok Kim and Young-Seek Chung
Sensors 2024, 24(21), 6997; https://doi.org/10.3390/s24216997 - 30 Oct 2024
Viewed by 732
Abstract
A new strategy for estimating the locations of two-dimensional target scattering centers for radar target recognition is developed using multiple high-resolution range profiles (HRRPs). Based on the range information contained in multiple HRRPs obtained from various observation angles, the target scattering centers can be located at the intersection points of the lines passing through the HRRP points. This geometry-based algorithm significantly reduces the computational complexity while retaining the ability to estimate two-dimensional target scattering centers. The computational complexity is formulated and compared to that of conventional methods based on synthetic aperture radar (SAR) images and HRRP sequences. To verify the performance of the proposed algorithm, numerical and experimental results for three different types of aircraft are compared to those from SAR images. Finally, the estimated scattering centers are used as target features for a conventional classifier to confirm target classification performance.
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
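
The geometric core of the method can be illustrated in a few lines: an HRRP peak at range r observed from angle θ constrains a scatterer to the line x·cos θ + y·sin θ = r, and intersecting such lines from different observation angles yields candidate 2-D scattering centers. A hedged sketch with illustrative names follows; the paper's full algorithm also handles peak association across many HRRPs.

```python
import numpy as np

def intersect_hrrp_lines(theta1, r1, theta2, r2):
    """Solve x*cos(t) + y*sin(t) = r for two observation angles (radians)."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    return np.linalg.solve(A, np.array([r1, r2]))   # candidate scattering center (x, y)

# A scatterer at (3, 2) m projected onto two observation axes and recovered:
p = np.array([3.0, 2.0])
t1, t2 = np.deg2rad(10.0), np.deg2rad(25.0)
r1 = p @ np.array([np.cos(t1), np.sin(t1)])         # HRRP range seen from angle t1
r2 = p @ np.array([np.cos(t2), np.sin(t2)])         # HRRP range seen from angle t2
print(intersect_hrrp_lines(t1, r1, t2, r2))         # ~[3. 2.]
```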
Figures:

Figure 1: Concept of HRRP projection onto the observation axis.
Figure 2: Target information shown as peaks on HRRP axes for three observation angles.
Figure 3: HRRPs for three observation angles when two scattering centers exist.
Figure 4: Flowchart of the proposed algorithm for two-dimensional radar target scattering center estimation.
Figure 5: Generation of N − (K − 1)δ sub-datasets by choosing K neighboring HRRPs out of the entire HRRP dataset of size N.
Figure 6: Peak detection for numerical models, with the number of peaks in the HRRPs versus observation angle. A380: (a,b); Eurofighter: (c,d); F-15: (e,f).
Figure 7: Two-dimensional scattering center estimation and corresponding SAR images of the A380 (a,b), Eurofighter (c,d), and F-15 (e,f).
Figure 8: Block diagram of the entire measurement system.
Figure 9: Experimental environment with d = 4.2 m and h = 1 m.
Figure 10: Estimated scattering centers of the A380 model: (a) drawn on a SAR image, and (b) the corresponding dominant scattering centers.
Figure 11: Estimated scattering centers of the Eurofighter model: (a) drawn on a SAR image, and (b) the corresponding dominant scattering centers.
Figure 12: Estimated scattering centers of the F-15 model: (a) drawn on a SAR image, and (b) the corresponding dominant scattering centers.
Figure 13: Generated images for the A380: SAR (upper row) and the proposed method (lower row).
Figure 14: Generated images for the Eurofighter: SAR (upper row) and the proposed method (lower row).
Figure 15: Generated images for the F-15: SAR (upper row) and the proposed method (lower row).
Figure 16: Classification results using SAR images.
Figure 17: Classification results using the proposed method.
17 pages, 3646 KiB  
Article
Motion Clutter Suppression for Non-Cooperative Target Identification Based on Frequency Correlation Dual-SVD Reconstruction
by Weikun He, Yichuan Luo and Xiaoxiao Shang
Sensors 2024, 24(16), 5298; https://doi.org/10.3390/s24165298 - 15 Aug 2024
Viewed by 794
Abstract
Non-cooperative targets, such as birds and unmanned aerial vehicles (UAVs), are typical low-altitude, slow, and small (LSS) targets with low observability. Radar observations in such scenarios are often complicated by strong motion clutter originating from sources like airplanes and cars. Hence, distinguishing between birds and UAVs in environments with strong motion clutter is crucial for improving target monitoring performance and ensuring flight safety. To address the impact of strong motion clutter on discriminating between UAVs and birds, we propose a frequency correlation dual-SVD (singular value decomposition) reconstruction method. This method exploits the strong power and spectral correlation characteristics of motion clutter, contrasted with the weak scattering characteristics of bird and UAV targets, to effectively suppress clutter. Unlike traditional SVD-based clutter suppression methods, our method avoids residual clutter or target loss while preserving the micro-motion characteristics of the targets. Based on the distinct micro-motion characteristics of birds and UAVs, we extract two key features: the sum of the normalized large eigenvalues of the target's micro-motion component and the energy entropy of the time–frequency spectrum of the radar echoes. Subsequently, the kernel fuzzy c-means algorithm is applied to classify bird and UAV targets. The effectiveness of the proposed method is validated using both simulated and experimental data.
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
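
For intuition, the basic SVD-reconstruction idea that such methods build on can be sketched in a few lines: strong motion clutter dominates the largest singular values of the echo matrix, so zeroing them and reconstructing suppresses the clutter. This is only an illustration of the principle, not the paper's frequency-correlation dual-SVD method; the matrix layout and the n_clutter choice are assumptions.

```python
import numpy as np

def svd_clutter_suppress(X, n_clutter=1):
    """Remove the n_clutter strongest singular components from an echo matrix X
    (rows: pulses, columns: range/frequency bins)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s[:n_clutter] = 0.0                    # discard dominant (clutter) components
    return (U * s) @ Vh                    # reconstruct target + noise subspace

rng = np.random.default_rng(1)
clutter = np.outer(rng.normal(size=64), rng.normal(size=256)) * 50   # strong rank-1 clutter
target = rng.normal(size=(64, 256)) * 0.5                            # weak target returns
X = clutter + target
# Residual error relative to the clutter-free target is small after suppression:
print(np.linalg.norm(svd_clutter_suppress(X) - target) / np.linalg.norm(target))
```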
Figures:

Figure 1: Range-Doppler spectrum with strong motion clutter interference.
Figure 2: Block diagram of the method based on frequency correlation dual-SVD reconstruction.
Figure 3: Block diagram of the discrimination of bird and UAV targets against a background of strong motion clutter.
Figure 4: Clutter suppression results for a UAV target. (a) Spectrum of the received signal without clutter. (b) Spectrum of the received signal with clutter. (c) Spectrum after clutter suppression (FODS-SVD method). (d) Spectrum after clutter suppression (FEMP-SVD method). (e) Spectrum after clutter suppression (the proposed method).
Figure 5: Clutter suppression results for a bird target. (a) Spectrum of the received signal without clutter. (b) Spectrum of the received signal with clutter. (c) Spectrum after clutter suppression (FODS-SVD method). (d) Spectrum after clutter suppression (FEMP-SVD method). (e) Spectrum after clutter suppression (the proposed method).
Figure 6: Relative power variation comparison after clutter suppression.
Figure 7: Performance comparison of the three methods.
Figure 8: (a) The actual radar environment. (b) Radar beam pattern.
Figure 9: Clutter suppression results for the bird target. (a) Spectrum of the received signal. (b) Spectrum after clutter suppression (FODS-SVD method). (c) Spectrum after clutter suppression (FEMP-SVD method). (d) Spectrum after clutter suppression (the proposed method).
Figure 10: Clutter suppression results for the UAV target. (a) Spectrum of the received signal. (b) Spectrum after clutter suppression (FODS-SVD method). (c) Spectrum after clutter suppression (FEMP-SVD method). (d) Spectrum after clutter suppression (the proposed method).
Figure 11: Identification results for birds and UAVs. (a) Characteristic spectral energy entropy. (b) Sum of normalized large eigenvalues. (c) Results obtained by kernel fuzzy c-means clustering.