Classification of Flying Drones Using Millimeter-Wave Radar: Comparative Analysis of Algorithms Under Noisy Conditions
<p>Figure 1. Images of the test subjects used in the experiments: (<b>a</b>) the DJI Mavic drone [<a href="#B47-sensors-25-00721" class="html-bibr">47</a>], (<b>b</b>) the DJI Phantom 3 Pro drone [<a href="#B48-sensors-25-00721" class="html-bibr">48</a>], and (<b>c</b>) the Bionic Bird [<a href="#B49-sensors-25-00721" class="html-bibr">49</a>]. These devices were used to evaluate the classification performance of the Multimodal Transformer model.</p>
<p>Figure 2. This boxplot shows the impact of white noise on the amplitude (<b>left</b>) and phase (<b>right</b>) of the radar signal. White noise follows a random distribution, primarily affecting the outliers in both amplitude and phase. The amplitude exhibits a broader spread, with more pronounced outliers in both directions. The phase is also impacted, though to a lesser extent, showing a slight median shift and a moderate interquartile range expansion.</p>
<p>Figure 3. This boxplot illustrates the impact of Pareto noise on the amplitude (<b>left</b>) and phase (<b>right</b>) of the radar signal. Pareto noise, also known as heavy-tailed noise, introduces extreme values more frequently than white noise, resulting in greater data dispersion. The amplitude plot shows a considerable number of high-value outliers, suggesting that the noise causes more frequent positive fluctuations. The phase remains relatively stable, with occasional extreme values.</p>
<p>Figure 4. This boxplot illustrates the effect of impulsive noise on the amplitude (<b>left</b>) and phase (<b>right</b>) of the millimeter-wave radar signal. Impulsive noise generates abrupt, random spikes, causing significant data dispersion. The amplitude plot shows a noticeable increase in outliers at both extremes, with a wider interquartile range. While the median amplitude remains relatively stable, the extremes of the data are widely scattered. The phase plot shows a similar pattern, with more visible outliers and a slight median shift.</p>
<p>Figure 5. This boxplot shows the effects of multipath interference noise on the amplitude (<b>left</b>) and phase (<b>right</b>) of the radar signal. Multipath interference occurs when the signal reflects off multiple surfaces before reaching the receiver, causing distortions. The amplitude plot reveals increased variability and a larger number of outliers, indicating inconsistencies in the measured values. The phase is less affected but still shows a slight increase in dispersion compared to the original signal.</p>
<p>Figure 6. This boxplot illustrates the classification probability outputs of four algorithms—LSTM, GRU, Conv1D, and Transformer—under white noise conditions. The boxplot reveals that LSTM, GRU, and Conv1D exhibited tightly clustered probability distributions with low variance, and their median probabilities remained around or below 0.4 across all classes (Bird, Mavic drone, and P3P drone). This low variability and the clustered medians suggest poor classification performance, with predictions lacking high confidence and distinguishing power. In contrast, the Transformer algorithm demonstrated a markedly different behavior, with wider interquartile ranges and higher median probabilities for all classes. The wider spread indicates that Transformer is more resilient to white noise, producing more varied and accurate probability outputs, thus highlighting its superior robustness in handling noisy data compared to the other models.</p>
<p>Figure 7. This boxplot presents the classification probabilities of four machine learning models—LSTM, GRU, Conv1D, and Transformer—under white noise conditions for the three target classes: Bird, Mavic drone, and P3P drone. The LSTM, GRU, and Conv1D models display tightly grouped probability distributions with narrow interquartile ranges and median values clustered near or below 0.4 across all classes. This indicates that these models struggle to produce confident predictions in noisy environments, as their output probabilities remain low and exhibit limited variability, suggesting a uniform inability to distinguish between the classes under these conditions. In contrast, the Transformer model shows a significantly wider interquartile range and higher median probability values for all classes. This broader distribution highlights Transformer’s superior robustness to white noise, enabling it to generate more confident and diverse predictions across the dataset, outperforming the other models in terms of classification reliability under challenging noise conditions.</p>
<p>Figure 8. This boxplot illustrates the classification probabilities for different models—LSTM, GRU, Conv1D, and Transformer—under Pareto noise conditions across three target classes: Bird, Mavic drone, and P3P drone. The LSTM and GRU models exhibited higher median probabilities for the “bird” class, suggesting better performance in this specific category compared to other classes. However, the overall performance across all models was negatively impacted by Pareto noise, which introduces frequent extreme values (outliers) and disrupts the models’ ability to confidently assign accurate probabilities, degrading classification consistency.</p>
<p>Figure 9. This figure presents the ROC curves and AUC scores for the classification performance of LSTM, GRU, Conv1D, and Transformer models under Pareto noise conditions. The Transformer model demonstrated superior performance, with higher AUC scores across all target classes, indicating better discriminative ability compared to the other models. Additionally, Transformer exhibited fewer outliers in classification scores, highlighting its robustness to Pareto noise. In contrast, the LSTM, GRU, and Conv1D models showed higher false positive rates, suggesting difficulty in generalizing to data with high variability caused by noise. These results emphasize the need for further model optimization to handle noise-induced challenges effectively.</p>
<p>Figure 10. This boxplot displays the distribution of classification probabilities for the LSTM, GRU, Conv1D, and Transformer models under impulsive noise conditions across the Bird, Mavic drone, and P3P drone classes. The Transformer model consistently showed a higher median probability across all classes, indicating more confident predictions. However, its wider interquartile range suggests that it also exhibits greater uncertainty in some predictions. In contrast, the other models—LSTM, GRU, and Conv1D—showed lower median probabilities and tighter ranges, indicating less confidence and lower variability in their predictions under impulsive noise conditions.</p>
<p>Figure 11. The ROC curves and AUC values demonstrate the classification performance of LSTM, GRU, Conv1D, and Transformer models under impulsive noise conditions. The Transformer model outperforms the other models, particularly for the Bird and Mavic drone classes, with higher AUC values, indicating better discrimination capabilities. The LSTM and GRU models show moderate performance but are slightly less effective than Transformer. The Conv1D model performs poorly across most classes, especially for the Bird class, reflecting its inability to effectively handle temporal dependencies in the presence of impulsive noise.</p>
<p>Figure 12. Boxplot illustrating the classification accuracy variability of the four machine learning algorithms under multipath interference. The Transformer model consistently demonstrated higher accuracy and lower variability, indicating superior stability and performance compared to the LSTM, GRU, and Conv1D models.</p>
<p>Figure 13. ROC curves showing the performance of the Transformer, LSTM, GRU, and Conv1D models in object classification with multipath interference. The Transformer model consistently outperformed the others, with higher AUC values, highlighting its robustness and superior attention mechanism for handling noise and temporal dependencies.</p>
<p>Figure 14. Schematic of the Multimodal Transformer Model (MMT) used for radar-based target classification. The model begins with an input layer processing radar features, followed by LayerNormalization for stable learning. Multi-head attention (8 heads) captures complex temporal dependencies in radar signals. Dropout layers (0.1 and 0.2) prevent overfitting. A GlobalAveragePooling1D layer reduces dimensionality, followed by two dense layers with L2 regularization and LeakyReLU activation. The final dense layer outputs classification probabilities using softmax, where the model’s output is 0 for the Mavic drone, 1 for the Phantom 3 Pro drone, or 2 for the Bionic Bird. The model was optimized with Adam and sparse categorical cross-entropy loss for multi-class classification.</p>
<p>Figure 15. Boxplots showing the amplitude, phase, skewness, and kurtosis values for the Bird, Mavic, and P3P classes, as extracted from radar signals with added white noise. The Bird class showed lower and more stable values across all features. The Mavic class had moderate values with noticeable outliers. The P3P class consistently showed the highest medians and broader ranges, indicating stronger and more variable radar reflections. The differences in these features help to distinguish the classes in the Transformer model’s classification process.</p>
<p>Figure 16. ROC curves for the classification of the Bird, Mavic, and P3P classes. The model achieved a perfect AUC for all the classes in both noise cases. For white noise, the curves were slightly farther from the vertical axis compared to the Pareto noise, indicating a slightly better robustness to the latter noise type.</p>
<p>Figure 17. Boxplots showing the amplitude, phase, skewness, and kurtosis values for the Bird, Mavic, and P3P classes, as extracted from radar signals with added Pareto noise. The median values remained close to zero across the features, with fewer outliers and lower variability compared to white noise. The Bird class showed the most stable distribution, while the Mavic and P3P classes exhibited moderate spreads with fewer extreme values. This reduced variability under Pareto noise led to diminished class separability, resulting in lower classification performance compared to white noise.</p>
<p>Figure 18. The ROC curves for the classification of Bird, Mavic drone, and P3P drone when considering Pareto noise with a Multimodal Transformer.</p>
<p>Figure 19. Boxplots showing the amplitude, phase, skewness, and kurtosis values for the Bird, Mavic, and P3P classes under impulsive noise. The Bird class exhibited the most stable and compact distribution across all features, while the P3P class showed the highest median values and widest variability, especially in amplitude and kurtosis. The Mavic class displayed intermediate behavior. Impulsive noise significantly increased outliers, particularly in the P3P class, indicating that larger or more complex targets produce more erratic radar reflections.</p>
<p>Figure 20. ROC curves for the classification of the Bird, Mavic drone, and P3P drone classes when considering impulsive noise with the Multimodal Transformer.</p>
<p>Figure 21. Boxplots showing the amplitude, phase, skewness, and kurtosis values for the Bird, Mavic, and P3P classes under multipath interference. Unlike impulsive noise, the distributions were more uniform across classes, with median values close to zero and consistent interquartile ranges. The P3P class showed a slightly wider spread in amplitude and skewness, suggesting higher susceptibility to multipath effects. Outliers were evenly distributed across classes, indicating random variations in signal reflections that reduce separability between classes.</p>
<p>Figure 22. ROC curves for the classification of Bird, Mavic drone, and P3P drone classes when considering multipath interference with the Multimodal Transformer.</p>
Abstract
1. Introduction
2. Overview of the Related Research
3. Foundational Theories and Methodological Approaches
3.1. The Detected Signal Representation
- L is a constant related to the radar system;
- λ is the wavelength of the transmitted radar signal;
- R is the distance between the radar and the center of the UAV;
- P is the number of rotors on the UAV;
- K is the number of blades per rotor;
- Φ<sub>p,k</sub>(t) is the Doppler phase associated with blade k of rotor p;
- the remaining factor models the spatial response of the Doppler shift of each blade.
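The variables above can be tied together in a short simulation. The sketch below assumes a standard rotating-blade micro-Doppler model with a sinusoidal blade-tip phase; the wavelength (a 60 GHz mmWave value), blade length, and rotation rate are illustrative placeholders, not the parameters used in this paper.

```python
import numpy as np

def rotor_micro_doppler(t, wavelength=5e-3, R=10.0, P=4, K=2,
                        blade_len=0.12, f_rot=60.0, L=1.0):
    """Generic rotor-blade micro-Doppler return (illustrative model).

    Sums the contributions of K blades on each of P rotors. The Doppler
    phase of each blade follows a sinusoidal tip trajectory; all numeric
    defaults are placeholder assumptions, not the paper's settings.
    """
    s = np.zeros_like(t, dtype=complex)
    carrier = np.exp(-1j * 4 * np.pi * R / wavelength)  # bulk range phase
    for p in range(P):
        for k in range(K):
            # blades evenly spaced on each rotor; rotors offset in angle
            theta0 = 2 * np.pi * k / K + np.pi * p / P
            # sinusoidal micro-Doppler phase of the blade tip
            phi = (4 * np.pi / wavelength) * (blade_len / 2) * \
                  np.cos(2 * np.pi * f_rot * t + theta0)
            s += np.exp(1j * phi)
    return L * carrier * s

t = np.linspace(0, 0.05, 4096)         # 50 ms observation window
sig = rotor_micro_doppler(t)
print(sig.shape, np.iscomplexobj(sig))  # (4096,) True
```

A spectrogram of `sig` would show the periodic blade-flash pattern that the micro-Doppler classifiers in Section 3.3 rely on.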
3.2. Noise Models in the Original Signal Representation
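The four noise models examined in this work (white, Pareto, impulsive, and multipath) can be sketched as follows. The distribution parameters, spike probability, and multipath delays and gains below are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_white_noise(sig, sigma=0.1):
    # complex AWGN on the I and Q channels
    n = rng.normal(0, sigma, sig.shape) + 1j * rng.normal(0, sigma, sig.shape)
    return sig + n

def add_pareto_noise(sig, shape=2.5, scale=0.05):
    # heavy-tailed amplitude perturbation with random phase
    n = scale * rng.pareto(shape, sig.shape)
    phase = rng.uniform(0, 2 * np.pi, sig.shape)
    return sig + n * np.exp(1j * phase)

def add_impulsive_noise(sig, p=0.01, amp=5.0):
    # sparse random spikes: Bernoulli gating of large impulses
    mask = rng.random(sig.shape) < p
    spikes = amp * (rng.standard_normal(sig.shape)
                    + 1j * rng.standard_normal(sig.shape))
    return sig + mask * spikes

def add_multipath(sig, delays=(3, 7), gains=(0.5, 0.3)):
    # delayed, attenuated copies of the signal (specular reflections)
    out = sig.astype(complex).copy()
    for d, g in zip(delays, gains):
        out[d:] += g * sig[:-d]
    return out

base = np.ones(256, dtype=complex)
noisy = [f(base) for f in (add_white_noise, add_pareto_noise,
                           add_impulsive_noise, add_multipath)]
```

Each function returns a signal of the same length, so the same feature-extraction pipeline can be applied before and after corruption.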
3.3. Machine Learning Algorithms for Drone Detection
3.4. Overview of the Dataset Employed in This Research
4. Results
4.1. Introduction to the Results
4.2. Noisy Signal Visualization
4.3. Comparative Analysis of the Algorithms
4.4. Proposed Method for Enhanced Drone Detection Efficiency
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
DJI | Dà-Jiāng Innovations Science and Technology Co., Ltd. |
Conv1D | One-dimensional Convolutional Neural Network |
LSTM | Long Short-Term Memory |
GRU | Gated Recurrent Unit |
CNN | Convolutional Neural Network |
UAV | Unmanned Aerial Vehicle |
C-UAV | Counter-Unmanned Aerial Vehicle |
mmWave | Millimeter Wave |
CVMD | Complex-Valued Micro-Doppler |
SVD | Singular Value Decomposition |
ECSB | Echo Cancellation for Signal-Based Systems |
FMCW | Frequency-Modulated Continuous Wave |
RCS | Radar Cross Section |
References
- Clark, B. United States. In Above and Beyond: State of the Art of Uncrewed Combat Aerial Systems and Future Perspectives; Calcagno, E., Marrone, A., Eds.; Istituto Affari Internazionali (IAI): Rome, Italy, 2023; pp. 31–37. Available online: http://www.jstor.org/stable/resrep55314.6 (accessed on 1 October 2024).
- Pettyjohn, S. Types of drones. In Evolution Not Revolution: Drone Warfare in Russia’s 2022 Invasion of Ukraine; Center for a New American Security: Washington, DC, USA, 2024; pp. 16–37. Available online: http://www.jstor.org/stable/resrep57900.7 (accessed on 1 October 2024).
- Mohammed, A.B.; Fourati, L.C.; Fakhrudeen, A.M. Comprehensive systematic review of intelligent approaches in UAV-based intrusion detection, blockchain, and network security. Comput. Netw. 2023, 239, 110140. [Google Scholar] [CrossRef]
- Seidaliyeva, U.; Omarbekova, A.; Abdrayev, S.; Mukhamezhanov, Y.; Kassenov, K.; Oyewole, O.; Koziel, S. Advances and Challenges in Drone Detection and Classification Techniques: A state-of-the-art review. Sensors 2023, 24, 125. [Google Scholar] [CrossRef] [PubMed]
- Solaiman, S.; Alsuwat, E.; Alharthi, R. Simultaneous Tracking and Recognizing Drone Targets with Millimeter-Wave Radar and Convolutional Neural Network. Appl. Syst. Innov. 2023, 6, 68. [Google Scholar] [CrossRef]
- Dumitrescu, C.; Pleşu, G.; Băbuţ, M.; Dobrinoiu, C.; Dospinescu, A.; Botezatu, N. Development of an Acoustic System for UAV Detection. Sensors 2020, 20, 4870. [Google Scholar] [CrossRef]
- Song, C.; Li, H. An Acoustic Array Sensor Signal Recognition Algorithm for Low-Altitude Targets Using Multiple Five-Element Acoustic Positioning Systems with VMD. Appl. Sci. 2024, 14, 1075. [Google Scholar] [CrossRef]
- Li, D.; Li, Z.; He, S.; Du, W.; Wei, Z. Multi-source threatening event recognition scheme targeting drone intrusion in the fiber optic DAS system. IEEE Sens. J. 2024, 24, 32185–32195. [Google Scholar] [CrossRef]
- Chen, T.; Yu, J.; Yang, Z. Research on a Sound Source Localization Method for UAV Detection Based on Improved Empirical Mode Decomposition. Sensors 2024, 24, 2701. [Google Scholar] [CrossRef] [PubMed]
- Tejera-Berengue, D.; Zhu-Zhou, F.; Utrilla-Manso, M.; Gil-Pita, R.; Rosa-Zurera, M. Acoustic-Based Detection of UAVs Using Machine Learning: Analysis of Distance and Environmental Effects. In Proceedings of the 2023 IEEE Sensors Applications Symposium (SAS), Ottawa, ON, Canada, 18 July 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Utebayeva, D.; Ilipbayeva, L.; Matson, E.T. Practical study of recurrent neural networks for efficient real-time drone sound detection: A review. Drones 2023, 7, 26. [Google Scholar] [CrossRef]
- He, J.; Fakhreddine, A.; Alexandropoulos, G.C. RIS-Augmented Millimeter-Wave MIMO Systems for Passive Drone Detection. arXiv 2024, arXiv:2402.07259. [Google Scholar]
- Zhai, X.; Zhou, Y.; Zhang, L.; Wu, Q.; Gao, W. YOLO-Drone: An Optimized YOLOv8 Network for Tiny UAV Object Detection. Electronics 2023, 12, 3664. [Google Scholar] [CrossRef]
- Sen, C.; Singh, P.; Gupta, K.; Jain, A.K.; Jain, A.; Jain, A. UAV Based YOLOV-8 optimization technique to detect the small size and high speed drone in different light conditions. In Proceedings of the 2024 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024; pp. 1057–1061. [Google Scholar]
- Cheng, Q.; Li, X.; Zhu, B.; Shi, Y.; Xie, B. Drone Detection Method Based on MobileViT and CA-PANet. Electronics 2023, 12, 223. [Google Scholar] [CrossRef]
- Muhamad Zamri, F.N.; Gunawan, T.S.; Yusoff, S.H.; Alzahrani, A.A.; Bramantoro, A.; Kartiwi, M. Enhanced Small Drone Detection Using Optimized YOLOv8 With Attention Mechanisms. IEEE Access 2024, 12, 90629–90643. [Google Scholar] [CrossRef]
- Zeng, Y.; Wang, G.; Hong, T.; Wu, H.; Yao, R.; Song, C.; Zou, X. UAVData: A Dataset for Unmanned Aerial Vehicle Detection. Soft Comput. 2021, 25, 5385–5393. [Google Scholar] [CrossRef]
- El-Latif, E.I.A. Detection and Identification of Drones Using Long Short-Term Memory and Bayesian Optimization. Multimed. Tools Appl. 2024, 83, 4465–4517. [Google Scholar]
- Xiao, J.; Pisutsin, P.; Tsao, C.W.; Feroskhan, M. Clustering-based Learning for UAV Tracking and Pose Estimation. arXiv 2024, arXiv:2405.16867. [Google Scholar]
- Johnstone, C.B.; Cook, C.R.; Aldisert, A.; Klaas, L.; Michienzi, C.; Rubinstein, G.; Sanders, G.; Szechenyi, N. Case study two: Electro-optical sensors. In Building a Mutually Complementary Supply Chain Between Japan and the United States: Pathways to Deepening Japan-U.S. Defense Equipment and Technology Cooperation; Center for Strategic and International Studies (CSIS): Washington, DC, USA, 2024; pp. 16–20. Available online: http://www.jstor.org/stable/resrep62403.6 (accessed on 1 October 2024).
- Yang, D.; Xie, X.; Zhang, Y.; Li, D.; Liu, S. A Multi-Rotor Drone Micro-Motion Parameter Estimation Method Based on CVMD and SVD. Remote Sens. 2022, 14, 3326. [Google Scholar] [CrossRef]
- Hanif, A.; Ahmad, F.; Zhang, G.; Wang, X. Micro-Doppler Based Target Recognition With Radars: A review. IEEE Sens. J. 2022, 22, 2948–2961. [Google Scholar]
- Soumya, A.; Mohan, C.K.; Cenkeramaddi, L.R. Recent advances in mm Wave-radar-based sensing, its applications, and machine learning techniques: A review. Sensors 2023, 23, 8901. [Google Scholar] [CrossRef] [PubMed]
- Kumawat, H.C.; Chakraborty, M.; Raj, A.A.B. DIAT-RadSATNet—A novel lightweight DCNN architecture for micro-Doppler-based small unmanned aerial vehicle (SUAV) targets’ detection and classification. IEEE Trans. Instrum. Meas. 2022, 71, 1–11. [Google Scholar]
- Kumawat, H.C.; Chakraborty, M.; Arockia Bazil Raj, A.; Dhavale, S.V. DIAT-µSAT: Micro-Doppler Signature Dataset of Small Unmanned Aerial Vehicle (SUAV). IEEE Dataport 2022, 1–5. [Google Scholar] [CrossRef]
- Texas Instruments. The Fundamentals of Millimeter Wave Radar Sensors. Available online: https://www.ti.com/lit/spyy005 (accessed on 10 September 2024).
- Lu, G.; Bu, Y. Mini-UAV Movement Classification Based on Sparse Decomposition of Micro-Doppler Signature. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3506405. [Google Scholar]
- Kang, S.; Forsten, H.; Semkin, V.; Rangan, S. Millimeter Wave 60 GHz Radar Measurements: UAS and Birds. IEEE Dataport, March 2024. Available online: https://ieee-dataport.org/open-access/millimeter-wave-60-ghz-radar-measurements-uas-and-birds (accessed on 1 October 2024).
- Narayanan, R.M.; Tsang, B.; Bharadwaj, R. Classification and Discrimination of Birds and Small Drones Using Radar Micro-Doppler Spectrogram Images. Signals 2023, 4, 337–358. [Google Scholar] [CrossRef]
- Yan, J.; Hu, H.; Gong, J.; Kong, D.; Li, D. Exploring Radar Micro-Doppler Signatures for Recognition of Drone Types. Drones 2023, 7, 280. [Google Scholar] [CrossRef]
- Fan, S.; Wu, Z.; Xu, W.; Zhu, J.; Tu, G. Micro-Doppler Signature Detection and Recognition of UAVs Based on OMP Algorithm. Sensors 2023, 23, 7922. [Google Scholar] [CrossRef] [PubMed]
- Ye, J.; Gu, F.; Wu, H.; Han, X.; Yuan, X. A new Frequency Hopping Signal Detection of Civil UAV Based on Improved k-means Clustering Algorithm. IEEE Access 2021, 9, 53190–53204. [Google Scholar]
- Xu, C.; Tang, S.; Zhou, H.; Zhao, J.; Wang, X. Adaptive RF fingerprint decomposition in micro UAV detection based on machine learning. In Proceedings of the ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021. [Google Scholar]
- Inani, K.N.; Sangwan, K.S.; Dhiraj. Machine learning based framework for drone detection and identification using RF signals. In Proceedings of the 2023 4th International Conference on Innovative Trends in Information Technology (ICITIIT), Kottayam, India, 16–17 February 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Mandal, S.; Satija, U. Time-Frequency Multiscale Convolutional Neural Network for RF-Based Drone Detection and Identification. IEEE Sens. Lett. 2023, 7, 7003304. [Google Scholar] [CrossRef]
- Garvanov, I.; Kanev, D.; Garvanova, M.; Ivanov, V. Drone Detection Approach Based on Radio Frequency Detector. In Proceedings of the 2023 International Conference Automatics and Informatics (ICAI), Varna, Bulgaria, 5–7 October 2023; pp. 230–234. [Google Scholar] [CrossRef]
- Wang, J.; Gao, Y.; Li, D.; Yu, W. Multi-Domain Features Guided Supervised Contrastive Learning for Radar Target Detection. arXiv 2024, arXiv:2412.12620. [Google Scholar]
- Shi, Y.; Du, L.; Li, C.; Guo, Y.; Du, Y. Unsupervised Domain Adaptation for SAR Target Classification Based on Domain-and Class-Level Alignment: From Simulated to Real Data. ISPRS J. Photogramm. Remote Sens. 2024, 207, 1–13. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0924271623003155 (accessed on 1 October 2024).
- Chen, Y.; Jia, Y. Target detection method based on unsupervised domain adaptation for through-the-wall radar imaging. In Proceedings of the 2023 6th International Conference on Electronics Technology (ICET), Chengdu, China, 12–15 May 2023; pp. 156–160. Available online: https://ieeexplore.ieee.org/document/10137672 (accessed on 1 October 2024).
- Guo, Y.; Du, L.; Lyu, G. SAR Target Detection Based on Domain Adaptive Faster R-CNN with Small Training Data Size. Remote Sens. 2021, 13, 4202. [Google Scholar] [CrossRef]
- Ai, J.; Mao, Y.; Wang, Y.; Xing, M. SAR Target Classification Using the Multikernel-Size Feature Fusion-Based Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. Available online: https://ieeexplore.ieee.org/document/9528903 (accessed on 1 October 2024).
- Liu, Y.; Ding, Z.; Cao, Y.; Chang, M. Multi-scale Feature Fusion UAV Image Object Detection Method Based on Dilated Convolution and Attention Mechanism. In Proceedings of the 2020 8th International Conference on Information Technology: IoT and Smart City (ICIT 2020), Xi’an, China, 25–27 December 2020; pp. 1–12. [Google Scholar] [CrossRef]
- Tan, S.; Duan, Z.; Pu, L. Multi-scale object detection in UAV images based on adaptive feature fusion. PLoS ONE 2024, 19, e0300120. [Google Scholar] [CrossRef]
- Othman, N.A.; Aydin, I. Development of a Novel Lightweight CNN Model for Classification of Human Actions in UAV-Captured Videos. Drones 2023, 7, 148. [Google Scholar] [CrossRef]
- Dewangan, O.; Vij, P. CNN-LSTM framework to automatically detect anomalies in farmland using aerial images from UAVs. BIO Web Conf. 2024, 82, 5015. [Google Scholar] [CrossRef]
- Utebayeva, D.; Almagambetov, A.; Alduraibi, M.; Temirgaliyev, Y.; Ilipbayeva, L.; Marxuly, S. Multi-label UAV sound classification using Stacked Bidirectional LSTM. In Proceedings of the 2020 International Conference on Robotics and Control (IRC), Taichung, Taiwan, 9–11 November 2020; pp. 453–458. [Google Scholar] [CrossRef]
- Rodriguez, Y. Drone Preto e Branco Voando Sob o céU Azul Durante o dia, Unsplash, 28 January 2021. Available online: https://unsplash.com/pt-br/fotografias/drone-preto-e-branco-voando-sob-o-ceu-azul-durante-o-dia-mVI7sD0nTlA (accessed on 1 October 2024).
- Spratt, A. Quadricóptero Branco Voando Durante o Dia, Unsplash, 2 March 2017. Available online: https://unsplash.com/pt-br/fotografias/quadricoptero-branco-voando-durante-o-dia-I9TaTsU_VnE (accessed on 1 October 2024).
- Amazon. Bionic Bird—Smartphone Controlled and Remotely Operated. Amazon. Available online: https://www.amazon.in/Bionic-Bird-Smartphones-Remotely-Controlled/dp/B00TFCS4KC (accessed on 1 October 2024).
- Xu, P.; Zhu, X.; Clifton, D.A. Multimodal learning with transformers: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 12113–12132. [Google Scholar] [CrossRef] [PubMed]
Model Type | Architecture Details | Input Features and Objective |
---|---|---|
LSTM | Two LSTM layers | Input: Amplitude and phase. Objective: Capture temporal dependencies in sequential data |
GRU | Two GRU layers | Input: Amplitude and phase. Objective: Reduce computational cost compared to LSTM while retaining performance |
Conv1D | Two Conv1D layers; output layer: Dense with softmax activation | Input: Amplitude and phase. Objective: Extract local patterns and features from sequential data |
Transformer | Layer normalization; Multi-Head Attention (8 heads, key dimension 128); Batch Normalization; Dropout (0.2); Global Average Pooling; Dense layer (64 units); output layer: Dense with softmax activation | Input: Amplitude and phase. Objective: Leverage self-attention mechanisms for better feature extraction and pattern recognition in sequences |
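The Transformer row of the table can be sketched in Keras as follows. This is a minimal reconstruction from the listed layers only; the sequence length, feature count, and the activation of the hidden Dense layer are assumptions, since the table does not specify them.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transformer(seq_len=128, n_features=2, n_classes=3):
    """Transformer baseline per the architecture table (sketch).

    seq_len and n_features (amplitude + phase) are placeholder choices.
    """
    inp = layers.Input(shape=(seq_len, n_features))
    x = layers.LayerNormalization()(inp)
    # self-attention: the layer attends over the sequence with itself
    x = layers.MultiHeadAttention(num_heads=8, key_dim=128)(x, x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.2)(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)  # activation assumed
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

model = build_transformer()
probs = model.predict(np.random.rand(4, 128, 2), verbose=0)
```

`probs` has shape `(4, 3)`, one softmax distribution over the Bird/Mavic/P3P classes per input sequence.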
Model | Time Complexity | Space Complexity |
---|---|---|
LSTM | Sequential processing at each time step; high computational cost for long sequences | Memory for storing hidden states |
GRU | Sequential processing at each time step; high computational cost for long sequences | Memory for storing hidden states |
Conv1D | Parallel processing across the input sequence; lower computational cost | Memory for storing feature maps |
Transformer | Quadratic scaling due to the self-attention mechanism (pairwise interaction computed between all tokens) | Memory for storing attention weights and hidden states |
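The Transformer's quadratic entry in the table comes from the pairwise score matrix built by self-attention; a single-head NumPy sketch makes the n × n cost explicit (query/key/value projection matrices are omitted for brevity).

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product attention over an (n, d) sequence.

    Builds an (n, n) score matrix, which is the source of the
    Transformer's quadratic time and memory scaling.
    """
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                  # O(n^2 d) time, O(n^2) memory
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X                             # O(n^2 d)

X = np.random.default_rng(0).standard_normal((64, 16))
out = self_attention(X)  # same shape as X: (64, 16)
```

Doubling the sequence length n quadruples the size of `scores`, whereas the recurrent models in the table grow only linearly in n.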
Feature | Description | Computation Method |
---|---|---|
Amplitude | Represents the magnitude of the radar signal, indicating the strength of the reflected wave | Absolute value of the complex sample: sqrt(I² + Q²) |
Phase | Captures the angular component of the radar signal, which provides information on the relative position of the target | Angle of the complex sample: atan2(Q, I) |
Skewness | Measures the asymmetry of the amplitude distribution, helping to detect anomalies in the signal | Third standardized moment |
Kurtosis | Quantifies how heavy the tails of the amplitude distribution are | Fourth standardized moment |
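The table's features can be computed with NumPy/SciPy as below. Aggregating per frame (a mean over samples for amplitude and phase, with skewness and kurtosis taken over the amplitude samples) is an assumption about the pipeline; note that SciPy's `kurtosis` returns the excess (Fisher) value by default, so `fisher=False` is used to match the fourth-standardized-moment definition.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(iq):
    """Statistical features of one frame of complex (I + jQ) radar samples."""
    amplitude = np.abs(iq)    # |z| = sqrt(I^2 + Q^2)
    phase = np.angle(iq)      # arg(z) = atan2(Q, I)
    return {
        "amplitude": amplitude.mean(),
        "phase": phase.mean(),
        "skewness": skew(amplitude),                    # third standardized moment
        "kurtosis": kurtosis(amplitude, fisher=False),  # fourth standardized moment
    }

rng = np.random.default_rng(1)
frame = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
feats = extract_features(frame)
```

For complex Gaussian noise the amplitude is Rayleigh distributed, so its skewness comes out positive, which is the kind of class-dependent asymmetry the table exploits.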
Model Block | Adjustment Applied | Expected Benefit |
---|---|---|
Input Layer | Class Weights | Balanced predictions across all classes |
Multi-Head Attention | Learning Rate Scheduler | Stable convergence and improved pattern detection |
Regularization Layer | Early Stopping | Reduced overfitting and better generalization |
Feed-Forward Network | Class Weights & Learning Rate Scheduler | Improved representation of underrepresented classes |
Output Layer | Class Weights & Early Stopping | Balanced predictions and optimal stopping point |
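The three adjustments in the table map onto standard Keras training utilities. The inverse-frequency weighting heuristic, the scheduler choice (`ReduceLROnPlateau`), and the patience values below are illustrative assumptions; the paper does not state the exact scheduler or thresholds.

```python
import numpy as np
import tensorflow as tf

# placeholder labels standing in for the three-class training set
y_train = np.random.default_rng(0).integers(0, 3, 600)

# class weights: inverse-frequency ("balanced") heuristic
counts = np.bincount(y_train, minlength=3)
class_weight = {c: len(y_train) / (3 * counts[c]) for c in range(3)}

callbacks = [
    # learning rate scheduler: halve the LR when validation loss stalls
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=3, min_lr=1e-6),
    # early stopping: halt training and keep the best weights seen
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=8,
                                     restore_best_weights=True),
]

# usage (X_train is the radar feature tensor):
# model.fit(X_train, y_train, validation_split=0.2,
#           class_weight=class_weight, callbacks=callbacks)
```

Passing `class_weight` to `fit` scales each sample's loss by its class weight, which addresses the underrepresented-class benefit listed in the table, while the two callbacks implement the scheduler and stopping adjustments.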
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Larrat, M.; Sales, C. Classification of Flying Drones Using Millimeter-Wave Radar: Comparative Analysis of Algorithms Under Noisy Conditions. Sensors 2025, 25, 721. https://doi.org/10.3390/s25030721