
Cross-domain prototype similarity correction for few-shot radar modulation signal recognition

Published: 08 August 2024

Abstract

New classes of radar signals are increasingly difficult to acquire in non-cooperative environments, so the limited labeled samples available cannot support conventional convolutional neural network training. Few-shot learning (FSL) methods perform well in classification with limited labeled samples, but they ignore the significant difference between the class distributions of the new and original tasks, which poses a major challenge for recognizing new radar signals. To address this problem, a few-shot radar modulation signal recognition method based on cross-domain prototype similarity correction (CDPSC) is proposed. Specifically, a residual feature tokenizer transformer (RFTT) model embedded with a pooling token generation block is designed to focus on important features and improve the ability to represent samples. Meanwhile, the proposed domain prototype similarity mapping (DPSM) strategy adaptively learns the class mapping, reduces the inter-domain difference through feature distribution alignment, and effectively corrects the target-domain prototypes. In addition, a sample prototype embedding (SPE) strategy is introduced in the training phase to reduce the intra-class distance and increase the inter-class distance. Experimental results demonstrate that CDPSC outperforms typical FSL methods in recognition accuracy across different sample sizes.
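The abstract does not give the DPSM formulas, but the general idea it describes — computing class prototypes from few labeled samples and correcting target-domain prototypes using similarity to source-domain class centers — can be sketched as follows. All function names, the cosine-similarity weighting, and the 50/50 blend ratio here are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def class_prototypes(features, labels, n_classes):
    """Mean feature vector per class (the standard prototypical-network prototype)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])

def corrected_prototypes(target_protos, source_protos, temperature=1.0):
    """Hypothetical similarity-weighted correction: pull each target-domain
    prototype toward source-domain prototypes in proportion to cosine similarity."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    # Cosine similarity between every target and source prototype: (C_t, C_s)
    sim = normalize(target_protos) @ normalize(source_protos).T
    # Softmax over source classes gives a soft class mapping
    weights = np.exp(sim / temperature)
    weights /= weights.sum(axis=1, keepdims=True)
    mapped = weights @ source_protos  # source-informed estimate of each class center
    return 0.5 * (target_protos + mapped)  # blend ratio is an assumption
```

With few labeled samples per class, the raw target prototypes are noisy; blending in a similarity-mapped source estimate is one simple way such a correction can stabilize them.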

Highlights

A CDPSC method is proposed for few-shot radar modulation signal recognition.
A domain prototype similarity mapping strategy improves the accuracy of prototypes.
The RFTT model improves feature extraction by capturing time-series features.
The proposed method achieves strong recognition accuracy with limited labeled samples.
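The SPE strategy is described only as reducing intra-class distance while increasing inter-class distance. A generic margin-based loss of that kind — a sketch of the general technique, not the paper's SPE formulation — looks like this:

```python
import numpy as np

def embedding_separation_loss(features, labels, prototypes, margin=1.0):
    """Toy loss: distance to the sample's own-class prototype (intra-class term)
    plus a margin hinge on the nearest other-class prototype (inter-class term)."""
    loss = 0.0
    for x, y in zip(features, labels):
        d = np.linalg.norm(prototypes - x, axis=1)   # distance to each prototype
        intra = d[y]                                 # pull toward own prototype
        others = np.delete(d, y)
        inter = max(0.0, margin - others.min())      # push away from nearest other class
        loss += intra + inter
    return loss / len(features)
```

Minimizing such a loss during episodic training tightens clusters around their prototypes while keeping different classes at least `margin` apart, which is the stated effect of SPE.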



Published In

Signal Processing, Volume 223, Issue C
Oct 2024
298 pages

Publisher

Elsevier North-Holland, Inc.

United States


Author Tags

  1. Cross-domain
  2. Few-shot learning
  3. Prototype similarity correction
  4. Radar modulation signal recognition

Qualifiers

  • Research-article
