
CN116016071A - Modulation signal identification method based on double-flow fusion CNN-BiLSTM network - Google Patents

Modulation signal identification method based on double-flow fusion CNN-BiLSTM network

Info

Publication number
CN116016071A
Authority
CN
China
Prior art keywords
network
cnn
bilstm
stream
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211579515.2A
Other languages
Chinese (zh)
Inventor
龚树凤
闫鑫悦
施汉银
吴哲夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202211579515.2A priority Critical patent/CN116016071A/en
Publication of CN116016071A publication Critical patent/CN116016071A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Digital Transmission Methods That Use Modulated Carrier Waves (AREA)

Abstract

A modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network is implemented on the existing public data set RadioML2016.10a. The data set is first preprocessed: the data are converted into an I/Q format and an A/P format, increasing information diversity. To extract the spatial and temporal characteristics of the signals simultaneously, each stream consists of a CNN and a BiLSTM; the BiLSTM network layer learns in both the forward and backward directions, improving recognition performance. The network then performs dual-stream fusion, which enables effective feature interaction and yields more effective classification features. The invention can greatly reduce training time and improve signal identification accuracy.

Description

Modulation signal identification method based on double-flow fusion CNN-BiLSTM network
Technical Field
The invention relates to signal modulation identification classification, in particular to a modulation signal identification method based on a double-flow fusion CNN-BiLSTM network.
Background
With the rapid development of communication, electronics and signal processing technology, new modulation modes of communication signals keep emerging, the number of modulation types keeps growing, and the communication environment is becoming increasingly complex. Communication signal modulation recognition refers to determining the modulation type of a received signal, without any prior knowledge, after the signal has been analyzed and processed; it is an important link between signal detection and demodulation in civil and military fields such as spectrum resource management, detection of illegal radio station interference, and electronic countermeasures.
Conventional communication signal modulation recognition methods fall broadly into likelihood-based (LB) and feature-based (FB) methods. Likelihood-based modulation recognition is formulated as a multiple composite hypothesis test: a different hypothesis is made for each candidate modulation mode, the likelihood function of the received signal is computed under each hypothesis, and the likelihood values are compared with a predefined threshold to determine the most probable class of the received signal. Although likelihood-based methods are optimal in the Bayesian sense, they are usually accompanied by considerable computational complexity, require complete prior knowledge, and are sensitive to unknown channel conditions.
Feature-based methods use machine-learning or deep-learning algorithms to extract characteristic parameters from the modulated signal and, based on the differences between these parameters, select a classification criterion that meets the requirements in order to decide the signal type. Such methods generally consist of three parts: signal preprocessing, feature extraction and classification. The preprocessing actually applied is determined mainly by the characteristic parameters to be used. Feature extraction obtains intrinsic attribute information of the data set signals, for example wavelet transforms, cyclostationary features and higher-order cumulants. Classification then uses the extracted feature information to decide the signal type.
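As a concrete example of such a hand-crafted feature, the sketch below estimates the fourth-order cumulants C40 and C42 of a complex baseband signal, a classic FB statistic used to separate digital modulations. This is illustrative background only, under the usual zero-mean assumption, and is not part of the claimed method.

```python
import numpy as np

def fourth_order_cumulants(x):
    """Estimate the fourth-order cumulants C40 and C42 of a complex baseband signal x."""
    x = x - x.mean()                      # enforce the zero-mean assumption
    m20 = np.mean(x ** 2)                 # second-order moments
    m21 = np.mean(np.abs(x) ** 2)
    m40 = np.mean(x ** 4)                 # fourth-order moments
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2              # C40 = M40 - 3*M20^2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2  # C42 = M42 - |M20|^2 - 2*M21^2
    return c40, c42

# example: cumulants of a noisy QPSK burst (illustrative data, not from the patent)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, 1024)))
noisy = symbols + 0.1 * (np.random.randn(1024) + 1j * np.random.randn(1024))
print(fourth_order_cumulants(noisy))
```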
Compared with LB methods, FB methods have been shown to provide suboptimal solutions, but with well-designed features they can achieve strong performance at much lower complexity by extracting a few appropriate features from the received signal to distinguish the modulation types. However, FB methods generally rely on hand-crafted features that require extensive domain knowledge and engineering skill, whereas neural-network-based methods are comparatively easy to apply. Neural-network methods, in turn, depend heavily on data, and an excessively large data volume increases training time, so multiple representations of the signal need to be exploited to improve accuracy. Therefore, to increase feature diversity, the invention adopts a dual-stream network structure: the first stream takes I/Q data as input and the second stream takes A/P data as input; a CNN network extracts the spatial features of the signal, and a Bi-LSTM network extracts its temporal features bidirectionally before the modulated signal is classified.
Disclosure of Invention
To overcome the low accuracy of existing methods in identifying signals at low signal-to-noise ratio, the invention provides a modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network, which is more stable and more accurate than conventional network structures and can effectively classify modulated signals with low signal-to-noise ratio.
The technical scheme adopted by the invention is as follows:
A modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network comprises the following steps:
step 1: preparing a data set;
step 2: data preprocessing: the signals are converted into an in-phase/quadrature (I/Q) format and an amplitude/phase (A/P) representation, forming two data sets;
step 3: building a dual-stream fusion CNN-BiLSTM network structure and setting the basic network parameters;
step 4: model training: the training set is input into the dual-stream fusion CNN-BiLSTM network for training;
step 5: the test set is input into the trained dual-stream fusion CNN-BiLSTM network, the modulation type of the signal is identified automatically, and the signal recognition accuracy is output.
Further, in step 1, the data set is the publicly available RadioML2016.10a data set generated synthetically with GNU Radio, comprising training data and test data; it contains 8 digital modulation types (BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK and PAM4) and 3 analog modulation types (WB-FM, AM-SSB and AM-DSB).
Still further, in the step 2, the data preprocessing process is as follows:
(2.1) the modulated signal is an I/Q signal consisting of two sequences, I and Q, which are converted into an I/Q vector:

$$X_{IQ}^{j} = \begin{bmatrix} I^{j} \\ Q^{j} \end{bmatrix}$$

where

$$I^{j} = \left[ i_{1}^{j}, i_{2}^{j}, \ldots, i_{N}^{j} \right], \qquad Q^{j} = \left[ q_{1}^{j}, q_{2}^{j}, \ldots, q_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128;
(2.2) calculate the amplitude and phase vectors:

$$A^{j} = \sqrt{\left(I^{j}\right)^{2} + \left(Q^{j}\right)^{2}}, \qquad P^{j} = \arctan\left(\frac{Q^{j}}{I^{j}}\right)$$
(2.3) construct the A/P data set consisting of the amplitude components and phase components and convert it into an A/P vector:

$$X_{AP}^{j} = \begin{bmatrix} A^{j} \\ P^{j} \end{bmatrix}$$

where

$$A^{j} = \left[ a_{1}^{j}, a_{2}^{j}, \ldots, a_{N}^{j} \right], \qquad P^{j} = \left[ p_{1}^{j}, p_{2}^{j}, \ldots, p_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128.
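The amplitude/phase conversion of step 2 can be written as a short NumPy sketch. The per-sample array shape of 2 × 128 follows the description above, while the function name and the use of arctan2 for the phase (to avoid division by zero when I = 0) are illustrative assumptions rather than the patent's exact implementation.

```python
import numpy as np

def iq_to_ap(x_iq):
    """Convert one I/Q sample of shape (2, N) into an A/P sample of the same shape.

    x_iq[0] holds the in-phase sequence I and x_iq[1] the quadrature sequence Q,
    mirroring the X_IQ vector defined in step (2.1).
    """
    i, q = x_iq[0], x_iq[1]
    amplitude = np.sqrt(i ** 2 + q ** 2)   # A = sqrt(I^2 + Q^2), step (2.2)
    phase = np.arctan2(q, i)               # P = arctan(Q / I); arctan2 handles I = 0
    return np.stack([amplitude, phase])    # X_AP = [A, P], step (2.3)

# example: one random sample with N = 128 sampling points
sample_iq = np.random.randn(2, 128).astype(np.float32)
sample_ap = iq_to_ap(sample_iq)
print(sample_ap.shape)  # (2, 128)
```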
Further, in step 3, the dual-stream fusion CNN-BiLSTM network structure and the basic network parameter settings are described as follows:
(3.1) one stream of the dual-stream fusion network extracts local raw temporal features from the original I/Q signal, while the other stream learns from the A/P signal, i.e. the preprocessed amplitude and phase information; through dual-stream information interaction, the features learned by the two streams interact pairwise, so that more effective classification features are obtained, feature diversity is increased, and performance is further improved;
(3.2) the CNN-BiLSTM network model of each stream splices a CNN and a Bi-LSTM network: the CNN structure of each stream extracts the spatial features of the data, and the Bi-LSTM network extracts the temporal features of the data through bidirectional long short-term memory; each stream comprises three zero-padding layers, three two-dimensional convolutional layers and one fully connected layer; the two-dimensional I/Q signal data are processed into a three-dimensional input of size 1 × 2 × 128; the three convolutional layers contain 256, 128 and 256 convolution kernels respectively, with kernel sizes of 2 × 3 for the first layer, 1 × 3 for the second layer and 1 × 3 for the third layer; finally, the fully connected layer outputs an 11-dimensional feature vector representing the prediction probabilities of the 11 signal classes; both the CNN layers and the LSTM layers apply dropout with rate p = 0.5 to avoid overfitting;
(3.3) each convolutional layer and each Bi-LSTM layer of the network uses ReLU as the activation function, and the network uses the categorical cross-entropy error as the loss function:

$$L = -\frac{1}{N}\sum_{i=1}^{N} \log\left(y_{i}\right)$$

where y_i denotes the predicted probability assigned to the true class of the i-th sample and N denotes the number of training samples;
(3.4) the network model uses the stochastic-gradient-descent-based Adam algorithm as the optimizer, whose parameter update can be written as

$$\theta_{t+1} = \theta_{t} - \mu \frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}} + \epsilon}$$

where $\hat{m}_{t}$ and $\hat{v}_{t}$ are the bias-corrected first- and second-moment estimates of the gradient and $\epsilon$ is a small constant; the learning rate μ is set to 0.0001, the loss function is minimized, and the network parameters are updated iteratively;
(3.5) the two streams are fused through a feature-interaction scheme based on transposed feature multiplication; unlike ordinary weighted fusion, this form of feature fusion extracts more features, enriches the diversity of the attributes, and further enhances performance.
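As a concrete illustration of the structure described in (3.1) to (3.5), the following is a minimal sketch in TensorFlow/Keras (an assumed framework; the patent does not name one). The filter counts 256/128/256, the kernel sizes 2 × 3, 1 × 3 and 1 × 3, the dropout rate 0.5, the 11-class softmax output, the Adam optimizer with learning rate 0.0001 and the categorical cross-entropy loss follow the description; the zero-padding scheme, the Bi-LSTM width of 128 units, the reshape between the CNN and the Bi-LSTM, and the einsum realization of the transposed-feature-multiplication fusion are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_stream(name):
    """One stream: three zero-padding + 2-D convolution blocks followed by a Bi-LSTM."""
    inp = layers.Input(shape=(2, 128, 1), name=f"{name}_input")   # one I/Q or A/P sample
    x = inp
    for filters, kernel in [(256, (2, 3)), (128, (1, 3)), (256, (1, 3))]:
        x = layers.ZeroPadding2D(padding=(0, 1))(x)               # zero-filling layer (assumed padding)
        x = layers.Conv2D(filters, kernel, activation="relu")(x)  # spatial feature extraction
        x = layers.Dropout(0.5)(x)                                # dropout rate p = 0.5
    # collapse (height, width, channels) into a (time steps, features) sequence
    x = layers.Reshape((x.shape[2], x.shape[1] * x.shape[3]))(x)
    x = layers.Bidirectional(layers.LSTM(128))(x)                 # forward + backward temporal features
    x = layers.Dropout(0.5)(x)
    return inp, x

iq_in, iq_feat = build_stream("iq")
ap_in, ap_feat = build_stream("ap")

# feature interaction: multiply one stream's feature vector by the transpose of the
# other's (a per-sample outer product), then classify into the 11 modulation types
fused = layers.Lambda(lambda t: tf.einsum("bi,bj->bij", t[0], t[1]))([iq_feat, ap_feat])
fused = layers.Flatten()(fused)
outputs = layers.Dense(11, activation="softmax")(fused)

model = Model(inputs=[iq_in, ap_in], outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The outer-product tensor produced by the fusion step is what distinguishes this interaction from a simple weighted sum of the two feature vectors: every pairwise product of an I/Q-stream feature and an A/P-stream feature is presented to the classifier.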
The beneficial effects of the invention are as follows:
1. The invention adopts a network structure that combines CNN and BiLSTM, so that the temporal and spatial features of the modulated signal are extracted automatically; ReLU is used as the activation function in every layer, which reduces the amount of computation, speeds up the calculation process, and retains as much feature information as possible;
2. The invention adopts a dual-stream fusion CNN-BiLSTM network structure: the first stream directly takes signals in I/Q form as input and the second stream directly takes signals in A/P form as input; feature fusion by transposed feature-matrix multiplication extracts more feature information and improves recognition accuracy;
drawings
FIG. 1 is a workflow diagram of the modulation signal identification method based on the dual-stream fusion CNN-BiLSTM network;
FIG. 2 is a schematic diagram of the operation of the network model;
FIG. 3 shows the recognition performance for different modulated signals at SNR = -2 dB;
FIG. 4 shows the recognition performance for different modulated signals at SNR = 10 dB;
FIG. 5 is a performance diagram of the average classification accuracy;
FIG. 6 shows the classification accuracy of each modulated signal.
Detailed Description
The following detailed description of preferred embodiments of the invention, taken in conjunction with the accompanying drawings, is provided so that those skilled in the art can understand the advantages and features of the invention more easily, and so that the scope of protection of the invention is clearly defined.
Referring to FIGS. 1 to 6, a modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network includes the following steps:
step 1: preparing a data set;
the data set is a public available data set Radio ML2016.10a generated by using GNU Radio synthesis, and comprises training data and test data, wherein the types of the data set comprise 8 digital modulation signals such as BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK and PAM4 and 3 analog modulation signals such as WB-FM, AM-SSB and AM-DSB;
step 2: data preprocessing, converting signals into a time inphase/quadrature (I/Q) format and an amplitude/phase (a/P) representation, forming two data sets;
the data preprocessing process comprises the following steps:
(2.1) the modulated signal is an I/Q signal consisting of two sequences, I and Q, which are converted into an I/Q vector:

$$X_{IQ}^{j} = \begin{bmatrix} I^{j} \\ Q^{j} \end{bmatrix}$$

where

$$I^{j} = \left[ i_{1}^{j}, i_{2}^{j}, \ldots, i_{N}^{j} \right], \qquad Q^{j} = \left[ q_{1}^{j}, q_{2}^{j}, \ldots, q_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128;
(2.2) calculate the amplitude and phase vectors:

$$A^{j} = \sqrt{\left(I^{j}\right)^{2} + \left(Q^{j}\right)^{2}}, \qquad P^{j} = \arctan\left(\frac{Q^{j}}{I^{j}}\right)$$

(2.3) construct the A/P data set consisting of the amplitude components and phase components and convert it into an A/P vector:

$$X_{AP}^{j} = \begin{bmatrix} A^{j} \\ P^{j} \end{bmatrix}$$

where

$$A^{j} = \left[ a_{1}^{j}, a_{2}^{j}, \ldots, a_{N}^{j} \right], \qquad P^{j} = \left[ p_{1}^{j}, p_{2}^{j}, \ldots, p_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128;
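Steps 1 and 2 of this embodiment might be realized as follows, assuming the standard pickled distribution of RadioML2016.10a, i.e. a dictionary keyed by (modulation, SNR) pairs holding arrays of shape 1000 × 2 × 128. The file name, the label encoding and the 50/50 train/test split are illustrative assumptions rather than details given in the patent.

```python
import pickle
import numpy as np

# assumed file name of the public RadioML2016.10a distribution
with open("RML2016.10a_dict.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")   # dict: (modulation, snr) -> (1000, 2, 128) array

mods = sorted({mod for mod, _ in data.keys()})            # the 11 modulation classes
x_iq, labels = [], []
for (mod, snr), samples in data.items():
    x_iq.append(samples)
    labels.extend([mods.index(mod)] * len(samples))
x_iq = np.concatenate(x_iq).astype(np.float32)             # (220000, 2, 128) I/Q data set
labels = np.array(labels)

# step 2: build the parallel A/P data set from the I/Q data
amplitude = np.sqrt(x_iq[:, 0] ** 2 + x_iq[:, 1] ** 2)
phase = np.arctan2(x_iq[:, 1], x_iq[:, 0])
x_ap = np.stack([amplitude, phase], axis=1)                 # (220000, 2, 128) A/P data set

# one-hot labels for the 11 classes and an illustrative 50/50 train/test split
y = np.eye(len(mods))[labels]
idx = np.random.permutation(len(x_iq))
train, test = idx[: len(idx) // 2], idx[len(idx) // 2 :]
```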
step 3: building a double-flow fusion CNN-BiLSTM network structure and setting network basic parameters;
the double-flow fusion CNN-BiLSTM network structure and network basic parameter setting are described as follows:
(3.1) one stream of the dual-stream fusion network extracts local raw temporal features from the original I/Q signal, while the other stream learns from the A/P signal, i.e. the preprocessed amplitude and phase information. Through dual-stream information interaction, the features learned by the two streams interact pairwise, so that more effective classification features are obtained, feature diversity is increased, and performance is further improved;
(3.2) the CNN-BiLSTM network model of each stream splices a CNN and a Bi-LSTM network: the CNN structure of each stream extracts the spatial features of the data, and the bidirectional long short-term memory (Bi-LSTM) network extracts the temporal features of the data; each stream comprises three zero-padding layers, three two-dimensional convolutional layers and one fully connected layer; the two-dimensional I/Q signal data are processed into a three-dimensional input of size 1 × 2 × 128; the three convolutional layers contain 256, 128 and 256 convolution kernels respectively, with kernel sizes of 2 × 3 for the first layer, 1 × 3 for the second layer and 1 × 3 for the third layer; finally, the fully connected layer outputs an 11-dimensional feature vector representing the prediction probabilities of the 11 signal classes; both the CNN layers and the LSTM layers apply dropout with rate p = 0.5 to avoid overfitting;
(3.3) each convolutional layer and each Bi-LSTM layer of the network uses ReLU as the activation function, and the network uses the categorical cross-entropy error as the loss function:

$$L = -\frac{1}{N}\sum_{i=1}^{N} \log\left(y_{i}\right)$$

where y_i denotes the predicted probability assigned to the true class of the i-th sample and N denotes the number of training samples.
(3.4) the network model uses the stochastic-gradient-descent-based Adam algorithm as the optimizer, whose parameter update can be written as

$$\theta_{t+1} = \theta_{t} - \mu \frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}} + \epsilon}$$

where $\hat{m}_{t}$ and $\hat{v}_{t}$ are the bias-corrected first- and second-moment estimates of the gradient and $\epsilon$ is a small constant; the learning rate μ is set to 0.0001, the loss function is minimized, and the network parameters are updated iteratively.
(3.5) the two streams are fused through a feature-interaction scheme based on transposed feature multiplication; unlike ordinary weighted fusion, this form of feature fusion extracts more features, enriches the diversity of the attributes, and further enhances performance.
step 4: model training: the training set is input into the dual-stream fusion CNN-BiLSTM network for training;
step 5: the test set is input into the trained dual-stream fusion CNN-BiLSTM network, the modulation type of the signal is identified automatically, and the signal recognition accuracy is output.
The processing procedure of this embodiment is as follows:
step 1: preparing a data set;
the data set is a public available data set Radio ML2016.10a generated by using GNU Radio synthesis, and comprises training data and test data, wherein the types of the data set comprise 8 digital modulation signals such as BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK and PAM4 and 3 analog modulation signals such as WB-FM, AM-SSB and AM-DSB.
Step 2: data preprocessing, converting signals into a time inphase/quadrature (I/Q) format and an amplitude/phase (a/P) representation, forming two data sets;
the specific method for preprocessing the data comprises the following steps:
(2.1) the modulated signal is an I/Q signal consisting of two sequences, I and Q, which are converted into an I/Q vector:

$$X_{IQ}^{j} = \begin{bmatrix} I^{j} \\ Q^{j} \end{bmatrix}$$

where

$$I^{j} = \left[ i_{1}^{j}, i_{2}^{j}, \ldots, i_{N}^{j} \right], \qquad Q^{j} = \left[ q_{1}^{j}, q_{2}^{j}, \ldots, q_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128;
(2.2) calculate the amplitude and phase vectors:

$$A^{j} = \sqrt{\left(I^{j}\right)^{2} + \left(Q^{j}\right)^{2}}, \qquad P^{j} = \arctan\left(\frac{Q^{j}}{I^{j}}\right)$$

(2.3) construct the A/P data set consisting of the amplitude components and phase components and convert it into an A/P vector:

$$X_{AP}^{j} = \begin{bmatrix} A^{j} \\ P^{j} \end{bmatrix}$$

where

$$A^{j} = \left[ a_{1}^{j}, a_{2}^{j}, \ldots, a_{N}^{j} \right], \qquad P^{j} = \left[ p_{1}^{j}, p_{2}^{j}, \ldots, p_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128;
step 3: building a double-flow fusion CNN-BiLSTM network structure and setting network basic parameters;
the double-flow fusion CNN-BiLSTM network structure and network basic parameter setting are specifically described as follows:
(3.1) one stream of the dual-stream fusion network extracts local raw temporal features from the original I/Q signal, while the other stream learns from the A/P signal, i.e. the preprocessed amplitude and phase information. Through dual-stream information interaction, the features learned by the two streams interact pairwise, so that more effective classification features are obtained, feature diversity is increased, and performance is further improved;
(3.2) the CNN-BiLSTM network model of each stream splices a CNN and a Bi-LSTM network: the CNN structure of each stream extracts the spatial features of the data, and the bidirectional long short-term memory (Bi-LSTM) network extracts the temporal features of the data; each stream comprises three zero-padding layers, three two-dimensional convolutional layers and one fully connected layer. The two-dimensional I/Q signal data are processed into a three-dimensional input of size 1 × 2 × 128; the three convolutional layers contain 256, 128 and 256 convolution kernels respectively, with kernel sizes of 2 × 3 for the first layer, 1 × 3 for the second layer and 1 × 3 for the third layer; finally, the fully connected layer outputs an 11-dimensional feature vector representing the prediction probabilities of the 11 signal classes. Both the CNN layers and the LSTM layers apply dropout with rate p = 0.5 to avoid overfitting.
(3.3) each convolutional layer and each Bi-LSTM layer of the network uses ReLU as the activation function, and the network uses the categorical cross-entropy error as the loss function:

$$L = -\frac{1}{N}\sum_{i=1}^{N} \log\left(y_{i}\right)$$

where y_i denotes the predicted probability assigned to the true class of the i-th sample and N denotes the number of training samples.
(3.4) the network model uses the stochastic-gradient-descent-based Adam algorithm as the optimizer, whose parameter update can be written as

$$\theta_{t+1} = \theta_{t} - \mu \frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}} + \epsilon}$$

where $\hat{m}_{t}$ and $\hat{v}_{t}$ are the bias-corrected first- and second-moment estimates of the gradient and $\epsilon$ is a small constant; the learning rate μ is set to 0.0001, the loss function is minimized, and the network parameters are updated iteratively.
(3.5) the two streams are fused through a feature-interaction scheme based on transposed feature multiplication; unlike ordinary weighted fusion, this form of feature fusion extracts more features, enriches the diversity of the attributes, and further enhances performance.
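With the data prepared and the model built as sketched earlier, the training and testing steps (steps 4 and 5 of the method) reduce to fitting and evaluating the dual-stream model. The sketch below reuses the hypothetical `model`, `x_iq`, `x_ap`, `y`, `train` and `test` names introduced in the earlier sketches; the batch size, epoch count and validation split are illustrative assumptions rather than values stated in the patent.

```python
import numpy as np

# step 4: train the dual-stream fusion CNN-BiLSTM network on the training split
history = model.fit(
    [x_iq[train][..., np.newaxis], x_ap[train][..., np.newaxis]],  # (N, 2, 128, 1) per stream
    y[train],
    validation_split=0.1,   # illustrative hold-out used only for monitoring
    batch_size=256,         # assumed batch size
    epochs=50,              # assumed number of epochs
)

# step 5: feed the test split to the trained network and report the recognition accuracy
test_loss, test_acc = model.evaluate(
    [x_iq[test][..., np.newaxis], x_ap[test][..., np.newaxis]],
    y[test],
)
print(f"modulation recognition accuracy: {test_acc:.4f}")
```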
The foregoing description is merely illustrative of the invention and is not intended to limit its scope; any equivalent structure or equivalent process derived from it, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the invention.

Claims (4)

1. A modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network, characterized by comprising the following steps:
step 1: preparing a data set;
step 2: data preprocessing: the signals are converted into an in-phase/quadrature (I/Q) format and an amplitude/phase (A/P) representation, forming two data sets;
step 3: building a dual-stream fusion CNN-BiLSTM network structure and setting the basic network parameters;
step 4: model training: the training set is input into the dual-stream fusion CNN-BiLSTM network for training;
step 5: the test set is input into the trained dual-stream fusion CNN-BiLSTM network, the modulation type of the signal is identified automatically, and the signal recognition accuracy is output.
2. The modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network according to claim 1, wherein in step 1 the data set is the publicly available RadioML2016.10a data set generated synthetically with GNU Radio, comprising training data and test data, and containing 8 digital modulation types (BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK and PAM4) and 3 analog modulation types (WB-FM, AM-SSB and AM-DSB).
3. The modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network according to claim 1 or 2, wherein in step 2 the data preprocessing process is as follows:
(2.1) the modulated signal is an I/Q signal consisting of two sequences, I and Q, which are first converted into an I/Q vector:

$$X_{IQ}^{j} = \begin{bmatrix} I^{j} \\ Q^{j} \end{bmatrix}$$

where

$$I^{j} = \left[ i_{1}^{j}, i_{2}^{j}, \ldots, i_{N}^{j} \right], \qquad Q^{j} = \left[ q_{1}^{j}, q_{2}^{j}, \ldots, q_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128;
(2.2) calculate the amplitude and phase vectors:

$$A^{j} = \sqrt{\left(I^{j}\right)^{2} + \left(Q^{j}\right)^{2}}, \qquad P^{j} = \arctan\left(\frac{Q^{j}}{I^{j}}\right)$$

(2.3) construct the A/P data set consisting of the amplitude components and phase components and convert it into an A/P vector:

$$X_{AP}^{j} = \begin{bmatrix} A^{j} \\ P^{j} \end{bmatrix}$$

where

$$A^{j} = \left[ a_{1}^{j}, a_{2}^{j}, \ldots, a_{N}^{j} \right], \qquad P^{j} = \left[ p_{1}^{j}, p_{2}^{j}, \ldots, p_{N}^{j} \right]$$

j denotes the sample index of the data set, with a maximum of 220000, and N denotes the number of sampling points, with a maximum of 128.
4. The modulation signal identification method based on a dual-stream fusion CNN-BiLSTM network according to claim 3, wherein in step 3 the dual-stream fusion CNN-BiLSTM network structure and the basic network parameter settings are described as follows:
(3.1) one stream of the dual-stream fusion network extracts local raw temporal features from the original I/Q signal, while the other stream learns from the A/P signal, i.e. the preprocessed amplitude and phase information; through dual-stream information interaction, the features learned by the two streams interact pairwise, so that more effective classification features are obtained, feature diversity is increased, and performance is further improved;
(3.2) the CNN-BiLSTM network model of each stream splices a CNN and a Bi-LSTM network: the CNN structure of each stream extracts the spatial features of the data, and the Bi-LSTM network extracts the temporal features of the data through bidirectional long short-term memory; each stream comprises three zero-padding layers, three two-dimensional convolutional layers and one fully connected layer; the two-dimensional I/Q signal data are processed into a three-dimensional input of size 1 × 2 × 128; the three convolutional layers contain 256, 128 and 256 convolution kernels respectively, with kernel sizes of 2 × 3 for the first layer, 1 × 3 for the second layer and 1 × 3 for the third layer; finally, the fully connected layer outputs an 11-dimensional feature vector representing the prediction probabilities of the 11 signal classes; both the CNN layers and the LSTM layers apply dropout with rate p = 0.5 to avoid overfitting;
(3.3) each convolutional layer and each Bi-LSTM layer of the network uses ReLU as the activation function, and the network uses the categorical cross-entropy error as the loss function:

$$L = -\frac{1}{N}\sum_{i=1}^{N} \log\left(y_{i}\right)$$

where y_i denotes the predicted probability assigned to the true class of the i-th sample and N denotes the number of training samples;
(3.4) the network model uses the stochastic-gradient-descent-based Adam algorithm as the optimizer, whose parameter update can be written as

$$\theta_{t+1} = \theta_{t} - \mu \frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}} + \epsilon}$$

where $\hat{m}_{t}$ and $\hat{v}_{t}$ are the bias-corrected first- and second-moment estimates of the gradient and $\epsilon$ is a small constant; the learning rate μ is set to 0.0001, the loss function is minimized, and the network parameters are updated iteratively;
(3.5) the two streams are fused through a feature-interaction scheme based on transposed feature multiplication; unlike ordinary weighted fusion, this form of feature fusion extracts more features, enriches the diversity of the attributes, and further enhances performance.
CN202211579515.2A 2022-12-09 2022-12-09 Modulation signal identification method based on double-flow fusion CNN-BiLSTM network Pending CN116016071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211579515.2A CN116016071A (en) 2022-12-09 2022-12-09 Modulation signal identification method based on double-flow fusion CNN-BiLSTM network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211579515.2A CN116016071A (en) 2022-12-09 2022-12-09 Modulation signal identification method based on double-flow fusion CNN-BiLSTM network

Publications (1)

Publication Number Publication Date
CN116016071A true CN116016071A (en) 2023-04-25

Family

ID=86025702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211579515.2A Pending CN116016071A (en) 2022-12-09 2022-12-09 Modulation signal identification method based on double-flow fusion CNN-BiLSTM network

Country Status (1)

Country Link
CN (1) CN116016071A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117932312A (en) * 2024-03-22 2024-04-26 山东大学 Radio positioning recognition system based on space-time attention network and contrast loss

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832417A (en) * 2020-06-16 2020-10-27 杭州电子科技大学 Signal modulation pattern recognition method based on CNN-LSTM model and transfer learning
CN113298031A (en) * 2021-06-16 2021-08-24 中国人民解放军国防科技大学 Signal modulation identification method considering signal physical and time sequence characteristics and application
CN113486724A (en) * 2021-06-10 2021-10-08 重庆邮电大学 Modulation identification model based on CNN-LSTM multi-tributary structure and multiple signal representations
CN114091541A (en) * 2021-11-19 2022-02-25 嘉兴学院 Multi-mode ensemble learning automatic modulation recognition method for offshore complex environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832417A (en) * 2020-06-16 2020-10-27 杭州电子科技大学 Signal modulation pattern recognition method based on CNN-LSTM model and transfer learning
CN113486724A (en) * 2021-06-10 2021-10-08 重庆邮电大学 Modulation identification model based on CNN-LSTM multi-tributary structure and multiple signal representations
CN113298031A (en) * 2021-06-16 2021-08-24 中国人民解放军国防科技大学 Signal modulation identification method considering signal physical and time sequence characteristics and application
CN114091541A (en) * 2021-11-19 2022-02-25 嘉兴学院 Multi-mode ensemble learning automatic modulation recognition method for offshore complex environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
彭岑昕; 程伟; 李晓柏; 张永利: "A communication signal modulation recognition method based on STFT-BiLSTM" (一种基于STFT-BiLSTM的通信信号调制方式识别方法), 空军预警学院学报 (Journal of Air Force Early Warning Academy), no. 01, 15 February 2020 (2020-02-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117932312A (en) * 2024-03-22 2024-04-26 山东大学 Radio positioning recognition system based on space-time attention network and contrast loss
CN117932312B (en) * 2024-03-22 2024-06-04 山东大学 Radio positioning recognition system based on space-time attention network and contrast loss

Similar Documents

Publication Publication Date Title
CN110855591B (en) QAM and PSK signal intra-class modulation classification method based on convolutional neural network structure
Zhang et al. Automatic modulation classification using CNN-LSTM based dual-stream structure
CN110163282B (en) Modulation mode identification method based on deep learning
CN108234370B (en) Communication signal modulation mode identification method based on convolutional neural network
CN112702294B (en) Modulation recognition method for multi-level feature extraction based on deep learning
CN108718288B (en) Method for recognizing digital signal modulation mode based on convolutional neural network
CN112861927B (en) Signal modulation classification method based on self-adaptive feature extraction and fusion
CN113014524B (en) Digital signal modulation identification method based on deep learning
CN109887047B (en) Signal-image translation method based on generation type countermeasure network
CN114881092A (en) Signal modulation identification method based on feature fusion
CN114422311B (en) Signal modulation recognition method and system combining deep neural network and expert priori features
Lin et al. A real-time modulation recognition system based on software-defined radio and multi-skip residual neural network
Wang et al. A spatiotemporal multi-stream learning framework based on attention mechanism for automatic modulation recognition
CN116016071A (en) Modulation signal identification method based on double-flow fusion CNN-BiLSTM network
CN116628566A (en) Communication signal modulation classification method based on aggregated residual transformation network
CN116070136A (en) Multi-mode fusion wireless signal automatic modulation recognition method based on deep learning
CN114548201B (en) Automatic modulation identification method and device for wireless signal, storage medium and equipment
Perenda et al. Contrastive learning with self-reconstruction for channel-resilient modulation classification
CN114595729A (en) Communication signal modulation identification method based on residual error neural network and meta-learning fusion
CN114615118A (en) Modulation identification method based on multi-terminal convolution neural network
Sang et al. Deep learning based predictive power allocation for V2X communication
Cai et al. The performance evaluation of big data-driven modulation classification in complex environment
WO2021262052A1 (en) A context aware data receiver for communication signals based on machine learning
CN117278371A (en) Signal modulation identification method based on cascade convolution neural network
CN115589349A (en) QAM signal modulation identification method based on deep learning channel self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination