CN110187321B - Radar radiation source characteristic parameter extraction method based on deep learning in complex environment - Google Patents
- Publication number
- CN110187321B CN110187321B CN201910462165.3A CN201910462165A CN110187321B CN 110187321 B CN110187321 B CN 110187321B CN 201910462165 A CN201910462165 A CN 201910462165A CN 110187321 B CN110187321 B CN 110187321B
- Authority
- CN
- China
- Prior art keywords
- radiation source
- neural network
- initial
- encoder
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2136—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses a method for extracting radar radiation source characteristic parameters in a complex environment based on deep learning, and belongs to the field of electronic reconnaissance. The method comprises the following steps: extracting initial features, constructing a classification neural network, constructing a sparse self-encoder network, and splicing feature matrices. The method combines classification-neural-network recognition with sparse-self-encoder recognition, analyzes and studies the essential nature of the radiation source signal in depth, explores new characteristic parameters, constructs a feature vector better suited to signal identification, and thereby improves the capability of identifying radar radiation source signals in a complex environment.
Description
Technical Field
The invention belongs to the field of electronic reconnaissance, and particularly relates to a radar radiation source characteristic parameter extraction method based on deep learning in a complex environment.
Background
Target identification is a key link in the field of electronic reconnaissance, and its main task is the extraction of characteristic parameters of radiation source signals. Existing theoretical results in this area include the instantaneous autocorrelation method, the wavelet transform method, the ambiguity-function ridge feature method, and the wavelet-packet and entropy feature method, which study radar target identification from different angles and at different levels.
The existing extraction of radar radiation source characteristic parameters has the following problems:
Existing methods are mainly designed for specific signals; under complex environmental conditions where multiple signals are superimposed, existing algorithms cannot meet practical requirements.
Existing characteristic parameters and identification methods are effective in a simple electromagnetic environment, but their identification performance is poor at low signal-to-noise ratio (e.g., SNR ≤ 2 dB), and they cannot meet identification requirements when multiple intra-pulse modulation signals are present simultaneously.
The purpose of identification is to know the type of weapon emitting the signal and to judge its threat level; however, identification of the radiation source itself currently receives little consideration.
The above analysis shows that studying the essential nature of radiation source signals in depth and exploring feature vectors better suited to signal identification are of great significance for identifying radar radiation source signals in a complex environment.
Disclosure of Invention
The invention aims to provide a deep-learning-based method for extracting radar radiation source characteristic parameters in a complex environment, so as to solve the problems that existing characteristic-parameter extraction performs poorly in a complex environment and cannot meet identification requirements.
The technical scheme adopted by the invention is as follows:
a radar radiation source characteristic parameter extraction method in a complex environment based on deep learning comprises the following steps:
extracting initial features: extracting parameter information of the radiation source and the loading platform as initial features;
constructing a classification neural network: taking the initial features as input, constructing an upper-layer classification neural network with the structure 'initial features - neural network intermediate layer A - radiation source and loading platform category', and outputting, through the neural network intermediate layer A, a feature matrix A that maps the relationship between the initial features and the radiation source and loading platform categories;
constructing a sparse self-encoder network: taking the initial features as both input and output quantities, constructing a lower-layer sparse self-encoder network with the structure 'initial features - encoder - neural network intermediate layer B - decoder', and outputting, through the neural network intermediate layer B, a feature matrix B in which the intrinsic attributes of the initial features are deeply refined;
splicing the feature matrices: splicing the feature matrix A, which reflects the relationship between the initial features and the radiation source and loading platform categories, with the feature matrix B, which reflects the inherent attributes of the initial features, to obtain the final complex-environment characteristic parameters.
Further, the initial-feature parameters for the radiation source include the carrier frequency, pulse width, arrival angle, pulse repetition frequency and antenna scanning period of the radar, together with the pulse arrival time, pulse envelope parameters, intra-pulse modulation parameters, amplitude and spectrum parameters of signals such as communication and interference signals;
the initial-feature parameters for the loading platform include the moving speed and spatial position of the loading platform.
Further, information in the classification neural network propagates in a single direction, and the neural network intermediate layer A is trained in a supervised learning manner.
Furthermore, in the sparse self-encoder network, the encoder performs dimensionality reduction on the initial features and refines their kernel information; the decoder is used to train the encoder by judging whether the information extracted by the encoder is accurate, i.e., whether features carrying the same amount of information as the initial features have been obtained, and by feeding the output error back so as to train the neural network intermediate layer B and output the feature matrix B in which the initial features are deeply refined.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. The invention adopts a method that combines a classification neural network with a sparse self-encoding neural network. From the initial feature inputs, the classification neural network produces a feature matrix A that maps the relationship between the initial features and the radiation source and loading platform categories, while the sparse self-encoding neural network produces a feature matrix B in which the intrinsic attributes of the initial features are deeply refined. The two feature matrices are spliced to obtain the final characteristic parameters. In this way, the essential nature of the radiation source signal is analyzed and studied in depth and new characteristic parameters are explored, so that a feature vector better suited to signal identification is constructed and the capability of identifying radar radiation source signals in a complex environment is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and that for those skilled in the art, other relevant drawings can be obtained according to the drawings without inventive effort, wherein:
FIG. 1 is a flow chart of characteristic parameter extraction according to the present invention;
FIG. 2 is a schematic diagram of the sparse autoencoder network of the present invention;
FIG. 3 is a schematic diagram of the variation of the relative entropy value.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration only, not by way of limitation, i.e., the embodiments described are intended as a selection of the best mode contemplated for carrying out the invention, not as a full mode. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
A method for extracting radar radiation source characteristic parameters in a complex environment based on deep learning comprises the following steps:
extracting initial features: extracting parameter information of the radiation source and the loading platform as initial features;
constructing a classification neural network: taking the initial features as input, constructing an upper-layer classification neural network with the structure 'initial features - neural network intermediate layer A - radiation source and loading platform category', and outputting, through the neural network intermediate layer A, a feature matrix A that maps the relationship between the initial features and the radiation source and loading platform categories;
constructing a sparse self-encoder network: taking the initial features as both input and output quantities, constructing a lower-layer sparse self-encoder network with the structure 'initial features - encoder - neural network intermediate layer B - decoder', and outputting, through the neural network intermediate layer B, a feature matrix B in which the intrinsic attributes of the initial features are deeply refined;
splicing the feature matrices: splicing the feature matrix A, which reflects the relationship between the initial features and the radiation source and loading platform categories, with the feature matrix B, which reflects the inherent attributes of the initial features, to obtain the final complex-environment characteristic parameters.
Specifically, the feature parameter extraction flow is shown in fig. 1:
Firstly, initial feature information is extracted. For the radiation source, parameters such as the carrier frequency, pulse width, arrival angle, pulse repetition frequency and antenna scanning period of the radar are considered, combined with measured parameters such as the pulse arrival time, pulse envelope parameters, intra-pulse modulation parameters, amplitude and frequency spectrum of signals such as communication and interference signals, to form the initial characteristic parameters. For the loading platform, features such as its moving speed and spatial position are selected directly as initial characteristic parameters.
The initial characteristic parameter information of the radiation source and the loading platform is then combined to obtain the initial feature input quantity; a short illustrative sketch of this assembly is given below.
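The following Python sketch (not part of the original patent text) illustrates how such an initial feature input quantity might be assembled; all field names and numeric values are hypothetical placeholders for the parameters listed above.

```python
import numpy as np

# Hypothetical example values for the radiation-source and loading-platform
# parameters named above; names and units are illustrative only.
radar_params = {
    "carrier_frequency_mhz": 9375.0,
    "pulse_width_us": 1.2,
    "arrival_angle_deg": 47.5,
    "pulse_repetition_frequency_hz": 1000.0,
    "antenna_scan_period_s": 4.0,
    "pulse_arrival_time_us": 130.4,
    "pulse_envelope": 0.8,
    "intra_pulse_modulation": 2.0,   # e.g. an encoded modulation type
    "amplitude_dbm": -35.0,
    "spectrum_bandwidth_mhz": 5.0,
}
platform_params = {
    "moving_speed_mps": 220.0,
    "position_x_km": 12.3,
    "position_y_km": -4.1,
    "position_z_km": 8.0,
}

# Concatenate radiation-source and platform parameters into one initial
# feature vector that feeds both network branches.
initial_features = np.array(
    list(radar_params.values()) + list(platform_params.values()),
    dtype=np.float32,
)
print(initial_features.shape)  # (14,)
```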
Step two: taking the initial features as the input quantity, an upper-layer classification neural network with the structure 'initial features - neural network intermediate layer A - radiation source and loading platform category' is constructed. Information in this upper-layer classification network propagates in a single direction, with no reverse transmission of information; the neural network intermediate layer A is trained in a supervised learning manner, and the feature matrix A that maps the relationship between the initial features and the radiation source and loading platform categories is finally output through the intermediate layer A;
specifically, each neuron in the classified neural network is divided into different groups according to the sequence of received information, each group is regarded as a neural layer, the neuron in each neural layer receives the output signal of the neuron in the previous neural layer, and continuously outputs the signal to the neuron in the next neural layer; each neural layer is used as high-dimensional representation of data information in the input signal and can be regarded as a nonlinear function, and complex mapping from the input signal to the output signal is realized by compounding the simple nonlinear function for multiple times;
the neurons in the classified neural network not only receive signals output by other neurons, but also receive feedback signals of the neurons according to the comparison of the radiation source and the loading platform, and modify the parameters of the intermediate layer A of the neural network through the feedback signals, so that the intermediate layer can better extract characteristic parameters.
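A minimal PyTorch sketch of such an upper-layer classification branch, assuming a 14-dimensional initial feature vector and illustrative layer sizes and class count; the hidden activations of the layer named `intermediate_a` stand in for feature matrix A.

```python
import torch
import torch.nn as nn

class ClassificationBranch(nn.Module):
    """Feed-forward (one-directional) classifier; hidden layer A yields feature matrix A."""
    def __init__(self, n_features=14, n_hidden=32, n_classes=10):
        super().__init__()
        self.intermediate_a = nn.Sequential(      # neural network intermediate layer A
            nn.Linear(n_features, n_hidden),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(n_hidden, n_classes)  # radiation source / platform class

    def forward(self, x):
        feature_a = self.intermediate_a(x)        # feature matrix A (features -> category mapping)
        logits = self.classifier(feature_a)
        return logits, feature_a

# Supervised training uses labelled (features, class) pairs, so the error
# signal feeds back to adjust intermediate layer A.
model_a = ClassificationBranch()
x = torch.randn(8, 14)                            # batch of 8 initial feature vectors
logits, feature_a = model_a(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss.backward()
```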
Step three: taking the initial features as both the input and the output quantity, a lower-layer sparse self-encoder network with the structure 'initial features - encoder - neural network intermediate layer B - decoder' is constructed. The encoder in the sparse self-encoder network reduces the dimensionality of the initial features and extracts their kernel information; the decoder trains the encoder by judging whether the information extracted by the encoder is accurate, i.e., whether features carrying the same amount of information as the initial features have been obtained, and feeds the output error back so as to train the neural network intermediate layer B and output the feature matrix B in which the initial features are deeply refined.
Specifically, the sparse self-encoder is an unsupervised machine learning algorithm: by computing the error between the self-encoder's output and its original input, the parameters of the self-encoder are adjusted continuously until a model is trained. A self-encoder can thus be used to compress input information and extract useful input features.
The self-encoder is divided into an encoder and a decoder; the encoder converts d-dimensional features into p-dimensional features, and the decoder reconstructs the p-dimensional features back into d-dimensional features. When p < d, the self-encoder performs dimension-reducing feature extraction, and adding constraints such as coding sparsity and value range yields meaningful outputs; a sketch of this structure is given below.
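A minimal PyTorch sketch of the encoder/decoder structure under the assumption d = 14 and p = 6 (illustrative values only); the output of the layer named `encoder` stands in for the neural network intermediate layer B.

```python
import torch
import torch.nn as nn

class SparseAutoencoderBranch(nn.Module):
    """Encoder compresses d-dim features to p-dim layer B; decoder reconstructs them."""
    def __init__(self, d=14, p=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, p), nn.Sigmoid())  # intermediate layer B
        self.decoder = nn.Linear(p, d)                               # reconstructs the input

    def forward(self, x):
        feature_b = self.encoder(x)          # feature matrix B (refined intrinsic attributes)
        reconstruction = self.decoder(feature_b)
        return reconstruction, feature_b

model_b = SparseAutoencoderBranch()
x = torch.randn(8, 14)
recon, feature_b = model_b(x)                # output is trained to match the input x
```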
As shown in fig. 2, the unlabeled input data, i.e., the initial feature input quantities, are denoted $\{x^{(1)}, x^{(2)}, x^{(3)}, \dots\}$, and the target output is $h_{W,b}(x) \approx x$; that is, the sparse self-encoder attempts to approximate the identity function so that its output $\hat{x}$ is close to the input $x$;
When the number of neurons in the neural network intermediate layer B is smaller than the input dimension, the self-encoder network is forced to learn a "compressed" representation of the input data; when the number of neurons in intermediate layer B is large, a sparsity constraint is imposed on the self-encoding network to achieve the same effect of compressing the input information and extracting input features.
The term sparsity can be interpreted as follows: a constraint that keeps a neuron in the inhibited state most of the time is called a sparsity constraint; a neuron is considered activated when its output is close to 1 and inhibited when its output is close to 0.
Let $b_j$ denote the activation of neuron $j$ in the neural network intermediate layer B, and let $b_j\big[x^{(i)}\big]$ denote its activation for the input $x^{(i)}$. The average activation of neuron $j$ is then

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} b_j\big[x^{(i)}\big],$$

where $m$ denotes the number of inputs $x$.

The sparsity constraint imposed on the network is

$$\hat{\rho}_j = \rho,$$

where the sparsity parameter $\rho$ is typically a small value close to 0.

Further, to enforce the sparsity limitation, $\hat{\rho}_j$ must be kept close to $\rho$, so a penalty factor is set as follows:

$$\sum_{j=1}^{S_2} \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big),$$

where $S_2$ denotes the number of neurons in the neural network intermediate layer B.

Based on relative entropy, the penalty factor is expressed as

$$\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j},$$

i.e., the relative entropy between two Bernoulli random variables with means $\rho$ and $\hat{\rho}_j$, respectively.

When $\hat{\rho}_j = \rho$, the relative entropy equals 0; as the difference between $\hat{\rho}_j$ and $\rho$ increases, the relative entropy increases monotonically.

Setting the sparsity parameter $\rho$ to 0.2, the curve of $\mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big)$ as a function of $\hat{\rho}_j$ is shown in fig. 3. It can be seen that the relative entropy reaches its minimum value of 0 at $\hat{\rho}_j = \rho$, and becomes very large, tending to infinity, as $\hat{\rho}_j$ approaches 0 or 1; therefore, minimizing the penalty factor drives $\hat{\rho}_j$ toward $\rho$.

Thus, the overall cost function of the sparse self-encoder neural network is

$$J_{\mathrm{sparse}}(W,b) = J(W,b) + \beta \sum_{j=1}^{S_2} \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big),$$

where $J(W,b)$ is the reconstruction error term and $\beta$ is the weight controlling the sparsity penalty factor; this algorithm can greatly reduce the dimensionality of the initial data.
In the self-encoder, the encoder is connected to the neural network intermediate layer B, which in turn is connected to the decoder, with adjacent layers fully connected. By minimizing the reconstruction error, the network parameters can be learned efficiently to obtain the desired characteristic parameters; a short training-loop sketch follows.
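A minimal training-loop sketch under the same assumptions, reusing the hypothetical `SparseAutoencoderBranch` and `total_cost` helpers sketched above; the optimizer choice and hyperparameters are illustrative.

```python
import torch

model_b = SparseAutoencoderBranch()                    # from the sketch above
optimizer = torch.optim.Adam(model_b.parameters(), lr=1e-3)

features = torch.randn(256, 14)                        # stand-in for real initial feature vectors
for epoch in range(50):
    optimizer.zero_grad()
    recon, feature_b = model_b(features)
    loss = total_cost(features, recon, feature_b)      # reconstruction error + sparsity penalty
    loss.backward()                                    # output error fed back to train layer B
    optimizer.step()
```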
Step four: the feature matrix A, which reflects the relationship between the initial features and the radiation source and loading platform categories, is spliced with the feature matrix B, which reflects the inherent attributes of the initial features, to obtain the final complex-environment characteristic parameters; a concluding sketch is given below.
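The splicing step can be illustrated by concatenating the two hidden-layer outputs; this reuses the hypothetical `model_a` and `model_b` branches sketched above.

```python
import torch

x = torch.randn(8, 14)                                  # batch of initial feature vectors
_, feature_a = model_a(x)                               # from the classification branch
_, feature_b = model_b(x)                               # from the sparse self-encoder branch

# Splice (concatenate) feature matrix A and feature matrix B column-wise to
# obtain the final complex-environment characteristic parameters.
final_features = torch.cat([feature_a, feature_b], dim=1)
print(final_features.shape)                             # (8, 32 + 6) = (8, 38)
```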
The method combines classification-neural-network recognition with sparse-self-encoder recognition, analyzes and studies the essential nature of the radiation source signal in depth, explores new characteristic parameters, constructs a feature vector better suited to signal identification, and improves the capability of identifying radar radiation source signals in a complex environment.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents and improvements made by those skilled in the art within the spirit and scope of the present invention should be included in the present invention.
Claims (3)
1. A radar radiation source characteristic parameter extraction method under a complex environment based on deep learning is characterized by comprising the following steps:
extracting initial features: extracting parameter information of the radiation source and the loading platform as initial features;
constructing a classification neural network: taking the initial features as input, constructing an upper-layer classification neural network with the structure 'initial features - neural network intermediate layer A - radiation source and loading platform category', and outputting, through the neural network intermediate layer A, a feature matrix A that maps the relationship between the initial features and the radiation source and loading platform categories;
constructing a sparse self-encoder network: taking the initial features as both input and output quantities, constructing a lower-layer sparse self-encoder network with the structure 'initial features - encoder - neural network intermediate layer B - decoder', and outputting, through the neural network intermediate layer B, a feature matrix B in which the intrinsic attributes of the initial features are deeply refined;
splicing the feature matrices: splicing the feature matrix A, which reflects the relationship between the initial features and the radiation source and loading platform categories, with the feature matrix B, which reflects the inherent attributes of the initial features, to obtain the final complex-environment characteristic parameters;
in the sparse self-encoder network, the encoder performs dimensionality reduction on the initial features and refines their kernel information; the decoder is used to train the encoder by judging whether the information extracted by the encoder is accurate, i.e., whether features carrying the same amount of information as the initial features have been obtained, and by feeding the output error back so as to train the neural network intermediate layer B and output the feature matrix B in which the initial features are deeply refined.
2. The method for extracting the characteristic parameters of the radar radiation source in the complex environment based on the deep learning as claimed in claim 1, wherein: the initial-feature parameters for the radiation source comprise the carrier frequency, pulse width, arrival angle, pulse repetition frequency and antenna scanning period of the radar, combined with the pulse arrival time, pulse envelope parameters, intra-pulse modulation parameters, amplitude and spectrum parameters of communication and interference signals;
the parameters of the initial characteristics for the loading platform comprise the moving speed and the spatial position parameters of the loading platform.
3. The method for extracting the characteristic parameters of the radar radiation source in the complex environment based on the deep learning as claimed in claim 1, wherein: information in the classification neural network propagates in a single direction, and the neural network intermediate layer A is trained in a supervised learning manner.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910462165.3A CN110187321B (en) | 2019-05-30 | 2019-05-30 | Radar radiation source characteristic parameter extraction method based on deep learning in complex environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910462165.3A CN110187321B (en) | 2019-05-30 | 2019-05-30 | Radar radiation source characteristic parameter extraction method based on deep learning in complex environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110187321A CN110187321A (en) | 2019-08-30 |
CN110187321B true CN110187321B (en) | 2022-07-22 |
Family
ID=67718895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910462165.3A Active CN110187321B (en) | 2019-05-30 | 2019-05-30 | Radar radiation source characteristic parameter extraction method based on deep learning in complex environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110187321B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021203242A1 (en) * | 2020-04-07 | 2021-10-14 | 东莞理工学院 | Deep learning-based mimo multi-antenna signal transmission and detection technologies |
CN112034434B (en) * | 2020-09-04 | 2022-05-20 | 中国船舶重工集团公司第七二四研究所 | Radar radiation source identification method based on sparse time-frequency detection convolutional neural network |
CN112308008B (en) * | 2020-11-12 | 2022-05-17 | 电子科技大学 | Radar radiation source individual identification method based on working mode open set of transfer learning |
CN112859025B (en) * | 2021-01-05 | 2023-12-01 | 河海大学 | Radar signal modulation type classification method based on hybrid network |
CN117347961B (en) * | 2023-12-04 | 2024-02-13 | 中国电子科技集团公司第二十九研究所 | Radar function attribute identification method based on Bayesian learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107545903A (en) * | 2017-07-19 | 2018-01-05 | 南京邮电大学 | A kind of phonetics transfer method based on deep learning |
CN108090412A (en) * | 2017-11-17 | 2018-05-29 | 西北工业大学 | A kind of radar emission source category recognition methods based on deep learning |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4025920A (en) * | 1972-09-28 | 1977-05-24 | Westinghouse Electric Corporation | Identification of radar systems |
US4945494A (en) * | 1989-03-02 | 1990-07-31 | Texas Instruments Incorporated | Neural network and system |
CN105512680B (en) * | 2015-12-02 | 2019-01-08 | 北京航空航天大学 | A kind of more view SAR image target recognition methods based on deep neural network |
WO2018106805A1 (en) * | 2016-12-09 | 2018-06-14 | William Marsh Rice University | Signal recovery via deep convolutional networks |
CN107238822B (en) * | 2017-06-13 | 2020-05-26 | 电子科技大学 | Method for extracting orthogonal nonlinear subspace characteristics of true and false target one-dimensional range profile |
CN107194433B (en) * | 2017-06-14 | 2019-09-13 | 电子科技大学 | A kind of Radar range profile's target identification method based on depth autoencoder network |
US10591586B2 (en) * | 2017-07-07 | 2020-03-17 | Infineon Technologies Ag | System and method for identifying a target using radar sensors |
US11645835B2 (en) * | 2017-08-30 | 2023-05-09 | Board Of Regents, The University Of Texas System | Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications |
CN107610692B (en) * | 2017-09-22 | 2020-07-21 | 杭州电子科技大学 | Voice recognition method based on neural network stacking self-encoder multi-feature fusion |
CN107832787B (en) * | 2017-10-31 | 2020-09-22 | 杭州电子科技大学 | Radar radiation source identification method based on bispectrum self-coding characteristics |
US10592787B2 (en) * | 2017-11-08 | 2020-03-17 | Adobe Inc. | Font recognition using adversarial neural network training |
CN109545227B (en) * | 2018-04-28 | 2023-05-09 | 华中师范大学 | Depth self-coding network-based speaker sex automatic identification method and system |
CN109285168B (en) * | 2018-07-27 | 2022-02-11 | 河海大学 | Deep learning-based SAR image lake boundary extraction method |
CN109343046B (en) * | 2018-09-19 | 2023-03-24 | 成都理工大学 | Radar gait recognition method based on multi-frequency multi-domain deep learning |
CN109614905B (en) * | 2018-12-03 | 2022-10-21 | 中国人民解放军空军工程大学 | Automatic extraction method for depth intra-pulse features of radar radiation source signals |
2019-05-30: Application CN201910462165.3A (CN) filed; granted as patent CN110187321B; status: Active.
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107545903A (en) * | 2017-07-19 | 2018-01-05 | 南京邮电大学 | A kind of phonetics transfer method based on deep learning |
CN108090412A (en) * | 2017-11-17 | 2018-05-29 | 西北工业大学 | A kind of radar emission source category recognition methods based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN110187321A (en) | 2019-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110187321B (en) | Radar radiation source characteristic parameter extraction method based on deep learning in complex environment | |
CN110222748B (en) | OFDM radar signal identification method based on 1D-CNN multi-domain feature fusion | |
CN110133599B (en) | Intelligent radar radiation source signal classification method based on long-time and short-time memory model | |
CN113298846B (en) | Interference intelligent detection method based on time-frequency semantic perception | |
CN113050042A (en) | Radar signal modulation type identification method based on improved UNet3+ network | |
CN109102029B (en) | Method for evaluating quality of synthesized face sample by using information maximization generation confrontation network model | |
Zhang et al. | Polarimetric HRRP recognition based on ConvLSTM with self-attention | |
CN110532932A (en) | A kind of multi -components radar emitter signal intra-pulse modulation mode recognition methods | |
CN110120926A (en) | Modulation mode of communication signal recognition methods based on evolution BP neural network | |
CN114895263B (en) | Radar active interference signal identification method based on deep migration learning | |
CN113780242A (en) | Cross-scene underwater sound target classification method based on model transfer learning | |
CN109711314A (en) | Radar emitter signal classification method based on Fusion Features and SAE | |
CN113759323A (en) | Signal sorting method and device based on improved K-Means combined convolution self-encoder | |
CN111010356A (en) | Underwater acoustic communication signal modulation mode identification method based on support vector machine | |
CN114355298A (en) | Radar composite modulation pulse signal identification method | |
CN116047427B (en) | Small sample radar active interference identification method | |
CN116482618B (en) | Radar active interference identification method based on multi-loss characteristic self-calibration network | |
CN116797796A (en) | Signal identification method based on time-frequency analysis and deep learning under DRFM intermittent sampling | |
CN111985349B (en) | Classification recognition method and system for radar received signal types | |
CN116471154A (en) | Modulation signal identification method based on multi-domain mixed attention | |
CN113569773B (en) | Interference signal identification method based on knowledge graph and Softmax regression | |
Lu et al. | Convolutional neural networks for hydrometeor classification using dual polarization Doppler radars | |
CN115079116A (en) | Radar target identification method based on Transformer and time convolution network | |
Zhong et al. | A climate adaptation device-free sensing approach for target recognition in foliage environments | |
Ruan et al. | Automatic recognition of radar signal types based on CNN-LSTM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||