
CN108985445A - A kind of target bearing SAR discrimination method based on machine Learning Theory - Google Patents

A kind of target bearing SAR discrimination method based on machine Learning Theory

Info

Publication number
CN108985445A
CN108985445A
Authority
CN
China
Prior art keywords
sar
machine learning
sar image
method based
learning theory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810806537.5A
Other languages
Chinese (zh)
Inventor
张寅
裴季方
王雯璟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhida Technology Co Ltd
Original Assignee
Chengdu Zhida Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhida Technology Co Ltd filed Critical Chengdu Zhida Technology Co Ltd
Priority to CN201810806537.5A
Publication of CN108985445A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/048 - Activation functions
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a SAR target azimuth identification method based on machine learning theory. Starting from sample data with discretized azimuth angles, the data are mapped into a low-dimensional space by a projection matrix solved with an optimization method, so that samples with different azimuths are separated and samples with the same azimuth are aggregated, and features that distinguish samples of different azimuths are extracted. On this basis, a multilayer feedforward neural network built on machine learning theory automatically extracts the limited features of the different azimuths and gives the azimuth prediction result.

Description

SAR target orientation identification method based on machine learning theory
Technical Field
The invention belongs to the field of radar image processing, and particularly relates to an automatic target orientation identification technology for a synthetic aperture radar.
Background
Synthetic Aperture Radar (SAR) is a high-resolution microwave imaging radar capable of all-weather, day-and-night operation. It is widely used in battlefield reconnaissance, geographic information acquisition, agricultural and forestry environment monitoring, geological and landform exploration, ocean resource utilization and other fields, and therefore has great civil and military value. Identifying or estimating the azimuth angle of a target from a SAR image is one of the important research directions in SAR image processing. Because the scattering produced by man-made targets is highly variable, the azimuth angle of a target in a SAR image is difficult to judge accurately, which increases the difficulty of manual interpretation. SAR Target Azimuth Identification (TAI) builds on modern signal processing, pattern recognition and related theories to extract target azimuth features and estimate target azimuth information without manual intervention, providing strong technical support for SAR automatic target recognition and other applications.
The currently mainstream SAR TAI methods fall mainly into image-domain methods and transform-domain methods. However, traditional methods usually require image preprocessing, suffer from high algorithmic complexity and poor stability, and have difficulty extracting optimal target azimuth features and producing efficient, accurate azimuth estimates. With the rise and development of artificial intelligence theory, neural networks, as machine learning algorithms with strong adaptive ability, have been widely applied to image classification, speech signal processing and many other fields, opening up new ideas and directions for SAR TAI.
The document "Yiche Jiang, Xiaohui Zhao, Yun Zhang, Bin Hu, and Yuan Zhuang, 'society estimation based on optimization information in SAR images,' in Radar Conference (RadarConf), 2016 IEEE, 2016, pp. 1-4" proposes a SAR image azimuth estimation method based on target geometric information, but this method needs image preprocessing to improve the accuracy of attitude estimation, which increases the complexity of the algorithm, and it is sensitive to speckle, shadow and other variations in SAR images.
The document "Lance M. Kaplan and Romain Murenzi, 'Pose estimation of SAR imagery using the two dimensional continuous wavelet transform,' Pattern Recognition Letters, vol. 24, no. 14, pp. 2269-2280, 2003" proposes a SAR image azimuth estimation method based on the two-dimensional continuous wavelet transform. This method reduces the computational complexity, but its stability fluctuates considerably across different targets and it produces obvious errors when the target shape changes. The problem that target azimuth estimation in an original SAR image is affected by scattering phenomena and target changes has therefore still not been reasonably solved.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a SAR target orientation identification method based on machine learning theory, which can automatically extract effective features of samples with different azimuths and give the azimuth identification result.
The technical scheme adopted by the invention is as follows: a SAR target orientation identification method based on a machine learning theory comprises the following steps:
S1, collecting an original SAR image;
S2, preprocessing the original SAR image acquired in step S1 to obtain a SAR image slice with the target located at the center;
S3, setting the number of discrete azimuth angles according to the actual imaging conditions and performance indexes;
S4, generating SAR image slices for training according to the number of discrete azimuth angles set in step S3;
S5, generating sample data projected into a low-dimensional space from the SAR image slices generated in step S4;
S6, constructing a multilayer feedforward neural network;
S7, training the multilayer feedforward neural network constructed in step S6 with the sample data obtained in step S5.
Further, step S2 specifically comprises: in accordance with the network structure, producing a SAR image slice with the target located at the center from the original SAR image acquired in step S1, and performing power-transformation-based gray-level enhancement on the sliced image.
Further, step S4 specifically comprises: discretizing the azimuth angle range according to the number of discrete azimuth angles set in step S3, and then classifying the SAR images into sample sets of the same view angle according to the discretized azimuth angles.
Further, step S5 specifically comprises: obtaining an optimal projection matrix by an optimization method, and generating, according to the optimal projection matrix, the sample sets obtained by projecting the same-azimuth sample sets of step S4 into a low-dimensional space.
Furthermore, the columns of the optimal projection matrix are the generalized eigenvectors corresponding to the r largest eigenvalues of the following formula, where r < n:
X (D^(b) − W^(b)) X^T v = λ X (D^(s) − W^(s)) X^T v
where X = [x_1, x_2, …, x_N], D^(s) and D^(b) are diagonal matrices whose diagonal elements are D^(s)_ii = Σ_j W^(s)_ij and D^(b)_ii = Σ_j W^(b)_ij, respectively, λ is a generalized eigenvalue, and v is the corresponding generalized eigenvector.
Further, the multilayer feedforward neural network of step S6 comprises: an input layer with r neurons, four hidden layers and a Softmax layer with m neurons; each neuron of the input layer receives one dimension of the feature of one SAR image; each hidden layer is a fully connected layer, and the finally extracted features are input into the Softmax layer.
The beneficial effects of the invention are as follows. The SAR target azimuth identification method based on machine learning theory disclosed by the invention starts from sample data with discretized azimuth angles and maps the data into a low-dimensional space using a projection matrix solved by an optimization method, so that samples with different azimuths are separated and samples with the same azimuth are aggregated, and the features that distinguish samples of different azimuths are extracted. Based on machine learning theory, a multilayer feedforward neural network is constructed that automatically extracts the limited features of the different azimuths and gives the azimuth prediction result, achieving efficient and accurate estimation of the SAR target azimuth. The invention adjusts the number of discrete azimuth angles and the network structure according to the actual image characteristics and performance indexes, and is therefore flexible, accurate, efficient and easy to integrate into a system.
Drawings
FIG. 1 is a flow chart of a scheme provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a specific network structure according to an embodiment of the present invention;
FIG. 3 shows the azimuth angle discrimination results of a scene SAR image according to an embodiment of the present invention;
FIG. 4 is a comparison of the mean absolute difference of the results of the embodiment of the present invention with those of other methods.
Detailed Description
In order to facilitate understanding of the technical contents of the present invention by those skilled in the art, the following terms are first explained.
Terms: azimuth angle identification
The azimuth angle refers to the counterclockwise angle from the radar range direction to the principal axis of the target in synthetic aperture radar imaging. Azimuth angle identification means identifying the azimuth angle from the image characteristics of the target in the image formed by the synthetic aperture radar. A specific estimation method is described in "Si C, Jian Y, Song X. A New Method for Target Aspect Estimation of SAR Images [C]. International Conference on Multimedia Technology. IEEE, 2010: 1-4."
FIG. 1 shows the flow chart of the scheme provided by the present invention. The technical scheme of the present invention is as follows: a SAR target orientation identification method based on machine learning theory comprises the following steps.
S1, collecting original SAR images; specifically: acquiring original SAR images with the same resolution and different azimuth angles. In this embodiment, the azimuth angles of the original SAR images are distributed over the range 0° to 360°.
S2, preprocessing the original SAR images acquired in step S1; specifically: in accordance with the network structure, image slices with the target located at the center are produced from the original SAR images collected in S1, and power-transformation-based gray-level enhancement is applied to the sliced images. Let x denote the SAR image before enhancement and x' the SAR image after enhancement; then x'(u, v) = [x(u, v)]^β, where u and v denote the horizontal and vertical coordinates, respectively, and β is the enhancement factor.
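As an illustration only, a minimal sketch of the power-transformation gray-level enhancement is given below; the normalization step and the default value β = 0.5 are assumptions, since neither is fixed by the text.

```python
import numpy as np

def power_enhance(slice_img, beta=0.5):
    """Power-transformation gray-level enhancement: x'(u, v) = [x(u, v)]**beta.

    slice_img: 2-D array holding a SAR image slice with the target at the center.
    beta: enhancement factor (assumed value; the text does not specify it).
    """
    x = slice_img.astype(np.float64)
    x = x / (x.max() + 1e-12)   # scale to [0, 1] before the power transform (assumed convention)
    return x ** beta
```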
S3, setting the number of discrete azimuth angles according to the actual imaging conditions and performance indexes.
S4, generating SAR image slices for training according to the number of discrete azimuth angles set in step S3; specifically: the azimuth angle range [0°, 360°) is discretized into {φ_j | φ_j ∈ [0°, 360°), j = 1, 2, …, m}, and the SAR image slices {x_i ∈ R^n, i = 1, 2, …, N} are assigned, according to their discretized azimuth angles, to sample sets of the same view angle, the discretized azimuth angle of x_i being denoted φ(x_i), where R^n denotes the n-dimensional space of the vectorized SAR images.
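A minimal sketch of the azimuth discretization, assuming m uniform bins over [0°, 360°); the uniform-bin layout is an assumption, since the text only fixes the number of discrete azimuth angles m.

```python
import numpy as np

def discretize_azimuth(azimuths_deg, m):
    """Assign each slice's azimuth angle in [0, 360) to one of m discrete bins."""
    az = np.asarray(azimuths_deg, dtype=np.float64) % 360.0
    edges = np.linspace(0.0, 360.0, m + 1)       # assumed uniform bin edges
    labels = np.digitize(az, edges[1:-1])        # bin index in 0 .. m-1
    centers = 0.5 * (edges[:-1] + edges[1:])     # representative azimuth phi_j of each bin
    return labels, centers
```

Slices that receive the same label then form one same-view-angle sample set.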
S5, generating sample data projected into a low-dimensional space from the SAR image slices generated in step S4; specifically: from the m same-azimuth sample sets generated in S4, an optimal projection matrix V ∈ R^(n×r) is obtained by an optimization method, and the set of samples projected into the low-dimensional space, {z_i = V^T x_i ∈ R^r (r < n), i = 1, 2, …, N}, is generated, maximizing the discrimination between samples from different view angles.
S51, let V ∈ R^(n×r) denote the linear mapping that projects the samples into the low-dimensional space, let J_s(V) denote the objective function over samples with the same azimuth angle, and let J_b(V) denote the objective function over samples with different azimuth angles; they are computed as
J_s(V) = (1/2) Σ_{i,j} ||V^T x_i − V^T x_j||^2 W^(s)_ij   (1)
J_b(V) = (1/2) Σ_{i,j} ||V^T x_i − V^T x_j||^2 W^(b)_ij   (2)
where φ(x_i) denotes the discretized azimuth angle of x_i; the weight W^(s)_ij is nonzero only for sample pairs with the same discretized azimuth angle, i.e. φ(x_i) = φ(x_j), and is zero otherwise, while the weight W^(b)_ij is nonzero only for sample pairs with different discretized azimuth angles, i.e. φ(x_i) ≠ φ(x_j), and is zero otherwise.
S52, J_s(V) and J_b(V) can be simplified as:
J_s(V) = trace{V^T X (D^(s) − W^(s)) X^T V}   (3)
J_b(V) = trace{V^T X (D^(b) − W^(b)) X^T V}   (4)
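For reference, the step from the pairwise objectives (1)-(2) to the trace forms (3)-(4) is the standard graph Laplacian identity below (a sketch, assuming the weight matrix W, standing for W^(s) or W^(b), is symmetric, with D the corresponding diagonal degree matrix):

```latex
\frac{1}{2}\sum_{i,j} W_{ij}\,\bigl\lVert V^{T}x_i - V^{T}x_j \bigr\rVert^{2}
  = \sum_{i} D_{ii}\, x_i^{T}VV^{T}x_i \;-\; \sum_{i,j} W_{ij}\, x_i^{T}VV^{T}x_j
  = \operatorname{trace}\bigl\{ V^{T} X (D - W) X^{T} V \bigr\},
  \qquad D_{ii} = \sum_{j} W_{ij}.
```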
The subspace discriminant analysis of the azimuth angle can then be modeled as the optimization problem of maximizing the separation between different-azimuth samples while minimizing the scatter among same-azimuth samples:
V* = argmax_V J_b(V) / J_s(V)   (5)
the column of the solved linear projection V is the generalized eigenvector corresponding to the r largest eigenvalues in the following equation.
X(D(b)-W(b))XTv=λX(D(s)-W(s))XTv (6)
Solving a linear projection V to generate a sample z projected to a low-dimensional spacei=VTxi∈Rr
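Purely as an illustration, the sketch below solves the generalized eigenvalue problem (6) with SciPy and forms the projection. The simple same-/different-azimuth indicator weights and the regularization term eps are assumptions standing in for the weight definitions of S51, not a reproduction of them.

```python
import numpy as np
from scipy.linalg import eigh

def learn_projection(X, labels, r, eps=1e-6):
    """X: (n, N) matrix whose columns are vectorized slices; labels: (N,) azimuth-bin indices."""
    n, _ = X.shape
    labels = np.asarray(labels)
    same = (labels[:, None] == labels[None, :]).astype(np.float64)
    Ws, Wb = same, 1.0 - same                      # assumed indicator weights W^(s), W^(b)
    Ds, Db = np.diag(Ws.sum(axis=1)), np.diag(Wb.sum(axis=1))
    Ss = X @ (Ds - Ws) @ X.T + eps * np.eye(n)     # within-azimuth scatter, regularized (assumption)
    Sb = X @ (Db - Wb) @ X.T                       # between-azimuth scatter
    vals, vecs = eigh(Sb, Ss)                      # generalized eigenproblem of eq. (6)
    return vecs[:, np.argsort(vals)[::-1][:r]]     # eigenvectors of the r largest eigenvalues
```

The projected training samples are then obtained as Z = V.T @ X, whose columns z_i feed the network of step S6.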
S6, constructing the multilayer feedforward neural network; specifically: the network comprises an input layer with r neurons, four hidden layers and a Softmax layer with m neurons. Each neuron of the input layer receives one dimension of the feature of a SAR image; each hidden layer is a fully connected layer, and the finally extracted features are fed into the Softmax layer to obtain the label of the azimuth angle of the image sample.
FIG. 2 shows the specific network structure of an embodiment of the present invention. In this embodiment, the dimension-reduced image sample z_i = V^T x_i ∈ R^r is an r-dimensional feature, and the r neurons of the input layer take this r-dimensional feature as input; the features obtained after processing by the four hidden layers, composed of fully connected layers, are taken as the input of the following Softmax layer. In this embodiment, the Softmax layer is used as the classifier to obtain the final class label.
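A minimal PyTorch sketch of such a network, assuming ReLU activations and hidden-layer widths of 512, 256, 128 and 64 neurons; neither the activation function nor the hidden-layer widths are specified in the text, so both are assumptions.

```python
import torch.nn as nn

class AzimuthNet(nn.Module):
    """Input layer with r neurons, four fully connected hidden layers, and m-way output."""
    def __init__(self, r, m, hidden=(512, 256, 128, 64)):    # hidden widths are assumed
        super().__init__()
        layers, prev = [], r
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]         # ReLU activation is an assumption
            prev = h
        layers.append(nn.Linear(prev, m))                     # logits for the m azimuth bins
        self.net = nn.Sequential(*layers)

    def forward(self, z):                                     # z: (batch, r) projected features
        return self.net(z)                                    # softmax is applied in the loss / at test time
```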
S7, training the feedforward neural network: the low-dimensional sample data obtained in step S5 are input into the multilayer feedforward neural network constructed in step S6 for forward propagation, and the cost function value is computed; the parameters of the multilayer feedforward neural network are then updated with a back-propagation algorithm based on gradient descent; forward and backward propagation are iterated until the cost function converges. Specifically:
S71, forward propagation: let a^(l) denote the features of the l-th layer (l ≥ 2); then
a^(l) = σ(w^(l) a^(l−1) + b^(l))   (7)
where a^(l) represents the features of the l-th layer, w^(l) denotes the weights of the l-th layer, and b^(l) is the bias term of the l-th layer.
If the L-th layer is the output layer, the posterior probability that the current sample belongs to the i-th class is
p_i = exp(z_i^(L)) / Σ_{j=1}^{m} exp(z_j^(L))   (8)
where z^(L) represents the input to the output layer and m represents the total number of discrete azimuth angles.
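A NumPy sketch of the forward pass of equations (7) and (8); the tanh hidden activation is an assumption, since σ is left generic in the text.

```python
import numpy as np

def forward(z, weights, biases, sigma=np.tanh):
    """weights/biases: lists of per-layer parameters; the last pair produces z(L)."""
    a = z
    for w, b in zip(weights[:-1], biases[:-1]):
        a = sigma(w @ a + b)               # eq. (7): a(l) = sigma(w(l) a(l-1) + b(l))
    z_L = weights[-1] @ a + biases[-1]     # input z(L) to the output (Softmax) layer
    e = np.exp(z_L - z_L.max())            # max subtraction for numerical stability
    return e / e.sum()                     # eq. (8): posterior probabilities over the m classes
```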
S72, calculating the cost function value: in the embodiment shown in FIG. 2, the cross-entropy function is taken as the cost function, computed as
L(w, b) = − Σ_{i=1}^{N} Σ_{k=1}^{m} y_{i,k} log p_{i,k}   (9)
where L(w, b) represents the cost function, w and b represent the sets of weight and bias terms in the network, respectively, y_{i,k} equals 1 if sample i belongs to class k and 0 otherwise, and p_{i,k} is the posterior probability of sample i for class k given by equation (8).
S73, updating the network parameters with the gradient-descent-based back-propagation algorithm, with the update formula
w^(l) ← w^(l) − α ∂L(w, b)/∂w^(l),   b^(l) ← b^(l) − α ∂L(w, b)/∂b^(l)   (10)
where α is the learning rate.
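A minimal PyTorch training loop matching steps S71-S73: forward propagation, cross-entropy cost and gradient-descent parameter updates. The learning rate, epoch count and full-batch updates are assumptions.

```python
from torch import nn, optim

def train(model, Z, y, alpha=1e-2, epochs=200):
    """Z: (N, r) float tensor of projected samples; y: (N,) long tensor of azimuth-bin labels."""
    criterion = nn.CrossEntropyLoss()                      # cross-entropy cost of S72
    optimizer = optim.SGD(model.parameters(), lr=alpha)    # gradient-descent update of S73
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(Z), y)    # forward propagation and cost evaluation (S71, S72)
        loss.backward()                  # backward propagation of gradients
        optimizer.step()                 # w <- w - alpha * dL/dw, b <- b - alpha * dL/db
    return model
```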
Finally, the invention also comprises testing the azimuth discrimination performance of the network trained in step S7, specifically: the dimension-reduced, projected test samples are input into the network trained in S7 for forward propagation to obtain the posterior probabilities of each test sample belonging to each class; the posterior probabilities of the classes are compared, and the class with the maximum posterior probability is taken as the final azimuth angle identification result.
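A sketch of this test stage: project the test slice with the learned V, run a forward pass, and take the azimuth bin with the maximum posterior probability; the helper name and tensor handling are illustrative only.

```python
import numpy as np
import torch

def identify_azimuth(model, V, x_test):
    """V: (n, r) projection matrix; x_test: (n,) vectorized test slice. Returns the azimuth-bin index."""
    z = torch.as_tensor(V.T @ np.asarray(x_test, dtype=np.float64),
                        dtype=torch.float32).unsqueeze(0)    # (1, r) projected feature
    with torch.no_grad():
        posteriors = torch.softmax(model(z), dim=1)          # posterior probability of each class
    return int(posteriors.argmax(dim=1).item())              # class with the maximum posterior
```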
FIG. 3 shows the azimuth angle discrimination results of a scene SAR image according to an embodiment of the present invention; FIG. 4 compares the mean absolute difference of the results of the embodiment of the present invention with those of other methods. The results show that the SAR target azimuth identification method can use the sample data projected into the low-dimensional space to identify the SAR target azimuth efficiently, and that its accuracy has an obvious advantage over the other methods. In FIG. 4, "Mean absolute difference" denotes the mean absolute difference and "Aspect identification methods" denotes the azimuth identification methods being compared.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and should not be construed as limiting the invention to the specifically recited embodiments and examples. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the scope of the claims of the present invention.

Claims (6)

1. A SAR target orientation identification method based on a machine learning theory is characterized by comprising the following steps:
S1, collecting an original SAR image;
S2, preprocessing the original SAR image acquired in step S1 to obtain a SAR image slice with the target located at the center;
S3, setting the number of discrete azimuth angles according to the actual imaging conditions and performance indexes;
S4, generating SAR image slices for training according to the number of discrete azimuth angles set in step S3;
S5, generating sample data projected into a low-dimensional space from the SAR image slices generated in step S4;
S6, constructing a multilayer feedforward neural network;
S7, training the multilayer feedforward neural network constructed in step S6 with the sample data obtained in step S5.
2. The SAR target orientation identification method based on the machine learning theory as claimed in claim 1, wherein step S2 specifically comprises: in accordance with the network structure, producing a SAR image slice with the target located at the center from the original SAR image acquired in step S1, and performing power-transformation-based gray-level enhancement on the sliced image.
3. The SAR target orientation identification method based on the machine learning theory as claimed in claim 2, wherein step S4 specifically comprises: discretizing the azimuth angle range according to the number of discrete azimuth angles set in step S3, and then classifying the SAR images into sample sets of the same view angle according to the discretized azimuth angles.
4. The SAR target orientation identification method based on the machine learning theory as claimed in claim 3, wherein step S5 specifically comprises: obtaining an optimal projection matrix by an optimization method, and generating, according to the optimal projection matrix, the sample sets obtained by projecting the same-azimuth sample sets of step S4 into a low-dimensional space.
5. The SAR target orientation identification method based on the machine learning theory as claimed in claim 4, wherein the columns of the optimal projection matrix are the generalized eigenvectors corresponding to the r largest eigenvalues of the following formula, where r < n:
X (D^(b) − W^(b)) X^T v = λ X (D^(s) − W^(s)) X^T v
where X = [x_1, x_2, …, x_N], D^(s) and D^(b) are diagonal matrices whose diagonal elements are D^(s)_ii = Σ_j W^(s)_ij and D^(b)_ii = Σ_j W^(b)_ij, respectively, λ is a generalized eigenvalue, and v is the corresponding generalized eigenvector.
6. The SAR target orientation identification method based on the machine learning theory as claimed in claim 5, wherein the multilayer feedforward neural network of step S6 comprises: an input layer with r neurons, four hidden layers and a Softmax layer with m neurons; each neuron of the input layer receives one dimension of the feature of one SAR image; each hidden layer is a fully connected layer, and the finally extracted features are input into the Softmax layer.
CN201810806537.5A 2018-07-18 2018-07-18 A kind of target bearing SAR discrimination method based on machine Learning Theory Pending CN108985445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810806537.5A CN108985445A (en) 2018-07-18 2018-07-18 A kind of target bearing SAR discrimination method based on machine Learning Theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810806537.5A CN108985445A (en) 2018-07-18 2018-07-18 A kind of target bearing SAR discrimination method based on machine Learning Theory

Publications (1)

Publication Number Publication Date
CN108985445A true CN108985445A (en) 2018-12-11

Family

ID=64548786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810806537.5A Pending CN108985445A (en) 2018-07-18 2018-07-18 A kind of target bearing SAR discrimination method based on machine Learning Theory

Country Status (1)

Country Link
CN (1) CN108985445A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080150786A1 (en) * 1997-10-22 2008-06-26 Intelligent Technologies International, Inc. Combined Imaging and Distance Monitoring for Vehicular Applications
CN101807258A (en) * 2010-01-08 2010-08-18 西安电子科技大学 SAR (Synthetic Aperture Radar) image target recognizing method based on nuclear scale tangent dimensionality reduction
CN102967859A (en) * 2012-11-14 2013-03-13 电子科技大学 Forward-looking scanning radar imaging method
CN103456015A (en) * 2013-09-06 2013-12-18 电子科技大学 SAR target detection method based on optimal fractional domain Gabor spectrum features
CN103852759A (en) * 2014-04-08 2014-06-11 电子科技大学 Scanning radar super-resolution imaging method
CN106408030A (en) * 2016-09-28 2017-02-15 武汉大学 SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN106772380A (en) * 2017-03-31 2017-05-31 电子科技大学 A kind of circumferential synthetic aperture radar imaging method
CN108038445A (en) * 2017-12-11 2018-05-15 电子科技大学 A kind of SAR automatic target recognition methods based on various visual angles deep learning frame
CN108256436A (en) * 2017-12-25 2018-07-06 上海交通大学 A kind of radar HRRP target identification methods based on joint classification

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JIFANG PEI 等: "Multiview Synthetic Aperture Radar Automatic Target Recognition Optimization: Modeling and Implementation", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 *
JIFANG PEI 等: "SAR Automatic Target Recognition Based on Multiview Deep Learning Framework", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 *
JIFANG PEI 等: "Target Aspect Identification in SAR Image: A Machine Learning Approach", 《2018 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM》 *
裴季方: "基于邻域判别嵌入的SAR自动目标识别研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
裴季方: "多视角SAR目标识别方法研究", 《中国博士学位论文全文数据库 信息科技辑》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101249A (en) * 2020-09-18 2020-12-18 电子科技大学 SAR target type identification method based on deep convolutional memory network

Similar Documents

Publication Publication Date Title
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN110135267B (en) Large-scene SAR image fine target detection method
Sharifzadeh et al. Ship classification in SAR images using a new hybrid CNN–MLP classifier
CN108830296B (en) Improved high-resolution remote sensing image classification method based on deep learning
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
CN107194336B (en) Polarized SAR image classification method based on semi-supervised depth distance measurement network
Han et al. Combining 3D‐CNN and Squeeze‐and‐Excitation Networks for Remote Sensing Sea Ice Image Classification
CN107563355A (en) Hyperspectral abnormity detection method based on generation confrontation network
CN108734171A (en) A kind of SAR remote sensing image ocean floating raft recognition methods of depth collaboration sparse coding network
CN109117739A (en) One kind identifying projection properties extracting method based on neighborhood sample orientation
CN108428220A (en) Satellite sequence remote sensing image sea island reef region automatic geometric correction method
CN111680579B (en) Remote sensing image classification method for self-adaptive weight multi-view measurement learning
CN107358214A (en) Polarization SAR terrain classification method based on convolutional neural networks
WO2023273337A1 (en) Representative feature-based method for detecting dense targets in remote sensing image
CN109558803B (en) SAR target identification method based on convolutional neural network and NP criterion
CN115908924A (en) Multi-classifier-based small sample hyperspectral image semantic segmentation method and system
CN109145993B (en) SAR image classification method based on multi-feature and non-negative automatic encoder
CN105160666B (en) SAR image change detection based on Non-Stationary Analysis and condition random field
CN104050489B (en) SAR ATR method based on multicore optimization
CN114694014A (en) SAR image ship target detection method based on multilayer neural network
CN108985445A (en) A kind of target bearing SAR discrimination method based on machine Learning Theory
CN115410093B (en) Remote sensing image classification method based on dual-channel coding network and conditional random field
CN117893827A (en) Hyperspectral and laser radar classification method based on cyclic generation learning of modal attention
CN106951873A (en) A kind of Remote Sensing Target recognition methods
CN112131962B (en) SAR image recognition method based on electromagnetic scattering characteristics and depth network characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181211