CN107239751A - High-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network - Google Patents
- Publication number
- CN107239751A CN107239751A CN201710364900.8A CN201710364900A CN107239751A CN 107239751 A CN107239751 A CN 107239751A CN 201710364900 A CN201710364900 A CN 201710364900A CN 107239751 A CN107239751 A CN 107239751A
- Authority
- CN
- China
- Prior art keywords
- layer
- pixel
- test data
- feature matrix
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
Abstract
A high-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network comprises: inputting the high-resolution SAR image to be classified and applying a multi-level non-subsampled contourlet transform to each pixel in the image to obtain the low-frequency and high-frequency coefficients of each pixel; selecting and fusing the low-frequency and high-frequency coefficients to form a pixel-based feature matrix F; normalizing the elements of F to obtain a normalized feature matrix F1; partitioning F1 into blocks to obtain a feature-block matrix F2 that serves as the sample data; constructing a training-set feature matrix W1 and a test-set feature matrix W2; constructing a classification model based on a fully convolutional neural network; training the classification model; and classifying the test set T with the trained model to obtain the class of each pixel in T, comparing the predicted classes with the label map, and computing the classification accuracy. The method improves both classification accuracy and speed.
Description
Technical field
The invention belongs to the field of image processing, and specifically relates to a high-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network. It can be applied to high-resolution SAR images to effectively improve target recognition accuracy.
Background art
Synthetic aperture radar (SAR) is a remote sensor that has been widely studied and applied in recent years. Compared with optical, infrared and other sensors, SAR imaging is not limited by weather or illumination and can observe targets of interest around the clock in all weather conditions. SAR also has a certain penetration capability, enabling target detection under unfavorable conditions such as cloud cover, vegetation occlusion, or targets shallowly buried under the surface. Moreover, because of SAR's special imaging mechanism, high-resolution SAR images contain information different from that of other sensors, providing richer and more comprehensive information for target detection. These significant advantages give SAR great application potential. Research on SAR technology has attracted wide attention in recent years, and many research results have been successfully applied to environmental monitoring, topographic surveying, target detection and other fields.
The key to high-resolution SAR image classification is extracting target features from the image. Existing SAR image classification methods include statistics-based methods, texture-based methods and deep-learning-based methods.
Statistics-based methods classify according to differences in the statistical properties of different image regions, but they ignore the spatial characteristics of the image, so the classification results are often unsatisfactory. Texture-based methods have also appeared in recent years, such as methods based on the gray-level co-occurrence matrix (GLCM), Markov random fields (MRF) and Gabor wavelets. However, because of SAR's coherent imaging mechanism, texture in SAR images is weak and unstable; in addition, computing texture features requires scanning the image point by point, which is computationally expensive and cannot meet real-time requirements.
The traditional SAR image classification methods above rely on manually extracted shallow features to represent target properties. These shallow features are obtained merely by transforming the original input signal into a problem-specific space and cannot fully capture the neighborhood correlation between target pixels. In 2006, Hinton et al. proposed an unsupervised layer-wise greedy training method that solved the vanishing-gradient problem brought by increasing depth. Many deep-learning models have since been proposed for different applications, such as the deep belief network (DBN), stacked denoising autoencoders (SDA) and the convolutional neural network (CNN). However, these feature extraction methods do not account for the multi-scale, multi-directional, multi-resolution characteristics of high-resolution SAR images, so it is difficult for them to achieve high classification accuracy on high-resolution SAR images with complex backgrounds.
Summary of the invention
The object of the invention is to address the above problems in the prior art by providing a high-resolution SAR image classification method based on a non-subsampled contourlet fully convolutional network that exploits the multi-scale, multi-directional, multi-resolution characteristics of high-resolution SAR images, improving classification accuracy and speed and thereby effectively improving target recognition accuracy.
To achieve these goals, the technical solution adopted by the invention comprises the following steps:
1) Input the high-resolution SAR image to be classified and apply a multi-level non-subsampled contourlet transform to each pixel in the image, obtaining the low-frequency and high-frequency coefficients of each pixel;
2) Select and fuse the low-frequency and high-frequency coefficients to form the pixel-based feature matrix F;
3) Normalize the elements of F into [0, 1] to obtain the normalized feature matrix F1;
4) Partition F1 into blocks to obtain the feature-block matrix F2, which serves as the sample data;
5) Construct the training-set feature matrix W1 from the training set D and the test-set feature matrix W2 from the test set T;
6) Construct a classification model based on a fully convolutional neural network;
7) Train the classification model on the training set D to obtain the trained model;
8) Classify the test set T with the trained model to obtain the class of each pixel in T, compare the predicted classes with the label map, and compute the classification accuracy.
In step 1), a three-level non-subsampled contourlet transform is applied to each pixel in the image. The transform consists of a non-subsampled pyramid decomposition and a non-subsampled directional filter bank decomposition: the non-subsampled pyramid decomposes the time-frequency plane into one low-frequency subband and multiple annular high-frequency subbands with a non-subsampled filter bank, and the band-pass images formed by the pyramid are then decomposed by the non-subsampled directional filter bank to obtain the band-pass subband coefficients.
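The pyramid stage described above can be sketched in miniature. The following is a hedged 1-D stand-in, not the patent's actual filter bank: a dilated ("à trous") lowpass kernel is applied without any downsampling, so the low- and high-frequency outputs of every level keep the full signal length, and the signal is recoverable as the final lowpass plus all highpass bands. The triangle kernel and the level count are illustrative assumptions.

```python
# Minimal 1-D sketch of a non-subsampled pyramid level: the signal is
# smoothed with a dilated lowpass kernel but never downsampled, so the
# low- and highpass outputs keep the input length.

def ns_pyramid_level(signal, level):
    """One non-subsampled decomposition level with a dilated triangle filter."""
    dilation = 2 ** level          # "a trous" hole spacing doubles per level
    n = len(signal)
    lowpass = []
    for i in range(n):
        left = signal[max(i - dilation, 0)]
        right = signal[min(i + dilation, n - 1)]
        lowpass.append((left + 2 * signal[i] + right) / 4.0)
    highpass = [s - l for s, l in zip(signal, lowpass)]
    return lowpass, highpass

def ns_pyramid(signal, levels):
    """Return the final lowpass residual and the stack of highpass bands."""
    bands, low = [], list(signal)
    for lev in range(levels):
        low, high = ns_pyramid_level(low, lev)
        bands.append(high)
    return low, bands
```

Because each level satisfies low + high = input, the final lowpass plus all highpass bands reconstructs the signal exactly, mirroring the redundancy of the non-subsampled transform.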
In step 2), the high-frequency coefficients are sorted in descending order and the top 50% are selected and fused with the low-frequency coefficients from the third decomposition level. The pixel-based feature matrix F is defined with size M1 × M2 × 1, where M1 is the height of the SAR image to be classified and M2 its width, and the fusion result is assigned to F.
The normalization in step 3) is realized by feature linear scaling, feature standardization or feature whitening. Feature linear scaling first finds the maximum max(F) of the pixel-based feature matrix F, then divides each element of F by max(F) to obtain the normalized feature matrix F1.
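The linear-scaling variant can be written directly. A minimal sketch, assuming the fused coefficient values are non-negative (otherwise dividing by the maximum would not map them into [0, 1]):

```python
# Feature linear scaling: divide every element of the pixel feature
# matrix F by its global maximum so values fall in [0, 1].
# Assumes non-negative values, as hedged in the lead-in.

def linear_scale(feature_matrix):
    """Return F1 = F / max(F) for a 2-D list of non-negative values."""
    max_f = max(max(row) for row in feature_matrix)
    if max_f == 0:
        return [row[:] for row in feature_matrix]  # all-zero matrix unchanged
    return [[v / max_f for v in row] for row in feature_matrix]
```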
In step 4), the normalized feature matrix F1 is partitioned into 128 × 128 blocks at a stride of 50.
The concrete operations of step 5) are as follows:
5a) Divide the SAR image ground objects into 3 classes, record the positions in the image to be classified of the pixels belonging to each class, and generate the three position sets A1, A2, A3 representing the pixels of the three classes;
5b) Randomly select 5% of the elements from each of A1, A2, A3 to generate the position sets B1, B2, B3 of the pixels selected for the training set, one per class, where Bi holds the positions in the image to be classified of the class-i pixels selected for the training set; merge the elements of B1, B2, B3 into L1, the positions of all training-set pixels in the image to be classified;
5c) Use the remaining 95% of the elements of A1, A2, A3 to generate the position sets C1, C2, C3 of the pixels selected for the test set, one per class, where Ci holds the positions in the image to be classified of the class-i pixels selected for the test set; merge the elements of C1, C2, C3 into L2, the positions of all test-set pixels in the image to be classified;
5d) Define the training-set feature matrix W1 of the training set D, take the values at the positions L1 from the feature-block matrix F2, and assign them to W1;
5e) Define the test-set feature matrix W2 of the test set T, take the values at the positions L2 from F2, and assign them to W2.
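The per-class 5%/95% split of steps 5a)-5c) can be sketched as follows. The class position lists are toy stand-ins for A1, A2, A3, and the rounding of the selection count is an assumption the patent does not specify:

```python
import random

# For each land-cover class, draw 5% of its pixel positions at random
# for training (the B sets) and keep the rest for testing (the C sets);
# L1 and L2 are the merged position lists.

def stratified_split(class_positions, train_fraction=0.05, seed=0):
    rng = random.Random(seed)
    train, test = [], []
    for positions in class_positions:
        k = max(1, int(len(positions) * train_fraction))
        chosen = set(rng.sample(range(len(positions)), k))
        train.append([p for i, p in enumerate(positions) if i in chosen])
        test.append([p for i, p in enumerate(positions) if i not in chosen])
    l1 = [p for cls in train for p in cls]   # all training positions
    l2 = [p for cls in test for p in cls]    # all test positions
    return l1, l2
```

Sampling within each class rather than globally keeps every class represented in the training set even when the classes are unbalanced.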
Constructing the classification model based on the fully convolutional neural network in step 6) comprises the following steps:
6a) Select a 17-layer deep neural network consisting of, in order: input layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, Dropout layer, convolutional layer, Dropout layer, convolutional layer, deconvolution (upsampling) layer, Crop layer and softmax classifier, with the following per-layer parameters:
Layer 1, input: 3 feature maps;
Layer 2, convolution: 32 feature maps, 5 × 5 kernel;
Layer 3, pooling: downsampling factor 2;
Layer 4, convolution: 64 feature maps, 5 × 5 kernel;
Layer 5, pooling: downsampling factor 2;
Layer 6, convolution: 96 feature maps, 3 × 3 kernel;
Layer 7, pooling: downsampling factor 2;
Layer 8, convolution: 128 feature maps, 3 × 3 kernel;
Layer 9, pooling: downsampling factor 2;
Layer 10, convolution: 128 feature maps, 3 × 3 kernel;
Layer 11, Dropout: rate 0.5;
Layer 12, convolution: 128 feature maps, 1 × 1 kernel;
Layer 13, Dropout: rate 0.5;
Layer 14, convolution: 2 feature maps, 1 × 1 kernel;
Layer 15, upsampling (deconvolution): 2 feature maps, 32 × 32 kernel;
Layer 16, Crop: final crop size 128 × 128;
Layer 17, Softmax classifier: 2 feature maps;
6b) Set the kernel of the second-layer convolution to 5 × 5 to control the receptive field.
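The layer specification above can be encoded as data and its spatial bookkeeping sanity-checked: four 2× poolings shrink a 128 × 128 patch to 8 × 8 before the deconvolution upsamples it and the Crop layer restores 128 × 128. The tuple encoding below is an illustrative sketch, not Caffe prototxt syntax:

```python
# The 17-layer spec of step 6 as (type, params...) tuples.
LAYERS = [
    ("input", 3), ("conv", 32, 5), ("pool", 2), ("conv", 64, 5),
    ("pool", 2), ("conv", 96, 3), ("pool", 2), ("conv", 128, 3),
    ("pool", 2), ("conv", 128, 3), ("dropout", 0.5), ("conv", 128, 1),
    ("dropout", 0.5), ("conv", 2, 1), ("deconv", 2, 32), ("crop", 128),
    ("softmax", 2),
]

def spatial_size(layers, size):
    """Track spatial extent through the net, assuming padded convolutions."""
    for layer in layers:
        if layer[0] == "pool":
            size //= layer[1]       # each pooling halves the extent
        elif layer[0] == "crop":
            size = layer[1]         # Crop fixes the final output size
    return size
```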
In step 7), the training-set feature matrix W1 is taken as the input of the classification model and the class of each pixel in the training set D as its output; the error between the predicted classes and the correct hand-labelled classes is computed and backpropagated to optimize the network parameters, yielding the trained classification model.
In step 8), the test-set feature matrix W2 is fed to the trained classification model, whose output is the predicted class of each pixel in the test set T.
Compared with the prior art, the invention has the following beneficial effects. By extending image-block features to pixel-level features, it avoids the repeated storage and convolution computation caused by using pixel blocks, improving classification speed and efficiency. Because a multi-level non-subsampled contourlet transform is introduced before the fully convolutional network, low-frequency and high-frequency coefficients are obtained: the low-frequency coefficients capture a coarse approximation of the target, i.e. basic information such as the region it occupies, while the high-frequency coefficients capture its fine details relatively accurately, so the low-frequency coefficients carry more discriminative power than the high-frequency ones. The invention selects and fuses the low-frequency and high-frequency coefficients, improving classification accuracy. Because the fully connected layers of the convolutional network are replaced with deconvolution layers, the model accepts input images of arbitrary size, without requiring all training and test images to share the same size. In summary, the high-resolution SAR image classification method of the invention improves not only classification accuracy but also classification speed.
Brief description of the drawings
Fig. 1 is the flow chart of the classification method of the invention;
Fig. 2 is the manual labelling map of the image to be classified;
Fig. 3 is the classification result of the invention on the image to be classified.
Embodiments
The invention is described in further detail below with reference to the drawings.
Referring to Fig. 1, the image classification method of the invention is realized as follows:
Step 1. Input the high-resolution SAR image to be classified and apply a 3-level non-subsampled contourlet transform to each pixel to obtain its high-frequency and low-frequency coefficients. The image to be classified is an X-band horizontal-polarization image acquired in 2007 by the F-SAR airborne system of the German Aerospace Center (DLR), with a resolution of 1 m and an image size of 6187 × 4278.
1a) Transform the classification features of each pixel to obtain transform coefficients; candidate transforms include the wavelet transform, the non-subsampled stationary wavelet transform, the curvelet transform and the non-subsampled contourlet transform;
1b) This example applies a 3-level non-subsampled contourlet transform to each pixel; the transform comprises a non-subsampled pyramid (NSP) decomposition and a non-subsampled directional filter bank (NSDFB) decomposition;
1c) The NSP decomposes the time-frequency plane into one low-frequency subband and multiple annular high-frequency subbands using non-subsampled filter banks;
1d) The NSDFB is a two-channel non-subsampled filter bank;
In this example the image passes through 3 levels of NSP filtering, yielding the coefficients of 1 low-pass image and 3 band-pass images;
After the NSP multi-scale decomposition, the band-pass images are further decomposed by the NSDFB into 0, 1 and 3 directional levels, yielding the coefficients of 1, 2 and 8 band-pass subbands respectively.
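The decomposition bookkeeping above can be checked in a few lines: 3 NSP levels give 1 lowpass plus 3 bandpass images, and NSDFB direction levels of 0, 1 and 3 split those bandpass images into 2^l directional subbands each, i.e. 1, 2 and 8 subband coefficient maps. The helper below is only an accounting sketch, not the filter bank itself:

```python
# Count the subbands produced by an NSP + NSDFB decomposition:
# one lowpass image, one bandpass image per pyramid level, and
# 2**l directional subbands for a bandpass image split at level l.

def nsct_subband_counts(nsp_levels, dfb_levels_per_band):
    assert len(dfb_levels_per_band) == nsp_levels
    directional = [2 ** l for l in dfb_levels_per_band]
    return {"lowpass": 1, "bandpass": nsp_levels, "directional": directional}
```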
Step 2. Select and fuse the high- and low-frequency coefficients to form the pixel-based feature matrix F. This example sorts the decomposed high-frequency coefficients in descending order, selects the top 50%, and fuses them with the low-frequency coefficients of the 3rd level as the transform-domain classification features. A matrix of size M1 × M2 × 1 is defined and the fusion result assigned to it, giving the pixel-based feature matrix F, where M1 is the height of the SAR image to be classified and M2 its width.
Step 3. Normalize the pixel-based feature matrix F.
Common normalization methods include feature linear scaling, feature standardization and feature whitening.
This example uses feature linear scaling: first find the maximum max(F) of the pixel-based feature matrix F, then divide each element of F by max(F) to obtain the normalized feature matrix F1.
Step 4. Partition the normalized feature matrix F1 into 128 × 128 blocks at a stride of 50, forming the feature-block matrix F2, which serves as the sample data.
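Since the 50-pixel stride is smaller than the 128-pixel block size, neighbouring blocks overlap. A sketch of the block origins follows; keeping every block fully inside the image is a simplifying assumption, as the patent does not state how border remainders are handled:

```python
# Top-left corners of 128 x 128 patches taken every 50 pixels.
# Only origins whose whole patch fits inside the image are kept.

def patch_origins(height, width, patch=128, stride=50):
    rows = range(0, height - patch + 1, stride)
    cols = range(0, width - patch + 1, stride)
    return [(r, c) for r in rows for c in cols]
```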
Step 5. Construct the training-set feature matrix W1 from the training set D and the test-set feature matrix W2 from the test set T, specifically:
5a) Divide the SAR image ground objects into 3 classes, record the positions in the image to be classified of the pixels of each class, and generate the position sets A1, A2, A3 of the three classes, where Ai holds the positions in the image to be classified of the class-i ground-object pixels;
5b) Randomly select 5% of the elements from each of A1, A2, A3 to generate the position sets B1, B2, B3 of the pixels selected for the training set, where Bi holds the positions in the image to be classified of the class-i pixels selected for the training set; merge the elements of B1, B2, B3 into L1, the positions of all training-set pixels in the image to be classified;
5c) Use the remaining 95% of the elements of A1, A2, A3 to generate the position sets C1, C2, C3 of the pixels selected for the test set, where Ci holds the positions in the image to be classified of the class-i pixels selected for the test set; merge the elements of C1, C2, C3 into L2, the positions of all test-set pixels in the image to be classified;
5d) Define the training-set feature matrix W1 of the training set D, take the values at the positions L1 from the feature-block matrix F2, and assign them to W1;
5e) Define the test-set feature matrix W2 of the test set T, take the values at the positions L2 from F2, and assign them to W2.
Step 6. Construct the classification model based on the fully convolutional neural network.
6a) Select a 17-layer deep neural network composed of input layer → convolutional layer → pooling layer → convolutional layer → pooling layer → convolutional layer → pooling layer → convolutional layer → pooling layer → convolutional layer → Dropout layer → convolutional layer → Dropout layer → convolutional layer → upsampling (deconvolution) layer → Crop layer → softmax classifier, with the following per-layer parameters:
Layer 1, input: 3 feature maps;
Layer 2, convolution: 32 feature maps, 5 × 5 kernel;
Layer 3, pooling: downsampling factor 2;
Layer 4, convolution: 64 feature maps, 5 × 5 kernel;
Layer 5, pooling: downsampling factor 2;
Layer 6, convolution: 96 feature maps, 3 × 3 kernel;
Layer 7, pooling: downsampling factor 2;
Layer 8, convolution: 128 feature maps, 3 × 3 kernel;
Layer 9, pooling: downsampling factor 2;
Layer 10, convolution: 128 feature maps, 3 × 3 kernel;
Layer 11, Dropout: rate 0.5;
Layer 12, convolution: 128 feature maps, 1 × 1 kernel;
Layer 13, Dropout: rate 0.5;
Layer 14, convolution: 2 feature maps, 1 × 1 kernel;
Layer 15, upsampling (deconvolution): 2 feature maps, 32 × 32 kernel;
Layer 16, Crop: final crop size 128 × 128;
Layer 17, Softmax classifier: 2 feature maps.
6b) Set the kernel of the second-layer convolution to 5 × 5 to control the receptive field.
Step 7. Train the classification model with the training set to obtain the trained model.
Taking the training-set feature matrix W1 as the input of the classification model and the class of each pixel in the training set D as its output, the error between the predicted classes and the correct hand-labelled classes is computed and backpropagated to optimize the network parameters, yielding the trained classification model. The correct hand-labelled classes are shown in Fig. 2.
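The training signal of this step — per-pixel softmax probabilities, cross-entropy against the hand label, and the gradient that is backpropagated — can be illustrated in miniature. The two-class logits below are toy values, not actual network outputs:

```python
import math

# Per-pixel softmax cross-entropy and its gradient (p - one_hot),
# the quantity backpropagated to tune the network weights.

def softmax(logits):
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def pixel_loss_and_grad(logits, label):
    probs = softmax(logits)
    loss = -math.log(probs[label])
    grad = [p - (1.0 if i == label else 0.0) for i, p in enumerate(probs)]
    return loss, grad
```

Note that the gradient components always sum to zero, since the probabilities and the one-hot target each sum to one.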
Step 8. Classify the test set with the trained classification model.
Taking the test-set feature matrix W2 of the test set T as the input of the trained model, its output is the predicted class of each pixel in the test set.
The effect of the invention is further illustrated by the following simulation experiment:
1. Simulation conditions:
Hardware platform: HP Z840.
Software platform: Caffe.
2. Simulation content and results:
The method of the invention was tested under the above conditions: 5% of the labelled pixels were randomly drawn from the SAR data as training samples, and the remaining labelled pixels were used as test samples, giving the classification results of Fig. 3.
Fig. 3 shows that the regions of the classification result are consistent, the edges between the three classes of farmland, forest and towns are fairly clear, and detail information is preserved.
The training samples were then reduced in turn to 4%, 3% and 2% of the total sample number, and the test-set classification accuracy of the invention was compared with that of the plain fully convolutional network, with the results shown in Table 1:
Table 1
Training sample proportion | FCN-8 | The present invention |
5% | 94.0039% | 94.3360% |
4% | 93.3933% | 94.1524% |
3% | 92.6727% | 93.3117% |
2% | 91.4413% | 92.4162% |
As seen from Table 1, when the training samples account for 5%, 4%, 3% or 2% of the total sample number, the test-set classification accuracy of the invention exceeds that of the plain fully convolutional network. In summary, by introducing the non-subsampled contourlet transform into the fully convolutional network, the invention takes the directional and spatial information of the high-resolution SAR image into account, effectively improving the representational power of the image features and the generalization ability of the model, so that high classification accuracy is still achieved with few training samples.
Claims (9)
1. a kind of High Resolution SAR image classification method based on the full convolutional network of non-down sampling contourlet, it is characterised in that bag
Include:
1) High Resolution SAR image to be sorted is inputted, multilayer non-down sampling contourlet transform is carried out to each pixel in image,
Obtain the low frequency coefficient and high frequency coefficient of each pixel;
2) low frequency coefficient and high frequency coefficient are selected and merged, constitute the eigenmatrix F based on pixel;
3) element value in eigenmatrix F is normalized between [0,1], obtains normalization characteristic matrix F 1;
4) normalization characteristic matrix F 1 is subjected to stripping and slicing, obtains characteristic block matrix F 2 and as sample data;
5) training dataset eigenmatrix W1 is constructed by training dataset D, it is special to construct test data set by test data set T
Levy matrix W 2;
6) disaggregated model based on full convolutional neural networks is constructed;
7) disaggregated model is trained by training dataset D, the model trained;
8) test data set T is classified using the model trained, obtains the class of each pixel in test data set T
Not, obtained each pixel classification is marked on a map with class and contrasted, calculate classification accuracy.
2. the High Resolution SAR image classification method according to claim 1 based on the full convolutional network of non-down sampling contourlet,
It is characterized in that:Step 1) three layers of non-down sampling contourlet transform are carried out to each pixel in image;Non-down sampling contourlet
Conversion includes non-lower sampling pyramid decomposition and non-lower sampling anisotropic filter is decomposed, and described non-lower sampling pyramid decomposition is led to
Cross non-lower sampling wave filter group and time-frequency plane is decomposed into a low frequency filial generation and multiple annular high frequency filial generations, non-lower sampling gold word
The band logical image that tower is decomposed to form decomposes the coefficient for obtaining band logical subgraph by non-lower sampling anisotropic filter again.
3. the High Resolution SAR image classification method according to claim 2 based on the full convolutional network of non-down sampling contourlet,
It is characterized in that:Step 2) by high frequency coefficient according to being ranked up from big to small, wherein preceding 50% high frequency coefficient is chosen, with the
Low frequency coefficient fusion after three layers of conversion, it is that M1 × M2 × 1, M1 is to be sorted to define the eigenmatrix F sizes based on pixel
The length of SAR image, M2 is the width of SAR image to be sorted, and fusion results are assigned to the eigenmatrix F based on pixel.
4. The high-resolution SAR image classification method based on the non-subsampled contourlet full convolutional network according to claim 1, characterized in that: the normalization in step 3) is realized by a linear feature scaling method, a feature standardization method, or a feature whitening method; the linear feature scaling method first obtains the maximum value max(F) of the pixel-based feature matrix F, then divides each element of the pixel-based feature matrix F by max(F) to obtain the normalized feature matrix F1.
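The linear feature scaling described here is a one-line operation:

```python
import numpy as np

def linear_scale(F):
    """Linear feature scaling: divide every element of the pixel-based
    feature matrix F by its maximum max(F), yielding the normalized
    feature matrix F1 with values at most 1."""
    return F / np.max(F)
```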
5. The high-resolution SAR image classification method based on the non-subsampled contourlet full convolutional network according to claim 1, characterized in that: in step 4), the normalized feature matrix F1 is cut into blocks of size 128 × 128 at an interval (stride) of 50.
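The block cutting is a sliding window of size 128 with stride 50. A sketch follows (the claim does not say how the image border is handled; clamping the last window to the border, so no pixels are dropped, is an assumption here):

```python
import numpy as np

def cut_blocks(F1, size=128, stride=50):
    """Slide a size-by-size window over F1 with the given stride.
    The last window in each direction is clamped to the image border
    (an assumed border rule) so every pixel lands in at least one block."""
    H, W = F1.shape
    ys = sorted(set(list(range(0, H - size + 1, stride)) + [H - size]))
    xs = sorted(set(list(range(0, W - size + 1, stride)) + [W - size]))
    return [F1[y:y + size, x:x + size] for y in ys for x in xs]
```

With a 10 × 10 matrix, size 4, and stride 3, this yields a 3 × 3 grid of blocks.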
6. The high-resolution SAR image classification method based on the non-subsampled contourlet full convolutional network according to claim 1, characterized in that the concrete operations of step 5) are as follows:
5a) dividing the ground objects of the high-resolution SAR image into 3 classes, recording the positions in the image to be classified of the pixels corresponding to each class, and generating A1, A2 and A3, which respectively represent the positions in the image to be classified of the pixels of the three ground-object classes;
5b) randomly selecting 5% of the elements from A1, A2 and A3 to generate, for the three ground-object classes, the positions B1, B2 and B3 of the pixels selected for the training data set, where B1 is the position in the image to be classified of the pixels of the 1st class selected for the training data set, B2 is the position in the image to be classified of the pixels of the 2nd class selected for the training data set, and B3 is the position in the image to be classified of the pixels of the 3rd class selected for the training data set; and merging the elements of B1, B2 and B3 to form L1, the positions in the image to be classified of all pixels of the training data set;
5c) generating, from the remaining 95% of the elements of A1, A2 and A3, the positions C1, C2 and C3 of the pixels selected for the test data set for the three ground-object classes, where C1 is the position in the image to be classified of the pixels of the 1st class selected for the test data set, C2 is the position in the image to be classified of the pixels of the 2nd class selected for the test data set, and C3 is the position in the image to be classified of the pixels of the 3rd class selected for the test data set; and merging the elements of C1, C2 and C3 to form L2, the positions in the image to be classified of all pixels of the test data set;
5d) defining the training data set feature matrix W1 of the training data set D, taking the values at the positions given by L1 from the feature block matrix F2, and assigning them to the training data set feature matrix W1;
5e) defining the test data set feature matrix W2 of the test data set T, taking the values at the positions given by L2 from the feature block matrix F2, and assigning them to the test data set feature matrix W2.
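The per-class 5%/95% split of steps 5a)–5c) can be sketched as follows (illustrative only; the seed and helper names are not from the claim):

```python
import numpy as np

def split_positions(label_map, n_classes=3, frac=0.05, seed=0):
    """For each ground-object class, randomly pick `frac` of its pixel
    positions for the training set (B1..B3) and keep the remaining
    positions for the test set (C1..C3); return the merged position
    lists L1 (training) and L2 (test)."""
    rng = np.random.default_rng(seed)
    train, test = [], []
    for c in range(n_classes):
        pos = np.argwhere(label_map == c)     # A_c: positions of class c
        rng.shuffle(pos)                      # shuffle rows in place
        k = max(1, int(round(frac * len(pos))))
        train.append(pos[:k])
        test.append(pos[k:])
    L1 = np.concatenate(train)
    L2 = np.concatenate(test)
    return L1, L2
```

With 100 pixels per class, L1 collects 5 positions per class (15 total) and L2 the remaining 285.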
7. The high-resolution SAR image classification method based on the non-subsampled contourlet full convolutional network according to claim 1, characterized in that constructing the classification model based on the full convolutional neural network in step 6) comprises the following steps:
6a) selecting a 19-layer deep neural network composed, in order, of an input layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, pooling layer, convolutional layer, Dropout layer, convolutional layer, Dropout layer, convolutional layer, deconvolution up-sampling layer, Crop layer, and softmax classifier, with the parameters of each layer as follows:
For the 1st layer, the input layer, the number of feature maps is set to 3;
For the 2nd layer, a convolutional layer, the number of feature maps is set to 32 and the convolution kernel size to 5 × 5;
For the 3rd layer, a pooling layer, the down-sampling size is set to 2;
For the 4th layer, a convolutional layer, the number of feature maps is set to 64 and the convolution kernel size to 5 × 5;
For the 5th layer, a pooling layer, the down-sampling size is set to 2;
For the 6th layer, a convolutional layer, the number of feature maps is set to 96 and the convolution kernel size to 3 × 3;
For the 7th layer, a pooling layer, the down-sampling size is set to 2;
For the 8th layer, a convolutional layer, the number of feature maps is set to 128 and the convolution kernel size to 3 × 3;
For the 9th layer, a pooling layer, the down-sampling size is set to 2;
For the 10th layer, a convolutional layer, the number of feature maps is set to 128 and the convolution kernel size to 3 × 3;
For the 11th layer, a Dropout layer, the sparsity coefficient is set to 0.5;
For the 12th layer, a convolutional layer, the number of feature maps is set to 128 and the convolution kernel size to 1 × 1;
For the 13th layer, a Dropout layer, the sparsity coefficient is set to 0.5;
For the 14th layer, a convolutional layer, the number of feature maps is set to 2 and the convolution kernel size to 1 × 1;
For the 15th layer, the up-sampling layer, the number of feature maps is set to 2 and the convolution kernel size to 32 × 32;
For the 16th layer, the Crop layer, the final crop size is set to 128 × 128;
For the 17th layer, the Softmax classifier, the number of feature maps is set to 2;
6b) setting the convolution kernel size of the second convolutional layer to 5 × 5 to reduce the receptive field.
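The spatial bookkeeping of this stack can be checked with a short trace. The claim gives a 32 × 32 deconvolution kernel but not its stride; stride 16 (undoing the four 2 × 2 poolings) is an assumption here, under which the 128 × 128 Crop size falls out naturally:

```python
def trace_shapes(h=128):
    """Trace the spatial size of a 128x128 input block through the
    claimed stack: four stride-2 poolings (layers 3, 5, 7, 9) shrink
    it to 8x8; a 32x32 transposed convolution with assumed stride 16
    expands it to (8-1)*16 + 32 = 144, which the Crop layer then trims
    back to the 128x128 input size."""
    size = h
    for _ in range(4):                  # layers 3, 5, 7, 9: 2x2 pooling
        size //= 2                      # 128 -> 64 -> 32 -> 16 -> 8
    up = (size - 1) * 16 + 32           # transposed conv output size
    return size, up
```

Running the trace gives a bottleneck of 8 and a pre-Crop size of 144.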
8. The high-resolution SAR image classification method based on the non-subsampled contourlet full convolutional network according to claim 1, characterized in that in step 7), the training data set feature matrix W1 is taken as the input of the classification model and the class of each pixel in the training data set D as the output of the classification model; the error between this classification and the correct manually labelled classes is computed, the error is back-propagated, and the network parameters of the classification model are optimized to obtain the trained classification model.
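The error-and-backpropagation loop of this step, reduced to its final softmax layer, can be sketched as one gradient update (illustrative only; the real model updates all network parameters, not just the logits):

```python
import numpy as np

def softmax_xent_step(logits, labels, lr=0.1):
    """One illustrative update: softmax the per-pixel logits, measure the
    cross-entropy error against the hand-labelled classes, and apply the
    back-propagated gradient (probs - one_hot) to the logits."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = len(labels)
    loss = -np.log(probs[np.arange(n), labels]).mean()
    grad = probs.copy()
    grad[np.arange(n), labels] -= 1.0                # dL/dlogits
    return logits - lr * grad / n, loss
```

Repeating the step drives the cross-entropy error down, which is the optimization the claim describes.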
9. The high-resolution SAR image classification method based on the non-subsampled contourlet full convolutional network according to claim 8, characterized in that in step 8), the test data set feature matrix W2 is input to the trained classification model, and the output of the trained classification model is the class obtained by classifying each pixel in the test data set T.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710364900.8A CN107239751B (en) | 2017-05-22 | 2017-05-22 | High-resolution SAR image classification method based on non-subsampled contourlet full convolution network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710364900.8A CN107239751B (en) | 2017-05-22 | 2017-05-22 | High-resolution SAR image classification method based on non-subsampled contourlet full convolution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107239751A true CN107239751A (en) | 2017-10-10 |
CN107239751B CN107239751B (en) | 2020-11-03 |
Family
ID=59984361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710364900.8A Active CN107239751B (en) | 2017-05-22 | 2017-05-22 | High-resolution SAR image classification method based on non-subsampled contourlet full convolution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107239751B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107832798A (en) * | 2017-11-20 | 2018-03-23 | 西安电子科技大学 | Polarimetric SAR Image object detection method based on NSCT ladder pessimistic concurrency controls |
CN107944353A (en) * | 2017-11-10 | 2018-04-20 | 西安电子科技大学 | SAR image change detection based on profile ripple BSPP networks |
CN107944470A (en) * | 2017-11-03 | 2018-04-20 | 西安电子科技大学 | SAR image sorting technique based on profile ripple FCN CRF |
CN108062575A (en) * | 2018-01-03 | 2018-05-22 | 广东电子工业研究院有限公司 | A kind of high similarity graph picture identification and sorting technique |
CN108492319A (en) * | 2018-03-09 | 2018-09-04 | 西安电子科技大学 | Moving target detecting method based on the full convolutional neural networks of depth |
CN109344898A (en) * | 2018-09-30 | 2019-02-15 | 北京工业大学 | Convolutional neural networks image classification method based on sparse coding pre-training |
CN109447124A (en) * | 2018-09-28 | 2019-03-08 | 北京达佳互联信息技术有限公司 | Image classification method, device, electronic equipment and storage medium |
CN109444667A (en) * | 2018-12-17 | 2019-03-08 | 国网山东省电力公司电力科学研究院 | Power distribution network initial failure classification method and device based on convolutional neural networks |
CN109886992A (en) * | 2017-12-06 | 2019-06-14 | 深圳博脑医疗科技有限公司 | For dividing the full convolutional network model training method in abnormal signal area in MRI image |
CN109903301A (en) * | 2019-01-28 | 2019-06-18 | 杭州电子科技大学 | A kind of image outline detection method based on multi-stage characteristics channel Optimized Coding Based |
CN110097129A (en) * | 2019-05-05 | 2019-08-06 | 西安电子科技大学 | Remote sensing target detection method based on profile wave grouping feature pyramid convolution |
CN110188774A (en) * | 2019-05-27 | 2019-08-30 | 昆明理工大学 | A kind of current vortex scan image classifying identification method based on deep learning |
CN110702648A (en) * | 2019-09-09 | 2020-01-17 | 浙江大学 | Fluorescent spectrum pollutant classification method based on non-subsampled contourlet transformation |
CN111899232A (en) * | 2020-07-20 | 2020-11-06 | 广西大学 | Method for nondestructive testing of bamboo-wood composite container bottom plate by utilizing image processing |
CN113139579A (en) * | 2021-03-23 | 2021-07-20 | 广东省科学院智能制造研究所 | Image classification method and system based on image feature adaptive convolution network |
CN114037747A (en) * | 2021-11-25 | 2022-02-11 | 佛山技研智联科技有限公司 | Image feature extraction method and device, computer equipment and storage medium |
CN115310482A (en) * | 2022-07-31 | 2022-11-08 | 西南交通大学 | Radar intelligent identification method for bridge reinforcing steel bar |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090146869A1 (en) * | 2007-03-11 | 2009-06-11 | Vawd Applied Science & Technology | Multi frequency spectral imaging radar system and method of target classification |
CN104915676A (en) * | 2015-05-19 | 2015-09-16 | 西安电子科技大学 | Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method |
CN105512680A (en) * | 2015-12-02 | 2016-04-20 | 北京航空航天大学 | Multi-view SAR image target recognition method based on depth neural network |
CN105718957A (en) * | 2016-01-26 | 2016-06-29 | 西安电子科技大学 | Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network |
- 2017-05-22: CN CN201710364900.8A patent/CN107239751B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090146869A1 (en) * | 2007-03-11 | 2009-06-11 | Vawd Applied Science & Technology | Multi frequency spectral imaging radar system and method of target classification |
CN104915676A (en) * | 2015-05-19 | 2015-09-16 | 西安电子科技大学 | Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method |
CN105512680A (en) * | 2015-12-02 | 2016-04-20 | 北京航空航天大学 | Multi-view SAR image target recognition method based on depth neural network |
CN105718957A (en) * | 2016-01-26 | 2016-06-29 | 西安电子科技大学 | Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network |
Non-Patent Citations (3)
Title |
---|
YING ZHANG et al.: "SAR and Infrared Image Fusion Using Nonsubsampled Contourlet Transform", IEEE *
ZHANG Yanan: "Terrain classification of polarimetric SAR images based on deep ridgelet neural networks", China Master's Theses Full-text Database, Information Science and Technology *
CHEN Liyan et al.: "Image retrieval based on the non-subsampled contourlet transform", Journal of Fuzhou University (Natural Science Edition) *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107944470A (en) * | 2017-11-03 | 2018-04-20 | 西安电子科技大学 | SAR image sorting technique based on profile ripple FCN CRF |
CN107944353A (en) * | 2017-11-10 | 2018-04-20 | 西安电子科技大学 | SAR image change detection based on profile ripple BSPP networks |
CN107944353B (en) * | 2017-11-10 | 2019-12-24 | 西安电子科技大学 | SAR image change detection method based on contour wave BSPP network |
CN107832798A (en) * | 2017-11-20 | 2018-03-23 | 西安电子科技大学 | Polarimetric SAR Image object detection method based on NSCT ladder pessimistic concurrency controls |
CN109886992A (en) * | 2017-12-06 | 2019-06-14 | 深圳博脑医疗科技有限公司 | For dividing the full convolutional network model training method in abnormal signal area in MRI image |
CN108062575A (en) * | 2018-01-03 | 2018-05-22 | 广东电子工业研究院有限公司 | A kind of high similarity graph picture identification and sorting technique |
CN108492319A (en) * | 2018-03-09 | 2018-09-04 | 西安电子科技大学 | Moving target detecting method based on the full convolutional neural networks of depth |
CN109447124A (en) * | 2018-09-28 | 2019-03-08 | 北京达佳互联信息技术有限公司 | Image classification method, device, electronic equipment and storage medium |
CN109344898A (en) * | 2018-09-30 | 2019-02-15 | 北京工业大学 | Convolutional neural networks image classification method based on sparse coding pre-training |
CN109444667B (en) * | 2018-12-17 | 2021-02-19 | 国网山东省电力公司电力科学研究院 | Power distribution network early fault classification method and device based on convolutional neural network |
CN109444667A (en) * | 2018-12-17 | 2019-03-08 | 国网山东省电力公司电力科学研究院 | Power distribution network initial failure classification method and device based on convolutional neural networks |
CN109903301A (en) * | 2019-01-28 | 2019-06-18 | 杭州电子科技大学 | A kind of image outline detection method based on multi-stage characteristics channel Optimized Coding Based |
CN109903301B (en) * | 2019-01-28 | 2021-04-13 | 杭州电子科技大学 | Image contour detection method based on multistage characteristic channel optimization coding |
CN110097129B (en) * | 2019-05-05 | 2023-04-28 | 西安电子科技大学 | Remote sensing target detection method based on profile wave grouping characteristic pyramid convolution |
CN110097129A (en) * | 2019-05-05 | 2019-08-06 | 西安电子科技大学 | Remote sensing target detection method based on profile wave grouping feature pyramid convolution |
CN110188774B (en) * | 2019-05-27 | 2022-12-02 | 昆明理工大学 | Eddy current scanning image classification and identification method based on deep learning |
CN110188774A (en) * | 2019-05-27 | 2019-08-30 | 昆明理工大学 | A kind of current vortex scan image classifying identification method based on deep learning |
CN110702648B (en) * | 2019-09-09 | 2020-11-13 | 浙江大学 | Fluorescent spectrum pollutant classification method based on non-subsampled contourlet transformation |
CN110702648A (en) * | 2019-09-09 | 2020-01-17 | 浙江大学 | Fluorescent spectrum pollutant classification method based on non-subsampled contourlet transformation |
CN111899232A (en) * | 2020-07-20 | 2020-11-06 | 广西大学 | Method for nondestructive testing of bamboo-wood composite container bottom plate by utilizing image processing |
CN111899232B (en) * | 2020-07-20 | 2023-07-04 | 广西大学 | Method for nondestructive detection of bamboo-wood composite container bottom plate by image processing |
CN113139579A (en) * | 2021-03-23 | 2021-07-20 | 广东省科学院智能制造研究所 | Image classification method and system based on image feature adaptive convolution network |
CN113139579B (en) * | 2021-03-23 | 2024-02-02 | 广东省科学院智能制造研究所 | Image classification method and system based on image feature self-adaptive convolution network |
CN114037747A (en) * | 2021-11-25 | 2022-02-11 | 佛山技研智联科技有限公司 | Image feature extraction method and device, computer equipment and storage medium |
CN115310482A (en) * | 2022-07-31 | 2022-11-08 | 西南交通大学 | Radar intelligent identification method for bridge reinforcing steel bar |
Also Published As
Publication number | Publication date |
---|---|
CN107239751B (en) | 2020-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107239751A (en) | High Resolution SAR image classification method based on the full convolutional network of non-down sampling contourlet | |
CN105139028B (en) | SAR image sorting technique based on layering sparseness filtering convolutional neural networks | |
CN115063573B (en) | Multi-scale target detection method based on attention mechanism | |
CN104392463B (en) | Image salient region detection method based on joint sparse multi-scale fusion | |
CN107944370B (en) | Classification of Polarimetric SAR Image method based on DCCGAN model | |
CN113160062B (en) | Infrared image target detection method, device, equipment and storage medium | |
CN107229918A (en) | A kind of SAR image object detection method based on full convolutional neural networks | |
CN106408030A (en) | SAR image classification method based on middle lamella semantic attribute and convolution neural network | |
CN105069468A (en) | Hyper-spectral image classification method based on ridgelet and depth convolution network | |
CN110222767B (en) | Three-dimensional point cloud classification method based on nested neural network and grid map | |
CN112950780B (en) | Intelligent network map generation method and system based on remote sensing image | |
CN107944470A (en) | SAR image sorting technique based on profile ripple FCN CRF | |
CN107909109A (en) | SAR image sorting technique based on conspicuousness and multiple dimensioned depth network model | |
CN107944353B (en) | SAR image change detection method based on contour wave BSPP network | |
CN103413151A (en) | Hyperspectral image classification method based on image regular low-rank expression dimensionality reduction | |
CN107169492A (en) | Polarization SAR object detection method based on FCN CRF master-slave networks | |
CN107292336A (en) | A kind of Classification of Polarimetric SAR Image method based on DCGAN | |
CN105608454A (en) | Text structure part detection neural network based text detection method and system | |
CN107368852A (en) | A kind of Classification of Polarimetric SAR Image method based on non-down sampling contourlet DCGAN | |
CN103020649A (en) | Forest type identification method based on texture information | |
CN112232328A (en) | Remote sensing image building area extraction method and device based on convolutional neural network | |
CN106683102A (en) | SAR image segmentation method based on ridgelet filters and convolution structure model | |
CN103593853A (en) | Remote-sensing image multi-scale object-oriented classification method based on joint sparsity representation | |
CN117953314B (en) | Multi-dimensional feature optimization ocean substrate classification method and system | |
Dong et al. | New quantitative approach for the morphological similarity analysis of urban fabrics based on a convolutional autoencoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |